Journal of Economics, July 2019, Volume 127, Issue 2, pp 99–124

Attorney fees in repeated relationships

Brad Graham, Jack Robles

First Online: 12 September 2018

We investigate contracts between a law firm and a corporate client involved in a repeated relationship. In contrast to the previous literature pertaining to one-time interactions between clients and attorneys, we find that the contingent fee is not the best arrangement. Rather, the contingent fee is dominated by a contract which, we argue, an outside observer could not distinguish from a simple hourly fee contract. This contract includes an hourly fee equal to the law firm's opportunity cost, a lump sum, and a retention function. The lump sum payment is independent of the number of hours worked by the law firm and the outcome of the case. The repeated nature of the relationship allows the client to create a contract where the desire to maintain the relationship induces the law firm to exert the optimal level of effort in the current case.

Keywords: Legal services; Contract; Contingent fee; Repeated relationship. JEL codes: K40, K41, L14

Before we proceed with the Proofs of Proposition 1 and Theorem 1, we verify the claims regarding Assumption 1. Let \(F_i(\xi )\) and \(f_i(\xi )\) denote the CDF and density of the noise term \(\xi (i,\theta )\). Clearly \(F_i(\xi (i,\theta )) =1/2 + \theta \). Taking a derivative with respect to \(\theta \) yields \(f_i(\xi ) \cdot \xi _{\theta } = 1\) or $$\begin{aligned} f_i(\xi ) = \frac{1}{\xi _{\theta }} \end{aligned}$$ Now clearly \(\xi \) is symmetrically distributed around zero if and only if \(f_i(\xi ) = f_i(-\xi )\), which (from the above) holds if and only if \(\xi _{\theta }(i,-\theta ) = \xi _{\theta }(i,\theta )\). Hence, \(\xi \) is symmetrically distributed if and only if \(\xi (i,-\theta ) = -\xi (i,\theta )\), which is Assumption 1.2. Taking a derivative of \(f_i(\xi )\) yields $$\begin{aligned} \frac{d f_i}{d \xi } = \frac{-\xi _{\theta \theta }}{[\xi _{\theta }]^3} \end{aligned}$$ So Assumption 1.1 is the same as assuming that the density of \(\xi \) is weakly increasing for \(\xi <0\) and weakly decreasing for \(\xi >0\). If there are no point masses, then this is a necessary and sufficient condition for the distribution to be uni-modal. The above equations are not defined if \(\xi _{\theta }=0\). If \(\xi _{\theta }=0\) over some range \(\theta \in [a,b]\), then there is a point mass at \(\xi (i,a)\), and \(F_i\) takes a discrete jump at this point. However, notice that Assumption 1.1 assures that \(\xi _{\theta } \ge 0\) achieves its minimum at \(\theta =0\). Hence the only possible point mass is at \(\xi (i,0) =0\). Again, we have a uni-modal distribution.

B Proof of Proposition 1

Let \(\mathcal {Q}= z P + (1-z)\) for \(z \in (0,1]\). Clearly \(\mathcal {Q}\) is a more general retention probability function. Rather than proving Proposition 1, we prove a more general result. We use the following assumption to make the statement of the more general result easier.

Assumption 2 \(\Gamma \) is sufficiently small and one of the following holds: (2.1) \(z=1\), or (2.2) \(\delta \) is sufficiently large.

Proposition 1 uses Assumption 2.1. Let \(\alpha =0\), \(w=c\) and \(R_i=h_i^*\). Fix \(z \in (0,1]\). Fix a value i and then fix \(b_j\) and \(\Delta _j\) for \(j \not =i\). Further, assume that for \(j \not =i\), the law firm must set \(h_j=r_j =h_j^*\).
If Assumptions 1 and 2 hold, then there exist L, \(\Delta _i\), and \(b_i\) such that in any period t with \(i_t=i\): (i) the client prefers the long term contract to any one shot contract; and (ii) the long term contract induces the law firm to set hours efficiently and report hours truthfully.

The firm acts to maximize its discounted expected stream of profits. Hence, it acts to satisfy the first order conditions of its Bellman Equation $$\begin{aligned} {\hat{V}}(i) = \max _{h,r} \left\{ L + \alpha {\hat{A}} -c\cdot h + w \cdot r + \delta {\hat{\mathcal {Q}}} {\tilde{V}} \right\} \end{aligned}$$ The firm's first order condition with respect to h is $$\begin{aligned} \frac{d {\hat{\mathcal {Q}}}}{dh} = \frac{c-\alpha {\hat{A}}_h}{\delta {\tilde{V}}} \end{aligned}$$ while the first order condition with respect to r is $$\begin{aligned} \frac{d {\hat{\mathcal {Q}}}}{dr} = \frac{-w}{\delta {\tilde{V}}} \end{aligned}$$ Our next step is to replace the derivatives of \({\hat{\mathcal {Q}}}\) in Eqs. 15 and 16 with more explicit expressions. We recall that the expectation of P (Eq. 11) is $$\begin{aligned} {\hat{P}} = \int _{{\underline{\theta }}_i}^{{\overline{\theta }}_i} \tau \, d\theta + \left( \frac{1}{2}-{\overline{\theta }}_i\right) \end{aligned}$$ Taking a derivative of Eq. 11 with respect to h yields $$\begin{aligned} \frac{d {\hat{P}}}{d h} &= \tau ({\overline{\theta }}_i) \frac{d{\overline{\theta }}_i}{dh} -\tau ({\underline{\theta }}_i) \frac{d{\underline{\theta }}_i}{dh} +\int _{{\underline{\theta }}_i}^{{\overline{\theta }}_i} \frac{d\tau }{dh} d\theta -\frac{d{\overline{\theta }}_i}{dh}\\ &= -\tau ({\underline{\theta }}_i) \frac{d{\underline{\theta }}_i}{dh} + \left[ \tau ({\overline{\theta }}_i)-1\right] \frac{d{\overline{\theta }}_i}{dh} +\int _{{\underline{\theta }}_i}^{{\overline{\theta }}_i} \frac{d\tau }{dh} d\theta \\ &= \int _{{\underline{\theta }}_i}^{{\overline{\theta }}_i} \frac{d\tau }{dh} d\theta . \end{aligned}$$ The final equality follows from the relationship between \(\tau \) and \({\underline{\theta }}_i\) and \({\overline{\theta }}_i\). In particular, if \(\tau ({\overline{\theta }}_i) <1\), then \({\overline{\theta }}_i =1/2\) and \(\frac{d{\overline{\theta }}_i}{dh} =0\). Likewise, if \(\tau ({\underline{\theta }}_i) >0\), then \({\underline{\theta }}_i =-1/2\) and \(\frac{d{\underline{\theta }}_i}{dh} =0\). Since \(\frac{d {\hat{\mathcal {Q}}}}{d h} = \frac{d {\hat{\mathcal {Q}}}}{d {\hat{P}}} \cdot \frac{d {\hat{P}}}{d h}\) and \(\tau (i,h,r,\theta ) = \frac{A-c\cdot r -b_i}{\Delta _i}\), we have $$\begin{aligned} \frac{d {\hat{\mathcal {Q}}}}{d h} = z\int _{{\underline{\theta }}_i}^{{\overline{\theta }}_i} \frac{A_h}{\Delta _i} d\theta \end{aligned}$$ Combining Eqs. 15 and 17 yields Eq. 18. $$\begin{aligned} \frac{c - \alpha {\hat{A}}_h}{\delta {\tilde{V}}} = z\int _{{\underline{\theta }}_i}^{{\overline{\theta }}_i} \frac{A_h}{\Delta _i} d\theta \end{aligned}$$ By arguments analogous to those used in the derivation of Eq. 17, the derivative of \({\hat{\mathcal {Q}}}\) with respect to r is $$\begin{aligned} \frac{d {\hat{\mathcal {Q}}}}{d r} = z\int _{{\underline{\theta }}_i}^{{\overline{\theta }}_i} \frac{-c}{\Delta _i} d\theta = \frac{-c({\overline{\theta }}_i-{\underline{\theta }}_i)z}{\Delta _i} \end{aligned}$$ Combining Eqs. 16 and 19 yields Eq. 20 below. $$\begin{aligned} \frac{\Delta _i}{z} = \left( \frac{c}{w} \right) ({\overline{\theta }}_i-{\underline{\theta }}_i) \delta {\tilde{V}} \end{aligned}$$ Equations 18 and 20 both include \({\tilde{V}}\), which is endogenous. This motivates us to divide Eq. 18 by Eq. 20 to remove common terms including \({\tilde{V}}\). Doing this yields Eq. 21.
$$\begin{aligned} \int _{{\underline{\theta }}_i}^{{\overline{\theta }}_i} A_h d\theta = \left( \frac{c}{w}\right) (c-\alpha {\hat{A}}_h) \cdot \left( {\overline{\theta }}_i-{\underline{\theta }}_i\right) \end{aligned}$$ Equation 21 is a necessary condition for both first order conditions to hold.

Proof of Lemma 1 By definition \({\hat{A}}_h^* =c\). If \(A_{h,\theta }^*=0\), then \(A_h^*\) is constant in \(\theta \). That is to say \(A_h^*={\hat{A}}_h^* =c\) within the integral in Eq. 21. This reduces Eq. 21 to \(w=(1-\alpha ) c\). \(\square \)

As in the body, we set \(\alpha =0\) and \(w=c\) henceforth. Doing this, and requiring that Eq. 21 holds when \(h=r=h_i^*\), yields the first order condition for h presented in the body. If \(\alpha =0\), \(w=c\), and Eq. 21 holds, then Eqs. 18 and 20 both reduce to $$\begin{aligned} \frac{\Delta _i}{z} = ({\overline{\theta }}_i-{\underline{\theta }}_i) \delta {\tilde{V}} \end{aligned}$$ We convert Eq. 22 into Eq. 24 by replacing \({\tilde{V}}\) with an endogenously determined value. If \(\alpha =0\), \(w=c\), and the first order conditions hold so that \(h=r=h^*_i\), then the firm's Bellman Equation 5 becomes $$\begin{aligned} {\hat{V}}(i) = L + \delta {\hat{\mathcal {Q}}}_i^* {\tilde{V}} \end{aligned}$$ Taking expectations over i, and recalling that, e.g., \({\tilde{\mathcal {Q}}}^* = E_i({\hat{\mathcal {Q}}}_i^*)\), we have $$\begin{aligned} {\tilde{V}} = \frac{L}{1- \delta {\tilde{\mathcal {Q}}}^*} \end{aligned}$$ Plugging Eq. 23 into Eq. 22 and setting \(r=h=h_i^*\) yields $$\begin{aligned} 1=\left( \frac{{\overline{\theta }}_i^*-{\underline{\theta }}_i^*}{\Delta _i}\right) \left( \frac{z\delta L}{1- \delta {\tilde{\mathcal {Q}}}^*}\right) \end{aligned}$$ Setting \(z=1\) makes \({\tilde{\mathcal {Q}}}^* = {\tilde{P}}^*\), and turns Eq. 24 into Eq. 12. We now argue that it is possible to solve Eqs. 13 and 24 simultaneously. The first step to this is to show that we have some freedom in choosing pairs \((b_i,\Delta _i)\) to solve Eq. 13. There are two simple cases which we dispense with first. If \(A_{h,\theta }^* =B_h^* \cdot \xi _{\theta }=0\), then Eq. 13 holds automatically. Likewise, if \(b_i\) and \(\Delta _i\) are such that \(-1/2 = {\underline{\theta }}^*_i\) and \({\overline{\theta }}^*_i= 1/2\), then Eq. 13 holds by the definition of \(h_i^*\). Equations 8 and 9 imply that if \(-1/2 = {\underline{\theta }}^*_i\) and \({\overline{\theta }}^*_i= 1/2\), then \(\Delta _i \ge A(i,h^*_i,{\overline{\theta }}_i^*) - A(i,h^*_i,{\underline{\theta }}_i^*)\), while if \(-1/2< {\underline{\theta }}^*_i< {\overline{\theta }}^*_i< 1/2\), then $$\begin{aligned} \Delta _i = A\left( i,h^*_i,{\overline{\theta }}_i^*\right) - A\left( i,h^*_i,{\underline{\theta }}_i^*\right) . \end{aligned}$$

Lemma B.1 Set \(\alpha =0\) and \(w=c\). Let Assumption 1 hold, fix i and assume that \(B_h(i,h^*_i) \not =0\). If \(0 < \Delta _i \le A(i,h^*_i,1/2) - A(i,h^*_i,-1/2)\), then \(\exists !b_i(\Delta _i)\) with \(b_i +c\cdot h^*_i \in [A(i,h^*_i,-1/2), A(i,h^*_i,0))\) such that Eqs. 13 and 25 hold for \({\underline{\theta }}^*_i(b_i)\) and \({\overline{\theta }}^*_i(b_i,\Delta _i)\). Furthermore, \(b_i(\Delta _i)\) is strictly decreasing in \(\Delta _i\), and \({\underline{\theta }}^*_i(b_i) = -{\overline{\theta }}^*_i(b_i,\Delta _i)\).

Before we prove Lemma B.1, there are some preliminaries to attend to. We consider the comparative statics from changing \(b_i\) and \(\Delta _i\).
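As an aside, the reduction in the proof of Lemma 1 can be checked symbolically. The following is a minimal sympy sketch (the symbol names are ours, not the paper's): with \(A_h\) constant in \(\theta \), so that \(A_h = {\hat{A}}_h^* = c\) inside the integral, Eq. 21 forces \(w = (1-\alpha )c\).

```python
import sympy as sp

c, w, alpha, th_lo, th_hi = sp.symbols('c w alpha theta_lo theta_hi', positive=True)

# With A_h constant in theta, A_h = Ahat_h = c at h*, so the integral in Eq. 21
# collapses to c*(theta_hi - theta_lo).
lhs = c * (th_hi - th_lo)
rhs = (c / w) * (c - alpha * c) * (th_hi - th_lo)   # RHS of Eq. 21 with Ahat_h = c

print(sp.solve(sp.Eq(lhs, rhs), w))   # -> w = (1 - alpha)*c
```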
Let \({\underline{A}}^*= A(i, h^*_i, {\underline{\theta }}^*_i)\) and \({\overline{A}}^*= A(i, h^*_i, {\overline{\theta }}^*_i)\). As is the case in Lemma B.1, assume that \(B_h^* \not =0\) and \(-1/2< {\underline{\theta }}^*_i< {\overline{\theta }}^*_i<1/2\). In this case \({\underline{\theta }}^*_i\) is defined by \({\underline{A}}^*-ch^*_i-b_i =0\). It follows that $$\begin{aligned} \frac{d {\underline{\theta }}^*_i}{d b_i} = \frac{1}{{\underline{A}}^*_{\theta }} >0 \end{aligned}$$ Given \({\underline{\theta }}^*_i\), we determine \({\overline{\theta }}^*_i\) by \(\int _{{\underline{\theta }}^*_i}^{{\overline{\theta }}^*_i} A_h d \theta = c({\overline{\theta }}^*_i- {\underline{\theta }}^*_i)\). It follows that $$\begin{aligned} \frac{d {\overline{\theta }}^*_i}{d {\underline{\theta }}^*_i} = \frac{{\underline{A}}^*_h -c}{{\overline{A}}^*_h -c} <0 \end{aligned}$$ Using the functional form \(A={\hat{A}} +B\cdot \xi \) and Assumption 1, we have $$\begin{aligned} \frac{d {\overline{\theta }}^*_i}{d {\underline{\theta }}^*_i} =\frac{{\hat{A}}^*_h -B^*_h\cdot \xi ({\underline{\theta }}^*_i)-c}{{\hat{A}}^*_h -B^*_h\cdot \xi ({\overline{\theta }}^*_i)-c} =\frac{\xi ({\underline{\theta }}^*_i)}{\xi ({\overline{\theta }}^*_i)}= -1 \end{aligned}$$ The final equality derives from the following. We know that when \({\underline{\theta }}^*_i= -\frac{1}{2}\), then \({\overline{\theta }}^*_i= \frac{1}{2}= - {\underline{\theta }}^*_i\). By Assumption 1.2, the final equality must then hold at \({\underline{\theta }}^*_i= -\frac{1}{2}\). Furthermore, as long as \({\underline{\theta }}^*_i=- {\overline{\theta }}^*_i\), the equality will hold, and as long as the equality holds along the path from \(-\frac{1}{2}\) to \({\underline{\theta }}^*_i\), then \({\underline{\theta }}^*_i=- {\overline{\theta }}^*_i\). Hence, Eq. 26 holds, as does the following lemma.

Lemma B.2 If \(\alpha =0\), \(w=c\) and Assumption 1 holds, then \({\overline{\theta }}^*_i= - {\underline{\theta }}^*_i\).

Given the negative symmetry of \(\xi \) it follows that \(\xi _{\theta }(-\theta ) = \xi _{\theta }(\theta )\), so that \({\underline{A}}^*_{\theta } = {\overline{A}}^*_{\theta }\). Using the above results, we have that $$\begin{aligned} \frac{d {\overline{\theta }}^*_i}{d b_i} = \frac{d {\overline{\theta }}^*_i}{d {\underline{\theta }}^*_i} \cdot \frac{d {\underline{\theta }}^*_i}{d b_i} =\frac{-1}{{\underline{A}}^*_{\theta }} <0 \end{aligned}$$ Finally, \(\Delta _i ={\overline{A}}^*-{\underline{A}}^*\), which leads us to $$\begin{aligned} \frac{d \Delta _i}{d {\underline{\theta }}^*_i} = {\overline{A}}^*_{\theta } \cdot \frac{d {\overline{\theta }}^*_i}{d {\underline{\theta }}^*_i} - {\underline{A}}^*_{\theta } = -2 {\underline{A}}^*_{\theta } \end{aligned}$$ $$\begin{aligned} \frac{d \Delta _i}{d b_i} = \frac{d \Delta _i}{d {\underline{\theta }}^*_i} \cdot \frac{d {\underline{\theta }}^*_i}{d b_i} = -2 \end{aligned}$$

Proof of Lemma B.1 Given the above, we can define \(b_i(\Delta _i) = {\hat{A}}^*_i -ch_i^* - \frac{1}{2}\Delta _i\) for \(0 < \Delta _i \le A(i,h^*_i, \frac{1}{2}) - A(i,h^*_i, -\frac{1}{2})\). It remains only to verify uniqueness. To this end we first consider whether, given \({\underline{\theta }}^*_i\), it is possible to choose \({\overline{\theta }}^*_i\not = -{\underline{\theta }}^*_i\). Equation 13 can be written as $$\begin{aligned} \int _{{\underline{\theta }}^*_i}^{{\overline{\theta }}^*_i} A_h^* d \theta - c({\overline{\theta }}^*_i-{\underline{\theta }}^*_i)=0. \end{aligned}$$ Now if \(B^*_h >0\) (resp.
\(<0\)) then we know that \(A_h^* < c\) (resp. \(>c\)) if and only if \(\theta <0\). Hence, \({\underline{\theta }}^*_i<0 < {\overline{\theta }}^*_i\). The derivative of the LHS of the above equation w.r.t. \({\overline{\theta }}^*_i\) is \({\overline{A}}^*_h -c = {\hat{A}}^*_h +B_h^* \cdot \xi ({\overline{\theta }}^*_i) -c = B_h^* \cdot \xi ({\overline{\theta }}^*_i)\), which does not change sign for \({\overline{\theta }}^*_i>0\). Hence, as one moves away from \({\overline{\theta }}^*_i=-{\underline{\theta }}^*_i\), the difference between the LHS and 0 only gets larger. This verifies that \({\overline{\theta }}^*_i= - {\underline{\theta }}^*_i\) is the only possibility. Now \(\Delta _i = A(i,h^*_i,{\overline{\theta }}^*_i) - A(i,h^*_i,{\underline{\theta }}^*_i) = A(i,h^*_i,-{\underline{\theta }}^*_i) - A(i,h^*_i,{\underline{\theta }}^*_i)\). The RHS is monotonically decreasing in \({\underline{\theta }}^*_i\). Hence there is a unique value of \({\underline{\theta }}^*_i\) for a given \(\Delta _i\). It is trivial from the definition of \({\underline{\theta }}^*_i\) that there is a unique \(b_i\) for each value of \({\underline{\theta }}^*_i\), which establishes uniqueness. \(\square \)

Lemma B.1 establishes the relationship \(b_i(\Delta _i)\) for \(0 < \Delta _i \le A(i,h^*_i, \frac{1}{2}) - A(i,h^*_i, -\frac{1}{2})\) and \(B_h^* \not =0\). It is convenient to have \(b_i(\Delta _i)\) defined in all cases. To that end, if \(\Delta _i > A(i,h^*_i, 1/2) - A(i,h^*_i, -1/2)\), then we set \(b_i(\Delta _i) = A(i,h^*_i, -1/2) - c\cdot h^*_i\). Finally, if \(0 < \Delta _i \le A(i,h^*_i, \frac{1}{2}) - A(i,h^*_i, -\frac{1}{2})\) and \(B_h^*=0\), then we set \(b_i(\Delta _i)\) so that \({\underline{\theta }}^*_i(b_i) = -{\overline{\theta }}^*_i(b_i,\Delta _i)\). So long as \(b_i=b_i(\Delta _i)\), we know that \((b_i,\Delta _i)\) solves Eq. 13. Now in trying to choose \(\Delta _i\) to satisfy Eq. 24, we see that both \(({\overline{\theta }}^*_i-{\underline{\theta }}^*_i)\) and \({\tilde{\mathcal {Q}}}^* = z \cdot \sum _j q_j \cdot P^*_j + (1-z)\) depend upon \(\Delta _i\). However, this dependence is well behaved.

Lemma B.3 If \(\alpha =0\), \(w=c\), and Assumption 1 holds, then \(\frac{\Delta _i}{{\overline{\theta }}^*_i-{\underline{\theta }}^*_i}\) is weakly increasing in \(\Delta _i\) and unbounded above.

Let \(y = \frac{\Delta _i}{{\overline{\theta }}^*_i- {\underline{\theta }}^*_i}\). If \(\Delta _i > A(i,h^*_i, \frac{1}{2}) -A(i,h^*_i, -\frac{1}{2})\), then \(y = \Delta _i\). This establishes both that the derivative is positive in this range and that the function is unbounded above. Now consider \(\Delta _i < A(i,h^*_i, \frac{1}{2}) -A(i,h^*_i, -\frac{1}{2})\). Let \(x = ({\overline{\theta }}^*_i-{\underline{\theta }}^*_i)^2 \cdot \frac{dy}{d \Delta _i}\). Clearly x and \(\frac{dy}{d \Delta _i}\) have the same sign. \(x= ({\overline{\theta }}^*_i-{\underline{\theta }}^*_i) - \Delta _i\left( \frac{d {\overline{\theta }}^*_i}{d \Delta _i} - \frac{d {\underline{\theta }}^*_i}{d \Delta _i}\right) = ({\overline{\theta }}^*_i-{\underline{\theta }}^*_i) - \Delta _i\left( \frac{d {\overline{\theta }}^*_i}{d b_i} - \frac{d {\underline{\theta }}^*_i}{d b_i} \right) \cdot \frac{d b_i}{d \Delta _i} =({\overline{\theta }}^*_i-{\underline{\theta }}^*_i) - \frac{\Delta _i}{{\underline{A}}^*_\theta }\).
We observe that \(\frac{\Delta _i}{{\overline{\theta }}^*_i-{\underline{\theta }}^*_i} \le {\underline{A}}^*_\theta \), because the LHS is the average slope of A in \(\theta \), which by Assumption 1.1 is weakly less than the slope at the edge, \({\underline{A}}^*_\theta \). Hence \(x >0\). Finally, there is a kink at \(\Delta _i = A(i,h^*_i, \frac{1}{2}) -A(i,h^*_i, -\frac{1}{2})\). However, at a kink a function is still increasing if both its left- and right-hand derivatives are positive. \(\square \)

Lemma B.4 Let \(\alpha =0\), \(w=c\), and \(b_i = b_i(\Delta _i)\) as defined above. If Assumption 1 holds, then \({\hat{P}}^*_i=\frac{1}{2}\) for \(\Delta _i \le A(i,h^*_i, \frac{1}{2}) -A(i,h^*_i, -\frac{1}{2})\) and is decreasing in \(\Delta _i\) for \(\Delta _i > A(i,h^*_i, \frac{1}{2}) -A(i,h^*_i, -\frac{1}{2})\).

We first consider the case in which \(\Delta _i \le A(i,h^*_i, \frac{1}{2}) -A(i,h^*_i, -\frac{1}{2})\). In this case \(b_i = A^*(i,{\underline{\theta }}^*_i) -c\cdot h_i^*\), so that $$\begin{aligned} A^*-c\cdot h^*_i -b_i &= A^* - {\underline{A}}^*= \left[ {\hat{A}}^* +B^*\cdot \xi (\theta )\right] - \left[ {\hat{A}}^* +B^*\cdot \xi ({\underline{\theta }}^*_i)\right] \\ &= B^*\left[ \xi (\theta ) -\xi ({\underline{\theta }}^*_i)\right] \end{aligned}$$ $$\begin{aligned} \Delta _i &= {\overline{A}}^*-{\underline{A}}^*= \left[ {\hat{A}}^* + B^*\cdot \xi (i,{\overline{\theta }}^*_i)\right] - \left[ {\hat{A}}^* + B^*\cdot \xi (i,{\underline{\theta }}^*_i)\right] \\ &= B^*\left[ \xi (i,{\overline{\theta }}^*_i)-\xi (i,{\underline{\theta }}^*_i)\right] = 2B^* \cdot \xi ({\overline{\theta }}^*_i) \end{aligned}$$ Hence we have that $$\begin{aligned} {\hat{P}}^*_i &= \int _{{\underline{\theta }}^*_i}^{{\overline{\theta }}^*_i} \frac{B^* \left[ \xi (\theta ) -\xi ({\underline{\theta }}^*_i)\right] }{2B^* \cdot \xi ({\overline{\theta }}^*_i)} d \theta +\left( \frac{1}{2}-{\overline{\theta }}^*_i\right) \\ &= \int _{{\underline{\theta }}^*_i}^{{\overline{\theta }}^*_i}\frac{\xi (\theta ) }{2\cdot \xi ({\overline{\theta }}^*_i)} d \theta + \int _{{\underline{\theta }}^*_i}^{{\overline{\theta }}^*_i}\frac{\xi ({\overline{\theta }}^*_i) }{2\cdot \xi ({\overline{\theta }}^*_i)} d \theta +\left( \frac{1}{2}-{\overline{\theta }}^*_i\right) \\ &= 0 + \frac{1}{2}\left( {\overline{\theta }}^*_i-{\underline{\theta }}^*_i\right) + \frac{1}{2}-{\overline{\theta }}^*_i= \frac{1}{2}\end{aligned}$$ On the other hand, if \(\Delta _i > A(i,h^*_i, \frac{1}{2}) -A(i,h^*_i, -\frac{1}{2})\), then \(b_i = A(i,h^*_i, -\frac{1}{2}) - c\cdot h^*_i\), \({\underline{\theta }}^*_i= -\frac{1}{2}\) and \({\overline{\theta }}^*_i= \frac{1}{2}\). That is, none of these parameters depend upon \(\Delta _i\). Hence $$\begin{aligned} \frac{d {\hat{P}}^*_i}{d \Delta _i} = -\left( \frac{1}{\Delta _i} \right) ^2 \int _{-\frac{1}{2}}^{\frac{1}{2}} \left( A^* - c \cdot h^*_i - b_i\right) d \theta <0 \end{aligned}$$

Lemma B.5 Let Assumption 1 hold, and set \(\alpha =0\), \(w=c\) and \(b_i =b_i(\Delta _i)\) as defined above. In this case \(\frac{\Delta _i (1-\delta {\tilde{\mathcal {Q}}}^*)}{{\overline{\theta }}^*_i-{\underline{\theta }}^*_i}\) is weakly increasing in \(\Delta _i\) and unbounded above.

We note that if \(j \not =i\), then \(P_j^*\) does not depend upon \(\Delta _i\); hence, by Lemma B.4, \({\tilde{\mathcal {Q}}}^*= z\sum _j q_j P_j^*+ (1-z)\) is weakly decreasing in \(\Delta _i\). The result then follows from an application of Lemma B.3. \(\square \)

With Lemma B.5 in hand, there are no difficulties if \(\delta L\) is large enough.
In particular, if \(\delta L\) is large enough that the LHS of Eq. 24 is larger than the RHS when \(\Delta _i = A(i,h^*_i, 1/2) - A(i,h^*_i, -1/2)\), then we can simply keep increasing \(\Delta _i\) until Eq. 24 holds. We now address the case in which \(\Delta _i < A(i,h^*_i, 1/2) - A(i,h^*_i, -1/2)\). Let \(b_i = b_i(\Delta _i)\) as defined above.

Lemma B.6 \(\lim _{\Delta _i \rightarrow 0} \frac{\Delta _i}{{\overline{\theta }}^*_i- {\underline{\theta }}^*_i} = A_{\theta } (i,h^*_i,0)\).

\(\lim _{\Delta _i \rightarrow 0} \frac{\Delta _i}{{\overline{\theta }}^*_i- {\underline{\theta }}^*_i} =\lim _{\Delta _i \rightarrow 0} \frac{A(i,h^*_i,{\overline{\theta }}_i^*) - A(i,h^*_i,{\underline{\theta }}_i^*)}{{\overline{\theta }}_i^*-{\underline{\theta }}_i^*}= A_{\theta } (i,h^*_i,0)\). The first equality follows from Eq. 25, which holds for \(\Delta _i\) sufficiently small. The second equality is just the definition of a derivative. \(\square \)

We rewrite Eq. 24 as $$\begin{aligned} z \delta L = \left( \frac{\Delta _i}{{\overline{\theta }}_i^*-{\underline{\theta }}_i^*} \right) \left( 1- \delta {\tilde{\mathcal {Q}}}^*\right) \end{aligned}$$ We use \(1-\delta {\tilde{\mathcal {Q}}}^* = 1-\delta (1-z) -z \delta {\tilde{P}}^*\) and \({\tilde{P}}^* = q_i \cdot {\hat{P}}_i^* + \sum _{j \not = i} q_j \cdot {\hat{P}}_j^*\) to transform Eq. 27 into $$\begin{aligned} z \delta L = \left( \frac{\Delta _i}{{\overline{\theta }}^*_i-{\underline{\theta }}^*_i} \right) \left( 1-\delta (1-z)- z \delta \left( \sum _{j \not = i} q_j \cdot {\hat{P}}_j^* \right) - z \delta \cdot q_i \cdot {\hat{P}}_i^* \right) \end{aligned}$$ Let \(K_i = 1-\delta (1-z) -\delta z \left( \sum _{j \not = i} q_j {\hat{P}}^*_j\right) - \frac{q_i z \delta }{2}\). We note that \(K_i \in (0,1)\) and is constant in \(\Delta _i\) and \(b_i\). We use Eq. 11 to transform Eq. 28 to $$\begin{aligned} z\delta L = \left( \frac{\Delta _i}{{\overline{\theta }}^*_i-{\underline{\theta }}^*_i} \right) \left( K_i +z \delta \cdot q_i \cdot {\overline{\theta }}^*_i- \delta z\cdot q_i \int _{{\underline{\theta }}^*_i}^{{\overline{\theta }}^*_i} \left( \frac{A^*-c\cdot h^*-b}{\Delta _i}\right) d\theta \right) \end{aligned}$$ We break Eq. 29 into two pieces, divide by z, and factor \(\frac{\Delta _i}{{\overline{\theta }}^*_i-{\underline{\theta }}^*_i}\) out of the second piece to arrive at $$\begin{aligned} \delta L = \left( \frac{\Delta _i}{{\overline{\theta }}^*_i- {\underline{\theta }}^*_i}\right) \left( \frac{ K_i + \delta z q_i {\overline{\theta }}^*_i}{z}\right) - \delta q_i \int _{{\underline{\theta }}^*_i}^{{\overline{\theta }}^*_i} \frac{A^*-ch^*_i-b_i}{{\overline{\theta }}^*_i-{\underline{\theta }}^*_i} d \theta \end{aligned}$$ Recall that \({\bar{X}}\) is the minimum (over i) efficiency loss from a one shot contract. We notice that \(\lim _{\Delta _i \rightarrow 0} b_i(\Delta _i) =\beta _i \equiv A(i,h_i^*,0)-c\cdot h_i^*\).
We have that $$\begin{aligned} \lim _{\Delta _i \rightarrow 0} \left( \frac{\Delta _i}{{\overline{\theta }}^*_i-{\underline{\theta }}^*_i}\right) \left( \frac{ K_i + \delta z q_i {\overline{\theta }}^*_i}{z}\right) &= A_{\theta }(i,h_i^*,0) \left( \frac{ K_i + \delta z q_i/2 }{z}\right) \\ &< \delta {\bar{X}} \left( \frac{ K_i + \delta z q_i/2 }{z}\right) \end{aligned}$$ $$\begin{aligned} \lim _{\Delta _i \rightarrow 0} \int _{{\underline{\theta }}^*_i}^{{\overline{\theta }}^*_i} \left( \frac{A^*-c\cdot h_i^*-b_i}{{\overline{\theta }}^*_i-{\underline{\theta }}^*_i}\right) d\theta = A(i,h_i^*,0)-c\cdot h_i^* -\beta _i =0 \end{aligned}$$ We must now consider which part of Assumption 2 holds. If \(z=1\), then we have $$\begin{aligned} \frac{ K_i + \delta z q_i/2 }{z} = 1 - \delta \left( \sum _{j\not =i} q_j \cdot {\hat{P}}_j^*\right) <1. \end{aligned}$$ Consequently, the limit as \(\Delta _i \rightarrow 0\) of the RHS of Eq. 30 is less than \(\delta {\bar{X}} (K_i + \delta q_i {\overline{\theta }}^*_i) < \delta {\bar{X}}\). Hence, \(\exists \epsilon >0\) such that if \(L = {\bar{X}} -\epsilon \), then the contract is feasible and the client strictly prefers the current expected payoff from this contract to any one shot contract. We next turn to the case in which \(\delta \) is sufficiently close to one. We can see that $$\begin{aligned} \lim _{\delta \rightarrow 1} \frac{ K_i + \delta z q_i/2 }{z} &= \lim _{\delta \rightarrow 1} \frac{1-\delta (1-z) -\delta z \left( \sum _{j \not = i} q_j {\hat{P}}^*_j\right) }{z}\\ &= \frac{z - z\left( \sum _{j \not = i} q_j {\hat{P}}^*_j\right) }{z} = 1- \sum _{j \not = i} q_j {\hat{P}}^*_j <1 \end{aligned}$$ Hence for \(\delta \) sufficiently close to 1, it is again the case that the limit as \(\Delta _i \rightarrow 0\) of the RHS of Eq. 30 is less than \(\delta {\bar{X}}\). It remains only to establish that the firm cannot make itself better off by attempting to defraud the client. The choice of \(r=h=h_i^*\) is locally optimal. However, choosing h and r so as to defraud the client is a non-local alternative. Being a non-local alternative, it must lead to a discrete drop in the probability of being retained. Since being retained has a strictly positive value, this leads to a discrete drop in the expected present value of future payoffs. For \(\Gamma \) sufficiently small, this lowers the payoff to the law firm. This concludes the proof of Proposition 1.

C Proof of Theorem 1 and Proposition 2

We note that the difference between these results is that in Theorem 1 Assumption 2.1 holds, while in Proposition 2 Assumption 2.2 holds. We proceed with \(\Delta _i\) determining \(b_i\), \({\underline{\theta }}^*_i\), and \({\overline{\theta }}^*_i\). This assures that Eq. 13 holds for each i. It remains to show that we can also simultaneously satisfy Eq. 24 for each i. Let \(\rho _i(\Delta _1, \ldots , \Delta _n) = \left( \frac{\Delta _i}{{\overline{\theta }}^*_i-{\underline{\theta }}^*_i} \right) \left( 1 - \delta {\tilde{\mathcal {Q}}}^* \right) \). Equation 24 can be written as \(z \delta L = \rho _i\). Let \(\Delta _i^e\) denote the solution to Eq. 24 given \(\Delta = (\Delta _1, \ldots , \Delta _n)\).

Lemma C.1 Let \(\alpha =0\), \(w=c\), and \(b_i = b_i(\Delta _i)\) as defined above. \(\Delta _i^e\) is continuous in, and weakly decreasing in, \(\Delta _j\).

We notice that \(\Delta _i^e\) acts to set \(z\delta L = \rho _i\).
The function \(\rho _i\) is continuous in both \(\Delta _i\) and \(\Delta _j\), strictly monotonically increasing in \(\Delta _i\), and weakly monotonically increasing in \(\Delta _j\). Hence an infinitesimal increase in \(\Delta _j\) must be met by no more than an infinitesimal decrease in \(\Delta _i^e\). \(\square \)

Let \(\Delta ^e \equiv (\Delta _1^e, \ldots , \Delta _n^e)\). The Brouwer Fixed Point Theorem states that \(\Delta ^e\) has a fixed point if it is a continuous function mapping a compact and convex domain into itself. Lemma C.1 states that \(\Delta ^e\) is continuous. However, the domain of \(\Delta \) is \(\mathfrak {R}^n_{++}\), which is not compact. We now demonstrate that there is a compact and convex sub-domain of \(\mathfrak {R}^n_{++}\) which \(\Delta ^e\) maps into itself. This will suffice, since there must be a fixed point on this sub-domain. We use monotonicity to establish an upper bound for \(\Delta _i^e\), which we denote as \({\bar{\Delta }}_i\). Let us assume for the moment that \({\bar{\Delta }}_i > A(i,h_i^*,\frac{1}{2}) - A(i,h_i^*,-\frac{1}{2})\). In this case \({\overline{\theta }}^*_i-{\underline{\theta }}^*_i=1\), and $$\begin{aligned} \Delta _i^e = \frac{z\delta L}{(1-\delta {\tilde{\mathcal {Q}}}^*)} \le \frac{\delta L}{(1-\delta )} \end{aligned}$$ Hence \(\Delta _i^e \le {\bar{\Delta }}_i \equiv \max \left\{ \frac{\delta L}{(1-\delta )}, A(i,h_i^*,\frac{1}{2}) - A(i,h_i^*,-\frac{1}{2})\right\} \). We now establish a lower bound for \(\Delta _i^e\), which we denote as \({\underline{\Delta }}_i\). Clearly 0 is a lower bound for \(\Delta _i^e\). However, neither \(\tau \) nor \(\mathcal {Q}\) is defined for \(\Delta _i =0\). Hence, what we need is to establish a lower bound \({\underline{\Delta }}_i >0\). We work with \(\Delta _j \le {\bar{\Delta }}_j\), in which case we have $$\begin{aligned} \frac{\Delta _i^e}{{\overline{\theta }}^*_i- {\underline{\theta }}^*_i} = \frac{z\delta L}{1-\delta {\tilde{\mathcal {Q}}}^*} > z\delta L \end{aligned}$$ The strict inequality follows since \({\hat{\mathcal {Q}}}^*_j =0\) only if \(\Delta _j = \infty \) and \(z=1\). We know from Lemmas B.3 and B.6 that we may set the LHS of Eq. 33 to any value strictly greater than \(A_{\theta }(i,h_i^*,0)\), which itself is strictly less than \(\delta {\bar{X}}\). Hence if \(z=1\), then we can fix \(\epsilon \) with \(0< \epsilon < {\bar{X}} - \frac{A_{\theta }(i,h^*_i, 0)}{\delta }\). Set \(L= {\bar{X}} - \epsilon \). Now since \(\delta L > A_{\theta }(i,h^*_i, 0)\) and \(\frac{\Delta _i}{{\overline{\theta }}^*_i- {\underline{\theta }}^*_i}\) is increasing, it follows that there exists a \({\underline{\Delta }}_i >0\) such that \(\frac{{\underline{\Delta }}_i}{{\overline{\theta }}^*_i- {\underline{\theta }}^*_i} =\delta L\). From Eq. 33 it follows that \(\Delta _i^e > {\underline{\Delta }}_i\) as long as \(\Delta _j \le {\bar{\Delta }}_j\). Now, on the other hand, suppose that \(z<1\). In this case we note that $$\begin{aligned} \lim _{\delta \rightarrow 1}\frac{z\delta L}{1-\delta {\tilde{\mathcal {Q}}}^*} = \lim _{\delta \rightarrow 1}\frac{z\delta L}{1 -\delta (1-z) - \delta z {\tilde{P}}^*} = \frac{L}{1-{\tilde{P}}^*} > L \end{aligned}$$ Hence, for \(\delta \) sufficiently close to one, we can for identical reasons find a \({\underline{\Delta }}_i >0\) such that \(\frac{{\underline{\Delta }}_i}{{\overline{\theta }}^*_i- {\underline{\theta }}^*_i} =\delta L\). Again, from Eq. 33 it follows that \(\Delta _i^e > {\underline{\Delta }}_i\) as long as \(\Delta _j \le {\bar{\Delta }}_j\).
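The construction can also be seen numerically. The following toy sketch is purely illustrative: the functional form of \(\rho _i\) and all parameter values are hypothetical, chosen only to satisfy the properties used above (continuity, strict monotonicity in \(\Delta _i\), weak monotonicity in \(\Delta _j\)).

```python
import numpy as np

# Hypothetical rho_i: continuous, strictly increasing in Delta_i,
# weakly increasing in each Delta_j.
def rho(i, delta_vec):
    return delta_vec[i] * (1.0 + 0.1 * np.mean(delta_vec))

z, disc, L = 1.0, 0.9, 1.0     # hypothetical values of z, delta (discount), L
target = z * disc * L          # Eq. 24 reads z*delta*L = rho_i(Delta)

def delta_e(delta_vec):
    """Map Delta -> Delta^e: solve rho_i = target coordinate-wise by bisection."""
    out = np.empty_like(delta_vec)
    for i in range(len(delta_vec)):
        lo, hi = 1e-9, 1e3     # a compact bracket, playing the role of the bounds above
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            trial = delta_vec.copy()
            trial[i] = mid
            if rho(i, trial) < target:
                lo = mid
            else:
                hi = mid
        out[i] = 0.5 * (lo + hi)
    return out

# Iterate the map; Brouwer guarantees a fixed point on a compact convex box,
# and monotonicity makes plain iteration converge in this toy example.
delta_vec = np.ones(3)
for _ in range(50):
    delta_vec = delta_e(delta_vec)
print(delta_vec, [round(rho(i, delta_vec), 6) for i in range(3)])
# At the fixed point, rho_i(Delta) = z*delta*L holds for every i simultaneously.
```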
With this lower bound established, we may apply the Brouwer Fixed Point Theorem. A fixed point of \(\Delta ^e\) is a simultaneous solution to Eq. 24 for each value of i. Hence, the firm voluntarily sets \(h_j=r_j=h_j^*\) for each value of j. This renders moot the fact that \(\Delta ^e\) was defined as the solution to Eq. 24 with \(h_j=r_j=h_j^*\) for \(j\not =i\). That is, define \({\bar{\Delta }}^e\) as we defined \(\Delta ^e\), but with the requirement that each \(h_j\) and \(r_j\) be chosen to maximize the discounted present value of payments to the firm. The fixed point of \(\Delta ^e\) must also be a fixed point of \({\bar{\Delta }}^e\). Hence, at this fixed point the law firm chooses \(h_i=r_i=h_i^*\) for each value of i absent any assumptions concerning how the other \(h_j\) and \(r_j\) are chosen. Finally, we note that L was set less than \({\bar{X}}\). Hence, the client prefers the long term contract for each value of i. As in the proof of Proposition 1, the law firm has no desire to defraud the client. This follows for exactly the same reasons. This concludes the proof of Theorem 1. To complete the proof of Proposition 2, we need to show that we can choose z to set \({\tilde{\mathcal {Q}}}^*\) to any value in \([1/2, 1)\). We first note that \({\tilde{\mathcal {Q}}}^* = {\tilde{P}}^* \le 1/2\) when \(z=1\) and \({\tilde{\mathcal {Q}}}^* \rightarrow 1\) as \(z \rightarrow 0\). Hence, it remains only to show that \({\tilde{\mathcal {Q}}}^*\) is continuous. We note that an infinitesimal change in z creates an infinitesimal change in the law firm's incentives, which can be rebalanced with an infinitesimal change in L and \(\{\Delta _i\}_i\). Hence, if we think of L and \(\{\Delta _i\}_i\) as functions of z, then \({\tilde{\mathcal {Q}}}^*\) is continuous in z. This completes the proof of Proposition 2.
How many numbers greater than 10 lakhs can be formed from the digits 2, 3, 0, 3, 4, 2, 3?
(a) 420 (b) 360 (c) 400 (d) 300

A number of 10 lakhs or more has seven digits, so all seven given digits must be used. The number of arrangements of the seven digits, of which the two 2s are alike of one kind and the three 3s are alike of a second kind, is $\frac{7 !}{2 ! 3 !}$. But these arrangements include those in which the first digit is 0; such an arrangement gives a number with fewer than seven digits, i.e. less than 10 lakhs, so we must subtract all of them. The number of arrangements with the first digit fixed as 0 equals the number of arrangements of the remaining 6 digits, $\frac{6 !}{2 ! 3 !}$. Therefore, the total count of numbers greater than 10 lakhs that can be formed from the given digits is $\frac{7 !}{2 ! 3 !}-\frac{6 !}{2 ! 3 !} = 420-60 = 360$, so the answer is (b).
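The count is straightforward to confirm by brute force. A minimal Python check of both the formula and a direct enumeration:

```python
from math import factorial
from itertools import permutations

digits = [2, 3, 0, 3, 4, 2, 3]                   # two 2s and three 3s

# Formula: all distinct 7-digit arrangements, minus those with a leading 0.
total = factorial(7) // (factorial(2) * factorial(3))           # 420
leading_zero = factorial(6) // (factorial(2) * factorial(3))    # 60
print(total - leading_zero)                                     # 360

# Direct enumeration of distinct values greater than 10 lakhs (10**6);
# a leading 0 yields a 6-digit integer, which fails the comparison.
values = {int(''.join(map(str, p))) for p in permutations(digits)}
print(sum(v > 10**6 for v in values))                           # 360
```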
A tellurite glass optical microbubble resonator

J. Yu,1,2 J. Zhang,1 R. Wang,1 A. Li,1,2 M. Zhang,1 S. Wang,1 P. Wang,1,3,5 J. M. Ward,2,4 and S. Nic Chormaic2,6

1Key Laboratory of In-Fiber Integrated Optics of Ministry of Education, College of Science, Harbin Engineering University, Harbin 150001, China
2Light-Matter Interactions for Quantum Technologies Unit, Okinawa Institute of Science and Technology Graduate University, Onna, Okinawa 904-0495, Japan
3Key Laboratory of Optoelectronic Devices and Systems of Ministry of Education and Guangdong Province, College of Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, China
4Physics Department, University College Cork, Cork, Ireland
5pengfei.wang@tudublin.ie
6sile.nicchormaic@oist.jp

Optics Express, Vol. 28, Issue 22, pp. 32858–32868 (2020). https://doi.org/10.1364/OE.406256
Original manuscript: August 26, 2020; revised: October 6, 2020; accepted: October 8, 2020.

We present a method for making microbubble whispering gallery resonators (WGRs) from tellurite, which is a soft glass, using a CO$_2$ laser. The customized fabrication process permits us to process glasses with low melting points into microbubbles with loaded quality factors as high as $2.3 \times 10^6$. The advantage of soft glasses is that they provide a wide range of refractive index, thermo-optical, and optomechanical properties. The temperature and air pressure dependent optical characteristics of both passive and active tellurite microbubbles are investigated. For passive tellurite microbubbles, the measured temperature and air pressure sensitivities are 4.9 GHz/K and 7.1 GHz/bar, respectively. The large thermal tuning rate is due to the large thermal expansion coefficient of $1.9 \times 10^{-5}$ K$^{-1}$ of the tellurite microbubble. In the active Yb$^{3+}$-Er$^{3+}$ co-doped tellurite microbubbles, C-band single-mode lasing with a threshold of 1.66 mW is observed with a 980 nm pump, and a maximum wavelength tuning range of 1.53 nm is obtained. The sensitivity of the laser output frequency to pressure changes is 6.5 GHz/bar. The microbubbles fabricated using this method have a low eccentricity and uniform wall thickness, as determined from electron microscope images and the optical spectra. The compound glass microbubbles described herein have the potential for a wide range of applications, including sensing, nonlinear optics, tunable microcavity lasers, and integrated photonics.
© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

In the past few decades, the level of research activity on whispering gallery mode (WGM) resonators has increased rapidly [1–7]. Concurrently, the number of WGM geometries has also increased. These resonators, or microcavities, have small sizes, high uniformity, and smooth surfaces, which combine to deliver an extremely high quality (Q-) factor and small mode volume, thereby having potential in many applications, such as high-sensitivity sensing, nonlinear optics, optomechanics, active photonics devices, cavity quantum electrodynamics, and nanoparticle control [8–17]. One of the resonator geometries that has attracted much attention is the microbubble, in which the WGMs propagate in the wall of a thin spherical shell typically made of glass. First reported in 2010 [18], a silica microbubble can be fabricated by heating and expanding a silica capillary using a CO$_2$ laser and air pressure [19]. The wall thickness of the bubble can be close to the wavelength of light propagating in the WGMs, resulting in evanescent fields on the inner and outer walls; therefore, the modes are extremely sensitive to changes in refractive index. The thin walls also give a large sensitivity to changes in pressure [13]. Microbubbles are predominately made from silica glass because it is relatively easy to manipulate when heat-softened. Other soft glasses have many advantages over silica, but their low melting points make them more difficult to control and cast into the desired shape. In previous work, lead silicate microbubbles with single and double stems were fabricated. However, the shapes were not spherical, the Q-factors were limited to about $10^5$, and the fabrication process was very difficult to control, with a low success rate [20]. In this work, we report on the development of a microbubble fabrication method for soft glasses to yield doped and undoped tellurite glass microbubble whispering gallery resonators (WGRs). Tellurite glass has many properties which may lend themselves to the functionality of the WGR. For example, its thermal expansion coefficient [21] is larger than that of silica [22], thereby increasing the sensitivity of temperature sensors. This is also true for the nonlinear coefficients of tellurite glass; the zero dispersion of tellurite microbubbles can be tuned over a large range, making them viable for visible to mid-infrared frequency comb applications [23–25]. One of the benefits of laser emission in a microbubble is the high degree of wavelength tunability and the prospect of integrating a microlaser into a hollow WGR. The common fabrication method involves covering the surface of the microcavity with a gain material, such as sol-gel or some other rare-earth ion doped compound glass [15,26]; a review of several techniques is contained in [27]. An alternative is to inject a gain liquid (such as an organic gain dye) into the microbubble for laser emission [28]. In this case, a large loss will be induced in the microbubble resonator, since the introduction of a gain material which is not intrinsic to the capillary results in a higher lasing threshold. Additionally, the material of choice for the capillary has been predominantly silica, which is limited in its rare-earth ion doping concentration and lacks a low phonon energy, and it can be challenging to know the exact concentration of the gain medium after preparing the resonator.
Finally, the wavelength range of laser emission from silica is largely limited to visible and near-infrared light. The aforementioned drawbacks with existing techniques can be largely overcome by using compound tellurite glass to prepare the microbubble resonator, details of which are contained herein. A polished tellurite glass tube was initially formed into a microcapillary by tapering using a large fiber drawing tower. Then the capillary was further drawn down to its final diameter by heating and stretching it in a custom-made pulling rig consisting of a small ceramic heater. Next, a CO$_2$ laser was used to form the tellurite glass microbubble using a unique method. Finally, the temperature and air pressure dependent optical characteristics of the WGMs were investigated for both passive and active versions of the microbubble.

2. Fabrication of the tellurite microbubble

As a first step, we fabricated both passive and active tellurite glass rods, with compositions 75TeO$_2$-5ZnO-15Na$_2$CO$_3$-5Bi$_2$O$_3$ and 75TeO$_2$-5ZnO-15Na$_2$CO$_3$-4.25Bi$_2$O$_3$-0.5Yb$_2$O$_3$-0.25Er$_2$O$_3$, respectively, using the melt-quenching method [29]. We placed 50 g of high-purity chemical material [TeO$_2$ (99.99%), ZnO (99.99%), Na$_2$CO$_3$ (99.99%), Bi$_2$O$_3$ (99.99%), Yb$_2$O$_3$ (99.99%), Er$_2$O$_3$ (99.99%)] in an agate mortar and stirred for 10 minutes. The material was then stored in an alumina crucible and heated in a closed furnace at $900^\circ$C for 60 minutes. Next, the melt was poured quickly into a tube furnace, which had been heated to $300^\circ$C for 3 hours. The rotation speed of the tube furnace was set to 30 rev/min for one minute. A glass tube was formed in the tube furnace and annealed at $310^\circ$C for 4 hours to remove any remaining internal stress. We removed the glass tube and polished it using low-mesh to high-mesh sandpaper until the ratio of the inner and outer diameters was between 0.6 and 0.7. Finally, we mounted the prepared polished glass tube in the nitrogen-filled chamber of the fiber drawing tower. The temperature of the furnace was increased to 345$^{\circ }$C and tellurite glass capillaries with outer diameters of 300 $\mu$m were obtained by controlling the speed and the tractive force of the fiber drawing tower. To make microbubbles, several steps were needed, as illustrated in Fig. 1(a)-(f). First, we had to further decrease the diameter of the tellurite capillaries to about 15 $\mu$m using a ceramic heater, see Fig. 1(d). In order to make a tellurite microbubble, a CO$_2$ laser (48-2KWM, Synrad) was used to process the tellurite capillaries even further. The laser beam was divided into two parts, which were then overlapped near their focal points, see Fig. 1(a). The size of the beam at the location of the bubble was about 350 $\mu$m and the lenses have a focal length of 400 mm. A solid microsphere with a diameter of 35 $\mu$m was first formed on the tip of the tellurite glass capillary. This was done by affixing the capillary to a one-dimensional translation stage and then slowly moving it into the center of the two laser beams until the microsphere formed. At this point, we opened a gas valve and the capillary was pressurized to 3.5 bar while, at the same time, we increased the power of the CO$_2$ laser. The tellurite microsphere expanded into a microbubble with a diameter of about 150 $\mu$m, as shown in Fig. 1(f).
This is a self-terminating process: once the CO$_2$ laser power and gas pressure are fixed, the bubble expansion stops at the point where the heat loss from the wall exceeds the absorbed heat from the laser. We controlled the size of the microspheres in advance to adjust the diameter of the resulting microbubble. This method is simpler and reduces cavity loss when compared with the method of directly making a microbubble from a capillary [20]. During the glass capillary fabrication process, some tiny air bubbles are formed when the molten glass is rotated in the tube furnace. A rough surface is also created by uneven polishing. If a bubble is made directly from the capillary by simply softening the glass, the air bubbles and rough surface may be preserved. However, if the microcapillary is completely remelted into a microsphere, the bubbles inside disappear and the surface is very smooth.

Fig. 1. (a-c) Three step fabrication process for the tellurite glass microbubble. BS: beam splitter, R: mirror. (d-f) Microscope images of the tellurite glass capillary, microsphere, and microbubble in steps (a-c). (g) SEM image of a broken tellurite glass microbubble; A-H represent the different measurement positions in the microbubble, with the wall thickness at position B being 670 nm. (h) The measured thickness at the different positions in (g).

To characterize the wall thickness of the microbubble, we broke it and imaged it with a scanning electron microscope (SEM). A typical measured thickness is about 670 nm, see position B in Fig. 1(g). We measured multiple points to determine the uniformity of the microbubble and the results are shown in Fig. 1(h). Except for the thickness near the stem, which is around 900 nm, most other points on the microbubble wall are around 670 nm. The thickness of the wall is calculated to be 640 nm by assuming conservation of volume from the tellurite glass in the microsphere to the resulting microbubble and is consistent with the measured value [30]. The thickness is calculated from (1)$$\frac{4}{3}\pi(\frac{c_1}{2})^3=\frac{4}{3}\pi(\frac{c_2}{2})^3-\frac{4}{3}\pi(\frac{c_2-a_1}{2})^3,$$ where c$_1$ and c$_2$ are the diameters of the microsphere and microbubble, respectively, and a$_1$ is the thickness of the microbubble wall. The above formula is different from the conservation of mass of the cross-sectional area described in the literature [31,32]. Since the tellurite capillary tends to collapse to some degree when it is tapered with the ceramic heater, its exact inner diameter cannot be obtained. The microbubble shown in Fig. 1(g) differs from other conventional silica microbubbles reported in the literature [13], not only in the fabrication method but also in the final geometry. The bubble is highly uniform both in shape and wall thickness. The fact that the microbubble is blown out from a microsphere means that the resulting bubble also has a low degree of eccentricity. Microbubbles from other low-melting compound glasses, such as fluoride and chalcogenide glasses, could be prepared using this method.
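A quick numerical check of Eq. (1): solving for $a_1$ gives $a_1 = c_2 - (c_2^3 - c_1^3)^{1/3}$. A minimal sketch in Python, using the microsphere and microbubble diameters quoted above (35 $\mu$m and about 150 $\mu$m):

```python
# Solve Eq. (1) for a1: the glass volume of the microsphere (diameter c1)
# equals the volume of the bubble shell (outer diameter c2).
c1 = 35.0     # microsphere diameter, micrometers
c2 = 150.0    # microbubble diameter, micrometers

a1 = c2 - (c2**3 - c1**3) ** (1 / 3)
print(f"a1 = {a1 * 1e3:.0f} nm")   # ~640 nm, matching the calculated value in the text
```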
3. Experimental setup

The experimental setup is schematically illustrated in Fig. 2(a). We used two lasers in a pump/probe arrangement. A tunable laser (TLB-6700, Newport) with a center wavelength of 1550 nm was used to probe the WGM resonances of the tellurite glass microbubbles. The laser frequency was scanned over 36 GHz at a rate of 10 Hz. A pump diode laser with a wavelength of 980 nm (BL976-SAG300, Thorlabs) and a linewidth of 1 nm was used both to control the temperature inside the passive tellurite glass microbubble and to act as the pump for the Yb$^{3+}$-Er$^{3+}$ doped tellurite glass microbubble. A silica fiber with single-mode transmission at 980 nm and 1550 nm was selected (1060XP, Thorlabs) to make the tapered fiber coupler, with a final diameter of around 1 $\mu$m. The output lasing was observed using an optical spectrum analyzer (MS9740A, Anritsu).

Fig. 2. (a) Experimental setup for the tellurite glass microbubble. The blue line represents the optical path and the black lines are the electrical connections. TL: tunable laser; DL: diode laser; FG: function generator; WDM: wavelength division multiplexer; PD: photodetector; OSC: oscilloscope; OSA: optical spectrum analyzer. (b) The observed WGM resonance spectrum of the tellurite glass microbubble (diameter $\sim$ 130 $\mu$m, wall thickness $\sim$ 800 nm) with a Lorentzian fit (red line), corresponding to a loaded $Q$-factor of $2.3\times 10^6$.

4. Passive tellurite glass microbubbles

As a first step, we measured the Q-factor of a passive tellurite glass microbubble by scanning the frequency of the tunable laser. A typical transmission spectrum as a function of laser frequency is shown in Fig. 2(b) and the fitted Q is $2.3\times 10^6$, which is close to the value for passive tellurite glass microspheres presented previously [33]. Next, we used the 980 nm diode laser to control the temperature inside the microbubble while the 1550 nm WGM resonance frequency was recorded. The results are shown in Fig. 3(a). As we increased the pump laser output power from 0 to 1.37 mW, the temperature of the glass increased due to absorption, resulting in a red shift of the resonance frequency, f, by 16.7 GHz. A linear fit yields a power sensitivity of $-12$ GHz/mW. The total shift, $\Delta {f}$, of the frequency is given by (2)$$\Delta\textit{f}=\textit{f}(\frac{1}{n}\frac{\Delta\textit{n}}{\Delta\textit{T}}+\frac{1}{d}\frac{\Delta\textit{d}}{\Delta\textit{T}})\Delta\textit{T},$$ where n is the refractive index and d is the diameter of the tellurite microbubble. The thermal expansion coefficient for tellurite glass is $1.9\times 10^{-5}$ K$^{-1}$ [21], which is 38 times larger than the corresponding value for silica glass of $0.51\times 10^{-6}$ K$^{-1}$ [22]. The thermo-optic coefficient $\Delta$n/$\Delta$T is $1.08\times 10^{-5}$ K$^{-1}$ [34], and the frequency shift as a function of temperature, $\Delta$f/$\Delta$T, was calculated to be 4.9 GHz/K, which is about 4 times larger than the value of 1.28 GHz/K for a silica microsphere at room temperature [35].

Fig. 3. Resonance shift of the passive microbubble with varying (a) 980 nm laser power or (c) air pressure. The black arrow shows the direction of the resonance shift. Resonance frequency shift as a function of (b) 980 nm laser power and (d) air pressure. The red lines are linear fits to the experimental data.
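Plugging the quoted material constants into Eq. (2) approximately reproduces the thermal tuning rate. A minimal sketch; the refractive index n = 2 is taken from the elasticity parameters quoted below and is an assumption here:

```python
c0 = 2.998e8            # speed of light, m/s
f = c0 / 1550e-9        # probe frequency at 1550 nm, ~193 THz

n = 2.0                 # refractive index of tellurite glass (assumed, see Sec. 4 elasticity parameters)
dn_dT = 1.08e-5         # thermo-optic coefficient, 1/K [34]
alpha = 1.9e-5          # thermal expansion coefficient, 1/K [21]; (1/d)(dd/dT) = alpha

df_dT = f * (dn_dT / n + alpha)               # magnitude of the shift in Eq. (2)
print(f"|df/dT| = {df_dT / 1e9:.1f} GHz/K")   # ~4.7 GHz/K, close to the quoted 4.9 GHz/K
```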
According to Eq. (2), the frequency shift can also be affected by air pressure, which changes the diameter of the microbubble and the refractive index via stress, see Fig. 3(c). The resonance red shifted by 27.9 GHz as the air pressure inside the bubble was increased from 0 to 4 bar, yielding a pressure sensitivity of $-7.1$ GHz/bar. For a silica microbubble with a diameter of 141 $\mu$m and a thickness of 1.3 $\mu$m, the pressure sensitivity of the resonance is $-8.2$ GHz/bar [36], which is close to the value we have obtained for the tellurite glass microbubble with a diameter of 130 $\mu$m and a thickness around 800 nm herein. In order to characterize the mechanical properties of the passive tellurite glass microbubbles, the elasticity equations described in [31] were used to calculate the frequency shift of the resonances as a function of pressure. The required material parameters used were: shear modulus $G=27.5$ GPa, bulk modulus $K=40$ GPa, refractive index $n=2$, Young's modulus $E=67.2$ GPa, and elasto-optical constants $C_1=-1.8\times 10^{-12}$ m$^{2}$/N and $C_2=-2\times 10^{-12}$ m$^{2}$/N [37,38]. Wall thicknesses from 700 to 900 nm were used in the calculation and the results are shown in Fig. 4. Note that the theoretical value of the air pressure sensitivity was between $-6.4$ and $-8.3$ GHz/bar for the wall thicknesses used. Compared with silica, tellurite has higher shear and bulk moduli, but lower elasto-optical constants. As a comparison, the pressure sensitivity of a silica microbubble with the same diameter and wall thicknesses was calculated to be $-8.7$ to $-11.2$ GHz/bar.

Fig. 4. Simulated resonance frequency shift of a passive tellurite glass as a function of air pressure and wall thickness. The color bar represents the frequency shift in units of GHz.

5. Active Yb-Er co-doped tellurite glass microbubble

The Yb$^{3+}$-Er$^{3+}$ co-doped tellurite microbubbles (with diameters around 130 $\mu$m) were pumped using a 980 nm diode laser. When the pump laser output power was increased from 0 mW to 1.34 mW, a fluorescence spectrum was observed on the OSA. A free spectral range (FSR) of 3.1 nm was fitted to the fluorescence spectrum, close to the theoretical calculation result of 3.02 nm [39]. For this particular microbubble, we observe 8 modes in a single FSR, see Fig. 5(a). The main reason for this is that the shape of the fabricated tellurite microbubble is not perfectly spherical, hence some polar modes are excited within.

Fig. 5. (a) The measured output spectrum when the pump power is below threshold; the red arrows indicate that the FSR is about 3.1 nm, l and m are the polar and azimuthal mode numbers in the microbubble. (b) The electric field distribution of different polar modes, when the mode number $l=467$. (c) The simulation and measurement of $m$ order spacing as a function of polar mode number. The inset is a microscope image of the microbubble, where a and b are the major and minor axes.

The mode number at 1578.56 nm was calculated to be $l=m=467$. The number of field maxima in the polar direction is given by $l-m+1 = 1, 2, 3, \ldots$ The first three of these polar modes are also highlighted in Fig. 5(a) and have a measured mode spacing of 0.47 nm, 0.41 nm and 0.4 nm, respectively. The polar mode spacing is in close agreement with the calculated mode spacing determined from a numerical FEM (COMSOL) model, see Fig. 5(b) and (c). An image of the microbubble is given in the inset of Fig. 5(c).
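As a cross-check on the FSR, the standard estimate FSR ≈ λ²/(πnD) can be evaluated directly. A minimal sketch, taking the bulk index n ≈ 2 as a stand-in for the effective mode index (an assumption, since the calculation of [39] is not reproduced here):

```python
import math

lam = 1578.56e-9   # lasing wavelength, m
n = 2.0            # bulk refractive index used as the effective index (assumption)
D = 130e-6         # microbubble diameter, m

fsr = lam**2 / (math.pi * n * D)
print(f"FSR = {fsr * 1e9:.2f} nm")   # ~3.05 nm, consistent with the quoted 3.02 nm
```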
If we define $a$ as the major axis and $b$ as the minor axis, the eccentricity, $\varepsilon_i$, of the microbubble can be calculated from [40]

$$\varepsilon_i = \frac{a-b}{a}. \tag{3}$$

Additionally, the eccentricity, $\varepsilon_{\lambda}$, determined from the mode spacing can be calculated from [40]

$$\Delta f_\mathrm{ecc}=|f_{ml}-f_{m+1,l}|\approx f_{ml}\cdot\varepsilon_{\lambda}\,\frac{|m|-1/2}{l}, \tag{4}$$

where $f_{ml}$ is the frequency of the $ml$ mode. The $\varepsilon_{i}$ and $\varepsilon_{\lambda}$ of the bubble were measured and calculated as 0.14 and 0.12, respectively, which are in reasonable agreement. Even though the walls of these microbubbles appear to be of uniform thickness, it is not surprising that the bubble shape can deform to a degree that lifts the mode degeneracy.

Differently shaped, doped microbubbles were also tested by pumping with the 980 nm light. The resulting fluorescence spectra and images are shown in Fig. 6(a)-(d). As the eccentricity decreases, the number of higher order modes decreases, as expected. When the eccentricity is 10$\%$, only one higher order mode exists within a single FSR. As the eccentricity drops further, the higher order mode spacing decreases and, below 1$\%$, the mode degeneracy is nearly recovered, resulting in a single mode spectrum.

Fig. 6. (a) and (c) Microscope images of the microbubbles under test. $\varepsilon_{i}$ and $\varepsilon_{\lambda}$ are the measured and calculated eccentricities of the microbubbles. Fluorescence spectrum of the Yb$^{3+}$-Er$^{3+}$ doped tellurite glass microbubble with only two modes (b) or single mode (d) emission in an FSR, corresponding to the bubbles in (a) and (c), respectively. $d$ is the diameter of the microbubble. $\Delta\lambda_\textrm{ecc}$ is the wavelength spacing between the polar modes. (e) Laser power at 1578.56 nm as a function of pump power. The red line represents a linear fit to the experimental data, with a lasing threshold of 1.66 mW. The inset is the output laser spectrum when the pump power is 1.71 mW.

Most compound glasses have larger refractive indices than silica. For example, when a (typically silica) tapered fiber is used to pump a rare-earth doped compound microsphere with a high refractive index, many higher order WGMs are excited [41]. Because the gain of many of these modes exceeds the loss, the energy conversion efficiency is lowered and the resonator is more prone to output mode hopping. As the pump power was increased further to 1.63 mW, laser emission at a wavelength of 1578.56 nm was detected. The results are shown in Fig. 6(e), where the relationship between the 980 nm pump power and the detected power of a single WGM lasing mode is plotted. The lasing threshold is about 1.66 mW and single-mode lasing output was observed throughout the entire measurement cycle.

Wavelength tuning of the Yb$^{3+}$-Er$^{3+}$ doped tellurite glass microbubble with a diameter of 130 $\mu$m was also investigated by varying the temperature and the internal air pressure. The 980 nm diode laser was used as the pump to obtain a fluorescence spectrum and the pump power was increased from 0 to 35.6 mW. The resulting spectra are shown in Fig. 7(a). It should be noted that when the power launched into the tapered fiber was increased to 40 mW, a large loss was induced because the tellurite glass microbubble melted and fused to the tapered fiber [42].
The tuning results are shown in Fig. 7(b): a maximum wavelength shift, i.e. a tuning range, of 1.53 nm was obtained. Some jumps in the wavelength tuning were observed due to thermal effects around pump/cavity resonances [43,44]. Although the tuning is in general nonlinear, the overall tuning rate is $-5.3$ GHz/mW. The frequency shift of the WGM laser modes at air pressures from 0 to 0.6 bar was also investigated and the results are shown in Fig. 7(c) and (d). A tuning sensitivity of $-6.5$ GHz/bar was determined from a linear fit. The accuracy is limited by the 0.05 nm spectral resolution of the OSA.

Fig. 7. (a-b) The measured fluorescence emission spectrum obtained by increasing the pump power from 0 to 35.6 mW. The maximum wavelength tuning range is about 1.53 nm and the red arrow indicates the direction of wavelength tuning. (c) Lasing wavelength of the active tellurite glass microbubble at different pressures. (d) Frequency shift as a function of the internal air pressure. Red lines are linear fits.

6. Conclusion

In this work, we report on a method to fabricate microbubbles from a soft glass, namely tellurite, using a CO$_2$ laser. The method involves first melting the glass capillary to form a sphere and then blowing out the sphere to make a bubble. The fabricated bubbles have diameters and wall thicknesses of approximately 150 $\mu$m and 670 nm, respectively. Notably, the bubbles have quite a uniform wall thickness and spherical shape, which can result in a low number of higher order modes. The measured eccentricity can approach that previously observed in microspheres. The microbubbles were made from both passive and Yb:Er doped glass. The tellurite glass microbubbles were investigated experimentally and characterized in terms of their $Q$-factors, mode spectra, tuning rates, eccentricity, and laser output, and these results were compared against silica whispering gallery resonators where possible. In the case of the passive microbubbles, a high $Q$-factor of $2.3\times 10^6$ was achieved. A broadband 980 nm diode pump laser was used to change the temperature inside the microbubble and a large temperature sensitivity of 4.9 GHz/K was obtained. In addition, an air pressure sensitivity of 7.1 GHz/bar was measured by applying different internal air pressures. Even though tellurite glass is softer than silica, the expected increase in pressure tuning is negated by the lower elasto-optic coefficients. For the active microbubbles, a maximum wavelength tuning range of 1.53 nm was observed by increasing the intensity of the pump light. Separately, the sensitivity of the output laser frequency to air pressure was determined to be 6.5 GHz/bar, which is similar to the aerostatic tuning rate of the undoped microbubble. Additionally, the devices fabricated in this investigation can have very low eccentricity and hence fewer modes in a single FSR, resulting in a higher laser conversion efficiency. Note that, due to the toxicity of the material, they are less suited to biosensing than devices made from silica [45,46], and the single input of the fabricated tellurite microbubbles impedes fluid flow. Aside from these limitations, the devices reported in this article have potential impact for many applications, including low threshold, high conversion efficiency and tunable microcavity laser sources operating in the near- and mid-infrared range, integrated active photonic devices, and laser sensing using microbubble resonators based on compound glass.
In future work, coupling may be improved by using either a high-index tapered fiber or a prism coupler to improve the phase-matching condition.

Funding. National Key Program of the Natural Science Foundation of China (NSFC 61935006); National Natural Science Foundation of China (NSFC 61805074, NSFC 61905048); Fundamental Research Funds for the Central Universities (3072019CF2504, 3072019CF2506, 3072019CFQ2503, 3072020CFJ2507, 3072020CFQ2501, 3072020CFQ2502, 3072020CFQ2503, 3072020CFQ2504, GK2250260018, HEUCFG201841); The 111 project to Harbin Engineering University (B13015); Heilongjiang Provincial Natural Science Foundation of China (LH2019F034); Heilongjiang Touyan Innovation Team Program; Harbin Engineering University Scholarship Fund; Okinawa Institute of Science and Technology Graduate University.

Acknowledgments. The authors acknowledge the Engineering Support Section of OIST Graduate University.

References

1. M. Cai, O. Painter, K. J. Vahala, and P. C. Sercel, "Fiber-coupled microsphere laser," Opt. Lett. 25(19), 1430–1432 (2000).
2. J. P. Rezac and A. T. Rosenberger, "Locking a microsphere whispering-gallery mode to a laser," Opt. Express 8(11), 605–610 (2001).
3. I. M. White, N. M. Hanumegowda, H. Oveys, and X. Fan, "Tuning whispering gallery modes in optical microspheres with chemical etching," Opt. Express 13(26), 10754–10759 (2005).
4. Y. Ooka, Y. Yang, J. Ward, and S. Nic Chormaic, "Raman lasing in a hollow, bottle-like microresonator," Appl. Phys. Express 8(9), 092001 (2015).
5. P. Bianucci, "Optical microbottle resonators for sensing," Sensors 16(11), 1841 (2016).
6. S. Kasumie, Y. Yong, J. M. Ward, and S. Nic Chormaic, "Towards visible frequency comb generation using a hollow WGM resonator," Rev. Las. Eng. 46, 92–96 (2018).
7. S. Frustaci and F. Vollmer, "Whispering-gallery mode (WGM) sensors: review of established and WGM-based techniques to study protein conformational dynamics," Curr. Opin. Chem. Biol. 51, 66–73 (2019).
8. T. J. Kippenberg, R. Holzwarth, and S. A. Diddams, "Microresonator-based optical frequency combs," Science 332(6029), 555–559 (2011).
9. G. Bahl, K. H. Kim, W. Lee, J. Liu, X. Fan, and T. Carmon, "Brillouin cavity optomechanics with microfluidic devices," Nat. Commun. 4(1), 1994 (2013).
10. S. Parkins and T. Aoki, "Microtoroidal cavity QED with fiber overcoupling and strong atom-field coupling: A single-atom quantum switch for coherent light fields," Phys. Rev. A 90(5), 053822 (2014).
11. M. R. Foreman, J. D. Swaim, and F. Vollmer, "Whispering gallery mode sensors," Adv. Opt. Photonics 7(2), 168–240 (2015).
12. R. Madugani, Y. Yong, J. M. Ward, V. H. Le, and S. Nic Chormaic, "Optomechanical transduction and characterization of a silica microsphere pendulum via evanescent light," Appl. Phys. Lett. 106(24), 241101 (2015).
13. Y. Yang, S. Saurabh, J. M. Ward, and S. Nic Chormaic, "High-Q, ultrathin-walled microbubble resonator for aerostatic pressure sensing," Opt. Express 24(1), 294–299 (2016).
14. Y. Yang, X. Jiang, S. Kasumie, G. Zhao, L. Xu, J. M. Ward, L. Yang, and S. Nic Chormaic, "Four-wave mixing parametric oscillation and frequency comb generation at visible wavelengths in a silica microbubble resonator," Opt. Lett. 41(22), 5266–5269 (2016).
15. Y. Yang, F. Lei, S. Kasumie, L. Xu, J. M. Ward, L. Yang, and S. Nic Chormaic, "Tunable erbium-doped microbubble laser fabricated by sol-gel coating," Opt. Express 25(2), 1308–1313 (2017).
16. J. M. Ward, Y. Yang, F. Lei, X.-C. Yu, Y.-F. Xiao, and S. Nic Chormaic, "Nanoparticle sensing beyond evanescent field interaction with a quasi-droplet microcavity," Optica 5(6), 674–677 (2018).
17. L. T. Hogan, E. H. Horak, J. M. Ward, K. A. Knapper, S. Nic Chormaic, and R. H. Goldsmith, "Toward real-time monitoring and control of single nanoparticle properties with a microbubble resonator spectrometer," ACS Nano 13(11), 12743–12757 (2019).
18. M. Sumetsky, Y. Dulashko, and R. S. Windeler, "Optical microbubble resonator," Opt. Lett. 35(7), 898–900 (2010).
19. A. Watkins, J. Ward, Y. Wu, and S. Nic Chormaic, "Single-input spherical microbubble resonator," Opt. Lett. 36(11), 2113–2115 (2011).
20. P. Wang, J. Ward, Y. Yang, X. Feng, G. Brambilla, G. Farrell, and S. Nic Chormaic, "Lead-silicate glass optical microbubble resonator," Appl. Phys. Lett. 106(6), 061101 (2015).
21. S. Inoue, A. Nukui, K. Yamamoto, T. Yano, S. Shibata, and M. Yamane, "Correlation between specific heat and change of refractive index formed by laser spot heating of tellurite glass surfaces," J. Non-Cryst. Solids 324(1-2), 133–141 (2003).
22. G. K. White, "Thermal expansion of reference materials: copper, silica and silicon," J. Phys. D: Appl. Phys. 6(17), 2070–2078 (1973).
23. N. Riesen, S. A. V. A. François, and T. M. Monro, "Material candidates for optical frequency comb generation in microspheres," Opt. Express 23(11), 14784–14795 (2015).
24. N. Riesen, W. Q. Zhang, and T. M. Monro, "Dispersion analysis of whispering gallery mode microbubble resonators," Opt. Express 24(8), 8832–8847 (2016).
25. N. Riesen, W. Q. Zhang, and T. M. Monro, "Dispersion in silica microbubble resonators," Opt. Lett. 41(6), 1257–1260 (2016).
26. J. M. Ward, Y. Yang, and S. Nic Chormaic, "Glass-on-glass fabrication of bottle-shaped tunable microlasers and their applications," Sci. Rep. 6(1), 25152 (2016).
27. G. Righini and S. Soria, "Biosensing by WGM microspherical resonators," Sensors 16(6), 905 (2016).
28. W. Lee, Y. Sun, H. Li, K. Reddy, M. Sumetsky, and X. Fan, "A quasi-droplet optofluidic ring resonator laser using a micro-bubble," Appl. Phys. Lett. 99(9), 091102 (2011).
29. S. Tanabe, "Rare-earth-doped glasses for fiber amplifiers in broadband telecommunication," C. R. Chim. 5(12), 815–824 (2002).
30. J. Jiang, Y. Liu, K. Liu, S. Wang, Z. Ma, Y. Zhang, P. Niu, L. Shen, and T. Liu, "Wall-thickness-controlled microbubble fabrication for WGM-based application," Appl. Opt. 59(16), 5052–5057 (2020).
31. R. Henze, T. Seifert, J. Ward, and O. Benson, "Tuning whispering gallery modes using internal aerostatic pressure," Opt. Lett. 36(23), 4536–4538 (2011).
32. A. Cosci, F. Quercioli, D. Farnesi, S. Berneschi, A. Giannetti, F. Cosi, A. Barucci, G. N. Conti, G. Righini, and S. Pelli, "Confocal reflectance microscopy for determination of microbubble resonator thickness," Opt. Express 23(13), 16693–16701 (2015).
33. J. Yu, X. Wang, W. Li, M. Zhang, J. Zhang, K. Tian, Y. Du, S. Nic Chormaic, and P. Wang, "An experimental and theoretical investigation of a 2 μm wavelength low-threshold microsphere laser," J. Lightwave Technol. 38(7), 1880–1886 (2020).
34. L. R. P. Kassab, R. A. Kobayashi, M. J. V. Bell, A. P. Carmo, and T. Catunda, "Thermo-optical parameters of tellurite glasses doped with Yb3+," J. Phys. D: Appl. Phys. 40(13), 4073–4077 (2007).
35. Q. Ma, T. Rossmann, and Z. Guo, "Whispering-gallery mode silica microsensors for cryogenic to room temperature measurement," Meas. Sci. Technol. 21(2), 025310 (2010).
37. R. El-Mallawany, "Tellurite glasses Part 1. Elastic properties," Mater. Chem. Phys. 53(2), 93–120 (1998).
38. M. J. Weber, Handbook of Optical Materials (CRC Press, 2003).
39. M. Sumetsky, Y. Dulashko, and R. S. Windeler, "Super free spectral range tunable optical microbubble resonator," Opt. Lett. 35(11), 1866–1868 (2010).
40. C. Zhang, A. Cocking, E. Freeman, Z. Liu, and T. Srinivas, "On-chip glass microspherical shell whispering gallery mode resonators," Sci. Rep. 7(1), 14965 (2017).
41. J. Yu, E. Lewis, G. Farrell, and P. Wang, "Compound glass microsphere resonator devices," Micromachines 9(7), 356 (2018).
42. J. M. Ward, P. Féron, and S. Nic Chormaic, "A taper-fused microspherical laser source," IEEE Photonics Technol. Lett. 20(6), 392–394 (2008).
43. T. Carmon, L. Yang, and K. J. Vahala, "Dynamical thermal behavior and thermal self-stability of microcavities," Opt. Express 12(20), 4742–4750 (2004).
44. J. Ward and S. Nic Chormaic, "Thermo-optical tuning of whispering gallery modes in Er3+:Yb3+ co-doped phosphate glass microspheres," Appl. Phys. B 100(4), 847–850 (2010).
45. F. Vollmer and S. Arnold, "Whispering-gallery-mode biosensing: label-free detection down to single molecules," Nat. Methods 5(7), 591–596 (2008).
46. S. Berneschi, F. Baldini, A. Cosci, D. Farnesi, G. Nunzi Conti, S. Tombelli, C. Trono, S. Pelli, and A. Giannetti, "Fluorescence biosensing in selectively photo–activated microbubble resonators," Sens. Actuators, B 242, 1057–1064 (2017).
Nic Chormaic, "Optomechanical transduction and characterization of a silica microsphere pendulum via evanescent light," Appl. Phys. Lett. 106(24), 241101 (2015). Y. Yang, S. Saurabh, J. M. Ward, and S. Nic Chormaic, "High-Q, ultrathin-walled microbubble resonator for aerostatic pressure sensing," Opt. Express 24(1), 294–299 (2016). Y. Yang, X. Jiang, S. Kasumie, G. Zhao, L. Xu, J. M. Ward, L. Yang, and S. Nic Chormaic, "Four-wave mixing parametric oscillation and frequency comb generation at visible wavelengths in a silica microbubble resonator," Opt. Lett. 41(22), 5266–5269 (2016). Y. Yang, F. Lei, S. Kasumie, L. Xu, J. M. Ward, L. Yang, and S. Nic Chormaic, "Tunable erbium-doped microbubble laser fabricated by sol-gel coating," Opt. Express 25(2), 1308–1313 (2017). J. M. Ward, Y. Yang, F. Lei, X.-C. Yu, Y.-F. Xiao, and S. Nic Chormaic, "Nanoparticle sensing beyond evanescent field interaction with a quasi-droplet microcavity," Optica 5(6), 674–677 (2018). L. T. Hogan, E. H. Horak, J. M. Ward, K. A. Knapper, S. Nic Chormaic, and R. H. Goldsmith, "Toward real-time monitoring and control of single nanoparticle properties with a microbubble resonator spectrometer," ACS Nano 13(11), 12743–12757 (2019). M. Sumetsky, Y. Dulashko, and R. S. Windeler, "Optical microbubble resonator," Opt. Lett. 35(7), 898–900 (2010). A. Watkins, J. Ward, Y. Wu, and S. Nic Chormaic, "Single-input spherical microbubble resonator," Opt. Lett. 36(11), 2113–2115 (2011). P. Wang, J. Ward, Y. Yang, X. Feng, G. Brambilla, G. Farrell, and S. Nic Chormaic, "Lead-silicate glass optical microbubble resonator," Appl. Phys. Lett. 106(6), 061101 (2015). S. Inoue, A. Nukui, K. Yamamoto, T. Yano, S. Shibata, and M. Yamane, "Correlation between specific heat and change of refractive index formed by laser spot heating of tellurite glass surfaces," J. Non-Cryst. Solids 324(1-2), 133–141 (2003). G. K. White, "Thermal expansion of reference materials: copper, silica and silicon," J. Phys. D: Appl. Phys. 6(17), 2070–2078 (1973). N. Riesen, S. A. V. A. François, and T. M. Monro, "Material candidates for optical frequency comb generation in microspheres," Opt. Express 23(11), 14784–14795 (2015). N. Riesen, W. Q. Zhang, and T. M. Monro, "Dispersion analysis of whispering gallery mode microbubble resonators," Opt. Express 24(8), 8832–8847 (2016). N. Riesen, W. Q. Zhang, and T. M. Monro, "Dispersion in silica microbubble resonators," Opt. Lett. 41(6), 1257–1260 (2016). J. M. Ward, Y. Yang, and S. Nic Chormaic, "Glass-on-glass fabrication of bottle-shaped tunable microlasers and their applications," Sci. Rep. 6(1), 25152 (2016). G. Righini and S. Soria, "Biosensing by WGM microspherical resonators," Sensors 16(6), 905 (2016). W. Lee, Y. Sun, H. Li, K. Reddy, M. Sumetsky, and X. Fan, "A quasi-droplet optofluidic ring resonator laser using a micro-bubble," Appl. Phys. Lett. 99(9), 091102 (2011). S. Tanabe, "Rare-earth-doped glasses for fiber amplifiers in broadband telecommunication," C. R. Chim. 5(12), 815–824 (2002). J. Jiang, Y. Liu, K. Liu, S. Wang, Z. Ma, Y. Zhang, P. Niu, L. Shen, and T. Liu, "Wall-thickness-controlled microbubble fabrication for WGM-based application," Appl. Opt. 59(16), 5052–5057 (2020). R. Henze, T. Seifert, J. Ward, and O. Benson, "Tuning whispering gallery modes using internal aerostatic pressure," Opt. Lett. 36(23), 4536–4538 (2011). A. Cosci, F. Quercioli, D. Farnesi, S. Berneschi, A. Giannetti, F. Cosi, A. Barucci, G. N. Conti, G. Righini, and S. 
Pelli, "Confocal reflectance microscopy for determination of microbubble resonator thickness," Opt. Express 23(13), 16693–16701 (2015). J. Yu, X. Wang, W. Li, M. Zhang, J. Zhang, K. Tian, Y. Du, S. Nic Chormaic, and P. Wang, "An experimental and theoretical investigation of a 2 μm wavelength low-threshold microsphere laser," J. Lightwave Technol. 38(7), 1880–1886 (2020). L. R. P. Kassab, R. A. Kobayashi, M. J. V. Bell, A. P. Carmo, and T. Catunda, "Thermo-optical parameters of tellurite glasses doped with Yb3+," J. Phys. D: Appl. Phys. 40(13), 4073–4077 (2007). Q. Ma, T. Rossmann, and Z. Guo, "Whispering-gallery mode silica microsensors for cryogenic to room temperature measurement," Meas. Sci. Technol. 21(2), 025310 (2010). R. El-Mallawany, "Tellurite glasses Part 1. Elastic properties," Mater. Chem. Phys. 53(2), 93–120 (1998). M. J. Weber, Handbook of Optical Materials (CRC Press, 2003). M. Sumetsky, Y. Dulashko, and R. S. Windeler, "Super free spectral range tunable optical microbubble resonator," Opt. Lett. 35(11), 1866–1868 (2010). C. Zhang, A. Cocking, E. Freeman, Z. Liu, and T. Srinivas, "On-chip glass microspherical shell whispering gallery mode resonators," Sci. Rep. 7(1), 14965 (2017). J. Yu, E. Lewis, G. Farrell, and P. Wang, "Compound glass microsphere resonator devices," Micromachines 9(7), 356 (2018). J. M. Ward, P. Féron, and S. Nic Chormaic, "A taper-fused microspherical laser source," IEEE Photonics Technol. Lett. 20(6), 392–394 (2008). T. Carmon, L. Yang, and K. J. Vahala, "Dynamical thermal behavior and thermal self-stability of microcavities," Opt. Express 12(20), 4742–4750 (2004). J. Ward and S. Nic Chormaic, "Thermo-optical tuning of whispering gallery modes in Er3+:Yb3+ co-doped phosphate glass microspheres," Appl. Phys. B 100(4), 847–850 (2010). F. Vollmer and S. Arnold, "Whispering-gallery-mode biosensing: label-free detection down to single molecules," Nat. Methods 5(7), 591–596 (2008). S. Berneschi, F. Baldini, A. Cosci, D. Farnesi, G. Nunzi Conti, S. Tombelli, C. Trono, S. Pelli, and A. Giannetti, "Fluorescence biosensing in selectively photo–activated microbubble resonators," Sens. Actuators, B 242, 1057–1064 (2017). A. François, S. A. V. Aoki, T. Arnold, S. Bahl, G. Baldini, F. Barucci, A. Bell, M. J. V. Benson, O. Berneschi, S. Bianucci, P. Brambilla, G. Cai, M. Carmo, A. P. Carmon, T. Catunda, T. Cocking, A. Conti, G. N. Cosci, A. Cosi, F. Diddams, S. A. Du, Y. Dulashko, Y. El-Mallawany, R. Fan, X. Farnesi, D. Farrell, G. Feng, X. Féron, P. Foreman, M. R. Freeman, E. Frustaci, S. Giannetti, A. Goldsmith, R. H. Guo, Z. Hanumegowda, N. M. Henze, R. Hogan, L. T. Holzwarth, R. Horak, E. H. Inoue, S. Jiang, J. Jiang, X. Kassab, L. R. P. Kasumie, S. Kim, K. H. Kippenberg, T. J. Knapper, K. A. Kobayashi, R. A. Le, V. H. Lee, W. Lei, F. Lewis, E. Li, H. Li, W. Liu, J. Liu, K. Liu, T. Liu, Y. Liu, Z. Ma, Q. Ma, Z. Madugani, R. Monro, T. M. Nic Chormaic, S. Niu, P. Nukui, A. Nunzi Conti, G. Ooka, Y. Oveys, H. Painter, O. Parkins, S. Pelli, S. Quercioli, F. Reddy, K. Rezac, J. P. Riesen, N. Righini, G. Rosenberger, A. T. Rossmann, T. Saurabh, S. Seifert, T. Sercel, P. C. Shen, L. Shibata, S. Soria, S. Srinivas, T. Sumetsky, M. Sun, Y. Swaim, J. D. Tanabe, S. Tian, K. Tombelli, S. Trono, C. Vahala, K. J. Vollmer, F. Wang, P. Wang, X. Ward, J. Ward, J. M. Watkins, A. Weber, M. J. White, G. K. White, I. M. Windeler, R. S. Wu, Y. Xiao, Y.-F. Xu, L. Yamamoto, K. Yamane, M. Yang, L. Yang, Y. Yano, T. Yong, Y. Yu, J. Yu, X.-C. Zhang, C. Zhang, J. Zhang, W. Q. Zhao, G. 
2 features and 2 principal components

What's the difference between 2 features and 2 principal components? I know what PCA is; I just have the following problem: if my data has 2 features, PCA will produce 2 components. So why does the predictive power of 2 features equal the predictive power of 2 principal components?

machine-learning pca dimensionality-reduction

Generally for linear models, changing the basis doesn't matter: for linear models, representing your features in your original two-dimensional basis or a new two-dimensional basis from PCA won't change the predictive power. For the purposes of this question, there's nothing special about the PCA basis.

Let $\mathbf{x} = \left[ \begin{array}{c} x_1 \\ x_2 \end{array} \right]$ denote an observation from a two-dimensional feature space. We can construct a new basis for this two-dimensional space using any set of two linearly independent vectors. These vectors could be from Principal Component Analysis (PCA), but they could also come from elsewhere (as long as they're linearly independent). Let $U$ be a matrix whose columns are the vectors of the new basis. The coordinates of our observation $\mathbf{x}$ in terms of the new basis are given by multiplying by the inverse of the matrix $U$:
$$\mathbf{z} = U^{-1} \mathbf{x}$$
In many machine learning contexts, you use linear transformations of your observations to do stuff. You use some $A\mathbf{x}$ where matrix $A$ represents some linear map. Clearly:
$$ A \mathbf{x} = AUU^{-1}\mathbf{x} = AU \mathbf{z}$$
That is, it's entirely equivalent to use:
- linear transformation $A$ and vector $\mathbf{x}$ written in the original basis
- linear transformation $AU$ and vector $\mathbf{z}$ written in the new basis

You can see, though, that this wouldn't hold for non-linear transformations.

Simple example: OLS regression. The model is:
$$y_i = \mathbf{x}_i \cdot \boldsymbol{\beta} + \epsilon_i$$
The OLS estimate is:
$$ \mathbf{b}_x = (X'X)^{-1}(X'\mathbf{y})$$
With the data in the new basis, $Z = X{U'}^{-1}$:
$$ \begin{align*} \mathbf{b}_z &= (Z'Z)^{-1}(Z'\mathbf{y}) \\ &= \left( U^{-1}X'X {U'}^{-1} \right) ^{-1} U^{-1}X' \mathbf{y} \\ &= U' \left( X'X \right) ^{-1} U U^{-1}X' \mathbf{y} \\ &= U' \left( X'X \right) ^{-1} X' \mathbf{y} \\ &= U' \mathbf{b}_x \end{align*} $$
Keep going and the residuals will be the same, etc. You have the same predictive power, and the estimated coefficients are related by the change in basis. – Matthew Gunn

In your question, I believe "features" refers to the dimensions of the space. For example, measurements for population variation might be taken on two features: hair color and shoe size. Principal component analysis is often associated with dimension reduction, but it need not be. In its most general formulation, it is simply a rotation of the space to the coordinate system in which the coordinates along the axes are mutually uncorrelated. In such a coordinate system, the components are linear combinations of the features, and in some cases, they may be the features themselves.
The question is really asking why using all of the features and using all of the principal components provides the same predictive power; I'll let you work that part out yourself. – Billyziege

Without any context: if you have two non-collinear variables, you have two features and two PCs. The PCs reflect the structure of the data. Multiple collinear variables lead to fewer PCs than variables; the PCs simply provide an orthonormal basis for the matrix of features. What is perhaps interesting is sparse representation. LASSO is a supervised learning approach where a small number of features can be selected for their predictive value. Alternatively, choosing a small number of PCs and their loadings provides a sparse representation of a matrix of features in an unsupervised approach. In that sense, the implications would be vastly different. – AdamO
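To make the basis-change argument above concrete, here is a minimal sketch (ours, not from the original thread) showing that ordinary least squares fitted on the two raw features and on the two PCA scores produces identical predictions. The data and the model are made up purely for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

# With two features, regressing on the raw features or on the two PCA scores
# gives identical fitted values, because keeping all components makes PCA
# an invertible (affine) change of basis.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 1.5 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=200)

Z = PCA(n_components=2).fit_transform(X)   # rotated coordinates, all components kept

pred_x = LinearRegression().fit(X, y).predict(X)
pred_z = LinearRegression().fit(Z, y).predict(Z)

print(np.allclose(pred_x, pred_z))   # True: same predictions, same predictive power
```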
More photos from this reportage are featured in Quartz's new book The Objects that Power the Global Economy. You may not have seen these objects before, but they've already changed the way you live. Each chapter examines an object that is driving radical change in the global economy. This is from the chapter on the drug modafinil, which explores modifying the mind for a more productive life.

Looking at the prices, the overwhelming expense is for modafinil. It's a powerful stimulant - possibly the single most effective ingredient in the list - but dang expensive. Worse, there's anecdotal evidence that one can develop tolerance to modafinil, so we might be wasting a great deal of money on it. (And for me, modafinil isn't even very useful in the daytime: I can't even notice it.) If we drop it, the cost drops by a full $800 from $1761 to $961 (almost halving) and to $0.96 per day. A remarkable difference, and if one were genetically insensitive to modafinil, one would definitely want to remove it.

Not included in the list below are prescription psychostimulants such as Adderall and Ritalin. Non-medical, illicit use of these drugs for the purpose of cognitive enhancement in healthy individuals comes with a high cost, including addiction and other adverse effects. Although these drugs are prescribed for those with attention deficit hyperactivity disorder (ADHD) to help with focus, attention and other cognitive functions, they have been shown to in fact impair these same functions when used for non-medical purposes. More alarming, when taken in high doses, they have the potential to induce psychosis.

This calculation - reaping only 7/9 of the naive expectation - gives one pause. How serious is the sleep rebound? In another article, I point to a mouse study showing that sleep deficits can take 28 days to repay. What if the gain from modafinil is entirely wiped out by repayment and all it did was defer sleep? Would that render modafinil a waste of money? Perhaps. Thinking on it, I believe deferring sleep is of some value, but I cannot decide whether it is a net profit.

Several studies have assessed the effect of MPH and d-AMP on tasks tapping various other aspects of spatial working memory. Three used the spatial working memory task from the CANTAB battery of neuropsychological tests (Sahakian & Owen, 1992). In this task, subjects search for a target at different locations on a screen. Subjects are told that locations containing a target in previous trials will not contain a target in future trials. Efficient performance therefore requires remembering and avoiding these locations in addition to remembering and avoiding locations already searched within a trial. Mehta et al. (2000) found evidence of greater accuracy with MPH, and Elliott et al. (1997) found a trend for the same. In Mehta et al.'s study, this effect depended on subjects' working memory ability: the lower a subject's score on placebo, the greater the improvement on MPH. In Elliott et al.'s study, MPH enhanced performance for the group of subjects who received the placebo first and made little difference for the other group. The reason for this difference is unclear, but as mentioned above, this may reflect ability differences between the groups. More recently, Clatworthy et al. (2009) undertook a positron emission tomography (PET) study of MPH effects on two tasks, one of which was the CANTAB spatial working memory task.
They failed to find consistent effects of MPH on working memory performance but did find a systematic relation between the performance effect of the drug in each individual and its effect on individuals' dopamine activity in the ventral striatum.

Fatty acids are well-studied natural smart drugs that support many cognitive abilities. They play an essential role in providing structural support to cell membranes. Fatty acids also contribute to the growth and repair of neurons. Both functions are crucial for maintaining peak mental acuity as you age. Among the most prestigious fatty acids known to support cognitive health are:

He used to get his edge from Adderall, but after moving from New Jersey to San Francisco, he says, he couldn't find a doctor who would write him a prescription. Driven to the Internet, he discovered a world of cognition-enhancing drugs known as nootropics — some prescription, some over-the-counter, others available on a worldwide gray market of private sellers — said to improve memory, attention, creativity and motivation.

"A system that will monitor their behavior and send signals out of their body and notify their doctor? You would think that, whether in psychiatry or general medicine, drugs for almost any other condition would be a better place to start than a drug for schizophrenia," says Paul Appelbaum, director of Columbia University's psychiatry department, in an interview with the New York Times.

Many of these supplements include exotic-sounding ingredients. Ginseng root and an herb called bacopa are two that have shown some promising memory and attention benefits, says Dr. Guillaume Fond, a psychiatrist with France's Aix-Marseille University Medical School who has studied smart drugs and cognitive enhancement. "However, data are still lacking to definitely confirm their efficacy," he adds.

The concept of neuroenhancement and the use of substances to improve cognitive functioning in healthy individuals is certainly not a new one. In fact, one of the first cognitive enhancement drugs, Piracetam, was developed over fifty years ago by psychologist and chemist C. C. Giurgea. Although he did not know the exact mechanism, Giurgea believed the drug boosted brain power and so began his exploration into "smart pills", or nootropics, a term he coined from the Greek nous, meaning "mind," and trepein, meaning "to bend."

Low level laser therapy (LLLT) is a curious treatment based on the application of a few minutes of weak light in specific near-infrared wavelengths (the name is a bit of a misnomer as LEDs seem to be employed more these days, due to the laser aspect being unnecessary and LEDs much cheaper). Unlike most kinds of light therapy, it doesn't seem to have anything to do with circadian rhythms or zeitgebers. Proponents claim efficacy in treating physical injuries, back pain, and numerous other ailments, recently extending it to case studies of mental issues like brain fog. (It's applied to injured parts; for the brain, it's typically applied to points on the skull like F3 or F4.) And LLLT is, naturally, completely safe without any side effects or risk of injury.

"Piracetam is not a vitamin, mineral, amino acid, herb or other botanical, or dietary substance for use by man to supplement the diet by increasing the total dietary intake. Further, piracetam is not a concentrate, metabolite, constituent, extract or combination of any such dietary ingredient. [...] Accordingly, these products are drugs, under section 201(g)(1)(C) of the Act, 21 U.S.C.
§ 321(g)(1)(C), because they are not foods and they are intended to affect the structure or any function of the body. Moreover, these products are new drugs as defined by section 201(p) of the Act, 21 U.S.C. § 321(p), because they are not generally recognized as safe and effective for use under the conditions prescribed, recommended, or suggested in their labeling."[33]
Assessing uncertainty of climate change impacts on long-term hydropower generation using the CMIP5 ensemble—the case of Ecuador

Pablo E. Carvajal1, Gabrial Anandarajah1, Yacob Mulugetta2 & Olivier Dessens1
Climatic Change volume 144, pages 611–624 (2017)

This study presents a method to assess the sensitivity of hydropower generation to uncertain water resource availability driven by future climate change. A hydrology-electricity modelling framework was developed and applied to six rivers where 10 hydropower stations operate, which together represent over 85% of Ecuador's installed hydropower capacity. The modelling framework was then forced with bias-corrected output from 40 individual global circulation model experiments from the Coupled Model Intercomparison Project 5 for the Representative Concentration Pathway 4.5 scenario. Impacts of changing climate on the hydropower resource were quantified for 2071–2100 relative to the baseline period 1971–2000. Results show a wide annual average inflow range from +277% to −85% when individual climate experiments are assessed. The analysis also shows that hydropower generation in Ecuador is highly uncertain and sensitive to climate change, since variations in inflow to hydropower stations would directly result in changes in the expected hydropower potential. Annual hydroelectric power production in Ecuador is found to vary between −55 and +39% of the mean historical output when considering future inflow patterns to hydroelectric reservoirs covering one standard deviation of the CMIP5 RCP4.5 climate ensemble.

Hydropower dominates the electricity system in South America, providing 63% of total electricity generation (van Vliet et al. 2016). This trend is expected to continue into the future. In the Tropical Andes alone (the northwest region of South America: Colombia, Ecuador, Peru and Bolivia), there are plans for 151 new dams greater than 2 MW over the next 20 years, more than a 300% increase (Finer and Jenkins 2012). Ecuador in particular will have a power generation matrix with an expected 90% share of hydropower by 2017, with the addition of approximately 2800 MW of new hydropower capacity (ARCONEL 2015). However, future hydropower electricity generation is highly uncertain given variable inter-annual runoff patterns and also due to the possible impacts of climate change, given the considerable discrepancies around the likely change in the magnitude and direction of precipitation in the future (Cisneros et al. 2014). For the Tropical Andes, global circulation models (GCM) run for the Coupled Model Intercomparison Project 5 (CMIP5) forced under the Representative Concentration Pathway (RCP) 4.5 project a large variation in precipitation change. The 25th percentile of models projects a decline in precipitation approaching −30%, while the 75th percentile suggests an increase of up to 20% (April to September) (van Oldenborgh et al. 2013). This spread of results is indicative of the variation in the representation of precipitation among GCMs, demonstrating their limited ability to consistently represent the behaviour of precipitation in this region.

A number of previous studies quantify impacts of climate change on energy systems at the national (CEPAL 2012; Liu et al. 2016), regional (Schaeffer et al. 2013b; DOE 2015) and global level (van Vliet et al. 2016).
The magnitude of climate change impacts on hydropower generation is usually assessed by running a baseline calibrated hydrological model driven by various climate projections as input forcing data, followed by an electricity generation model (Hay et al. 2002). To assess uncertainty related to climate change, studies use a combination of emission or concentration scenarios to derive a range of probable results but use only data from a limited number of GCMs; often only the mean value of GCM results is used (Buytaert et al. 2010). For instance, the studies by CEPAL (2012) and De Lucena et al. (2010) assessed the vulnerability of hydropower to future climate projections for IPCC's SRES A2 and B2 scenarios and one GCM (HadCM3) for Chile and Brazil, respectively. Escobar et al. (2011) assessed hydropower generation in Latin America and the Caribbean drawing on projections of average temperature and rainfall throughout the current century for the A2 and B2 emission scenarios with the ensemble mean value of GCM results. In comparison, Grijsen (2014) assessed five hydropower river basins in Cameroon using one emission scenario, A1B, but 15 GCMs. Shrestha et al. (2016) consider the more recent RCP4.5 and RCP8.5 with three GCMs (MIROC-ESM, MRI-CGCM3, and MPI-ESM-M) to assess risk due to climate change for a hydropower project in Nepal. These studies, among others (e.g. Hamlet et al. 2010; Lind et al. 2013; Madani and Lund 2010), highlight the significant sensitivity that hydropower can have to precipitation changes, and that the main source of uncertainty in regional climate scenarios is associated with the projections of different GCMs. This underlines the importance of using several GCMs to assess uncertainty and explains the growing interest in using large ensembles of GCMs to improve the reliability of future projections.

The objective of this paper is to assess the impacts of climate change on hydrological patterns, and therefore on hydropower generation, when using a large ensemble of projections. For this purpose, a hydrologic-electricity model was developed and applied to six rivers in Ecuador where 10 hydropower stations operate, representing over 85% of the country's hydropower installed capacity. The model was calibrated for a 1971–2000 baseline period and subsequently used to assess changes in inflow by forcing it with bias-corrected outputs from 40 CMIP5 GCMs for the period 2071–2100 under the RCP4.5 scenario. The mean and standard deviation of inflow obtained with the CMIP5 ensemble were later used to simulate changes in the capacity factors and electrical output of hydropower stations.

There are a number of novelties in this paper worth noting. Firstly, this study employs a large ensemble of GCMs to cover a wide range of future climate conditions. Secondly, the study uses a simple statistical approach that is not data intensive and can be replicated in data-scarce regions. Finally, the uncertainty of the impacts of climate change upon the Tropical Andes has not been systematically investigated, despite the importance of hydropower deployment for the region (Finer and Jenkins 2012). The latest AR5 report of the IPCC insists on the importance of considering uncertainties surrounding climate in supporting national adaptation and mitigation strategies, and recognises the lack of consistent tools to deal with these uncertainties (Cisneros et al. 2014; IPCC 2014).
To undertake this analysis, we obtain inflow time series for 10 hydropower stations in Ecuador using historic inflow values and gridded projected climate data to force a conceptual hydrological model. Next, we compile data describing the technical specifications of the selected hydropower plants and develop a model to simulate monthly hydropower electricity production. For hydropower stations that have storage capacity (a reservoir), we assign a bespoke operating policy according to historic values that provides a realistic basis for water release decisions affecting hydropower production. These steps are detailed in the following subsections.

Study area and data

Ecuador is located in the northwest part of South America in the region known as the Tropical Andes (see Fig. 1). The Andes define the hydrographical system of the country and its river basins: the Pacific watershed, which discharges into the Pacific Ocean, and the Amazon watershed, which consists of main tributaries to the Amazon river. Overall, spatial precipitation patterns are highly variable, with annual precipitation ranging from over 3000 mm on the Amazonian slopes to less than 500 mm in the southwest part of the country (Buytaert et al. 2011), while seasonal variability ranges from 350 mm/month in the rainy season to lower than 100 mm/month in the dry season (Espinoza Villar et al. 2009). In this study, six large rivers that are relevant for hydropower generation are represented. Three of these rivers belong to the Pacific watershed: Toachi, Daule and Jubones, while three belong to the Amazon watershed: Paute, Agoyán and Coca (see Fig. 1).

Fig. 1. Ecuador's six major river basins, hydropower stations and gauging stations used in this study.

Hydropower installed capacity in Ecuador reached 4382 MW in November 2016, which represents 58% of the total installed capacity (7587 MW), the remainder provided by gas and fuel-based thermoelectric plants (40%) and non-conventional renewables (2%; solar, wind, biomass and small hydro) (ARCONEL 2016). There is one national interconnected electricity grid system, which transmits centralised power generation to consumption centres in the country. Hydropower's share in the power generation matrix is currently 65% (15,264 GWh/year) and is expected to reach over 90% by 2017 (> 21,000 GWh/year) with new large-scale hydropower projects led by the government (Zambrano-Barragen 2012; ARCONEL 2015). Coca Codo Sinclair (1500 MW) is the largest among these new projects, a run-of-river facility located on the Coca River (Napo basin), which was recently inaugurated in late 2016 (El Comercio 2016). The latest National Energy Agenda 2016–2040 states that there is still large untapped hydropower potential, estimated at 22 GW (MICSE 2016), thus supporting the country's long-term objective of continuing to harness this resource and consolidate a power matrix based primarily on hydropower.

The study assesses Ecuador's 10 largest hydropower stations (7 in operation and 3 under construction), which together will represent over 85% of the country's installed hydropower capacity and cover different types of hydropower configuration systems, namely run-of-river/dam and single/cascading systems. Technical characteristics of these facilities, including head, usable storage, design flow rate, efficiency and observed mean monthly flow (1971–2000) and electricity production, were provided by the Ecuadorian electricity grid operator (CENACE).
Streamflow-gauging stations are used to characterise each catchment basin, which is a necessary simplification due to the lack of historic inflow datasets covering larger areas of the catchments. Details of each hydropower station are summarised in the Supplementary Material.

Observed historic monthly mean temperature, precipitation and potential evapotranspiration (PET) for Ecuador for a 30-year period (1971–2000) were extracted from the dataset of the University of East Anglia Climate Research Unit, CRU TS v.3.24 (Harris et al. 2014), release of October 2016. The gridded data set has a resolution of 0.5° × 0.5°, and the studied river basins lie within 65 grid cells.

Regarding data for climate change projections, GCM results under the RCP2.6, RCP4.5 and RCP8.5 scenarios from the CMIP5 were downloaded from the Royal Netherlands Meteorological Institute (KNMI) Climate Explorer database (see the Supplementary Material for a list of models used) (Trouet and Van Oldenborgh 2013). Monthly precipitation and PET data for each GCM were obtained for the six basins using a bilinear interpolation approach and for two 30-year periods: the baseline 1971–2000 and the future 2071–2100, whose values were compared against the baseline by simple scaling, i.e. the delta factor approach (Fowler et al. 2007). Data were bias-corrected against the observed baseline CRU precipitation and PET values using a multiplier on a monthly basis (Babur et al. 2016).

This paper uses only the RCP4.5 scenario for the uncertainty analysis. The reasons for presenting results only for the RCP4.5 scenario are that (i) it gathers the largest number of GCMs, i.e. 41, compared to 26 for the RCP2.6 and 30 for the RCP8.5, and (ii) results showed that inter-RCP scenario differences were smaller than inter-GCM differences; the inter-GCM uncertainty range was also found to have a similar magnitude for all three concentration scenarios. In addition, the RCP4.5 scenario approximately conforms with a medium condition of future climate impact (Thomson et al. 2011) and also represents warming of about 2 °C above pre-industrial values by 2100, the central aim of the United Nations 2015 Paris Agreement.
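As an illustration of the monthly delta-factor and multiplicative bias-correction step just described, a minimal sketch is given below. The 12-month series are invented placeholders, not the CRU or CMIP5 data themselves.

```python
import numpy as np

# Illustrative 12-month precipitation climatologies (mm/month).
obs_base = np.array([120, 140, 180, 200, 170, 90, 60, 50, 70, 100, 110, 115.0])  # observed 1971-2000
gcm_base = np.array([100, 130, 160, 210, 180, 95, 70, 55, 65, 90, 105, 110.0])   # GCM 1971-2000
gcm_fut  = np.array([110, 150, 190, 230, 170, 80, 55, 45, 60, 95, 115, 120.0])   # GCM 2071-2100

# Multiplicative monthly bias correction: scale future GCM values by the
# ratio of observed to simulated baseline climatology.
bias_factor = obs_base / gcm_base
pr_future = gcm_fut * bias_factor

# Delta factor relative to the observed baseline, used later to perturb inflow.
delta = pr_future / obs_base
print(np.round(delta, 2))
```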
Hydrological model

For the hydrological component, a conceptual hydrological model consisting of a two-step approach, similar to De Lucena et al. (2009), was selected to assess the sensitivity of runoff to climate change precipitation projections. The argument for this type of model over more complex physical models, such as distributed models (Vetter et al. 2015), is that the inputs of the latter can be difficult to acquire in developing countries, especially in a spatially continuous manner, hindering the calibration and validation process (Babur et al. 2016). Intercomparison between catchment basins is also made possible with conceptual models, since historical precipitation and temperature values are more likely to exist for a larger number of basins (De Lucena et al. 2009).

The first step uses 30 years of observed monthly time series of precipitation and inflow to assess the relationship between rainfall and inflow at each hydropower station through a logarithmic linear regression model (Jones et al. 2006), represented by the following equation:

$$ \ln\left(Q_t\right)=\alpha+\beta_1\ln\left(Pr_{t-m}\right)+\beta_2 d_2\ln\left(Pr_{t-m}\right)+\varepsilon $$

where $Q_t$ and $Pr_{t-m}$ are the average observed monthly inflow for month $t$ and precipitation for month $t-m$ (1971–2000); $\alpha$, $\beta_1$, $\beta_2$ are the estimated regression coefficients; $d_2$ is a categorical variable; and $\varepsilon$ is the error term. The relevant regression coefficients are $\beta_1$ and $\beta_2$, which represent the sensitivity or 'elasticity' of average monthly inflow with respect to average precipitation ($E_{Q-Pr}$). When a month is in the $d_2$ period, the elasticity $E_{Q-Pr}$ is equal to $(\beta_1 + \beta_2)$; otherwise, it is equal to $\beta_1$.

In the first step, the seasonal patterns are captured statistically but evapotranspiration and storage effects are omitted, so an additional step is included to correct for total annual discharge. The second step therefore includes the conceptual equation of the water balance: $WB = Pr - PET + \Delta S$, where $WB$ is the water balance, $Pr$ is precipitation, $PET$ is potential evapotranspiration and $\Delta S$ is the storage variation in soil and underground aquifers, which over the seasonal cycle can be considered negligible since the dry period presents negative values and the wet period positive values of similar magnitude (Arnold et al. 1998). Future runoff is simulated with the following equation:

$$ Q_t^{future}=Q_t^{baseline}\cdot\left[1+E_{Q-Pr}\cdot\left(\Delta Pr_t^{future,baseline}-1\right)\right]\cdot\phi_{WB,t} $$

where $Q_t^{future}$ is the projected inflow for month $t$ for a specific GCM for the future period 2071–2100; $Q_t^{baseline}$ is the observed inflow for the baseline period 1971–2000; $E_{Q-Pr}$ is the inflow-precipitation elasticity; $\Delta Pr_t^{future/baseline}$ is the precipitation delta factor between the projected future GCM climate and the baseline; and $\phi_{WB,t}$ is the water balance correction factor for a specific month.

Hydrological model performance was validated with a ratings approach similar to that adopted by Ho et al. (2015), which calculates three statistical measures: (i) Pearson's correlation coefficient (r), (ii) the Nash-Sutcliffe Efficiency (NSE) coefficient and (iii) the percentage deviation (Dv) of simulated mean flow from observed mean flow.
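A minimal sketch of the two-step procedure just described is given below: it estimates the inflow-precipitation elasticity by log-linear regression and then perturbs the baseline inflow with a precipitation delta factor. The synthetic data, the single-season simplification (no d2 dummy) and the unit water-balance factor are all assumptions for illustration, not the paper's calibrated setup.

```python
import numpy as np

rng = np.random.default_rng(1)
months = 360  # 30 years of monthly data
pr = rng.gamma(4.0, 40.0, size=months)              # monthly precipitation (mm)
q = 5.0 * pr**0.8 * rng.lognormal(0, 0.1, months)   # monthly inflow, true elasticity 0.8

# Step 1: ln(Q) = a + b ln(Pr); the slope b is the elasticity E_{Q-Pr}.
b, a = np.polyfit(np.log(pr), np.log(q), 1)
print(f"estimated elasticity = {b:.2f}")

# Step 2: perturb the baseline inflow with a precipitation delta factor and a
# water-balance correction factor (set to 1 here for simplicity).
delta_pr = 1.15   # e.g. +15% precipitation from a bias-corrected GCM
phi_wb = 1.0      # water-balance correction factor
q_future = q * (1 + b * (delta_pr - 1)) * phi_wb
print(f"mean inflow change: {q_future.mean() / q.mean() - 1:+.1%}")
```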
Hydropower electricity model

Once scenarios of runoff were obtained, the variation of hydropower output was quantified considering the site-specific potential energy of the available runoff (head) and the facility-level configuration of the hydropower stations. To simulate the behaviour of the hydropower dam operators (Yi Ng et al. 2017), we model the water available for release for hydropower generation using the reservoir specifications and the inflow time series generated by the hydrological model. Releases are specified for each month of the year, as well as reservoir level and spillage. Storage dynamics are simulated using the laws of mass balance:

$$ \begin{array}{c} S_t=S_{t-1}+Q_t+V_t^{\ast}-V_t \\ 0\le S_t\le S_{usable} \\ V_{min}\le V_t\le V_{max} \end{array} $$

where $S_t$ is the reservoir storage in month $t$, $Q_t$ is the current period reservoir inflow, $V_t^{\ast}$ is the water release or spillage from an upstream hydropower dam (if any) and $V_t$ is the water release volume to the turbines. $S_{usable}$ is the maximum usable storage of the reservoir, $V_{max}$ is the maximum volume of water that can be released through the turbines for the hydropower station to work at maximum capacity in each period, and $V_{min}$ is the minimum release that must satisfy turbine operation, downstream hydropower station requirements and environmental flows. Monthly hydropower production $E_t$ (MWh) and the capacity factor $CF_t$ are simulated as follows:

$$ \begin{array}{c} E_t=\eta\cdot\rho\cdot g\cdot H\cdot V_t \\ CF_t=E_t/\left(P\cdot T\right) \end{array} $$

where $\eta$ is plant efficiency, $\rho$ is the water density, $g$ is gravitational acceleration, $H$ is the hydraulic head and $V_t$ is the inflow into the turbines. Efficiency $\eta$ accounts for turbine efficiency and friction losses, and is used as a calibration parameter. The hydraulic head considers the penstock vertical head plus the average dam height. In the capacity factor equation, $P$ is the nominal capacity of the hydropower station and $T$ is the number of hours in a month. We choose to assess the monthly capacity factor since hydroclimatic conditions are generally integrated into energy system models by exogenously defining the capacity factor of hydropower generation technologies to characterise their availability according to inter-annual runoff seasonality (Gargiulo 2009; Kannan and Turton 2011; IFE 2013). Uncertainty in monthly hydropower production is inferred from the frequency distribution of the inflows obtained with the GCM ensemble results and quantified by the magnitude of the standard deviation. The detailed mathematical formulation and validation results of the hydrological and hydropower models are provided in the Supplementary Material.
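The reservoir and generation equations above can be prototyped in a few lines. The sketch below uses a simple greedy release policy and purely illustrative plant parameters; it is not the dispatch model or the parameterisation of any of the stations studied here.

```python
# Monthly reservoir mass balance plus E_t = η·ρ·g·H·V_t and CF_t = E_t/(P·T).
RHO, G = 1000.0, 9.81         # water density (kg/m3), gravity (m/s2)
ETA, HEAD = 0.90, 150.0       # plant efficiency, hydraulic head (m)
P_NOM = 200.0                 # nominal capacity (MW)
HOURS = 730.0                 # hours in an average month
S_MAX = 4.0e8                 # usable reservoir storage (m3)

E_PER_M3 = ETA * RHO * G * HEAD                    # joules generated per m3 released
V_MAX = P_NOM * 1e6 * HOURS * 3600 / E_PER_M3      # monthly release at full capacity (m3)

def simulate(monthly_inflow_m3, s0=0.5 * S_MAX):
    """Yield (energy in MWh, capacity factor) for each month."""
    s = s0
    for q in monthly_inflow_m3:
        v = min(s + q, V_MAX)            # greedy release policy (no V_min modelled here)
        s = min(s + q - v, S_MAX)        # mass balance; any overflow is spilled
        e_mwh = E_PER_M3 * v / 3.6e9     # 1 MWh = 3.6e9 J
        yield e_mwh, e_mwh / (P_NOM * HOURS)

inflows = [4.5e8, 5.0e8, 3.5e8, 2.0e8, 1.2e8, 0.8e8]   # wet-to-dry monthly volumes (m3)
for t, (e, cf) in enumerate(simulate(inflows), 1):
    print(f"month {t}: {e:9.0f} MWh  CF={cf:.2f}")
```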
Forcing the conceptual hydrological model with the CMIP5 ensemble mean shows slightly higher inflow values than those of the baseline. Uncertainty is greatest in the wet season, with some GCMs doubling or tripling the baseline inflow but others remaining closer to the baseline values. Analysing results by wet and dry seasons, we find that during the wet season 62% of the GCMs agree on increases, while during the dry season 55% of the GCMs agree on decreases of inflow. This corroborates the prediction that the region will have wetter wet seasons and drier dry seasons under climate change (Kundzewicz et al. 2007).
River inflow regimes for gauging stations at Ecuador's major hydropower stations. The historic baseline, each GCM and the CMIP5 ensemble mean under the RCP4.5 scenario for the 2071–2100 period are shown. The shaded band represents the standard deviation
Regarding electricity generation, we find that capacity factors follow the seasonal inflow patterns (compare to Fig. 3) and that their variation range depends on the storage and operational characteristics of the respective hydropower station. Figure 4 presents the capacity factors for the historic period, the CMIP5 ensemble mean and ±1SD (error bars). Cascading hydropower stations on the same river have been aggregated, given that they are usually considered as one integrated operation system. The +1SD optimistic scenario increases the monthly capacity factors (85–89%); however, the −1SD scenario presents a more critical situation: monthly capacity factors drop to 0% during the dry season, namely for the stations that have small regulation capacity, i.e. Coca Codo Sinclair, Minas San Francisco and Toachi Pilatón. Marcel Laniado, which has a large reservoir, presents less sensitivity to changes, although in the −1SD scenario it likewise drops to zero at the peak of the dry period in November. Figure 5 presents results for electricity generation for the aggregated hydropower system, which has a total installed capacity of 4368 MW. The +1SD scenario presents an overall higher electricity output throughout the year; the wet season (March to August) presents a 15% average increase, while the dry season presents an average increase of 46%. In contrast, the −1SD scenario presents an average reduction of −50% during the wet season and of −76% during the dry season. Coca Codo Sinclair, Toachi Pilatón and Minas San Francisco have no output at all in the dry season under the −1SD scenario. Paute and Agoyán maintain output in the −1SD dry scenario due to their regulation capacities. Marcel Laniado seems less affected by inflow variations due to its large reservoir. Table 1 presents results at the annual level and percentage deviations from the observed annual generation of the aggregated hydropower system (22,801 GWh), showing a 6% increase (1408 GWh) for the ensemble mean, a 39% increase (800 GWh) for the +1SD scenario, and a significant reduction of −55% (−12,400 GWh) for the −1SD scenario. These results provide statistical information which allows the use of alternative methods for addressing uncertainty in long-term energy planning analysis. The standard deviation is chosen since it has been used as a measure of uncertainty in risk analysis approaches and investment portfolio analysis for the power sector (Awerbuch and Yang 2007; Krey and Zweifel 2008; Vithayasrichareon and MacGill 2012).
Traditionally, renewable energy sources, including hydropower, are considered to carry null or low risk in terms of operating price compared to thermal sources that depend on fuels with volatile prices. However, hydropower, with its long-lived infrastructure, has an inherent risk of experiencing high or low runoff outcomes due to long-term climate variations. In this analysis, we have simulated the output of each hydropower station in isolation, working at maximum capacity when water is available. However, the operation of dams and hydropower stations depends not only on the availability of water but also on their interaction with the rest of the power system; for example, optimised real operation may sacrifice base-load dispatch and reserve water for peak demand hours when electricity prices are high (IFE 2013; Yi Ng et al. 2017).
Mean monthly capacity factors for Ecuador's major hydropower stations. The ±1 standard deviation is shown for the probability space parameterised with the CMIP5 ensemble under RCP4.5 for the period 2071–2100
Typical seasonal power generation for selected hydropower stations considering the mean and standard deviations according to RCP4.5 of the CMIP5 ensemble for the 2071–2100 period. The dotted line is the aggregated historical generation. Stations in the Amazon watershed are shown in green and those in the Pacific watershed in blue
Table 1 Annual generation output changes for the RCP4.5 ensemble mean, +1SD and −1SD. Simulated annual generation is for the period 2071–2100
Our approach, based on simulated hydropower production driven by changes in runoff due to climate change, has some limitations. First, the use of the delta method to estimate the percentage changes of climate variables compared to a historic baseline entails assumptions about the nature of the changes, including a lack of change in the variability and spatial patterns of climate (NORDEN 2010). The lack of meteorological data and the high variability of the climate system in the Tropical Andes region complicate the use of more complex downscaling methods (Buytaert et al. 2010), and downscaled information can be no more reliable than the climate model simulation that underlies it; more detail does not automatically imply better information (Taylor et al. 2012). Reliance on climate data from KNMI and downscaling from 0.5° grids may also result in incorrect inflows for regions with complex topography, where there are sharp changes in rainfall and runoff over short distances. Second, ceteris paribus was assumed in this study in terms of other hydrological variables that can affect runoff in the long term, e.g. land use and vegetation cover, and upstream water use for agricultural or industrial purposes, which should be of concern especially for changes in seasonal patterns. However, most of the assessed capacity and future hydropower potential in Ecuador lies on the eastern slopes of the Andes facing the Amazon flood plain, where currently less than 4% of the country's population lives (INEC 2012). Finally, the distribution of a climate ensemble is not a true probability distribution but instead an expert judgement with respect to potential future climatic conditions (Moss et al. 2010), and therefore assigning probability statistics to it might be misleading (Taylor et al. 2012; Collins and Knutti 2013).
Nonetheless, for the purpose of analysing impacts of climate change, GCMs are still the only credible tools currently available to simulate the physical processes that determine global climate, and they are used as a basis for assessing climate change impacts on natural and human systems, especially when there is a need to parameterise the probability space (Schaeffer et al. 2013a; Parkinson and Djilali 2015).
Conclusions and policy implications
The results of this study show that the long-term projected changes in unregulated inflow into hydropower stations encompass a wide range, dominated by the large differences in inter-GCM precipitation projections. The CMIP5 ensemble mean projects a slight increase in total/mean annual inflow into Ecuador's hydropower stations towards the end of the century. However, when the CMIP5 ensemble projections are used to characterise the probability space, the assessment of the seasonal patterns indicates that the country will experience wetter wet seasons and drier dry seasons, leading to large variations in annual hydropower output. Shortfalls in hydropower production would result in either reduced available electrical energy for consumers or, more likely, a temporary shift in the means of power generation. Ecuador's plans to become a net exporter of hydroelectric power to neighbouring Colombia, Peru and even Chile will need to be closely monitored, given the potential loss of revenues and the projected increase in domestic demand, which would need to be met by augmenting supplies from alternative resources, including renewables or oil- and gas-fired plants. The opposite is also plausible: heavy rains could contribute to increased hydropower output, leading to reduced energy costs and a surplus for export (if international transmission infrastructure were available). The scale of these impacts is likely to depend on both the magnitude of the hydropower production windfall or shortfall and the relative importance of hydropower in the energy matrix. Hydropower stations with storage capabilities show less sensitivity to inflow changes than runoff facilities, although extreme dry scenarios will render any storage capacity ineffective. Therefore, dam-based hydropower will have only a limited climate-change risk-control advantage compared to runoff stations; moreover, dams need larger investments and cause larger social and environmental impacts. Future research should point in the direction of methodologies that combine the results and uncertainty of climate change projection ensembles with energy system models that can capture hydropower's interaction with the rest of the energy system. Complementary future research on the role of the El Niño Southern Oscillation (ENSO), which has large impacts in this region, and on changes in its frequency, intensity and duration, will also help to define a better picture of the vulnerability hotspots where hydropower and other renewable energy sources are critically exposed to inter-annual climate variability. Such studies would inform decision makers of the investments needed to ensure energy security in the face of climate change. For Ecuador, a more robust long-term electricity strategy should focus on an appropriate diversification of generating technologies. The share of hydropower, particularly large runoff facilities, must decrease rapidly, while policy support should promote an increase in non-conventional renewables.
Radiative forcing is stabilised at 4.5 W/m2 in the year 2100 without ever exceeding this value.
Socio-economic scenarios of the Intergovernmental Panel on Climate Change Assessment Report 4 (A1, A2, B1, B2, etc.)
The capacity factor of a power plant is the ratio of its actual output over a period of time to its potential output if it were possible for it to operate at full nameplate capacity continuously over the same period.
Even though small glaciers are present in the Ecuadorian Andes, strong solar radiation precludes the development of a seasonal snow cover. Snowmelt therefore does not provide an additional, seasonally-changing water reservoir, meaning that precipitation and evapotranspiration remain the leading hydroclimatic drivers (Kaser et al. 2003; Vergara et al. 2007; Kaser et al. 2010)
Precipitation has been identified as the leading driver for inflow in Ecuador (Célleri 2007). In regions with little or no snow, e.g. in the Amazon, changes in runoff are much more dependent on changes in rainfall than on changes in temperature (Bates et al. 2008).
Notice that there is a lag time $m$ between precipitation and runoff, which has been adjusted to obtain the best model fit.
A categorical variable was inserted to improve the regression fit and represent seasonal patterns, with $d_2 = 0$ for the dry season (from October to February) and $d_2 = 1$ for the wet season.
ARCONEL (2015) Plan Maestro de Electricidad - Expansion de la Generacion
ARCONEL (2016) Balance Nacional Electrico Noviembre 2016. http://www.regulacionelectrica.gob.ec/estadistica-del-sector-electrico/balance-nacional/. Accessed 1 Feb 2017
Arnold JG, Srinivasan R, Muttiah RS, Williams JR (1998) Large area hydrologic modeling and assessment part I: model development. J Am Water Resour Assoc 34:73–89. doi:10.1111/j.1752-1688.1998.tb05961.x
Awerbuch S, Yang S (2007) Efficient electricity generating portfolios for Europe: maximising energy security and climate change mitigation. EIB Pap 12:8–37. ISSN 0257-7755
Babur M, Babel MS, Shrestha S et al (2016) Assessment of climate change impact on reservoir inflows using multi climate-models under RCPs: the case of Mangla Dam in Pakistan. Water. doi:10.3390/w8090389
Bates B, Kundzewicz ZW, Wu S, Palutikof J (2008) Climate change and water. Technical paper of the Intergovernmental Panel on Climate Change. Intergovernmental Panel on Climate Change (IPCC)
Buytaert W, Vuille M, Dewulf A et al (2010) Uncertainties in climate change projections and regional downscaling in the tropical Andes: implications for water resources management. Hydrol Earth Syst Sci 14:1247–1258. doi:10.5194/hess-14-1247-2010
Buytaert W, Cuesta-Camacho F, Tobón C (2011) Potential impacts of climate change on the environmental services of humid tropical alpine regions. Glob Ecol Biogeogr 20:19–33. doi:10.1111/j.1466-8238.2010.00585.x
Célleri R (2007) Rainfall variability and rainfall-runoff dynamics in the Paute River Basin - Southern Ecuadorian Andes. Katholieke Universiteit Leuven
CEPAL (2012) Análisis de Vulnerabilidad del Sector Hidroeléctrico frente a escenarios futuros de cambio climatico en Chile. Santiago, Chile
Jiménez Cisneros BE, Oki T, Arnell NW et al (2014) Freshwater resources. In: Climate change 2014: impacts, adaptation, and vulnerability. Part A: global and sectoral aspects. Contribution of working group II to the fifth assessment report of the Intergovernmental Panel on Climate Change [Field CB, Barros VR et al (eds)]
Collins M, Knutti R (2013) Long-term climate change: projections, commitments and irreversibility. In: Climate change 2013: the physical science basis.
Contribution of working group I to the fifth assessment report of the Intergovernmental Panel on Climate Change
De Lucena AFP, Szklo AS, Schaeffer R et al (2009) The vulnerability of renewable energy to climate change in Brazil. Energy Policy 37:879–889. doi:10.1016/j.enpol.2008.10.029
De Lucena A, Schaeffer R, Szklo A (2010) Least-cost adaptation options for global climate change impacts on the Brazilian electric power system. Glob Environ Chang 20:342–350. doi:10.1016/j.gloenvcha.2010.01.004
DOE (2015) Climate change and the U.S. energy sector: regional vulnerabilities and resilience solutions
El Comercio (2016) Cuatro turbinas del Coca-Codo comenzaron a entregar energía en firme | El Comercio. http://www.elcomercio.com/actualidad/turbinas-cocacodosinclair-jorgeglas-energia.html. Accessed 24 Oct 2016
Escobar M, López FF, Clark V (2011) Energy-water-climate planning for development without carbon in Latin America and the Caribbean
Espinoza Villar JC, Ronchail J, Guyot JL et al (2009) Spatio-temporal rainfall variability in the Amazon basin countries (Brazil, Peru, Bolivia, Colombia, and Ecuador). Int J Climatol 29:1574–1594. doi:10.1002/joc.1791
Finer M, Jenkins CN (2012) Proliferation of hydroelectric dams in the Andean Amazon and implications for Andes-Amazon connectivity. PLoS One 7:e35126. doi:10.1371/journal.pone.0035126
Fowler HJ, Blenkinsop S, Tebaldi C (2007) Linking climate change modelling to impacts studies: recent advances in downscaling techniques for hydrological modelling. Int J Climatol 27:1547–1578. doi:10.1002/joc.1556
Gargiulo M (2009) Getting started with TIMES-VEDA
Grijsen J (2014) Understanding the impact of climate change on hydropower: the case of Cameroon
Hamlet AF, Lee S-Y, Mickelson KEB, Elsner MM (2010) Effects of projected climate change on energy supply and demand in the Pacific Northwest and Washington State. Clim Chang 102:103–128. doi:10.1007/s10584-010-9857-y
Harris I, Jones PD, Osborn TJ, Lister DH (2014) Updated high-resolution grids of monthly climatic observations: the CRU TS3.10 dataset. Int J Climatol 34:623–642. doi:10.1002/joc.3711
Hay LE, Clark MP, Wilby RL et al (2002) Use of regional climate model output for hydrologic simulations. J Hydrometeorol 3:571–590. doi:10.1175/1525-7541(2002)003<0571:UORCMO>2.0.CO;2
Ho JT, Thompson JR, Brierley C (2015) Projections of hydrology in the Tocantins-Araguaia Basin, Brazil: uncertainty assessment using the CMIP5 ensemble. Hydrol Sci J. doi:10.1080/02626667.2015.1057513
IFE (2013) TIMES-Norway Model Documentation
INEC (2012) Estadísticas Nacionales
IPCC (2014) Climate change 2014: impacts, adaptation and vulnerability. Contributions of the working group II to the fifth assessment report. Summary for policy makers
Jones RN, Chiew FHS, Boughton WC, Zhang L (2006) Estimating the sensitivity of mean annual runoff to climate change using selected hydrological models. Adv Water Resour 29:1419–1429. doi:10.1016/j.advwatres.2005.11.001
Kannan R, Turton H (2011) Documentation on the development of the Swiss TIMES Electricity Model (STEM-E)
Kaser G, Juen I, Georges C et al (2003) The impact of glaciers on the runoff and the reconstruction of mass balance history from hydrological data in the tropical Cordillera Blanca, Perú. J Hydrol 282:130–144. doi:10.1016/S0022-1694(03)00259-2
Kaser G, Grosshauser M, Marzeion B (2010) Contribution potential of glaciers to water availability in different climate regimes. Proc Natl Acad Sci U S A 107:20223–20227.
doi:10.1073/pnas.1008162107
Krey B, Zweifel P (2008) Efficient electricity portfolios for the United States and Switzerland: an investor view
Kundzewicz ZW, Mata LJ, Arnell NW et al (2007) Freshwater resources and their management. In: Climate change 2007: impacts, adaptation and vulnerability, pp 173–210
Lind A, Rosenberg E, Seljom P, Espegren K (2013) The impact of climate change on the renewable energy production in Norway. In: 2013 International Energy Workshop
Liu X, Tang Q, Voisin N, Cui H (2016) Projected impacts of climate change on hydropower potential in China. Hydrol Earth Syst Sci 20:3343–3359. doi:10.5194/hess-20-3343-2016
Madani K, Lund JR (2010) Estimated impacts of climate warming on California's high-elevation hydropower. Clim Chang 102:521–538. doi:10.1007/s10584-009-9750-8
MICSE (2016) Agenda Nacional de Energia
Moss RH, Edmonds JA, Hibbard KA et al (2010) The next generation of scenarios for climate change research and assessment. Nature 463:747–756. doi:10.1038/nature08823
NORDEN (2010) Conference on future climate and renewable energy: impacts, risks and adaptation
van Oldenborgh GJ, Collins M, Arblaster J et al (2013) Annex I: atlas of global and regional climate projections
Parkinson S, Djilali N (2015) Robust response to hydro-climatic change in electricity generation planning. Clim Chang 1–15. doi:10.1007/s10584-015-1359-5
Schaeffer R, Szklo A, De Lucena A et al (2013a) The vulnerable Amazon: the impact of climate change on the untapped potential of hydropower systems. IEEE Power Energy Mag 11:22–31. doi:10.1109/MPE.2013.2245584
Schaeffer R, Szklo A, De Lucena A et al (2013b) The impact of climate change on the untapped potential of hydropower systems
Shrestha S, Bajracharya AR, Babel MS (2016) Assessment of risks due to climate change for the upper Tamakoshi hydropower project in Nepal. Clim Risk Manag 14:27–41. doi:10.1016/j.crm.2016.08.002
Smith LA, Petersen AC (2014) Variations on reliability: connecting climate predictions to climate policy. In: Error and uncertainty in scientific practice, pp 137–156
Taylor KE, Stouffer RJ, Meehl GA (2012) An overview of CMIP5 and the experiment design. Bull Am Meteorol Soc 93:485–498. doi:10.1175/BAMS-D-11-00094.1
Thomson AM, Calvin KV, Smith SJ et al (2011) RCP4.5: a pathway for stabilization of radiative forcing by 2100. Clim Chang 109:77–94. doi:10.1007/s10584-011-0151-4
Trouet V, Van Oldenborgh GJ (2013) KNMI climate explorer: a web-based research tool for high-resolution paleoclimatology. Tree-Ring Res 69:3–13. doi:10.3959/1536-1098-69.1.3
Vergara W, Deeb AM, Valencia AM et al (2007) Economic impacts of rapid glacier retreat in the Andes. EOS Trans Am Geophys Union 88:261. doi:10.1029/2007EO250001
Vetter T, Huang S, Aich V et al (2015) Multi-model climate impact assessment and intercomparison for three large-scale river basins on three continents. Earth Syst Dyn 6:17–43. doi:10.5194/esd-6-17-2015
Vithayasrichareon P, MacGill IF (2012) Portfolio assessments for future generation investment in newly industrializing countries: a case study of Thailand. Energy 44:1044–1058. doi:10.1016/j.energy.2012.04.042
van Vliet MTH, Wiberg D, Leduc S, Riahi K (2016) Power-generation system vulnerability and adaptation to changes in climate and water resources: supplementary information. Nat Clim Chang.
doi:10.1038/nclimate2903
Yi Ng J, Turner S, Galelli S (2017) Influence of El Niño Southern Oscillation on global hydropower production
Zambrano-Barragán P (2012) The role of the state in large-scale hydropower development. Perspectives from Chile, Ecuador, and Peru. Massachusetts Institute of Technology
Profound appreciation is extended to the Ecuadorian Secretariat of Higher Education, Science, Technology and Innovation (SENESCYT) for providing monetary support to the first author for his doctoral studies at the UCL Energy Institute.
UCL Energy Institute, University College London, Central House, 14 Upper Woburn Place, London, WC1H 0NN, UK: Pablo E. Carvajal, Gabrial Anandarajah & Olivier Dessens
Department of Science, Technology, Engineering and Public Policy, University College London, 36-37 Fitzroy Square, London, W1T 6EY, UK: Yacob Mulugetta
Correspondence to Pablo E. Carvajal.
Supplementary material (DOCX 1250 kb)
Carvajal PE, Anandarajah G, Mulugetta Y et al (2017) Assessing uncertainty of climate change impacts on long-term hydropower generation using the CMIP5 ensemble: the case of Ecuador. Climatic Change 144:611–624. https://doi.org/10.1007/s10584-017-2055-4
Issue Date: October 2017
Turkish Journal of Electrical Engineering and Computer Sciences, Vol. 29 (2021), No. 4
Field-programmable gate array (FPGA) hardware design and implementation of a new area efficient elliptic curve crypto-processor
MUHAMMAD KASHIF and İHSAN ÇİÇEK
10.3906/elk-2008-8
Elliptic curve cryptography provides a widely recognized secure environment for information exchange in resource-constrained embedded system applications, such as Internet-of-Things, wireless sensor networks, and radio frequency identification. As elliptic-curve cryptography (ECC) arithmetic is computationally very complex, there is a need for dedicated hardware for efficient computation of the ECC algorithm, in which scalar point multiplication is the performance bottleneck. In this work, we present an ECC accelerator that computes the scalar point multiplication for the NIST recommended elliptic curves over Galois binary fields using a polynomial basis. We used the Montgomery algorithm with projective coordinates for the scalar point multiplication. We designed a hybrid finite field multiplier based on the standard Karatsuba and shift-and-add multiplication algorithms that achieves one finite field multiplication in $\frac{m}{2}$ clock cycles for a key length of m. The proposed design has been modeled in Verilog hardware description language (HDL), functionally verified with simulations, and implemented for field-programmable gate array (FPGA) devices using vendor tools to demonstrate hardware efficiency. Finally, we have integrated the ECC accelerator as an AXI4 peripheral with a synthesizable microprocessor on an FPGA device to create an elliptic curve crypto-processor.
Keywords: Elliptic curve cryptography, Karatsuba multiplier, crypto-accelerator, crypto-processor, scalar multiplication, field-programmable gate array
KASHIF, MUHAMMAD and ÇİÇEK, İHSAN (2021) "Field-programmable gate array (FPGA) hardware design and implementation of a new area efficient elliptic curve crypto-processor," Turkish Journal of Electrical Engineering and Computer Sciences: Vol. 29: No. 4, Article 17. https://doi.org/10.3906/elk-2008-8
Available at: https://journals.tubitak.gov.tr/elektrik/vol29/iss4/17
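The abstract gives no implementation details, but as a rough software illustration of the hybrid multiplier idea it describes (a Karatsuba split whose half-width products are computed by shift-and-add), here is a sketch of carry-less multiplication over GF(2)[x] using Python integers as bit-packed polynomials. The operand width and the NIST B-233 reduction polynomial are chosen for illustration only; this is not the paper's hardware design:

```python
import random

def clmul_shift_add(a: int, b: int) -> int:
    """Schoolbook shift-and-add multiplication in GF(2)[x] (addition is XOR)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def clmul_karatsuba(a: int, b: int, w: int = 233) -> int:
    """One Karatsuba level over w-bit operands; the halves use shift-and-add."""
    h = w // 2
    mask = (1 << h) - 1
    a0, a1 = a & mask, a >> h
    b0, b1 = b & mask, b >> h
    lo = clmul_shift_add(a0, b0)
    hi = clmul_shift_add(a1, b1)
    mid = clmul_shift_add(a0 ^ a1, b0 ^ b1) ^ lo ^ hi   # subtraction is also XOR
    return (hi << (2 * h)) ^ (mid << h) ^ lo

def reduce_mod(x: int, poly: int, m: int) -> int:
    """Reduce a GF(2)[x] product modulo a degree-m field polynomial."""
    while x.bit_length() > m:          # i.e. while deg(x) >= m
        x ^= poly << (x.bit_length() - 1 - m)
    return x

P233 = (1 << 233) | (1 << 74) | 1      # NIST B-233/K-233 field: x^233 + x^74 + 1
a, b = (random.getrandbits(233) for _ in range(2))
assert clmul_karatsuba(a, b) == clmul_shift_add(a, b)   # Karatsuba matches schoolbook
print(hex(reduce_mod(clmul_karatsuba(a, b), P233, 233)))
```

One plausible reading, though not stated in the abstract, is that the hardware computes the two half-width shift-and-add products serially over the half-width bits, which would be consistent with the stated m/2 cycles per field multiplication.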
Half-Life 2: Episode Two | Resistance weapons | Explosives
Magnusson Device
Inventor: Arne Magnusson
Damage: Instant kill against Striders; 150 damage to Hunters
Operation: Launched by the Gravity Gun; detonates when shot
Ammunition: N/A
Range: Close
Entity: weapon_striderbuster (weapon_magnade, cut entity)
"We call it the Magnusson Device. Not my chosen label you understand, but it seemed to please the personnel."
―Arne Magnusson[src]
The Magnusson Device, also known as the Strider Buster, is a Resistance weapon developed by Arne Magnusson, the leader of White Forest.
Magnusson introduces one of his devices in a display case.
In City 17, Striders slaughtered large numbers of Resistance fighters and Citizens, despite the Resistance's use of RPGs, the most powerful weapon at their disposal. Doctor Magnusson struggled to develop a successful launching system for the rugby-ball-shaped Device, as the weapon needed to be attached to the towering Striders' bodies to be effective; however, this problem was immediately solved with the arrival of Gordon Freeman and his Gravity Gun at White Forest.
According to Magnusson, the device's function is to cause Striders to destroy themselves. Due to the limited resources available during the Combine's rule over Earth, the Device had to use simple and conventional materials and technology found in the wreckage of pre-Resonance Cascade society; however, it is likely that the electronics inside the Device are stripped from Combine machinery. The Device uses sharpened nails spaced around its circumference to puncture and attach to the carapace of a Strider.
Magnusson Devices can be taken from the small portable Teleporters found around White Forest, which are connected to deployment units within White Forest's missile base. A Teleporter will send another Device when the previous one has been taken. The Magnusson Device can also be attached to the Muscle Car's back thanks to MIRT's mechanical skill; he adds a storage pocket behind it.
The Magnusson Device is very effective at destroying Striders; however, the user must aim the Gravity Gun precisely in order to attach it, which is most easily achieved by running up close to the Strider from behind. During the large assault on White Forest, Striders are always accompanied by pairs of Hunters, which will fire upon a readied Magnusson Device as soon as possible. Therefore, the player should destroy all nearby Hunters before attempting to launch the Magnusson Device at a Strider. Because ammunition at the White Forest base is limited, one method of dealing with the Hunters is to run them over with the Muscle Car (earning the "Hit and Run" Achievement), which has the added bonus of protecting Gordon from the bulk of the Hunters' fire. If the Hunters cannot be cleared first, launch the Device at the Strider quickly and detonate it before they can shoot it off. The Muscle Car also has a pocket installed on its rear bumper for storing a single Magnusson Device, which can be useful for quickly transporting a Device to where it needs to be used.
After the Device is attached to a Strider, any weapon can be used to hit it (preferably the Shotgun or the MP7 at longer ranges, due to their cone of fire increasing the chance of a hit, and the MP7 or Pistol at close range), exploding the Device violently and completely shredding the Strider. Magnusson Devices can also be used to damage Hunters, but strategically this is an inefficient use of the bombs: a Device deals only about two-thirds of a Hunter's health in damage, and it is the Striders that pose the real threat to the rocket.
As seen in the Half-Life 2 Beta source code, the Hopwire Grenade was originally to be the weapon of choice against Striders,[1] replaced in Episode Two by the Magnusson Device. This is confirmed in the Episode Two commentary, in which Valve's Joshua Weier states that the Magnusson Device "started life as a Half-Life 2 weapon called the Hopwire".[2]
Originally, Gordon Freeman was to be able to throw a smaller Magnusson Device by hand like a regular grenade; its probable entity name was "weapon_magnade".
According to Richard Lord, the diagram explaining how to use the Magnusson Device against Striders went through several iterations, from highly stylized, to absurd, until the team arrived at something they believed they could ship.[2]
A cut weapon, the Sticky Launcher, bears similarities to the Magnusson Device.
The Magnusson Device bears a striking resemblance to "The Gadget", the first nuclear weapon, designed under the Manhattan Project.
The Magnusson Device does not have its own gib models, and instead uses the Manhack's gibs.
The equations featured on the instruction whiteboard are based on potential and kinetic energy. Interestingly, in addition to gravitational potential energy, the equation for spring potential energy $ PE = \frac{1}{2} k x^2 $ is included, despite no spring being involved in the operation of the device. However, Magnusson does point out that he had trouble thinking of a launcher device before mentioning that Gordon Freeman's Gravity Gun "solves that little problem".
Half-Life 2: Episode Two (First appearance)
↑ Half-Life 2 Beta source code
↑ 2.0 2.1 Half-Life 2: Episode Two commentary
arXiv:2104.14942 (quant-ph)
[Submitted on 30 Apr 2021 (v1), last revised 3 Jan 2022 (this version, v2)]
Title: Four-mode squeezed states: two-field quantum systems and the symplectic group $\mathrm{Sp}(4,\mathbb{R})$
Authors: Thomas Colas, Julien Grain, Vincent Vennin
Abstract: We construct the four-mode squeezed states and study their physical properties. These states describe two linearly-coupled quantum scalar fields, which makes them physically relevant in various contexts such as cosmology. They are shown to generalise the usual two-mode squeezed states of single-field systems, with additional transfers of quanta between the fields. To build them in the Fock space, we use the symplectic structure of the phase space. For this reason, we first present a pedagogical analysis of the symplectic group $\mathrm{Sp}(4,\mathbb{R})$ and its Lie algebra, from which we construct the four-mode squeezed states and discuss their structure. We also study the reduced single-field system obtained by tracing out one of the two fields. Since this procedure is easier in phase space, it motivates the use of the Wigner function, which we introduce as an alternative description of the state. It allows us to discuss environmental effects in the case of linear interactions. In particular, we find that there is always a range of interaction couplings for which decoherence occurs without substantially affecting the power spectra (hence the observables) of the system.
Comments: 38 pages without appendices (total 49 pages), matches published version in EPJC
Subjects: Quantum Physics (quant-ph); Cosmology and Nongalactic Astrophysics (astro-ph.CO); General Relativity and Quantum Cosmology (gr-qc)
Related DOI: https://doi.org/10.1140/epjc/s10052-021-09922-y
From: Thomas Colas
[v1] Fri, 30 Apr 2021 12:13:40 UTC (61 KB)
[v2] Mon, 3 Jan 2022 13:29:17 UTC (61 KB)
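For readers unfamiliar with the notation, the following standard definition (not taken from the abstract itself) may help: the symplectic group collects the linear phase-space transformations that preserve the canonical commutation relations,
$$ \mathrm{Sp}(4,\mathbb{R}) = \left\{ S \in \mathrm{GL}(4,\mathbb{R}) \,:\, S\,\Omega\,S^{\mathrm{T}} = \Omega \right\}, \qquad \Omega = \begin{pmatrix} 0 & I_2 \\ -I_2 & 0 \end{pmatrix}, $$
where $\Omega$ is the symplectic form for phase-space variables ordered as $(q_1, q_2, p_1, p_2)$ (ordering conventions vary between papers). Gaussian states of the two-field system, such as the four-mode squeezed states the paper constructs, are mapped into one another by exactly these transformations.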
The evidence? A 2012 study in Greece found it can boost cognitive function in adults with mild cognitive impairment (MCI), a type of disorder marked by forgetfulness and problems with language, judgement, or planning that are more severe than average "senior moments," but are not serious enough to be diagnosed as dementia. In some people, MCI will progress into dementia.
I almost resigned myself to buying patches to cut (and let the nicotine evaporate) in the hope they would still stick on well enough afterwards to be indistinguishable from a fresh patch, when late one sleepless night I realized that a piece of nicotine gum hanging around on my desktop for a week had proved useless when I tried it, and that was the answer: if nicotine evaporates from patches, then it must evaporate from gum as well, and if gum does evaporate, then to make a perfect placebo all I had to do was cut some gum into proper sizes and let the pieces sit out for a while. (A while later, I lost a piece of gum overnight and consumed the full 4mg to no subjective effect.) Google searches led to nothing indicating I might be fooling myself, and suggested that evaporation started within minutes in patches and a patch was useless within a day. Just a day is pushing it (who knows how much is left in a useless patch?), so I decided to build in a very large safety factor and let the gum sit for around a month rather than a single day.
Lebowitz says that if you're purchasing supplements to improve your brain power, you're probably wasting your money. "There is nothing you can buy at your local health food store that will improve your thinking skills," Lebowitz says. So that turmeric latte you've been drinking every day has no additional brain benefits compared to a regular cup of java.
Not included in the list below are prescription psychostimulants such as Adderall and Ritalin. Non-medical, illicit use of these drugs for the purpose of cognitive enhancement in healthy individuals comes with a high cost, including addiction and other adverse effects. Although these drugs are prescribed for those with attention deficit hyperactivity disorder (ADHD) to help with focus, attention and other cognitive functions, they have been shown to in fact impair these same functions when used for non-medical purposes. More alarmingly, when taken in high doses, they have the potential to induce psychosis.
Though their product includes several vitamins, including Bacopa, it seems to be missing the remaining four of the essential ingredients: DHA Omega 3, Huperzine A, Phosphatidylserine and N-Acetyl L-Tyrosine. It missed too many of our key criteria, and so we could not endorse this product of theirs. Simply put, if you don't mind an insufficient amount of essential ingredients for improved brain and memory function and the inclusion of unwanted ingredients, then this could be a good fit for you.
Several studies have assessed the effect of MPH and d-AMP on tasks tapping various other aspects of spatial working memory. Three used the spatial working memory task from the CANTAB battery of neuropsychological tests (Sahakian & Owen, 1992). In this task, subjects search for a target at different locations on a screen. Subjects are told that locations containing a target in previous trials will not contain a target in future trials. Efficient performance therefore requires remembering and avoiding these locations in addition to remembering and avoiding locations already searched within a trial. Mehta et al.
(2000) found evidence of greater accuracy with MPH, and Elliott et al. (1997) found a trend for the same. In Mehta et al.'s study, this effect depended on subjects' working memory ability: the lower a subject's score on placebo, the greater the improvement on MPH. In Elliott et al.'s study, MPH enhanced performance for the group of subjects who received the placebo first and made little difference for the other group. The reason for this difference is unclear, but as mentioned above, this may reflect ability differences between the groups. More recently, Clatworthy et al. (2009) undertook a positron emission tomography (PET) study of MPH effects on two tasks, one of which was the CANTAB spatial working memory task. They failed to find consistent effects of MPH on working memory performance but did find a systematic relation between the performance effect of the drug in each individual and its effect on individuals' dopamine activity in the ventral striatum.
There are seven primary classes used to categorize smart drugs: Racetams, Stimulants, Adaptogens, Cholinergics, Serotonergics, Dopaminergics, and Metabolic Function Smart Drugs. Despite considerable overlap and no clear border in the brain's and body's responses to these substances, each class manifests its effects through a different chemical pathway within the body.
Noopept was developed in Russia in the 90s and is alleged to improve learning. This drug modulates acetylcholine and AMPA receptor activity, increasing the levels of these neurotransmitters in the brain. This is believed to account for reports of its efficacy as a 'study drug'. Noopept is illegal in the UK, as the 2016 Psychoactive Substances Act made it an offence to sell this drug there; selling it could even lead to 7 years in prison. To enhance its nootropic effects, some users have been known to snort Noopept.
Dopaminergics are smart drug substances that affect levels of dopamine within the brain. Dopamine is a major neurotransmitter, responsible for the good feelings and biochemical positive feedback from behaviors for which our biology naturally rewards us: tasty food, sex, positive social relationships, etc. Use of dopaminergic smart drugs promotes attention and alertness by either increasing the efficacy of dopamine within the brain or inhibiting the enzymes that break dopamine down. Examples of popular dopaminergic smart drugs include Yohimbe, selegiline and L-Tyrosine.
The smart pill industry has popularized many herbal nootropics. Most of them first appeared in Ayurveda and traditional Chinese medicine. Ayurveda is a branch of natural medicine originating from India. It focuses on using herbs as remedies for improving quality of life and healing ailments. Evidence suggests our ancestors were on to something with this natural approach.
Actually, researchers are studying substances that may improve mental abilities. These substances are called "cognitive enhancers" or "smart drugs" or "nootropics." ("Nootropic" comes from Greek: "noos" = mind and "tropos" = changed, toward, turn.) The supposed effects of cognitive enhancement can be several things. For example, it could mean improvement of memory, learning, attention, concentration, problem solving, reasoning, social skills, decision making and planning.
Please note: Smart Pills, Smart Drugs or Brain Food Supplements are also known as: Brain Smart Vitamins, Brain Tablets, Brain Vitamins, Brain Booster Supplements, Brain Enhancing Supplements, Cognitive Enhancers, Focus Enhancers, Concentration Supplements, Mental Focus Supplements, Mind Supplements, Neuro Enhancers, Neuro Focusers, Vitamins for Brain Function, Vitamins for Brain Health, Smart Brain Supplements, Nootropics, or "Natural Nootropics".
"Cavin's personal experience and humble writing to help educate not only people who have suffered brain injuries, but anyone interested in the best nutritional advice for optimum brain function, is a great introduction to proper nutrition, filled with many recommendations of changes you can make to your diet immediately. This book provides amazing personal insight related to Cavin's recovery, accompanied by well-cited, peer-reviewed sources throughout the entire book, detailing the most recent findings around functional neurology!"
This would be a very time-consuming experiment. Any attempt to combine this with other experiments by ANOVA would probably push the end-date out by months, and one would start to be seriously concerned that changes caused by aging or environmental factors would contaminate the results. A 5-year experiment with 7-month intervals will probably eat up 5+ hours to prepare <12,000 pills (active & placebo); each switch and test of mental functioning will probably eat up another hour, for 32 hours. (And what test maintains validity with no practice effects over 5 years? Dual n-back would be unusable because of improvements to WM over that period.) Add in an hour for analysis & writeup, and that suggests >38 hours of work, and $38 \times 7.25 = 275.5$. 12,000 pills is roughly $12.80 per thousand, or $154; 120 potassium iodide pills is ~$9, so $\frac{365.25}{120} \times 9 \times 5 = 137$.
The difference in standard deviations is not, from a theoretical perspective, all that strange a phenomenon: at the very beginning of this page, I covered some basic principles of nootropics and mentioned how many stimulants or supplements follow an inverted U-curve where too much or too little leads to poorer performance (ironically, one of the examples in Kruschke 2012 was a smart drug which did not affect means but increased standard deviations).
Barbara Sahakian, a neuroscientist at Cambridge University, doesn't dismiss the possibility that nootropics can enhance cognitive function in healthy people. She would like to see society think about what might be considered acceptable use and where it draws the line – for example, young people whose brains are still developing. But she also points out a big problem: long-term safety studies in healthy people have never been done. Most efficacy studies have only been short-term. "Proving safety and efficacy is needed," she says.
Price discrimination is aided by barriers such as ignorance and oligopolies. An example of the former would be when I went to a Food Lion grocery store in search of spices, and noticed that there was a second selection of spices in the Hispanic/Latino ethnic food aisle, with unit prices perhaps a fourth of the regular McCormick-brand spices; I rather doubt that regular cinnamon varies that much in quality. An example of the latter would be using veterinary drugs on humans - any doctor who did so would probably be guilty of medical malpractice even if the drugs were manufactured in the same factories (as well they might be, considering economies of scale).
Similarly, we can predict that whenever there is a veterinary drug which is chemically identical to a human drug, the veterinary drug will be much cheaper, regardless of actual manufacturing cost, than the human drug, because pet owners do not value their pets more than themselves. Human drugs are ostensibly held to a higher standard than veterinary drugs; so if veterinary prices were higher, then there would be an arbitrage incentive to simply buy the cheaper human version and downgrade it to a veterinary drug.
If you happen to purchase anything recommended on this or affiliated websites, we will likely receive some kind of affiliate compensation. We only recommend stuff that we truly believe in and share with our friends and family. If you ever have an issue with anything we recommend, please let us know. We want to make sure we are always serving you at the highest level. If you are purchasing using our affiliate link, you will not pay a different price for the products and/or services, but your purchase helps support our ongoing work. Thanks for your support!
Fitzgerald 2012 and the general absence of successful experiments suggest not, as does the general historic failure of scores of IQ-related interventions in healthy young adults. Of the 10 studies listed in the original section dealing with iodine in children or adults, only 2 show any benefit; in lieu of a meta-analysis, a rule of thumb would be 20%, but both those studies used a package of dozens of nutrients - and not just iodine - so if the responsible substance were randomly picked, that suggests we ought to give it a chance of $20\% \times \frac{1}{\text{dozens}}$ of being iodine! I may be unduly optimistic if I give this as much as 10%.
When I spoke with Jesse Lawler, who hosts the podcast Smart Drug Smarts, about breakthroughs in brain health and neuroscience, he was unsurprised to hear of my disappointing experience. Many nootropics are supposed to take time to build up in the body before users begin to feel their impact. But even then, says Barry Gordon, a neurology professor at the Johns Hopkins Medical Center, positive results wouldn't necessarily constitute evidence of a pharmacological benefit.
No. There are mission-essential jobs that require you to live on base sometimes. Or a first-term person that is required to live on base. Or if you have proven to not be as responsible with rent off base as you should be, so your commander requires you to live on base. Or you're at an installation that requires you to live on base during your stay. Or the only affordable housing off base puts you an hour away from where you work. It isn't simple. The fact that you think it is tells me you are one of the "dumb@$$es" you are referring to above.
But when aficionados talk about nootropics, they usually refer to substances that have supposedly few side effects and low toxicity. Most often they mean piracetam, which Giurgea first synthesized in 1964 and which is approved for therapeutic use in dozens of countries for use in adults and the elderly. Not so in the United States, however, where officially it can be sold only for research purposes.
How should the mixed results just summarized be interpreted vis-à-vis the cognitive-enhancing potential of prescription stimulants? One possibility is that d-AMP and MPH enhance cognition, including the retention of just-acquired information and some or all forms of executive function, but that the enhancement effect is small.
If this were the case, then many of the published studies were underpowered for detecting enhancement, with most sample sizes under 50. It follows that the observed effects would be inconsistent, a mix of positive and null findings.
On the other hand, sometimes you'll feel a great cognitive boost as soon as you take a pill. That can be a good thing or a bad thing. I find, for example, that modafinil makes you more of what you already are. That means if you are already kind of a dick and you take modafinil, you might act like a really big dick and regret it. It certainly happened to me! I like to think that I've done enough hacking of my brain that I've gotten over that programming… and that when I use nootropics they help me help people.
If the entire workforce were to start doping with prescription stimulants, it seems likely that they would have two major effects. Firstly, people would stop avoiding unpleasant tasks, and weary office workers who had perfected the art of not-working-at-work would start tackling the office filing system, keeping spreadsheets up to date, and enthusiastically attending dull meetings.
Despite decades of study, a full picture has yet to emerge of the cognitive effects of the classic psychostimulants and modafinil. Part of the problem is that getting rats, or indeed students, to do puzzles in laboratories may not be a reliable guide to the drugs' effects in the wider world. Drugs have complicated effects on individuals living complicated lives. Determining that methylphenidate enhances cognition in rats by acting on their prefrontal cortex doesn't tell you the potential impact that its effects on mood or motivation may have on human cognition.
Hall, Irwin, Bowman, Frankenberger, & Jewett (2005): large public university undergraduates (N = 379); 13.7% lifetime use; 27% used during finals week; 12% used when partying; 15.4% used before tests; 14% believed stimulants have a positive effect on academic achievement in the long run; M = 2.06 (SD = 1.19) purchased stimulants from other students; M = 2.81 (SD = 1.40) had been given stimulants by other students.
Cocoa flavanols (CF) positively influence physiological processes in ways which suggest that their consumption may improve aspects of cognitive function. This study investigated the acute cognitive and subjective effects of CF consumption during sustained mental demand. In this randomized, controlled, double-blinded, balanced, three-period crossover trial, 30 healthy adults consumed drinks containing 520 mg CF, 994 mg CF, and a matched control, with a 3-day washout between drinks. Assessments included the state anxiety inventory and repeated 10-min cycles of a Cognitive Demand Battery comprising two serial subtraction tasks (Serial Threes and Serial Sevens), a Rapid Visual Information Processing (RVIP) task and a mental fatigue scale, over the course of 1 h. Consumption of both the 520 mg and 994 mg CF drinks significantly improved Serial Threes performance. The 994 mg CF beverage significantly speeded RVIP responses but also resulted in more errors during Serial Sevens. Increases in self-reported mental fatigue were significantly attenuated by the consumption of the 520 mg CF beverage only. This is the first report of acute cognitive improvements following CF consumption in healthy adults. While the mechanisms underlying the effects are unknown, they may be related to known effects of CF on endothelial function and blood flow.
On the other end of the spectrum is the nootropic stack, a practice where individuals create a cocktail or mixture of different smart drugs for daily intake. The mixture and its variety depend on the goals of the user. Many users have said that nootropic stacking is more effective for delivering improved cognitive function in comparison to single nootropics.
Regardless of your goal, there is a supplement that can help you along the way. Below, we've put together the definitive smart drugs list for peak mental performance. There are three major groups of smart pills and cognitive enhancers. We will cover each one in detail in our list of smart drugs. They are natural and herbal nootropics, prescription ADHD medications, and racetams and synthetic nootropics.
It's been widely reported that Silicon Valley entrepreneurs and college students turn to Adderall (without a prescription) to work late through the night. In fact, a 2012 study published in the Journal of American College Health showed that roughly two-thirds of undergraduate students were offered prescription stimulants for non-medical purposes by senior year.
One item always of interest to me is sleep; a stimulant is no good if it damages my sleep (unless that's what it is supposed to do, like modafinil) - anecdotes and research suggest that it does. Over the past few days, my Zeo sleep scores continued to look normal. But that was while not taking nicotine much later than 5 PM. In lieu of a different ml measurer to test my theory that my syringe is misleading me, I decide to more directly test nicotine's effect on sleep by taking 2ml at 10:30 PM and going to bed at 12:20; I get a decent ZQ of 94 and fall asleep in 16 minutes, a bit below my weekly average of 19 minutes. The next day, I take 1ml directly before going to sleep at 12:20; the ZQ is 95 and time to sleep is 14 minutes.
…Phenethylamine is intrinsically a stimulant, although it doesn't last long enough to express this property. In other words, it is rapidly and completely destroyed in the human body. It is only when a number of substituent groups are placed here or there on the molecule that this metabolic fate is avoided and pharmacological activity becomes apparent.
The compound is one of the best brain enhancement supplements, offering memory enhancement and protection against brain aging. Some studies suggest that the compound is an effective treatment for disorders like vascular dementia, Alzheimer's, brain stroke, anxiety, and depression. However, there are some side effects associated with Alpha GPC, like headache, heartburn, dizziness, skin rashes, insomnia, and confusion.
Some data suggest that cognitive enhancers do improve some types of learning and memory, but many other data say these substances have no effect. The strongest evidence for these substances is for the improvement of cognitive function in people with brain injury or disease (for example, Alzheimer's disease and traumatic brain injury). Although "popular" books and companies that sell smart drugs will try to convince you that these drugs work, the evidence for any significant effects of these substances in normal people is weak. There are also important side effects that must be considered. Many of these substances affect neurotransmitter systems in the central nervous system. The effects of these chemicals on neurological function and behavior are unknown. Moreover, the long-term safety of these substances has not been adequately tested.
Also, some substances will interact with other substances. A substance such as the herb ma-huang may be dangerous if a person stops taking it suddenly; it can also cause heart attacks, stroke, and sudden death. Finally, it is important to remember that labeling a product "natural" does not make it "safe."
There is no official data on their usage, but nootropics as well as other smart drugs appear popular in Silicon Valley. "I would say that most tech companies will have at least one person on something," says Noehr. It is a hotbed of interest because it is a mentally competitive environment, says Jesse Lawler, an LA-based software developer and nootropics enthusiast who produces the podcast Smart Drug Smarts. "They really see this as translating into dollars." But Silicon Valley types also do care about safely enhancing their most prized asset – their brains – which can give nootropics an added appeal, he says.
Iluminal is an example of an over-the-counter serotonergic drug used by people looking for performance enhancement, memory improvements, and mood-brightening. Also noteworthy, a wide class of prescription anti-depression drugs are based on serotonin reuptake inhibitors that slow the absorption of serotonin by the presynaptic cell, increasing the effect of the neurotransmitter on the receptor neuron – essentially facilitating the free flow of serotonin throughout the brain.
None of that has kept entrepreneurs and their customers from experimenting and buying into the business of magic pills, however. In 2015 alone, the nootropics business raked in over $1 billion, and web sites like the nootropics subreddit, the Bluelight forums, and Bulletproof Exec are popular and packed with people looking for easy ways to boost their mental performance. Still, this bizarre, Philip K. Dick-esque world of smart drugs is a tough pill to swallow. To dive into the topic and explain, I spoke to Kamal Patel, Director of the evidence-based medical database Examine.com, and even tried a few commercially-available nootropics myself.
January 12, 2022, by Eric Busboom.
An isochrone is the area one can travel to from a point in a constant time, which allows for more realistic analysis of retail catchment areas than a simple radius.
An isochrone (meaning "equal time") is an area that encloses the points one can travel to in a fixed time. For a bird, an isochrone would be a circle, but humans usually have to follow roads, so human isochrones have shapes that mirror the road network. Despite the name, isochrones are most often computed using a fixed distance. For instance, a 5km isochrone would be all the points that are within 5km of a point, measured along the road network rather than as a straight-line distance. So, instead of a 5km-radius circle, a 5km isochrone will have a non-circular shape.
Civic Knowledge is developing a Python program for raster-based spatial analysis. The system can create isochrone regions, rasterize them, and use them in algebraic equations with other rasters, allowing for very powerful spatial analysis. For a quick example, here is a 10km isochrone around Franklin Barbecue in Austin, Texas. The isochrone region is colored by the distance from the central point, in meters. The area is computed by tracing the road network, then finding a region (a concave hull) that encloses all of the nodes (the ends of road segments) that are a specific distance from the central point. The distances are quantized to 500 meters.
There are a lot of analyses that we can do with this shape. One that is particularly interesting to a business is to calculate the number of customers in the area around a store, weighted by how likely they are to visit, which is a function of the attractiveness of the location and the distance from each consumer. The most common model for this sort of analysis is the Huff model:
$$P_{ij}= \frac{A_j^\alpha D_{ij}^{- \beta}} {\sum_{j=1}^{n}A_j^{\alpha} D_{ij}^{- \beta}}$$
where:
$A_j$ is a measure of attractiveness of store $j$
$D_{ij}$ is the distance from the consumer's location, $i$, to store $j$
$\alpha$ is an attractiveness parameter
$\beta$ is a distance decay parameter
$n$ is the total number of stores, including store $j$
The term $A_j^\alpha D_{ij}^{- \beta}$ multiplies the "attractiveness" of the location by the distance-weighted probability of the consumer visiting the location. The attractiveness term, $A_j^\alpha$, can be any of a variety of measures, but retail square footage is a common one. The exponent $\alpha$ accounts for the non-linearity of attractiveness; a store that has twice the square footage is not always twice as attractive. The distance weighting term, because of its negative exponent, accounts for consumers who are farther away being less likely to visit. If $\beta$ is 1, then the term reduces to $1/r$ for distance $r$, and if $\beta$ is 2, the term is $1/r^2$. Because $1/r^2$ is the same law that gravity follows, this model is sometimes called Huff's Gravity Model.
While the Huff model is expressed as the probability that a consumer will visit a location, we can also use it to estimate the likely number of visitors to a location, by calculating the value for every person in the retail catchment area. For this analysis, we will use a raster map of population density. Here is the map of population, based on census tracts for 2019, for the area around our location.
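A minimal sketch of the weighted-population calculation developed below, using plain NumPy with hypothetical array names (the actual Civic Knowledge API may differ, and the random inputs here just stand in for real rasters):

```python
import numpy as np

# Hypothetical inputs: two aligned rasters on the same 100m grid.
# dist_km    -- isochrone raster: road-network distance to the store in km,
#               np.nan for cells outside the isochrone
# population -- estimated number of people living in each pixel
rng = np.random.default_rng(0)
dist_km = rng.uniform(0.1, 10.0, size=(200, 200))
dist_km[rng.random((200, 200)) > 0.7] = np.nan   # cells outside the isochrone
population = rng.uniform(0.0, 5.0, size=(200, 200))

beta = 1.0
weight = 1.0 / dist_km ** beta                   # the 1/r distance weighting

inside = ~np.isnan(dist_km)
total_pop = population[inside].sum()             # unweighted population
weighted_pop = np.nansum(weight * population)    # Huff-weighted customer estimate

print(f"total population in isochrone: {total_pop:,.0f}")
print(f"1/r-weighted population:       {weighted_pop:,.0f}")
```

With a single store and attractiveness fixed at 1, the Huff denominator drops out, which is why the raw $1/r$ weights are all that is needed here; a multi-store version would divide each store's weight raster by the sum of the weight rasters over all $n$ stores.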
Each pixel of the map is 100m square and the value at the pixel is the estimated number of people who live in that square, computed by dividing the total estimated population of a census tract by its area. By calculating the Huff model value for each pixel and multiplying by the number of people in that pixel, then summing over all pixels, we can get an estimated number of visitors to the location. This calculation will involve these steps (a code sketch follows at the end of this section): Compute the isochrone for the location, with pixel values giving the distance in meters from the central location. Set ${\beta}=1$ and compute $1/r$ for all pixels. Multiply the $1/r$ values by the population. Sum the values to get an estimate of the number of customers. For this example, we are only working with one location, and assuming the attractiveness is 1. For the full Huff model we would have to perform this calculation for every location, sum the results and use it as a denominator. The original isochrone values are in meters from the central point and we use ${\beta}=1$, so the weighting for population will be $1/r$. Then we can multiply the weighting by the population to get a weighted population. The total population of the whole isochrone area is 383,044 but the weighted population within the isochrone is 43,005. The difference is the result of the $1/r$ weighting, which counts population farther away with a value less than population that is closer. If we had used the straight-line distance instead of the isochrone (which would be a circle instead of the odd isochrone shape), the 1/r weighted population would be 192,230. Here is what the straight-line distances look like versus the isochrone; they are very different. We can also use the isochrones to count the number of specific sites, such as related businesses or competitors, within a given area. In this case, we will create a raster of the locations of cafes, which will have a value of 1 where there is a cafe and 0 elsewhere. We'll binarize the isochrone areas where the value is less than 5,000, which will produce a raster with 1 for the cells that are 5km or less away from the central point, 0 elsewhere. Multiplying these two rasters will produce a raster with a 1 only at the cafes that lie within 5km of the location. Summing the cells in the last raster gives us the number of cafes within 5km of the location, 39. If we'd used the straight-line distance (a circular area) the number would have been 57. Isochrone areas are a powerful addition to your spatial analysis techniques, allowing a more accurate assessment of catchment areas.
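Here is the promised sketch of the raster calculations above. The rasters are random stand-ins on a 100 m grid (the real ones come from the isochrone tracing and the census-tract population map), so the printed numbers will not match the article's figures.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in rasters on a shared 100 m grid:
r = rng.uniform(100, 10_000, size=(200, 200))  # network distance in meters
pop = rng.uniform(0, 5, size=(200, 200))       # people per pixel

beta = 1.0
w = r**(-beta)                   # 1/r weighting (Huff numerator with A_j = 1)
weighted_pop = (w * pop).sum()   # estimated visitors for this one location

# Counting cafes within 5 km, raster-style:
cafes = (rng.random((200, 200)) < 0.001).astype(int)  # 1 at a cafe, 0 elsewhere
within_5km = (r < 5_000).astype(int)                  # binarized isochrone
n_cafes = (cafes * within_5km).sum()
print(round(weighted_pop), n_cafes)
```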
Using isochrones can produce significantly different results than simpler fixed radii, although they can require more effort to use, particularly in a vector-based process. However, when used as part of a raster-based spatial analysis, there is little difference in effort. Categories: spatial_analysis. Tags: isochrone.
CommonCrawl
Basic Electrical Engineering

Potential Difference

What is potential difference? A charge repels other like charges (charges with the same sign). That means an electron repels other electrons. If at one point you have, say, 10 electrons, they will try to move as far away from each other as they can. This point with many electrons (that is, a point that electrons are strongly repelled from) is said to have high potential. A point of less repulsion is said to have lower potential. In this way, electrons will always try to move towards the lower potential. In other words: if there is a potential difference between two points, then electrons will try to move because they experience an electric force towards the lower potential. Voltage and the term "potential difference" are often used interchangeably. The potential difference might be better defined as the potential energy difference between two points in a circuit. The amount of difference (expressed in volts) determines how much potential energy exists to move electrons from one specific point to another. The quantity identifies how much work, potentially, can be done through the circuit. A household AA alkaline battery, for example, offers 1.5 V. Typical household electrical outlets offer 120 V. The greater the voltage in a circuit, the greater its ability to "push" more electrons and do work. Voltage/potential difference can be compared to water stored in a tank. The larger the tank, and the greater its height (and thus its potential energy), the greater the water's capacity to create an impact when a valve is opened and the water (like electrons) can flow. How can a potential difference be established? To create and sustain a potential difference you need something to move charges "the wrong way", that is, towards the point of higher potential. You just need a force larger than the repelling force. Every source of voltage is established by simply creating a separation of positive and negative charges.
For example, suppose a region of positive charge has been established by a package of positive ions, and a region of negative charge by a similar number of electrons, separated by a distance r. Fig. No. 2: Defining the voltage between two points. Since the voltage established by the separation of a single electron would be extremely small, a package of electrons called a coulomb (C) of charge was defined as follows: One coulomb of charge is the total charge associated with $6.242 \times 10^{18}$ electrons. Conversely, the negative charge associated with a single electron is $$ Q_e = {1 \over 6.242 \times 10^{18}}\,C = 0.1602 \times 10^{-18}\,C$$ $$ Q_e = 1.602 \times 10^{-19}\,C$$ If we take a coulomb of negative charge near the surface of the positive charge and move it toward the negative charge, we must expend energy to overcome the repulsive forces of the larger negative charge and the attractive forces of the positive charge. As shown in Fig. 2(b), if a total of 1 joule (J) of energy is used to move the negative charge of 1 coulomb (C), there is a difference of 1 volt (V) between the two points. The defining equation is $$V = {W \over Q} $$ Take particular note that the charge is measured in coulombs, the energy in joules, and the voltage in volts. The unit of measurement, volt, was chosen to honor the efforts of Alessandro Volta, who first demonstrated that a voltage could be established through chemical action. What is the electron volt? It is the level of energy required to move an electron through a potential difference of 1 volt. Applying the basic energy equation, $$W = QV = (1.602 \times 10^{-19}\,C)(1\,V)$$ $$ = 1.602 \times 10^{-19}\,J $$ $$ 1\,eV = 1.602 \times 10^{-19}\,J $$ Voltage is the pressure from an electrical circuit's power source that pushes charged electrons (current) through a conducting loop, enabling them to do work such as illuminating a light. In brief, voltage = pressure, and it is measured in volts (V). The term recognizes Italian physicist Alessandro Volta (1745-1827), inventor of the voltaic pile, the forerunner of today's household battery. In electricity's early days, voltage was known as electromotive force (emf). This is why in equations such as Ohm's Law, voltage is represented by the symbol E. Voltage is either alternating current (ac) voltage or direct current (dc) voltage. Example of voltage in a simple direct current (dc) circuit: Fig. No. 2: Italian physicist Alessandro Volta (1745-1827). Fig. No. 3: Voltage as a pressure forcing electrons through a conductor.
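As a quick worked example of the defining equation, with assumed numbers that are not taken from the figures above: if $W = 50$ J of energy is expended to move $Q = 10$ C of charge between two points, the potential difference is $$V = {W \over Q} = {50\,J \over 10\,C} = 5\,V$$ Moving a single electron through this 5 V difference would then take $W = QV = (1.602 \times 10^{-19}\,C)(5\,V) = 8.01 \times 10^{-19}\,J$, which is exactly 5 eV.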
CommonCrawl
The potential application of plant wax markers from alfalfa for estimating the total feed intake of sheep H. Zhang, Y. P. Guo, W. Q. Chen, N. Liu, S. L. Shi, Y. J. Zhang, L. Ma, J. Q. Zhou Journal: animal, First View Published online by Cambridge University Press: 20 June 2019, pp. 1-10 Estimating the feed intake of grazing herbivores is critical for determining their nutrition, overall productivity and utilization of grassland resources. A 17-day indoor feeding experiment was conducted to evaluate the potential use of Medicago sativa as a natural supplement for estimating the total feed intake of sheep. A total of 16 sheep were randomly assigned to four diets (four sheep per diet) containing a known amount of M. sativa together with up to seven forages common to typical steppes. The diets were: diet 1, M. sativa + Leymus chinensis + Puccinellia distans; diet 2, species in diet 1 + Phragmites australis; diet 3, species in diet 2 + Chenopodium album + Elymus sibiricus; and diet 4, species in diet 3 + Artemisia scoparia + Artemisia tanacetifolia. After faecal marker concentrations were corrected by individual sheep recovery, treatment mean recovery or overall recovery, the proportions of M. sativa and other dietary forages were estimated from a combination of alkanes and long-chain alcohols using a least-square procedure. Total intake was the ratio of the known intake of M. sativa to its estimated dietary proportion. Each dietary component intake was obtained using total intake and the corresponding dietary proportions. The estimated values were compared with actual values to assess the estimation accuracy. The results showed that M. sativa exhibited a distinguishable marker pattern in comparison to the other dietary forage species. The accuracy of the dietary composition estimates was significantly (P < 0.001) affected by both diet diversity and the faecal recovery method. The proportion of M. sativa and total intake across all diets could be accurately estimated using the individual sheep or the treatment mean recovery methods. The largest differences between the estimated and observed total intake were 2.6 g and 19.2 g, respectively, representing only 0.4% and 2.6% of the total intake. However, they were significantly (P < 0.05) biased for most diets when using the overall recovery method. Due to the difficulty in obtaining individual sheep recovery under field conditions, treatment mean recovery is recommended. This study suggests that M. sativa, a natural roughage instead of a labelled concentrate, can be utilized as a dietary supplement to accurately estimate the total feed intake of sheep indoors and further indicates that it has potential to be used in steppe grassland of northern China, where the marker patterns of M. sativa differ markedly from commonly occurring plant species. Protein restriction and succedent realimentation affecting ileal morphology, ileal microbial composition and metabolites in weaned piglets Q. Shi, Y. Zhu, J. Wang, H. Yang, J. Wang, W. Zhu Published online by Cambridge University Press: 14 May 2019, pp.
1-10 Dietary protein restriction is one of the effective ways to reduce post-weaning diarrhoea and intestinal fermentation in piglets, but it may also reduce growth performance. The compensatory growth induced by subsequent protein realimentation may solve the issue. However, little research has been done on the impact of protein realimentation on the gut. In this study, the effects of protein restriction and realimentation on ileal morphology, ileal microbial composition and metabolites in weaned piglets were investigated. Thirty-six 28-day-old weaned piglets with an average body weight of 6.47 ± 0.04 kg were randomly divided into a control group and a treatment group. The CP level in the diet of the control group was 18.83% for the entire experimental period. The piglets in the treatment group were fed 13.05% CP between days 0 and 14 and restored to a diet of 18.83% CP for days 14 to 28. On days 14 and 28, six pigs from each group were sacrificed and sampled. It was found that the abundance of Lactobacillus and Salmonella in the ileal digesta was significantly lower in the treatment group than the control group on day 14, whereas the abundance of Clostridium sensu stricto 1, Streptococcus, Halomonas and Pseudomonas significantly increased in the ileal digesta of the treatment group on day 14 compared with the control group. In addition, reduced concentrations of lactic acid, total short-chain fatty acids (total SCFAs), total branched-chain fatty acids and ammonia, as well as impaired ileal morphology and mucosal barrier function, were observed in the treatment group on day 14. However, diarrhoea levels decreased in the treatment group throughout the experiment. During the subsequent protein realimentation stage, the treatment group demonstrated compensatory growth. Compared with the control group, the treatment group showed increased abundance of Lactobacillus and reduced abundance of Salmonella, Halomonas and Pseudomonas in the ileum on day 28. The concentrations of lactic acid and total SCFAs increased significantly, whereas the concentration of ammonia remained at a lower level in the treatment group on day 28 compared with the control group. Overall, protein realimentation could improve ileal morphology and barrier functions and promote ileal digestive and absorptive functions. In conclusion, ileal microbial composition and metabolites could change according to dietary protein restriction and realimentation and eventually influence ileal morphology and barrier functions. The mitochondrial genome of Dipetalonema gracile from a squirrel monkey in China P. Zhang, R.K. Ran, A.Y. Abdullahi, X.L. Shi, Y. Huang, Y.X. Sun, Y.Q. Liu, X.X. Yan, J.X. Hang, Y.Q. Fu, M.W. Wang, W. Chen, G.Q. Li Journal: Journal of Helminthology, First View Published online by Cambridge University Press: 17 October 2018, pp. 1-8 Dipetalonema gracile is a common parasite in squirrel monkeys (Saimiri sciureus), which can cause malnutrition and progressive wasting of the host, and lead to death in the case of massive infection. This study aimed to identify a suspected D. gracile worm from a dead squirrel monkey by means of molecular biology, and to amplify its complete mitochondrial genome by polymerase chain reaction (PCR) and sequence analysis. The results identified the worm as D. gracile, and the full length of its complete mitochondrial genome was 13,584 bp, which contained 22 tRNA genes, 12 protein-coding genes, two rRNA genes, one AT-rich region and one small non-coding region.
The nucleotide composition included A (16.89%), G (20.19%), T (56.22%) and C (6.70%), among which A + T = 73.11%. The 12 protein-coding genes used TTG and ATT as start codons, and TAG and TAA as stop codons. Among the 22 tRNA genes, only trnS1AGN and trnS2UCN exhibited the TΨC-loop structure, while the other 20 tRNAs showed the TV-loop structure. The rrnL (986 bp) and rrnS (685 bp) genes were single-stranded and conserved in secondary structure. This study has enriched the mitochondrial gene database of Dipetalonema and laid a scientific basis for further study on the classification and the genetic and evolutionary relationships of Dipetalonema nematodes. Layer pullet preferences for light colors of light-emitting diodes G. Li, B. Li, Y. Zhao, Z. Shi, Y. Liu, W. Zheng Journal: animal / Volume 13 / Issue 6 / June 2019 Light colors may affect poultry behaviors, well-being and performance. However, the preferences of layer pullets for light colors are not fully understood. This study was conducted to investigate pullet preferences for four light-emitting diode colors, including white, red, green and blue, in a lighting preference test system. The system contained four identical compartments, each provided with a respective light color. The pullets were able to move freely between the adjacent compartments. A total of three groups of 20 Chinese domestic Jingfen layer pullets (54 to 82 days of age) were used for the test. Pullet behaviors were continuously recorded and summarized for each light color/compartment into daily time spent (DTS), daily percentage of time spent (DPTS), daily times of visit (DTV), duration per visit, daily feed intake (DFI), daily feeding time (DFT), feeding rate (FR), distribution of pullet occupancy and hourly time spent. The results showed that the DTS (h/pullet·per day) were 3.9±0.4 under white, 1.4±0.3 under red, 2.2±0.3 under green and 4.5±0.4 under blue light, respectively. The DTS corresponded to 11.7% to 37.6% DPTS in 12-h lighting periods. The DTV (times/pullet·per day) were 84±5 under white, 48±10 under red, 88±10 under green and 94±8 under blue light. Each visit lasted 1.5 to 3.2 min. The DFI (g/pullet·per day) were 27.6±1.7 under white, 7.1±1.6 under red, 15.1±1.1 under green and 23.1±2.0 under blue light. The DFT was 0.18 to 0.65 h/pullet·per day and the FR was 0.57 to 0.75 g/min. For most of the time during the lighting periods, six to 10 birds stayed under white, and one to five birds stayed under red, green and blue light. Pullets preferred to stay under blue light when the light was on and under white light in the 4 h before the light was turned off. Overall, pullets preferred blue light the most and red light the least. These findings substantiate the preferences of layer pullets for light colors, providing insights for use in the management of light-emitting diode colors to meet pullet needs. Maternal infection during pregnancy and type 1 diabetes mellitus in offspring: a systematic review and meta-analysis Y. Yue, Y. Tang, J. Tang, J. Shi, T. Zhu, J. Huang, X. Qiu, Y. Zeng, W. Li, Y. Qu, D. Mu Journal: Epidemiology & Infection / Volume 146 / Issue 16 / December 2018 Previous studies have demonstrated that type 1 diabetes mellitus (T1DM) could be triggered by an early childhood infection. Whether maternal infection during pregnancy is associated with T1DM in offspring is unknown. Therefore, we aimed to study the association using a systematic review and meta-analysis. Eighteen studies including 4304 cases and 25 846 participants were enrolled in this meta-analysis.
Odds ratios (ORs) and 95% confidence intervals (CIs) were synthesised using random-effects models. Subgroup analyses and sensitivity analyses were conducted to assess the robustness of associations. Overall, the pooled analysis yielded a statistically significant association between maternal infection during pregnancy and childhood T1DM (OR 1.31, 95% CI 1.07–1.62). Furthermore, six studies that tested maternal enterovirus infection showed a pooled OR of 1.54 (95% CI 1.05–2.27). Heterogeneity between studies was evident (I2 = 70.1%, P < 0.001) and was mainly attributable to the different study designs, ascertainment methods and sample sizes of the studies. This study provides evidence for an association between maternal infection during pregnancy and childhood T1DM. Integrative Structure and Functional Anatomy of a Nuclear Pore Complex I. Nudelman, S.J. Kim, J. Fernandez-Martinez, Y. Shi, W. Zhang, B. Raveh, T. Herricks, B.D. Slaughter, J. Hogan, P. Upla, I.E. Chemmama, R. Pellarin, I. Echeverria, M. Shivaraju, A.S. Chaudhury, J. Wang, R. Williams, J.R. Unruh, C.H. Greenberg, E.Y. Jacobs, Z. Yu, M.J. de la Cruz, R. Mironska, D.L. Stokes, J.D. Aitchison, M.F. Jarrold, J.L. Gerton, S.J. Ludtke, C.W. Akey, B.T. Chait, A. Sali, M.P. Rout Using two-sex life tables to determine fitness parameters of four Bactrocera species (Diptera: Tephritidae) reared on a semi-artificial diet – CORRIGENDUM W. Jaleel, J. Yin, D. Wang, Y. He, L. Lu, H. Shi Journal: Bulletin of Entomological Research / Volume 108 / Issue 6 / December 2018 Published online by Cambridge University Press: 31 January 2018, p. 715 Using two-sex life tables to determine fitness parameters of four Bactrocera species (Diptera: Tephritidae) reared on a semi-artificial diet Fruit flies in the genus Bactrocera are global, economically important pests of agricultural food crops. However, basic life history information about these pests, which is vital for designing more effective control methods, is currently lacking. Artificial diets can be used as a suitable replacement for natural host plants for rearing fruit flies under laboratory conditions, and this study reports on the two-sex life-table parameters of four Bactrocera species (Bactrocera correcta, Bactrocera dorsalis, Bactrocera cucurbitae, and Bactrocera tau) reared on a semi-artificial diet comprising corn flour, banana, sodium benzoate, yeast, sucrose, winding paper, hydrochloric acid and water. The results indicated that the larval development period of B. correcta (6.81 ± 0.65 days) was significantly longer than those of the other species. The fecundity of B. dorsalis (593.60 eggs female−1) was the highest among the four species. There were no differences in the intrinsic rate of increase (r) and the finite rate of increase (λ) among the four species. The gross reproductive rate (GRR) and net reproductive rate (R0) of B. dorsalis were higher than those of the other species, and the mean generation time (T) of B. cucurbitae (42.08 ± 1.21 h) was longer than that of the other species. We conclude that the semi-artificial diet was most suitable for rearing B. dorsalis, due to its shorter development time and higher fecundity. These results will be useful for future studies of fruit fly management.
Effectiveness of patching traumatic eardrum perforations with lens cleaning paper via an otoscope K Yan, M Lv, E Xu, F Fan, Y Lei, W Liu, X Yu, N Li, L Shi Journal: The Journal of Laryngology & Otology / Volume 131 / Issue 9 / September 2017 To study the clinical effect of lens cleaning paper patching on traumatic eardrum perforations. A total of 122 patients were divided into 2 groups, of which 56 patients were treated with lens cleaning paper patching and 66 acted as controls. The closure rate and healing time were compared between the two groups. The healing rate of small perforations was 96.4 per cent (27 out of 28) in the patching group and 90 per cent (27 out of 30) in the control group. The difference was not statistically significant (p > 0.05). The healing rate of large perforations was 89.3 per cent (25 out of 28) and 80.6 per cent (29 out of 36) in the two groups, respectively. The difference was statistically significant (p < 0.05). The healing time of large perforations was shorter in the patching group than in the control group (p < 0.01). Patching with lens cleaning paper under an endoscope can accelerate the closure of large traumatic eardrum perforations. Numerical Investigation of Flow Around a Multi-Element Airfoil with Hybrid RANS-LES Approaches Based on SST Model L. Zhang, J. Li, Y. F. Mou, H. Zhang, W. B. Shi, J. Jin Journal: Journal of Mechanics / Volume 34 / Issue 2 / April 2018 Accurate prediction of the flow around a multi-element airfoil is a prerequisite for improving aerodynamic performance, but its complex flow features impose high demands on turbulence modeling. In this work, delayed detached-eddy simulation (DDES) and zonal detached-eddy simulation (ZDES) were applied to simulate the flow past a three-element airfoil. To investigate the effects of the numerical dissipation of spatial schemes, the third-order MUSCL and a fifth-order interpolation based on a modified Roe scheme were implemented. From the comparisons between the calculations and the available experimental result, third-order MUSCL-Roe can provide satisfactory mean velocity profiles, but the excessive dissipation suppresses the velocity fluctuation level and eliminates the small-scale structures; DDES cannot reproduce the separation near the trailing edge of the flap, which leads to the discrepancy in mean pressure coefficients, while the ZDES result tallies better with the experiment. Using portable RapidSCAN active canopy sensor for rice nitrogen status diagnosis J. Lu, Y. Miao, W. Shi, J. Li, J. Wan, X. Gao, J. Zhang, H. Zha Journal: Advances in Animal Biosciences / Volume 8 / Issue 2 / July 2017 The objective of this study was to determine how much improvement red edge-based vegetation indices (VIs) obtained with the RapidSCAN sensor would achieve for estimating rice nitrogen (N) nutrition index (NNI) at the stem elongation stage (SE) as compared with the commonly used normalized difference vegetation index (NDVI) and ratio vegetation index (RVI) in Northeast China. Sixteen plot experiments and seven on-farm experiments were conducted from 2014 to 2016 in the Sanjiang Plain, Northeast China. The results indicated that the performance of red edge-based VIs for estimation of rice NNI was better than that of NDVI and RVI. N sufficiency indices calculated with RapidSCAN VIs (NSI_VIs) (R2=0.43–0.59) were more stable and more strongly related to NNI than the corresponding VIs (R2=0.12–0.38).
Bacterial lysate for the prevention of chronic rhinosinusitis recurrence in children J Chen, Y Zhou, J Nie, Y Wang, L Zhang, Q Shi, H Tan, W Kong Journal: The Journal of Laryngology & Otology / Volume 131 / Issue 6 / June 2017 Chronic rhinosinusitis is a common nasal disorder in children that is prone to recurrence. This study investigated the prevention of chronic rhinosinusitis recurrence in children with bacterial lysate. Bacterial lysate was administered 10 days per month for 3 months to children with chronic rhinosinusitis who had just entered a remission phase. Visual analogue score, nasal symptom scores, rhinitis attack frequency and antibiotic use were assessed at three months and one year. At one year of follow up, the visual analogue score, nasal discharge and obstruction scores, number of days with rhinitis attacks per month and number of days with antibiotic use per month were significantly decreased in the prevention group versus the control group (p < 0.05). Bacterial lysate used in the remission period of rhinosinusitis in children was shown to provide long-term prophylaxis. Bacterial lysate can effectively reduce the frequency of rhinosinusitis attacks and ameliorate attack symptoms. Seroprevalence and associated risk factors of Toxocara infection in Korean, Manchu, Mongol, and Han ethnic groups in northern China G.-L. YANG, X.-X. ZHANG, C.-W. SHI, W.-T. YANG, Y.-L. JIANG, Z.-T. WEI, C.-F. WANG, Q. ZHAO Journal: Epidemiology & Infection / Volume 144 / Issue 14 / October 2016 Toxocariasis is a very prevalent zoonotic disease worldwide. Recently, investigators have focused more on Toxocara spp. seroprevalence in humans. Information regarding Toxocara seroprevalence in people from different ethnic backgrounds in China is limited. For this study, blood samples were collected from a total of 802 Han, 520 Korean, 303 Manchu, and 217 Mongol subjects from Jilin and Shandong provinces. The overall Toxocara seroprevalence was 16·07% (14·21% Han, 20·58% Korean, 11·22% Manchu, 18·89% Mongol). Living in suburban or rural areas, having dogs at home, exposure to soil, and consumption of raw/undercooked meat were risk factors for Toxocara infection. Exposure to soil was identified as the major risk factor for Toxocara seropositivity in all of the tested ethnicities. To the best of our knowledge, this is the first report concerning Toxocara infection in Manchus and Mongols in China. The present study provided baseline data for effective prevention strategies of toxocariasis in northeast China and recommends improvements in personal hygiene standards to achieve this goal. Clinical and immunological analysis of measles patients admitted to a Beijing hospital in 2014 during an outbreak in China B. TU, J.-J. ZHAO, Y. HU, J.-L. FU, H-H. HUANG, Y.-X. XIE, X. ZHANG, L. SHI, P. ZHAO, X.-W. ZHANG, D. WU, Z. XU, Z.-P. ZHOU, E.-Q. QIN, F.-S. WANG Journal: Epidemiology & Infection / Volume 144 / Issue 12 / September 2016 Published online by Cambridge University Press: 02 June 2016, pp. 2613-2620 At the end of 2013, China reported a countrywide outbreak of measles. From January to May 2014, we investigated the clinical and immunological features of the cases of the outbreak admitted to our hospital. In this study, all 112 inpatients with clinically diagnosed measles were recruited from the 302 Military Hospital of China. The virus was isolated from throat swabs from these patients, and cytokine profiles were examined.
By genotyping the measles virus from 30 of the 112 patients, we found that this measles outbreak was of the H1 genotype, which is the major strain in China. The rates of complications, specifically pneumonia and liver injury, differed significantly in patients aged <8 months, 8 months to 18 years, and >18 years: pneumonia was more common in children, while liver injury was more common in adults. Pneumonia was a significant independent risk factor affecting measles duration. Compared to healthy subjects, measles patients had fewer CD4+IL-17+, CD4+IFN-γ+, and CD8+IFN-γ+ cells in both the acute and recovery phases. In contrast, measles patients in the acute phase had more CD8+IL-22+ cells than those in recovery or healthy subjects. We recommend that future studies focus on the age-related distribution of pneumonia and liver injury as measles-related complications as well as the association between immunological markers and measles prognosis. Combined use of real-time PCR and nested sequence-based typing in survey of human Legionella infection – ERRATUM T. QIN, H. ZHOU, H. REN, W. SHI, H. JIN, X. JIANG, Y. XU, M. ZHOU, J. LI, J. WANG, Z. SHAO, X. XU Published online by Cambridge University Press: 23 May 2016, p. 2688 Seroprevalence and associated risk factors of Toxoplasma gondii infection in the Korean, Manchu, Mongol and Han ethnic groups in eastern and northeastern China X.-X. ZHANG, Q. ZHAO, C.-W. SHI, W.-T. YANG, Y.-L. JIANG, Z.-T. WEI, C.-F. WANG, G.-L. YANG Journal: Epidemiology & Infection / Volume 144 / Issue 9 / July 2016 A cross-sectional study was conducted from June 2013 to August 2015 to determine the seroprevalence and possible risk factors for human Toxoplasma gondii infection in the Korean, Manchu, Mongol and Han ethnic groups in eastern and northeastern China. A total of 1842 serum samples, including Han (n = 802), Korean (n = 520), Manchu (n = 303) and Mongol (n = 217) groups, were analysed using enzyme-linked immunoassays to detect IgG and IgM T. gondii antibodies. The overall T. gondii IgG and IgM seroprevalences were 13·79% and 1·25%, respectively. Of these groups, Mongol ethnicity had the highest T. gondii seroprevalence (20·74%, 45/217), followed by Korean ethnicity (16·54%, 86/520), Manchu ethnicity (13·86%, 42/303) and Han ethnicity (11·35%, 98/802). Multiple analysis showed that the consumption of raw vegetables and fruits, the consumption of raw/undercooked meat and the source of drinking water were significantly associated with T. gondii infection in the Han group. Likewise, having a cat at home was identified as being associated with T. gondii infection in the Korean, Manchu and Mongol groups. Moreover, the consumption of raw/undercooked meat was identified as another predictor of T. gondii seropositivity in the Mongol group. The results of this survey indicate that T. gondii infection is prevalent in the Korean, Manchu, Mongol and Han ethnic groups in the study region. Therefore, it is essential to implement integrated strategies with efficient management measures to prevent and control T. gondii infection in this region of China. Moreover, this is the first report of T. gondii infection in the Korean, Manchu, and Mongol ethnic groups in eastern and northeastern China. Combined use of real-time PCR and nested sequence-based typing in survey of human Legionella infection Legionnaires' disease (LD) is a globally distributed systemic infectious disease. The burden of LD in many regions is still unclear, especially in Asian countries including China.
A survey of Legionella infection using real-time PCR and nested sequence-based typing (SBT) was performed in two hospitals in Shanghai, China. A total of 265 bronchoalveolar lavage fluid (BALF) specimens were collected from hospital A between January 2012 and December 2013, and 359 sputum specimens were collected from hospital B throughout 2012. A total of 71 specimens were positive for Legionella according to real-time PCR focusing on the 5S rRNA gene. Seventy of these specimens were identified as Legionella pneumophila as a result of real-time PCR amplification of the dotA gene. Results of nested SBT revealed high genetic polymorphism in these L. pneumophila strains, and ST1 was the predominant sequence type. These data revealed that the burden of LD in China is much greater than previously recognized, and real-time PCR may be a suitable monitoring technology for LD in large sample surveys in regions lacking the economic and technical resources to perform other methods, such as urinary antigen tests and culture methods. Multilocus variable-number tandem-repeat analysis of Neisseria meningitidis serogroup C in China X. Y. SHAN, H. J. ZHOU, J. ZHANG, B. Q. ZHU, L. XU, Z. XU, G. C. HU, A. Y. BAI, Y. W. SHI, B. F. JIANG, Z. J. SHAO This study characterized Neisseria meningitidis serogroup C strains in China in order to establish their genetic relatedness and describe the use of multilocus variable-number tandem-repeat (VNTR) analysis (MLVA) to provide useful epidemiological information. A total of 215 N. meningitidis serogroup C strains, obtained from 2003 to 2012 in China, were characterized by MLVA with different published schemes as well as multilocus sequence typing. (i) Based on the MLVA scheme with a combination of five highly variable loci, 203 genotypes were identified; this level of discrimination supports its use for resolving closely related isolates. (ii) Based on a combination of ten loci of low variability, clear phylogenetic relationships were established within sequence type complexes. In addition, there was evidence of microevolution of VNTR loci over the decade as strain lineages spread from Anhui to other provinces; the more distant the provinces from Anhui, the higher the genetic variation. Rapid and Sensitive Detection of Nano-fluidically Trapped Protein Biomarkers Nandhinee Radha Shanmugam, Anjan Panneer Selvam, Thomas W. Barrett, Steve Kazmierczak, Shalini Prasad Published online by Cambridge University Press: 21 November 2014, mrss14-1686-v09-13 In this work, we demonstrate the label-free and ultrasensitive detection of troponin-T, a cardiac biomarker, using a nanoporous membrane integrated on a microelectrode sensor platform. The nanoporous membrane allows for spatial confinement of the protein molecules. Antigen interaction with the thiol-immobilized antibody perturbs the electrical double layer. Charge perturbations are recorded as an impedance change at low frequency using the principle of electrochemical impedance spectroscopy (EIS). The measured impedance change is used to quantitatively determine the concentration of troponin-T in the tested sample. We have shown the sensitivity of the sensor for troponin-T to be 1 pg/mL. The accuracy and reliability of this sensor were tested by comparing the experimental troponin-T concentration values with a commercially available electrochemiluminescence assay measured with a Roche Elecsys analyzer. Using this technique we were successful in detecting protein biomarkers in whole blood, human serum, and ionic buffers.
This technology provides a robust analytical platform for rapid and sensitive detection of protein biomarkers, thus establishing it as an ideal candidate for biomarker screening in clinical settings. Longitudinal–transverse aerodynamic force in viscous compressible complex flow L. Q. Liu, Y. P. Shi, J. Y. Zhu, W. D. Su, S. F. Zou, J. Z. Wu Journal: Journal of Fluid Mechanics / Volume 756 / 10 October 2014 We report our systematic development of a general and exact theory for diagnosis of the total force and moment exerted on a generic body moving and deforming in a calorically perfect gas. The total force and moment consist of a longitudinal part (L-force) due to compressibility and irreversible thermodynamics, and a transverse part (T-force) due to shearing. The latter exists in incompressible flow but is now modulated by the former. The theory represents a full extension of a unified incompressible diagnosis theory of the same type developed by J. Z. Wu and coworkers to compressible flow, with Mach number ranging from low-subsonic to moderate-supersonic flows. Combined with computational fluid dynamics (CFD) simulation, the theory permits quantitative identification of various complex flow structures and processes responsible for the forces, and thereby enables rational optimal configuration design and flow control. The theory is confirmed by a numerical simulation of circular-cylinder flow in the range of free-stream Mach number $M_{\infty}$ between 0.2 and 2.0. The L-drag and T-drag of the cylinder vary with $M_{\infty}$ in different ways, the underlying physical mechanisms of which are analysed. Moreover, each L-force and T-force integrand contains a universal factor of the local Mach number $M$. Our preliminary tests suggest that the prospect of finding new similarity rules for each force constituent is quite promising.
CommonCrawl
On the work of Rodriguez Hertz on rigidity in dynamics Ralf Spatzier, Department of Mathematics, 2074 East Hall, 530 Church Street, University of Michigan, Ann Arbor, MI 48109-1043. Journal of Modern Dynamics, 2016, 10: 191-207. doi: 10.3934/jmd.2016.10.191. Received March 2016; published June 2016. This paper is a survey about recent progress in measure rigidity and global rigidity of Anosov actions, and celebrates the profound contributions by Federico Rodriguez Hertz to rigidity in dynamical systems. Keywords: Rodriguez Hertz, Brin prize. Mathematics Subject Classification: Primary: 37C15, 37C85, 37D20, 53C24; Secondary: 42B0. Citation: Ralf Spatzier. On the work of Rodriguez Hertz on rigidity in dynamics. Journal of Modern Dynamics, 2016, 10: 191-207. doi: 10.3934/jmd.2016.10.191
CommonCrawl
Doubts about Chern-Simons state as a solution of the Hamiltonian constraint in quantum gravity

I've lately been doing some work with both Baez's *Gauge Fields, Knots and Gravity* (1) and Gambini and Pullin's *Loops, Knots, Gauge Theories and Quantum Gravity* (2). I have basically two problems: I understand that, in the ADM formalism, the Lagrangian density for the cosmological term of the Einstein equation is given by $$ L= q\Lambda \underline{N},$$ where $q$ is the determinant of the 3-metric, $\Lambda$ is the cosmological constant, and $\underline{N} = q^{-1/2}N$, with $N$ the lapse function. Also, that $$\tilde{E}^i_a=q^{\frac{1}{2}}E^i_a$$ are the densitized triads of the Ashtekar formalism. However, I don't get why $q$ can be given by expression (7.53) from (2): $$q=\frac{1}{6}\underline{\epsilon_{abc}}\epsilon^{ijk}\tilde{E}^a_i\tilde{E}^b_j\tilde{E}^c_k. $$ Is there a way to obtain such an expression? The second problem is: after promoting the Ashtekar variables to operators ($\hat{A}^a_i$ and $\hat{E}^a_i=\frac{\delta}{\delta A^a_i}$), it can be shown, for the Chern-Simons state $$\psi_{\Lambda}= e^{-\frac{6}{\Lambda}S_{CS}},$$ with $S_{CS}$ being the Chern-Simons action $$S_{CS}= \int_{\Sigma} tr \,(A\wedge dA +\frac{2}{3}A\wedge A\wedge A),$$ that \begin{eqnarray*} \frac{\delta}{\delta A^i_a}\psi_\Lambda = \frac{3}{\Lambda}\overline{\epsilon^{abc}}F^i_{bc}\psi_\Lambda \\ \underline{\epsilon_{abc}}\frac{\delta}{\delta A^i_a}\psi_\Lambda = \frac{6}{\Lambda}F^i_{bc}\psi_\Lambda, \end{eqnarray*} which comes from expressions (7.70) and (7.71) from (2). My problem is with the second line. Am I supposed to take $$ \underline{\epsilon_{abc}}\overline{\epsilon^{abc}} = 2~?$$ Why would that be true? Sorry for the lengthy post. I'd be glad if someone could help me with these.

chern-simons-theory quantum-gravity specific-reference hamiltonian-formalism asked Apr 27, 2016 in Theoretical Physics by Theoretician (10 points)
CommonCrawl
Nano-antenna enhanced two-focus fluorescence correlation spectroscopy Lutz Langguth1, Agata Szuba2, Sander A. Mann1, Erik C. Garnett1, Gijsje H. Koenderink2 & A. Femius Koenderink1 (ORCID: orcid.org/0000-0003-1617-5748) Nanophotonics and plasmonics We propose two-focus fluorescence correlation spectroscopy (2fFCS) on the basis of plasmonic nanoantennas that provide distinct hot spots that are individually addressable through polarization, yet lie within a single diffraction-limited microscope focus. The importance of two-focus FCS is that a calibrated distance between foci provides an intrinsic calibration to derive diffusion constants from measured correlation times. Through electromagnetic modelling we analyze a geometry of perpendicular nanorods, and their inverse, i.e., nanoslits. While we find that nanorods are not suited for nano-antenna enhanced 2fFCS due to substantial background signal, a nanoslit geometry is expected to provide a distinct cross-correlation between orthogonally polarized detection channels. Furthermore, by utilizing a periodic array of nanoslits instead of a single pair, the amplitude of the cross-correlation can be enhanced. To demonstrate this technique, we present a proof-of-principle experiment on the basis of a periodic array of nanoslits, applied to lipid diffusion in a supported lipid bilayer. Fluorescence correlation spectroscopy is a common technique to deduce the concentration and mobility of fluorescent particles. It is based on measurements of fluorescence intensity fluctuations, which occur as particles perform a random walk through a single, tight microscope focus1,2,3. These intensity fluctuations are correlated on a time scale comparable to the time required to diffuse through the focus, and are especially prominent for concentrations lower than one fluorescent particle per focal volume. Apertures in metallic films4, 5, bull's eye antennas6, 7, and nanoparticles with plasmonic resonances8,9,10,11 have been demonstrated as a means to reduce focus size, thereby significantly extending the concentration range of FCS, even to biophysically relevant micromolar concentrations12. In addition to possible orders-of-magnitude reductions in detection volume, plasmonic nanostructures can also dramatically improve fluorescence count rates by enhancing radiative emission and redirecting light7, 13. Since count rates enter quadratically into the reduction of FCS acquisition time, signal enhancements are highly useful. A main drawback of FCS is that conversion of a measured correlation time into a diffusion constant requires accurate knowledge of the focus size and shape14, 15. Aberrations or imperfect alignment of confocal pinholes can significantly change the measured properties14. In standard FCS protocols it is therefore necessary to perform calibration measurements on samples of known kinetic properties16. In the case of nano-antenna enhanced FCS the need for calibration is even stronger, as the detection volume depends on the optical properties of the antenna at the pump and fluorescence wavelengths, and even on the fluorophore quantum efficiency13 and the rotational diffusion time of the diffusing species. This makes it challenging to perform proper calibration, because a reference specimen is required with exactly the same photophysical and similar hydrodynamic properties. In this paper we propose dual-focus nano-antenna FCS.
In conventional dual focus FCS (2fFCS), the intensity fluctuations originating from two spatially well separated diffraction-limited foci are cross-correlated17, 18. Two-focus FCS is a robust method to measure absolute diffusion coefficients, as the distance between the two detection volumes can be precisely set in an experiment. This distance serves as a length calibration, independent of aberrations17, 18. Here we explore if we can mitigate or eliminate the problematic calibration of nano-antenna enhanced FCS by utilizing multiple foci, as in 2fFCS, while maintaining the benefits provided by nano-antennas19. This paper is structured as follows: first, we design geometries for 2fFCS based on polarization multiplexing. We then show that while nano-particle antennas are not suited, nano-apertures of alternating orientation should give a distinct two-focus signal. Finally we present an experimental proof of principle in the context of lipid diffusivity in model biomembranes.
2fFCS requirements
In FCS, one measures the normalized time-correlation of (fluctuating) detected intensities as given by:
$${G}_{i,j}(\tau )=\frac{\langle {I}_{i}(t){I}_{j}(t+\tau )\rangle }{\langle {I}_{i}(t)\rangle \langle {I}_{j}(t)\rangle },\qquad(1)$$
where 〈·〉 indicates averaging over time t, while τ indicates the particular time-delay value at which one evaluates the temporal correlation. We have introduced subscripts i and j as labels for detected intensities on possibly distinct detectors. In standard FCS one uses a single detection channel (i = j = 1), whereas in 2fFCS one can measure the autocorrelation (i = j) or cross-correlations (i ≠ j) between two detectors2, 14, 17. It can be shown that the correlations should be equal to:
$${G}_{i,j}(\tau )-1=\frac{\int d{\bf{r}}\int d{\bf{r}}^{\prime}\, MDF_{i}({\bf{r}})\,{G}_{D}({\bf{r}},{\bf{r}}^{\prime},\tau )\,MDF_{j}({\bf{r}}^{\prime})}{{C}_{0}\left(\int d{\bf{r}}\,MDF_{i}({\bf{r}})\right)\left(\int d{\bf{r}}^{\prime}\,MDF_{j}({\bf{r}}^{\prime})\right)}\qquad(2)$$
where G_D(r, r′, τ) is the diffusion kernel that quantifies the probability for a molecule to diffuse from r′ to r in a time τ, and C_0 is the concentration. MDF_i is the molecular detection function, which indicates the probability that a molecule at r actually gives rise to a photon detection event in detection channel i. In 2fFCS one uses two spatially separated molecular detection functions MDF_1(r) and MDF_2(r), which originate from two displaced foci. Fluorescence events from these foci are detected either via two separate detectors, confocal with each excitation focus, or via a temporal multiplexing scheme in the excitation beam14, 17, 18, 20, 21. In 2fFCS, one expects the temporal cross-correlation (i ≠ j) between two detection channels to differ significantly from the autocorrelations (i = j). In particular, for well separated detection volumes, the cross-correlation shows a distinct peak at a characteristic time τ_C that directly derives from the focus separation. If the center-to-center distance R between the foci is known, the diffusion constant can be determined directly from τ_C. However, it should be noted that a clear peak in the cross-correlation function is not strictly necessary for 2fFCS to be useful: if the shape of the MDFs is known, simultaneous fitting of auto- and cross-correlations also allows accurate determination of the diffusion constant. For example, Dertinger et al.17, 18 derived explicit FCS trace dependencies assuming Gaussian foci, and argued that the height of the cross-correlation contribution decreases as R⁻³ or R⁻² in the case of diffusion of analytes in 3D or 2D, respectively.
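As a concrete illustration of Eq. (1), the short sketch below (added here for illustration; it is not the authors' analysis software, and the binned-trace format and variable names are assumptions) estimates auto- and cross-correlations directly from two intensity time traces:

```python
import numpy as np

def correlate(I_i, I_j, max_lag):
    """Normalized correlation G_ij(tau) of two binned intensity traces,
    following Eq. (1): <I_i(t) I_j(t+tau)> / (<I_i> <I_j>)."""
    I_i = np.asarray(I_i, dtype=float)
    I_j = np.asarray(I_j, dtype=float)
    norm = I_i.mean() * I_j.mean()
    G = np.empty(max_lag)
    for lag in range(max_lag):
        # average the lagged product over all overlapping time bins
        G[lag] = np.mean(I_i[: len(I_i) - lag] * I_j[lag:]) / norm
    return G

# two detection channels: autocorrelation G_11 and cross-correlation G_12
rng = np.random.default_rng(0)
I1 = rng.poisson(5.0, 100_000)   # stand-in binned photon-count traces
I2 = rng.poisson(5.0, 100_000)
acf = correlate(I1, I1, max_lag=1000)
ccf = correlate(I1, I2, max_lag=1000)
```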
Nano-antenna enhanced 2fFCS can in principle give closely spaced detection volumes with small overlap. However, if one would place a plasmonic 2fFCS substrate in the focal plane of a conventional microscope, the far-field optics typically cannot resolve the two foci at subwavelength distance. We propose that nano-structures can encode the spatial origin of fluorescence emission into two orthogonally polarized detection channels, as sketched in Fig. 1. Key to this encoding is that one designs a structure composed of antennas with a strongly linearly polarized response of orthogonal orientation for distinct emitter positions. The antennas should then be aligned with two far-field detection polarization channels. In this work we discuss two of the simplest nano-antenna geometries that provide a strong polarization response: nanorods, and their inverse, i.e., nanoslits in a metal film. We focus on designs to measure diffusion in 2D systems, such as lipid bilayers12, 22, that can be draped over a plasmonic surface.
Fig. 1. We propose a nano-antenna version of dual focus FCS based on polarization encoding of the fluorophore position near plasmonic nano-antennas. (a) Two plasmon antennas with orthogonally polarized resonances are placed in the diffraction-limited focus of a fluorescence confocal microscope. When diffusing fluorophores are near an antenna, the otherwise unpolarized fluorescence becomes polarized along the antenna resonance. (b) Fluctuating intensities on two APDs that are confocal with both antennas, but sensitive to orthogonal polarizations, will show a temporal cross-correlation. The antenna spacing acts as a ruler for measuring diffusion constants.
Numerical approach
Before we present numerical results on nano-antenna enhanced 2fFCS, we outline the general calculation approach. Consistent with reported plasmon FCS results4,5,6,7, 9,10,11,12,13 we choose gold as the plasmonic material, and therefore design the antennas to work in the long-wavelength part of the visible spectrum, around 650 nm. In a typical setting these antennas would be fabricated on glass, and embedded in water that accommodates the lipid bilayer. For the sake of concreteness we present results for a 2D diffusion coefficient of D = 4.5 · 10⁻⁸ cm²/s at a surface concentration of C_0 = 1 · 10¹³ m⁻², appropriate for diffusion in supported lipid bilayers. We assume the antennas to be covered by a thin dielectric planarizing layer, providing a flat plane for the diffusing lipids 30 nm above the metal interface. We designed antennas to obtain a resonance that is broad enough to cover both excitation and emission wavelengths, while giving a low response in the orthogonal polarization. To numerically investigate the performance of these antennas for 2fFCS, we first need to estimate the molecular detection function. The MDF is in essence given by the excitation efficiency function (EEF) and the collection efficiency function2 (CEF): MDF(x, y) = EEF(x, y, z_0) · CEF(x, y, z_0). The EEF is the probability to excite a fluorophore at a given position, and scales linearly with the pump field intensity. In an experiment unpolarized or circularly polarized light should be used to ensure that excitation close to both nano-rods or slits occurs with equal probability.
Hence, for the EEF we take the sum of the excitation field intensities for both linear polarizations at the pump wavelength (specified below for the two case studies) as calculated with full wave simulations. This approach ensures that the excitation field is taken into account, which is crucial, as the exciting beam can lead to background intensity strongly affecting results23. The near field resulting from an incident beam at the emission wavelength instead of the excitation wavelength provides us with the collection efficiency function CEF24. Through reciprocity, the calculated near field at a location near the antenna provides the power one would collect in the far field from a classical constant-current source at that location25. Using the near-field intensities upon polarized excitation as collection efficiency functions (CEFs) we obtain:
$$\begin{aligned} MDF_{X}(x,y,z_0) &= EEF(x,y)\cdot I_{E,X}(x,y,z_0)\\ MDF_{Y}(x,y,z_0) &= EEF(x,y)\cdot I_{E,Y}(x,y,z_0), \end{aligned}$$
where I_E is the electric field intensity in the x, y plane at z_0. It should be noted that this approach assumes randomly oriented fluorophores, implying rapid rotational diffusion compared to the fluorescence decay time, which is valid for typical fluorophores (rotational diffusion times are about one order of magnitude shorter than decay times)3, 26. Finally, the molecular detection functions can be converted into simulated FCS time traces by numerically executing the integration in Eq. (2), with the assumption of a membrane-diffusion application which limits the integration to a 2D plane.
Numerical results for nanorods
As the simplest polarization-sensitive geometry, we use nanorods. Gold nanorods (tabulated optical constants from Johnson and Christy27) in water (n = 1.33) on glass (n = 1.45) can be matched to the resonances of red-fluorescent dyes such as the commonly used Alexa647. In the numerical examples here we will use an excitation wavelength of 676 nm and an emission wavelength of 690 nm. Accordingly we focus on gold nano-rods of 70 nm length, 40 nm width, and 30 nm height, which have a resonance at ≈676 nm vacuum wavelength that is broad enough to cover both the excitation and emission wavelengths. If we place a second nanorod at right angles to the first (see Fig. 2a), one obtains a configuration that upon different incident far-field polarizations gives rise to near fields at distinctly different spatial locations, localized at the tips of the accordingly oriented nano-antenna. Conversely, emission from sources located in the hot spots at either rod is expected to be strongly polarized along the adjacent rod due to coupling of the transition dipole with the particle resonance24, 28. It is important to position the rods in such a way that cross-talk between polarization channels is minimized, as polarization cross-talk will degrade the 2fFCS cross-correlation contrast. For minimum polarization cross-talk it is advantageous to aim for minimum near-field coupling between the plasmon particles. At least to first order, in a simple picture of coupled dipolar resonances, this is achieved by placing one rod exactly along the symmetry axis of the other. As actual FCS experiments are sensitive to near-field detail, full wave simulations are required to calculate the actual fields and MDFs, thereby fully accounting for any coupling that may occur between the antennas.
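Before turning to the full-wave results, it is worth noting that the integration in Eq. (2) is numerically light for free 2D diffusion: the diffusion kernel is a Gaussian of variance 2Dτ per axis, so the double integral reduces to correlating MDF_i with a blurred copy of MDF_j. The sketch below is a minimal illustration of this step (an addition here, not the authors' code; gridded MDFs and periodic boundaries are assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fcs_curve(mdf_i, mdf_j, taus, D, dx, C0):
    """Evaluate Eq. (2) for free 2D diffusion on gridded MDFs.

    The 2D diffusion kernel G_D(r, r', tau) is a normalized Gaussian of
    standard deviation sqrt(2*D*tau) per axis, so the double integral
    becomes sum(mdf_i * (G_D convolved with mdf_j)) * dx**2.
    """
    norm = C0 * mdf_i.sum() * mdf_j.sum() * dx**4  # denominator of Eq. (2)
    G = []
    for tau in taus:
        sigma_pix = np.sqrt(2.0 * D * tau) / dx    # kernel width in pixels
        blurred = gaussian_filter(mdf_j, sigma_pix, mode='wrap')  # periodic
        G.append((mdf_i * blurred).sum() * dx**2 / norm)
    return np.array(G)
```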
We perform full wave simulations using the finite-difference time-domain method (Lumerical FDTD Solutions29), and report the resulting near-field intensity maps in Fig. 2b and c. Here we used a Gaussian beam with NA = 0.7 (beam waist w_0 ≈ 300 nm) to excite the rods with polarizations aligned along both of the rods. The rods are separated by 120 nm center-to-center (see Fig. 2a), and light is incident from the glass side. It should be noted that in addition to the localized fields there is substantial background electromagnetic energy density, due to the beam focus. We construct the CEF and MDF from these Gaussian beam simulations (at 676 and 690 nm wavelength) as input in Eq. (2). The obtained molecular detection functions (Fig. 2d,e) are strongly peaked at the rod ends, and clearly spatially distinct.
Fig. 2. Nanorods for nano-antenna dual focus fluorescence correlation spectroscopy. (a) A sketch of the nanorod-based geometry for nano-antenna 2fFCS. Two 40 × 70 nm nanorods (30 nm height) with 120 nm center-to-center distance form a T-shape. When placed on glass and in water, their near-field response is resonant at 676 nm. (b,c) The near-field intensity in a plane 30 nm above the rods, upon plane wave illumination at 690 nm (targeted emission wavelength) for polarization along the horizontal (b) and vertical (c) rod. Through reciprocity, this intensity is proportional to the collection efficiency function. The scale bar is 200 nm. (d,e) The molecular detection efficiency reconstructed from simulations for a λ = 676 nm pump wavelength (w_0 ≈ 300 nm Gaussian beam width) and λ = 690 nm emission wavelength (i.e., panels b,c). As a global scaling factor does not affect the correlation curves, both MDFs are normalized to the maximum in panel (d). (f) Predicted autocorrelation functions for the intensity on the x- and y-polarized detector (blue and red curves), predicted cross-correlation between polarization channels (yellow and dashed purple), and autocorrelation of the polarization contrast (green), constructed as (ACF_x + ACF_y − 2CCF)/2. Circles indicate the time at which the correlation has dropped to 50% of its maximum value at τ = 0.
Figure 2f reports the resulting simulated fluorescence correlation traces. First, one notes that the intensity autocorrelation functions (ACF) of the two polarization channels show the typical roll-off behavior expected for FCS. As the MDFs for x and y polarization are not identical due to the structural asymmetry, the ACFs also show different zero-time contrasts (which is a measure for the MDF volume). Their roll-off times are 2.5 ms in the case of x-polarized light and 3.5 ms for the y polarization, as indicated by the dots in Fig. 2f. The corresponding length scale ($\sqrt{4D\tau}$) of around 210 to 250 nm is close to the diffraction limit rather than to the antenna hot spot size, commensurate with the finding that nano-antenna FCS can only outperform conventional diffraction-limited FCS if the hot spots are sufficiently strong compared to the background focus field23. Figure 2f also shows that cross-correlating the two detectors is actually predicted to give an FCS trace similar in shape to the autocorrelation functions, though lower in contrast. Importantly, there is no evidence of a distinct peak in the cross-correlation at a characteristic non-zero τ_C: the cross-correlation is monotonically decreasing.
This result is indicative of insufficient spatial separation between the MDFs, as a consequence of the large contribution of the (overlapping) background intensity. This is a known problem that also negatively impacts simple nano-antenna enhanced FCS with nanoparticles23. The reader should be warned that one easily underestimates the role of the background focus in spatial maps as shown in Fig. 2(b–e). However, it is not the contrast in peak field intensity in Fig. 2 that matters, but rather the area-integrated content. Even at the large calculated contrast, the large diameter of the focus compared to the hot spots means that the background light will contribute significantly. This observation implies that even though the spatial maps of the polarization-resolved MDFs locally show large contrast at the antennas, the non-zero polarization-agnostic background implies strong cross-talk between detector channels. If one does not correlate intensity traces, but rather measures the temporal correlation function of instantaneous polarization differences by autocorrelating I_x − I_y (in practice obtainable through ACF_x + ACF_y − 2CCF), one finds the shortest roll-off time of 1.1 ms (dot on the green curve in Fig. 2f). This corresponds to the time over which the emission polarization of randomly oriented fluorophores coupled to one of the antennas is conserved. It is therefore a measure for the diffusion time through the near field of a single antenna. Hence, a plasmon nano-rod antenna geometry does provide two well-separated hot spots that are addressable by orthogonal polarizations. However, due to the background focus that enters both MDFs, cross-talk dominates the detector cross-correlation.
Nanoslits
Having identified that 2fFCS requires not just localized hot spots, but also efficient suppression of background intensity, we propose two modifications. First, a nano-slit or nano-aperture geometry that uses apertures in thick metal films effectively blocks background signal, as already shown for single-focus antenna-enhanced FCS4, 7, 30. Second, by making the structure periodic, one can increase the amplitude of the cross-correlation. The proposed structure (shown schematically in Fig. 3a) consists of an optically thick gold film (100 nm) perforated by rectangular nano-apertures. We study an arrangement on a square grid of deeply sub-diffraction pitch, with the orientation of apertures alternating along the x and y axes, such that neighbors always have opposite orientations. This periodic arrangement has the advantage that detecting an emitter in an aperture of opposite orientation does not require diffusion from one specific aperture to another, but from one aperture to any of four others, which are all located at the same distance. This increases the cross-correlation term by a factor of four, but keeps its shape and the peak position the same. It should be noted that diffusion beyond nearest-neighbor holes adds a long-time tail, also accounted for in our work, without changing the shape of the short-time cross-correlation contribution.
Fig. 3. Periodic arrays of plasmonic nano-apertures for fluorescence correlation spectroscopy. (a) Sketch of an array of nano-apertures on a square grid with pitch d = 120 nm. The apertures alternate in orientation and measure 100 by 40 nm. The apertures are assumed located in a 100 nm thick gold film on glass, immersed in water. (b) For both x- and y-polarized incident light, the near field in a plane 30 nm above the film (averaged over the unit cell) shows a peak at 670 nm.
(c,d) The calculated MDFs for unpolarized excitation at 670 nm, and x- and y-polarized detection, assuming Gaussian beam optics with an NA of 0.7. The focused beam waist is shown with a dashed orange circle, and the 100 × 40 nm apertures are outlined on top of the color maps in white. Importantly, the MDFs are localized on subsets of differently oriented holes. (e) The autocorrelation functions for the intensity measured by the x- and y-polarized detectors (blue and red curves), the predicted cross-correlation between polarization channels (yellow and dashed purple curves), and the autocorrelation of the polarization contrast (green, calculated as (ACF_x + ACF_y − 2CCF)/2). The spatial overlap of the two MDFs is sufficiently small that the cross-correlation shows a distinct maximum at 0.53 ms. The dots depict the roll-off time where the correlation has dropped to 50% of its maximum value (ACF), resp. the CCF peak position.
For the periodic array of nano-apertures we again perform full wave simulations with FDTD using a Gaussian excitation beam corresponding to a tight focus (NA = 0.7), and we obtain fields at the optical pump wavelength and the Stokes-shifted fluorescence wavelength (taken as 676 nm resp. 690 nm). Figure 3b shows the averaged near-field intensity spectrum 30 nm above the gold film, with the excitation and emission wavelengths indicated with dashed lines. A clear resonance is visible at the excitation wavelength. Figure 3c,d report the MDF for two orthogonally polarized detectors, assuming that the FCS sampling plane is again 30 nm above the gold surface. Field plots (not shown) show that the nano-apertures respond strongly when oriented perpendicular to the driving field polarization, as expected according to the Babinet principle31,32,33. Commensurate with the result that the nanorods have a strongly polarized resonance, the MDF is high at nano-apertures oriented perpendicular to the detection polarization, yet low at the other apertures. The cross-talk between MDFs is small, owing to the fact that the metal film blocks light, and the resonances have field intensity peaks localized right above the apertures. The correlation functions calculated for the nano-aperture array are shown in Fig. 3e. The autocorrelations for each polarization channel are slightly different because in the simulation the focus is centered on a vertical slit. The autocorrelations vary slightly with excitation beam position. Due to the presence of multiple displaced detection volumes, the ACFs do not roll off monotonically, but show shoulders near 1 ms. The roll-off time (point where the ACF G(τ) − 1 has reduced to half its maximum) is 0.16 ms for both polarizations (blue and red dots). The cross-correlation functions (yellow and purple) show a distinct peak at τ_peak = 0.53 ms, proving that the nano-aperture design indeed provides a sufficiently small spatial overlap between the x- and y-polarized MDFs to make dual focus cross-correlation FCS possible. The autocorrelation of the polarization contrast ΔI (green curve) yields a roll-off time of 0.096 ms, corresponding to the diffusion time through a single aperture. Despite the fact that now multiple hot spots are present, the individual detection volumes are smaller than in the nanorod geometry thanks to the reduced background intensity. As a result, the correlation contrast is higher. If the 2fFCS cross-correlation shows a peak at non-zero delay time, the peak time can be converted into a diffusion constant. For 2fFCS experiments performed with two displaced Gaussian foci (identical size σ and separation R), one would expect a peak correlation at a delay τ_peak = (2R² − 2σ²)/(8D)17, 18. If we take the roll-off time of the polarization fluctuations (sum of autocorrelates minus cross-correlates; Fig. 3e, green line) of τ_ΔI = 0.096 ms as a measure for the size σ of the individual hot spot through σ² = 4τ_ΔI·D, and knowing the set distance between the two MDFs (R = 120 nm), one retrieves the diffusion coefficient as D = R²/[4(τ_peak + τ_ΔI)]. With the peak time τ_peak = 0.53 ms read off from the cross-correlation curve, this procedure yields 5.7 · 10⁻⁸ cm²/s, in reasonable agreement with the value of 4.5 · 10⁻⁸ cm²/s originally assumed in the numerical simulation. It should be noted that in a tight focus the hot spots are somewhat displaced from the holes, leading to a reduced separation (110 nm rather than 120 nm), and concomitantly a derived D = 4.8 · 10⁻⁸ cm²/s, even closer to the assumed value.
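The arithmetic behind this retrieval is easy to verify: combining τ_peak = (2R² − 2σ²)/(8D) with σ² = 4τ_ΔI·D indeed gives D = R²/[4(τ_peak + τ_ΔI)]. The snippet below (added for illustration; variable names are ours) reproduces the numbers quoted above:

```python
# Retrieval of D from the cross-correlation peak time (see text):
# tau_peak = (2*R**2 - 2*sigma**2)/(8*D) and sigma**2 = 4*tau_dI*D
# combine to D = R**2 / (4*(tau_peak + tau_dI)).
R = 120e-7           # hot-spot separation in cm (120 nm)
tau_peak = 0.53e-3   # cross-correlation peak time in s
tau_dI = 0.096e-3    # roll-off time of the polarization contrast in s

D = R**2 / (4.0 * (tau_peak + tau_dI))
print(f"D = {D:.1e} cm^2/s")      # ~5.7e-8 cm^2/s, as quoted

R_eff = 110e-7       # hot spots pulled inward in a tight focus (110 nm)
D_eff = R_eff**2 / (4.0 * (tau_peak + tau_dI))
print(f"D = {D_eff:.1e} cm^2/s")  # ~4.8e-8 cm^2/s
```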
Hence, the alternating nano-aperture array geometry indeed allows measurement of diffusion constants through the sample geometry, circumventing the need for calibration runs on known solutions. In principle the need to precisely calibrate the shape of the MDF is obviated by the fact that the aperture spacing is a robust, calibration-free ruler. Our simulations show that this robustness improves when using a less tight illumination focus. In this case the fact that more apertures are illuminated removes the dependence on where the center of the focus is actually chosen (which leads to the difference between ACF_x and ACF_y in Fig. 3e), and the hot spot spacing more closely approaches the sample periodicity. We have performed a proof of principle experiment, using focused ion beam milling to make nano-aperture arrays in thermally evaporated gold films (100 nm thickness) on glass, which were coated with a planarizing layer of 30 nm SiO2 spin-on glass (HSQ) after milling. We fabricated arrays of rectangular apertures of 165 nm length and 50 nm width, arranged on a grid of 180 nm pitch. As a reference system, we fabricated arrays of 200 nm pitch with square 100 nm holes. Since these holes are square, no polarization sensitivity is expected. Figure 4(a,b) shows electron microscopy images of the fabricated arrays. On the metal film we prepared a supported lipid bilayer composed of DOPC (L-α-phosphatidylcholine, Avanti Polar Lipids) doped with nominally 0.5 · 10⁻⁶ mol% Rho-PE (1,2-dipalmitoyl-sn-glycero-3-phosphoethanolamine-N-(lissamine rhodamine B sulfonyl) ammonium salt, Avanti Polar Lipids) to perform FCS on its 2D diffusion.
Fig. 4. Proof of principle experiment for nano-aperture enhanced fluorescence correlation spectroscopy. (a,b) SEM images of a polarization-sensitive nano-aperture array of rectangular 165 by 50 nm slits at 180 nm pitch (a) and a polarization-insensitive reference array of 100 × 100 nm² square apertures at 200 nm pitch (b). Scale bars indicate 500 nm. (c) In the experiment the gold film is covered by a 30 nm SiO2 coating (spincoated HSQ) which supports a lipid bilayer. Emitters are diffusing in the plane of the lipid bilayer. The sample is immersed in water, and we pump and collect emission from the glass side of the sample. (d) We study the lipid bilayer sample in a confocal fluorescence microscope. Light collected from the emitters is passed through a longpass (LP) filter, and split into polarization channels by a polarizing beam splitter (PBS).
We use two APDs per polarization channel to avoid artefacts due to APD deadtime and afterpulsing.
We obtained FCS traces using a homebuilt fluorescence microscope (see Fig. 4(c,d)). The sample was pumped with 300 μW using the 568 nm line of a cw Ar:Kr laser. Excitation and collection were performed through the glass side of the sample, using a Nikon CFI S Plan Fluor 60x ELWD objective (NA 0.7). For detection we pass the light through two Chroma HG580LP filters to reject laser light, and onto Si avalanche photodiodes (APDs) in Geiger mode (Micro Photon Devices). The APDs are connected to a 16-channel Becker and Hickl DPC230 time-correlator card in time-tagging mode. Photon time traces of 120 seconds were correlated using in-house developed software based on the multi-tau algorithm34. To improve statistics we average three correlation traces. We use a polarizing beam splitter to separate out the two polarization channels. To avoid correlation artefacts that may appear in autocorrelation traces due to APD dead times, we divide the signal over two APDs in each polarization branch.
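For reference, a schematic sketch of such a multi-tau correlator is given below (an addition here; this is not the in-house software of ref. 34, and the simple rebinning shown omits the usual statistical refinements): a fixed number of linearly spaced lags is evaluated per level, after which both traces are rebinned by a factor of two so that the lag spacing grows quasi-logarithmically.

```python
import numpy as np

def multi_tau(I_i, I_j, m=16, levels=8):
    """Schematic multi-tau estimator of G_ij(tau), Eq. (1): m linearly
    spaced lags per level; after each level both traces are coarsened
    by a factor of two, doubling the lag spacing."""
    I_i = np.asarray(I_i, dtype=float)
    I_j = np.asarray(I_j, dtype=float)
    taus, G, dt = [], [], 1.0
    for level in range(levels):
        start = 1 if level == 0 else m // 2
        for k in range(start, m):
            # correlate at lag k (in units of the current bin width dt)
            prod = np.mean(I_i[: len(I_i) - k] * I_j[k:])
            G.append(prod / (I_i.mean() * I_j.mean()))
            taus.append(k * dt)
        # coarsen: average neighbouring bins and double the bin width
        n = (len(I_i) // 2) * 2
        I_i = 0.5 * (I_i[:n:2] + I_i[1:n:2])
        I_j = 0.5 * (I_j[:n:2] + I_j[1:n:2])
        dt *= 2.0
    return np.array(taus), np.array(G)
```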
Figure 5 shows the obtained FCS traces. Panel (a) pertains to a reference measurement without gold film, while panels (b) and (c) pertain to the arrays of square resp. rectangular holes. For the reference sample of square arrays, both polarization channels are expected to correspond to identical molecular detection functions. Indeed, Fig. 5(b) shows that the cross-correlation between polarization channels gives a time trace identical to the autocorrelation of each polarization channel separately. Instead, for the array of alternating rectangular nano-apertures, the standard FCS trace (single polarization channel) and the cross-correlation of cross-polarized channels show a large difference (Fig. 5(c)). This measurement demonstrates the feasibility of nano-antenna 2fFCS. We emphasize that all detectors are confocal with the (same) diffraction-limited sample excitation spot, which encompasses about five optically unresolvable nano-apertures in total. The cross-correlation contrast hence entirely comes from the encoding of spatial information, i.e., fluorophore location, in polarization channels.
Fig. 5. Experimental intensity correlations for nano-aperture 2fFCS. Intensity correlations for (a) a lipid bilayer directly on glass, (b) a polarization-insensitive square hole array in gold, and (c) the crossed slits. Blue curves with circular symbols correspond to autocorrelations, i.e., correlating detectors in the same polarization channel, while red curves with square symbols correspond to cross-correlations of polarization channels. For the reference system (a) and the polarization-insensitive antennas (b), the auto- and cross-correlates are identical, but a strong difference appears for the crossed nanoslit sample (c). Shaded areas around the curves indicate the standard error in the mean, given that curves are averages over 3 runs of 120 seconds each, where we furthermore average over all auto- resp. cross-correlating detector combinations. Dotted (dash-dotted) black lines indicate the fitted ACF (CCF) according to a simple Gaussian model (see text). Note that for panels b,c the difference is the polarization cross-talk required to fit the data. In (b) all apertures contribute equally, independent of polarization, leading to identical auto- and cross-correlations. In (c) the x(y)-oriented holes contribute 4 times more strongly to MDF_y(x) than the y(x)-oriented holes. In panel (c) the purple curve shows the difference in correlations, corresponding to the temporal correlation of instantaneous polarization differences. The black curve superimposed on the purple curve is the difference of the fits to the CCF and ACF, which has a roll-off time of 0.93 ms. Diffusion at the aperture substrate is slowed down compared to the reference case (D = 0.33 versus 4.5 μm²/s).
The fact that the experimental data shows a high value of the cross-correlation at τ = 0 indicates significant spatial overlap of the two MDFs, which we attribute to imperfect polarization contrast in the nano-apertures. Indeed, similar curves are observed in simulations similar to those in Fig. 3, for geometries and operating wavelengths that do not result in strong polarization separation. As a result of this cross-talk between polarization channels, there is no clear peak in the cross-correlation time trace as there would be in the ideal case presented in Fig. 3(e). However, one can still fit the time traces to obtain diffusion constants. For verification we first fit a single-Gaussian FCS model to the reference data without a gold film (Fig. 5(a)). If we assume a focus width of 405 nm (intensity FWHM), tantamount to an MDF width σ = 243 nm23, we find a diffusion coefficient of 5 μm²/s, in reasonable agreement with a previously reported value of 4.5 μm²/s22, 35. Thereby, this fit verifies the operation of our setup, and our focus size estimate in the absence of the plasmonic structures. We continue by globally fitting a numerical model to the auto- and cross-correlation data of the arrays. This model is based on ref. 23, which treats FCS in focal distributions that are a superposition of many Gaussian contributions. We modified this model to deal with 2D systems, in which we assume an MDF given by the sum of a broad background focus and a periodic array of hot spots spaced by the array pitch in our sample. The hot spot amplitudes are a multiple of the local background focus intensity, where the enhancement factor and hot spot size are treated as two fit parameters. We further extended our model to compute auto- and cross-correlations of different sets of Gaussian volumes. The polarization-selective behavior is implemented by assigning an amplitude difference to sets of orthogonally oriented apertures that reverses between MDF_x and MDF_y. This model allows us to efficiently calculate the auto- and cross-correlations for complex MDFs accounting for up to 100 holes, using just seven parameters: five to define the geometry (background focus size, hot spot spacing, hot spot size, hot spot enhancement factor, polarization contrast) and two to quantify the fluorophore physics (concentration and diffusion constant). We have verified that this model can successfully reproduce simulated correlation curves such as the ones shown in Fig. 3e.
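The core of such a Gaussian-superposition model is compact, because for free 2D diffusion the double integral in Eq. (2) has a closed form for every pair of Gaussian components. The sketch below is a reconstruction of that core (our addition, not the authors' fitting code; the names, shared widths, and channel-dependent amplitudes are assumptions):

```python
import numpy as np

def gaussian_sum_fcs(amps_i, amps_j, centers, sigmas, taus, D, C0):
    """G_ij(tau) - 1 for free 2D diffusion when MDF_i and MDF_j are sums
    of Gaussian components A_k * exp(-|r - r_k|^2 / (2 sigma_k^2)) that
    share centers r_k and widths sigma_k but have channel-dependent
    amplitudes. Convolving two Gaussian components with the 2D heat
    kernel (variance 2*D*tau per axis) yields a Gaussian in the
    center-to-center distance, so Eq. (2) reduces to a double sum over
    component pairs (k, l)."""
    amps_i = np.asarray(amps_i, float)
    amps_j = np.asarray(amps_j, float)
    centers = np.asarray(centers, float)      # shape (K, 2)
    sigmas = np.asarray(sigmas, float)        # shape (K,)
    d2 = np.sum((centers[:, None, :] - centers[None, :, :])**2, axis=-1)
    s2 = sigmas[:, None]**2 + sigmas[None, :]**2
    # integral of each Gaussian component: A_k * 2*pi*sigma_k^2
    norm = (C0 * np.sum(2*np.pi*sigmas**2*amps_i)
               * np.sum(2*np.pi*sigmas**2*amps_j))
    G = []
    for tau in taus:
        v = s2 + 2.0 * D * tau                # total variance per axis
        num = (amps_i[:, None] * amps_j[None, :]
               * 2*np.pi * (sigmas[:, None]*sigmas[None, :])**2 / v
               * np.exp(-d2 / (2.0 * v)))
        G.append(num.sum() / norm)
    return np.array(G)
```

Feeding such a routine one broad background component plus a grid of narrow hot-spot components, with amplitudes that swap between the two channels, reproduces the qualitative behavior described above.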
To further constrain the fit, we fix the background focus size to the 405 nm from the calibration measurement, and the hot spot spacing to the 180 nm sample pitch. We simultaneously fit the square-hole and rectangular-hole ACF and CCF data traces, imposing identical concentration and diffusion constant. Even with these tight constraints, the model satisfactorily reproduces our data, precisely tracing both the cross- and autocorrelations. We note that this fit is obtained with a diffusion coefficient of 0.33 μm²/s, significantly below the value obtained in the reference system. We attribute this discrepancy to a difference in electrostatic properties between the glass reference and the aperture array, due to surface properties of the gold film and planarization layer, and their modification by focused ion beam milling. Importantly, this discrepancy has no bearing on the validation of the optical mechanisms of nano-antenna enhanced 2fFCS per se, as is evidenced by the values for the geometrical fit parameters that we retrieve. According to the fit, hot spot sizes are approximately 55 nm FWHM for the rectangular resp. square hole samples, with MDF enhancement factors in the hot spots of 12 resp. 6 compared to the background. The polarization contrast in the rectangular sample is approximately 4:1 according to the fit. Figure 5c also shows the roll-off of the polarization fluctuations, i.e., the measured difference between the ACF and CCF data sets, overplotted with the difference of the fit curves. The roll-off time of 0.93 ms directly translates to a hot spot size σ = $\sqrt{4D\tau}$ = 35 nm (corresponding to an FWHM of 55 nm), using the diffusion constant fitted to the rectangular and square datasets. This size is in good agreement with the simulated hot spot sizes. We conclude that our experiment supports the proposition of nano-antenna 2fFCS. In this proof-of-concept experiment, the cross-correlation peaks at zero time delay, as opposed to the distinct peak at non-zero time delay in Fig. 3(e). This indicates that in our experimental realization there is still substantial overlap between the cross-polarized MDFs. Indeed, our fit indicates cross-talk through the background focus, and through the fitted cross-polarization contrast of 4:1, meaning that 'dark' cross-polarized slits still contribute to the MDF with 25% of the strength of co-polarized 'bright' slits. We anticipate that these shortcomings can be resolved by improved fabrication procedures and a sample design that is more tailored to align plasmon resonance and emission properties. A more red-shifted dye than tetramethylrhodamine (fluorescence at 576 nm), or a different plasmonic metal to blueshift the plasmon resonance, would help to align the emission with the slit resonance. Moreover, low-quantum-efficiency dyes are well known to be much more effective at singling out hot-spot properties36, likely leading to improved polarization contrast by removing the background focus contribution.
Conclusion and outlook
We have shown a nano-antenna design which encodes fluorescence emission originating from different spatial regions into two orthogonal far-field polarization states with high contrast. It combines the advantages of calibration-free 2fFCS measurement and the benefits of nano-antenna enhanced fluorescence spectroscopy, like smaller detection volumes and pump and emission enhancements. The use of polarization-sensitive nano-antennas to provide spatial selection reduces the complexity of 2fFCS experiments compared to existing far-field implementations. In the proposed nano-optical implementation, only the addition of a polarization splitter and detection path is necessary. Otherwise, the setup remains entirely identical to a single-focus confocal setup, as opposed to having to create displaced foci. The discussed geometries are a proof of principle and are not yet completely optimized. The general design rules identified in this work are that (i) polarization cross-talk between MDF channels must be minimized, while simultaneously (ii) no background focus contribution must be present.
In this work we followed the rationale that polarization cross-talk is minimized by minimizing the dipole-dipole coupling between plasmon resonances through spatial arrangement. Our full-wave simulations show that this philosophy can lead to robust designs even at small spacings between slits. Large gains in performance could be made by combining these design philosophies with established methods to increase the light collection efficiency. The periodic array can for instance further be tailored to ensure that light that is emitted into the SPP guided mode is beamed into the far field, thereby increasing the count rate per fluorophore7. Also, local density of states enhancements can be used to boost count rates per molecule, and to make molecular detection functions more spatially selective10, 11, 36. Furthermore, other types of far-field channels could be used to address different near-field volumes. For example, from ref. 37 we extrapolate that dividing the radiation pattern into different detection channels allows selection of distinct near-field volumes around complex nano-structures with sub-diffractive spacing.
The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
References
1. Magde, D., Elson, E. & Webb, W. Thermodynamic Fluctuations in a Reacting System - Measurement by Fluorescence Correlation Spectroscopy. Phys. Rev. Lett. 29, 705–708 (1972).
2. Schwille, P. & Haustein, E. Fluorescence correlation spectroscopy. An introduction to its concepts and applications (Biophysics Textbook Online 1(3), Göttingen, 2001).
3. Lakowicz, J. R. Principles of Fluorescence Spectroscopy 3rd edn (Springer, 2006).
4. Levene, M. J. et al. Zero-mode waveguides for single-molecule analysis at high concentrations. Science 299, 682–6 (2003).
5. Gérard, D. et al. Nanoaperture-enhanced fluorescence: Towards higher detection rates with plasmonic metals. Phys. Rev. B 77, 045413 (2008).
6. Aouani, H. et al. Optical-fiber-microsphere for remote fluorescence correlation spectroscopy. Opt. Express 17, 19085–92 (2009).
7. Aouani, H. et al. Bright unidirectional fluorescence emission of molecules in a nanoaperture with plasmonic corrugations. Nano Lett. 11, 637–44 (2011).
8. Estrada, L. C., Aramendía, P. F. & Martínez, O. E. 10000 times volume reduction for fluorescence correlation spectroscopy using nano-antennas. Opt. Express 16, 20597 (2008).
9. Kinkhabwala, A. A., Yu, Z., Fan, S. & Moerner, W. E. Fluorescence correlation spectroscopy at high concentrations using gold bowtie nanoantennas. Chem. Phys. 406, 3–8 (2012).
10. Punj, D. et al. A plasmonic 'antenna-in-box' platform for enhanced single-molecule analysis at micromolar concentrations. Nat. Nanotechnol. 8, 512–6 (2013).
11. Punj, D. et al. Plasmonic antennas and zero-mode waveguides to enhance single molecule fluorescence detection and fluorescence correlation spectroscopy toward physiological concentrations. Wiley Interdiscip. Rev.: Nanomed. Nanobiotechnol. 6, 268–282 (2014).
12. Wenger, J., Rigneault, H., Dintinger, J., Marguet, D. & Lenne, P.-F. Single-fluorophore diffusion in a lipid membrane over a subwavelength aperture. J. Biol. Phys. 32, SN1–4 (2006).
13. Wenger, J. et al. Radiative and Nonradiative Photokinetics Alteration Inside a Single Metallic Nanometric Aperture. J. Phys. Chem. C 111, 11469–11474 (2007).
14. Enderlein, J., Gregor, I., Patra, D. & Fitter, J. Art and Artefacts of Fluorescence Correlation Spectroscopy. Curr. Pharm.
Biotechnol. 5, 155–161 (2004).
15. Kapusta, P., Wahl, M., Benda, A., Hof, M. & Enderlein, J. Fluorescence lifetime correlation spectroscopy. J. Fluoresc. 17, 43–8 (2007).
16. Kapusta, P. Absolute Diffusion Coefficients: Compilation of Reference Data for FCS Calibration. PicoQuant Application Note (2010).
17. Dertinger, T. et al. Two-focus fluorescence correlation spectroscopy: a new tool for accurate and absolute diffusion measurements. ChemPhysChem 8, 433–43 (2007).
18. Dertinger, T. et al. The optics and performance of dual-focus fluorescence correlation spectroscopy. Opt. Express 16, 14353 (2008).
19. Giannini, V., Fernández-Domínguez, A. I., Heck, S. C. & Maier, S. A. Plasmonic nanoantennas: Fundamentals and their use in controlling the radiative properties of nanoemitters. Chem. Rev. 111, 3888–3912 (2011).
20. Korlann, Y., Dertinger, T., Michalet, X., Weiss, S. & Enderlein, J. Measuring diffusion with polarization-modulation dual-focus fluorescence correlation spectroscopy. Opt. Express 16, 14609 (2008).
21. Štefl, M., Benda, A., Gregor, I. & Hof, M. The fast polarization modulation based dual-focus fluorescence correlation spectroscopy. Opt. Express 22, 885–99 (2014).
22. Stelzle, M., Miehlich, R. & Sackmann, E. Two-dimensional microelectrophoresis in supported lipid bilayers. Biophys. J. 63, 1346–1354 (1992).
23. Langguth, L. & Koenderink, A. F. Simple model for plasmon enhanced fluorescence correlation spectroscopy. Opt. Express 22, 15397 (2014).
24. Bharadwaj, P., Deutsch, B. & Novotny, L. Optical Antennas. Adv. Opt. Photonics 1, 438 (2009).
25. Novotny, L. & Hecht, B. Principles of Nano-Optics (Cambridge University Press, 2006).
26. Chizhik, A. I. et al. Electrodynamic coupling of electric dipole emitters to a fluctuating mode density within a nanocavity. Phys. Rev. Lett. 108, 163002 (2012).
27. Johnson, P. B. & Christy, R. W. Optical Constants of the Noble Metals. Phys. Rev. B 6, 4370–4379 (1972).
28. Taminiau, T. H., Stefani, F. D. & van Hulst, N. F. Enhanced directional excitation and emission of single emitters by a nano-optical Yagi-Uda antenna. Opt. Express 16, 10858–10866 (2008).
29. Lumerical. FDTD Solutions. http://www.lumerical.com/tcad-products/fdtd/.
30. Craighead, H. Future lab-on-a-chip technologies for interrogating individual molecules. Nature 442, 387–93 (2006).
31. Falcone, F. et al. Babinet principle applied to the design of metasurfaces and metamaterials. Phys. Rev. Lett. 93, 197401 (2004).
32. Zentgraf, T. et al. Babinet's principle for optical frequency metamaterials and nanoantennas. Phys. Rev. B 76, 33407 (2007).
33. Ögüt, B. et al. Hybridized metal slit eigenmodes as an illustration of Babinet's principle. ACS Nano 5, 6701–6 (2011).
34. Wahl, M., Gregor, I., Patting, M. & Enderlein, J. Fast calculation of fluorescence correlation data with asynchronous time-correlated single-photon counting. Opt. Express 11, 3583–91 (2003).
35. Guo, L. et al. Molecular diffusion measurement in lipid bilayers over wide concentration ranges: A comparative study. ChemPhysChem 9, 721–728 (2008).
36. Khatua, S. et al. Resonant plasmonic enhancement of single-molecule fluorescence by individual gold nanorods. ACS Nano 8, 4440–4449 (2014).
37. Koenderink, A. F., Hernández, J. V., Robicheaux, F., Noordam, L. D. & Polman, A. Programmable nanolithography with plasmon nanoparticle arrays. Nano Lett. 7, 745–749 (2007).
Acknowledgements
We are indebted to Clara Osorio for discussions, and to Sjoerd Wouda and Luc Blom for photon-correlation software.
This work is part of the research program of the Netherlands Organization for Scientific Research (NWO). This work is supported by NanoNextNL, a micro- and nanotechnology consortium of the Government of The Netherlands and 130 partners. The research leading to these results was partially supported by the European Union's Seventh Framework Programme (FP/2007-2013)/ERC Grant Agreement no. 337328, "NanoEnabledPV", and (FP/2007-2013)/ERC Grant Agreement no. 335672, "Minicell".
Author information
Center for Nanophotonics, AMOLF, Science Park 102, Amsterdam, NL-1098XG, The Netherlands: Lutz Langguth, Sander A. Mann, Erik C. Garnett & A. Femius Koenderink
Biological Soft Matter Group, AMOLF, Science Park 102, Amsterdam, NL-1098XG, The Netherlands: Agata Szuba & Gijsje H. Koenderink
Contributions: L.L. and F.K. conceived the nano-2fFCS proposition. L.L. and A.S. conceived the experimental implementation, with L.L. and F.K. responsible for nanofabrication, optics and analysis, and A.S. and G.K. for the lipid membrane system. S.M. performed the numerical simulations in Lumerical, for which F.K. implemented the correlation analysis (independently verified by L.L. using COMSOL; results not shown). F.K., G.K. and E.G. supervised the projects, and all authors contributed to the manuscript.
Correspondence to A. Femius Koenderink.
Langguth, L., Szuba, A., Mann, S.A. et al. Nano-antenna enhanced two-focus fluorescence correlation spectroscopy. Sci. Rep. 7, 5985 (2017). https://doi.org/10.1038/s41598-017-06325-6. Accepted: 12 June 2017.
Electromagnetically induced transparency in an all-dielectric nano-metamaterial for slow light application
Qiao Wang, Li Yu, Huixuan Gao, Shuwen Chu, and Wei Peng*
Department of Physics, Dalian University of Technology, Dalian 116024, China
*Corresponding author: wpeng@dlut.edu.cn
Qiao Wang, Li Yu, Huixuan Gao, Shuwen Chu, and Wei Peng, "Electromagnetically induced transparency in an all-dielectric nano-metamaterial for slow light application," Opt. Express 27, 35012-35026 (2019)
Original Manuscript: August 21, 2019; Revised Manuscript: November 7, 2019; Manuscript Accepted: November 8, 2019
Abstract
Slow light techniques have significant potential applications in many contemporary photonic device developments for integrated all-optical circuits, such as buffers, regenerators, switches and interferometers. In this paper, we present an efficient coupling mechanism for an electromagnetically induced transparency like (EIT-like) effect in an all-dielectric nano-metamaterial. This EIT-like effect is generated by destructive interference between a radiative Fabry-Perot (FP) mode and a dark waveguide (WG) mode, based on a combined structure of a dielectric grating and multilayer films. The dark WG mode is excited by the guided mode of the dielectric grating instead of by the radiative FP mode. In analogy to the molecular transition process, the FP mode, guided mode and WG mode are denoted by the excited states $|1\rangle$, $|2\rangle$ and $|3\rangle$. The two coupling pathways of the EIT-like effect in our metamaterial are $|0\rangle \to |1\rangle$ and $|0\rangle \to |2\rangle \to |3\rangle \to |1\rangle$, where $|0\rangle$ is the ground state. The simulated resonant wavelength of the WG mode is consistent with the theoretical result. We further confirm this EIT-like effect through a two-oscillator coupling analysis. We achieve a group refractive index of 913.6 by adjusting the coupling of the two modes of the EIT-like effect, which is useful for developing slow light devices. This work provides a valuable solution to realize electromagnetically induced transparency in an all-dielectric nanomaterial. © 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
1. Introduction
The velocity of light in vacuum is approximately $3 \times {10^8}\;\textrm{m/s}$. This ultrahigh speed provides a tremendous advantage for optical data transmission, but also limits optical signal control in the time domain. To overcome this bottleneck, many possible technical solutions have been investigated to scale down the group velocity of a compound light pulse consisting of different frequencies, that is, to obtain slow light. Currently, many slow light techniques are investigated for developing highly efficient photonic devices, such as buffers [1–3], regenerators [4,5], switches [6,7] and interferometers [8,9].
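For reference (an addition here, using textbook definitions rather than anything specific to this paper), the degree of light slowing is quantified by the group index: a pulse at carrier frequency $\omega$ in a medium of refractive index $n(\omega)$ travels at
$$v_g = \frac{c}{n_g}, \qquad n_g = n(\omega) + \omega\,\frac{dn}{d\omega},$$
so steep normal dispersion $dn/d\omega$ across a narrow transparency window produces a large $n_g$ and hence a small $v_g$; the group refractive index of 913.6 quoted in the abstract is this quantity.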
There are several established solutions to generate slow light, including electromagnetically induced transparency (EIT) [10–12], coherent population oscillation [13,14], optical parametric amplification [15,16], stimulated Brillouin scattering [17,18] and photonic crystal based methods [19,20]. All these methods aim at creating a narrow spectral region with a steep change, i.e., a peak, in the dispersion relation. EIT is one of the most promising ways to achieve a low group velocity of light. The EIT phenomenon was initially observed in a three-level atomic system [21] with a ground state $|0 \rangle$ and excited states $|1 \rangle$ and $|2 \rangle$. A probe light excites the molecular transition from $|0 \rangle$ to $|1 \rangle$. When the frequency of a second, driving light satisfies the molecular transition from $|1 \rangle$ to $|2 \rangle$, destructive interference occurs between the two transition processes and the population of state $|1 \rangle$ is nearly zero, which generates a transparency window in the spectrum. Using EIT, L. V. Hau et al. experimentally demonstrated a group velocity of 17 m/s in an ultra-cold gas of sodium atoms, which is acknowledged as the first realization of slow light [10]. D. F. Phillips et al. reported the "storage of light" by dynamically reducing the group velocity of a light pulse to zero in a vapor of rubidium atoms [22]. B. Wu et al. observed optical pulse delays of 16 ns with a delay-bandwidth product of 0.8 in a planar chip consisting of hot rubidium atoms in hollow-core waveguides [23]. Nevertheless, atomic EIT systems require extreme experimental conditions, such as ultralow temperature, which make them costly and inconvenient for device integration. Therefore, EIT-like effects at room temperature in solid systems, in analogy to EIT in atomic systems, have attracted great attention. K. Totsuka et al. demonstrated an EIT-like effect in a fiber taper by coupling two ultrahigh-Q silica microspheres with different diameters, and observed an 8.5 ns delay for a 51 ns pulse width [24]. J. Gu et al. presented an optically tunable group delay of ultrafast optical pulses at terahertz frequencies by integrating photoconductive silicon into the metamaterial unit cell and generating an EIT-like effect [25]. The EIT-like effect has also been reported in plasmonic nanosystems, where it is named plasmon-induced transparency (PIT), referring to an EIT-like effect induced by surface plasmon polaritons (SPPs). S. Zhang et al. investigated a PIT formed by a dipole antenna and two strips in a plasmonic metamaterial [26]. R. Taubert et al. demonstrated a PIT with the coupling of a broad dipolar and a narrow dark quadrupolar plasmon [27]. T. T. Kim et al. achieved an electrically tunable graphene EIT metamaterial in the THz regime [28]. J. Hu et al. investigated the coupling of graphene Tamm plasmon polaritons (TPPs) and silver TPPs in a graphene/dielectric Bragg reflector/Ag slab hybrid system [29]. Whether in these dielectric or plasmonic nanosystems, the physical mechanism of the EIT-like effect is slightly different from that in the three-level atomic system. The EIT-like effect is formed by destructive interference of a radiative mode and a dark mode. The dark mode is excited by the radiative mode instead of by the external field directly. If we follow the analogy with the EIT effect in the atomic system, the radiative and dark modes are designated as the two excited states $|1 \rangle$ and $|2 \rangle$, respectively.
The EIT-like effect is then formed by the coupling of the two pathways $|0 \rangle \to |1 \rangle$ and $|0 \rangle \to |1 \rangle \to |2 \rangle \to |1 \rangle$. Recently, the EIT-like effect has also been generated in nanoscale all-dielectric systems. S.-G. Lee et al. presented an EIT-like effect by coupling two resonant guided modes with low and high quality factors [30]. C. Sui et al. generated the EIT-like effect by using destructive interference between a broad magnetic dipole resonance and a narrow guided mode resonance in the substrate [31]. B. Han et al. presented an EIT-like effect by coupling a toroidal dipole moment and a magnetic resonance in an E-shaped silicon array [32]. Although EIT-like effects in all-dielectric nanosystems have been reported, efforts are still required to further reveal the underlying mechanism of the EIT-like effect for practical applications. In this paper, we present an efficient coupling mechanism for an EIT-like effect in an all-dielectric nano-metamaterial based on a combined grating and multilayer structure. We study the EIT-like effect by using the finite difference time domain (FDTD) method and two-oscillator coupling theory. The EIT-like effect is formed by destructive interference of a radiative Fabry-Perot (FP) mode and a dark waveguide (WG) mode. The dark WG mode is excited by the guided mode of the dielectric grating instead of by the radiative FP mode. If the FP mode, guided mode and WG mode are represented by the excited states $|1 \rangle$, $|2 \rangle$ and $|3 \rangle$, the two coupling pathways of the EIT-like effect in our metamaterial are $|0 \rangle \to |1 \rangle$ and $|0 \rangle \to |2 \rangle \to |3 \rangle \to |1 \rangle$. The simulated resonant wavelength of the WG mode is consistent with the theoretical result, which supports our explanation of the origin of the EIT-like effect. We also achieve a maximum group refractive index of 913.6 with this metamaterial, which can potentially be applied for slow light device development.
2. Model and theory
Figure 1 shows the proposed all-dielectric nano-metamaterial. It includes a grating of ZnO-doped SiO2 material, a Cytop fluoropolymer layer, a ZnO-doped SiO2 layer, and a Cytop fluoropolymer substrate. Figure 1(a) is the whole-device schematic, and Fig. 1(b) is the profile view. The dielectric grating is defined by its period P, slit width a, and thickness t. The thicknesses of the Cytop fluoropolymer layer and the ZnO-doped SiO2 layer are denoted ${d_1}$ and ${d_2}$, respectively. A p-polarized light illuminates the structure from the grating side with a wavevector ${k_0}$ at an angle of $\theta $ to the y axis.
Fig. 1. Schematic illustration of the proposed all-dielectric nano-metamaterial: (a) whole schematics and (b) profile view.
A two-dimensional FDTD method is adopted for the study in this paper. The mesh size is ${5}\;\textrm{nm} \times {5}\;\textrm{nm}$, which is small enough to resolve electromagnetic wave propagation in the dielectric materials. Periodic boundaries and perfectly matched layers (PML) are applied on the x and y boundaries of the simulation space, respectively. The permittivity of the Cytop fluoropolymer material is set to the experimental data, $\varepsilon = n_1^2 = 1.8117 + i2.6900 \times {10^{ - 3}}$ [33]. The refractive index of the ZnO-doped SiO2 material is ${n_2} = 2.198$. There are three types of resonances in this all-dielectric nano-metamaterial: the FP mode in the slits of the dielectric grating, the guided mode of the grating, and the WG mode in the ZnO-doped SiO2 layer.
In the slits of the dielectric grating, the resonant condition of the FP mode satisfies
(1) $$2{k_{\textrm{FP}}}t + \arg ({\rho_1}{\rho_2}) = 2m\pi,$$
where ${k_{\textrm{FP}}}$ is the wavenumber of the FP mode, m is an integer representing the order of the FP mode, and ${\rho _1}$ and ${\rho _2}$ are the reflection coefficients at the two ends of the slit. The propagation constant of the guided mode of the dielectric grating contains two parts: the parallel wavevector component of the incident light and the wavevector component provided by the grating. It is expressed as
(2) $$\beta = {k_0}\sin \theta + nG,$$
where $G = 2\pi/P$ and n is a positive integer $1, 2, 3, \ldots$. A symmetric three-layer waveguide is formed by the Cytop fluoropolymer layer, the ZnO-doped SiO2 layer, and the Cytop fluoropolymer substrate. In this waveguide, since we perform a two-dimensional simulation with the electric field parallel to the simulation plane, only the TM WG modes can be excited. The dispersion relation of the TM modes is expressed as [34]
(3) $$\sqrt {n_2^2k_0^2 - {\beta ^2}}\, {d_2} = q\pi + 2\arctan \left( {\frac{{n_2^2}}{{n_1^2}}\sqrt {\frac{{{\beta^2} - k_0^2n_1^2}}{{k_0^2n_2^2 - {\beta^2}}}} } \right),$$
where $\beta$ is the propagation constant of the TM mode in the waveguide and $q = 0, 1, 2, \ldots$ corresponds to the mode orders $\textrm{TM}_0$, $\textrm{TM}_1$, $\textrm{TM}_2, \ldots$. In this designed nano-metamaterial structure, the FP mode and the guided mode are excited directly as light interacts with the grating. When the propagation constant of the guided mode matches that of the WG mode, the WG mode is excited subsequently. As the light propagates onward, the transmission and reflection spectra contain the signature of the coupling of the FP mode and the WG mode. We will explain the coupling mechanism of the EIT-like effect in Section 3.1. The EIT-like effect is generated by destructive interference between the radiative FP mode and the dark WG mode, where the FP mode produces the broad peak and the dark WG mode determines the narrow dip of the EIT-like effect. By solving Eqs. (2) and (3), we can theoretically obtain the dip wavelength of the EIT-like effect (i.e., the resonant wavelength of the WG mode).
3. Results and discussions
We discuss the coupling mechanism of the EIT-like effect in detail in Section 3.1. Then, the EIT-like effect is further analyzed with a two-oscillator coupling theory for slow light application in Section 3.2.
3.1 Physical mechanism of the EIT-like effect
We start our study by analyzing the reflection and transmission responses of the all-dielectric nano-metamaterial with different grating thicknesses t under TM polarized light, as shown in Figs. 2(a) and 2(c). The geometrical parameters of the metamaterial are $P = {500}\;\textrm{nm}$, $a = {100}\;\textrm{nm}$, ${d_1} = {700}\;\textrm{nm}$ and ${d_2} = {100}\;\textrm{nm}$. A normally incident plane wave is considered. It is found that a narrow mode interferes with a broad mode for TM polarization. We then extract the reflection spectra for grating thicknesses $t = {160}\;\textrm{nm}$ and $t = {525}\;\textrm{nm}$, as illustrated in Figs. 2(e) and 2(f) and indicated by the black dashed lines in Fig. 2(a). An EIT-like effect is exhibited for both $t = {160}\;\textrm{nm}$ and $t = {525}\;\textrm{nm}$. The full width at half maximum (FWHM) of the EIT-like dip is 2 nm for $t = {160}\;\textrm{nm}$ and 1.7 nm for $t = {525}\;\textrm{nm}$.
3. Results and discussion We discuss the coupling mechanism of the EIT-like effect in detail in Section 3.1. Then, the EIT-like effect is further analyzed with a two-oscillator coupling theory for slow light applications in Section 3.2. 3.1 Physical mechanism of the EIT-like effect We start by analyzing the reflection and transmission responses of the all-dielectric nano-metamaterial with different grating thicknesses t under TM-polarized light, as shown in Figs. 2(a) and 2(c). The geometrical parameters of the metamaterial are $P = {500}\;\textrm{nm}$, $a = {100}\;\textrm{nm}$, ${d_1} = {700}\;\textrm{nm}$ and ${d_2} = {100}\;\textrm{nm}$, and a normally incident plane wave is considered. A narrow mode is found to interfere with a broad mode for TM polarization. We then extract the reflection spectra for grating thicknesses of $t = {160}\;\textrm{nm}$ and $t = {525}\;\textrm{nm}$, indicated by the black dashed lines in Fig. 2(a), as illustrated in Figs. 2(e) and 2(f). An EIT-like effect is exhibited for both thicknesses. The full width at half maximum (FWHM) of the EIT-like dip is 2 nm for $t = {160}\;\textrm{nm}$ and 1.7 nm for $t = {525}\;\textrm{nm}$. The peak and dip wavelengths of the EIT-like effect are also marked in red in the two subfigures. Fig. 2. Reflection spectra of the all-dielectric nano-metamaterial with different t under (a) TM and (b) TE polarized light; transmission spectra with different t under (c) TM and (d) TE polarized light; reflection spectra with (e) $t = {160}\;\textrm{nm}$ and (f) $t = {525}\;\textrm{nm}$ under TM polarization; (g) transition levels of the EIT-like effect in the all-dielectric nano-metamaterial. To reveal the physical mechanism of the EIT-like effect, detailed near-field distributions are presented. Figures 3(a)–3(c) illustrate the electromagnetic field distributions at the peaks and dip of the EIT-like lineshape for $t = {160}\;\textrm{nm}$, while Figs. 3(d)–3(f) display the corresponding results for $t = {525}\;\textrm{nm}$. Taking $t = {160}\;\textrm{nm}$ as an example, a 1st-order FP mode is formed in the grating slit at $\lambda = 721.8\;\textrm{nm}$ and $\lambda = 728.8\;\textrm{nm}$, as shown in Figs. 3(a) and 3(c). These two wavelengths correspond to the peaks of the EIT-like effect. Figure 3(b) shows that a WG mode is excited in the ZnO-doped SiO2 layer at the EIT dip of $\lambda = 725.0\;\textrm{nm}$. For $t = {525}\;\textrm{nm}$, Figs. 3(d)–3(f) illustrate similar results except that the FP mode is 2nd order. These field distributions show that the EIT-like effect is formed by the coupling of the 1st- or 2nd-order FP mode with the WG mode. The theoretical resonant wavelength of the WG mode, derived from Eqs. (2) and (3), is 724.9 nm. The simulated result of 725.0 nm is consistent with the theory. This consistency demonstrates that the WG mode is excited by the guided mode. Figures 2(b) and 2(d) illustrate the reflection and transmission responses with different t under TE polarization. For TE polarization there is no structure parallel to the electric field direction; therefore, no TE waveguide mode is excited and the EIT-like effect does not appear. Fig. 3. Electric and magnetic field distributions for (a) $t = {160}\;\textrm{nm}$, $\lambda = {721.8}\;\textrm{nm}$; (b) $t = {160}\;\textrm{nm}$, $\lambda = {725.0}\;\textrm{nm}$; (c) $t = {160}\;\textrm{nm}$, $\lambda = {728.8}\;\textrm{nm}$; (d) $t = {525}\;\textrm{nm}$, $\lambda = {722.4}\;\textrm{nm}$; (e) $t = {525}\;\textrm{nm}$, $\lambda = {725.0}\;\textrm{nm}$; and (f) $t = {525}\;\textrm{nm}$, $\lambda = {728.2}\;\textrm{nm}$. In the metamaterial, the broad FP mode is supported by the dielectric material, which has a positive real part of the permittivity at optical frequencies. The FP mode therefore appears as a broad peak in the reflection, as shown in Fig. 2(a). When the broad FP peak couples with the narrow WG mode, the narrow EIT dip is formed, as shown in Figs. 2(e) and 2(f). To explicitly reveal the origin of the EIT-like effect, we draw an analogy to a molecular transition process, as shown in Fig. 2(g). The ground state is denoted $|0 \rangle$. The FP mode, guided mode and WG mode are represented by the excited states $|1 \rangle$, $|2 \rangle$ and $|3 \rangle$. The radiative FP mode is directly excited in the slit of the dielectric grating when the external light field matches the FP resonance condition. It corresponds to the transition pathway $|0 \rangle \to |1 \rangle$. The WG mode cannot be excited by the incident light directly, which means that the pathway $|0 \rangle \to |3 \rangle$ is forbidden.
The direct coupling between the FP mode and the WG mode is also impossible because they propagate in perpendicular directions. Thus, the pathway $|0 \rangle \to |1 \rangle \to |3 \rangle \to |1 \rangle$ cannot be realized. The dark WG mode is excited by the guided mode of the dielectric grating, which has already been demonstrated by the consistency between the simulated and theoretical wavelengths of the WG mode. Thus, the second transition pathway is $|0 \rangle \to |2 \rangle \to |3 \rangle \to |1 \rangle$. An EIT-like effect is formed by the coupling of the two transition pathways. Subsequently, we focus on the influence of the grating period P. Figures 4(a) and 4(b) show reflection spectra of the metamaterial with different grating thicknesses t for $P = {600}\;\textrm{nm}$ and $P = {700}\;\textrm{nm}$. A coupling between the FP mode and the WG mode is exhibited for these two periods. The resonant wavelength of the WG mode redshifts because a larger P results in a smaller $\beta$ and hence a smaller ${k_0}$ and larger $\lambda$, according to Eqs. (2) and (3). The simulated wavelengths of the WG mode are 851.8 nm and 979.6 nm, which match the theoretical results of 851.3 nm and 979.7 nm. Figures 4(c) and 4(d) illustrate the reflection spectra as P changes continuously from 500 nm to 800 nm for $t = {160}\;\textrm{nm}$ and $t = {525}\;\textrm{nm}$. As P increases, the WG mode redshifts out of the 1st-order (or 2nd-order) FP mode for $t = {160}\;\textrm{nm}$ (or $t = {525}\;\textrm{nm}$). Fig. 4. Reflection spectra with different t for (a) $P = {600}\;\textrm{nm}$ and (b) $P = {700}\;\textrm{nm}$; reflection spectra with different P for (c) $t = {160}\;\textrm{nm}$ and (d) $t = {525}\;\textrm{nm}$.
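As a quick consistency check of this redshift (our arithmetic, using the values quoted above): at normal incidence Eq. (2) gives $\beta = 2\pi/P$, so the WG resonance condition fixes an effective index $n_{\textrm{eff}} = \beta/k_0 = \lambda/P$ that must lie between $n_1$ and $n_2$. The quoted wavelengths give $$n_{\textrm{eff}} = \frac{\lambda}{P}:\quad \frac{725.0}{500} \approx 1.450,\quad \frac{851.3}{600} \approx 1.419,\quad \frac{979.7}{700} \approx 1.400,$$ so $\lambda$ grows nearly in proportion to P while $n_{\textrm{eff}}$ drifts slowly between the bounds $n_1 = 1.346$ and $n_2 = 2.198$.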
Then, we investigate the effect of the grating slit width a on the spectral response. Figures 5(a) and 5(b) show the reflection spectra with different grating thicknesses t for $a = {200}\;\textrm{nm}$ and $a = {400}\;\textrm{nm}$. A coupling between the FP mode and the WG mode is observed for $a = {200}\;\textrm{nm}$. The 2nd- and 3rd-order FP modes are more distinct for $a = {200}\;\textrm{nm}$ than for $a = {100}\;\textrm{nm}$ in Fig. 2(a). For $a = {400}\;\textrm{nm}$, the WG mode cannot be excited because the duty cycle of the ZnO-doped SiO2 material in one grating period is 1:5, which is too small to support the guided mode. The FP mode also fades out for such a small duty cycle. Figures 5(c) and 5(d) exhibit the spectra as a changes continuously from 50 nm to 450 nm for $t = {160}\;\textrm{nm}$ and $t = {525}\;\textrm{nm}$. For $t = {160}\;\textrm{nm}$, only the 1st-order FP mode couples with the WG mode. For $t = {525}\;\textrm{nm}$, both the 1st- and 2nd-order FP modes couple with the WG mode; the upper and lower red regions in Fig. 5(d) represent the 1st- and 2nd-order FP modes. Whenever the WG mode is excited, regardless of how a is changed, the simulated wavelength of the WG mode remains at 725.0 nm. This agrees with the theoretical Eqs. (2) and (3), which imply that the slit width has no effect on the WG mode. Fig. 5. Reflection spectra with different t for (a) $a = {200}\;\textrm{nm}$ and (b) $a = {400}\;\textrm{nm}$; reflection spectra with continuously changing a for (c) $t = {160}\;\textrm{nm}$ and (d) $t = {525}\;\textrm{nm}$. To verify the 1st-order FP mode in the upper red region of Fig. 5(d), the reflection spectrum for $a = {300}\;\textrm{nm}$ is plotted in Fig. 6(a), with the wavelengths of the peaks and dip of the EIT-like effect marked. Figures 6(b)–6(d) illustrate the magnetic field distributions at the peaks and dip of the EIT-like lineshape. The EIT-like effect is found to be generated by the coupling of the 1st-order FP mode and the WG mode for $a = {300}\;\textrm{nm}$ and $t = {525}\;\textrm{nm}$. Fig. 6. (a) Reflection spectrum with $a = {300}\;\textrm{nm}$ and $t = {525}\;\textrm{nm}$ from Fig. 5(d); magnetic field distributions at (b) $\lambda = {722.0}\;\textrm{nm}$, (c) $\lambda = {725.0}\;\textrm{nm}$, and (d) $\lambda = {727.2}\;\textrm{nm}$. Afterwards, we also study the effect of the thickness ${d_1}$ of the Cytop fluoropolymer layer and the thickness ${d_2}$ of the ZnO-doped SiO2 layer. Figures 7(a) and 7(b) illustrate reflection spectra with different ${d_1}$ and ${d_2}$ for $t = {160}\;\textrm{nm}$, while Figs. 7(c) and 7(d) display the corresponding results for $t = {525}\;\textrm{nm}$. We find that ${d_1}$ determines the coupling strength between the FP mode and the WG mode: as ${d_1}$ increases, the coupling of the two modes becomes weaker and the EIT dip becomes narrower. The EIT-like effect almost disappears for ${d_1}\;>\;{1000}\;\textrm{nm}$, which indicates that the interaction distance over which the FP and WG modes generate the EIT-like effect is smaller than 1000 nm. As ${d_2}$ increases, higher-order TM modes appear, and they interfere with the 1st-order FP mode for $t = {160}\;\textrm{nm}$ and with the 2nd-order FP mode for $t = {525}\;\textrm{nm}$. If the resonant wavelength of the TM mode is fixed at 725.0 nm, the theoretical values of ${d_2}$ for the $\textrm{TM}_0$, $\textrm{TM}_1$ and $\textrm{TM}_2$ modes, derived from Eq. (3) with $q = 0, 1, 2$, are 100 nm, 319.4 nm and 538.8 nm. Under these ${d_2}$ conditions, the EIT-like effect is also generated. The electric and magnetic field distributions of the $\textrm{TM}_0$ mode with ${d_2} = {100}\;\textrm{nm}$ for $t = {160}\;\textrm{nm}$ and $t = {525}\;\textrm{nm}$ have already been shown in Figs. 3(b) and 3(e). Figures 8(a)–8(d) illustrate the electric and magnetic field distributions of the $\textrm{TM}_1$ and $\textrm{TM}_2$ modes with ${d_2} = {319.4}\;\textrm{nm}$ and ${d_2} = {538.8}\;\textrm{nm}$ for $t = {160}\;\textrm{nm}$ and $t = {525}\;\textrm{nm}$. The grating thicknesses t are indicated by white outlines in the subfigures. The magnetic field distributions clearly show two maxima for the $\textrm{TM}_1$ mode and three maxima for the $\textrm{TM}_2$ mode along the y direction in the ZnO-doped SiO2 waveguide. Fig. 7. Reflection spectra with different (a) ${d_1}$ and (b) ${d_2}$ for $t = {160}\;\textrm{nm}$; reflection spectra with different (c) ${d_1}$ and (d) ${d_2}$ for $t = {525}\;\textrm{nm}$. Fig. 8. Electric and magnetic field distributions of $\textrm{TM}_1$ and $\textrm{TM}_2$: (a) $t = {160}\;\textrm{nm}$, ${d_2} = {319.4}\;\textrm{nm}$; (b) $t = {160}\;\textrm{nm}$, ${d_2} = {538.8}\;\textrm{nm}$; (c) $t = {525}\;\textrm{nm}$, ${d_2} = {319.4}\;\textrm{nm}$; (d) $t = {525}\;\textrm{nm}$, ${d_2} = {538.8}\;\textrm{nm}$.
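These three thicknesses can be checked directly (our arithmetic, not from the paper): for fixed $\lambda$ and $\beta$, the left-hand side of Eq. (3) grows linearly with ${d_2}$, so successive TM orders are spaced by $$\Delta d_2 = \frac{\pi}{\sqrt{n_2^2 k_0^2 - \beta^2}} \approx \frac{\pi}{0.01432\;\textrm{nm}^{-1}} \approx 219.4\;\textrm{nm}$$ at $\lambda = 725.0\;\textrm{nm}$ with $P = 500\;\textrm{nm}$, which reproduces the quoted sequence 100 nm, 319.4 nm, 538.8 nm.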
It should be noted that a bright red region appears at $\lambda = {938.4}\;\textrm{nm}$ for $t = {525}\;\textrm{nm}$ when ${d_1}$ and ${d_2}$ are changed, which differs from the results for $t = {160}\;\textrm{nm}$. To clarify its mechanism, we choose four typical points A, B, C and D, denoted by white letters in Fig. 7. Figure 9 presents the electric and magnetic field distributions at points A, B, C and D; the parameters of these four points are listed in the caption of Fig. 9. From Figs. 9(b) and 9(d), we can see that the resonance at $\lambda = {938.4}\;\textrm{nm}$ in Figs. 7(c) and 7(d) is due to the presence of the 1st-order FP mode in the grating slit. Fig. 9. Electric and magnetic field distributions at $\lambda = {938.4}\;\textrm{nm}$ for (a) point A: $t = {160}\;\textrm{nm}$, ${d_1} = {675}\;\textrm{nm}$, ${d_2} = {100}\;\textrm{nm}$; (b) point B: $t = {525}\;\textrm{nm}$, ${d_1} = {675}\;\textrm{nm}$, ${d_2} = {100}\;\textrm{nm}$; (c) point C: $t = {160}\;\textrm{nm}$, ${d_1} = {700}\;\textrm{nm}$, ${d_2} = {275}\;\textrm{nm}$; (d) point D: $t = {525}\;\textrm{nm}$, ${d_1} = {700}\;\textrm{nm}$, ${d_2} = {275}\;\textrm{nm}$. We further examine the effect of the real part of the refractive index (${n_1}^\prime$) of the fluoropolymer layer and the fluoropolymer substrate. The reflection spectra are shown in Figs. 10(a) and 10(b). The EIT lineshape turns into a Fano lineshape as ${n_1}^\prime$ changes from 1.346 to 1.40 in the fluoropolymer layer and substrate. The Fano lineshape is asymmetric because the narrow WG mode couples to one edge of the FP mode. The WG mode redshifts as ${n_1}^\prime$ increases. The theoretical wavelengths of the WG mode are 724.9 nm, 725.8 nm, 727.9 nm, 730.2 nm, 732.5 nm, 734.9 nm and 737.3 nm, as marked by the colored lines at the bottom of Fig. 10. The simulated wavelengths of the WG mode are 725.0 nm, 725.8 nm, 728.0 nm, 730.0 nm, 732.4 nm, 734.6 nm and 737.0 nm when ${n_1}^\prime$ is changed in the fluoropolymer layer, and 725.0 nm, 725.8 nm, 728.0 nm, 730.2 nm, 732.4 nm, 734.8 nm and 737.4 nm when ${n_1}^\prime$ is changed in the fluoropolymer substrate. The simulated results agree with the theoretical results. The refractive index (RI) sensitivity is defined as $S = \Delta\lambda / \Delta n$. According to this definition, the RI sensitivity of the all-dielectric nano-metamaterial is 227.8 nm/RIU. Another important parameter of the metamaterial is the Q-factor, expressed as $Q = f / \Delta f$. In this metamaterial design we achieve a Q-factor of 453.1, which is much higher than the Q-factors of previously reported plasmonic works [26,35,36]. The Q-factor is improved because the all-dielectric structure avoids the large non-radiative ohmic loss of metallic metamaterials. Fig. 10. Reflection spectra of the all-dielectric nano-metamaterial with different ${n_1}^\prime$ (a) in the fluoropolymer layer and (b) in the fluoropolymer substrate.
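For concreteness (our arithmetic, from the numbers quoted above): the index change is $\Delta n = 1.40 - 1.346 = 0.054$ and the dip shift is $\Delta\lambda \approx 12.3\;\textrm{nm}$, so $$S = \frac{\Delta\lambda}{\Delta n} \approx \frac{12.3\;\textrm{nm}}{0.054} \approx 227.8\;\textrm{nm/RIU}.$$ Likewise, since $Q = f/\Delta f = \lambda/\Delta\lambda$ at a fixed resonance, $Q = 453.1$ at $\lambda = 725.0\;\textrm{nm}$ corresponds to a linewidth of $725.0/453.1 \approx 1.6\;\textrm{nm}$, consistent with the FWHM of about 1.7 nm quoted in Section 3.1.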
3.2 Two-oscillator coupling theory and slow light application In the all-dielectric nano-metamaterial, the EIT-like effect is formed by the destructive coupling of a radiative FP resonance and a dark WG mode. In analogy with EIT, these two modes can be viewed as two "molecule" oscillators [26,27]; that is, the coupling between the two modes is treated as a two-oscillator coupling. To provide a quantitative description of our EIT-like effect, the analytical model of Eq. (4) is introduced. The radiative oscillator ${x_1}(t)$ oscillates at the central frequency ${\omega _0}$ and couples strongly to an external field ${E_0}{e^{ - i\omega t}}$. The dark oscillator ${x_2}(t)$ couples only weakly to the external field, so the driving term on the right-hand side of its equation is set to zero. The central frequency of the dark oscillator is ${\omega _0} + \delta$, where $\delta$ denotes the shift of the central frequency of the dark oscillator relative to the radiative one; $\kappa$ is the coupling coefficient between the two oscillators, and ${\gamma _1}$ and ${\gamma _2}$ represent their respective damping rates: (4)$$\left\{ \begin{array}{l} \frac{\partial^2 x_1(t)}{\partial t^2} + \gamma_1\frac{\partial x_1(t)}{\partial t} + \omega_0^2\, x_1(t) + 2\kappa\frac{\partial x_2(t)}{\partial t} = E_0 e^{-i\omega t}\\ \frac{\partial^2 x_2(t)}{\partial t^2} + \gamma_2\frac{\partial x_2(t)}{\partial t} + (\omega_0+\delta)^2 x_2(t) - 2\kappa\frac{\partial x_1(t)}{\partial t} = 0 \end{array} \right.$$ When the system reaches the steady state, the two oscillators share the same ${e^{ - i\omega t}}$ time dependence under the driving field. By solving Eq. (4), one can obtain expressions for the two modes. Here we focus on ${x_1}(t)$, because its oscillation contains the influence of both the external field and the dark mode. ${x_1}(t)$ is written as (5)$$x_1(t) = \frac{1}{2\omega_0}\cdot\frac{\omega-\omega_0-\delta+i\gamma_2/2}{\kappa^2-(\omega-\omega_0-\delta+i\gamma_2/2)(\omega-\omega_0+i\gamma_1/2)}\,E_0 e^{-i\omega t}.$$ The energy dissipation is related to the imaginary part of the amplitude of the radiative mode, which is expressed as (6)$$A(\omega) = \operatorname{Im}\frac{f\,(\omega-\omega_0-\delta+i\gamma_2/2)}{\kappa^2-(\omega-\omega_0-\delta+i\gamma_2/2)(\omega-\omega_0+i\gamma_1/2)},$$ where f is a coefficient that absorbs ${E_0}/2{\omega _0}$ and the strength with which the radiative FP resonance couples to the external field. In this paper, the energy dissipation is characterized by the absorption. We now extract the coupling parameters of these two modes and discuss a possible slow light application of the metamaterial. We choose the EIT-like effect with $t = {160}\;\textrm{nm}$ because of its high contrast ratio, as shown in Fig. 2(e). Figure 11(a) illustrates the reflection and transmission spectra of the metamaterial with ${d_1}$ changing from 450 nm to 650 nm in steps of 50 nm. The black lines mark the reflection spectra and the blue lines denote the transmission spectra; the reflection and transmission are nearly complementary. The spectral width of the EIT dip narrows monotonically as ${d_1}$ increases, as shown in Fig. 7(a). The absorption spectra are calculated as (7)$$A = 1 - R - T,$$ where A, R and T represent the absorption, reflection and transmission of the metamaterial, respectively. The absorption spectra for gradually changing ${d_1}$ are plotted as the red lines in Fig. 11(b). We then fit the absorption spectra using Eq. (6); the results are displayed as the purple lines.
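To make the fitting function concrete, here is our own minimal Java sketch of Eq. (6); it is not from the paper, and the parameter values in main are illustrative placeholders, not the fitted values of Fig. 11. (Java 16+ for the record syntax.)

public class EitLineshape {
    record Complex(double re, double im) {
        Complex minus(Complex o) { return new Complex(re - o.re, im - o.im); }
        Complex times(Complex o) { return new Complex(re * o.re - im * o.im, re * o.im + im * o.re); }
        Complex div(Complex o) {
            double d = o.re * o.re + o.im * o.im;
            return new Complex((re * o.re + im * o.im) / d, (im * o.re - re * o.im) / d);
        }
    }

    // A(w) = Im[ f*(w - w0 - delta + i g2/2) / (k^2 - (w - w0 - delta + i g2/2)(w - w0 + i g1/2)) ]
    static double absorption(double w, double w0, double delta,
                             double g1, double g2, double kappa, double f) {
        Complex a = new Complex(w - w0 - delta, g2 / 2);
        Complex b = new Complex(w - w0, g1 / 2);
        Complex num = new Complex(f, 0).times(a);
        Complex den = new Complex(kappa * kappa, 0).minus(a.times(b));
        return num.div(den).im();
    }

    public static void main(String[] args) {
        // Sweep detuning in units of the radiative linewidth g1 (illustrative values).
        double w0 = 0, delta = 0, g1 = 1.0, g2 = 0.01, kappa = 0.3, f = 1.0;
        for (double w = -2; w <= 2; w += 0.05)
            System.out.printf("%.2f %.4f%n", w, absorption(w, w0, delta, g1, g2, kappa, f));
    }
}

Sweeping the frequency produces the expected broad absorption peak with a narrow transparency dip at w = w0 + delta; fitting this function to the FDTD absorption by least squares is one way to extract kappa, gamma1 and gamma2 of the kind reported in Fig. 11(b).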
The fitted parameters of the two-oscillator model of Eq. (6) are listed to the right of each subfigure of Fig. 11(b). With these fitted parameters, the analytical curves reproduce the characteristics of the simulated absorption spectra well. Fig. 11. (a) Reflection and transmission spectra and (b) absorption spectra with ${d_1}$ changing from 450 nm to 650 nm in steps of 50 nm, for $t = {160}\;\textrm{nm}$. In the FDTD simulation, the coupling between the FP mode and the WG mode is controlled by ${d_1}$: Figs. 7(a) and 7(c) show that this coupling becomes weaker as ${d_1}$ increases. In the two-oscillator coupling theory, $\kappa$ is the coupling coefficient between the two oscillators, and a smaller $\kappa$ indicates weaker coupling. As shown in Fig. 11(b), the fitted $\kappa$ decreases as ${d_1}$ increases from 450 nm to 650 nm, which illustrates the consistency between the FDTD simulation and the two-oscillator coupling theory. The polarizability $\vec{P}$ of a molecule is proportional to the complex susceptibility $\chi$ and the external field $\vec{E}$, with $\chi = {\chi _\textrm{r}} + i{\chi _\textrm{m}}$. In the "molecular" analogy, the polarizability of the radiative "molecule" oscillator scales with its displacement ${x_1}(t)$ [26]. The real part of the susceptibility is therefore proportional to the real part of the amplitude of ${x_1}(t)$, so ${\chi _\textrm{r}}$ is expressed as (8)$$\chi_\textrm{r}(\omega) = \operatorname{Re}\frac{f\,(\omega-\omega_0-\delta+i\gamma_2/2)}{\kappa^2-(\omega-\omega_0-\delta+i\gamma_2/2)(\omega-\omega_0+i\gamma_1/2)}.$$ Using Eq. (8) and the fitted parameters in Fig. 11(b), we obtain the ${\chi _\textrm{r}}$ curves in Fig. 12(a). The spectral width of ${\chi _\textrm{r}}$ decreases as ${d_1}$ grows, and the group refractive index can be roughly estimated from the slope of ${\chi _\textrm{r}}$. Fig. 12. (a) ${\chi _\textrm{r}}$ and (b) ${n_\textrm{g}}$ with different ${d_1}$ for $t = {160}\;\textrm{nm}$. To quantitatively describe the EIT-like effect for slow light applications, the group refractive index is extracted from ${n_\textrm{g}} = c \times ({\textrm{d}k}/{\textrm{d}\omega})$. Following Refs. [30,37], the dispersion relation of the metamaterial is determined by (9)$$k = \frac{\phi - \phi_0}{L} + \frac{\omega}{c},$$ where $\phi - {\phi _0}$ is the simulated phase difference with and without the metamaterial. In our system, the effective propagation length is $L = 2 \times (t + d_1 + d_2)$, taking reflection into account. Differentiating both sides of Eq. (9), we obtain the relation (10)$$\frac{\textrm{d}k}{\textrm{d}\omega} = \frac{1}{L} \times \frac{\textrm{d}(\phi - \phi_0)}{\textrm{d}\omega} + \frac{1}{c}.$$ Substituting Eq. (10) into the expression for ${n_\textrm{g}}$, the group refractive index is derived as (11)$${n_\textrm{g}} = \frac{c}{L} \times \frac{\textrm{d}(\phi - \phi_0)}{\textrm{d}\omega} + 1.$$
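Numerically, Eq. (11) amounts to differentiating the simulated phase difference with respect to frequency. A minimal sketch (ours, assuming the omega and phi arrays come from the FDTD run):

final class GroupIndex {
    static final double C = 2.99792458e8; // speed of light (m/s)

    // omega[i]: angular frequencies; phi[i]: simulated phase difference
    // (phi - phi0) at omega[i]; leff: effective propagation length L (m).
    static double[] groupIndex(double[] omega, double[] phi, double leff) {
        double[] ng = new double[omega.length];
        for (int i = 1; i < omega.length - 1; i++) {
            double dPhiDw = (phi[i + 1] - phi[i - 1]) / (omega[i + 1] - omega[i - 1]);
            ng[i] = (C / leff) * dPhiDw + 1; // Eq. (11), central finite difference
        }
        ng[0] = ng[1];
        ng[ng.length - 1] = ng[ng.length - 2]; // pad the endpoints
        return ng;
    }
}

With L = 2*(t + d1 + d2) as defined above, e.g. 2*(160 + 450 + 100) nm for the d1 = 450 nm case, this evaluation yields the curves of the kind shown in Fig. 12(b).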
Using Eq. (11), the group refractive index for different ${d_1}$ is presented in Fig. 12(b). The group refractive index reaches a maximum of 913.6 for ${d_1} = {450}\;\textrm{nm}$. Finally, we investigate the spectral response of the all-dielectric nano-metamaterial with multiple grouped WG structures. Figure 13 shows reflection spectra with double-grouped and triple-grouped WG structures for $t = {160}\;\textrm{nm}$ and $t = {525}\;\textrm{nm}$. Multiple EIT-like dips appear when more grouped WG structures are introduced into the metamaterial, and the number of dips is consistent with the number of grouped WG structures: there are two EIT dips for the double-grouped WG structures and three dips for the triple-grouped WG structures. The ability to obtain multiple EIT-like dips by simply changing the number of grouped WG structures will be beneficial for further slow light applications. Fig. 13. Reflection spectra with double-grouped and triple-grouped WG structures for (a) $t = {160}\;\textrm{nm}$ and (b) $t = {525}\;\textrm{nm}$. In summary, we have presented a novel coupling mechanism to realize an EIT-like effect in an all-dielectric nano-metamaterial. The EIT-like effect is generated through destructive interference between a radiative FP mode and a dark WG mode. The dark WG mode is excited by the guided mode of the dielectric grating instead of by the radiative FP mode. The EIT-like effect of the metamaterial is fully investigated by adjusting the grating thickness t, grating period P, slit width a, fluoropolymer layer thickness ${d_1}$, WG layer thickness ${d_2}$, and the refractive indices of the fluoropolymer layer and substrate. The resonant wavelengths of the WG mode under these parameter changes agree well with the theoretical results. The EIT-like effect is further studied with a two-oscillator coupling theory. To verify the application potential of this metamaterial design, a group refractive index of 913.6 is achieved theoretically, which can be further investigated for slow light applications. This work provides a useful method to realize EIT in all-dielectric nano-metamaterials. National Natural Science Foundation of China (11405020, 61520106013, 61727816, 51661145025). 1. R. S. Tucker, P. C. Ku, and C. J. Chang-Hasnain, "Slow-light optical buffers: capabilities and fundamental limitations," J. Lightwave Technol. 23(12), 4046–4066 (2005). [CrossRef] 2. A. Tyszka-Zawadzka, B. Janaszek, and P. Szczepański, "Tunable slow light in graphene-based hyperbolic metamaterial waveguide operating in SCLU telecom bands," Opt. Express 25(7), 7263–7272 (2017). [CrossRef] 3. F. Bagci and B. Akaoglu, "Enhancement of buffer capability in slow light photonic crystal waveguides with extended lattice constants," Opt. Quantum Electron. 47(3), 791–806 (2015). [CrossRef] 4. A. S. Losev and A. S. Troshin, "Twofold light-pulse regeneration under conditions of electromagnetically induced transparency," J. Opt. Technol. 80(7), 431–434 (2013). [CrossRef] 5. M. Ebnali-Heidari, C. Monat, C. Grillet, and M. Moravvej-Farshi, "A proposal for enhancing four-wave mixing in slow light engineered photonic crystal waveguides and its application to optical regeneration," Opt. Express 17(20), 18340–18353 (2009). [CrossRef] 6. S. Baur, D. Tiarks, G. Rempe, and S. Dürr, "Single-photon switch based on Rydberg blockade," Phys. Rev. Lett. 112(7), 073901 (2014). [CrossRef] 7. M. Bajcsy, S. Hofferberth, V. Balic, T. Peyronel, M. Hafezi, A. S. Zibrov, V. Vuletic, and M. D.
Lukin, "Efficient all-optical switching using slow light within a hollow fiber," Phys. Rev. Lett. 102(20), 203902 (2009). [CrossRef] 8. K. Qin, S. Hu, S. T. Retterer, I. I. Kravchenko, and S. M. Weiss, "Slow light Mach–Zehnder interferometer as label-free biosensor with scalable sensitivity," Opt. Lett. 41(4), 753–756 (2016). [CrossRef] 9. O. S. Magaña-Loaiza, B. Gao, S. A. Schulz, K. M. Awan, J. Upham, K. Dolgaleva, and R. W. Boyd, "Enhanced spectral sensitivity of a chip-scale photonic-crystal slow-light interferometer," Opt. Lett. 41(7), 1431–1434 (2016). [CrossRef] 10. L. V. Hau, S. E. Harris, Z. Dutton, and C. H. Behroozi, "Light speed reduction to 17 metres per second in an ultracold atomic gas," Nature 397(6720), 594–598 (1999). [CrossRef] 11. C. Jiang, H. Liu, Y. Cui, X. Li, G. Chen, and B. Chen, "Electromagnetically induced transparency and slow light in two-mode optomechanics," Opt. Express 21(10), 12165–12173 (2013). [CrossRef] 12. G. Lai, R. Liang, Y. Zhang, Z. Bian, L. Yi, G. Zhan, and R. Zhao, "Double plasmonic nanodisks design for electromagnetically induced transparency and slow light," Opt. Express 23(5), 6554–6561 (2015). [CrossRef] 13. A. V. Turukhin, V. S. Sudarshanam, M. S. Shahriar, J. A. Musser, B. S. Ham, and P. R. Hemmer, "Observation of ultraslow and stored light pulses in a solid," Phys. Rev. Lett. 88(2), 023602 (2001). [CrossRef] 14. P. Palinginis, S. Crankshaw, F. Sedgwick, E. T. Kim, M. Moewe, C. J. Chang-Hasnain, H. Wang, and S. L. Chuang, "Ultraslow light (< 200 m/s) propagation in a semiconductor nanostructure," Appl. Phys. Lett. 87(17), 171102 (2005). [CrossRef] 15. E. Shumakher, A. Willinger, R. Blit, D. Dahan, and G. Eisenstein, "Large tunable delay with low distortion of 10 Gbit/s data in a slow light system based on narrow band fiber parametric amplification," Opt. Express 14(19), 8540–8545 (2006). [CrossRef] 16. D. Dahan and G. Eisenstein, "Tunable all optical delay via slow and fast light propagation in a Raman assisted fiber optical parametric amplifier: a route to all optical buffering," Opt. Express 13(16), 6234–6249 (2005). [CrossRef] 17. Y. Okawachi, M. S. Bigelow, J. E. Sharping, Z. Zhu, A. Schweinsberg, D. J. Gauthier, R. W. Boyd, and A. L. Gaeta, "Tunable all-optical delays via Brillouin slow light in an optical fiber," Phys. Rev. Lett. 94(15), 153902 (2005). [CrossRef] 18. M. Merklein, I. V. Kabakova, T. F. S. Büttner, D. Y. Choi, B. Luther-Davies, S. J. Madden, and B. J. Eggleton, "Enhancing and inhibiting stimulated Brillouin scattering in photonic integrated circuits," Nat. Commun. 6(1), 6396 (2015). [CrossRef] 19. T. Baba, "Slow light in photonic crystals," Nat. Photonics 2(8), 465–473 (2008). [CrossRef] 20. M. Minkov and V. Savona, "Wide-band slow light in compact photonic crystal coupled-cavity waveguides," Optica 2(7), 631–634 (2015). [CrossRef] 21. K. J. Boller, A. Imamoglu, and S. E. Harris, "Observation of electromagnetically induced transparency," Phys. Rev. Lett. 66(20), 2593–2596 (1991). [CrossRef] 22. D. F. Phillips, A. Fleischhauer, A. Mair, R. L. Walsworth, and M. D. Lukin, "Storage of light in atomic vapor," Phys. Rev. Lett. 86(5), 783–786 (2001). [CrossRef] 23. B. Wu, J. F. Hulbert, E. J. Lunt, K. Hurd, A. R. Hawkins, and H. Schmidt, "Slow light on a chip via atomic quantum state control," Nat. Photonics 4(11), 776–779 (2010). [CrossRef] 24. K. Totsuka, N. Kobayashi, and M. Tomita, "Slow light in coupled-resonator-induced transparency," Phys. Rev. Lett. 98(21), 213904 (2007). [CrossRef] 25. J. Gu, R. Singh, X. Liu, X. 
Zhang, Y. Ma, S. Zhang, S. A. Maier, Z. Tian, A. K. Azad, H. T. Chen, A. J. Taylor, J. Han, and W. Zhang, "Active control of electromagnetically induced transparency analogue in terahertz metamaterials," Nat. Commun. 3(1), 1151 (2012). [CrossRef] 26. S. Zhang, D. A. Genov, Y. Wang, M. Liu, and X. Zhang, "Plasmon-Induced Transparency in Metamaterials," Phys. Rev. Lett. 101(4), 047401 (2008). [CrossRef] 27. R. Taubert, M. Hentschel, J. Kästel, and H. Giessen, "Classical analog of electromagnetically induced absorption in plasmonics," Nano Lett. 12(3), 1367–1371 (2012). [CrossRef] 28. T. T. Kim, H. D. Kim, R. Zhao, S. S. Oh, T. Ha, D. S. Chung, Y. H. Lee, B. Min, and S. Zhang, "Electrically tunable slow light using graphene metamaterials," ACS Photonics 5(5), 1800–1807 (2018). [CrossRef] 29. J. Hu, E. Yao, W. Xie, W. Liu, D. Li, Y. Lu, and Q. Zhan, "Strong longitudinal coupling of Tamm plasmon polaritons in graphene/DBR/Ag hybrid structure," Opt. Express 27(13), 18642–18652 (2019). [CrossRef] 30. S.-G. Lee, S.-Y. Jung, H.-S. Kim, S. Lee, and J.-M. Park, "Electromagnetically induced transparency based on guided-mode resonances," Opt. Lett. 40(18), 4241–4244 (2015). [CrossRef] 31. C. Sui, B. Han, T. Lang, X. Li, X. Jing, and Z. Hong, "Electromagnetically induced transparency in an all-dielectric metamaterial-waveguide with large group index," IEEE Photonics J. 9(5), 1–8 (2017). [CrossRef] 32. B. Han, X. Li, C. Sui, J. Diao, X. Jing, and Z. Hong, "Analog of electromagnetically induced transparency in an E-shaped all-dielectric metasurface based on toroidal dipolar response," Opt. Mater. Express 8(8), 2197–2207 (2018). [CrossRef] 33. S. Hayashi, D. V. Nesterenko, A. Rahmouni, and Z. Sekkat, "Observation of Fano line shapes arising from coupling between surface plasmon polariton and waveguide modes," Appl. Phys. Lett. 108(5), 051101 (2016). [CrossRef] 34. A. F. Kaplan, T. Xu, and L. J. Guo, "High efficiency resonance-based spectrum filters with tunable transmission bandwidth fabricated using nanoimprint lithography," Appl. Phys. Lett. 99(14), 143111 (2011). [CrossRef] 35. W. Cao, R. Singh, C. Zhang, J. Han, M. Tonouchi, and W. Zhang, "Plasmon-induced transparency in metamaterials: Active near field coupling between bright superconducting and dark metallic mode resonators," Appl. Phys. Lett. 103(10), 101106 (2013). [CrossRef] 36. G. Rana, P. Deshmukh, S. Palkhivala, A. Gupta, S. P. Duttagupta, S. S. Prabhu, V. Achanta, and G. S. Agarwal, "Quadrupole-quadrupole interactions to control plasmon-induced transparency," Phys. Rev. Appl. 9(6), 064015 (2018). [CrossRef] 37. E. Özbay, A. Abeyta, G. Tuttle, M. Tringides, R. Biswas, C. T. Chan, C. M. Soukoulis, and K. M. Ho, "Measurement of a three-dimensional photonic band gap in a crystal structure made of dielectric rods," Phys. Rev. B 50(3), 1945–1948 (1994). [CrossRef]
Schmidt, "Slow light on a chip via atomic quantum state control," Nat. Photonics 4(11), 776–779 (2010). K. Totsuka, N. Kobayashi, and M. Tomita, "Slow light in coupled-resonator-induced transparency," Phys. Rev. Lett. 98(21), 213904 (2007). J. Gu, R. Singh, X. Liu, X. Zhang, Y. Ma, S. Zhang, S. A. Maier, Z. Tian, A. K. Azad, H. T. Chen, A. J. Taylor, J. Han, and W. Zhang, "Active control of electromagnetically induced transparency analogue in terahertz metamaterials," Nat. Commun. 3(1), 1151 (2012). S. Zhang, D. A. Genov, Y. Wang, M. Liu, and X. Zhang, "Plasmon-Induced Transparency in Metamaterials," Phys. Rev. Lett. 101(4), 047401 (2008). R. Taubert, M. Hentschel, J. Kästel, and H. Giessen, "Classical analog of electromagnetically induced absorption in plasmonics," Nano Lett. 12(3), 1367–1371 (2012). T. T. Kim, H. D. Kim, R. Zhao, S. S. Oh, T. Ha, D. S. Chung, Y. H. Lee, B. Min, and S. Zhang, "Electrically tunable slow light using graphene metamaterials," ACS Photonics 5(5), 1800–1807 (2018). J. Hu, E. Yao, W. Xie, W. Liu, D. Li, Y. Lu, and Q. Zhan, "Strong longitudinal coupling of Tamm plasmon polaritons in graphene/DBR/Ag hybrid structure," Opt. Express 27(13), 18642–18652 (2019). S.-G. Lee, S.-Y. Jung, H.-S. Kim, S. Lee, and J.-M. Park, "Electromagnetically induced transparency based on guided-mode resonances," Opt. Lett. 40(18), 4241–4244 (2015). C. Sui, B. Han, T. Lang, X. Li, X. Jing, and Z. Hong, "Electromagnetically induced transparency in an all-dielectric metamaterial-waveguide with large group index," IEEE Photonics J. 9(5), 1–8 (2017). B. Han, X. Li, C. Sui, J. Diao, X. Jing, and Z. Hong, "Analog of electromagnetically induced transparency in an E-shaped all-dielectric metasurface based on toroidal dipolar response," Opt. Mater. Express 8(8), 2197–2207 (2018). S. Hayashi, D. V. Nesterenko, A. Rahmouni, and Z. Sekkat, "Observation of Fano line shapes arising from coupling between surface plasmon polariton and waveguide modes," Appl. Phys. Lett. 108(5), 051101 (2016). A. F. Kaplan, T. Xu, and L. J. Guo, "High efficiency resonance-based spectrum filters with tunable transmission bandwidth fabricated using nanoimprint lithography," Appl. Phys. Lett. 99(14), 143111 (2011). W. Cao, R. Singh, C. Zhang, J. Han, M. Tonouchi, and W. Zhang, "Plasmon-induced transparency in metamaterials: Active near field coupling between bright superconducting and dark metallic mode resonators," Appl. Phys. Lett. 103(10), 101106 (2013). G. Rana, P. Deshmukh, S. Palkhivala, A. Gupta, S. P. Duttagupta, S. S. Prabhu, V. Achanta, and G. S. Agarwal, "Quadrupole-quadrupole interactions to control plasmon-induced transparency," Phys. Rev. Appl. 9(6), 064015 (2018). E. Özbay, A. Abeyta, G. Tuttle, M. Tringides, R. Biswas, C. T. Chan, C. M. Soukoulis, and K. M. Ho, "Measurement of a three-dimensional photonic band gap in a crystal structure made of dielectric rods," Phys. Rev. B 50(3), 1945–1948 (1994). Abeyta, A. Achanta, V. Agarwal, G. S. Akaoglu, B. Awan, K. M. Azad, A. K. Baba, T. Bagci, F. Bajcsy, M. Balic, V. Baur, S. Behroozi, C. H. Bian, Z. Bigelow, M. S. Biswas, R. Blit, R. Boller, K. J. Boyd, R. W. Büttner, T. F. S. Cao, W. Chan, C. T. Chang-Hasnain, C. J. Chen, B. Chen, G. Chen, H. T. Choi, D. Y. Chuang, S. L. Chung, D. S. Crankshaw, S. Cui, Y. Dahan, D. Deshmukh, P. Diao, J. Dolgaleva, K. Dürr, S. Duttagupta, S. P. Dutton, Z. Ebnali-Heidari, M. Eggleton, B. J. Eisenstein, G. Fleischhauer, A. Gaeta, A. L. Gao, B. Gauthier, D. J. Genov, D. A. Giessen, H. Grillet, C. Gu, J. Guo, L. J. Gupta, A. Ha, T. 
Just like the perfect square numbers $9$ $(3^2)$ and $144$ $(12^2)$, algebraic expressions such as $a^2$, $x^2$ and $a^2b^2$ are also called perfect squares. Squares of binomial expressions, such as $(a+b)^2$, are also perfect squares, and we can expand these binomial products in the following way: $(a+b)^2 = (a+b)(a+b) = a^2+ab+ba+b^2 = a^2+2ab+b^2$, since $ab=ba$. Perfect binomial squares fit the rule $(x+y)^2=x^2+2xy+y^2$. The square of a binomial appears often, not just in math but in the real world as well. Squares like these can also be used to show the possible ways that genes can combine in offspring. For example, among tigers, the normal colour gene C is dominant, while the white colour gene c is recessive. So a tiger with colour genes CC or Cc will have a normal skin colour, while a tiger with colour genes cc will have a white skin colour. The following square shows all four possible combinations of these genes. Since the square for each gene combination represents $\frac{1}{4}$ of the area of the larger square, the probability that a tiger has colour genes CC or Cc (i.e. it has a normal skin colour) is $\frac{3}{4}$, while the probability that it has colour genes cc (i.e. it has a white skin colour) is $\frac{1}{4}$. It is possible to model the probabilities of the gene combinations as the square of a binomial. Since any parent tiger has a $50\%$ chance of passing on a C gene and a $50\%$ chance of passing on a c gene, the genetic makeup of a parent tiger can be modelled as $\frac{1}{2}C+\frac{1}{2}c$ and that of its offspring as $\left(\frac{1}{2}C+\frac{1}{2}c\right)^2$. Squaring the binomial the usual way, $\left(\frac{1}{2}C+\frac{1}{2}c\right)^2 = \frac{1}{4}C^2+2\left(\frac{1}{2}C\right)\left(\frac{1}{2}c\right)+\frac{1}{4}c^2 = \frac{1}{4}C^2+\frac{1}{2}Cc+\frac{1}{4}c^2$. The final expression shows that the probability that a tiger will have the gene combination CC (i.e. normal skin colour) is $\frac{1}{4}$, the probability that it will have the gene combination Cc (i.e. normal skin colour) is $\frac{1}{2}$, and the probability that it will have the gene combination cc (i.e. white skin) is $\frac{1}{4}$, which are exactly the same results as those shown in the gene table above. Complete the expansion of the perfect square: $(x-3)^2 = x^2 - \underline{\quad}\,x + \underline{\quad}$. Write the perfect square trinomial that factors as $(s+4t)^2$. Expand the following perfect square: $(4x+7y)^2$. (Worked answers for these three exercises are given below.) 10D.QR3.01 Expand and simplify second-degree polynomial expressions, using a variety of tools and strategies
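One possible set of worked answers (ours), using the rule $(x+y)^2 = x^2+2xy+y^2$ with the sign of the middle term flipped for a subtraction:

$$(x-3)^2 = x^2 - 2(3)x + 3^2 = x^2 - 6x + 9$$
$$(s+4t)^2 = s^2 + 2(s)(4t) + (4t)^2 = s^2 + 8st + 16t^2$$
$$(4x+7y)^2 = (4x)^2 + 2(4x)(7y) + (7y)^2 = 16x^2 + 56xy + 49y^2$$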
Manhattan distance formula java

Calculated by heuristic search. Heuristic (informed) search exploits additional knowledge about the problem that helps direct the search to more promising paths. Worldwide distance calculator with air line, route planner, travel duration and flight distances. Hi, my calculations on paper to find the distance between 2 lines are not matching up with what my app is giving me: ...781666666666666, -79.916666666666671, Distance: 0. Are there any disadvantages of using distance-squared checks rather than distance? This is not about Manhattan distance, etc.; it is solved with a simple formula. Distance matrices are used in phylogeny as non-parametric distance methods and were originally applied to phenetic data using a matrix of pairwise distances. Tech Scholar, Department of Computer Science & Engineering, BRCM College of Engineering & Technology, Bahal. Abstract—Clustering is the task of grouping a set of objects so that objects in the same group are more similar to each other than to those in other groups. Now the Manhattan distance between these points is a+c+b+d, and we note that this is the sum of distances from each point to the crux point (f,g). When the input source data is a raster, the set of source cells consists of all cells in the source raster that have valid values. Enter 2 coordinates in the X-Y-Z coordinate system to get the formula and distance of the line connecting the two points. K-means clustering. The sum of the distances (sum of the vertical and horizontal distances) from the blocks to their goal positions, plus the number of moves made so far to get to the state. The way to locate the treasure is by touching a square on the grid. The last one is also known as the L1 distance. Objects or references in Java; the Java program finds the distance between two points using the Manhattan distance equation. Euclidean distance is harder by hand because you're squaring and square-rooting. It follows that minimizing the distance between a pair of points, one in each quadrant, amounts to finding a point closest to (f,g) in each quadrant. Please, any help is greatly appreciated. Calculating the distance between two points problem (Beginning Java forum at Coderanch). Let's see. Java Programming.
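The "Java program that finds the distance between two points using the Manhattan distance equation" referred to throughout this page is short. Here is our own hedged sketch (the class and method names are illustrative, not from any source quoted above):

public final class ManhattanDistance {
    // For 2-D points.
    public static double between(double x1, double y1, double x2, double y2) {
        return Math.abs(x1 - x2) + Math.abs(y1 - y2);
    }

    // For n-dimensional vectors of equal length.
    public static double between(double[] a, double[] b) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) sum += Math.abs(a[i] - b[i]);
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(between(1, 2, 4, 6));              // 7.0
        System.out.println(between(new double[]{1, 2, 3},
                                   new double[]{4, 0, 3}));   // 5.0
    }
}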
Cloneable, TechnicalInformationHandler. Implementing a Euclidean distance (or similarity) function. Computes agglomerative hierarchical clustering of the dataset. More formally, the Manhattan distance, also known as the L1 distance, between two points in a Euclidean space with a fixed Cartesian coordinate system is defined as the sum of the lengths of the projections of the line segment between the points onto the coordinate axes. NaN will be treated as a missing value and will be excluded from the calculation. Let m be the number of non-missing values and n the number of all values; the returned distance is n * d / m, where d is the distance over the non-missing values. ...41727, and Manhattan, and displays the route on an interactive map. I'm implementing N-by-N puzzles in Java with a 2D array, int[][] state (15-121, Fall 2009). Euclidean distance is only appropriate for data measured on the same scale. distanceSq: public double distanceSq(Point2D pt). In a one-dimensional space, the Euclidean distance is the difference between two points. A better priority function for a given state is the sum of the distances (sum of the vertical and horizontal distances) from the blocks to their goal positions, plus the number of moves made so far to get to the state. Implementing this is not so difficult. Euclidean distance vs. squared distance. The formula to calculate the Manhattan distance is as follows; Manhattan distance is also known as the L1 distance. I need to first use Euclidean distance. Which distance formula should I use for faster performance, Manhattan distance or Euclidean distance? Study of Euclidean and Manhattan Distance Metrics using Simple K-Means Clustering (Deepak Sinwar and Rahul Kaushik). These are Euclidean distance, Manhattan distance, Minkowski distance, cosine similarity and more. Geographic distance can be simple and fast. Distance functions: the idea of using a distance measure is to find the distance (similarity) between a new sample and the training cases, and then find the k closest customers to the new customer in terms of height and weight. Also known as the Manhattan distance. Must be zero if the node represents a goal state. It also has a method which calculates the distance directly. Below is the syntax. // compare points according to their distance to this point private class DistanceToOrder implements Comparator<Point2D>
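The n * d / m missing-value convention just described is easy to implement. A hedged sketch (ours, not taken from any particular library):

static double manhattanWithMissing(double[] x, double[] y) {
    int n = x.length, m = 0;
    double d = 0.0;
    for (int i = 0; i < n; i++) {
        if (Double.isNaN(x[i]) || Double.isNaN(y[i])) continue; // skip missing pairs
        d += Math.abs(x[i] - y[i]);
        m++;
    }
    // Scale the partial distance over m non-missing pairs up to all n attributes.
    return m == 0 ? Double.NaN : n * d / m;
}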
In a two-dimensional space, the Euclidean metric is calculated based on the Pythagorean theorem, whereas in an n-dimensional space it is calculated with the additional coordinates. Why the A* search algorithm? Manhattan distance: it is nothing but the distance between the current cell and the goal cell, using the distance formula h. The Java program finds the distance between two points using the Manhattan distance equation. Programming forums: Java, Mobile, Certification, Databases, Caching, Books. Adding methods to a point class: returns the "Manhattan distance" between the current point and another. C++: Manhattan distance between two vectors (Mar 25, 2014). k-means implementation with a custom distance matrix as input. The Updatable_heap data structure makes use of a heap as an array using the complete binary tree representation and a chained hash table. In the Lance-Williams formula, the distance between two clusters is the average of the pairwise Euclidean distances. Euclidean algorithms (basic and extended): // Java program to demonstrate the working of the extended Euclidean algorithm. Pairs with the same Manhattan and Euclidean distance. Manhattan distance is also very common for continuous variables. Formula for the volume of a hexagon. Distance selection: it's worrying that this article completely glosses over the fact that the Manhattan distance approximation is seriously wrong. The usual choice is to set all three weights to 1. Available with a Spatial Analyst license. Distance Method (required): specifies how distances are calculated from each feature to its nearest neighboring feature. Euclidean (as the crow flies): the straight-line distance between two points. Manhattan (city block): the distance between two points measured along axes at right angles. Write a program Solver. k-means: a step-by-step example. The distance value in blue indicates the driving distance, calculated in both kilometers and miles. The array might also contain duplicates. I am required to use the Manhattan heuristic in optimizing the Manhattan-distance method for N-by-N puzzles. Formula for determining solvability. Software Workshop Java: solvability of the tiles game. Methods inherited from java.lang.Object: equals, getClass, hashCode, notify, notifyAll, toString, wait. What is Manhattan distance? University of Waterloo, Department of Combinatorics and Optimization, Waterloo, Ontario N2L 3G1, Canada, hwolkowicz@uwaterloo.ca. Manhattan distance is a metric in which the distance between two points is the sum of the absolute differences of their Cartesian coordinates. K-means clustering is an exploratory data analysis technique. Mark Ryan. No diagonal moves are allowed. You can compute the distance using the haversine formula, or the distance can be simple and fast. Manhattan priority function. Tell me about yourself? The Java program finds the distance between two points using the Manhattan distance equation. A fully vectorized function that computes the Euclidean distance matrix between two sets of vectors. Vincenty formula for the distance between two latitude/longitude points. These examples are extracted from open source projects. Another common approach is to replace the absolute distance with the Manhattan distance. Computes the Manhattan (city block) distance between two arrays. Manhattan distance (plural: Manhattan distances): the sum of the horizontal and vertical distances between points on a grid. See also: the Manhattan distance between two items is the sum of the differences of their corresponding components. Implementation (note: this is not describing the PAM algorithm). Euclidean distance is a measure of the true straight-line distance between two points in Euclidean space (Wikipedia). The Manhattan distance is the sum of the (absolute) differences of their coordinates.
def heuristic(a, b):
    # Manhattan distance on a square grid
    return abs(a.x - b.x) + abs(a.y - b.y)

Write a Python program to compute Euclidean distance. K-nearest neighbors is a simple algorithm that stores all available cases and classifies new cases based on a similarity measure (e.g., a distance function). In taxicab geometry, the distance between two points is found by adding the vertical and horizontal distances together. For spaces with more dimensions, the norm can be any p-norm; for the real numbers, the only norm is the absolute value. Dan Jurafsky: where did the name "dynamic programming" come from? The 1950s were not good years for mathematical research. Re: How do you write a Java program using the distance formula? It has to do with the fact that you can't multiply in Java like (x2-x1)(x2-x1); it has to be (x2-x1)*(x2-x1). Below is the syntax-highlighted version of Vector.java. The variables in the formula are as follows: d is the distance in meters, g is 9.8, and t is the amount of time, in seconds, that the object has been falling. Write a method named fallingDistance that accepts an object's falling time (in seconds) as an argument. The city block distance is instead calculated as the distance in x plus the distance in y, which is similar to the way you move in a city (like Manhattan), where you have to move around the buildings instead of going straight through. In other words, it measures the minimum number of substitutions required to change one string into the other, or the minimum number of errors that could have transformed one string into the other. Deletion, insertion, and replacement of characters can be assigned different weights. def create_distance_matrix(locations): # Create the distance matrix. In this case, we'll use the distance formula to build the distance matrix when you run the program. If we have a direct distance d between any two rows, the wraparound distance e is the value such that d + e = n-1, so e = (n-1) - d; the distance between two rows is then the minimum of the direct distance and the wraparound distance. (Translated from Indonesian:) For this reason it is often called the city block distance, and also the absolute value distance or boxcar distance. Look at your cost function and find the minimum cost D for moving from one space to an adjacent space. In the simple case, you can set D to be 1. Here's the code that does this.
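A hedged Java sketch of the code gestured at above (ours; the class and method names are illustrative). It covers both the D-scaled Manhattan heuristic and the wraparound row distance described a few sentences earlier:

final class GridHeuristics {
    // Manhattan heuristic scaled by D, the minimum cost of one adjacent move;
    // with D chosen this way, the heuristic stays admissible for A*.
    static double manhattan(int x1, int y1, int x2, int y2, double d) {
        return d * (Math.abs(x1 - x2) + Math.abs(y1 - y2));
    }

    // Row distance on an n-row board where rows wrap around: the smaller of
    // the direct distance and the wraparound distance (n - 1) - direct.
    static int wraparoundRows(int r1, int r2, int n) {
        int direct = Math.abs(r1 - r2);
        return Math.min(direct, (n - 1) - direct);
    }
}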
Homework Assignment 7 as the "Manhattan Distance" between S and the goal state. Proposition 1 The manhattan distance between a point of coordinates and a line of equation is given by : The java program finds distance between two points using manhattan distance equation. Distance formulas on a square grid are well known (manhattan, euclidean, diagonal distance). we use predict function in which the first argument is the formula to be applied and second How can I find the maximum Manhattan distance between 2 points from a given set of points? between two points in any 3D . Euclidean (as the crow flies)—The straight-line distance between two points. The shortest distance (air line) between CLL and Manhattan is 1,435. Java Properties; Thread Dump calculates the Manhattan (taxicab) distance between (0,0) and To give you a better understanding of how function queries can be A primer in using Java from R – part 1; Mahalanobis distance with "R" (Exercice) We are going to apply the Mahalanobis Distance formula: In this article we'll demonstrate the implementation of k-means clustering algorithm to produce recommendations. Java (1) KMeans (1) Manhattan Distance Spatial Autocorrelation (Morans I) (Spatial Statistics) Calculations based on either Euclidean or Manhattan distance require projected data to accurately measure Taxicab geometry is a form of geometry, where the distance between two points A and B is not the length of the line segment AB as in the Euclidean geometry, but the sum of the absolute differences of their coordinates. An implementation of Manhattan Distance for Clustering in Python. K nearest neighbors is a simple algorithm that stores all available cases and classifies new cases based on a similarity measure (e. all this algorithm is actually doing is computing distance between points, I am looking for code in java that implement A* algorithm for the 8-puzzle game by given initial state : 1 3 2 4 5 6 8 7 and Goal state 1 2 3 8 4 7 6 5 I want to print out the running steps which A* Heuristic algorithm for the 8-tile puzzle using java. Implement an alternative distance measure, such as Manhattan distance or the vector dot product. Now remember that c squared equals a squared plus b squared. Guidelines to use the calculator When entering numbers, do not use a slash: "/" or "\" You are being redirected. , in the Technical round. Euclidean distance, Manhattan distance or other The haversine formula determines the great-circle distance between two points on a sphere given their longitudes and latitudes. The taxicab metric is also known as recti-linear distance, Minkowski's L1 distance, city block distance, or Manhattan distance. This is non-hierarchical method of grouping objects together. The formula for this distance between a point X =(X 1, X 2, etc. The great circle method is chosen over other methods. Formula for determining solvability Manhattan Interview Questions Technical (JAVA) These are the latest Interview Questions by Manhattan Associates. example D = pdist2( X , Y , Distance , DistParameter ) returns the distance using the metric specified by Distance and DistParameter . 5. Manhattan Distance. If we have a direct distance d between any two rows, the wraparound distance e is the value such that: d + e = n-1 e = n-1 - d Now the distance between two rows is the minimum of the direct distance and the wraparound distance. Sehingga sering juga disebut city block distance, juga sering disebut sebagai ablosute value distance atau boxcar distance. 
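A minimal sketch of the two plane metrics, in Python (the snippets above variously reference Java, C++ and Python implementations); the function names are our own:

```python
import math

def euclidean_distance(p, q):
    # Straight-line distance from the Pythagorean theorem. The squares are
    # written as explicit products because ^ is bitwise XOR, not
    # exponentiation, in C-family languages such as Java.
    return math.sqrt((q[0] - p[0]) * (q[0] - p[0]) + (q[1] - p[1]) * (q[1] - p[1]))

def manhattan_distance(p, q):
    # Taxicab distance: the sum of the absolute coordinate differences.
    return abs(q[0] - p[0]) + abs(q[1] - p[1])

a, b = (-2, 1), (1, 5)
print(euclidean_distance(a, b))  # 5.0
print(manhattan_distance(a, b))  # 7
```

Both functions depend only on coordinate differences, so translating the two points together leaves the results unchanged.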
Both metrics are members of the Minkowski family, the general metric for distance, with a parameter λ: λ = 1 gives the Manhattan distance, λ = 2 the Euclidean distance, and the limit λ → ∞ the Chebyshev (maximum, or "supremum"/L∞) distance. The three can disagree sharply: the distance between (1, 0) and the origin is 1 according to all the usual norms, but the distance between (1, 1) and the origin is 2 under Manhattan distance, √2 under Euclidean distance, and 1 under maximum distance. In Chebyshev distance all 8 cells adjacent to a given grid point are one unit away, which is exactly how a chess king moves. In mathematics the norm of a vector is its length; for the real numbers the only norm is the absolute value, and in higher-dimensional spaces different norms induce the different metrics above. For two vectors of ranked ordinal variables the Manhattan distance is sometimes called the Footruler distance.

Strings have their own metrics. In information theory, the Hamming distance between two strings of equal length is the number of positions at which the corresponding symbols are different; in other words, it measures the minimum number of substitutions required to change one string into the other. The Levenshtein (edit) distance also permits insertions and deletions, which may be assigned different weights than replacements, and it has several simple upper and lower bounds: it is at least the difference of the sizes of the two strings, at most the length of the longer one, zero if and only if the strings are equal, and, when the strings are the same size, the Hamming distance is an upper bound on it. Using a maximum allowed distance puts an upper bound on the search time, since the search can be stopped as soon as the minimum Levenshtein distance between prefixes of the strings exceeds the allowed maximum.

For multivariate data the Mahalanobis distance takes the covariance of the data and the scales of the different variables into account, which makes it useful for detecting outliers. Its derivation uses matrix identities such as (AB)^T = B^T A^T, (AB)^-1 = B^-1 A^-1 and (A^-1)^T = (A^T)^-1, and when Σ is the identity matrix the squared Mahalanobis distance reduces to the standard squared Euclidean distance between x and μ. Covariance matters: the points [1, 1] and [-1, -1] can be much closer to a sample X than [1, -1] and [-1, 1] in Mahalanobis distance even when their Euclidean distances are equal. Cosine similarity, by contrast, generates a metric that says how related two documents are by looking at the angle between their vectors instead of the magnitude: 1 for the same direction, 0 for orthogonal (90 degrees), -1 for opposite directions; it is popular in tf-idf text mining, where Manhattan (L1) normalization is also common.

All of these plug into machine-learning algorithms as interchangeable distance functions. k-nearest neighbors, a simple non-parametric technique used in statistical estimation and pattern recognition since the beginning of the 1970s, stores all available cases and classifies new cases based on a similarity measure; k-means clustering repeatedly assigns each point to its nearest centroid, and implementations commonly accept Euclidean distance, Manhattan distance, or a custom distance matrix as input (in hierarchical clustering the Lance-Williams formula covers linkages such as average linkage, where the distance between two clusters is the average of the pairwise distances). Implementations must also decide how to handle missing values, for example returning n * d / m, where d is the distance over the non-missing values, m the number of non-missing values and n the number of all values, and whether to normalize, since the raw formulas make no adjustment for differences in scale. Precomputing distances is expensive: for a data sample of size M, the distance matrix is an M x M symmetric matrix with M x (M - 1) / 2 distinct elements, about ten million for a sample of size 4,500. (Distance formulas from physics occasionally surface in the same programming exercises: an object falling under gravity covers D = 1/2 g*t^2 meters, with g = 9.8, and a car's stopping distance depends on its speed and the coefficient of friction μ between the wheels and the road, ignoring anti-lock brakes or brake pumping.) A compact sketch of the Minkowski family follows.
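The three members of the family can be folded into one function. The sketch below is our own illustration, not any particular library's API; for finite p, scipy.spatial.distance.minkowski performs the same computation:

```python
def minkowski_distance(p, q, lam):
    # Minkowski distance with parameter lam: lam = 1 is Manhattan,
    # lam = 2 is Euclidean, and lam = infinity is Chebyshev (maximum norm).
    if lam == float("inf"):
        return max(abs(pi - qi) for pi, qi in zip(p, q))
    return sum(abs(pi - qi) ** lam for pi, qi in zip(p, q)) ** (1.0 / lam)

origin, corner = (0, 0), (1, 1)
print(minkowski_distance(origin, corner, 1))             # 2 (Manhattan)
print(minkowski_distance(origin, corner, 2))             # 1.4142... (Euclidean)
print(minkowski_distance(origin, corner, float("inf")))  # 1 (Chebyshev)
```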
Distance metrics also drive heuristic search. On a square grid that allows movement in 4 directions, the standard A* heuristic is the Manhattan distance: in Dijkstra's algorithm the priority queue is ordered by the actual distance from the start, in Greedy Best-First Search by the estimated distance to the goal, and A* combines both. The heuristic function h(n) provides an estimate of the cost of the path from a given node to the closest goal state; for route finding, a classic choice is the straight-line distance from n to the destination (Bucharest, in the textbook example). The grid heuristic should be D times the Manhattan distance, where D is the minimum cost of moving between adjacent cells; in the simple case, you can set D to be 1.

The same idea powers solvers for the sliding-tile puzzles (the 8-puzzle and general N-by-N boards). The Hamming priority of a board is the number of tiles out of place; the Manhattan priority is the sum of the Manhattan distances from each tile to its goal position, with the blank ignored. For example, the Hamming and Manhattan priorities of one scrambled initial state are 5 and 10, respectively. The Manhattan priority never overestimates: each block must move at least its Manhattan distance from its goal position, so the actual number of moves must be at least as large as the heuristic value. A solver typically keeps candidate boards in a priority queue ordered by these priorities (java.util.PriorityQueue in a Java implementation), and a solvability test on the initial permutation avoids searching for solutions that do not exist.

Two smaller uses of the taxicab idea: in a grid treasure-hunt game, each guess that misses returns a clue, the taxicab distance from the selected square to the hidden treasure; and when rows wrap around a cycle, the direct distance d and the wraparound distance e between two rows satisfy d + e = n - 1 (the distance from the first row to the last row is n - 1), so e = n - 1 - d and the effective distance is the minimum of the two. In image processing, the distance transform of a binary image BW assigns to each pixel the distance to the nearest nonzero pixel of BW (MATLAB's bwdist computes the Euclidean version), an important tool in computer vision, image processing and pattern recognition. A sketch of the two search heuristics follows.
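Here is a small Python sketch of both heuristics. The board representation, nested sequences with 0 standing for the blank, is an assumption of this sketch, not a prescribed interface:

```python
def grid_heuristic(a, b, d=1):
    # A* heuristic for a 4-way square grid: D times the Manhattan distance
    # between cell a and goal cell b, where D is the cost of a single step.
    return d * (abs(a[0] - b[0]) + abs(a[1] - b[1]))

def manhattan_priority(board, goal):
    # Sum of the Manhattan distances of each tile from its goal position;
    # the blank (0) is skipped, so the result never overestimates the
    # number of moves still required.
    n = len(board)
    goal_pos = {goal[r][c]: (r, c) for r in range(n) for c in range(n)}
    total = 0
    for r in range(n):
        for c in range(n):
            tile = board[r][c]
            if tile:
                gr, gc = goal_pos[tile]
                total += abs(r - gr) + abs(c - gc)
    return total
```

In a full A* solver the priority of a board is this heuristic value plus the number of moves made so far.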
On the surface of the Earth, straight-line formulas no longer apply: the shortest distance between two points on the surface of a sphere is an arc, not a line, and one of the main challenges in calculating distances, especially large ones, is accounting for the curvature of the Earth. The haversine formula determines the great-circle distance between two points on a sphere given their longitudes and latitudes; it is a special case of a more general formula in spherical trigonometry, the law of haversines, which relates the sides and angles of spherical triangles. It is usually expressed in terms of a two-argument inverse tangent function, with arguments and result in radians, and it is the method recommended for calculating short distances by Bob Chamberlain (rgc@jpl.gov) of Caltech and NASA's Jet Propulsion Laboratory; the naive law-of-cosines formula is less preferred because it is numerically fragile for nearby points. For higher accuracy, Thaddeus Vincenty devised formulae for calculating geodesic distances between a pair of latitude/longitude points using an accurate ellipsoidal model of the Earth, an approach also used for location-aware search in Apache Lucene and Solr. Online distance calculators built on these formulas suggest place names as you type and report results in both kilometers and miles for locations worldwide; a displayed driving distance, by contrast, is closer to a Manhattan-style measure along the road network.

A few further grid facts. On hexagonal grids one can work backwards from the hex distance formula and recover Manhattan-like distances in the appropriate coordinates. For the N-by-N sliding puzzle, the average Manhattan distance of a random permutation is reported as 2/3 (N - 1)(N^2 + N - 3/2), which is 14 for the 8-puzzle. The maximum Manhattan distance over a given set of points can be found efficiently by rotating coordinates to (x + y, x - y), which turns Manhattan distance into Chebyshev distance. Given a 2D grid of values 0 or 1, where each 1 marks the home of someone in a group, the best meeting point minimizing the total travel distance under Manhattan distance is found by taking the median of the x coordinates and the median of the y coordinates, since the sum of taxicab distances decomposes into independent horizontal and vertical parts. In GIS packages, the distance toolsets create rasters showing the distance of each cell from a set of features, or allocate each cell to the closest feature, with Euclidean ("as the crow flies", the straight-line distance between two points) and Manhattan (city block, measured along axes at right angles) options; calculations based on either metric require projected data to accurately measure distances, and cells that have NoData values are not considered. A sketch of the haversine computation follows.
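A self-contained Python sketch of the haversine computation (our own; the example coordinates are approximate values for New York and Boston, used purely for illustration):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2, r=6371.0):
    # Great-circle distance on a spherical Earth of mean radius ~6371 km.
    # Inputs are in degrees; the trigonometry works in radians.
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.atan2(math.sqrt(a), math.sqrt(1 - a))

print(haversine_km(40.7128, -74.0060, 42.3601, -71.0589))  # ~306 km
```

Using atan2 rather than asin keeps the formula numerically stable when the two points are nearly antipodal.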
CommonCrawl
Effects of soil and water conservation on vegetation cover: a remote sensing based study in the Middle Suluh River Basin, northern Ethiopia
Solomon Hishe (ORCID: 0000-0002-3164-1105), James Lyimo & Woldeamlak Bewket
Environmental Systems Research volume 6, Article number: 26 (2017)

Abstract
Soil and water conservation (SWC) has been implemented in the Tigray Region of Ethiopia since 1985. In addition, the agricultural development strategy of the region, derived from the national agricultural development led industrialization strategy formulated in 1993, focused on the rehabilitation and conservation of natural resources. Accordingly, the rural communities contributed 20 days of free labor per year to SWC activities, and other programmes, such as the productive safety net programme and the sustainable land management project, deployed their resources with the aim of reversing landscape degradation in the region. Multi-temporal Landsat imagery was used to estimate the normalized difference vegetation index (NDVI), the soil adjusted vegetation index (SAVI) and land surface temperature (LST) for the years 1985, 2000 and 2015. Long-term station-based daily precipitation records, beginning in 1973, were aggregated into three periods corresponding to the processed images to derive average annual precipitation (AAP). The precipitation data were then converted into raster format using the inverse distance weighted interpolation method. The analysis was done using ENVI 5.3 software and the results were mapped in the ArcGIS 10.3 package. The correlations between AAP and SAVI, and between LST and SAVI, were evaluated both at the level of village polygons and pixel-by-pixel. The village-polygon results show a statistically significant inverse relationship between SAVI and LST in all study periods. The pixel-by-pixel correlations between AAP and SAVI were r = -0.14 in 2015, r = 0.06 in 2000 and r = 0.25 in 1985. In 1985, the total area with SAVI ≥ 0.2 was 23.57 km². After 15 years (from 1985 to 2000), the total area with SAVI ≥ 0.2 increased to 64.94 km². In 2015, it reached 67.11 km², a 3.3% increment over the year 2000. Based on the field observation and the remote sensing analysis results, noticeable gains in vegetation cover have been observed over the 30-year period. These improvements are attributable to the implementation of integrated SWC measures, particularly in areas where exclosures were defined and protected by the local community. The study therefore provides a theoretical basis and indicator data to support further research on vegetation restoration across the entire region.

Background
Geographic information system (GIS) and remote sensing (RS) have become fundamental tools for characterizing watersheds and landscapes. Remote sensing is one of the most widely used technologies for discerning effective correlations of ecosystem properties via the reflectance of light in the spatial and spectral domains. Remote sensors, such as Landsat, SPOT, IKONOS, MODIS and Quickbird, capture the reflectance of ground objects, like vegetation, which have their own unique spectral characteristics. The spectral signatures of photosynthetically and non-photosynthetically active vegetation show clear differences and have been used to estimate the forage quantity and quality of grass prairie (Beeri et al. 2007) as well as vegetation density.
Moderate- to high-resolution data are being used extensively, at scales from local to regional landscapes, for the assessment of ecosystem processes (Chawla et al. 2010). In this investigation, the relationships between SAVI and LST, and between SAVI and long-term AAP, were assessed for the years 1985, 2000 and 2015 for the Middle Suluh River Basin in northern Ethiopia, thereby providing useful information about the effects of soil and water conservation on vegetation cover improvement. Moran et al. (1994) combined the LST-NDVI space method with standard meteorological data as well as remote sensing data to estimate the water deficit index (WDI). Combining SAVI with LST, NDVI with LST, and SAVI with precipitation on a pixel-by-pixel basis can likewise provide information about the vegetation and moisture condition of the Earth's surface. The principal information used here comprises the wavelengths of the thermal and visible/NIR regions together with station records of rainfall, which were assumed to be satisfactory for monitoring vegetation conditions.

Land degradation was a serious problem in the Tigray Region, with severe denudation of vegetation cover, depletion of soil fertility, and deterioration of surface and ground water potential (Berhanu et al. 2003). In order to reduce the extent of these problems, substantial rehabilitation work has been done through SWC practices. Some studies hold that SWC practices in the Tigray Region started during 1975-1991, in the course of the Tigray People's Liberation Front (TPLF) movement, with the effective mobilization of the rural communities (Carolyn and Kwadwo 2011); another study, by Esser et al. (2002), indicates that SWC was introduced with the assistance of donors following the drought in Wello and Tigray in 1976. The agricultural development strategy of the region, derived from the national Agricultural Development Led Industrialization (ADLI) strategy formulated in 1993 (Dercon and Zeitlin 2009), focused on the rehabilitation, conservation and development of natural resources, and is known as a conservation-based agricultural development policy (Berhanu et al. 2003). As a matter of policy emphasis, major strategies for integrated soil and water conservation activities were designed in the region during the early 1990s (Negusse et al. 2013). Environmental rehabilitation practices such as the establishment and development of area exclosures and community woodlots, the construction of check-dams, stone terraces and soil bunds, the enforcement of rules and regulations for grazing areas, and the application of manure and compost were then implemented throughout the region (Berhanu et al. 2003; Carolyn and Kwadwo 2011). In a socio-economic survey of 246 sample household heads (HHs) in the study area, the average number of years a farmer had practiced SWC was 23. A large proportion of the interviewed HHs (95.5%) reported a decline in soil erosion and an improvement in vegetation cover over the past years. A study conducted by Nyssen et al. (2009) in the northern highlands of Tigray shows that it is possible to reverse environmental degradation through an active, farmer-centered SWC policy. Most SWC-focused studies conducted in northern Ethiopia look at the effects on soil loss and run-off (Taye et al. 2013; Gebremichael et al. 2005; Selassie et al. 2015) and on food security (Van der Veen and Tagel 2011).
Against this background, the present study quantifies the vegetation cover improvement attributable to the effects of SWC, using GIS and RS applications in a basin that has not previously been studied.

Study area description
The Middle Suluh River Basin is located in the northern highlands of the Tigray Region, Ethiopia. It covers a total area of 490 km², with an altitude ranging from 1818 to 2744 m a.s.l. (Fig. 1). The study area contains 28 lower administrative units, locally called "tabias", situated in three districts, namely Kilte Awulaelo, Saesie Tsaeda Emba and Hawzen. Of the 28 tabias, 15 have over 50% of their territory within the Middle Suluh River Basin. According to Bizuneh (2014), citing HTSL (1976) and WAPCOS (2002), the Suluh Basin is mainly characterized by Precambrian basement rocks, Paleozoic and Mesozoic rocks, and younger Tertiary and Quaternary deposits, with lithological units comprising the Precambrian basement, Enticho sandstone, tillite, Adigrat sandstone, a transition unit, Mesozoic limestone and Quaternary alluvial sediments. The major economic activity of the area is crop and livestock production. The Suluh River flows from north to south, dissecting the study area into two halves. Since the study area is dominated by sandstone, many youths are engaged in extracting sand from the river bed, generating income by selling it for construction purposes. This activity is practiced after the month of August, when the rainy season draws to an end and run-off gradually deposits the sand on the river bed.

Fig. 1 Location and digital elevation model of the Middle Suluh River Basin

The study area is dominated by five soil types, namely Leptosols (37.6%), Luvisols (22.6%), Cambisols (22.8%), Regosols (14.7%) and Fluvisols (2.3%). The household economy is based on agricultural production and depends mainly on rainfed agriculture; some households practice small-scale surface irrigation using micro dams and hand-dug wells as water sources. According to the FAO (2006) slope classification, 60% of the topography within the basin is flat to gently sloping and the remaining 40% ranges from strongly sloping to very steep. Based on altitude, temperature and precipitation parameters, the agro-ecology of the area is described as warm temperate (Woina dega) zone (58%) and temperate (Dega) zone (42%). The mean annual rainfall recorded at three stations around the study area for the period 2006-2015 was 536 mm, with uneven distribution, and the mean annual temperature is 18.7 °C.

Normalized difference vegetation index
Many vegetation indices have been developed to assess vegetation conditions. Among them, the normalized difference vegetation index (NDVI), proposed by Rouse et al. (1973), is a numerical indicator that uses the visible and near-infrared bands of the electromagnetic spectrum to analyze whether the target area contains live green vegetation or not. Healthy vegetation absorbs most of the visible light that falls on it while reflecting a large portion of the NIR. As listed in Table 1, all the Landsat images were obtained from the United States Geological Survey (USGS), http://earthexplorer.usgs.gov, and were analyzed and presented using ENVI 5.3 and ArcGIS 10.3 to classify vegetation density in terms of NDVI and SAVI in the study area.
Table 1 Landsat data used in the analysis and their specifications

Figure 2 shows the spatial distribution of the NDVI for the Middle Suluh River Basin based on a comparison of Landsat images at three dates (26 February 1985, 15 March 2000 and 13 February 2015). These dates offer the best cloud-free scenes after the harvest season, so that reflectance can be extracted from vegetation cover alone, which is appropriate for deriving the required data. The NDVI is a standardized index that combines the red (0.63-0.69 µm) and near-infrared (0.76-0.90 µm) reflectance of the electromagnetic spectrum and is defined as:

$$NDVI = \frac{NIR - R}{NIR + R}$$ (1)

Fig. 2 NDVI results of the Middle Suluh River Basin for 1985, 2000 and 2015

The NDVI value falls between -1 and +1, where increasing positive values indicate increasing green vegetation and negative values indicate non-vegetated surface features such as water, barren land, ice, snow, or clouds (Sahebjalal and Dashtekian 2013). The NDVI of 1985 indicates that there was more vegetation cover in the northern part than in the middle and southern parts of the study area. In 2000, however, the density of greenness had decreased markedly in the northern part while improving towards the southern section of the study area. In 2015, the density of greenness showed a dramatic increase in the southern part, where part of the Kilte Awulaelo district is located (Fig. 2). During field observation, it was learned that in this part of the valley free grazing was prohibited by community-agreed by-laws.

Retrieval of LST from Landsat images
Conversion of DN values into radiance
All Landsat TM bands are quantized in 8-bit format, so the data are recorded as digital numbers (DN) ranging between 0 and 255; converting these DNs to top-of-atmosphere (ToA) reflectance requires a two-step process using information extracted from the metadata. The Landsat 8 OLI sensor, by contrast, is more sensitive and its raw data are rescaled to 16-bit DNs ranging from 0 to 65,535, so the conversion is done in a single step. In either case, the resulting reflectance values range from 0.0 to 1.0 and are stored in floating-point format. In this study, the ENVI 5.3 reflectance tool under the radiometric calibration (RC) toolbox was used to convert the DN values of all bands (TM and OLI) into reflectance. First, the data were converted into radiance values following NASA (2009) and Chander et al. (2009):

$$L_{\lambda} = \left(\frac{LMAX_{\lambda} - LMIN_{\lambda}}{QCALMAX - QCALMIN}\right)(QCAL - QCALMIN) + LMIN_{\lambda}$$ (2)

where L_λ is the spectral radiance at the sensor's aperture in watts/(m² · sr · µm); QCAL is the quantized calibrated pixel value in DN; LMAX_λ is the spectral radiance scaled to QCALMAX in watts/(m² · sr · µm); LMIN_λ is the spectral radiance scaled to QCALMIN in watts/(m² · sr · µm); QCALMIN is the minimum quantized calibrated pixel value (typically 0 or 1); and QCALMAX is the maximum quantized calibrated pixel value (typically 255).
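To make the two steps concrete, the following numpy sketch applies Eqs. (1) and (2). It is our own illustration rather than the authors' ENVI workflow, and the calibration constants (LMAX, LMIN, QCALMAX, QCALMIN) are assumed to be read from the scene metadata:

```python
import numpy as np

def dn_to_radiance(dn, lmax, lmin, qcalmax=255.0, qcalmin=1.0):
    # Eq. (2): linear rescaling of quantized DNs to at-sensor spectral radiance.
    dn = dn.astype("float64")
    return ((lmax - lmin) / (qcalmax - qcalmin)) * (dn - qcalmin) + lmin

def ndvi(nir, red):
    # Eq. (1): (NIR - R) / (NIR + R), computed per pixel on reflectance bands.
    # The small epsilon is an assumption of this sketch, added only to
    # avoid division by zero over pixels where both bands are zero.
    nir = nir.astype("float64")
    red = red.astype("float64")
    return (nir - red) / (nir + red + 1e-10)
```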
Conversion of radiance to brightness temperature

Spectral band brightness temperature (BT) is the temperature a blackbody needs to have to emit a specified radiance for a given sensor band (Berk 2008). By applying the inverse of the Planck function, the thermal bands' radiance values were converted into brightness temperature values using the following equation (Chander et al. 2009):

$$T = \frac{K_{2}}{\ln\left(\frac{K_{1}}{L_{\lambda}} + 1\right)}$$

where T: at-satellite brightness temperature (K); Lλ: ToA spectral radiance (watts/(m2 * srad * μm)); K1: band-specific thermal conversion constant from the metadata (K1_CONSTANT_BAND_x, where x is the thermal band number); K2: band-specific thermal conversion constant from the metadata (K2_CONSTANT_BAND_x, where x is the thermal band number).

Conversion of radiance to reflectance

By converting the spectral radiance to planetary reflectance, or albedo, a reduction in between-scene variability can be achieved through normalization for solar irradiance. Radiances are converted to reflectance using the cosine of the Sun zenith angle interpolated at the pixel and the Sun spectral flux. The combined surface and atmospheric reflectance of the Earth is computed with the equation recommended by Chander et al. (2009):

$$\rho_{p} = \frac{\pi \cdot L_{\lambda} \cdot d^{2}}{ESUN_{\lambda} \cdot \cos \theta_{S}}$$

where ρp: unitless planetary reflectance; Lλ: spectral radiance (from the earlier step); d: Earth–Sun distance in astronomical units; ESUNλ: mean solar exo-atmospheric irradiance; θS: solar zenith angle.

Estimating proportion of vegetation and emissivity

In this study, the semi-automatic classification plugin integrated with the open-source GIS package QGIS 2.18 was used for image acquisition, pre-processing and deriving the brightness temperature used for the final LST computation. Land surface emissivity is the average emissivity of an element of the Earth's surface, calculated from measured radiance and land surface temperature (LST). In order to calculate land surface emissivity, the proportion of vegetation, or vegetation fraction, derived from the NDVI output is essential. Carlson and Ripley (1997) defined the proportion of vegetation with the following equation:

$$P_{v} = \left(\frac{NDVI - NDVI_{min}}{NDVI_{max} - NDVI_{min}}\right)^{2}$$

where NDVImin and NDVImax correspond to the minimum and maximum NDVI values in an image, respectively. Sobrino and Raissouni (2000) and Valor and Caselles (1996) used different approaches to predict land surface emissivity from NDVI values. Sobrino et al. (2004) developed an improved equation to compute the land surface emissivity using the mean value for the emissivity of soils included in the ASTER spectral library (http://asterweb.jpl.nasa.gov), as indicated below:

$$\varepsilon = 0.004 \, P_{v} + 0.986$$

Weng (2009) noted that emissivity for ground objects from passive sensors like Landsat has been estimated using different techniques, such as (1) the NDVI method; (2) classification-based estimation; and (3) the temperature-emissivity separation model. These techniques are applicable to separate temperature from emissivity, so that the effect of emissivity on estimated LSTs can be determined. Hence, in this study, pixel-based surface emissivity was derived using the NDVI method in conjunction with the proportion of vegetation (Pv) cover, as shown in Eq. (6) (Valor and Caselles 1996).
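A minimal Python sketch chaining Eqs. (3) to (6). The default K1/K2 values are the published thermal conversion constants for Landsat 5 TM band 6 (Chander et al. 2009); all other inputs are assumed placeholders.

```python
import numpy as np

def brightness_temperature(radiance, k1=607.76, k2=1260.56):
    """Inverse Planck function (Eq. 3). Defaults are the published
    thermal constants for Landsat 5 TM band 6 (Chander et al. 2009)."""
    return k2 / np.log(k1 / np.asarray(radiance, dtype=float) + 1.0)

def toa_reflectance(radiance, esun, d_au, sun_zenith_deg):
    """ToA planetary reflectance (Eq. 4); d_au is the Earth-Sun
    distance in astronomical units."""
    return (np.pi * np.asarray(radiance, dtype=float) * d_au**2) / (
        esun * np.cos(np.radians(sun_zenith_deg)))

def proportion_vegetation(ndvi, ndvi_min, ndvi_max):
    """Per-pixel vegetation fraction Pv (Eq. 5, Carlson and Ripley 1997)."""
    return ((np.asarray(ndvi, dtype=float) - ndvi_min) / (ndvi_max - ndvi_min)) ** 2

def emissivity(pv):
    """Land surface emissivity from Pv (Eq. 6, Sobrino et al. 2004)."""
    return 0.004 * np.asarray(pv, dtype=float) + 0.986
```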
LST is very important not only for soil development and erosion studies, but also for estimating amounts of vegetative cover and land cover changes (Li et al. 2013). This is because the natural phenomena on the Earth's surface are not homogeneous in terms of land surface emissivity: surface emissivity is highly dependent on the type of vegetation cover, the roughness of the topography, and the soil and mineral composition of the Earth's surface. Using this approach, the land surface emissivity of the three Landsat images (1985, 2000 and 2015) was calculated for the further computation of the land surface temperature (LST) of the study area. The LST results are all in degrees Celsius. For Landsat 5 (year 1985) and Landsat 7 ETM+ (year 2000), band 6 from the thermal infrared sensor was used. For Landsat 8 OLI (year 2015), bands 10 and 11 from the thermal infrared sensor (TIRS) were used. The land surface temperature (LST) is the radiative skin temperature of the ground, which depends on the albedo, vegetation cover and soil moisture of the land surface (Suresh et al. 2016).

Land surface temperature (LST)

A series of satellite and airborne sensors have been developed to collect TIR data from the land surface, such as Landsat TM/ETM+/OLI, AVHRR, MODIS, ASTER, and TIMS (Al-doski et al. 2013; Weng 2009). The measurement of LST can be affected by differences in temperature between the ground and the vegetation cover. The brightness temperatures from TM thermal band 6 for the 1985 and 2000 Landsat images, and from OLI bands 10 and 11 for the 2015 Landsat image, were used to calculate the emissivity-corrected LST using Eq. (7), as used in Sobrino et al. (2004), Weng et al. (2004) and Yue et al. (2007):

$$LST = \frac{BT}{1 + W \cdot (BT/P) \cdot \ln(\varepsilon)}$$

where LST: land surface temperature; BT: at-sensor brightness temperature (K); P: 14,380 (µm K); W: wavelength of emitted radiance (11.5 µm); ln(ε): natural logarithm of the spectral emissivity value.

Soil adjusted vegetation index

The spectral reflectance of a plant canopy is a combination of the reflectance spectra of plant and soil components (Rondeaux et al. 1996), which has motivated researchers to develop new indices such as SAVI. This index is a measure of healthy, green vegetation similar to NDVI, but it suppresses the effects of soil pixels. The notable improvement of this index by Huete (1988) led to the further development of the transformed soil-adjusted vegetation index by Baret and Guyot (1991). The soil-adjusted vegetation index was developed to correct for the influence of soil brightness when vegetative cover is low. According to Huete (1988), L is a correction factor whose value depends on the vegetation cover: for total vegetation cover, L takes a value of zero, which effectively turns SAVI into NDVI, while for very low vegetation cover it takes a value of 1. In this manner, Huete developed a three-point adjustment as optimal for the L constant (L = 1 for low vegetation densities; L = 0.5 for intermediate vegetation densities; L = 0.25 for higher densities). In support of this, Aboelghar et al. (2014) and Badreldin and Goossens (2015) argued that L = 0.5 successfully minimizes the impact of soil variations on green vegetation compared to NDVI. Hence, for the purpose of this study, we used L = 0.5 to represent intermediate vegetation cover in such a semi-arid environment, using the following equation (Huete 1988):

$$SAVI = \frac{NIR - R}{NIR + R + L} \times (1 + L)$$
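A minimal sketch of Eqs. (7) and (8), assuming BT is supplied in Kelvin (273.15 is subtracted so that LST is reported in degrees Celsius, as in this study) and using L = 0.5 by default:

```python
import numpy as np

def lst_celsius(bt_kelvin, emis, wavelength_um=11.5, p_um_k=14380.0):
    """Emissivity-corrected LST (Eq. 7); p_um_k is Planck's constant times
    the speed of light divided by the Boltzmann constant (~14,380 um*K).
    Input BT is in Kelvin; output is in degrees Celsius."""
    bt = np.asarray(bt_kelvin, dtype=float)
    lst_k = bt / (1.0 + (wavelength_um * bt / p_um_k) * np.log(emis))
    return lst_k - 273.15

def savi(nir, red, L=0.5):
    """Soil-adjusted vegetation index (Eq. 8, Huete 1988); L = 0.5
    for intermediate vegetation densities, as used in this study."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) * (1.0 + L) / (nir + red + L)
```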
The average annual precipitation (AAP) was computed from the long-term records of seven gauge stations distributed within and outside the study area, covering 1973 to 2015. Using the spatial analyst tool in ArcGIS, the inverse distance weighting (IDW) interpolation technique was employed to generate a surface of mean precipitation on a pixel basis (a bare-bones sketch of the IDW idea is given at the end of this section). These data were then used for further regression analysis to examine relationships with the SAVI results.

Interpretation of AAP distribution and SAVI

For a better understanding of the SAVI, LST and AAP patterns and relationships in the study area, the computed results were displayed simultaneously in ArcGIS 10.3. The figures provide a visual illustration of the spatial pattern of the thermal environment, vegetation cover and long-term AAP distribution in the study area. The spatial distribution of the highest averages of annual precipitation (Fig. 3, top) increased from 756 mm in 1973–1985 to 857 mm in the 1986–2000 period. As can be observed from Fig. 3, the most extensive spatial coverage of precipitation in the study area occurred in the period 1986–2000. In this period, the least extensive coverage of precipitation was observed in the western part of the study area, in part of Hawzen district. The highest AAP then decreased from 856 mm in 1986–2000 to 621 mm in 2001–2015. In the last period, an extensive part of the study area received low AAP compared to the periods 1973–1985 and 1986–2000 (Fig. 3, top).

IDW interpolation of average annual precipitation (top) and SAVI (bottom) for 1985, 2000, and 2015

From Fig. 3 (bottom) it can be seen that there was a high density of SAVI coverage in 1985, mainly in the northern part of the study area. This area is characterized by mostly flat topography that was covered with grasses and gradually converted into agricultural land. The middle part, where more granitic rock is exposed at the surface, shows a lower density of SAVI, which increases slightly towards the southern part. Fifteen years later, in 2000, the SAVI distribution shows low density throughout, with a relative increase in the southern part of the study area. In 2015, the highest density of SAVI was distributed in the southern part of the study area (Fig. 3, bottom), where high vegetation restoration was observed during the fieldwork assessment. In discussions, the village administrators and agricultural development agents agreed that the improvement is the effect of community-based SWC activities. One of the villages located in the study area, called Abraha Atsibaha, won the 2012 UNDP Equator Prize at Rio de Janeiro in recognition of its outstanding success in restoring a degraded landscape through SWC practices (Kahsai 2015). The soil and water conservation practices implemented in the study area have thus resulted in a better restoration of the natural environment. Figure 5a shows the degraded land and heavy gully formation in Kilte Awulaelo district, Abraha Atsibaha village, in 2006. After intensive intervention of community-based SWC practices on the hillside (Fig. 5c), the degraded landscape started to recover its vegetation density (Fig. 5b). The mean SAVI of this specific village shows a significant increment from 0.16 in 1985 to 0.18 in 2000, reaching 0.19 in 2015 (Fig. 4). Places like Frewyni town have shown declines in SAVI since 1985 owing to the conversion of land to urban functions (Fig. 4). In most of the properly conserved areas, farmers benefited directly or indirectly from the conserved resources. For example, Fig. 5d shows a well-structured communal well, recharged by groundwater and used for irrigation of nearby farmlands.
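As flagged above, a bare-bones sketch of the IDW idea (the actual interpolation was performed with the ArcGIS spatial analyst tool); the gauge coordinates and rainfall values below are hypothetical, not the seven stations used in the study.

```python
import numpy as np

def idw(xy_stations, values, xy_targets, power=2.0):
    """Basic inverse distance weighting: each target point receives a
    weighted mean of the station values, with weights 1/distance**power."""
    xy_stations = np.asarray(xy_stations, dtype=float)
    values = np.asarray(values, dtype=float)
    xy_targets = np.asarray(xy_targets, dtype=float)
    out = np.empty(len(xy_targets))
    for i, p in enumerate(xy_targets):
        d = np.linalg.norm(xy_stations - p, axis=1)
        if np.any(d == 0.0):              # target coincides with a station
            out[i] = values[np.argmin(d)]
            continue
        w = 1.0 / d**power
        out[i] = np.sum(w * values) / np.sum(w)
    return out

# Hypothetical example: three rain gauges, two interpolation points
gauges = [(0.0, 0.0), (10.0, 0.0), (5.0, 8.0)]
aap_mm = [536.0, 621.0, 756.0]
print(idw(gauges, aap_mm, [(4.0, 3.0), (9.0, 1.0)]))
```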
Mean SAVI distribution in the village polygons in 1985, 2000 and 2015

Rehabilitation of degraded landscape and its benefit to local farmers. a Heavy gully observed in 2006 from Google Earth (Kilte Awulaelo district, village Abraha Atsibaha); b restored gully in 2016, photo from Google Earth (Kilte Awulaelo district, village Abraha Atsibaha); c conserved hillside (Kilte Awulaelo district, village Abraha Atsibaha, photo: researcher); d ground water harvesting at the bottom of the hill (Hawzen district, Hayelom village, photo: Hawzen district office of agriculture and rural development (HDOARD)); e a woman harvesting grass in an area exclosure (Kilte Awulaelo district, Abraha Atsibaha, photo: researcher); f potato harvesting from irrigated land (Saesie Tsaeda Emba district, Saz village, photo: researcher)

Similarly, in Fig. 5e, a woman is harvesting grass from an area exclosure for cattle feed. In Abraha Atsibaha and May Kuha villages, where the highest mean SAVI values are observed, free grazing was strictly restricted. Farmers have set their own communal resource-use bylaw, locally called "Sirit", and implemented it in practice in the area. Zero grazing was thus used as the most beneficial land rehabilitation mechanism, and farmers are allowed to harvest grasses without limit from all area exclosures, hillside terraces and other protected areas. Park et al. (2013) noted that area exclosure is one SWC practice considered a well-known management tool for restoring vegetation cover and, in turn, increasing soil organic matter. In order to assess the increment of vegetation cover, only SAVI values ≥ 0.2 were extracted, as recommended by experts (see https://phenology.cr.usgs.gov/ndvi_foundation.php). The result in Fig. 6 shows that in 1985 the total land area with SAVI ≥ 0.2 was 23.57 km2. Fifteen years later (2000), the total area covered with vegetation at SAVI ≥ 0.2 had increased to 64.94 km2, more than a twofold increase over the 1985 value. In 2015, the total area with a SAVI value ≥ 0.2 reached 67.11 km2 (Fig. 6), a 3.3% increment over the year 2000 value. These changes are attributable to the soil and water conservation interventions guided by the environmental rehabilitation strategy of the Regional State of Tigray.

SAVI area coverage in 1985, 2000, and 2015

In general, the average annual increment rate observed over the 30-year period (1985–2015), using the SAVI images, was 6.2%. The vigorous SWC activities performed throughout the country for the rehabilitation and restoration of degraded areas are also documented by EBI (2014). This has resulted in increased vegetation cover and enhanced biodiversity.
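The SAVI ≥ 0.2 area figures reported above can be reproduced from a SAVI raster by counting pixels at or above the threshold and multiplying by the pixel footprint (30 m × 30 m for Landsat); the tiny array below is a stand-in for a real raster.

```python
import numpy as np

def vegetated_area_km2(savi_raster, threshold=0.2, pixel_size_m=30.0):
    """Area (km^2) covered by pixels with SAVI >= threshold.
    NaN pixels compare as False and are therefore excluded."""
    n_pixels = np.count_nonzero(np.asarray(savi_raster, dtype=float) >= threshold)
    return n_pixels * pixel_size_m**2 / 1e6

# Stand-in 2x2 raster: two of the four pixels are at or above 0.2
print(vegetated_area_km2([[0.25, 0.05], [0.31, 0.18]]))  # 0.0018 km^2
```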
Interpretation of LST and SAVI

The maximum LST decreased from 42.2 °C in 1985 to 36.7 °C in 2015. This could be due to the improvement in surface vegetation cover. Similar studies by Alshaikh (2015) and Sun et al. (2012) revealed that areas with rich vegetation cover are characterized by the lowest LST. In Fig. 7 (top), the lowest LST distribution was also mainly observed in areas with fairly high AAP in Saesie Tsaeda Emba district. In 2000 and 2015, the lowest LSTs were found in areas with more vegetation cover in the southern part of the study area.

LST results for 1985, 2000 and 2015 (top); SAVI results for 1985, 2000, and 2015 (bottom)

Relationship of NDVI with LST and vegetation abundance

The concept of the LST–NDVI space was first formulated by Lambin and Ehrlich (1996), with LST plotted as a function of NDVI. As indicated in Fig. 8, modified from Sandholt et al. (2002), the left edge represents bare soil over the dry-to-wet (top-down) range. Along the X-axis, as the amount of green vegetation increases, the NDVI value also increases and, inversely, the maximum LST decreases. As indicated in Fig. 8, for dry conditions the negative relationship between LST and NDVI is defined by the upper edge, which is the upper limit of LST for a given type of surface and climatic conditions (Sandholt et al. 2002).

Simplified LST/NDVI space (reproduced with permission from Lambin and Ehrlich 1996; Sandholt et al. 2002)

The relationship between NDVI and LST was investigated for each period (1985, 2000, and 2015) through regression analysis. The regression was performed on the mean values extracted with zonal statistics in ArcGIS; within the study river basin there are 28 tabias, some situated fully and some partially within the boundary. A regression of NDVI_2015 as the dependent and LST_2015 as the independent variable was carried out. The linear regression established between LST_2015 and NDVI_2015 was statistically significant (p < 0.05). The regression model indicates that 18.31% of the variation in mean NDVI_2015 was explained by LST_2015 (Fig. 9a). The regression equation was modelled as NDVI_2015 = 0.615 − 0.013 LST_2015. The results show that there is a significant inverse correlation between NDVI and LST in all the periods (Fig. 9a–c; Table 2).

The mean LST and NDVI relationship over the years 1985, 2000 and 2015 at village polygons. a regression plot between mean NDVI of 2015 and mean LST of 2015; b regression plot between NDVI of 2000 and mean LST of 2000; and c regression plot between NDVI of 1985 and LST of 1985 (all at village polygons)

Table 2 Linear regression and correlation coefficients for the relationship between LST and SAVI in 1985, 2000, and 2015

Similarly, a regression of NDVI in 2000 as the dependent and LST in 2000 as the independent variable was carried out. The fitted linear regression shows that LST was a statistically significant inverse predictor of NDVI, F(1,26) = 18.21, p < 0.0005, and LST accounted for 64% of the explained variability in NDVI. The regression equation was expressed as NDVI_2000 = 0.557 − 0.01 LST_2000 (Table 2). For 1985, a linear regression was run to predict NDVI from the LST of the same period. The independent variable (LST) statistically significantly predicted NDVI, F(1,26) = 8.6, p < 0.05, R² = 0.249; LST accounted for 22.9% of the explained variability in NDVI. The regression equation was derived as NDVI_1985 = 0.375 − 0.005 LST_1985 (Fig. 9c). Using the classification of r values suggested by Evans (1996), the Pearson correlation coefficient in 2000 shows a strong correlation between NDVI and LST (Table 2), while in 1985 and 2015 the correlation coefficients were classified as moderate. Similar results for the relationship between NDVI and LST were reported by Yue et al. (2007) using Landsat 7 ETM+ data in Shanghai, by Sahana et al. (2016) using Landsat 5 TM and Landsat 8 OLI in the Sundarban Biosphere Reserve, India, and by Karnieli et al. (2010).
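The per-polygon regressions of this section can be reproduced with ordinary least squares. In the sketch below, the 28 tabia-level means are simulated stand-ins seeded to roughly mimic the reported 2015 equation; they are not the study data.

```python
import numpy as np
from scipy import stats

# Simulated stand-ins for the 28 tabia-level means (not the study data)
rng = np.random.default_rng(0)
lst_2015 = rng.uniform(28.0, 37.0, size=28)            # mean LST per tabia (deg C)
ndvi_2015 = 0.615 - 0.013 * lst_2015 + rng.normal(0.0, 0.02, size=28)

res = stats.linregress(lst_2015, ndvi_2015)
print(f"NDVI_2015 = {res.intercept:.3f} + ({res.slope:.3f}) * LST_2015")
print(f"r = {res.rvalue:.3f}, R^2 = {res.rvalue**2:.3f}, p = {res.pvalue:.4g}")
```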
Relationship of SAVI with LST using village polygons

The different SWC measures applied in the study area were also significant for the improvement of vegetation cover. In order to detect the vegetation cover dynamics, the SAVI model formulated by Huete (1988) was preferred. This model fits a semi-arid environment best, reducing the confounding influence of soil by using a soil coefficient of 0.5. Likewise, for 1985, 2000, and 2015 (Fig. 10), SAVI was computed from Landsat images taken in the months of February and March, and its relationship with LST was compared.

Linear regression of mean SAVI and mean LST by village polygon over the years 1985, 2000, and 2015. a regression plot of mean SAVI 2015 and mean LST 2015; b regression plot between mean SAVI of 2000 and LST of 2000; and c regression plot of mean SAVI 1985 and mean LST 1985 (all at village polygons)

Accordingly, a regression of SAVI in 1985 as the dependent and LST in 1985 as the independent variable was carried out to examine the relationship. The fitted linear regression shows that LST was a statistically significant inverse predictor of SAVI, F(1,26) = 10.3, p < 0.005 (p = 0.004), and LST accounted for 28.3% of the explained variability in SAVI. The regression equation was derived as SAVI_1985 = 0.275 − 0.004 LST_1985. Likewise, a simple linear regression was conducted to examine the relationship between SAVI and LST for the village polygons in the year 2000. The results are shown in Table 2, where Y is the mean SAVI associated with the village polygons and X is the LST associated with the village polygons, constructed with zonal statistics in the ArcGIS software. The result indicates that, at the 95% confidence level, LST was a statistically significant predictor of SAVI, F(1,26) = 9.06, p < 0.05 (p = 0.006), and LST accounted for 25.8% of the explained variability in SAVI. The regression equation was presented as SAVI_2000 = 0.282 − 0.003 LST_2000. The relationship between SAVI and LST for 2015 was also computed, and the significance of the regression was checked through a t-test at α = 0.05. The result revealed that LST did not statistically significantly predict SAVI, F(1,26) = 2.67, p > 0.05 (p = 0.114), and LST accounted for only 5.83% of the explained variability in SAVI, which is lower than in 1985 and 2000. The fitted line for the linear model was SAVI_2015 = 0.279 − 0.004 LST_2015. Like the NDVI and LST relationship tested previously, SAVI also exhibited an inverse relationship with LST. This is similar to the result found by Badreldin and Goossens (2015), who monitored the effects of mitigation strategies on desertification change in an arid environment. This means that areas with high vegetation density are represented by a low surface temperature and vice versa. Under the Pearson correlation classes suggested by Evans (1996), there was a weak negative relationship between SAVI and LST in 2015, whereas in 1985 and 2000 a moderate negative relationship was observed (Fig. 10a–c; Table 2).

Relationships between SAVI and LST; SAVI and AAP pixel by pixel

SAVI, as a measure of vegetation restoration, was computed to test its relationship with LST and AAP pixel-by-pixel. Table 3 shows the Pearson correlation coefficients between SAVI and AAP, and between SAVI and LST, over the years 1985, 2000 and 2015. It is evident from Fig. 10 that LST values tend to correlate negatively with SAVI values for all study periods. The strongest negative correlation (−0.359) was found in the year 2000 and the weakest negative correlation (−0.102) was observed in the year 1985 (Table 3).
According to the Evans (1996) correlation strength classification, in the years 2000 and 2015 the relationship between SAVI and LST shows a weak negative correlation, whereas in 1985 it was a very weak negative correlation. The negative correlation between LST and SAVI implies that areas with lower vegetation biomass have higher LST, and vice versa. The combination of LST and SAVI in a scatterplot results in a triangular shape, like that of LST and NDVI, as described by other scholars (see for example Carlson et al. (1994); Gillies et al. (1997); Gillies and Carlson (1995); Weng et al. (2004)).

Table 3 Linear regression and correlation coefficients for the relationship between SAVI and LST; SAVI and AAP in 1985, 2000, and 2015 pixel-by-pixel

All the models are statistically significant (p < 0.05) and the pixel-based samples were large enough (n = 544,365) to obtain a precise estimate of the strength of the relationship. In 2015, the year with the highest observed vegetation density, only 1.8% of the variation could be accounted for by the regression model (Fig. 11a). A significant negative correlation (r = −0.135; p < 0.05) was found between the pixel-based mean SAVI and the pixel-based AAP, which indicates that as AAP in 2015 decreases, SAVI in 2015 tends to increase. On the contrary, the very weak positive correlations (r = 0.057 and r = 0.252) in the years 2000 and 1985, respectively, indicate that when AAP increases, SAVI also tends to increase to some extent (Table 3).

Scatter plot of the linear regression model between SAVI and AAP in 1985, 2000 and 2015 pixel-by-pixel. a regression plot of SAVI 2015 and AAP 2015; b regression plot of SAVI 2000 and AAP 2000; and c regression plot of SAVI 1985 and AAP 1985 (all pixel-by-pixel)

However, the significant increase of vegetation density in the study area could also be due to other factors: (1) the effect of appropriate SWC practices implemented to rehabilitate the degraded landscape; (2) vegetation water-use efficiency; (3) the impact of zero grazing in protecting area exclosures. A Reuters news report by Whiting (2017), citing Chris Reij, a desertification expert at the World Resources Institute, states that the Tigray Region of Ethiopia is now greener than it has ever been during the last 145 years, and that the improvement of the vegetation cover is due not to an increase in rainfall but to human investment in restoring degraded land to productivity. For this reason, Ethiopia's Tigray Region won gold in a U.N.-backed award in 2017 for the world's best policies to combat desertification and improve the fertility of drylands (Whiting 2017). Davenport and Nicholson (1993) observed notable inconsistencies in vegetation index and rainfall associations and argued that the relationships between precipitation and NDVI are not direct and causal. Contrarily, Kassie et al. (2008) argued that physical SWC measures did not have a positive impact but instead reduced yield and biomass in the high-rainfall areas of the Ethiopian highlands compared with non-conserved plots. Studies show that SWC has been implemented in the Tigray Region of northern Ethiopia since 1985, and that implementation became more effective from the early 1990s due to the greater emphasis given by the government to land rehabilitation. The implementation of SWC in the region as a strategy was intended to reduce run-off, improve soil fertility and ultimately reverse landscape degradation for the betterment of rural livelihoods.
This study evaluated changes in vegetation cover following the implementation of SWC measures. Satellite images were used to generate SAVI and LST, while long-term AAP records were used to account for the effects of precipitation. The implementation of different forms of SWC activities, such as area exclosures, stone terraces, soil bunds, contour ditches, moisture retention reservoirs and check dams, is an effective way to reverse vegetation degradation in the arid and semi-arid regions of Ethiopia. The supplemental survey conducted in the study area shows that 95% of the respondents observed a vegetation cover improvement in their locality over the last 25 years. This was due to the proper implementation of SWC, particularly the practice of area exclosure, which protects land from human and livestock interference for better restoration. When a degraded landscape is protected with different SWC practices, run-off is reduced and infiltration capacity increases, which retains soil moisture and finally improves vegetation density. In order to achieve such results, the involvement of local communities in all stages of the conservation programme is essential. On this matter, Bewket (2007) argued that the success of any SWC intervention depends on the extent to which the introduced conservation technologies are accepted and adopted by the farmers. The pixel-by-pixel correlation between SAVI and long-term AAP provided better estimates than the village polygon results. Even though the AAP distribution shows a declining trend over the 30-year study period, the vegetation cover shows an increasing trend. This is supported by the statistically significant inverse correlation between SAVI and AAP (r = −0.135) in the year 2015. This clearly indicates that the significant increase in vegetation cover was not the result of precipitation; rather, other factors, such as the integrated SWC practices applied in the area, contributed significantly. When appropriate SWC techniques are applied, runoff can be reduced and the infiltration rate and water-holding capacity of the soil can be improved. To confirm such a result, similar studies should be conducted in other areas where SWC is practiced, and their results compared, for a firmer conclusion. It is recommended that the implementation, protection and follow-up of SWC activities involve rural communities directly at all stages for a better and sustainable restoration of vegetation cover. The study has shown that SAVI and LST derived from Landsat images of different periods, together with long-term station measurements of AAP, are useful data for analyzing the relationship between precipitation and vegetation cover and for detecting vegetation cover improvement.

Aboelghar M, Ali A, Arafat S (2014) Spectral wheat yield prediction modeling using SPOT satellite imagery and leaf area index. Arabian J Geosci 7:465–474. https://doi.org/10.1007/s12517-012-0772-6 Al-doski J, Mansor SB, Zulhaidi H, Shafri M (2013) NDVI differencing and post-classification to detect vegetation changes in Halabja City, Iraq. J Appl Geol Geophys 1(2):1–10 Alshaikh A (2015) Vegetation cover density and land surface temperature interrelationship using satellite data, case study of Wadi Bisha, South KSA. Adv Remote Sens 4:248–262 Badreldin N, Goossens R (2015) A satellite-based disturbance index algorithm for monitoring mitigation strategies effects on desertification change in an arid environment.
Mitig Adapt Strateg Glob 20(2):263–276 Baret F, Guyot G (1991) Potentials and limits of vegetation indices for LAI and APAR assessment. Remote Sens Environ 35(2):161–173 Beeri O, Phillips R, Hendrickson J, Frank AB, Kronberg S (2007) Estimating forage quantity and quality using aerial hyperspectral imagery for northern mixed-grass prairie. Remote Sens Environ 110:216–225. https://doi.org/10.1016/j.rse.2007.02.027 Berhanu G, Pender J, Ehui SK, Mitiku H (2003) Policies for sustainable land management in the highlands of Tigray, northern Ethiopia: summary of papers and proceedings of a workshop held at Axum Hotel, Mekelle, Ethiopia, 28–29 March 2002. Socio-economics and policy research working paper 54, Kenya Berk A (2008) Analytically derived conversion of spectral band radiance to brightness temperature. J Quant Spectrosc Radiat Transf 109(7):1266–1276 Bewket W (2007) Soil and water conservation intervention with conventional technologies in northwestern highlands of Ethiopia: acceptance and adoption by farmers. Land Use Policy 24(2):404–416 Bizuneh A (2014) Modeling the effect of climate and land use change on the water resources in northern Ethiopia: The case of Suluh River Basin. Freie Universität Berlin. p 153 Chander G, Markham B, Helder D (2009) Summary of current radiometric calibration coefficients for Landsat MSS, TM, ETM+, and EO-1 ALI sensors. Remote Sens Environ 113:893–903 Carlson TN, Ripley DA (1997) On the relation between NDVI, fractional vegetation cover, and leaf area index. Remote Sens Environ 62(3):241–252 Carlson T, Gillies R, Perry E (1994) A method to make use of thermal infrared temperature and NDVI measurements to infer surface soil water content and fractional vegetation cover. Remote Sens Rev 9(1–2):161–173 Carolyn T, Kwadwo A (2011) Responding to land degradation in the highlands of Tigray, northern Ethiopia, IFPRI discussion paper 01142 Chawla A, Kumar A, Rajkumar S, Singh RD, Thukral AK (2010) Correlation of multispectral satellite data with plant species diversity vis-à-vis soil characteristics in a landscape of Western Himalayan Region, India. Appl Remote Sens 1–1:1–13 Davenport M, Nicholson S (1993) On the relation between rainfall and the normalized difference vegetation index for diverse vegetation types in East Africa. Int J Remote Sens 14(12):2369–2389 Dercon S, Zeitlin A (2009) Rethinking agriculture and growth in Ethiopia: a conceptual discussion. Paper prepared as part of a study on agriculture and growth in Ethiopia. http://www.economics.ox.ac.uk/members/Stefan.Dercon/ethiopiapaper2_v2.pdf. Accessed 6 Apr 2017 EBI (2014) Government of the Federal Democratic Republic of Ethiopia: Ethiopia's fifth national report to the convention on biological diversity Ethiopian. Biodiversity Institute, Addis Ababa Esser K, Vågen TG, Yibebe T, Mitiku H (2002) Soil and water conservation in Tigray, Ethiopia Evans J (1996) Straightforward statistics for the behavioral sciences. Brooks/Cole, Boston FAO (2006) Guidelines for Soil Description, 4th edn. Rome, p 97 Gebremichael D, Nyssen J, Poesen J, Deckers J, Haile M, Govers G, Moeyersons J (2005) Effectiveness of stone bunds in controlling soil erosion on cropland in the Tigray Highlands, northern Ethiopia. Soil Use Manag 21(3):287–297 Gillies R, Carlson T (1995) Thermal remote sensing of surface soil water content with partial vegetation cover for incorporation into climate models. 
J Appl Meteorol 34:745–756 Gillies R, Kustas W, Humes K (1997) A verification of the 'triangle' method for obtaining surface soil water content and energy fluxes from remote measurements of the normalized difference vegetation. Int J Remote Sens 18(15):3145–3166 Huete AR (1988) A soil-adjusted vegetation index (SAVI). Remote Sens Environ 25:295–309 Kahsai G (2015) Participatory watershed management as the driving force for sustainable livelihood change in the community: the case of Abreha we Atsebeha. In: Nicol A, Langan S, Victor M, Gonsalves J (eds) Water-smart agriculture in East Africa. International Water Management Institute (IWMI), Addis Ababa, p 352. https://doi.org/10.5337/2015.203 Karnieli A, Agam N, Pinker RT, Anderson M, Imhoff ML, Gutman GG, Goldberg A (2010) Use of NDVI and land surface temperature for drought assessment: merits and limitations. J Clim 23(3):618–633 Kassie M, Pender J, Yesuf M, Kohlin G (2008) Estimating returns to soil conservation adoption in the northern Ethiopian highlands. Agric Econ 38(2):213–232 Lambin E, Ehrlich D (1996) The surface temperature-vegetation index space for land cover and land-cover change analysis. Int J Remote Sens 17(3):463–487 Li Z-L, Wu H, Wang N, Qiu S, Sobrino JA, Wan Z, Yan G (2013) Land surface emissivity retrieval from satellite data. Int J Remote Sens 34(9–10):3084–3127. https://doi.org/10.1080/01431161.2012.716540 Moran MS, Clarke TR, Inoue Y, Vidal A (1994) Estimating crop water deficiency using the relation between surface minus air temperature and spectral vegetation index. Remote Sens Environ 49(49):246–263. https://doi.org/10.1016/0034-4257(94)90020-5 NASA (National Aeronautics and Space Administration) (2009) Landsat 7 science data users handbook. http://landsathandbook.gsfc.nasa.gov/handbook/handbook_toc.html. Accessed 13 Dec 17 Negusse T, Yazew E, Tadesse N (2013) Quantification of the impact of integrated soil and water conservation measures on groundwater availability in Mendae Catchment, Abraha We-Atsebaha, eastern Tigray, Ethiopia. Momona Ethiop J Sci 5(2):117–136 Nyssen J, Poesen J, Deckers J (2009) Land degradation and soil and water conservation in tropical highlands. Soil Tillage Res 103(2):197–202 Park KH, Qu ZQ, Wan QQ, Ding GD, Wu B (2013) Effects of enclosures on vegetation recovery and succession in Hulunbeier steppe, China. For Sci Technol 9(1):25–32 Rondeaux G, Steven M, Baret F (1996) Optimization of soil-adjusted vegetation indices. Remote Sens Environ 55:95–107 Rouse JW Jr, Haas RH, Schell JA, Deering DW (1973) Monitoring vegetation system in the great plains with ERTS. Remote sensing center, A&M University, College Station Sahana M, Ahmed R, Sajjad H (2016) Analyzing land surface temperature distribution in response to land use/land cover change using split window algorithm and spectral radiance model in Sundarban Biosphere Reserve, India. Model Earth Syst Environ 2:81. https://doi.org/10.1007/s40808-016-0135-5 Sahebjalal E, Dashtekian K (2013) Analysis of land use-land covers changes using normalized difference vegetation index (NDVI) differencing and classification methods. Afr J Agric Res 8(37):4614–4622 Sandholt I, Rasmussen K, Andersen J (2002) A simple interpretation of the surface temperature/vegetation index space for assessment of surface moisture status. 
Remote Sens Environ 79:213–224 Selassie YG, Anemut F, Addisu S, Abera B, Alemayhu A, Belayneh A, Getachew A (2015) The effects of land use types, management practices and slope classes on selected soil physico-chemical properties in Zikre watershed, north-western Ethiopia. Environ Syst Res 4(1):3 Sobrino JA, Raissouni N (2000) Toward remote sensing methods for land cover dynamic monitoring: application to Morocco. Int J Remote Sens 21(2):353–366 Sobrino JA, Jiménez-Muñoz JC, Paolini L (2004) Land surface temperature retrieval from LANDSAT TM 5. Remote Sens Environ 90(4):434–440 Sun Q, Wu Z, Tan J (2012) The relationship between land surface temperature and land use/land cover in Guangzhou, China. Environ Earth Sci 65(6):1687–1694 Suresh S, Ajay SV, Mani K (2016) Estimation of land surface temperature of high range mountain landscape of Devikulam Taluk using Landsat 8 data. Int J Res Eng Technol 5:92–96 Taye G, Poesen J, Wesemael BV, Vanmaercke M, Teka D, Deckers J, Goosse T, Maetens W, Nyssen J, Hallet V, Haregeweyn N (2013) Effects of land use, slope gradient, and soil and water conservation structures on runoff and soil loss in semi-arid northern Ethiopia. Phys Geogr 34(3):236–259 Valor E, Caselles V (1996) Mapping land surface emissivity from NDVI: application to European, African, and South American areas. Remote Sens Environ 57(3):167–184 Van der Veen A, Tagel G (2011) Effects of policy intervention on food security in Tigray, northern Ethiopia. Ecol Soc 16(1):18 Weng Q (2009) Thermal infrared remote sensing for urban climate and environmental studies: methods, applications, and trends. ISPRS J Photogramm Remote Sens 64(4):335–344 Weng Q, Lu D, Schubring J (2004) Estimation of land surface temperature-vegetation abundance relationship for urban heat island studies. Remote Sens Environ 89(4):467–483 Whiting A (2017) Ethiopia's Tigray Region bags gold award for greening its drylands. https://www.reuters.com/article/us-land-farming/ethiopias-tigray-region-bags-gold-award-for-greening-its-drylands-idUSKCN1B21CT. Accessed 25 Nov 2017 Yue W, Xu J, Tan W, Xu L (2007) The relationship between land surface temperature and NDVI with remote sensing: application to Shanghai Landsat 7 ETM+ data. Int J Remote Sens 28(15):3205–3226 SH made substantial contributions to the conception and design, the acquisition of data, the interpretation of results, and leading the overall activities of the research; WB and JL were involved in guiding the principal author and critically commenting on the manuscript. Both have given approval of the current version to be published. All authors read and approved the final manuscript. The authors would like to thank TRECCAfrica II for providing a scholarship to the corresponding author to study for a Ph.D. programme at the Institute of Resources Assessment, University of Dar es Salaam, Tanzania. The USGS website that allowed the authors to download the Landsat images freely from their archives should also be acknowledged. The authors would also like to thank the Meteorological Agency, Mekelle branch, for providing climatic data, and the local community of the study area for their cooperation during the fieldwork. Moreover, many thanks to Dr. Haile Muluken and Mr. Ramzy Bejjani for proofreading and editing the manuscript. Finally, special thanks go to the anonymous reviewers, who helped to improve this manuscript. The authors declare that the data and materials presented in this manuscript can be made publicly available by Springer Open as per the editorial policy.
The first author is also grateful to Mekelle University for granting a research fund under Registration Number CRPO/CSSL/PhD/003/08, and to the Association of African Universities (AAU) for awarding a small grant for thesis writing. Department of Geography and Environmental Studies, Mekelle University, P.O. Box 231, Mekelle, Ethiopia Solomon Hishe Institute of Resources Assessment, University of Dar es Salaam, Dar es Salaam, Tanzania Solomon Hishe & James Lyimo Department of Geography and Environmental Studies, Addis Ababa University, Addis Ababa, Ethiopia Woldeamlak Bewket James Lyimo Correspondence to Solomon Hishe. Hishe, S., Lyimo, J. & Bewket, W. Effects of soil and water conservation on vegetation cover: a remote sensing based study in the Middle Suluh River Basin, northern Ethiopia. Environ Syst Res 6, 26 (2017). https://doi.org/10.1186/s40068-017-0103-8 Land surface temperature Middle Suluh River Basin
Compared with those reporting no use, subjects drinking >4 cups/day of decaffeinated coffee were at increased risk of RA [rheumatoid arthritis] (RR 2.58, 95% CI 1.63-4.06). In contrast, women consuming >3 cups/day of tea displayed a decreased risk of RA (RR 0.39, 95% CI 0.16-0.97) compared with women who never drank tea. Caffeinated coffee and daily caffeine intake were not associated with the development of RA. Four of the studies focused on middle and high school students, with varied results. Boyd, McCabe, Cranford, and Young (2006) found a 2.3% lifetime prevalence of nonmedical stimulant use in their sample, and McCabe, Teter, and Boyd (2004) found a 4.1% lifetime prevalence in public school students from a single American public school district. Poulin (2001) found an 8.5% past-year prevalence in public school students from four provinces in the Atlantic region of Canada. A more recent study of the same provinces found a 6.6% and 8.7% past-year prevalence for MPH and AMP use, respectively (Poulin, 2007). The data from 2-back and 3-back tasks are more complex. Three studies examined performance in these more challenging tasks and found no effect of d-AMP on average performance (Mattay et al., 2000, 2003; Mintzer & Griffiths, 2007). However, in at least two of the studies, the overall null result reflected a mixture of reliably enhancing and impairing effects. Mattay et al. (2000) examined the performance of subjects with better and worse working memory capacity separately and found that subjects whose performance on placebo was low performed better on d-AMP, whereas subjects whose performance on placebo was high were unaffected by d-AMP on the 2-back and impaired on the 3-back tasks. Mattay et al. (2003) replicated this general pattern of data with subjects divided according to genotype. The specific gene of interest codes for the production of Catechol-O-methyltransferase (COMT), an enzyme that breaks down dopamine and norepinephrine. A common polymorphism determines the activity of the enzyme, with a substitution of methionine for valine at Codon 158 resulting in a less active form of COMT. The met allele is thus associated with less breakdown of dopamine and hence higher levels of synaptic dopamine than the val allele. Mattay et al. (2003) found that subjects who were homozygous for the val allele were able to perform the n-back faster with d-AMP; those homozygous for met were not helped by the drug and became significantly less accurate in the 3-back condition with d-AMP. In the case of the third study finding no overall effect, analyses of individual differences were not reported (Mintzer & Griffiths, 2007). Exercise is also important, says Lebowitz. Studies have shown it sharpens focus, elevates your mood and improves concentration. Likewise, maintaining a healthy social life and getting enough sleep are vital, too. Studies have consistently shown that regularly skipping out on the recommended eight hours can drastically impair critical thinking skills and attention. Many of the food-derived ingredients that are often included in nootropics—omega-3s in particular, but also flavonoids—do seem to improve brain health and function. But while eating fatty fish, berries and other healthy foods that are high in these nutrients appears to be good for your brain, the evidence backing the cognitive benefits of OTC supplements that contain these and other nutrients is weak. 
The Trail Making Test is a paper-and-pencil neuropsychological test with two parts, one of which requires shifting between stimulus categories. Part A simply requires the subject to connect circled numbers in ascending order. Part B requires the subject to connect circled numbers and letters in an interleaved ascending order (1, A, 2, B, 3, C….), a task that places heavier demands on cognitive control. Silber et al. (2006) analyzed the effect of d-AMP on Trails A and B and failed to find an effect. The majority of nonmedical users reported obtaining prescription stimulants from a peer with a prescription (Barrett et al., 2005; Carroll et al., 2006; DeSantis et al., 2008, 2009; DuPont et al., 2008; McCabe & Boyd, 2005; Novak et al., 2007; Rabiner et al., 2009; White et al., 2006). Consistent with nonmedical user reports, McCabe, Teter, and Boyd (2006) found 54% of prescribed college students had been approached to divert (sell, exchange, or give) their medication. Studies of secondary school students supported a similar conclusion (McCabe et al., 2004; Poulin, 2001, 2007). In Poulin's (2007) sample, 26% of students with prescribed stimulants reported giving or selling some of their medication to other students in the past month. She also found that the number of students in a class with medically prescribed stimulants was predictive of the prevalence of nonmedical stimulant use in the class (Poulin, 2001). In McCabe et al.'s (2004) middle and high school sample, 23% of students with prescriptions reported being asked to sell or trade or give away their pills over their lifetime. Finally, all of the questions raised here in relation to MPH and d-AMP can also be asked about newer drugs and even about nonpharmacological methods of cognitive enhancement. An example of a newer drug with cognitive-enhancing potential is modafinil. Originally marketed as a therapy for narcolepsy, it is widely used off label for other purposes (Vastag, 2004), and a limited literature on its cognitive effects suggests some promise as a cognitive enhancer for normal healthy people (see Minzenberg & Carter, 2008, for a review). Many people quickly become overwhelmed by the volume of information and number of products on the market. Because each website claims its product is the best and most effective, it is easy to feel confused and unable to decide. Smart Pill Guide is a resource for reliable information and independent reviews of various supplements for brain enhancement. I have a needle phobia, so injections are right out; but from the images I have found, it looks like testosterone enanthate gels using DMSO resemble other gels like Vaseline. This suggests an easy experimental procedure: spoon an appropriate dose of testosterone gel into one opaque jar, spoon some Vaseline gel into another, and pick one randomly to apply while not looking. If one gel evaporates but the other doesn't, or they have some other difference in behavior, the procedure can be expanded to something like and then half an hour later, take a shower to remove all visible traces of the gel. Testosterone itself has a fairly short half-life of 2-4 hours, but the gel or effects might linger. (Injections apparently operate on a time-scale of weeks; I'm not clear on whether this is because the oil takes that long to be absorbed by surrounding materials or something else.) Experimental design will depend on the specifics of the obtained substance. 
As a controlled substance (Schedule III in the US), supplies will be hard to obtain; I may have to resort to the Silk Road. The abuse of drugs is something that can lead to large negative outcomes. If you take Ritalin (Methylphenidate) or Adderall (mixed amphetamine salts) but don't have ADHD, you may experience more focus. But what many people don't know is that the drug is very similar to amphetamines. And the use of Ritalin is associated with serious adverse events such as drug dependence, overdose and suicide attempts [80]. Taking a drug for a reason other than originally intended is stupid, irresponsible and very dangerous. From the standpoint of absorption, the drinking of tobacco juice and the interaction of the infusion or concoction with the small intestine is a highly effective method of gastrointestinal nicotine administration. The epithelial area of the intestines is incomparably larger than the mucosa of the upper tract including the stomach, and the small intestine represents the area with the greatest capacity for absorption (Levine 1983:81-83). As practiced by most of the sixty-four tribes documented here, intoxicated states are achieved by drinking tobacco juice through the mouth and/or nose…The large intestine, although functionally little equipped for absorption, nevertheless absorbs nicotine that may have passed through the small intestine. My intent here is not to promote illegal drugs or promote the abuse of prescription drugs. In fact, I have identified which drugs require a prescription. If you are a servicemember and you take a drug (such as Modafinil and Adderall) without a prescription, then you will fail a urinalysis test. Thus, you will most likely be discharged from the military. Still, the scientific backing and ingredient sourcing of nootropics on the market varies widely, and even those based on some research won't necessarily immediately, always or ever translate to better grades or an ability to finally crank out that novel. Nor are supplements of any kind risk-free, says Jocelyn Kerl, a pharmacist in Madison, Wisconsin. Noopept was developed in Russia in the 90s, and is alleged to improve learning. This drug modifies acetylcholine and AMPA receptors, increasing the levels of these neurotransmitters in the brain. This is believed to account for reports of its efficacy as a 'study drug'. Noopept is illegal in the UK, as the 2016 Psychoactive Substances Act made it an offence to sell this drug; selling it could even lead to 7 years in prison. To enhance its nootropic effects, some users have been known to snort Noopept. Each nootropic comes with a recommended amount to take. This is almost always based on a healthy adult male with an average weight and 'normal' metabolism. Nootropics (and many other drugs) are almost exclusively tested on healthy men. If you are a woman, older, smaller or in any other way not the 'average' man, always take into account that the quantity could be different for you. Aniracetam is known as one of the smart pills with the widest array of uses, from benefits for dementia patients and memory boosts in adults with healthy brains to the promotion of recovery from brain damage. It also improves the quality of sleep, which contributes to an overall increase in focus during the day. Because it supports the production of dopamine and serotonin, it elevates our mood and helps fight depression and anxiety. I noticed on SR something I had never seen before, an offer for 150mgx10 of Waklert for ฿13.47 (then, ฿1 = $3.14).
I searched and it seemed Sun was somehow manufacturing armodafinil! Interesting. Maybe not cost-effective, but I tried out of curiosity. They look and are packaged the same as the Modalert, but at a higher price-point: 150 rather than 81 rupees. Not entirely sure how to use them: assuming quality is the same, 150mg Waklert is still 100mg less armodafinil than the 250mg Nuvigil pills. The placebos can be the usual pills filled with olive oil. The Nature's Answer fish oil is lemon-flavored; it may be worth mixing in some lemon juice. In Kiecolt-Glaser et al 2011, anxiety was measured via the Beck Anxiety scale; the placebo mean was 1.2 on a standard deviation of 0.075, and the experimental mean was 0.93 on a standard deviation of 0.076. (These are all log-transformed covariates or something; I don't know what that means, but if I naively plug those numbers into Cohen's d, I get a very large effect: (1.2 − 0.93)/0.076 ≈ 3.55.) Starting from the studies in my meta-analysis, we can try to estimate an upper bound on how big any effect would be, if it actually existed. One of the most promising null results, Southon et al 1994, turns out to be not very informative: if we punch in the number of kids, we find that they needed a large effect size (d=0.81) before they could see anything: Many of the most popular "smart drugs" (Piracetam, Sulbutiamine, Ginkgo Biloba, etc.) have been around for decades or even millennia but are still known only in medical circles or among esoteric practitioners of herbal medicine. Why is this? If these compounds have proven cognitive benefits, why are they not ubiquitous? How come every grade-school child gets fluoride for the development of their teeth (despite fluoride's being a known neurotoxin) but not, say, Piracetam for the development of their brains? Why does the nightly news slant stories to appeal more to a fear-of-change than the promise of a richer cognitive future? (As I was doing this, I reflected how modafinil is such a pure example of the money-time tradeoff. It's not that you pay someone else to do something for you, which necessarily they will do in a way different from you; nor is it that you have exchanged money to free yourself of a burden of some future time-investment; nor have you paid money for a speculative return of time later in life like with many medical expenses or supplements. Rather, you have paid for 8 hours today of your own time.) Noopept shows a much greater affinity for certain receptor sites in the brain than racetams, allowing doses as small as 10-30mg to provide increased focus, improved logical thinking function, enhanced short and long-term memory functions, and increased learning ability including improved recall. In addition, users have reported a subtle psychostimulatory effect. Eugeroics (armodafinil and modafinil) are classified as "wakefulness promoting" agents; modafinil increased alertness, particularly in sleep-deprived individuals, and was noted to facilitate reasoning and problem solving in non-ADHD youth.[23] In a systematic review of small, preliminary studies where the effects of modafinil were examined, when simple psychometric assessments were considered, modafinil intake appeared to enhance executive function.[27] Modafinil does not produce improvements in mood or motivation in sleep-deprived or non-sleep-deprived individuals.[28]
Contribution of fish farming ponds to the production of immature Anopheles spp. in a malaria-endemic Amazonian town

Izabel Cristina dos Reis1,2,3, Cláudia Torres Codeço1, Carolin Marlen Degener1, Erlei Cassiano Keppeler4, Mauro Menezes Muniz2, Francisco Geovane Silva de Oliveira4, José Joaquin Carvajal Cortês2,3,5, Antônio de Freitas Monteiro6, Carlos Antônio Albano de Souza6, Fernanda Christina Morone Rodrigues3, Genilson Rodrigues Maia7 & Nildimar Alves Honório2,3

Malaria Journal volume 14, Article number: 452 (2015)

In the past decade fish farming has become an important economic activity in the Occidental Brazilian Amazon, where the number of new fish farms is rapidly increasing. One of the primary concerns with this phenomenon is the contribution of fishponds to the maintenance and increase of the anopheline mosquito population, and the subsequent increase in human malaria burden. This study reports the results of a 2-year anopheline abundance survey in fishponds and natural water bodies in a malaria-endemic area in northwest Brazil. The objective of this study was to investigate the contribution of natural water bodies (rivers, streams, creeks, ponds, and puddles) and artificial fishponds as breeding sites for Anopheles spp. in Mâncio Lima, Acre, and to investigate the effect of limnological and environmental variables on Anopheles spp. larval abundance. Natural water bodies and fishponds were sampled at eight different times over 2 years (early, mid and late rainy season, dry season) in the Amazonian town of Mâncio Lima, Acre. Anopheline larvae were collected with an entomological dipper, and physical, chemical and ecological characteristics of each water body were measured. Management practices of fishpond owners were ascertained with a systematic questionnaire. Fishponds were four times more infested with anopheline larvae than natural water bodies. Electrical conductivity and the distance to the nearest house were both significant inverse predictors of larval abundance in natural water bodies. The density of larvae in fishponds rose with increasing border vegetation. Fishponds owned by different farmers varied in the extent of anopheline larval infestation, but ponds owned by the same individual had similar infestation patterns over time. Commercial fishponds were 1.7 times more infested with anopheline larvae compared to fishponds for family use. These results suggest that fishponds are important breeding sites for anopheline larvae, and that adequate management activities, such as the removal of border vegetation, could reduce the abundance of mosquito larvae, most importantly Anopheles darlingi. Malaria, one of the most prevalent infectious diseases, is caused by parasites of the genus Plasmodium (Apicomplexa: Plasmodiidae) and is transmitted to humans via the bite of infected female Anopheles (Diptera: Culicidae) mosquitoes. Anopheles darlingi is a highly anthropophilic and efficient malaria vector that is widely prevalent in the Brazilian Amazon basin [1]. At least 33 other anopheline species exist in the Brazilian Amazon region [2, 3] and several of them, including Anopheles deaneorum, Anopheles braziliensis, Anopheles nuneztovari, Anopheles oswaldoi s.l., Anopheles triannulatus, Anopheles strodei, Anopheles evansae, Anopheles galvaoi, Anopheles aquasalis, Anopheles albitarsis s.l., and Anopheles peryassui, have also been implicated as malaria vectors in the Amazon [3–6].
The wide diversity of neo-tropical anophelines has been attributed to the genus' ability to adapt to numerous niches [7]. Anopheline larval habitats range from fresh and salt-water marshes, to mangrove swamps, rice paddies, grass-filled ditches, the borders of rivers and streams, and small transient puddles of water [8, 9]. Environmental factors such as the physical and chemical characteristics of the water, as well as vegetation type, directly and indirectly influence anopheline ovipositing behaviour, larval distribution, population density, and development [10, 11]. Knowledge of the factors that affect larval breeding sites is a prerequisite for understanding the space–time distribution of the mosquitoes and for developing appropriate vector-control strategies. Fish farming activities are generally implemented in rural areas. The town of Mâncio Lima is an exception, as numerous fishponds have been constructed there in recent years. The town is located at the margin of an igapó forest (blackwater-flooded forest), which is seasonally flooded with fresh water, and crisscrossed by small streams. This provides ideal conditions for the easy construction of fishponds, by either digging ponds beside rivers or by damming them. In recent years, fish farming has blossomed, with roughly one fishpond for every 20 houses. The objective of this study was to investigate the contribution of natural water bodies (rivers, streams, creeks, ponds, and puddles) and artificial fishponds as breeding sites for Anopheles spp. in Mâncio Lima and to investigate the effect of limnological and environmental variables on Anopheles spp. larval abundance.

Study site

The study was performed in the town of Mâncio Lima (7°37′33.42″S, 72°53′29.89″W) in northwest Acre, Brazil (Fig. 1). The municipality's 15,246 inhabitants (population density: 2.8 residents/km2) are spread through urban (57.6%) and rural/riparian landscapes (42.4%). The town is located on a mosaic of fragmented primary forest, animal pastures, man-made structures, and a diverse array of different water bodies (including the Japiim River, dams, streams, narrow channels with riparian moriche palms, perennial and temporal puddles, and pisciculture-focused fishponds). It is divided into nine neighbourhoods. Downtown is the most urbanized, with paved streets and numerous commercial activities. The other neighbourhoods spread over a peri-urban landscape that consists mostly of unpaved roads, small farms, empty fields, and small clusters of homes. The major economic activities in the region are manioc flour production and pisciculture [12]. The hot and humid climate is characterized by rainy (November to April) and dry (May to October) seasons [13]. Average annual precipitation is 2,100 mm and the mean annual temperature is 25.5 °C (monthly minimum and maximum temperatures: 19–32 °C) [14].

Maps of the study area. a Brazil and Acre State; b Acre State; grey highlights the municipality of Mâncio Lima while red denotes the location of the study site within Mâncio Lima; c Google Earth satellite map of the study area showing the location and types of water bodies where mosquito larvae were sampled

Mâncio Lima is among the ten most malaria-affected municipalities in the Brazilian Amazon [15]. The number of notified malaria cases has increased considerably since 2004, coinciding with the expansion of fish farming activities in the area [16].
Investment in fish farming is an integral part of the Brazilian Federal Government's poverty alleviation programme, which focuses on enhancing local economies [17, 18]. In 2006, a malaria epidemic occurred in Mâncio Lima, with 16,125 cases [annual parasite index (API) = 1217.8] [19]. After mosquito control measures were enforced, the number of malaria cases declined to 4398 in 2008 (API = 305.9), but a recent surge occurred in 2014, with 6016 reported cases (API = 380.8). Mâncio Lima is a major commercial access point for riverine and rural communities in the same municipality and in the neighbouring municipality of Rodrigues Alves. People frequently commute between rural areas and Mâncio Lima town to exchange commodities, procure medical attention, and access social welfare [12]. The high prevalence of malaria in these rural and riverine regions, and the frequent human traffic to and from Mâncio Lima, contribute to the town's vulnerability to malaria epidemics.
Mapping of water bodies
A satellite image (OpenStreetMap, October 12, 2011) served as a base to draw a street map and to locate fishponds. A field inspection was conducted in November 2011 to verify pond locations and use, and to identify additional fishponds with the help of residents. The study inclusion criterion was a distance of less than 2 km from the closest neighbourhood centre. When a property had more than three fishponds, up to three ponds were randomly chosen for inclusion in the survey. Streams and wetlands were the predominant natural water bodies in the study area. A convenience sample of natural water bodies was identified, prioritizing those with relative proximity to streets and homes. All water bodies were geo-referenced with a global positioning system (GPS). The collection scheme is described in Fig. 1. All 55 fishponds and 21 natural water bodies were included in each of the five complete surveys. These were carried out approximately every 6 months between February 2012 and March 2014, once during each rainy and dry season. Nineteen of the 55 fishponds were also sampled in between these complete surveys (henceforth referred to as 'fast surveys'). Collections for this sub-set occurred roughly every 3 months, encompassing the middle of the rainy season (February 2012 and 2013, March 2014), the middle of the dry season (July 2012 and 2013), and the early and late rainy season, which are intermediate or transitional seasons (May 2012 and 2013, November 2012 and 2013) (Fig. 2). In this way, a dataset with greater temporal resolution was obtained for 19 (34.5 %) of the fishponds.
Sampling scheme of each of the nine mosquito larvae surveys of natural and artificial water bodies in Mâncio Lima between February 2012 and March 2014
Entomologic sampling
A standard 0.5 L dipper (Bioquip Co, Gardena, CA, USA) was used for sampling of immature Anopheles spp. as previously described [20]. The number of samples (dips) per water body varied from 20 to 155, depending on the length of the (accessible) border and the size of the water body [21]. The number of Anopheles spp. larvae collected per dip per water body was recorded (see the sketch at the end of this paragraph). Collected larvae were placed into Whirl-Pak® sampling bags (Nasco Corp, Fort Atkinson, WI, USA; capacity 118 mL, 8 × 18 cm) half-filled with water from the collection site. The bags were sealed to retain air, placed in a container for transport, and transferred to the Núcleo Operacional Sentinela de Mosquitos Vetores (NOSMOVE/FIOCRUZ) in Rio de Janeiro.
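The per-dip larval index used throughout the analyses can be computed directly from these recorded counts. A minimal sketch in R (not the authors' code; the data frame larvae_df and its column names are assumed for illustration):

## Hypothetical data frame: one row per water body and survey, with columns
## `type` (fishpond or natural), `larvae` (total larvae collected) and
## `dips` (number of dips taken at that visit).
larvae_df$per_dip <- larvae_df$larvae / larvae_df$dips

## Mean and standard deviation of larvae per dip by water-body type,
## the summary reported in the Results.
aggregate(per_dip ~ type, data = larvae_df,
          FUN = function(x) c(mean = mean(x), sd = sd(x)))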
Third and fourth instar larvae were preserved in 70 % alcohol for identification. First and second instars were reared in plastic basins until reaching the third or fourth instar. All immature larvae were identified to species according to Consoli and Lourenço-de-Oliveira [1]. The following descriptive ecological characteristics of each sampling area were recorded during each survey: type and size of water body, whether the water was flowing or standing, type of usage, proportion of border with vegetation, and the presence or absence of macrophytes. The proportion of the border covered with vegetation was visually estimated. The distance (in m) between water bodies and the nearest human dwelling was measured with a flexible ruler.
Limnological data
A multi-parameter water quality sonde (YSI Inc. 6600V2, Yellow Springs, OH, USA) was used to measure pH, temperature (°C), ammonium (mg/L), chlorophyll (mg/L), nitrate (mg/L), electrical conductivity (µS/cm), dissolved oxygen (mg/L), and turbidity (NTU, nephelometric turbidity unit). Every water body was sampled twice (at its opposite ends) and the mean value of the two samples was recorded. Limnological data were collected only during the first three complete surveys (February 2012, July 2012 and February 2013). These measurements were taken concomitantly with the mosquito larval sampling in each water body.
Standardized questionnaire for owners of fishponds
The 55 surveyed fishponds belonged to 31 different owners, 30 of whom were interviewed during the final complete survey, conducted in February 2014. The questionnaire ascertained the following information: whether fish farming was commercial or not, the type of fish food, whether the reared fish species was always the same, and whether and how often each pond was emptied and refilled. The data were analysed in five steps. An overview is provided in Table 1. Generalized linear mixed models (GLMM) with a negative binomial distribution were used in all analyses because the variance of the number of larvae was greater than the mean, and because fitted Poisson models were highly over-dispersed. Possible non-linear relationships between explanatory variables and the abundance of Anopheles spp. larvae were also considered by fitting generalized additive mixed models (GAMM) with R's 'mgcv' library [22] (results not shown because all covariate effects were linear or approximately linear). All models included the water body ID code as a random effect to control for multiple samples of the same body of water over time. R version 3.1.2 [23] and the 'MASS' [24] and 'lme4' [25] libraries were used. All GLMM models had the following basic structure:

$$\begin{aligned} \mathit{Anopheles}_{ti} &\sim \text{Negative binomial}(\mu_{ti}, k) \\ \log(\mu_{ti}) &= \log(N_{ti}) + \beta X_{ti} + a_i \\ a_i &\sim N(0, \sigma_a^2) \end{aligned} \qquad (1)$$

where log(N_{ti}), the natural logarithm of the number of dips at time t in water body i, is the model offset that corrects for differences in the number of dips; a_i is the random intercept accounting for the repeated-measures design; X represents the explanatory variables (multiple covariates were included in some cases); and β are the fixed effects of the variables X.
Table 1 Overview of data analyses after mosquito larvae sampling in Mâncio Lima, 2012–2014
The first step investigated whether fishponds and natural breeding sites were similarly infested by Anopheles larvae; a minimal sketch of this basic model is given below.
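The following R sketch shows how a model of this form can be fitted with the 'lme4' library cited above. It is an illustration, not the authors' code: the hypothetical data frame larvae_df (with the counts, a water-body identifier id, and the step-1 covariate type) is assumed.

library(lme4)

## Negative binomial GLMM of equation (1): offset for sampling effort,
## random intercept per water body, `type` as the step-1 covariate.
m1 <- glmer.nb(larvae ~ type + offset(log(dips)) + (1 | id),
               data = larvae_df)
summary(m1)

## Rate ratios such as those reported in Table 3 are exponentiated
## fixed effects, with Wald confidence intervals.
exp(fixef(m1))
exp(confint(m1, method = "Wald", parm = "beta_"))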
In this first step, the dummy variable 'type of water body' (type = 0 or 1 for fishponds or natural breeding sites, respectively) was the primary effect. As larval infestation differed significantly between the two types of breeding sites, and as some explanatory variables were only available for fishponds, the second step (investigation of the effect of limnological and ecological variables on larval infestation) was carried out separately for each of the two types of water bodies. Separate univariate GLMM models were fitted for each variable. The AICs (Akaike information criterion) of models with significant effects (p < 0.1) were compared and a full model was fitted. The covariates of the full model were included in order of increasing AIC. Variables that were not statistically significant in the full model were removed in a stepwise fashion until the final model included only those that remained statistically significant (p < 0.05). In the case of collinearity, only one variable (that with the lowest AIC in the univariate models) was included in the multiple model. The third step was an investigation of the influence of time on larval infestation. The main effects were either collection (collection = 1, …, 5 for natural breeding sites and collection = 1, …, 9 for fishponds) or month (month = 2, 7 for natural breeding sites and month = 2, 5, 7, 11 for fishponds). In the fourth analysis, the effect of each pond's owner (owner = 1, …, 30) on larval infestation in fishponds was evaluated. The owner ID code was included as a random effect in Eq. (1), instead of the water body ID, and the resultant model was compared to the model without other covariates. The last step of the data analysis was an evaluation of the effect of variables from the questionnaire. The modelling strategy was the same as described in step 2. As only the variable commercial (c = 0 for commercial and c = 1 for non-commercial fishponds) had an effect on larval density, it was also evaluated whether the proportion of border vegetation, which was highly significant in step 2, had different effects on commercial and non-commercial fishponds:

$$\begin{aligned} \mathit{Anopheles}_{ti} &\sim \text{Negative binomial}(\mu_{ti}, k) \\ \log(\mu_{ti}) &= \log(N_{ti}) + f_{Ci}(v) + a_i \\ a_i &\sim N(0, \sigma_a^2) \end{aligned} \qquad (2)$$

where f_{Ci}(v) is the smooth non-linear effect of the border vegetation v of each water body i (i = 1, …, 56), estimated separately for non-commercial and commercial fishponds. The other parameters are as previously described.
The paper reports data from entomological surveys carried out using standard methods in private and public spaces. Access to private spaces was requested from each landowner, and collections were carried out only after oral consent had been given. Access to public spaces did not require permission, but before sampling took place the overall study was presented to and approved by the local Health and Environmental Secretariats. The only request was that the results be presented to the population, which occurred in the form of talks in the town's conference room and in schools. The paper also reports data from interviews. Signed consent preceded the interviews. All measures were taken to guarantee confidentiality. The study was approved by the ethical committee at the Oswaldo Cruz Foundation in Rio de Janeiro (CEP Fiocruz n. 402.039).
Participants' names were not recorded; instead, identification numbers of the fishponds were used. All information was treated confidentially and was only available to those directly concerned with this research.
Physical characteristics of water bodies
Of the 93 monitored water bodies, 55 (59.1 %) were fishponds, 33 (35.5 %) were flowing water (creeks, streams, and a river), and five (5.4 %) were small areas of standing water (ponds and puddles). Fishponds varied in size, with perimeters ranging from 24 to 900 m (average perimeter = 170.7 m). Most flowing waters (29, 89 %) were creeks, ranging from 0.5 to 50 m in width. The average distance between the potential breeding habitats and the closest human domicile was 37 m. Sixty-nine per cent of fishponds (n = 38) were used for commercial purposes.
Anopheline abundance
A total of 21,156 Anopheles spp. larvae were collected (Table 2). Fifty-three per cent of the larvae were not identified because of larval mortality prior to the third instar. The following eight anopheline species were identified among the 9944 (47 %) surviving and identifiable larvae: An. albitarsis s.l. (75.8 %), An. darlingi (16.1 %), An. deaneorum (6.1 %), An. braziliensis (<1 %), Anopheles argyritarsis (<1 %), An. triannulatus (<1 %), Anopheles rondoni (<1 %), and An. galvaoi (<1 %) (Table 2). Of all water bodies investigated during the five complete surveys (55 fishponds and 21 natural water bodies), 46 fishponds and eight natural water bodies were positive for An. darlingi. Twelve fishponds and seven natural water bodies were positive for An. darlingi once; five fishponds and one water body twice; 11 fishponds were positive at three different times; 12 fishponds four times; and six fishponds all five times they were sampled.
Table 2 Descriptive statistics of anopheline larvae from Mâncio Lima, Acre, Brazil collected between February 2012 and March 2014
The mean numbers of anopheline larvae per dip (± standard deviation) in fishponds and natural water bodies were 1.02 ± 1.77 and 0.20 ± 0.40, respectively. Type of breeding site was a significant covariate in the GLMM. The model indicated that fishponds were 4.42 (95 % CI = 2.84–6.88) times more infested with anopheline larvae than natural breeding sites after correcting for repeated sampling (Table 3).
Table 3 Results from statistically significant models (p < 0.05) to assess larval density in sampled natural and artificial water bodies in Mâncio Lima, Acre, Brazil between February 2012 and March 2014
Influence of limnological and ecological factors on larval infestation
Larval abundance in fishponds was unaffected by water temperature, turbidity, dissolved oxygen, nitrate, ammonia, electrical conductivity, chlorophyll, pH, presence of macrophytes, and distance to the nearest house (p > 0.05). The proportion of border vegetation was the only significant covariate (p < 0.05) (Table 3). Every 10 % increase in vegetation (the proportion of the breeding-site border with vegetation) was associated with a 10 % increase in larval abundance. The percentage of border vegetation was not estimated for natural water bodies. Among the remaining variables, only electrical conductivity and distance to the nearest house were significant negative predictors in univariate models. The final model included both of these variables (Table 3).
Temporal patterns of larval abundance
The temporal abundance of anopheline larvae varied significantly between collections in fishponds (Fig. 3a; Table 3) but not in natural water bodies (Fig. 3b).
Variation in larval abundance in both types of breeding sites was not associated with the season of collection (represented by the variable month).
Temporal pattern of Anopheles spp. larval density (mean number of larvae per dip). a In fishponds sampled during each of the five complete and four fast surveys; b in natural water bodies surveyed only during the five complete surveys
Effect of fishpond owner on larval infestation
When two models were compared, one with the pond owner ID and the other with the water body ID as the random effect, the model with owner ID was slightly better in terms of AIC (2230 versus 2233). In other words, the extent of infestation differed between owners, and ponds from the same owner exhibited similar degrees of infestation (Fig. 4).
Anopheles spp. larval density (log-scale) in each fishpond and each complete survey by owner (panel numbers indicate individual owners). Darker dots indicate multiple overlapping fishponds
Of all variables investigated in the questionnaire (commercial pond use, type of fish species bred in the pond, emptying, and type of fish food), the only variable that significantly influenced larval density was whether or not the pond was used commercially. Fishponds that contained fish intended for sale (40 of 55) were 1.7 times more infested (estimated effect size = 0.52, p = 0.037) than those with fish for family consumption. Figure 5 shows results from models with smoothing functions for border vegetation in both commercial and non-commercial fishponds (a sketch of such a model is given at the end of this passage). The model indicates that (1) larval infestation in non-commercial fishponds was not affected by the proportion of the border covered by vegetation; (2) commercial ponds were less infested with larvae than non-commercial ponds when less than 65 % of the border had vegetation; and (3) when border vegetation exceeded 65 %, commercial ponds were more infested than non-commercial ponds (Table 3).
Result of a generalized additive mixed model (GAMM) incorporating a smoothing effect for border vegetation in commercial and non-commercial fishponds
The aquatic phase of immature mosquitoes is a critical stage in the mosquito life cycle. The distribution and abundance of many disease-vector species, including malaria vectors, are directly related to the characteristics of the vector's breeding sites [10]. Both natural water bodies and fishponds were infested with anopheline larvae in this Amazonian malaria-endemic town. Fishponds, however, were on average four times more infested with anopheline larvae than natural water bodies. These results support the findings of a previous study from 2008 in Manaus, Brazil, where fishponds were also four times more infested than other available mosquito breeding sites [26]. Fishponds were also identified as breeding sites for An. darlingi in the northeastern Peruvian Amazon [27], where the presence of fishponds was a risk factor for human malaria transmission [28]. Fishponds become suitable habitats for An. darlingi, the most important malaria vector [1], as this species adapts to man-made breeding habitats, preferring deep, stable, clear water bodies in proximity to human dwellings [1, 28], despite potential larval predation by juvenile fish [27, 28]. The high larval abundance in fishponds suggests that fish predation in these habitats is not sufficient to prevent larval development.
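The smooth-by-group structure of equation (2) and Fig. 5 can be fitted with the 'mgcv' library cited in the methods. The sketch below is illustrative rather than the authors' code: it uses gam() with a random-effect smooth in place of gamm(), and the column names veg_border and commercial are assumed.

library(mgcv)

larvae_df$id <- factor(larvae_df$id)
larvae_df$commercial <- factor(larvae_df$commercial)

## Separate smooths of border vegetation for commercial and non-commercial
## ponds, negative binomial response, random intercept per pond, and the
## usual offset for sampling effort.
m2 <- gam(larvae ~ commercial + s(veg_border, by = commercial) +
            s(id, bs = "re") + offset(log(dips)),
          family = nb(), data = larvae_df, method = "REML")

plot(m2, pages = 1)  # one fitted smooth per group, as in Fig. 5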
It was hard to ascertain the specific fish community of each pond, but personal observations indicate that most ponds had a mixed community of native and introduced species, combining diverse species such as Nile tilapia (Oreochromis niloticus), pacu (Piaractus brachypomus), curimatã (Prochilodus spp.), tambaqui (Colossoma macropomum), spotted sorubim (Pseudoplatystoma corruscans), and piauaçu (Leporinus macrocephalus). Of these species, Oreochromis niloticus has been used for anopheline larval control in Africa [29]. While some studies show that the introduction of tilapia and mosquitofish (Gambusia affinis) into active fishponds has drastically reduced the density of Anopheles gambiae and other anophelines in Africa [30, 31], other studies show no such effect [32, 33]. The predatory effect of fish can be reduced by other factors, such as (1) the presence of dense border vegetation and floating plant parts (which offer hiding places for the mosquito larvae and reduce the efficiency of larvivorous fish); (2) an insufficient number of fish in the ponds; (3) high larval productivity (due to a large adult mosquito population); (4) the growth stage of the fish (only fingerlings feed preferentially on mosquito larvae); and (5) the feeding behaviour of the bred fish species (curimatã and spotted sorubim mainly feed at the bottom and only rarely at the water surface [34–37]). Another factor that might have influenced the survival of anopheline larvae in fishponds is a likely difference in the fauna of predatory aquatic insects between the two types of water bodies. An inspection of the insects found in the dips suggests a variety of taxa that could act as larval predators in fishponds: Odonata, Hemiptera, Coleoptera, and Diptera. Further studies should investigate the impact of these taxa on larval dynamics in natural water bodies and fishponds. Strong economic incentives for the development of fish farming in the State of Acre, especially in the Juruá region, where Mâncio Lima is located, have caused rapid changes in the urban landscape and have had a direct impact on the risk of malaria infection in the region. A total of 14,310 malaria cases were notified in Mâncio Lima during the study period (of which 13,387, or 94 %, were autochthonous). Previous work by our group showed that the fishpond construction effort (2003–2006) coincided both spatially and temporally with the increased number of malaria notifications [15]. These findings reinforce the association between the reproductive behaviour of malaria vectors and this economic activity [27]. Anopheline density was higher in natural water bodies that were close to households. This suggests an elevated exposure of these residents to the malaria vector in comparison with residents living further away. A direct association between malaria prevalence and An. darlingi larval abundance in fishponds at distances of less than 100 m has already been observed in Mâncio Lima [15]. It was previously shown that an increasing distance between humans and breeding sites reduces the contact between humans and anthropophilic malaria-vector mosquitoes, such as An. darlingi, and therefore the risk of malaria infection [38]. Previous studies suggest that abiotic factors (pH, temperature, nitrate, ammonia, sulfate, turbidity, electrical conductivity, and chlorophyll) as well as biotic factors (vegetation, predation, and competition) affect the development and survival of anopheline larvae [39–42].
However, the physical–chemical characteristics of breeding sites often do not explain the preference of anophelines for certain habitats [43, 44]. In the present study, pH, temperature, ammonium, chlorophyll, nitrate, dissolved oxygen, and turbidity were not associated with the occurrence of Anopheles spp. larvae in either artificial or natural breeding sites. Possible reasons for this result include low measurement variation between water bodies (temperature and pH, for example) and a lack of statistical power (small effects relative to the sample size, because limnological data were collected only during the first three complete surveys). A study from western Kenya also did not detect any significant association between the occurrence of An. gambiae larvae and several habitat variables [45]; McKeon et al. [10] showed that water temperature was not associated with the occurrence of Culex and Anopheles. The only significant predictor of the number of anopheline larvae in fishponds was the percentage of border with vegetation (a positive relationship). Anopheline larvae use border vegetation to hide from potential predators [39]. Studies in Peru, Venezuela and Colombia indicated that emergent water-body vegetation (especially Gramineae) was a risk factor for the presence and maintenance of Anopheles spp. [27, 46, 47]. It is interesting to note that the effect of border vegetation in Mâncio Lima differed between commercial and non-commercial fishponds. Anopheles spp. infestation of commercial fishponds was associated with borders that had more than 65 % vegetation cover. Considering that commercial fishponds hold more fish than those for family use, this suggests an interaction between fish predation and border vegetation. Fish can control larvae in fishponds with up to 65 % vegetation cover; beyond that, the extent of the hiding space for larvae is sufficient to counteract the effects of predation. This result points to an objective target for pond management, but further studies should investigate its effect in a controlled experimental design. Larval infestation in non-commercial fishponds was not influenced by the presence or absence of border vegetation. This difference between commercial and non-commercial fishponds might be due to differences in fish species and abundance, a hypothesis for future investigations. In the natural water bodies, on the other hand, an inverse association between anopheline larval abundance and water electrical conductivity was found, which reinforces previous results from West Africa and the Brazilian Amazon [10, 11, 48]. Conductivity increases when ions are liberated through decomposition, and it is an indirect measure of the water's pollutant concentration [49]. It is known that anopheline species are sensitive to pollution. Larval infestation in fishponds and natural water bodies did not change significantly between the seasons. For the fishponds, this is probably because they are unaffected by seasonally changing water levels [35]. Natural water bodies were most likely unaffected by season because larval density was extremely low in comparison with fishponds. This was different at several collection points in the Azul River in Boa Vista (Roraima, Brazil), where An. darlingi larvae were absent during the rainy season and present in the dry season [38]. Vittor et al. [27], on the other hand, found a moderate positive association between the presence of An. darlingi larvae and the rainy season.
Infestation levels differed between fishponds, and ponds from the same owner tended to have similar extents of infestation. Fishponds owned by the same farmer could exhibit similar degrees of infestation because of their spatial aggregation, the fish life cycle, the species of fish, the fish feeding schedule, and the distance between the pond and the edge of the forest. These variables were not included in the present study, yet other studies have described an association between each of them and subsequent anopheline breeding and malaria transmission [21, 26, 35]. The findings reinforce the importance of developing and implementing feasible good-practice programmes for both commercial and non-commercial fish farmers in order to reduce the risk of malaria in their families and communities. Good practices should include protocols for the inspection and removal of excess vegetation from pond borders as well as of floating vegetation on the pond surface, regulation of the water level [39], and appropriate use of herbicides or algicides. Furthermore, biological larvicides, such as Bacillus sphaericus (Bs) or Bacillus thuringiensis var. israelensis (Bti), could be considered as a control alternative in fishponds [26, 32]. It is also recommended to evaluate whether a combination of fish species could enhance the predatory effect on anopheline larvae, providing a biological control alternative for this problem. Importantly, it must be acknowledged that any of these tasks can be daunting for those working alone or with limited resources. There are several barriers to good fishpond management practices, including but not limited to a general lack of knowledge, proper equipment, financial resources, or time. Identifying and elucidating the influence of each of these barriers is important for the development of effective interventions. In particular, an aquaculture malaria control programme should consider a collaborative approach to help low-resource farmers, for example by stimulating their participation in, and the organization of, cooperatives. The present study suggests that fishponds serve as important and productive breeding sites for malaria vectors. They therefore contribute to the ongoing transmission of malaria in the Brazilian Amazon. Adequate fishpond maintenance must be promoted with the aim of rendering fishponds less desirable for anopheline larvae, most importantly An. darlingi, in malaria-endemic regions.
Consoli R, Lourenço-de-Oliveira R. Principais mosquitos de importância sanitária no Brasil. Rio de Janeiro: Fiocruz; 1994. Deane LM. Malaria vectors in Brazil. Mem Inst Oswaldo Cruz. 1986;81:5–14. Tadei WP, Thatcher BD, Santos JM, Scarpassa VM, Rodrigues IB, Rafael MS. Ecologic observations on anopheline vectors of malaria in the Brazilian Amazon. Am J Trop Med Hyg. 1998;59:325–35. Arruda ME, Carvalho MB, Nussenzweig RS, Maracic M, Ferreira W, Cochrane AH. Potential vectors of malaria and their different susceptibility to Plasmodium falciparum and Plasmodium vivax in northern Brazil identified by immunoassay. Am J Trop Med Hyg. 1986;35:873–91. Póvoa MM, Souza RTL, Lacerda RNL, Rosa ES, Galiza D, Souza JR, et al. The importance of Anopheles albitarsis and An. darlingi in human malaria transmission in Boa Vista, state of Roraima, Brazil. Mem Inst Oswaldo Cruz. 2006;101:163–8. Deane LM. A cronologia da descoberta dos transmissores da malária na Amazônia brasileira. Mem Inst Oswaldo Cruz. 1989;84:149–56. Grimaldi D, Engel MS. Evolution of the insects. Cambridge: Cambridge University Press; 2005.
Monnerat R, Magalhães I, Daniel S, Ramos F, Sujii E, Praça L, et al. Inventory of breeding-sites and species of Anopheline mosquitoes in the Juruá valley. Int J Mosq Res. 2014;1:1–3. Oyewole IO, Momoh OO, Anyasor GN, Ogunnowo AA, Ibidapo CA, Oduola OA, et al. Physico-chemical characteristics of Anopheles breeding sites: impact on fecundity and progeny development. Afr J Environ Sci Technol. 2009;3:447–52. McKeon SN, Schlichting CD, Póvoa MM, Conn JE. Ecological suitability and spatial distribution of five Anopheles species in Amazonian Brazil. Am J Trop Med Hyg. 2013;88:1079–86. Soleimani-Ahmadi M, Vatandoost H, Zare M. Characterization of larval habitats for anopheline mosquitoes in a malarious area under elimination program in the southeast of Iran. Asian Pac J Trop Biomed. 2014;4:S73–80. Prefeitura Municipal de Mâncio Lima. http://www.prefeituramanciolima.com.br/. Accessed 1 Jan 2015. Schmidt JCJ. O clima da Amazônia. Rev Bras Geogr. 1942;4:465–500. Salati E, Salati E, Companhol T, Nova NV. Caracterização do clima atual e definição das alterações climáticas para o território brasileiro: tendências de variações climáticas para o Brasil no século XX e balanços hídricos para cenários climáticos para o século XXI. In: Marengo JA, editor. Mudanças Climáticas Globais e Efeitos sobre a Biodiversidade. Brasília: Ministério do Meio Ambiente; 2007. Pina-Costa A, Brasil P, Di Santi SM, de Araujo MP, Suarez-Mutis MC, Santelli AC, et al. Malaria in Brazil: what happens outside the Amazonian endemic region. Mem Inst Oswaldo Cruz. 2014;109:618–33. Reis IC, Honório NA, Barros FSM, Barcellos C, Kitron U, Camara DCP, et al. Epidemic and endemic malaria transmission related to fish farming ponds in the Amazon frontier. PLoS One. 2015;10:e0137521. Costa KM, de Almeida WA, Magalhaes IB, Montoya R, Moura MS, de Lacerda MV. Malaria in Cruzeiro do Sul (western Brazilian Amazon): analysis of the historical series from 1998 to 2008. Rev Panam Salud Publica. 2010;28:353–60. Silva RSU, Carvalho FT, Santos AB, Ribeiro ES, Cordeiro KM, Melo GIB, et al. Malária no município de Cruzeiro do Sul, estado do Acre, Brasil: aspectos epidemiológicos, clínicos e laboratoriais. Rev Pan-Amaz Saude. 2012;3:45–54. Ministry of Brazilian Health: SIVEP-Malaria. http://portalweb04.saude.gov.br/. Accessed 1 Jan 2015. Service M. Mosquito ecology: field sampling methods. 2nd ed. London: Chapman and Hall; 1993. Barros FSM, Honório NA, Arruda ME. Mosquito anthropophily: implications on malaria transmission in the northern Brazilian Amazon. Neotrop Entomol. 2010;39:1039–43. Wood SN. Generalized additive models: an introduction with R. Boca Raton: Chapman & Hall/CRC (Texts in Statistical Science); 2006. R Development Core Team. R: a language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2014. Venables WN, Ripley BD. Modern applied statistics with S. 4th ed. New York: Springer; 2002. Bates D, Maechler M, Bolker B, Walker S. lme4: linear mixed-effects models using Eigen and S4. J Stat Softw. 2014;1–51. Rodrigues IB, Tadei WP, Santos RC, Santos SO, Bagio JB. Malaria control: efficacy of Bacillus sphaericus 2362 formulates against Anopheles species in artificial pisciculture tanks and breeding sites in pottery areas. Rev de Pat Trop. 2008;37:161–77. Vittor AY, Pan W, Gilman RH, Tielsch J, Glass G, Shields T, et al. Linking deforestation to malaria in the Amazon: characterization of the breeding habitat of the principal malaria vector, Anopheles darlingi.
Am J Trop Med Hyg. 2009;81:5–12. Maheu-Giroux M, Casapia M, Soto-Calle VE, Ford LB, Buckeridge DL, Coomes OT, et al. Risk of malaria transmission from fishponds in the Peruvian Amazon. Acta Trop. 2010;115:112–8. Howard AFV, Zhou G, Omlin FX. Malaria mosquito control using edible fish in western Kenya: preliminary findings of a controlled study. BMC Public Health. 2007;7:199. Walshe DP, Garner P, Abdel-Hameed Adeel AA, Pyke GH, Burkot T. Larvivorous fish for preventing malaria transmission. Cochrane Database Syst Rev. 2013;12:CD008090. Prasad H, Prasad RN, Haq S. Control of mosquito breeding through Gambusia affinis in rice fields. Indian J Malariol. 1993;30:57–65. Imbahale SS, Mweresa CK, Takken W, Mukabana WR. Development of environmental tools for anopheline larval control. Parasit Vectors. 2011;4:130. Blaustein L. Larvivorous fishes fail to control mosquitoes in experimental rice plots. Hydrobiologia. 1992;232:219–32. Linden AL, Cech JR. Prey selection by mosquitofish (Gambusia affinis) in California rice fields: effect of vegetation and prey species. J Am Mosq Control Assoc. 1990;6:115–20. Tadei WP, Cordeiro RS, Lima GR, Oliveira AEM, Pinto RC, Santos JMM, et al. Controle da malária em Manaus: tanques de piscicultura, proliferação de anofelinos e monitoramento. In: 57ª Reunião Anual da SBPC. Fortaleza, Ceará; 2005. Centro de Produções Técnicas e Editora Ltda. Peixes de água doce do Brasil. http://www.cpt.com.br/cursos-criacaodepeixes/artigos/. Accessed 1 Aug 2015. Mbuya NP, Kateyo E. The influence of juvenile fish (Oreochromis niloticus) on population density of pond breeding mosquitoes in the degraded wetlands of Yala Swamp, western Kenya. Glob J Res Rev. 2014;1:59–71. Barros FSM, Arruda ME, Gurgel HC, Honorio NA. Spatial clustering and longitudinal variation of Anopheles darlingi (Diptera: Culicidae) larvae in a river of the Amazon: the importance of the forest fringe and of obstructions to flow in frontier malaria. Bull Entomol Res. 2011;101:643–58. Rejmánková E, Grieco J, Achee N, Roberts DR. Ecology of larval habitats. In: Manguin S, editor. Anopheles mosquitoes—new insights into malaria vectors. InTech; 2013. pp. 397–446. Washburn JO. Regulatory factors affecting larval mosquito populations in container and pool habitats: implications for biological control. J Am Mosq Control Assoc. 1995;11:279–83. Mutero CM, Ng'ang'a PN, Wekoyela P, Githure J, Konradsen F. Ammonium sulphate fertilizer increases larval populations of Anopheles arabiensis and culicine mosquitoes in rice fields. Acta Trop. 2004;89:187–92. Stresman GH. Beyond temperature and precipitation: ecological risk factors that modify malaria transmission. Acta Trop. 2010;116:167–72. Van der Hoek W, Amerasinghe FP, Konradsen F, Amerasinghe PH. Characteristics of malaria vector breeding habitats in Sri Lanka: relevance for environmental management. Southeast Asian J Trop Med Public Health. 1998;29:168–72. Piyaratne MK, Amerasinghe FP, Amerasinghe PH, Konradsen F. Physico-chemical characteristics of Anopheles culicifacies and Anopheles varuna breeding water in a dry stream in Sri Lanka. J Vector Borne Dis. 2005;42:61–7. Minakawa N, Mutero CM, Githure JI, Beier JC, Yan G. Spatial distribution and habitat characterization of anopheline mosquito larvae in western Kenya. Am J Trop Med Hyg. 1999;61:1010–6. Berti J, Zimmerman R, Amarista J. Spatial and temporal distribution of anopheline larvae in two malarious areas in Sucre State, Venezuela. Mem Inst Oswaldo Cruz. 1993;88:353–62. Brochero H, Pareja PX, Ortiz G, Olano VA.
Sitios de cría y actividad de picadura de especies de Anopheles en el municipio de Cimitarra, Santander, Colombia. Biomédica. 2006;26:269–77. Fillinger U, Sombroek H, Majambere S, van Loon E, Takken W, Lindsay SW. Identifying the most productive breeding sites for malaria mosquitoes in The Gambia. Malar J. 2009;8:1–14. Companhia Ambiental do Estado de São Paulo (CETESB). Relatório de qualidade das águas interiores do Estado de São Paulo 2002. São Paulo: CETESB; 2003.
ICR, CTC, ECK, and NAH conceived and designed the experiments. ICR, CTC, CMD, ECK, MMM, FGSO, JJCC, AFM, CAAS, FCMR, GRM, and NAH performed the experiments. ICR, CTC, CMD, ECK, GRM, and NAH analysed the data. ICR, CTC, CMD, ECK, JJCC, GRM, and NAH contributed to writing the paper. All authors read and approved the final version of the manuscript.
The authors would like to thank Teresa Fernandes Silva do Nascimento, Jerônimo Augusto Fonseca Alencar, Izanelda Batista Magalhães, Thayna Maria Holanda de Souza, Neilson Melo, José Raimundo Silva de Moura, Andrealis Santos de Souza, Daniel Cardoso Portela Camara, Renan Silva da Costa, Glaucio Pereira Rocha, Tatiana Docile, Célio da Silva Pinel, and the residents of Mâncio Lima for logistical and operational support during fieldwork. We are grateful for the financial support of the Brazilian Council for Scientific and Technological Development (CNPq) (Grants 484027/2012-3, 479977/2008-9, 471295/2011-6, 552746/2011-8, and a post-doctoral scholarship for CMD); the Carlos Chagas Filho Foundation for Support of Research in the State of Rio de Janeiro (FAPERJ) (Grant E-26/111.500/2011); the Agency for Support and Evaluation of Graduate Education (CAPES) (Grant 3341-13-5 from the Science without Borders programme to ICR); Universidade Federal do Acre; FUNTAC (FDCT 03/2011 Grant 04/2012); FAPAC (PPSUS Grant n. 14/2013); and the Tropical Medicine Graduation Programme of the Oswaldo Cruz Institute (Fiocruz, RJ, Brazil).
Author affiliations: 1. Programa de Computação Científica-PROCC, Fundação Oswaldo Cruz, Rio de Janeiro, RJ, Brazil (Izabel Cristina dos Reis, Cláudia Torres Codeço, Carolin Marlen Degener); 2. Laboratório de Transmissores de Hematozoários, Instituto Oswaldo Cruz (Fiocruz), Rio de Janeiro, RJ, Brazil (Izabel Cristina dos Reis, Mauro Menezes Muniz, José Joaquin Carvajal Cortês, Nildimar Alves Honório); 3. Núcleo Operacional Sentinela de Mosquitos Vetores, NOSMOVE (Parceria DIRAC-IOC-VPAAPS/FIOCRUZ), Rio de Janeiro, RJ, Brazil (Izabel Cristina dos Reis, José Joaquin Carvajal Cortês, Fernanda Christina Morone Rodrigues, Nildimar Alves Honório); 4. Centro Multidisciplinar, Universidade Federal do Acre, Cruzeiro do Sul, Acre, Brazil (Erlei Cassiano Keppeler, Francisco Geovane Silva de Oliveira); 5. Laboratório de Doenças Parasitárias, Instituto Oswaldo Cruz (Fiocruz), Rio de Janeiro, RJ, Brazil (José Joaquin Carvajal Cortês); 6. Secretaria Municipal de Saúde de Cruzeiro do Sul, Cruzeiro do Sul, Acre, Brazil (Antônio de Freitas Monteiro, Carlos Antônio Albano de Souza); 7. Secretaria de Estado de Agropecuária, Cruzeiro do Sul, Acre, Brazil (Genilson Rodrigues Maia).
Correspondence to Izabel Cristina dos Reis.
dos Reis, I.C., Codeço, C.T., Degener, C.M. et al.
Contribution of fish farming ponds to the production of immature Anopheles spp. in a malaria-endemic Amazonian town. Malar J 14, 452 (2015). doi:10.1186/s12936-015-0947-1 Received: 06 May 2015
Keywords: Anopheles spp.; Anopheles darlingi; natural mosquito breeding habitat
Mechanisms of Action: Physiological Effects
Role of Homoserine Transacetylase as a New Target for Antifungal Agents
Ishac Nazi, Adam Scott, Anita Sham, Laura Rossi, Peter R. Williamson, James W. Kronstad, Gerard D. Wright
Author affiliations: Antimicrobial Research Centre, Department of Biochemistry and Biomedical Sciences, McMaster University, Ontario, Canada L8N 3Z5 (Ishac Nazi, Gerard D. Wright); The Michael Smith Laboratories, The University of British Columbia, 2185 East Mall, Vancouver, British Columbia, Canada V6T 1Z4 (Anita Sham, James W. Kronstad); Section of Infectious Diseases, Department of Medicine, University of Illinois at Chicago College of Medicine, Chicago, Illinois (Peter R. Williamson). For correspondence: wrightge@mcmaster.ca DOI: 10.1128/AAC.01400-06
Microbial amino acid biosynthesis is a proven yet underexploited target of antibiotics. The biosynthesis of methionine in particular has been shown to be susceptible to small-molecule inhibition in fungi. The first committed step in Met biosynthesis is the acylation of homoserine (Hse) by the enzyme homoserine transacetylase (HTA). We have identified the MET2 gene of Cryptococcus neoformans H99 that encodes HTA (CnHTA) by complementation of an Escherichia coli metA mutant that lacks the gene encoding homoserine transsuccinylase (HTS). We cloned, expressed, and purified CnHTA and determined its steady-state kinetic parameters for the acetylation of L-Hse by acetyl coenzyme A. We next constructed a MET2 mutant in C. neoformans H99 and tested its growth behavior in Met-deficient media, confirming the expected Met auxotrophy. Furthermore, we used this mutant in a mouse inhalation model of infection and determined that MET2 is required for virulence. This makes fungal HTA a viable target for new antibiotic discovery. We screened a 1,000-compound library of small molecules for HTA inhibitors and report the identification of the first inhibitor of fungal HTA. This work validates HTA as an attractive drug-susceptible target for new antifungal agent design.
Amino acid biosynthesis is an attractive antimicrobial target because many of these biosynthetic pathways are absent in mammals. The pathway that produces Met, Thr, and Ile from the precursor Asp in bacteria and fungi is one such target (21). Met is an important amino acid due to its involvement in several microbial processes. First, it is essential for protein synthesis and is the N-terminal amino acid in most proteins. Furthermore, Met is important in the synthesis of S-adenosylmethionine, a major methylating agent, and is essential for DNA synthesis and for the synthesis of sulfur-containing compounds (Fig. 1).
Met biosynthetic branch of the Asp pathway.
This biosynthetic pathway has been studied as a potential antimicrobial target, and several component enzymes have been validated as novel targets (9, 18, 22, 25). In a search for antifungal agents, Yamaguchi et al. isolated a compound with antifungal activity from a Streptomyces sp. This compound, (S)-2-amino-5-hydroxy-4-oxopentanoic acid (RI-331), exhibited antifungal activity against Candida albicans but had no toxic effect when administered to mice (22). Homoserine (Hse) dehydrogenase, the third enzyme in the Asp pathway, was subsequently verified as the target of RI-331 (23, 24). Further chemical validation of the importance of the Asp pathway as a potential antimicrobial target is evident in the activities of azoxybacilin (2) and rhizocticin (14). Moreover, Ejim et al.
have demonstrated that cystathionine β-lyase is important for virulence of Salmonella enterica serovar Typhimurium (9). Gene disruption studies of other targets in Met biosynthesis, such as MET3 and MET6, have demonstrated the importance of these genes in the virulence of the pathogenic fungus Cryptococcus neoformans (18, 25). Based on this precedent, we reasoned that homoserine transacetylase (HTA), which catalyzes the first committed step in Met biosynthesis, would also be a good target for new antimicrobial agents. We have identified this gene in the human pathogen C. neoformans and characterized its gene product. We report its importance in virulence and a small-molecule screen that has identified the first inhibitor of a fungal HTA.
Isolation and cloning of MET2 cDNA from C. neoformans H99. A C. neoformans H99 cDNA library constructed with the pBluescript vector (Stratagene) was used to identify the MET2 gene. The cDNA library was transformed into an Escherichia coli metA mutant strain with a nonfunctional homoserine transsuccinylase (HTS) enzyme; this strain was a generous gift from Barry L. Wanner (Department of Biological Sciences, Purdue University, West Lafayette, IN) and Hirotada Mori (Institute of Science and Technology, Japan). Transformants were selected on M9 minimal medium lacking Met and induced with isopropyl β-d-thiogalactopyranoside (IPTG). The DNA insert in the plasmid isolated from a complementing transformant was completely sequenced and verified to be the MET2 gene encoding C. neoformans HTA (CnHTA). This cDNA was amplified from the plasmid using the oligonucleotides (ML8890 and ML8891) in Table 1. The amplified gene was inserted into the TOPO 4-Blunt vector (Invitrogen) according to the manufacturer's instructions, generating a plasmid with the desired MET2 gene flanked by NheI and HindIII restriction enzyme sites. The MET2 gene was then inserted into the NheI and HindIII restriction enzyme sites of the pET28 vector (Novagen) using standard techniques. The resulting plasmid, pET28+ MET2, was used to transform E. coli BL21(DE3) cells, allowing the expression of CnHTA with an N-terminal hexa-histidine tag.
Table 1 DNA oligonucleotides for PCR gene amplification
Expression and purification of HTA from C. neoformans H99. The His-tagged enzyme was expressed in E. coli BL21(DE3)/pET28+ MET2 grown at 37°C in 1 liter of Luria-Bertani (LB) broth supplemented with 50 μg/ml kanamycin to an optical density at 600 nm of 0.75. IPTG was then added to a final concentration of 1 mM, and the culture was incubated for 4 h at 30°C in an orbital shaker. The cells were harvested by centrifugation at 8,000 × g for 10 min, resuspended in a final volume of 25 ml of lysis buffer (50 mM HEPES, pH 8.0, 500 mM NaCl, 20 mM imidazole, and a complete protease inhibitor cocktail tablet [Roche]), and disrupted by three passes through a French pressure cell at 10,000 lb/in². Cell debris was removed by centrifugation at 12,000 × g for 10 min, and the supernatant was applied to a 1-ml Ni-nitrilotriacetic acid agarose column (QIAGEN) and washed with 20 ml of buffer A (50 mM HEPES, pH 8.0, 500 mM NaCl, 20 mM imidazole). A linear gradient of 20 to 250 mM imidazole in 50 mM HEPES, pH 8.0, plus 500 mM NaCl was applied over a period of 25 min, and CnHTA eluted between 30 and 50 mM imidazole. Fractions containing the recombinant protein were identified by sodium dodecyl sulfate-polyacrylamide gel electrophoresis.
CnHTA enzyme activity. L-Hse acetylation reaction rates were determined by monitoring the production of free CoA through the change in absorbance at 324 nm via in situ titration with 4,4′-dithiodipyridine (DTDP; ε = 19,800 M⁻¹ cm⁻¹). Steady-state kinetic parameters were determined from assays performed in 50 mM HEPES (pH 8.0) with 2 mM DTDP containing various concentrations of up to 10 mM L-Hse at a fixed concentration of 1.0 mM acetyl coenzyme A (acetyl-CoA), in a final volume of 200 μl, for L-Hse kinetics. The same experiment was performed with various concentrations of up to 1.0 mM acetyl-CoA at a fixed concentration of 10 mM L-Hse for acetyl-CoA kinetics. Progress curves were monitored in a Molecular Devices SpectraMAX Plus spectrophotometer using a 96-well flat-bottom polystyrene microtiter plate (VWR). Initial rates were fit to equation 1, describing Michaelis-Menten kinetics, using GraFit 4.0 software (15) (an illustrative fit in R is sketched below):

$$v = k_{\mathrm{cat}} E_t [S] / (K_m + [S]) \qquad (1)$$

Disruption of the C. neoformans MET2 gene. C. neoformans H99 genomic DNA (gDNA) was a generous gift from J. P. Xu (McMaster University, Hamilton, Ontario, Canada). This gDNA was used as a template for PCR to amplify an approximately 1.3-kb fragment of the MET2 gene using the oligonucleotide primers ML9190 and ML9191 described in Table 1. The amplified gene was cloned into the HindIII and PstI restriction enzyme sites of the pBluescript vector (Stratagene) to generate pBluemet2. The neomycin resistance cassette was a generous gift from J. R. Perfect (Duke University, Durham, NC) (11) and was amplified using the oligonucleotides ML8803 and ML8804 outlined in Table 1. The amplicon was cloned into the BglII restriction enzyme site of the pBluemet2 vector, which lies approximately in the middle of the MET2 gene fragment, thus generating the pBluemet2::neoR disruption construct. This construct places 0.6 and 0.7 kb of DNA homologous to the MET2 gene on either side of the neomycin cassette, which is sufficient for gene disruption in H99 (17). pBluemet2::neoR was used as a template for PCR amplification with the oligonucleotides used to amplify the MET2-gDNA fragment in Table 1 (ML9190 and ML9191). This linear amplicon was transformed into C. neoformans H99 by biolistic transformation as described by Toffaletti et al. (20). Transformants were incubated overnight at 30°C on yeast extract-peptone-dextrose (YEPD) plates supplemented with 1 M sorbitol. The cells were then transferred to YEPD plates supplemented with the antibiotic G418 (200 μg/ml). G418-resistant transformants were further screened for Met auxotrophy on synthetic medium lacking Met. G418-resistant colonies incapable of growing without Met were further screened by PCR and Southern hybridization analysis to confirm the presence of the expected MET2 gene disruption. The strain carrying the gene disruption was termed the met2::NeoR strain. The 1.3-kb MET2 genomic fragment from the PCR amplification described above was delivered to the met2::NeoR strain by biolistic transformation (20). Transformants were selected on synthetic medium lacking Met. Colonies that grew on this medium and were G418 sensitive were further screened by PCR and Southern hybridization analysis to confirm the presence of the undisrupted MET2 gene. This reconstituted strain was designated the met2::MET2 strain. For the growth assays, H99, met2::NeoR, and met2::MET2 cells were separately grown from freshly isolated colonies of each strain in 3 ml of YEPD at 30°C overnight.
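As an aside, the fit to equation 1 can be reproduced with base R's nls() in place of GraFit. The sketch below is purely illustrative: the substrate concentrations and rates are placeholders, not data from this work.

## Placeholder initial-rate data (S in mM, v in arbitrary units).
S <- c(0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10)
v <- c(1.1, 2.0, 4.2, 6.5, 8.9, 10.8, 11.6, 12.1)

## Nonlinear least-squares fit of the Michaelis-Menten model (equation 1);
## Vmax = kcat * Et, so kcat follows from the amount of enzyme used.
fit <- nls(v ~ Vmax * S / (Km + S),
           start = list(Vmax = max(v), Km = 0.5))
summary(fit)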
For these growth assays, culture concentrations were then adjusted to an optical density at 600 nm of unity and monitored by serial dilutions before being applied as spots to minimal medium plates in the absence and presence of 100 μg/ml Met. Moreover, titration of Met (1,000, 500, 250, 125, 62.5, 31.25, 15.6, 7.81, 3.9, and 2.0 μM) into minimal medium with proline or ammonium sulfate as the sole nitrogen source was used to determine the minimum Met concentration required to rescue the auxotrophy.
Test of virulence in the murine inhalation model. To test the role of MET2 in virulence, the murine cryptococcal inhalation model was used (7). Three groups of ten 4- to 5-week-old female A/Jcr mice were anesthetized with xylazine (5.5 mg/kg of body weight) and ketamine (80 mg/kg of body weight), and then they were suspended on a silk thread via their superior incisors. The mice were inoculated with 50 μl (5 × 10⁴ cells) of wild-type H99, met2::NeoR, or met2::MET2 cells via intranasal instillation (by dripping the cell suspension into one naris). They were kept on the silk thread for at least 10 min to ensure complete inhalation into the lungs. The mice were subsequently fed ad libitum and were monitored twice daily throughout the experiment. At the first sign of morbidity, each mouse was euthanized by exposure to carbon dioxide following the UBC Animal Care Guidelines (SOP 009E4).
SpHTA ChemDiv kinase inhibitor library screen. HTA from Schizosaccharomyces pombe (SpHTA) was produced as previously described (3). High-throughput screening was carried out by measuring the change in absorbance at 412 nm due to the production of CoA, detected by titration with 5,5′-dithio-bis(2-nitrobenzoic acid) (DTNB; ε = 13,600 M⁻¹ cm⁻¹). After selecting an optimal amount of enzyme giving linear substrate turnover for approximately 4 min, 48 high controls (no inhibitor) and 48 low controls (no enzyme) were measured to establish a Z′ factor for the assay (equation 2) (26):

$$Z' = 1 - \frac{3\sigma_H + 3\sigma_L}{|\mu_H - \mu_L|} \qquad (2)$$

The ChemDiv kinase inhibitor library (ChemDiv, San Diego, CA) contains 1,000 compounds, all resuspended in dimethyl sulfoxide at a final concentration of 1 mM. Reaction mixtures contained 100 mM HEPES, pH 8.0, 1.3 mM L-Hse, 40 μM acetyl-CoA, 1 mM DTNB, 0.001% Tween 20, and approximately 25 μM inhibitor compounds, and were initiated with 54 ng of SpHTA. The final volume of the reactions was 100 μl. Assay plates were set up using a Biomek FX automated liquid handler (Beckman Coulter) at the McMaster High-Throughput Screening Facility (McMaster University, Hamilton, Ontario, Canada). Ninety-six-well flat-bottom polystyrene microtiter plates (VWR) were shaken for 5 s in a SpectraMax Plus 384 plate reader (Molecular Devices) and monitored for 4 min at a wavelength of 412 nm. Progress curve slopes for data analysis were taken from 60 to 150 s to avoid initial noise while still maintaining assay linearity. Data were visualized in a two-dimensional plot of residual enzyme activity for each replicate, calculated as per equation 3:

$$\%\ \text{residual activity} = \frac{\text{sample data} - \mu_{LC}}{\mu_{HC} - \mu_{LC}} \times 100 \qquad (3)$$

where μLC is the mean of the low controls and μHC is the mean of the high controls (a sketch of these calculations in R is given below). Hit compounds were selected for further characterization of inhibition of CnHTA activity.
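The following R sketch (illustrative, not the screening facility's pipeline) implements equations 2 and 3 on placeholder progress-curve slopes:

## Z' factor (equation 2) from high- and low-control slope vectors.
zprime <- function(high, low) {
  1 - (3 * sd(high) + 3 * sd(low)) / abs(mean(high) - mean(low))
}

## Per-well % residual activity (equation 3).
residual_activity <- function(sample, high, low) {
  100 * (sample - mean(low)) / (mean(high) - mean(low))
}

## Made-up slopes for 48 high (no inhibitor) and 48 low (no enzyme) controls.
set.seed(1)
high <- rnorm(48, mean = 1.00, sd = 0.03)
low  <- rnorm(48, mean = 0.05, sd = 0.02)

zprime(high, low)                   # values above 0.5 indicate a robust assay
residual_activity(0.45, high, low)  # % activity for one hypothetical well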
The concentration of inhibitor required for 50% inhibition of the enzyme (IC50) was determined by plotting the maximal rate versus inhibitor concentration using GraFit 4.0 software (15). Assay mixtures contained 50 mM HEPES, pH 8.0, 1.5 mM L-Hse, 200 μM acetyl-CoA, 2 mM DTDP, 0.001% Tween 20, and 20 μl of increasing inhibitor concentrations in a final volume of 200 μl. Assays were initiated by the addition of 13.5 ng of enzyme, and the reactions were monitored at a wavelength of 324 nm. To further characterize the type of CnHTA inhibition, we monitored enzyme activity while varying one substrate concentration and holding the second at a fixed concentration, at different amounts of the inhibitor. Data were fit to equation 4 or 5, describing competitive or noncompetitive inhibition, respectively, using the Enzyme Kinetics Module v1.0 of Sigma Plot 2000 (6):

$$v = V_{\max} / [1 + (K_m/S)(1 + I/K_i)] \qquad (4)$$

$$v = V_{\max} / [(1 + I/K_i)(1 + K_m/S)] \qquad (5)$$

Cloning of the C. neoformans MET2 gene. The 19.5-Mb C. neoformans genome sequence (http://cneo.genetics.duke.edu) from Duke University was searched to identify a partial sequence of the MET2 orthologue using the S. pombe gene as a probe. The gene fragment from the C. neoformans genomic data predicted a MET2 gene with two introns. In an effort to obtain the MET2 open reading frame, we turned to a C. neoformans cDNA library constructed in the E. coli-compatible plasmid pBluescript. E. coli does not have an equivalent HTA, but rather activates L-Hse for eventual transformation into cystathionine by O-succinylation, catalyzed by HTS, which is encoded by the metA gene. We reasoned, however, that HTA could complement a metA mutant if the next enzyme in the pathway, cystathionine γ-synthase, could accommodate O-acetyl homoserine in addition to O-succinyl homoserine. We therefore introduced the C. neoformans cDNA library into an E. coli metA mutant and selected for growth on media lacking Met. Plasmid DNA was isolated from the positive transformants, and the cDNA insert was sequenced to verify the presence of the C. neoformans MET2 gene.
Characterization of CnHTA. CnHTA was overexpressed in E. coli BL21(DE3) cells and yielded 5 mg of enzyme from 1 liter of medium. The kinetic parameters of CnHTA were determined by monitoring the production of CoA, which in the presence of DTDP produces a thiolate that has a maximum absorbance at 324 nm. The steady-state kinetic parameters were a kcat of 122 s⁻¹, a kcat/Km for L-Hse of 1.30 × 10⁵ s⁻¹·M⁻¹, and a kcat/Km for acetyl-CoA of 8.78 × 10⁵ s⁻¹·M⁻¹. The kcat of this enzyme is about 10-fold higher than that of the enzyme from S. pombe (16). Similarly, the kcat/Km for L-Hse is 100-fold higher for CnHTA, whereas the kcat/Km for acetyl-CoA is comparable between the two enzymes.
Disruption and complementation of the MET2 gene in C. neoformans H99. To determine whether the MET2 gene is important for C. neoformans growth in the absence of Met, we disrupted the gene by homologous recombination. A 1.3-kb fragment of the MET2 gene was disrupted by insertion of a neomycin resistance cassette near the center of the gene (Fig. 2A) (11). The disruption construct was introduced into the C. neoformans H99 strain by biolistic transformation (20), and the colonies that grew on YEPD supplemented with 200 μg/ml of G418 were further analyzed for their ability to grow on minimal medium lacking Met.
Colonies unable to grow without supplemental Met were analyzed for the presence of the MET2 gene disruption by PCR and Southern hybridization analysis. PCR analysis of the disrupted strain showed the predicted 2.0-kb increase in the size of the amplicon (Fig. 2B), and Southern hybridization analysis also showed an increase of 2.0 kb (Fig. 2C). The mutant carrying this construct (met2::NeoR) was used for the rest of the studies as the C. neoformans met2 gene disruption strain.
MET2 gene disruption strategy. (A) Diagram of the construction of the MET2 gene disruption construct with the neomycin resistance cassette (NeoR). The NeoR cassette was inserted at approximately the middle of the MET2 gene fragment. (B) PCR screen of the H99 (lane 1), met2::NeoR (lane 2), and met2::MET2 (lane 3) strains, which confirms the expected increase in the size of the MET2 gene upon disruption with NeoR in the C. neoformans mutant. (C) Southern hybridization analysis of H99 (lane 1), met2::NeoR (lane 2), and met2::MET2 (lane 3) cells digested with BamHI. This also confirms the presence of the MET2 gene disruption by the increase in fragment size upon NeoR insertion.
The met2::NeoR strain was complemented by introduction of the 1.3-kb amplicon of pBluemet2 by biolistic transformation (20). The transformants were selected on minimal medium lacking Met, and the positive transformants were further analyzed by PCR and Southern hybridization analysis to identify those with wild-type characteristics (Fig. 2B and C). To determine the reliance of the C. neoformans met2::NeoR strain on an exogenous Met source for growth, we tested the growth of met2::NeoR on minimal medium plates in the presence and absence of Met. It was evident from the growth assay shown in Fig. 3 that the met2::NeoR strain could not grow in the absence of Met. This indicates that the MET2 gene is essential for the growth of C. neoformans in Met-deficient environments. Further experimentation revealed that when proline was the sole nitrogen source, as opposed to ammonium sulfate, the addition of 62.5 μM Met was sufficient to rescue the met2::NeoR auxotrophy, whereas 125 μM Met was required in the presence of ammonium sulfate as the sole nitrogen source. This difference could be attributed to low Met uptake by the cell in the presence of ammonium sulfate as the sole nitrogen source, owing to nitrogen repression of amino acid uptake. This effect on Met uptake had previously been studied for MET3 and MET6 gene disruptions in C. neoformans (18, 25).
C. neoformans MET2 gene disruption mutant auxotrophy. Growth of H99 (row 1), met2::NeoR (row 2), and met2::MET2 (row 3) cells on minimal solid medium in the presence or absence of 100 μg/ml of Met to study the effect of supplemental Met on the auxotrophy of the met2::NeoR mutant. This mutant grows only in the presence of Met, which confirms that the amino acid is essential for rescuing the growth of C. neoformans upon MET2 gene disruption.
MET2 is required for C. neoformans infection in the murine inhalation model. To investigate the role of MET2 in virulence, the mutant strain, along with the wild-type and reconstituted strains, was inoculated into mice via the nasal inhalation model (7). The survival curves shown in Fig. 4 revealed that mice infected with the wild-type strain H99 were all sacrificed upon showing signs of morbidity by the 21st day postinfection.
Similarly, the met2::MET2-infected mice were all sacrificed upon showing signs of morbidity by the 31st day postinfection, indicating complementation by the wild-type allele. In contrast, mice infected with the met2::NeoR strain all survived for the entire experimental observation period postinfection. These results indicate that disruption of the MET2 gene in C. neoformans attenuates virulence in the murine inhalation model. Murine inhalation model used to test the virulence of a MET2 gene disruption. Three groups of 10 female A/Jcr mice were inoculated intranasally with wild-type H99 (⋄), met2::NeoR (□), or met2::MET2 (Δ) cells. The time of mouse survival with an infection was used as a measure of the virulence of the met2::NeoR strain compared to that of the wild type. The observation period was a total of 50 days postinfection. Small-molecule screen identifies inhibitors of HTA. SpHTA was screened to identify inhibitors that halt its catalytic activity. Statistical analysis of high and low control assays returned a Z′ value of 0.72, indicating that the assay is robust and suitable for screening (see the Z′ sketch following this section). We screened a small (1,000-compound) protein kinase inhibitor library from ChemDiv, reasoning that these protein kinase inhibitors are built around a scaffold that mimics the nucleotide substrate and could therefore also bind to the nucleotide recognition region of the CoA binding site of HTA. Each compound was tested in duplicate, so the duplicate residual activities should agree and fall on a line with a slope of 1. Compounds within the 50% inhibition hit zone were selected for further analysis. In total, 40 compounds were examined to identify inhibitors that were competitive with respect to acetyl-CoA. This was accomplished by screening at two different concentrations of acetyl-CoA, 10 and 100 μM. Ideally, a competitive inhibitor for acetyl-CoA would be less effective in the presence of more substrate, yielding an increase in residual activity. Four compounds that displayed this trend were selected for further IC50 studies, but solubility problems precluded further work on three of them. The IC50 values exhibited by 6-carbamoyl-3a,4,5,9b-tetrahydro-3H-cyclopenta[c]quinoline-4-carboxylic acid (CTCQC) (Fig. 5A) were 156 μM and 287 μM in the presence of 10 μM and 100 μM of acetyl-CoA, respectively. This compound was therefore tested further against CnHTA. Inhibition of CnHTA by CTCQC. Panel A shows the chemical structure of CTCQC. Panel B is a Lineweaver-Burk plot of the inhibition by CTCQC versus acetyl-CoA (Ac-CoA). The lines intersecting on the y axis confirm that the compound is a competitive inhibitor of acetyl-CoA. Panel C is a Lineweaver-Burk plot of the inhibition by CTCQC versus L-Hse. The pattern of lines intersecting in the second quadrant demonstrates that the compound is a noncompetitive inhibitor of L-Hse. Inhibitor CTCQC concentrations: 0 μM (•), 50 μM (○), 100 μM (▾), and 200 μM (▿). Characterization of CTCQC inhibition of CnHTA. The IC50 value of CTCQC against CnHTA was determined to be approximately 4.50 μM. We further investigated the mode of CnHTA inhibition by CTCQC using a standard steady-state approach. Lineweaver-Burk plots of CnHTA activity at different concentrations of CTCQC were generated for both substrates and fit to various models of inhibition.
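The Z′ value quoted above is the screening-window coefficient of Zhang et al. (26), defined as Z′ = 1 − 3(σhigh + σlow)/|μhigh − μlow|. A minimal sketch of the calculation (Python; the control readings are invented for illustration only):

```python
# Z' factor of Zhang et al. (26); values above ~0.5 indicate a robust assay.
import numpy as np

def z_prime(high, low):
    high, low = np.asarray(high, float), np.asarray(low, float)
    return 1.0 - 3.0 * (high.std(ddof=1) + low.std(ddof=1)) / abs(high.mean() - low.mean())

high_controls = [0.98, 1.02, 0.95, 1.05, 1.00]  # e.g., uninhibited enzyme wells
low_controls = [0.04, 0.06, 0.05, 0.03, 0.05]   # e.g., no-enzyme wells
print(f"Z' = {z_prime(high_controls, low_controls):.2f}")
```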
The Lineweaver-Burk plots of enzyme activity at increasing amounts of acetyl-CoA and saturating L-Hse, in the presence of increasing amounts of compound, show lines intersecting on the y axis (Fig. 5B). This indicates that CTCQC is a competitive inhibitor of acetyl-CoA, with a Ki value of 13.6 ± 2 μM. The same plots of enzyme activity at increasing amounts of L-Hse and saturating acetyl-CoA, in the presence of increasing amounts of CTCQC, show lines intersecting in the second quadrant (Fig. 5C). This pattern indicates that CTCQC acts as a noncompetitive inhibitor of L-Hse, with a Ki value of 91.7 ± 11 μM (19). CTCQC had no effect on C. neoformans growth in minimal medium at up to 128 μg/ml. The poor bioavailability of the compound could be due to a number of issues, including transport (influx or efflux of the compound) or metabolism to an inactive form. Met is required for several biochemical processes, and as a result it is essential for cell growth (25). Met is an essential amino acid in mammals, which must acquire it through the diet, but it is synthesized in bacteria, fungi, and plants via the Asp biosynthetic pathway (21). The absence of Met biosynthesis in humans makes the associated biosynthetic enzymes attractive targets for antimicrobial discovery. This is further supported by the fact that the bioavailability of Met in humans (20 μM) (10) is too low to support C. neoformans growth. Several research groups have studied different enzymes of the Asp pathway and shown that they could be good candidates for antimicrobial drugs (9, 18, 22, 25). HTA catalyzes the first committed step in the biosynthesis of Met in fungi, gram-positive bacteria, and some gram-negative bacteria (1). Because of the importance of Met biosynthesis in different microbes, we studied its role in virulence and its inhibition by targeting HTA in the human pathogen C. neoformans. CnHTA is the product of the MET2 gene in C. neoformans H99, which we isolated by complementing an E. coli metA mutant incapable of producing HTS. These two enzymes catalyze the acylation of the hydroxyl group of L-Hse, which is ultimately converted into Met. Although the two enzymes have a similar function, they share no significant primary sequence homology (4, 5). We demonstrate here for the first time that CnHTA can complement an E. coli HTS-null mutant, and we used this strategy to identify the C. neoformans MET2 open reading frame. Overexpression of MET2 in E. coli BL21(DE3) cells allowed us to confirm homoserine acetylation using steady-state kinetics after enzyme purification. The expected role of CnHTA as an important enzyme in Met biosynthesis in C. neoformans was confirmed by disrupting the MET2 gene in this organism, which resulted in the predicted Met auxotrophy. As expected, the met2::NeoR mutant was not capable of growing in the absence of Met in culture media. Interestingly, when Met was added to the cultures, the mutant did not grow as well when ammonium sulfate, rather than proline, was the nitrogen source. Nitrogen sources have been shown to negatively affect complementation of amino acid auxotrophy (12, 13), and ammonium sulfate has been specifically shown to inhibit suppression of Met auxotrophy (18). Therefore, Met can complement the met2 mutation and allow growth of the mutant, confirming that the MET2 gene is essential for Met production in C. neoformans. Moreover, MET2 is an essential C. neoformans gene in environments lacking threshold concentrations of Met.
To determine the role of MET2 in virulence, the wild-type, met2::NeoR, and met2::MET2 strains were studied in a standard mouse inhalation model of infection. The wild-type H99 strain and the reconstituted met2::MET2 strain were virulent, causing complete mortality within the first 21 and 31 days postinfection, respectively. The met2-null mutant strain showed attenuated virulence, with no visible signs of a pathogenic effect on the mice over the entire period (50 days) of observation postinfection. This result is encouraging with regard to drug discovery because it reveals for the first time that HTA is a viable drug target in antifungal research. The attenuation caused by disrupting the MET2 gene, and hence eliminating functional CnHTA, supports our hypothesis that blocking the mechanism of this enzyme is detrimental to the survival of the microbe. We therefore followed this up with an in vitro small-molecule screen using a library of compounds directed towards protein kinase inhibition and based on a purine scaffold, reasoning that these could interact with the purine binding site of the acetyl-CoA substrate. We identified CTCQC as a competitive inhibitor of acetyl-CoA and a noncompetitive inhibitor of homoserine, consistent with our hypothesis. This is the first reported inhibitor of this enzyme and sets the stage for downstream elaboration of this hit compound to improve bioactivity. Many studies have identified various targets in the Asp pathway as possible routes for drug discovery (8, 9, 18, 22, 25). Our work reveals for the first time that although HTA and HTS have no primary sequence homology, CnHTA can rescue an HTS-null mutant by ultimately producing Met. Moreover, CnHTA presents a novel target that is essential in Met-depleted environments, providing a new avenue for antimicrobial development in an era of increased resistance and a desperate need for effective drugs. The authors wish to thank Jonathan Cechetto, Jan Blanchard, and Nadine Elowe of the McMaster High Throughput Screening Laboratory for substructure searches and helpful discussions. The authors would like to thank J. R. Perfect for his generous donation of the neomycin resistance cassette and J. P. Xu for providing the C. neoformans H99 strain. Special thanks are extended to Ken Kasha and Youn-Seb Shim at the University of Guelph for providing help with the biolistic transformation machinery. This research was supported by the Canadian Institutes of Health Research, Crompton Co./Cie., and by a Canada Research Chair in Antibiotic Biochemistry to G.D.W. Received 8 November 2006. Returned for modification 21 January 2007. Published ahead of print on 12 March 2007.
References
1. Andersen, G. L., G. A. Beattie, and S. E. Lindow. 1998. Molecular characterization and sequence of a methionine biosynthetic locus from Pseudomonas syringae. J. Bacteriol. 180:4497-4507.
2. Aoki, Y., M. Yamamoto, S. M. Hosseini-Mazinani, N. Koshikawa, K. Sugimoto, and M. Arisawa. 1996. Antifungal azoxybacilin exhibits activity by inhibiting gene expression of sulfite reductase. Antimicrob. Agents Chemother. 40:127-132.
3. Bareich, D. C., I. Nazi, and G. D. Wright. 2003. Simultaneous in vitro assay of the first four enzymes in the fungal aspartate pathway identifies a new class of aspartate kinase inhibitor. Chem. Biol. 10:967-973.
4. Born, T. L., and J. S. Blanchard. 1999. Enzyme-catalyzed acylation of homoserine: mechanistic characterization of the Escherichia coli metA-encoded homoserine transsuccinylase. Biochemistry 38:14416-14423.
5. Born, T. L., M. Franklin, and J. S. Blanchard. 2000. Enzyme-catalyzed acylation of homoserine: mechanistic characterization of the Haemophilus influenzae met2-encoded homoserine transacetylase. Biochemistry 39:8556-8564.
6. Brannan, T., B. Althoff, L. Jacobs, J. Norby, and S. Rubenstein. 2000. SigmaPlot, 6th ed. SPSS, Inc., Chicago, IL.
7. Cox, G. M., J. Mukherjee, G. T. Cole, A. Casadevall, and J. R. Perfect. 2000. Urease as a virulence factor in experimental cryptococcosis. Infect. Immun. 68:443-448.
8. Ejim, L., I. A. Mirza, C. Capone, I. Nazi, S. Jenkins, G. L. Chee, A. M. Berghuis, and G. D. Wright. 2004. New phenolic inhibitors of yeast homoserine dehydrogenase. Bioorg. Med. Chem. 12:3825-3830.
9. Ejim, L. J., V. M. D'Costa, N. H. Elowe, J. C. Loredo-Osti, D. Malo, and G. D. Wright. 2004. Cystathionine β-lyase is important for virulence of Salmonella enterica serovar Typhimurium. Infect. Immun. 72:3310-3314.
10. Fasman, G. D. 1976. Handbook of biochemistry and molecular biology, 3rd ed. CRC Press, Cleveland, OH.
11. Fraser, J. A., R. L. Subaran, C. B. Nichols, and J. Heitman. 2003. Recapitulation of the sexual cycle of the primary fungal pathogen Cryptococcus neoformans var. gattii: implications for an outbreak on Vancouver Island, Canada. Eukaryot. Cell 2:1036-1045.
12. Kingsbury, J. M., Z. Yang, T. M. Ganous, G. M. Cox, and J. H. McCusker. 2004. Cryptococcus neoformans Ilv2p confers resistance to sulfometuron methyl and is required for survival at 37 degrees C and in vivo. Microbiology 150:1547-1558.
13. Kingsbury, J. M., Z. Yang, T. M. Ganous, G. M. Cox, and J. H. McCusker. 2004. Novel chimeric spermidine synthase-saccharopine dehydrogenase gene (SPE3-LYS9) in the human pathogen Cryptococcus neoformans. Eukaryot. Cell 3:752-763.
14. Kugler, M., W. Loeffler, C. Rapp, A. Kern, and G. Jung. 1990. Rhizocticin A, an antifungal phosphono-oligopeptide of Bacillus subtilis ATCC 6633: biological properties. Arch. Microbiol. 153:276-281.
15. Leatherbarrow, R. J. 2001. GraFit, 4.0 ed. Erithacus Software Ltd., Staines, United Kingdom.
16. Nazi, I., and G. D. Wright. 2005. Catalytic mechanism of fungal homoserine transacetylase. Biochemistry 44:13560-13566.
17. Nelson, R. T., B. A. Pryor, and J. K. Lodge. 2003. Sequence length required for homologous recombination in Cryptococcus neoformans. Fungal Genet. Biol. 38:1-9.
18. Pascon, R. C., T. M. Ganous, J. M. Kingsbury, G. M. Cox, and J. H. McCusker. 2004. Cryptococcus neoformans methionine synthase: expression analysis and requirement for virulence. Microbiology 150:3013-3023.
19. Segel, I. H. 1993. Enzyme kinetics: behavior and analysis of rapid equilibrium and steady-state enzyme systems. John Wiley & Sons, Inc., New York, NY.
20. Toffaletti, D. L., T. H. Rude, S. A. Johnston, D. T. Durack, and J. R. Perfect. 1993. Gene transfer in Cryptococcus neoformans by use of biolistic delivery of DNA. J. Bacteriol. 175:1405-1411.
21. Umbarger, H. E. 1978. Amino acid biosynthesis and its regulation. Annu. Rev. Biochem. 47:532-606.
22. Yamaguchi, H., K. Uchida, T. Hiratani, T. Nagate, N. Watanabe, and S. Omura. 1988. RI-331, a new antifungal antibiotic. Ann. N. Y. Acad. Sci. 544:188-190.
23. Yamaki, H., M. Yamaguchi, H. Imamura, H. Suzuki, T. Nishimura, H. Saito, and H. Yamaguchi. 1990. The mechanism of antifungal action of (S)-2-amino-4-oxo-5-hydroxypentanoic acid, RI-331: the inhibition of homoserine dehydrogenase in Saccharomyces cerevisiae. Biochem. Biophys. Res. Commun. 168:837-843.
24. Yamaki, H., M. Yamaguchi, T. Tsuruo, and H. Yamaguchi. 1992. Mechanism of action of an antifungal antibiotic, RI-331, (S) 2-amino-4-oxo-5-hydroxypentanoic acid: kinetics of inactivation of homoserine dehydrogenase from Saccharomyces cerevisiae. J. Antibiot. (Tokyo) 45:750-755.
25. Yang, Z., R. C. Pascon, A. Alspaugh, G. M. Cox, and J. H. McCusker. 2002. Molecular and genetic analysis of the Cryptococcus neoformans MET3 gene and a met3 mutant. Microbiology 148:2617-2625.
26. Zhang, J. H., T. D. Chung, and K. R. Oldenburg. 1999. A simple statistical parameter for use in evaluation and validation of high throughput screening assays. J. Biomol. Screen. 4:67-73.
Role of Homoserine Transacetylase as a New Target for Antifungal Agents. Antimicrobial Agents and Chemotherapy Apr 2007, 51(5):1731-1736; DOI: 10.1128/AAC.01400-06
Cascade enzymes within self-assembled hybrid nanogel mimicked neutrophil lysosomes for singlet oxygen elevated cancer therapy
Qing Wu, Zhigang He, Xia Wang, Qi Zhang, Qingcong Wei, Sunqiang Ma, Cheng Ma, Jiyu Li & Qigang Wang
Nature Communications volume 10, Article number: 240 (2019)
Cancer microenvironment; Gels and hydrogels
As the first line of innate immune cells to migrate towards tumour tissue, neutrophils can immediately kill abnormal cells and activate long-term specific adaptive immune responses. Therefore, enzyme-mediated elevation of reactive oxygen species (ROS), bioinspired by neutrophils, can be a promising strategy in cancer immunotherapy. Here, we design a core-shell supramolecular hybrid nanogel via surface-phosphatase-triggered self-assembly of oligopeptides around iron oxide nanoparticles to simulate productive neutrophil lysosomes. The cascade reaction of superoxide dismutase (SOD) and chloroperoxidase (CPO) within the bioinspired nanogel can convert ROS in tumour tissue to hypochlorous acid (HOCl) and the subsequent singlet oxygen (1O2) species. Studies on both cells and animals demonstrate successful 1O2-mediated inhibition of cell/tumour proliferation, making this enzyme therapy capable of treating tumours without external energy activation. Cancer remains a leading cause of death, causing millions of deaths worldwide every year. Conventional clinical cancer therapies, including surgery, chemotherapy and radiotherapy, have limitations such as toxic side effects on normal cells, drug/radio resistance and an increased incidence of tumour re-growth1,2,3,4. The human innate immune system, arising from long-term evolution, has inspired a new biochemical approach to highly efficient cancer therapy, achieved through nonspecific cytotoxic activity via molecular oxygen-dependent cell inactivation, lysozyme-associated digestion, and the release of immune cytokines5,6,7.
Neutrophil-dependent cell inactivation commonly occurs via neutrophil lysosomes, which are several hundred nanometres in size and are constituted of the so-called azurophilic granules in conjunction with peroxidase8,9,10. The reactive oxygen species (ROS)-responsive cytotoxic activity of neutrophil lysosomes is determined by the biocatalytic system of myeloperoxidase (MPO), hydrogen peroxide (H2O2) and halide ions11,12,13. The detailed biochemical oxidizing reaction involves the initial enzymatic oxidation of a halide, such as Cl−, by H2O2 to hypochlorous acid (HOCl) and the subsequent decomposition of HOCl to produce singlet oxygen (1O2) for the destruction of microorganisms14,15. Chloroperoxidase (CPO), a robust peroxidase from Caldariomyces fumago with higher resistance to oxidative inactivation than MPO, has commonly been used for the industrial catalysis of chloride oxidation by H2O2, which dominantly produces 1O216,17,18,19. Among 1O2-involved biomedical therapy techniques20,21,22, clinical photodynamic therapy (PDT) and the emerging sonodynamic therapy (SDT) both rely on the excitation of sensitizers by light irradiation or ultrasound in the presence of molecular oxygen to produce 1O2 at tumour sites for irreversible cellular damage23,24,25,26,27. Besides, researchers have also utilized drug-loaded inorganic nanoparticles to stimulate ROS and execute bio-Fenton-like production of the toxic hydroxyl radical (OH˙) for pathology-responsive chemodynamic therapy (CDT) of cancer28,29. Inspired by the natural 1O2-generating strategy of neutrophil lysosomes, the cascade enzyme pair of CPO and superoxide dismutase (SOD) has been selected in this work. SOD is an important antioxidant enzyme, widely used as a food antioxidant in industry, that catalyses the conversion of ˙O2ˉ into H2O2; this increases the amount of H2O2 available for the further formation of 1O2 by the cascade biocatalysis of CPO (the cascade chemistry is summarized below). Our enzyme system provides a 1O2-elevating strategy starting from the endogenous ROS (˙O2ˉ and H2O2) with a tuneable dosage, which can even cause the death of hypoxic tumours without external energy activation. A carrier suitable to entrap CPO and SOD should be similar to the membrane-bound lysosomes of neutrophils, which are several hundred nanometres in size. Nanogels, or hydrogel nanoparticles with particle sizes from a few tens to several hundred nanometres, have been emerging as a versatile and viable platform for various biomedical applications, especially for biocatalytic proteins, because the gel matrix can protect enzymes from structural degradation and subsequent deactivation while ensuring high loading and easy mobility of substrates for efficient catalysis30,31,32,33,34. In particular, hydrogels with a supramolecular structure, self-assembled from oligopeptides or small molecules by noncovalent interactions, have received significant attention as protein-like materials that mimic the extracellular matrix (ECM) in the fields of nanomedicine, catalysis and tissue engineering35,36,37,38. Finally, the multifunctional SOD/CPO-loaded nanogel system (SCNG) can thus be constructed by co-loading the cascade SOD and CPO in the supramolecular nanogel structure as a simulated neutrophil lysosome, responsively converting the relatively high level of ROS in the tumour microenvironment to 1O2 for tumour therapy.
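For reference, the cascade chemistry invoked above can be written out with its standard textbook stoichiometries (a summary under common assumptions, not equations reproduced from the paper):

$$\begin{aligned} \text{SOD:}&\quad 2\,\mathrm{O}_2^{\bullet-} + 2\,\mathrm{H}^{+} \longrightarrow \mathrm{H_2O_2} + \mathrm{O_2} \\ \text{CPO:}&\quad \mathrm{Cl}^{-} + \mathrm{H_2O_2} + \mathrm{H}^{+} \longrightarrow \mathrm{HOCl} + \mathrm{H_2O} \\ \text{nonenzymatic:}&\quad \mathrm{HOCl} + \mathrm{H_2O_2} \longrightarrow {}^{1}\mathrm{O_2} + \mathrm{H_2O} + \mathrm{H}^{+} + \mathrm{Cl}^{-} \end{aligned}$$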
A hypothesis can thus be proposed: enzyme dynamic therapy (EDT), which takes full advantage of the enzymatic reactions in the tumour region, can controllably generate 1O2 to treat cancer. To the best of our knowledge, such a synergetic enzymatic process and treatment have not been previously reported. Construction and characterization of SCNGs The proposed responsive EDT mechanism and the preparation of SCNGs are shown in Fig. 1. EDT with the cascade SOD/CPO provides a tuneable 1O2-elevating strategy through biocatalytic reaction with the endogenous ROS (˙O2ˉ and H2O2): SOD catalyses endogenous ˙O2ˉ into O2 and H2O2, and CPO then converts both the as-obtained and endogenous H2O2 into HOCl and subsequently 1O2. Compared with the commonly mentioned PDT, SDT or CDT, EDT employs endogenous-like enzymes as the core component to cascade-catalyse the ROS in the tumour microenvironment and drastically produce 1O2. The high enzymatic efficiency and specificity give EDT potential, unique advantages in efficacy and safety. EDT can be an efficient therapeutic tool for hypoxic tumours because of the in situ self-catalysed O2 supply, without any further external energy activation and with an amplified 1O2 dosage owing to the adjustable cascade enzymes and continuous enzymatic oxidative stress (Fig. 1a). The SCNG preparation route includes synthesis of the MNP core, interface-triggered self-assembling hydrogelation, and cascade enzyme encapsulation (Fig. 1b). In detail, MNP cores were synthesized and carboxyl-modified (MNPs@COOH) by a hydrolysis reaction with aminopropyltriethoxysilane (APTES) followed by reaction with succinic anhydride. The carboxyl groups were then activated by 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide and N-hydroxysuccinimide (EDC/NHS), and the AP triggers were attached to the surface of MNPs@COOH by amidation (MNPs@AP). Subsequently, a mild enzyme-induced assembly (EIA) technique is applied at the nanoscale surface. In particular, the bound AP on the surface of MNPs@AP dephosphorylates the hydrophilic peptide precursor N-(fluorenyl-methoxycarbonyl) tyrosine phosphate (Fmoc-Tyr(H2PO3)-OH) to the hydrophobic hydrogelator (Fmoc-Tyr-OH), which preferentially self-assembles in aqueous solution. The inorganic nanosurface (MNPs@AP) serves as a bioactive seed layer, and the concentration of gelators accordingly increases near the nanosurface/solution interface as the enzymatic reaction proceeds. The relatively hydrophobic portion (the Fmoc group) of Fmoc-Tyr-OH tends to assemble spontaneously towards the MNP surface to form a molecular layer, while the hydrophilic portion remains in the outer aqueous solution; the supramolecular hydrogel is thereby obtained by the self-assembly of Fmoc-Tyr-OH through π–π stacking and electrostatic interactions. The equilibrium between the enzymatically controlled generation and the deposition of gelators around the MNPs determines the thickness of the supramolecular NG shell. Scheme of the responsive EDT mechanism and preparation of SCNGs. a The cascade SOD/CPO-mediated therapy includes the catalysis of ˙O2ˉ into H2O2 by SOD and the conversion of both the as-obtained and endogenous H2O2 into the final 1O2 species by CPO.
b The fabrication process of SCNGs involves (1) modification of the MNP core and attachment of the AP trigger, (2) dephosphorylation of the hydrophilic peptide precursor to the hydrophobic hydrogelator by the bound AP, (3) AP-triggered self-assembly between the hydrophobic and hydrophilic portions of the hydrogelators through π–π stacking and electrostatic interactions around the MNP, and (4) immobilization of the cascade SOD and CPO, enabling safe and effective EDT in the tumour microenvironment. The responsive SCNGs can effectively convert the endogenous ROS (˙O2ˉ and H2O2) into highly reactive 1O2 through the cascade reaction of SOD and CPO in the tumour region, which subsequently causes cancer cell death. Highly monodispersed supramolecular core-shell NGs were obtained via this AP-triggered, surface-initiated self-assembly technique, and owing to their magnetic MNP cores, they can be recycled by magnetic separation. The final simulated neutrophil lysosomes, SCNGs, were fabricated by further immobilization of SOD and CPO onto the as-obtained NGs in Tris·HCl buffer (1 M, pH 6.0), exploiting hydrogen bonding between peptides and proteins. When entering a tumour microenvironment with larger oxidative stress and slightly higher ROS levels, the SCNGs responsively catalyse the relatively high-level ROS (mainly ˙O2ˉ and H2O2 species) to "deadly" 1O2 species while leaving normal cells intact, achieving safe and effective EDT through the specific and efficient enzymatic generation of ROS above the tolerance threshold of the vulnerable tumour. The morphology, structure and composition of the MNP core and the as-obtained SCNG system were characterized by scanning transmission electron microscopy (STEM), energy dispersive X-ray (EDX) spectroscopy mapping, scanning electron microscopy (SEM), transmission electron microscopy (TEM), dynamic light scattering (DLS), Fourier transform infrared spectroscopy (FT-IR) measurements and thermogravimetry (TG) analysis (Fig. 2 and Supplementary Fig. 1). Relative to the rough surface of the MNP core, an Fe3O4 cluster (Supplementary Fig. 1a, d), the SCNGs exhibit a more spherical structure and a smoother surface with better monodispersity (Supplementary Fig. 1b, c, e, f). An obvious light-contrast organic hydrogel nanoshell can be observed around the black MNP core in the STEM images under different modes, especially the BF and DF modes, as shown in Fig. 2. Element mapping of the corresponding nanogels by EDX (Fig. 2f–i), showing a spherical distribution of elemental C and P from the residual phosphate groups on the surface of the MNP core, further indicates the successful coating of the supramolecular hydrogel nanoshell. Construction and characterization of SCNGs. The STEM images of SCNGs using different modes and energy dispersive X-ray (EDX) spectroscopy mapping images of the C, P, O and Fe elements (a SE: secondary electron, b BF: bright field, c DF: dark field, d, e HAADF: high-angle annular dark field). j The size distributions of MNPs (green line) and SCNGs (orange line) in buffer. k FT-IR spectra of the MNPs (green line) and SCNGs (orange line). l TG test of the MNPs (green line) and SCNGs (orange line). Scale bar in a–e, 100 nm. Scale bar in f–i, 50 nm. Source data are provided as a Source Data file The average diameters of the air-dried MNP core and SCNGs observed in the STEM images are ~80 nm and ~110 nm, respectively. Furthermore, the dynamic diameter distributions were investigated by DLS measurements (Fig. 2j).
The supramolecular core-shell SCNGs possess an average hydrodynamic diameter of ~250 nm with favourable dispersibility. Hydrogel nanoparticles in aqueous solution usually exhibit a much larger size in DLS tests than the dried size obtained by SEM or TEM, because the hydrodynamic diameter includes the water immobilized inside the hydrogel network and the surrounding hydration layer. Comparatively, the MNPs measure ~150 nm, also confirming the existence of the supramolecular hydrogel shell. The composition and content of the organic hydrogel nanoshell were further assessed by FT-IR and TG analysis, as illustrated in Fig. 2k, l. The FT-IR spectra of both MNPs (green line) and SCNGs (orange line) exhibit the absorption peak of the iron oxide structure at around 585 cm−1 (Fe–O vibrations). The characteristic bands of the MNPs in the region of 1683–1142 cm−1 arise from the stabilizing layer of polyvinylpyrrolidone (PVP)39. The successful gelation of the SCNGs based on the peptide hydrogelators is validated by the increased intensity of the -C–O–C- peak at 1036 cm−1 and the -CH2 stretching at 2950 cm−1, together with the significantly decreased Fe–O vibrations40. The increased intensities of the peaks at 1638 cm−1 (νCO, amide I band) and 1553 cm−1 (δNH, amide II band) in the SCNG spectrum belong to the protein amide bands from SOD and CPO41,42. Quantitatively, the TG curves show an additional 33.3% weight loss for lyophilized SCNGs relative to MNPs. The amount of peptide hydrogelator and cascade enzymes can therefore be calculated to be approximately 0.62 g per 1 g of MNPs (equivalently, the organic shell accounts for roughly 38% of the SCNG dry mass, since 0.383/0.617 ≈ 0.62). As a further proof, the corresponding bulk hydrogel can be obtained by the same surface-AP-triggered interfacial hydrogelation when the hydrogelator concentration is raised about 10-fold, crosslinking the nanogels into a bulk gel, as shown in Supplementary Fig. 2. The storage modulus (G′) is higher than the loss modulus (G″) in the corresponding frequency sweep tests, also indicating a successful hydrogelation process. Both the morphological and structural features and the composition analysis indicate that a supramolecular hydrogel nanoshell was successfully constructed on the introduced MNP core by the developed AP-triggered, surface-initiated self-assembly technique. For an enzymatic therapy mode, the stability and enzyme activity under different pH values and storage times should be evaluated (Supplementary Figs. 3–6 and Supplementary Table 1). First, the stability of the SCNGs was tested, including the release of the enzymes in PBS at 37 °C over 14 days and the changes in diameter and morphology after treatment in cell culture media containing 10% (v/v) fetal calf serum (FCS) for 24 and 48 h, as shown in Supplementary Figs. 3–4. There was no significant release of enzymes during the 14 days, and the morphology observed by TEM after treatment remained intact, indicating the good stability of the SCNGs without disassembly after long-term storage. The corresponding average size increased from 255 nm to about 295 nm after treatment, possibly due to adsorption of serum from the culture media. The loading amounts of SOD and CPO in SCNGs were determined to be 43.8 U mg−1 and 25.0 U mg−1 of NGs, respectively, as illustrated in Supplementary Table 1. Their activities were evaluated based on the ability of SOD to inhibit the autoxidation of pyrogallol and of CPO to catalyze the conversion of monochlorodimedon (MCD) to dichlorodimedon (DCD)43,44.
The activities of SOD and CPO within SCNGs remained at 76% and 84% of those of the free enzymes, respectively. Furthermore, the enzyme activities in simulated acidic intracellular environments at pH 7.0, 6.0 and 4.6 were studied, as shown in Supplementary Fig. 5. The activity of SOD in SCNGs remained ~80% at pH 4.6, while CPO exhibited 289% activity at pH 4.6 relative to that at pH 7.4, because the optimum catalytic pH of CPO is about 4.5. Finally, storage activity tests over 30 days were also conducted (Supplementary Fig. 6). Neither the SOD nor the CPO activity in SCNGs showed any significant loss. The supramolecular hydrogel matrix can relieve enzyme degradation through non-covalent immobilization and sacrificial protection by its protein-like structure, ensuring high structural stability and enzymatic capability for efficient catalysis45. Characterization of the 1O2 generation and efficacy The key to effective EDT lies in the efficacy of 1O2 generation (Fig. 3). The EPR technique was employed, using TEMP as a trapping agent and H2O2 (100 µM) in NaCl solution (100 mM) as a simulated endogenous environment, to qualitatively detect and validate the 1O2 signals generated by SCNGs. As shown in Fig. 3a, after mixing the SCNGs and the 1O2 trapper (TEMP), a typical 1O2-induced 1:1:1 triplet signal was clearly observed in the EPR spectra (with a hyperfine splitting constant αN = 17.34 G and a g value of 2.0056 for the TEMP-singlet oxygen adduct46), and its intensity increased over time. The catalysis mechanism and efficacy of the cascade enzymes were further demonstrated by comparing the 1O2 generation of the combined SOD/CPO system with that of single-enzyme SOD or CPO systems. To simulate the endogenous ˙O2ˉ and H2O2 components of ROS, the xanthine oxidase (XO, 13 U mL−1)/xanthine (X, 25 mM) reaction was introduced, which has been reported to generate mostly ˙O2ˉ species together with a small additional amount of H2O247. As illustrated in Supplementary Fig. 7, the SOD-only system generated negligible 1O2 signals in the spectra even 10 min after the reaction started. In the CPO-only system, relatively weak 1O2 signals were obtained from the catalytic reaction of CPO with the XO/X-derived H2O2. Considerable and increasing 1O2 signals were observed in the EPR spectra of the SOD/CPO cascade system as the reaction time progressed from 0 min to 10 min, consistent with the EPR results of SCNGs in Fig. 3a. All the results verified that the cascade SOD and CPO in SCNGs can efficiently biocatalyse ROS in tumour tissues to 1O2 species with a tuneable dosage for anti-tumour therapy. Characterization of the generation of 1O2. a Time-dependent electron paramagnetic resonance (EPR) signals of 1O2 from SCNGs in the presence of 2,2,6,6-tetramethylpiperidine (TEMP). TEMP served as the 1O2 trapper. b Fluorescence spectra of the singlet oxygen sensor green (SOSG) with SCNGs in PBS buffer (20 mM, pH 6.8). c Time-dependent fluorescence spectra of SOSG with free CPO and SCNGs in PBS buffer (20 mM, pH 6.8). a.u., arbitrary units. d Three-dimensional confocal laser scanning microscopy (3D-CLSM) images of living HepG2 cells after co-incubation with SCNGs at 300 μg mL−1 for 2 h and treatment with the SOSG probe (5 µM) from 0 min to 30 min. Scale bar, X-axis: 140 μm, Y-axis: 140 μm, Z-axis: 12 μm.
e The corresponding CLSM photomicrographs in the YZ (cells on the red line) and XZ (cells on the orange line) planes of living HepG2 cells, and of cells after co-incubation with NGs and SCNGs at 300 μg mL−1 for 2 h, treated with the SOSG probe (5 µM) at 18 min. Scale bar, 20 μm. Source data are provided as a Source Data file Furthermore, the efficacy of 1O2 generation can be evaluated by fluorescent probes such as the Singlet Oxygen Sensor Green reagent (SOSG)48. As illustrated in Fig. 3b, the fluorescence emission intensity of SOSG reacting with SCNGs gradually increases as the reaction time increases from 0 min to 5 min in the H2O2 and NaCl solution, verifying the production of 1O2 by the SCNG system with H2O2 and NaCl. The time-dependent fluorescence spectra of SOSG reacting with SCNGs or free CPO in the presence of H2O2 and NaCl are displayed in Fig. 3c, which indicates that 1O2 generation increases over time. The SCNGs show a stable and sustained reactive process in the spectra; the activity of the immobilized enzyme system remains considerable and favourable. Relative to free CPO, SCNGs show no significant impact on the production of 1O2, only a slower 1O2 generation rate at the same concentration of CPO. Hence, a simulated biochemical oxidizing reaction of neutrophil lysosomes occurs in the SCNG system, in which 1O2 species are generated by the CPO-catalysed oxidation of Cl− by H2O2. In addition, the generation of 1O2 by the therapeutic agents was further evaluated in a HepG2 cell model with starvation treatment. After co-incubation with SCNGs for approximately 2 h, the fluorescent probe SOSG was added for subsequent examination by CLSM from 0 min to 30 min. As time passes (Fig. 3d), the living HepG2 cells exhibit progressively stronger green fluorescence in 3D-CLSM, indicating abundant 1O2 inside the cells originating from the enzymatic reaction with the intracellular ROS. To further distinguish the contributions of SCNGs and of the HepG2 cells themselves to 1O2 generation, HepG2 cells were co-incubated with NGs or SCNGs at 300 μg mL−1 for 2 h. CLSM photomicrographs in the YZ (cells on the red line) and XZ (cells on the orange line) planes were captured 18 min after adding the SOSG probe. In contrast to the control and NG groups, which show negligible fluorescence signals, the SCNG group shows extensive green 1O2 signals throughout the whole cell in both the YZ and XZ plane images, as shown in Fig. 3e. All the results verify that the therapeutic agent SCNGs can responsively and significantly increase the production of 1O2 in the tumour microenvironment, fulfilling the potential of EDT with high safety and efficacy. In vitro EDT After the analysis of 1O2 species from the SCNG therapeutic agents, the efficiency and safety of EDT were investigated in vitro on hepatoma carcinoma HepG2 cells and normal hepatic HL-7702 cells, respectively. First, the cytotoxicity of the as-prepared NGs and SCNGs at concentrations from 1 μg mL−1 to 500 μg mL−1 was evaluated by CCK-8 assays after incubation with HepG2 cells under starvation treatment for 24 h. As shown in Fig. 4a, the NG itself shows almost no detectable cytotoxicity towards the HepG2 cells, with approximately 90% cell viability even at a concentration of 500 μg mL−1. By contrast, with the therapeutic 1O2 mediated by the loaded cascade SOD/CPO, SCNGs exhibit significant inhibition of cell proliferation, with an IC50 of approximately 291.24 μg mL−1 (an illustrative dose-response fit is sketched below).
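IC50 values such as the one above are conventionally extracted by fitting viability-versus-concentration data to a four-parameter logistic (Hill) model. A minimal sketch (Python with SciPy; the paper does not state which fitting software was used, and all viability numbers below are invented placeholders):

```python
# Illustrative four-parameter logistic fit to recover an IC50 from CCK-8
# viability data; concentrations in µg/mL, viabilities in % (hypothetical).
import numpy as np
from scipy.optimize import curve_fit

def hill(c, top, bottom, ic50, n):
    # viability falls from 'top' to 'bottom' with Hill slope n around ic50
    return bottom + (top - bottom) / (1.0 + (c / ic50) ** n)

conc = np.array([1, 10, 50, 100, 200, 300, 400, 500], float)
viab = np.array([99, 97, 88, 75, 60, 48, 40, 33], float)

popt, _ = curve_fit(hill, conc, viab, p0=[100.0, 20.0, 250.0, 1.0])
print(f"IC50 ≈ {popt[2]:.0f} µg/mL")
```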
Besides, the safety of SCNGs was further evaluated by cytotoxicity tests with CCK-8 assays after co-incubation with HL-7702 cells for 24 h at concentrations from 1 μg mL−1 to 500 μg mL−1. As shown in Fig. 4b, both NGs and SCNGs exhibited no significant effect on HL-7702 cell survival; all viability values remained at about 90% even at the high concentration of 500 μg mL−1. Thus, SCNGs can significantly inhibit tumour cell proliferation while showing negligible cytotoxicity to normal cells, indicating the high efficiency and safety of EDT in vitro. Safety, efficiency and mechanism of EDT. a Cytotoxicity of NGs and SCNGs by CCK-8 assays after incubation with HepG2 cells for 24 h at concentrations from 1 μg mL−1 to 500 μg mL−1. NGs exhibited no significant effect on cell survival, while SCNGs showed increasing cytotoxicity to HepG2 cells with increasing concentration. Data are presented as mean ± s.d. (n = 3). b Cytotoxicity of NGs and SCNGs by CCK-8 assays after incubation with HL-7702 cells for 24 h at concentrations from 1 μg mL−1 to 500 μg mL−1. Both NGs and SCNGs exhibited no significant effect on cell survival. Data are presented as mean ± s.d. (n = 3). c Flow cytometry results for the apoptosis of HepG2 cells stained with Annexin V-FITC/PI after incubation with NGs and SCNGs at the IC50 value for 24 h (left), and the corresponding data analysis (right). The enhanced apoptosis promoted by SCNGs confirms their significant cancer-cell-killing effect. Data are presented as mean ± s.d. (n = 3). p values were analyzed by Student's two-sided t-test (**P ≤ 0.01; n.s., no significant difference). d Time-dependent CLSM images (top) and corresponding flow-cytometry analysis (bottom) of HepG2 cells treated with NGs and SCNGs at the IC50 value for 24 h using carboxy-H2DCFDA as a ROS detector (left), and the corresponding data analysis (right). The experiment was conducted three times, and representative results are presented. Scale bar, 20 µm. Data are presented as mean ± s.d. (n = 3). p values were analyzed by Student's two-sided t-test (**P < 0.01; n.s., no significant difference). Source data are provided as a Source Data file Furthermore, the apoptosis of HepG2 cells incubated with NGs and SCNGs at the IC50 value for 24 h was analysed more precisely by flow cytometry after staining with annexin V-FITC/PI (fluorescein isothiocyanate/propidium iodide). As shown in Fig. 4c, relative to the PBS group (approximately 3.9%), treatment with NGs induces no evident apoptosis (approximately 6.9%), while SCNGs significantly promote apoptosis of the tumour cells (approximately 53%). These results indicate that SCNGs can serve as safe and efficient potential agents for tumour therapy owing to their considerable biocompatibility, lack of significant toxic effects on normal cells, and (conversely) strong cytotoxicity to cancer cells under oxidative stress. This biochemical oxidizing reaction between the SCNG system and the microenvironmental ROS, which involves oxidant products such as 1O2 and HOCl, can further stimulate more ROS to initiate another round of enzymatic reactions by the immobilized SOD/CPO, theoretically providing high effectiveness for tumour therapy via the developed EDT technique.
Thus, as a proof-of-concept study, the intracellular ROS condition in tumour cells was investigated after 24 h of incubation with SCNGs at the IC50 value, using 5-chloromethyl-2′,7′-dichlorodihydrofluorescein diacetate acetyl ester (DCFH-DA) as the ROS fluorescent probe, as illustrated in Fig. 4d. Increasingly strong green fluorescence can be detected under CLSM from 0 min to 18 min after adding the DCFH-DA probe, indicating that the probe molecules gradually enter the HepG2 cells containing elevated ROS (top of Fig. 4d). Further quantitative analysis was performed on HepG2 cells incubated with PBS, NGs or SCNGs using flow cytometry at 18 min after adding the probe (bottom of Fig. 4d). As expected, only SCNGs can effectively upregulate the level of ROS, and the NGs perform similarly to the control. For comparison, the endogenous ROS levels of HL-7702 cells with and without SCNG treatment were also tested (Supplementary Fig. 8). As expected, the baseline ROS levels in HepG2 cells are slightly higher than those in HL-7702 cells. Moreover, the ROS levels in HL-7702 cells treated with PBS (control) or with SCNGs show no significant difference; that is, the endogenous ROS in HL-7702 cells cannot be converted into enough cytotoxic ROS to induce apoptosis, which can be attributed to the strong antioxidant response of normal cells compared with tumour cells, whose antioxidant capacity is exhausted under high oxidative stress20,49. The intracellular ROS activity in tumour cells is greatly enhanced by the cascade reaction of SOD and CPO in SCNGs inside the cancer cells, which serves as one of the basic principles of highly efficient EDT. Before studying EDT in vivo, the mechanism of tumour cell apoptosis induced by SCNGs was investigated. The γH2AX assay (phosphorylation of the core histone protein H2AX), neutral comet assays and cell cycle analysis were conducted. γH2AX is used as a marker to track DNA double-strand breaks (DSBs) and thus to evaluate the DNA damage induced by 1O2. As shown in Fig. 5a, the red fluorescence of γH2AX increases remarkably in the SCNG group relative to the NG and control groups, indicating that cells exposed to SCNGs suffer DNA damage from 1O2. To directly visualize the DNA damage in HepG2 tumour cells, we employed neutral comet assays, which have been reported to detect exclusively the DNA double-strand breaks induced by various DNA-damaging agents. As shown in Fig. 5b, comet tails in HepG2 cells treated with SCNGs are significantly longer than those in the NG and control treatments, showing that many more DNA double-strand breaks exist in the SCNG group. In addition, DNA damage usually leads to perturbation of cell cycle progression; thus, flow cytometry assays with PI staining were conducted on HepG2 cells after incubation with PBS, NGs or SCNGs at the IC50 value for 24 h (Fig. 5c). The SCNG treatment results in a noticeable obstruction of cell cycle progression and interrupts the G0/G1 checkpoint transition, leading to cell apoptosis. All these results verify that the mechanism of tumour cell death induced by EDT is 1O2-induced DNA damage that further interferes with cell cycle progression. Mechanism of tumour cell apoptosis induced by EDT. a Representative CLSM images of different groups of HepG2 cells using γH2AX as a DNA damage biomarker. Scale bar, 20 µm. The red fluorescence indicates the DNA damage of HepG2 cells caused by 1O2 after SCNG treatment.
b Representative fluorescent images of DNA damage in different groups of HepG2 cells using the neutral comet assay. Scale bars, 50 μm. The appearance of longer comet tails in HepG2 cells indicates damaged DNA double-strand breaks. c Effects of SCNGs on cell cycles by flow cytometry assays on different groups of HepG2 cells with PI staining (left), and the corresponding data analysis (right). The control, NG and SCNG groups refer to HepG2 cells incubated with PBS, NGs and SCNGs, respectively, at the IC50 value for 24 h. All experiments were performed three times, and representative results are displayed. Data are presented as mean ± s.d. (n = 3). p values were analyzed by Student's two-sided t-test (*P < 0.05; n.s., no significant difference). Source data are provided as a Source Data file In vivo EDT Finally, a nude mouse model bearing HepG2 cell-derived liver tumours was established to further evaluate the in vivo efficacy of EDT. Tumour-bearing mice with similar tumour sizes of 80–100 mm3 were divided into three groups: a PBS control group, an NG group and an SCNG group. Each mouse was intravenously injected with 100 μL of PBS, NGs or SCNGs at 5 mg mL−1 (a dose of 20 mg kg−1 per mouse), and the tumour sizes were recorded using a calliper every other day for 14 days to examine the EDT effects. The tumour sizes in the SCNG group increased only slightly, indicating a significant inhibition of tumour growth. By contrast, mice treated with NGs or PBS presented fast tumour growth during the 14 days of treatment by intravenous injection (Fig. 6a, b). Moreover, no significant body weight changes were observed in any of the three groups during the therapeutic process (Fig. 6c). Finally, the mice were sacrificed to collect the tumours and organs for histological analyses. As depicted in Fig. 6d, the pathological structure and morphology, apoptosis and proliferation of the cells in tumour sections were analysed by H&E (haematoxylin and eosin), TUNEL (terminal deoxynucleotidyl transferase dUTP nick end labeling) and KI-67 (nuclear-associated antigen KI-67) tests. In H&E staining, the cells in tumour sections exhibited obvious destruction, with large vacuoles and irregularly widened nuclei after SCNG treatment, while the tumour tissue of the NG and PBS groups remained intact and undamaged. In addition, only the tumour tissues treated with SCNGs demonstrated significant TUNEL-positive apoptotic tumour cells (green fluorescence) and decreased KI-67-positive proliferating tumour cells. All these results demonstrate the high therapeutic potential of SCNGs for EDT. Moreover, to evaluate the toxicity and side effects of SCNGs in vivo, H&E-stained tissue slices of the major organs, including heart, liver, spleen, lung and kidney, were also analysed, as shown in Fig. 6e. No noticeable damage or pathological change was observed in the organ slices after injection of SCNGs, which reveals that SCNGs have no evident side effects and can be a safe agent for further in vivo applications. In vivo EDT on the HepG2 cell-derived mouse model. a Photographs on days 0, 2, 6 and 14 of tumour-bearing mice after the various treatments by intravenous injection. Dose: PBS (100 μL), NGs (100 μL, 5 mg mL−1), SCNGs (100 μL, 5 mg mL−1). The relative tumour volume (b) and body weight (c) change curves of each group of mice over 14 days after the various treatments. Data are the means ± s.d. (n = 3).
p values were analyzed by Student's two-sided t-test (*P < 0.05). d Histopathology analysis (H&E staining, TUNEL and KI-67 assays) of the tumour tissue after 14 days of the different treatments. Scale bar, 200 μm. e H&E staining of the major organs of each group of mice after 14 days of the different treatments. Scale bar, 200 μm. Source data are provided as a Source Data file To further reveal the therapeutic potential of SCNGs for in vivo EDT, a more relevant animal model, the hepatocellular carcinoma (HCC) patient-derived xenograft (PDX) mouse model, was employed to investigate the anti-tumour efficacy and the corresponding survival percentages. Considering the significant differences in tumour growth arising from the different sources of patient tumour tissues and their genomic diversity, the experimental endpoint was set at a tumour volume of 1000 mm3, as commonly reported for subcutaneous tumours. The relative tumour volume, the body weight change and the corresponding survival curves (defined as the percentage of mice with tumour volumes < 1000 mm3) were studied in this advanced model, as shown in Fig. 7. By days 15 and 19, the tumour volumes of the mice treated with PBS and NGs, respectively, exceeded the defined endpoint of 1000 mm3, with P values of 0.0053 and 0.0065 showing significant differences from the tumour volumes of the SCNG group (Fig. 7a). By day 23, the tumour volumes of all mice treated with PBS or NGs exceeded 1000 mm3, whereas 83% of the SCNG-treated mice retained tumour volumes below the endpoint size. The administration of SCNGs significantly prolonged the survival of the mice, as shown in Fig. 7c. Concurrently, a more detailed histopathology analysis of the HCC PDX tumour tissue after the different treatments was conducted to verify the actual therapeutic species (Fig. 7d). The localization of SCNGs in tumour tissues was confirmed by ex vivo Prussian blue staining of tumour tissue extracted from the mice after treatment. Parts of the tumour tissues are clearly stained blue (marked by the arrows), indicative of the accumulation of the iron oxide-based SCNGs within the tumour areas. For more quantitative data, ICP analysis was also conducted, giving about 32.6 ng Fe per mg of tumour tissue. The EDT-related therapeutic species, including ROS, 1O2 and the intermediate HOCl, were further detected in the tumour tissues of the different groups by using the corresponding fluorescent probes DCFH-DA, SOSG and aminophenyl fluorescein (APF, as the HOCl detector). Strong green fluorescence from DCFH-DA and SOSG was detected in the slices of the SCNG group, in contrast to the PBS and NG groups. Only SCNGs can effectively upregulate the levels of ROS and 1O2 to fulfil the responsive EDT, similar to the in vitro results. Finally, the pathological examination of the related tumour tissues by H&E staining also shows results consistent with the previous data from the HepG2 nude mouse model. Significantly damaged and apoptotic tumour cells are found in the tumour tissues of the SCNG group, while negligible damage is detected in the organ tissues (Supplementary Fig. 9). In vivo EDT on the HCC PDX mouse model. The relative tumour volume (a) and body weight (b) change curves and the corresponding survival percentages (c) of each group of mice. Dose: PBS (100 μL), NGs and SCNGs (100 μL, 5 mg mL−1). Data are the means ± s.d. (n = 6). p values were analyzed by Student's two-sided t-test (**P < 0.01).
d Histopathology analysis of the HCC PDX tumour tissue: ROS, 1O2, HOCl, H&E and Prussian blue staining, respectively. Scale bar, 100 μm. Source data are provided as a Source Data file In this study, bioinspired by the mechanism of neutrophil lysosomes, an EDT protocol has been proposed that employs endogenous-like enzymes as the core component to cascade-catalyse the ROS in the tumour microenvironment and drastically produce 1O2. A supramolecular hybrid nanogel carrying the cascade SOD/CPO enzyme system (SCNGs) has been constructed via the facile AP-triggered self-assembly developed here. The supramolecular hydrogel matrix relieves enzyme degradation through non-covalent immobilization and sacrificial protection by its protein-like structure, ensuring high structural stability and enzymatic capability for efficient catalysis; the tandem SOD and CPO endow the hybrid nanogels with excellent ROS sensitivity and the capability to significantly elevate the oxidative stress of tumour cells. The encapsulated CPO converts H2O2 into 1O2 in tumour cells, while SOD consumes the endogenous ˙O2ˉ to increase the endogenous H2O2 level, either to feed CPO or to kill tumour cells. In vitro EDT results on HepG2 cells have shown that the intracellular ROS activity can be greatly enhanced by the cascade reaction of SOD and CPO in SCNGs, inducing considerable cytotoxicity, whereas the endogenous ROS in HL-7702 cells cannot be catalysed into enough cytotoxic ROS to induce apoptosis, owing to the stronger antioxidant response of normal cells compared with tumour cells. In vivo experiments on both the HepG2 cell-derived nude mouse model and the advanced HCC PDX mouse model further demonstrate that the tumour microenvironment-responsive SCNGs can significantly inhibit proliferation and enhance apoptosis of tumour cells and tissues. The further histopathology analysis using the corresponding fluorescent probes DCFH-DA, SOSG and APF indicates that SCNGs can effectively upregulate the levels of ROS and 1O2 species to fulfil the responsive and efficient EDT, similar to the in vitro results. In summary, as a proof-of-concept study, a simulated neutrophil lysosome has been constructed from an AP-triggered, self-assembled core-shell nanogel co-loaded with cascade SOD and CPO to responsively and controllably generate 1O2 that quickly kills abnormal cells, realizing the proposed EDT. This multifunctional SOD/CPO-loaded nanogel system, SCNG, can responsively generate therapeutic 1O2 in the simulated ROS microenvironment and in tumour cells with high efficacy. The SCNGs show good aqueous dispersibility, considerable biocompatibility, negligible toxic effects on normal cells and (conversely) strong cytotoxicity to cancer cells, and can thus serve as safe and efficient potential therapeutic agents for the EDT of tumours. We believe that this work provides insight towards establishing non-invasive, endogenous therapeutic nanoplatforms relying only on the endogenous ROS in tumours, without any other external intervention. The potential and unique efficacy and safety advantages arising from its high enzymatic efficiency and specificity can make EDT a very favourable constituent of 1O2-induced tumour therapeutic strategies such as PDT, SDT and CDT.
Ferric chloride hexahydrate (FeCl3·6H2O), ethylene glycol (EG), diethylene glycol (DEG), acid phosphatase (AP, MW = 10 kDa, EC 3.1.3.2), superoxide dismutase (SOD, ≥ 3000 U mg−1, EC 1.15.1.1) and chloroperoxidase (CPO, ≥ 3000 U mL−1, EC 1.11.1.10) were purchased from Sigma-Aldrich. Sodium acetate (CH3COONa, NaOAc), poly(vinylpyrrolidone) (PVP, K30), pyrogallol, succinic anhydride and 3-aminopropyltriethoxysilane (APTES) were purchased from Aladdin. 1-Ethyl-3-(3-dimethylaminopropyl)carbodiimide (EDC) and N-hydroxysuccinimide (NHS) were purchased from Energy Chemical, and Fmoc-Tyr(H2PO3)-OH was purchased from GL Biochem. Tris·HCl buffer was purchased from Beijing Solarbio Science & Technology. Singlet Oxygen Sensor Green (SOSG) was purchased from Thermo Fisher. Aminophenyl fluorescein (APF), used as the HOCl detector, was purchased from Shanghai Maokangbio. The Enhanced BCA Protein Assay Kit was purchased from Beyotime Biotechnology. All materials were used without further purification. Scanning electron microscopy (SEM) images were acquired on an FEI Magellan 400 L system. TEM and STEM images were taken with a JEOL 2100 microscope (Japan) operated at 200 kV with element mapping (Oxford X-Max 80T). Dynamic light scattering (DLS) studies of the nanogels were conducted on a Zetasizer Nano instrument (Malvern Instruments Ltd., United Kingdom). UV-vis spectra were obtained with a UV-2700 (Shimadzu Corporation). The absorbance of the CCK-8 assay was measured at 450 nm using an ELx800 reader (BioTek Instruments, Inc., Winooski, VT). FACS data were acquired on a Becton-Dickinson flow cytometer (BD, Franklin Lakes, NJ). Confocal microscopy studies were performed on Zeiss LSM510 and LSM710 confocal fluorescence microscope systems (Carl Zeiss AG, Jena, Germany). Thermogravimetric analyses (TG) of the MNPs and nanogels were carried out on a thermal analyzer (NETZSCH STA 409 PC, Germany). Preparation of the SCNG nanoparticles The monodispersed Fe3O4 nanoparticles (MNPs) were synthesized according to the literature to produce a diameter of 80 nm39. Afterwards, the obtained MNPs (1 mL, 10 mg mL−1) were first resuspended in a mixture of 80 mL of ethanol and 80 mL of deionized water and sonicated for 30 min. Then, 6 mL of APTES was injected, and the mixture was stirred at 70 °C for 24 h under continuous N2 flow for amination. The amino-functionalized MNPs (MNPs-NH2) were obtained after washing three times with ethanol and deionized water. Subsequently, the MNPs-NH2 were further treated with 10% succinic anhydride in dimethylformamide (DMF) and stirred for 18 h to achieve carboxyl-functionalized nanoparticles (MNPs-COOH). The resultant mixture was purified by magnetic separation and washed with ethanol and deionized water for further use. The acquired MNPs-COOH were then activated by EDC (200 mg) and NHS (300 mg) in 20 mL of phosphate buffer solution at pH 5.8 for 2 h. After three washing steps, the nanoparticles were resuspended in 20 mL of AP solution (40 mg, 2 mg mL−1) to covalently attach the AP to the surface of the MNPs. After separation, the obtained MNPs@AP nanoparticles were further dispersed in 10 mL of precursor coating solution, composed of Fmoc-Tyr(H2PO3)-OH (0.5%) and Na2CO3 (0.2%), and mechanically stirred for 24 h at room temperature. The supramolecular NGs were then collected by magnetic separation after washing three times with deionized water.
Finally, 0.75 mL of a mixture of SOD (0.6 mg, 2000 U) and CPO (70 μL, 1200 U) in Tris·HCl buffer (1 M, pH 6.0) was added to the NGs solution (25 mg, 20 mg mL−1) and stirred at room temperature for another 24 h. The dual enzyme-loaded NGs (SCNGs) were acquired by magnetic separation. The supernatant was analysed to calculate the amount of residual enzyme.

Loading amount and activity test of SOD and CPO
The loading amount and activity of SOD within the SCNGs were determined based on the ability of SOD to inhibit pyrogallol autoxidation. The inhibition rate of pyrogallol autoxidation can be expressed as the following equation (1):
$$\mathrm{inhibition}\left( \% \right) = \left( A - B \right) \times 100/A,$$
where A and B represent the autoxidation rates of pyrogallol in the absence and presence of SOD, respectively. The autoxidation rate of pyrogallol can be calculated from the slope of the absorbance curve during the first minute of autoxidation. Therefore, the amount of residual SOD in the supernatant can be measured by comparing its inhibition rate to that of a standard quantity of free SOD. The amount of loaded SOD can then be quantified by subtracting the residual SOD in the supernatant from the initially added SOD. Typically, 10 μL of standard free SOD (0.1 mg mL−1) in PBS (pH 7.8, 50 mM) or 10 μL of residual SOD in supernatant (diluted 4 times) and 2.98 mL of Tris·HCl buffer (50 mM, pH 8.2, including 1 mM Na2EDTA) were first added to a quartz cuvette, followed by injection of 10 μL of pyrogallol (50 mM in 10 mM HCl). Immediately, kinetics measurements were performed by UV spectroscopy at the 325 nm absorbance peak with a 6 s interval. The absorbance change at 325 nm was recorded to calculate the autoxidation rate of pyrogallol, from which the amount of loaded SOD can be quantified. The activity of the immobilized SOD in the nanoparticles is defined as the ratio between the inhibition rates of the sample and free SOD:
$$\mathrm{Activity}_{\mathrm{SOD}}\left( \% \right) = I_{\mathrm{confined}} \times 100/I_{\mathrm{free}},$$
where Iconfined and Ifree represent the inhibition rates of pyrogallol autoxidation by the SOD-laden NGs (SCNGs) and free SOD, respectively. The experimental recipe was the same as above, and the amounts of SOD employed in the parallel experiments were equivalent. The absorbance change in the first minute was used to compare the activity of SOD in different states, such as free SOD and SCNGs. The loading amount of CPO was determined from the UV absorbance of the residual CPO in the supernatant at 403 nm. The activity of CPO was evaluated based on the ability of CPO to catalyze the conversion of monochlorodimedon (MCD) to dichlorodimedon (DCD) at pH 2.75 in the presence of potassium chloride (KCl) and H2O2. The activity assay was performed in 0.1 M phosphate buffer (pH 2.75) containing 20 mM KCl, 2 mM H2O2, 0.1 mM MCD and standard free CPO (2 U) or SCNGs (including 2 U CPO). The reaction progress was monitored by recording the absorbance changes at 278 nm. The decrement of MCD in the first minute was used to evaluate the activity of CPO. After quantification of the CPO in the SCNGs, activity comparison assays of CPO in different states (free CPO or SCNGs) were further conducted in the same way as above, with equivalent amounts of CPO in the different states. The reaction rate in the first minute was used to evaluate the activity of free CPO and SCNGs.
The specific activity of SCNGs was expressed as the reaction rate of SCNGs relative to free CPO:
$$\mathrm{Activity}_{\mathrm{CPO}}\left( \% \right) = R_{\mathrm{confined}} \times 100/R_{\mathrm{free}},$$
where Rconfined and Rfree represent the reaction rates of SCNGs and free CPO, respectively. The activities of the SOD and CPO in SCNGs in different pH environments (pH = 4.6, 6.0, 7.0) were further tested according to the above methods. Moreover, the storage activities of SOD and CPO in SCNGs were measured at different time intervals over 30 days. The cumulative release of the enzymes in PBS at 37 °C was determined by detecting the total amount of protein at different time points over 14 days using the BCA protein quantitation kit.

Detection of singlet oxygen in vitro
The EPR spectra were recorded on a Bruker EMX-8/2.7 spectrometer operating at 9.873 GHz (microwave power: 20 mW; modulation frequency: 100 kHz; modulation amplitude: 0.5 G; receiver gain: 4 × 105). A mixture of NaCl (100 mM), H2O2 (100 μM) and SCNGs or free CPO (including 8 U CPO) with the 1O2 trapper (TEMP) was rapidly transferred to a standard quartz capillary and placed into the EPR spectrometer. The EPR spectrum was then recorded every 4 min. Another assay was performed as follows: NaCl (100 mM), XO (0.26 U), X (10 mM) and free SOD (60 U), free CPO (40 U) or free SOD/CPO (including 60 U SOD and 40 U CPO) were rapidly mixed with the 1O2 trapper (TEMP) and immediately monitored by the EPR spectrometer. SOSG was used as the probe molecule to detect the 1O2 generation efficiency of the SCNGs. Typically, 10 μL of SOSG (50 μM) was added into a solution composed of NaCl (100 μL, 1 M), H2O2 (10 μL, 10 mM) and SCNGs or free SOD/CPO (including 10 U SOD and 8 U CPO) to form a 1 mL solution in the cuvette. The fluorescence intensity was then recorded on an F-7000 fluorescence spectrophotometer (Hitachi, Japan) at different time intervals. 1O2 detection was conducted by recording the fluorescence emission spectra of SOSG (Ex/Em = 504/525 nm) with the excitation wavelength fixed at 488 nm. HepG2 and HL-7702 cells were obtained from the Institute of Biochemistry and Cell Biology, Shanghai Institutes of Biological Sciences, Chinese Academy of Sciences (Shanghai, China). All cells tested negative for mycoplasma contamination. To study the production of 1O2 in cancer cells, the HepG2 cells were seeded in a confocal dish at a density of 5 × 104 per well. After incubation for 24 h, the medium was replaced with fresh culture medium. Then, PBS, NGs (300 µg mL−1) or SCNGs (300 µg mL−1) was injected into the well. After further incubation for 2 h, the cells were treated with the SOSG probe (5 µM). Subsequently, the fluorescence emission spectrum of oxidized SOSG (Ex/Em = 504/525 nm) was immediately recorded on a confocal fluorescence microscope (Carl Zeiss AG, Jena, Germany). Measurements were made over 140-second periods for 20 pictures, with 40-second intervals between measurements. Ten groups of 3D-CLSM pictures were captured automatically over 30 min (3 min per group). To fully show the living-cell fluorescence intensity, photomicrographs at 18 min in the YZ and XZ planes were captured.

In vitro cell cytotoxicity
For cell viability analysis, the HepG2 cells were seeded into 96-well plates at a density of 5000 cells per well for 24 h. Then, the cells were randomly treated with NGs or SCNGs at different concentrations from 1 μg mL−1 to 500 μg mL−1.
After further incubation for 24 h, a mixed solution consisting of CCK-8 (10 µL, Dojindo, Kumamoto, Japan) and fresh culture medium (100 µL) was added to each well and incubated for an additional 2 h at 37 °C and 5% CO2. Finally, the absorbance at 450 nm was measured with a microplate reader (BioTek Instruments, Inc., USA), and the IC50 value was calculated using SPSS Statistics 21.0. In addition, to further evaluate the safety of the SCNGs, a cytotoxicity test was also conducted by CCK8 assay with HL-7702 cells (a normal hepatic cell line). For cell apoptosis analysis, the HepG2 cells were first treated with NGs or SCNGs at the IC50 value for 24 h. Then, the cells were preincubated with 5 µL of annexin V (5 µL annexin V-FITC dissolved in 50 µL buffer, Bio Basic Inc., Markham, ON, Canada) in the dark at room temperature for 15 min, followed by the addition of 10 µL of PI (10 µL PI dissolved in 250 µL buffer, Sigma). After staining, the percentage of apoptotic cells was analysed by flow cytometry (BD, Franklin Lakes, NJ). The Q2 region represents late apoptotic cells, and the Q4 region represents early apoptotic cells. The typical gating used is shown in Supplementary Fig. 10.

Analysis of oxidative stress in cells
The HepG2 and HL-7702 cells were seeded at a density of 1 × 105 per well in 6-well plates and treated with PBS, NGs or SCNGs at the IC50 value for 24 h. Then, the cells were loaded with 5-chloromethyl-2,7-dichlorodihydrofluorescein diacetate acetyl ester (DCFH-DA, 25 µM, Sigma-Aldrich), and the fluorescence emission spectrum of carboxy-DCF (Ex/Em = 495/529 nm) was immediately captured on Zeiss LSM510 and LSM710 confocal fluorescence microscope systems (Carl Zeiss AG, Jena, Germany) at 0, 6, 12, 18 and 30 min. Quantitative analyses of ROS production were further performed by flow cytometric detection (BD, Franklin Lakes, NJ). The HepG2 cells treated with PBS, NGs or SCNGs (at the IC50 value) were incubated with 25 µM DCFH-DA (Sigma-Aldrich) for 18 min. Then, the reaction of the probe with ROS was stopped and the cells were analysed by flow cytometry with excitation at 488 nm and emission at 505–550 nm. The HepG2 cells were seeded at a density of 1 × 105 per well in 6-well plates. After incubation for 24 h, the cells were randomly treated with PBS, NGs or SCNGs (at the IC50 value) and further incubated for 24 h at 37 °C and 5% CO2. Then, the cells on the slides were fixed with 4% PFA at room temperature for 20 min and permeabilized with 0.5% Triton X-100 at 37 °C for 30 min, followed by incubation with anti-γH2AX primary antibody (Abcam, Anti-gamma H2A.X (phospho S139) antibody [9F3] (ab26350), dilution 1:200) at 37 °C for 30 min. Then, the cells were stained and sealed with ProLong Gold Antifade Reagent plus 4′,6-diamidino-2-phenylindole (Invitrogen). Finally, the preparations were washed with PBS and mounted in fluorescent mounting medium with DAPI (Invitrogen). Negative controls were processed in the same way but without the primary antibody. Slides were photographed under a fluorescence microscope (Carl Zeiss AG, Jena, Germany).

Analysis of DNA damage activity by comet assay
After treatment with PBS, NGs or SCNGs at the IC50 value for 24 h, HepG2 cells were collected for analysis of DNA damage activity by comet assay. The cell density was adjusted to 1000 cells, the cells were mixed with 0.5% low-melting-point agarose equilibrated to 37 °C, and the cell-agarose suspensions were spread onto comet slides embedded with 1% normal-melting agarose for incubation at 4 °C.
The prepared slides were lysed immediately in chilled neutral lysis buffer (0.1 M Na2EDTA·2H2O, 2.5 M NaCl, 1% Triton X-100, 10% DMSO, and 10 mM Tris, pH 8.0) at 4 °C in the dark for 4 h. To allow the cellular DNA to unwind, the slides were submerged in precooled neutral electrophoresis buffer (90 mM Tris buffer, 90 mM boric acid, 2 mM Na2EDTA·2H2O, pH 8.5) at 4 °C for 20 min. Next, the damaged DNA fragments were electrophoresed for 30 min at a constant voltage of 20 V cm−1. After the slides were neutralized with 0.4 M Tris·HCl (pH 7.5) and left to air dry, samples were stained with the non-toxic DNA dye SYBR Green I at room temperature in the dark for 20 min and then examined under an inverted fluorescence microscope (DMI6000B; Leica, Wetzlar, Germany). The HepG2 cells treated with PBS, NGs or SCNGs (at the IC50 value) were suspended in ice-cold PBS and fixed in 70% ethanol at −20 °C for 18 h, after which the cells were washed with PBS and stained for 15 min at 37 °C with 500 μL of 50 μg mL−1 PI (containing 50 μg mL−1 RNase) (BD Pharmingen), followed by flow cytometric analysis (BD, Franklin Lakes, NJ). The animal experiments were conducted in accordance with the guidelines of the National Institutes of Health of China for the care and use of laboratory animals. All animal work was approved by and conducted under the guidelines of the Animal Care and Use Committee of Shanghai Tenth People's Hospital, Tongji University.

HepG2 cell derived xenograft mouse model
Female nude mice (6–8 weeks) were purchased from Slaccas Co. Ltd. To establish the xenograft tumour model, the nude mice were subcutaneously injected with a suspension of 1 × 106 HepG2 cells in PBS (80 mL) on the right hind limbs. When the tumour sizes reached an average of about 80–100 mm3, the mice were randomly divided into 3 groups: (1) PBS (100 μL) by intravenous injection; (2) NGs (100 μL, 5 mg mL−1, at a dose of 20 mg kg−1 mouse) by intravenous injection; (3) SCNGs (100 μL, 5 mg mL−1, at a dose of 20 mg kg−1 mouse) by intravenous injection. The mice were treated by intravenous injection on days 0, 2, 4, 6 and 10. The body weight was recorded, and the tumour volume was measured with a calliper every two days and calculated according to the following equation (4):
$$\mathrm{tumour\ volume}\left( \mathrm{mm}^{3} \right) = \mathrm{length} \times \mathrm{width}^{2}/2.$$

Hepatocellular carcinoma (HCC) patient derived xenograft (PDX) mouse model
The HCC PDX model (No. 73) was ordered from Shanghai Biomodel Organism and established by transferring patient tumour fragments to male NOD-SCID mice (6–8 weeks). When the tumour sizes reached an average of about 80–100 mm3, the mice were randomly divided into 3 groups: (1) 6 mice with intravenous injection of PBS (100 μL); (2) 6 mice with intravenous injection of NGs (100 μL, 5 mg mL−1, at a dose of 20 mg kg−1 mouse); (3) 6 mice with intravenous injection of SCNGs (100 μL, 5 mg mL−1, at a dose of 20 mg kg−1 mouse). Three investigators independently participated in the in vivo experiments throughout the experimental period. Because this PDX model is subject to individual differences, the experimental end point50 was set at a tumour volume > 1000 mm3. The corresponding survival curves were obtained based on the percentage of mice with tumour volumes > 1000 mm3.
The plots of the corresponding tumour volumes and body weights are terminated when the first animal in the group reaches a tumour volume of 1000 mm3. Under the ethical approval conditions, the observation of the mice was prolonged to 31 days; once the tumour of one mouse was evaluated to exceed 10% of the animal's weight, all the experiments were terminated, completing this experimental period. The mice were treated by intravenous injection on days 1, 2, 4, 8, 12, 16, 20, 25 and 30 of the experimental duration. The body weight was recorded, and the tumour volume was measured with a calliper every two days and calculated according to the following equation (5):
$$\mathrm{tumour\ volume}\left( \mathrm{mm}^{3} \right) = \mathrm{length} \times \mathrm{width}^{2}/2.$$
For further histological analysis, Prussian blue staining and ICP analysis, the mice were sacrificed, and the tumour tissues and major organs, including the livers, lungs, hearts, spleens and kidneys, were collected. In addition, 6 mice were randomly chosen for intratumoural injection of PBS, NGs or SCNGs with SOSG or APF, respectively, to evaluate the 1O2 and HClO in the tumour tissues.

Histopathology analysis
The tumour tissues, livers, lungs, hearts, spleens and kidneys were excised and fixed in 10% neutral formaldehyde, conventionally paraffin embedded, sectioned, and placed on slides. For analysis, 4 μm sections from each sample were stained with haematoxylin and eosin (H&E, Sigma-Aldrich) for histopathological evaluation using a standard procedure. The 4 μm tumour sections from the intravenous injection groups were further immunohistochemically stained with TUNEL (terminal transferase UTP nick-end labelling) and KI-67 (nuclear-associated antigen KI-67, Ki-67 (D3B5) Rabbit mAb (Mouse Preferred; IHC Formulated) #12202, dilution 1:400) assays to analyse cell death and proliferation in the tumour tissue. For ROS detection, the tumours from the three groups of PDX model mice were cryosectioned at 4 μm thickness and then stained with DCFH-DA according to the instructions. For the detection of 1O2 and HOCl in vivo, the tumours from the PDX model mice intratumourally injected with PBS, NGs or SCNGs with SOSG or APF were collected and cryosectioned onto slides at a 4 μm thickness. Each tissue section was observed by light microscopy or fluorescence microscopy (Leica DMI6000). Sections from six randomly selected fields were evaluated by two separate pathologists in a blinded manner. All data in this manuscript are expressed as mean ± s.d. All results were obtained from at least three independent experiments. No samples or animals were excluded from the analysis. A two-tailed Student's t test was used to analyse the statistical significance between two groups. The statistical analysis was performed using GraphPad Prism 7.0 (GraphPad Software Inc.). Asterisks indicate significant differences (*P < 0.05, **P < 0.01). A reporting summary for this Article is available as a Supplementary Information file. Additional data related to this paper may be requested from the corresponding authors (Wang Q. (wangqg66@tongji.edu.cn), Li J. (lijiyu@tongji.edu.cn) or Wang X. (15174@tongji.edu.cn)). Shi, J., Kantoff, P. W., Wooster, R. & Farokhzad, O. C. Cancer nanomedicine: progress, challenges and opportunities. Nat. Rev. Cancer 17, 20–37 (2017). Albini, A. & Sporn, M. B.
The tumour microenvironment as a target for chemoprevention. Nat. Rev. Cancer 7, 139–147 (2007). Torchilin, V. P. Multifunctional, stimuli-sensitive nanoparticulate systems for drug delivery. Nat. Rev. Drug Discov. 13, 813–827 (2014). Lu, Y., Aimetti, A. A., Langer, R. & Gu, Z. Bioresponsive materials. Nat. Rev. Mater. 1, 16075 (2016). Bogdan, C., Röllinghoff, M. & Diefenbach, A. Reactive oxygen and reactive nitrogen intermediates in innate and specific immunity. Curr. Opin. Immunol. 12, 64–76 (2000). Grivennikov, S. I., Greten, F. R. & Karin, M. Immunity, inflammation, and cancer. Cell 140, 883–899 (2010). Iwasaki, A. & Medzhitov, R. Regulation of adaptive immunity by the innate immune system. Science 327, 291–295 (2010). Faurschou, M. & Borregaard, N. Neutrophil granules and secretory vesicles in inflammation. Microbes Infect. 5, 1317–1327 (2003). Borregaard, N. et al. Human neutrophil granules and secretory vesicles. Eur. J. Haematol. 51, 187–198 (1993). Nathan, C. Neutrophils and immunity: challenges and opportunities. Nat. Rev. Immunol. 6, 173–182 (2006). Borregaard, N. & Cowland, J. B. Granules of the human neutrophilic polymorphonuclear leukocyte. Blood 89, 3503–3521 (1997). Segal, A. W. How neutrophils kill microbes. Annu. Rev. Immunol. 23, 197–223 (2005). Ward, P. A., Warren, J. S. & Johnson, K. J. Oxygen radicals, inflammation, and tissue injury. Free Radic. Biol. Med. 5, 403–408 (1988). Clark, R. A. & Klebanoff, S. J. Myeloperoxidase-H2O2-halide system: cytotoxic effect on human blood leukocytes. Blood 50, 65–70 (1977). Klebanoff, S. J. Myeloperoxidase. Proc. Assoc. Am. Physicians 111, 383–389 (1999). Colonna, S., Gaggero, N., Richelmi, C. & Pasta, P. Recent biotechnological developments in the use of peroxidases. Trends Biotechnol. 17, 163–168 (1999). van Rantwijk, F. & Sheldon, R. A. Selective oxygen transfer catalysed by heme peroxidases: synthetic and mechanistic aspects. Curr. Opin. Biotechnol. 11, 554–564 (2000). Allain, E. J., Hager, L. P., Li, D. & Jacobsen, E. N. Highly enantioselective epoxidation of disubstituted alkenes with hydrogen peroxide catalyzed by chloroperoxidase. J. Am. Chem. Soc. 115, 4415–4416 (1993). Kanofsky, J. R. Singlet oxygen production by chloroperoxidase-hydrogen peroxide-halide systems. J. Biol. Chem. 259, 5596–5600 (1984). Schumacker, P. T. Reactive oxygen species in cancer cells: live by the sword, die by the sword. Cancer Cell 10, 175–176 (2006). Trachootham, D., Alexandre, J. & Huang, P. Targeting cancer cells by ROS-mediated mechanisms: a radical therapeutic approach? Nat. Rev. Drug Discov. 8, 579–591 (2009). Winterbourn, C. C. Reconciling the chemistry and biology of reactive oxygen species. Nat. Chem. Biol. 4, 278–286 (2008). Zhou, Z., Song, J., Nie, L. & Chen, X. Reactive oxygen species generating systems meeting challenges of photodynamic cancer therapy. Chem. Soc. Rev. 45, 6597–6626 (2016). Cheng, Y. et al. Perfluorocarbon nanoparticles enhance reactive oxygen levels and tumour growth inhibition in photodynamic therapy. Nat. Commun. 6, 8785 (2015). Liu, Y. et al. Hypoxia induced by upconversion-based photodynamic therapy: towards highly effective synergistic bioreductive therapy in tumours. Angew. Chem. Int. Ed. 54, 8105–8109 (2015). Deepagan, V. G. et al. Long-circulating Au-TiO2 nanocomposite as a sonosensitizer for ROS-mediated eradication of cancer. Nano Lett. 16, 6257–6264 (2016). Huang, P. et al.
Metalloporphyrin-encapsulated biodegradable nanosystems for highly efficient magnetic resonance imaging-guided sonodynamic cancer therapy. J. Am. Chem. Soc. 139, 1275–1284 (2017). Dai, Y. et al. Hypochlorous acid promoted platinum drug chemotherapy by myeloperoxidase-encapsulated therapeutic metal phenolic nanoparticles. ACS Nano 12, 455–463 (2018). Huo, M. F., Wang, L. Y., Chen, Y. & Shi, J. L. Tumour-selective catalytic nanomedicine by nanocatalyst delivery. Nat. Commun. 8, 357 (2017). Liu, Y. et al. Biomimetic enzyme nanocomplexes and their use as antidotes and preventive measures for alcohol intoxication. Nat. Nanotechnol. 8, 187–192 (2013). Nochi, T. et al. Nanogel antigenic protein-delivery system for adjuvant-free intranasal vaccines. Nat. Mater. 9, 572–578 (2010). Kudina, O. et al. Highly efficient phase boundary biocatalysis with enzymogel nanoparticles. Angew. Chem. Int. Ed. 53, 483–487 (2014). Xia, L.-W. et al. Nano-structured smart hydrogels with rapid response and high elasticity. Nat. Commun. 4, 2226 (2013). Wang, Q. et al. High-water-content mouldable hydrogels by mixing clay and a dendritic molecular binder. Nature 463, 339–343 (2010). Lutolf, M. P. & Hubbell, J. A. Synthetic biomaterials as instructive extracellular microenvironments for morphogenesis in tissue engineering. Nat. Biotechnol. 23, 47–55 (2005). Du, X., Zhou, J., Shi, J. & Xu, B. Supramolecular hydrogelators and hydrogels: from soft matter to molecular biomaterials. Chem. Rev. 115, 13165–13307 (2015). Li, J. et al. Enzyme-instructed intracellular molecular self-assembly to boost activity of cisplatin against drug-resistant ovarian cancer cells. Angew. Chem. Int. Ed. 54, 13307–13311 (2015). Ren, C., Zhang, J., Chen, M. & Yang, Z. Self-assembling small molecules for the detection of important analytes. Chem. Soc. Rev. 43, 7257–7266 (2014). Xuan, S., Wang, F., Wang, Y.-X. J., Yu, J. C. & Leung, K. C.-F. Facile synthesis of size-controllable monodispersed ferrite nanospheres. J. Mater. Chem. 20, 5086–5094 (2010). Schweitzer-Stenner, R. Advances in vibrational spectroscopy as a sensitive probe of peptide and protein structure: a critical review. Vib. Spectrosc. 42, 98–117 (2006). David, C. et al. Raman and IR spectroscopy of manganese superoxide dismutase, a pathology biomarker. Vib. Spectrosc. 62, 50–58 (2012). Aburto, J. et al. Stability and catalytic properties of chloroperoxidase immobilized on SBA-16 mesoporous materials. Microporous Mesoporous Mater. 83, 193–200 (2005). Gao, R., Yuan, Z., Zhao, Z. & Gao, X. Mechanism of pyrogallol autoxidation and determination of superoxide dismutase enzyme activity. Bioelectrochem. Bioenerg. 45, 41–45 (1998). Hager, L. P., Morris, D. R., Brown, F. S. & Eberwein, H. Chloroperoxidase. II. Utilization of halogen anions. J. Biol. Chem. 241, 1769–1777 (1966). Wang, Q. G. et al. A supramolecular-hydrogel-encapsulated hemin as an artificial enzyme to mimic peroxidase. Angew. Chem. Int. Ed. 46, 4285–4289 (2007). Zang, L.-Y., Zhang, Z. & Misra, H. P. EPR studies of trapped singlet oxygen (1O2) generated during photoirradiation of hypocrellin A. Photochem. Photobiol. 52, 677–683 (1990). Fridovich, I. Quantitative aspects of the production of superoxide anion radical by milk xanthine oxidase. J. Biol. Chem. 245, 4053–4057 (1970). Gollmer, A. et al. Singlet Oxygen Sensor Green®: photochemical behavior in solution and in a mammalian cell. Photochem. Photobiol. 87, 671–679 (2011). Cairns, R. A., Harris, I. S. & Mak, T. W. Regulation of cancer cell metabolism. Nat. Rev. Cancer 11, 85–95 (2011).
National Research Council. Guide for the Care and Use of Laboratory Animals Ch. 2 (National Academies Press, Washington, D.C., 2010). This study was supported by grants from the National Natural Science Foundation of China (No. 51773155, 51473123, 51873156, 81470897), the National Key Research and Development Program (No. 2016YFA0100800), the Shanghai Chenguang Project (No. 15CG16) and a talent project from the Education Ministry (NECT-13-0422). We thank Professor Yu Chen from Tongji University School of Medicine and Tianyu Yu from Shanghai Tenth People's Hospital, Tongji University for their kind support in the animal experiments. These authors contributed equally: Qing Wu, Zhigang He. School of Chemical Science and Engineering, Shanghai Tenth People's Hospital & Putuo District People's Hospital, Tongji University, Siping Road 1239, Shanghai, 200092, China: Qing Wu, Zhigang He, Xia Wang, Qi Zhang, Qingcong Wei, Sunqiang Ma, Cheng Ma, Jiyu Li & Qigang Wang. State Key Laboratory of Transducer Technology, Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, 865 Changning Road, Shanghai, 200050, China. Q.Wu, Z.H., X.W., J.L. and Q.Wang proposed the concept and designed the experiments. Q.Wu, Z.H., Q.Z. and Q.Wei performed the preparation, characterization and data analysis. Q.Wu and Q.Z. performed the preparation and characterization of the nanoparticles and the detection of singlet oxygen. Z.H., S.M. and C.M. performed the cell experiments. Z.H., Q.Wu and X.W. performed the animal and tissue experiments and data analysis. All authors contributed to analysing the experimental data and discussing the results. Q.Wu, Z.H., X.W., J.L. and Q.Wang co-wrote the paper. All authors edited the manuscript and approved the final version. Correspondence to Xia Wang, Jiyu Li or Qigang Wang. Journal peer review information: Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Wu, Q., He, Z., Wang, X. et al. Cascade enzymes within self-assembled hybrid nanogel mimicked neutrophil lysosomes for singlet oxygen elevated cancer therapy. Nat Commun 10, 240 (2019). https://doi.org/10.1038/s41467-018-08234-2
Can someone please explain to me what the particular scenarios mean?

"The set of points in $\mathbb{R}^2$ classified ORANGE corresponds to {$x:x^Tβ>0.5$}, indicated in Figure 2.1, and the two predicted classes are separated by the decision boundary {$x:x^Tβ=0.5$}, which is linear in this case. We see that for these data there are several misclassifications on both sides of the decision boundary. Perhaps our linear model is too rigid—or are such errors unavoidable? Remember that these are errors on the training data itself, and we have not said where the constructed data came from. Consider the two possible scenarios: Scenario 1: The training data in each class were generated from bivariate Gaussian distributions with uncorrelated components and different means. Scenario 2: The training data in each class came from a mixture of 10 low-variance Gaussian distributions, with individual means themselves distributed as Gaussian. A mixture of Gaussians is best described in terms of the generative model. One first generates a discrete variable that determines which of the component Gaussians to use, and then generates an observation from the chosen density. In the case of one Gaussian per class, we will see in Chapter 4 that a linear decision boundary is the best one can do, and that our estimate is almost optimal. The region of overlap is inevitable, and future data to be predicted will be plagued by this overlap as well. In the case of mixtures of tightly clustered Gaussians the story is different. A linear decision boundary is unlikely to be optimal, and in fact is not. The optimal decision boundary is nonlinear and disjoint, and as such will be much more difficult to obtain."

Can someone please explain to me what the particular scenarios mean? It's from The Elements of Statistical Learning by Hastie, Tibshirani and Friedman.

machine-learning normal-distribution econometrics

$\begingroup$ Welcome to our site! To help us assist you, please try to be more specific about what information you need. To many readers, this description will be perfectly clear and requires no further explanation. What term(s) do you need help with? What specific sense of "mean" are you seeking--an intuitive qualitative explanation, a mathematical formula, working computer code, a concrete example, or something else? $\endgroup$ – whuber♦ Jan 4 '14 at 20:00
$\begingroup$ I would really appreciate it if someone could describe the two scenarios in a different way. I am not sure what their motivation is, or what exactly the two scenarios are (I'd appreciate some clarification)... Basically, if someone could translate what is written into something that is more easily understood. $\endgroup$ – Christian Jan 5 '14 at 4:26
$\begingroup$ I like to read the textbook chapters in sequence, and I'm stuck at this point and can't really understand much that comes directly after! $\endgroup$ – Christian Jan 8 '14 at 5:12
In scenario 1, there are two bivariate Normal distributions. Here I show two such probability density functions (PDFs) superimposed in a pseudo-3D plot. One has a mean near $(0,0)$ (at the left) and the other has a mean near $(3,3)$. Samples are drawn independently from each. I took the same number ($300$) so that we wouldn't have to compensate for different sample sizes in evaluating these data. Point symbols distinguish the two samples. The gray/white background is the best discriminator: points in gray are more likely to arise from the second distribution than the first. (The discriminator is elliptical, not linear, because these distributions have slightly different covariance matrices.)

In scenario 2 we will look at two comparable datasets produced using mixture distributions. There are two mixtures. Each one is determined by ten distinct Normal distributions. They all have different covariance matrices (which I do not show) and different means. Here are the locations of their means (which I have termed "nuclei"):

A mixture of Gaussians is best described in terms of the generative model. One first generates a discrete variable that determines which of the component Gaussians to use, and then generates an observation from the chosen density.

To draw a set of independent observations from a mixture, you first pick one of its components at random and then draw a value from that component. The PDF of a mixture is a weighted sum of the PDFs of the components, with the weights being the chance of selecting each component in that first stage. Here are the PDFs of the two mixtures. I drew them with a little extra transparency so you can see them better in the middle where they overlap:

To make the two scenarios easier to compare, the means and covariance matrices of these two PDFs were chosen to closely match the corresponding means and covariances of the two bivariate Normal PDFs used in scenario 1. To emulate scenario 2 (the mixture distributions), I drew samples of 300 independent values from each of the two distributions by selecting each of their components with a probability of $1/10$ and then independently drawing a value from the selected component. Because the selection of components is random, the number of draws from each component was not always exactly $30 = 300 \times 1/10$, but it was usually close to that. Here is the result:

The black dots show the ten component means for each of the two distributions. Clustered around each black dot are approximately 30 samples. However, there is much intermingling of values, so it is impossible from this figure to determine which samples were drawn from which component.

"In the case of mixtures of tightly clustered Gaussians the story is different. A linear decision boundary is unlikely to be optimal, and in fact is not. The optimal decision boundary is nonlinear and disjoint, and as such will be much more difficult to obtain."

The background in that last figure is the best discriminator for these two mixture distributions. It is complicated because the distributions are complicated; obviously it is not just a line or smooth curve, such as appeared in scenario 1. I believe the entire point of this comparison lies in our option, as analysts, to choose which model we want to use to analyze either one of these two datasets. Because we would not in practice know which model is appropriate, we could try using a mixture model for the data in scenario 1 and we could equally well try using a Normal model for the data in scenario 2. We would likely be fairly successful in either case due to the relatively low overlap (between blue and red sample points). Nevertheless, the different (equally valid) models can produce distinctly different discriminators (especially in areas where data are sparse). – whuber♦

$\begingroup$ Shouldn't fig. 5 represent scenario 2? In the description scenario 1 is mentioned (first line in the paragraph just above the 5th figure) $\endgroup$ – Siddhesh Mar 2 '16 at 8:21
$\begingroup$ @Siddhesh Thank you for noticing that: it was a typographical error. I will fix it.
$\endgroup$ – whuber♦ Mar 2 '16 at 15:15
$\begingroup$ In scenario 1, when you state "The gray/white background is the best discriminator", what do you mean by "the best discriminator"? Is it the best among all ellipsoid boundaries, or the best with respect to some specific error function? $\endgroup$ – Gabriel Romon Jun 12 '17 at 19:45
$\begingroup$ @LeGrand I'm agnostic concerning the error function (as I must be, since this thread doesn't address any specific problem): these images illustrate whatever the authors were referring to by "we will see in Chapter 4 ... best one can do." Here we are unconcerned with the error (or loss) function and seek only to understand the distinction between the two scenarios, regardless of the loss. $\endgroup$ – whuber♦ Jun 12 '17 at 20:48
$\begingroup$ @whuber how do you encode your output variable y in your script? Here is my full question stats.stackexchange.com/questions/403810/… $\endgroup$ – kiriloff Apr 19 '19 at 19:32

The point being made in section 2.3 of the book (where this quote comes from) is that if the source of the data is from Scenario 1, there is nothing better you can do than a linear division (as in figure 2.1). Any finer tuning is actually self-delusion: you should then expect to get worse results predicting cases outside the training data if you do not use the optimal linear division. However, if the source of the data is from Scenario 2, you can reasonably expect the low variance of each of the $10$ source distributions to make it more likely that data points of the same colour will tend to cluster together in a non-linear manner, and so a non-linear approach may be more skilful. The example the book gives is that of looking at the colours of nearest neighbours: figure 2.2 shows the classification boundary if you look at the 15 nearest neighbours (a fairly smooth non-linear boundary) while figure 2.3 looks at the boundary if you look at just 1 nearest neighbour (a very jagged boundary). I suspect that the point being made is that the value of statistical or machine learning techniques depends on the source of the data, and that some techniques are better in some circumstances, and others in others. But it is also possible to generalise ideas from different methods and come up with further techniques, as section 2.4 and figure 2.5 do with what the book calls the "Bayes classifier". – Henry

whuber made an extraordinarily good statement. Here I just want to add some more details. Scenario 1 is talking about linear discriminant analysis (LDA), where the decision boundary is linear, and whuber is describing the more general quadratic discriminant analysis (QDA), where the decision boundary is a quadratic function. Of course, LDA is a special case of QDA. A linear decision boundary is optimal in scenario 1 because solving the classification problem using maximum likelihood estimation gives a linear solution. At the same time, even though LDA looks very different from linear regression, the decision boundaries given by these two methods are very similar. Intuitively, if we think of these two decision boundaries as two straight lines, these two lines will have the same slope but different intercepts. For more mathematical details, I would recommend this blog, which gives a great and detailed explanation. – Vincent
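To make the two-stage generative procedure described in the answers above concrete, here is a minimal NumPy sketch of Scenario 2 sampling. The component means and the 1/5 cluster variance follow the simulation setup the book describes; the variable names and random seed are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mixture(nuclei, cov, n, rng):
    """Two-stage draw from an equal-weight Gaussian mixture:
    first pick a component at random, then draw from that Gaussian."""
    picks = rng.integers(len(nuclei), size=n)  # stage 1: discrete component choice
    return np.array([rng.multivariate_normal(nuclei[k], cov) for k in picks])  # stage 2

# Ten component means ("nuclei") per class, themselves Gaussian-distributed.
nuclei_blue = rng.multivariate_normal([1, 0], np.eye(2), size=10)
nuclei_orange = rng.multivariate_normal([0, 1], np.eye(2), size=10)
cov = np.eye(2) / 5  # tightly clustered, low-variance components

blue = sample_mixture(nuclei_blue, cov, 300, rng)
orange = sample_mixture(nuclei_orange, cov, 300, rng)
```

Because the component is re-drawn for each observation, the per-component counts fluctuate around 30, exactly as noted in the accepted answer.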
Analysis of regional economic development based on land use and land cover change information derived from Landsat imagery

Chao Chen1, Xinyue He1, Zhisong Liu2, Weiwei Sun3, Heng Dong4 & Yanli Chu5

Environmental social sciences

The monitoring of economic activities is of great significance for understanding regional economic development level and policymaking. As the carrier of economic activities, land resource is an indispensable production factor of economic development, and economic growth leads to increased demand for land as well as changes in land utilization form. As an important means of Earth observation, remote-sensing technology can obtain the information of land use and land cover change (LUCC) related to economic activities. This study proposes a method for analysing regional economic situations based on remote-sensing technology, in which LUCC information extraction, sensitivity factor selection, model construction and accuracy evaluation were implemented. The approach was validated with experiments in Zhoushan City, China. The results show that the economic statistical index is most sensitive to the construction land area, that the average correlation coefficient between the actual data and the predicted data is 0.949, and that the average mean relative error is 14.21%. Therefore, this paper suggests that LUCC could be utilised as an explanatory indicator for estimating economic development at the regional level, and the potential applications of remotely-sensed image in economic activity monitoring are worth pursuing.

The monitoring of economic activities is of great significance for the understanding of economic situations and the support of policymaking concerned with sustainable development and management1. Considering that economic activities are cumulatively changing the surface of the Earth, data that reveal the Earth's surface changes can enable frequent and large-scale observation of economic activity, which could substantially improve understanding of the actual economic situation and the prediction of its trend. Traditional data collection methods such as mapping and ground surveying are time-consuming and costly2.
Additionally, the information is not updated frequently and is difficult to access3,4. A powerful data source for economic activity monitoring research is remotely-sensed image, which provides an up-to-date and realistic presentation of the Earth's surface. Remote-sensing technology is an effective means of observing surface changes on the Earth due to its fast and wide-range imaging capability5,6,7,8,9,10. Since the 1970s, terrestrial Earth observation data have been continuously collected in various spectral, spatial and temporal resolutions11,12. In recent decades, the accessibility, quality and scope of these data have been continuously improving, making them a fundamental information source in the study of pattern change and visualization of the Earth's surface as well as important data in the research of human activity monitoring13,14. Having the capability to detect low levels of visible and near-infrared (VNIR) radiance at night, the Defense Meteorological Satellite Program-Operational Linescan System (DMSP-OLS) night-time light (NTL) data provided a new scope for measuring human economic activities15,16,17,18,19,20. These NTL data are free and feature a wide spatial coverage from − 180° to 180° longitude and − 65° to 75° latitude, thus greatly enhancing NTL application research21. As an objective reflection of human activities, NTL data provide a cost-effective and spatially consistent means for monitoring economic activities22,23,24,25. The initial purpose of the DMSP/OLS, however, was to observe clouds illuminated by moonlight, and the night-time light imagery was a by-product of the data under cloud-free conditions26,27,28,29,30. Consequently, there remain some limitations when using NTL data: (1) underestimation of economic activities that emit little or no additional night-time light, which leads to potentially serious measurement errors in gross domestic product (GDP) growth; particularly in developing and emerging economies, growth is more likely to be underestimated31,32,33. (2) As the most important NTL data, DMSP-OLS data are provided by multiple DMSP satellites, and the fact that NTL data from different satellites in different years cannot be directly compared, owing to the lack of onboard calibration, is likely to be the main obstacle to time-series analysis34,35,36,37. (3) The DMSP-OLS NTL data provided by the NGDC have a geographic grid resolution of 30 arc seconds and a grid cell size of approximately 0.86 km2 at the equator16; therefore, the probability of multiple ground objects belonging to the same pixel is large, which affects the accuracy of economic activity evaluation38. Generally, satellite remote-sensing missions are originally designed to monitor the physical environment of the Earth, for which the mapping of LUCC information is one of the most important applications39,40. LUCC information is usually associated with and mainly driven by socioeconomic factors and is also a direct reflection of economic activities41,42,43. It is well documented that the relationship between economic growth and LUCC information is not a one-way effect, but rather a complex relationship of interactions44,45. On the one hand, economic activities have profoundly changed the surface morphology of the Earth. Moreover, with economic development and population increase, land use changes have accelerated sharply, and land cover pattern changes have become increasingly significant.
On the other hand, the variation process of land use and land cover has significant impacts on the economy. As the foundation for economic activities, land is an indispensable production factor for economic development, and the input of land resources plays an important role in promoting economic growth34,46,47. Thus, in the coordinated process of LUCC and economic development, changes in economic activity intensity can be reflected through LUCC information. Remotely-sensed image can intuitively and comprehensively reflect the dynamics of land use and land cover, and the types, quantities and locations of LUCC information can be obtained via classification technology from remotely-sensed image. Normal multispectral optical satellite data with various spatial, temporal and spectral resolutions have been extensively applied in investigations of LUCC information and its socioeconomic driving mechanisms44,48. However, economic indicators for assessing economic development have rarely been connected to LUCC data for time-series estimation. The LUCC information derived from remotely-sensed images such as those from Landsat and MODIS is spatial and temporal, and this information does not require further postprocessing for comparison with NTL data49,50. Moreover, the launch of satellites such as QuickBird, IKONOS, GeoEye, WorldView, SPOT 6/7 and GF-1/2, with higher spatial resolution, provides opportunities for the global production of LUCC at the 10-m or even metre scale51,52,53,54. This study proposes a method to analyse regional economic situations based on the LUCC dynamics derived from remotely-sensed image. The main steps are as follows. First, multi-temporal remotely-sensed image is used to obtain the LUCC information (types and their areas) over a long period of time in the study area. Second, correlation analysis is applied to select the optimal indicators of economic situations (described by various economic statistical indices) from the LUCC indicators. Then, regression analysis is applied in order to model the socioeconomic indicators. Finally, the method accuracy and model applicability are evaluated. The objectives of this study were to quantify the relationship between LUCC area and several socioeconomic statistics over time and to test the capability of LUCC to estimate regional economic development. This study will improve understanding of regional economic development and help assess the data accuracy of social survey activities. The interrelationship between LUCC information and economic development is the basis of the proposed method, which attempts to reflect economic development using remotely-sensed images. The overall workflow (Fig. 1) was divided into 4 steps: (1) LUCC information extraction (in "LUCC information extraction" section), (2) sensitivity factor selection (in "Sensitivity factor selection" section), (3) model construction (in "Model construction" section) and (4) accuracy evaluation (in "Accuracy evaluation" section).

Flow chart of the proposed method.

The software packages used for this study were ENVI (Environment for Visualizing Images) for image processing, ArcGIS and MATLAB for analysing and presenting the results, and SPSS (Statistical Product and Service Solutions) for statistical analysis.

LUCC information extraction
The LUCC information is obtained from remotely-sensed time-series data.
First, pre-processing was performed on the remotely-sensed image, including radiometric calibration, atmospheric correction and image cropping. The digital numbers in the raw data were converted to top-of-atmosphere (TOA) reflectance by physical means via calibration parameters provided by the Calibration Parameter Files (CPF), and the influence of atmospheric scattering and absorption was reduced using the fast line-of-sight atmospheric analysis of spectral hypercubes (FLAASH) module in ENVI. Then, based on features such as the spectral and spatial resolutions of the image, training samples were selected and the maximum likelihood classification (MLC) algorithm in supervised classification was carried out in order to obtain the land use types and their associated areal coverage within the study area. Considering the visual separability of different ground objects, training samples were selected for six classes (construction land, water, bare land, forest, tidal flat and cropland), and the training samples were divided into two parts: two-thirds for classification and one-third for accuracy assessment. The reliability of the classification was assessed using the overall accuracy and the Kappa coefficient; the overall accuracy is a measure of how well the classified pixels match the ground truth data, while the Kappa coefficient measures how well the classification in question compares to a chance arrangement of pixels among the land cover classes. Finally, linear interpolation was performed by Eq. (1) on data with missing years, since remotely-sensed image may not cover all years.
$$Data_{INI + i} = Data_{INI} + \frac{Data_{TER} - Data_{INI}}{TER - INI} \times i$$
where DataINI+i is the data of the ith year after the initial year INI, DataINI and DataTER are the data of the initial year INI and the termination year TER, respectively, and the data of the years between the initial year INI and the termination year TER are missing. In this study, the overall accuracy and Kappa coefficient were computed by Eq. (2) using the confusion matrix, which is a square array of numbers set out in rows and columns that expresses the number of sample units (i.e., pixels, clusters of pixels, or polygons) assigned to a particular category relative to the actual category as verified on the ground55,56,57.
$$\left\{ \begin{array}{l} OvAc = \frac{1}{N}\sum_{i = 1}^{r} x_{ii} \\ k_{hat} = \frac{N\sum_{i = 1}^{r} x_{ii} - \sum_{i = 1}^{r} \left( x_{i + } x_{ + i} \right)}{N^{2} - \sum_{i = 1}^{r} \left( x_{i + } x_{ + i} \right)} \end{array} \right.$$
where OvAc and khat are the overall accuracy and the Kappa coefficient, respectively, r is the number of rows in the matrix (the total number of categories), xii is the number of observations in row i and column i (the total number of correctly classified pixels in the training samples used for accuracy assessment), xi+ and x+i are the marginal totals of row i and column i, respectively, and N is the total number of observations (the total number of pixels in the training samples used for accuracy assessment).
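As a minimal illustration of Eq. (2), the following NumPy sketch computes the overall accuracy and Kappa coefficient from a confusion matrix. The matrix values are hypothetical, and the row/column convention (rows: classified class, columns: reference class) is an assumption consistent with the definitions above.

```python
import numpy as np

def accuracy_metrics(confusion):
    """Overall accuracy and Kappa coefficient per Eq. (2).
    confusion: square matrix; rows = classified class, columns = reference class."""
    cm = np.asarray(confusion, dtype=float)
    N = cm.sum()                                      # total assessed pixels
    correct = np.trace(cm)                            # sum of diagonal terms x_ii
    ov_ac = correct / N
    chance = (cm.sum(axis=1) * cm.sum(axis=0)).sum()  # sum of x_i+ * x_+i
    k_hat = (N * correct - chance) / (N ** 2 - chance)
    return ov_ac, k_hat

# Hypothetical 3-class confusion matrix
print(accuracy_metrics([[50, 2, 3],
                        [4, 45, 1],
                        [2, 3, 40]]))
```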
Sensitivity factor selection
The interaction mechanism between economic growth and LUCC information is complex, and eleven economic indices were selected to describe economic status: gross domestic product (GDP), value-added of primary industry (VPI), value-added of secondary industry (VSI), value-added of tertiary industry (VTI), per capita GDP (PGDP), fixed assets investment (FAI), total tourist income (TTI), gross industrial output value (GIOV), gross agricultural output value (GAOV), gross planting output value (GPOV) and gross forestry output value (GFOV). Therefore, the land category that is most relevant to each economic statistical index must be selected as the sensitive factor to construct the model for estimating socioeconomic situations. The correlation coefficients between the economic statistical indices and the land use types were computed by Eq. (3) using correlation analysis. For each economic statistical index, the most relevant land use type was selected as the sensitivity factor, i.e., the explanatory variable in the model.
$$r_{LUCCm - ESIn} = \frac{\sum_{i = 1}^{N} \left( x_{LUCCmi} - \overline{x_{LUCCm}} \right)\left( x_{ESIni} - \overline{x_{ESIn}} \right)}{\sqrt{\sum_{i = 1}^{N} \left( x_{LUCCmi} - \overline{x_{LUCCm}} \right)^{2}} \sqrt{\sum_{i = 1}^{N} \left( x_{ESIni} - \overline{x_{ESIn}} \right)^{2}}}$$
where \(r_{LUCCm - ESIn}\) is the correlation coefficient between land use type m (one of construction land, water, bare land, forest, tidal flat and cropland) and economic statistical index n (one of GDP, VPI, VSI, VTI, PGDP, FAI, TTI, GIOV, GAOV, GPOV and GFOV), \(x_{LUCCmi}\) and \(x_{ESIni}\) are the ith observations of land use type m and economic statistical index n, respectively, \(\overline{x_{LUCCm}}\) and \(\overline{x_{ESIn}}\) are the averages of land use type m and economic statistical index n, respectively, and N is the total number of observations of land use type m (equal to that of economic statistical index n).

Model construction
Regression analysis was applied when using the LUCC information to model the economic statistical indices. In order to eliminate heteroscedasticity and clarify the relationship between the LUCC information and the economic statistical indices more accurately, a logarithmic transformation (base 10) was performed to change the range and scale of the data. We attempted to construct a single-factor quantitative model in which each economic statistical index is a dependent variable and the area of each land use type is an independent variable. The model is described by Eq. (4):
$$I_{economic} = f\left( L_{landuse} \right)$$
where \(I_{economic}\) is the value of a given economic statistical index, \(L_{landuse}\) is the area of the land use type that is selected as the sensitivity factor for this economic statistical index, and f is the quantitative model. In this study, by inspecting scatter plots, a series of comparative statistical regression analyses were conducted, including linear, quadratic-term, power and exponential models. Four types of simple quantitative models were constructed, as shown in Table 1 (a code sketch of the selection and fitting steps follows below).

Table 1 The type and representation of the model.

Accuracy evaluation
Accuracy evaluation was performed to validate the model. For the model validation data, the independent variable was inserted into the regression model to obtain the estimated value, which was then compared with the actual value.
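The following NumPy sketch illustrates, under stated assumptions, the selection and fitting steps just described: Pearson correlation (Eq. 3) for sensitivity factor selection, then a fit of one of the Table 1 forms (the power model, which is a straight line in log10-log10 space). All numeric values and names are hypothetical, chosen only to show the mechanics.

```python
import numpy as np

# Hypothetical annual series: construction land area (km^2) and GDP (10^5 CNY)
area = np.array([52.1, 55.3, 58.0, 61.2, 64.8, 68.5, 72.0, 76.4, 81.1, 86.3, 92.0])
gdp = np.array([8.1, 9.0, 10.2, 11.8, 13.9, 16.4, 19.5, 23.4, 28.3, 34.5, 42.1])

# Sensitivity factor selection (Eq. 3): the land use type with the largest
# correlation against the index would be kept as the explanatory variable.
r = np.corrcoef(area, gdp)[0, 1]

# Model construction on log10-transformed data (reduces heteroscedasticity);
# a line in log-log space corresponds to the power model y = a * x^b in Table 1.
slope, intercept = np.polyfit(np.log10(area), np.log10(gdp), 1)
gdp_hat = 10 ** (intercept + slope * np.log10(area))

# Mean relative error of the fit (see the MRE formula defined next)
mre = np.mean(np.abs((gdp_hat - gdp) / gdp))
print(r, slope, intercept, mre)
```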
The relative error (RE) is the ratio of the absolute error to the actual value, which reflects the deviation of the model prediction from the actual value. The mean relative error (MRE) was used to evaluate the overall accuracy of the models. The formula is as follows:
$$MRE = \frac{1}{n}\sum_{i = 1}^{n} \left| \frac{y_{e} - y_{a}}{y_{a}} \right|$$
where ye and ya are the model-estimated value and the actual value, respectively, and n is the number of actual values.

Study area and data
Founded in 1987, Zhoushan is the first prefecture-level city in China that consists of islands; specifically, the Zhoushan Archipelago, which consists of 1390 islands with areas greater than 500 m258. Zhoushan City is located on the coast of the East China Sea, west of Hangzhou Bay and north of Shanghai (Fig. 2). It has a total administrative area of approximately 22,200 km2, but a land area of only 1140.12 km2. Zhoushan has abundant marine economic resources and is well known for its marine fishery, tourism, international shipping and shipbuilding industries58. In 2011, the Zhoushan Archipelago New District was established; this was the first national strategic-level new district in China with a marine economy theme.

Location map of the study area. (a) The geo-location of the study area, (b) The remotely-sensed image of the study area is a false-colour composite of Landsat-8 OLI images that ranks band composites in the order of short wave-infrared (SWIR), near-infrared (NIR), and red bands. Map created in ArcMap 10.5 of the Environmental Systems Research Institute, Inc. (www.esri.com/software/arcgis/arcgis-for-desktop). Boundaries made with free vector data provided by the National Catalogue Service for Geographic Information (https://www.webmap.cn/commres.do?method=dataDownload).

Zhoushan is characterised by hilly landforms, with numerous mountains and hills on the islands. Thus, land that can be effectively used is scarce. For this reason, the land development intensity of the islands varies greatly, and the core zones of the city are located on the islands with larger areas. Zhoushan Island, the largest island in Zhoushan City and its economic and political centre, has an area of 502.65 km2, an east–west length of 44 km and a north–south width of 18 km59. The study area presented in Fig. 2a includes Zhoushan Island, Changzhi Island, Aoshan Island, the Xiaogan Islands and the Lidiao Islands. The total land area of the study region is 529.38 km2. This area is the core zone of Zhoushan and has experienced dramatic LUCC due to the rapid economic growth of the city in recent decades58. The original remotely-sensed image of the study area, acquired on February 22, 2020 from the Landsat-8 OLI, is presented in Fig. 2b. The image is clear and of high quality because of good weather conditions, and the study area includes ocean, lakes, rivers, urban areas, wetlands, forest and other features. The data used in this study can be classified into 2 groups: remotely-sensed image and regional statistics of the study area. The remotely-sensed image for a particular day of a given year was used to derive the annual LUCC dynamics of the study area using classification technology of remotely-sensed image. The regional statistics were used to characterise the regional economic development situation for each calendar year.

Remotely-sensed image
We attempted to determine the LUCC information of the study area from remotely-sensed image acquired since the city was established.
Given the limitations and constraints in the acquisition and selection of proper images, Landsat satellite images were used to derive the LUCC information in the study area. Landsat is a series of terrestrial satellites launched by NASA. Since 1972, 8 satellites have been launched, of which Landsat 6 failed to reach orbit. At present, the Landsat satellites have been continuously observing the Earth for more than 40 years and have accumulated large-scale, long-term remotely-sensed imagery, which is widely used in Earth observation research21,32. The Landsat satellites have basically the same observation conditions and 16- or 18-day revisit cycles. In addition, the thematic mapper (TM), enhanced thematic mapper (ETM+), and operational land imager (OLI) on the Landsat satellites are multi-spectral sensors with a spatial resolution of 30 m (except for several spectral bands), which is finer than that of NTL data. Considering the availability of cloud-free spatial coverage and the consistency of the annual acquisition date, 11 Landsat TM or OLI images spanning 32 years (1984–2016) were used to obtain the multi-temporal LUCC information of the study area (Table 2). The collected images were provided by the US Geological Survey (USGS) (https://glovis.usgs.gov/) and the Geospatial Data Cloud Platform of the Chinese Academy of Sciences Computer Network Information Center (https://www.gscloud.cn). The image format is GeoTIFF and the coordinate system is World Geodetic System 1984 (WGS84) projected by the Universal Transverse Mercator (UTM) projection. For the Landsat TM images, only the 6 reflective bands with 30-m spatial resolution were used for further data analysis, while the thermal infrared (TIR) band with a coarse spatial resolution of 120 m was excluded. For the OLI images, the Pan band and Cirrus band were excluded, while the other 7 bands with 30-m spatial resolution were used (see the sketch below). Table 2 Information of Landsat imagery used in the research. Socioeconomic dataset In general, GDP is the most common economic indicator. In this study, we extended the selection of indicators to include those that are, in theory, closely related to LUCC information. We assembled a city-level statistical dataset spanning 32 years (1984–2016) from the statistical yearbook of Zhoushan City: gross domestic product (GDP), value-added of primary industry (VPI), value-added of secondary industry (VSI), value-added of tertiary industry (VTI), per capita GDP (PGDP), fixed assets investment (FAI), total tourist income (TTI), gross industrial output value (GIOV), gross agricultural output value (GAOV), gross planting output value (GPOV) and gross forestry output value (GFOV).
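As referenced above, the band selection can be sketched as follows. The stacked file name is hypothetical (USGS products usually ship one GeoTIFF per band), so the indexing must be adapted to the actual data layout.

```python
# Hedged sketch of assembling the reflective-band stack for a TM scene.
# The stacked file name below is an assumption made for illustration.
import numpy as np
import rasterio

with rasterio.open("landsat5_tm_scene.tif") as src:
    # TM: keep the six 30-m reflective bands (1-5 and 7); skip band 6 (TIR)
    reflective = src.read([1, 2, 3, 4, 5, 7]).astype(np.float32)

print(reflective.shape)  # (6, rows, cols)
```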
GDP is a monetary measure of the market value of all the final goods and services produced in a specific period. PGDP refers to the per capita GDP. The primary industry (PI) refers to agriculture, forestry, animal husbandry, and fishery (excluding the service industry in agriculture, forestry, animal husbandry, and fishery). The secondary industry (SI) refers to mining (excluding mining auxiliary activities), manufacturing (excluding metal products, machinery and equipment repair), electricity, heat, gas and water production and supply, and construction. The tertiary industry (TI) is the service industry, which mainly includes transportation, communications, commerce, catering, finance, education, and public services. FAI measures the change in total spending on non-rural capital investments such as factories, roads, power grids, and property in China. The TTI refers to the total monetary income obtained by the destination country or region in a certain period from providing tourism products, purchased goods, and other services to tourists at home and abroad. The GIOV refers to the total result of the industrial production activities of an industrial enterprise (unit) in a certain period, which is the total value of industrial final products and industrial labor services provided, expressed in money. The GAOV is the total amount of all agricultural, forestry, animal husbandry, and fishery products expressed in monetary form in a certain period (usually 1 year). The GPOV refers to the total amount of plantation and agricultural products expressed in monetary form in a certain period (usually 1 year), and the GFOV refers to the total amount of forestry expressed in monetary form in a certain period (usually 1 year)13,16,60,61,62,63. The details of these indicators are listed in Table 3. For the PGDP, from 1984 to 2000 it was calculated using the registered population, and from 2000 to 2016 it was calculated using the resident population. In this study, PGDP is in units of yuan (CNY), while the other economic statistics are in units of 10⁵ CNY. Table 3 Socioeconomic dataset of Zhoushan. Experimental results and analyses The Landsat satellite images were pre-processed using several procedures, i.e., radiometric calibration, atmospheric correction and image cropping. In constructing the classification system, the spatial resolution and spectral resolution of the remotely-sensed images and the features of the ground objects in the study area need to be considered comprehensively. The study area has a complex landscape, with hills in the centers of the islands, making the spatial distribution of the objects discrete and resulting in mixed pixels at the spatial resolution of the images. Due to the significant spectral confusion, we grouped several categories together; specifically, grassland was grouped into cropland, and aquaculture and brine pan were grouped into tidal flat. Finally, the LUCC information in the study area was divided into 6 categories: (1) construction land, (2) forest, (3) water, (4) bare land, (5) cropland and (6) tidal flat. Maximum likelihood classification was applied for the supervised classification of the pre-processed Landsat images (a sketch follows below). The images were visually enhanced using linear contrast stretching and different band combinations to help select training samples. The classification results were modified and corrected in order to eliminate obvious errors. Accuracy assessment was performed for each classification result, and the samples were chosen from the repository of Google Earth historical images.
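Before turning to the accuracy figures, here is a sketch of the classification step just described. Under a per-class multivariate Gaussian assumption, maximum likelihood classification coincides with quadratic discriminant analysis (QDA); scikit-learn's QDA is used here as a stand-in, and all samples are randomly generated placeholders rather than real Landsat training pixels.

```python
# QDA as a stand-in for Gaussian maximum likelihood classification.
# Training samples and labels are random placeholders, not real pixels.
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(0)
X_train = rng.normal(size=(600, 6))       # 6 reflective bands per sample
y_train = rng.integers(0, 6, size=600)    # labels for the 6 LUCC classes

clf = QuadraticDiscriminantAnalysis().fit(X_train, y_train)
pixels = rng.normal(size=(1000, 6))       # flattened image, one row per pixel
labels = clf.predict(pixels)              # per-pixel class assignment
print(np.bincount(labels, minlength=6))
```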
The average value of the overall classification accuracy was 84.05%, and the average value of the Kappa coefficient was 0.80, as shown in Table 4. Table 4 Classification accuracy assessment. The final classification maps for each of the 11 years are shown in Fig. 3, and the statistics of each category from 1984 to 2016 are listed in Table 5. Based on the classification results, the areas of the land use types for the missing years were obtained by linear interpolation. In Table 5, bold and italic values represent the original data and the interpolated data, respectively. Classification maps of the 11 years examined in this study. Map created in ArcMap 10.5 of the Environmental Systems Research Institute, Inc. (www.esri.com/software/arcgis/arcgis-for-desktop). Table 5 Areas of land use types. The study area exhibits obvious LUCC characteristics over the past few decades. First, construction land increased more than fivefold, while tidal flats and cultivated land/grassland decreased significantly. The construction land area continually increased over the 32-year study period, from 19.57 to 131.51 km², an increase of 111.94 km² and an increase ratio of 572.03%. Conversely, the tidal flat area decreased by 27.40 km², from 29.35 to 1.95 km², translating to a decrease ratio of 93.35%. The area of cultivated land/grassland decreased by 61.60 km², from 220.70 to 159.10 km², or by 27.91%. Meanwhile, forest land changed relatively little in both ratio and area. The forest area increased by 16.06 km², from 210.72 to 226.78 km², translating to an increase of 7.62%, while the water area increased by 5.56 km², from 5.62 to 11.18 km², an increase ratio of 98.80%. Finally, the area of bare land fluctuated greatly, but the overall change was not obvious, exhibiting a net increase of 0.45 km² during the study period, from 19.84 to 20.29 km², or 2.26%. Given that LUCC is a complex process driven by socioeconomic factors, the primary challenge for estimating economic situations using LUCC information is to determine the association between the area of each land use type and the economic statistics. Pearson correlation analysis was applied to qualitatively examine the statistical dependence between the area of each land use type and the economic statistics across the study period. The Pearson correlation coefficient (ranging from − 1 to 1) was used to indicate the sensitivity level of each land use type versus the economic indices. In addition, the statistical significance level was tested using two-tailed t-statistics. In order to make the LUCC information consistent with the economic statistics, linear interpolation was performed on the missing-year data. Consequently, we obtained 33 sets of raw data consisting of the LUCC information and economic statistics for every year from 1984 to 2016. This raw dataset was then logarithmically transformed (in base 10, denoted lg) in order to eliminate the intrinsic exponential growth trend of the economic indicators and to make the data more consistent with the normal distribution, which is an assumption of Pearson correlation analysis. We extracted one-third of the dataset at equal intervals for model validation, with the remaining data used for modelling (a sketch of these data-preparation steps follows below). The correlation coefficients between each economic index and the area of each land cover type for the data used to construct the model are listed in Table 6.
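As referenced above, the data-preparation steps (linear interpolation of missing years, base-10 log transform, and an equal-interval one-third validation split) can be sketched as follows; the observation years and areas are invented for illustration.

```python
# Sketch of the data preparation: interpolation, lg transform, hold-out split.
# All area values below are invented placeholders.
import numpy as np

years = np.arange(1984, 2017)                         # 33 calendar years
obs_years = np.array([1984, 1990, 1995, 2000, 2005, 2010, 2016])
obs_area = np.array([19.6, 34.0, 52.5, 71.0, 95.0, 115.0, 131.5])  # km^2

area = np.interp(years, obs_years, obs_area)          # fill missing years
lg_area = np.log10(area)                              # base-10 log transform

val_idx = np.arange(0, years.size, 3)                 # every 3rd year held out
train_idx = np.setdiff1d(np.arange(years.size), val_idx)
print(train_idx.size, val_idx.size)                   # 22 modelling, 11 validation
```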
These results reveal that all of the economic indices are positively correlated with construction land, forest and water, and negatively correlated with cropland, bare land and tidal flat. The relevance varies, but it is obvious that each economic index is significantly correlated with the construction land area; indeed, these coefficients have the highest values among all the land use types. It is apparent that, among all of these land use types, the change in construction land is the best explanatory variable for revealing the trend of economic development in the study area. Thus, construction land was selected as the sensitivity factor for the single-factor quantitative model. Table 6 Correlation coefficients between LUCC information and economic statistical indices. In order to reduce the impact of large variations in the economic index values in the time series and to improve the accuracy of the regression analysis, a lg–lg regression model was used to estimate the economic indices. Taking the lg of each economic index as the dependent variable and the lg of the area of construction land (ACL) as the independent variable, the lg–lg scatter plots are presented in Fig. 4. Scatter plots of lg of economic index versus lg of area of construction land (ACL): (a) GDP, (b) VPI, (c) VSI, (d) VTI, (e) PGDP, (f) FAI, (g) TTI, (h) GIOV, (i) GAOV, (j) GPOV, (k) GFOV. Figures created in MATLAB R2018a of The MathWorks, Inc. (www.mathworks.com). The coefficients for the lg–lg regression models were estimated from the data used to construct the model. The specific information is shown in Tables 7, 8, 9 and 10; all of the models are significant, with p < 0.01. Table 7 Fitting results of the linear model. Table 8 Fitting results of the quadratic-term model. Table 9 Fitting results of the power model. Table 10 Fitting results of the exponential model. Model validation data were used to verify the models. The estimated economic indices were derived by the models and then compared to the actual values. The MREs are listed in Table 11, showing that the estimation accuracy varies among the models. For most of the economic indices, including VPI, VTI, PGDP, TTI, GAOV and GPOV, the quadratic-term models have higher precision than the other models. For VSI and GIOV, the linear models have the highest precision. For GDP and FAI, the power models display the best performance in terms of precision. Overall, in spite of the model differences, GDP, VPI, VSI, VTI, PGDP, FAI and GIOV are better estimated than TTI, GAOV, GPOV and GFOV. Table 11 Mean relative errors (MREs) of the models. The best-fitting models for quantifying the relationship of each economic index were obtained by comparing the MREs of the different models. Among the 4 model types, the model with the lowest MRE was selected as the final model for each economic index. As shown in Table 12 and Fig. 5, the prediction errors of GDP, VTI and PGDP are less than 10%, indicating that these 3 economic indices are quite well estimated by the best-fitting models. For VPI, VSI, FAI, TTI, GIOV, GAOV and GPOV, the errors are also within 20%, which is satisfactory. For GFOV, however, the lowest MRE of 28.19% indicates that the models could not fit it accurately. Overall, the quantitative models could accurately reveal the dynamic changes of most of the economic indicators in this case study. Table 12 Best-fitting models.
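Since Table 1 with the exact model forms is not reproduced in the text, the sketch below assumes common choices for the four model types and compares them with the MRE defined earlier; all data are invented placeholders.

```python
# Comparison of four candidate model types using the MRE criterion.
# The functional forms are assumed common choices, not the paper's Table 1.
import numpy as np

def mre(y_est, y_act):
    """Mean relative error, as defined in the accuracy-evaluation section."""
    y_est, y_act = np.asarray(y_est, float), np.asarray(y_act, float)
    return np.mean(np.abs((y_est - y_act) / y_act))

x = np.array([20.0, 45.0, 70.0, 95.0, 120.0])   # hypothetical ACL, km^2
y = np.array([2e5, 8e5, 2.5e6, 6.0e6, 1.3e7])   # hypothetical economic index

coef = {
    "linear":      np.polyfit(x, y, 1),
    "quadratic":   np.polyfit(x, y, 2),
    "power":       np.polyfit(np.log10(x), np.log10(y), 1),  # lg y = b*lg x + a
    "exponential": np.polyfit(x, np.log10(y), 1),             # lg y = b*x + a
}
pred = {
    "linear":      np.polyval(coef["linear"], x),
    "quadratic":   np.polyval(coef["quadratic"], x),
    "power":       10 ** np.polyval(coef["power"], np.log10(x)),
    "exponential": 10 ** np.polyval(coef["exponential"], x),
}
print({name: round(mre(p, y), 3) for name, p in pred.items()})
```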
Scatter plots of economic index versus area of construction land and the best-fitting models: (a) GDP, (b) VPI, (c) VSI, (d) VTI, (e) PGDP, (f) FAI, (g) TTI, (h) GIOV, (i) GAOV, (j) GPOV, (k) GFOV. Figures created in MATLAB R2018a of The MathWorks, Inc. (www.mathworks.com). Remotely-sensed images record changes on the Earth's surface and thus can be used to represent human activities and to estimate socioeconomic indicators. Traditionally, NTL data are the main remotely-sensed data utilised to estimate socioeconomic situations. Few studies, however, have focused on the quantitative relationship between LUCC information and economic development, even though LUCC is also a reliable indicator for estimating socioeconomic situations. This study opens up unique opportunities for the objective, seamless understanding of regional economic development from the perspective of land-use/cover change using remotely-sensed time series data, as well as for the correction of economic survey data, both with a high degree of accuracy. The results of the case study in Zhoushan City indicate that LUCC information derived from remotely-sensed images can be indicative of the dynamics of economic activity during economic development processes at the city level, as revealed by various quantitative correlations with relevant economic statistics. Good performance in modelling the economic statistics is obtained when the area of construction land is selected as the sensitivity factor. The method proposed in this study still contains some deficiencies and uncertainties, however, as a result of the following factors. The LUCC information is the key factor affecting the modelling accuracy, since the sensitivity factors selected from the LUCC information were the basis for the regressions. The spatial resolution of the Landsat imagery was relatively low, and land use types were grouped due to the low separability caused by mixed pixels. It is necessary to use high-resolution images to extract more detailed LUCC information; at the same time, more suitable classification methods could be used to improve the classification accuracy. The spatial matching of remotely-sensed images and statistical data is another influential factor that is currently imperfect and requires further improvement in future studies. Due to the absence of statistical data precisely matching the remotely-sensed images spatially, we had to utilise LUCC information that covered only the core zone of Zhoushan City when modelling the statistical data at the city level. In this study, only a single factor was included in the modelling, while the correlation analysis showed that several land use types are significantly correlated with the economic indices. Thus, additional factors should be included in the modelling, and an analysis of the impacts of different land use types on the economic indices is necessary. Existing studies have shown that the interaction between LUCC and economic development displays obvious regional differences. This interaction may be affected by many natural and unnatural factors, such as land resource conditions, land policy and the stage of economic development. Therefore, the reliability of the proposed method needs to be further verified by additional case studies in different areas. From the perspective of the interrelationship between LUCC information and economic development, this study proposed a method for analysing regional economic situations using remotely-sensed images to extract LUCC information.
Through a case study of Zhoushan, China's first prefecture-level island city, this research investigated the ability of LUCC information to estimate economic indices. The LUCC information was extracted from Landsat images, taking the area of construction land as the explanatory variable after correlation analysis. Eleven economic indices (GDP, VPI, VSI, VTI, PGDP, FAI, TTI, GIOV, GAOV, GPOV and GFOV) were incorporated in linear, quadratic-term, power and exponential models. The accuracy evaluation revealed that the mean relative errors of the best-fitting models for the 11 economic indices were 6.50%, 14.47%, 14.57%, 7.61%, 7.38%, 14.95%, 17.99%, 14.60%, 17.45%, 12.57% and 28.19%, respectively. In conclusion, the results indicate that LUCC information can be used as an explanatory indicator for estimating economic development at the regional level, and the potential applications of remotely-sensed images in the monitoring of economic activities are worth pursuing. In future research, the quality of the remotely-sensed images still has room for improvement, and more methods could be applied to the classification process. In this paper, a comprehensive method for analysing the regional economic situation was developed using remote-sensing technology. The study has some deficiencies, and further work should be conducted on (1) more case studies from different regions, in order to verify the reliability and applicability of the proposed method; (2) the use of remotely-sensed images with higher spatial resolution, to obtain more detailed information on land use and cover change and to reduce the impact of mixed pixels on the area statistics of the land use types; and (3) the use of methods such as deep learning to improve the classification accuracy. Cao, W., Wu, D., Huang, L. & Liu, L. Spatial and temporal variations and significance identification of ecosystem services in the Sanjiangyuan National Park, China. Sci. Rep. 10, 1377–1398. https://doi.org/10.1038/s41598-020-63137-x (2020). Wu, J., He, S., Peng, J., Li, W. & Zhong, X. Intercalibration of DMSP-OLS night-time light data by the invariant region method. Int. J. Remote Sens. 34, 7356–7368. https://doi.org/10.1080/01431161.2013.820365 (2013). Rawat, K. S., Singh, S. K., Singh, M. I. & Garg, B. L. Comparative evaluation of vertical accuracy of elevated points with ground control points from ASTERDEM and SRTMDEM with respect to CARTOSAT-1DEM. Remote Sens. Appl. Soc. Environ. 13, 289–297. https://doi.org/10.1016/j.rsase.2018.11.005 (2019). Parsa, V. A. & Salehi, E. Spatio-temporal analysis and simulation pattern of land use/cover changes, case study: Naghadeh, Iran. J. Urban Manag. 5, 43–51. https://doi.org/10.1016/j.jum.2016.11.001 (2016). Chen, C. et al. The application of the tasseled cap transformation and feature knowledge for the extraction of coastline information from remote sensing images. Adv. Space Res. 64, 1780–1791. https://doi.org/10.1016/j.asr.2019.07.032 (2019). Chen, C. et al. Knowledge-based identification and damage detection of bridges spanning water via high-spatial-resolution optical remotely sensed imagery. J. Indian Soc. Remote Sens. 47, 1999–2008. https://doi.org/10.1007/s12524-019-01036-z (2019). Chen, C., Fu, J. Q., Zhang, S. & Zhao, X. Coastline information extraction based on the tasseled cap transformation of Landsat-8 OLI images. Estuar. Coast. Shelf Sci. 217, 281–291. https://doi.org/10.1016/j.ecss.2018.10.021 (2019). Rogan, J. & Chen, D. M.
Remote sensing technology for mapping and monitoring land-cover and land-use change. Prog. Plan. 61, 301–325. https://doi.org/10.1016/S0305-9006(03)00066-7 (2004). Saux, B. L., Yokoya, N., Hansch, R. & Prasad, S. Advanced multisource optical remote sensing for urban land use and land cover classification. IEEE Geosci. Remote Sens. Mag. 6, 85–89. https://doi.org/10.1109/MGRS.2018.2874328 (2018). Singh, S. K., Basommi, B. P., Mustak, S. K., Srivastava, P. K. & Szabo, S. Modelling of land use land cover change using earth observation data-sets of Tons River Basin, Madhya Pradesh, India. Geocarto Int. 33, 1202–1222. https://doi.org/10.1080/10106049.2017.1343390 (2018). Goldblatt, R. et al. Using Landsat and nighttime lights for supervised pixel-based image classification of urban land cover. Remote Sens. Environ. 205, 253–275. https://doi.org/10.1016/j.rse.2017.11.026 (2018). Balázs, B., Bíró, T., Dyke, G., Singh, S. K. & Szabó, S. Extracting water-related features using reflectance data and principal component analysis of Landsat images. Hydrol. Sci. J. 63, 269–284. https://doi.org/10.1080/02626667.2018.1425802 (2018). Liu, Y., Zhang, X., Kong, X., Wang, R. & Chen, L. Identifying the relationship between urban land expansion and human activities in the Yangtze River Economic Belt, China. Appl. Geogr. 94, 163–177. https://doi.org/10.1016/j.apgeog.2018.03.016 (2018). Mustak, S., Baghmar, N. K., Srivastava, P. K., Singh, S. K. & Binolakar, R. Delineation and classification of rural–urban fringe using geospatial technique and onboard DMSP–operational Linescan system. Geocarto Int. 33, 375–396. https://doi.org/10.1080/10106049.2016.1265594 (2018). Croft, T. A. Nighttime images of the earth from space. Sci. Am. 239, 86–98. https://doi.org/10.1038/scientificamerican0778-86 (1978). Elvidge, C. D. et al. Relation between satellite observed visible-near infrared emissions, population, economic activity and electric power consumption. Int. J. Remote Sens. 18(6), 1373–1379. https://doi.org/10.1080/014311697218485 (1997). Elvidge, C. D. et al. Night-time lights of the world: 1994–1995. ISPRS J. Photogramm. Remote Sens. 56, 81–99. https://doi.org/10.1016/S0924-2716(01)00040-5 (2001). Doll, C. N. H., Muller, J. P. & Elvidge, C. D. Night-time imagery as a tool for global mapping of socioeconomic parameters and greenhouse gas emissions. AMBIO J. Hum. Environ. 29, 157–163. https://doi.org/10.1579/0044-7447-29.3.157 (2000). Doll, C. N. H., Muller, J. P. & Morley, J. G. Mapping regional economic activity from night-time light satellite imagery. Ecol. Econ. 57, 75–92. https://doi.org/10.1016/j.ecolecon.2005.03.007 (2006). Ivajnšič, D. & Devetak, D. GIS-based modelling reveals the fate of antlion habitats in the Deliblato Sands. Sci. Rep. 10, 5299. https://doi.org/10.1038/s41598-020-62305-3 (2020). Ghosh, M. K., Kumar, L. & Roy, C. Monitoring the coastline change of Hatiya Island in Bangladesh using remote sensing techniques. ISPRS J. Photogram. Remote Sens. 101, 137–144. https://doi.org/10.1016/j.isprsjprs.2014.12.009 (2015). Henderson, J. V., Storeygard, A. & Weil, D. N. Measuring economic growth from outer space. Am. Econ. Rev. 102, 994–1028. https://doi.org/10.1257/aer.102.2.994 (2012). Sutton, P. C. & Costanza, R. Global estimates of market and non-market values derived from nighttime satellite imagery, land cover, and ecosystem service valuation. Ecol. Econ. 41, 509–527. https://doi.org/10.1016/S0921-8009(02)00097-6 (2002). Ma, T., Zhou, C., Pei, T., Haynie, S. & Fan, J.
Quantitative estimation of urbanization dynamics using time series of DMSP/OLS nighttime light data: a comparative case study from China's cities. Remote Sens. Environ. 124, 99–107. https://doi.org/10.1016/j.rse.2012.04.018 (2012). Min, B., Gaba, K. M., Sarr, O. F. & Agalassou, A. Detection of rural electrification in Africa using DMSP-OLS night lights imagery. Int. J. Remote Sens. 34, 8118–8141. https://doi.org/10.1080/01431161.2013.833358 (2013). Xie, Y. & Weng, Q. World energy consumption pattern as revealed by DMSP-OLS nighttime light imagery. GISci. Remote Sens. 53, 265–282. https://doi.org/10.1080/15481603.2015.1124488 (2016). Coops, N. C., Kearney, S. P., Bolton, D. K. & Radeloff, V. C. Remotely-sensed productivity clusters capture global biodiversity patterns. Sci. Rep. 8, 16261. https://doi.org/10.1038/s41598-018-34162-8 (2018). Waluda, C. M., Yamashiro, C., Elvidge, C. D., Hobson, V. R. & Rodhouse, P. G. Quantifying light-fishing for Dosidicus gigas in the eastern Pacific using satellite remote sensing. Remote Sens. Environ. 91, 129–133. https://doi.org/10.1016/j.rse.2004.02.006 (2004). Oozeki, Y. et al. Reliable estimation of IUU fishing catch amounts in the northwestern Pacific adjacent to the Japanese EEZ: potential for usage of satellite remote sensing images. Mar. Policy 88, 64–74. https://doi.org/10.1016/j.marpol.2017.11.009 (2018). Li, D., Zhao, X. & Li, X. Remote sensing of human beings: a perspective from nighttime light. Geo-spatial Inf. Sci. 19, 69–79. https://doi.org/10.1080/10095020.2016.1159389 (2016). Zhao, X. et al. Waterbody information extraction from remote-sensing images after disasters based on spectral information and characteristic knowledge. Int. J. Remote Sens. 38, 1402–1422. https://doi.org/10.1080/01431161.2016.1278284 (2017). Keola, S., Andersson, M. & Hall, O. Monitoring economic development from space: using nighttime light and land cover data to measure economic growth. World Dev. 66, 322–334. https://doi.org/10.1016/j.worlddev.2014.08.017 (2015). Chen, C., Fu, J. Q., Sui, X. X., Lu, X. & Tan, A. H. Construction and application of knowledge decision tree after a disaster for water body information extraction from remote sensing images. J. Remote Sens. 22, 792–801. https://doi.org/10.11834/jrs.20188044 (2018). Liu, Z., He, C., Zhang, Q., Huang, Q. & Yang, Y. Extracting the dynamics of urban expansion in China using DMSP-OLS nighttime light data from 1992 to 2008. Landsc. Urban Plan. 106, 62–72. https://doi.org/10.1016/j.landurbplan.2012.02.013 (2012). Raupach, M. R., Rayner, P. J. & Paget, M. Regional variations in spatial structure of nightlights, population density and fossil-fuel CO2 emissions. Energy Policy 38, 4756–4764. https://doi.org/10.1016/j.enpol.2009.08.021 (2010). He, C., Ma, Q., Liu, Z. & Zhang, Q. Modeling the spatiotemporal dynamics of electric power consumption in Mainland China using saturation-corrected DMSP/OLS nighttime stable light data. Int. J. Digit. Earth 7, 993–1014. https://doi.org/10.1080/17538947.2013.822026 (2014). Yang, X., Lu, Y. C., Murtiyoso, A., Koehl, M. & Grussenmeyer, P. HBIM modeling from the surface mesh and its extended capability of knowledge representation. ISPRS Int. J. Geo-Inf. 8, 301. https://doi.org/10.3390/ijgi8070301 (2019). Yang, X., Qin, Q. M., Grussenmeyer, P. & Koehl, M. Urban surface water body detection with suppressed built-up noise based on water indices from Sentinel-2 MSI imagery. Remote Sens. Environ. 219, 259–270.
https://doi.org/10.1016/j.rse.2018.09.016 (2018). Singh, S. K. et al. Landscape transform and spatial metrics for mapping spatiotemporal land cover dynamics using earth observation data-sets. Geocarto Int. 32, 113–127. https://doi.org/10.1080/10106049.2015.1130084 (2017). Singh, M., Malhi, Y. & Bhagwat, S. Evaluating land use and aboveground biomass dynamics in an oil palm–dominated landscape in Borneo using optical remote sensing. J. Appl. Remote Sens. 8, 083695. https://doi.org/10.1117/1.jrs.8.083695 (2014). Rounsevell, M. D. A. et al. Challenges for land system science. Land Use Policy 29, 899–910. https://doi.org/10.1016/j.landusepol.2012.01.007 (2012). Dang, A. N. & Kawasaki, A. Integrating biophysical and socio-economic factors for land-use and land-cover change projection in agricultural economic regions. Ecol. Model. 344, 29–37. https://doi.org/10.1016/j.ecolmodel.2016.11.004 (2017). Li, J., Zhang, C., Zheng, X. & Chen, Y. Temporal-spatial analysis of the warming effect of different cultivated land urbanization of metropolitan area in China. Sci. Rep. 10, 2760. https://doi.org/10.1038/s41598-020-59593-0 (2020). Serra, P., Pons, X. & Saurí, D. Land-cover and land-use change in a Mediterranean landscape: a spatial analysis of driving forces integrating biophysical and human factors. Appl. Geogr. 28, 189–209. https://doi.org/10.1016/j.apgeog.2008.02.001 (2008). Li, C. et al. Study on average housing prices in the inland capital cities of China by night-time light remote sensing and official statistics data. Sci. Rep. 10, 7732. https://doi.org/10.1038/s41598-020-64506-2 (2020). Shu, C., Xie, H., Jiang, J. & Chen, Q. Is urban land development driven by economic development or fiscal revenue stimuli in China? Land Use Policy 77, 107–115. https://doi.org/10.1016/j.landusepol.2018.05.031 (2018). Singh, S. K., Srivastava, K., Gupta, M., Thakur, K. & Mukherjee, S. Appraisal of land use/land cover of mangrove forest ecosystem using support vector machine. Environ. Earth Sci. 71, 2245–2255. https://doi.org/10.1007/s12665-013-2628-0 (2014). Liu, J. et al. Spatial patterns and driving forces of land use change in China during the early 21st century. J. Geogr. Sci. 20, 483–494. https://doi.org/10.1007/s11442-010-0483-4 (2010). Liao, W. et al. Taking optimal advantage of fine spatial resolution: promoting partial image reconstruction for the morphological analysis of very-high-resolution images. IEEE Geosci. Remote Sens. Mag. 5, 8–28. https://doi.org/10.1109/mgrs.2017.2663666 (2017). Chen, C. et al. Damaged bridges over water: using high-spatial-resolution remote-sensing images for recognition, detection, and assessment. IEEE Geosci. Remote Sens. Mag. 6, 69–85. https://doi.org/10.1109/MGRS.2018.2852804 (2018). Gong, P. et al. Mapping essential urban land use categories in China (EULUC-China): preliminary results for 2018. Sci. Bull. 65, 182–187. https://doi.org/10.1016/j.scib.2019.12.007 (2020). Gong, P. et al. Annual maps of global artificial impervious area (GAIA) between 1985 and 2018. Remote Sens. Environ. 236, 111510. https://doi.org/10.1016/j.rse.2019.111510 (2020). Sun, W., Peng, J., Yang, G. & Du, Q. Correntropy-based sparse spectral clustering for hyperspectral band selection. IEEE Geosci. Remote Sens. Lett. 17, 484–488. https://doi.org/10.1109/LGRS.2019.2924934 (2020). Sun, W., Yang, G., Peng, J. & Du, Q. Lateral-slice sparse tensor robust principal component analysis for hyperspectral image classification. IEEE Geosci. Remote Sens. Lett. 17, 107–111.
https://doi.org/10.1109/LGRS.2019.2915315 (2020). Congalton, R. G. A review of assessing the accuracy of classifications of remotely sensed data. Remote Sens. Environ. 37, 35–46. https://doi.org/10.1016/0034-4257(91)90048-B (1991). Sharma, D. & Singhai, J. An object-based shadow detection method for building delineation in high-resolution satellite images. PFG J. Photogramm. Remote Sens. Geoinf. Sci. 87, 103–118. https://doi.org/10.1007/s41064-019-00070-3 (2019). Chen, Y. Y., Ming, D. P. & Lv, X. W. Superpixel based land cover classification of VHR satellite image combining multi-scale CNN and scale parameter estimation. Earth Sci. Inform. 12, 341–363. https://doi.org/10.1007/s12145-019-00383-2 (2020). Fu, J. Q., Chen, C. & Chu, Y. L. Spatial–temporal variations of oceanographic parameters in the Zhoushan sea area of the East China Sea based on remote sensing datasets. Reg. Stud. Mar. Sci. 28, 100626. https://doi.org/10.1016/j.rsma.2019.100626 (2019). Chen, J. Y. et al. Land-cover reconstruction and change analysis using multisource remotely sensed imageries in Zhoushan Islands since 1970. J. Coast. Res. 30, 272–282. https://doi.org/10.2112/JCOASTRES-D-13-00027.1 (2013). Orimoloye, I. R. et al. Wetland shift monitoring using remote sensing and GIS techniques: landscape dynamics and its implications on Isimangaliso Wetland Park, South Africa. Earth Sci. Inform. 12, 553–563. https://doi.org/10.1007/s12145-019-00400-4 (2020). Kakooei, M. & Baleghi, Y. A two-level fusion for building irregularity detection in post-disaster VHR oblique images. Earth Sci. Inform. 13, 459–477. https://doi.org/10.1007/s12145-020-00449-6 (2020). Ranjan, S., Sarvaiya, J. N. & Patel, J. N. Integrating spectral and spatial features for hyperspectral image classification with a modified composite kernel framework. PFG J. Photogram. Remote Sens. Geoinf. Sci. 87, 275–296. https://doi.org/10.1007/s41064-019-00080-1 (2019). Singh, H., Garg, R. D. & Karnatak, H. C. Online image classification and analysis using OGC web processing service. Earth Sci. Inform. 12, 307–317. https://doi.org/10.1007/s12145-019-00378-z (2020). The authors would like to thank the editors and the anonymous reviewers for their outstanding comments and suggestions, which greatly helped them to improve the technical quality and presentation of the manuscript. We also greatly appreciate the USGS (https://www.usgs.gov) and the Geospatial Data Cloud (https://www.gscloud.cn) for the free availability of Landsat remotely-sensed imagery. This work is supported by the National Natural Science Foundation of China (41701447, 41701483), the Fundamental Research Funds for Zhejiang Provincial Universities and Research Institutes (2019J00003), and the Training Program of Excellent Master Thesis of Zhejiang Ocean University. We thank LetPub (www.letpub.com) for its linguistic assistance during the preparation of this manuscript.
Marine Science and Technology College, Zhejiang Ocean University, Zhoushan, 316022, China Chao Chen & Xinyue He College of Mathematics, Physics and Information, Zhejiang Ocean University, Zhoushan, 316022, China Zhisong Liu Department of Geography and Spatial Information Techniques, Ningbo University, Ningbo, 315211, China Weiwei Sun School of Resources and Environmental Engineering, Wuhan University of Technology, Wuhan, 430070, China Heng Dong School of Economics and Management, Zhejiang Ocean University, Zhoushan, 316022, Zhejiang, China Yanli Chu Chao Chen Xinyue He All authors listed in the revised manuscript significantly contributed to this study. Conception and supervision of the research topic, methodology, Chao Chen; data processing, Chao Chen, Xinyue He and Zhisong Liu; results analysis, Chao Chen, Zhisong Liu and Weiwei Sun; writing, original draft preparation, Chao Chen, Xinyue He and Yanli Chu; writing, review, revision and editing, Chao Chen, Xinyue He, Heng Dong and Yanli Chu. Correspondence to Zhisong Liu. Chen, C., He, X., Liu, Z. et al. Analysis of regional economic development based on land use and land cover change information derived from Landsat imagery. Sci Rep 10, 12721 (2020). https://doi.org/10.1038/s41598-020-69716-2 Accepted: 13 July 2020
Tolvaptan-induced hypernatremia related to low serum potassium level accompanying high blood pressure in patients with acute decompensated heart failure Hidetada Fukuoka1, Koichi Tachibana1, Yukinori Shinoda1, Tomoko Minamisaka1, Hirooki Inui1, Keisuke Ueno1, Soki Inoue1, Kentaro Mine1, Kumpei Ueda1 & Shiro Hoshida ORCID: orcid.org/0000-0002-0268-94171 BMC Cardiovascular Disorders volume 20, Article number: 467 (2020) Tolvaptan significantly increases urine volume in acute decompensated heart failure (ADHF); serum sodium level increases due to aquaresis in almost all cases. We aimed to elucidate clinical factors associated with hypernatremia in ADHF patients treated with tolvaptan. We enrolled 117 ADHF patients treated with tolvaptan in addition to standard therapy. We examined differences in clinical factors at baseline between patients with and without hypernatremia in the initial three days of hospitalization. Systolic (p = 0.045) and diastolic (p = 0.004) blood pressure, serum sodium level (p = 0.002), and negative water balance (p = 0.036) were significantly higher, and serum potassium level (p = 0.026) was significantly lower, on admission day in patients with hypernatremia (n = 22). In multivariate regression analysis, hypernatremia was associated with low serum potassium level (p = 0.034). Among patients with serum potassium level ≤ 3.8 mEq/L, the cutoff value obtained using receiver operating characteristic curve analysis, those with hypernatremia related to tolvaptan treatment showed significantly higher diastolic blood pressure on admission day (p = 0.004). In tolvaptan treatment combined with standard therapy in ADHF patients, serum potassium level ≤ 3.8 mEq/L may be a determinant factor for hypernatremia development. Among hypokalemic patients, those with higher diastolic blood pressure on admission may be carefully managed to prevent hypernatremia. Tolvaptan, a selective V2 receptor antagonist with an aquaretic effect, significantly increases urine volume without increasing electrolyte excretion into the urine in acute decompensated heart failure (ADHF) [1,2,3]. Tolvaptan can decrease body weight, increase serum sodium level, and ameliorate some congestion symptoms in patients with ADHF, which may help prevent overdose of loop diuretics, especially in patients with renal dysfunction [4]. A meta-analysis of the published literature suggests short-term benefits of tolvaptan, but the impact on mortality is inconclusive [4,5,6,7]. The serum sodium level increases as a result of aquaresis in almost all cases, and hypernatremia, which can be lethal in some patients [8, 9], has been identified as a significant adverse event to be prevented [10]. Therefore, a lower dose of tolvaptan has been recommended in the initial phase to prevent hypernatremia [11, 12], because tolvaptan treatment can dose-dependently lead to abnormal hypernatremia [13, 14]. Sometimes, hypernatremia results in central nervous system disturbance. There is a population at risk of developing hypernatremia [15], and risk factors for hypernatremia in tolvaptan treatment have been reported previously [10,11,12]. This study aimed to elucidate clinical factors associated with hypernatremia in patients with ADHF treated with full medications and tolvaptan in real-world practice.
We retrospectively investigated 117 consecutive in-hospital patients with ADHF (mean age, 78 years) who received oral tolvaptan therapy in addition to standard therapy, including carperitide infusion, for the treatment of volume overload between January 2016 and December 2018 in our cardiology ward. Heart failure (HF) symptoms in all patients had worsened despite treatment, including oral diuretic therapy, before hospital admission. Patients were excluded if they had anuria, consciousness disturbance, or cardiogenic shock. All patients underwent baseline blood and urine tests, including neurohumoral assessment such as plasma B-type natriuretic peptide (BNP), renin activity, and aldosterone concentration, chest X-rays, and echocardiography on admission day. Serum osmolality was calculated using the following equation: $$\text{Calculated serum osmolality} = 2 \times \text{Na} + \text{blood urea nitrogen}/2.8 + \text{blood sugar}/18.$$ Vital signs, 24-h fluid intake, and urine volume were measured at baseline and every 24 h thereafter. Body weight was measured after urination and before breakfast at baseline. First-morning spot urine tests included measurements of osmolality and of sodium (UNa), potassium, urea nitrogen (UUN), and creatinine (UCr) levels. The following formula was used to estimate urine osmolality: $$\text{Urine osmolality} = 1.07 \times \left\{ 2 \times \left[ \text{UNa (mEq/L)} \right] + \left[ \text{UUN (mg/dL)} \right]/2.8 + \left[ \text{UCr (mg/dL)} \right] \times 2/3 \right\} + 16.$$ It was planned that all patients would undergo repeated blood and urine tests during the 3 days after admission. Left ventricular ejection fraction was assessed by echocardiography using the biplane Simpson's rule. Classification of hypernatremia For the risk analysis, the development of hypernatremia was defined as at least one measurement of serum sodium level ≥ 148 mEq/L in the initial three days after tolvaptan treatment. Predictive factors affecting the development of hypernatremia under tolvaptan treatment were extracted from variables in the clinical characteristics, blood and urine tests, and medications. All numerical data are expressed as mean ± standard deviation or percentages. Continuous data were compared using the unpaired t-test. Categorical data were assessed using the chi-square test. The area under the curve was calculated, and optimal cutoff values of predictors of hypernatremia were determined. A multivariate logistic regression analysis was applied to assess the independent factors associated with hypernatremia, using the variables that were significant in the univariate analysis. p values < 0.05 were considered statistically significant. All statistical analyses were performed using EZR (Saitama Medical Center, Jichi Medical University, Saitama, Japan), which is a graphical user interface for R (The R Foundation for Statistical Computing, Vienna, Austria). Baseline characteristics in patients with hypernatremia Systolic (p = 0.045) and diastolic (p = 0.004) blood pressures were significantly higher on admission day in patients with hypernatremia (n = 22, Table 1). However, no differences were observed in comorbidities, such as diabetes, hypertension, and dyslipidemia, or in medications before admission between patients with and without hypernatremia.
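The two osmolality formulas above transcribe directly into code; the sketch below is a plain transcription with the paper's units (Na in mEq/L, the remaining quantities in mg/dL), and the example values are arbitrary.

```python
# Plain transcription of the two osmolality formulas from the methods.
def calc_serum_osmolality(na, bun, sugar):
    """Calculated serum osmolality = 2*Na + BUN/2.8 + blood sugar/18."""
    return 2 * na + bun / 2.8 + sugar / 18

def est_urine_osmolality(u_na, u_un, u_cr):
    """Estimated urine osmolality = 1.07*(2*UNa + UUN/2.8 + UCr*2/3) + 16."""
    return 1.07 * (2 * u_na + u_un / 2.8 + u_cr * 2 / 3) + 16

print(calc_serum_osmolality(140, 20, 100))   # ~292.7 for arbitrary inputs
print(est_urine_osmolality(80, 600, 100))    # illustrative values only
```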
The incidence of atrial fibrillation was also not different (Table 1). Regarding laboratory data, there were no differences in BNP level; estimated glomerular filtration rate; albumin, blood sugar, and uric acid levels; renin activity; or aldosterone level between the two groups (Table 1). However, serum sodium level (p = 0.002) was significantly higher, and serum potassium level (p = 0.026) was significantly lower, at baseline in patients with hypernatremia (Table 1). We did not observe differences in urine examination results at baseline. When we calculated serum osmolality from the sodium, blood urea nitrogen, and blood sugar levels, patients exhibiting hypernatremia showed significantly higher calculated serum osmolality (p = 0.012, Table 2). There were no differences in the doses of tolvaptan (7.5 ± 3.8 vs. 8.1 ± 2.5 mg/day, p = 0.269) and carperitide (0.025 ± 0.010 vs. 0.025 ± 0.06 μg/min, p = 0.835) between patients with and without hypernatremia. Table 1 Baseline characteristics of patients on admission day with and without hypernatremia in the initial three days after tolvaptan treatment Table 2 Calculated parameters at baseline in patients with and without hypernatremia Regarding water balance, calculated as (urine volume − water intake), dehydration obviously occurred during the first hospitalization day in patients with hypernatremia (p = 0.036, Table 3). In the multivariate regression analysis using the significant factors from the univariate analysis, hypernatremia in the initial three days of hospitalization was independently associated with low serum potassium level (p = 0.034, Table 4). The cutoff serum potassium level at baseline was 3.8 mEq/L by receiver operating characteristic curve analysis (Fig. 1). Table 3 Water balance in patients with and without hypernatremia Table 4 Multivariate regression analysis of factors predicting hypernatremia The cutoff serum sodium and potassium levels at baseline by receiver operating characteristic curve analysis in patients with decompensated heart failure with hypernatremia in the initial 3 days of hospitalization after tolvaptan administration in addition to standard therapy, including carperitide infusion Characteristics of patients with hypernatremia among those with low potassium level at baseline There were no significant differences in renin activity, aldosterone level, or medications with loop diuretics, angiotensin-converting enzyme inhibitors/angiotensin receptor blockers, and aldosterone antagonists between patients with serum potassium level ≤ 3.8 mEq/L at baseline with and without hypernatremia (Table 5). However, among those with serum potassium level ≤ 3.8 mEq/L, patients with hypernatremia exhibited significantly higher diastolic pressure on admission day (p = 0.004) (Table 5). The ratio of aldosterone level to renin activity tended to be high in hypokalemic patients with hypernatremia. Table 5 Baseline characteristics of patients whose baseline potassium level ≤ 3.8 mEq/L stratified based on the presence or absence of hypernatremia Hypernatremia in the initial three days of hospitalization after tolvaptan administration in addition to standard therapy, including carperitide infusion, in patients with ADHF was associated with low serum potassium level at baseline in the multivariate regression analysis.
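The paper reports the ROC-derived cutoff of 3.8 mEq/L but not the selection criterion; the sketch below assumes Youden's J and uses simulated data with the study's group sizes (22 with hypernatremia, 95 without), so the printed cutoff is illustrative only.

```python
# Hedged sketch of deriving an ROC cutoff; Youden's J is an assumption.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
hyperna = np.r_[np.ones(22), np.zeros(95)]                   # outcome labels
potassium = np.r_[rng.normal(3.6, 0.4, 22), rng.normal(4.1, 0.5, 95)]

# lower potassium predicts hypernatremia, so score with the negated value
fpr, tpr, thr = roc_curve(hyperna, -potassium)
cutoff = -thr[np.argmax(tpr - fpr)]          # threshold maximizing Youden's J
print(round(float(cutoff), 2))
```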
Among patients with serum potassium level ≤ 3.8 mEq/L, the cutoff value by receiver operating characteristic curve analysis, those with hypernatremia related to tolvaptan treatment showed significantly higher diastolic blood pressure on admission day. Tolvaptan and the renin–angiotensin–aldosterone system (RAAS) Hypokalemia may be attributed to secondary aldosteronism associated with HF and loop diuretic therapy [15]. Aldosterone stimulates sodium reabsorption and potassium excretion via Na+/K+-ATPase in the renal tubules of these patients, leading to hypokalemia. However, tolvaptan inhibits angiotensin II-induced increases in aldosterone production via a V2 receptor-independent pathway in vitro [16]. Furthermore, treatment with tolvaptan plus natriuretic peptide does not activate the RAAS [17] and prevents an increase in aldosterone levels compared to that with natriuretic peptide only [18]. In patients with hypokalemia at baseline, those with hypernatremia exhibited higher diastolic blood pressure, although there was no difference in medications with angiotensin-converting enzyme inhibitors/angiotensin II receptor blockers, aldosterone antagonists, and loop diuretics compared with those without hypernatremia in this study. The ratio of aldosterone level to renin activity tended to be higher in hypokalemic patients with hypernatremia. These results suggest that the inhibitory effects of RAAS inhibitor treatment were weaker, or that RAAS breakthrough phenomena occurred, in hypokalemic patients with hypernatremia compared with those without hypernatremia. There are individual differences in the extent of RAAS inhibition achieved by RAAS inhibitor treatment, and the frequency of use of aldosterone blockade (approximately 20%) was lower in this study than in other studies (approximately 40%) [11, 12]. The use of loop diuretics results in the inhibition of sodium reabsorption, but aldosterone blockade may be insufficient in patients with hypokalemia despite RAAS inhibitor treatment, in consideration of their higher blood pressure. Moreover, hypokalemia reduces urine concentration and induces an increase in urine volume, thus contributing to hypernatremia in addition to the effect of tolvaptan. These findings may indicate a pathophysiologically more severe state of HF in hypokalemic patients with hypernatremia, which could be clarified by a further study examining prognosis in these patients. It is well known that tolvaptan can decrease body weight and increase the sodium level in patients with ADHF [19]. We used a criterion for hypernatremia of sodium level ≥ 148 mEq/L (outside the normal range in our hospital) in the initial three days of hospitalization, which differed from those in previous studies of the risk factors for tolvaptan-induced hypernatremia (≥ 147 mEq/L [10]; ≥ 150 mEq/L [11, 12]). The incidence of hypernatremia was higher (19%) in this study than in previous studies, possibly resulting from the different thresholds for hypernatremia [11, 12] or from the inclusion of patients with liver cirrhosis (0.2%) [20]. In patients with liver cirrhosis, tolvaptan-induced hypernatremia was not related to hypokalemia, possibly because almost all patients with liver cirrhosis were administered spironolactone [10]. These findings strongly suggest that aldosterone-related factors may be involved in the hypernatremia and hypokalemia of patients treated with tolvaptan. The combined use of tolvaptan and adequate RAAS inhibitors may be recommended to prevent hypernatremia in loop diuretic-refractory ADHF.
Some limitations are to be noted: this is a single-center study; it examined the additive effect of tolvaptan in association with standard therapy, including carperitide infusion, in patients with ADHF; and it was not a dose-finding study. The routine use of carperitide is not recommended as a first-line vasodilator for elderly patients with ADHF [21]. Although urine examination results, such as urine osmolality, have been used to predict the response to tolvaptan [22], we did not observe differences in urine factors, such as urine osmolality and the urine sodium/creatinine ratio, between patients with and without hypernatremia in this study. Some important clinical data, such as echocardiographic indices, were lacking for a better, cautious understanding of the study results. In tolvaptan treatment combined with standard therapy in patients with ADHF, serum potassium level ≤ 3.8 mEq/L at baseline may be a determinant factor for the development of hypernatremia. Among patients with hypokalemia, those with higher diastolic blood pressure on admission may be carefully managed to prevent hypernatremia, possibly because of the involvement of aldosterone-related factors. The raw data may be made available upon reasonable request from the corresponding author. ADHF: Acute decompensated heart failure BNP: B-type natriuretic peptide HF: Heart failure RAAS: Renin-angiotensin-aldosterone system UCr: Urine creatinine UNa: Urine sodium UUN: Urine urea nitrogen Yamamura Y, Nakamura S, Itoh S, Hirano T, Onogawa T, Yamashita T, Yamada Y, Tsujimae K, Aoyama M, Kotosai K, Ogawa H, Yamashita H, Kondo K, Tominaga M, Tsujimoto G, Mori T. OPC-41061, a highly potent human vasopressin V2-receptor antagonist: pharmacological profile and aquaretic effect by single and multiple oral dosing in rats. J Pharmacol Exp Ther. 1998;287:860–7. Schrier RW, Gross P, Gheorghiade M, Berl T, Verbalis JG, Czerwiec FS, Orlandi C, SALT Investigators. Tolvaptan, a selective oral vasopressin V2-receptor antagonist, for hyponatremia. N Engl J Med. 2006;355:2099–112. Gassanov N, Semmo N, Semmo M, Nia AM, Fuhr U, Er F. Arginine vasopressin (AVP) and treatment with arginine vasopressin receptor antagonists (vaptans) in congestive heart failure, liver cirrhosis and syndrome of inappropriate antidiuretic hormone secretion (SIADH). Eur J Clin Pharmacol. 2011;67:333–46. Wang C, Xiong B, Cai L. Effects of tolvaptan in patients with acute heart failure: a systematic review and meta-analysis. BMC Cardiovasc Disord. 2017;17:164. https://doi.org/10.1186/s12872-017-0598-y. Wu MY, Chen TT, Chen YC, Tarng DC, Wu YC, Lin HH, Tu YK. Effects and safety of oral tolvaptan in patients with congestive heart failure: a systematic review and network meta-analysis. PLoS ONE. 2017;12:e0184380. Alskaf E, Tridente A, Al-Mohammad A. Tolvaptan for heart failure, systematic review and meta-analysis of trials. J Cardiovasc Pharmacol. 2016;68:196–203. Gunderson EG, Lillyblad MP, Fine M, Vardeny O, Berei TJ. Tolvaptan for volume management in heart failure. Pharmacotherapy. 2019;39:473–85. Darmon M, Timsit JF, Francais A, Nquile-Makao M, Adrie C, Cohen Y, Garrouste-Orgeas M, Goldgran-Toledano D, Dumenil AS, Jamali S, Cheval C, Allaouchiche B, Souweine B, Azoulay E. Association between hypernatremia acquired in the ICU and mortality: a cohort study. Nephrol Dial Transplant. 2010;25:2510–5. Funk GC, Lindner G, Druml W, Metnitz B, Schwarz C, Bauer P, Metnitz PG.
Incidence and prognosis of dysnatremias present on ICU admission. Intensive Care Med. 2010;36:304–11. Hirai K, Shimomura T, Moriwaki H, Ishii H, Shimoshikiryo T, Tsuji D, Inoue K, Kadoiri T, Itoh K. Risk factors for hypernatremia in patients with short- and long-term tolvaptan treatment. Eur J Clin Pharmacol. 2016;72:1177–83. Kinugawa K, Sato N, Inomata T, Shimakawa T, Iwatake N, Mizuguchi K. Efficacy and safety of tolvaptan in heart failure patients with volume overload: an interim result of post-marketing surveillance in Japan. Circ J. 2014;78:844–52. Kinugawa K, Sato N, Inomata T, Yasuda M, Shibasaki Y, Shimakawa T. Novel risk score efficiently prevents tolvaptan-induced hypernatremic events in patients with heart failure. Circ J. 2018;82:1344–50. Gheorghiade M, Niazi I, Ouyang J, Czerwiec F, Kambayashi J, Zampino M, Orlandi C, Tolvaptan Investigators. Vasopressin V2-receptor blockade with tolvaptan in patients with chronic heart failure: results from a double-blind, randomized trial. Circulation. 2003;107:2690–6. Konstam MA, Gheorghiade M, Burnett JC Jr, Grinfeld L, Maggioni AP, Swedberg K, Udelson JE, Zannad F, Cook T, Ouyang J, Zimmer C, Orlandi C, Efficacy of Vasopressin Antagonism in Heart Failure Outcome Study With Tolvaptan (EVEREST) Investigators. Effects of oral tolvaptan in patients hospitalized for worsening heart failure: the EVEREST outcome trial. JAMA. 2007;297:1319–31. Kinugawa K, Inomata T, Sato N, Yasuda M, Shimakawa T, Bando K, Mizuguchi K. Effectiveness and adverse events of tolvaptan in octogenarians with heart failure: interim analyses of Samsca Post-Marketing Surveillance In Heart failurE (SMILE Study). Int Heart J. 2015;56:137–43. Ali F, Dohi K, Okamoto R, Katayama K, Ito M. Novel molecular mechanisms in the inhibition of adrenal aldosterone synthesis: action of tolvaptan via a vasopressin V2 receptor-independent pathway. Br J Pharmacol. 2019;176:1315–27. Jujo K, Saito K, Ishida I, Furuki Y, Kim A, Suzuki Y, Sekiguchi H, Yamaguchi J, Ogawa H, Hagiwara N. Randomized pilot trial comparing tolvaptan with furosemide on renal and neurohumoral effects in acute heart failure. ESC Heart Fail. 2016;3:177–88. Costello-Boerrigter LC, Boerrigter G, Cataliotti A, Harty GJ, Burnett JC Jr. Renal and anti-aldosterone actions of vasopressin-2 receptor antagonism and B-type natriuretic peptide in experimental heart failure. Circ Heart Fail. 2010;3:412–9. Ma G, Ma X, Wang G, Teng W, Hui X. Effect of tolvaptan add-on therapy in patients with acute heart failure: meta-analysis on randomized controlled trials. BMJ Open. 2019;9:e025537. https://doi.org/10.1136/bmjopen-2018-025537. Sakaida I, Terai S, Kurosaki M, Yasuda M, Okada M, Bando K, Fukuta Y. Effectiveness and safety of tolvaptan in liver cirrhosis patients with edema: interim results of Samsca posT-mARkeTing surveillance in liver cirrhosis (START study). Hepatol Res. 2017;47:1137–46. Nagai T, Iwakami N, Nakai M, Nishimura K, Sumita Y, Mizuno A, Tsutsui H, Ogawa H, Anzai T, JROAD-DPC Investigators. Effect of intravenous carperitide versus nitrates as first-line vasodilators on in-hospital outcomes in hospitalized patients with acute heart failure: insight from a nationwide claim-based database. Int J Cardiol. 2019;280:104–9. Imamura T, Kinugawa K, Shiga T, Kato N, Muraoka H, Minatsuki S, Inaba T, Maki H, Hatano M, Yao A, Kyo S, Nagai R. Novel criteria of urine osmolality effectively predict response to tolvaptan in decompensated heart failure patients: association between non-responders and chronic kidney disease. Circ J.
2013;77:397–404. Department of Cardiovascular Medicine, Yao Municipal Hospital, 1-3-1 Ryuge-cho, Yao, Osaka, 581-0069, Japan Hidetada Fukuoka, Koichi Tachibana, Yukinori Shinoda, Tomoko Minamisaka, Hirooki Inui, Keisuke Ueno, Soki Inoue, Kentaro Mine, Kumpei Ueda & Shiro Hoshida Conception and design of the study, or acquisition of, or analysis and interpretation of data: HF, YS, TM, HI, KU, SI, KM, KU. Drafting the article or revising it critically for important intellectual content: HF, KT, SH. Final approval of the version to be submitted: all authors have read and approved the submission of the manuscript. Correspondence to Shiro Hoshida. This study complied with the tenets of the Declaration of Helsinki. Because our study was performed in a retrospective manner, the local ethics committee (Ethics Committee of Yao Municipal Hospital) ruled that no formal ethics approval or consent was required for this study. The director of our hospital granted permission to access and use the raw data. The authors have no financial or other relations that could lead to a conflict of interest. Fukuoka, H., Tachibana, K., Shinoda, Y. et al. Tolvaptan-induced hypernatremia related to low serum potassium level accompanying high blood pressure in patients with acute decompensated heart failure. BMC Cardiovasc Disord 20, 467 (2020). https://doi.org/10.1186/s12872-020-01751-3
CommonCrawl
Threshold phenomenon for homogenized fronts in random elastic media

Patrick W. Dondl* and Martin Jesenko
Abteilung für Angewandte Mathematik, Albert-Ludwigs-Universität Freiburg, Hermann-Herder-Str. 10, 79104 Freiburg, Germany
* Corresponding author: Patrick W. Dondl
Received: May 2019; Revised: November 2019; Early access: April 2020; Published: January 2021
Fund Project: This work is supported by EPSRC grant EP/M028682/1 'Effective properties of interface evolution in a random environment.'

We consider a model for the motion of a phase interface in an elastic medium, for example, a twin boundary in martensite. The model is given by a semilinear parabolic equation with a fractional Laplacian as regularizing operator, stemming from the interaction of the front with its elastic environment. We show that the presence of randomly distributed, localized obstacles leads to a threshold phenomenon, i.e., stationary solutions exist up to a positive, critical driving force, leading to a stick-slip behaviour of the phase boundary. The main result is proved by an explicit construction of a stationary viscosity supersolution to the evolution equation and is based on a percolation result for the obstacle sites. Furthermore, we derive a homogenization result for such fronts in the case of the half-Laplacian in the pinning regime.

Keywords: Random media, viscosity solutions, non-local operator, pinning, homogenization.
Mathematics Subject Classification: 35D40, 35R11, 74A40, 74N20.
Citation: Patrick W. Dondl, Martin Jesenko. Threshold phenomenon for homogenized fronts in random elastic media. Discrete & Continuous Dynamical Systems - S, 2021, 14 (1): 353-372. doi: 10.3934/dcdss.2020329
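For orientation, here is a minimal sketch of the type of evolution law described in the abstract; the precise form of the obstacle term, the exponent, and all constants are assumptions here and are fixed only in the paper itself:

```latex
% Driven phase interface u(x,t) in a random elastic medium (sketch):
% the fractional Laplacian is the non-local elastic regularization,
% f models the localized random obstacles (randomness \omega), and
% F \ge 0 is the constant driving force.
\begin{equation*}
  \partial_t u(x,t) = -(-\Delta)^{s}\, u(x,t) - f\bigl(x, u(x,t), \omega\bigr) + F,
  \qquad x \in \mathbb{R}^{n},\quad s \in (0,1).
\end{equation*}
% Pinning threshold: for all F below some critical F^* > 0 a stationary
% viscosity supersolution exists, so fronts starting below it remain pinned;
% the homogenization result concerns the half-Laplacian case s = 1/2.
```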
Figure 1. Decomposition of the base space for $ n = 2 $. Figure 2. Cross-sections of the decomposition of $ {\mathbb R}^{n+1} $ above the horizontal hyperplane (left) and a slightly perturbed one (right).
Polygon

2010 Mathematics Subject Classification: Primary: 51-XX

A closed broken line, namely: If $A_1,\dots,A_n$ are distinct points, no three of which lie on a line, then the collection of segments $[A_1 A_2],\; [A_2 A_3],\dots, [A_n A_1]$ is called a polygon or polygonal curve (see Fig. a). A polygon can be spatial (skew) or planar (below, planar polygons are discussed).

Figure a.

A connected (multiply-connected) domain whose boundary consists of a finite number of segments and is a closed polygonal curve (or consists of several closed polygonal curves; in this case the polygon is sometimes called a polygonal figure, see Fig. b). A polygon in the sense of the first definition is called one-dimensional, in the second, two-dimensional.

Figure b.

The vertices of a polygonal curve (the points $A_i$) are called the vertices of the polygon. The segments $[A_j A_{j+1}]$ are called its sides. Two sides having a common vertex are called adjacent, and two vertices which are the end-points of one segment of the polygonal curve are called adjacent vertices. If the boundary of a polygon is a simple polygonal curve, so that non-adjacent sides have no common points (interior or end), then the polygon is called simple. If the boundary of a two-dimensional polygon is not simple, then the polygon is called self-intersecting (Fig. c) (a self-intersecting polygon in the sense of the first definition). In this case the polygon is a union of simple polygons. A self-intersecting polygonal curve divides the plane into a finite number of regions, one of which is infinite. If rays are also considered as boundaries of a polygon, then an infinite polygon is defined, which has a finite number of segments and rays for its boundary.

Figure c.

Figure d.

Every simple polygon (or boundary of a simple polygon) divides the plane into an interior (finite) and an exterior (infinite) domain (Jordan's theorem). A simple polygon may have a very complicated boundary structure. To ascertain whether a point belongs to the interior or exterior domain it is sufficient to draw a ray from that point not passing through a vertex: if the number of intersections of the ray with the boundary is even, then the point is in the exterior domain; if it is odd, then the point is in the interior (a code sketch of this even-odd test is given below). The angle formed by two rays originating at some vertex of a polygon and containing both adjacent sides of this vertex is called an interior angle of the polygon if this angle has a non-empty intersection with the interior domain and if the vertex belongs to this intersection. (See Fig. d.) The sum of the interior angles of a simple $n$-gon is equal to $(n-2)180^\textrm{o}$. A simple polygon has at least one interior angle less than a straight angle. A straight line passing through two non-adjacent vertices of a polygon is called a diagonal line, and a segment with end-points at two non-adjacent vertices is called a diagonal of the polygon. A polygon is called orientable if it is possible to give an order of passing round the vertices so that the end of one side is the start of the next. In this case, the boundary of the polygon is called a polygonal (oriented) closed path in the plane. An oriented simple two-dimensional polygon lies, on passing round the boundary, always either only to the left of the boundary or only to the right of it; the domain of a polygon with self-intersection may lie on different sides of it.
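The even-odd test just described translates directly into a short program. Below is a minimal sketch (Python; the function name and the choice of a horizontal rightward ray are conventions adopted here, not part of the article). It assumes a simple polygon given by its vertices in order and a query point not lying on the boundary.

```python
def point_in_polygon(point, vertices):
    """Even-odd rule: cast a horizontal ray from `point` to the right and
    count crossings with the polygon boundary; an odd count means interior."""
    x, y = point
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        # Does the edge (x1,y1)-(x2,y2) straddle the horizontal line through y?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses that horizontal line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:          # crossing lies on the rightward ray
                inside = not inside  # each crossing toggles the parity
    return inside

# Example: unit square, (0.5, 0.5) is interior and (2, 2) is exterior.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
assert point_in_polygon((0.5, 0.5), square) is True
assert point_in_polygon((2.0, 2.0), square) is False
```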
A simple polygon is called regular (metrically regular) if all its angles are congruent to each other and all the sides are congruent (have equal length). It is possible to circumscribe a circle round a regular polygon and to inscribe a circle in a regular polygon. The radius of the inscribed circle is the inradius of the regular polygon. Regular polygons with the same number of sides are similar to each other. In the table below the radius of the circumscribed circle, the radius of the inscribed circle and the area are given for certain polygons ($\def\a{\alpha}\a$ is the side length of the polygon); a sketch for checking these closed forms against the general formulas is given below.

Number of sides | Radius of the circumscribed circle | Radius of the inscribed circle | Area
$3$ | $\frac{\a\sqrt{3}}{3}$ | $\frac{\a\sqrt{3}}{6}$ | $\frac{\a^2\sqrt{3}}{4}$
$4$ | $\frac{\a\sqrt{2}}{2}$ | $\frac{\a}{2}$ | $\a^2$
$5$ | $\frac{\a}{10}\sqrt{10(5+\sqrt{5})}$ | $\frac{\a}{10}\sqrt{5(5+2\sqrt{5})}$ | $\frac{\a^2}{4}\sqrt{5(5+2\sqrt{5})}$
$6$ | $\a$ | $\frac{\a\sqrt{3}}{2}$ | $\frac{3\a^2\sqrt{3}}{2}$
$8$ | $\frac{\a}{2}\sqrt{2(2+\sqrt{2})}$ | $\frac{\a}{2}(1+\sqrt{2})$ | $2\a^2(1+\sqrt{2})$
$10$ | $\frac{\a}{2}(1+\sqrt{5})$ | $\frac{\a}{2}\sqrt{5+2\sqrt{5}}$ | $\frac{5}{2}\a^2\sqrt{5+2\sqrt{5}}$

A polygon is called convex if it is situated on one side of all lines containing some side of the polygon. A convex polygon is always simple. A regular polygon is convex. In a convex polygon the number of diagonals is equal to $n(n-3)/2$ ($n$ the number of sides); from each vertex it is possible to draw $n-3$ diagonals, which divide the polygon into $n-2$ triangles (Fig. e).

Figure e.

The supplement of an interior angle of a convex polygon is called an exterior angle. The sum of the exterior angles of a convex polygon, taking one for each vertex, forms a complete angle of $360^\textrm{o}$. The exterior angle is the turning angle of the boundary at the vertex. A self-intersecting polygon all sides of which are congruent and all angles of which are congruent is called star-shaped (or star-shaped regular). Star-shaped polygons exist for numbers of sides starting with five; they can be considered as a specified set of diagonals of a regular $n$-gon. Polygons for which only all angles or only all sides are congruent are called semi-regular. A semi-regular polygon with an even number of vertices is called equi-angular semi-regular if all its angles are congruent and the sides are congruent alternately (the simplest example is a rectangle). There is always a circle passing through all vertices of an equi-angular semi-regular polygon. There always exist two circles each of which touches every other side of an equi-angular semi-regular polygon. A semi-regular polygon with an even number of vertices is called equilateral semi-regular if all its sides are congruent and if the angles are congruent alternately (the simplest example is a rhombus). In an equilateral semi-regular polygon a circle can be inscribed so that it touches all the sides; there are two circles each of which passes through alternate vertices of the polygon. The construction of equi-angular and equilateral semi-regular polygons is realized using regular polygons.
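The closed-form entries of the table can be checked against the general formulas for a regular $n$-gon with side $\a$: circumradius $R=\a/(2\sin(\pi/n))$, inradius $r=\a/(2\tan(\pi/n))$, and area $n\a r/2$. A small verification sketch (Python, illustrative only):

```python
import math

def regular_polygon(n, a):
    """Circumradius R, inradius r, and area of a regular n-gon with side a:
    R = a / (2 sin(pi/n)), r = a / (2 tan(pi/n)), area = n*a*r / 2."""
    R = a / (2 * math.sin(math.pi / n))
    r = a / (2 * math.tan(math.pi / n))
    area = n * a * r / 2
    return R, r, area

# Spot-check two closed-form entries from the table (side length 1):
R6, r6, A6 = regular_polygon(6, 1.0)
assert abs(R6 - 1.0) < 1e-12                     # hexagon: R = a
assert abs(A6 - 3 * math.sqrt(3) / 2) < 1e-12    # hexagon area: 3*sqrt(3)/2 * a^2
R8, r8, A8 = regular_polygon(8, 1.0)
assert abs(r8 - (1 + math.sqrt(2)) / 2) < 1e-12  # octagon: r = a(1+sqrt(2))/2
assert abs(A8 - 2 * (1 + math.sqrt(2))) < 1e-12  # octagon area: 2(1+sqrt(2)) a^2
```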
A regular polygon may be constructed by ruler and compasses when its number of sides is $2^m p_1\cdots p_k$, where the $p_i$ are distinct prime numbers of the form $p=2^{2^\sigma} + 1$, $\sigma$ a non-negative integer (Gauss' theorem). Five numbers of this form are known: 3, 5, 17, 257, 65537. No other regular polygons can be constructed by ruler and compasses. Thus, it is possible to construct by ruler and compasses regular $n$-gons if $n=3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24, 30, 32, 34$ and it is impossible for $n=7, 9, 11, 13, 14, 18, 19, 21, 22, 23, 25, 26, 27, 28, 29, 31, 33$. (The regular $30$-gon is constructible, since $30 = 2\cdot 3\cdot 5$ is a power of two times distinct primes of the required form; both lists can be checked with the short program below.) The problem of constructing a regular $n$-gon is equivalent to the problem of dividing the circle into $n$ equal parts. According to another definition, a polygon is regular if its $n$ vertices all lie on one circle while its $n$ sides are tangent to a concentric circle. If the sides surround the centre $d$ times (where $d$ is relatively prime to $n$, and less than $n/2$), the polygon is an $n$-gon of density $d$ and is denoted by the Schläfli symbol $\{n/d\}$. Thus, a convex regular $n$-gon (with $d=1$) is $\{n\}$. When $d>1$, the polygon is a star-polygon. The star-pentagon $\{5/2\}$ is the pentagram; similarly, $\{8/3\}$, $\{10/3\}$, $\{12/5\}$ are the octagram, decagram and dodecagram. When $n=7$ or 9, there are two star-polygons. Fig. f shows the polygons $\{3\}$, $\{4\}$, $\{5\}$, $\{5/2\}$, $\{6\}$, $\{7\}$, $\{7/2\}$, $\{7/3\}$. Gauss' construction for $\{17\}$ is commemorated by a small $\{17/8\}$ on the pedestal of his statue in Braunschweig.

Figure f.

In certain branches of geometry the sides of a polygon are taken to be the lines in which the segments of a closed polygonal line lie. It is possible to have simultaneously inscribed and circumscribed polygons (the vertices of one lie on the sides of the other) and even to have polygons simultaneously circumscribed and inscribed in themselves (see, for example, Configuration). The area of a polygon can be calculated by dividing it into triangles. The area of an oriented simple polygon is given the plus sign if the interior domain remains on the left on traversing the boundary, and the minus sign if it remains on the right. When the boundary of a polygon is a polygonal closed self-intersecting path dividing the plane into regions, then the area of such a polygon can be defined using the so-called coefficient of a region: if a segment is drawn from some point of the exterior domain to a point in the interior of the given region, and the boundary of the polygon intersects this segment $p$ times from left to right and $q$ times from right to left, then $p-q$ is called the coefficient of the region. The coefficient does not depend on the choice of the two points mentioned. The area of an oriented polygon (the area of a polygonal path), in this case, is the sum of the areas of the regions multiplied by their coefficients. If a rectangular coordinate system $x,y$ is introduced in the plane, then the area of an oriented polygon is given by the integral $\int y\, dx$, where $y$ are the ordinates of the points of the boundary of the polygon traversed once.
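A discrete counterpart of this boundary integral is the shoelace formula, sketched below (Python; sign convention as in the text: interior on the left gives the plus sign). For a self-intersecting closed path, the same sum automatically weights each region by its coefficient as defined above, since that coefficient is the winding number of the boundary about the region.

```python
def signed_area(vertices):
    """Shoelace formula: positive if the boundary is traversed with the
    interior on the left (counter-clockwise), negative otherwise."""
    area2 = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        area2 += x1 * y2 - x2 * y1  # cross product of consecutive radius vectors
    return area2 / 2.0

# Counter-clockwise unit square has area +1; reversing orientation flips the sign.
square_ccw = [(0, 0), (1, 0), (1, 1), (0, 1)]
assert signed_area(square_ccw) == 1.0
assert signed_area(list(reversed(square_ccw))) == -1.0
```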
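The Gauss criterion is likewise easy to check mechanically; the following minimal sketch (Python; the function name is ours) reproduces the two constructibility lists above for $3 \le n \le 34$.

```python
def is_constructible_ngon(n):
    """Gauss(-Wantzel) criterion: a regular n-gon is ruler-and-compass
    constructible iff n = 2^m * p1 * ... * pk with distinct Fermat primes p_i."""
    fermat_primes = [3, 5, 17, 257, 65537]  # the five known Fermat primes
    if n < 3:
        return False
    while n % 2 == 0:        # strip the power of two
        n //= 2
    for p in fermat_primes:
        if n % p == 0:
            n //= p
            if n % p == 0:   # a repeated Fermat prime is not allowed
                return False
    return n == 1

constructible = [n for n in range(3, 35) if is_constructible_ngon(n)]
# -> [3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24, 30, 32, 34]
```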
A polygon is called curvilinear if its boundary consists of a finite number of pieces of curves. Such polygons exist on curved surfaces. If the boundary of a polygon on a surface consists of pieces of arcs of geodesic curves of this surface, then the polygon is called geodesic. When a polygon is bounded by segments of asymptotic curves of a surface, then the polygon is called asymptotic, etc. For curvilinear polygons the interior angle at a vertex is defined as the angle that, together with the turning angle of the boundary at that vertex, forms a straight angle (the surface is assumed to be regular). Simple polygons on a surface may have just two sides and two angles; these are called digons or lunes.

A polygon can even be defined as a set of vectors, which are considered as the radius vectors of the vertices of the polygon ($n$-gon); that is, the points (vertices) are formed by vectors. The properties of an $n$-gon are translated into the language of vector algebra, which opens up the possibility of applying algebraic methods in the study of classes of polygons. One can define, in particular, the sum of polygons, the product of a polygon by a number from a commutative field (the characteristic of which is coprime to the number of vertices), cyclic classes of $n$-gons, cyclic mappings and their algebra, etc. In this way one can establish many general properties of $n$-gons (see, for example, [BaSc]).

A polygon is also defined using the notion of the convex hull, which is a convex polygon. Namely, a figure $\Phi$ is called a polygon if it can be decomposed into convex polygons: $$\Phi = \bigcup_i F_i,\quad F_i\cap F_j = \emptyset\quad \textrm{ for } i\ne j,$$ where the $F_i$ are convex polygons.

[Al] P.S. Alexandroff [P.S. Aleksandrov] (ed.) et al. (ed.), Enzyklopaedie der Elementarmathematik, 4. Geometrie, Deutsch. Verlag Wissenschaft. (1967) (Translated from Russian) Zbl 0174.23301
[BaSc] F. Bachmann, Z. Schmidt, "$n$-gons", Univ. Toronto Press (1975) (Translated from German) MR0358536 Zbl 0293.50013
[Co] H.S.M. Coxeter, "Introduction to geometry", Wiley (1969) pp. 90, 303 MR0346644 Zbl 0181.48101
[Ha] J. Hadamard, "Géométrie élémentaire", 1, Moscow (1957) (In Russian; translated from French) Zbl 37.0537.08
[Pe] D.P. Perepelkin, "A course of elementary geometry", 1, Moscow-Leningrad (1948) (In Russian)

Polygon. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Polygon&oldid=25475
This article was adapted from an original article by L.A. Sidorov (originator), which appeared in the Encyclopedia of Mathematics - ISBN 1402006098.
Alternative splicing links histone modifications to stem cell fate decision

Yungang Xu (ORCID: orcid.org/0000-0002-9834-3006), Weiling Zhao, Scott D. Olson, Karthik S. Prabhakara & Xiaobo Zhou

Understanding the embryonic stem cell (ESC) fate decision between self-renewal and proper differentiation is important for developmental biology and regenerative medicine. Attention has focused on mechanisms involving histone modifications, alternative pre-messenger RNA splicing, and cell-cycle progression. However, their intricate interrelations and joint contributions to ESC fate decision remain unclear. We analyze the transcriptomes and epigenomes of human ESCs and five types of differentiated cells. We identify thousands of alternatively spliced exons and reveal their development- and lineage-dependent characterizations. Several histone modifications show dynamic changes in alternatively spliced exons and three are strongly associated with 52.8% of alternative splicing events upon hESC differentiation. The histone modification-associated alternatively spliced genes predominantly function in G2/M phases and the ATM/ATR-mediated DNA damage response pathway for cell differentiation, whereas other alternatively spliced genes are enriched in the G1 phase and pathways for self-renewal. These results imply a potential epigenetic mechanism by which some histone modifications contribute to ESC fate decision through the regulation of alternative splicing in specific pathways and cell-cycle genes. Supported by experimental validations and extended datasets from the Roadmap/ENCODE projects, we exemplify this mechanism by a cell-cycle-related transcription factor, PBX1, which regulates the pluripotency regulatory network by binding to NANOG. We suggest that the isoform switch from PBX1a to PBX1b links H3K36me3 to hESC fate determination through the PSIP1/SRSF1 adaptor, which results in the exon skipping of PBX1. We reveal the mechanism by which alternative splicing links histone modifications to stem cell fate decision.

Embryonic stem cells (ESCs), the pluripotent stem cells derived from the inner cell mass of a blastocyst, provide a vital tool for studying the regulation of early embryonic development and cell fate decision and hold promise for regenerative medicine [1]. The past few years have witnessed remarkable progress in understanding the ESC fate decision, i.e. either pluripotency maintenance (self-renewal) or proper differentiation [2]. The underlying mechanisms have been largely expanded from the core pluripotent transcription factors (TFs) [3], signaling pathways [4,5,6,7,8,9], specific microRNAs [10, 11], and long non-coding RNAs [12] to alternative pre-messenger RNA (mRNA) splicing (AS) [13, 14], histone modifications (HMs) [15,16,17,18,19], and cell-cycle machinery [20]. These emerging mechanisms suggest intricate interrelations and potential joint contributions to ESC pluripotency and differentiation, which, however, remain unknown.

Alternative splicing (AS) is one of the most important pre-mRNA processing steps, increasing the diversity of the transcriptome and proteome in tissue-dependent and development-dependent manners [21]. Estimates based on RNA-sequencing (RNA-seq) revealed that up to 94%, 60%, and 25% of genes in human, Drosophila melanogaster, and Caenorhabditis elegans, respectively, undergo AS [21,22,23,24,25]. AS also provides a powerful mechanism to control the developmental decision in ESCs [26,27,28].
Specific isoforms are necessary to maintain both the identity and activity of stem cells, and switching to different isoforms ensures proper differentiation [29]. In particular, the AS of TFs plays major roles in ESC fate determination, such as FGF4 [30] and FOXP1 [13] for hESCs, and Tcf3 [14] and Sall4 [31] for mouse ESCs (mESCs). Understanding the precise regulation of AS would contribute to the elucidation of ESC fate decision and has attracted extensive efforts [32]. For many years, studies aiming to shed light on this process focused on the RNA level, characterizing the manner by which splicing factors (SFs) and auxiliary proteins interact with splicing signals, thereby enabling, facilitating, and regulating RNA splicing. These cis-acting RNA elements and trans-acting SFs have been assembled into a splicing code [33], revealing a number of AS regulators critical for ESC differentiation, such as MBNL [34] and SON [28]. However, these genetic controls are far from sufficient to explain the faithful regulation of AS [35], especially in cases where tissue-specific AS patterns exist despite identical sequences and the ubiquitous expression of the involved SFs [36, 37], indicating additional regulatory layers leading to specific AS patterns. As expected, we are increasingly aware that splicing is not an isolated process; rather, it occurs co-transcriptionally and is presumably also regulated by transcription-related processes. Emerging provocative studies have unveiled that AS is subject to extensive controls not only from genetic but also epigenetic mechanisms due to its co-transcriptional occurrence [38]. The epigenetic mechanisms, such as HMs, benefit ESCs by providing an epigenetic memory for splicing decisions so that the splicing pattern can be passed on during self-renewal and be modified during differentiation without the requirement of establishing new AS rules [38]. HMs have long been thought to play crucial roles in ESC maintenance and differentiation by determining what parts of the genome are expressed. Specific genomic regulatory regions, such as enhancers and promoters, undergo dynamic changes in HMs during ESC differentiation to transcriptionally permit or repress the expression of genes required for cell fate decision [15]. For example, the co-occurrence of the active (H3K4me3) and repressive (H3K27me3) HMs at the promoters of developmentally regulated genes defines the bivalent domains, resulting in the poised states of these genes [39]. These poised states will be dissolved upon differentiation to allow these genes to be active or more stably repressed depending on the lineage being specified, which enables the ESCs to change their identities [40]. In addition to the above roles in determining transcript abundance, HMs are emerging as major regulators that define transcript structure by determining how the genome is spliced when being transcribed, adding another layer of regulatory complexity beyond the genetic splicing code [41]. A number of HMs, such as H3K4me3 [42], H3K9me3 [43], H3K36me3 [44, 45], and hyperacetylation of H3 and H4 [46,47,48,49,50], have been proven to regulate AS by either directly recruiting chromatin-associated factors and SFs or indirectly modulating transcriptional elongation rate [38]. Together, these studies reveal that HMs determine not only what parts of the genome are expressed, but also how they are spliced. However, few studies have focused on the detailed mechanisms, i.e. epigenetic regulation of AS in the context of cell fate decision.
Additionally, cell-cycle machinery dominates the mechanisms underlying ESC pluripotency and differentiation [20, 51]. Changes of cell fate require progression through the cell cycle. Studies in mESCs [52] and hESCs [53, 54] found that cell fate specification starts in the G1 phase, when ESCs can sense differentiation signals. Cell fate commitment is only achieved in G2/M phases, when pluripotency is dissolved through cell-cycle-dependent mechanisms. However, whether the HMs and AS and their interrelations are involved in these cell-cycle-dependent mechanisms remains unclear. Therefore, it is intuitive to expect that HMs could contribute to ESC pluripotency and differentiation by regulating the AS of genes required for specific processes, such as cell-cycle progression. Nevertheless, we do not even have a comprehensive view of how HMs relate to AS outcome at a genome-wide level during ESC differentiation. Therefore, further studies are required to elucidate the extent to which the HMs are associated with a specific splicing repertoire and their joint contributions to ESC fate decision between self-renewal and proper differentiation. To address these gaps in current knowledge, we performed genome-wide association studies between the transcriptome and epigenome during the differentiation from hESCs (H1 cell line) to five differentiated cell types [15]. These cells cover three germ layers for embryogenesis, adult stem cells, and adult somatic cells, representing multiple lineages of different developmental levels (Additional file 1: Figure S1A). This carefully selected dataset enabled our understanding of the epigenetic regulation of AS in the context of cell fate decision. First, we identified several thousand AS events that are differentially spliced between the hESCs and differentiated cells, including 3513 mutually exclusive exons (MXE) and 3678 skipped exons (SE), which were used for further analyses. These hESC differentiation-related AS events involve ~ 20% of expressed genes and characterize the multiple lineage differentiation. Second, we profiled 16 HMs with chromatin immunoprecipitation sequencing (ChIP-seq) data available for all six cell types, including nine types of acetylation and seven types of methylation. Following the observation that the dynamic changes of most HMs are enriched in AS exons and significantly different between inclusion-gain and inclusion-loss exons, we found that three of the 16 investigated HMs (H3K36me3, H3K27ac, and H4K8ac) are strongly associated with 52.8% of hESC differentiation-related AS exons. We then linked the association between HMs and AS to cell-cycle progression based on the additional discovery that the AS genes predominantly function in cell-cycle progression. More intriguingly, we found that HMs and AS are associated in G2/M phases and involved in ESC fate decision through promoting pluripotency state dissolution, repressing self-renewal, or both. In particular, with experimental validations, we demonstrated an H3K36me3-regulated isoform switch from PBX1a to PBX1b, which is implicated in hESC differentiation by attenuating the activity of the pluripotency regulatory network. Collectively, we presented a mechanism conveying the HM information into cell fate decision through the regulation of AS, which will drive extensive studies on the involvement of HMs in cell fate decision via determining the transcript structure rather than only the transcript abundance.
AS characterizes hESC differentiation

The role of AS in the regulation of ES cell fates adds another notable regulatory layer to the known mechanisms that govern stemness and differentiation [55]. To screen the AS events associated with ES cell fate decision, we investigated a panel of RNA-seq data during hESC (H1 cell line) differentiation [15]. We considered four cell types directly differentiated from H1 cells, including trophoblast-like cells (TBL), mesendoderm (ME), neural progenitor cells (NPC), and mesenchymal stem cells (MSC). We also considered IMR90, a cell line of primary human fetal lung fibroblasts, as an example of terminally differentiated cells. These cells represent five cell lineages of different developmental levels (Additional file 1: Figure S1A). We identified thousands of AS events of all types whose changes of "percent spliced in" (ΔPSIs) are > 0.1 (inclusion-loss) or < − 0.1 (inclusion-gain) and whose false discovery rates (FDRs) are < 0.05, based on the measurement used by rMATS [56] (Additional file 1: Figure S1B and Table S1, see "Methods"; a schematic of this PSI bookkeeping is given at the end of this section). We implemented further analyses only on the most common AS events, including 3513 MXEs and 3678 SEs, which are referred to as hESC differentiation-associated AS exons (Additional file 1: Figure S1C and Additional file 2: Table S2). These hESC differentiation-related AS exons possess typical properties, as previously described [57, 58], as follows: (1) most of their hosting genes are not differentially expressed between hESCs and differentiated cells (Additional file 1: Figure S1D); (2) they tend to be shorter, with much longer flanking introns, compared to the average length of all exons and introns (RefSeq annotation), respectively (Additional file 1: Figure S2A, B); (3) the arrangement of shorter AS exons surrounded by longer introns is consistent across cell lineages and AS types (Additional file 1: Figure S2C, D); and (4) the lengths of AS exons are more often divisible by three to preserve the reading frame (Additional file 1: Figure S2E). During hESC differentiation, about 20% of expressed genes undergo AS (2257 genes for SE and 2489 genes for MXE), including previously known ESC-specific AS genes, such as the pluripotency factor FOXP1 [13] (Fig. 1a) and the Wnt/β-catenin signalling component CTNND1 [14] (Fig. 1b). These hESC differentiation-related AS genes include many TFs, transcriptional co-factors, chromatin remodelling factors, housekeeping genes, and bivalent domain genes implicated in ESC pluripotency and development [39] (Fig. 1c and Additional file 1: Figure S1C). Enrichment analysis based on a stemness gene set [59] also shows that hESC differentiation-related AS genes are enriched in the regulators or markers that are most significantly associated with stemness signatures of ESCs (Additional file 1: Figure S3A, see "Methods").

AS characterizes the hESC differentiation. a, b Sashimi plots show two previously known ESC-specific AS events, FOXP1 (a) and CTNND1 (b). Inset histograms show the PSIs (Ψ) of the AS exons in all cell types based on the MISO estimation. c The bar graph shows that the numbers of total AS events and lineage-specific AS events increase coordinately with the developmental levels. A higher developmental level induces more (lineage-specific) AS events. MXE.sp. and SE.sp. indicate the percentage of lineage-specific AS events. d Heat maps show the differential "percent spliced in" (ΔPSIs) of SE (left) and MXE (right) AS events (rows) for each cell lineage (columns).
For MXE events, the ΔPSIs are those of the upstream exons. e, f The hosting genes of MXE (e) and SE (f) AS events characterize cell lineages. Black and white bars refer to the common AS genes shared by all cell lineages, while the colour bars indicate the lineage-specific AS genes. The length of the colour bars is proportional to the percentage of lineage-specific genes. Dark fills indicate the inclusion-gain events, while light fills indicate the inclusion-loss events. The numbers in the bars are the proportions of the corresponding parts; the numbers in the parentheses are the numbers of common AS genes or lineage-specific AS genes of each lineage. Gain or loss for MXE events refers to the upstream exons. Also see Additional file 1: Figures S1–S3

Clustering on AS events across cell lineages shows lineage-dependent splicing patterns (Fig. 1d). Upon hESC differentiation, the SE exons tend to lose their inclusion levels (inclusion-loss), while the upstream exons of MXE events are likely to gain their inclusion levels (inclusion-gain) (Fisher's exact test, p = 3.83E-107). The numbers of AS events increase accordingly with the developmental level following hESC differentiation (Fig. 1c). For example, the differentiation to ME involves the fewest AS events and ME presents the most stem-cell-like AS profiles, while IMR90 has the most AS events and exhibits the AS profiles most similar to adult cells (Fig. 1c, d). Inter-lineage comparisons show, on average, that 42.0% of SE and 56.4% of MXE events (Fig. 1c, d and Additional file 1: Figure S3B, C), involving 29.6% and 38.6% of AS hosting genes (Fig. 1e, f and Additional file 1: Figure S3D, E), are lineage-specific. In contrast, only 0.65% of SE and 0.14% of MXE events (Additional file 1: Figure S3B, C), involving 0.49% and 1.52% of AS hosting genes, are shared by all lineages (Fig. 1e, f and Additional file 1: Figure S3D, E). Similar trends are observed from pairwise comparisons (Additional file 1: Figure S3F). Furthermore, one-third of AS genes (n = 881) have both MXE and SE events (Additional file 1: Figure S3G). Only four genes are common across all cell lineages and AS types, of which the AS events of Ctnnd1 and Mbd1 have been reported to regulate mESC differentiation [14]. Together, these results demonstrate that AS depicts lineage-dependent and developmental level-dependent characterizations of hESC differentiation.
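As a concrete illustration of the splicing bookkeeping used in this section, here is a minimal sketch (Python). The read counts and the FDR value are toy placeholders; rMATS derives the FDR from its own statistical model, which is not reproduced here.

```python
def psi(inclusion_reads, skipping_reads, inc_len, skp_len):
    """Percent spliced in (rMATS-style): length-normalized fraction of reads
    supporting exon inclusion among all reads informative for the event."""
    inc = inclusion_reads / inc_len
    skp = skipping_reads / skp_len
    return inc / (inc + skp)

# ΔPSI for one skipped exon between H1 and a derived cell type (toy counts;
# the longer effective length for the inclusion isoform is an assumption
# reflecting that it spans two junctions):
psi_h1 = psi(inclusion_reads=180, skipping_reads=20, inc_len=2, skp_len=1)
psi_derived = psi(inclusion_reads=60, skipping_reads=90, inc_len=2, skp_len=1)
delta_psi = psi_h1 - psi_derived

# An event is kept as hESC differentiation-related if |ΔPSI| > 0.1 and FDR < 0.05.
fdr = 0.01  # placeholder; in practice taken from rMATS' output
is_differentiation_related = abs(delta_psi) > 0.1 and fdr < 0.05
```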
Dynamic changes of HMs predominantly occur in AS exons

In ESCs, epigenetic mechanisms contribute mainly to maintaining the expression of pluripotency genes and the repression of lineage-specific genes in order to avoid exiting from stemness. Upon differentiation, epigenetic mechanisms orchestrate the expression of developmental programs spatiotemporally to ensure the heritability of existing or newly acquired phenotypic states. Though epigenetic signatures are mainly found to be enriched in promoters and enhancers, it has become increasingly clear that they are also present in gene bodies, especially in exon regions, implying a potential link of epigenetic regulation to pre-mRNA splicing [60, 61]. Consistent with previous reports [36, 37, 62], we also observed that few of the involved SFs are differentially expressed during H1 cell differentiation (Additional file 1: Figure S3H, see "Methods"), which confirms the existence of an additional layer of epigenetic regulation of AS. However, the extent to which AS is epigenetically regulated and how these AS genes contribute to the cell fate decision are poorly understood.

We focused on 16 HMs, including nine histone acetylations and seven histone methylations that have available data in all six cell types (see "Methods"), and aimed to reveal their associations with AS genes during hESC differentiation. To investigate whether the dynamic changes of these HMs upon cell differentiation consistently prefer the AS exons (Fig. 2a, b), we profiled the differential HM patterns around the hESC differentiation-associated AS exons and around the same number of randomly selected constitutive splicing (CS) exons of the same AS genes for each differentiation lineage. We compared the changes of ChIP-seq read counts (normalized Δ read counts, see "Methods") in ± 150-bp regions around the splice sites upon hESC differentiation (Fig. 2c and Additional file 1: Figure S4, see "Methods"). Except for a small number of cases (with black dots or boxes in Fig. 2d), most HMs changed more significantly around AS exons than around constitutive exons upon hESC differentiation (Mann–Whitney–Wilcoxon test, p ≤ 0.05, Fig. 2d and Additional file 1: Figure S4). Nevertheless, some HMs displayed strong links to AS, such as H3K79me1 and H3K36me3, while others had only weak link strengths, such as H3K27me3 and H3K9me3 (Fig. 2d). This result is consistent with the fact that the former are involved in active expression and AS regulation [38, 44, 63], while the latter are the epigenetic marks of repressed regions and heterochromatin [64]. The link strengths are presented as the −log10 p values testing whether the HM changes consistently prefer the AS exons across different cell lineages and AS types (Fig. 2d sidebar graph, see "Methods"). Taken together, these results, from a global view, revealed a potential regulatory link from HMs to RNA splicing, of which some links are strong while others are weak.

Dynamic changes of HMs predominantly occur in AS exons. a, b Genome browser views of representative H3K36me3 changes in MXE (exemplified by FGFR2) and SE (exemplified by CDC27) events, respectively, showing that the changes of H3K36me3 around the AS exons (blue shading) are more significant than around the flanking constitutive exons (gray shading) in four H1-derived cell types and IMR90. The tracks of H1 are duplicated as yellow shadings overlapping the tracks of the derived cells (green) for a better comparison. c Representative profiles of HM changes (normalized Δ read number) around the AS exons and randomly selected constitutive splicing (CS) exons upon hESC differentiation, shown as the average of all cell lineages pooled together. The ± 150-bp regions (exons and flanking introns) around the splice sites were considered and binned at 15 bp to produce the curves. It shows that the changes of HMs are more significant around AS exons than around constitutive exons, especially in exonic regions (gray shading). The p values, Mann–Whitney–Wilcoxon test. d The statistical significances for changes of all 16 HMs in all cell lineages and with all lineages pooled together (pooled), represented as the −log10 p values based on the Mann–Whitney–Wilcoxon test. The detailed profiles are provided in Additional file 1: Figure S4. Black boxes indicate the cases in which HMs around constitutive exons change more significantly than around AS exons, corresponding to the red-shaded panels in Additional file 1: Figure S4.
Sidebars represent the significance of whether the changes of HMs are consistently enriched in AS exons across cell lineages, showing the link strength between AS and HMs and represented as the −log10 p value based on Fisher's exact test. The yellow vertical line indicates the significance cutoff of 0.05. Also see Additional file 1: Figure S4

Three HMs are significantly associated with AS upon hESC differentiation

To quantitatively associate the HMs with AS, all ChIP-seq data were processed for narrow peak calling using MACS2 [65]. For each AS exon of each differentiation lineage, we then quantified the differential inclusion levels, i.e. the changes of "percent spliced in" (ΔPSIs, Additional file 1: Figure S1B), and the differential HM signals, i.e. the changes of normalized narrow peak height of ChIP-seq (ΔHMs, Additional file 1: Figure S5A, see "Methods"), between H1 and differentiated cells. We observed significant differences in all HM profiles (except H3K27me3, Additional file 1: Figure S5B) between the inclusion-gain and inclusion-loss exons across cell lineages and AS types (Mann–Whitney–Wilcoxon test, p ≤ 0.05) (Fig. 3a and Additional file 1: Figure S5B). However, three independent correlation tests showed only weak global quantitative associations between the ΔPSIs and ΔHMs for some HMs (Fig. 3c and Additional file 1: Figure S5C), including eight HMs for MXE AS exons and eight HMs for SE AS exons. The weak associations may indicate that only subsets of AS exons are strongly associated with HMs and vice versa, which is consistent with a recent report [66].

A subset of HMs and AS are strongly associated upon hESC differentiation. a Representative profiles of HM (H3K36me3) changes (normalized Δ read number) around the inclusion-gain (red lines) and inclusion-loss (blue lines) AS exons, as well as randomly selected constitutive splicing (CS) exons (black lines), for both MXE (left) and SE (right) AS events. It shows that HM changes are significantly different between inclusion-gain and inclusion-loss AS exons (p values, Mann–Whitney–Wilcoxon test). Additional file 1: Figure S5B provides the whole set of significances of all HMs across AS types and cell lineages. b Pearson correlation test between differential HM signals (ΔHMs) and differential inclusion levels (ΔPSIs), taking H3K36me3 as an example. Additional file 1: Figure S5C provides the correlation test results of other HMs based on two more tests. c A representative k-means cluster shows a subset of SE AS events having a negative correlation between the ΔPSIs and the ΔHMs of H3K36me3. Additional file 1: Figures S5D and S6 provide all the clustering results. d Scatter plot shows that HM-associated AS events display significant correlations between the ΔPSIs and the ΔHMs upon hESC differentiation, taking H3K27ac-associated (positively) MXE events as an example. Also see Additional file 1: Figures S5, S6

To explore the subsets of highly associated AS exons and corresponding HMs, we performed k-means clustering on the sets of inclusion-gain and inclusion-loss exons of SE and MXE events, separately, taking the ΔHMs of the eight identified HMs as epigenetic features (Fig. 3c and Additional file 1: Figures S5D and S6, see "Methods"; a schematic of this clustering-then-correlation strategy is given at the end of this subsection). We obtained three subsets of HM-associated SE exons and three subsets of HM-associated MXE exons (Additional file 3: Table S3). The three HM-associated SE subsets include 180, 664, and 1062 exons and are negatively associated with H4K8ac (Additional file 1: Figure S6), negatively associated with H3K36me3 (Fig. 3c), and positively associated with H3K36me3 (Additional file 1: Figure S6), respectively.
The three HM-associated MXE subsets include 99, 821, and 971 exons and are positively associated with H3K27ac (Fig. 3d), negatively associated with H3K36me3 (Additional file 1: Figure S6), and positively associated with H3K36me3 (Additional file 1: Figure S6), respectively. The exons of each subset show significant correlations between their ΔPSIs and ΔHMs upon hESC differentiation (Fig. 3d). These HM-associated AS exons account for 52.8% of hESC differentiation-related AS events on average (Additional file 1: Figure S5E). Of the three AS-associated HMs, H3K36me3 has both positive and negative correlations with AS exons. This is consistent with the fact that H3K36me3 has dual functions in regulating AS through two different chromatin-adapter systems, PSIP1/SRSF1 [45] and MRG15/PTBP1 [44]. The former increases the inclusion levels of target AS exons, whereas the latter decreases the inclusion levels [38]. As expected, 139 and 11 of our identified H3K36me3-associated AS genes have been reported to be regulated by SRSF1 [67, 68] (Additional file 1: Figure S5F) and PTBP1 [69] (Additional file 1: Figure S5G), respectively. Taken together, our analysis showed that more than half (52.8%) of hESC differentiation-associated AS events are significantly associated with three of the 16 HMs during hESC differentiation, including H3K36me3, H3K27ac, and H4K8ac.
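A minimal sketch of the clustering-then-correlation strategy described in this subsection (Python). The arrays are synthetic stand-ins for the real ΔPSI vector and ΔHM feature matrix, and the cluster number and thresholds are illustrative, not the values fixed in "Methods".

```python
import numpy as np
from scipy import stats
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_exons, n_hms = 500, 8
delta_psi = rng.uniform(-0.8, 0.8, n_exons)   # ΔPSI per AS exon (toy values)
delta_hm = rng.normal(size=(n_exons, n_hms))  # matching ΔHM features (toy values)

# 1) Global per-HM association: typically weak, as reported above.
global_tests = [stats.pearsonr(delta_psi, delta_hm[:, j]) for j in range(n_hms)]

# 2) k-means on the ΔHM features to partition exons into candidate subsets.
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(delta_hm)

# 3) Re-test the ΔPSI~ΔHM correlation within each subset; subsets showing a
#    significant correlation for some HM are the "HM-associated" exon subsets.
for cluster in np.unique(labels):
    members = labels == cluster
    r, p = stats.pearsonr(delta_psi[members], delta_hm[members, 0])  # e.g. one HM
    associated = p < 0.05  # a real analysis would also require an effect-size cutoff
```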
HM-associated AS genes predominantly function in G2/M phases to facilitate hESC differentiation

Epigenetic mechanisms have been proposed to be dynamic and play crucial roles in human ESC differentiation [15, 16]. Given the aforementioned associations between HMs and AS, and the well-established links between AS and hESC differentiation, we hypothesized that the three HMs (H3K36me3, H3K27ac, and H4K8ac) may contribute to stem cell differentiation through their associated AS events. To test our hypothesis and gain more insight into the differences between the HM-associated and HM-unassociated AS events, we performed comparative functional analyses of their hosting genes, revealing that HMs are involved in alternatively splicing the core components of the cell-cycle machinery and related pathways to regulate stem cell pluripotency and differentiation.

We found that HMs prefer to be associated with even shorter AS exons (Additional file 1: Figure S7A, p < 0.001, Student's t-test), though AS exons are already shorter than the average length of all exons (Additional file 1: Figure S2A). HM-associated genes (n = 2125) show more lineage specificity, i.e. more genes (49.76% vs 29.6% of MXE or 38.6% of SE genes) are lineage-specific (Additional file 1: Figures S7B and S3D, E), regardless of whether IMR90 is included or not (Additional file 1: Figure S7C). Only a few HM-associated genes are shared by different cell lineages, even in pairwise comparisons (Additional file 1: Figure S7D); the most common shared genes are lineage-independent housekeeping genes (Additional file 1: Figure S7E). These results suggest that HM-associated AS genes contribute more to lineage specificity. In addition, the HM-associated AS genes (966 of 2125) are more enriched in stemness signatures than unassociated AS genes (429 of 1057) (Fig. 4a). TF binding enrichment analysis shows that HM-associated AS genes are likely to be regulated by TFs involved in cell differentiation, whereas HM-unassociated AS genes are likely to be regulated by TFs involved in cell proliferation and growth (Fig. 4b). All these results suggest that HM-associated and HM-unassociated AS genes function differently during hESC differentiation.

HM-associated AS genes predominantly function in G2/M cell-cycle phases contributing to hESC differentiation. a HM-associated AS genes are enriched more significantly in stemness signatures than HM-unassociated AS genes. b TF binding enrichment shows that HM-associated AS genes prefer to be regulated by TFs involved in cell differentiation, while the HM-unassociated AS genes are prone to be regulated by TFs involved in cell proliferation and growth. c GO enrichment analysis shows that HM-associated AS genes are enriched more significantly in cell-cycle progression than HM-unassociated AS genes, shown as the −log10 p values after FDR (≤ 0.05) adjustment. d The significant enrichment of HM-associated AS genes in the cell cycle is consistent across cell lineages, with MSC as an exception in which no significant enrichment was observed. e The top 20 enriched functions show that HM-associated AS genes involved in cell-cycle progression prefer to function in G2/M phases and DNA damage response. f The canonical pathway enrichment shows that ATM/ATR-mediated DNA damage response is the top enriched pathway of HM-associated AS genes. The vertical lines (yellow) indicate the significance cutoff of 0.05. Also see Additional file 1: Figures S7, S8

Gene Ontology (GO) enrichment analysis shows that more than half of the HM-associated AS genes (1120 of 2125) function in cell-cycle progression and exhibit more significant enrichment than do HM-unassociated AS genes (376 of 1057, Fig. 4c, d and Additional file 1: Figure S8A). The significance of the top enriched GO term (GO:0007049, cell cycle) is consistent across cell lineages, although HM-associated AS genes exhibit more lineage specificity and few of them are shared among lineages (Additional file 1: Figures S7B–D and S8B). These results suggest the involvement of HMs in AS regulation of the cell-cycle machinery, which has been reported to be exploited by stem cells to control their cell fate decision [20]. Further study of the top enriched cell-cycle AS genes (Fig. 4d and Additional file 1: Figure S8A) shows that HM-associated (n = 282) and HM-unassociated AS genes (n = 150) play roles in different cell-cycle phases and related pathways. The former are prone to function in G2/M phases and DNA damage response (Fig. 4e, f). This indicates that HMs contribute to cell differentiation, at least partially, via AS regulation in these phases, which is consistent with the fact that inheritance of HMs in daughter cells occurs during the G2 phase [20]. The latter play roles in the G1 phase, cell-cycle arrest, and Wnt/β-catenin signalling (Additional file 1: Figure S8C, D). Since cell fate choices seem to occur, or at least be initiated, during the G1/S transition [53], while cell fate commitment is achieved in G2/M [54], it could be rational for stem cells to change their identity during the G2 phase, when HMs are reprogrammed [20]. Intriguingly, the top enriched pathway of HM-associated AS genes is "ATM/ATR-mediated DNA damage response," which is activated in S/G2 phases and has recently been reported as a gatekeeper of the pluripotency state dissolution (PSD) that participates in allowing hESC differentiation [54]. Together with our previous results [19], this suggests the presence of a combined mechanism involving HMs and AS, wherein HMs facilitate the PSD and cell fate commitment by alternatively splicing the key components of the ATM/ATR pathway. Additionally, many cell-cycle TF genes are involved in the top enriched HM-associated AS gene set. The pre-B-cell leukaemia transcription factor 1 (PBX1) is one of these genes that contribute to cell-cycle progression and is discussed in the next section. Taken together, we suggest that three of the 16 HMs, acting in positive or negative ways, affect the AS of subsets of genes and thereby contribute to hESC differentiation in a cell-cycle phase-dependent manner. These results suggest a potential mechanistic model connecting the HMs, AS regulation, and cell-cycle progression with the cell fate decision.
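The enrichment comparisons in this section boil down to repeated contingency-table tests. A minimal sketch (Python) with one count pair from the text (1120 of 2125 HM-associated AS genes in the cell-cycle term) and hypothetical background counts; the real analysis also FDR-adjusts p values across all tested terms.

```python
from scipy import stats

# 2x2 table for one GO term (e.g. GO:0007049, "cell cycle"):
#                      in term    not in term
# HM-associated genes    1120     2125 - 1120   (counts from the text)
# background genes       3000        14000      (hypothetical background)
odds_ratio, p_value = stats.fisher_exact(
    [[1120, 2125 - 1120], [3000, 14000]], alternative="greater"
)

# Repeating this per term and per gene set (HM-associated vs HM-unassociated),
# then adjusting the p values for multiple testing, yields the -log10 p values
# of the kind plotted in Fig. 4c, d.
```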
Splicing of PBX1 links H3K36me3 to hESC fate decision

The past few years have identified key factors required for maintaining the pluripotent state of hESCs [70, 71], including NANOG, OCT4 (POU5F1), SOX2, KLF4, and c-MYC, the combination of which is called the Yamanaka factors and is sufficient to reprogram somatic cells into induced pluripotent stem cells (iPSCs) [72]. These factors appear to activate a transcriptional network that endows cells with pluripotency [73]. The above integrative analyses showed strong links between three HMs and RNA splicing, revealing a group of epigenetically regulated AS genes involved in the cell-cycle machinery. PBX1 is one of the genes whose AS is positively associated with H3K36me3 (Fig. 5a, b). Its protein is a member of the TALE (three-amino acid loop extension) family of homeodomain transcription factors [74, 75] and is well known for its functions in lymphoblastic leukaemia [76,77,78,79] and several cancers [80,81,82,83,84,85,86,87,88,89]. PBX1 also plays roles in regulating developmental gene expression [90], maintaining stemness and self-renewal [80, 91, 92], and promoting the cell-cycle transition to the S phase [93]. Additionally, multiple lines of evidence obtained in vivo and in vitro have highlighted its functions as a pioneer factor [86, 94]. However, few studies have distinguished the distinct functions of its different isoforms.

Isoform switch from PBX1a to PBX1b during hESC differentiation. a Genome browser view shows the AS event and H3K36me3 signals of PBX1 upon hESC differentiation. The green horizontal bars below the ChIP-seq tracks indicate the narrow peaks called by MACS2. b The inclusion level of exon 7 of PBX1 is significantly correlated with the H3K36me3 signals over this exon across cell lineages. c The sequence differences of the three protein isoforms of PBX1 and the main functional domains. d The relative expressions of PBX1a and PBX1b in 56 cells/tissues, representing the differential expressions of the two isoforms in three groups based on their developmental states. e The expression levels of the NANOG and OCT4 genes are negatively correlated with the expression of PBX1b. f The expression levels of PSIP1 and SRSF1 show significant positive correlations with the expression level of PBX1a. Also see Additional file 1: Figures S9, S10

PBX1 has three isoforms [95], including PBX1a, PBX1b, and PBX1c (Fig. 5c and Additional file 1: Figure S9A). PBX1a and PBX1b are produced by the AS of exon 7 (Fig. 5a) and attract most of the research attention on PBX1.
PBX1b retains the same DNA-binding homeodomain as PBX1a, but alters 14 amino acids (positions 334–347) and truncates the remaining 83 amino acids at the C-terminus of PBX1a (Fig. 5c and Additional file 1: Figure S9A). This C-terminal alteration of PBX1a has been reported to affect its cooperative interactions with HOX partners [96], which may impart different functions to these two isoforms. Here we reveal an H3K36me3-regulated isoform switch between PBX1a and PBX1b, which functions upstream of the pluripotency transcriptional network to link H3K36me3 with ESC fate decision. We first observed differential transcript expression of these two isoforms between hESCs and differentiated cells, wherein PBX1a was predominantly transcribed in hESCs, while PBX1b was predominantly induced in differentiated cells (Fig. 5a and Additional file 1: Figure S9B). The same trend was also observed in an extended dataset of 56 human cell lines/tissues (Fig. 5d) from the Roadmap [97] and ENCODE [98] projects (Additional file 4: Table S4). Additionally, we did not observe significantly different expression of total PBX1 or of three other PBX family members across cell types (Additional file 1: Figure S9C, fold change < 2), indicating that the isoform switch of PBX1, rather than the differential expression of its family members, plays the more important role during hESC differentiation. To further test the possible mechanism by which PBX1b contributes to stem cell differentiation, we investigated the transcription levels of the Yamanaka factors. Of these TFs, NANOG is activated by direct promoter binding of PBX1 and KLF4 and is essential for stemness maintenance [91, 99]. Consistently, all these core factors are repressed in differentiated cells where PBX1b is highly expressed (Additional file 1: Figure S9D–G), even though PBX1a is still expressed. Based on the 56 human cell lines/tissues, we also observed significant negative correlations between the expression of the most important pluripotency factors (NANOG and OCT4) and PBX1b (Fig. 5e), as well as positive correlations between these two factors and PBX1a (or the inclusion level of exon 7, Additional file 1: Figure S10A, B). Consistent with previous reports showing that PBX1a and PBX1b differ in their ability to activate or repress the expression of reporter genes [100, 101], we hypothesize that PBX1a promotes the activity of the pluripotency regulatory network by promoting the expression of NANOG, whereas PBX1b may attenuate this activity by competitively binding and regulating the same target gene, since PBX1b retains the same DNA-binding domain as PBX1a. These observations strongly suggest that the switch from PBX1a to PBX1b is a mechanism by which PBX1 contributes to hESC differentiation via regulation of the pluripotency regulatory network. The inclusion level (PSI) of exon 7 of PBX1 shows significantly positive correlations with the surrounding H3K36me3 signals in hESCs and differentiated cells (Fig. 5b), suggesting a potential role of H3K36me3 in regulating the isoform switch between PBX1a and PBX1b. To investigate this regulatory role, we focused on two previously characterized chromatin-adaptor complexes, MRG15/PTBP1 [44] and PSIP1/SRSF1 [45], which endow H3K36me3 with dual functions in AS regulation [38]. Based on the 56 cell lines/tissues from the Roadmap/ENCODE projects, we first found significant positive correlations between the expression of PBX1a (or the inclusion level of exon 7) and PSIP1/SRSF1 (Fig.
5f), but not with MRG15/PTBP1 (Additional file 1: Figure S10C, D). This result suggests that the AS of PBX1 is epigenetically regulated by H3K36me3 through the PSIP1/SRSF1 adaptor system, which is strongly supported by a recent report in HeLa cells [67]. Overexpression of SRSF1 in HeLa cells introduces a PSI increase of 0.18 for exon 7 of PBX1 (chr1:164,789,308–164,789,421, NCBI37/hg19 genome assembly) based on RNA-seq (Table S1 of [67]). Additionally, this exon was one of the 104 AS exons that were further validated using radioactive reverse transcription polymerase chain reaction (RT-PCR) (Table S2 of [67]). Their results showed that exon 7 of PBX1 is indeed a splicing target of SRSF1, supporting our conclusions. We then validated the above hypotheses in MSCs and IMR90 cells, since these two cell types show the most significant difference from H1 cells with regard to our hypotheses (Fig. 5b). We cultured H1 cells and IMR90 cells, and induced H1 cells to differentiate into MSCs (H1-MSCs, see "Methods" for details). Additionally, we included two other sources of MSCs, one derived from human bone marrow (hBM-MSCs) and the other derived from adipose tissue (hAT-MSCs) (see "Methods" for details). Consistent with the results from RNA-seq, the same expression patterns of the Yamanaka factors in H1, MSCs, and IMR90 cells were observed using quantitative RT-PCR (qRT-PCR) and western blot (Fig. 6a), which confirmed the pluripotent state of H1 cells and the differentiated states of the other cell types. We then detected the isoform switch from PBX1a to PBX1b in our cultured cells, which is consistent at both the mRNA and protein levels (Fig. 6b and Additional file 1: Figure S10E) and was further confirmed by western blot using a PBX1b-specific antibody (anti-PBX1b) (Fig. 6b bottom and Additional file 1: Figure S10E iii). These results verify that PBX1b is significantly induced in differentiated cells, in which PBX1a is significantly reduced.

Fig. 6 Isoform switch of PBX1 links H3K36me3 to hESC fate decision. a qRT-PCR and western blot show the expression levels of the Yamanaka factors in H1, MSC, and IMR90 cells. Whiskers denote the standard deviations of three replicates. b RT-PCR and western blot show the isoform switches between PBX1a and PBX1b from H1 cells to differentiated cells. c i. ChIP-PCR shows the differential binding of PBX1b to the NANOG promoter in H1 cells and differentiated cells; ii. ChIP-PCR shows the reduced H3K36me3 signal in differentiated cells; iii. ChIP-PCR shows the differential recruitment of PSIP1 to exon 7 of PBX1. d RIP-PCR shows the differential recruitment of SRSF1 around exon 7 of PBX1. e Co-IP shows the overall physical interaction between PSIP1 and SRSF1 in all studied cell types. f The mechanism by which H3K36me3 is linked to cell fate decision by regulating the isoform switch of PBX1, which functions upstream of the pluripotency regulatory network. Also see Additional file 1: Figures S9, S10

We also validated the mechanism by which the splicing of PBX1 links H3K36me3 to stem cell fate decision. We first confirmed that PBX1b binds to the promoter of NANOG at the same region where PBX1a binds, and that the binding signals (ChIP-PCR) were high in differentiated cells but very low in H1 stem cells (Fig. 6c i and Additional file 1: Figure S10F i). Consistent with the results from ChIP-seq, we also observed reduced H3K36 tri-methylation around exon 7 of PBX1 in differentiated cells by ChIP-PCR assay (Fig. 6c ii and Additional file 1: Figure S10F ii).
Furthermore, the chromatin factor PSIP1 showed a high binding signal only in H1 stem cells (Fig. 6c iii and Additional file 1: Figure S10F iii), recruiting the SF SRSF1 to PBX1 exclusively in H1 stem cells (Fig. 6d and Additional file 1: Figure S10G), even though physical binding between these two factors was detected in all cell types (Fig. 6e). All these experimental results suggest that, upon differentiation, stem cells reduce H3K36 tri-methylation and may thereby attenuate the recruitment of the PSIP1/SRSF1 adaptor around exon 7 of PBX1, leading to the exclusion of exon 7 and high expression of PBX1b in differentiated cells. High expression of PBX1b may attenuate the activity of PBX1a in promoting the pluripotency regulatory network. Taken together, we suggest that H3K36me3 regulates the AS of PBX1 via the PSIP1/SRSF1 adaptor system, leading to the isoform switch from PBX1a to PBX1b during hESC differentiation. Subsequently, PBX1b competitively binds to the NANOG promoter and abolishes the binding of PBX1a. This competitive binding attenuates the pluripotency regulatory network, repressing self-renewal and consequently facilitating differentiation (Fig. 6f). These findings reveal how PBX1 contributes to cell fate decision and exemplify the mechanism by which AS links HMs to stem cell fate decision.

Discussion

ESCs provide a vital tool for studying the regulation of early embryonic development and cell fate decision [1]. In addition to the core pluripotency regulatory network, emerging evidence has revealed other processes regulating ESC pluripotency and differentiation, including HMs, AS, the cell-cycle machinery, and signalling pathways [54]. Here, we connected these previously separate avenues of investigation, beginning with the discovery that three of 16 HMs are significantly associated with more than half of the AS events upon hESC differentiation. Further analyses implicated the association of HMs, AS regulation, and cell-cycle progression with hESC fate decision. Specifically, HMs orchestrate a subset of AS outcomes that play critical roles in cell-cycle progression via related pathways, such as the ATM/ATR-mediated DNA damage response [19], and TFs, such as PBX1 (Additional file 1: Figure S10H). In this way, HMs, AS regulation, and signalling pathways converge on the cell-cycle machinery, which has been claimed to rule pluripotency [20]. Although epigenetic signatures, such as HMs, are mainly enriched in promoters and intergenic regions, it has become increasingly clear that they are also present in the gene body, especially in exon regions. This indicates a potential link between epigenetic regulation and the predominantly co-transcriptional occurrence of AS. Thus far, H3K36me3 [44, 45], H3K4me3 [42], H3K9me3 [43], and the acetylation of H3 and H4 [46,47,48,49,50] have been revealed to regulate AS, either via chromatin-adaptor systems or by altering the Pol II elongation rate. Here, we investigated the extent to which HMs could be associated with AS by integrative analyses of both transcriptome and epigenome data during hESC differentiation. We found that three HMs are significantly associated with about half of the AS events. By contrast, a recent report showed that only about 4% of differentially regulated exons among five human cell lines are positively associated with three promoter-like epigenetic signatures, namely H3K9ac, H3K27ac, and H3K4me3 [66]. Like that report, we also found a positive association of H3K27ac with a subset of AS events.
However, our results differ regarding the other two HMs that we identified as associated with AS. In our study, H3K36me3 is associated with the largest number of the identified HM-associated AS events, either positively or negatively. This is reasonable, since H3K36me3 is a mark of actively transcribed regions [63] and has been reported to play dual roles in AS regulation through two different chromatin-adaptor systems, PSIP1/SRSF1 [45] and MRG15/PTBP1 [44]. SRSF1 is an SF that increases the inclusion of its target AS exons, whereas PTBP1 decreases the inclusion levels of its regulated AS exons. Therefore, exons regulated by the PSIP1/SRSF1 adaptor system show positive correlations with H3K36me3, while exons regulated by MRG15/PTBP1 show negative correlations. Our results are consistent with this, showing correlations in both directions between different sets of AS events and H3K36me3. Many of these AS events in our study have been validated by other studies (Additional file 1: Figure S5F, G). H4K8ac is associated with the fewest AS events in our results. Although rarely studied, H4K8ac is known to act as a transcriptional activator in both promoters and transcribed regions [102]. Its negative association with AS is supported by the finding that it recruits proteins involved in increasing the Pol II elongation rate [103]. This suggests that H4K8ac may function in AS regulation by altering the Pol II elongation rate, rather than via chromatin-adaptor systems. However, further studies are required. Collectively, possible reasons for the different results between this study and others [66] are that: (1) we considered differentiation from hESCs to five different cell types, which covered more inter-lineage AS events than the previous report; and (2) different sets of epigenetic signatures were considered, which may lead to relatively biased results in both studies. Clearly, the inclusion of more cell lineages and epigenetic signatures should reduce this bias. Therefore, an extended dataset of 56 cell lines/tissues was included in our study, and the observations support our results. Our study also extends the understanding that HMs contribute to cell fate decision by determining not only what parts of the genome are expressed, but also how they are spliced [38]. We demonstrated that the HM-associated AS events have a significant impact on cell fate decision in a cell-cycle-dependent manner. The most intriguing discovery is that the HM-associated genes are enriched in G2/M phases and predominantly function in the ATM/ATR-mediated DNA damage response. Indeed, the ATM/ATR-mediated checkpoint has recently been revealed to attenuate pluripotency state dissolution and to serve as a gatekeeper of the pluripotency state through the cell cycle [54]. The cell cycle has been considered the hub machinery for cell fate decision [20], since all commitment events proceed through cell-cycle progression. Our study expands this machinery by linking HMs and AS regulation to cell-cycle pathways and TFs, which together contribute to cell fate decision (Additional file 1: Figure S10H). We also exemplified our hypothesized mechanism with an H3K36me3-regulated isoform switch of PBX1. In addition to its well-known functions in lymphoblastic leukaemia [76,77,78,79] and a number of cancers [80,81,82,83,84,85,86,87,88,89], PBX1 has also been found to promote hESC self-renewal by binding, cooperatively with KLF4, to the regulatory elements of NANOG [99].
We found that the transcription of two isoforms of PBX1, PBX1a and PBX1b, is regulated by H3K36me3 during hESC differentiation. The two protein isoforms competitively bind NANOG, and the binding of PBX1b abolishes that of PBX1a, further attenuating the activity of the core pluripotency regulatory network composed of the Yamanaka factors. The switch from PBX1a to PBX1b is modulated by H3K36me3 via the PSIP1/SRSF1 adaptor system [45]. Our results were also supported by an extended dataset of 56 cell lines/tissues from the Roadmap/ENCODE projects. Collectively, our findings expand the understanding of the core transcriptional network by adding a regulatory layer of HM-associated AS (Fig. 6f). A very recent report showed that the switch in Pbx1 isoforms is regulated by Ptbp1 during neuronal differentiation in mice [104], an apparent contradiction, since the AS of PBX1 would then be negatively regulated by H3K36me3 via MRG15/PTBP1 [44]. Our study also included the neuronal lineage and showed that differentiation to NPC is an exception, distinct from the other lineages. If NPC is considered separately, the results are consistent with the recent report [104] showing that NPCs and mature neurons express increasing levels of PBX1a rather than PBX1b (Additional file 1: Figure S9B). Another recent report showed that PBX1 is a splicing target of SRSF1 in the HeLa cell line [67], which strongly supports our findings. Taken together, this evidence suggests that there are two parallel mechanisms regulating PBX1 isoforms in embryonic development, wherein neuronal differentiation adopts a mechanism different from that of the other lineages. Finally, it is worth noting that both our work and other studies [66] report that HMs cannot explain all of the AS events identified either during ESC differentiation or in pairwise comparisons between cell types. Moreover, bidirectional communication between HMs and AS has been widely reported. For instance, AS can enhance the recruitment of the H3K36 methyltransferase HYPB/Set2, resulting in significant differences in H3K36me3 around the AS exons [66]. These findings increase the complexity of defining cause and effect between HMs and AS. Nevertheless, our findings suggest that at least a subset of AS events is regulated epigenetically, similar to the way that epigenetic states around transcription start sites define what parts of the genome are expressed. Additionally, as we described in our previous study, AS outcomes may be estimated more precisely by combining splicing cis-elements and trans-factors (i.e. the genetic splicing code) with HMs (i.e. the epigenetic splicing code) into an "extended splicing code" [19]. Taken together, we present a mechanism conveying HM information into cell fate decision through the AS of cell-cycle factors or of core components of the pathways that control cell-cycle progression (Additional file 1: Figure S10H).

Conclusions

We performed integrative analyses of transcriptome and epigenome data of hESCs (the H1 cell line) and five differentiated cell types, demonstrating that three of 16 HMs were strongly associated with half of the AS events upon hESC differentiation. We propose a potential epigenetic mechanism by which some HMs contribute to ESC fate decision through the AS regulation of specific pathways and cell-cycle genes. We experimentally validated one cell-cycle-related transcription factor, PBX1, demonstrating that AS provides a mechanism that conveys HM information into the regulation of cell fate decisions (Fig. 6f).
Our study should have a broad impact on the field of epigenetic reprogramming in mammalian development involving splicing regulation and the cell-cycle machinery.

Methods

Identification of AS exons upon hESC differentiation

Aligned BAM files (hg18) for all six cell types (H1, ME, TBL, MSC, NPC, and IMR90) were downloaded from the source provided in the original study. Two BAM files (replicates) for each cell type were analyzed using rMATS (version 3.0.9) [56] and MISO (version 0.4.6) [105]. rMATS was used to identify AS exons based on the differential PSI (Ψ) values between each differentiated cell type and H1 cells (Additional file 1: Figure S1B). The splicing changes (ΔPSIs or ΔΨ) were used to identify the AS events between H1 and the other cell types. A higher cutoff reduces false positives at the cost of sensitivity. The cutoff |ΔΨ| ≥ 0.1 (i.e. |ΔΨ| ≥ 10%) is widely accepted and used in AS identification [25, 106,107,108]; many other studies have even used 0.05 as the cutoff [109,110,111,112,113]. We performed additional correlation analyses based on different ΔPSI cutoffs (0.1, 0.2, 0.3, 0.4, and 0.5). As the cutoff increased, the number of AS events was significantly reduced (Additional file 1: Figure S11A); however, the correlations between AS and some HMs increased only slightly (Additional file 1: Figure S11B, C, upper panels), i.e. no consistent impact of the cutoff on the correlations was observed. Similarly, the correlation significances were also not consistently affected (Additional file 1: Figure S11B, C, lower panels). Therefore, in our study, only AS exons with |ΔPSI| ≥ 0.1, p value ≤ 0.01, and FDR ≤ 5% were considered final hESC differentiation-related AS exons. All identified AS event types are summarized in Additional file 1: Table S1. Finally, two types of AS exons, namely SE and MXE, which are the most common and account for the majority of AS events, were used for subsequent analysis (Additional file 2: Table S2). MISO was used to estimate the confidence interval of each PSI and to generate Sashimi graphs [114] (see Figs. 1a, b and 5a). To match the ChIP-seq analysis, the genomic coordinates of the identified AS events were lifted over to hg19 using the LiftOver tool from UCSC.

ChIP-seq data processing and HM profiling

ChIP-seq data (aligned BAM files, hg19) were downloaded from Gene Expression Omnibus (GEO, accession ID: GSE16256). This dataset includes the ChIP-seq reads of up to 24 types of HMs for six cell types (H1, ME, TBL, MSC, NPC, and IMR90). Among these, nine histone acetylation modifications (H2AK5ac, H2BK120ac, H2BK5ac, H3K18ac, H3K23ac, H3K27ac, H3K4ac, H3K9ac, and H4K8ac) and seven histone methylation modifications (H3K27me3, H3K36me3, H3K4me1, H3K4me2, H3K4me3, H3K79me1, and H3K9me3) are available for all six cell types and were therefore used for our analyses. To generate global differential profiles of HM changes between AS exons and constitutive exons upon hESC differentiation, for each MXE and SE AS event we first randomly selected constitutive splicing (CS) exons from the same genes, composing a set of CS exons. We then considered the HM changes in a ± 150-bp region flanking both splice sites of each AS and CS exon, i.e. a 300-bp exon-intron boundary region. Each region was divided into 15-bp bins. For the few cases where the exon or intron is < 150 bp, the entire exonic or intronic region was evenly divided into 10 bins; a minimal sketch of this binning scheme is given below.
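To make the binning concrete, the following sketch implements the two schemes just described. It is an illustration only: the function names and the 0-based coordinate convention are our assumptions, not part of the original pipeline.

```python
def bins_around_splice_site(ss_pos, side_len=150, bin_size=15):
    """15-bp bins covering the +/-150-bp region flanking one splice site
    (0-based genomic coordinates assumed)."""
    start = ss_pos - side_len
    n_bins = 2 * side_len // bin_size   # 20 bins across the 300-bp boundary
    return [(start + i * bin_size, start + (i + 1) * bin_size)
            for i in range(n_bins)]

def bins_for_short_region(start, end, n_bins=10):
    """Fallback for exons/introns < 150 bp: the whole region is split into
    10 evenly sized bins."""
    step = (end - start) / n_bins
    return [(round(start + i * step), round(start + (i + 1) * step))
            for i in range(n_bins)]

# e.g. bins around a hypothetical 3' splice site:
print(bins_around_splice_site(164_789_308)[:2])  # first two 15-bp bins
```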
This scaling allows combining all regions of different lengths to generate uniform profiles of HM changes around the splice sites (see Fig. 2c and Additional file 1: Figure S4). To this end, we calculated the sequencing depth-normalized Δ reads number for each binned region between H1 cells and differentiated cells as follows:

$$ \Delta\,\text{reads number} = \frac{\text{reads number of H1 cells} - \text{reads number of differentiated cells}}{\text{bin size in bp}} $$

Each region is assigned a value representing the average Δ reads number between H1 cells and differentiated cells for each HM. We also compared HM profiles between the inclusion-gain and inclusion-loss exons (Fig. 3a and Additional file 1: Figure S5B) using the same strategy. The statistical results (Fig. 2c and Additional file 1: Figure S5B) are presented as p values based on Mann–Whitney–Wilcoxon tests (computed in R). To quantitatively link HMs to AS upon hESC differentiation, the ChIP-seq data were further processed by narrow peak calling. For each histone ChIP-seq dataset, the MACS v2.0.10 peak caller (https://github.com/taoliu/MACS/) [65, 115] was used to compare the ChIP-seq signal to a corresponding whole cell extract (WCE) sequenced control and to identify narrow regions of enrichment (narrow peaks) that pass a Poisson p value threshold of 0.01. All other parameters (options) were left at their defaults. We then compared the HM signals between H1 cells and differentiated cells. We defined the "differential HM signals" (ΔHMs) as the difference of the normalized peak signals (i.e. the heights of the narrow peaks) between H1 and the differentiated cells. Because the 3′ splice sites (3′ ends of the introns) determine the inclusion of the downstream AS exons [116], and the distances from peaks to their target sites affect their regulatory effects [117], we normalized the peak signals against the distance (in kb) between the peak summits and the 3′ splice sites (Additional file 1: Figure S5A). Since there is no evidence that distal HMs could regulate AS, we considered only local peaks with at least 1 bp of overlap with either side of the AS exon. For exons without an overlapping peak, the peak signal was set to zero; for exons with more than one overlapping peak, the peak signal was set to the largest one (a minimal sketch of this quantification is given below). For MXE events, only the upstream AS exon was considered, because the two exons of an MXE event are mutually exclusive in inclusion level, i.e. their PSIs always sum to 1.

Association studies and k-means clustering

To quantitatively estimate the associations between HMs and AS, we first used three independent correlation tests, namely Pearson correlation (PC), multiple linear regression (MLR), and logistic regression (LLR), to test the global correlations between AS events and each of the 16 HMs based on the differential inclusions (ΔPSIs) and the differential HM signals (ΔHMs). PC was performed in R (stats package, cor.test() with method = 'pearson'). MLR and LLR were calculated using Weka 3 [118], wherein the ΔHMs are independent variables and the ΔPSIs are response variables. The results show that only some HMs correlate with AS, and most correlations are weak (Additional file 1: Figure S5C).
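The sketch below illustrates the ΔHM quantification and the global Pearson test described in this section. It is a sketch under stated assumptions: the peak-tuple layout and the 1-bp distance floor (added to avoid division by zero) are ours, not from the original pipeline, and the vectors in the example are toy values.

```python
from scipy.stats import pearsonr

def peak_signal(peaks, exon_start, exon_end, ss3_pos):
    """Distance-normalized signal of the strongest MACS2 narrow peak with
    >= 1 bp of overlap with the exon; returns 0.0 when no local peak overlaps.
    `peaks` holds (start, end, summit, height) tuples."""
    best = 0.0
    for start, end, summit, height in peaks:
        if end >= exon_start and start <= exon_end:           # >= 1 bp overlap
            dist_kb = max(abs(summit - ss3_pos), 1) / 1000.0  # 1-bp floor is our
            best = max(best, height / dist_kb)                # guard against /0
    return best

def delta_hm(peaks_h1, peaks_diff, exon_start, exon_end, ss3_pos):
    """Differential HM signal: normalized peak signal in H1 minus that in the
    differentiated cell type."""
    return (peak_signal(peaks_h1, exon_start, exon_end, ss3_pos)
            - peak_signal(peaks_diff, exon_start, exon_end, ss3_pos))

# Global association of one HM with AS, over toy vectors of delta-HMs and
# delta-PSIs (one entry per AS exon):
r, p = pearsonr([1.2, -0.4, 0.0, 2.5, -1.1], [0.15, -0.05, 0.02, 0.30, -0.12])
print(f"Pearson r = {r:.2f}, p = {p:.3g}")
```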
HMs that have significant correlations (p ≤ 0.05) with AS were used for further clustering analysis, through which we identified six subsets of HM-associated AS events (Additional file 3: Table S3). K-means clustering was performed separately on the inclusion-gain and inclusion-loss AS events of MXE and SE, based on the selected HM signatures (Additional file 1: Figure S5C, checked with a "√"). K was set to 6 for all clustering runs (Additional file 1: Figure S5D), since this produced the minimal root mean square error (RMSE) for most cases over a series of clusterings with k in the range of 2–8 (data not shown). The two clusters generating mirror patterns, one from the inclusion-gain events and one from the inclusion-loss events, were then combined and considered a subset of HM-associated AS events (Additional file 1: Figure S6). Finally, we identified six subsets of HM-associated AS events displaying significantly positive or negative correlations with three HMs, respectively.

Gene expression quantification

For each cell type, two aligned BAM files (replicates) were used to estimate the expression levels of genes using Cufflinks [119] with default parameters (options). The expression level of each gene is presented as FPKM for each cell type. Differentially expressed genes (DEGs) were defined as those genes whose fold change between the differentiated cells and hESCs is > 2. Specifically for the DEG analysis of SF genes, we collected a list of 47 ubiquitously expressed SFs with "RNA splicing" in their GO annotations from AmiGO 2 [120]. The enrichment significances in Additional file 1: Figures S1D and S3H are shown as p values based on hypergeometric tests (a minimal sketch of such a test is given below), using the DEGs of all expressed genes (FPKM ≥ 1 in hESCs or at least one of the differentiated cell types) as background. We found that the AS genes are generally not differentially expressed between hESCs and differentiated cells, indicating that they function in hESC differentiation via isoform switches rather than expression changes. Few SF genes show differential expression between hESCs and differentiated cells, indicating the existence of epigenetic control of AS rather than direct control of SF expression.

Genome annotations

Since the RNA-seq read (BAM) files and ChIP-seq read (BAM) files downloaded from the public sources were mapped to different human genome assemblies, NCBI36/hg18 (March 2006) and GRCh37/hg19 (February 2009), respectively, we downloaded both versions of the gene annotations (in GTF format) from the UCSC Table Browser [121]. The hg18 GTF file was used with rMATS and MISO to identify AS during the differentiation from H1 ESCs into the five differentiated cell types. The hg19 GTF file was used to define the genome coordinates of AS exons and further for the ChIP-seq profile analysis (Figs. 2a–c, 3a, and Additional file 1: Figures S4 and S5A). We compared exonic and intronic lengths based on the hg18 annotation (Additional file 1: Figure S2).

Gene Ontology enrichment analysis

The GO enrichment analysis was performed using ToppGene [122] by searching the HGNC Symbol database under default parameters (p value method: probability density function). Overrepresented GO terms for the GO domain "biological process" were used to generate the data shown in Fig. 4c–e and Additional file 1: Figure S8A–C, using either the FDR-adjusted (≤ 0.05) p values or the enriched gene numbers (Additional file 1: Figure S8A).
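As a minimal sketch of the hypergeometric enrichment tests referred to above (and used again for the stemness analysis in the next section), the following uses scipy.stats.hypergeom; all counts in the example are placeholders, not values from this study.

```python
from scipy.stats import hypergeom

def enrichment_p(n_background, n_annotated, n_selected, n_overlap):
    # P(X >= n_overlap) when drawing n_selected genes from a background of
    # n_background genes, of which n_annotated carry the annotation.
    return hypergeom.sf(n_overlap - 1, n_background, n_annotated, n_selected)

# Hypothetical example: 300 of 2000 selected genes fall into a 1500-gene
# signature drawn from a background of 15,000 expressed genes.
p = enrichment_p(15_000, 1_500, 2_000, 300)
print(f"p = {p:.3g}")
```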
Canonical pathway enrichment analysis

Both the HM-associated (n = 282) and the HM-unassociated (n = 150) AS genes from the top enriched GO term (GO:0007049) were used to perform canonical pathway enrichment analysis (Fig. 4f and Additional file 1: Figure S8D) through Ingenuity Pathway Analysis (IPA, http://www.ingenuity.com/products/ipa).

Stemness signature and TF binding enrichment analysis

StemChecker [59], a web-based tool to discover and explore stemness signatures in gene sets, was used to calculate the enrichment of AS genes in stemness signatures. Significances were tested (hypergeometric test) using the human (Homo sapiens) gene set of this database as the background. Of all AS genes (n = 3865), 2979 were found in this dataset; of these 2979 genes, 1395 were found in the stemness signature gene set, most of which (n = 813) are ESC signature genes. Additional file 1: Figure S3A shows the enrichment significance as -log10 p values (Bonferroni adjusted). Of the HM-associated AS genes (n = 2125), 1992 were found in this dataset; of these 1992 genes, 966 were found in the stemness signature gene set, most of which (n = 562) are ESC signature genes. Of the HM-unassociated genes (n = 1057), 987 were found in this dataset; of these 987 genes, 429 were found in the stemness signature gene set, most of which (n = 251) are ESC signature genes. The significances are shown as -log10 p values (Bonferroni adjusted) in Fig. 4a. FunRich (v2.1.2) [123, 124], a stand-alone functional enrichment analysis tool, was used for the TF binding enrichment analysis to identify the enriched TFs that may regulate the query genes. The top six enriched TFs of the HM-associated and HM-unassociated AS genes are presented as the proportion of enriched AS genes. The analysis shows that HM-associated AS genes are more likely to be regulated by TFs involved in cell differentiation and development, while HM-unassociated AS genes are more likely to be regulated by TFs involved in cell proliferation and renewal (Fig. 4b).

Roadmap and ENCODE data analysis

All raw data are available under the GEO accession IDs GSE18927 and GSE16256. The individual sources of the RNA-seq data for the 56 cell lines/tissues from the Roadmap/ENCODE projects are listed in Additional file 4: Table S4. The RNA-seq data (BAM files) were used to calculate the PSI of exon 7 of PBX1 in each cell line/tissue and to estimate the expression levels of all genes (FPKM), based on the aforementioned strategies. The relative expression levels of PBX1a and PBX1b shown in Fig. 5 and Additional file 1: Figure S10 were calculated as the FPKM value of each isoform divided by their total FPKM (a minimal sketch is given below).

Statistical analyses and tests

Levels of significance were calculated with the Mann–Whitney–Wilcoxon test for Figs. 2c, d, 3a, and Additional file 1: Figures S4 and S5B; with Fisher's exact test for Figs. 1d, 2d, and Additional file 1: Figure S5B; with Student's t-test for Fig. 5d and Additional file 1: Figure S7A; and with a hypergeometric test for Additional file 1: Figures S1D and S3H. Levels of correlation significance were calculated with PC, MLR, and LLR for Fig. 3c, d, and Additional file 1: Figure S5C. MLR and LLR were performed using Weka 3 [118], whereas all other tests were performed using R packages. The p values for the enrichment analyses (Fig. 4, Additional file 1: Figures S3A and S8) were adjusted by either FDR or Bonferroni correction (refer to the corresponding method sections for details).
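The sketch below spells out the relative isoform expression used for Fig. 5d; the FPKM values in the example are hypothetical.

```python
def relative_isoform_expression(fpkm_pbx1a, fpkm_pbx1b):
    """Each isoform's FPKM divided by the combined FPKM of both isoforms."""
    total = fpkm_pbx1a + fpkm_pbx1b
    if total == 0:
        return 0.0, 0.0  # neither isoform detected in this cell line/tissue
    return fpkm_pbx1a / total, fpkm_pbx1b / total

rel_a, rel_b = relative_isoform_expression(8.4, 2.1)  # hypothetical FPKMs
print(f"PBX1a: {rel_a:.2f}, PBX1b: {rel_b:.2f}")      # PBX1a: 0.80, PBX1b: 0.20
```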
The statistical analyses of the ChIP, RIP, and western blotting assays are shown in Additional file 1: Figure S10E–G. Three replicates were conducted for each assay, and quantitative statistical analyses were performed on the relative band intensities normalized to the control gene (β-actin) or to the input signals. ImageJ (https://imagej.nih.gov/ij/) was used to quantify the band intensities. ANOVA was used for the statistical significance tests.

Cell culture and MSC induction

The human embryonic stem cell line H1 was purchased from the WiCell Research Institute (product ID: WA01). H1 cells were cultured on Matrigel-coated six-well culture plates (BD Bioscience) in defined mTeSR1 culture medium (STEMCELL Technologies). The culture medium was refreshed daily. The H1-derived mesenchymal stem cells (H1-MSCs) were differentiated from H1 cells as described previously [125]. Briefly, small H1 cell aggregates were cultured on a monolayer of the mouse OP9 bone-marrow stromal cell line (ATCC) for nine days. After depleting the OP9 cell layer, the cells were cultured in semi-solid colony-forming serum-free medium supplemented with fibroblast growth factor 2 and platelet-derived growth factor BB for two weeks. The mesodermal colonies were selected and cultured in mesenchymal serum-free expansion medium with FGF2 to generate and expand H1-MSCs. hAT-MSCs were derived from subcutaneous fat provided by the National Disease Research Interchange (NDRI) using a protocol described previously [126]. Briefly, the adipose tissue was mechanically minced and digested with collagenase Type II (Worthington Bio, Lakewood, NJ, USA). The resulting single-cell suspension was cultured in α-minimal essential medium with 5% human platelet lysate (Cook Regentec, Indianapolis, IN, USA), 10 μg/mL gentamicin (Life Technologies, CA, USA), and 1× Glutamax (Life Technologies). After reaching ~ 70% confluence, the adherent cells were harvested as passage 1 (P1) hAT-MSC/AEY239. hBM-MSCs were isolated from commercially available fresh human bone marrow (hBM) aspirates (AllCells, Emeryville, CA, USA) and expanded following a standard protocol [127]. Briefly, hBM-MSCs were cultured in α-minimal essential medium supplemented with 17% fetal bovine serum (FBS; lot-selected for rapid growth of MSCs; Atlanta Biologicals, Norcross, GA, USA), 100 units/mL penicillin (Life Technologies, CA, USA), 100 μg/mL streptomycin (Life Technologies), and 2 mM L-glutamine (Life Technologies). After reaching ~ 70% confluence, the adherent cells were harvested as passage 1 (P1) hBM-MSC-5204(G). The human IMR-90 cell line was purchased from the American Type Culture Collection (Manassas, VA, USA) and cultured in Eagle's Minimum Essential Medium (Thermo Scientific, Logan, UT, USA) supplemented with 10% fetal calf serum, 2 mM L-glutamine, 100 U/mL penicillin, and 100 μg/mL streptomycin at 37 °C in humidified 5% CO2.

RNA isolation and RT-PCR

Total RNA was extracted from cells using the RNeasy Mini Plus kit (Qiagen, Valencia, CA, USA) according to the manufacturer's instructions. RT reactions for first-strand complementary DNA (cDNA) synthesis were performed with 2 μg of total RNA in a 25-μL reaction mixture containing 5 μL of 5× first-strand synthesis buffer, 0.5 mM dNTPs, 0.5 μg oligo(dT)12-18mer (Invitrogen), 200 units of M-MLV reverse transcriptase (Promega, Madison, WI, USA), and 25 units of RNase inhibitor (Invitrogen). The mixture was then incubated at 42 °C for 50 min and 52 °C for 10 min.
The reaction was stopped by heating at 70 °C for 15 min. The PCR amplifications were carried out in a 50-μL reaction solution containing 1 μL of RT product, 5 μL of 10× PCR buffer, 0.15 mM MgCl2, 0.2 mM dNTPs, 0.2 mM sense and antisense primers, and 2.5 U Taq polymerase (Boehringer, Mannheim, Germany). The sequences of the upstream and downstream primers for PBX1 and β-actin are listed in Additional file 1: Table S5. The PCR reaction solution was denatured initially at 95 °C for 3 min, followed by 35 cycles of 95 °C for 1 min, 55 °C for 40 s, and 72 °C for 40 s. The final extension step was at 72 °C for 10 min. The PCR products were resolved on a 2% ethidium bromide-containing agarose gel and visualized using a ChemiDoc MP Imager (Bio-Rad).

Quantitative RT-PCR

The qPCR amplification was done in a 20-μL reaction mixture containing 100 ng of cDNA, 10 μL of 2× All-in-One™ qPCR mix (GeneCopoeia, Rockville, MD, USA), 0.3 mM of upstream and downstream primers, and nuclease-free water. The PCR reaction was conducted with 1 cycle at 95 °C for 10 min and 40 cycles of 95 °C for 15 s, 40 °C for 30 s, and 60 °C for 1 min, followed by dissociation curve analysis to distinguish the PCR products. The expression level of each gene was normalized to the endogenous control gene β-actin. The relative expression of each gene was calculated using the 2−ΔΔCt method, normalized to β-actin, and presented as mean ± SD (n = 3) (Fig. 6a). The sequences of the paired sense and antisense primers for human SOX2, NANOG, OCT4, c-MYC, KLF4, and β-actin are listed in Additional file 1: Table S5.

Western blotting

Cells were lysed with 1× RIPA buffer supplemented with protease and phosphatase inhibitor cocktail (Roche Applied Science, Indianapolis, IN, USA) and stored in aliquots at − 20 °C until use. Twenty micrograms of cell lysate were mixed with an equal volume of Laemmli sample buffer, denatured by boiling, and separated by SDS-PAGE. The separated proteins were then transferred to PVDF membranes (Bio-Rad, Hercules, CA, USA). The membranes were blocked with 5% BSA for 1 h at room temperature and incubated with the primary antibodies in 1% BSA overnight. SOX2, OCT4, NANOG, KLF4, c-MYC, and β-actin antibodies were from Cell Signalling Technology (Beverly, MA, USA). The PBX1 antibody was from Abcam (Cambridge, MA, USA) and the PBX1b antibody from Santa Cruz Technology (Dallas, TX, USA). After incubation with IgG horseradish peroxidase-conjugated secondary antibodies (Cell Signalling) for 2 h at room temperature, the immunoblots were developed using SuperSignal West Pico PLUS Chemiluminescent reagent (Thermo Fisher Scientific, Waltham, MA, USA) and imaged using a ChemiDoc MP Imager (Bio-Rad).

Co-immunoprecipitation (Co-IP)

Co-IP was used to validate the PSIP1-SRSF1 interaction (Fig. 6e). Cells were lysed with ice-cold non-denaturing lysis buffer containing 20 mM Tris HCl (pH 8), 137 mM NaCl, 1% Nonidet P-40, 2 mM EDTA, and proteinase inhibitor cocktails. A total of 200 μg of protein was pre-cleared with 30 μL of protein G magnetic beads (Cell Signalling) for 30 min at 4 °C on a rotator to reduce non-specific protein binding. The pre-cleared lysate was immunoprecipitated with 5 μL of anti-SRSF1 antibody (Invitrogen) overnight and then incubated with protein G magnetic beads for 4 h at 4 °C. Beads without anti-SRSF1 antibody were used as an IP control. The protein G magnetic beads were washed five times with lysis buffer and the precipitated protein was collected.
The SRSF1-bound PSIP1 protein (also known as LEDGF) level was determined with a PSIP1 antibody (Human LEDGF Antibody, R&D Systems) using western blot as described above. Three specific bands (one for p75 and two for p52) were detected (Fig. 6e), as indicated on the R&D Systems website (https://www.rndsystems.com/products/human-ledgf-antibody-762815_mab3468).

ChIP assay

The ChIP assay was performed using a SimpleChIP Enzymatic Chromatin IP Kit (Cell Signaling Technology) according to the manufacturer's instructions. Briefly, 2 × 10^7 cells were cross-linked with 1% formaldehyde and lysed with lysis buffer. Chromatin was fragmented by partial digestion with micrococcal nuclease. The protein–DNA complexes were precipitated with ChIP-Grade Protein G Magnetic Beads (Cell Signalling) and ChIP-validated antibodies against H3K36me3 (Abcam), PSIP1 (Novus), and PBX1b (Santa Cruz). Normal mouse IgG and normal rabbit IgG were used as negative controls. After reversal of the protein–DNA cross-links, the DNA was purified using DNA purification spin columns. The purified DNA fragments were then amplified with the appropriate primers on a T100 thermal cycler (Bio-Rad). The primer pairs used for PCR are listed in Additional file 1: Table S5. The H3K36me3-immunoprecipitated and PSIP1-immunoprecipitated DNA fragments surrounding exon 7 of PBX1, and the promoter region of NANOG in the PBX1b-immunoprecipitated DNA fragments, were PCR-amplified. The ChIP-PCR products were revealed by electrophoresis on a 2% agarose gel (Fig. 6c).

RIP assay

The RIP assay was performed using the Magna RIP™ RNA-Binding Protein Immunoprecipitation Kit (Millipore Sigma) following the manufacturer's instructions. Briefly, the cells were lysed with RIP lysis buffer containing RNase and protease inhibitors. In total, 500 μg of total protein was precleared with protein G magnetic beads for 30 min. The protein G magnetic beads were preincubated with 5 μg of mouse monoclonal anti-SRSF1 antibody or normal mouse IgG for 2 h at 4 °C. The antibody-coated beads were then incubated with the precleared cell lysates at 4 °C overnight with rotation. The RNA/protein/bead conjugates were washed five times with RIP wash buffer, and the RNA–protein complexes were eluted from the protein G magnetic beads on a magnetic separator. The SRSF1-bound RNA was extracted using acid phenol-chloroform and precipitated with ethanol. The RNA was then reverse-transcribed, and the expression levels of PBX1a and PBX1b in the immunoprecipitated and non-immunoprecipitated (input) samples were analyzed using RT-PCR (Fig. 6d).

Abbreviations

AS: Alternative (pre-mRNA) splicing
CS: Constitutive splicing
ESC: Embryonic stem cell
H2AK5ac: H2A acetylated lysine 5
H2BK120ac: H2B acetylated lysine 120
H2BK5ac: H2B acetylated lysine 5
H3K18ac: H3 acetylated lysine 18
H3K27me3: H3 tri-methylated lysine 27
H3K4ac: H3 acetylated lysine 4
H3K4me1: H3 mono-methylated lysine 4
H3K4me2: H3 di-methylated lysine 4
H3K4me3: H3 tri-methylated lysine 4
H3K79me1: H3 mono-methylated lysine 79
HM: Histone modification
MXE: Mutually exclusive exon
PSI: Percent splice in
SE: Skipped exon
SF: Splicing factor

References

Thomson JA, Itskovitz-Eldor J, Shapiro SS, Waknitz MA, Swiergiel JJ, Marshall VS, et al. Embryonic stem cell lines derived from human blastocysts. Science. 1998;282:1145–7. Graveley BR. Splicing up pluripotency. Cell. 2011;147:22–4. Boyer LA, Lee TI, Cole MF, Johnstone SE, Levine SS, Zucker JR, et al. Core transcriptional regulatory circuitry in human embryonic stem cells. Cell. 2005;122:947–56. Lian XJ, Zhang JH, Azarin SM, Zhu KX, Hazeltine LB, Bao XP, et al.
Directed cardiomyocyte differentiation from human pluripotent stem cells by modulating Wnt/beta-catenin signaling under fully defined conditions. Nat Protoc. 2013;8:162–75. Ogaki S, Shiraki N, Kume K, Kume S. Wnt and Notch Signals Guide Embryonic Stem Cell Differentiation into the Intestinal Lineages. Stem Cells. 2013;31:1086–96. Clevers H, Loh KM, Nusse R. An integral program for tissue renewal and regeneration: Wnt signaling and stem cell control. Science. 2014;346:1248012. Itoh F, Watabe T, Miyazono K. Roles of TGF-beta family signals in the fate determination of pluripotent stem cells. Semin Cell Dev Biol. 2014;32:98–106. Sakaki-Yumoto M, Katsuno Y, Derynck R. TGF-beta family signaling in stem cells. Biochim Biophys Acta. 2013;1830:2280–96. Watabe T, Miyazono K. Roles of TGF-beta family signaling in stem cell renewal and differentiation. Cell Res. 2009;19:103–15. Tay Y, Zhang JQ, Thomson AM, Lim B, Rigoutsos I. MicroRNAs to Nanog, Oct4 and Sox2 coding regions modulate embryonic stem cell differentiation. Nature. 2008;455:1124–8. Shenoy A, Blelloch RH. Regulation of microRNA function in somatic stem cell proliferation and differentiation. Nat Rev Mol Cell Biol. 2014;15:565–76. Fatica A, Bozzoni I. Long non-coding RNAs: new players in cell differentiation and development. Nat Rev Genet. 2014;15:7–21. Gabut M, Samavarchi-Tehrani P, Wang XC, Slobodeniuc V, O'Hanlon D, Sung HK, et al. An Alternative Splicing Switch Regulates Embryonic Stem Cell Pluripotency and Reprogramming. Cell. 2011;147:132–46. Salomonis N, Schlieve CR, Pereira L, Wahlquist C, Colas A, Zambon AC, et al. Alternative splicing regulates mouse embryonic stem cell pluripotency and differentiation. Proc Natl Acad Sci U S A. 2010;107:10514–9. Xie W, Schultz MD, Lister R, Hou ZG, Rajagopal N, Ray P, et al. Epigenomic Analysis of Multilineage Differentiation of Human Embryonic Stem Cells. Cell. 2013;153:1134–48. Gifford CA, Ziller MJ, Gu HC, Trapnell C, Donaghey J, Tsankov A, et al. Transcriptional and Epigenetic Dynamics during Specification of Human Embryonic Stem Cells. Cell. 2013;153:1149–63. Hawkins RD, Hon GC, Lee LK, Ngo Q, Lister R, Pelizzola M, et al. Distinct Epigenomic Landscapes of Pluripotent and Lineage-Committed Human Cells. Cell Stem Cell. 2010;6:479–91. Boland MJ, Nazor KL, Loring JF. Epigenetic Regulation of Pluripotency and Differentiation. Circ Res. 2014;115:311–24. Xu Y, Wang Y, Luo J, Zhao W, Zhou X. Deep learning of the splicing (epi)genetic code reveals a novel candidate mechanism linking histone modifications to ESC fate decision. Nucleic Acids Res. 2017;45:12100–12. Vallier L. Cell Cycle Rules Pluripotency. Cell Stem Cell. 2015;17:131–2. Pan Q, Shai O, Lee LJ, Frey BJ, Blencowe BJ. Deep surveying of alternative splicing complexity in the human transcriptome by high-throughput sequencing. Nat Genet. 2008;40:1413–5. Gerstein MB, Lu ZJ, Van Nostrand EL, Cheng C, Arshinoff BI, Liu T, et al. Integrative Analysis of the Caenorhabditis elegans Genome by the modENCODE Project. Science. 2010;330:1775–87. Graveley BR, Brooks AN, Carlson J, Duff MO, Landolin JM, Yang L, et al. The developmental transcriptome of Drosophila melanogaster. Nature. 2011;471:473–9. Ramani AK, Calarco JA, Pan Q, Mavandadi S, Wang Y, Nelson AC, et al. Genome-wide analysis of alternative splicing in Caenorhabditis elegans. Genome Res. 2011;21:342–8. Wang ET, Sandberg R, Luo SJ, Khrebtukova I, Zhang L, Mayr C, et al. Alternative isoform regulation in human tissue transcriptomes. Nature. 2008;456:470–6.
Yeo GW, Xu XD, Liang TY, Muotri AR, Carson CT, Coufal NG, et al. Alternative splicing events identified in human embryonic stem cells and neural progenitors. PLoS Comput Biol. 2007;3:1951–67. Salomonis N, Nelson B, Vranizan K, Pico AR, Hanspers K, Kuchinsky A, et al. Alternative Splicing in the Differentiation of Human Embryonic Stem Cells into Cardiac Precursors. PLoS Comput Biol. 2009;5:e1000553. Lu XY, Goke J, Sachs F, Jacques PE, Liang HQ, Feng B, et al. SON connects the splicing-regulatory network with pluripotency in human embryonic stem cells. Nat Cell Biol. 2013;15:1141–52. Kalsotra A, Cooper TA. Functional consequences of developmentally regulated alternative splicing. Nat Rev Genet. 2011;12:715–29. Mayshar Y, Rom E, Chumakov I, Kronman A, Yayon A, Benvenisty N. Fibroblast growth factor 4 and its novel splice isoform have opposing effects on the maintenance of human embryonic stem cell self-renewal. Stem Cells. 2008;26:767–74. Rao S, Zhen S, Roumiantsev S, McDonald LT, Yuan GC, Orkin SH. Differential Roles of Sall4 Isoforms in Embryonic Stem Cell Pluripotency. Mol Cell Biol. 2010;30:5364–80. Fiszbein A, Kornblihtt AR. Alternative splicing switches: Important players in cell differentiation. Bioessays. 2017;39 Barash Y, Calarco JA, Gao WJ, Pan Q, Wang XC, Shai O, et al. Deciphering the splicing code. Nature. 2010;465:53–9. Han H, Irimia M, Ross PJ, Sung HK, Alipanahi B, David L, et al. MBNL proteins repress ES-cell-specific alternative splicing and reprogramming. Nature. 2013;498:241–5. Schwartz S, Meshorer E, Ast G. Chromatin organization marks exon-intron structure. Nat Struct Mol Biol. 2009;16:990–5. Kornblihtt AR, De la Mata M, Fededa JP, Munoz MJ, Nogues G. Multiple links between transcription and splicing. Rna-a Publication of the Rna Society. 2004;10:1489–98. Wang GS, Cooper TA. Splicing in disease: disruption of the splicing code and the decoding machinery. Nat Rev Genet. 2007;8:749–61. Luco RF, Allo M, Schor IE, Kornblihtt AR, Misteli T. Epigenetics in Alternative Pre-mRNA Splicing. Cell. 2011;144:16–26. Sharov AA, Ko MSH. Human ES cell profiling broadens the reach of bivalent domains. Cell Stem Cell. 2007;1:237–8. Singh AM, Sun YH, Li L, Zhang WJ, Wu TM, Zhao SY, et al. Cell-Cycle Control of Bivalent Epigenetic Domains Regulates the Exit from Pluripotency. Stem Cell Reports. 2015;5:323–36. Kouzarides T. Chromatin modifications and their function. Cell. 2007;128:693–705. Sims RJ, Millhouse S, Chen CF, Lewis BA, Erdjument-Bromage H, Tempst P, et al. Recognition of trimethylated histone h3 lysine 4 facilitates the recruitment of transcription postinitiation factors and pre-mRNA splicing. Mol Cell. 2007;28:665–76. Piacentini L, Fanti L, Negri R, Del Vescovo V, Fatica A, Altieri F, et al. Heterochromatin Protein 1 (HP1a) Positively Regulates Euchromatic Gene Expression through RNA Transcript Association and Interaction with hnRNPs in Drosophila. PLoS Genet. 2009;5:e1000670. Luco RF, Pan Q, Tominaga K, Blencowe BJ, Pereira-Smith OM, Misteli T. Regulation of alternative splicing by histone modifications. Science. 2010;327:996–1000. Pradeepa MM, Sutherland HG, Ule J, Grimes GR, Bickmore WA. Psip1/Ledgf p52 Binds Methylated Histone H3K36 and Splicing Factors and Contributes to the Regulation of Alternative Splicing. PLoS Genet. 2012;8:e1002717. Gunderson FQ, Johnson TL. Acetylation by the Transcriptional Coactivator Gcn5 Plays a Novel Role in Co-Transcriptional Spliceosome Assembly. PLoS Genet. 2009;5:e1000682. Schor IE, Rascovan N, Pelisch F, Allo M, Kornblihtt AR. 
Neuronal cell depolarization induces intragenic chromatin modifications affecting NCAM alternative splicing. Proc Natl Acad Sci U S A. 2009;106:4325–30. Schor IE, Kornblihtt AR. Playing inside the genes: Intragenic histone acetylation after membrane depolarization of neural cells opens a path for alternative splicing regulation. Commun Integr Biol. 2009;2:341–3. Zhou HL, Hinman MN, Barron VA, Geng C, Zhou G, Luo G, et al. Hu proteins regulate alternative splicing by inducing localized histone hyperacetylation in an RNA-dependent manner. Proc Natl Acad Sci U S A. 2011;108:E627–35. Sharma A, Nguyen H, Cai L, Lou H. Histone hyperacetylation and exon skipping: a calcium-mediated dynamic regulation in cardiomyocytes. Nucleus. 2015;6:273–8. Dalton S. Linking the Cell Cycle to Cell Fate Decisions. Trends Cell Biol. 2015;25:592–600. Coronado D, Godet M, Bourillot PY, Tapponnier Y, Bernat A, Petit M, et al. A short G1 phase is an intrinsic determinant of naive embryonic stem cell pluripotency. Stem Cell Res. 2013;10:118–31. Pauklin S, Vallier L. The Cell-Cycle State of Stem Cells Determines Cell Fate Propensity. Cell. 2013;155:135–47. Gonzales KAU, Liang H, Lim YS, Chan YS, Yeo JC, Tan CP, et al. Deterministic Restriction on Pluripotent State Dissolution by Cell-Cycle Pathways. Cell. 2015;162:564–79. Aaronson Y, Meshorer E. STEM CELLS Regulation by alternative splicing. Nature. 2013;498:176–7. Shen S, Park JW, Lu ZX, Lin L, Henry MD, Wu YN, et al. rMATS: robust and flexible detection of differential alternative splicing from replicate RNA-Seq data. Proc Natl Acad Sci U S A. 2014;111:E5593–601. Magen A, Ast G. The importance of being divisible by three in alternative splicing. Nucleic Acids Res. 2005;33:5574–82. Zheng CL, Fu XD, Gribskov M. Characteristics and regulatory elements defining constitutive splicing and different modes of alternative splicing in human and mouse. Rna. 2005;11:1777–87. Pinto JP, Kalathur RK, Oliveira DV, Barata T, Machado RSR, Machado S, et al. StemChecker: a web-based tool to discover and explore stemness signatures in gene sets. Nucleic Acids Res. 2015;43:W72–7. Barski A, Cuddapah S, Cui K, Roh TY, Schones DE, Wang Z, et al. High-resolution profiling of histone methylations in the human genome. Cell. 2007;129:823–37. Kornblihtt AR, Schor IE, Allo M, Blencowe BJ. When chromatin meets splicing. Nat Struct Mol Biol. 2009;16:902–3. Mabon SA, Misteli T. Differential recruitment of pre-mRNA splicing factors to alternatively spliced transcripts in vivo. PLoS Biol. 2005;3:e374. Kolasinska-Zwierz P, Down T, Latorre I, Liu T, Liu XS, Ahringer J. Differential chromatin marking of introns and expressed exons by H3K36me3. Nat Genet. 2009;41:376–81. Mikkelsen TS, Ku MC, Jaffe DB, Issac B, Lieberman E, Giannoukos G, et al. Genome-wide maps of chromatin state in pluripotent and lineage-committed cells. Nature. 2007;448:553–60. Feng JX, Liu T, Qin B, Zhang Y, Liu XS. Identifying ChIP-seq enrichment using MACS. Nat Protoc. 2012;7:1728–40. Curado J, Iannone C, Tilgner H, Valcarcel J, Guigo R. Promoter-like epigenetic signatures in exons displaying cell type-specific splicing. Genome Biol. 2015;16:236. Anczukow O, Akerman M, Clery A, Wu J, Shen C, Shirole NH, et al. SRSF1-Regulated Alternative Splicing in Breast Cancer. Mol Cell. 2015;60:105–17. Pandit S, Zhou Y, Shiue L, Coutinho-Mansfield G, Li HR, Qiu JS, et al. Genome-wide Analysis Reveals SR Protein Cooperation and Competition in Regulated Splicing. Mol Cell. 2013;50:223–35. Xue YC, Zhou Y, Wu TB, Zhu T, Ji X, Kwon YS, et al. 
Genome-wide Analysis of PTB-RNA Interactions Reveals a Strategy Used by the General Splicing Repressor to Modulate Exon Inclusion or Skipping. Mol Cell. 2009;36:996–1006. Chen X, Xu H, Yuan P, Fang F, Huss M, Vega VB, et al. Integration of external signaling pathways with the core transcriptional network in embryonic stem cells. Cell. 2008;133:1106–17. Silva J, Nichols J, Theunissen TW, Guo G, van Oosten AL, Barrandon O, et al. Nanog Is the Gateway to the Pluripotent Ground State. Cell. 2009;138:722–37. Takahashi K, Tanabe K, Ohnuki M, Narita M, Ichisaka T, Tomoda K, et al. Induction of pluripotent stem cells from adult human fibroblasts by defined factors. Cell. 2007;131:861–72. Kim JW, Chu JL, Shen XH, Wang JL, Orkin SH. An extended transcriptional network for pluripotency of embryonic stem cells. Cell. 2008;132:1049–61. Chang CP, Jacobs Y, Nakamura T, Jenkins NA, Copeland NG, Cleary ML. Meis proteins are major in vivo DNA binding partners for wild-type but not chimeric Pbx proteins. Mol Cell Biol. 1997;17:5679–87. Merabet S, Mann RS. To Be Specific or Not: The Critical Relationship Between Hox And TALE Proteins. Trends Genet. 2016;32:334–47. Melendez CLC, Rosales L, Herrera M, Granados L, Tinti D, Villagran S, et al. Frequency of the ETV6-RUNX1, BCR-ABL1, TCF3-PBX1 and MLL-AFF1 fusion genes in Guatemalan Pediatric Acute Lymphoblastic Leukemia Patients and Its Ethnic Associations. Clin Lymphoma Myeloma Leuk. 2015;15:S170. Tesanovic T, Badura S, Oellerich T, Doering C, Ruthardt M, Ottmann OG. Mechanisms of Antileukemic Activity of the Multikinase Inhibitors Dasatinib and Ponatinib in Acute Lymphoblastic Leukemia (ALL) Harboring the E2A-PBX1 Fusion Gene. Blood. 2014;124 Foa R, Vitale A, Cuneo A, Mecucci C, Mancini H, Cimino G, et al. E2A-PBX1 fusion in adult acute lymphoblastic leukemia (ALL) with t(1;19) translocation: Biologic and clinical features. Blood. 2000;96:189b. Kamps MP, Baltimore D. E2a-Pbx1, the t(1;19) Translocation Protein of Human Pre-B-Cell Acute Lymphocytic-Leukemia, Causes Acute Myeloid-Leukemia in Mice. Mol Cell Biol. 1993;13:351–7. Jung JG, Kim TH, Gerry E, Kuan JC, Ayhan A, Davidson B, et al. PBX1, a transcriptional regulator, promotes stemness and chemoresistance in ovarian cancer. Clin Cancer Res. 2016;22 Magnani L, Patten DK, Nguyen VTM, Hong SP, Steel JH, Patel N, et al. The pioneer factor PBX1 is a novel driver of metastatic progression in ERα-positive breast cancer. Oncotarget. 2015;6:21878–91. Jung JG, Park JT, Stoeck A, Hussain T, Wu RC, Shih IM, et al. Targeting PBX1 signaling to sensitize carboplatin cytotoxicity in ovarian cancer. Clin Cancer Res. 2015;21 Feng Y, Li L, Zhang X, Zhang Y, Liang Y, Lv J, et al. Hematopoietic pre-B cell leukemia transcription factor interacting protein is overexpressed in gastric cancer and promotes gastric cancer cell proliferation, migration, and invasion. Cancer Sci. 2015;106:1313–22. Li WH, Huang K, Guo HZ, Cui GH, Zhao S. Inhibition of non-small-cell lung cancer cell proliferation by Pbx1. Chin J Cancer Res. 2014;26:573–8. Thiaville MM, Stoeck A, Chen L, Wu RC, Magnani L, Oidtman J, et al. Identification of PBX1 Target Genes in Cancer Cells by Global Mapping of PBX1 Binding Sites. PLoS One. 2012;7 Magnani L, Ballantyne EB, Zhang X, Lupien M. PBX1 genomic pioneer function drives ERalpha signaling underlying progression in breast cancer. PLoS Genet. 2011;7:e1002368. Park JT, Shih Ie M, Wang TL. Identification of Pbx1, a potential oncogene, as a Notch3 target gene in ovarian cancer. Cancer Res. 2008;68:8852–60.
Qiu Y, Tomita Y, Zhang B, Nakamichi I, Morii E, Aozasa K. Pre-B-cell leukemia transcription factor 1 regulates expression of valosin-containing protein, a gene involved in cancer growth. Am J Pathol. 2007;170:152–9. Berge V, Ramberg H, Eide T, Svindland A, Tasken KA. Expression of Pbx1 and UGT2B7 in prostate cancer cells. Eur Urol Suppl. 2006;5:794. Kim SK, Selleri L, Lee JS, Zhang AY, Gu XY, Jacobs Y, et al. Pbx1 inactivation disrupts pancreas development and in Ipf1-deficient mice promotes diabetes mellitus. Nat Genet. 2002;30:430–5. Ficara F, Murphy MJ, Lin M, Cleary ML. Pbx1 regulates self-renewal of long-term hematopoietic stem cells by maintaining their quiescence. Cell Stem Cell. 2008;2:484–96. Xu B, Cai L, Butler JM, Chen D, Lu X, Allison DF, et al. The Chromatin Remodeler BPTF Activates a Stemness Gene-Expression Program Essential for the Maintenance of Adult Hematopoietic Stem Cells. Stem Cell Reports. 2018;10:675–83. Koss M, Bolze A, Brendolan A, Saggese M, Capellini TD, Bojilova E, et al. Congenital Asplenia in Mice and Humans with Mutations in a Pbx/Nkx2-5/p15 Module. Dev Cell. 2012;22:913–26. Grebbin BM, Schulte D. PBX1 as Pioneer Factor: A Case Still Open. Front Cell Dev Biol. 2017;5:9. Pruitt KD, Brown GR, Hiatt SM, Thibaud-Nissen F, Astashyn A, Ermolaeva O, et al. RefSeq: an update on mammalian reference sequences. Nucleic Acids Res. 2014;42:D756–63. Burglin TR, Ruvkun G. New Motif in Pbx Genes. Nat Genet. 1992;1:319–20. Bernstein BE, Stamatoyannopoulos JA, Costello JF, Ren B, Milosavljevic A, Meissner A, et al. The NIH Roadmap Epigenomics Mapping Consortium. Nat Biotechnol. 2010;28:1045–8. Thomas DJ, Rosenbloom KR, Clawson H, Hinrichs AS, Trumbower H, Raney BJ, et al. The ENCODE project at UC Santa Cruz. Nucleic Acids Res. 2007;35:D663–7. Chan KKK, Zhang J, Chia NY, Chan YS, Sim HS, Tan KS, et al. KLF4 and PBX1 Directly Regulate NANOG Expression in Human Embryonic Stem Cells. Stem Cells. 2009;27:2114–25. Asahara H, Dutta S, Kao HY, Evans RM, Montminy M. Pbx-hox heterodimers recruit coactivator-corepressor complexes in an isoform-specific manner. Mol Cell Biol. 1999;19:8219–25. DiRocco G, Mavilio F, Zappavigna V. Functional dissection of a transcriptionally active, target-specific Hox-Pbx complex. EMBO J. 1997;16:3644–54. Wang ZB, Zang CZ, Rosenfeld JA, Schones DE, Barski A, Cuddapah S, et al. Combinatorial patterns of histone acetylations and methylations in the human genome. Nat Genet. 2008;40:897–903. Sim RJ, Belotserkovskaya R, Reinberg D. Elongation by RNA polymerase II: the short and long of it. Genes Dev. 2004;18:2437–68. Linares AJ, Lin CH, Damianov A, Adams KL, Novitch BG, Black DL. The splicing regulator PTBP1 controls the activity of the transcription factor Pbx1 during neuronal differentiation. Elife. 2015;4 Katz Y, Wang ET, Airoldi EM, Burge CB. Analysis and design of RNA sequencing experiments for identifying isoform regulation. Nat Methods. 2010;7:1009–15. Wang ET, Cody NA, Jog S, Biancolella M, Wang TT, Treacy DJ, et al. Transcriptome-wide regulation of pre-mRNA splicing and mRNA localization by muscleblind proteins. Cell. 2012;150:710–24. Tapial J, Ha KCH, Sterne-Weiler T, Gohr A, Braunschweig U, Hermoso-Pulido A, et al. An atlas of alternative splicing profiles and functional associations reveals new regulatory programs and genes that simultaneously express multiple major isoforms. Genome Res. 2017;27:1759–68. Singh B, Trincado JL, Tatlow PJ, Piccolo SR, Eyras E. 
Genome Sequencing and RNA-Motif Analysis Reveal Novel Damaging Noncoding Mutations in Human Tumors. Mol Cancer Res. 2018;16:1112–24. Lin L, Park JW, Ramachandran S, Zhang Y, Tseng YT, Shen S, et al. Transcriptome sequencing reveals aberrant alternative splicing in Huntington's disease. Hum Mol Genet. 2016;25:3454–66. Ji X, Park JW, Bahrami-Samani E, Lin L, Duncan-Lewis C, Pherribo G, et al. alphaCP binding to a cytosine-rich subset of polypyrimidine tracts drives a novel pathway of cassette exon splicing in the mammalian transcriptome. Nucleic Acids Res. 2016;44:2283–97. Bebee TW, Park JW, Sheridan KI, Warzecha CC, Cieply BW, Rohacek AM, et al. The splicing regulators Esrp1 and Esrp2 direct an epithelial splicing program essential for mammalian development. Elife. 2015;4 Melikishvili M, Chariker JH, Rouchka EC, Fondufe-Mittendorf YN. Transcriptome-wide identification of the RNA-binding landscape of the chromatin-associated protein PARP1 reveals functions in RNA biogenesis. Cell Discov. 2017;3:17043. Humphrey J, Emmett W, Fratta P, Isaacs AM, Plagnol V. Quantitative analysis of cryptic splicing associated with TDP-43 depletion. BMC Med Genet. 2017;10:38. Katz Y, Wang ET, Silterra J, Schwartz S, Wong B, Mesirov JP, et al. Sashimi plots: quantitative visualization of RNA sequencing read alignments. arXiv preprint arXiv:13063466 2013. Zhang Y, Liu T, Meyer CA, Eeckhoute J, Johnson DS, Bernstein BE, et al. Model-based Analysis of ChIP-Seq (MACS). Genome Biol. 2008;9:R137. Chiara MD, Reed R. A two-step mechanism for 5′ and 3′ splice-site pairing. Nature. 1995;375:510–3. Liu L, Jin G, Zhou X. Modeling the relationship of epigenetic modifications to transcription factor binding. Nucleic Acids Res. 2015;43:3873–85. Hall M, Frank E, Holmes G, Pfahringer B, Reutemann P, Witten IH. The WEKA data mining software: an update. ACM SIGKDD Explor Newslett. 2009;11:10–8. Trapnell C, Roberts A, Goff L, Pertea G, Kim D, Kelley DR, et al. Differential gene and transcript expression analysis of RNA-seq experiments with TopHat and Cufflinks. Nat Protoc. 2012;7:562–78. Carbon S, Ireland A, Mungall CJ, Shu S, Marshall B, Lewis S, et al. AmiGO: online access to ontology and annotation data. Bioinformatics. 2009;25:288–9. Karolchik D, Hinrichs AS, Furey TS, Roskin KM, Sugnet CW, Haussler D, et al. The UCSC Table Browser data retrieval tool. Nucleic Acids Res. 2004;32:D493–6. Chen J, Bardes EE, Aronow BJ, Jegga AG. ToppGene Suite for gene list enrichment analysis and candidate gene prioritization. Nucleic Acids Res. 2009;37:W305–11. Pathan M, Keerthikumar S, Ang CS, Gangoda L, Quek CY, Williamson NA, et al. FunRich: An open access standalone functional enrichment and interaction network analysis tool. Proteomics. 2015;15:2597–601. Pathan M, Keerthikumar S, Chisanga D, Alessandro R, Ang CS, Askenase P, et al. A novel community driven software for functional enrichment analysis of extracellular vesicles data. J Extracell Vesicles. 2017;6:1321455. Vodyanik MA, Yu J, Zhang X, Tian S, Stewart R, Thomson JA, et al. A mesoderm-derived precursor for mesenchymal stem and endothelial cells. Cell Stem Cell. 2010;7:718–29. Meyerrose TE, De Ugarte DA, Hofling AA, Herrbrich PE, Cordonnier TD, Shultz LD, et al. In vivo distribution of human adipose-derived mesenchymal stem cells in novel xenotransplantation models. Stem Cells. 2007;25:220–7. Sekiya I, Larson BL, Smith JR, Pochampally R, Cui JG, Prockop DJ. 
Expansion of human adult stem cells from bone marrow stroma: conditions that maximize the yields of early progenitors and evaluate their quality. Stem Cells. 2002;20:530–41. Stamatoyannopoulos J. University of Washington Human Reference Epigenome Mapping Project. NCBI GEO 2010, GSE18927. https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE18927. Nov 2015. Lister R PM, Dowen RH, Hawkins RD, Hon G, Tonti-Filippini J, Nery JR, et al. UCSD Human Reference Epigenome Mapping Project. NCBI GEO 2009. https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE16256. Nov 2015. The authors thank Drs Guangxu Jin, Liang Liu, and Dongmin Guo from the Wake Forest School of Medicine, who gave a lot of comments and suggestions on the data analyses and interpretations. They also acknowledge the editorial assistance of Karen Klein, MA, from the Wake Forest Clinical and Translational Science Institute (UL1 TR001420; PI: McClain). This work was funded by the National Institutes of Health (NIH) [1R01GM123037 and AR069395]. All RNA-seq and 16 HMs ChIP-seq data of H1 and five other differentiated cells are available in Gene Expression Omnibus (GEO) under accession number GSE16256 [128]. The BAM files of the RNA-seq data (two replicates for each, aligned to human genome hg18) are alternatively available at http://renlab.sdsc.edu/differentiation/download.html. Both RNA-seq and ChIP-seq data of 56 cell lines/tissues from the Roadmap/ENCODE projects [97, 98] are available on their official website (RoadMap: ftp://ftp.ncbi.nlm.nih.gov/pub/geo/DATA/roadmapepigenomics/by_sample/; ENCODE: ftp://hgdownload.cse.ucsc.edu/goldenPath/hg19/encodeDCC/) and all raw files are also available at GEO under the accession numbers GSE18927 [128] and GSE16256 [129]. Additional file 4: Table S4 provides the detailed information of these data. Yungang Xu and Weiling Zhao contributed equally to this work. Center for Computational Systems Medicine, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, 77030, USA Yungang Xu , Weiling Zhao & Xiaobo Zhou Center for Bioinformatics and Systems Biology, Wake Forest School of Medicine, Winston-Salem, NC, 27157, USA Department of Pediatric Surgery, McGovern Medical School, The University of Texas Health Science Center at Houston, Houston, TX, 77030, USA Scott D. Olson & Karthik S. Prabhakara Search for Yungang Xu in: Search for Weiling Zhao in: Search for Scott D. Olson in: Search for Karthik S. Prabhakara in: Search for Xiaobo Zhou in: YX and XZ conceived the study. YX carried out the sequencing data analysis, interpreted the results, and proposed the mechanisms. WZ conducted the experimental validations. WZ, SO, and KP worked on growing and characterizing the MSCs. YX wrote the first draft of the manuscript and all authors revised it. All authors read and approved the final version of the manuscript. Correspondence to Xiaobo Zhou. Figure S1. Identifying hESC differentiation-related AS exons. Figure S2. The hESC differentiation-related AS exons possess the typical properties of AS exons. Figure S3. AS profiles upon hESC differentiation show lineage-specific splicing pattern. Figure S4. HMs change significantly around the alternatively spliced exons upon hESC differentiation. Figure S5. A subset of AS events are significantly associated with some HMs upon hESC differentiation. Figure S6. K-means clustering based on selected epigenetic features of eight HMs for MXE and SE AS exons. Figure S7. HM-associated AS genes are more lineage-specific. Figure S8. 
HM-unassociated AS genes are enriched in G1 cell-cycle phase and pathways for self-renewal. Figure S9. Isoform switch from PBX1a to PBX1b during hESC differentiation. Figure S10. Isoform switch of PBX1 links H3K36me3 to hESC fate decision. Figure S11. The effect of ΔPSI cutoffs for AS-HM correlations. Table S1. The number of all AS events identified during hESC differentiation. Table S5. The PCR primers used in this study. (PDF 1917 kb) Table S2. AS events (AS exons) during the differentiation from H1 cells to differentiated cells. (XLSX 1852 kb) Table S3. HM-associated AS exons based on k-means clustering. (XLSX 1088 kb) Table S4. 56 cell lines/tissues and their corresponding RNA-seq data sources from the ENCODE and Roadmap projects. (XLSX 14 kb)
Xu, Y., Zhao, W., Olson, S.D. et al. Alternative splicing links histone modifications to stem cell fate decision. Genome Biol 19, 133 (2018). https://doi.org/10.1186/s13059-018-1512-3
Keywords: Alternative splicing; Cell fate decision; Cell cycle machinery
2014, 21: 132-136. doi: 10.3934/era.2014.21.132
An arithmetic ball quotient surface whose Albanese variety is not of CM type
Chad Schoen, Department of Mathematics, Duke University, Box 90320, Durham, NC 27708-0320, United States
Received October 2013; Revised May 2014; Published September 2014
An example is given of a compact quotient of the unit ball in $\mathbb{C}^2$ by an arithmetic group acting freely such that the Albanese variety is not of CM type. Such examples do not exist for congruence subgroups.
Keywords: CM type, Albanese variety, Ball quotient surface, complex multiplication.
Mathematics Subject Classification: 11F75, 14J29.
Citation: Chad Schoen. An arithmetic ball quotient surface whose Albanese variety is not of CM type. Electronic Research Announcements, 2014, 21: 132-136. doi: 10.3934/era.2014.21.132
Fair Visit
Given a set of locations, the robot should visit all of the locations in a fair way.
$\overset{n}{\underset{i=1}{\bigwedge}} \mathcal{F} (l_i)$
$\overset{n}{\underset{i=1}{\bigwedge}} \mathcal{G} (l_{i} \rightarrow \mathcal{X} ((\neg l_i)\ \mathcal{W}\ l_{(i+1)\%n}))$
where $l_1, l_2, \ldots$ are location propositions, i.e., expressions indicating that a robot r is in a specific area or at a given point. Note that the pattern is general and considers the case in which a robot can be in two locations at the same time. For example, a robot can simultaneously be in an area of a building indicated as $l_1$ (e.g., area 01) and in a room of that area indicated as $l_2$ (e.g., room 002). If the topological intersection of the considered locations is empty, then the robot cannot be in two locations at the same time and transitions labeled with both $l_1$ and $l_2$ cannot be fired.
Locations $l_1$, $l_2$, $l_3$ must be covered in a fair way. The trace $l_1 \rightarrow l_4 \rightarrow l_1 \rightarrow l_3 \rightarrow l_1 \rightarrow l_4 \rightarrow l_2 \rightarrow (l_{\# \setminus \{1,2,3\}})^\omega$ does not perform a fair visit, since it visits $l_1$ three times while $l_2$ and $l_3$ are each visited once. The trace $l_1 \rightarrow l_4 \rightarrow l_3 \rightarrow l_1 \rightarrow l_4 \rightarrow l_2 \rightarrow l_2 \rightarrow l_4 \rightarrow (l_{\# \setminus \{1,2,3\}})^\omega$ performs a fair visit, since it visits each of $l_1$, $l_2$, and $l_3$ at most twice.
The Fair Visit pattern specializes the Visit pattern by further constraining how locations are visited so that the visit is fair. Smith et al. proposed an LTL mission specification for the requirement that "an equal number of visits to each data-gather location" is performed. Their LTL specification is obtained by forcing an order on how the data-gather locations are visited. However, fair visiting may be required even without specifying an order in which the locations must be visited.
CTL formulation:
$\overset{n}{\underset{i=1}{\bigwedge}} \forall \mathcal{F} (l_i)$
$\overset{n}{\underset{i=1}{\bigwedge}} \forall \mathcal{G} (l_{i} \rightarrow \forall \mathcal{X} (\forall (\neg l_i)\ \mathcal{W}\ l_{(i+1)\%n}))$
Tagged: coverage
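For illustration, the following minimal Python sketch (ours, not part of the pattern catalog) checks the two conjuncts above on a finite trace prefix: every monitored location must appear, and once a location $l_i$ has been visited it must not recur before $l_{(i+1)\%n}$ has been seen; the weak until $\mathcal{W}$ is treated as satisfied if the trace ends first. The traces in the usage lines are made up for the example.

def fair_visit_ok(trace, locations):
    """Finite-trace check of the two fair-visit conjuncts."""
    n = len(locations)
    index = {l: i for i, l in enumerate(locations)}
    # Conjunct 1: F(l_i) for every i -- every location is eventually visited.
    if not all(l in trace for l in locations):
        return False
    # Conjunct 2: G(l_i -> X((not l_i) W l_{(i+1)%n})).
    for start, step in enumerate(trace):
        if step not in index:
            continue  # a location outside the monitored set, e.g., l4
        successor = locations[(index[step] + 1) % n]
        for later in trace[start + 1:]:
            if later == successor:  # release condition of the weak until
                break
            if later == step:       # l_i recurred before its successor
                return False
    return True

print(fair_visit_ok(["l1", "l2", "l3", "l1"], ["l1", "l2", "l3"]))  # True
print(fair_visit_ok(["l1", "l1", "l2", "l3"], ["l1", "l2", "l3"]))  # False: l1 recurs before l2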
European Journal of Hybrid Imaging EJNMMI Multimodality Journal
Single-site 123I-FP-CIT reference values from individuals with non-degenerative parkinsonism—comparison with values from healthy volunteers
Rachid Fahmi1, Günther Platsch2, Alexandre Bani Sadr3, Sylvain Gouttard3, Stephane Thobois4,5,6, Sven Zuehlsdorff1 & Christian Scheiber3 (ORCID: orcid.org/0000-0002-7670-1933)
European Journal of Hybrid Imaging volume 4, Article number: 5 (2020)
Iodine 123-radiolabeled 2β-carbomethoxy-3β-(4-iodophenyl)-N-(3-fluoropropyl) nortropane (123I-FP-CIT) SPECT can be performed to distinguish degenerative forms of movement disorders/parkinsonism/tremor from other entities such as idiopathic tremor or drug-induced parkinsonism. For equivocal cases, semi-quantification and comparison to reference values are a necessary addition to visual interpretation of 123I-FP-CIT scans. To overcome the challenges of multi-center recruitment and scanning of healthy volunteers, we generated 123I-FP-CIT reference values from individuals with various neurological conditions but without dopaminergic degeneration, scanned at a single center on the same SPECT-CT system following the same protocol, and compared them to references from a multi-center database built using healthy volunteers' data. From a cohort of 1884 patients, we identified 237 subjects (120 men, 117 women, age range 16–88 years) through a two-stage selection process. Every patient had a final clinical diagnosis after a mean follow-up of 4.8 ± 1.3 years. Images were reconstructed using (1) Flash3D with scatter and CT-based attenuation corrections (AC) and (2) filtered back projection with Chang AC. Volume-of-interest analysis was performed using commercial software to calculate specific binding ratios (SBRs), caudate-to-putamen ratios, and asymmetry values on different striatal regions. Generated reference values were assessed according to age and gender and compared with those from the European Normal Control Database of DaTscan (ENC-DAT) study, and their robustness was tested against a cohort of patients with different diagnoses. Age had a significant negative linear effect on all SBRs. Overall, the reduction rate per decade in SBR was between 3.80 and 5.70%. Women had greater SBRs than men, but this gender difference was only statistically significant for the Flash3D database. Linear regression was used to correct for age-dependency of SBRs and to allow comparisons to age-matched reference values and "normality" limits. Generated regression parameters and their 95% confidence intervals (CIs) were comparable to corresponding ENC-DAT results. For example, the 95% CI mean slope for the striatum in men is − 0.015 ([− 0.019, − 0.011]) for the Flash3D database versus − 0.015 ([− 0.021, − 0.009]) for ENC-DAT. Caudate-to-putamen ratios and asymmetries were not influenced by age or gender. The generated 123I-FP-CIT reference values have a similar age-related distribution, with no increase in variance due to comorbidities when compared to values from a multi-center study with healthy volunteers. This makes it possible for sites to build their 123I-FP-CIT references from scans acquired during routine clinical practice.
Single photon emission computed tomography (SPECT) imaging with iodine 123-radiolabeled 2β-carbomethoxy-3β-(4-iodophenyl)-N-(3-fluoropropyl) nortropane (123I-FP-CIT or 123I-ioflupane, brand name: DaTscan™, GE Healthcare) is a widely used nuclear imaging method to assess the integrity of the presynaptic dopaminergic system by measuring dopamine active transporter (DAT) availability in the striatum (Booij et al. 1997a; Neumeyer et al. 1994). Knowing whether a patient exhibiting movement disorders or parkinsonian signs presents with degeneration of the dopaminergic system has major implications in terms of diagnosis, prognosis, and care (Thobois et al. 2019; O'sullivan et al. 2008). However, the clinical diagnosis of a parkinsonian syndrome sometimes remains challenging and inconclusive, especially at disease onset and for patients with an atypical clinical presentation or those taking drugs that might themselves induce the concomitant neurological abnormalities (Thobois et al. 2019; Catafau and Tolosa 2004). A normal 123I-FP-CIT SPECT excludes the diagnosis of Parkinson's disease, as mentioned in the revised diagnostic criteria for Parkinson's disease (Postuma et al. 2015). In addition, 123I-FP-CIT is valuable in distinguishing idiopathic Parkinson's disease (PD) and Parkinson "plus" syndromes (reduced radiotracer binding) from essential tremor (ET), psychogenic, post-neuroleptic or vascular parkinsonism, and dopa-responsive dystonia (normal radiotracer binding) (Booij et al. 1997b; Booij et al. 2001; Booth et al. 2015). 123I-FP-CIT has also proven valuable in the differential diagnosis of dementia with Lewy bodies (DLB) and Alzheimer's disease (AD) (Thobois et al. 2019; Booth et al. 2015; Walker et al. 2002). In clinical settings, 123I-FP-CIT images are routinely assessed by expert readers based on visual evaluation using a 0 to 3 grading system according to the visual scale proposed by Catafau et al. (Catafau and Tolosa 2004). The reproducibility of this technique has been questioned (Tondeur et al. 2010), in particular for cases that are equivocal or exhibit an early onset of the disease. Furthermore, visual interpretation can be especially challenging in a follow-up scenario when relevant changes must be differentiated from irrelevant ones. In such difficult-to-interpret cases, region-based quantification is often advised in combination with visual interpretation (Darcourt et al. 2010; Djang et al. 2012), as it can help clinicians reach a more confident interpretation of the scan and can increase confidence in less experienced readers (Söderlund et al. 2013; Albert et al. 2016). Semi-quantification has also been shown to add value to visual analysis of 123I-FP-CIT in the differential diagnosis of DLB from AD (Nicastro et al. 2017). These semi-quantification methods use either manually drawn or predefined regions of interest. However, quantitative measures have little clinical value without reference ranges, especially in cases when the scans cannot be visually interpreted as abnormal and diagnostic decisions cannot be drawn from quantification alone (Albert et al. 2016). In addition, as 123I-FP-CIT uptake declines with normal ageing, quantitative values are only useful if the age effect on DAT availability is accounted for (Nicastro et al. 2016). A few years ago, a 123I-FP-CIT database was generated from healthy volunteers in a multi-center European study (Varrone et al. 2013).
This study included 139 healthy individuals (74 men, 65 women; age range 20–83 years, mean 53 ± 19 years) scanned at 13 different centers on 17 different SPECT systems of 11 different models. To address inter-center variability, quantitative values for each SPECT system were corrected using camera-specific calibration factors generated through phantom experiments (Koch et al. 2006; Tossici-Bolt et al. 2011; Dickson et al. 2012). Despite the efforts to minimize inter-site variability, there remained significant variability in specific binding ratio (SBR) that could not be explained by age and the other considered covariates and that might be related to the SPECT equipment, including hardware and reconstruction software, used in the European Normal Control Database of DaTscan (ENC-DAT) study, as was noted in (Buchert et al. 2016a). This led to some inconsistencies when comparing data and, as the authors noted, may have resulted in higher variability in the estimation of the regression lines modelling the uptake decline as a function of age and in that of the percentage declines per decade. Two semi-quantitative methods were used to calculate reference values from both corrected and non-corrected data, and the corresponding outcomes were compared to each other. Depending on the quantification method, the study showed a significant decline of DAT availability of between 4 and 6.7% per decade. Regression analyses were performed, and the outcomes were presented according to gender, showing a significant gender effect on uptake ratios in the caudate and putamen when using non-corrected data and when using only attenuation-corrected data. Significant effects of both age and gender on striatal SBRs were observed for all data, corrected and uncorrected, when a semi-quantification method accounting for partial volume effect was used. In order to cope with the challenge of recruiting healthy volunteers and with multi-center variability, Nicastro et al. (Nicastro et al. 2016) proposed an approach to calculate site-specific reference values using scans from individuals with non-degenerative conditions scanned following the same protocol. A normal scan was defined as a grade 0 123I-FP-CIT SPECT (Catafau and Tolosa 2004). This study included a cohort of 182 subjects with an older age range compared to the ENC-DAT database (73 men and 109 women (60%); age range 40–93 years, mean 69.1 ± 11.2 years). Most of the included subjects had drug-induced parkinsonism (DIP) (44%), while the remaining subjects had essential tremor (ET) (21%), psychogenic parkinsonism (17%), or other non-parkinsonian conditions. Not all included subjects were followed over time, and a final neuropathological assessment was unavailable for the majority of them. Therefore, it is possible that this study included cases with pre-clinical degenerative conditions affecting the nigrostriatal system. Reference limits were generated using an automated semi-quantitative software, and age-dependent reference limits were established based on the percentile approach. The corresponding outcomes were compared to a manual analysis method. This study also showed a linear effect of age, with striatal uptake decreasing by 6.8% per decade. This effect was accounted for using linear regression, which helped determine age-dependent reference limits for the SBRs. The authors also showed a greater decline of uptake in women than in men in all striatal regions, yet none of these differences in slope was statistically significant.
Hence, identical reference values were established for both genders. In comparison with the results from the ENC-DAT study (Varrone et al. 2013), Nicastro et al. (Nicastro et al. 2016) found slightly higher intercepts and steeper slopes, but comparable mean caudate and putamen SBR values according to gender. In order to address some of the limitations mentioned above, we propose to build a database of reference values for 123I-FP-CIT using a large cohort of subjects across a wide age range and with a well-balanced gender representation. Each included subject had both a normal 123I-FP-CIT SPECT scan, based on semi-quantification and on visual interpretation by two trained nuclear medicine physicians, and a clinical diagnosis of non-degenerative parkinsonism or another non-parkinsonian entity at baseline, confirmed by experienced neurologists after a follow-up period ranging from 3.5 to 10 years. Imaging data were all acquired at the same center on the same SPECT-CT imaging system and following the same imaging protocol. We established reference values and limits for the striatum, caudate nucleus, putamen, anterior putamen, and posterior putamen while accounting for the age effect on DAT availability. We computed regression parameters and their 95% confidence intervals and compared them to the parameters from the ENC-DAT study. Caudate-to-putamen (C/P) ratios and asymmetry values were also calculated as additional parameters that could be useful to assess the integrity of the nigrostriatal pathway. The robustness of the generated reference values and limits was tested using a cohort of 22 patients with mixed diagnoses.
Methods
The selection of our cohort was performed in two stages, as illustrated in Fig. 1. In the first stage, selection was based on visual reads combined with semi-quantification using an in-house software program, and on assessment of the clinical diagnoses. This stage was performed at the Lyon University Neurological Hospital nuclear medicine department, where the data were collected (CS and ABS, Lyon, France). The second stage consisted of additional and rigorous quality control of both the anatomical and functional imaging data, combined with visual reads and semi-quantification of the SPECT data with syngo.via® software. Patients' clinical information was also taken into consideration when deciding to include or reject a patient at this second selection stage. This stage was performed by RF and GP. Detailed descriptions of these selection stages follow.
Diagram of the two-stage procedure for patient selection and inclusion in the present study. The top panel corresponds to stage 1 (clinical site, Lyon, France), and the bottom panel corresponds to the second selection stage (Siemens Medical Solutions USA, Inc., Molecular Imaging, Knoxville, TN). MSA-C cerebellar multi-system atrophy
Patient selection—stage 1
Every patient included in the present study was evaluated by a neurologist from the Lyon (France) university hospital's departments of neurology, psychiatry, or geriatrics and then referred to the nuclear medicine imaging center for a 123I-FP-CIT examination. The aim of this 123I-FP-CIT SPECT was to determine whether the patients' abnormal movements (mostly tremor) or parkinsonian syndrome were related to a degeneration of the nigrostriatal dopaminergic pathway or not. More precisely, these patients exhibited atypical tremor, parkinsonian syndrome, or dementia.
A pool of 1884 123I-FP-CIT scans, performed on the same Symbia T2 scanner (Siemens Medical Solutions USA, Inc.) between January 2008 and December 2015, was available. We consecutively assessed all available scans, one at a time, in order of their scan date. Scan "normality" was mainly determined by visual interpretation and confirmed by semi-quantification using a fully automated in-house analysis program (CS). This program was implemented based on routines from the statistical parametric mapping (SPM) toolbox, with CT-based attenuation correction (CTAC) and correction for the age effect on DAT availability. Note that this analysis program is independent of the software application implemented in syngo.via®, which was used to generate the reference values. Once a scan was classified as normal, a second NM physician (ABS) was asked to visually confirm its normality. Then, we assessed the integrity of the corresponding imaging and clinical data and the availability of a final diagnosis based on a follow-up of at least 3 years after the 123I-FP-CIT SPECT scan was acquired. As the number of normal SPECT scans increased, new inclusions were further stratified to achieve the targeted normal age distribution and a balanced gender distribution. Subjects selected at this stage were contacted to obtain their written informed consent to the use of their anonymized data, including for commercial purposes.
Patient selection—stage 2
The anonymized clinical and imaging data of the 256 subjects selected in stage 1 were further quality-checked (GP), looking for any large patient motion in the SPECT projections and/or any reconstruction or imaging artifacts, as well as any anatomical abnormalities (e.g., major hydrocephalus or atrophies) in the CT images. Additional visual and semi-quantitative assessments of the SPECT data were performed (RF and GP) using a striatal analysis methodology implemented in syngo.via® software, looking for any uptake abnormalities, such as large asymmetries. This software application generates quantifiable measures as well as a parametric "slab" view of the patient's image for improved visual assessment (Buchert et al. 2016b). The final clinical diagnoses, as well as the clinical reads of all SPECT images performed at stage 1, were considered when selecting which subjects to include in the database.
SPECT imaging: data acquisition and reconstruction
Patients were scanned on the same Symbia™ T2 system equipped with low-energy high-resolution collimators. Scans started 3 h post-intravenous injection of ~185 MBq of 123I-FP-CIT, which occurred 1 h following thyroid blockade with perchlorate when needed. In every case, the patient's head was constrained with a head holder to minimize motion, and acquisition was performed with the following parameters: rotational radius kept between 13 and 15 cm; matrix 128 × 128; 120 projection angles over 360°; and a hardware zoom of 1.23 × 1.23 to achieve an in-plane pixel size of 3.9 × 3.9 mm2. In addition to the photopeak imaging window (159 keV ± 8%, 147–171 keV), two additional scatter energy windows were acquired. For each patient, a diagnostic CT was acquired (with a CT dose index of 35 mGy and a dose length product of 940 mGy·cm) and was used for attenuation correction (AC).
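For sites assembling a comparable single-system database, the acquisition settings above can be captured in a small configuration structure against which incoming scan headers are checked. The following Python sketch is a hypothetical layout (the field names are ours, not a vendor format):

# Hypothetical capture of the acquisition protocol described above.
ACQUISITION_PROTOCOL = {
    "system": "Symbia T2",
    "collimator": "LEHR",           # low-energy high-resolution
    "activity_MBq": 185,            # ~185 MBq 123I-FP-CIT
    "uptake_time_h": 3,             # scan start post-injection
    "radius_cm": (13, 15),          # rotational radius range
    "matrix": (128, 128),
    "n_projections": 120,           # over 360 degrees
    "zoom": 1.23,                   # in-plane pixel size 3.9 x 3.9 mm2
    "photopeak_keV": (147, 171),    # 159 keV +/- 8%
    "scatter_windows": 2,           # additional energy windows
}

def protocol_deviations(scan_header, protocol=ACQUISITION_PROTOCOL):
    """Return {field: (found, expected)} for header fields that deviate."""
    return {k: (scan_header.get(k), v)
            for k, v in protocol.items() if scan_header.get(k) != v}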
We generated two databases of reference values corresponding to two image reconstruction methods: (1) Flash3D (OSEM3D with resolution recovery) with 10 iterations and 8 subsets, CTAC (as recommended for OSEM reconstructions), a triple-energy-window method for scatter correction, and 8-mm FWHM Gaussian post-reconstruction smoothing; and (2) filtered back projection (FBP) reconstruction with Chang attenuation correction (linear attenuation coefficient: μ = 0.11 cm−1) and a Butterworth filter of order 5 with a cut-off frequency of 0.45 cycles/pixel.
Image quantification and analyses
We used syngo.via® software to calculate reference values on twelve striatal volumes of interest (VOIs), including the left and the right sides of the caudate, striatum, putamen, anterior putamen, and posterior putamen. For each VOI, left and right specific binding ratios (SBRs) and asymmetry values, as well as C/P ratios, were generated as follows: first, each 123I-FP-CIT image was automatically registered to a SPECT template in the Montreal Neurological Institute (MNI) space using an affine transformation, followed by (automatic) positioning of predefined striatal and background (occipital lobe) VOIs. To ensure accurate positioning of these VOIs on the spatially normalized SPECT images, manual adjustments (translations and rotations) of the automatically placed VOIs were performed in three planes whenever needed. Figure 2 shows an example of the predefined striatal and background VOIs overlaid on a spatially and intensity normalized 123I-FP-CIT image. The automatic SPECT-based registration to MNI space is sensitive to striatal uptake and to background noise characteristics and hence could be suboptimal for some challenging cases. In a few such cases, we used a CT-based registration method to guide the spatial normalization process of the SPECT image.
Axial, coronal, and sagittal views showing predefined striatal and background volumes of interest overlaid on a spatially and intensity normalized 123I-FP-CIT image. R right, L left
For each striatal VOI, left and right SBR values were calculated as:
$$ \mathrm{SBR}=\left({C}_{\mathrm{voi}}-{C}_{\mathrm{occip}}\right)/{C}_{\mathrm{occip}} $$
where $C_{\mathrm{voi}}$ and $C_{\mathrm{occip}}$ are the mean uptakes of the 75% hottest voxels within a side of the striatal VOI and within the occipital lobe, respectively. The asymmetry values between corresponding left and right VOIs were calculated using the following formula:
$$ \mathrm{asym}\ \left(\%\right)=200\times \mathrm{abs}\ \left({\mathrm{SBR}}_{\mathrm{L}}-{\mathrm{SBR}}_{\mathrm{R}}\right)/\left({\mathrm{SBR}}_{\mathrm{L}}+{\mathrm{SBR}}_{\mathrm{R}}\right), $$
where $\mathrm{SBR}_{\mathrm{R}}$ and $\mathrm{SBR}_{\mathrm{L}}$ denote the SBR values computed on the right and the left sides of a given striatal VOI, respectively. We also calculated the left and the right C/P ratios, i.e., the ratio of caudate SBR to putamen SBR on each side.
Comparison of new subjects to reconstructed databases
In order to test the robustness of the generated reference values, we compared a set of subjects with different diagnoses against the generated databases. A total of 22 subjects (9 normal and 13 with confirmed degenerative forms of parkinsonism) were used for this purpose. For a test subject of a given age and for each striatal VOI, the patient's left and right SBRs were separately compared to the corresponding predicted, age-matched reference SBR. The latter is calculated by substituting the subject's age into the corresponding regression line equation.
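The quantification just described can be condensed into a few lines. The following Python sketch is a simplified illustration (not the syngo.via® implementation; the voxel arrays are synthetic) of the 75%-hottest-voxel means, the SBR and asymmetry formulas above, and a C/P ratio:

import numpy as np

def hottest_mean(voxels, fraction=0.75):
    """Mean of the `fraction` hottest voxels in a VOI."""
    v = np.sort(np.asarray(voxels, dtype=float))[::-1]
    return v[:max(1, int(round(fraction * v.size)))].mean()

def sbr(voi_voxels, occip_voxels):
    """SBR = (C_voi - C_occip) / C_occip."""
    c_voi, c_occ = hottest_mean(voi_voxels), hottest_mean(occip_voxels)
    return (c_voi - c_occ) / c_occ

def asymmetry(sbr_left, sbr_right):
    """asym (%) = 200 * |SBR_L - SBR_R| / (SBR_L + SBR_R)."""
    return 200.0 * abs(sbr_left - sbr_right) / (sbr_left + sbr_right)

# Illustration with synthetic voxel values:
rng = np.random.default_rng(0)
occ = rng.normal(100.0, 5.0, 5000)       # occipital (background) counts
caud_l = rng.normal(320.0, 20.0, 800)    # left caudate counts
put_l = rng.normal(300.0, 20.0, 900)     # left putamen counts
sbr_caud_l, sbr_put_l = sbr(caud_l, occ), sbr(put_l, occ)
cp_left = sbr_caud_l / sbr_put_l         # left caudate-to-putamen ratio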
Using the equation of the lower bounding line of the prediction interval (−95% PI), the patient's age-matched normality threshold is also calculated and used to classify the subject. Left and right C/P ratios as well as asymmetries were also compared against the corresponding references. A patient was classified as normal if each of the measured parameters fell within the corresponding normal range defined by the normality thresholds.
Statistical analyses
For each striatal VOI, differences between corresponding left and right SBR values were evaluated with a paired t test. A multivariate analysis was used to investigate the effects of age and gender on SBR values, with the dependent variable being the SBR. The relationship between age and SBR for both genders was analyzed using linear regression. Regression parameters are reported with their 95% confidence intervals (95% CI). For each regional SBR, the regression line, defined as $y_r = \text{slope} \times \text{age} + \text{intercept}$, served to calculate age-matched references, and the standard error (SE) of the fitting model was used to derive the model's ±95% prediction interval (PI) as $y = y_r \pm 2 \times \mathrm{SE}$. The PI represents the range within which the SBR value of a subject with normal striatal uptake (calculated in the same way as the generated reference values) has a 95% probability of falling. The lower bounds of these intervals (i.e., the −95% PI given by $y_l = y_r - 2 \times \mathrm{SE}$) are used as reference limits for SBR values. That is, a subject whose scan is acquired and analyzed following the same protocols is suspected of having dopaminergic dysfunction if one of the SBR values is below the corresponding −95% PI line (i.e., a value that is more than 2 × SE below the corresponding age-matched reference). Normality of the measured outcomes was assessed using normal Q–Q plots, which showed that all SBR distributions were approximately normal, whereas the asymmetries and the C/P ratios were normalized using the Box-Cox power transformation method prior to assessing the effect of age and gender on them. Finally, the difference between the r values of two regression fits, $r_1$ and $r_2$, was evaluated using the following formula (Cohen et al. 2014):
$$ z=\frac{\left({z}_1-{z}_2\right)}{\sqrt{\frac{1}{\left({n}_1-3\right)}+\frac{1}{\left({n}_2-3\right)}}} $$
where $z_1$ and $z_2$ refer to the Fisher r-to-z transformation of the coefficients $r_1$ and $r_2$, and $n_1$ and $n_2$ denote the corresponding sample sizes. Significance was set at p = 0.05. Statistical analyses were mainly performed using the Statistics Toolbox of Matlab R2014a (MathWorks, Natick, MA).
Results
Stage 1 of the patients' selection process initially resulted in identifying 357 eligible patients, who were subsequently contacted to obtain their written informed consent to be included in the study. Further checks resulted in 101 exclusions: 44 subjects were excluded due to incomplete imaging data files, an additional 55 subjects were excluded due to the lack of follow-up diagnoses, and two subjects did not consent to be included in the study. Finally, we retained 256 subjects: 20 of whom were young (age range 23–57 years) with an attention deficit hyperactivity disorder (ADHD) diagnosis (see the "Discussion" section), and the remaining 236 patients had a final diagnosis of non-degenerative parkinsonism, other movement disorders, or epilepsy based on long-term follow-ups ranging from 3 to 10 years (mean 4.8 ± 1.3 years) (Rizzo et al. 2016).
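As a concrete illustration of the regression and prediction-interval machinery described under the statistical analyses above, the Python sketch below fits the regression, derives the −95% PI normality limit, and implements the r-value comparison of Eq. 3. The example numbers are the striatum Flash3D parameters quoted later in the text (slope −0.016, intercept 4.028, from Table 5) together with SE = 0.4, which we infer from the 2 × SE = 0.8 used in the worked example in the "Discussion" section:

import numpy as np

def fit_reference(ages, sbrs):
    """OLS fit of SBR versus age; returns slope, intercept, residual SE."""
    ages, sbrs = np.asarray(ages, float), np.asarray(sbrs, float)
    slope, intercept = np.polyfit(ages, sbrs, 1)
    se = (sbrs - (slope * ages + intercept)).std(ddof=2)
    return slope, intercept, se

def normality_limit(age, slope, intercept, se):
    """Age-matched reference SBR and its -95% PI lower bound (y_r - 2*SE)."""
    y_ref = slope * age + intercept
    return y_ref, y_ref - 2.0 * se

def fisher_z_compare(r1, n1, r2, n2):
    """Eq. 3: z statistic comparing two correlation coefficients."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)   # Fisher r-to-z transform
    return (z1 - z2) / np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))

y_ref, limit = normality_limit(65, -0.016, 4.028, 0.4)
print(round(y_ref, 2), round(limit, 2))   # 2.99 and 2.19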
The second selection stage resulted in the exclusion of 19 additional subjects for the following reasons: scans of four subjects had large head motion; two patients had major hydrocephalus, which rendered the positioning of the predefined striatal VOIs on their SPECT images very challenging; three patients were imaged using different collimators; four patients whose scans showed no evidence of dopaminergic deficit (SWEDD) had a final diagnosis of MSA-C and were hence excluded by our clinical expert (GP); and six patients had large 123I-FP-CIT uptake asymmetries, and their scans were therefore considered abnormal. In total, 237 subjects (120 men and 117 women; age 62.2 ± 15.7 years, range 16–88 years) were included in the present study. Most included subjects had either ET or DIP (49.8%). The young patient subgroup (8.5%) with ADHD was included to provide references at young ages. Table 1 summarizes the final clinical diagnoses for all included subjects.
Table 1 Final diagnoses of subjects included in the generated databases
Differences in specific binding ratio between hemispheres
No statistical differences were found between left and right SBR values for any striatal VOI irrespective of gender, nor between left and right SBRs from men and women considered separately. Hence, for each striatal VOI, the left and the right SBRs were averaged to estimate a single reference value. Left and right SBR values for the Flash3D database are presented in Table 2. Corresponding results for the FBP database are not shown. Averaged left and right SBRs are shown in Table 4 for both databases and genders.
Table 2 Comparison between SBRs in both hemispheres for the Flash3D database according to gender
A one-way ANOVA comparing the averaged right and left SBR values across the different diagnostic subgroups (see Table 1) revealed no significant differences for caudate, putamen, and striatum (p > 0.05). These results are shown in Table 3 for the FBP database. Similar results were found for the Flash3D database but are not shown here.
Table 3 Averaged left and right SBR values corresponding to different diagnostic groups
Gender and age effects on specific binding ratio
We found that women had slightly higher SBRs, with greater variances, compared to men in all striatal regions regardless of the reconstruction method. Figures 3 and 4 show SBRs measured within the striatum according to gender, plotted as a function of age, for the Flash3D and the FBP databases, respectively. This difference tends to decrease with ageing. The differences in SBR between genders were only statistically significant for the Flash3D database, as shown in Table 4. Hence, we established SBR reference values identical for men and women for both reconstructions.
Effect of age and gender on specific binding ratio (SBR) values computed as the average of left and right SBRs within the striatum and plotted as a function of age. Also shown are regression lines with slope and intercept for the Flash3D database (female, blue; male, red)
Effect of age and gender on specific binding ratio (SBR) values computed as the average of left and right SBRs within the striatum and plotted as a function of age. Also shown are regression lines with slope and intercept for the FBP database (female, blue; male, red)
Table 4 Average of left and right SBRs for the Flash3D and FBP databases, for men, women, and for combined genders
Conversely, the effect of age was statistically significant for both databases and all SBRs, as shown in Table 4.
We observed a consistent decrease of SBR values with age for men and women in all striatal VOIs (e.g., Figs. 3 and 4). Parameters of the linear regression analysis of SBR as a function of age, and their 95% CIs, are presented in Table 5 for all striatal VOIs, irrespective of gender. Linear regression analysis revealed that measured SBRs in the striatum, for both genders combined, decreased by 30.54% (Flash3D) and 33.39% (FBP) over the considered age range (16–88 years). This corresponds to a reduction rate in striatum SBR of 4.24% (Flash3D) and 4.64% (FBP) per decade. Women showed a steeper decline with advancing age compared to men in all striatal regions. For both men and women, the posterior putamen showed the fastest SBR decline, while caudate SBR was the slowest to decline. The mean reduction rate of SBR was between 3.80 and 5.67% per decade for men and between 4.47 and 5.70% per decade for women, depending on the reconstruction method and on the striatal VOI. The calculated percentage declines of SBRs per decade are summarized in Table 6.
Table 5 Linear regression analysis of DAT availability versus age, irrespective of gender
Table 6 Age-related decline in DAT availability (in % per decade)
We compared regression parameters from the reconstructed Flash3D database to parameters generated from healthy volunteers in the ENC-DAT study using the attenuation- and scatter-corrected (ACSC), uncalibrated scans processed with the BRASS analysis method (Varrone et al. 2013). The corresponding results are shown in Table 7 and graphically displayed in Fig. 5. Using Eq. 3, we found no differences in r values between men and women for either database. Similarly, no differences in r values were found between the Flash3D database and the uncalibrated ENC-DAT-ACSC database according to gender in any striatal VOI.
Table 7 Comparison of linear regression parameters from the generated Flash3D database with those from the ENC-DAT study using BRASS analysis of uncalibrated and ACSC data
Comparison between SBR values from the Flash3D database and values from the ENC-DAT study using BRASS analysis of uncalibrated attenuation and scatter corrected scans (ENC-DAT-ACSC), according to gender
Asymmetries and caudate-to-putamen ratios
Linear regression analysis performed on the Box-Cox transformed asymmetries and C/P ratios revealed an age effect only on the putamen asymmetry for the Flash3D database (y = 0.003 × x + 1.19, r² = 0.018, SE = 0.37, p = 0.037). When comparing these parameters between men and women, the differences were not statistically significant. The reference limit corresponding to each of these parameters is estimated as the sample mean plus twice the sample standard deviation (i.e., the mean and SD of all corresponding reference values in the database). For instance, the reference limit corresponding to striatum asymmetry is 3.02 + 2 × 2.41 = 7.84% (Flash3D) and 2.44 + 2 × 1.91 = 6.26% (FBP). Sample means and SDs of the left and right C/P ratios and asymmetry values are summarized in Table 8 for both databases. Comparisons between C/P ratios corresponding to the Flash3D database and those from the ENC-DAT study are presented in Table 9 for men and women. Figures 6 and 7 show scatter plots of left and right C/P ratios and asymmetry values, respectively, with horizontal lines defining reference limits.
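The per-decade figures in Table 6 can be reproduced from the regression parameters. The short check below assumes, consistent with the reported 30.54% and 4.24% for the Flash3D striatum, that the total decline is expressed relative to the fitted SBR at age 16 and divided by the 7.2 decades spanned:

slope, intercept = -0.016, 4.028            # striatum, Flash3D (Table 5)
age_min, age_max = 16, 88
sbr_at_16 = slope * age_min + intercept     # fitted SBR at the youngest age
total_decline = 100.0 * (-slope) * (age_max - age_min) / sbr_at_16
per_decade = total_decline / ((age_max - age_min) / 10.0)
print(round(total_decline, 2), round(per_decade, 2))  # 30.54 and 4.24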
Table 8 Left and right C/P ratios and asymmetry values for the Flash3D and the FBP databases
Table 9 Comparison of C/P ratios from the Flash3D database with those from the ENC-DAT study with BRASS analysis of uncalibrated and ACSC data
Left and right caudate-to-putamen ratios for the Flash3D (upper row) and the FBP (bottom row) databases irrespective of gender. The solid blue line (y = mean + 2 × standard deviations of the corresponding reference values) defines the reference limit above which a scan could be suspected of being positive
Asymmetry values (%) corresponding to caudate, putamen, and striatum (from top to bottom) and for the Flash3D (left) and FBP (right) databases. The solid blue line (y = mean + 2 × standard deviations of the corresponding reference values) defines the reference limit above which a scan could be suspected of being positive
Scans belonging to the 22 test subjects were compared to the generated databases using the syngo.via® software. For all abnormal cases (13/13), and independent of the reconstruction method, one or more SBR values fell below the corresponding age-matched reference limit, confirming the abnormality of the scans. For most of these cases, the C/P ratios and/or the asymmetry values were also greater than their corresponding reference limits. The majority of the tested cases, including the nine normal ones, could easily and accurately be classified based only on visual assessments of the corresponding parametric slab views (see examples in Fig. 8) (Buchert et al. 2016b). However, for a normal case corresponding to a 63-year-old male subject, the tracer uptake was uniformly reduced on the left side compared to the right side, as can be clearly seen in the slab view in Fig. 9 for both reconstructions. Without quantification, this may lead to visually interpreting this scan as abnormal or to reduced confidence in its visual interpretation. However, for this case, all calculated binding ratios and asymmetries were within normal ranges, with the lowest SBR being 1.32 (Flash3D) and 1.86 (FBP) SDs below the age-matched mean. In addition, all asymmetries were within 2 × SDs of the respective reference values for both reconstructions, with the caudate exhibiting the highest asymmetry for the Flash3D reconstruction (4.41%, or 0.24 × SDs from the sample mean). This shows the added value of quantifying scans and comparing them to reference values.
Examples of three test cases used to validate the generated databases: two positive (first and second rows) and one negative (third row) case. Each row, from left to right: axial, sagittal, and coronal fused views of the patient's SPECT and CT images showing the striatal and occipital VOIs, and the corresponding slab view images (Buchert et al. 2016b) generated within the syngo.via® software and used for visual assessments
Slab view (Buchert et al. 2016b) corresponding to a 63-year-old male patient showing uniformly asymmetric uptake with a reduction over the entire left striatum. Left (FBP) and right (Flash3D) reconstructions
In this study, we generated a large database of 123I-FP-CIT reference values from a cohort of patients with normal 123I-FP-CIT scans and no confirmed form of degenerative parkinsonism either at baseline or after a follow-up ranging from 3 to 10 years. Our cohort was rigorously selected, in two separate stages, from a pool of patients' data collected over 8 years at the same center and acquired on the same SPECT-CT imaging system following the same acquisition and processing protocols.
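For reference, the multi-parameter decision rule applied to the 22 test cases above can be condensed as follows; the thresholds and measurements in the usage example are illustrative placeholders echoing the 63-year-old case, not the actual database values:

def classify(age, sbr_values, asym_values, cp_values,
             sbr_params, asym_limits, cp_limits):
    """Flag a scan when any SBR falls below its age-matched -95% PI limit,
    or when an asymmetry or C/P ratio exceeds its mean + 2*SD limit."""
    flags = []
    for voi, value in sbr_values.items():
        slope, intercept, se = sbr_params[voi]
        if value < (slope * age + intercept) - 2.0 * se:
            flags.append(f"low SBR: {voi}")
    flags += [f"high asymmetry: {v}" for v, a in asym_values.items()
              if a > asym_limits[v]]
    flags += [f"high C/P: {v}" for v, r in cp_values.items()
              if r > cp_limits[v]]
    return ("abnormal", flags) if flags else ("normal", [])

status, flags = classify(
    63,
    {"striatum_L": 2.6, "striatum_R": 2.8},    # measured SBRs
    {"striatum": 4.4},                         # asymmetry (%)
    {"caudate_putamen_L": 1.05},               # C/P ratio
    {"striatum_L": (-0.016, 4.028, 0.4),
     "striatum_R": (-0.016, 4.028, 0.4)},      # (slope, intercept, SE)
    {"striatum": 7.84},                        # asymmetry limit (Flash3D)
    {"caudate_putamen_L": 1.16},               # C/P limit (Flash3D)
)
print(status, flags)  # ('normal', [])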
This helped overcome inter-center variability and the challenge of recruiting a large enough cohort of healthy volunteers, as in the previously published European (n = 139) (Varrone et al. 2013) and Japanese (n = 256) (Matsuda et al. 2018) databases. To the best of our knowledge, this is the largest cohort of patients with normal 123I-FP-CIT SPECT, with a well-balanced gender distribution (50.6% men and 49.4% women) across a wide age range (16–88 years). To better fit the age effect on tracer uptake at younger ages, we included 20 young subjects (age range 23–57 years, mean 37 years) with ADHD, two siblings (17 and 30 years old) who were initially suspected of genetic parkinsonism that was not confirmed, and two young adults (25 and 31 years old) with psychogenic parkinsonism. The ADHD patients underwent 123I-FP-CIT SPECT for research purposes, given the controversy in the literature as to whether the dopaminergic system of patients with this disease is affected or not (van Dyck et al. 2002; Volkow et al. 2007; Costa et al. 2013). Discussion of ADHD pathophysiology is outside the scope of this paper. One original aspect of this study is that our "normal" data sets, as identified at the first stage of the patients' selection process, went through additional and independent quality and medical checks, followed by analysis using a dedicated commercial software application that allows quantification and improved visual assessment of 123I-FP-CIT SPECT scans (Buchert et al. 2016b). At this second selection stage, we excluded 19/256 patients, four of whom had normal 123I-FP-CIT scans and were diagnosed with MSA-C. These SWEDD cases represent a negligible proportion of our cohort (1.6%), in agreement with the findings of Nicastro et al. in their prospective study (Nicastro et al. 2018), where they showed that 3/155 (2.1%) subjects with suspected degenerative parkinsonism (1 corticobasal syndrome, 1 MSA-C, and 1 PD) and 1/53 (1.9%) with DLB had a normal visual 123I-FP-CIT SPECT. Using a dedicated Striatal Analysis software (syngo.via®), we generated two different versions of the database corresponding to two reconstruction methods often used in clinical routine, and we investigated the effects of gender and age on all measured outcomes. Our results showed a significant linear effect of age on SBR values in all striatal volumes of interest, as previously reported by others (e.g., (Nicastro et al. 2016; Varrone et al. 2013)). We obtained an overall rate of SBR decline per decade, averaged over all striatal VOIs and over both genders, of 4.80% (Flash3D) and 5.02% (FBP). This agrees with what has been reported in the literature for healthy volunteers (between 3.6 and 7.5% per decade) (Nicastro et al. 2016; Varrone et al. 2013; Matsuda et al. 2018; Pirker et al. 2000; van Dyck et al. 1995; van Dyck et al. 2002). In particular, the ENC-DAT study (Varrone et al. 2013) reported an average age-related decline in DAT availability of 5.5% per decade for both genders, in good agreement with our measured decline. These results clearly validate our approach of using normal 123I-FP-CIT SPECT uptake values obtained in non-healthy subjects without dopaminergic degeneration to generate a normal, age-stratified 123I-FP-CIT database. Several factors, including image reconstruction and correction as well as quantitative analysis, may have contributed to the variability in the estimated decline rates of DAT availability seen between different published studies.
We also found that, regardless of gender and irrespective of the reconstruction method, the rate of SBR decline was faster for the putamen than for the caudate, and even more so in the posterior region of the putamen compared to the anterior region. There are conflicting results on this topic in the literature (Varrone et al. 2013; Matsuda et al. 2018; Eusebio et al. 2012; Kaasinen et al. 2015; Volkow et al. 2007; Nobili et al. 2013), which we believe deserve further investigation. To describe the age-related effect on DAT availability, we used a linear model to best fit our data, as was done in other studies (Nicastro et al. 2016; Varrone et al. 2013; Matsuda et al. 2018). A previous study using 123I-IPT SPECT (Mozley et al. 1996) on a small sample of healthy volunteers (N = 18, age range 19–67 years) suggested that the age-related decline is rapid during young adulthood, followed by a slower decline throughout middle age, and reported that a non-linear function provided a better fit for the age effect on DAT availability. In a different study using a larger cohort (N = 126) of healthy subjects and 123I-β-CIT SPECT (van Dyck et al. 2002), the authors suggested that linear models provide an appropriate fit of DAT loss with ageing and used such a model to report an average decline per decade of 6.5%. The authors also noted that a second-order polynomial was slightly better but nearly linear in the considered age range (18–88 years), which is similar to the age range of our cohort. In the present study, we tested different fitting models (linear, quadratic, logarithmic, exponential, and power) and calculated correlation coefficients to evaluate their quality of fit. We found that, for most outcomes, the linear and the quadratic models performed comparably and better than the other models. Hence, we decided to use the linear model to investigate the effect of age in our population and to compare our results against studies that used similar models. Linear regression analysis was used to establish age-matched reference SBRs and their corresponding normality limits in the striatum, caudate, putamen, anterior putamen, and posterior putamen. This is the first study to report reference values for the anterior and posterior regions of the putamen. This may be very useful for early detection of dopaminergic alteration, as the posterior putamen has been shown to be the earliest and most affected structure of the striatum (Tatsch and Poepperl 2013). No differences were found between left and right SBRs for any striatal VOI. However, conflicting reports have been published regarding interhemispheric differences in SBR values. For example, in (Matsuda et al. 2018), the authors reported greater SBRs on the right side, while others have reported greater SBRs on the left side (e.g., (Varrone et al. 2013)). Others, like us, found no interhemispheric differences in any of the measured SBRs (Kaasinen et al. 2015; Lavalaye et al. 2000). Similarly, no effect of age or gender was found on the interhemispheric asymmetries or on the C/P ratios. We estimated the average of the left and right upper reference limits (mean + 2 × SD) of the C/P ratios to be 1.16 (Flash3D) and 1.13 (FBP). These values are lower than the ones reported in (Nicastro et al. 2016; Varrone et al. 2013) and can better differentiate between normal subjects and PD patients, who have much higher C/P ratios due to early putaminal degeneration (Nicastro et al. 2016; Shin et al. 2007; Haapaniemi et al. 2001).
Depending on the reconstruction method and on the striatal VOI, mean asymmetry values ranged between 6.26 and 8.81%. For a new scan acquired and reconstructed using the same protocols as one of the generated databases, the possibility of alteration of the presynaptic dopaminergic system should be raised when its left or right C/P ratio, or one of its interhemispheric asymmetries, is two SDs above the corresponding sample mean. For example, our results suggest that a normal 123I-FP-CIT scan is associated with a striatum asymmetry lower than 7.84% (Flash3D) or 6.26% (FBP). In general, normal 123I-FP-CIT scans should be associated with low asymmetry values and with C/P ratios close to unity, irrespective of age and gender. Contrary to the results reported in (Matsuda et al. 2018), our results revealed a slight but not statistically significant increase of asymmetry values with advancing age, except for the putamen asymmetry in the Flash3D database. It has been proposed that asymmetry values and C/P ratios may be used for early detection of PD and to differentiate between various forms of parkinsonism (Nicastro et al. 2016; El Fakhri et al. 2006; Sixel-Döring et al. 2011; Contrafatto et al. 2012; Benítez-Rivero et al. 2013), hence the importance of establishing reference limits for these parameters.

For each regional SBR, we calculated the regression line and the ± 95% PI lines. The regression lines (y_r = slope × age + intercept) determine the age-matched references, and the lower 95% PI lines (y_l = y_r − 2 × SE) act as normality thresholds. The 123I-FP-CIT scan of a patient can be labeled abnormal if its SBR is 2 × SE or more below the age-matched reference. For example, the Flash3D striatum SBR of a 65-year-old patient with an abnormal 123I-FP-CIT scan is expected to be below y = (− 0.016 × 65 + 4.028) − 0.8 ≈ 2.19 (see Table 5).
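As a concrete illustration of this decision rule, the short Python sketch below encodes the age-matched reference and the 2 × SE normality threshold. It reuses only the Flash3D striatum numbers from the worked example above (slope = −0.016, intercept = 4.028, 2 × SE = 0.8); the function names and any other values are illustrative assumptions, not part of the published pipeline.

```python
# A minimal sketch of the normality-threshold rule described above, assuming
# the Flash3D striatum regression parameters from the worked example
# (slope = -0.016, intercept = 4.028, 2*SE = 0.8). Names are illustrative.

def age_matched_reference(age, slope=-0.016, intercept=4.028):
    """Age-matched reference SBR from the regression line y_r = slope*age + intercept."""
    return slope * age + intercept

def lower_normality_limit(age, two_se=0.8, **kw):
    """Lower 95% prediction-interval line y_l = y_r - 2*SE."""
    return age_matched_reference(age, **kw) - two_se

def is_abnormal(sbr, age, **kw):
    """Label a scan abnormal when its SBR falls 2*SE or more below the reference."""
    return sbr < lower_normality_limit(age, **kw)

# Worked example from the text: Flash3D striatum SBR of a 65-year-old patient.
print(round(lower_normality_limit(65), 2))  # ~2.19
print(is_abnormal(2.0, 65))                 # True: below the age-matched limit
```

The same rule applies per VOI and per reconstruction method by swapping in the corresponding regression parameters from Table 5.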
Regarding the effect of gender, we found that women had higher uptake than men in all striatal VOIs, in agreement with some previous studies (Varrone et al. 2013; Matsuda et al. 2018; Eusebio et al. 2012; Kaasinen et al. 2015; Lavalaye et al. 2000). However, we found that this difference was only significant for the Flash3D database. The lack of statistical significance for the FBP database may be due either to there being no true correlation between gender and DAT availability, or to a gender difference that exists but did not reach statistical significance because of inaccuracies in the Chang attenuation correction method, which uses a simpler approximation of individual head geometry and assumes uniform attenuation inside the head. This may have led to more statistical noise, and thus to higher p values, compared with the CT-based attenuation correction used for the Flash3D database. Other factors differentiating the two databases, such as scatter correction, may also have contributed to the discrepancy in the gender effect. Some published studies have reported higher DAT availability in women than in men (Lavalaye et al. 2000; Staley et al. 2001; Mozley et al. 2001), whereas others did not report any gender-related difference (Ryding et al. 2004; van Dyck et al. 1995; van Dyck et al. 2002). Various hypotheses have been put forward as to why women exhibit higher striatal uptake than men, related either to higher DAT density in women or to differences in striatal volume between genders (Varrone et al. 2013; Eusebio et al. 2012).

In this study, we observed a greater separation between the regression lines of men's and women's SBR data at earlier decades, and even more so for the Flash3D database. For both databases, we have established reference values identical for men and women. In comparison with the ENC-DAT study (Varrone et al. 2013), we found that the Flash3D reference values are comparable to those generated by the BRASS method from uncalibrated ACSC data in the ENC-DAT study for both men and women. For example, the mean slope (95% CI) for the striatum in men is − 0.015 (range − 0.019 to − 0.011) in the Flash3D database versus − 0.015 (range − 0.021 to − 0.009) in the ENC-DAT study. For women, we found a mean slope of − 0.018 (range − 0.024 to − 0.013) in the Flash3D database versus − 0.018 (range − 0.025 to − 0.011) in the ENC-DAT database. Slight differences between the regression parameters of the two databases may be attributed in part to the different attenuation correction methods used in the two studies. In addition, our mean C/P ratios are uniform across genders and less variable than those from the ENC-DAT uncalibrated ACSC data.

Note that the comparisons between our study and the ENC-DAT study may be affected by several factors, including methodological differences in the acquisition of scans; their reconstruction, correction, and quantification; and, most importantly, the differences between the two cohorts. However, using data from patients with normal 123I-FP-CIT scans who were adequately followed to confirm that they had not developed any form of degenerative parkinsonism, and who were scanned at a single site following the same protocol, we were able to generate reference values comparable to those generated from healthy volunteers. Equally important, our mean values for caudate and putamen SBRs in men and women were similar if not identical to those from the ENC-DAT study using BRASS analysis of uncalibrated ACSC data. Our data also showed that comorbidities (e.g., diabetes, hypertension) did not seem to increase the variance in outcomes compared with healthy subjects, making it possible for clinicians to build their own normal 123I-FP-CIT databases from scans acquired during routine clinical practice. In comparison with a published database generated from patients with normal 123I-FP-CIT scans and no degenerative conditions (Nicastro et al. 2017; Nicastro et al. 2016), we used a larger cohort (237 vs. 182) with a well-balanced gender distribution and a wider age range, including a very young subset of participants. We also had longer follow-ups for all of our subjects, who were included through a stringent two-phase selection process. This may have contributed to generating reference values and limits comparable to those of healthy volunteers.

There are some limitations to this study. First, the diagnoses were mainly based on clinical criteria without autopsy confirmation. However, the non-degenerative nature of parkinsonism was assessed by the clinical evolution over 3 to 10 years (4.8 ± 1.3 years), which represents the gold standard for establishing a diagnosis in routine clinical practice. Second, our approach to normative data definition was monocentric, which may limit the ability to extend our references to other groups and centers. Efforts are underway to generalize the use of the reconstructed databases to other imaging systems and/or other reconstruction/correction methods.
This warrants a thorough investigation of how the reconstruction and scanner characterization impact the performance of our reference values in assessing the normality of clinical scans. An evaluation of the generated reference values against a large sample of degenerative cases is also warranted but is beyond the scope of this paper. Third, we found that the differences in SBRs between genders were only statistically significant for the Flash3D database. We have postulated that the lack of such significance for the FBP database is mainly due to inaccuracies in the Chang attenuation correction method, which has been shown to be a challenging and often inconsistent task in DAT imaging due to the low tracer binding in cortical areas (Barnden et al. 2006). This deserves further investigation. The use of CTAC for the Flash3D database, although it comes at the expense of exposing patients to additional radiation (~ 2 mSv), has the advantage of providing more accurate attenuation correction, which led to a better separation between men's and women's SBRs. This separation alone may not be sufficient justification for the additional radiation exposure. However, the availability of a CT scan had two other major advantages in our approach: (1) the anatomical information provided by the CT scan helped greatly in the diagnostic interpretation of the 123I-FP-CIT images, in particular for patients with atypical parkinsonian syndromes, which are complicated and often involve different pathologies, and (2) the CT scan plays an important role in the semi-quantification pipeline, especially for the spatial normalization of 123I-FP-CIT images, which are often very noisy and hard to register directly to a standard template. In this study, CT-based registration was required for 26/237 (11%) cases in which the SPECT-based alignment failed.

Finally, the similarity of our results to those from large multi-center, multi-system studies of healthy volunteers using similar reconstruction/correction methods suggests that 123I-FP-CIT binding as a function of age is mostly dependent on the biology of DAT availability. Given the high reproducibility and reliability of the 123I-FP-CIT scan (Booij et al. 1998), a better understanding of how inter-subject differences affect the relationship between DAT availability, as measured by 123I-FP-CIT, and age would require longitudinal studies, which are ethically and economically difficult to design and perform.

This study provides a large database of 123I-FP-CIT reference parameters from a cohort of patients with normal SPECT scans and no degenerative parkinsonism, scanned at the same center on the same imaging system and following the same protocol. The generated age-matched SBRs as well as the C/P and asymmetry reference values and limits compared favorably to parameters from a normal database of healthy volunteers. Our results could be used to assess new scans acquired and processed in a manner equivalent to one of the reconstructed databases. Generalization to scans acquired with other imaging systems and processed with other reconstruction/correction methods is currently under way.

The datasets generated and/or analyzed during the current study are not publicly available due to institutional restrictions on patient confidentiality, prior consent, and privacy.

In syngo.via® software, we report distribution volume ratios (DVRs) instead of SBRs, where DVR = Cvoi/Coccip = SBR + 1.
We also report the putamen-to-caudate ratio instead of the C/P ratio.

Abbreviations
123I-FP-CIT: Iodine 123-radiolabeled 2β-carbomethoxy-3β-(4-iodophenyl)-N-(3-fluoropropyl)nortropane
AC: Attenuation correction
ACSC: Attenuation and scatter corrections
C/P: Caudate-to-putamen ratio
CTAC: CT-based attenuation correction
DAT: Dopamine active transporter
DIP: Drug-induced parkinsonism
DLB: Dementia with Lewy body
ENC-DAT: European Normal Control Database of DaTscan
FBP: Filtered back projection
MNI: Montreal Neurological Institute
MSA-C: Cerebellar multiple system atrophy
OSEM-3D: Three-dimensional iterative method based on ordered subset expectation maximization
PD: Idiopathic Parkinson's disease
SBR: Specific binding ratio
SE: Standard error of regression model
SPECT: Single photon emission computed tomography
SWEDD: Scan without evidence of dopaminergic deficit

References
Albert NL, Unterrainer M, Diemling M, Xiong G, Bartenstein P, Koch W, Varrone A, Dickson JC, Tossici-Bolt L, Sera T, Asenbaum S (2016) Implementation of the European multicentre database of healthy controls for [123I]FP-CIT SPECT increases diagnostic accuracy in patients with clinically uncertain parkinsonian syndromes. Eur J Nucl Med Mol Imaging 43(7):1315–1322
Barnden LR, Dickson J, Hutton BF (2006) Detection and validation of the body edge in low count emission tomography images. Comput Methods Programs Biomed 84(2–3):153–161
Benítez-Rivero S, Marín-Oyaga VA, García-Solís D, Huertas-Fernández I, García-Gómez FJ, Jesús S, Cáceres MT, Carrillo F, Ortiz AM, Carballo M, Mir P (2013) Clinical features and 123I-FP-CIT SPECT imaging in vascular parkinsonism and Parkinson's disease. J Neurol Neurosurg Psychiatry 84(2):122–129
Booij J, Andringa G, Rijks LJ, Vermeulen RJ, De Bruin K, Boer GJ, Janssen AG, Van Royen EA (1997a) [123I]FP-CIT binds to the dopamine transporter as assessed by biodistribution studies in rats and SPECT studies in MPTP-lesioned monkeys. Synapse 27(3):183–190
Booij J, Habraken JB, Bergmans P, Tissingh G, Winogrodzka A, Wolters EC, Janssen AG, Stoof JC, Van Royen EA (1998) Imaging of dopamine transporters with iodine-123-FP-CIT SPECT in healthy controls and patients with Parkinson's disease. J Nucl Med 39(11):1879–1884
Booij J, Speelman JD, Horstink MW, Wolters EC (2001) The clinical benefit of imaging striatal dopamine transporters with [123I]FP-CIT SPET in differentiating patients with presynaptic Parkinsonism from those with other forms of Parkinsonism. Eur J Nucl Med 28(3):266–272
Booij J, Tissingh G, Boer GJ, Speelman JD, Stoof JC, Janssen AG, Wolters EC, Van Royen EA (1997b) [123I]FP-CIT SPECT shows a pronounced decline of striatal dopamine transporter labelling in early and advanced Parkinson's disease. J Neurol Neurosurg Psychiatry 62(2):133–140
Booth TC, Nathan M, Waldman AD, Quigley AM, Schapira AH, Buscombe J (2015) The role of functional dopamine-transporter SPECT imaging in parkinsonian syndromes, part 2. Am J Neuroradiol 36(2):236–244
Buchert R, Hutton C, Lange C, Hoppe P, Makowski M, Bamousa T, Platsch G, Brenner W, Declerck J (2016b) Semiquantitative slab view display for visual evaluation of 123I-FP-CIT SPECT. Nucl Med Commun 37(5):509–518
Buchert R, Kluge A, Tossici-Bolt L, Dickson J, Bronzel M, Lange C, Asenbaum S, Booij J, Kapucu LÖ, Svarer C, Koulibaly PM (2016a) Reduction in camera-specific variability in [123I]FP-CIT SPECT outcome measures by image reconstruction optimized for multisite settings: impact on age-dependence of the specific binding ratio in the ENC-DAT database of healthy controls. Eur J Nucl Med Mol Imaging 43(7):1323–1336
Catafau AM, Tolosa E (2004) Impact of dopamine transporter SPECT using 123I-ioflupane on diagnosis and management of patients with clinically uncertain Parkinsonian syndromes. Mov Disord 19(10):1175–1182
Cohen P, West SG, Aiken LS (2014) Applied multiple regression/correlation analysis for the behavioral sciences. Psychology Press, Taylor and Francis Group, New York
Contrafatto D, Mostile G, Nicoletti A, Dibilio V, Raciti L, Lanzafame S, Luca A, Distefano A, Zappia M (2012) [123I]FP-CIT-SPECT asymmetry index to differentiate Parkinson's disease from vascular parkinsonism. Acta Neurol Scand 126(1):12–16
Costa A, la Fougère C, Pogarell O, Möller HJ, Riedel M, Ettinger U (2013) Impulsivity is related to striatal dopamine transporter availability in healthy males. Psychiatry Res 211(3):251–256
Darcourt J, Booij J, Tatsch K, Varrone A, Vander Borght T, Kapucu ÖL, Någren K, Nobili F, Walker Z, Van Laere K (2010) EANM procedure guidelines for brain neurotransmission SPECT using 123I-labelled dopamine transporter ligands, version 2. Eur J Nucl Med Mol Imaging 37(2):443–450
Dickson JC, Tossici-Bolt L, Sera T, De Nijs R, Booij J, Bagnara MC, Seese A, Koulibaly PM, Akdemir UO, Jonsson C, Koole M (2012) Proposal for the standardisation of multi-centre trials in nuclear medicine imaging: prerequisites for a European 123I-FP-CIT SPECT database. Eur J Nucl Med Mol Imaging 39(1):188–197
Djang DS, Janssen MJ, Bohnen N, Booij J, Henderson TA, Herholz K, Minoshima S, Rowe CC, Sabri O, Seibyl J, Van Berckel BN (2012) SNM practice guideline for dopamine transporter imaging with 123I-ioflupane SPECT 1.0. J Nucl Med 53(1):154–163
El Fakhri G, Habert MO, Maksud P, Kas A, Malek Z, Kijewski MF, Lacomblez L (2006) Quantitative simultaneous 99mTc-ECD/123I-FP-CIT SPECT in Parkinson's disease and multiple system atrophy. Eur J Nucl Med Mol Imaging 33(1):87–92
Eusebio A, Azulay JP, Ceccaldi M, Girard N, Mundler O, Guedj E (2012) Voxel-based analysis of whole-brain effects of age and gender on dopamine transporter SPECT imaging in healthy subjects. Eur J Nucl Med Mol Imaging 39(11):1778–1783
Haapaniemi TH, Ahonen A, Torniainen P, Sotaniemi KA, Myllylä VV (2001) [123I]β-CIT SPECT demonstrates decreased brain dopamine and serotonin transporter levels in untreated parkinsonian patients. Mov Disord 16(1):124–130
Kaasinen V, Joutsa J, Noponen T, Johansson J, Seppänen M (2015) Effects of aging and gender on striatal and extrastriatal [123I]FP-CIT binding in Parkinson's disease. Neurobiol Aging 36(4):1757–1763
Koch W, Radau PE, Münzing W, Tatsch K (2006) Cross-camera comparison of SPECT measurements of a 3-D anthropomorphic basal ganglia phantom. Eur J Nucl Med Mol Imaging 33(4):495–502
Lavalaye J, Booij J, Reneman L, Habraken JB, van Royen EA (2000) Effect of age and gender on dopamine transporter imaging with [123I]FP-CIT SPET in healthy volunteers. Eur J Nucl Med 27(7):867–869
Matsuda H, Murata M, Mukai Y, Sako K, Ono H, Toyama H, Inui Y, Taki Y, Shimomura H, Nagayama H, Tateno A (2018) Japanese multicenter database of healthy controls for [123I]FP-CIT SPECT. Eur J Nucl Med Mol Imaging 45(8):1405–1416
Mozley LH, Gur RC, Mozley PD, Gur RE (2001) Striatal dopamine transporters and cognitive functioning in healthy men and women. Am J Psychiatry 158(9):1492–1499
Mozley PD, Kim HJ, Gur RC, Tatsch K, Muenz LR, McElgin WT, Kung MP, Mu M, Myers AM, Kung HF (1996) Iodine-123-IPT SPECT imaging of CNS dopamine transporters: nonlinear effects of normal aging on striatal uptake values. J Nucl Med 37(12):1965–1970
Neumeyer JL, Campbell A, Wang S, Gao Y, Milius RA, Kula NS, Baldessarini RJ, Zea-Ponce Y, Baldwin RM, Innis RB (1994) N-Omega-fluoroalkyl analogs of (1R)-2β-carbomethoxy-3β-(4-iodophenyl)tropane (β-CIT): radiotracers for positron emission tomography and single photon emission computed tomography imaging of dopamine transporters. J Med Chem 37(11):1558–1561
Nicastro N, Burkhard PR, Garibotto V (2018) Scan without evidence of dopaminergic deficit (SWEDD) in degenerative parkinsonism and dementia with Lewy bodies: a prospective study. J Neurol Sci 385:17–21
Nicastro N, Garibotto V, Allali G, Assal F, Burkhard PR (2017) Added value of combined semi-quantitative and visual [123I]FP-CIT SPECT analyses for the diagnosis of dementia with Lewy bodies. Clin Nucl Med 42(2):96–102
Nicastro N, Garibotto V, Poncet A, Badoud S, Burkhard PR (2016) Establishing on-site reference values for 123I-FP-CIT SPECT (DaTscan®) using a cohort of individuals with non-degenerative conditions. Mol Imaging Biol 18(2):302–312
Nobili F, Naseri M, De Carli F, Asenbaum S, Booij J, Darcourt J, Ell P, Kapucu Ö, Kemp P, Varer C, Morbelli S (2013) Automatic semi-quantification of [123I]FP-CIT SPECT scans in healthy volunteers using BasGan version 2: results from the ENC-DAT database. Eur J Nucl Med Mol Imaging 40(4):565–573
O'Sullivan SS, Williams DR, Gallagher DA, Massey LA, Silveira-Moriyama L, Lees AJ (2008) Nonmotor symptoms as presenting complaints in Parkinson's disease: a clinicopathological study. Mov Disord 23(1):101–106
Pirker W, Asenbaum S, Hauk M, Kandlhofer S (2000) Imaging serotonin and dopamine transporters with 123I-beta-CIT SPECT: binding kinetics and effects of normal aging. J Nucl Med 41(1):36–44
Postuma RB, Berg D, Stern M, Poewe W, Olanow CW, Oertel W, Obeso J, Marek K, Litvan I, Lang AE, Halliday G (2015) MDS clinical diagnostic criteria for Parkinson's disease. Mov Disord 30(12):1591–1601
Rizzo G, Copetti M, Arcuti S, Martino D, Fontana A, Logroscino G (2016) Accuracy of clinical diagnosis of Parkinson disease: a systematic review and meta-analysis. Neurology 86(6):566–576
Ryding E, Lindström M, Brådvik B, Grabowski M, Bosson P, Träskman-Bendz L, Rosén I (2004) A new model for separation between brain dopamine and serotonin transporters in 123I-β-CIT SPECT measurements: normal values and sex and age dependence. Eur J Nucl Med Mol Imaging 31(8):1114–1118
Shin HY, Kang SY, Yang JH, Kim HS, Lee MS, Sohn YH (2007) Use of the putamen/caudate volume ratio for early differentiation between parkinsonian variant of multiple system atrophy and Parkinson disease. J Clin Neurol 3(2):79–81
Sixel-Döring F, Liepe K, Mollenhauer B, Trautmann E, Trenkwalder C (2011) The role of 123I-FP-CIT-SPECT in the differential diagnosis of Parkinson and tremor syndromes: a critical assessment of 125 cases. J Neurol 258(12):2147–2154
Söderlund TA, Dickson JC, Prvulovich E, Ben-Haim S, Kemp P, Booij J, Nobili F, Thomsen G, Sabri O, Koulibaly PM, Akdemir OU (2013) Value of semiquantitative analysis for clinical reporting of 123I-2-β-carbomethoxy-3β-(4-iodophenyl)-N-(3-fluoropropyl)nortropane SPECT studies. J Nucl Med 54(5):714–722
Staley JK, Krishnan-Sarin S, Zoghbi S, Tamagnan G, Fujita M, Seibyl JP, Maciejewski PK, O'Malley S, Innis RB (2001) Sex differences in [123I]β-CIT SPECT measures of dopamine and serotonin transporter availability in healthy smokers and nonsmokers. Synapse 41(4):275–284
Tatsch K, Poepperl G (2013) Nigrostriatal dopamine terminal imaging with dopamine transporter SPECT: an update. J Nucl Med 54(8):1331–1338
Thobois S, Prange S, Scheiber C, Broussolle E (2019) What a neurologist should know about PET and SPECT functional imaging for parkinsonism: a practical perspective. Parkinsonism Relat Disord 59:93–100
Tondeur MC, Hambye AS, Dethy S, Ham HR (2010) Interobserver reproducibility of the interpretation of I-123 FP-CIT single-photon emission computed tomography. Nucl Med Commun 31(8):717–725
Tossici-Bolt L, Dickson JC, Sera T, De Nijs R, Bagnara MC, Jonsson C, Scheepers E, Zito F, Seese A, Koulibaly PM, Kapucu OL (2011) Calibration of gamma camera systems for a multicentre European 123I-FP-CIT SPECT normal database. Eur J Nucl Med Mol Imaging 38(8):1529–1540
van Dyck CH, Seibyl JP, Malison RT, Laruelle M, Wallace E, Zoghbi SS, Zea-Ponce Y, Baldwin RM, Charney DS, Hoffer PB, Innis RB (1995) Age-related decline in striatal dopamine transporter binding with iodine-123-β-CIT SPECT. J Nucl Med 36(7):1175–1181
van Dyck CH, Seibyl JP, Malison RT, Laruelle M, Zoghbi SS, Baldwin RM, Innis RB (2002) Age-related decline in dopamine transporters: analysis of striatal subregions, nonlinear effects, and hemispheric asymmetries. Am J Geriatr Psychiatry 10(1):36–43
Varrone A, Dickson JC, Tossici-Bolt L, Sera T, Asenbaum S, Booij J, Kapucu OL, Kluge A, Knudsen GM, Koulibaly PM, Nobili F (2013) European multicentre database of healthy controls for [123I]FP-CIT SPECT (ENC-DAT): age-related effects, gender differences and evaluation of different methods of analysis. Eur J Nucl Med Mol Imaging 40(2):213–227
Volkow ND, Wang GJ, Newcorn J, Fowler JS, Telang F, Solanto MV, Logan J, Wong C, Ma Y, Swanson JM, Schulz K (2007) Brain dopamine transporter levels in treatment and drug naive adults with ADHD. Neuroimage 34(3):1182–1190
Walker Z, Costa DC, Walker RW, Shaw K, Gacinovic S, Stevens T, Livingston G, Ince P, McKeith IG, Katona CL (2002) Differentiation of dementia with Lewy bodies from Alzheimer's disease using a dopaminergic presynaptic ligand. J Neurol Neurosurg Psychiatry 73(2):134–140

Acknowledgements
We would like to thank all the technologists and technical staff at the clinical site (Lyon, France) for supporting this work. This study was supported by a research collaboration grant between Siemens Medical Solutions USA, Inc., Molecular Imaging, Knoxville, TN, USA, and the Hospices Civils de Lyon, Nuclear Medicine Department.
Author affiliations
Siemens Medical Solutions USA, Inc., Molecular Imaging, Knoxville, TN, USA: Rachid Fahmi & Sven Zuehlsdorff
Siemens Healthineers, Erlangen, Germany: Günther Platsch
Nuclear Medicine, Hospices Civils de Lyon, 69500 Bron, France: Alexandre Bani Sadr, Sylvain Gouttard & Christian Scheiber
Movement Disorder Clinic, Pierre Wertheimer Neurologic Hospital, Hospices Civils de Lyon, 69500 Bron, France: Stephane Thobois
Faculté de Médecine Lyon Sud, Université Claude Bernard Lyon 1, Lyon, France
Institut des Sciences Cognitives Marc Jeannerod, UMR 5229, CNRS, Bron, France

Author contributions: Guarantor of integrity of entire study: RF, GP, and SC; Concept and design: RF, GP, SZ, and CS; Data acquisition: SC; Data management: SC and SG; Image reconstruction and quality checks: RF, GP, and CS; Patient selection: SC (stage 1) and RF and GP (stage 2); Reading of clinical studies: SC and ABS; Statistical analyses: RF; Original manuscript drafting: RF and SC; Manuscript revision for important intellectual content: GP, ST, SZ, and CS. All authors read and approved the final manuscript.

Correspondence to Christian Scheiber.

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. Written informed consent was obtained from all individual participants included in the study.

R. Fahmi and S. Zuehlsdorff are full-time employees of Siemens Medical Solutions USA, Inc. G. Platsch is a full-time employee of Siemens Healthineers, Germany. C. Scheiber has received research grant support from Siemens Medical Solutions USA, Inc. A. Bani Sadr, S. Gouttard, and S. Thobois have no conflicts of interest to declare.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Fahmi, R., Platsch, G., Sadr, A.B. et al. Single-site 123I-FP-CIT reference values from individuals with non-degenerative parkinsonism—comparison with values from healthy volunteers. European J Hybrid Imaging 4, 5 (2020). https://doi.org/10.1186/s41824-020-0074-2

Keywords: Dopamine transporter · 123I-FP-CIT SPECT · Age effect · Gender difference
A cluster positioning architecture and relative positioning algorithm based on pigeon flock bionics

Zhongliang Deng, Hang Qi (ORCID: orcid.org/0000-0003-3719-5561), Chengfeng Wu, Enwen Hu & Runmin Wang

Unmanned clusters can realize collaborative work, flexible configuration, and efficient operation, and have become an important development trend of unmanned platforms. Cluster positioning is important for ensuring the normal operation of unmanned clusters. Existing solutions suffer from problems such as requiring external system assistance, high system complexity, poor architecture scalability, and accumulation of positioning errors over time. Without the aid of information from outside the cluster, we construct a north-aligned relative position relationship to support formation control and achieve robust cluster relative positioning. Based on the idea of bionics, this paper proposes a robust hierarchical cluster positioning architecture by analyzing the autonomous behavior of pigeon flocks. We divide the clusters into follower clusters, core clusters, and leader nodes, which enables flexible networking and cluster expansion. For the core cluster, which is the most critical to relative positioning in the architecture, we propose a cluster relative positioning algorithm based on spatiotemporal correlation information. Designed for low cost and large-scale application, the algorithm uses intra-cluster ranging and inertial navigation motion vectors to construct the positioning equations and solves them through the Multidimensional Scaling (MDS) and Multiple Objective Particle Swarm Optimization (MOPSO) algorithms. The cluster formation is abstracted as a mixed direction-distance graph, and graph rigidity theory is used to analyze the localizability conditions of the algorithm. We designed cluster positioning simulation software and conducted localizability tests and positioning accuracy tests in different scenarios. Compared with a relative positioning algorithm based on the Extended Kalman Filter (EKF), the algorithm proposed in this paper has more relaxed positioning conditions and can adapt to a variety of scenarios. It also has higher relative positioning accuracy, and its error does not accumulate over time.

At present, unmanned platforms with the characteristics of flexible configuration, multi-function, and miniaturization are widely used in detection, inspection, delivery, and other scenarios. However, due to limitations of cost, volume, and other factors, a single unmanned platform can hardly meet the requirements of complex environments and tasks. The unmanned cluster, as an important development trend, achieves higher efficiency through collaboration among members (Gautam & Mohan, 2012; Tahir et al., 2019). The advantages of unmanned clusters are as follows:

1. Parallel perception, distributed computing and execution capabilities, and better fault tolerance and robustness. Multiple Unmanned Aerial Vehicles (UAVs) can realize multi-dimensional parallel perception through the complementary collocation of heterogeneous sensors, and distributed execution of the overall task through task splitting and reasonable allocation. In addition, when some UAVs fail, other UAVs in the cluster can replace them to complete the scheduled tasks, improving the fault tolerance and robustness of the system.

2. A higher capability ceiling, enabling the completion of tasks that are difficult for a single vehicle.
Through collaboration, unmanned cluster systems achieve capabilities beyond the simple superposition of individual unmanned nodes, so they offer better functional expansion and can adapt to more types of tasks.

3. Better economy. First, through reasonable system design and coordination, a low-cost unmanned cluster system can replace a single complex high-cost system, which is more economical. Second, based on the design concepts of miniaturization, integration, modularization, and collaboration, it can greatly reduce the design cycle and the cost of building a new task platform.

Navigation and positioning are important for ensuring the normal operation of unmanned clusters. Together with cluster architecture design, task allocation, collaborative control, self-organizing networks, and other technologies, they form the basis for the research and development of unmanned clusters and have become a key research direction (Gyagenda et al., 2022; Wu et al., 2022). The existing cluster navigation and positioning methods mainly face the following problems:

1. The need for assistance from an external positioning system, resulting in poor flexibility and susceptibility to interference; for example, radio positioning based on the Global Navigation Satellite System (GNSS) (Yoo & Ahn, 2003) and road-based transmitters (Shamaei & Kassas, 2019), and visual positioning based on landmarks (Conte & Doherty, 2008).

2. The constructed relative coordinate system has no physical meaning and cannot be used for cluster control and mission planning; for example, positioning algorithms based on principal components analysis, MDS (Strohmeier et al., 2018), Fast Clustering MDS (Fan et al., 2020), vMDS (Kumar et al., 2016), etc.

3. Multi-source fusion algorithms can achieve high positioning accuracy but require many types of sensors, leading to complex and costly systems that are difficult to apply to large-scale clusters; for example, positioning systems based on binocular vision cameras/Inertial Measurement Units (IMU)/Ultra-Wide Band (UWB)/Lidar often need to be equipped with high-performance computing boards (Chen & Gao, 2019).

4. Accumulated errors during long-term operation cause divergence of the positioning results. This mostly occurs in positioning algorithms based on inertial navigation without external information assistance, such as the EKF positioning algorithm based on IMU and relative distance or angle measurements (Masinjila, 2016).

5. As the number of cluster nodes increases, the resource occupancy and computational complexity of the positioning algorithm grow exponentially. This results in poor scalability, so such algorithms cannot adapt to large-scale clusters; examples include partially centralized algorithms (Panzieri et al., 2006).

Compared with traditional wireless sensor networks, unmanned cluster networks are more complex (Doriya et al., 2015), mainly in the following respects:

1. Cluster nodes are often in relative motion, resulting in dynamic changes of the cluster geometry, and the relative position configuration between nodes affects the overall positioning accuracy of the cluster.

2. There is a conflict between the growing number of cluster nodes and limited radio performance. As the number of nodes increases, the distances between nodes become larger while the communication range is limited, which makes global direct connectivity difficult to achieve in unmanned cluster networks; the network structure is characterized by regionalization and multi-hop connectivity.
3. As nodes move, the topology and communication connections of the cluster network change, and it is difficult to ensure that all nodes can maintain stable connections.

Accordingly, the design of a cluster positioning scheme should observe the following points. First, relative positioning is more critical than absolute positioning for cluster control and maintenance; at the same time, the north and east directions of the relative coordinate system should be consistent with the geodetic coordinate system, as in the North East Down (NED) coordinate system. Second, considering the robustness of cluster positioning and flexible deployment, the relative positioning algorithm should use observations from inside the cluster and avoid external signal sources. Third, considering economy, adapting to large-scale cluster applications requires reducing the number and types of sensors on a single node as much as possible, as well as the computing power required for the positioning solution.

Based on the above analysis, we construct a north-aligned relative position relationship to support formation control and achieve robust cluster relative positioning without the aid of information from outside the cluster. This paper proposes a hierarchical unmanned cluster positioning architecture based on the bionic concept of the pigeon flock and adopts the design of a virtual general leader + core clusters + follower clusters to realize a flexible and scalable unmanned cluster. At the same time, considering the robustness of positioning and limited hardware resources, this paper proposes a relative positioning algorithm for unmanned clusters that relies only on internal ranging and inertial navigation data. The algorithm uses spatial–temporal correlation information to solve the localization problem and reduces the dimension of the solution space through the MDS method; the MOPSO algorithm is then used to optimize the reduced objective functions and obtain the localization result. The algorithm constructs the relative coordinate relationship in the NED coordinate system, which is convenient for cluster control. In addition, this paper uses hybrid graph theory to analyze the localizability of the proposed algorithm and obtains the node states that satisfy the localization conditions in each scenario, which can provide theoretical support for formation control. Finally, the paper selects some typical scenarios for simulation tests. The simulation results show that the cluster relative positioning algorithm proposed in this paper has good positioning accuracy and suppresses error divergence, and they verify the conclusions of the localizability analysis.

Cluster positioning architecture and model

Many creatures in nature exhibit clustering, such as fish, birds, wolves, and insects. Individuals in a cluster exhibit certain group behaviors through mutual communication and cooperation, such as cooperative hunting by wolves, formation flying of birds, and nectar collection by bees. These biological swarm behaviors, formed through long-term evolution, provide inspiration and reference for designing an unmanned cluster positioning architecture. Individuals in biological groups often follow simple behavioral rules and have only limited abilities to sense, plan, and communicate. Individuals adjust their behaviors by interacting with neighboring peers and obtaining information about the surrounding environment, finally achieving a unified cluster behavior.
The cluster system has strong robustness: the failure of some individuals will not have fatal effects on the whole system (Garnier et al., 2007). A pigeon flock is a collection of a large number of autonomous individuals. Through interactions between individuals, the entire flock presents complex macroscopic emergent behavior. The positioning and navigation of a pigeon flock show three characteristics (Luo & Duan, 2017):

1. Limited by the vision and communication distance of the pigeons, individuals in the flock follow a topological-distance interaction rule; that is, each pigeon only interacts with a certain number of individuals around it, which makes the information interaction of the whole flock simple and efficient.

2. The flock presents a hierarchical network in flight: high-level individuals play a leading role, and the behavior of low-level individuals is affected by high-level individuals. At the same time, the formation rule of the flock is not a common single-leader system; lower-level pigeons only need to communicate in real time with the nearest higher-level pigeons and follow them. This network structure enables the group to respond quickly when dealing with external stimuli or avoiding obstacles.

3. Among the high-level pigeons, a general leader controls the overall flight track of the cluster; that is, the absolute positioning and navigation of the cluster are often completed by the general leader pigeon. Pigeons also use different navigation methods at different stages of flight: in the early stage, the geomagnetic field is used to determine the general direction, and in the later stage the direction is corrected by landmarks; the altitude of the sun also affects navigation.

Cluster hierarchical positioning architecture based on pigeon flock autonomous behavior

Inspired by the hierarchical behavior and navigation methods of pigeon flocks, we propose a bionic unmanned cluster localization architecture based on the autonomous behavior of pigeon flocks. The cluster uses a hierarchical architecture, and the nodes are divided into follower clusters, core clusters, and the general leader. The meanings and interrelationships of each level are shown in Fig. 1. The spatial relationship between the core cluster and the follower cluster is not a fixed front-to-back formation, and the follower cluster can be distributed around the core cluster; the arrangement in the figure is only to facilitate the description of the overall cluster hierarchy. The members of different clusters are marked with different colors, and the connections between nodes represent interactions.

A core cluster is like a high-level pigeon in a flock, except that its responsibilities are carried out by a cluster of nodes rather than a single node; the reason is related to its cluster positioning algorithm, which is explained in the following sections. The unmanned nodes of the core cluster establish and maintain a stable relative position relationship through mutual measurement and information exchange within the cluster. In the core cluster, all nodes can establish stable pairwise communication and exchange information. There can be multiple core clusters in a cluster, and connections can also be established between core clusters. The nodes in the follower cluster are like the low-level pigeons.
The nodes in each follower cluster need to be registered with a core cluster (usually the closest one), and all the follower nodes of a core cluster form a follower cluster. In a follower cluster, a node does not need to interact with other follower nodes to locate itself; it uses the core cluster to determine its position, and its track control and task assignment are also undertaken by the core cluster. Nodes in the core cluster and the follower cluster can be converted into each other.

The general leader node is equivalent to the general leader pigeon and sits at the highest level of the entire cluster. Because the core clusters are at the same level, a higher-level role is needed to coordinate all core clusters, and this role also assumes responsibility for the absolute positioning of the entire cluster. However, considering the anti-interference capability and survivability of the cluster, it is defined as a virtual role in the architecture of this paper. The reason a physical node is not directly designated as the general leader is that an attack on, or interference with, a physical general leader node would cause confusion and failure in the entire cluster. This paper does not discuss the implementation of the leader node in detail.

The pigeon flock positioning architecture proposed in this paper adopts heterogeneous layered networking, which not only has the robustness brought by decentralization but also obtains global information more easily than a fully distributed architecture. Compared with the traditional pigeon flock architecture, the architecture proposed in this paper has the following advantages:

1. The traditional pigeon flock architecture designates one node as the general leader. The general leader node is in an absolute leadership position, contributes the most weight to the decision-making of the cluster, and has the most followers; once it fails or is attacked, the entire cluster is greatly affected. In this paper, the functions of the general leader node, such as absolute positioning and navigation and formation configuration control, are shared by the core clusters to virtualize the general leader. The advanced node network composed of core clusters functions as the general leader, which significantly improves the anti-damage ability of the entire cluster organization and avoids the overall collapse caused by the failure of a traditional physical general leader node.

2. The functions of high-level nodes in the traditional pigeon flock architecture are realized by a small cluster of UAVs rather than a single physical node. This cluster is the core cluster mentioned above, which brings two benefits: (1) multiple nodes bring redundancy, so even if some nodes fail, the remaining nodes can still ensure the functional integrity of the core cluster; (2) in traditional methods, a single high-level node needs to carry high-precision inertial sensors to ensure high dead-reckoning accuracy and thereby improve the positioning accuracy of its follower nodes, whereas a core cluster composed of multiple nodes can build a stable relative position relationship through cooperative positioning without expensive and cumbersome high-precision inertial navigation devices. In this regard, we propose a cluster relative positioning algorithm based on spatial–temporal correlation information that is suitable for core clusters; see Sect. Core cluster localization algorithm for details.
3. The architecture design balances positioning accuracy, communication load, and system complexity. The core cluster, composed of a small number of nodes, adopts a fully connected networking mode to obtain global information within the cluster, making it easier to reach the globally optimal solution during positioning. We limit the number of core cluster members to avoid the network communication pressure caused by the fully connected mode. After the core cluster constructs a local relative positioning architecture, its nodes can act as anchor nodes, broadcasting their position information and ranging signals to surrounding nodes; the surrounding nodes calculate their own positions accordingly and become follower nodes of the core cluster. If one-way broadcast is used for positioning between the core and follower clusters, there is theoretically no upper limit on the number of follower nodes and no channel congestion as nodes are added, so the system capacity can accommodate large-scale clusters.

4. Multiple parallel core clusters are used to expand the coverage of the cluster and improve flexibility. A single core cluster can only cover a limited range; multiple clusters can cover more area while keeping the membership of each core cluster small, relieving the communication pressure brought by the fully connected networking mode. At the same time, more flexible formation configurations can be used among multiple clusters to adapt to different task requirements, whereas a single cluster is limited by the coverage of its communication equipment and tends to form a huge circle as the number of nodes increases. Multiple core clusters compute independently and in a distributed manner, which reduces the complexity of the overall positioning solution, and they can also serve more follower nodes, facilitating cluster expansion. By merging coordinate systems, the core clusters can form a unified relative coordinate system, as explained in Sect. Core cluster localization algorithm (5).

Unmanned cluster observation model

Unmanned vehicles, ships, and similar platforms are often at the same height, so system modeling is usually analyzed in two-dimensional space, while unmanned aerial vehicles, underwater detectors, etc., have a higher degree of freedom in movement and need to be modeled and analyzed in three-dimensional space. Since the height information of nodes can often be obtained through laser altimetry, air pressure/water pressure gauges, etc., clusters in three-dimensional space can be transformed into two-dimensional ones by eliminating the height information (Martinelli et al., 2005). To simplify the analysis, this paper studies the cluster localization algorithm in two-dimensional space.

We assume that each node constructs a NED coordinate system with itself as the origin; north alignment can be achieved using a strap-down compass. Each node is equipped with inertial sensors capable of measuring angular velocity and acceleration and an onboard communication module, which can measure the distance to other nodes in addition to two-way information exchange with surrounding nodes. Let \(d_{ij}(t)\) denote the distance between nodes \(i\) and \(j\) at time \(t\):
$${d}_{ij}\left(t\right)={d}_{ji}\left(t\right)=\left|{P}_{i}\left(t\right)-{P}_{j}\left(t\right)\right|=\left|({x}_{i}(t)-{x}_{j}(t),{y}_{i}(t)-{y}_{j}(t))\right|$$

where \({P}_{i}(t) = ({x}_{i}(t),{y}_{i}(t))\) represents the coordinates of node \(i\) at time \(t\). A NED coordinate system is established with the position at time \(t-1\) as the origin, and \({ins}_{N}(t)\) represents the relative motion vector of node \(N\) in this coordinate system from time \(t-1\) to time \(t\), that is, the relative displacement vector obtained by inertial dead reckoning between adjacent positioning epochs:

$${ins}_{N}\left(t\right)={\int }_{t-1}^{t}{{\varvec{v}}}_{N}\,\mathrm{d}t$$

where \({{\varvec{v}}}_{N}\) represents the velocity vector of node \(N\). The observation model of the unmanned cluster is shown in Fig. 2 (observational model of a three-member unmanned cluster). The notations used in this section and their meanings are summarized in Table 1.

This paper does not specifically define the network protocol of the cluster system; it only makes a general assumption, namely that nodes complete time synchronization with the cluster when accessing the network. Ranging tasks belonging to the same epoch are uniformly allocated to adjacent time slots so that the required ranging information is collected as quickly as possible, reducing the impact of imperfect synchronization on ranging.
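To make the observation model concrete, the Python sketch below simulates the two kinds of measurements the algorithm consumes for a three-node cluster: noisy pairwise ranges \(d_{ij}(t)\) and noisy INS displacement vectors \(ins_{N}(t)\) expressed in a common north-aligned frame. All positions, noise levels, and names are illustrative assumptions, not values from the paper; the variables d_now, d_prev, and ins are reused in the sketches later in this section.

```python
# A minimal sketch of the observation model, assuming Gaussian ranging
# and dead-reckoning noise; all names and noise levels are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def range_obs(p_i, p_j, sigma_d=0.05):
    """Noisy pairwise distance d_ij(t) between two node positions."""
    return np.linalg.norm(p_i - p_j) + rng.normal(0.0, sigma_d)

def ins_obs(p_prev, p_now, sigma_ins=0.02):
    """Noisy INS displacement ins_N(t): the motion vector from t-1 to t,
    expressed in the node's north-aligned (NED) frame."""
    return (p_now - p_prev) + rng.normal(0.0, sigma_ins, size=2)

# Three-node example: positions at t-1 and t in a common NED frame (nodes A, B, C = 0, 1, 2).
P_prev = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
P_now  = P_prev + np.array([[1.0, 0.5], [0.8, 0.7], [1.2, 0.4]])

d_now  = {(i, j): range_obs(P_now[i],  P_now[j])  for i in range(3) for j in range(i + 1, 3)}
d_prev = {(i, j): range_obs(P_prev[i], P_prev[j]) for i in range(3) for j in range(i + 1, 3)}
ins    = [ins_obs(P_prev[k], P_now[k]) for k in range(3)]
```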
Cluster relative positioning algorithm

From the positioning architecture proposed in Sect. Cluster positioning architecture and model, cluster positioning can be divided into relative positioning and absolute positioning. Absolute positioning is built on relative positioning and is mainly completed by the general leader node. The relative positioning of the core cluster is completed autonomously; once complete, the core cluster acts as a set of anchor nodes in the positioning of the follower cluster. Therefore, the key to cluster positioning is the relative positioning algorithm of the core cluster. In this regard, this paper proposes a positioning algorithm that uses spatial–temporal correlation information and solves it with the MDS + MOPSO algorithm.

Core cluster localization algorithm

Taking a core cluster composed of 3 nodes as an example, the process, constraint relationships, objective functions, and number of unknowns of the relative positioning algorithm are summarized as follows. As shown in Fig. 3 (flow chart of the core cluster relative positioning algorithm), based on the cluster observation model we first build the core cluster positioning equations, which consist of six objective functions. To reduce the difficulty of the solution, we use the MDS algorithm to lower the dimension: the number of unknown parameters drops from 6 (number of nodes × 2) to 1 (the rotation angle), shrinking the search space, and the number of objective functions drops from 6 to 3. We use the reduced objective functions as the fitness functions of the MOPSO algorithm to solve for the rotation angle, from which the rotation matrix can be constructed. Finally, the relative positioning results are obtained by coordinate transformation using the rotation matrix. Next, we introduce the localization algorithm in detail.

Constructing the cluster positioning equation

Based on the cluster observation model, we can give a set of equations for calculating the positions of a core cluster consisting of three nodes. By combining the distance observations of the current and previous epochs, the equations are obtained as follows:

$$
\begin{gathered}
d_{AB}\left(t\right)=\left|P_{A}\left(t\right)-P_{B}\left(t\right)\right| \\
d_{AC}\left(t\right)=\left|P_{A}\left(t\right)-P_{C}\left(t\right)\right| \\
d_{BC}\left(t\right)=\left|P_{B}\left(t\right)-P_{C}\left(t\right)\right| \\
d_{AB}\left(t-1\right)=\left|P_{A}\left(t-1\right)-P_{B}\left(t-1\right)\right| \\
d_{AC}\left(t-1\right)=\left|P_{A}\left(t-1\right)-P_{C}\left(t-1\right)\right| \\
d_{BC}\left(t-1\right)=\left|P_{B}\left(t-1\right)-P_{C}\left(t-1\right)\right|
\end{gathered}
$$

\(P_{N}\left(t-1\right)\) can be calculated from \(P_{N}\left(t\right)\) and \(ins_{N}(t)\), so the equations can be transformed into:

$$
\begin{gathered}
d_{AB}\left(t\right)=\left|P_{A}\left(t\right)-P_{B}\left(t\right)\right| \\
d_{AC}\left(t\right)=\left|P_{A}\left(t\right)-P_{C}\left(t\right)\right| \\
d_{BC}\left(t\right)=\left|P_{B}\left(t\right)-P_{C}\left(t\right)\right| \\
d_{AB}\left(t-1\right)=\left|\left(P_{A}\left(t\right)-ins_{A}\left(t\right)\right)-\left(P_{B}\left(t\right)-ins_{B}\left(t\right)\right)\right| \\
d_{AC}\left(t-1\right)=\left|\left(P_{A}\left(t\right)-ins_{A}\left(t\right)\right)-\left(P_{C}\left(t\right)-ins_{C}\left(t\right)\right)\right| \\
d_{BC}\left(t-1\right)=\left|\left(P_{B}\left(t\right)-ins_{B}\left(t\right)\right)-\left(P_{C}\left(t\right)-ins_{C}\left(t\right)\right)\right|
\end{gathered}
$$

In the equations constructed from motion vectors and ranging information, the unknown variables are \(P_{A}\left(t\right),P_{B}\left(t\right),P_{C}(t)\). Because the analysis is performed in two-dimensional space, there are 6 unknowns, and it is not easy to converge to the global optimal solution in such a high-dimensional solution space. We therefore introduce the multidimensional scaling method for dimensionality reduction, which makes it easier for the positioning result to converge to the optimal solution.

Dimensionality reduction by MDS

The essence of MDS is to map the similarity measures of several analysis objects from a high-dimensional space of unknown dimension to a lower-dimensional space, fitting the similarity between them in the lower-dimensional space (Niu et al., 2010; Yi & Ruml, 2004). For unmanned cluster positioning, this means mapping the Euclidean distances between nodes from the distance-measurement space to the two-dimensional coordinate space to obtain the relative coordinates of each node (Chen et al., 2013). First, the distance matrix \({\varvec{D}}\) between nodes is constructed from the ranging information at time \(t\), where \(d_{AB}(t)\) is abbreviated as \(d_{AB}\):
$${\varvec{D}}= \left[\begin{array}{ccc}0& {d}_{AB}& {d}_{AC}\\ {d}_{BA}& 0& {d}_{BC}\\ {d}_{CA}& {d}_{CB}& 0\end{array}\right]$$

$${{\varvec{D}}}^{2}= \left[\begin{array}{ccc}0& {d}_{AB}^{2}& {d}_{AC}^{2}\\ {d}_{BA}^{2}& 0& {d}_{BC}^{2}\\ {d}_{CA}^{2}& {d}_{CB}^{2}& 0\end{array}\right]$$

Let the coordinates of node \(N\) be \({P}_{N}(t)=({x}_{N}\left(t\right),{y}_{N}(t))\), abbreviated as \({P}_{N}=({x}_{N},{y}_{N})\); then \({d}_{AB}^{2}={x}_{A}^{2}+{y}_{A}^{2}+{x}_{B}^{2}+{y}_{B}^{2}-2{x}_{A}{x}_{B}-2{y}_{A}{y}_{B}\). Let \({I}_{N}^{2}= {x}_{N}^{2}+{y}_{N}^{2}\); the matrix \({\varvec{R}}\) can then be constructed as

$${\varvec{R}}= \left[\begin{array}{ccc}{I}_{A}^{2}& {I}_{A}^{2}& {I}_{A}^{2}\\ {I}_{B}^{2}& {I}_{B}^{2}& {I}_{B}^{2}\\ {I}_{C}^{2}& {I}_{C}^{2}& {I}_{C}^{2}\end{array}\right]$$

and the coordinate matrix \({\varvec{X}}\) of the nodes as

$${\varvec{X}}= \left[\begin{array}{ccc}{x}_{A}& {x}_{B}& {x}_{C}\\ {y}_{A}& {y}_{B}& {y}_{C}\end{array}\right]$$

so that \({{\varvec{D}}}^{2}={\varvec{R}}+{{\varvec{R}}}^{T}-2{{\varvec{X}}}^{T}{\varvec{X}}\). Double centering of \({{\varvec{D}}}^{2}\) yields a positive semidefinite symmetric matrix \({\varvec{B}}\) (Borg & Groenen, 2005):

$${\varvec{B}}= -\frac{1}{2}{\varvec{J}}{{\varvec{D}}}^{2}{\varvec{J}}$$

$${\varvec{J}}={\varvec{E}}-{n}^{-1}{\varvec{I}}= \left[\begin{array}{ccc}1-\frac{1}{n}& -\frac{1}{n}& -\frac{1}{n}\\ -\frac{1}{n}& 1-\frac{1}{n}& -\frac{1}{n}\\ -\frac{1}{n}& -\frac{1}{n}& 1-\frac{1}{n}\end{array}\right]$$

where \(n=3\). Substituting,

$${\varvec{B}}=-\frac{1}{2}{\varvec{J}}({\varvec{R}}+{{\varvec{R}}}^{T}-2{{\varvec{X}}}^{T}{\varvec{X}}){\varvec{J}}$$

Since \({\varvec{R}}{\varvec{J}}=0\) and \({\varvec{J}}{{\varvec{R}}}^{T}=0\),

$${\varvec{B}}={\varvec{J}}{{\varvec{X}}}^{T}{\varvec{X}}{\varvec{J}}={\varvec{V}}{\varvec{U}}{{\varvec{V}}}^{T}={\varvec{V}}\sqrt{{\varvec{U}}}{\left({\varvec{V}}\sqrt{{\varvec{U}}}\right)}^{T}$$

Performing an eigenvalue decomposition of \({\varvec{B}}\) and, because the analysis is in two-dimensional space, retaining the two largest eigenvalues \({\lambda }_{1},{\lambda }_{2}\) and the corresponding eigenvectors \({q}_{1},{q}_{2}\), the 2D coordinates of the nodes can be calculated. The relative coordinates of the nodes after centering are \({\varvec{J}}{{\varvec{X}}}^{T}={\varvec{V}}\sqrt{{\varvec{U}}}\), where \({\varvec{U}}=\mathrm{diag}({\lambda }_{1},{\lambda }_{2})\) and \({\varvec{V}}=[{q}_{1},{q}_{2}]\).

The coordinates obtained by MDS only represent the distance relationships between nodes; the resulting coordinate frame changes with every positioning calculation, has no actual physical meaning, and differs from the NED coordinate system by a rotation angle. Therefore, the original problem of solving 6 unknowns is transformed into solving one rotation angle, and the dimension of the solution space is significantly reduced. Denote the coordinates obtained by MDS as \({P}_{N}{^{\prime}}(t)=({x}_{N}{^{\prime}}\left(t\right),{y}_{N}{^{\prime}}(t))\), abbreviated as \({P}_{N}{^{\prime}}=({x}_{N}{^{\prime}},{y}_{N}{^{\prime}})\); the relationship with the target \({P}_{N}\left(t\right)\) can be expressed as

$$\left[\begin{array}{c}{x}_{N}\\ {y}_{N}\end{array}\right]=\left[\begin{array}{cc}\mathrm{cos}\,\alpha & \mathrm{sin}\,\alpha \\ -\mathrm{sin}\,\alpha & \mathrm{cos}\,\alpha \end{array}\right]\left[\begin{array}{c}{x}_{N}{^{\prime}}\\ {y}_{N}{^{\prime}}\end{array}\right]$$

where \(\alpha \) is the rotation angle to be solved.
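The double-centering derivation above maps directly onto a few lines of linear algebra. The sketch below is a minimal classical-MDS step under the same assumptions, building the complete symmetric distance matrix from the d_now dictionary of the earlier sketch; the function name is hypothetical.

```python
# A minimal sketch of the classical-MDS step described above; assumes a
# complete, symmetric 3x3 distance matrix built from d_now (earlier sketch).
import numpy as np

def classical_mds_2d(D):
    """Recover 2D relative coordinates (up to rotation/reflection)
    from a pairwise distance matrix via double centering."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # B = J X^T X J
    w, V = np.linalg.eigh(B)                     # eigenvalues, ascending
    idx = np.argsort(w)[::-1][:2]                # keep the two largest
    L = np.sqrt(np.maximum(w[idx], 0.0))
    return V[:, idx] * L                         # rows: centered node coordinates

D = np.zeros((3, 3))
for (i, j), d in d_now.items():                  # d_now from the earlier sketch
    D[i, j] = D[j, i] = d
P_mds = classical_mds_2d(D)                      # relative coordinates up to rotation
```

One caveat worth noting: an eigendecomposition can also return a mirrored configuration. The paper absorbs the remaining frame ambiguity into the rotation angle \(\alpha \), but a pure rotation cannot undo a reflection, so in practice the mirrored configuration (negating one coordinate axis of the MDS output) may also need to be tested.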
The coordinates obtained by MDS are denoted \(P'_{N}(t) = (x'_{N}(t), y'_{N}(t))\), abbreviated as \(P'_{N} = (x'_{N}, y'_{N})\), and the relationship with the target \(P_{N}(t)\) can be expressed as:

$$\left[\begin{array}{c}{x}_{N}\\ {y}_{N}\end{array}\right]=\left[\begin{array}{cc}\cos\alpha & \sin\alpha \\ -\sin\alpha & \cos\alpha \end{array}\right]\left[\begin{array}{c}{x'}_{N}\\ {y'}_{N}\end{array}\right]$$

where \(\alpha\) is the rotation angle to be solved, and the objective function is rewritten as:

$$\begin{gathered} f_{1} = d_{AB}(t-1) = \left| \left( R_{\alpha} P'_{A}(t) - ins_{A}(t) \right) - \left( R_{\alpha} P'_{B}(t) - ins_{B}(t) \right) \right| \\ f_{2} = d_{AC}(t-1) = \left| \left( R_{\alpha} P'_{A}(t) - ins_{A}(t) \right) - \left( R_{\alpha} P'_{C}(t) - ins_{C}(t) \right) \right| \\ f_{3} = d_{BC}(t-1) = \left| \left( R_{\alpha} P'_{B}(t) - ins_{B}(t) \right) - \left( R_{\alpha} P'_{C}(t) - ins_{C}(t) \right) \right| \end{gathered}$$

There is only one unknown variable, the rotation angle \(\alpha\), in the objective function.

Solution using MOPSO
In order to solve this multi-objective function composed of nonlinear equations, we introduce the MOPSO algorithm. Particle swarm optimization is an evolutionary algorithm originally inspired by the regularity of bird flocking, which uses swarm intelligence in a simplified model (Shi & Eberhart, 1998). It makes the movement of the whole group in the problem-solving space evolve from disorder to order. The advantages of Particle Swarm Optimization (PSO) are that it does not easily become trapped in local optima, that it is highly versatile, and that it can solve complex optimization problems (Marini & Walczak, 2015). The algorithm randomly distributes a number of particles in the feasible region of the problem space, and each particle flies at a certain velocity. During flight, each particle adjusts its own state by combining its current best position with the best position of the population, and then flies towards better regions, finally achieving the goal of searching for the optimal solution (Wei & Li, 2004). In single-objective PSO, since there is only one objective function, the global best position (\(g_{best}\)) and the individual best position (\(p_{best}\)) can be uniquely determined simply by comparing fitness values. In MOPSO (Reyes-Sierra & Coello, 2006), when selecting \(p_{best}\), if every objective value at a particle's current position is optimal, that position becomes the new \(p_{best}\); if neither position strictly dominates the other, one of them is selected at random. Regarding the selection of \(g_{best}\), there are many non-inferior solutions that could serve as the global optimum, so storing these non-inferior solutions and selecting good ones among them is the core problem of multi-objective particle swarm optimization. Coello and Lechuga (2002) proposed a method in which the objective space is divided into hypercubes and each hypercube is assigned a fitness value depending on the number of particles it contains: the more particles, the lower the fitness value. Roulette-wheel selection is then applied to the hypercubes to select one, and \(g_{best}\) is chosen at random from the selected hypercube. Meanwhile, MOPSO maintains the diversity of the population through an external repository (called the Archive), which stores the non-dominated solution set at each iteration (Mostaghim & Teich, 2003). The algorithm flow is as follows:
1) Initialize the particle swarm: set the population size \(N\) and the coefficient parameters, and randomly generate the position \(X_{i}\) and velocity \(V_{i}\) of each particle.
2) Divide the objective space into a grid and calculate the crowding degree according to the number of particles in each cell.
3) Calculate the objective function values of each particle and update its individual best position \(p_{best}\).
4) Compute the non-dominated solutions of the population and update the Archive set; if the number of non-dominated solutions exceeds the size of the external repository, solutions are deleted at random according to the degree of crowding.
5) Update the global best particle \(g_{best}\).
6) Update the velocity and position of each particle according to the following update equations (Fallah-Mehdipour et al., 2010):

$$\upsilon (t+1)=\omega \upsilon (t)+{c}_{1}{r}_{1}(p(t)-x(t))+{c}_{2}{r}_{2}(g(t)-x(t))$$
$$x(t+1)=x(t)+\upsilon (t+1)$$

where \(\omega\) is the inertia weight; \(c_{1}\) and \(c_{2}\) are the individual and social experience coefficients, respectively; \(r_{1}\) and \(r_{2}\) are random numbers in the range [0, 1]; and \(p(t)\) and \(g(t)\) are the individual and global optimal solutions, respectively.
7) Terminate the program if the termination condition is satisfied; otherwise go to step 3.

The objective functions obtained after MDS dimensionality reduction, that is, the equation set derived above, serve as the fitness functions of MOPSO, where the rotation angle \(\alpha\) is the only unknown to be solved.

Relative coordinates calculation
Finally, the target \(P_{N}(t)\) is obtained by applying the rotation matrix \(R_{\alpha}\) to the MDS coordinates \(P'_{N}(t)\) using Eq. (13). So far, we have constructed the relative position relationships within the core cluster. The relative coordinate system is the NED coordinate system, with its origin at the centroid of the core cluster nodes.
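The particle update loop for the rotation-angle search can be sketched as follows. This is a deliberately simplified, single-objective variant that aggregates the three residuals \(f_1\)-\(f_3\) into one scalar fitness, whereas the paper keeps them as separate objectives and applies full MOPSO with an Archive; all numerical inputs are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def rot(a):
    # Rotation matrix of Eq. (13): [x; y] = [[cos a, sin a], [-sin a, cos a]] [x'; y']
    return np.array([[np.cos(a), np.sin(a)], [-np.sin(a), np.cos(a)]])

def fitness(alpha, P_mds, ins, d_prev):
    """Sum of squared residuals of f1..f3 (aggregated here for brevity)."""
    P_prev = {n: rot(alpha) @ P_mds[n] - ins[n] for n in P_mds}
    pairs = [("A", "B"), ("A", "C"), ("B", "C")]
    return sum((np.linalg.norm(P_prev[a] - P_prev[b]) - d_prev[a, b]) ** 2
               for a, b in pairs)

def pso(fit, n=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(-np.pi, np.pi, n)            # particle positions (angles)
    v = np.zeros(n)
    p, pf = x.copy(), np.array([fit(a) for a in x])
    g = p[np.argmin(pf)]                         # global best
    for _ in range(iters):
        r1, r2 = rng.random(n), rng.random(n)
        v = w * v + c1 * r1 * (p - x) + c2 * r2 * (g - x)
        x = x + v
        f = np.array([fit(a) for a in x])
        better = f < pf
        p[better], pf[better] = x[better], f[better]
        g = p[np.argmin(pf)]
    return g

# Illustrative inputs: MDS coordinates, motion vectors, previous-epoch ranges.
P_mds = {"A": np.array([-2.73, -0.95]), "B": np.array([0.73, 2.61]), "C": np.array([2.0, -1.66])}
ins = {"A": np.array([0.5, 0.2]), "B": np.array([-0.1, 0.4]), "C": np.array([0.3, -0.3])}
d_prev = {("A", "B"): 4.47, ("A", "C"): 5.75, ("B", "C"): 4.54}
alpha = pso(lambda a: fitness(a, P_mds, ins, d_prev))
print(alpha)
```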
Summary and discussion
Since the MDS algorithm requires complete ranging information between every pair of nodes to build the distance matrix, this condition is difficult to guarantee when many nodes are involved. In two-dimensional space, the minimum number of nodes in the system is 3, and the number of inter-node ranging links to be maintained is \(C_n^2 = n(n-1)/2\), where \(n\) is the number of nodes. When the failure of an individual node in the cluster results in the loss or unreliability of ranging information, the common remedies are: 1) discard the observations related to this node; 2) use an algorithm to estimate and recover the lost or damaged measurements, for example matrix completion based on norm regularization (Xiao et al., 2015) or Multidimensional Scaling Map (MDSMAP) (Shang et al., 2003); 3) increase the weight of reliable nodes in the positioning, such as the node reordering and edge reordering algorithms (Hamaoui, 2019). Although remedies 2 and 3 can provide some compensation, they also introduce errors to some extent. Since the positioning result of the core cluster affects the following cluster, any error in the core cluster should be avoided as far as possible. Considering the complexity and reliability of an engineering implementation, we suggest limiting the number of nodes in the core cluster to about 3-5. More than three nodes provides some redundancy for core cluster positioning: when a node does not meet the conditions, it is discarded together with its observations, and the remaining nodes still satisfy the minimum requirement. Another issue worth discussing is the merging of coordinate systems between multiple core clusters, which can be performed through common nodes. Traditional methods (Moore et al., 2004) need two to three common points; because the coordinate system constructed by each core cluster in this paper is the NED coordinate system, rotation and mirroring need not be considered during merging, so only one common point is needed to calculate the required translation.

Follower cluster localization algorithm
Considering that the number of nodes in the follower cluster is significantly larger than in the core cluster, a localization algorithm that is insensitive to the number of nodes should be used. For example, exploiting the robust relative coordinate relationships of the core cluster, the core cluster nodes can serve as anchor nodes that broadcast ranging signals and their own position information. After receiving the position information of, and relative distances to, multiple core cluster nodes, a follower cluster node uses trilateration to calculate its own location. This algorithm is simple, scalable, and unaffected by the number of nodes to be located. Other methods can also be used to locate the nodes in the follower cluster. Given the relative coordinates of the core cluster, positioning the follower cluster is comparatively easy; it is not the focus of this paper, so only a brief discussion is given.
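A least-squares trilateration sketch for a follower node is shown below, assuming three core-cluster anchors with known relative coordinates; the anchor layout and ranges are illustrative, not taken from the paper.

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Least-squares trilateration: subtracting the first circle equation
    from the others gives a linear system A x = b for the 2D position."""
    anchors, ranges = np.asarray(anchors, float), np.asarray(ranges, float)
    x0, r0 = anchors[0], ranges[0]
    A = 2 * (anchors[1:] - x0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(x0**2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Hypothetical core-cluster anchors and measured ranges to a follower node.
anchors = [(0.0, 0.0), (3.0, 4.0), (6.0, 1.0)]
true_pos = np.array([2.0, 2.0])
ranges = [np.linalg.norm(true_pos - np.array(a)) for a in anchors]
print(trilaterate(anchors, ranges))   # ~ [2. 2.]
```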
Localizability analysis
Localizability is also called observability. Satisfying the localizability condition is the premise for reliable localization of the cluster: when the cluster state is fully observable, the localization problem has a unique and reliable solution (Arrichiello et al., 2011). Since the formation configuration, motion state and observation constraints of the cluster all affect the positioning results, it is necessary to clarify the conditions for effective cluster positioning and to design a reasonable swarm distribution configuration and motion control strategy that avoid unlocalizable states. Furthermore, the cluster observation information can be optimized through localizability analysis, eliminating redundant observations while preserving the cluster positioning capability (Frew et al., 2005). By abstracting the cluster network into an undirected graph consisting of nodes, direction constraints and distance constraints, the cluster localization algorithm proposed in this paper is analysed using the rigidity theory of graphs. Since observation error does not affect cluster localizability, systematic measurement error is not considered here, to simplify the analysis.

Abstraction as a mixed direction-distance graph
A rigid graph is a special network configuration: intuitively, it maintains the stable structure of the entire graph by constraining some of its edges (Whiteley, 1996). In cluster localization, rigidity theory studies, from the perspective of graph theory, the relationship between the uniqueness of the network node locations and the network structure (Eren et al., 2004). Some scholars have already used graph theory for localizability analysis (Cano & Ny, 2021), but most build abstract graphs for a single type of distance or direction observation (Liu et al., 2021; Yang et al., 2013) without considering constraints between nodes at different times, which limits their applicability to the cluster positioning algorithm proposed in this paper. First, the cluster observation model is abstracted into an undirected graph composed of distance constraints, direction constraints and nodes, as follows. Taking the core cluster of three nodes as an example, owing to the joint analysis of spatiotemporal information, the node set \(\varvec{V}\) consists of the 6 nodes at times \(t\) and \(t-1\): \(\varvec{V}=\{v_{1}, v_{2}, ..., v_{6}\}\). The observation information between nodes is abstracted as the edges of the graph, with the edge set \(\varvec{E}=\{(v_{i}, v_{j}) \mid v_{i}, v_{j} \in \varvec{V}, i \ne j\}\), so the cluster network can be represented by an undirected connected graph \(\varvec{G}=(\varvec{V}, \varvec{E})\). There are two types of observations: the ranging information \(d_{AB}(t)\) and \(d_{AB}(t-1)\) between nodes at times \(t\) and \(t-1\), and the motion vector \(ins_{A}(t)\) of a node from time \(t-1\) to time \(t\). Define the set of distance constraints of the graph as \(\varvec{L}\) and the set of direction constraints as \(\varvec{D}\). The ranging information is abstracted into elements of \(\varvec{L}\). A motion vector contains both the distance and the direction information between the two node instances; therefore, one motion vector observation is abstracted as a distance constraint plus a direction constraint. Set \(\varvec{E}=\{\varvec{L}, \varvec{D}\}\) and use \(|\cdot|\) to denote the number of elements in a set; then \(|\varvec{L}|=9\) and \(|\varvec{D}|=3\). The framework \(\varvec{F}_{p}\) denotes the pair \((\varvec{G}, \varvec{P})\) mapping the abstract graph \(\varvec{G}\) into \(\varvec{R}^{2}\); it corresponds to a realization of the graph, where \(\varvec{P}=(p_{1}, p_{2}, \dots, p_{6})\) is the mapping of the nodes \(\varvec{V}\) into \(\varvec{R}^{2}\), that is, the coordinates of the nodes in two-dimensional space, and the mapping satisfies the constraints in \(\varvec{E}\). From the perspective of graph theory, the cluster localizability problem asks whether a unique realization of the graph of a given network structure can be obtained from the distance and direction information between the nodes. A necessary and sufficient condition for a cluster to be uniquely localizable is that \((\varvec{G}, \varvec{P})\) is globally rigid, so the cluster localizability problem is transformed into a global rigidity determination problem based on distance and direction constraints.

Table 2 Notations used in this section

Rigidity matrix and rigidity analysis
For a direction-length framework \((\varvec{G}, \varvec{P})\), we obtain a dot product for each distance-constraint and direction-constraint equation (Clinch, 2018). Taking derivatives of these dot products at \(t = 0\) yields the Jacobian matrix \(\varvec{R}(\varvec{G}, \varvec{P})\), which is the rigidity matrix of \((\varvec{G}, \varvec{P})\).
The matrix \(\varvec{R}\) has \(2|\varvec{V}|\) columns and \(|\varvec{E}|\) rows: one row for each constraint and one pair of columns for each node, as shown in Fig. 4. Rows labelled \(L_{n}\) correspond to distance constraints, and rows labelled \(D_{n}\) correspond to direction constraints. In the column pairs of the incident nodes (\(V_{i}\) and \(V_{j}\)), the entries of an \(L_{n}\) row are \((p_{i}-p_{j})\) and \((p_{j}-p_{i})\), and the entries of a \(D_{n}\) row are \((p_{i}-p_{j})^{\perp}\) and \((p_{j}-p_{i})^{\perp}\), where \((x, y)^{\perp}=(y, -x)\).

Rigidity matrix \(\varvec{R}(\varvec{G}, \varvec{p})\)

The abstract rules for the linear dependencies between rows or columns of matrices are described by the theory of matroids proposed by Whitney (1992): a subset of the rows of a matrix \(\varvec{M}\) is linearly independent if no non-trivial linear combination of them is zero. Searching for row subsets of \(\varvec{R}\) whose linear combination is zero, and excluding the case in which nodes coincide, we obtain the following three combinations (where \(j, k, m, n\) are unknown coefficients):

$$(1)\; L_{1}+jL_{7}+kL_{8}+mD_{1}+nD_{2}-L_{4}$$
$$\begin{gathered} \Delta x_{AB} + j\Delta x_{AA'} + m\Delta y_{AA'} = 0 \\ \Delta y_{AB} + j\Delta y_{AA'} - m\Delta x_{AA'} = 0 \\ -\Delta x_{A'B'} - j\Delta x_{AA'} - m\Delta y_{AA'} = 0 \\ -\Delta y_{A'B'} - j\Delta y_{AA'} + m\Delta x_{AA'} = 0 \\ -\Delta x_{AB} + k\Delta x_{BB'} + n\Delta y_{BB'} = 0 \\ -\Delta y_{AB} + k\Delta y_{BB'} - n\Delta x_{BB'} = 0 \\ \Delta x_{A'B'} + k\Delta x_{BB'} - n\Delta y_{BB'} = 0 \\ \Delta y_{A'B'} + k\Delta y_{BB'} + n\Delta x_{BB'} = 0 \end{gathered}$$
Linearly combining these equations gives:
$$\begin{gathered} 1+5:\; j\Delta x_{AA'} + m\Delta y_{AA'} + k\Delta x_{BB'} + n\Delta y_{BB'} = 0 \\ 2+6:\; j\Delta y_{AA'} - m\Delta x_{AA'} + k\Delta y_{BB'} - n\Delta x_{BB'} = 0 \\ 1+3:\; \Delta x_{AB} - \Delta x_{A'B'} = 0 \\ 2+4:\; \Delta y_{AB} - \Delta y_{A'B'} = 0 \end{gathered}$$
and further derivation yields:
$$\begin{gathered} \Delta x_{AA'} = \Delta x_{BB'} \\ \Delta y_{AA'} = \Delta y_{BB'} \\ m + n = 0 \\ j + k = 0 \end{gathered}$$

$$(2)\; L_{1}+jL_{7}+kL_{8}-L_{4}$$
$$\begin{gathered} \Delta x_{AB} + j\Delta x_{AA'} = 0 \\ \Delta y_{AB} + j\Delta y_{AA'} = 0 \\ -\Delta x_{A'B'} - j\Delta x_{AA'} = 0 \\ -\Delta y_{A'B'} - j\Delta y_{AA'} = 0 \\ -\Delta x_{AB} + k\Delta x_{BB'} = 0 \\ -\Delta y_{AB} + k\Delta y_{BB'} = 0 \\ \Delta x_{A'B'} - k\Delta x_{BB'} = 0 \\ \Delta y_{A'B'} - k\Delta y_{BB'} = 0 \end{gathered}$$
$$\begin{gathered} 1+5:\; j\Delta x_{AA'} + k\Delta x_{BB'} = 0 \\ 2+6:\; j\Delta y_{AA'} + k\Delta y_{BB'} = 0 \\ 1+3:\; \Delta x_{AB} - \Delta x_{A'B'} = 0 \\ 2+4:\; \Delta y_{AB} - \Delta y_{A'B'} = 0 \end{gathered}$$
$$\begin{gathered} \Delta x_{AA'} = \Delta x_{BB'} \\ \Delta y_{AA'} = \Delta y_{BB'} \\ j + k = 0 \end{gathered}$$

$$(3)\; L_{1}+mD_{1}+nD_{2}-L_{4}$$
$$\begin{gathered} \Delta x_{AB} + m\Delta y_{AA'} = 0 \\ \Delta y_{AB} - m\Delta x_{AA'} = 0 \\ -\Delta x_{A'B'} - m\Delta y_{AA'} = 0 \\ -\Delta y_{A'B'} + m\Delta x_{AA'} = 0 \\ -\Delta x_{AB} + n\Delta y_{BB'} = 0 \\ -\Delta y_{AB} - n\Delta x_{BB'} = 0 \\ \Delta x_{A'B'} - n\Delta y_{BB'} = 0 \\ \Delta y_{A'B'} + n\Delta x_{BB'} = 0 \end{gathered}$$
$$\begin{gathered} 1+5:\; m\Delta y_{AA'} + n\Delta y_{BB'} = 0 \\ 2+6:\; -m\Delta x_{AA'} - n\Delta x_{BB'} = 0 \\ 1+3:\; \Delta x_{AB} - \Delta x_{A'B'} = 0 \\ 2+4:\; \Delta y_{AB} - \Delta y_{A'B'} = 0 \end{gathered}$$
$$\begin{gathered} \Delta x_{AA'} = \Delta x_{BB'} \\ \Delta y_{AA'} = \Delta y_{BB'} \\ m + n = 0 \end{gathered}$$

It follows that when \(ins_{A}(t) = ins_{B}(t)\), the rank of matrix \(\varvec{R}\) is 11; that is, there are 11 linearly independent observations. In the same way, the case \(ins_{A}(t) = ins_{C}(t)\) can be deduced from \((L_{2}+jL_{7}+kL_{9}+mD_{1}+nD_{3}-L_{5})\), \((L_{2}+jL_{7}+kL_{9}-L_{5})\) and \((L_{2}+mD_{1}+nD_{3}-L_{5})\), and the case \(ins_{B}(t) = ins_{C}(t)\) from \((L_{3}+jL_{8}+kL_{9}+mD_{2}+nD_{3}-L_{6})\), \((L_{3}+jL_{8}+kL_{9}-L_{6})\) and \((L_{3}+mD_{2}+nD_{3}-L_{6})\). Since any non-empty framework can be translated in two-dimensional space, the rank of the rigidity matrix \(\varvec{R}\) of a rigid framework is \(2|\varvec{V}|-2\); that is, guaranteeing a unique solution requires \(2|\varvec{V}|-2\) independent constraints, or \(2|\varvec{V}|-2\) equivalent spanning constraints (Jackson & Keevash, 2011). Knowing that \(|\varvec{V}|=6\), ensuring the rigidity of the framework \(\varvec{F}_{p}\) requires the rank of the rigidity matrix \(\varvec{R}\) to be 10. From the above inference, when any pair of the motion vectors \(ins_{A}(t), ins_{B}(t), ins_{C}(t)\) are equal, the rank of \(\varvec{R}\) is 11 and \((\varvec{G}, \varvec{P})\) is redundantly rigid; when all three motion vectors are equal, the rank of \(\varvec{R}\) is 9 and the framework \(\varvec{F}_{p}\) is non-rigid, that is, the localization solution of the cluster cannot be uniquely determined. Through the above analysis, to ensure the localizability of the cluster relative localization algorithm proposed in this paper, situations in which multiple nodes have the same motion vector at the same time should be avoided. When the rank of the rigidity matrix \(\varvec{R}\) is less than \(2|\varvec{V}|-2\), the cluster is not localizable. As a special case, when all nodes remain stationary, the motion vectors are zero and the localizability condition cannot be satisfied either.
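The rank analysis above can be checked numerically. The sketch below builds a direction-length rigidity matrix for the six-node spatiotemporal graph (9 distance rows and 3 direction rows) and compares its rank for distinct versus identical motion vectors; the positions and motion vectors are invented for illustration, and the expected ranks are stated as a check, not as a result from the paper.

```python
import numpy as np

def perp(v):               # (x, y) -> (y, -x), as defined in the text
    return np.array([v[1], -v[0]])

def rigidity_matrix(P, dist_edges, dir_edges):
    """Direction-length rigidity matrix: one row per constraint,
    one pair of columns per node."""
    names = sorted(P)
    col = {n: 2 * i for i, n in enumerate(names)}
    rows = []
    for i, j in dist_edges:
        r = np.zeros(2 * len(names))
        r[col[i]:col[i]+2], r[col[j]:col[j]+2] = P[i] - P[j], P[j] - P[i]
        rows.append(r)
    for i, j in dir_edges:
        r = np.zeros(2 * len(names))
        r[col[i]:col[i]+2], r[col[j]:col[j]+2] = perp(P[i] - P[j]), perp(P[j] - P[i])
        rows.append(r)
    return np.array(rows)

def rank_for(ins):
    P_t = {"A": np.array([0.0, 0.0]), "B": np.array([3.0, 4.0]), "C": np.array([6.0, 1.0])}
    P = dict(P_t)
    P.update({n + "'": P_t[n] - ins[n] for n in P_t})   # epoch t-1 copies
    dist = [("A", "B"), ("A", "C"), ("B", "C"),
            ("A'", "B'"), ("A'", "C'"), ("B'", "C'"),
            ("A", "A'"), ("B", "B'"), ("C", "C'")]      # |L| = 9
    dirs = [("A", "A'"), ("B", "B'"), ("C", "C'")]      # |D| = 3
    return np.linalg.matrix_rank(rigidity_matrix(P, dist, dirs))

distinct = {"A": np.array([1.0, 0.2]), "B": np.array([-0.3, 0.8]), "C": np.array([0.4, -0.5])}
equal = {n: np.array([1.0, 0.2]) for n in "ABC"}
# Rank 2|V| - 2 = 10 indicates a rigid frame; identical motion vectors drop below it.
print(rank_for(distinct), rank_for(equal))
```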
To verify the localizability and positioning accuracy of the proposed cluster positioning algorithm, we designed cluster positioning simulation software and conducted localizability and positioning accuracy tests in different scenarios. In the simulation software, the motion trajectories of the cluster nodes are designed first, and the trajectories are imported into the analysis unit to generate the state information of the nodes at each moment. The observations are then modelled and simulated to obtain gyroscope and accelerometer measurements, magnetic compass measurements and wireless ranging values. The cluster positioning algorithm unit uses the observation data to calculate the relative positioning result. The user interface (UI) of the software makes it easy to display and debug the simulation and helps to evaluate the results. In the localizability simulation, we selected typical cluster motion scenarios for verification, such as parallel formation (constant velocity), parallel formation (non-constant velocity), cross-line formation, circular formation and collinear formation (non-constant velocity). In the positioning accuracy simulation, we chose a three-node circular-motion cluster scene to analyse the accuracy of the core cluster relative positioning algorithm proposed in this paper and compared it with the traditional pure inertial-navigation-based and EKF-based relative positioning algorithms. The architecture of the software simulation platform comprises a user interface, a simulation data processing unit and a positioning solution unit. The user interface includes input panels for parameter setting, file input/output, motion trajectory design and debugging buttons, and output panels for positioning result display and trajectory display. The simulation data processing unit analyses the trajectory data and generates the raw observation data. The relative positioning algorithm unit performs the positioning calculations from the obtained observation and state information and can select between different positioning algorithms. The simulation platform can verify cluster positioning capability under various task scenarios. The overall architecture is shown in Fig. 5.

Cluster positioning simulation platform architecture

The software interface of the simulation platform is shown in Fig. 6.

Cluster positioning simulation platform user interface

Localizability simulation
In this section, we test the localizability of the proposed algorithm in different scenarios, demonstrate its adaptability to multiple scenarios, and confirm the conclusions of the localizability analysis in Sect. Localizability analysis.

Simulation scene
In the scenario design, we selected several typical formation scenarios for verification, including parallel formation (constant velocity), parallel formation (non-constant velocity), cross-line formation, circular formation, collinear formation (non-constant velocity) and random motion formation. The motion trajectories are shown in Fig. 7, where different nodes are marked with different colors and the start and end points of each trajectory are also marked.

Several formation motion scenes

Observation noise is not added in this simulation, to avoid interference with the localizability analysis. The simulation interval is 1 s, and the output frequency of the positioning result is 1 Hz. The number of cluster nodes is three, and they always remain at the same height.
Evaluation indicators
We use the objective function curves and the two-dimensional positioning result plans to judge localizability. By plotting the objective function curves involved in the MOPSO solution, we can judge whether the system of objective equations has a solution. The two-dimensional positioning result plan is compared with the true position plan to verify whether the relative positional relationships between the nodes are consistent. Owing to length limitations, we list only the simulation results at time \(t = 1\) for each scenario; the complete results are available at https://github.com/chgmqh/relative-positioning-simulation-results (Figs. 8, 9, 10, 11, 12, 13).

Parallel formation (constant velocity) scenario, simulation results at time \(t=1\)
Parallel formation (non-constant velocity) scenario, simulation results at time \(t=1\)
Cross-line formation scenario, simulation results at time \(t=1\)
Circular formation scenario, simulation results at time \(t=1\)
Collinear formation (non-constant velocity) scenario, simulation results at time \(t=1\)
Random motion scenario, simulation results at time \(t=1\)

From the simulation results, the two-dimensional position result in the first scenario differs considerably from the true positions, and the objective function curve is disorganized, suggesting that the proposed algorithm cannot localize effectively in this scenario. In the other scenarios, the positioning succeeds according to the objective function curves and the two-dimensional positioning results. The simulation results corroborate the conclusions of Sect. Localizability analysis. Liu (2015) analysed the localizability conditions of the mainstream EKF collaborative localization algorithm based on ranging and IMU. Compared with the EKF-based algorithm, the proposed algorithm has more relaxed localizability conditions and can adapt to more scenarios, as shown in Table 3.

Table 3 Algorithm localizability condition comparison

Positioning accuracy simulation
This experiment comparatively analyses and verifies the performance of the cluster relative positioning algorithm. The information required for positioning is obtained using the motion trajectory simulation and observation generation functions of the simulation platform. In addition to the relative positioning algorithm proposed in this paper, the EKF positioning algorithm based on ranging and IMU and the relative positioning algorithm based on a pure Inertial Navigation System (INS) are selected for comparison. We designed a cluster scene of three nodes that start from the origin and, after a short acceleration phase, perform circular motions of different radii. The simulation time is 210 s; the five-pointed star and the cross represent the start and end points, respectively. The motion trajectories of the nodes are shown in Fig. 14.

The trajectory of the cluster

The measured values of the inertial and ranging devices are obtained by applying the error models to the true values. The constant bias of the gyroscope is \(0.01^\circ/\text{h}\) and its random walk error is \(0.001^\circ/\sqrt{\text{h}}\); the constant bias of the accelerometer is \(100\,\mu g\) and its random walk error is \(10\,\mu g/\sqrt{\text{Hz}}\). The sampling frequency of the inertial devices is 10 Hz. The wireless ranging accuracy is 0.1 m and its sampling frequency is 1 Hz. The output frequency of the positioning result is 1 Hz.
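The observation generation can be sketched as follows. This is a deliberately simplified error model (a Gaussian ranging error and a constant accelerometer bias double-integrated over one epoch); the platform described above additionally models gyroscope, compass and random-walk errors, and all magnitudes here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simplified measurement models (values taken from the text where stated).
RANGE_SIGMA = 0.1            # wireless ranging accuracy, metres
ACC_BIAS = 100e-6 * 9.8      # accelerometer constant bias, m/s^2 (100 micro-g)
DT = 0.1                     # inertial sampling interval, s (10 Hz)

def noisy_range(p_a, p_b):
    """True inter-node distance corrupted by Gaussian ranging noise."""
    return np.linalg.norm(p_a - p_b) + rng.normal(0.0, RANGE_SIGMA)

def noisy_displacement(true_disp, n_steps=10):
    """INS displacement over one 1 s positioning epoch: the true motion vector
    plus the double-integrated effect of a constant accelerometer bias and a
    small random-walk term (illustrative magnitude)."""
    t = n_steps * DT
    bias_drift = 0.5 * ACC_BIAS * t**2          # grows quadratically with time
    return true_disp + bias_drift * np.ones(2) + rng.normal(0, 1e-4, 2)

print(noisy_range(np.zeros(2), np.array([3.0, 4.0])))
print(noisy_displacement(np.array([0.5, 0.2])))
```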
The relative positioning error metric is the modulus of the difference between the relative position of two nodes in the positioning result and in the true coordinates:

$$\varepsilon_{AB}(t) = \left| P_{AB} - P'_{AB} \right|$$

where \(P_{AB}\) and \(P'_{AB}\) denote the relative positions of the two nodes in the true coordinates and in the positioning result, respectively:

$$\begin{gathered} P_{A} = (x_{A}, y_{A}) \\ P_{B} = (x_{B}, y_{B}) \\ P_{AB} = (x_{A} - x_{B},\; y_{A} - y_{B}) \\ P'_{A} = (x'_{A}, y'_{A}) \\ P'_{B} = (x'_{B}, y'_{B}) \\ P'_{AB} = (x'_{A} - x'_{B},\; y'_{A} - y'_{B}) \end{gathered}$$

At the same time, the Root Mean Square Error (RMSE) of the positioning is calculated to evaluate the overall positioning accuracy:

$$\text{RMSE}_{AB} = \sqrt{\frac{1}{N}\sum_{t=1}^{N} \varepsilon_{AB}(t)^{2}}$$

The comparison of the relative positioning accuracy over time of the INS, the EKF-based collaborative algorithm and the proposed algorithm is shown in Fig. 15.

Comparison of relative positioning error \(\varepsilon(t)\) of each algorithm over time

From the simulation results, the relative positioning error of the INS accumulates with time and diverges. The EKF-based relative positioning algorithm reduces some of the positioning error but only slows the divergence. Compared with these two methods, the proposed algorithm does not accumulate error over time, and its overall positioning accuracy is higher. The RMSE values of the three positioning algorithms in Table 4 confirm this.

Table 4 Comparison of RMSE index of three relative positioning algorithms
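A direct implementation of the \(\varepsilon_{AB}(t)\) and RMSE metrics defined above might look as follows; the per-epoch error values are illustrative.

```python
import numpy as np

def relative_error(P_true, P_est, a, b):
    """epsilon_AB(t): norm of the difference between the true and estimated
    relative positions of nodes a and b."""
    rel_true = P_true[a] - P_true[b]
    rel_est = P_est[a] - P_est[b]
    return np.linalg.norm(rel_true - rel_est)

def rmse(errors):
    """Root mean square of the per-epoch relative position errors."""
    errors = np.asarray(errors, float)
    return np.sqrt(np.mean(errors ** 2))

# Hypothetical per-epoch errors for the A-B pair.
eps = [0.12, 0.09, 0.15, 0.11]
print(rmse(eps))
```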
Time consumption simulation
Although the simulation is not a real-time demonstration system, we can still estimate the single-shot computation time by dividing the total run time by the number of simulation steps. The simulation runs on a Lenovo PC with an i5-10400 CPU and 8 GB of memory, executing the simulation program we designed while recording the execution time. Taking a three-node core cluster as an example, with 5 simulation steps and a simulation interval of 1 s, the relative localization algorithm proposed in this paper takes 1.16 s in total, or 0.23 s per shot, which fully meets the requirement of 3 Hz real-time positioning. The EKF algorithm, however, is considerably faster, with a single-shot computation time of about 0.024 s.

Conclusion and future work
Aiming at the problems in the localization of moving, time-varying cluster networks, this paper proposes a hierarchical unmanned cluster localization architecture based on the concept of pigeon flock bionics. The design of a virtual general leader + core cluster + follower cluster realizes a flexible and scalable unmanned cluster positioning system. At the same time, a new relative positioning algorithm for unmanned clusters based on spatial-temporal correlated observations is proposed, which relies only on mutual ranging and inertial measurements within the cluster for the positioning calculation. No external signal source is needed, which improves the deployment flexibility and robustness of the cluster positioning system. The MDS algorithm is introduced to reduce the dimension of the solution space and simplify the objective function as much as possible, which shortens the subsequent computation time and makes it easier for the optimization search algorithm to find the optimal solution. The MOPSO algorithm is introduced because it is suitable for solving multi-objective problems and rarely becomes trapped in local optima. We introduce mixed graph rigidity theory to analyse the localizability of clusters, which can handle abstract graphs with both distance and direction constraints. At the same time, we abstract the inertial navigation data as a mixed direction-and-distance constraint, which extends the scope of graph-theoretic localizability analysis: graph theory can now be used to analyse the localizability of abstract graphs constructed from consecutive observations. In this paper, the method is applied to analyse the localizability conditions of the proposed relative localization algorithm and is verified by simulation. To verify the performance and localizability of the cluster localization algorithm, we designed cluster localization simulation software that can generate simulation data and display the localization results, and on this basis carried out localizability and positioning accuracy simulations under different scenarios. The simulation results show that, compared with the traditional pure INS algorithm and the EKF-based algorithm, the proposed algorithm has better long-run positioning accuracy and its positioning error does not diverge over time, making it applicable to tasks under satellite-denied conditions. The proposed algorithm cannot localize when two nodes move with the same direction and speed; nevertheless, its localizability constraints are more relaxed than those of existing methods, so it can adapt to more cluster motion scenarios. Of course, the proposed algorithm also has some limitations. Its computation time is higher than that of the traditional algorithms, although it still meets the 3 Hz real-time positioning requirement. Because complete distance information between all nodes is required, the number of nodes participating in core cluster localization is limited, otherwise it would impose a large communication load. Communication between every pair of nodes must be ensured, which limits the spatial distribution of the nodes. The future work is summarized as follows. This paper proposes a hierarchical unmanned cluster positioning architecture based on the concept of pigeon flock bionics, but provides a specific algorithm implementation and simulation analysis only for the core cluster level. We will therefore carry out further research on the leader node and the follower cluster and give more detailed analysis and algorithm implementations, including, but not limited to, the virtualization method, selection strategy and absolute positioning method of the leader node, and the positioning method and anchor-point selection strategy of the follower cluster.
The core cluster level currently analyses the positioning of only a single cluster; the positioning and coordinate-system merging algorithms between multiple clusters need further study. We will also improve the simulation platform to support the complete cluster positioning architecture, and carry out cluster positioning simulation experiments under the leader node + core cluster + follower cluster architecture to evaluate its absolute and relative positioning capabilities. On the basis of the simulation tests, experiments on an unmanned vehicle platform will be carried out to verify the effectiveness of the cluster positioning algorithm.

The datasets generated and/or analysed during the current study are not publicly available due to the foundation's requirements but are available from the corresponding author on reasonable request.

Abbreviations
MDS: Multidimensional scaling
MDSMAP: Multidimensional scaling map
PSO: Particle swarm optimization
MOPSO: Multiple objective particle swarm optimization
EKF: Extended Kalman filter
UAV: Unmanned aerial vehicles
GNSS: Global navigation satellite system
RMSE: Root mean square error
IMU: Inertial measurement unit
UWB: Ultra-wide band
NED: North east down
INS: Inertial navigation system

References
Arrichiello, F., Antonelli, G., Aguiar, A. P., & Pascoal, A. (2011). Observability metric for the relative localization of AUVs based on range and depth measurements: Theory and experiments. In 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, 25-30 Sept. 2011, San Francisco, CA, USA.
Borg, I., & Groenen, P. J. (2005). Modern multidimensional scaling: Theory and applications. Springer New York, NY.
Cano, J., & Ny, J. L. (2021). Improving ranging-based location estimation with rigidity-constrained CRLB-based motion planning. In 2021 IEEE International Conference on Robotics and Automation (ICRA), 30 May-5 June 2021, Xi'an, China.
Chen, H. B., Wang, D. Q., Yuan, F., & Xu, R. (2013). A MDS-based localization algorithm for underwater wireless sensor network. 2013 OCEANS - San Diego, 1-5.
Chen, D., & Gao, G. X. (2019). Probabilistic graphical fusion of LiDAR, GPS, and 3D building maps for urban UAV navigation. Navigation, 66(1), 151-168. https://doi.org/10.1002/navi.298
Clinch, K. (2018). Global rigidity and symmetry of direction-length frameworks. Ph.D. thesis, Queen Mary University of London, UK.
Coello, C. A. C., & Lechuga, M. S. (2002). MOPSO: A proposal for multiple objective particle swarm optimization. In Proceedings of the 2002 Congress on Evolutionary Computation, CEC'02 (Cat. No. 02TH8600), 12-17 May 2002, Honolulu, HI, USA.
Conte, G., & Doherty, P. (2008). An integrated UAV navigation system based on aerial image matching. In 2008 IEEE Aerospace Conference, 1-8 March 2008, Montana, USA.
Doriya, R., Mishra, S., & Gupta, S. (2015). A brief survey and analysis of multi-robot communication and coordination. In International Conference on Computing, Communication & Automation. https://doi.org/10.1109/CCAA.2015.7148524
Eren, T., Goldenberg, O. K., Whiteley, W., Yang, Y. R., Morse, A. S., Anderson, B. D. O., & Belhumeur, P. N. (2004). Rigidity, computation, and randomization in network localization. In IEEE INFOCOM, 7-11 March 2004, Hong Kong, China.
Fallah-Mehdipour, E., Haddad, O. B., & Mariño, M. A. (2010). MOPSO algorithm and its application in multipurpose multireservoir operations. Journal of Hydroinformatics, 13(4), 794-811. https://doi.org/10.2166/hydro.2010.105
Fan, Y., Qi, X., Li, B., & Liu, L. (2020). Fast clustering-based multidimensional scaling for mobile networks localisation. IET Communications, 14(1), 135-143. https://doi.org/10.1049/iet-com.2019.0444
Frew, E., Dixon, C., Argrow, B., & Brown, T. (2005). Radio source localization by a cooperating UAV team. In Infotech@Aerospace, 26-29 September 2005, Arlington, Virginia, USA. https://doi.org/10.2514/6.2005-6903
Garnier, S., Gautrais, J., & Theraulaz, G. (2007). The biological principles of swarm intelligence. Swarm Intelligence, 1(1), 3-31. https://doi.org/10.1007/s11721-007-0004-y
Gautam, A., & Mohan, S. (2012). A review of research in multi-robot systems. In 2012 IEEE 7th International Conference on Industrial and Information Systems (ICIIS), 6-9 August 2012, Chennai, India. https://doi.org/10.1109/ICIInfS.2012.6304778
Gyagenda, N., Hatilima, J. V., Roth, H., & Zhmud, V. (2022). A review of GNSS-independent UAV navigation techniques. Robotics and Autonomous Systems, 152, 104069. https://doi.org/10.1016/j.robot.2022.104069
Hamaoui, M. (2019). Non-iterative MDS method for collaborative network localization with sparse range and pointing measurements. IEEE Transactions on Signal Processing, 67(3), 568-578. https://doi.org/10.1109/TSP.2018.2879623
Jackson, B., & Keevash, P. (2011). Necessary conditions for the global rigidity of direction-length frameworks. Discrete & Computational Geometry, 46(1), 72-85. https://doi.org/10.1007/s00454-011-9326-z
Kumar, S., Kumar, R., & Rajawat, K. (2016). Cooperative localization of mobile networks via velocity-assisted multidimensional scaling. IEEE Transactions on Signal Processing, 64(7), 1744-1758. https://doi.org/10.1109/TSP.2015.2507548
Liu, Y. (2015). Optimized algorithm of multi-AUV cooperative navigation system and design of its formation configuration. Ph.D. thesis, Harbin Engineering University, China.
Liu, Q., Liu, R., Wang, Z., & Thompson, J. S. (2021). UAV swarm-enabled localization in isolated region: A rigidity-constrained deployment perspective. IEEE Wireless Communications Letters, 10(9), 2032-2036. https://doi.org/10.1109/LWC.2021.3091215
Luo, Q., & Duan, H. (2017). Distributed UAV flocking control based on homing pigeon hierarchical strategies. Aerospace Science and Technology, 70, 257-264. https://doi.org/10.1016/j.ast.2017.08.010
Marini, F., & Walczak, B. (2015). Particle swarm optimization (PSO). A tutorial. Chemometrics and Intelligent Laboratory Systems, 149, 153-165. https://doi.org/10.1016/j.chemolab.2015.08.020
Martinelli, A., Pont, F., & Siegwart, R. (2005). Multi-robot localization using relative observations. In Proceedings of the 2005 IEEE International Conference on Robotics and Automation, 18-22 April 2005, Barcelona, Spain.
Masinjila, R. (2016). Multirobot localization using heuristically tuned extended Kalman filter. Master's thesis, Université d'Ottawa/University of Ottawa, Canada.
Moore, D., Leonard, J., Rus, D., & Teller, S. (2004). Robust distributed network localization with noisy range measurements. In Proceedings of the 2nd International Conference on Embedded Networked Sensor Systems, 3-5 November 2004, Baltimore, MD, USA.
Mostaghim, S., & Teich, J. (2003). Strategies for finding good local guides in multi-objective particle swarm optimization (MOPSO). In Proceedings of the 2003 IEEE Swarm Intelligence Symposium, SIS'03 (Cat. No. 03EX706), 26 April 2003, Indianapolis, IN, USA.
Niu, D., Guan, B., Zhou, W., & Yu, D. (2010). 3D localization method based on MDS-RSSI in wireless sensor network.
In 2010 IEEE International Conference on Intelligent Computing and Intelligent Systems, 22-24 Oct. 2010, Guilin, China.
Panzieri, S., Pascucci, F., & Setola, R. (2006). Multirobot localisation using interlaced extended Kalman filter. In 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, 9-15 October 2006, Beijing, China. https://doi.org/10.1109/IROS.2006.282065
Reyes-Sierra, M., & Coello, C. C. (2006). Multi-objective particle swarm optimizers: A survey of the state-of-the-art. International Journal of Computational Intelligence Research, 2(3), 287-308.
Shamaei, K., & Kassas, Z. (2019). Sub-meter accurate UAV navigation and cycle slip detection with LTE carrier phase measurements. In Proceedings of the 32nd International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2019), 16-20 September 2019, Miami, Florida, USA. https://doi.org/10.33012/2019.17051
Shang, Y., Ruml, W., Zhang, Y., & Fromherz, M. P. (2003). Localization from mere connectivity. In Proceedings of the 4th ACM International Symposium on Mobile Ad Hoc Networking & Computing, 1-3 June 2003, Annapolis, Maryland, USA.
Shi, Y., & Eberhart, R. (1998). A modified particle swarm optimizer. In 1998 IEEE International Conference on Evolutionary Computation Proceedings, IEEE World Congress on Computational Intelligence (Cat. No. 98TH8360), 4-9 May 1998, Anchorage, AK, USA.
Strohmeier, M., Walter, T., Rothe, J., & Montenegro, S. (2018). Ultra-wideband based pose estimation for small unmanned aerial vehicles. IEEE Access, 6, 57526-57535. https://doi.org/10.1109/ACCESS.2018.2873571
Tahir, A., Böling, J., Haghbayan, M.-H., Toivonen, H. T., & Plosila, J. (2019). Swarms of unmanned aerial vehicles: A survey. Journal of Industrial Information Integration, 16, 100106. https://doi.org/10.1016/j.jii.2019.100106
Wei, Y., & Li, Q. (2004). Survey on particle swarm optimization algorithm. Engineering Science, 6(5), 87-94.
Whiteley, W. (1996). Some matroids from discrete applied geometry. Contemporary Mathematics, 197, 171-312.
Whitney, H. (1992). On the abstract properties of linear dependence. In J. Eells & D. Toledo (Eds.), Hassler Whitney Collected Papers (pp. 147-171). Birkhäuser Boston. https://doi.org/10.1007/978-1-4612-2972-8_10
Wu, C.-F., Cheng, J., Guo, X.-Y., Xu, C., Hu, E.-W., & Qi, H. (2022). Development and prospect of aircraft clusters cooperative positioning and navigation countermeasures technology. Yuhang Xuebao/Journal of Astronautics, 43(2), 131-142. https://doi.org/10.3873/j.issn.1000-1328.2022.02.001
Xiao, F., Sha, C., Chen, L., Sun, L., & Wang, R. (2015). Noise-tolerant localization from incomplete range measurements for wireless sensor networks. In 2015 IEEE Conference on Computer Communications (INFOCOM), 26 April-1 May 2015, Hong Kong, China.
Yang, Z., Wu, C., Chen, T., Zhao, Y., Gong, W., & Liu, Y. (2013). Detecting outlier measurements based on graph rigidity for wireless sensor network localization. IEEE Transactions on Vehicular Technology, 62(1), 374-383. https://doi.org/10.1109/TVT.2012.2220790
Yi, S., & Ruml, W. (2004). Improved MDS-based localization. In IEEE INFOCOM, 7-11 March 2004, Hong Kong, China.
Yoo, C.-S., & Ahn, I.-K. (2003). Low cost GPS/INS sensor fusion system for UAV navigation. In Digital Avionics Systems Conference, 2003, DASC '03, The 22nd, IEEE. https://doi.org/10.1109/DASC.2003.1245891
The authors would like to thank Ruixin Xue for help with the paper's revisions, and Huayu Yuan, Wenju Su and Jianmin Zhao for help with the implementation of the experiments. This work was supported by the Science and Technology on Complex System Control and Intelligent Agent Cooperative Laboratory foundation (201101).

Author information: Zhongliang Deng, Hang Qi, Enwen Hu & Runmin Wang, Beijing University of Posts and Telecommunications, Beijing, 100876, China; Chengfeng Wu, Science and Technology on Complex System Control and Intelligent Agent Cooperative Laboratory, Beijing, 100074, China. Correspondence to Hang Qi.

Author contributions: Conceptualization, ZD and HQ; methodology, ZD and HQ; software, HQ; validation, HQ, RW and EH; formal analysis, CW; investigation, HQ and RW; resources, CW; data curation, RW; writing (original draft preparation), HQ; writing (review and editing), HQ; visualization, HQ; supervision, ZD; project administration, ZD and HQ; funding acquisition, EH and CW. All authors have read and agreed to the published version of the manuscript.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Cite this article: Deng, Z., Qi, H., Wu, C. et al. A cluster positioning architecture and relative positioning algorithm based on pigeon flock bionics. Satell Navig 4, 1 (2023). https://doi.org/10.1186/s43020-022-00090-2. Received: 02 September 2022. Accepted: 15 November 2022.

Keywords: Cluster positioning architecture; Cluster relative positioning; Unmanned aerial vehicles positioning; Rigid graph. Part of the collection Navigation Technologies for Autonomous Systems.
Methods/design
Outcome Measures in Rheumatology - Interventions for Medication Adherence (OMERACT-Adherence) Core Domain Set for Trials of Interventions for Medication Adherence in Rheumatology: 5 Phase Study Protocol
Ayano Kelly1,2,3,4, Allison Tong4,5, Kathleen Tymms1,2,3, Lyn March6,7,8, Jonathan C. Craig4,5, Mary De Vera9,10, Vicki Evans11, Geraldine Hassett12,13, Karine Toupin-April14,15, Bart van den Bemt16,17, Armando Teixeira-Pinto4,5, Rieke Alten18, Susan J. Bartlett19,20, Willemina Campbell21, Therese Dawson22,23, Michael Gill24, Renske Hebing25, Alexa Meara26, Robby Nieuwlaat27, Yomei Shaw28, Jasvinder A. Singh29,30,31, Maria Suarez-Almazor32, Daniel Sumpton4,5,33, Peter Wong34,35, Robin Christensen36, Dorcas Beaton37,38,39, Maarten de Wit40 and Peter Tugwell41, on behalf of the OMERACT-Adherence Group

Over the last 20 years, there have been marked improvements in the availability of effective medications for rheumatic conditions such as gout, osteoporosis and rheumatoid arthritis (RA), which have led to a reduction in disease flares and the risk of re-fracture in osteoporosis, and the slowing of disease progression in RA. However, medication adherence remains suboptimal, as treatment regimens can be complex and difficult to continue long term. Many trials have been conducted to improve adherence to medication. Core domains, which are the outcomes of most relevance to patients and clinicians, are a pivotal component of any trial. These core domains should be measured consistently, so that all relevant trials can be combined in systematic reviews and meta-analyses to reach conclusions that are more valid. Failure to do this severely limits the potential for trial-based evidence to inform decisions on how to support medication adherence. The Outcome Measures in Rheumatology (OMERACT) Interventions for Medication Adherence study by the OMERACT-Adherence Group aims to develop a core domain set for interventions that aim to support medication adherence in rheumatology. This OMERACT-Adherence study has five phases: (1) a systematic review to identify outcome domains that have been reported in interventions focused on supporting medication adherence in rheumatology; (2) semi-structured stakeholder interviews with patients and caregivers to determine their views on the core domains; (3) focus groups using the nominal group technique with patients and caregivers to identify and rank domains that are relevant to them, including the reasons for their choices; (4) an international three-round modified Delphi survey involving patients with diverse rheumatic conditions, caregivers, health professionals, researchers and other stakeholders to develop a preliminary core domain set; and (5) a stakeholder workshop with OMERACT members to review, vote on and reach a consensus on the core domain set for interventions to support medication adherence in rheumatology. Establishing a core domain set to be reported in all intervention studies undertaken to support patients with medication adherence will enhance the relevance and the impact of these results and improve the lives of people with rheumatic conditions.

Keywords: Core domain set; Patient-centred outcomes; Adherence

Musculoskeletal conditions are a major cause of disability worldwide and a burden on individuals and health-care systems [1]. Advances in drug development throughout the 21st century have led to a dramatic improvement in outcomes for patients with rheumatic conditions [2, 3].
Conditions such as gout, osteoporosis and rheumatoid arthritis (RA) are amongst the most common rheumatic conditions that require long-term use of medications to improve morbidity, mortality and other health outcomes [4-7]. However, rates of medication adherence have been reported to be as low as 10% in gout, 30% in RA and 45% in osteoporosis [8-10]. Barriers to medication adherence include perceptual barriers (e.g. concerns about side effects and uncertainty regarding the efficacy of medications) and practical barriers (e.g. forgetfulness, inconvenience and cost) [11-14]. Researchers most commonly support the use of the word 'adherence' in preference to 'compliance' or 'concordance' [15, 16]. 'Adherence' highlights the outcomes of a shared decision-making approach in which the patient and physician agree upon a treatment plan that the patient will follow [17]. 'Compliance' may portray a negative, paternalistic relationship between the health-care provider and the patient [15]. 'Concordance' emphasises a balanced therapeutic alliance between the patient and the health-care provider [18]; however, even when 'concordance' is successful, patients may alter or decide not to take their medicine [18]. Thus, 'adherence' remains the preferred term. While non-pharmacological management is an important aspect of many rheumatic conditions, adherence to non-pharmacological management is currently beyond the scope of this study. The ABC taxonomy of adherence [15, 19] defines adherence as 'the process by which patients take their medications as prescribed' and comprises: (a) initiation (when the patient takes the first dose of a prescribed medication), (b) implementation (the extent to which a patient's actual dosing corresponds to the prescribed dosing regimen, from initiation until the last dose) and (c) persistence (the length of time between initiation and the last dose, which immediately precedes discontinuation, i.e., when the patient stops taking the prescribed medication) [15]. The behaviour change wheel will be used to categorise intervention approaches relevant to improving adherence behaviours (Appendix 1) [19]. In the OMERACT-Adherence study, interventions may focus on any adherence phase (initiation, implementation or persistence), source of medication, adherence behaviour (capability, opportunity or motivation) and method (education, persuasion, incentivisation, coercion, training, restriction, environmental restructuring, modelling and enablement) (Fig. 1).

Scope and definitions of OMERACT-Adherence study

Adherence research plays an important role in bridging the chasm between recommended and best practice approaches to disease management to improve medication adherence. Clinical trials have been conducted in people with rheumatic conditions to resolve ambivalence and improve medication acceptance and adherence, and thereby enhance health outcomes [20]. Yet few interventions have demonstrated meaningful improvements in either medication adherence or clinical outcomes across medical specialties [20, 21]. A limitation in collating the results of these trials to better identify successful interventions is the lack of clarity of core outcomes and the wide variability in adherence measures. There is a need for a consensus-based core domain set for interventions to improve medication adherence. Worldwide, there have been many initiatives to develop core domain sets [22, 23], defined as the minimum set of outcome domains that should be measured and reported in clinical trials for a specific condition.
The Outcome Measures in Rheumatology (OMERACT) initiative commenced in 1992 and has expanded to develop core domain sets for multiple rheumatic conditions [24]. There are now over 20 groups developing core domain sets for specific conditions [22, 25], and several methodological groups are examining the core domains of interventions and measurements of outcomes that are relevant across rheumatic conditions, including health literacy, shared decision-making and work productivity [26-28]. The OMERACT-Adherence Group aims to establish a core domain set for clinical trials to support medication adherence in patients with rheumatic conditions of all ages (Fig. 2). The OMERACT-Adherence Group was established in December 2016 and comprises over 40 members from 11 countries: Australia, Canada, Germany, Greece, the Netherlands, Singapore, the United Kingdom, Oman, Switzerland, Denmark and the United States. The members include patients, rheumatologists, nurses, pharmacists, behavioural scientists, occupational therapists, industry representatives, researchers in outcomes and medication adherence, and clinical trialists. The patient perspective is highly valued and integrated into all OMERACT activities, as the ultimate aim is to improve clinical outcomes for patients [29]. Patient research partners are members of the steering committee of the OMERACT-Adherence Group and help with the design, conduct, analysis and dissemination of all studies.

Conceptual schema of OMERACT-Adherence core domain set. ACR American College of Rheumatology

The five specific objectives of this OMERACT-Adherence study are to: (1) conduct a systematic literature review to describe the scope and consistency of domains used in rheumatology interventions addressing medication adherence; (2) identify additional domains that are important to patients and their caregivers and elucidate the reasons for their choices; (3) ascertain the perspectives of other stakeholders, including health professionals, researchers, purchasers, payers, policymakers and industry representatives, on core domains; (4) develop a preliminary core domain set for clinical trials with input from all stakeholder groups; and (5) seek a consensus on the OMERACT-Adherence core domain set by a ballot of the OMERACT members. The OMERACT-Adherence study methodology is adapted from the OMERACT framework, which is recognised as a valid approach to establishing a core domain set [22]. The protocol includes a SPIRIT checklist of recommended items to address in a clinical trial protocol and related documents (Additional file 1). The proposed scope of work to achieve the five OMERACT-Adherence study objectives is outlined in Fig. 3.

OMERACT-Adherence study process

Phase 1: systematic review of outcome domains and measures reported in trials of medication adherence
A systematic review will be conducted to identify and compare outcome domains and measures reported in interventions to improve medication adherence in rheumatology clinical trials. An outcome domain is the name of the broad concept that is measured (e.g. adherence, medication knowledge and medication skill). An outcome is the specific result in a domain arising from exposure to a causal factor or a health intervention (e.g. disease-modifying anti-rheumatic drug knowledge in RA and self-injection skill). An outcome measure includes the specific measurement instrument (the tool used to measure a quality or quantity of a variable, e.g. pill count), the specific metric (e.g. a change from baseline) and the method of aggregation (e.g. mean or median for continuous measures, or proportion for categorical measures) [30, 31].
a change from baseline) and the method of aggregation (e.g. mean or median for continuous measures or proportion for categorical measures) [30, 31]. Electronic databases (MEDLINE, Embase, PsycINFO, CINAHL and CENTRAL) will be searched to 31 October 2017 to identify all trials of interventions aiming to improve medication adherence involving patients with rheumatic conditions. The search will use medical subject headings for concepts including 'patient compliance', 'medication adherence', 'intervention', 'inflammatory arthritis', 'rheumatoid arthritis', 'psoriatic arthritis', 'ankylosing spondylitis', 'juvenile idiopathic arthritis', 'connective tissue diseases', 'systemic lupus erythematosus', 'vasculitis', 'Sjogren's syndrome', 'osteoporosis' and 'gout', and keywords for concepts without matching subject headings. The bibliographies of included articles will be searched by hand.
Types of studies and interventions
All publications studying interventions aiming to improve medication adherence in rheumatic conditions will be included. Given the limited number of randomised controlled trials (RCTs) for medication adherence in rheumatic conditions [20], non-controlled and single-arm interventions for medication adherence in rheumatic conditions will be included. Studies involving participants of all ages with any rheumatic condition, including inflammatory arthritis, connective tissue diseases and osteoporosis, will be included. Conference reports and abstracts will be excluded given their space constraints. For feasibility, the search will be restricted to English-language articles.
Eligibility of studies
Two reviewers will independently screen the abstracts and full text of all potentially relevant studies. Any uncertainty about the eligibility of a study will be resolved by a third reviewer. Data will be extracted and entered into Microsoft Excel using a pre-designed form, piloted before full data extraction with a sample of included studies. The primary reviewer will extract the following from all included interventions: first author, date of publication, countries in which the trial was conducted, sample size, participant characteristics (age, gender, condition and medication) and trial duration. In addition, the type of intervention and all adherence-related outcomes reported in the trial will be extracted. Adherence-related outcomes include adherence and any other outcomes related to adherence behaviour (including capability, opportunity and motivation) [19]. For each outcome, the definitions, outcome measures used, time points, metric and method of aggregation will be extracted. Clinical outcomes for specific conditions will not be extracted, as this work is already being undertaken by other OMERACT groups [24]. Clinical outcomes are defined as any outcome that would fall under the four core areas in the OMERACT filter of death, life impact, resource use and pathophysiological manifestations for the specific condition, and they also include adverse events [24].
Data analysis and presentation
Two reviewers will group similar outcomes into outcome domains, which will be reviewed and modified by the OMERACT-Adherence steering committee. The frequency of each domain and outcome measure reported across trials will be calculated. Domains and measures will be compared with those identified in the 2014 Cochrane systematic review of RCTs to enhance medication adherence, which includes 182 RCTs across other specialties [32].
Phase 2: stakeholder interviews
Semi-structured interviews will be conducted with patients and caregivers to ascertain individual perspectives on outcome domains. The interview guide will incorporate findings from phase 1 and help to provide a greater understanding of the values and beliefs that underlie candidate domains. Additional outcome domains will also be identified in this phase. We will follow the consolidated criteria for reporting qualitative research (COREQ) to guide our methods and reporting [33].
Participants and recruitment
Adults with gout, osteoporosis or RA and their caregivers (defined by the patient as a significant person or family member who is aware of the patient's illness and treatments) will be eligible to participate in an interview. Three conditions have been chosen for the phase 2 interviews and phase 3 focus groups, based on feasibility. They represent common rheumatic conditions with known poor levels of adherence [8–10]. Patients with diverse rheumatic conditions will be included in phases 1, 4 and 5 to ensure the core domain set is applicable to all rheumatic conditions. Participants will be identified by treating rheumatologists at participating centres in Australia: Liverpool Hospital (NSW), Canberra Rheumatology (ACT and NSW), BJC Health (NSW) and Royal North Shore Hospital (NSW). Although this phase includes participants from one country only, all other phases will include participants from different countries. A purposive sampling technique will be applied to include a broad range of demographic characteristics (age, gender, socioeconomic status, educational level and ethnicity) and clinical characteristics (type, duration and severity of condition). Based on our experience with previous qualitative interview studies, target recruitment will be approximately 30 participants. However, final numbers will be determined by data saturation, defined as the point at which no new concepts or outcome domains are being identified. To achieve adequate participant enrolment at each site, additional recruiting clinicians will be contacted if needed. Written informed consent will be obtained from all participants. The interviews will be conducted face-to-face as the first preference or, if preferred by the participant, by Skype, FaceTime or telephone. Each interview will take approximately 40 min and will be audio-recorded and transcribed verbatim. A preliminary interview guide is provided (Appendix 2). Transcripts will be available for participants to review and revise. A summary of the interview findings will be sent to participants for member checking. The transcripts will be imported into the software HyperRESEARCH (ResearchWare Inc, http://www.researchware.com, version 3.7.5) for the qualitative data analysis. Two experienced qualitative investigators will supervise the coding and development of descriptive and analytical themes. Using inductive thematic analysis, the findings from the study will be grounded in the participant data [34]. The transcripts will be coded line by line to identify concepts. Similar concepts will be grouped into themes that reflect different outcome domains, with the reasons for identifying them. The analysis will be iterative, moving repeatedly between the transcripts, the analysis and subsequent interviews. The preliminary results will be reviewed and modified by the OMERACT-Adherence steering committee. Conceptual links amongst themes and subthemes will be identified to develop an analytical thematic schema.
Phase 3: focus groups with modified nominal group technique with patients and caregivers
Patients and their caregivers will be asked to identify outcome domains they regard as important and relevant to measure in trials to support medication adherence, and to discuss the reasons for their choices. A modified nominal group technique will be used to generate a prioritised set of ideas in a group systematically and to encourage the participation of each member [35, 36]. The outcome domains from phases 1 and 2 will be incorporated for discussion and ranking in nominal groups. Additional outcome domains will also be identified in this phase. This technique generates both quantitative and qualitative data and has been used successfully in the development of other OMERACT core domain sets [37, 38]. At least 12 focus groups (with a minimum of five participants per group) will be convened. Adults aged 18 years and over with gout, osteoporosis or RA and their caregivers will be invited to participate. The recruitment sites and purposive sampling technique are outlined in phase 2. In addition, focus groups will take place in the Netherlands (through Sint Maartenskliniek). Participants in the focus groups will be different from those in the individual interviews. The groups will be convened until data saturation. The focus groups will be convened by condition at each site. To achieve adequate participant enrolment at each site, additional recruiting clinicians will be contacted if needed. Written informed consent will be obtained from all participants. The focus groups will be up to 2 h in duration. An experienced facilitator with training in the nominal group technique and who is not involved in any patient's care will moderate the groups to encourage open discussion. The questions will be described in an interview guide and discussed among the steering committee [31]. All focus groups will be audio-taped and transcribed verbatim. De-identified transcripts will be available for participants to review and revise. A note-taker will record notes on the interaction among the participants. The preliminary content for the focus group run sheet is provided (Appendix 3). An importance score will be calculated for each outcome domain, based on the rankings attributed in the focus groups, to give an overall ranking of all outcome domains identified. The distribution of the ranking for each outcome domain is calculated from the probability of each rank for each outcome domain. The probability has two components: (1) the importance given to the outcome domain by the ranking and (2) the consistency of being nominated by the participants. Higher scores identify outcome domains that are more valued by the participants. These probabilities are used to compute the weighted sum of the inverted rank (1/i), giving the importance score (IS) of outcome domain \(O_j\):
$$ \mathrm{IS}_j=\sum \limits_{i=1}^{n} P\left({O}_{j}\ \mathrm{in}\ \mathrm{rank}\ i\right)\times \frac{1}{i}, $$
where \(n\) is the number of outcome domains. The importance scores will also be calculated separately for each condition, as well as for patients and caregivers, and compared using a t-test with a statistical significance level of p < 0.05. Participants who have not ranked at least ten outcome domains will be excluded from this analysis. The analysis will be conducted using the statistical software Stata/SE (StataCorp, College Station, TX) and R (R Foundation for Statistical Computing, Vienna, Austria).
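As an illustration of how the importance score can be computed from focus-group data, the following sketch estimates \(P(O_j\ \text{in rank}\ i)\) empirically as the fraction of participants assigning domain \(j\) to rank \(i\). The ranking matrix is hypothetical and the estimator is our assumption, not part of the protocol.

```python
import numpy as np

def importance_scores(rankings: np.ndarray) -> np.ndarray:
    """Importance score per outcome domain.

    rankings: (n_participants, n_domains) array where rankings[p, j] is the
    rank (1 = most important) that participant p gave to domain j.
    Returns one score per domain: IS_j = sum_i P(O_j in rank i) * 1/i.
    """
    n_participants, n_domains = rankings.shape
    scores = np.zeros(n_domains)
    for j in range(n_domains):
        for i in range(1, n_domains + 1):
            p_rank = np.mean(rankings[:, j] == i)  # P(O_j in rank i)
            scores[j] += p_rank / i                # weight by inverted rank
    return scores

# Hypothetical example: 4 participants ranking 3 domains
ranks = np.array([[1, 2, 3],
                  [1, 3, 2],
                  [2, 1, 3],
                  [1, 2, 3]])
print(importance_scores(ranks))  # higher score = more valued domain
```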
Transcripts will be imported into HyperRESEARCH (ResearchWare Inc, http://www.researchware.com), software for qualitative data analysis. Using a thematic analysis, the transcripts will be coded line by line by an investigator experienced in qualitative research to identify concepts. Similar concepts will be grouped into themes that reflect the reasons for identifying and ranking the outcome domains. These themes will be discussed by the OMERACT-Adherence steering committee.
Phase 4: modified Delphi consensus survey
An international online OMERACT-Adherence survey will incorporate all domains identified in phases 1–3 and generate a consensus on up to seven core domains, as well as other domains that may fit under the optional or research domains. Delphi surveys have been used to gain consensus on core domain sets in a range of health conditions [39–42]. The online survey will involve three rounds completed by participants with knowledge, experience or expertise on the topic. Although Delphi surveys used to develop core domain sets for trials in OMERACT have involved up to 250 participants [41–43], there is no agreement on the sample size required for a Delphi survey [44, 45]. To achieve a minimum sample size of 200 respondents at the end of the Delphi survey, assuming 20% attrition in each round, the initial target sample size will be 390. Participant retention in Delphi rounds will be encouraged with at least two reminder emails. The sample will include patients and caregivers (minimum n = 200); rheumatologists (minimum n = 63); pharmacists, nurses, allied health professionals and general practitioners (minimum n = 63); and outcomes researchers, adherence researchers, clinical trialists, representatives from the pharmaceutical industry and policymakers (minimum n = 63). To achieve adequate participant enrolment, participants will be identified from the networks of the OMERACT-Adherence Group. Following this, a snowball sampling technique will be utilised for recruitment, whereby key informants will be identified for recruitment by existing participants to ensure that a broad range of participant characteristics (including countries and health-care systems) and experiences are captured.
Generating the list of outcome domains
The modified Delphi survey will include outcome domains identified in phases 1 to 3. The survey will include a plain-language definition of each listed outcome domain. The survey will be reviewed by the OMERACT-Adherence Group, and piloted with at least three patients, three clinicians and three other relevant stakeholders.
Survey administration
The surveys will be completed online using the survey platform Qualtrics (Qualtrics, Provo, UT). Each participant will be given a unique identifier so that their responses from each round of the survey can be linked anonymously. A minimum of three reminders will be sent to participants during the Delphi rounds, with the aim of achieving a response rate of at least 70% across all three rounds among those who have agreed to participate.
Delphi round 1
Participants will rate each outcome domain using a nine-point Likert scale: ratings of 1 to 3 indicate 'not important', 4 to 6 'important but not a priority', and 7 to 9 'very important and a priority'. 'Unsure' will also be an option. Responses will be mandatory and participants will be encouraged to use the full range of scores. The sequence of outcome domains will be randomised to minimise ordering bias.
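The 390-person target follows directly from the stated attrition assumption over the three rounds:
$$390 \times (1-0.20)^{3} = 390 \times 0.512 \approx 200.$$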
Participants can provide comments for each outcome domain in a free-text box and suggest new outcome domains. All new outcome domains that are suggested will be reviewed by the steering committee and discussed for inclusion in round 2. Any outcome domain where ≥70% of either patients and caregivers or other stakeholders have rated the outcome domain as very important and a research priority (scores 7–9) will be retained in round 2 and reported back to participants. All items where ≥70% of the participants voted the item as not important (scores 1–3) will be excluded from the Delphi list. All the remaining items and new items will be sent back for re-scoring in round 2.
Participants will be presented with a graph showing the distribution of scores for all retained domains for (1) patients and caregivers, (2) other stakeholders and (3) all participants. Comments from round 1 by all other participants will also be provided. The participant's own response from round 1 will be highlighted. Participants will use the same Likert scale for re-scoring. Participants can provide comments for each outcome domain in a free-text box. Any outcome domain where ≥70% of either patients and caregivers or other stakeholders have rated the outcome domain as very important and a research priority (scores 7–9) will be retained in round 3 and reported back to participants. All items where ≥70% of the participants voted the item as not important (scores 1–3) will be excluded from the Delphi list. All the remaining items will be sent back for re-scoring in round 3.
Participants will view the distribution of scores and comments for each domain from round 2. Participants will see their own scores from round 2 highlighted and re-score outcome domains. After the rating questions, participants will be asked to complete a best–worst scaling survey [46]. In the best–worst survey, participants will be presented with up to six lists, each containing a subset of six of the outcome domains remaining in round 3. Participants will be asked to choose the most important and least important outcome domains from each list. The best–worst scaling survey will quantify the relative importance of each of the remaining outcome domains.
The mean, median and proportion of the ratings for each outcome domain from all three rounds will be calculated. The scores will be calculated separately for patients/caregivers and other stakeholders. A Wilcoxon signed-rank test or t-test will be used to compare the mean difference in rating scores between the two stakeholder groups, with a significance level of p < 0.05. The best–worst scaling survey will be used to calculate the relative importance score for each of the remaining outcome domains. Multinomial logistic regression models will be used to calculate a relative importance score for each outcome domain, normalised to the range 0 (least important) to 10 (most important). Importance scores will be calculated separately for patients, caregivers and other stakeholders. The influence of demographic factors, such as age, gender and condition, will be investigated. Participants who have not completed all three Delphi rounds will be excluded from the analysis. Based on previous Delphi surveys used in outcomes research, a preliminary core domain set will be based on the outcome domains for which ≥70% of both patients/caregivers and other stakeholders have rated it as critically important (rating 7–9) [43].
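A small sketch may clarify the round-to-round retention rule; the array layout and the pooling of both groups for the exclusion criterion are our reading of the text, not a published implementation.

```python
import numpy as np

def delphi_round_filter(scores_patients, scores_others):
    """Apply the retention rules described above for one Delphi round.

    scores_*: (n_respondents, n_domains) arrays of 1-9 Likert ratings.
    A domain is retained if >=70% of either group rated it 7-9, and
    excluded if >=70% of all respondents rated it 1-3.
    Returns boolean masks (retained, excluded); the rest is re-scored.
    """
    def frac(scores, lo, hi):
        # fraction of respondents per domain whose rating lies in [lo, hi]
        return np.mean((scores >= lo) & (scores <= hi), axis=0)

    retained = (frac(scores_patients, 7, 9) >= 0.7) | \
               (frac(scores_others, 7, 9) >= 0.7)
    all_scores = np.vstack([scores_patients, scores_others])
    excluded = (frac(all_scores, 1, 3) >= 0.7) & ~retained
    return retained, excluded
```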
For feasibility, up to seven critically important outcome domains (based on the means, medians and proportions of ratings and importance score) will be identified as the preliminary core domain set. Phase 5: consensus workshop A consensus workshop will review the results from phases 1 to 4 and discuss the potential core domain set. Strategies to develop outcome measures will also be discussed. The target will be at least 60 participants, with a minimum of 20 patients and caregivers. To achieve adequate participant enrolment, the stakeholder workshop is anticipated to occur during the 2020 OMERACT meeting. Invitations will be extended to health professionals (rheumatologists, pharmacists, nurses and other allied health professionals), researchers, policymakers and pharmaceutical industry representatives with expertise in medication adherence in rheumatology. To facilitate implementation, invitees will include health professionals who have key roles in specialty professional organisations, guidelines, registries, journals, regulatory agencies and funding organisations. All parts of the workshop will be audio-recorded and transcribed. Participants will be sent a copy of the results from phases 1 to 4 prior to the workshop and asked to consider the results to date, so that they are prepared to give informed and considered feedback. The preliminary agenda for the consensus workshop is presented below. Part 1: introduction The aims, method and the results from OMERACT-Adherence phases 1 to 4, including the preliminary core domain set, proposed consensus definition and strategies to develop outcome measurements, will be presented by the chair of the OMERACT-Adherence Group. Part 2: breakout groups Participants will be assigned to breakout groups with approximately 12 participants per group (each with a facilitator and co-facilitator chosen from the OMERACT-Adherence Group). The groups will contain a mixture of stakeholders, including a minimum of two patients or caregivers, to promote the exchange of different perspectives. A briefing session, including a detailed run sheet with the question guide, will be provided to train facilitators. The facilitators will moderate the group discussion and take notes to report back to the larger group, focusing on the candidate core domains and strategies to develop outcome measures. Part 3: plenary discussion The group will reconvene after the breakout group session. Each group will report back the results of their discussion to the wider group. Participants will be encouraged to provide feedback on the issues raised by other groups. The workshop chair will moderate the forum and summarise key points. Finalisation of the core domain set Final consensus voting will include voting on each proposed domain. Changes to domains (e.g. wording or definition) will be permitted during phase 5. All domains voted for by ≥70% of participants will be included in the core domain set. In addition, attendees will vote on whether appropriate steps outlined in phases 1–4 were followed to obtain the core domain set and agreement on a proposed research agenda for core outcome measurement development. Following the workshop, all transcripts will be entered into the software HyperRESEARCH (ResearchWare Inc. http://www.researchware.com, version 3.7.5). The data will be coded and analysed to identify participant perspectives on the potential core domain set, and suggestions and challenges for implementation. 
The key findings will be reviewed by the OMERACT-Adherence steering committee prior to submitting a finalised workshop report. Phases 1 to 5 of the OMERACT-Adherence process, including the workshop report on the core domain set, will be published in peer-reviewed journals. OMERACT-Adherence will use a validated and systematic approach to develop a consensus-based core domain set that OMERACT will recommend be reported in all clinical trials of interventions aimed at improving medication adherence in paediatric and adult rheumatic conditions. The OMERACT-Adherence core domain set may be considered for other contexts, including other specialties, and other types of studies, such as observational studies in which medication adherence is a key requirement to ensure the optimal uptake of new medications. Once the OMERACT-Adherence core domain set has been ratified by OMERACT attendees, core outcome measurements for each of the core domains will be identified or developed as needed using the OMERACT filter to ensure that measures are truthful, discriminative and feasible [47]. Guidelines for selecting outcome measurements for core domains that have been developed by the Core Outcome Measures in Effectiveness Trials (COMET) and Consensus-based Standards for the Selection of Health Measurement Instruments (COSMIN) initiatives will also be used to guide this process [23, 48]. In addition to publications and research presentations, to facilitate the dissemination and uptake of the OMERACT-Adherence core domain set into clinical trials, national and international stakeholders will be consulted throughout the study phases and at an implementation workshop at the completion of the study. Ultimately, the standardised use of a consensus-based set of high-priority outcome domains will enable all stakeholders to make decisions about strategies to improve medication adherence.
Study status
Data collection and recruitment commenced for phases 1, 2 and 3 in October 2017. A time schedule has been adapted from the SPIRIT figure and is provided (Appendix 1). The OMERACT-Adherence five-phase study was registered on the COMET database on 27 November 2017 (http://www.comet-initiative.org/studies/details/1068). Any important amendments to the protocol will be discussed amongst the OMERACT-Adherence steering committee and submitted to the COMET database. The date of submission for this protocol (version 1) is 29 November 2017.
ACR: American College of Rheumatology; COMET: Core Outcome Measures in Effectiveness Trials; COSMIN: Consensus-based Standards for the Selection of Health Measurement Instruments; OMERACT: Outcome Measures in Rheumatology; RCT: Randomised Controlled Trial.
We would like to acknowledge the contribution of the other OMERACT-Adherence Group members (Marieke Scholte-Voshaar, Khoula AlMaqbali, Annica Barcinella-Wong, Peter Cheung, Luke Crimston-Smith, Marita Cross, Rebecca Davey, Paul Emery, Kieran Fallon, Sarah Flint, David Graham, Stephen Hall, Susan Hermann, Helen Keen, Katerina Koutsogianni, Irwin Lim, Francois Nantel, Sean O'Neill, Clare O'Sullivan, Premarani Sinnathurai, and Biljana Zeljkovic).
Role of members of the OMERACT-Adherence Group
Fellow and chair: AK. Responsibility: Principal investigator and co-ordination of the OMERACT-Adherence Group. Steering group members: AT, KT, LM, MDV, VE, GH, KTA, BVB, Marieke Scholte-Voshaar, SJB, PT. Responsibilities: Major input into study design, collection, management, analysis, interpretation of data and writing of reports. OMERACT supervisors: LM, PT.
Responsibilities: Supervision of studies and ensuring studies follow OMERACT procedures. Working group members: RA, WC, TD, MG, RH, AM, RN, YS, JAS, MSA, DS, PW, RC, DB, MdW, Khoula AlMaqbali, Annica Barcinella-Wong, Peter Cheung, Luke Crimston-Smith, Marita Cross, Rebecca Davey, Paul Emery, Kieran Fallon, Sarah Flint, David Graham, Stephen Hall, Susan Hermann, Helen Keen, Katerina Koutsogianni, Irwin Lim, Francois Nantel, Sean O'Neill, Clare O'Sullivan, Premarani Sinnathurai, Biljana Zeljkovic. Responsibilities: Input into study design, collection, management, analysis, interpretation of data and writing of reports.
Authorship eligibility guidelines
All studies will be submitted to peer-reviewed journals for publication. The Uniform Requirements for Manuscripts Submitted to Biomedical Journals criteria for authorship will be followed for all publications. No professional writers will be used for publications.
All digital recordings of interviews, focus groups and workshops will be de-identified, transcribed and deleted after transcription. Hard copies of transcripts, data analysis and participant information will be kept on a password-protected USB drive that will be kept in a locked cabinet in the Higher Degrees Research Office at the Australian National University (Florey Building, 54 Mills Road, Acton ACT 2601, Australia). The data will be stored for 5 years, after which digital files will be deleted.
Ancillary and post-study care
All patient information forms contain contact details of the principal investigator and the Research and Ethics office, who are willing to discuss any medical problems that may be related to the project or concerns or complaints about the conduct of the study. The OMERACT-Adherence Group receives funding from OMERACT, which will be used to support a patient research partner in the OMERACT-Adherence Group to attend the OMERACT conference. OMERACT (http://www.omeract.org, contact: secretariat admin@omeract.org) is the primary sponsor, responsible for approving the initiation and overseeing the ongoing progress and management of the study. OMERACT mentors oversee the design and conduct of the studies, including the interpretation of data and the preparation, review and approval of manuscripts. The following funding organisations had no role in the design and conduct of the studies; collection, management, analysis and interpretation of the data; or preparation, review or approval of manuscripts. AK is supported by the Arthritis Australia Scholarship funded by the Allan and Beryl Stephens Grant from the Estate of the Late Beryl Stephens. AT is supported by a National Health and Medical Research Council Fellowship (1037162). RC's employer, the Parker Institute, Bispebjerg and Frederiksberg Hospital, is supported by a core grant (OCAY-13-309) from the Oak Foundation. Phases 1–3 of the OMERACT-Adherence study were funded by a 2018 Arthritis Australia project grant (major funder) and a private research grant provided by Professor Stephen Hall. Funding for phases 4 and 5 has not been confirmed. The datasets generated and analysed during the OMERACT-Adherence study are available from the corresponding author on reasonable request. The link to the full protocol will be available after publication via the publishing journal and COMET website. All authors have made substantial contributions to the conception and design of the protocol. AK, GH, LM, DS, AT, KT and BVB will be involved in the data collection of phases 1–3.
ATP, AT and RC contributed to the method of data analysis and interpretation. All authors of this protocol have made significant contributions to the drafting of the protocol and have revised it critically for important intellectual content. All authors have read and approved the final protocol. Liverpool Hospital, BJC Health (Parramatta and Chatswood), Royal North Shore Hospital (South Western Sydney Local Health District Research and Ethics Office, Locked Bag 7103, Liverpool BC NSW 1871, Australia, approval number HE 16/373 LNR, approved 2 March 2017), Canberra Hospital and Canberra Rheumatology (Australian Capital Territory Health Human Research Ethics Committee, ACT Government Health Directorate Research Office, Building 10 Level 6, Canberra Hospital, Yamba Drive, Garran ACT 2605, Australia, approval number ETHLR.15.137, approved 11 August 2015) have provided an ethical review and approval for the phase 2 and 3 studies. Ethics approval will be sought for the remainder of the OMERACT study. The primary investigator or co-ordinating investigator at each study site will obtain informed consent from all participants in the study (available on request). PT, LM, MDW, DB and JAS are members of the executive of OMERACT, an organisation that develops outcome measures in rheumatology and receives arms-length funding from 36 companies. JAS has received research grants from Takeda and Savient and consultant fees from Savient, Takeda, Regeneron, Merz, Iroko, Bioiberica, Crealta/Horizon and Allergan Pharmaceuticals, WebMD, UBM LLC and the American College of Rheumatology (ACR). JAS serves as the principal investigator for an investigator-initiated study funded by Horizon Pharmaceuticals through a grant to DINORA, Inc, a 501(c)(3) entity. JAS is a member of the ACR's Annual Meeting Planning Committee; chair of the ACR Meet-the-Professor, Workshop and Study Group Subcommittee; and a member of the Veterans Affairs Rheumatology Field Advisory Committee. JAS is the editor and the director of the University of Alabama at Birmingham (UAB) Cochrane Musculoskeletal Group Satellite Center on Network Meta-analysis. RC is a member of the Technical Advisory Group for the OMERACT Domain & Instrument Selection Process, which might be perceived as an intellectual conflict of interest. 
Definitions and examples of intervention functions and sources of behaviour
Intervention functions:
Education – increasing understanding or knowledge (e.g. group patient education meetings).
Persuasion – using communication to stimulate action or induce positive or negative feelings (e.g. using motivational interviewing to encourage medication adherence).
Incentivisation – creating an expectation of reward (e.g. payment to complete a computer-based interactive adherence programme).
Coercion – creating an expectation of punishment or cost (e.g. a punishment system for a child who does not take their medications).
Training – imparting skills (e.g. self-management training).
Restriction – using rules to decrease the opportunity to engage in the target behaviour (e.g. restricting biologic prescriptions to those with adequate adherence).
Environmental restructuring – changing the social or physical context (e.g. on-screen prompts to remind the rheumatologist to address medication adherence with patients).
Modelling – providing an example for people to imitate or aspire to (e.g. peer educators motivating other patients).
Enablement – increasing means or reducing barriers to increase capability or opportunity, excluding education and training or environmental restructuring (e.g. an alarm device to remind patients to take medications; controlled-release medications to reduce the number or frequency of medications).
Sources of behaviour:
Capability – the individual's psychological or physical capacity to engage in the behaviour: psychological capability (e.g. medication knowledge) and physical capability (e.g. medication-taking skill).
Opportunity – factors that lie outside the individual that prompt a behaviour or make it possible: physical opportunity (e.g. cost of medication) and social opportunity (e.g. societal acceptance of medication taking).
Motivation – all the brain processes that energise and direct behaviour: reflective motivation (e.g. analytical decision-making) and automatic motivation (e.g. immediate emotional response to medication taking).
Schedule of study phases
Preliminary interview guide for interviews with patients and caregivers
Welcome and introduction: Aspects of treatment that are important and challenging. Current outcomes: Presentation and discussion of outcome domains currently reported in interventions to support patients with medication adherence (as identified in phase 1). Perspectives on outcomes: Outcome domains the participants believe are important and relevant for trials and the reasons why. Outcome implementation: Ideas on how a core domain set can be disseminated to the broader group of stakeholders.
Preliminary run sheet for focus groups with patients and caregivers
Welcome and introduction (10 min): The facilitator will explain the aims of the study, define what outcome domains are in the context of clinical trials, describe the scope of interventions to support medication adherence (Fig. 1) and ask participants to introduce themselves (including their diagnosis and medications). Focus group discussion (40 min): Participants will be asked to discuss their experiences and motivation of taking medications for their rheumatic condition, including perceived benefits, harms and complications related to their treatment. Nominal group technique (70 min): Each participant will be asked to suggest one to two domains they consider are most important to include in trials of interventions to support medication adherence. The facilitator will ask participants in turn to read out their suggestions, which will be recorded for all participants to see on a whiteboard or flip chart.
Once the group has created their list of outcome domains, the facilitator will add outcome domains identified from the systematic review, patient and caregiver interviews, and previous nominal groups. The list of outcome domains will be discussed to ensure that all members understand the meaning of the outcome domains. Participants will then individually rank all the outcome domains in order of perceived importance, from 1 (most important) to X (least important). If participants have difficulty ranking all the outcome domains, they will be asked to rank their top ten domains. The facilitator will ask participants to read out their top three choices and will note these for all to see and discuss. Similarities and differences in ranking will be discussed among the participating groups. The outcome domains from each group will be reviewed and discussed among the OMERACT-Adherence steering committee.
Additional file 1: SPIRIT 2013 Checklist: Recommended items to address in a clinical trial protocol and related documents. (DOC 121 kb)
Canberra Rheumatology, Level 9, 40 Marcus Clarke St, Canberra City, ACT, 2606, Australia
Department of Rheumatology, Canberra Hospital, Canberra, ACT, Australia
College of Health and Medicine, Australian National University, Canberra, ACT, Australia
Centre for Kidney Research, The Children's Hospital at Westmead, Sydney, NSW, Australia
Sydney School of Public Health, The University of Sydney, Sydney, NSW, Australia
Department of Rheumatology, Royal North Shore Hospital, Sydney, NSW, Australia
Institute of Bone and Joint Research, Kolling Institute of Medical Research, Sydney, NSW, Australia
Northern Clinical School, The University of Sydney, Sydney, NSW, Australia
Faculty of Pharmaceutical Sciences, The University of British Columbia, Vancouver, BC, Canada
Arthritis Research Centre of Canada, Richmond, BC, Canada
Patient Research Partner, Clear Vision Consulting, Canberra, ACT, Australia
Department of Rheumatology, Liverpool Hospital, Sydney, NSW, Australia
Ingham Institute of Applied Medical Research, Sydney, NSW, Australia
Children's Hospital of Eastern Ontario Research Institute, Ottawa, ON, Canada
Department of Pediatrics and School of Rehabilitation Sciences, University of Ottawa, Ottawa, ON, Canada
Department of Pharmacy, Sint Maartenskliniek, Ubbergen, Netherlands
Radboud University Medical Centre, Nijmegen, Netherlands
Department of Rheumatology, Clinical Immunology, Osteology, Physical Therapy and Sports Medicine, Schlosspark Klinik, Charité University Medicine, Berlin, Germany
Department of Medicine, McGill University, Montreal, Canada
Division of Rheumatology, Johns Hopkins School of Medicine, Baltimore, MD, USA
Patient Research Partner, Toronto Western Hospital, Toronto, ON, Canada
Lord Street Specialist Centre, Port Macquarie, NSW, Australia
Mayo Hospital Specialist Centre, Taree, NSW, Australia
Patient Research Partner, Dragon Claw, Sydney, NSW, Australia
Amsterdam Rheumatology and Immunology Centre, Amsterdam, Netherlands
The Ohio State University, Wexner Medical Center, Columbus, OH, USA
Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, ON, Canada
National Data Bank for Rheumatic Diseases, Wichita, KS, USA
Medicine Service, VA Medical Center, Birmingham, AL, USA
Department of Medicine, School of Medicine, University of Alabama, Birmingham, AL, USA
Division of Epidemiology, School of Public Health, University of Alabama, Birmingham, AL, USA
Section of Rheumatology and Clinical Immunology, Department of
General Internal Medicine, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
Department of Rheumatology, Concord Hospital, Sydney, NSW, Australia
Mid-North Coast Arthritis Clinic, Coffs Harbour, NSW, Australia
University of New South Wales Rural Clinical School, Coffs Harbour, NSW, Australia
Musculoskeletal Statistics Unit, The Parker Institute, Bispebjerg and Frederiksberg Hospital, Copenhagen, Denmark
Musculoskeletal Health & Outcomes Research, Li Ka Shing Knowledge Institute, St. Michael's Hospital, Toronto, ON, Canada
Institute for Work & Health, Toronto, ON, Canada
Department of Occupational Science & Occupational Therapy and the Institute of Health Policy Management & Evaluation, University of Toronto, Toronto, ON, Canada
Metamedica, VU Medical Centre, Amsterdam, The Netherlands
Department of Medicine, University of Ottawa, Ottawa, ON, Canada
References
Smith E, Hoy DG, Cross M, Vos T, Naghavi M, Buchbinder R, et al. The global burden of other musculoskeletal disorders: estimates from the Global Burden of Disease 2010 study. Ann Rheum Dis. 2014;73(8):1462–9.
Tak PP, Kalden JR. Advances in rheumatology: new targeted therapeutics. Arthritis Res Ther. 2011;13(1):S5.
Rachner TD, Khosla S, Hofbauer LC. Osteoporosis: now and the future. Lancet. 2011;377(9773):1276–87.
Schumacher H Jr, Becker M, Lloyd E, MacDonald P, Lademacher C. Febuxostat in the treatment of gout: 5-yr findings of the FOCUS efficacy and safety study. Rheumatology. 2009;48(2):188–94.
Center JR, Bliuc D, Nguyen ND, Nguyen TV, Eisman JA. Osteoporosis medication and reduced mortality risk in elderly women and men. J Clin Endocrinol Metab. 2011;96(4):1006–14.
Singh JA, Christensen R, Wells GA, Suarez-Almazor ME, Buchbinder R, Lopez-Olivo MA, et al. Biologics for rheumatoid arthritis: an overview of Cochrane reviews. Sao Paulo Med J. 2010;128(5):309–10.
Jacobsson LT, Turesson C, Nilsson J-Å, Petersson IF, Lindqvist E, Saxne T, et al. Treatment with TNF blockers and mortality risk in patients with rheumatoid arthritis. Ann Rheum Dis. 2007;66(5):670–5.
Vera MA, Marcotte G, Rai S, Galo JS, Bhole V. Medication adherence in gout: a systematic review. Arthritis Care Res. 2014;66(10):1551–9.
Van Den Bemt BJ, Zwikker HE, Van Den Ende CH. Medication adherence in patients with rheumatoid arthritis: a critical appraisal of the existing literature. Expert Rev Clin Immunol. 2012;8(4):337–51.
Kothawala P, Badamgarav E, Ryu S, Miller RM, Halbert R. Systematic review and meta-analysis of real-world adherence to drug therapy for osteoporosis. Mayo Clin Proc. 2007;82(12):1493–501.
Kelly A, Tymms K, Tunnicliffe D, Sumpton D, Perera C, Fallon K, et al. Patients' attitudes and experiences of disease-modifying anti-rheumatic drugs in rheumatoid arthritis and spondyloarthritis: a qualitative synthesis. Arthritis Care Res. 2017. https://doi.org/10.1002/acr.23329.
Singh JA. Challenges faced by patients in gout treatment: a qualitative study. J Clin Rheumatol. 2014;20(3):172.
Sale JE, Gignac MA, Hawker G, Frankel L, Beaton D, Bogoch E, et al. Decision to take osteoporosis medication in patients who have had a fracture and are 'high' risk for future fracture: a qualitative study. BMC Musculoskelet Disord. 2011;12(1):92.
Linn AJ, van Weert JC, Schouten BC, Smit EG, van Bodegraven AA, van Dijk L. Words that make pills easier to swallow: a communication typology to address practical and perceptual barriers to medication intake behavior. Patient Prefer Adher. 2012;6:871–85.
Vrijens B, De Geest S, Hughes DA, Przemyslaw K, Demonceau J, Ruppar T, et al. A new taxonomy for describing and defining adherence to medications. Br J Clin Pharmacol. 2012;73(5):691–705.
Helmy R, Zullig L, Dunbar-Jacob J, Hughes D, Vrijens B, Wilson I, et al. ESPACOMP Medication Adherence Reporting Guidelines (EMERGE): a reactive-Delphi study protocol. BMJ Open. 2017;7(2):e013496.
Sabaté E. Adherence to long-term therapies: evidence for action. Geneva: World Health Organization; 2003.
Dickinson D, Wilkie P, Harris M. Taking medicines: concordance is not compliance. BMJ. 1999;319(7212):787.
Michie S, van Stralen MM, West R. The behaviour change wheel: a new method for characterising and designing behaviour change interventions. Implement Sci. 2011;6(1):42.
Galo JS, Mehat P, Rai SK, Avina-Zubieta A, De Vera MA. What are the effects of medication adherence interventions in rheumatic diseases: a systematic review. Ann Rheum Dis. 2016;75(4):667–73.
Nieuwlaat R, Wilczynski N, Navarro T, Hobson N, Jeffery R, Keepanasseril A, et al. Interventions for enhancing medication adherence. Cochrane Database Syst Rev. 2014;11:CD000011. https://doi.org/10.1002/14651858.CD000011.pub4.
Boers M, Kirwan JR, Tugwell P. The OMERACT handbook. Canada: OMERACT; 2016.
Williamson PR, Altman DG, Bagley H, Barnes KL, Blazeby JM, Brookes ST, et al. The COMET Handbook: version 1.0. Trials. 2017;18(3):280.
OMERACT Initiative. Outcome Measures in Rheumatology. http://www.omeract.org. Accessed 25 Aug 2017.
Kirkham JJ, Clarke M, Williamson PR. A methodological approach for assessing the uptake of core outcome sets using ClinicalTrials.gov: findings from a review of randomised controlled trials of rheumatoid arthritis. BMJ. 2017;357:j2262.
Buchbinder R, Batterham R, Ciciriello S, Newman S, Horgan B, Ueffing E, et al. Health literacy: what is it and why is it important to measure? J Rheumatol. 2011;38(8):1791–7.
Toupin-April K, Barton J, Fraenkel L, Li L, Grandpierre V, Guillemin F, et al. Development of a draft core set of domains for measuring shared decision making in osteoarthritis: an OMERACT working group on shared decision making. J Rheumatol. 2015;42(12):2442–7.
Tang K, Boonen A, Verstappen SM, Escorpizo R, Luime JJ, Lacaille D, et al. Worker productivity outcome measures: OMERACT filter evidence and agenda for future research. J Rheumatol. 2014;41(1):165–76.
Cheung PP, de Wit M, Bingham CO 3rd, Kirwan JR, Leong A, March LM, et al. Recommendations for the involvement of patient research partners (PRP) in OMERACT working groups. A report from the OMERACT 2014 Working Group on PRP. J Rheumatol. 2016;43(1):187–93.
Richards P, De Wit M. The OMERACT glossary for patient research partners. 2004. https://omeract.org/resources. Accessed 12 Jan 2018.
Song Initiative. The SONG Handbook. Version 1.0. June 2017, Sydney, Australia. songinitiative.org/reports-and-publications/. Accessed 12 Jan 2018.
Kelly A, Sumpton D, O'Sullivan C, Meara A, Nieuwlaat R, Tugwell P, et al. Scope and consistency of adherence related outcomes in randomized controlled trials of interventions for improving medication adherence. Arthritis Rheumatol. 2017;69(S10):1-4481.
Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349–57.
Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3(2):77–101.
Jones J, Hunter D. Consensus methods for medical and health services research. BMJ. 1995;311(7001):376.
Urquhart-Secord R, Craig JC, Hemmelgarn B, Tam-Tham H, Manns B, Howell M, et al. Patient and caregiver priorities for outcomes in hemodialysis: an international nominal group technique study. Am J Kidney Dis. 2016;68(3):444–54.
Fried BJ, Boers M, Baker PR. A method for achieving consensus on rheumatoid arthritis outcome measures: the OMERACT conference process. J Rheumatol. 1993;20(3):548–51.
Orbai AM, de Wit M, Mease P, Shea JA, Gossec L, Leung YY, et al. International patient and physician consensus on a psoriatic arthritis core outcome set for clinical trials. Ann Rheum Dis. 2017;76(4):673–80.
Linstone HA, Turoff M. The Delphi method: techniques and applications. Reading, MA: Addison-Wesley; 1975.
Bartlett SJ, Hewlett S, Bingham CO, Woodworth TG, Alten R, Pohl C, et al. Identifying core domains to assess flare in rheumatoid arthritis: an OMERACT international patient and provider combined Delphi consensus. Ann Rheum Dis. 2012;71(11):1855–60.
Taylor WJ, Schumacher HR Jr, Baraf HS, Chapman P, Stamp L, Doherty M, et al. A modified Delphi exercise to determine the extent of consensus with OMERACT outcome domains for studies of acute and chronic gout. Ann Rheum Dis. 2008;67(6):888–91.
Gazi H, Pope JE, Clements P, Medsger TA, Martin RW, Merkel PA, et al. Outcome measurements in scleroderma: results from a Delphi exercise. J Rheumatol. 2007;34(3):501–9.
Toupin-April K, Barton J, Fraenkel L, Li LC, Brooks P, De Wit M, et al. Toward the development of a core set of outcome domains to assess shared decision-making interventions in rheumatology: results from an OMERACT Delphi survey and consensus meeting. J Rheumatol. 2017;44(10):1544–50.
Akins RB, Tolson H, Cole BR. Stability of response characteristics of a Delphi panel: application of bootstrap data expansion. BMC Med Res Methodol. 2005;5(1):37.
Sinha IP, Smyth RL, Williamson PR. Using the Delphi technique to determine which outcomes to measure in clinical trials: recommendations for the future based on a systematic review of existing studies. PLoS Med. 2011;8(1):e1000393.
Flynn TN, Louviere JJ, Peters TJ, Coast J. Best–worst scaling: what it can do for health care research and how to do it. J Health Econ. 2007;26(1):171–89.
Boers M, Kirwan JR, Wells G, Beaton D, Gossec L, d'Agostino MA, et al. Developing core outcome measurement sets for clinical trials: OMERACT filter 2.0. J Clin Epidemiol. 2014;67(7):745–53.
Mokkink LB, Prinsen CA, Bouter LM, de Vet HC, Terwee CB. The Consensus-based Standards for the Selection of Health Measurement Instruments (COSMIN) and how to select an outcome measurement instrument. Braz J Phys Ther. 2016;20(2):105–13.
Weyl tensor
In Riemannian geometry one has a manifold $M$ of dimension $n$ which admits a metric tensor $g$ whose signature is arbitrary. Let $\Gamma$ be the unique Levi-Civita connection on $M$ arising from $g$ and let $\mathcal{R}$ be the associated curvature tensor with components $R^a{}_{bcd}$. Of importance in Riemannian geometry is the idea of a conformal change of metric, that is, the replacement of the metric $g$ by the metric $\phi g$ where $\phi$ is a nowhere-zero real-valued function on $M$. The metrics $g$ and $g ^ { \prime }$ are then said to be conformally related (or $g ^ { \prime }$ is said to be "conformal" to $g$). One now asks for the existence of a tensor on $M$ which is constructed from the original metric on $M$ and which would be unchanged if it were to be replaced with another metric conformally related to it. (It is noted here that the curvature tensor would only be unaffected by such a change, in general, if the function $\phi$ were constant.) The answer was provided mainly by H. Weyl [a1], but with important contributions from J.A. Schouten [a2] (see also [a3]). For $n > 3$, Weyl constructed the tensor $C$ (now called the Weyl tensor) with components given by
$$C^a{}_{bcd} = R^a{}_{bcd} - \frac{2}{n-2}\left(\delta^a{}_{[c}R_{d]b} - g_{b[c}R_{d]}{}^{a}\right) + \frac{2R}{(n-1)(n-2)}\,\delta^a{}_{[c}g_{d]b} \tag{a1}$$
where $R _ { ab } \equiv R ^ { c } { } _ { a c b }$ are the Ricci tensor components, $R \equiv g^{ab}R_{ab}$ is the Ricci scalar and square brackets denote the usual skew-symmetrization of indices. If this tensor is written out in terms of the metric $g$ and its first- and second-order derivatives, it can then be shown to be unchanged if $g$ is replaced by the metric $g ^ { \prime }$. (It should be noted that this is not true of the tensor with components $C_{abcd}$, which would be scaled by a factor $\phi$ on exchanging $g$ for $g ^ { \prime }$.) If $g$ is a flat metric (so that $\mathcal{R} = 0$), then the Weyl tensor constructed from $g$ (and from $g ^ { \prime }$) is zero on $M$. Conversely, if $g$ gives rise, from (a1), to a zero Weyl tensor on $M$, then for each $p$ in $M$ there are a neighbourhood $U$ of $p$ in $M$, a real-valued function $\psi$ on $U$ and a flat metric $h$ on $U$ such that $g = \psi h$ on $U$ (i.e. $g$ is locally conformal to a flat metric on $M$). When $C = 0$ on $M$, the latter is called conformally flat. If $n = 3$, it can be shown from (a1) that $C \equiv 0$ on $M$. Since not every metric on such a manifold is locally conformally related to a flat metric, the tensor $C$ is no longer appropriate. The situation was resolved by Schouten [a2] when he found that the tensor given in components by
$$C_{abc} = R_{ab;c} - R_{ac;b} + \frac{1}{4}\left(g_{ac}R_{;b} - g_{ab}R_{;c}\right) \tag{a2}$$
(using a semi-colon to denote a covariant derivative with respect to the Levi-Civita connection arising from the metric) played exactly the same role in dimension $3$ as did $C$ for $n > 3$. If $n = 2$, every metric on $M$ is locally conformally related to a flat metric [a3]. The tensor $C$ has all the usual algebraic symmetries of the curvature tensor, together with the extra (trace-free) relation $C^a{}_{bad} = 0$. If the Ricci tensor is zero on $M$, the Weyl tensor and the curvature tensor are equal on $M$. The tensor introduced in (a2) by Schouten possesses the algebraic identities
$$C_{abc} = -C_{acb}, \qquad C_{abc} + C_{bca} + C_{cab} = 0, \qquad g^{ab}C_{abc} = 0.$$
It is interesting to ask if two metrics on $M$ ($n > 3$) having the same Weyl tensor as in (a1) are necessarily (locally) conformally related. The answer is clearly no if $C = 0$. If $C$ is not zero on $M$, the answer is still no, a counter-example (at least) being available for a space-time manifold (i.e. a $4$-dimensional manifold admitting a metric with Lorentz signature $( + + + - )$).
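For orientation, it may help to write out the fully covariant tensor $C_{abcd} = g_{ae}C^e{}_{bcd}$ in the space-time case $n = 4$; the display below is a sketch under the sign conventions assumed in (a1) above, and other sources differ in index placement and overall factors:
$$C_{abcd} = R_{abcd} - \left(g_{a[c}R_{d]b} - g_{b[c}R_{d]a}\right) + \tfrac{1}{3}\,R\,g_{a[c}g_{d]b},$$
since for $n = 4$ the coefficients $2/(n-2)$ and $2/((n-1)(n-2))$ reduce to $1$ and $1/3$ respectively. In this form every trace vanishes (e.g. $g^{ac}C_{abcd} = 0$) and $C_{abcd}$ inherits the curvature symmetries $C_{abcd} = -C_{bacd} = C_{cdab}$.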
The Weyl tensor finds many uses in differential geometry and also in Einstein's general relativity theory. In the latter it has important physical interpretations and its algebraic classification is the famous Petrov classification of gravitational fields [a4].
References
[a1] H. Weyl, "Reine Infinitesimalgeometrie" Math. Z., 2 (1918) pp. 384–411
[a2] J.A. Schouten, "Ueber die konforme Abbildung $n$-dimensionaler Mannigfaltigkeiten mit quadratischer Massbestimmung auf eine Mannigfaltigkeit mit Euklidischer Massbestimmung" Math. Z., 11 (1921) pp. 58–88
[a3] L.P. Eisenhart, "Riemannian geometry", Princeton Univ. Press (1966)
[a4] A.Z. Petrov, "Einstein spaces", Pergamon (1969)
This article was adapted from an original article by G.S. Hall (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
An adaptive prediction method for mechanical properties deterioration of sandstone under freeze–thaw cycles: a case study of Yungang Grottoes
Chenchen Liu (ORCID: orcid.org/0000-0002-5274-2378), Yibiao Liu, Weizhong Ren, Wenhui Xu, Simin Cai & Junxia Wang
Heritage Science volume 9, Article number: 154 (2021)
Due to the location of the Yungang Grottoes, freeze–thaw cycles contribute significantly to the degradation of the mechanical properties of the sandstone. The factors influencing the freeze–thaw cycle are classified into two categories: external environmental conditions and the inherent properties of the rock itself. Since rock property parameters are inherent to each specimen, the effect of rock properties on freeze–thaw degradation cannot be investigated by the controlled-variable method. An adaptive multi-output gradient boosting decision trees (AMGBDT) algorithm is proposed to fit the nonlinear relationships between mechanical properties and physical factors. The hyperparameters in the GBDT algorithm are set as variables, and the sequential quadratic programming (SQP) algorithm is applied to solve the hyperparameter optimization, which means finding the maximum Score. The case study illustrates that the AMGBDT algorithm can precisely determine the effect of each independent factor on the output. The patterns of mechanical properties are similar when the number of freeze–thaw cycles and porosity are used as variables separately and when both are used simultaneously. The uniaxial compressive strength decay rate is positively correlated with the number of freeze–thaw cycles and porosity. The modulus of elasticity is negatively correlated with the number of freeze–thaw cycles and porosity. The results show that the number of freeze–thaw cycles is the main factor influencing the freeze–thaw cycling action, and porosity is a minor one. In addition, the fitting accuracy of the AMGBDT algorithm is generally higher than that of neural networks (NN) and random forests (RF). Studying the influence of porosity and other rock properties on the freeze–thaw cycle will help to understand the failure mechanism of rock freeze–thaw cycles.
The Yungang Grottoes, one of the three major grotto groups in China and a world-famous stone sculpture art site, was inscribed on the UNESCO World Heritage List in 2001. The Yungang Grottoes therefore has significant artistic and cultural value. The Yungang Grottoes (113°20′ E, 40°04′ N) lies in a semi-arid continental monsoon climate. The annual average temperature in the study area is 7–10 °C, the monthly average temperature in January is − 11.4 °C, and the monthly average temperature in July is 23.1 °C. The freezing period lasts up to six months, with a standard frozen depth of 1.5 m. The main strata of the Yungang Grottoes are Mesozoic Jurassic, composed mainly of fluvial and lacustrine deposits [1]. The Yungang Grottoes are carved into a vast lenticular body of heterogeneous sandstone. The lithology of the sandstone is mainly calcareous cemented sandstone, and gravel is often visible at the bottom of the sandstone's lenticular body. The main components of the sand are quartz, feldspar, and rock debris. The main components of the conglomerate are quartz breccia, rock clasts, and a small portion of mudstone clumps. Due to the geographical location and environment of the Yungang Grottoes, the effects of weathering caused by freeze–thaw cycles cannot be ignored.
A number of experiments have investigated the effects of freeze–thaw cycling on the mechanical properties of sandstone [2, 3], considering factors such as porosity [4] and the number of freeze–thaw cycles [5,6,7]. The water content of the rock and the number of freeze–thaw cycles affect the degree of deterioration caused by freeze–thaw cycles. The main reasons are as follows. (1) The main cause of freeze–thaw damage in rocks is the repeated generation and dissipation [8] of frost heaving forces. The number of freeze–thaw cycles determines how many times the frost heaving forces are repeatedly generated and dissipated: the more cycles, the more serious the freeze–thaw damage. (2) Pore water influences rock damage through three mechanisms: phase swelling, hydrostatic pressure, and the formation of ice lenses or wedges [9]. These mechanisms indicate a positive correlation between the pore water content of rocks and the damage to saturated rocks. The larger the initial porosity of the rock, the greater the water content at saturation, resulting in greater tensile stresses in the pore walls caused by volume expansion due to the water–ice phase change. The initial porosity therefore partly determines the freeze–thaw deterioration of rocks. However, most experiments [10, 11] have focused on the effect of external environmental conditions on the freeze–thaw deterioration of sandstone. The mechanism of freeze–thaw damage in rocks shows that the initial porosity of rocks also affects freeze–thaw damage. It is impossible to experimentally control variables such as intrinsic rock parameters, which means that it is hard to explore the degree of influence of each variable when multiple variables act together. In this paper, with the help of the developed AMGBDT algorithm, we investigate the changes in the mechanical properties of sandstone when initial porosity and the number of freeze–thaw cycles are adopted as variables. The selection of hyperparameters has a significant impact on the fitting performance of GBDT. In this paper, the AMGBDT algorithm is proposed to complete the GBDT hyperparameter tuning task, and a prediction model is given based on this algorithm. The prediction model is based on a series of freeze–thaw cycling experimental data and quantifies the extent of porosity effects on rock deterioration under freeze–thaw cycles. The prominent advantage of this method is that the degree of contribution of the rock's inherent parameters to the deterioration can be calculated analytically. This paper provides a considerably more efficient and cost-effective means of studying the failure mechanism of freeze–thaw cycles. The remainder of this paper is organized as follows. The methodology section introduces the principles of the GBDT algorithm and the SQP algorithm and the construction framework of the AMGBDT-SQP algorithm. The case study compares the experimental data with the curves fitted by the AMGBDT algorithm when the number of freeze–thaw cycles and the porosity are applied as independent variables separately. The discussion section discusses three methods to demonstrate the GBDT algorithm's prediction accuracy via the algorithm's Score evaluation index. The fifth section outlines the conclusions.
High-accuracy predictions can be obtained by non-parametric machine learning (ML) methods, including classification and regression trees (CART) [12], support vector machines (SVM) [13,14,15], and the k-nearest neighbor algorithm (KNN) [16, 17]. These are supervised learning methods, so target variables need to be prepared for the dataset [18]. The potential relationships in the experimental data set can be captured effectively by these ML methods, which have good predictive performance but are hard to interpret [19]. Ensemble learning has been a popular machine learning approach in recent years. It refers to a system that uses the idea of bagging [20], stacking [21], or boosting [22] to build and combine multiple weak base learners to complete the learning task, and it can achieve better generalization performance than a single learner [23]. Compared with other ML methods, GBDT can interpret the interactions between input variables and predictive models and can also identify the relative importance of key factors through ensemble learning [24]. Algorithms that combine weak learners based on decision trees come in two primary forms: RF and GBDT. The GBDT algorithm differs significantly from the traditional boosting algorithm in that each iteration is designed to reduce the residuals of the previous one: the model is built in the direction of the decreasing gradient of the loss so as to eliminate the residuals. The GBDT algorithm is a non-parametric model; the number of parameters is not fixed before training, so a fully grown decision tree has many degrees of freedom and can fit the training data to an arbitrary extent. Gradient boosting over multiple decision trees mitigates this tendency to overfit. Tree-based ensemble methods have been widely used in the field of geotechnical engineering in recent years. Given the successful applications of the GBDT algorithm in many research fields [25,26,27], we argue that the GBDT algorithm will be reasonably competent for the task of discovering the relationship between the variables and the responses. To construct a hybrid model that accurately predicts the mechanical property deterioration of sandstone after freeze–thaw cycles, the AMGBDT method is proposed by combining the GBDT algorithm and the SQP algorithm into one framework. The relationship between the independent variables and the response is learned in the proposed model by applying the GBDT algorithm as a regressor. The wide range over which the GBDT hyperparameters can vary makes manual tuning difficult. Therefore, the SQP algorithm is applied as an optimizer to find the optimal parameters of the GBDT algorithm, increasing the model's adaptability. GBDT algorithm The GBDT algorithm classifies or regresses data by using an additive model (i.e., a linear combination of basis functions) and continuously reducing the residuals generated during training. The bottom layer of the algorithm is mainly based on regression trees and gradient descent in function space. The algorithm not only possesses the advantages of the tree model: solid interpretability, effective handling of mixed feature types, scale-invariance (no need to normalize the data), and robustness to missing values; it also has reliable predictive power and good stability.
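As a rough illustration of this additive, residual-fitting scheme (which is made formal in the next paragraphs), here is a minimal from-scratch sketch in Python. The toy data and all names are invented for illustration; for squared loss the negative gradient is simply the residual, which is what each new tree is fitted to.

```python
# A minimal sketch of gradient boosting with squared loss; toy 1-D data.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(10, 35, (200, 1))              # e.g. freeze-thaw cycle counts
y = 0.02 * X[:, 0] + rng.normal(0, 0.1, 200)   # noisy target

T, lr = 100, 0.1
F = np.full_like(y, y.mean())                  # F_0: mean of the labels
trees = []
for _ in range(T):
    residual = y - F                           # negative gradient of (1/2)(y-F)^2
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residual)
    F += lr * tree.predict(X)                  # F_t = F_{t-1} + lr * new tree
    trees.append(tree)

def predict(Xq):
    """Sum the initial constant and the weighted contributions of all trees."""
    return y.mean() + lr * sum(t.predict(Xq) for t in trees)
```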
Compared to its successors, the XGB (extreme gradient boosting) and LGB (light gradient boosting machine) algorithms, the GBDT algorithm only requires the loss function to be first-order differentiable, and both convex and non-convex functions are applicable. The XGB/LGB algorithms, however, require a stricter loss function: one that is first- and second-order differentiable and strictly convex. Therefore, the GBDT algorithm was adopted to explore the effects of porosity and the number of freeze–thaw cycles on the deterioration of rock mechanical properties. Data description. Before building the GBDT model, the freeze–thaw cycle experimental data are given as follows: $$D=\{\left({X}_{1},{Y}_{1}\right),\left({X}_{2},{Y}_{2}\right),\dots ,\left({X}_{N},{Y}_{N}\right)\}$$ $${X}_{i}=\left\{{ftt}_{i}, {p}_{i}, {v}_{i}\right\}; \quad {Y}_{i}=\left\{{sl}_{i}, {e}_{i}, {ml}_{i}\right\}; \quad {X}_{i} \in {\mathbb{R}}^{3}; \quad {Y}_{i} \in {\mathbb{R}}^{3}; \quad i\in \left[1,N\right]$$ where \({ftt}_{i}\) refers to the number of freeze–thaw cycles of the \(i\)th sample; \({p}_{i}\) refers to the initial porosity of the \(i\)th sample; \({v}_{i}\) represents the longitudinal wave velocity of the \(i\)th sample; \({sl}_{i}\) is the loss rate of uniaxial compressive strength; \({e}_{i}\) refers to the elastic modulus after the deterioration of the \(i\)th sample; \({ml}_{i}\) refers to the loss rate of mass; and \(N\) is the number of samples. After inputting the training dataset, the GBDT algorithm first constructs an initial regressor, which can be defined as follows: $${F}_{0}\left(X\right)= \mathrm{arg}\underset{c}{\mathrm{min}} \sum_{i=1}^{N}\mathcal{L}\left({Y}_{i},c\right)$$ $$c = \frac{1}{N}\sum_{i=1}^{N}{Y}_{i}$$ When initializing the base regressor, \(c\) takes the value of the mean of all training sample labels. Following the construction of the initial decision tree, the GBDT algorithm performs multiple iterations, where each iteration produces a new base regressor. Each base regressor is fitted to the previous regressor's residuals to minimize the current round's residuals. The final total regressor is obtained by weighting and summing the base regressors obtained in each round of training. The framework of the AMGBDT-SQP algorithm is shown in Fig. 1 (flowchart of the proposed work). Addressing the problem of how to fit the loss, Friedman [28] proposed to fit an approximation of the loss of the current round with the negative gradient of the loss function, which in turn fits a CART regression tree. The specific implementation flow of the algorithm is as follows. For the number of iterations \(t\) = 1, 2, …, \(T\): for \(i\) = 1, 2, …, \(N\), calculate the negative gradient of the loss function in this iteration, $${b}_{ti}=-{\left[\frac{\partial \mathcal{L}\left({Y}_{i},F\left({X}_{i}\right)\right)}{\partial F\left({X}_{i}\right)}\right]}_{F\left({X}_{i}\right)={F}_{t-1}\left({X}_{i}\right)}$$ where \(\mathcal{L}\left(\bullet \right)\) is the loss function, constructed by the least-squares method in this paper: $$\mathcal{L}\left({Y}_{i},F\left({X}_{i}\right)\right)=\frac{1}{2}{\left({Y}_{i}-F\left({X}_{i}\right)\right)}^{2}$$ The \(t\)th regression tree can be obtained by fitting a CART regression tree to the residuals generated at the (\(t-1\))th iteration. The corresponding leaf node regions of the regression tree are \({B}_{tj}\), \(j\) = 1, 2, …, \(J\).
where \(J\) is the number of leaf nodes of the \(t\)th regression tree. For the leaf node regions \(j\) = 1, 2, …, \(J\), calculate the best-fit value for each region: $${c}_{tj}=\mathrm{arg}\underset{c}{\mathrm{min}}\sum_{{X}_{i}\in {B}_{tj}}\mathcal{L}\left({Y}_{i},{F}_{t-1}\left({X}_{i}\right)+c\right)$$ Combination of base regressors: $${F}_{t}\left({X}_{i}\right)={F}_{t-1}\left({X}_{i}\right)+\sum_{j=1}^{J}{c}_{tj}\,I\left({X}_{i}\in {B}_{tj}\right)$$ Finally, a strong regressor is obtained: $${F}_{T}\left(X\right)={F}_{0}\left(X\right)+\sum_{t=1}^{T}\sum_{j=1}^{J}{c}_{tj}\,I\left(X\in {B}_{tj}\right)$$ SQP algorithm An adaptive algorithm allows the model to automatically select the optimal parameters for any reasonable data set, yielding the best prediction and thus making the model more adaptive. When the hyperparameters of the model are the independent variables, the objective functions are nonlinear functions of those variables and need to be solved using nonlinear programming algorithms. The SQP algorithm is one of the most effective current methods for solving small- and medium-scale nonlinear optimization problems. Its main idea is to use a sequence of quadratic programs to successively approximate the original nonlinear programming problem [29, 30]. The number of trees (M) and the learning rate of the model (µ) are the hyperparameters that most significantly impact the performance of the GBDT model. Therefore, M and µ are taken as the independent variables of the SQP algorithm. The number of trees M refers to the number of iterations, or base models, in the additive model. Underfitting or overfitting tends to occur when the value of M is too small or too large, respectively. The learning rate µ refers to the weight-reduction coefficient of each CART regression tree; its value is greater than 0 and less than 1. Generally speaking, the smaller µ is, the more trees are needed to fit a model: M is inversely related to µ. Therefore, M and µ should be adjusted simultaneously to optimize the model prediction. Because the M and µ parameters are interconnected, the tuning task is a joint, constrained optimization problem. The mathematical model for parameter optimization is defined as follows: $$\mathrm{min}-f\left({\varvec{Z}}\right)$$ $$subject\, to: 0<\mu <1;$$ $$100<M<1000$$ where \(f\left({\varvec{Z}}\right)\) is the coefficient of determination \(Score\): $$f\left({\varvec{Z}}\right)=Score=1-\frac{\sum_{i=1}^{N}{({Y}_{i}-{F}_{t}({X}_{i}))}^{2}}{\sum_{i=1}^{N}{({Y}_{i}-c)}^{2}}$$ The objective function of the nonlinearly constrained problem at the current iteration point \({{\varvec{Z}}}^{k}\) is reduced to a quadratic function of the step variable \(S\) via Taylor expansion: $$\mathrm{min}-f\left(\mathbf{Z}\right)\approx -\frac{1}{2}{S}^{T}{\nabla }^{2}f\left({{\varvec{Z}}}^{k}\right)S-\nabla f{\left({{\varvec{Z}}}^{k}\right)}^{T}S$$ $$S={\varvec{Z}}-{{\varvec{Z}}}^{k}$$ Adopt the optimum solution \({S}^{*}\) of this subproblem as the next search direction \({S}^{k}\) of the current problem, and use a one-dimensional search along this direction to obtain \({{\varvec{Z}}}^{k+1}\), an approximate solution to the original problem. The SQP algorithm has excellent performance in convergence, computational efficiency, and boundary search; moreover, it rests on sound mathematical theory. However, for model parameter optimization problems, the relationship between the objective function and the independent variables is relatively complex.
The objective function may have more than one extremum, so this algorithm can easily fall into a local optimum. A large number of initial points are therefore generated, a local optimum is computed from each, and the one with the highest Score is kept. With a sufficiently large number of initial points, the best local optimum can be taken as an approximation of the global optimum. The data used in this paper come from the existing test data recorded in subproject III of the Yungang Grottoes conservation project [1]. The sampling locations for the freeze–thaw cycle experiments and a photograph of part of the specimens are shown in Fig. 2 (location map of the sampling regions and part of the sandstone specimens). This section investigates the effect of freeze–thaw cycles and initial porosity on mechanical property deterioration. The statistical properties of the experimental data are listed in Table 1 (statistical properties of experimental data). The standard deviation is adopted to quantify the extent of the variation of the data. The formula for the standard deviation is as follows: $$\sigma =\sqrt{\frac{1}{N}\sum_{i=1}^{N}{\left({X}_{ik}-\overline{{X }_{k}}\right)}^{2}}$$ where σ denotes the standard deviation and \(\overline{{X }_{k}}\) the mean value. Preprocessing of the experiment data The number of freeze–thaw cycles and the initial porosity differ in value ranges and magnitudes. The number of freeze–thaw cycles varies from 10 to 35, while the initial porosity ranges from 3.32% to 8.2%. The range of values for the number of freeze–thaw cycles is more extensive than that of the initial porosity, making the contours of the loss function steep. When seeking the optimal solution using gradient descent, it is quite possible to take a zig–zag route (perpendicular to the contour lines), as Fig. 3a shows, requiring many iterations to converge. Figure 3b shows that after normalizing the data, the search becomes significantly smoother and converges to the optimal solution more directly. (Fig. 3: schematic diagrams of gradient descent before (a) and after (b) normalization.) Therefore, to eliminate the differences in magnitude and range of values between different features, the max–min normalization method is used in this paper. The specific formula for the max–min normalization process is defined as $${{X}_{ik}}^{^{\prime}}= \frac{{X}_{ik}-\mathrm{min}({X}_{ik})}{\mathrm{max}\left({X}_{ik}\right)-\mathrm{min}\left({X}_{ik}\right)}\quad (k=1,2,3)$$ where \({X}_{i1}=ft{t}_{i}\), \({X}_{i2}= {p}_{i}\), \({X}_{i3}={v}_{i}\). The normalization limits the preprocessed data to [0,1], eliminating the undesirable effects caused by differences in magnitude and range and speeding up training. Prediction results of the AMGBDT algorithm The normalized training set data are input into the GBDT model, and the SQP method is applied to optimize the GBDT parameters. The optimal parameters are obtained as \(M=353.60919; \mu =0.87306678\). The optimal parameters are input into the GBDT model to obtain the prediction model of mechanical property deterioration of sandstone.
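The paper's data and code are not published, so the following is only a minimal sketch, under stated assumptions, of how the pipeline just described (max–min normalization, a multi-output GBDT, and SQP over $(M,\mu)$ from many initial points) could be assembled with standard libraries. The synthetic data and names (`ftt`, `p`, `v`) are invented. Note that the reported non-integer optimum $M=353.609\ldots$ suggests $M$ was treated as a continuous variable during optimization and rounded when the model is fit, which the sketch also does.

```python
# A hedged sketch of the AMGBDT-SQP pipeline; synthetic data, invented names.
import numpy as np
from scipy.optimize import minimize
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputRegressor
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)

# Stand-in for D = {(X_i, Y_i)}: X = (ftt, p, v), Y = (sl, e, ml).
n = 200
ftt = rng.uniform(10, 35, n)               # freeze-thaw cycle counts
p = rng.uniform(3.32, 8.2, n)              # initial porosity (%)
v = rng.uniform(1.5, 3.5, n)               # longitudinal wave velocity
X = np.column_stack([ftt, p, v])
Y = np.column_stack([
    0.02 * ftt + 0.05 * p + rng.normal(0, 0.05, n),       # strength loss rate
    20.0 - 0.15 * ftt - 0.8 * p + rng.normal(0, 0.5, n),  # elastic modulus
    0.001 * ftt + 0.002 * p + rng.normal(0, 0.005, n),    # mass loss rate
])

X = MinMaxScaler().fit_transform(X)        # max-min normalization of inputs
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

def neg_score(z):
    """Negative Score (R^2) of a GBDT with M = z[0] trees and mu = z[1]."""
    M, mu = int(round(z[0])), float(z[1])  # M treated as continuous, rounded
    model = MultiOutputRegressor(GradientBoostingRegressor(
        n_estimators=M, learning_rate=mu, random_state=0))
    return -model.fit(X_tr, Y_tr).score(X_te, Y_te)

# SQP (SLSQP) from several random initial points; the best local optimum
# is taken as an approximation of the global one, as argued in the text.
best = None
for _ in range(5):
    z0 = np.array([rng.uniform(100, 1000), rng.uniform(0.05, 0.95)])
    res = minimize(neg_score, z0, method="SLSQP",
                   bounds=[(100, 1000), (1e-3, 1 - 1e-3)])
    if best is None or res.fun < best.fun:
        best = res

print("optimal (M, mu):", best.x, "Score:", -best.fun)
```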
The influence of the number of freeze–thaw cycles In this experiment, the results of the uniaxial compressive strength tests of rock samples after freeze–thaw cycles were analyzed to determine the correlation between the number of freeze–thaw cycles and the decay of the mechanical indexes of the rock masses. The experimental data of each group of samples were averaged to reduce the error caused by the dispersion of the experimental data. The experimental data were then fitted to obtain fitted curves relating the number of freeze–thaw cycles to the mechanical indexes. The decay rate of uniaxial compressive strength. It can be seen from Fig. 4a that the predicted curves of the model generally match the pattern of the fitted curve of the test. The change in the strength decay rate is large at the beginning and in the middle of the freeze–thaw cycling. Toward the end (30–35 cycles), the change in strength decay slows down and enters a plateau. However, the range of the uniaxial compressive strength in the prediction curve of the GBDT model is slightly smaller than that of the test curve. It can be seen that, among the independent variables, the number of freeze–thaw cycles significantly influences the uniaxial compressive strength. (Fig. 4: fitted curves of the experimental data and the AMGBDT model when the number of freeze–thaw cycles is adopted as a variable: a relationship between the strength loss rate and the number of freeze–thaw cycles; b relationship between the elastic modulus and the number of freeze–thaw cycles.) The elastic modulus. Similarly, the two predicted curves have the same pattern of variation, with the elastic modulus showing a decreasing trend as the number of freeze–thaw cycles increases. However, the magnitude of the change in the curve fitted by the AMGBDT model is slightly smaller than that of the experimental one. In addition, the effectiveness of the experimental curve fit is reduced by the discrete nature of the experimental data points; the robustness of the AMGBDT model to missing values means that the dispersion of the data points has less impact on its fit. The variation law of the fitted curve of the GBDT algorithm is basically the same as that of the fitted curve of the test. However, since the AMGBDT model can isolate the number of freeze–thaw cycles as the sole independent variable, the relationship between the number of freeze–thaw cycles and the decay of the mechanical indexes obtained by the AMGBDT model is more accurate. The influence of the initial porosity The initial porosity is an inherent parameter of the rock, whose extent of influence on freeze–thaw cycles cannot be determined by experimental means. In other words, it is hard to obtain an experimental fitted curve of initial porosity against the deterioration of mechanical properties. In the AMGBDT model, with the number of freeze–thaw cycles held at a fixed value, the strength decay rate is positively correlated with the initial porosity, and the modulus of elasticity is negatively correlated with it. Judging by the magnitude of the curves, the deterioration caused by initial porosity as a variable is less than that caused by the number of freeze–thaw cycles. This demonstrates that the number of freeze–thaw cycles has a more significant effect on the uniaxial compressive strength decay rate and the elastic modulus than the initial porosity.
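This one-factor-at-a-time analysis is exactly the kind of query that a fitted surrogate model makes cheap. Continuing the hedged sketch given earlier (it reuses `X_tr`, `Y_tr`, and the scikit-learn imports from there; the rounded hyperparameters are the paper's reported optimum), predicted deterioration curves can be traced by sweeping one normalized input while holding the others at their means:

```python
# Continuation of the earlier sketch; assumes X_tr, Y_tr and the sklearn
# imports are already in scope.
model = MultiOutputRegressor(GradientBoostingRegressor(
    n_estimators=354, learning_rate=0.873, random_state=0)).fit(X_tr, Y_tr)

grid = np.linspace(0.0, 1.0, 50)                       # normalized cycle count
Xq = np.column_stack([grid,
                      np.full(50, X_tr[:, 1].mean()),  # porosity held fixed
                      np.full(50, X_tr[:, 2].mean())]) # wave velocity fixed
sl_curve, e_curve, ml_curve = model.predict(Xq).T
# On real data, sl_curve should rise and e_curve fall with the cycle count,
# mirroring Fig. 4; swapping the roles of the columns gives Fig. 5's curves.
```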
Figure 5 provides the predicted changes in the elastic modulus and the uniaxial compressive strength of the sandstone after freeze–thaw cycles when the initial porosity is used as the independent variable. (Fig. 5: fitted curves of the experimental data and the AMGBDT model when the initial porosity is adopted as a variable: a relationship between the strength loss rate and the initial porosity; b relationship between the elastic modulus and the initial porosity.) Different modeling methods can differ in how well they mine the nonlinear mapping between the independent vector X and the dependent vector Y. There are many studies on the application of GBDT in the geological disciplines. In addition, RF and NN perform well in fitting highly nonlinear relationships. Therefore, NN, RF, and the AMGBDT model are selected for fitting and prediction. The mutual constraints and interactions between neurons in a neural network enable the entire network to exhibit a nonlinear mapping from the input space to the output space. The NN algorithm is therefore suitable for problems with complex environmental information and unclear inference rules. The structure of the deep neural network used is shown in Fig. 6 (structure of the NN). The input and output layers each contain 3 units, and there are 3 hidden layers, fully connected to one another, with 128, 64, and 32 neurons, respectively. Batch normalization is used to address the vanishing-gradient problem in deep neural networks and to increase the training speed of the network. A random forest is composed of many decision trees, each built on a randomly selected subset of the training data [31]; at decision time, the trees in the forest vote and decide jointly [32]. It performs well on a number of datasets and can handle high-dimensional data [33]. In addition, RF requires no explicit feature selection and can be computed in parallel to speed up training. RF exhibits some fault tolerance with respect to the training data and is an effective method for handling missing data. Moreover, RF suits multi-class problems and can detect interactions between features and their degree of importance. The number of trees in the random forest is 100. Furthermore, the Score of each model, calculated by the formula given in the methodology section, is chosen as the evaluation index. The Scores of the three models are shown in Fig. 7 (Score of the three methods); a higher Score indicates a better prediction. The comparison in Fig. 7 illustrates that, with the number of freeze–thaw cycles and the initial porosity as independent variables, the Score of the AMGBDT model is closer to 1 than those of the remaining two algorithms, which means that the AMGBDT model fits best. The multi-output AMGBDT algorithm is therefore the most accurate of the three. To study the effects of freeze–thaw cycles and changes in initial porosity on the deterioration of the mechanical properties of sandstone under freeze–thaw cycles, an AMGBDT model was developed in this paper. Some limitations should be noted. The initial porosity is adopted as the independent variable when we explore the extent of influence of the pore space inside the rock on freeze–thaw damage. Because connected porosity is cheap and easy to measure and contributes considerably to freeze–thaw deterioration, the initial porosity in this paper refers to the connected porosity.
The frost damage mechanism of sandstone [34] demonstrates that both closed pores and connected pores contribute to rock freeze–thaw damage. The role of connected pores during freeze–thaw cycles may therefore be amplified to some extent in our results, because no data on closed pores were measured in our study. The conclusions are as follows. Based on the theory of SQP and GBDT, the AMGBDT algorithm is proposed, enabling its application to explore the effects of freeze–thaw cycling on the macroscopic deterioration patterns of sandstones in various locations. Comparing the prediction curves of the AMGBDT model with the curves fitted to the experimental data yields the following conclusions. When the initial porosity is fixed and the number of freeze–thaw cycles is the variable, the predicted curve of the AMGBDT model follows the same trend as the fitted curve of the test; however, the predicted range of change in the strength decay rate and the elastic modulus is smaller than the range corresponding to the experimental curve. When the number of freeze–thaw cycles is fixed and the initial porosity is the variable, the changes in the strength decay rate and the modulus of elasticity are small; the strength decay rate increases with porosity while the modulus of elasticity decreases. In summary, for dense rocks like this sandstone, both the initial porosity and the number of freeze–thaw cycles exacerbate the deterioration of mechanical properties, and the number of freeze–thaw cycles is the dominant factor. The NN and RF models were selected to perform the same training and prediction. Comparing the Scores of the three models reveals that the model built by the AMGBDT algorithm performs best, which illustrates the feasibility of the proposed method. The model constructed in this paper can be further applied to multi-field coupling analyses of rock weathering to obtain the degree of contribution of the influencing factors of each action field to the deterioration of the final mechanical properties. The influence-ranking weights of each factor under coupling can then be obtained, and the mechanism of multi-field coupling can be studied further. Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study. AMGBDT: Adaptive multi-output gradient boosting decision trees. SQP: Sequential quadratic programming. NN: Neural networks. RF: Random forests. CART: Classification and regression trees. SVM: Support vector machine. KNN: K-nearest neighbor. XGB: Extreme gradient boosting. LGB: Light gradient boosting machine.
\(D\): Data set of the freeze–thaw cycle experiment. \(X\): Independent variables of the freeze–thaw cycle experiment data. \(Y\): Dependent variables of the freeze–thaw cycle experiment data set. \({ftt}_{i}\): Number of freeze–thaw cycles of the \(i\)th sample. \({p}_{i}\): Porosity of the \(i\)th sample. \({v}_{i}\): Longitudinal wave velocity of the \(i\)th sample. \({sl}_{i}\): Loss rate of uniaxial compressive strength. \({e}_{i}\): Elastic modulus after the deterioration of the \(i\)th sample. \(m{l}_{i}\): Loss rate of mass. \(N\): Number of samples. \({F}_{0}\left({X}_{i}\right)\): Initial base regressor. \(T\): Number of iterations. \({b}_{ti}\): Negative gradient of the loss function in iteration \(t\). \(\mathcal{L}\left(\bullet \right)\): Loss function. \({B}_{tj}\): Leaf node region \(j\) of the \(t\)th regression tree. \(J\): Number of leaf nodes of the \(t\)th regression tree. \({c}_{tj}\): Best-fit value for each leaf node region. \(M\): Number of trees. µ: Learning rate of the model. \(f\left({\varvec{Z}}\right)\): Objective function of the optimization problem. \({{\varvec{Z}}}^{k}\): Current iteration point. \(S\): Step variable \({\varvec{Z}}-{{\varvec{Z}}}^{k}\). \({S}^{*}\): Optimum solution of the current subproblem. \({S}^{k}\): The next search direction. \({{\varvec{Z}}}^{k+1}\): An approximate solution to the original problem. \(Score\): Coefficient of determination. \({X}_{ik}\): Value of the \(k\)th feature of the \(i\)th sample. \({{X}_{ik}}^{^{\prime}}\): Independent variable after normalization. References Wang JH, Yan SJ, Ren WZ, Fang Y. Research on structural stability analysis and evaluation system of cave rock body. Wuhan: China University of Geosciences Press; 2013. Ke B, Zhou KP, Xu CS, Deng HW, Li JL, Bin F. Dynamic mechanical property deterioration model of sandstone caused by freeze-thaw weathering. Rock Mech Rock Eng. 2018;51(9):2791–804. https://doi.org/10.1007/s00603-018-1495-0. Zhang J, Deng HW, Taheri A, Ke B, Liu CJ, Yang X. Degradation of physical and mechanical properties of sandstone subjected to freeze-thaw cycles and chemical erosion. Cold Regions Sci Technol. 2018;155:37–46. https://doi.org/10.1016/j.coldregions.2018.07.007. Inada Y, Kinoshita N, Ebisawa A, Gomi S. Strength and deformation characteristics of rocks after undergoing thermal hysteresis of high and low temperatures. Int J Rock Mech Mining Sci Geomech. 1997;34(3–4):688. https://doi.org/10.1016/S1365-1609(97)00048-8. Yahaghi J, Liu HY, Chan A, Fukuda D. Experimental and numerical studies on failure behaviours of sandstones subject to freeze-thaw cycles. Transportation Geotech. 2021;31:100655. https://doi.org/10.1016/j.trgeo.2021.100655. Xu JC, Pu H, Sha ZH. Mechanical behavior and decay model of the sandstone in Urumqi under coupling of freeze-thaw and dynamic loading. Bull Eng Geol Environ. 2021;80(4):2963–78. https://doi.org/10.1007/s10064-021-02133-5. Gao F, Xiong X, Zhou KP, Li JL, Shi WC. Strength deterioration model of saturated sandstone under freeze-thaw cycles. Rock Soil Mech. 2019;40(3):926–32. https://doi.org/10.16285/j.rsm.2017.1886. Huang SB, Ye YH, Cui XZ, Cheng AP, Liu GF. Theoretical and experimental study of the frost heaving characteristics of the saturated sandstone under low temperature. Cold Regions Sci Technol. 2020;174:103036. https://doi.org/10.1016/j.coldregions.2020.103036. Sarici DE, Ozdemir E. Determining point load strength loss from porosity, Schmidt hardness, and weight of some sedimentary rocks under freeze-thaw conditions. Environ Earth Sci. 2018;77(3):1–9. https://doi.org/10.1007/s12665-018-7241-9. Li JL, Zhou KP, Liu WJ, Deng H.
NMR research on deterioration characteristics of microscopic structure of sandstones in freeze–thaw cycles. Trans Nonferrous Metals Soc China. 2016;26(11):2997–3003. https://doi.org/10.1016/S1003-6326(16)64430-8. Yu J, Chen X, Li H, Zhou JW, Cai YY. Effect of freeze-thaw cycles on mechanical properties and permeability of red sandstone under triaxial compression. J Mountain Sci. 2015;12(2):218–31. https://doi.org/10.1007/s11629-013-2946-4. Jakubowski J, Stypulkowski JB, Bernardeau FG. Multivariate linear regression and CART regression analysis of TBM performance at Abu Hamour Phase-I tunnel. Arch Min Sci. 2017;62(4):825–41. https://doi.org/10.1515/amsc-2017-0057. Armaghani DJ, Asteris PG, Askarian B, Hasanipanah M, Tarinejad R, van Huynh V. Examining hybrid and single SVM models with different kernels to predict rock brittleness. Sustainability. 2020;12(6):1–17. https://doi.org/10.3390/su12062229. Yao BZ, Yang CY, Yao JB, Sun J. Tunnel surrounding rock displacement prediction using support vector machine. Int J Comput Intell Syst. 2010;3(6):843–52. https://doi.org/10.1080/18756891.2010.9727746. Gupta S, Mohan N, Kumar M. A study on source device attribution using still images. Arch Comput Methods Eng. 2021;28(4):2209–23. https://doi.org/10.1007/s11831-020-09452-y. Akbulut Y, Sengur A, Guo YH, Smarandache F. NS-k-NN: Neutrosophic set-based k-nearest neighbors classifier. Symmetry. 2017;9(9):1–10. https://doi.org/10.3390/sym9090179. Bansal M, Kumar M, Kumar M, Kumar K. An efficient technique for object recognition using Shi-Tomasi corner detection algorithm. Soft Comput. 2021;25(6):4423–32. https://doi.org/10.1007/s00500-020-05453-y. Breiman L. Statistical modeling: the two cultures. Stat Sci. 2001;16(3):199–231. https://doi.org/10.1214/ss/1009213726. Zhang Y, Haghani A. A gradient boosting method to improve travel time prediction. Transport Res C. 2015;58:308–24. https://doi.org/10.1016/j.trc.2015.02.019. Breiman L. Bagging predictors. Mach Learn. 1996;24:123–40. https://doi.org/10.1023/A:1018054314350. Wang YY, Wang DJ, Geng N, Wang YZ, Yin YQ, Jin YC. Stacking-based ensemble learning of decision trees for interpretable prostate cancer detection. Appl Soft Comput. 2019;77:188–204. https://doi.org/10.1016/j.asoc.2019.01.015. Freund Y, Schapire RE. A short introduction to boosting. J Jpn Soc Artif Intell. 1999;14(5):771–80. Kuncheva LI, Bezdek JC, Duin RPW. Decision templates for multiple classifier fusion: an experimental comparison. Pattern Recogn. 2001;34(2):299–314. https://doi.org/10.1016/S0031-3203(99)00223-X. Elith J, Leathwick JR, Hastie T. A working guide to boosted regression trees. J Anim Ecol. 2008;77(4):802–13. https://doi.org/10.1111/j.1365-2656.2008.01390.x. Wang Y, Feng LW, Li SJ, Ren F, Du QY. A hybrid model considering spatial heterogeneity for landslide susceptibility mapping in Zhejiang Province, China. CATENA. 2020;188:104425. https://doi.org/10.1016/j.catena.2019.104425. Liu JJ, Liu JC. An intelligent approach for reservoir quality evaluation in tight sandstone reservoir using gradient boosting decision tree algorithm - a case study of the Yanchang Formation, mid-eastern Ordos Basin, China. Mar Pet Geol. 2021;126:104939. https://doi.org/10.1016/j.marpetgeo.2021.104939. Chen T, Zhu L, Niu R, Trinder CJ, Peng L, Lei T. Mapping landslide susceptibility at the Three Gorges Reservoir, China, using gradient boosting decision tree, random forest and information value models. J Mountain Sci. 2020;17(3):670–85.
https://doi.org/10.1007/s11629-019-5839-3. Friedman JH. Greedy function approximation: a gradient boosting machine. Ann Stat. 2001;29(5):1189–1232. http://www.jstor.org/stable/2699986 Radosavljević J, Jevtić M. Hybrid GSA-SQP algorithm for optimal coordination of directional overcurrent relays. IET Gener Transm Distrib. 2016;10(8):1928–37. https://doi.org/10.1049/iet-gtd.2015.1223. Boggs PT, Tolle JW. Sequential quadratic programming. Acta Numer. 1995;4:1–51. https://doi.org/10.1017/S0962492900002518. Kumar M, Jindal MK, Sharma RK, Jindal SR. Performance evaluation of classifiers for the recognition of offline handwritten Gurmukhi characters and numerals: a study. Artif Intell Rev. 2020;53(3):2075–97. https://doi.org/10.1007/s10462-019-09727-2. Breiman L. Random Forests. Mach Learn. 2001;45:5–32. https://doi.org/10.1023/A:1010933404324. Bansal M, Kumar M, Kumar M. 2D Object Recognition Techniques: State-of-the-Art Work. Arch Comput Methods Eng. 2021;28(3):1147–61. https://doi.org/10.1007/s11831-020-09409-1. Jia HL, Ding S, Zi F, Dong YH, Shen YJ. Evolution in sandstone pore structures with freeze-thaw cycling and interpretation of damage mechanisms in saturated porous rocks. CATENA. 2020;195:104915. https://doi.org/10.1016/j.catena.2020.104915. This research was not supported by any public funding. Institute of Rock and Soil Mechanics, Chinese Academy of Sciences, Wuhan, 430071, China Chenchen Liu, Yibiao Liu, Weizhong Ren, Wenhui Xu, Simin Cai & Junxia Wang University of Chinese Academy of Sciences, Beijing, 100049, China Chenchen Liu, Yibiao Liu, Weizhong Ren, Wenhui Xu & Simin Cai Chenchen Liu Yibiao Liu Weizhong Ren Wenhui Xu Simin Cai Junxia Wang CCL completed the conceptualization, investigation, and original draft writing. YBL completed the methodology and formal analysis. WZR and JXW supervised the draft. WHX and SMC performed the investigation, data curation, and validation. All authors read and approved the final manuscript. Correspondence to Chenchen Liu. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data. Liu, C., Liu, Y., Ren, W. et al. An adaptive prediction method for mechanical properties deterioration of sandstone under freeze–thaw cycles: a case study of Yungang Grottoes. Herit Sci 9, 154 (2021). https://doi.org/10.1186/s40494-021-00628-8 Received: 24 July 2021 Freeze–thaw cycle Mechanical property parameters Gradient boosting decision trees
Can we interpret the Einstein field equations to mean that stress-energy *is* the curvature of spacetime? What do I mean? There are two kinds of equalities, or two ways to interpret an equality. Take for example the ideal gas law $$PV = Nk_BT$$ We all know what this equation means: when you calculate both sides of the equation, you find the same physical quantity. This equation, in other words, is saying that the temperature of an ideal gas is proportional to the pressure and the volume of the enclosing container, and inversely proportional to the number of molecules of the gas. There are so many of these equations in physics, but there is another, more subtle kind. Take this other equation from the statistics of the ideal gas: $$\langle E\rangle = \frac{1}{2}mv_{rms}^2 = \frac{3}{2}k_BT$$ Now, this equation can also be taken as an expression of proportionality. However, it can also be taken as a definition of temperature. We can read this equation to mean that temperature (a macroscopic phenomenon) is the average kinetic energy of a gas particle (up to a multiplication). Incidentally, one can take $$F=ma$$ in a similar way. For example, when we are in a rotating or generally accelerated frame, the equation actually defines the fictitious force in terms of the acceleration. So is $$G = 8\pi T$$ an expression of proportionality, or a definitional identity? And why? The ideal gas law does not tell us why $lhs = rhs$; it expresses a law, it does not explain nature. On the other hand, the second equation informs us about the nature of temperature, it explains nature, it tells us: this is what temperature is. I find these kinds of equations very satisfying, and they are much rarer in physics. If you disagree on any of this, please leave a comment, I am interested. general-relativity gravity spacetime curvature stress-energy-momentum-tensor Andrea $\begingroup$ Is this question ontological, or is it mechanical? $\endgroup$ – can-ned_food Oct 2 '17 at 21:40 $\begingroup$ @can-ned_food It's definitely ontological. In the kinetic example, that equation to me answers the question: "what is temperature, really?". One hypothetical answer to my question about GR is: stress energy is, really, an aspect/consequence of the geometry of spacetime. Not sure what you mean by "mechanical" in this context. $\endgroup$ – Andrea Oct 2 '17 at 23:20 There is a purely geometric definition of the Einstein tensor $G_{\mu\nu}$ in terms of derivatives of the metric, independent of any physics. Likewise, given a field theory, you could in principle calculate the stress energy tensor. GR is a physical theory which couples the geometry, through the Einstein tensor, to the matter content, via the stress energy tensor. There are other self-consistent theories which couple geometry to matter in different ways. In this sense, the equation $G=8\pi T$ is model dependent. Just like the ideal gas law $PV=nRT$, which only applies to ideal gases, not interacting gases. Mr. Weathers $\begingroup$ Comments are not for extended discussion; this conversation has been moved to chat. $\endgroup$ – ACuriousMind♦ Oct 5 '17 at 10:50 In order to elucidate a potential answer to your question, it is helpful to consider the Einstein field equations in the context of field theory.
The action, $$S=\frac{1}{16\pi G}\int d^4 x \, \sqrt{|g|} \, R + \int d^4x \, \sqrt{|g|} \, \mathcal L_M$$ gives rise to the Einstein field equations through the principle of stationary action, with a right-hand side corresponding to the usual symmetric definition of the stress-energy tensor of a field theory described by $\mathcal L_M$. In this context, we can think of $$R_{\mu\nu}-\frac12 g_{\mu\nu}R = 8\pi G \, T_{\mu\nu}$$ as being the equations of motion for the metric, and for any fields in $\mathcal L_M$ coupled to gravity. So it's not so much that stress-energy is curvature, but rather that the fields (or other quantities) which contribute to the stress-energy are coupled to gravity. JamalS Short answer: If you like, you can say that the Einstein field equations define the active gravitational mass (or, more precisely, the active gravitational mass-energy-momentum-stress). But active gravitational mass equals inertial and passive gravitational mass, so this makes the EFE more than a definition. The EFE also contain all kinds of information that has nothing to do with sources. For example, they say that certain vacuum fields are not possible, and they predict the existence of gravitational waves. There are some ambiguities that come into play in the case of dark energy. The Einstein tensor $G$ is measurable. For example, when I drop a pencil and see how long it takes to hit the ground, I am finding out something about the Riemann tensor in a certain region of space. By doing enough measurements, I can measure the entire Riemann tensor and then determine $G$. This constitutes an operational definition of $G$. We could define $G$ in some other way, but we don't need to, it seems undesirable, and nobody does it. The Einstein field equations relate the Einstein tensor to the stress-energy tensor. In the nonrelativistic limit, this is simply equivalent to the Newtonian equation $g=Gm_a/r^2$, which relates the active gravitational mass $m_a$ to the gravitational field. This can be taken as the definition of active gravitational mass in Newtonian physics, and there is no other way to define it. However, this does not make Newton's law of gravity a tautology or a definition, because in Newtonian gravity the active gravitational mass is strictly equal to both the passive gravitational mass and the inertial mass. Since we have other types of experiments that can measure inertial mass, there is no circularity involved. Furthermore, Newton's law of gravity specifies the distance dependence of the field, which is not a matter of definition. This $1/r^2$ form of the force law results, for example, in the prediction of elliptical orbits. Similarly, in GR, the active gravitational mass (or, more precisely, active gravitational mass-energy-momentum-stress) is defined as $G/8\pi$, and there is no other way to define it. However, this does not make the Einstein field equations tautological or a matter of definition, for the same reasons as in the Newtonian theory. Note that in GR, the equality of inertial, active, and passive gravitational masses is not just an optional feature as in Newtonian gravity. If any of these equalities fails, then GR is falsified and cannot be fixed by tinkering. (E.g., it's a theorem in GR that test particles follow geodesics.) One place where I think it gets a little trickier to make proper operational definitions is in the case of dark energy. We have no way to measure the inertia or passive gravitational mass of dark energy.
This is basically because our model of dark energy is a cosmological constant, and the Einstein field equations do not allow us to simply make solutions in which the cosmological constant varies from point to point. Such solutions always violate the field equations. Therefore the cosmological constant is ordinarily modeled as a constant -- it has no dynamics. (You can have a dynamical dark energy, but doing so requires something more elaborate than just letting $\Lambda$ vary.) This lack of dynamics in $\Lambda$ prevents us from measuring dark energy's inertial or passive gravitational mass. For this reason, it's not uncommon to see different people making different choices about whether or not to include the dark energy piece as part of the stress-energy tensor. Ben Crowell I am not sure of the distinction you draw between 'proportionality' and 'definitional identity', especially in the examples you use. For example, one could argue that $PV = nk_BT$ is just a definition of pressure (why not?), in the same way that $F = ma$ is used to define force. The interesting thing is that these concepts are consistent with other elements in the theory; for example, you can also see that if you place the gas in a chamber of area $A$ then the force a side of the box experiences is $F = PA$. According to your argument this last expression would automatically change the label of $PV = nk_BT$ from definition to proportionality. With this in mind, if you define both $G$ and $T$ some way and show that $G = 8\pi T$, then in light of your argument this would be a mere proportionality. If, on the other hand, you never defined one of them, you could take this expression as a definition. It turns out these terms are separately defined, and Einstein's equations are a result of the theory. But again, you could argue that this is a definition and then show that the other instances where $G$ appears in the theory are proportionalities. Finally I would like to comment on your last statement. I certainly believe the word law is not as black-and-white as you assume. To give you an example, $$ {\bf F} = -\frac{GMm}{r^2}\hat{\bf r} $$ is known as the law of gravity, whereas $$ G_{\mu\nu} = 8\pi T_{\mu\nu} $$ is labeled as the theory of general relativity, which is essentially gravity, but somehow we decided not to use the word law anymore. Why? I think it is just a historical reason which reflects the fact that at some point we realized laws may be wrong. caverac
What gives the concept of temperature a privilege over pressure in this context? $\endgroup$ – caverac Oct 3 '17 at 1:37 $\begingroup$ In total absence of actual physical knowledge, yes, yes you could. I will try to reformulate. The ideal gas law was derived from experiments and it linked observations of directly measurable quantities. The reason it does not define anything in terms of anything is that, at that level, we already know what each term means. In contrast, kinetic theory provides an explanation of macroscopic thermodynamic quantities in terms of mechanical quantities of atoms. In physics, atoms are taken as more fundamental, and so an equation linking a microscopic entity with a macroscopic one defines the latter. $\endgroup$ – Andrea Oct 3 '17 at 9:38 $\begingroup$ A few other examples from that theory. Pressure, the average force per unit area, is caused by the collisions of the atoms on the surface, the internal energy of the gas is the energy of the molecules, and $S = k_B\log\Omega$. Remember that thermodynamics was developed before people accepted the existence of atoms, so these were not obvious relations when they were first written down. They explained what we see in terms of another layer of reality. $\endgroup$ – Andrea Oct 3 '17 at 9:42 No. For the simple reason that there is space-time curvature without any stress-energy. The Schwarzschild solution is a vacuum solution. Everywhere that the solution holds, the stress-energy is zero yet spacetime is curved. The famous experiments showing star light "bending" when passing near the sun, or gravitational lensing by distant galaxies, are approximated with vacuum equations outside of the sources. Einstein's field equations only express the Ricci curvature. The full curvature is given by the Riemann curvature tensor. JJMalone $\begingroup$ I don't think this quite works. The OP didn't state which side of the Einstein field equations they wanted to use as a definition and which side as the thing being defined. There is nothing in your argument that prevents us from defining the stress-energy in terms of the curvature, which I would say is more reasonable than defining the curvature in terms of the stress-energy. $\endgroup$ – Ben Crowell Oct 2 '17 at 20:31 $\begingroup$ Also, although the OP used the word "curvature" in the title, when they actually stated the question in more mathematical detail they specified the Einstein tensor. It doesn't seem likely that you could define the Einstein tensor in terms of the stress-energy and then go on and define the whole Riemann tensor in some way that would fit logically -- but your argument doesn't address that. $\endgroup$ – Ben Crowell Oct 2 '17 at 20:32 I'd like to add the following notes, mainly to Mr. Weathers's good answer. The Einstein tensor has definite geometric meaning independent of physics; there are two physics-independent meanings we should take heed of: Its divergence vanishes, by dint of the (contracted) Bianchi identity. This fact is an expression of the geometrical relationship $\partial^2 =\emptyset$: the boundary of the boundary of a set is always the empty set. This is one intuitive reason why we can "wire up" the Einstein tensor to the stress energy $G\propto T$: the divergence of $G$ vanishes through geometry, and this forces the divergence of $T$ to be nought.
In "wiring up" geometry to stress energy in this way, we force the theory to encode local conservation of energy-momentum through a fundamental, simple geometrical fact. To answer your title question: "definitely not!" The Einstein equation determines the Ricci tensor, and this tensor encodes only hypervolume distortion, as I discuss further in my answer here. The full Riemann curvature tensor has further degrees of freedom not set by the Einstein equation; these further degrees of freedom encode shape distortion through the Weyl tensor. Boundary conditions are needed, in addition to the stress energy, to fully define the Riemann tensor; I discuss how the Riemann tensor decomposes into Ricci and Weyl tensors, and then into Ricci + Schouten tensor + boundary conditions, in my linked answer above. These degrees of freedom unspecified by the Einstein equation itself are what allow, amongst other things, gravitational waves to propagate in the vacuum, and also allow all kinds of interesting vacuum solutions to the Einstein equations, such as Calabi-Yau manifolds or even something as "mundane" as the Schwarzschild and other black hole solutions away from the singularity. WetSavannaAnimal
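A note for readers who want the decomposition mentioned above spelled out. Up to convention-dependent signs and factors (treat this as indicative rather than authoritative), the standard Ricci decomposition in $n$ dimensions, with square brackets denoting index antisymmetrization, reads

$$R_{abcd} = C_{abcd} + \frac{2}{n-2}\left(g_{a[c}R_{d]b} - g_{b[c}R_{d]a}\right) - \frac{2R}{(n-1)(n-2)}\, g_{a[c}g_{d]b}\,,$$

so the field equations fix the Ricci tensor $R_{ab}$ and scalar $R$ pointwise through the stress-energy, while the Weyl tensor $C_{abcd}$ is constrained only by the Bianchi identities together with boundary conditions.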
RL Theory, Winter 2021 Lecture Notes. Website of the course CMPUT 653: Theoretical Foundations of Reinforcement Learning. In the previous lectures we attempted to reduce the complexity of planning by assuming that value functions over the large state-action spaces can be compactly represented with a few parameters. While value-functions are an indispensable component of poly-time MDP planners (see Lectures 3 and 4), it is far from clear whether they should also be given priority when working with larger MDPs. Indeed, perhaps it is more natural to consider sets of policies with a compact description. Formally, in this problem setting the planner will be given black-box simulation access to a (say, $\gamma$-discounted) MDP $M=(\mathcal{S},\mathcal{A},P,r)$ as before, but the interface also provides access to a parameterized family of policies over $(\mathcal{S},\mathcal{A})$, \(\pi = (\pi_\theta)_{\theta\in \mathbb{R}^d}\), where for any fixed parameter $\theta\in \mathbb{R}^d$, $\pi_\theta$ is a memoryless stochastic policy: $\pi_\theta:\mathcal{S} \to \mathcal{M}_1(\mathcal{A})$. For example, $\pi_\theta$ could be such that for some feature-map $\varphi: \mathcal{S}\times \mathcal{A} \to \mathbb{R}^d$, \[\begin{align} \pi_\theta(a|s) = \frac{\exp( \theta^\top \varphi(s,a))}{\sum_{a'} \exp(\theta^\top \varphi(s,a'))}\,, \qquad (s,a)\in \mathcal{S}\times \mathcal{A}\,. \label{eq:boltzmannpp} \end{align}\] In this case "access" to $\pi_\theta$ means access to $\varphi$, which can be either global (i.e., the planner is given the "whole" of $\varphi$ and can run any preprocessing on it), or local (i.e., $\varphi(s',a)$ is returned by the simulator for the "next states" $s'\in \mathcal{S}$ and for all actions $a$). Of course, the exponential function can be replaced with other functions, or one can just use a neural network to output "scores", which are turned into probabilities in some way. Dispensing with stochastic policies, a narrower class is the class of policies that are greedy with respect to action-value functions that belong to some parametric class. One special case that is worthy of attention due to its simplicity is the case when $\mathcal{S}$ is partitioned into $m$ (disjoint) subsets $\mathcal{S}_1,\dots,\mathcal{S}_m$ and for $i\in [m]$, we have $\mathrm{A}$ basis functions defined as follows: \[\begin{align} \phi_{i,a'}(s,a) = \mathbb{I}( s\in \mathcal{S}_i, a= a' )\,, \qquad s\in \mathcal{S}, a,a'\in \mathcal{A}, i\in [m]\,. \label{eq:stateagg} \end{align}\] Here, to minimize clutter, we allow the basis functions to be indexed by pairs and identify $\mathcal{A}$ with $\{ 1,\dots,\mathrm{A}\}$, as usual.
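To make these definitions concrete before specializing them, here is a minimal, hedged Python sketch of the Boltzmann policy built on state-aggregation features; the group count, action count, and random parameter vector are invented for illustration.

```python
import numpy as np

def softmax(z):
    z = z - z.max()                 # numerically stable softmax
    p = np.exp(z)
    return p / p.sum()

# State-aggregation features: m groups, A actions, so d = m * A.
m, A = 4, 3
def phi(group, a):
    """phi_{i,a'}(s,a) = I(s in S_i, a = a'), flattened into R^{m*A}."""
    f = np.zeros(m * A)
    f[group * A + a] = 1.0
    return f

theta = np.random.default_rng(0).normal(size=m * A)

def pi_theta(group):
    """Boltzmann policy pi_theta(.|s) for any state s in group S_i."""
    scores = np.array([theta @ phi(group, a) for a in range(A)])
    return softmax(scores)

print(pi_theta(2))                  # a distribution over the A actions
```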
Then, the policies are given by $\theta = (\theta_1,\dots,\theta_m)$, the collection of $m$ probability vectors $\theta_1,\dots,\theta_m\in \mathcal{M}_1(\mathcal{A})$: \[\begin{align} \pi_\theta(a|s) = \sum_{i=1}^m \sum_{a'} \phi_{i,a'}(s,a)\,\theta_{i,a'}\,. \label{eq:directpp} \end{align}\] Note that because of the special choice of $\phi$, $\pi_{\theta}(a|s) = \theta_{i,a}$ for the unique index $i\in [m]$ such that $s\in \mathcal{S}_i$. This is known as state aggregation: states belonging to the same group give rise to the same probability distribution over the actions. We say that the featuremap $\varphi:\mathcal{S}\times \mathcal{A}\to \mathbb{R}^d$ is of the state-aggregation type if it takes the form \eqref{eq:stateagg} with an appropriate reindexing of the basis functions. Fix now a state-aggregation type featuremap. We can consider both the direct parameterization of policies given in \eqref{eq:directpp}, or the "Boltzmann" parameterization given in \eqref{eq:boltzmannpp}. As is easy to see, the sets of policies that can be expressed with the two parameterizations are nearly identical. Letting $\Pi_{\text{direct}}$ be the set of policies that can be expressed using $\varphi$ and the direct parameterization and letting $\Pi_{\text{Boltzmann}}$ be the set of policies that can be expressed using $\varphi$ but with the Boltzmann parameterization, first note that \(\Pi_{\text{direct}},\Pi_{\text{Boltzmann}} \subset \mathcal{M}_1(\mathcal{A})^{\mathcal{S}} \subset ([0,1]^{\mathrm{A}})^{\mathrm{S}}\), and if we take the closure $\text{clo}(\Pi_{\text{Boltzmann}})$ of $\Pi_{\text{Boltzmann}}$, then we find that \[\text{clo}(\Pi_{\text{Boltzmann}}) = \Pi_{\text{direct}}\,.\] In particular, the Boltzmann policies cannot express point-mass distributions with finite parameters, but by letting the parameter vectors grow without bound, any policy that can be expressed with the direct parameterization can also be approximated arbitrarily well by the Boltzmann parameterization. There are many other possible parameterizations, as also mentioned earlier. The important point to notice is that while the parameterization is necessary so that the algorithms can work with a compressed representation, different representations may describe an identical set of policies. Policy search A reasonable goal then is to ask for a planner that competes with the best policy within the parameterized family, or an $\varepsilon$-best policy for some positive $\varepsilon$. Since there may not be a parameter $\theta$ such that $v^{\pi_\theta}\ge v^{\pi_{\theta'}}-\varepsilon\boldsymbol{1}$ for any $\theta'\in \mathbb{R}^d$, we simplify the problem by requiring that the policy computed is nearly best when started from some initial distribution $\mu \in \mathcal{M}_1(\mathcal{S})$. Defining $J: \text{ML} \to \mathbb{R}$ as \[J(\pi) = \mu v^{\pi} \left(=\sum_{s\in \mathcal{S}}\mu(s)v^{\pi}(s)\right),\] the policy search problem is to find a parameter $\theta\in \mathbb{R}^d$ such that \[\begin{align*} J(\pi_{\theta}) = \max_{\theta'} J(\pi_{\theta'})\,. \end{align*}\] The approximation version of the problem asks for finding $\theta\in \mathbb{R}^d$ such that \[\begin{align*} J(\pi_{\theta}) \ge \max_{\theta'} J(\pi_{\theta'}) - \varepsilon\,.
\end{align*}\] The formal problem definition then is as follows: a planning algorithm is given the MDP $M$ and a policy parameterization $(\pi_\theta)_{\theta}$, and we are asking for an algorithm that returns the solution to the policy search problem in time polynomial in the number of actions $\mathrm{A}$ and the number of parameters $d$ that describe the policy. An even simpler problem is when the MDP has finitely many states, and the algorithm needs to run in polynomial time in $\mathrm{S}$, $\mathrm{A}$ and $d$. In this case, it is clearly advantageous for the algorithm if it is given the exact description of the MDP (as described in Lecture 3). Sadly, even this mild version of policy search is intractable. Theorem (Policy search hardness): Unless $\text{P}=\text{NP}$, there is no polynomial time algorithm for the finite policy search problem even when the policy space is restricted to the constant policies and the MDPs are restricted to be deterministic with binary rewards. The constant policies are those that assign the same probability distribution to each state. This is a special case of state aggregation when all the states are aggregated into a single class. As the policy does not depend on the state, the problem is also known as the blind policy search problem. Note that the result holds regardless of the representation used to express the set of constant policies. Proof: Let $\mathcal{S} = \mathcal{A}=[n]$. The dynamics is deterministic: the next state is $a$ if action $a\in \mathcal{A}$ is taken in any state. A policy is simply a probability distribution \(\pi \in \mathcal{M}_1([n])\) over the action space, which we shall view as a column vector taking values in $[0,1]^n$. The transition matrix of $\pi$ is $P_{\pi}(s,s') = \pi(s')$, or, in matrix form, $P_\pi = \boldsymbol{1} \pi^\top$. Clearly, $P_\pi^2 = \boldsymbol{1} \pi^\top \boldsymbol{1} \pi^\top = P_\pi$ (i.e., $P_\pi$ is idempotent). Thus, $P_\pi^t = \boldsymbol{1}\pi^\top$ for any $t>0$ and hence \[\begin{align*} J(\pi) & = \mu (r_\pi + \sum_{t\ge 1} \gamma^t P_\pi^t r_\pi) = \mu \left(I + \frac{\gamma}{1-\gamma} \boldsymbol{1} \pi^\top \right)r_\pi\,. \end{align*}\] Defining $R_{s,a} = r_a(s)$ so that $R\in [0,1]^{n\times n}$, we have $r_\pi = R\pi$. Plugging this into the previous displayed equation and using that $\mu \boldsymbol{1}=1$, we get \[\begin{align*} J(\pi) & = \mu R \pi + \frac{\gamma}{1-\gamma} \pi^\top R \pi\,. \end{align*}\] Thus we see that the policy search problem is equivalent to maximizing the quadratic expression in the previous display over the probability simplex. Since there is no restriction on $R$, one may at this point conjecture that this will be hard to do. That this is indeed the case can be shown by a reduction from the maximum independent set problem, which asks for checking whether the independence number of a graph is above a threshold and which is known to be NP-hard even for $3$-regular graphs (i.e., graphs where every vertex has exactly three neighbours). Here, the independence number of a graph is defined as follows: We are given a simple graph $G=(V,E)$ (i.e., there are no self-loops, no double edges, and the graph is undirected). An independent set in $G$ is a neighbour-free subset of vertices. The independence number of $G$ is defined as \[\begin{align*} \alpha(G) = \max \{ |V'| \,:\, V'\subseteq V,\ V' \text{ independent in } G \}\,.
Quadratic optimization has close ties to the maximum independent set problem:

Lemma (Motzkin–Straus '65): Let \(G\in \{0,1\}^{n\times n}\) be the vertex-vertex adjacency matrix of a simple graph (i.e., $G_{ij}=1$ if and only if $(i,j)$ is an edge of the graph). Then, for \(I\in \{0,1\}^{n\times n}\) the $n\times n$ identity matrix, \[\begin{align*} \frac{1}{\alpha(G)} = \min_{y\in \mathcal{M}_1([n])} y^\top (G+I) y\,. \end{align*}\]

We now show that if there is an algorithm that solves policy search in polynomial time, then it can also be used to solve the maximum independent set problem for simple, $3$-regular graphs. For this, pick a $3$-regular graph $G$ with $n$ vertices. Define the MDP as above with $n$ states and actions and the rewards chosen so that $R = E-(I+G)$, where $G$ is the vertex-vertex adjacency matrix of the graph and $E$ is the all-ones matrix: $E = \boldsymbol{1} \boldsymbol{1}^\top$. We add $E$ so that the rewards are in the $[0,1]$ interval and in fact are binary as required. Choose $\mu$ as the uniform distribution over the states. Note that $\boldsymbol{1}^\top (I+G) = 4 \boldsymbol{1}^\top$ because the graph is $3$-regular. Then, for $\pi \in \mathcal{M}_1(\mathcal{A})$, \[\begin{align*} J(\pi) & = \frac{1}{1-\gamma}- \mu(I+G) \pi - \frac{\gamma}{1-\gamma} \pi^\top (I+G) \pi \\ & = \frac{1}{1-\gamma}- \frac{1}{n} \boldsymbol{1}^\top (I+G) \pi - \frac{\gamma}{1-\gamma} \pi^\top (I+G) \pi \\ & = \frac{1}{1-\gamma}- \frac{4}{n} - \frac{\gamma}{1-\gamma} \pi^\top (I+G) \pi\,. \end{align*}\] Hence, \(\begin{align*} \max_{\pi \in \mathcal{M}_1([n])} J(\pi) & = \frac{1}{1-\gamma}- \frac{4}{n} - \frac{\gamma}{1-\gamma} \frac{1}{\alpha(G)} \ge \frac{1}{1-\gamma}- \frac{4}{n} - \frac{\gamma}{1-\gamma} \frac{1}{m} \end{align*}\) holds if and only if $\alpha(G)\ge m$. Thus, the decision problem of deciding whether $\max_\pi J(\pi)\ge a$ for a given threshold $a$ is at least as hard as the maximum independent set problem. As noted, this is an NP-hard problem, hence the result follows. \(\qquad \blacksquare\)

Potential remedy: Local search

Based on the theorem just proved, it is not very likely that we can find computationally efficient planners that compete with the best policy in a restricted policy class, even if the class looks quite benign. This motivates aiming at some more modest goal, one possibility of which is to aim for computing stationary points of the map $J:\pi \mapsto \mu v^{\pi}$. Let $\Pi = \{ \pi_\theta \,:\, \theta\in \mathbb{R}^d \} \subset [0,1]^{\mathcal{S}\times\mathcal{A}}$ be the set of policies that can be represented; we view these now as "large vectors". Then, in this approach we aim to identify \(\pi^*\in \Pi\) (and its parameters) such that for any $\pi'\in \Pi$ and any small enough $\delta>0$ with \(\pi^*+\delta (\pi'-\pi^*)\in \Pi\), we have \(J(\pi^*+\delta (\pi'-\pi^*))\le J(\pi^*)\). For $\delta$ small, \(J(\pi^*+\delta (\pi'-\pi^*))\approx J(\pi^*) + \delta \langle J'(\pi^*), \pi'- \pi^* \rangle\). Plugging this into the previous inequality, reordering and dividing by $\delta>0$ gives \[\begin{align} \langle J'(\pi^*), \pi'- \pi^* \rangle \le 0\,, \qquad \pi' \in \Pi\,. \label{eq:stp} \end{align}\] Here, $J'(\pi)$ denotes the derivative of $J$. What remains to be seen is whether (1) relaxing the goal to computing \(\pi^*\) helps with the computation (and when) and (2) whether we can get some guarantees for how well $\pi^*$ satisfying \eqref{eq:stp} will do compared to \(J^* = \max_{\pi\in \Pi} J(\pi)\), that is, obtaining some approximation guarantees. For the latter, we seek some function $\varepsilon$ of the MDP $M$ and $\Pi$ (or $\phi$, when $\Pi$ is based on some feature map) so that \[\begin{align*} J(\pi^*) \ge J^* - \varepsilon(M,\Pi)\,. \end{align*}\]
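To illustrate the local-search idea on the blind policy search objective derived above, here is a sketch (Python; the step size, iteration count and the sort-based simplex projection are standard choices, not taken from the text) of projected gradient ascent together with a check of the first-order stationarity condition \eqref{eq:stp}:

```python
import numpy as np

def project_to_simplex(v):
    # Euclidean projection onto the probability simplex (sort-based method).
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    tau = (1 - css[rho]) / (rho + 1)
    return np.maximum(v + tau, 0)

def J(pi, R, mu, gamma):
    return mu @ R @ pi + gamma / (1 - gamma) * (pi @ R @ pi)

def grad_J(pi, R, mu, gamma):
    return R.T @ mu + gamma / (1 - gamma) * (R + R.T) @ pi

rng = np.random.default_rng(2)
n, gamma = 6, 0.9
R, mu = rng.random((n, n)), np.full(n, 1.0 / n)
pi = np.full(n, 1.0 / n)
for _ in range(5000):
    pi = project_to_simplex(pi + 0.01 * grad_J(pi, R, mu, gamma))

# The linear maximization over the simplex in the stationarity condition is
# attained at a vertex, so the gap is max_a g[a] - <g, pi>.
g = grad_J(pi, R, mu, gamma)
print(J(pi, R, mu, gamma), g.max() - g @ pi)  # gap ~ 0 at a stationary point
```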
As to the computational approaches, we will consider a simple approach based on (approximately) following the gradient of $\theta \mapsto J(\pi_\theta)$.

Access models

The reader may wonder what the appropriate "access model" is when $\pi_\theta$ is not restricted to the form given in \eqref{eq:boltzmannpp}. There are many possibilities. One is to develop planners for specific parametric forms. A more general approach is to let the planner access \(\pi_{\theta}(\cdot\vert s)\) and $\frac{\partial}{\partial\theta}\pi_{\theta}(\cdot \vert s)$ for any $s$ it has encountered and any value of $\theta\in \mathbb{R}^d$ it chooses. This is akin to the first-order black-box oracle model familiar from optimization theory.

From function approximation to POMDPs

The hardness result for policy search is taken from a paper of Vlassis, Littman and Barber, who actually were interested in the computational complexity of planning in partially observable Markov Decision Problems (POMDPs). It is in fact an important observation that with function approximation, planning in MDPs becomes a special case of planning in POMDPs: in particular, if policies are restricted to depend on the states through a feature map $\phi:\mathcal{S}\to \mathbb{R}^d$ (any two states with identical features will get the same action distribution assigned to them), then planning to achieve high reward with this restricted class is almost the same as planning to achieve high reward in a partially observable MDP where the observation function is $\phi$. Planners for the former problem could still have some advantage, though, if they can also access the states: in particular, a local planner which is given a feature map to help its search but is also given access to the states is in fact not restricted to return actions whose distribution follows a policy from the feature-restricted class of policies. In machine learning, in the analogous problem of competing with a best predictor within a class, methods that use predictors that do not respect the restrictions put on the competitors are called improper, and it is known that improper learning is often more powerful than proper learning. However, when it comes to learning online or in a batch fashion, feature-restricted learning and learning in POMDPs become exact analogs. Finally, we note in passing that Vlassis et al. (2012) also give an argument that shows that it is not likely that policy search is in NP.

Open problem: Hardness of approximate policy search

The result almost implies that the approximate version of policy search is also NP-hard (Theorem 11.15, Arora and Barak 2009). In particular, it is not hard to see with the same construction that if one has an efficient method to find a policy with $J(\pi) \ge \max_{\pi'} J(\pi') - \varepsilon$, then this gives an efficient method to find an independent set of size $\alpha(G)/c$ for the said $3$-regular graphs, where \[c = 1 + \frac{1-\gamma}{\gamma} \varepsilon \alpha(G) \le 1+ \frac{1-\gamma}{\gamma} \varepsilon n \le 1+\varepsilon n \,,\] where the last inequality follows if we choose $\gamma=0.5$. Now, while there exist results that show that the maximum independent set is hard to approximate (i.e., for any fixed $c>1$, finding an independent set of size $\alpha(G)/c$ is hard), this would only imply hardness of approximate policy search if the hardness result also used $3$-regular graphs. Also, the above bound on $c$ may be too naive: for example, to get $2$-approximations, one needs $\varepsilon\le 1/n$, which is a small range for $\varepsilon$. Getting a hardness result for a "constant" $\varepsilon$ (independent of $n$) needs significantly more work.
Dealing with large action spaces

A common reason to consider policy search is that working with a restricted parametric family of policies holds the promise of decoupling the computational cost of learning and planning from the cardinality of the action space. Indeed, with action-value functions, one usually needs an efficient way of computing greedy actions (with respect to some fixed action-value function). Computing $\arg\max_{a\in \mathcal{A}} q(s,a)$ in the absence of extra structure on the action space and the function $q(s,\cdot)$ takes time linear in the size of $\mathcal{A}$, which is highly problematic unless $\mathcal{A}$ has a small cardinality. In many applications of practical interest this is not the case: the action space can be "combinatorially sized", or even a subset of some (potentially multidimensional) continuous space. If sampling from $\pi_{\theta}(\cdot\vert s)$ can be done efficiently, one may then potentially avoid the above expensive calculation. Thus, policy search is often proposed as a remedy to extend algorithms to work with large action spaces. Of course, this only applies if the sampling problem can indeed be implemented efficiently, which adds an extra restriction on the policy representation. Nevertheless, there are a number of options to achieve this. One can use, for example, an implicit representation (perhaps in conjunction with a direct one that uses probabilities/densities) for the policy. For example, the policy may be "represented" as a map $f_\theta: \mathcal{S} \times \mathcal{R} \to \mathcal{A}$ so that sampling from $\pi_\theta(\cdot\vert s)$ is accomplished by drawing a sample $R\sim P$ from a fixed distribution over the set $\mathcal{R}$ and then returning $f_\theta(s,R)\in \mathcal{A}$. Clearly, this is efficient as long as $f_\theta$ can be efficiently evaluated at any of its inputs and the random value $R$ can be efficiently produced. If $f_\theta$ is sufficiently flexible, one can in fact choose a very simple distribution for $P$, such as the standard normal distribution or the uniform distribution. Note that the setting when $\mathcal{A}$ is continuous and the policies are deterministic is a special case: the key is still to be able to efficiently produce a sample from $\pi_\theta(\cdot\vert s)$; in this case this just means a deterministic computation. The catch is that one may also still need the derivatives of $\pi_{\theta}(\cdot\vert s)$ with respect to the parameter $\theta$, and with an implicit representation as described above, it is unclear whether these derivatives can be efficiently obtained. As it turns out, this can be arranged if $f_{\theta}(\cdot\vert s)$ is a composition of elementary (invertible, differentiable) transformations with this property (by the chain rule). This observation is the basis of various approaches to "neural" density estimation (e.g., Tabak and Vanden-Eijnden, 2010, Rezende and Mohamed, 2015, or Jaini et al. 2019).
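As a concrete instance of such an implicit representation, here is a sketch (Python; the Gaussian form and all names are illustrative, not from the text) of a policy over a continuous action space sampled by reparameterization, where the action set is never enumerated:

```python
import numpy as np

rng = np.random.default_rng(3)

def f_theta(s_features, theta, noise):
    # Implicit policy: a = mean_theta(s) + std_theta(s) * noise, where the
    # noise comes from a fixed standard normal distribution P.
    W_mean, W_log_std = theta
    mean = s_features @ W_mean
    std = np.exp(s_features @ W_log_std)   # positivity via exp
    return mean + std * noise

d = 4
theta = (rng.normal(size=d), 0.1 * rng.normal(size=d))
s_features = rng.normal(size=d)

# Sampling from pi_theta(.|s): draw R ~ N(0, 1), return f_theta(s, R).
actions = [f_theta(s_features, theta, rng.standard_normal()) for _ in range(5)]
print(np.round(actions, 3))
```

Because this $f_\theta$ is differentiable in $\theta$ for every fixed noise value, derivatives of samples with respect to $\theta$ are available by the chain rule, which is exactly the property exploited by the density-estimation approaches cited above.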
Vlassis, Nikos, Michael L. Littman, and David Barber. 2012. "On the Computational Complexity of Stochastic Controller Optimization in POMDPs." ACM Trans. Comput. Theory 4 (4): 12:1–12:8.

Esteban G. Tabak and Eric Vanden-Eijnden. "Density estimation by dual ascent of the log-likelihood." Commun. Math. Sci. 8 (1): 217–233, March 2010.

Rezende, Danilo Jimenez, and Shakir Mohamed. 2015. "Variational Inference with Normalizing Flows."

Rezende, D. J., and S. Mohamed. 2014. "Stochastic Backpropagation and Approximate Inference in Deep Generative Models." ICML.

Jaini, Priyank, Kira A. Selby, and Yaoliang Yu. 2019. "Sum-of-Squares Polynomial Flow." In Proceedings of the 36th International Conference on Machine Learning, edited by Kamalika Chaudhuri and Ruslan Salakhutdinov, 97:3009–18. Proceedings of Machine Learning Research. PMLR.

Arora, Sanjeev, and Boaz Barak. 2009. Computational Complexity: A Modern Approach. Cambridge: Cambridge University Press.

The hardness of the maximum independent set problem is a classic result; see, e.g., Theorem 2.15 in the book of Arora and Barak (2009) above, though this proof does not show that the hardness also applies to the case of 3-regular graphs. According to a comment by Gamow on stackexchange, a "complete NP-completeness proof for this problem is given right after Theorem 4.1 in the following paper": Bojan Mohar: "Face Covers and the Genus Problem for Apex Graphs." Journal of Combinatorial Theory, Series B 82, 102–117 (2001). On the same page, Yixin Cao notes that there is a way to remove vertices of degree larger than three (presumably without changing the independence number) and refers to another stackexchange page.
Methodology article

Biomedical relation extraction via knowledge-enhanced reading comprehension

Jing Chen1, Baotian Hu1, Weihua Peng2, Qingcai Chen (ORCID: 0000-0001-8473-7293)1,3 & Buzhou Tang1,3

BMC Bioinformatics volume 23, Article number: 20 (2022)

In biomedical research, chemical and disease relation extraction from unstructured biomedical literature is an essential task. Effective context understanding and knowledge integration are two main research problems in this task. Most work on relation extraction focuses on classification for entity mention pairs. Inspired by the effectiveness of machine reading comprehension (RC) with respect to context understanding, solving biomedical relation extraction with the RC framework at both intra-sentential and inter-sentential levels is a new topic worth exploring. Besides the unstructured biomedical text, many structured knowledge bases (KBs) provide valuable guidance for biomedical relation extraction. Utilizing knowledge in the RC framework is also worth investigating. We propose a knowledge-enhanced reading comprehension (KRC) framework to leverage reading comprehension and prior knowledge for biomedical relation extraction. First, we generate questions for each relation, which reformulates the relation extraction task as a question answering task. Second, based on the RC framework, we integrate knowledge representation through an efficient knowledge-enhanced attention interaction mechanism to guide the biomedical relation extraction. The proposed model was evaluated on the BioCreative V CDR dataset and the CHR dataset. Experiments show that our model achieved competitive document-level F1 scores of 71.18% and 93.3%, respectively, compared with other methods. Result analysis reveals that open-domain reading comprehension data and knowledge representation can help improve biomedical relation extraction in our proposed KRC framework. Our work can encourage more research on bridging reading comprehension and biomedical relation extraction and promote biomedical relation extraction.

Chemicals, diseases, and their relations play an important role in biomedical research [1], and relation extraction is an essential task in biomedical text information extraction. Many experts have been making efforts to perform research on automatic biomedical information extraction from unstructured text. To promote research on chemical-disease relation (CDR) extraction, the BioCreative-V community proposed a subtask: chemical-induced disease (CID) relation extraction. Additionally, [2] proposed a document-level dataset for chemical reaction (CHR) relation extraction. Here, the relations between entities are expressed not only in a single sentence but also across sentences. As described by [3], 30% of relations in the BioCreative V CDR data are expressed across more than one sentence. As an example, Fig. 1 shows the title and abstract of a document containing two chemical-induced disease pairs, (D005445, D004244) and (D005445, D010146). Among these instances, chemical 'flunitrazepam' and disease 'pain' appear in the same sentence, while chemical 'flunitrazepam' and disease 'dizziness' are expressed across sentence boundaries. Typically, relation extraction can be formulated as a classification task for candidate entity pairs, and many machine learning methods have been investigated to score mention pairs to extract relations, including traditional machine learning (ML) methods and neural network (NN)-based methods.
Most of them attempt to mine the context information between entity mention pairs to provide evidence for relation extraction. Some extract rich statistical and knowledge features, some mark the entities with start and end symbols [4, 5], and some extract the shortest dependency path between entities [1, 6, 7]. These strategies help capture the context information and compensate for the limited ability to model long-distance context sequences. Early studies mainly utilized maximum entropy (ME) models, support vector machines (SVMs) and other kernel-based models combined with rich context features (e.g., statistical linguistic features), knowledge features and graph structures [8,9,10]. Li et al. [10] also exploits co-training with additional unlabeled training data. Since feature extraction is time-consuming and difficult to extend, neural network-based methods have been widely explored and achieve significant performance. Le et al. [6] extracts the shortest dependency path (SDP) and learns context information through a Convolutional Neural Network (CNN) for CID extraction. Nguyen et al. [11] investigates the incorporation of character-based word representations into a standard CNN-based relation extraction model. Verga et al. [3] forms pairwise predictions over entire abstracts using a self-attention encoder. Zheng et al. [4] uses CNN and LSTM to learn document-level semantic information and integrates knowledge representation. Li et al. [5] utilizes recurrent piecewise convolutional neural networks integrating knowledge features. Sahu et al. [2] proposes to build a labeled-edge graph convolutional neural network on a document to capture local and non-local context dependency information for inter-sentence biomedical relation extraction. Zhou et al. [1] proposes a knowledge-guided convolution network to leverage prior knowledge representation on the SDP sequence for CID extraction. Machine reading comprehension (MRC) aims to answer a query according to its corresponding context, and one of its variants is to extract answer spans from the context. The task is formulated as a multi-classification task to classify the start index and the end index of the answer over its contexts. Inspired by its performance and comprehension ability, using MRC to solve other natural language processing (NLP) tasks has become a trend. Levy et al. [12] reduces zero-shot relation extraction to the problem of answering simple reading comprehension questions to potentially extract facts of new types that were neither specified nor observed a priori. Li et al. [13] casts the entity-relation extraction task as a multi-turn question answering problem and identifies the answer spans from the context. Li et al. [14] proposes to formulate the flat and nested named entity recognition problems as a machine reading comprehension task instead of a sequence labeling task. Additionally, tasks such as summarization and machine translation have been framed as question answering by making task specifications take the form of a question, a context and an answer [15]. Motivated by this capability of context understanding over documents, we regard biomedical relation extraction as a reading comprehension problem. We utilize a question formulated from the chemical and a relation description to query the context for diseases or chemicals, hence acquiring the relation between chemical and disease entities or the relation between chemical entities.
In this paper, we are interested in handling biomedical relation extraction with the reading comprehension framework based on an efficient pretrained language model (LM), effectively integrating knowledge with context and distinguishing different knowledge in this framework. Hence, we propose a knowledge-enhanced RC (KRC) framework for biomedical relation extraction, which integrates knowledge by effective two-step attention layers. The proposed method was evaluated on the BioCreative V CDR dataset and the CHR dataset, respectively. Experiments show that our proposed model achieved competitive performance on both datasets compared with other state-of-the-art methods. Our contributions are as follows: To the best of our knowledge, this paper is the first to propose a novel reading comprehension (RC) framework to address biomedical relation extraction from the literature. Our work may encourage more research on bridging MRC and biomedical relation extraction so as to take advantage of MRC. To make full use of the pretrained language model (LM) and knowledge representation, this paper proposes a knowledge-enhanced RC model based on pretrained LMs to improve biomedical relation extraction. Through experiments, we demonstrate the effectiveness of using open-domain reading comprehension data and knowledge information in our proposed RC framework for biomedical relation extraction. We show that our method can achieve competitive performance on two document-level datasets.

Fig. 1 The sample document. Chemical and disease mentions are marked in blue and red, respectively. CID means the chemical-induced disease relation

Given a context sequence \(C=(w_c^1, w_c^2, \ldots , w_c^n)\) and two entities \(e_1=(w_{c}^{s_{e1}}, w_{c}^{s_{e1}+1}, \ldots , w_{c}^{e_{e1}})\) and \(e_2=(w_{c}^{s_{e2}}, w_{c}^{s_{e2}+1}, \ldots , w_{c}^{e_{e2}})\) in the context, relation extraction (RE) aims to clarify the relation r between \(e_1\) and \(e_2\), where \(r \in R\) is selected from a predefined relation list R. For the chemical-induced disease (CID) relation extraction task, the relation r is 'induce'. Here, we reduce the relation extraction task to a reading comprehension task with possibly unanswerable questions. We transform the annotated RE data \((Context, Entity\ e_1, Entity\ e_2, relation\ r)\) to the RC data \((Context, Query(e_1,r), Answer\ e_2)\). Given the context sequence \(C=(w_c^1, w_c^2, \ldots , w_c^n)\), the entity \(e_1=(w_{c}^{s_{e1}}, w_{c}^{s_{e1}+1}, \ldots , w_{c}^{e_{e1}})\) and the relation r, we extract the entity \(e_2=(w_{c}^{s_{e2}}, w_{c}^{s_{e2}+1}, \ldots , w_{c}^{e_{e2}})\) from the context by answering a query \(Q=(w_q^1, w_q^2, \ldots , w_q^m)\) constructed from \(e_1\) and the relation r description. \(s_{e2}\) and \(e_{e2}\) respectively denote the start index and the end index, where \(s_{e2} \in [1,n]\), \(e_{e2} \in [1,n]\) and \(s_{e2} \le e_{e2}\). Given context C and question Q, our method either returns an answer span or indicates that there is no answer.
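A minimal sketch (Python; the query template follows the example given later in the paper, and the sample context is abridged) of the \((Context, e_1, e_2, r) \rightarrow (Context, Query(e_1, r), Answer\ e_2)\) transformation described above:

```python
def re_to_rc_instance(context, chemical, relation_desc="induce"):
    # Build the natural-language query from entity e1 and the relation
    # description; the disease span (e2) becomes the extractive answer.
    query = f"what disease does {chemical} {relation_desc}"
    return {"context": context, "query": query}

context = ("Flunitrazepam was administered ... patients reported pain "
           "at the injection site.")
instance = re_to_rc_instance(context, "flunitrazepam")
print(instance["query"])  # expected answer span in the context: "pain"

# A negative candidate pair yields the same structure with a null answer,
# conventionally mapped to start = end = 0 (the [CLS] position).
```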
We adopt a competitive pretrained language model, BERT [16], as our backbone, which handles SQuAD-style [17] extractive question answering and suits the condition of no answer. Our model consists of three major layers: (1) a BERT encoder layer; (2) a knowledge-enhanced attention layer; (3) a prediction layer. Details are described as follows.

BERT encoder layer

To be in line with BERT, given the context sequence \(C=(w_c^1, w_c^2, \ldots , w_c^n)\) and the query sequence \(Q=(w_q^1, w_q^2, \ldots , w_q^m)\), the input is formulated as a sequence \(S_{q,c} =(\mathrm{[CLS]}, w_q^1, w_q^2, \ldots , w_q^m,\) \(\mathrm{[SEP]}, w_c^1, w_c^2, \ldots , w_c^n,\) \(\mathrm{[SEP]})\), where \(\mathrm{[CLS]}\) indicates the start token of Q and \(\mathrm{[SEP]}\) separates Q and C. Then, the word sequence input is tokenized to the token sequence \({\mathbf{s}} ={[s_i]_{i=1}^k}\), concatenated with position embeddings and segment embeddings. Denote the BERT encoder, which consists of L stacks of transformers, as \(BERT(\cdot )\), as follows: $$\begin{aligned} {\mathbf{s}} _i^l=Transformer({\mathbf{s}} _i^{l-1}),l \in [1,L] \end{aligned}$$ The hidden representation \({\mathbf{h}} ={[h_i]_{i=1}^k}\) for the token sequence obtained from BERT is \({\mathbf{h}} =BERT({\mathbf{s}} )\).

Knowledge-enhanced attention layer

To obtain the knowledge-enhanced context representation, this layer is designed to integrate knowledge representation with the context representation of BERT. Here, we describe the details of this layer based on CID relation extraction. In the knowledge base, the same entity pair in different documents may carry different relation types. This layer shows how to integrate noisy knowledge representation into the context representation simply and effectively. It takes the BERT hidden representation \({\mathbf{h}}\) and the knowledge representation \({\mathbf{r}}\) as inputs, and outputs the knowledge-enhanced representation \({\mathbf{h}} ^{'}\). To integrate prior knowledge representation, we first extract chemical-disease triples from the Comparative Toxicogenomics Database (CTD) [18] and employ TransE [19] to learn knowledge representation. Following [1], we extract (chemical, disease, relation) triples from both the CDR corpus and the CTD knowledge base. In the CTD base, there are three types of relations, including 'marker/mechanism', 'therapeutic' and 'inferred-association', where only 'marker/mechanism' indicates the CID relation. For those pairs in CDR but not in CTD, we set their relation to a specific symbol 'null'. Thus, there are four types of relations among all the triples, and we finally obtain 2,577,184 triples for knowledge representation learning. Then, all the generated triples are regarded as correct examples to learn low-dimensional chemical representations, disease representations and relation representations \({\mathbf{r}} _t\) by TransE, where \({\mathbf{r}} _t \in {\mathbb {R}}^{d_2}\) and \(d_2\) denotes the representation dimension. Here, the chemical, disease and relation representations are initialized randomly with the normal distribution for training. It is worth noting that there may be more than one relation type between an entity pair (Fig. 2).

Fig. 2 The overview of the knowledge-enhanced RC model
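To make the knowledge-representation step concrete, here is a minimal numpy sketch of TransE-style training on (head, relation, tail) triples with margin-based negative sampling; the random triples are placeholders for the CTD/CDR triples, and all hyperparameters are illustrative rather than the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(4)
n_ent, n_rel, dim, margin, lr = 100, 4, 32, 1.0, 0.01
E = rng.normal(scale=0.1, size=(n_ent, dim))   # entity embeddings
R = rng.normal(scale=0.1, size=(n_rel, dim))   # relation embeddings

def dist(h, r, t):
    # TransE: a true triple (h, r, t) should satisfy E[h] + R[r] ~ E[t].
    return np.linalg.norm(E[h] + R[r] - E[t])

triples = [(rng.integers(n_ent), rng.integers(n_rel), rng.integers(n_ent))
           for _ in range(500)]                # placeholders for real triples

for epoch in range(10):
    for h, r, t in triples:
        t_neg = int(rng.integers(n_ent))       # corrupt the tail entity
        if dist(h, r, t) + margin > dist(h, r, t_neg):
            # One SGD step on the margin loss d(h,r,t) - d(h,r,t_neg).
            g_pos = E[h] + R[r] - E[t]
            g_pos /= np.linalg.norm(g_pos) + 1e-9
            g_neg = E[h] + R[r] - E[t_neg]
            g_neg /= np.linalg.norm(g_neg) + 1e-9
            E[h] -= lr * (g_pos - g_neg)
            R[r] -= lr * (g_pos - g_neg)
            E[t] += lr * g_pos
            E[t_neg] -= lr * g_neg
```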
Then, the probable relation representations \({\mathbf{r}} =[{\mathbf{r}} _t]_{t=1}^3\) between the candidate entities of each instance are introduced into the RC model to provide evidence. Here, we take two-step attention to combine the knowledge information and the context information. First, we adopt an attention mechanism to select the most relevant KB relation representation for the hidden representation of each token. A bilinear [20] operation is employed to calculate the attention weights between the hidden representation \({\mathbf{h}} _i \in {\mathbb {R}}^{d_1}\) and the relation representation \({\mathbf{r}} _t \in {\mathbb {R}}^{d_2}\), where \({\mathbf{W}} _{1} \in {\mathbb {R}}^{{d_1}\times {d_2}}\) and \({\mathbf{b}} _{1} \in {\mathbb {R}}^{d_2}\) are trainable parameters. $$\begin{aligned} \alpha _{it}=\frac{\exp ({\mathbf{h}} _{i}{\mathbf{W}} _{1}{\mathbf{r}} _{t}+{\mathbf{b}} _{1})}{\sum _{t^{'}=1}^{3}\exp ({\mathbf{h}} _{i}{\mathbf{W}} _{1}{\mathbf{r}} _{t^{'}}+{\mathbf{b}} _{1})} \end{aligned}$$ Then, each relation representation \({\mathbf{r}} _t\) is aligned to each hidden state \({\mathbf{h}} _i\). Here, \({\mathbf{k}} _i\) is regarded as the weighted relation representation corresponding to each token. $$\begin{aligned} {\mathbf{k}} _{i}=\sum _{t}\alpha _{it}{\mathbf{r}} _{t} \end{aligned}$$ Second, we adopt a knowledge-context attention mechanism between the token's knowledge representation \({\mathbf{k}} _{i}\) at each position of the token sequence and the hidden representation \({\mathbf{h}} _j\). A bilinear operation is employed between \({\mathbf{k}} _i\) and \({\mathbf{h}} _j\) to obtain weights on the hidden representation, where \({\mathbf{W}} _2 \in {\mathbb {R}}^{{d_2}\times {d_1}}\) and \({\mathbf{b}} _2 \in {\mathbb {R}}^{d_1}\) are parameters. $$\begin{aligned} \beta _{ij}=\frac{\exp ({\mathbf{k}} _{i}{\mathbf{W}} _{2}{\mathbf{h}} _{j}+{\mathbf{b}} _{2})}{\sum _{j^{'}=1}^{k}\exp ({\mathbf{k}} _{i}{\mathbf{W}} _{2}{\mathbf{h}} _{j^{'}}+{\mathbf{b}} _{2})} \end{aligned}$$ Finally, the hidden representations \({\mathbf{h}}\) of the tokens are aligned to the weighted knowledge representations \({\mathbf{k}}\) and weighted at each position i. Here, we denote the output after our two-step attention as \({\mathbf{h}} ^{'}=[{\mathbf{h}} _{i}^{'}]_{i=1}^{k}\). $$\begin{aligned} {\mathbf{h}} _{i}^{'}=\sum _{j}\beta _{ij}{\mathbf{h}} _{j} \end{aligned}$$ Here, \({\mathbf{h}} _{i}^{'}\) is the context representation enhanced with knowledge representation.
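A numpy sketch of the two-step attention just described (dimensions and seeds are illustrative; the bias terms of the bilinear scores are omitted for simplicity):

```python
import numpy as np

def softmax(x, axis=-1):
    z = np.exp(x - x.max(axis=axis, keepdims=True))
    return z / z.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(5)
k_len, d1, d2, n_rel = 8, 16, 12, 3
H = rng.normal(size=(k_len, d1))     # BERT hidden states h_i
Rel = rng.normal(size=(n_rel, d2))   # candidate KB relation embeddings r_t
W1 = rng.normal(size=(d1, d2))
W2 = rng.normal(size=(d2, d1))

# Step 1: each token attends over the candidate relations (bilinear scores),
# producing a per-token weighted relation representation k_i.
alpha = softmax(H @ W1 @ Rel.T, axis=1)   # (k_len, n_rel)
K = alpha @ Rel                           # (k_len, d2)

# Step 2: each token's knowledge vector attends back over the context,
# producing the knowledge-enhanced representation h_i'.
beta = softmax(K @ W2 @ H.T, axis=1)      # (k_len, k_len)
H_prime = beta @ H                        # (k_len, d1)
print(H_prime.shape)
```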
Prediction layer

To obtain the final representation for prediction, the hidden representation \(\mathbf{h} _i\) and the knowledge-enhanced representation \(\mathbf{h} _i^{'}\) are first combined with a linear operation to obtain the weighted representation \(\mathbf{v} _i = \mathbf{W} _{h}\mathbf{h} _{i}+\mathbf{W} _{h^{'}}\mathbf{h} _{i}^{'}+\mathbf{b}\). Then, we concatenate the knowledge-enhanced representation \(\mathbf{h} _i^{'}\) with the weighted representation \(\mathbf{v} _i\) to obtain the input \(\mathbf{u} _i=[\mathbf{h} _i^{'}; \mathbf{v} _i]\) of the prediction layer. A feed-forward network FFN with RELU [21] activation, which has worked well in existing models, is applied to the knowledge attention result. Finally, the output is used to predict the start and end indexes of answers. For the situation where there is a null answer, its start and end indexes are both zero for the optimization of the objective function. Actually, the index of zero indicates the start token '[CLS]'. It is not real text in the context and does not influence the optimization of the model for the indexes of non-null answers. $$\begin{aligned} FFN(\mathbf{u} _{i}, \mathbf{W} _{3}, \mathbf{b} _{3}, \mathbf{W} _{4}, \mathbf{b} _{4}) = RELU(\mathbf{u} _{i}\mathbf{W} _{3}+\mathbf{b} _{3})\mathbf{W} _{4}+\mathbf{b} _{4} \end{aligned}$$ Here, \(\mathbf{W} _{4}\), \(\mathbf{b} _{4}\), \(\mathbf{W} _3\) and \(\mathbf{b} _3\) are trainable parameters. Along the sequence dimension, the start probability distribution and the end probability distribution for each token \(\mathbf{s} _i\) are calculated as: $$\begin{aligned} p_i^{s}= & {} \frac{\exp (FFN(\mathbf{u} _{i}, \mathbf{W} ^s _3, \mathbf{b} ^s _3, \mathbf{W} ^s _4, \mathbf{b} ^s _4))}{\sum _{j}\exp (FFN(\mathbf{u} _{j}, \mathbf{W} ^s _3, \mathbf{b} ^s _3, \mathbf{W} ^s _4, \mathbf{b} ^s _4))} \end{aligned}$$ $$\begin{aligned} p_i^{e}= & {} \frac{\exp (FFN(\mathbf{u} _{i}, \mathbf{W} ^e _3, \mathbf{b} ^e _3, \mathbf{W} ^e _4, \mathbf{b} ^e _4))}{\sum _{j}\exp (FFN(\mathbf{u} _{j}, \mathbf{W} ^e _3, \mathbf{b} ^e _3, \mathbf{W} ^e _4, \mathbf{b} ^e _4))} \end{aligned}$$ After answer prediction, a predicted disease text or a null answer is obtained. If the predicted disease text matches the gold disease name, a CID relation is detected between the disease and the chemical described in its corresponding question. After relation extraction on intra-sentential and inter-sentential data, the two sets of prediction results are merged. Since all the candidate instances with respect to mention pairs are extracted, we judge that an entity pair has a CID relation as long as at least one instance in which the CID relation exists was detected. Since several documents may have no candidate CID relations after data preprocessing, similar to many other systems, we take the following heuristic rule to find the likely CID pairs in them: all chemicals in the title are associated with all diseases in the abstract. To predict the start and end indexes of answer spans, the optimization objective is to maximize the conditional probability \(p(y_{s}, y_{e}|\mathbf{s} )\) of the start index \(y_{s}\) and end index \(y_{e}\) given the input sequence \(\mathbf{s}\). The loss is defined as the average of the log probabilities of the ground-truth start and end positions under the predicted distributions. N is the number of examples. The answer span indexed by (i, j) with maximum \(p_{i}^{s}p_{j}^{e}\) is chosen as the answer span. $$\begin{aligned} Loss = -\frac{1}{N}\sum _{l=1}^{N}\frac{y_{s}\log (p^{s})+y_{e}\log (p^{e})}{2} \end{aligned}$$
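A small numpy sketch of the span-prediction step (sequence length, gold indices and logits are illustrative): it computes the start/end distributions, the training loss for one example, and the decoding rule that picks the span (i, j), i <= j, maximizing \(p_{i}^{s}p_{j}^{e}\):

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

rng = np.random.default_rng(6)
k_len = 10
start_logits = rng.normal(size=k_len)  # FFN outputs along the sequence
end_logits = rng.normal(size=k_len)
p_s, p_e = softmax(start_logits), softmax(end_logits)

# Loss for one example: average negative log-likelihood of the gold start
# and end positions; a null answer uses y_s = y_e = 0 (the [CLS] position).
y_s, y_e = 3, 5
loss = -(np.log(p_s[y_s]) + np.log(p_e[y_e])) / 2

# Decoding: the span (i, j) with i <= j maximizing p_s[i] * p_e[j].
best = max(((i, j) for i in range(k_len) for j in range(i, k_len)),
           key=lambda ij: p_s[ij[0]] * p_e[ij[1]])
print(loss, best)
```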
We evaluated our model on two document-level biomedical relation extraction datasets: the BioCreative V CDR dataset and the CHR dataset. Table 1 shows the overall statistics of the two datasets. The BioCreative V CDR dataset was derived from the Comparative Toxicogenomics Database (CTD) for CID relation extraction. The positions and MeSH IDs of chemicals and diseases are annotated by experts. It contains 1500 titles and abstracts of PubMed articles, where the training, development and test sets each consist of 500 abstracts. Following [1], we combine the training set and development set together due to the limited number of CDR examples, 80% of which is used for training and 20% of which is used for validation. Experiments are also conducted on the CHR dataset [2]. It was created by distant supervision and is a document-level dataset with relations between chemicals. It contains 12,094 titles and abstracts of PubMed articles: 7298, 3158 and 9578 for the training, development and test sets, respectively.

Table 1 The overall statistics of the CDR and CHR datasets

The experimental results are evaluated by comparing the set of annotated chemical-disease relations in the document with the set of predicted chemical-disease relations through precision (P), recall (R) and F1-measure (F1).

Data preprocessing

To transform the data to RC-format instances, the data was preprocessed as follows.

Instance Construction We extracted entity pair candidate instances from the original data, including intra-sentential and inter-sentential instances (a code sketch of these rules follows at the end of this section). For intra-sentential instances, all the entity pairs existing in the same sentence are extracted. For the inter-sentential instances, we follow some of the rules of [1] to extract candidates: (1) in a document, all intra-sentential chemical-disease instances will not be considered as inter-sentential instances; (2) the sentence distance of all the inter-sentential instances will not be more than 3. Thus, the chemical-disease entity candidate pairs and their corresponding contexts are extracted. To be in line with the RC model, we remove (mask) the other disease mentions except for the disease in the current pair when multiple diseases occur in the context.

Hypernym Filtering According to the annotation guideline of the CID task, the goal is to extract the most specific chemical-disease pair. Following [1], we remove the instances containing hyper entities that have more specific entities in the document according to the entity index in the Medical Subject Headings (MeSH) [22]. Some positive chemical-disease entity pairs may be filtered by this strategy and are treated as false negative instances.

Query Construction After extracting the chemical-disease candidate instances, we format a natural language query combining the entity \(e_{1}\) mention and the relation r description; here r is the chemical-induced disease relation. Taking the candidate instance (flunitrazepam, pain) in \(S_7\) in Fig. 1 as an example, we formulate the query "what disease does flunitrazepam induce" to ask the context, expecting the answer pain. We also adopt another strategy to format a pseudo query for comparison with the natural language query: we concatenate the entity \(e_{1}\), the relation r description and the type of entity \(e_{2}\) to construct the pseudo query. On the CHR dataset [2], all the chemical-chemical entity pairs and their full titles and abstracts are extracted for instance construction. After extracting the chemical candidate instances, queries were also constructed with the entity \(e_{1}\) mention and the relation r description. Here, r is the chemical reaction relation description.
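An illustrative sketch (Python) of the candidate construction rules above; the entity IDs echo the Fig. 1 example, while the sentence indices are hypothetical:

```python
def build_candidates(chem_mentions, dis_mentions, max_dist=3):
    # mentions: lists of (entity_id, sentence_index) pairs.
    intra, inter = set(), set()
    for c, sc in chem_mentions:
        for d, sd in dis_mentions:
            if sc == sd:
                intra.add((c, d))
            elif abs(sc - sd) <= max_dist:
                inter.add((c, d))
    # Rule (1): pairs that co-occur in some sentence are intra-sentential only.
    # Rule (2): inter-sentential pairs are at most max_dist sentences apart.
    return intra, inter - intra

chems = [("D005445", 0), ("D005445", 6)]
dises = [("D010146", 6), ("D004244", 2)]
print(build_candidates(chems, dises))
# intra: {('D005445', 'D010146')}, inter: {('D005445', 'D004244')}
```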
We trained the knowledge representations with TransE for 1000 epochs, and the relation embedding size was set to 256. For the CDR dataset, we tuned the hyperparameters on the new development set to optimize our proposed model. We use the uncased BioBERT (base) as the pretrained language model. We set the batch size to 12 and 32 on the CDR dataset and the CHR dataset, respectively. The learning rates of the BERT encoder and the downstream structure are set to 3e-5 and 1e-4 in the experiments without KBs, while they are set to 2e-5 and 3e-5 in the experiments with KBs.

Compared models

To evaluate our approach, we compared the proposed model with existing relevant models. As shown in Table 2, the comparison models for the CDR dataset are divided into two categories: methods with knowledge bases (KBs) and methods without knowledge bases. Each category includes traditional machine learning (ML)-based methods and neural network (NN)-based methods. Here, we mainly introduce the NN-based methods. CNN+SDP [6] uses a CNN to learn features on the shortest dependency path between a disease and a chemical for CID relation extraction. BRAN (Transformer) [3] utilizes an efficient transformer to encode abstracts and forms pairwise predictions, using a bi-affine operation to score all pairs of mentions and aggregating over mention pairs. Bio-Seq (LSTM+CRF) [23] proposes a sequence labeling framework for biomedical relation extraction and extends it with multiple specified feature extractors, especially for inter-sentential level relation extraction. LSTM+CNN [4] utilizes LSTM and CNN to hierarchically extract document representations, integrating relation knowledge from CTD. RPCNN (PCNN) [5] uses PCNN and RNN to extract document-level representations, integrating the knowledge features of four medical KBs for CID relation classification. KCN (GCNN) [1] combines the shortest dependency path (SDP) sequence and knowledge representation for CID relation classification. It adopts a gated convolutional network (GCNN) with attention pooling, combining entity and relation knowledge representations. Ours: Different from other models, we propose a new RC framework for biomedical relation extraction and utilize a pretrained LM combined with the knowledge of relation representations between the possible chemical and disease. It is worth noting that our model can automatically distinguish different types of relation knowledge from CTD.

Performance comparison with previous methods

We compare our proposed model with previous work on the CDR and CHR datasets, respectively. For the CDR dataset, we divide previous models into two categories: models without knowledge bases and models with knowledge bases. Here, the compared models are rich and diverse, covering heuristic rules, joint training with NER, relation classification, sequence labeling and so on. Among the systems without KBs, much work is based on neural networks (NNs). As shown in Table 2, the graph kernel-based SVM is competitive among the traditional ML methods, but most NN-based methods outperform the traditional ML methods, which indicates NNs' more effective context modeling ability. Almost all of them use entity-pair relation classification, except that Bio-Seq [23] adopts sequence labeling to deal with the CID extraction task. Under the condition of no extra knowledge, our RC-based model outperforms the previous work and achieves an improvement of 0.19%. Bio-Seq [23] is a sequence labeling framework adopting LSTM and CRF; compared with Bio-Seq [23], there is an improvement of 2.57%. CNN+SDP [6] is a relation classification method. It extracts the CID relation with a CNN over the shortest dependency path (SDP) of the context to deal with long sequences and achieves an F1 score of 65.88%. Our method transforms relation classification to reading comprehension on both intra-sentential and inter-sentential data. Different from CNN+SDP [6], our model does not need to transform the context sequence into an SDP and just uses the pretrained LM to extract context information directly from the context sequences. We conduct pointer prediction over the context sequences instead of classification and achieve state-of-the-art performance of 66.07% on the CDR extraction data under the condition of no KBs.
Table 2 Doc-level performance comparison of our proposed model without and with knowledge on the CDR dataset

In addition to the context information, the knowledge information also plays an important role in our RC-based model for CID relation extraction. The systems with KBs mainly include feature-based traditional ML methods and neural network (NN)-based methods. Most of the feature-based traditional ML methods adopt support vector machines (SVMs) and other kernel-based models. More details and differences of the compared NN-based methods can be seen in Section 4.4. Inspired by [1], we utilize low-dimensional knowledge representations to guide our RC-based model for chemical-induced disease extraction and then derive the CID relation. Different from [1], our method does not need to extract the SDP of the sequence and can integrate more than one type of relation from CTD into the RC model through the knowledge attention mechanism. Among the NN-based models with knowledge representation, our KRC model achieves competitive performance. Compared with the KB-only method and with our RC model without KBs (RC in Table 2), our RC model with KBs (KRC in Table 2) performs better, which indicates that our KRC model can discern different knowledge and incorporate it effectively with our attention layers. To test our KRC model on other KBs, we also conducted experiments with the disease-drug association network (DCh-Miner) of the Stanford SNAP database. Compared with the performance without KBs, it is slightly better; compared with the performance with CTD knowledge, it does not perform as well. This may be caused by the difference between the two KBs. DCh-Miner only contains one relation, meaning that the drug is associated with the disease, which we name "association", while the CTD knowledge contains three relations, "therapeutic", "inferred-association" and "marker/mechanism", of which only "marker/mechanism" indicates the relation of chemical-induced disease. It is worth noting that DCh-Miner is extracted from CTD and the relation "association" covers the three relations of CTD. The knowledge in DCh-Miner may therefore mislead the model and decrease the performance.

Table 3 Doc-level performance comparison of our proposed model without knowledge on the CHR dataset

For the CHR dataset, there are only models without knowledge bases for comparison. Since it was created by distant supervision with the graph database Biochem4j, and the chemical relation labels in the dataset are the same as those in Biochem4j, we did not add this knowledge to our proposed model for prediction. As shown in Table 3, three previous NN-based methods are compared with our proposed model. GCNN [2] builds a labeled-edge graph convolutional neural network model on a document graph for document-level biomedical relation extraction. Different from GCNN [2], we transform entity-pair classification over a document to reading comprehension over a document. Compared with GCNN [2], our proposed RC model improves the best F1 score by 5.8 percentage points and achieves state-of-the-art performance.

The effect of different pretrained models

As shown in Table 4, we compared models finetuned on extra open-domain reading comprehension data. Since our data is biomedical text, we choose BioBERT [28], a BERT language model pretrained on a large scale of biomedical text, as the base model.
To investigate the effect of reading comprehension pretraining, we further utilize the BioBERT model finetuned on the SQuAD [17] data by [29]. We name this model biobert+SQuAD in Table 4. Here, SQuAD is a large-scale open-domain reading comprehension dataset. Compared with both traditional ML methods and neural network-based methods, our RC model based on BioBERT achieves top-3 performance. This indicates that our proposed RC framework is effective for biomedical relation extraction. Compared with the results on the BioBERT model alone, adding open-domain reading comprehension data helps improve the performance of CID relation extraction on the CDR data, with an improvement of 1.87%. Benefiting from this RC framework, we can utilize large-scale data from an open-domain reading comprehension task to help biomedical relation extraction, especially when the biomedical relation extraction data is scarce.

Table 4 Results over LMs finetuned on an open-domain reading comprehension dataset without KBs on the CDR dataset

The effect of query construction

As described in Section 4.2, two kinds of question construction methods are designed for our RC model. To select a better query construction strategy, we investigate the effect of these construction methods on performance. The experiments are conducted on our RC model based on the SQuAD-finetuned BioBERT model under the condition of no knowledge. The performance with the pseudo query and the natural language query is shown in Table 5. The results show that using the natural language query achieves a higher document-level F1 of 66.07%. The reason may be that the natural query provides fine semantic hints for CID relation extraction, while the pseudo query is just the simple concatenation of one entity mention, a relation type and another entity type, which may confuse the model.

Table 5 Results over pseudo queries and natural queries without KBs on the CDR dataset. 'Natural Query' means natural language queries

The effect of knowledge representation

As shown in Table 6, we investigate the effect of knowledge combination, including only KB, no KB (RC), and adding KB. In the part of adding KB, we compare two kinds of methods for knowledge-enhanced attention in our RC model. Only KB means we just directly match the relations of entity pairs in the CDR test set with the triples in CTD. From the result of only KB, we can see that the recall is high but the precision is poor. This indicates that there is noise in the triples extracted from CTD; also, these triples cannot fully cover the CDR data. Thus, it is necessary to combine the CDR data and CTD knowledge in the RC model. Compared with the results of only KB, our proposed RC model performs better, which indicates that the context information can indeed be captured by our RC model for CID relation extraction. As described in Section 3.2, there may be more than one KB relation representation between an entity pair. To further investigate the effect of our knowledge-enhanced RC model, we compared two operations on the KB relation representations in the first step of our knowledge-enhanced attention layer. RC+KB(atten2) means we adopt the average operation over the different KB relation representations. RC+KB(atten1+atten2) means we adopt the attention mechanism to automatically select the relevant KB relation representation in the model.
The results on RC+KB(atten2) and RC+KB(atten1+atten2) show that the automatic attention selection works when more than one relation representation occurs and achieves a higher F1 of 71.18%. To further analyze the results of instances accompanied by different types of relations extracted from CTD, a detailed case study of no KBs and using KBs can be seen in the following section.

Table 6 Ablation study over our proposed KRC model on the CDR dataset

To investigate the robustness of our approach in dealing with knowledge, we selected knowledge from the CTD knowledge base (KB) for intra-sentential and inter-sentential data, respectively, by four ratios (0.25, 0.5, 0.75 and 1.0) to conduct the knowledge-enhanced experiments on the CID task. The performance gradually increases with the increase of the knowledge proportion. When the selection ratio is 1.0, about 74% of the data is guided by CTD knowledge. When the selection ratio is 0.75, about 55% of the data is guided by CTD knowledge. As shown in Table 7, the performance of our KRC model surpasses that of our model without KBs and improves obviously when the selection ratio is at least 0.75, that is to say, when the proportion of data guided by knowledge is more than 55%. Otherwise, the knowledge decreases the performance.

Table 7 Doc-level performance on the CDR dataset with different scales of CTD knowledge

Case study of knowledge effect

This section analyzes the relation complexity of integrating knowledge, and good and bad cases with and without knowledge. According to the CTD guideline, there are three relation types in the knowledge base. Therefore, more than one candidate relation type can be extracted from CTD for an entity pair in some cases, and they may be consistent or inconsistent with the true relation type of the entity pair. To discuss the relation complexity when integrating knowledge, we counted the proportion of different relation combinations extracted from CTD for correctly extracted CID pairs, as shown in Fig. 3. Here, 'inferred-association' means chemicals are inferred to be associated with diseases via CTD-curated chemical-gene interactions. 'marker/mechanism' indicates that a chemical may cause a disease. 'therapeutic' means a chemical that has a known or potential therapeutic role in a disease. '#' is a separator. The extracted relations used to guide prediction for each CID pair are complex; some of them are composed of more than one relation. 'inferred-association#marker/mechanism' contains two potentially related relation types. 'marker/mechanism#therapeutic' contains two conflicting relation types. 'inferred-association#marker/mechanism#therapeutic' contains both related and conflicting relation types. Beyond the related relation types, the statistics in Fig. 3 show that our method can also deal with cases with some noisy relation knowledge and extract the correct relations for entity pairs.

Fig. 3 The proportion of relation combination types extracted from CTD in correctly predicted cases at the intra-sentential and inter-sentential levels

Table 8 Good cases and bad cases of our RC model with and without KBs

Additionally, we analyze some cases with different relation types extracted from CTD. As shown in Table 8, in the first example, the two relation types in CTD are potentially related. In this case, the CTD knowledge is consistent with the true relation type of the candidate entity pair.
According to the CTD knowledge, 'inferred-association' and 'marker/mechanism', our KRC model extracted the answer 'syndrome of inappropriate secretion of antidiuretic hormone' for the vincristine-induced disease correctly and predicted that vincristine induces the syndrome of inappropriate secretion of antidiuretic hormone, while our RC model without KBs extracted no answer and could not detect the CID relation in this case. In the second example, we explore the case with two conflicting relation types in CTD. In this case, 'therapeutic' is inconsistent with the true relation type of the candidate entity pair. Integrating two different relation types, our KRC model learned from the context and successfully picked the most relevant relation, predicting that the answer to the xylometazoline-induced disease was 'cataleptic'. While integrating no KBs, our RC model predicted that there was no answer in the context for the naphazoline-induced disease. There are still implicit instances from which it is difficult for our model to extract disease spans, although the relation types from CTD knowledge indicate that the chemical may cause the disease. Take the context "Switching the immunosuppressive regimen from tacrolimus to cyclosporine did not improve the clinical situation. The termination of treatment with any calcineurin inhibitor resulted in a complete resolution of that complication. CONCLUSIONS: Posterior reversible encephalopathy syndrome after liver transplant is rare." of the third case as an example. When asked the query "what disease does tacrolimus induce?", the answer "Posterior reversible encephalopathy syndrome" is expected to be extracted to indicate the CID relation between the chemical and the disease. However, no answer ("null") was predicted. Here, the context does not obviously reveal the CID relation between "tacrolimus" and "Posterior reversible encephalopathy syndrome". It just implies that the termination of the calcineurin inhibitor "tacrolimus" results in a complete resolution of the complication "Posterior reversible encephalopathy syndrome", which indicates the CID relation between the two entities. To detect the main error sources, we performed an error analysis on the final results of our proposed model on the CDR data, as shown in Fig. 4. Among these errors, 293 negative chemical-disease entity pairs were wrongly classified as positive, accounting for 48.19%. In these instances, disease mentions were extracted by our KRC model while actually no answer should have been predicted. Some inconsistent annotations may lead to some incorrectly annotated instances. Some curated CTD knowledge may mislead the predictions. Some instances containing complex context may make it difficult for our KRC model to extract the correct chemical-induced disease. Taking the sentence "Ethambutol-induced toxic optic neuropathy was suspected and tablet ethambutol was withdrawn." as an example, 'induced' appears near 'Ethambutol' and 'toxic optic neuropathy', which may mislead the model to extract 'optic neuropathy' as the ethambutol-induced disease instead of the null answer.

Fig. 4 The error distribution. FNs denotes the false negative examples. FPs denotes the false positive examples. FNs(MI) denotes the missing instances for prediction in FNs. FNs(EPI) denotes the error-predicted instances in FNs

315 positive chemical-disease entity pairs were wrongly classified as negative, accounting for 51.81%.
Some positive instances were removed by the instance construction rules and hypernym filtering, and thus never appeared for the model to predict, which resulted in 132 errors, accounting for 21.71%. In some other positive instances, no answer was extracted while actually disease mentions should have been extracted. Taking the sentence "METHODS: Newborn piglets received levobupivacaine until cardiovascular collapse occurred." as an example, there is no obvious trigger or expression of the relation 'induce', which may make it difficult to extract the levobupivacaine-induced disease 'cardiovascular collapse'. Besides, a few cases with the relation type 'null' in the CTD knowledge led to incorrect predictions.

In this paper, we propose a novel knowledge-enhanced reading comprehension framework for biomedical relation extraction, incorporating an effective knowledge-enhanced attention mechanism to combine noisy knowledge. In this RC framework, we show that open-domain reading comprehension data and knowledge representation can significantly improve the performance of biomedical relation extraction. The experiments on the CDR dataset and the CHR dataset show that our proposed model achieved competitive F1 values of 71.18% and 93.3%, respectively, compared with other methods. In the future, we would like to design a more sophisticated reading comprehension model for biomedical relation extraction and apply it to other, more complex biomedical tasks.

The BioCreative V chemical disease relation (CDR) dataset can be downloaded at: https://biocreative.bioinformatics.udel.edu/tasks/biocreative-v/track-3-cdr/. The chemical reaction (CHR) dataset can be downloaded at: http://nactem.ac.uk/CHR/

Footnotes
1. https://biocreative.bioinformatics.udel.edu/tasks/biocreative-v/track-3-cdr/
2. https://github.com/thunlp/KB2E/
3. http://snap.stanford.edu/biodata/datasets/10004/10004-DCh-Miner.html

Abbreviations: RC: Reading comprehension; CID: Chemical-induced disease; CDR: Chemical disease relation; CHR: Chemical reaction; KBs: Knowledge bases; NN: Neural network; ME: Maximum entropy; SVM: Support vector machine; SDP: Shortest dependency path; CNN: Convolutional neural network; LM: Language model; RE: Relation extraction; CTD: Comparative Toxicogenomics Database; MeSH: Medical subject headings

Zhou H, Lang C, Liu Z, Ning S, Lin Y, Du L. Knowledge-guided convolutional networks for chemical-disease relation extraction. BMC Bioinform. 2019;20(1):260. https://doi.org/10.1186/s12859-019-2873-7.

Sahu SK, Christopoulou F, Miwa M, Ananiadou S. Inter-sentence relation extraction with document-level graph convolutional neural network. In: Korhonen A, Traum DR, Màrquez L, editors. Proceedings of the 57th conference of the association for computational linguistics, ACL 2019, Florence, Italy, July 28–August 2, 2019, volume 1: long papers, p. 4309–4316.

Verga P, Strubell E, McCallum A. Simultaneously self-attending to all mentions for full-abstract biological relation extraction. 2018. p. 872–884.

Zheng W, Lin H, Liu X, Xu B. A document level neural model integrated domain knowledge for chemical-induced disease relations. BMC Bioinform. 2018;19(1):328. https://doi.org/10.1186/s12859-018-2316-x.

Li H, Yang M, Chen Q, Tang B, Wang X, Yan J. Chemical-induced disease extraction via recurrent piecewise convolutional neural networks. BMC Med Inf Decis Mak. 2018;18(S-2):45–51. https://doi.org/10.1186/s12911-018-0629-3.

Le H, Can D, Dang TH, Tran M, Ha Q, Collier N. Improving chemical-induced disease relation extraction with learned features based on convolutional neural network. In: KSE 2017. 2017. p. 292–297. https://doi.org/10.1109/KSE.2017.8119474.
Zhou H, Ning S, Yang Y, Liu Z, Lang C, Lin Y. Chemical-induced disease relation extraction with dependency information and prior knowledge. J Biomed Inform. 2018;84:171–8. https://doi.org/10.1016/j.jbi.2018.07.007.

Gu J, Qian L, Zhou G. Chemical-induced disease relation extraction with various linguistic features. Database. 2016. https://doi.org/10.1093/database/baw042.

Xu J, Wu Y, Zhang Y, Wang J, Lee H, Xu H. CD-REST: a system for extracting chemical-induced disease relation in literature. Database. 2016. https://doi.org/10.1093/database/baw036.

Li Z, Yang Z, Lin H, Wang J, Gui Y, Zhang Y, Wang L. Cidextractor: a chemical-induced disease relation extraction system for biomedical literature. In: 2016 IEEE international conference on bioinformatics and biomedicine (BIBM). IEEE; 2016. p. 994–1001.

Nguyen DQ, Verspoor K. Convolutional neural networks for chemical-disease relation extraction are improved with character-based word embeddings. In: Proceedings of the BioNLP 2018 workshop. 2018. p. 129–136. https://www.aclweb.org/anthology/W18-2314/.

Levy O, Seo M, Choi E, Zettlemoyer L. Zero-shot relation extraction via reading comprehension. In: CoNLL 2017. 2017. p. 333–342. https://doi.org/10.18653/v1/K17-1034.

Li X, Yin F, Sun Z, Li X, Yuan A, Chai D, Zhou M, Li J. Entity-relation extraction as multi-turn question answering. In: ACL 2019. 2019. p. 1340–1350. https://www.aclweb.org/anthology/P19-1129/.

Li X, Feng J, Meng Y, Han Q, Wu F, Li J. A unified MRC framework for named entity recognition. In: Proceedings of the 58th annual meeting of the association for computational linguistics, ACL 2020, online, July 5–10, 2020. p. 5849–5859. https://www.aclweb.org/anthology/2020.acl-main.519/.

McCann B, Keskar NS, Xiong C, Socher R. The natural language decathlon: multitask learning as question answering. CoRR 2018. arXiv:1806.08730.

Devlin J, Chang M, Lee K, Toutanova K. BERT: pre-training of deep bidirectional transformers for language understanding. In: NAACL-HLT 2019. p. 4171–4186. https://www.aclweb.org/anthology/N19-1423/.

Rajpurkar P, Zhang J, Lopyrev K, Liang P. SQuAD: 100,000+ questions for machine comprehension of text. In: Su J, Carreras X, Duh K, editors. Proceedings of the 2016 conference on empirical methods in natural language processing, EMNLP 2016, Austin, Texas, USA, November 1–4, 2016. p. 2383–2392.

Davis AP, Grondin CJ, Johnson RJ, Sciaky D, McMorran R, Wiegers J, Wiegers TC, Mattingly CJ. The comparative toxicogenomics database: update 2019. Nucleic Acids Res. 2019;47:948–54. https://doi.org/10.1093/nar/gky868.

Bordes A, Usunier N, García-Durán A, Weston J, Yakhnenko O. Translating embeddings for modeling multi-relational data. In: NIPS. 2013. p. 2787–2795.

Tenenbaum JB, Freeman WT. Separating style and content with bilinear models. Neural Comput. 2000;12(6):1247–83. https://doi.org/10.1162/089976600300015349.

LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436.

Coletti MH, Bleich HL. Technical milestone: medical subject headings used to search the biomedical literature. JAMIA. 2001;8(4):317–23. https://doi.org/10.1136/jamia.2001.0080317.

Li Z, Yang Z, Xiang Y, Luo L, Lin H. Exploiting sequence labeling framework to extract document-level relations from biomedical texts. BMC Bioinform. 2020;21(1):125. https://doi.org/10.1186/s12859-020-3457-2.

Panyam NC, Verspoor K, Cohn T, Ramamohanarao K. Exploiting graph kernels for high performance biomedical relation extraction. J Biomed Semant. 2018;9(1):1–11.

Zheng W, Lin H, Li Z, Liu X, Li Z, Xu B, Zhang Y, Yang Z, Wang J.
Funding: This work was supported by the foundation of the joint project with Beijing Baidu Netcom Science Technology Co., Ltd, the Natural Science Foundation of China (Grant Nos. 61872113, 61876052, 62006061), the Special Foundation for Technology Research Program of Guangdong Province (Grant No. 2015B010131010), the Strategic Emerging Industry Development Special Funds of Shenzhen (Grant No. JCYJ20180306172232154) and the Stable Support Program for Higher Education Institutions of Shenzhen (No. GXWD20201230155427003-20200824155011001).

Affiliations: Jing Chen, Baotian Hu, Qingcai Chen and Buzhou Tang: Intelligent Computing Research Center, Harbin Institute of Technology (Shenzhen), Shenzhen, China. Weihua Peng: Baidu International Technology (Shenzhen) Co., Ltd, Shenzhen, China. Qingcai Chen and Buzhou Tang: Peng Cheng Laboratory, Shenzhen, China.

Author contributions: JC refined the idea, developed the algorithm and wrote the manuscript. BH and WP refined the idea, analyzed the data and revised the manuscript. QC contributed the initial idea and revised the manuscript. BT revised the manuscript. All authors have read and approved the final manuscript. Correspondence to Baotian Hu or Qingcai Chen.

Competing interests: WP is an employee of Baidu Inc. The funding supporting this work included the foundation of the joint project with Baidu Inc. The authors declare that they have no competing interests.

Citation: Chen, J., Hu, B., Peng, W. et al. Biomedical relation extraction via knowledge-enhanced reading comprehension. BMC Bioinformatics 23, 20 (2022). https://doi.org/10.1186/s12859-021-04534-5

Keywords: Biomedical relation extraction; Knowledge attention mechanism
Limit approach to finding $1+2+3+4+\ldots$

When exploring the divergent series consisting of the sum of all natural numbers $$\sum_{k=1}^\infty k=1+2+3+4+\ldots$$ I came across the following identity involving a one-sided limit: $$\lim_{x\to0^-}\sum_{k=1}^\infty k\exp(kx)\cos(kx)=-\frac{1}{12}$$ Using zeta function regularization yields the same value: $$\sum_{k=1}^\infty k^{-s}=\zeta(s)$$ $$\zeta(-1)=-\frac{1}{12}$$ In general, I found that for $y\neq0$ and $n=1,5,9,13,\ldots$ (i.e. $4m+1$ where $m\in\mathbb{N}$), $$\lim_{x\to0^-}\sum_{k=1}^\infty k^n\exp(kxy)\cos(kxy)=\zeta(-n)$$ However, a similar limit did not exist for other powers of $k$, e.g. $$\lim_{x\to 0^-}\sum_{k=1}^\infty k^2\exp(kx)\cos(kx)$$ $$\lim_{x\to 0^-}\sum_{k=1}^\infty k^3\exp(kx)\cos(kx)$$ The regularized values of the corresponding series are $\zeta(-2)=0$ and $\zeta(-3)=\frac{1}{120}$. Given this information, I have the following questions:

1. What is the connection between the limit approach and the zeta function approach?
2. Why does the limit expression only seem to converge for $n=4m+1$?
3. Can the limit approach be used to find the sum for other powers of $k$? If so, how?

Here is a plot I made with Mathematica using the command

Plot[Evaluate[Sum[k*Exp[x*k]*Cos[x*k], {k, 1, Infinity}]],
 {x, -16, 16}, PlotRange -> {-.25, .25}, AspectRatio -> 1]

Notice how it approaches $-1/12\approx-0.08333$ as $x$ approaches $0$.

Further information: $$\sum_{k=1}^\infty k^n\exp(kx)=\mathrm{Li}_{-n}(\exp(x))=\frac{n!}{(-x)^{n+1}}+\zeta(-n)+O(x)$$

Based on Micah's answer, we seek an $f$ such that $f(s,0) = 1$ and \begin{align} \int_0^\infty x^s f(s,x) \,\mathrm{d}x &= 0 \end{align} Let \begin{align} f(s,x) &= \mathrm{e}^{-x}(1+ax) \end{align} Then $f(s,0) = 1$. Assuming $\mathrm{Re}(s) > -1$, \begin{align} 0 &= \int_0^\infty x^s f(s,x) \,\mathrm{d}x \\ &= \int_0^\infty x^s \mathrm{e}^{-x}(1+ax) \,\mathrm{d}x \\ &= \int_0^\infty x^s \mathrm{e}^{-x} \,\mathrm{d}x + a \int_0^\infty x^{s+1} \mathrm{e}^{-x} \,\mathrm{d}x \\ &= \Gamma(s+1) + a (s+1) \Gamma(s+1) \\ &= (1 + a(s+1)) \Gamma(s+1) \\ \Longrightarrow a &= -\frac{1}{s+1} \\ \Longrightarrow f(s,x) &= \mathrm{e}^{-x}\left( 1 - \frac{x}{s+1} \right) \end{align} Thus \begin{align} \zeta(-s) &= \lim_{\varepsilon \rightarrow 0^+} \lim_{m \rightarrow \infty} \sum_{n=1}^m n^s f(s, n \varepsilon) \\ &\overset{\star}{=} \lim_{m \rightarrow \infty} \lim_{\varepsilon \rightarrow 0^+} \sum_{n=1}^m n^s f(s, n \varepsilon) \\ &= \lim_{m \rightarrow \infty} \sum_{n=1}^m n^s f(s, 0) \\ &= \lim_{m \rightarrow \infty} \sum_{n=1}^m n^s \\ &= \sum_{n=1}^\infty n^s \end{align} where the star indicates the non-rigorous step of exchanging limits. In fact, this regulator seems to work for all $s \neq -1$ (see some plots for $s < -1$ below).
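As a quick numerical sanity check of this regulator (a minimal Python sketch; the choices of $\varepsilon$ and the truncation point are arbitrary, and the exact values $\zeta(-1)=-1/12$, $\zeta(-2)=0$, $\zeta(-3)=1/120$ are hard-coded for comparison):

import numpy as np

def f(s, x):
    # the regulator derived above: f(s, 0) = 1 and its s-th moment vanishes
    return np.exp(-x) * (1 - x / (s + 1))

def regularized_sum(s, eps, N=100_000):
    # truncated version of sum_n n^s f(s, n*eps); the tail is negligible
    # because exp(-n*eps) has effectively vanished long before n = N
    n = np.arange(1, N + 1, dtype=float)
    return np.sum(n**s * f(s, n * eps))

zeta_neg = {1: -1/12, 2: 0.0, 3: 1/120}  # exact values of zeta(-s)
for s in (1, 2, 3):
    print(s, regularized_sum(s, 0.01), zeta_neg[s])
# the sums come out close to -1/12, 0 and 1/120, with the error
# shrinking as eps -> 0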
Hence we can also regularize the harmonic series as follows: \begin{align} \gamma &= \frac{1}{2} \lim_{\varepsilon \rightarrow 0^+} (\zeta(1+\varepsilon) + \zeta(1-\varepsilon)) \\ &= \frac{1}{2} \lim_{\varepsilon \rightarrow 0^+} \left(\lim_{m \rightarrow \infty} \sum_{n=1}^m n^{-1-\varepsilon} f(-1-\varepsilon, n \varepsilon) + \lim_{m \rightarrow \infty} \sum_{n=1}^m n^{-1+\varepsilon} f(-1+\varepsilon, n \varepsilon) \right) \\ &= \frac{1}{2} \lim_{\varepsilon \rightarrow 0^+} \lim_{m \rightarrow \infty} \left(\sum_{n=1}^m n^{-1-\varepsilon} f(-1-\varepsilon, n \varepsilon) + \sum_{n=1}^m n^{-1+\varepsilon} f(-1+\varepsilon, n \varepsilon) \right) \\ &\overset{\star}{=} \frac{1}{2} \lim_{m \rightarrow \infty} \lim_{\varepsilon \rightarrow 0^+} \left(\sum_{n=1}^m n^{-1-\varepsilon} f(-1-\varepsilon, n \varepsilon) + \sum_{n=1}^m n^{-1+\varepsilon} f(-1+\varepsilon, n \varepsilon) \right) \\ &= \frac{1}{2} \lim_{m \rightarrow \infty} \left(\sum_{n=1}^m n^{-1} f(-1, 0) + \sum_{n=1}^m n^{-1} f(-1, 0) \right) \\ &= \frac{1}{2} \lim_{m \rightarrow \infty} \left(\sum_{n=1}^m n^{-1} + \sum_{n=1}^m n^{-1} \right) \\ &= \lim_{m \rightarrow \infty} \sum_{n=1}^m n^{-1} \\ &= \sum_{n=1}^\infty n^{-1} \end{align}

Here is the Mathematica code and its plots:

f[s_, x_] := Exp[-x] (1 + a x)
g[s_, t_] := Evaluate@Simplify[
  f[s, t] /. Solve[Integrate[x^s f[s, x], {x, 0, Infinity}] == 0, a],
  Assumptions -> Re[s] > -1]
g[s, t]
Table[{s, Plot[{Zeta[-s], Sum[n^s g[s, n \[Epsilon]], {n, 1, 1000}]},
   {\[Epsilon], 0, 1}, Evaluated -> True]}, {s, -4, 4, 1/2}] // TableForm // Quiet
Plot[{EulerGamma, Sum[(n^(-1 + \[Epsilon]) g[-1 + \[Epsilon], n \[Epsilon]] +
      n^(-1 - \[Epsilon]) g[-1 - \[Epsilon], n \[Epsilon]])/2,
   {n, 1, 1000}]}, {\[Epsilon], 0, 1}, Evaluated -> True]

Tags: sequences-and-series, limits, riemann-zeta, divergent-series, laurent-series

$\begingroup$ The similar limits you're computing need more care, because it looks like you're just trying to swap the limit with the sum, which is in fact wrong: you need some kind of uniform bound on the terms whose sum also converges. $\endgroup$ – Alex R.
$\begingroup$ Indeed $\lim_{x\to 0^-}\lim_{n\to\infty}\sum_{k=1}^nf(k,x)\neq\lim_{n\to\infty}\lim_{x\to 0^-}\sum_{k=1}^nf(k,x)$ in general, but I'm trying to find the regularized value of divergent series using the first approach. $\endgroup$
$\begingroup$ For what it's worth, write $\cos(k^3x)=\frac{e^{ik^3x}+e^{-ik^3x}}{2}$. Multiply out the sum to get two sums with summands like $k^3\exp(k^3x(1\pm i))$. Now notice that if you instead look at the sum of $\exp(k^3a x)$ and take its derivative in $x$ you'll get what you're interested in. I think this kind of a sum is related to Jacobi theta functions. $\endgroup$
$\begingroup$ If you haven't seen this post by Terry Tao, it's worth a look. His approach isn't identical to yours — he uses a compactly supported cutoff function in the place of your exponentials — but I think it rhymes. In particular, he also recovers negative zeta values as the constant terms in asymptotic expansions of his sums, which are dominated by some larger non-constant term — very much like your "further information" identity. $\endgroup$ – Micah
$\begingroup$ No limit approach is needed. The sum is in fact infinite. The fact that the $\zeta$ function extends the map $s\mapsto\sum n^{-s}$ for $s>0$ or $s>1$, doesn't mean that we can say the same on the extension. This is an "abuse of continuation", just like how you often write $f(\infty)$ to mean the value of $\lim_{x\to\infty}f(x)$.
If that makes you cringe, then the abuse of $\zeta$ is equally cringe-worthy. If neither makes you cringe, then perhaps it's time to reflect on a career in physics. :-) $\endgroup$ In Terence Tao's blog post about relating divergent series to negative zeta values, he uses Euler-Maclaurin summation to show that if $\eta$ is smooth and compactly-supported, and $\eta(0)=1$, then $$ \sum_{n=0}^\infty n^s \eta(n/N)=C_{\eta,s} N^{s+1} + \zeta(-s) + O(1/N) $$ for any positive integer $s$, where $C_{\eta,s}=\int_0^\infty x^s \eta(x) \, dx$. In fact, it's possible to extend his proof to the case where $\eta$ is a Schwartz function — see below for a proof. In particular, consider $\eta(x)=e^{-x}\cos x$. After a change of variables we can write your sum in the above form, with this $\eta$. Tao discusses the connection between this asymptotic form and the analytic continuation of the zeta function; I haven't gone through that part of his blog post in detail, but assuming that it, too, could be made to work when $\eta$ was Schwartz-class, that would presumably answer your first question about the connection between your sum and the zeta-regularized form of the original divergent sum. To answer your second question, we can look at the leading coefficient $C_{\eta,s}=\int_0^\infty x^s e^{-x} \cos x \, dx$. After a bunch of annoying but straightforward integration by parts, it's possible to compute that $\int_0^\infty x^s e^{-x} \cos x=0$ when $s \equiv 1 \pmod{4}$, and $\int_0^\infty x^s e^{-x} \sin x=0$ when $s \equiv 3 \pmod{4}$. That is, whenever $s \equiv 1 \pmod{4}$, the $\eta(x)$ you've chosen coincidentally causes the leading term of the asymptotic expansion to drop out, exposing the zeta value which is the constant term of the expansion. You can expose $\zeta(-4k)$ in a similar fashion by taking $\eta(x)=e^{-x} \frac{\sin x}{x}$, though since it's zero, that's not particularly interesting. For example, here's $\zeta(-4)$. Similarly, you can expose $\zeta(-4k+2)$ by taking $\eta(x)=e^{-x}(\cos x + \sin x)$. The interesting question — to which I don't know the answer — would be whether there's a nice choice of $\eta$ that exposes $\zeta(-4k+1)$, since that's the other case where it's actually nonzero. But this provides at least a partial answer to your third question. Also note that if we take $\eta(x)=e^{-x}$, Tao's formula gives your "further information" expansion, as $\int_0^\infty x^n e^{-x} \, dx = n!$. To prove Tao's formula whenever $\eta$ is smooth and rapidly decreasing, we'll use the infinite-series version of the Euler-Maclaurin formula (as found, e.g., in these notes): if $f(t)$ and all its derivatives tend to zero as $t \to \infty$, then $$ \sum_{n=0}^\infty f(n) = \int_{0}^\infty f(t) \, dt + \frac{1}{2} f(0) - \sum_{i=2}^k \frac{b_i}{i!} f^{(i-1)}(0)-\int_0^\infty \frac{B_k(\{1-t\})}{k!} f^{(k)}(t) \, dt $$ for any fixed integer $k$, where the $b_k$ are the Bernoulli numbers, the $B_k$ are the Bernoulli polynomials, and $\{x\}$ denotes the fractional part of $x$. In our case, we set $f(t)=t^s \eta(t/N)$ and use the $(s+2)$nd-order expansion. Taking each term in the formula one-by-one, note that: $$ \int_0^\infty f(t) \, dt=\int_0^\infty t^s \eta(t/N) \, dt = N^{s+1}\int_0^\infty x^s \eta(x) \, dx=C_{\eta,s}N^{s+1} $$ after making the substitution $x=t/N$; $$ \frac{1}{2}f(0)=0 $$ is immediate; \begin{align} -\sum_{i=2}^{s+2} \frac{b_i}{i!} f^{(i-1)}(0)&=-\frac{b_{s+1}}{(s+1)!} s! 
\eta(0)-\frac{b_{s+2}}{(s+2)!}\left(\frac{(s+1)!}{N} \eta'(0)\right)\\&=-\frac{b_{s+1}}{s+1} + O(1/N)\\&=\zeta(-s) + O(1/N) \end{align} as the first $s-1$ derivatives of $f$ vanish, and \begin{align} -\int_0^\infty \frac{B_{s+2}(\{1-t\})}{(s+2)!} f^{(s+2)}(t) \, dt &= -\int_0^\infty \frac{B_{s+2}(\{1-t\})}{(s+2)!} \frac{d^{s+2}}{dt^{s+2}}\left(t^s \eta(t/N)\right) \, dt \\ &=-\int_0^\infty \frac{B_{s+2}(\{1-Nx\})}{(s+2)!} \frac{1}{N^{s+2}}\frac{d^{s+2}}{dx^{s+2}}\left(N^sx^s \eta(x)\right) N \, dx \\ &=-\frac{1}{N} \int_0^\infty \frac{B_{s+2}(\{1-Nx\})}{(s+2)!} \frac{d^{s+2}}{dx^{s+2}}\left(x^s \eta(x)\right) \, dx \end{align} where we again make the substitution $x=t/N$. This last quantity is bounded in magnitude by $\frac{1}{N}\frac{M}{(s+2)!}\int_0^\infty \frac{d^{s+2}}{dx^{s+2}}\left(x^s \eta(x)\right) \, dx$, where $M$ is the maximum value of $B_{s+2}(x)$ on $[0,1]$. Thus it too is $O(1/N)$ so long as that integral converges. So Tao's formula works, not only for compactly supported $\eta$, but for $\eta$ such that $f(t)=t^s \eta(t)$ as well as all of $f$'s derivatives are integrable on $[0,\infty)$. In particular, it will work for all $s$ whenever $\eta$ is Schwartz on $[0,\infty)$.

Micah

$\begingroup$ This is excellent! The problem is therefore reduced to finding a Schwartz function $\eta$ such that $\int_0^\infty x^3\eta(x)\,\mathrm{d}x=0$ and $\lim_{x\to0}\eta(x)=1$, correct? $\endgroup$
$\begingroup$ If you're looking for a family of identities which is as nice as the ones you came up with for the other interesting residue class, you need $\eta$ such that $\int_0^\infty x^{3+4k} \eta(x) \, dx=0$ for all $k$. Or you could just content yourself with knowing that for any $k$, you can choose a different $\eta$ (in which case there's probably always some linear combination of, say, $e^{-x}$ and $e^{-2x}$ that'd do the trick)... $\endgroup$
$\begingroup$ I added a fractional regulator to my question based on your answer.
$\endgroup$ Explanation for the Behavior of the Series (Parts $2$ and $3$) Note that $$ \begin{align} \sum_{k=0}^\infty e^{ikx} &=\frac1{1-e^{ix}}\\ &=\frac12+\frac i2\cot\left(\frac x2\right)\\ &=\frac ix+\frac12-i\,\underbrace{\left(\frac1x-\frac12\cot\left(\frac x2\right)\right)}_{\substack{\text{odd function of $x$}\\\text{analytic on $(-\pi,\pi)$}}}\tag{1} \end{align} $$ Odd Exponents Taking $2n-1$ derivatives of $(1)$, we get $$ \sum_{k=0}^\infty k^{2n-1}e^{ikx}=\frac{(-1)^n(2n-1)!}{x^{2n}}+(-1)^n\underbrace{\frac{\mathrm{d}^{2n-1}}{\mathrm{d}x^{2n-1}}\left(\frac1x-\frac12\cot\left(\frac x2\right)\right)}_{\substack{\text{even function of $x$}\\\text{analytic on $(-\pi,\pi)$}}}\tag{2} $$ Exponents which are $\boldsymbol{1\bmod{4}}$ If $n$ is odd, so that $2n-1\equiv1\pmod{4}$, substitute $x=(1-i)y$ in $(2)$ to get $$ \sum_{k=0}^\infty k^{2n-1}e^{k(1+i)y} =\underbrace{\frac{(2n-1)!}{(2i)^ny^{2n}}}_{\substack{\text{pure imaginary}\\\text{since $n$ is odd}}} -\underbrace{\frac{\mathrm{d}^{2n-1}}{\mathrm{d}x^{2n-1}}\left(\frac1x-\frac12\cot\left(\frac x2\right)\right)}_{\text{plug in $x=(1-i)y$}}\tag{3} $$ Thus, the real part of $(3)$ is $$ \sum_{k=0}^\infty k^{2n-1}e^{ky}\cos(ky)=\left.-\frac{\mathrm{d}^{2n-1}}{\mathrm{d}x^{2n-1}}\left(\frac1x-\frac12\cot\left(\frac x2\right)\right)\right|_{x=(1-i)y}\tag{4} $$ Taking the limit of $(4)$ as $y\to0$ yields the identity given in the question $$ \begin{align} &\text{When $n$ is odd; that is, }2n-1\equiv1\pmod{4}:\\ &\bbox[5px,border:2px solid #C0A000]{\lim_{y\to0^-}\sum_{k=0}^\infty k^{2n-1}e^{ky}\cos(ky)=\left.-\frac{\mathrm{d}^{2n-1}}{\mathrm{d}x^{2n-1}}\left(\frac1x-\frac12\cot\left(\frac x2\right)\right)\right|_{x=0}}\tag{5} \end{align} $$ If $n$ is even, so that $2n-1\equiv3\pmod{4}$, the situation is a bit more complicated. 
We can still substitute $x=(1-i)y$ in $(2)$, getting $$ \sum_{k=0}^\infty k^{2n-1}e^{k(1+i)y} =\underbrace{\frac{(2n-1)!}{(2i)^ny^{2n}}}_{\text{real since $n$ is even}} +\underbrace{\frac{\mathrm{d}^{2n-1}}{\mathrm{d}x^{2n-1}}\left(\frac1x-\frac12\cot\left(\frac x2\right)\right)}_{\text{plug in $x=(1-i)y$}}\tag{6} $$ We can also substitute $x=-iy$ in $(2)$, getting $$ \sum_{k=0}^\infty k^{2n-1}e^{ky}=\frac{(2n-1)!}{y^{2n}}+\underbrace{\frac{\mathrm{d}^{2n-1}}{\mathrm{d}x^{2n-1}}\left(\frac1x-\frac12\cot\left(\frac x2\right)\right)}_{\text{plug in $x=-iy$}}\tag{7} $$ Taking a linear combination of $(6)$ and $(7)$ to cancel the singular part and maintain the constant part, we get $$ \begin{align} &\text{When $n$ is even; that is, }2n-1\equiv3\pmod{4}:\\ &\bbox[5px,border:2px solid #C0A000]{\lim_{y\to0^-}\sum_{k=0}^\infty k^{2n-1}e^{ky}\frac{2^n\cos(ky)-(-1)^{n/2}}{2^n-(-1)^{n/2}} =\left.\frac{\mathrm{d}^{2n-1}}{\mathrm{d}x^{2n-1}}\left(\frac1x-\frac12\cot\left(\frac x2\right)\right)\right|_{x=0}}\tag{8} \end{align} $$

Even Exponents

Taking $2n$ derivatives of $(1)$, we get $$ \sum_{k=0}^\infty k^{2n}e^{ikx} =(-1)^n\frac{(2n)!}{x^{2n+1}}i -(-1)^ni\underbrace{\frac{\mathrm{d}^{2n}}{\mathrm{d}x^{2n}}\left(\frac1x-\frac12\cot\left(\frac x2\right)\right)}_{\substack{\text{odd function of $x$}\\\text{analytic on $(-\pi,\pi)$}}}\tag{9} $$ Substitute $x=(1-i)y$ in $(9)$ to get $$ \sum_{k=0}^\infty k^{2n}e^{k(1+i)y} =-\frac{(2n)!}{y^{2n+1}}\frac{1+i}{(2i)^{n+1}} -(-1)^ni\underbrace{\frac{\mathrm{d}^{2n}}{\mathrm{d}x^{2n}}\left(\frac1x-\frac12\cot\left(\frac x2\right)\right)}_{\text{plug in $x=(1-i)y$}}\tag{10} $$ We can also substitute $x=-iy$ in $(9)$ to get $$ \sum_{k=0}^\infty k^{2n}e^{ky} =-\frac{(2n)!}{y^{2n+1}} -(-1)^ni\underbrace{\frac{\mathrm{d}^{2n}}{\mathrm{d}x^{2n}}\left(\frac1x-\frac12\cot\left(\frac x2\right)\right)}_{\text{plug in $x=-iy$}}\tag{11} $$ Taking a linear combination of $(10)$ and $(11)$ to cancel the singular part and maintain the constant part, we get $$ \bbox[5px,border:2px solid #C0A000]{\lim_{y\to0^-}\sum_{k=0}^\infty k^{2n}e^{ky}\frac{2^{n+1}\cos(ky)+(-1)^{\lfloor(n-1)/2\rfloor}}{2^{n+1}+(-1)^{\lfloor(n-1)/2\rfloor}}=0}\tag{12} $$ where $(-1)^{\lfloor(n-1)/2\rfloor}=\mathrm{Re}\left(\frac{i-1}{i^n}\right)$.

Unified Expression

Combining $(5)$, $(8)$, and $(12)$ yields $$ \bbox[5px,border:2px solid #C0A000]{\lim_{x\to0^-}\sum_{k=0}^\infty k^ne^{kx}\frac{2^{\lceil n/2\rceil}\cos(kx)+c_n}{2^{\lceil n/2\rceil}+c_n}=\zeta(-n)}\tag{13} $$ where $c_n$ is given by $$ c_n=-\tfrac14(-1)^{\lceil n/4\rceil}\left(2+(-1)^{\lceil n/2\rceil}\left(1-(-1)^n\right)\right) $$ or by $$ c_n=-(-1)^{\lceil n/4\rceil}\sin^2\left(\tfrac\pi4(n-1)\right) $$ or in the following table $$ \small\begin{array}{c|r|r|r|r|r|r|r} n\bmod8&1&2&3&4&5&6&7&8\\ \hline c_n&\hphantom{-}0&\hphantom{-}\frac12&\hphantom{-}1&\hphantom{-}\frac12&\hphantom{-}0&-\frac12&-1&-\frac12 \end{array}\tag{14} $$

robjohn♦

$\begingroup$ Could this regulator be generalized to fractional $n$? $\endgroup$
$\begingroup$ There might be, but I am not sure how to approach that. $\endgroup$ – robjohn ♦
$\begingroup$ I added a regulator that works for fractional exponents to my question. $\endgroup$

The infamous $-\frac{1}{12}$ pops up as the coefficient of a range of expansions, such as $\dfrac{z}{{\mathrm e}^{-z}-1}=-1-\dfrac{1}{2}z-\dfrac{1}{12}z^2+{\mathcal O}(z^3)$ like in the Todd class or similarly in the Baker–Campbell–Hausdorff formula and so on.
In your post, you are led to it by ${\mathrm e}^{kx}\cos(kx)$, which after the reparametrization $x\mapsto \log(z)$ I'd write as $z^k\cos(\log(z^k))$. But in any case, the higher order expansion terms of the cosine make this a not so nice choice and you get a lot of cases where the limit procedure doesn't work. I've worked out some broader perspectives.

The analytical continuation, e.g. involved in defining the Zeta function* $\zeta(s) = \dfrac{1}{\Gamma(s)} \int_{0}^{\infty} \dfrac{x ^ {s-1}}{{\mathrm e} ^ x - 1}\mathrm{d}x,$ is a process which smoothly probes local values of the functional expressions $f$ in $\sum_{k=0}^\infty f(k)$. You can formulate the regularization of the sum in a way that reflects that, e.g. by using a local mean like $\langle f(k)\rangle:=\int_{k}^{k+1}f(k')\,{\mathrm d}k'$. So the limit $\lim_{z\to 1}$ of the sum $0+1\,z^1+2\,z^2+3\,z^3+\dots$ diverges, because of the pole in $\sum_{k=0}^\infty k\,z^k=z\dfrac{{\mathrm d}}{{\mathrm d}z}\sum_{k=0}^\infty z^k=z\dfrac{{\mathrm d}}{{\mathrm d}z}\dfrac{1}{1-z}=\dfrac{z}{(z-1)^2}, \hspace{1cm} z\in(0,1)$ So let's consider the sum of smooth deviations. With $\langle k\,z^k\rangle=z\dfrac{{\mathrm d}}{{\mathrm d}z}\langle z^k\rangle=z\dfrac{{\mathrm d}}{{\mathrm d}z}\langle {\mathrm e}^{k \log(z)}\rangle=\left.z\dfrac{{\mathrm d}}{{\mathrm d}z}\dfrac{z^{k'}}{\log(z)}\right|_{k}^{k+1},$ we find the sum $\sum_{k=0}^n\langle k\,z^k\rangle$ involves canceling upper and lower bounds and we're left with $\dfrac{z^0}{\log(z)^2}$ plus terms suppressed by $z^n$. Finally, using the expansion $r^2\dfrac{1}{\log(1+r)^2}=\dfrac{1}{1-r+\left(1-\frac{1}{1!\,2!\,3!}\right)r^2+{\mathrm{O}}(r^3)}=1+r+\dfrac{1}{1!\,2!\,3!}r^2+{\mathrm{O}}(r^3),$ we find $\sum_{k=0}^\infty \left(k\,z^k-\langle k\,z^k\rangle\right)=\dfrac{z}{(z-1)^2}-\dfrac{1}{\log(z)^2}=-\dfrac{1}{12}+{\mathcal O}\left((z-1)^1\right).$ The picture shows the two functions $\dfrac{z}{(z-1)^2}$ and $\dfrac{1}{\log(z)^2}$, as well as their difference (blue, red, yellow). While the functions themselves clearly have a pole at $z=1$, their difference converges to $$-\frac{1}{1!\,2!\,3!}=-\frac{1}{12}=-0.08{\dot 3}.$$ And here's the Mathematica calculation for higher $n$, as well as the corresponding values of the Zeta function (which match).

I've tried to generalize those in various directions and stumbled upon some further relations. For example, when comparing finite differences to their first order approximations, we find $\dfrac{f'(x)\,h}{f(x+h)-f(x)}=1-\dfrac{f''(x)}{2!}\left(\dfrac{h}{f'(x)}\right)+\left(\dfrac{f''(x)\,f''(x)}{2!\,2!}-\dfrac{f'(x)\,f'''(x)}{1!\,3!}\right)\left(\dfrac{h}{f'(x)}\right)^2+{\mathcal O}(h^3).$ Note that $\dfrac{1}{2!\,2!}-\dfrac{1}{1!\,3!}=\dfrac{1}{2!\,3!}(3-2)=\dfrac{1}{12}$, which is the coefficient $\dfrac{1}{2!}B_2$ in the related Maclaurin formula. And the log-subtraction scheme can be performed more generally for higher order poles $\frac{1}{(z-1)^n}$ using $\dfrac{1}{\log(z)^n}=\dfrac{1}{(z-1)^n}\left(1+\frac{n}{2}(z-1)+\frac{n}{2}\frac{3n-5}{12}(z-1)^2+\frac{n}{2}\frac{(n-2)(n-3)}{24}(z-1)^3+\dots\right)$ Here, e.g. plug in $n=2$ to get $\dfrac{n}{2}\dfrac{3n-5}{12}(z-1)^2=\dfrac{1}{12}(z-1)^2$.

*As I see from your profile you're interested in physics (I'm a physicist myself), so I like to point out how the body of the integrand in that Zeta function representation is the expression of Planck's law :).
The exponential function ${\mathrm e}^x\cdot{\mathrm e}^y={\mathrm e}^{x+y}$, say in statistical physics where $-\frac{1}{12}$ comes to play a role, enters here because the probabilities of divided systems behave multiplicatively, while the energy, by definition, behaves additively.

Nikolaj-K
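A quick numerical illustration of the cancellation between the two poles described above (Python; the sample points for $z$ are arbitrary choices approaching 1):

import math

def pole_difference(z):
    # z/(z-1)^2 and 1/log(z)^2 both blow up at z = 1,
    # but their difference tends to -1/12
    return z / (z - 1)**2 - 1 / math.log(z)**2

for z in (0.9, 0.99, 0.999):
    print(z, pole_difference(z))
# approaches -1/12 = -0.0833... as z -> 1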
Data-Driven Bandwidth Choice for Density Estimation Based on Dependent Data
Jeffrey D. Hart, Philippe Vieu
Ann. Statist. 18(2): 873-890 (June, 1990). DOI: 10.1214/aos/1176347630

The bandwidth selection problem in kernel density estimation is investigated in situations where the observed data are dependent. The classical leave-out technique is extended, and thereby a class of cross-validated bandwidths is defined. These bandwidths are shown to be asymptotically optimal under a strong mixing condition. The leave-one-out, or ordinary, form of cross-validation remains asymptotically optimal under the dependence model considered. However, a simulation study shows that when the data are strongly enough correlated, the ordinary version of cross-validation can be improved upon in finite-sized samples.

Published: June, 1990. First available in Project Euclid: 12 April 2007.
Primary: 62G05. Secondary: 60G10, 60G35, 62G20, 62M10, 62M99.
Keywords: $\alpha$-mixing processes, bandwidth selection, cross-validation, kernel estimate, nonparametric density estimation.
Rights: Copyright © 1990 Institute of Mathematical Statistics
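For readers who want to experiment with the idea, here is a minimal Python sketch of least-squares cross-validation for a Gaussian kernel density estimate; the gap parameter imitates the paper's extension of the classical leave-out technique by also omitting the observations nearest in time (this is an illustration of the general approach, not the authors' code, and the AR(1) sample and bandwidth grid are arbitrary choices):

import numpy as np

def gauss(u, s):
    return np.exp(-0.5 * (u / s) ** 2) / (np.sqrt(2 * np.pi) * s)

def cv_score(h, x, gap=0):
    # Least-squares CV criterion: integral of fhat^2 minus twice the
    # average leave-out density estimate at the observations, where the
    # leave-out estimate omits the 2*gap+1 observations nearest in time.
    n = len(x)
    d = x[:, None] - x[None, :]
    term1 = gauss(d, np.sqrt(2) * h).sum() / n**2  # closed form for the Gaussian kernel
    k = gauss(d, h)
    near = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :]) <= gap
    k[near] = 0.0
    term2 = 2.0 * (k.sum(axis=1) / (~near).sum(axis=1)).mean()
    return term1 - term2

# choose a bandwidth for a dependent (AR(1)) sample
rng = np.random.default_rng(0)
x = np.empty(500)
x[0] = rng.normal()
for t in range(1, 500):
    x[t] = 0.6 * x[t - 1] + rng.normal()
grid = np.linspace(0.05, 1.0, 40)
best = min(grid, key=lambda h: cv_score(h, x, gap=2))
print("cross-validated bandwidth:", best)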
How big could a space-ship get while still being plausible?

Background: In a society where human beings have lived among the stars for tens of thousands of years, they would have accumulated a lot of junk, so they just decided to throw a lot of it onto uninhabitable rocks near colony planets for easy disposal. Kind of like the galactic version of a landfill. My characters are salvagers, meaning that they go down to these junk planets and bring anything remotely valuable back with them. I want their base to be very large but still mobile. Most of it would be taken up by containers of junk they collected, things to be sorted to see if they're valuable, and things that are valuable if recycled in large quantities. Essentially most of it is just a series of large compartments that act as warehouses, with some recyclers that are either advanced and run by actual people (metal, antiques, jewelry etc.) or very basic and run automatically (pipes, structural supports, broken concrete, etc.). I want to know how big such a ship could get, because that will tell me how much junk they can handle at any one time and give me an idea of how profitable they would be. It would also give me an idea about how large to make the crew, since my characters only make up roughly twenty or so of the crew. Currently the ship is roughly the size of a small moon, just a little bigger than Deimos. The main constraints I already know about are: mass, acceleration, construction, expense, and landing.

Mass and acceleration are sort of tied together. It takes energy to accelerate and decelerate in space, and the more mass something has, the more energy is required to do so. I want to know how much energy such a thing would take. Do I need to create a new energy source for this in order to make it feasible? And is hand-waving the energy source acceptable? Acceleration for this ship doesn't really need to be all that fast, although it does need to be able to travel between solar systems in (at most) a month or so.

The construction of the ship would've been enormously expensive, and it would've had to be constructed in space from the outset. No way is it possible to build something that large on a planet. Possibly a shipyard that manufactures everything and puts it together? I was thinking that perhaps asteroids and uninhabitable planets were pretty much looted of all their ore, and then the ore was refined into metal that the shipyard uses to directly build the ships. Seems reasonable, provided the technology is there. The cost to build and maintain the ship would be enormous, but since my characters are mostly low-level grunt workers, I can probably explain it away as the corporation paying for it.

Finally, there is no possible way such a large ship could get close to a planet without causing a potential apocalypse, let alone land on it in order to salvage valuables. Destroying the very valuables you are there to collect by your very presence is a major issue. So, the ship would probably need to have a hangar that can send shuttles down to carry the cargo back up. They wouldn't be small, due to the large cargo area, but it would at least be possible for them to land on planets so they could gather the materials.

Did I miss any major problems? Should I make the ship smaller?
Tags: reality-check, space, spaceships
Marley

$\begingroup$ What tech level do you have in mind? Star Wars, Star Trek etc? $\endgroup$ – Alexander Jun 29 '20 at 16:33
$\begingroup$ Does your civilization have faster-than-light travel? If so, they have a propulsion technology way beyond anything we know, and you can hand wave acceleration and deceleration costs. $\endgroup$ – Patricia Shanahan Jun 29 '20 at 16:56
$\begingroup$ Such a large ship probably would not be allowed anywhere near Earth. It would effectively constitute a very large thermonuclear weapon in orbit. All it would take would be for someone to press the wrong button or for terrorists to take over the ship, and the ship might be decelerated into Earth's atmosphere. Such a large object would not burn up, and it could easily kill hundreds of millions of people upon impact. $\endgroup$ – Slarty Jun 29 '20 at 17:24
$\begingroup$ If the ship is the size of a moon, how do they find and collect enough junk to fill it? They have to find dozens if not hundreds of tons worth of material per minute for months to fill it to an appreciable level. Also think of turning this thing. I think you are better off using a type of barge system; think Alien. "Small" ships, or several ships, push the object, and rather than turning it the ships detach and grab the target at the right angle to push it or slow it down. $\endgroup$ – Demigan Jun 29 '20 at 18:05
$\begingroup$ If the ship is large enough, it will attract space junk on its own. $\endgroup$ – Simon Richter Jul 1 '20 at 15:50

Here's a list of potential issues:
- The gravitational force that the ship exerts upon itself, possibly leading to collapse
- It takes a lot of energy to accelerate a ship that large
- A ship the size of a small moon poses considerable risk to the planet it's orbiting

Now let's consider these problems one by one. Let's conservatively suppose this ship of yours is approximately the size of Deimos with a mean radius of $6.2\space\text{km}$. However, its mass will probably be much smaller than that of Deimos, since it will presumably contain a lot of empty space. Since Deimos' mass is about $1.5\cdot 10^{15}\space\text{kg}$, we might estimate (again, conservatively) that, after being filled with junk, your ship is about $1/100$ as dense on average, giving it a mass of $1.5\cdot 10^{13}\space\text{kg}$.

Possible ship collapse

Good news - the surface gravity exerted by this ship is tiny: $$g_{\text{ship}} = \frac{GM}{r^2}\approx \frac{(6.674\cdot 10^{-11})(1.5\cdot 10^{13})}{(6200)^2}\space\text{m/s}^2\approx 2.6\cdot 10^{-5}$$ So you probably don't need to worry about it collapsing. In fact, if you look at a picture of Deimos, you'll notice that it's visibly non-spherical because the gravity is so weak. Nothing to worry about here, as long as you make sure your ship is sturdy.

Ship acceleration

Apparently, the closest solar system is about $10$ light years away, but the nearest one with more than one planet is over $15$ light years away. Sorry, but there's no way you're traveling that far in under a month. You'll need faster-than-light travel, which will certainly require a significant amount of hand-waving. Supposing you can manage faster-than-light speeds, you'd need to accelerate to a speed of at least $120$ times the speed of light in order to make the trip in a month.
That's a kinetic energy of $$\frac{mv^2}{2}=\frac{(1.5\cdot 10^{13})(3.6\cdot 10^{10})^2}{2}\approx 9.72\cdot 10^{33} \space\text{J}$$ To give you a sense of how large that is, that amount of energy is greater than:
- The amount of solar energy that strikes the Earth each year
- $10^{10}$ times the energy stored in the Earth's natural gas reserves, as of 2010
- $10^{12}$ times the world energy consumption in 2010

That's a lot of energy! You're either going to need to invent a miracle energy source, or slow the heck down. Here are some suggestions for getting out of this bind:
- If there are lots of junk planets all over the place, and you don't care which planet you end up at, have your characters send their ship into random wormholes and scavenge wherever they end up.
- Use cryonics to freeze your "grunts" for 20-30 years while they travel at near-light-speed. This will still require an astronomical amount of energy, but you might manage it by piggybacking off of the gravity of a nearby star, using it to "sling" the ship in the right direction.

Risk to nearby planets

No inhabited planet will want to have this ship orbiting it. If its orbit decays, it will be difficult to prevent it from crashing into the planet and causing a catastrophe. Even if its orbit does not decay, it could still screw up the planet by interfering with the orbits of preexisting moons. When a body of mass $m$ orbits a larger body of mass $M$ with velocity $v$, the radius at which the circular orbit is stable equals $$r=\frac{GM}{v^2}$$ If, due to miscalculation or external interference, the orbit decays by some amount $\Delta r$, the ship will either need to speed up or move away from the planet to restabilize its orbit. If the former option is taken, the velocity increase needed is about $$\Delta v\approx \frac{1}{2}\sqrt{\frac{GM}{r^3}}\,\Delta r$$ meaning that the energy needed to correct this is about $$\frac{mv^2-m(v-\Delta v)^2}{2}\approx \frac{GMm}{2r^2}\Delta r$$ For a planet the size of Earth and a satellite the size of your ship, that could still be on the order of $10^{19}$ joules if your orbit deviates by just one meter. If you'd rather correct the orbit by increasing the radius, the energy needed is $$mg\Delta r = \frac{GMm}{r^2}\Delta r$$ ...which is twice as much as you would need to speed up the appropriate amount. Bottom line: your ship needs to be ready to expend $10^{19}$ joules at the drop of a hat in order to correct the most minute decay in its orbit. That's more than the yearly energy consumption of South Korea as of 2009. You're really going to need some hand-waving to deal with that.

Franklin Pezzuti Dyer

$\begingroup$ you cannot calculate the energy required to travel FTL, since we do not have an inkling of how FTL would work. $\endgroup$ – ths Jun 29 '20 at 23:08
$\begingroup$ @ths Fair enough. I guess this calculation just demonstrates that a considerable amount of hand-waving or modifying the laws of physics is necessary to accomplish what the OP wants. $\endgroup$ – Franklin Pezzuti Dyer Jun 30 '20 at 0:33
$\begingroup$ Due to time dilation, it is possible to reach the next solar system in a month of time as observed by the traveller (but many years as observed from the origin). This requires vast amounts of energy of course. $\endgroup$ – gmatht Jun 30 '20 at 2:57
$\begingroup$ Conventional KE energy calculations go to infinity as you approach the speed of light according to Special Relativity. Surprised you even made a Newtonian estimate.
FTL drives - like the Alcubierre drive - need exotic matter (i.e. matter that has not been shown to exist), and the energy calculations for it possibly/probably depend on the characteristics of that imaginary matter. $\endgroup$ – Mike Wise Jun 30 '20 at 17:58
$\begingroup$ @I'mwithMonica By virtue of being a storage container, it won't be completely jam-packed full of matter, especially compared to a moon made of solid rock. Even when filled with garbage, there will be lots of empty spaces between pieces of garbage, inside of rooms not intended for storage, within crawl-spaces between walls, etc. $\endgroup$ – Franklin Pezzuti Dyer Jun 30 '20 at 18:09

The biggest constraint for the size of a star-ship is going to be inertia. This is not a "hard limit", but when you hand-wave away energy and propulsion, I can pretty much guarantee it will be the next engineering limit you will be faced with, long before other issues like gravitational collapse or resource availability.

The big reason you can't make a moon-sized ship is that moons are solid masses of stone that typically experience no more than a few cm/s worth of acceleration from their orbits. In contrast, your ship is a relatively thin scaffolding filled with a lot of non-structural weight from the cargo and various systems. The thing about FTL technologies like Alcubierre drives and wormholes is that they still require you to move. And the bigger the ship, the more easily it will start to fall apart the second you try to move it.

Picture this: for a ship to accelerate at a speed that feels okay to any sentient race for any extended period of time, you are looking at matching the acceleration of gravity on their home world. When you attach an engine to something and start pushing it, it does not all move at once. The molecules binding the engines to the back of your ship have to be able to transfer that acceleration all the way up to the nose. At 1G of acceleration, this would cause the same amount of compression and tension in the materials that make up your ship as you see in the materials of an object sitting at rest on the surface of a 1G planet. So, to find out the maximum size of a ship, we need to look at the maximum sizes of things we can build under gravity.

What is the maximum size we can build under gravity?

Burj Khalifa is currently the tallest building in the world at ~830m tall, but it uses a steel frame construction technique. Rigid carbon nano-fibers can form a structural frame that could theoretically achieve 5 times that height, giving you a ship with a maximum conceivable length of somewhere on the order of 4km. That said, for a cargo ship I would not suggest going that big. We think of cargo ships as being big, but because they are designed to carry so much non-structural weight, you can not stack them up super high. When you look at the world's largest freight ships, the Maersk Triple-E class, they are only about 90m tall from keel to the topmost container; so, if you are trying to be realistic, a space freighter should not really be more than about 5x that length (~450m) for it to maintain integrity while fully loaded, given our currently understood limits of material sciences.

How to go bigger:

We Earth dwellers like to see our ships thin and long because gravity and water resistance make us do it, but in space, if you want to make a big ship, you go tall or wide and short.
This is because a freighter (hopefully) never needs to turn in a way that exerts more rotational force than forward acceleration force; so, you can make a ship that is 450m long able to accelerate at 1G, and 4500m tall able to turn at 1/10th that speed. If you want to take this one step further, you can go with a giant sideways flying-saucer design. A carbon nano-fiber freighter could be several kilometers in diameter, and still only 450m long. By placing thousands of evenly distributed thrusters along the broadside of your saucer to push it forward, you could make a freighter roughly resembling the alien ships from Independence Day. Since you are assuming some level of future tech, I would not call it an unreasonable stretch to scale this design up using something a bit better than our current best to get something as big around as Deimos, just probably not in all 3 dimensions.

How to go all the way...

Acceleration for this ship doesn't really need to be all that fast, although it does need to be able to travel between solar systems in (at most) a month or so.

Since you seem to want to go into FTL mechanics, the Alcubierre Drive introduces some very interesting properties. When a ship uses an Alcubierre Drive, the effects of inertia can be mitigated because your ship is accelerated more or less together at a molecular level. (It's sort of like experiencing a free-fall.) That said, this sort of FTL ship also experiences a gravitational gradient where the front and back of the ship are prone to accelerate faster, meaning that the mid-section of your ship will still experience a bit of an inertial difference from the rest of the ship, but generally only a fraction of what a reactive propulsion system would put on a ship. This means that you can accelerate at more than 9.8 m/s^2 while experiencing a total structural shear of less than what you would expect 1G to inflict.

Now, here's the caveat: with an Alcubierre drive, the inertial shear you experience is barely better than that of a reaction drive if you contract your warp nodes all the way to the length of your ship; this is because the gravity will fall off really fast as you move away from the nodes, giving you the same limitations as thrusters. So, to mitigate shear you need to move your mass-equivalency nodes farther off from your ship, but this makes it far less energy efficient. To put this in perspective: a ship that has its nodes 1m ahead could reach 1G of acceleration with about 1.5e10kg equivalent mass fields, but to cut your inertia in half for a ship that is 450m long, you need to project your nodes 450m ahead of your ship using 3e16kg equivalent mass fields. That means you need to use 2 million times as much fuel to mitigate 1/2 of your inertia by overlapping your gravity fields.

Now this is where things look really bleak for your big ship... Since you want to be able to cover 15LY in 30 days, that means you have 15 days to speed up and 15 days to slow down, putting your mid-point at a distance of about 7.1e16m. By plugging these values into the displacement formula a=(2s)/(t^2), where s=7.5ly and t=15days, you get an acceleration of 84,495.5 m/s^2, also known as about 8622G. Now a reactive engine would crush just about any ship accelerating this quickly like a tin can, but let's look at this with an Alcubierre Drive.
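Before that, it's easy to double-check the acceleration figure numerically (Python; the 15-light-year trip in 30 days is the scenario's own assumption):

LY = 9.4607e15        # metres in one light year
s = 7.5 * LY          # distance to the mid-point of a 15 LY trip
t = 15 * 86400        # 15 days of constant acceleration, in seconds

a = 2 * s / t**2      # from s = a*t^2 / 2
print(a, a / 9.8)     # ~84,500 m/s^2, i.e. roughly 8,600 g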
If you have a ship that is the length of Deimos (I'll go with the short dimension and say 11 km long) and you want it to survive 8622G, you will need to mitigate about 99.9995% of the central shear of your ship for it not to break. To do this you would need to project your gravity nodes ~200,000,000,000 km away from your ship with an equivalent mass of +/- 5.1676333628e42kg. This means that such a drive would have to simulate gravitational forces equivalent to about twice the total mass in the Milky Way galaxy. Suffice it to say, this would be a terrible idea for any kind of ship, because the wake of your warp drive would be so strong as to wreck... well, the whole galaxy.

In other words... don't plan on making a ship that big or that fast unless you plan on hand-waving in some Star Trek-style inertial dampeners and/or structural integrity fields. But, if you're going to do that, then asking how big a ship can be becomes meaningless, since you could always explain away bigger with more of the same hand-waviness.

Nosajimiki

$\begingroup$ Good point about inertia. Still, in such a distant future, after tens of thousands of years of spacefaring, they will have developed materials with properties way beyond our current limits. Moreover the crew could be protected from excessive acceleration or modified to withstand it. Man will not be as we know him. $\endgroup$ – Duncan Drake Jun 30 '20 at 15:55
$\begingroup$ @DuncanDrake This is why I stipulate that it is more or less doable with the assumption of future tech. With computer modeling, we are now able to figure out ideal molecular structures to the degree that we can confidently say we are getting close to finding the limits of molecular structures. There are probably plenty of incrementally better configurations left to find, but things a lot better simply can't even theoretically be done with any observable type of molecular bonds. Since the OP is asking about material limitations, hand-waving molecular science away seems out of frame. $\endgroup$ – Nosajimiki Jun 30 '20 at 16:23
$\begingroup$ As for the integrity of the crew, a more robust crew/faster ship means a smaller max size ship; so, if the OP wants to make a crew of cybernetically and genetically enhanced humans able to sustain 10Gs of acceleration, he can, but that would force him to scale his max ship size way down, which seems contrary to his goal. $\endgroup$ – Nosajimiki Jun 30 '20 at 16:26
$\begingroup$ @DuncanDrake There are a limited number of types of molecular bonds, each with a known and finite binding strength. We know that lighter elements typically make stronger bonds than similar heavy elements, so we can generalize that as-of-yet undiscovered heavy elements won't help us; so, from this we know that Carbon, being the lightest element capable of 4 covalent bonds, has more binding potential than any other element (known or unknown) for its mass. Knowing that, we can use computer simulations to experiment with numerous carbon configurations per second to find ideal shapes. $\endgroup$ – Nosajimiki Jun 30 '20 at 17:13
$\begingroup$ We already know many ideal shapes for various purposes, but the challenge is creating them quickly and without impurities or manufacturing flaws to achieve those ideals in the real world. Doing better than ideally formed carbon nano-structures will require something other than molecular binding.
You are right that in the next 20,000 years we might discover something better than this, but there is not even the foundational science to predict what that may be. Without a science to even speculate what it is, such a discovery would fall in the realm of Clarke Tech / Handwave Tech. $\endgroup$ – Nosajimiki Jun 30 '20 at 17:20

It's feasible to turn the whole solar system into a spaceship if you like. The question isn't how big is plausible, it's how big is economical? It turns out pretty small. Let's figure out the economics of your world(s). To do that we need to resolve a paradox: shooting trash into orbit is viable, but once in orbit it has value again. Why?

Do I need to create a new energy source for this in order to make it feasible?

Yes. Nothing too crazy. Conventional sci-fi energy sources are fine: fusion, anti-matter. Energy has to be cheap and abundant for this to work. Here's why...

Supply, Demand, and Garbage

In a society where human beings have lived among the stars for tens of thousands of years, they would have accumulated a lot of junk, so they just decided to throw a lot of it onto uninhabitable rocks near colony planets for easy disposal. Kind of like the galactic version of a landfill.

Sci-fi likes its junk planets, but their existence has economic implications. We throw things away when it's economically less valuable to make another one than to repair or recycle the item. When that means putting it on a truck and driving it over to the local landfill, the cost of dumping is low. Getting mass out of a 1G gravity well is very expensive; this is one of the reasons we don't fire our nuclear waste into space. No matter how good your technology is, getting 1 kg into orbit requires 3e6 Joules, or about one US dollar worth of energy. The Earth produces 2e12 kg of waste each year; that would take 6e18 Joules, minimum, to put into orbit. That's well within science-based sci-fi. We can assume your universe has extremely cheap space flight and abundant energy production, or an extremely unsustainable economy, or these landfills contain extremely toxic stuff that makes the effort worthwhile. Or all of the above.

My characters are salvagers, meaning that they go down to these junk planets and bring anything remotely valuable back with them. I want their base to be very large but still mobile.

If you're salvaging these landfill planets, something has gone terribly wrong with your civilization. Getting to the planet's surface is expensive. Lifting off the surface is also expensive. What has happened that makes this worthless garbage suddenly valuable? Something very bad. The flip side is the value of things. A society which overproduces has a supply glut, so the value of its goods will fall. These undervalued goods will be thrown away long before they have no real value, or because it's cheaper to buy a new one than repair an existing one, or simply because there's a better model.

Haves and Have Nots

This all sets up a world of haves on the colony planets living in luxury, and the have-nots living off what they throw away. The people on the colony have such wealth they can afford the expense of firing their junk into orbit. The people in space are so poor they consider the surface-dwellers' trash to be of value. Why? The economy on the surface is clearly overproducing and unsustainable. It is, effectively, using their resources once and then paying the cost to lift them out of their gravity well and into space.
Once in space, the spacers can collect them for their own repair, reuse, and recycling. The spacers will then keep the majority for their own use. The only way it is viable for the spacers to trade the garbage back to the surface-dwellers is if they extract and refine the most valuable materials, and if they can do it cheaper than on the surface. This is only possible if the labor of the spacers is cheap, or if they possess technology and industries the surface does not. One situation is toxicity. Processing the waste is dangerous and toxic, but it's safer to do in space. Or it isn't, and the spacers' lives are simply cheaper than surface dwellers'.

Today, we see this situation with electronic waste driven by planned obsolescence. Rather than continuing to use a working, but obsolescent, device, we throw it out. We typically don't even recycle the material, not even its precious metals, because it's cheaper to dig it out of the ground, process and refine it, and ship it around the world, often by using exploited, cheap labor and poor health and environmental standards. The waste is sent to poorer parts of the world where it is processed. Some is reused in-situ. Some is recycled and sold back. But the process is toxic and dangerous.

How Big Does Their Ship Have To Be?

Most of it would be taken up by containers of junk they collected, things to be sorted to see if they're valuable, and things that are valuable if recycled in large quantities.

Since it's expensive to bring things up into orbit, the sorting would happen on the surface. Similarly, it's cheaper to bring the recycling equipment down than to bring bulky, massive material up. Only the valuable material after processing is brought up. This also simplifies waste disposal: leave it at the landfill. The ship only has to be big enough to support the people, fit their equipment, plus its engines. It moves from landfill to landfill, sending out mining parties to extract and refine material and bring it back. Once they gather together enough valuables, they may attempt to trade with one of the trash-producing rich planets.

Schwern

$\begingroup$ I love how compelling your answer is in the face of unknown future tech. Perhaps the only way around this is with something like a teleportation device --- which, presumably, is cheaper per kg. $\endgroup$ – jpaugh Jun 30 '20 at 22:45
$\begingroup$ @jpaugh Thanks! The cost-to-orbit was just a back-of-the-envelope sanity check to make sure it wasn't going to cost more energy than a planet receives from its star, for example. $\endgroup$ – Schwern Jul 1 '20 at 0:05
$\begingroup$ +1 for noting the illogic of "junk planets", could be emphasized even more. There's a YouTube video that goes into why trying to dispose of nuclear waste by shooting it into the sun is more dangerous and more hideously expensive than almost any other option. $\endgroup$ – KerrAvon2055 Jul 1 '20 at 2:23
$\begingroup$ @ScottGartner If you have a sci-fi problem and you think "I'll solve it with wormholes and teleporters" you now have many problems. As anyone who's played Portal knows, wormholes break physics. Put one portal high, one low. Pass a fluid through them so it falls from the high to the low. Use the falling fluid to drive a turbine which powers the portals. Tada! Perpetual motion machine. Star Trek style teleporters are also a problem: what reassembles the matter at the other end? And there are the philosophical issues.
$\endgroup$ – Schwern Jul 1 '20 at 19:02
$\begingroup$ @jpaugh Like many ethical exercises, The Swampman has no one answer. The Outer Limits episode "Think Like A Dinosaur" presents another view on Swampman. Point is, teleportation carries a lot of baggage; adding it to your world either has implications, or you ignore them and move into science-fantasy. $\endgroup$ – Schwern Jul 1 '20 at 20:27

Interstellar trips

You will have to resort to a lot of handwaving, using FTL on a Star Wars level, to get your scavengers to do interstellar travel in months. Remember that the distance between stars is measured in light years! Franklin answered this in a majestic way!

Acceleration in sci-fi

The biggest problem with sci-fi ship acceleration is that most stories ignore that this acceleration will throw the crew against the wall: either you accelerate for a long time at a rate acceptable to our biology (even setting aside the laws of relativity and the enormous energy consumption, it takes 11 months to reach the speed of light at 1g), or you use an acceleration that would turn living beings, loose objects, fixed objects, cargo, fuel, engines and the hull of the ship into a piece of nothing. This, of course, while making the ships look like planes going forward instead of lift cabins going up and down, which would be more logical.

Asimov develops an elegant proposal for Trevize's spacecraft, which accelerates each atom of the spacecraft at the same time so that the occupants do not notice any acceleration. Clarke uses a similar strategy in Childhood's End, and both look like the Alcubierre Warp Drive idea.

Cost of a scavenger ship

In an interstellar civilization, the cost of building large cargo ships shouldn't be much. Even if that civilization is in ruins and chaos has smashed the galaxy into stellar fiefdoms that dispute power among themselves, there will always be old things from the glorious times that can be reused.

Cargo size

The size of a cargo ship is optimized according to its maintenance cost. If a ship takes X units of cargo spending Y, and another takes 2X spending 3Y, I would prefer to own 2 of the first ship rather than one of the second. The operations of approaching the landfill planets and taking on the junk, the maintenance of a very large structure, and other details need to be calculated to optimize this. The idea of gaining scale does not have to lead to a single gigantic object. Safety and fuel use make the scale gain pay off, but operations on several smaller units are cheaper and simpler than a large operation. How to solve? Maybe with something like a train?
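To put numbers on the X-for-Y argument above, a toy Python comparison (all figures invented) of cost per unit of cargo for the two designs:

import math

# capacity per trip, operating cost per trip:
# the small ship takes X for Y, the large one 2X for 3Y
designs = {"small": (1.0, 1.0), "large": (2.0, 3.0)}
demand = 10.0  # units of junk to haul per cycle

for name, (capacity, cost) in designs.items():
    trips = math.ceil(demand / capacity)  # whole trips needed
    print(name, trips, "trips, total cost", trips * cost,
          "cost per unit", cost / capacity)
# moving the same junk costs 10.0 with small ships vs 15.0 with large
# ones: scale only pays off if it also lowers the cost per unit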
$\endgroup$ – Nosajimiki Jun 30 '20 at 18:43 $\begingroup$ @jpaugh It is still acceleration, but it accelerates the way falling does, not like a thruster. Alcibierre drives rely on artificial positive and negative gravity. The problem is that gravity has an exponential fall-off; so, when you overlap a gravity source in front of your ship with a negative gravity source behind your ship the sum of pushing and pulling in the middle is not going to be as strong in the center as it is at either extreme. So it's like falling, but where your head and feet fall faster than your stomach. $\endgroup$ – Nosajimiki Jul 1 '20 at 13:21 $\begingroup$ @jpaugh actually yes... I have a bad habit or re-evaluating my answers every time someone pokes me. Thanks to this poke I just noticed that I had a unit conversion error in my answer so an Alcibierre drive would not need to simulate the mass of a small star to accelerate an 11km ship to the required speed, but rather twice the mass of the Milky Way... amazing how fast compounding exponential variables can go wrong. $\endgroup$ – Nosajimiki Jul 1 '20 at 14:23 This question is far too concerned about unimportant details; in particular, the mass and size of the ship. The essential parts of the ship will be: The crew's habitat. The engines. Given that the ship will: Accelerate very slowly. Remain in space. The most obvious configuration is simply to attach large cargo nets to it whenever a new load is acquired, and use the ship to tow them along with it. With the low acceleration, there will be very little stress on the lines that secure the nets to the ship, and the cargo can trail behind as far as one wants (given sufficiently long lines and large enough nets). There are no real restrictions on what the ship has to look like, and in fact, the ship itself can remain relatively small. The only crucial maneuver would be at the mid-point of any journey, when the direction of acceleration must be reversed. Rather than, as is normally done with a rigid vessel, rotating the ship (which wouldn't work in this case), I'd suggest simply doing a slow and wide U-turn. The only tricky design feature would be in having the engine exhaust avoid the cargo, which is directly in the line of fire. That can easily be avoided by replacing the single ship with two (or for safety and reliability, several) separate ships that have a common tow-line between them for the cargo lines to attach to. But given how slowly this thing can accelerate, the goal of "able to travel between solar systems in (at most) a month or so" is completely ridiculous. I'd suggest that such trips would take thousands or millions of years, making the incredible value of that cargo of scrap even more ridiculous. This really isn't a premise for a science fiction story. It is pure sci-fi, a story that uses the superficial trappings of science fiction while having almost nothing else in common with the real genre. There is already far too much of that in the world. Instead, I'd suggest writing this story as a marine salvage operation, here on Earth. All the difficulties that you are wondering how to solve will simply no longer exist. It might even make a good adventure novel, but it certainly wouldn't be science fiction. Ray ButterworthRay Butterworth Adding a small idea to all of the rest: You could use "Intertialess technology" like was used in the old EE "Doc" Smith's Lensmen books. It allowed for FTL and huge ships. This may be the answer to some of the "hand wavy" stuff you need to worry about... 
by not worrying about it. Thoughts on perturbing the orbits of existing moons and on the "local" planet's gravity: don't orbit the planet; orbit the star in an orbit matching the planet's (ahead/behind/beside) and shuttle to/from the planet. This will require some energy to hold position, because the natural orbit for a craft much lighter than the planet would be different, but given inertialess technology as an answer, it's easy to explain.

millebi

$\begingroup$ "Not worrying about it" - I think that's the definition of hand-wavy! :-) $\endgroup$ – jpaugh Jul 1 '20 at 14:30

I'll try to focus on some of the issues I see in a potentially interesting story. The size of the ship would be story-based. If you want it to be large, you can just make up motives for it to set out on a long journey through the stars on its garbage... ehrm, recovery mission. Very much like whaling ships in the 19th century. The motive could be economic, political, etc. Basically, after collecting, they can't just sell their load to the nearest civilized planet but have to bring it to a very specific place. You are right that a large ship would appear to be too expensive for such an endeavor. I would suggest then that it is a ship/station previously built for a completely different purpose (e.g., defense), then decommissioned and refitted for salvaging operations at a much lower cost. This could also usher in problems in dealing with older, refitted technology. Since you need to move to the next destination relatively quickly, you are going to need to handwave on this. Relativity is not your friend here. But neither is it your enemy... Considering mankind has been a space-faring race for "tens of thousands of years", it would be quite acceptable for them to have developed physics knowledge and technology way beyond our current limits. I see two options:

bending space-time around the ship in order to accelerate it along a shorter path, e.g., What is Alcubierre's Warp Drive?
opening a wormhole to their destination (or to an intermediate one if you need longer travel time before reaching the destination; maybe they need to set up the ship before each jump?), e.g., Physicists Just Released Step-by-Step Instructions for Building a Wormhole, and the referenced link if you need to provide more foundation for space travel in your story: Traversable Asymptotically Flat Wormholes with Short Transit Times.

Salvaging from the planet

Shuttles would seem the most obvious way, especially if the ship needs to keep a very high orbit. But given the size of the ship, I would imagine a large amount of material is brought up from every visited planet. You would need a lot of flights, really a lot. Each shuttle would have loading/unloading times. That does not look good. What if, instead, your civilization is capable of altering gravitational fields? You would need to spend energy for that, of course, but huge, affordable energy generation (and control) is the basis for a star-faring civilization. Given gravitational field control, the ship could go into a lower orbit and pull down a space elevator, which would act as a conveyor belt, continuously bringing up material from the planet. Shuttles may still be needed, though, if the salvage sites are not located on the equator. Conceptual designs place the tower construction at an equatorial site. The extreme height of the lower tower section makes it vulnerable to high winds.
An equatorial location is ideal for a tower of such enormous height because the area is practically devoid of hurricanes and tornadoes, and it aligns properly with geostationary orbits (which are directly overhead). Other issues that you don't mention may or may not be part of your story; it's up to you. Tens of thousands of years from now means a lot of change:

human modifications: technological evolution will change man as we know him. How will people be in your story? I don't expect your crew to look anything like Mal and his motley crew on Firefly.
social interactions: how is society organized? Do families still exist? Does the crew have loved ones far away during their journey?
AI: how does AI fit in your story? Because there will be AIs. Probably they will be the ones who discover the theory and the technology underlying wormholes.
economy: what makes the enterprise worthwhile for those who finance it? What makes it worthwhile for the crew? Possibly mankind is on a path from a post-scarcity economy back to one of scarcity. As you say: "I was thinking that perhaps asteroids and uninhabitable planets were pretty much looted of all their ore."

Duncan Drake

Two things. Firstly, a crewed space ship is in one sense nothing more than a space habitat with an engine attached. Secondly, a lot depends on how 'hard'/believable you want your story to be, e.g., are you going to have devices that generate artificial gravity, a la Star Trek, on the ship or not? If you don't have this kind of device and you want your crew to be able to live and work in gravity, then you are going to have to use centrifugal force; i.e., your ship has to rotate about its main axis in order to generate an artificial gravitational field. The downside of this is that the greater the diameter of your rotating habitat, or the higher the gravity you wish to maintain, the more mechanical stress you put on the rotating hull; i.e., for any given diameter, lower gravity means slower spin and higher gravity means faster spin. And rotation imposes stress on whatever material you use to construct your torus. The bigger the diameter of the ship, the stronger the material needed, and by strength I mean tensile strength. People have done the calculations, and at the moment the largest-diameter structure you could in theory build in space is about 1200 miles wide (i.e., a radius of 600 miles). This is based on something called a 'Bishop Ring' habitat, which (in theory) would be made from carbon nanorods (Dr Bishop calculated these as having the highest possible tensile strength known to man). So that sets the size limit, unless you bring in 'unobtainium' or some other made-up material that's stronger. If you don't, anything bigger than that simply breaks apart - if you are generating 1 g at the outer circumference. In reality it would be far easier/safer to go with a slightly smaller radius (say 'merely' 400 miles) and then simply add more 'donuts' to your ship, one behind the other along the axis of thrust, as needed. Benefits: normal g in the living areas, and zero g at the centre of the donuts (well, close to zero). Downsides: either your ship accelerates very slowly up to top speed, so the passengers don't notice the two opposing 'gravitational' forces (the centrifugal force and the acceleration), or you have to 'spin down' the hubs before you start accelerating, so that the crew's new 'down' becomes the rear of the ship (towards the engines). Obviously you then have to spin it up again when you stop accelerating or get to your destination.
Which takes time - especially when you are talking about structures this size. So it would depend on how much 'zipping' around the universe you intend these things to do. They certainly won't be outmaneuvering any X-wing they encounter.

Just answering a part of the question - the crew. Given the advanced level of technology, you could expect massive automation and robotisation. The crew size is not a huge problem; imagine a size similar to the (mining ship, so kinda similar) Red Dwarf. And yeah, I know it's a comedy, but the premise sounds plausible to me. And its size (6 miles long) is probably something like what you're after. Although it's meant to accommodate and be operated by thousands of crew members, it went just fine for millions of years with an active crew of zero, controlled completely by its main computer. So having a crew of just a few dozen is not a big issue, given that they'll be mostly decision makers rather than micromanagers. I don't think your "mostly low-level grunt workers" will work, though. You'd have a few of those for the kind of maintenance that can't be done by robots, but most of your crew will be "pilots" and "engineers" - and even though there will be a good portion of "machinery operators", they will most likely be white-collar workers!

BIOStheZerg

As Franklin Pezzuti Dyer already scienced out the issue with size/mass constraints, I don't think you'll need an answer on that. As for what drives the ship - if you want to go with interstellar travel without the use of things like stasis, and without letting your concept drift into science fantasy with the idea of FTL travel (which is impossible according to any theories now existing) - you could go with the concept of folding space (Alcubierre-drive style) around the ship. In that case, acceleration would be no factor at all: since you're not technically moving, there are no G-forces to worry about. You'll need to dive into creating artificial antimatter to get that to work, though, as creating folds in space cannot be done in a "safe" and stable manner without it. The shuttle bay idea isn't bad, since it allows some room to play with events happening in the space between planet and ship. It makes for an easy tool for character development too, if the crew talks on these rides over to the planet or back to the ship. I'd try to invent some kind of exotic material, maybe a rare metal of sorts, that has started to run out after centuries of mining and failure to recycle - forcing society to dig it up from trash from a time in which it wasn't rare. That would deal with the economics of it quite well, I'd think. It also creates opportunities for mystery (who knows what those millennia-old piles of garbage might hold, after all).

Tman

$\begingroup$ Mind your language, please. $\endgroup$ – L.Dutch - Reinstate Monica♦ Jul 1 '20 at 13:00

The largest a ship can get is about 300,000 km in length, but it would require the industrial level of a K3 civilization to build, and would only really be practical for a K4 or a K5. Such ships would be black-hole powered, as only a small star or a black hole could give you enough power for ships in this size range; and such a ship, if it were a warship, would be able to conquer an entire large galaxy all by itself. For context, the yellow dwarf that's our sun, Sol, is 1.392 million km in diameter, so this is a ship about a fifth as long as the sun is wide.

Mr. Anderson

$\begingroup$ Why is that the upper limit?
$\endgroup$ – lijat Jun 29 '20 at 18:34 $\begingroup$ The strength of materials, mostly. Another is the cost in resources to build such a ship, not to mention that the maintenance upkeep will consume dozens of planets' worth of resources. Such ships will never be mass-produced, even by K4 and K5 civilizations, and the appearance of such a ship in a space battle would be astonishingly rare. You also would never want such a ship entering a star system, as its own tidal forces would in all likelihood knock most of the planets present out of the orbit of their sun! Such ships would have the same gravity as a small star! $\endgroup$ – Mr. Anderson Jun 29 '20 at 20:38 $\begingroup$ How do you conquer an entire galaxy by yourself when you can only be in one place at a time? $\endgroup$ – DKNguyen Jun 30 '20 at 1:34 $\begingroup$ This maximum size seems oddly specific. Why is 300,000 plausible and not 400,000? Does it matter if the ship is spherical or elongated? How can you know with that degree of precision what the limits of material strength and resources are for a K4 or K5 civilization? Surely you cannot be using the laws of physics as currently known to make these calculations, since you also say that such a ship could "conquer a galaxy within a decade", a feat which is not allowed by the laws of physics as we know them, given that galaxies are tens or hundreds of thousands of light years across. $\endgroup$ – brendan Jun 30 '20 at 17:56 $\begingroup$ Without providing any details on speed or shape or capacity, I don't really see much rationale behind the 300,000 km figure. Materials strength comes into play when you're talking about how the ship resists forces, like its own gravity (which will depend on shape and density), or its own propulsion (which will depend on shape and power). None of that is described here, so the 300,000 km number seems like it was pulled out of a hat. Any rationale behind that number would make this a better answer. $\endgroup$ – Nuclear Hoagie Jun 30 '20 at 18:08
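As a back-of-the-envelope check on the travel-time discussion in the answers above, the short R sketch below computes the Newtonian flip-and-brake travel time under a constant 1 g of thrust (accelerate to the midpoint, then decelerate, so $t = 2\sqrt{d/a}$); the destination distance is an assumption chosen for illustration, and relativistic corrections are ignored.

    ly <- 9.4607e15              # metres per light year
    d  <- 4.37 * ly              # assumed trip: roughly the distance to Alpha Centauri
    a  <- 9.81                   # constant 1 g thrust, in m/s^2
    t  <- 2 * sqrt(d / a)        # flip-and-brake travel time, in seconds
    t / (365.25 * 24 * 3600)     # about 4.1 years

Even to the nearest star, a 1 g ship needs years rather than months, which is why the answers above reach for FTL or space-folding handwaves.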
Quantum Computing Stack Exchange is a question and answer site for engineers, scientists, programmers, and computing professionals interested in quantum computing.

Convex Combination of Separable States

Consider the state $$ \frac{1}{2}\left(| \phi^+ \rangle \langle \phi^+ | + | \psi^+ \rangle \langle \psi^+ | \right) $$ where $$ | \phi^+ \rangle = \frac{1}{\sqrt2} \left(|00 \rangle + | 11 \rangle \right), \qquad | \psi^+ \rangle = \frac{1}{\sqrt2} \left(|01 \rangle + | 10 \rangle \right). $$ By the PPT criterion, we know this is a separable state. If I wanted to find the mixture of separable states that forms it, how would I go about it?

quantum-information entanglement density-matrix

Mahathi Vempati

I would start by writing this as a matrix, and recognising how it can be written in terms of Pauli matrices: $$ \frac14\left(\begin{array}{cccc} 1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 \\ 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 1 \end{array}\right)=\frac14(\mathbb{I}\otimes\mathbb{I}+X\otimes X) $$ From here, I don't have a completely formulaic approach for how you do it. But, in this instance, I wrote $$ =\frac{1}{2}\left(\frac{\mathbb{I}+X}{2}\otimes \frac{\mathbb{I}+X}{2}+\frac{\mathbb{I}-X}{2}\otimes \frac{\mathbb{I}-X}{2}\right). $$ Now you can see that each of the terms in the tensor product is a separable state. Specifically, $$ (|++\rangle\langle ++|+|--\rangle\langle --|)/2 $$ One approach that I suppose I might have taken is to recognise the separable, diagonal basis of $X\otimes X$, and decompose $\mathbb{I}\otimes\mathbb{I}$ in the same basis: $$ \frac{1}{4}(|++\rangle\langle ++|+|+-\rangle\langle +-|+|-+\rangle\langle -+|+|--\rangle\langle --|)+\frac{1}{4}(|++\rangle\langle ++|-|+-\rangle\langle +-|-|-+\rangle\langle -+|+|--\rangle\langle --|), $$ which inevitably leads to that result.

answered Feb 27 at 9:30 by DaftWullie

$\begingroup$ Just want to confirm: unlike a 1-qubit system, where I/2 is the only state that is diagonal in multiple bases, in a 2-qubit system a state can be diagonal in multiple bases? $\endgroup$ – Mahathi Vempati Feb 27 at 10:03 $\begingroup$ Yes, because that is a result of degeneracy/multiplicity. The larger the space, the more opportunity there is for that. $\endgroup$ – DaftWullie Feb 27 at 11:03 $\begingroup$ Given that deciding separability is a computationally hard problem, it is not surprising that you do not have an algorithmic way to do it. $\endgroup$ – Norbert Schuch Feb 28 at 21:22
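For anyone who wants to double-check the final decomposition numerically, here is a minimal R sketch; it simply builds both sides of the claimed identity (all the states involved are real) and compares them.

    ket00 <- c(1, 0, 0, 0); ket01 <- c(0, 1, 0, 0)
    ket10 <- c(0, 0, 1, 0); ket11 <- c(0, 0, 0, 1)
    phi_plus <- (ket00 + ket11) / sqrt(2)      # Bell state |phi+>
    psi_plus <- (ket01 + ket10) / sqrt(2)      # Bell state |psi+>
    proj <- function(v) v %o% v                # |v><v| for real vectors
    rho <- (proj(phi_plus) + proj(psi_plus)) / 2
    plus  <- c(1,  1) / sqrt(2)                # |+>
    minus <- c(1, -1) / sqrt(2)                # |->
    sep <- (proj(kronecker(plus, plus)) + proj(kronecker(minus, minus))) / 2
    max(abs(rho - sep))                        # 0 up to rounding: the two sides agree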
Mathematics and Statistics for Data Science

* '''Descriptive Data Analysis:'''
::* Rather than finding hidden information in the data, descriptive data analysis looks to summarize the dataset.
::* Measures commonly included in descriptive data analysis:
:::* Central tendency (Mean, Mode, Median)
:::* Variability (Standard deviation, Min/Max)

==Central tendency==
https://statistics.laerd.com/statistical-guides/measures-central-tendency-mean-mode-median.php

A central tendency (or measure of central tendency) is a single value that attempts to describe a set of data by identifying the central position within that set of data. '''The mean''' (often called the average) is most likely the measure of central tendency that you are most familiar with, but there are others, such as the median and the mode.

'''The mean, median and mode''' are all valid measures of central tendency, but under different conditions, some measures of central tendency become more appropriate to use than others. In the following sections, we will look at the mean, mode and median, and learn how to calculate them and under what conditions they are most appropriate to be used.

===Mean===
Mean (Arithmetic)

The mean (or average) is the most popular and well-known measure of central tendency. The mean is equal to the sum of all the values in the data set divided by the number of values in the data set. So, if we have <math>n</math> values in a data set and they have values <math>x_1, x_2, ..., x_n,</math> the sample mean, usually denoted by <math>\bar{x}</math> (pronounced x bar), is:

<math>\bar{x} = \frac{(x_1 + x_2 +...+ x_n)}{n} = \frac{\sum x}{n}</math>

The mean is essentially a model of your data set. It is the value that is most common. You will notice, however, that the mean is not often one of the actual values that you have observed in your data set. However, one of its important properties is that it minimises error in the prediction of any one value in your data set. That is, it is the value that produces the lowest amount of error from all other values in the data set. An important property of the mean is that it includes every value in your data set as part of the calculation. In addition, the mean is the only measure of central tendency where the sum of the deviations of each value from the mean is always zero.

====When not to use the mean====
The mean has one main disadvantage: it is particularly susceptible to the influence of outliers. These are values that are unusual compared to the rest of the data set by being especially small or large in numerical value. For example, consider the wages of staff at a factory below:

{| class="wikitable"
|-
! Staff !! 1 !! 2 !! 3 !! 4 !! 5 !! 6 !! 7 !! 8 !! 9 !! 10
|-
! Salary
| 15k || 18k || 16k || 14k || 15k || 15k || 12k || 17k || 90k || 95k
|}

The mean salary for these ten staff is $30.7k. However, inspecting the raw data suggests that this mean value might not be the best way to accurately reflect the typical salary of a worker, as most workers have salaries in the $12k to 18k range. The mean is being skewed by the two large salaries. Therefore, in this situation, we would like to have a better measure of central tendency. As we will find out later, taking the median would be a better measure of central tendency in this situation. Another time when we usually prefer the median over the mean (or mode) is when our data is skewed (i.e., the frequency distribution for our data is skewed).
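As a quick check of the salary example above, here is a minimal R sketch contrasting the mean and the median on those ten salaries (values in thousands):

 salaries <- c(15, 18, 16, 14, 15, 15, 12, 17, 90, 95)
 mean(salaries)     # 30.7 - dragged upwards by the two large salaries
 median(salaries)   # 15.5 - much closer to a "typical" salary here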
If we consider the normal distribution - as this is the most frequently assessed in statistics - when the data is perfectly normal, the mean, median and mode are identical. Moreover, they all represent the most typical value in the data set. However, as the data becomes skewed the mean loses its ability to provide the best central location for the data because the skewed data is dragging it away from the typical value. However, the median best retains this position and is not as strongly influenced by the skewed values. This is explained in more detail in the skewed distribution section later in this guide.

====Mean in R====
 mean(iris$Sepal.Width)

===Median===
The median is the middle score for a set of data that has been arranged in order of magnitude. The median is less affected by outliers and skewed data. In order to calculate the median, suppose we have the data below:

{| class="wikitable"
| 65 || 55 || 89 || 56 || 35 || 14 || 56 || 55 || 87 || 45 || 92
|}

We first need to rearrange that data into order of magnitude (smallest first):

{| class="wikitable"
| 14 || 35 || 45 || 55 || 55 || <span style="color:#FF0000">56</span> || 56 || 65 || 87 || 89 || 92
|}

Our median mark is the middle mark - in this case, 56. It is the middle mark because there are 5 scores before it and 5 scores after it. This works fine when you have an odd number of scores, but what happens when you have an even number of scores? What if you had only 10 scores? Well, you simply have to take the middle two scores and average the result. So, if we look at the example below:

{| class="wikitable"
| 65 || 55 || 89 || 56 || 35 || 14 || 56 || 55 || 87 || 45
|}

We again rearrange that data into order of magnitude (smallest first):

{| class="wikitable"
| 14 || 35 || 45 || 55 || <span style="color:#FF0000">55</span> || <span style="color:#FF0000">56</span> || 56 || 65 || 87 || 89
|}

Only now we have to take the 5th and 6th scores in our data set and average them to get a median of 55.5.

====Median in R====
 median(iris$Sepal.Length)

===Mode===
The mode is the most frequent score in our data set. On a histogram it represents the highest bar in a bar chart or histogram. You can, therefore, sometimes consider the mode as being the most popular option. An example of a mode is presented below:

[[File:Mode-1.png|center|thumb|359x359px]]

Normally, the mode is used for categorical data where we wish to know which is the most common category, as illustrated below:

[[File:Mode-1a.png|center|thumb|380x380px]]

We can see above that the most common form of transport, in this particular data set, is the bus. However, one of the problems with the mode is that it is not unique, so it leaves us with problems when we have two or more values that share the highest frequency, such as below:

We are now stuck as to which mode best describes the central tendency of the data. This is particularly problematic when we have continuous data, because we are more likely not to have any one value that is more frequent than the others. For example, consider measuring 30 people's weights (to the nearest 0.1 kg). How likely is it that we will find two or more people with '''exactly''' the same weight (e.g., 67.4 kg)? The answer is: probably very unlikely - many people might be close, but with such a small sample (30 people) and a large range of possible weights, you are unlikely to find two people with exactly the same weight; that is, to the nearest 0.1 kg. This is why the mode is very rarely used with continuous data. Another problem with the mode is that it will not provide us with a very good measure of central tendency when the most common mark is far away from the rest of the data in the data set, as depicted in the diagram below:

[[File:Mode-3.png|center|thumb|379px]]

In the above diagram the mode has a value of 2. We can clearly see, however, that the mode is not representative of the data, which is mostly concentrated around the 20 to 30 value range.
To use the mode to describe the central tendency of this data set would be misleading.

====To get the Mode in R====
 install.packages("modeest")
 library(modeest)
 > mfv(iris$Sepal.Width)

===Skewed Distributions and the Mean and Median===
We often test whether our data is normally distributed because this is a common assumption underlying many statistical tests. An example of a normally distributed set of data is presented below:

[[File:Skewed-1.png|center|thumb|379px]]

When you have a normally distributed sample you can legitimately use both the mean and the median as your measure of central tendency. In fact, in any symmetrical distribution the mean, median and mode are equal. However, in this situation, the mean is widely preferred as the best measure of central tendency because it is the measure that includes all the values in the data set for its calculation, and any change in any of the scores will affect the value of the mean. This is not the case with the median or mode.

However, when our data is skewed, for example, as with a right-skewed data set, we find that the mean is being dragged in the direction of the skew. In these situations, the median is generally considered to be the best representative of the central location of the data. The more skewed the distribution, the greater the difference between the median and mean, and the greater emphasis should be placed on using the median as opposed to the mean. A classic example of such a right-skewed distribution is income (salary), where higher earners provide a false representation of the typical income if it is expressed as a mean and not a median.

If dealing with a normal distribution, and tests of normality show that the data is non-normal, it is customary to use the median instead of the mean. However, this is more a rule of thumb than a strict guideline. Sometimes, researchers wish to report the mean of a skewed distribution if the median and mean are not appreciably different (a subjective assessment), and if it allows easier comparisons to previous research to be made.

===Summary of when to use the mean, median and mode===
Please use the following summary table to know what the best measure of central tendency is with respect to the different types of variable:

{| class="wikitable"
|-
! Type of Variable !! Best measure of central tendency
|-
| Nominal || Mode
|-
| Ordinal || Median
|-
| Interval/Ratio (not skewed) || Mean
|-
| Interval/Ratio (skewed) || Median
|}

For answers to frequently asked questions about measures of central tendency, please go to: https://statistics.laerd.com/statistical-guides/measures-central-tendency-mean-mode-median-faqs.php

==Measures of Variation==
===Range===
The range simply shows the minimum and maximum values of a variable. In R:

 > min(iris$Sepal.Width)
 > max(iris$Sepal.Width)
 > range(iris$Sepal.Width)

The range can be used on '''''Ordinal, Ratio and Interval''''' scales.

===Quartile===
https://statistics.laerd.com/statistical-guides/measures-of-spread-range-quartiles.php

Quartiles tell us about the spread of a data set by breaking the data set into quarters, just like the median breaks it in half. For example, consider the marks of the 100 students, which have been ordered from the lowest to the highest scores.

*'''The first quartile (Q1):''' Lies between the 25th and 26th student's marks.
**So, if the 25th and 26th student's marks are 45 and 45, respectively:
***(Q1) = (45 + 45) ÷ 2 = 45
*'''The second quartile (Q2):''' Lies between the 50th and 51st student's marks.
**So, if the 50th and 51st student's marks are 58 and 59, respectively:
***(Q2) = (58 + 59) ÷ 2 = 58.5
*'''The third quartile (Q3):''' Lies between the 75th and 76th student's marks.
**So, if the 75th and 76th student's marks are both 71:
***(Q3) = (71 + 71) ÷ 2 = 71

In the above example, we have an even number of scores (100 students, rather than an odd number, such as 99 students). This means that when we calculate the quartiles, we take the sum of the two scores around each quartile and then halve them (hence Q1 = (45 + 45) ÷ 2 = 45). However, if we had an odd number of scores (say, 99 students), we would only need to take one score for each quartile (that is, the 25th, 50th and 75th scores). You should recognize that the second quartile is also the median.

Quartiles are a useful measure of spread because they are much less affected by outliers or a skewed data set than the equivalent measures of mean and standard deviation. For this reason, quartiles are often reported along with the median as the best choice of measure of spread and central tendency, respectively, when dealing with skewed data and/or data with outliers. A common way of expressing quartiles is as an interquartile range. The interquartile range describes the difference between the third quartile (Q3) and the first quartile (Q1), telling us about the range of the middle half of the scores in the distribution. Hence, for our 100 students:

<math>Interquartile\ range = Q3 - Q1 = 71 - 45 = 26</math>

However, it should be noted that in journals and other publications you will usually see the interquartile range reported as 45 to 71, rather than the calculated <math>Interquartile\ range.</math> A slight variation on this is the <math>semi{\text{-}}interquartile\ range,</math> which is half the <math>Interquartile\ range.</math> Hence, for our 100 students:

<math>Semi{\text{-}}Interquartile\ range = \frac{Q3 - Q1}{2} = \frac{71 - 45}{2} = 13</math>

====Quartile in R====
 quantile(iris$Sepal.Length)

0% and 100% are equivalent to the min and max values.

===Box Plots===
 boxplot(iris$Sepal.Length, col = "blue", main="iris dataset", ylab = "Sepal Length")

===Variance===
https://statistics.laerd.com/statistical-guides/measures-of-spread-absolute-deviation-variance.php

Another method for calculating the deviation of a group of scores from the mean, such as the 100 students we used earlier, is to use the variance. Unlike the absolute deviation, which uses the absolute value of the deviation in order to "rid itself" of the negative values, the variance achieves positive values by squaring each of the deviations instead. Adding up these squared deviations gives us the sum of squares, which we can then divide by the total number of scores in our group of data (in other words, 100 because there are 100 students) to find the variance. Therefore, for our 100 students, the variance is 211.89, as shown below:

<math>variance = \sigma^2 = \frac{\sum(X - \mu)^2}{N}</math>

<math>\mu: \text{Mean};\ \ \ X: \text{Score};\ \ \ N: \text{Number of scores}</math>

*Variance describes the spread of the data.
*It is a measure of deviation of a variable from the arithmetic mean.
*The technical definition is the average of the squared differences from the mean.
*A value of zero means that there is no variability; all the numbers in the data set are the same.
*A higher number would indicate a large variety of numbers.
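One caveat before the R call in the next subsection: R's var() uses the sample denominator <math>n-1</math>, while the formula above divides by the population size <math>N</math>. A minimal sketch of the difference:

 x <- iris$Sepal.Length
 var(x)                            # sample variance, denominator n - 1
 sum((x - mean(x))^2) / length(x)  # population variance, as in the formula above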
====Variance in R====
 var(iris$Sepal.Length)

===Standard Deviation===
https://statistics.laerd.com/statistical-guides/measures-of-spread-standard-deviation.php

The standard deviation is a measure of the spread of scores within a set of data. Usually, we are interested in the standard deviation of a population. However, as we are often presented with data from a sample only, we can estimate the population standard deviation from a sample standard deviation. These two standard deviations - sample and population standard deviations - are calculated differently. In statistics, we are usually presented with having to calculate sample standard deviations, and so this is what this article will focus on, although the formula for a population standard deviation will also be shown.

The '''sample standard deviation formula''' is:

<math>s = \sqrt{\frac{\sum(X - \bar{X})^2}{n -1}}</math>

<math>\bar{X}: \text{Sample mean};\ \ \ n: \text{Number of scores in the sample}</math>

The '''population standard deviation''' formula is:

<math>\sigma = \sqrt{\frac{\sum(X - \mu)^2}{N}}</math>

<math>\mu: \text{Population mean};\ \ \ N: \text{Number of scores in the population}</math>

*The standard deviation is the square root of the variance.
*This measure is the most widely used to express deviation from the mean in a variable.
*The higher the value, the more widely distributed the variable's data values are around the mean.
*Assuming the frequency distribution is approximately normal, about 68% of all observations are within +/- 1 standard deviation.
*Approximately 95% of all observations fall within two standard deviations of the mean (if the data is normally distributed).

====Standard Deviation in R====
 sd(iris$Sepal.Length)

=== Z Score ===
* A z-score represents how far from the mean a particular value is, based on the number of standard deviations.
* z-scores are also known as standardized residuals.
* Note: the mean and standard deviation are sensitive to outliers.

 > x <- (iris$Sepal.Width - mean(iris$Sepal.Width))/sd(iris$Sepal.Width)
 > x
 > x[77] # choose a single observation
 # or this
 > x <- (iris$Sepal.Width[77] - mean(iris$Sepal.Width))/sd(iris$Sepal.Width)

==Shape of Distribution==
===Skewness===
* '''Skewness''' is a method for quantifying the lack of symmetry in the distribution of a variable.
* A '''skewness''' value of zero indicates that the variable is distributed symmetrically. A positive value indicates a longer tail to the right, and a negative value indicates a longer tail to the left.

[[File:Skewness.png|400px|thumb|center|]]

====Skewness in R====
 > install.packages("moments")
 > library(moments)
 > skewness(iris$Sepal.Width)

===Histograms in R===
 > hist(iris$Petal.Width)

===Kurtosis===
* '''Kurtosis''' is a measure that gives an indication of the peakedness of the distribution.
* Variables with a pronounced peak toward the mean have a high ''kurtosis'' score and variables with a flat peak have a low ''kurtosis'' score.

====Kurtosis in R====
 > kurtosis(iris$Sepal.Length)
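A caveat on the call above, assuming the moments package loaded in the skewness subsection: its kurtosis() reports raw (Pearson) kurtosis, which is about 3 for a normal distribution, rather than the mean-zero "excess" kurtosis that some other packages report.

 > set.seed(42)
 > kurtosis(rnorm(100000))   # close to 3, not 0, for normally distributed data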
International Symposium on Mathematics, Quantum Theory, and Cryptography pp 209–229

Recent Developments in Multivariate Public Key Cryptosystems

Yasufumi Hashimoto

First Online: 23 October 2020

Part of the Mathematics for Industry book series (MFI, volume 33)

The multivariate signature schemes UOV, Rainbow, and HFEv- have been considered to be secure and efficient enough under suitable parameter selections. In fact, several second round candidates of NIST's standardization project of Post-Quantum Cryptography are based on these schemes. On the other hand, there are few multivariate encryption schemes expected to be practical, and despite that, various new schemes have been proposed recently. In the present paper, we summarize the multivariate schemes UOV, Rainbow, and (variants of) HFE generating the second round candidates, and study the practicalities of several multivariate encryption schemes proposed recently.

Legal services Contract Contingent fee Repeated relationship
Multivariate public key cryptosystem (MPKC)

1 Introduction

In 2016, NIST launched the standardization project of Post-Quantum Cryptography (NIST 2020). A lot of schemes were submitted to the first round of this project, and 26 of them were chosen as second round candidates in 2019 (NIST 2020). LUOV (Beullens et al. 2020), Rainbow (Ding et al. 2020), and GeMSS (Casanova et al. 2020) are the multivariate signature schemes in the second round. These schemes are based on UOV (Patarin 1997; Kipnis et al. 1999), Rainbow (Ding et al. 2005), and HFEv- (Patarin et al. 2001), respectively, which were proposed before or around 2000 and have still been considered to be secure and efficient enough under suitable parameter selections. On the other hand, there are few practical multivariate encryption schemes, and despite that, various new schemes have been proposed in this decade.

The aim of this paper is to describe recent developments of multivariate public key cryptosystems not yet presented in the previous paper (Hashimoto 2017). We first summarize in Sect. 2 the schemes UOV (Patarin 1997; Kipnis et al. 1999), Rainbow (Ding et al. 2005), and (variants of) HFE (Patarin 1996), with short surveys on the second round candidates LUOV (Beullens et al. 2020), Rainbow (Ding et al. 2020), and GeMSS (Casanova et al. 2020). Besides, we study in Sect. 3 the encryption schemes HFERP (Ikematsu et al. 2018), ZHFE (Porras et al. 2020), EFC (Szepieniec et al. 2016), and ABC (Tao et al. 2013) proposed recently, and show that the practicalities of these schemes are not much higher than the HFE variants for encryption, which are already known to be not too practical.

Remark that MQDSS (Chen et al. 2016, 2020) is also a second round candidate and has been considered a multivariate signature scheme, since a set of randomly chosen multivariate quadratic forms is used in key generation, signature generation, and signature verification. However, it is based on the Fiat–Shamir transform of the 5-pass identification scheme (Sakumoto et al. 2011) and is far from the other multivariate schemes. We therefore do not study MQDSS in this paper.

2 UOV, Rainbow, and Variants of HFE

In this section, we describe UOV (Patarin 1997; Kipnis et al. 1999), Rainbow (Ding et al. 2005), and variants of HFE (Patarin 1996), and give short surveys on the second round candidates LUOV (Beullens et al. 2020), Rainbow (Ding et al. 2020), and GeMSS (Casanova et al. 2020) of NIST's project (NIST 2020). We first describe the basic constructions of multivariate public key cryptosystems.
2.1 Basic Constructions of Multivariate Public Key Cryptosystems Let \(n,m\ge 1\) be integers, q a power of prime, and \(\mathbf {F}_q\) a finite field of order q. Most MPKCs are described as follows. Secret key. Two invertible affine maps \(S:\mathbf {F}_q^{n} \rightarrow \mathbf {F}_q^{n}\), \(T:\mathbf {F}_q^{m}\rightarrow \mathbf {F}_q^{m}\) and a quadratic map \(G:\mathbf {F}_q^{n}\rightarrow \mathbf {F}_q^{m}\) to be inverted feasibly. Public key. The quadratic map \(F:=T\circ G\circ S:\mathbf {F}_q^{n}\rightarrow \mathbf {F}_q^{m}\). Encryption scheme. Encryption. For a plaintext \(\mathbf {p}\in \mathbf {F}_q^{n}\), the ciphertext is \(\mathbf {c}=F(\mathbf {p})\in \mathbf {F}_q^{m}\). Decryption. For a given ciphertext \(\mathbf {c}\in \mathbf {F}_q^{m}\), compute \(\mathbf {z}:=T^{-1}(\mathbf {c})\) and find \(\mathbf {y}\in \mathbf {F}_q^{n}\) with \(G(\mathbf {y})=\mathbf {z}\). Then the plaintext is \(\mathbf {p}=S^{-1}(\mathbf {y})\). Signature scheme. Signature generation. For a message \(\mathbf {m}\in \mathbf {F}_q^{m}\), compute \(\mathbf {z}:=T^{-1}(\mathbf {m})\) and find \(\mathbf {y}\in \mathbf {F}_q^{n}\) with \(G(\mathbf {y})=\mathbf {z}\). Then the signature is \(\mathbf {s}=S^{-1}(\mathbf {y})\). Signature verification. The signature \(\mathbf {s}\in \mathbf {F}_q^{n}\) is verified by \(\mathbf {m}=F(\mathbf {s})\). Efficiency. The encryption and signature verification are done by substituting \(\mathbf {p},\mathbf {s}\in \mathbf {F}_q^{n}\) into m quadratic forms of n variables. Their complexities are then \(O(n^{2}m)\) for most MPKCs under naive implementations. Furthermore, it is known (Hashimoto 2017) that the complexities of encrypting n plaintexts and of verifying n signatures simultaneously are \(O(n^{w}m)\), where \(2\le w<3\) is a linear algebra constant. The complexities of decryption and signature generation depend mainly on how to invert G. We will discuss them in the individual schemes. Security. There are two types of attacks on MPKCs. One is the direct attack to recover the plaintext \(\mathbf {p}\) of a given ciphertext \(\mathbf {c}\) directly by solving a system of m quadratic equations \(F(\mathbf {x})=(f_{1}(\mathbf {x}),\dots ,f_{m}(\mathbf {x}))=\mathbf {c}\) of n variables. The Gröbner basis attack is considered to be the most standard approach, and its complexity depends on the degree \(d_{\mathrm {reg}}\) of regularity of the corresponding polynomial system \(F(\mathbf {x})-\mathbf {c}\). In general, \(d_{\mathrm {reg}}\) is known to be smaller when the system is more over-defined (\(m\gg n\)) (Bardet et al. 2005). Furthermore, if q is small, the attacker will solve more efficiently by combining with the exhaustive search, which is called a hybrid method (Bettale et al. 2012). We also note that, if the system is massively under-defined (\(n\gg m\)), the attacker can find (at least) one of the solutions more effectively than the case of \(n\sim m\) (Kipnis et al. 1999; Miura et al. 2013; Tomae and Wolf 2012; Cheng et al. 2014). The other type is to recover partial information of the secret key (S, T) which is enough to invert F. In most known key recovery attacks on MPKCs, the attacker uses the property of the coefficient matrices of quadratic forms in G. 
Let \(G_{1},\dots ,G_{m},F_{1},\dots ,F_{m}\) be the coefficient matrices of \(g_{1}(\mathbf {x}),\dots ,g_{m}(\mathbf {x}),f_{1}(\mathbf {x}),\dots ,f_{m}(\mathbf {x})\), respectively, i.e., \(g_{l}(\mathbf {x})={}^{t}\!\mathbf {x}G_{l}\mathbf {x}+(\text {linear form})\) and \(f_{l}(\mathbf {x})={}^{t}\!\mathbf {x}F_{l}\mathbf {x}+(\text {linear form})\) for \(1\le l\le m\). Since \(F(\mathbf {x})=T(G(S(\mathbf {x})))\), it holds
$$\begin{aligned} \begin{pmatrix}F_1 \\ \vdots \\ F_m\end{pmatrix} =T \begin{pmatrix} {}^{t}\!SG_1S \\ \vdots \\ {}^{t}\!SG_mS \end{pmatrix}. \end{aligned}$$
This shows that, if \(G_{1},\dots ,G_{m}\) have special properties, partial information on S, T can be recovered from the public information \(F_{1},\dots ,F_{m}\). How to recover it and the complexity of the attack depend on \(G_{1},\dots ,G_{m}\), and we therefore discuss them in the individual schemes.

2.2 UOV

Let \(o,v\ge 1\) be integers and put \(n:=o+v\), \(m:=o\). The quadratic map \(G:\mathbf {F}_q^{n}\rightarrow \mathbf {F}_q^{m}\) is defined by
$$\begin{aligned} \begin{aligned} g_j(\mathbf {x})&= \sum _{1\le i\le o} x_i \cdot \text {(linear form of }x_{o+1},\dots ,x_{n})\\&+\text {(quadratic form of }x_{o+1},\dots ,x_{n}), \end{aligned} \end{aligned}$$
for \(1\le j\le o\). UOV (the Unbalanced Oil and Vinegar signature scheme; Patarin 1997; Kipnis et al. 1999) is constructed as follows.

Secret key. An invertible affine map \(S:\mathbf {F}_q^{n} \rightarrow \mathbf {F}_q^{n}\) and the quadratic map \(G:\mathbf {F}_q^{n}\rightarrow \mathbf {F}_q^{m}\) defined above.

Public key. The quadratic map \(F:= G\circ S:\mathbf {F}_q^{n}\rightarrow \mathbf {F}_q^{m}\).

Signature generation. For a message \(\mathbf {m}=(m_{1},\dots ,m_{o})\in \mathbf {F}_q^{m}\), choose \(u_{1},\dots ,u_{v}\in \mathbf {F}_q\) randomly and find \(y_1,\dots ,y_o\in \mathbf {F}_q\) such that
$$\begin{aligned} g_1(y_1,\dots ,y_o,u_1,\dots ,u_v)=m_1, \quad \dots \quad , \quad g_o(y_1,\dots ,y_o,u_1,\dots ,u_v)=m_o. \end{aligned}$$
The signature is \(\mathbf {s}=S^{-1}(y_{1},\dots ,y_{o},u_{1},\dots ,u_{v})\).

Complexity of signature generation. Since (3) is a system of o linear equations in o variables, the complexity of signature generation of UOV is \(O(n^{3})\).

Security. The most important attack on UOV is Kipnis–Shamir's attack (Kipnis and Shamir 1998; Kipnis et al. 1999), which recovers an affine map \(S'\) such that \(SS'=\left( \begin{array}{ll} *_{o} &{} * \\ 0 &{} *_{v} \end{array}\right) \) by using the fact that \(G_{1},\dots ,G_{m}\) are matrices of the form \(\left( \begin{array}{ll} 0_{o} &{} * \\ * &{} *_{v} \end{array}\right) \). Its complexity is known to be \(O(q^{\max {(v-o,0)}}\cdot n^{4})\) (Kipnis et al. 1999), and then the parameter v must be sufficiently larger than o, namely n must be sufficiently larger than 2m. This causes two inconveniences for UOV; one is that the sizes of keys are relatively large, and the other is that the approaches in Tomae and Wolf (2012), Cheng et al. (2014) weaken the security against the direct attacks a little. The latter is easily covered by taking (n, m) a little larger. For the former, several approaches have been proposed. However, since some key reduction approaches yield critical vulnerabilities (e.g., Peng and Tang 2018; Hashimoto 2019), the security of such UOVs must be studied quite carefully.

LUOV. LUOV (Beullens et al. 2020) is a signature scheme based on UOV and is a second round candidate of NIST's project.
It is constructed over a finite field of even characteristic, and the components and coefficients in S, G, F are elements of \(\mathbf {F}_{2}\). The size of keys is smaller, and the security against the direct attack is not much lower than that of the original UOV. Remark that the security against Kipnis–Shamir's attack is \(O(2^{v-o}\cdot n^{4})\) and a new attack on LUOV was quite recently proposed in Ding et al. (2013). Then the parameters o, v should be taken larger than in the original version. See Beullens et al. (2020) for the latest version.

2.3 Rainbow

Rainbow (Ding et al. 2005) is a multi-layer version of UOV. We now describe the two-layer version. Let \(o_{1},o_{2},v\ge 1\) be integers and put \(n=o_{1}+o_{2}+v\), \(m=o_{1}+o_{2}\). Define the quadratic map \(G:\mathbf {F}_q^{n}\rightarrow \mathbf {F}_q^{m}\) by
$$\begin{aligned} \begin{aligned} g_1(\mathbf {x}),\dots ,g_{o_1}(\mathbf {x})&= \sum _{1\le i\le o_1}x_i \cdot (\text {linear form of }x_{o_1+1},\dots ,x_n)\\&+(\text {quadratic form of } x_{o_1+1},\dots ,x_n),\\ g_{o_1+1}(\mathbf {x}),\dots ,g_{m}(\mathbf {x})&= \sum _{o_1+1\le i\le m}x_i \cdot (\text {linear form of }x_{m+1},\dots ,x_n)\\&+(\text {quadratic form of }x_{m+1},\dots ,x_n), \end{aligned} \end{aligned}$$
Rainbow is constructed as follows.

Secret key. Two invertible affine maps \(S:\mathbf {F}_q^{n} \rightarrow \mathbf {F}_q^{n}\), \(T:\mathbf {F}_q^{m} \rightarrow \mathbf {F}_q^{m}\) and the quadratic map \(G:\mathbf {F}_q^{n}\rightarrow \mathbf {F}_q^{m}\) defined above.

Signature generation. For a message \(\mathbf {m}\in \mathbf {F}_q^{m}\) to be signed, compute \(\mathbf {z}={}^{t}\!(z_{1},\dots ,z_{m}):=T^{-1}(\mathbf {m})\) and choose \(u_{1},\dots ,u_{v}\in \mathbf {F}_q\) randomly. Find \(y_{o_{1}+1},\dots ,y_{m}\in \mathbf {F}_q\) such that
$$\begin{aligned} g_{o_{1}+1}(y_1,\dots ,y_{m},u_1,\dots ,u_v)=z_{o_{1}+1}, \quad \dots , \quad g_m(y_1,\dots ,y_{m},u_1,\dots ,u_v)=z_m. \end{aligned}$$
After that, find \(y_{1},\dots ,y_{o_{1}}\in \mathbf {F}_q\) such that
$$\begin{aligned} g_{1}(y_1,\dots ,y_{m},u_1,\dots ,u_v)=z_{1}, \quad \dots , \quad g_{o_{1}}(y_1,\dots ,y_{m},u_1,\dots ,u_v)=z_{o_{1}}. \end{aligned}$$
The signature is \(\mathbf {s}=S^{-1}(y_{1},\dots ,y_{m},u_{1},\dots ,u_{v})\).

Complexity of signature generation. Since (5) is a system of \(o_{2}\) linear equations in \(o_{2}\) variables and (6) is a system of \(o_{1}\) linear equations in \(o_{1}\) variables, the complexity of signature generation is \(O(n^{3})\).

Security. Kipnis–Shamir's attack and rank attacks are the major attacks on Rainbow. Since \(G_{1},\dots ,G_{o_{1}}=\left( \begin{array}{ll} 0_{o_{1}} &{} * \\ * &{} *_{o_{2}+v} \end{array}\right) \) and \(G_{o_{1}+1},\dots ,G_{m}= \left( \begin{array}{lll} 0_{o_{1}} &{} 0 &{} 0 \\ 0 &{} 0_{o_{2}} &{} * \\ 0 &{} * &{} *_{v} \end{array}\right) \), the complexity of Kipnis–Shamir's attack (Kipnis and Shamir 1998; Kipnis et al. 1999) on Rainbow is \(O(q^{\max (o_{2}+v-o_{1},0)}\cdot n^{4})\). Furthermore, by checking the ranks of \(G_{1},\dots ,G_{m}\), we see that the complexities of the min-rank attack and the high-rank attack are \(O(q^{o_{2}+v}\cdot n^{4})\) and \(O(q^{o_{1}}\cdot n^{4})\), respectively (Yang and Chen 2005). Note that there have been several approaches to improve the efficiency of Rainbow. However, some of these improvements are known to be insecure (e.g., Peng and Tang 2018; Hashimoto 2019; Shim et al. 2017; Hashimoto et al. 2018), and then the security of such efficient Rainbows must be studied carefully.
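To make the oil-and-vinegar signing step concrete, here is a minimal R sketch of a single layer (as in Sect. 2.2, and as used within each layer of Rainbow) over \(\mathbf {F}_2\) with \(o=v=2\). The parameters are purely illustrative, the affine maps S, T are omitted for clarity, and at this toy size we simply enumerate the oil vectors instead of solving the linear system (3); nothing here is a secure instantiation.

    set.seed(1)
    o <- 2; v <- 2; n <- o + v
    # coefficient matrices with a zero o x o oil-oil block, as in (2)
    G <- lapply(1:o, function(l) {
      M <- matrix(sample(0:1, n * n, replace = TRUE), n, n)
      M[1:o, 1:o] <- 0
      M
    })
    g <- function(x) sapply(G, function(M) as.integer(t(x) %*% M %*% x) %% 2)
    m <- c(1, 0)                              # "message" in F_2^o
    oil <- as.matrix(expand.grid(0:1, 0:1))   # all oil assignments (toy size only)
    vin <- as.matrix(expand.grid(0:1, 0:1))   # all vinegar assignments
    sig <- NULL
    for (i in seq_len(nrow(vin))) {           # a real signer samples u at random
      u  <- vin[i, ]
      ok <- which(apply(oil, 1, function(y) all(g(c(y, u)) == m)))
      if (length(ok) > 0) { sig <- c(oil[ok[1], ], u); break }
    }
    if (!is.null(sig)) g(sig)                 # equals m when a signature was found

Fixing the vinegar values turns each \(g_j\) into an affine function of the oil variables, which is exactly why the system (3) is linear and signing costs only \(O(n^{3})\).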
Rainbow on NIST's project. Rainbow (Ding et al. 2020) in the second round of NIST's project includes three versions: the standard Rainbow, the cyclic Rainbow, and the compressed Rainbow. The public keys and the numbers of arithmetic operations for signature verification for the latter two Rainbows are smaller than for the standard Rainbow. However, it is reported (Ding et al. 2020) that the verifications of the latter two versions are slower than the standard version. We consider that this is because the algorithms for verification of the latter two versions are more complicated than the naive algorithm for the standard Rainbow. Better implementations are required for these modified versions.

2.4 HFE

Let \(n,m,d\ge 1\) be integers with \(n=m\), \(d<n\). Define \(\mathscr {G}:\mathbf {F}_{q^{n}}\rightarrow \mathbf {F}_{q^{n}}\) by
$$\begin{aligned} \mathscr {G}(X):&= \sum _{\begin{subarray}{c}0\le i\le j\le d\end{subarray}}\alpha _{ij}X^{q^{i}+q^{j}} +\sum _{\begin{subarray}{c}0\le i\le d \end{subarray}}\beta _iX^{q^i}+\gamma , \end{aligned}$$
where \(\alpha _{ij},\beta _i,\gamma \in \mathbf {F}_{q^{n}}\), and define \(G:\mathbf {F}_q^{n} \rightarrow \mathbf {F}_q^{n}\) by \(G:=\phi ^{-1}\circ \mathscr {G}\circ \phi \), where \(\phi :\mathbf {F}_q^n\rightarrow \mathbf {F}_{q^{n}}\) is an \(\mathbf {F}_q\)-isomorphism. HFE (Patarin 1996) is constructed as follows.

Secret key. Two invertible affine maps \(S,T:\mathbf {F}_q^n\rightarrow \mathbf {F}_q^n\) and \(\mathscr {G}:\mathbf {F}_{q^{n}}\rightarrow \mathbf {F}_{q^{n}}\) defined above.

Public key. The quadratic map \(F:=T\circ G\circ S=T\circ \phi ^{-1}\circ \mathscr {G}\circ \phi \circ S:\mathbf {F}_q^n\rightarrow \mathbf {F}_q^n\).

Encryption. For a plaintext \(\mathbf{p}\in \mathbf {F}_q^n\), the ciphertext is \(\mathbf{c}:=F(\mathbf{p})\in \mathbf {F}_q^n\).

Decryption. For a given ciphertext \(\mathbf{c}\), compute \(\mathbf{z}:=T^{-1}(\mathbf{c})\) and put \(Z:=\phi (\mathbf{z})\). Find \(Y\in \mathbf {F}_{q^{n}}\) with \(\mathscr {G}(Y)=Z\) and put \(\mathbf{y}:=\phi ^{-1}(Y)\). The plaintext is \(\mathbf{p}=S^{-1}(\mathbf{y})\).

Complexity of decryption. Since \(\mathscr {G}(Y)=Z\) is a univariate polynomial equation of degree at most \(2q^{d}\) over \(\mathbf {F}_{q^{n}}\), the complexity of finding Y is
$$O((\deg {\mathscr {G}(X)})^3+n(\deg {\mathscr {G}(X)})^2\log {q})= O(q^{3d}+nq^{2d}\log {q})$$
by the Berlekamp algorithm (Berlekamp 1967, 1970). Then the parameter d should be \(d=O(\log _{q}n)\).

Security. Let \(\{\theta _1,\dots ,\theta _n\}\) be a basis of \(\mathbf {F}_{q^{n}}\) over \(\mathbf {F}_q\) and \(\varTheta :=\left( \theta _j^{q^{i-1}}\right) _{1\le i,j\le n}\). It is easy to see that \(\varTheta \mathbf {x}={}^{t}\!(\phi (\mathbf {x}),\phi (\mathbf {x})^{q},\dots , \phi (\mathbf {x})^{q^{n-1}}) :={}^{t}\!(X,X^{q},\dots ,X^{q^{n-1}})\). Since \(F=(T\circ \phi ^{-1})\circ \mathscr {G}\circ (\phi \circ S)\), we have
$$\begin{aligned} \begin{pmatrix} F_1 \\ \vdots \\ F_n \end{pmatrix} =(T\cdot \varTheta ^{-1}) \begin{pmatrix} {}^{t}\!(\varTheta S) \mathscr {G}^{(0)} (\varTheta S) \\ \vdots \\ {}^{t}\!(\varTheta S) \mathscr {G}^{(n-1)} (\varTheta S) \end{pmatrix}, \end{aligned}$$
where \(\bar{X}:=\varTheta \mathbf {x}\) and \(\mathscr {G}^{(i)}\) is an \(n\times n\) matrix over \(\mathbf {F}_{q^{n}}\) such that \(\mathscr {G}(X)^{q^i}={}^{t}\!\bar{X}\mathscr {G}^{(i)}\bar{X}+(\text {linear form of }\bar{X})\).
This means that there exist \(a_1,\dots ,a_n\in \mathbf {F}_{q^{n}}\) such that
$$\begin{aligned} a_1F_1+\cdots +a_nF_n={}^{t}\!(\varTheta S)\mathscr {G}^{(0)}(\varTheta S) ={}^{t}\!(\varTheta S)\begin{pmatrix}*_{d+1} &{} \\ &{} 0_{n-d-1} \end{pmatrix}(\varTheta S), \end{aligned}$$
and then \(\mathrm {rank}{\left( a_1F_1+\cdots +a_nF_n\right) }\le d+1.\) The min-rank attack (Kipnis and Shamir 1999; Bettale et al. 2013) is an attack to recover such a tuple \((a_{1},\dots ,a_{n})\), and its complexity is estimated by \(O(\left( {\begin{array}{c}n+d+2\\ d+2\end{array}}\right) ^w)=O(n^{(d+2)w})\) under the assumption that a variant of the Fröberg conjecture holds, where \(2\le w\le 3\) is a linear algebra constant. It is not difficult to check that the tuple \((a_1,\dots ,a_n)\) gives partial information on \(T\varTheta ^{-1}\), and, once such a tuple is recovered, the attacker can recover partial information on \(\varTheta S\), which is enough to decrypt arbitrary ciphertexts by elementary linear algebraic approaches. Since \(d=O(\log _{q}n)\), the security of HFE is \(n^{O(\log _{q}n)}\). The original HFE has therefore been considered to be impractical. We also note that the security against the Gröbner basis attack has been studied well (see e.g., Faugère 2003; Granboulan et al. 2020; Dubois and Gamma 2020; Ding et al. 2011; Huang et al. 2018). It is known that the rank condition (8) gives an upper bound on the degree \(d_{\mathrm {reg}}\) of regularity of the polynomial system \(F(\mathbf {x})=\mathbf {c}\); in fact, \(d_{\mathrm {reg}}\le \frac{1}{2}(q-1)(d+2)\) holds for HFE (Ding et al. 2011).

2.5 Variants of HFE

There have been various variants of HFE. In this subsection, we describe four major variants: "plus (+)", "minus (–)", "vinegar (v)", and "projection (p)".

Plus (+). The "plus (+)" is a variant that adds several polynomials to G. Let \(r_{+}\ge 1\) be an integer and \(h_1(\mathbf {x}),\dots ,h_{r_{+}}(\mathbf {x})\) random quadratic forms of \(\mathbf {x}\). For the map \(G:\mathbf {F}_q^{n}\rightarrow \mathbf {F}_q^{m}\) of the original scheme, define \(G_{+}:\mathbf {F}_q^n\rightarrow \mathbf {F}_q^{m+r_{+}}\) by \(G_{+}(\mathbf {x}):={}^{t}\!(g_1(\mathbf {x}),\dots ,g_{m}(\mathbf {x}),h_1(\mathbf {x}),\dots ,h_{r_{+}}(\mathbf {x}))\). The public key \(F_{+}: \mathbf {F}_q^n\rightarrow \mathbf {F}_q^{m+r_{+}}\) of the plus is \(F_{+}:=T_{+}\circ G_{+}\circ S\), where \(T_{+}:\mathbf {F}_q^{m+r_{+}}\rightarrow \mathbf {F}_q^{m+r_{+}}\) is an invertible affine map. It is mainly used for encryption when \(m\ge n\). The decryption is as follows.

Decryption. For the ciphertext \(\mathbf{c}\in \mathbf {F}_q^{m+r_{+}}\), compute \(\mathbf{z}=(z_1,\dots ,z_{m+r_{+}}):=T_{+}^{-1}(\mathbf{c}).\) Find \(\mathbf{y}\in \mathbf {F}_q^n\) with \(G(\mathbf{y})={}^{t}\!(z_1,\dots ,z_{m})\) and verify whether \({}^{t}\!(h_1(\mathbf{y}),\dots ,h_{r_{+}}(\mathbf{y}))={}^{t}\!(z_{m+1},\dots ,z_{m+r_{+}}).\) If it holds, the plaintext is \(\mathbf{p}=S^{-1}(\mathbf{y})\). If not, try again with another \(\mathbf {y}\).

Complexity of decryption. If \(m\ge n\), the number of \(\mathbf {y}\) with \(G(\mathbf {y})=\mathbf {z}\) is (probably) small. Then the complexity of decryption of the "plus" is not much larger than that of the original scheme.

Security. It is easy to see that an equation similar to (8) holds for the "plus" of HFE. Then the complexity of the min-rank attack on HFE+ is similar to that on the original HFE.

Minus (–). The "minus (–)" is to reduce several polynomials in F. Let \(r_{-}\ge 1\) be an integer.
For the public key \(F:\mathbf {F}_q^{n}\rightarrow \mathbf {F}_q^{m}\) of the original scheme, the public key \(F_{-}:\mathbf {F}_q^{n}\rightarrow \mathbf {F}_q^{m-r_{-}}\) of the minus is generated by \(F_{-}(x)={}^{t}\!(f_1(x),\dots ,f_{m-r_{-}}(x))\). It is mainly used for the signature scheme when \(n\ge m\). The signature generation is as follows.

Signature generation. For a message \(\mathbf {m}={}^{t}\!(m_1,\dots ,m_{m-r_{-}})\in \mathbf {F}_q^{m-r_{-}}\) to be signed, choose \(u_1,\dots ,u_{r_{-}}\in \mathbf {F}_q\) randomly and let \({\bar{\mathbf {m}}}:={}^{t}\!(m_1,\dots ,m_{m-r_{-}},u_1,\dots ,u_{r_{-}})\). Find \(\mathbf {s}\in \mathbf {F}_q^{n}\) with \(F(\mathbf {s})={\bar{\mathbf {m}}}\). If there exists such an \(\mathbf {s}\), the signature is \(\mathbf {s}\). If not, change \(u_{1},\dots ,u_{r_{-}}\) and repeat until such an \(\mathbf {s}\) appears.

Complexity of signature generation. When \(n\ge m\), the probability that \(\mathbf {s}\) does not exist is not considered to be large. Then the complexity of the signature generation of the "minus" is not much larger than that of the original scheme.

Security. For the minus, it is easy to see that there exists an \((n-r_{-})\times n\) matrix \(T_{-}\) such that
$$\begin{aligned} \begin{pmatrix} F_1 \\ \vdots \\ F_{n-r_{-}} \end{pmatrix} =(T_{-}\cdot \varTheta ^{-1}) \begin{pmatrix} {}^{t}\!(\varTheta S) \mathscr {G}^{(0)} (\varTheta S) \\ \vdots \\ {}^{t}\!(\varTheta S) \mathscr {G}^{(n-1)} (\varTheta S) \end{pmatrix}. \end{aligned}$$
Then one can eliminate the contributions of \(n-r_{-}-1\) matrices in the right hand side by taking a linear combination of \(F_{1},\dots ,F_{n-r_{-}}\), namely there exist \(a_1,\dots ,a_{n-r_{-}},\) \(b_{0},\dots ,b_{r_{-}}\in \mathbf {F}_{q^{n}}\) such that
$$\begin{aligned} a_1F_1+\cdots +a_{n-r_{-}}F_{n-r_{-}}&= b_0{}^{t}\!(\varTheta S)\mathscr {G}^{(0)}(\varTheta S)+\cdots +b_{r_{-}}{}^{t}\!(\varTheta S)\mathscr {G}^{(r_{-})}(\varTheta S)\\&= {}^{t}\!(\varTheta S) \left( \begin{array}{cc} *_{d+r_{-}+1} &{} \\ &{} 0_{n-d-r_{-}-1} \end{array}\right) (\varTheta S). \end{aligned}$$
The min-rank attack is thus available on HFE-, and its complexity can be estimated by \(O(\left( {\begin{array}{c}n+d+r_{-}+2\\ d+r_{-}+2\end{array}}\right) ^w)=O(n^{(d+r_{-}+2)w}).\) This means that the "minus" enhances the security of HFE (see also Vates and Smith-Tone 2017).

Vinegar (v). The "vinegar (v)" is to add several variables to G. Let \(r_{\mathrm{v}}\ge 1\) be an integer. For the map \(G:\mathbf {F}_q^{n}\rightarrow \mathbf {F}_q^{m}\) of the original scheme, define \(G_\mathrm{v}:\mathbf {F}_q^{n+r_{\mathrm{v}}}\rightarrow \mathbf {F}_q^{m}\) such that \(G_\mathrm{v}(x_{1},\dots ,x_{n},u_{1},\dots ,u_{r_{\mathrm{v}}})\) is inverted similarly to \(G(\mathbf {x})\) for any (or most) \(u_{1},\dots ,u_{r_{\mathrm{v}}}\in \mathbf {F}_q\). For example, the map \(G_\mathrm{v}\) of HFEv is given by \(G_\mathrm{v}:=\phi ^{-1}\circ \mathscr {G}_\mathrm{v} \circ \phi _\mathrm{v}\), where \(\phi _\mathrm{v}:\mathbf {F}_q^{n+r_{\mathrm{v}}}\rightarrow \mathbf {F}_{q^{n}}\times \mathbf {F}_q^{r_{\mathrm{v}}}\) is an \(\mathbf {F}_q\)-isomorphism and \(\mathscr {G}_\mathrm{v}:\mathbf {F}_{q^{n}}\times \mathbf {F}_q^{r_{\mathrm{v}}}\rightarrow \mathbf {F}_{q^{n}}\) is the following polynomial map.
$$\begin{aligned} \mathscr {G}_\mathrm{v}(X,x_{n+1},\dots ,x_{n+r_{\mathrm{v}}})&= \sum _{0\le i,j\le d}\alpha _{ij}X^{q^{i}+q^{j}} +\sum _{0\le i\le d}X^{q^{i}}\cdot (\text {linear form of }x_{n+1},\dots ,x_{n+r_{\mathrm{v}}})\\&\quad +(\text {quadratic form of }x_{n+1},\dots ,x_{n+r_{\mathrm{v}}}). \end{aligned}$$ The public key \(F_{\mathrm{v}}:\mathbf {F}_q^{n+r_{\mathrm{v}}}\rightarrow \mathbf {F}_q^{n}\) of the vinegar is \(F_{\mathrm{v}}:=T\circ G_{\mathrm{v}}\circ S_{\mathrm{v}}\), where \(S_{\mathrm{v}}:\mathbf {F}_q^{n+r_{\mathrm{v}}}\rightarrow \mathbf {F}_q^{n+r_{\mathrm{v}}}\) is an invertible affine map. It is mainly used for signatures when \(n\ge m\). The signature generation is as follows. Signature generation. For a message \(\mathbf {m}\in \mathbf {F}_q^{m}\) to be signed, compute \(\mathbf {z}:=T^{-1}(\mathbf {m})\). Choose \(u_{1},\dots ,u_{r_{\mathrm{v}}}\in \mathbf {F}_q\) randomly, and find \(\mathbf {y}\in \mathbf {F}_q^{n}\) with \(G_{\mathrm{v}}(\mathbf {y},u_{1},\dots ,u_{r_{\mathrm{v}}})=\mathbf {z}\). If such a \(\mathbf {y}\) does not exist, change \(u_{1},\dots ,u_{r_{\mathrm{v}}}\) and try again. The signature is \(\mathbf {s}=S_{\mathrm{v}}^{-1}(\mathbf {y},u_{1},\dots ,u_{r_{\mathrm{v}}})\). Complexity of signature generation. Since \(\mathbf {y}\) is found similarly to the original scheme, the complexity of finding \(\mathbf {y}\) is almost the same as in the original scheme. If \(n\ge m\), the probability that \(\mathbf {y}\) does not exist is considered to be not too large. Then the complexity of the "vinegar" is not much larger than that of the original scheme. Security. For HFEv, we see that \(\mathscr {G}_{\mathrm{v}}(X,x_{n+1},\dots ,x_{n+r_{\mathrm{v}}})={}^{t}\!{\bar{X}}_{\mathrm{v}} {\left( \begin{array}{ll|l} *_{d+1} &{} &{} * \\ &{} 0_{n-d-1} &{} \\ \hline * &{} &{} *_{r_{\mathrm{v}}}\end{array}\right) } {\bar{X}}_{\mathrm{v}}+\text {(linear form of }{\bar{X}}_{\mathrm{v}})\), where \({\bar{X}}_{\mathrm{v}}={}^{t}\!(X,\dots ,X^{q^{n-1}},x_{n+1},\dots ,x_{n+r_{\mathrm{v}}})\). Then there exist \(a_{1},\dots ,a_{n}\in \mathbf {F}_{q^{n}}\) such that $$\begin{aligned} a_1F_1+\cdots +a_{n}F_{n}&= {}^{t}\!\left( \begin{pmatrix}\varTheta &{} \\ &{} I_{r_{\mathrm{v}}}\end{pmatrix} S_{\mathrm{v}}\right) \left( \begin{array}{ll|l} *_{d+1} &{} &{} * \\ &{} 0_{n-d-1} &{} \\ \hline * &{} &{} *_{r_{\mathrm{v}}}\end{array}\right) \left( \begin{pmatrix}\varTheta &{} \\ &{} I_{r_{\mathrm{v}}}\end{pmatrix} S_{\mathrm{v}}\right) . \end{aligned}$$ Since the rank of the matrix on the right hand side above is at most \(d+r_{\mathrm{v}}+1\), the security of HFEv against the min-rank attack is estimated by \(O(\left( {\begin{array}{c}n+d+r_{\mathrm{v}}+2\\ d+r_{\mathrm{v}}+2\end{array}}\right) ^w)=O(n^{(d+r_{\mathrm{v}}+2)w}).\) Projection (p). The "projection" is a variant that reduces the number of variables of the polynomials in F. Let \(r_{\mathrm{p}}\ge 1\) be an integer and \(u_1,\dots ,u_{r_{\mathrm{p}}}\in \mathbf {F}_q\). For the public key \(F:\mathbf {F}_q^{n}\rightarrow \mathbf {F}_q^{m}\) of the original scheme, the public key \(F_{\mathrm{p}}:\mathbf {F}_q^{n-r_{\mathrm{p}}}\rightarrow \mathbf {F}_q^{m}\) of the projection is generated by \(F_{\mathrm{p}}(x_1,\dots ,x_{n-r_{\mathrm{p}}}):=F(x_1,\dots ,x_{n-r_{\mathrm{p}}},u_1,\dots ,u_{r_{\mathrm{p}}})\). It is mainly used for encryption when \(m\ge n\). The decryption is as follows. Decryption.
For the ciphertext \(\mathbf {c}\in \mathbf {F}_q^{m}\), find \(\mathbf {p}\in \mathbf {F}_q^n\) with \(F(\mathbf {p})=\mathbf {c}\) similarly to the original scheme. If \(\mathbf{p}=(*,\dots ,*,u_1,\dots ,u_{r_{\mathrm{p}}})\), the plaintext is \({\tilde{\mathbf{p}}}:=(p_1,\dots ,p_{n-r_{\mathrm{p}}})\in \mathbf {F}_q^{n-r_{\mathrm{p}}}\). If not, try again with another \(\mathbf {p}\). Complexity of decryption. If \(m\ge n\), the number of \(\mathbf {p}\) with \(F(\mathbf {p})=\mathbf {c}\) is (probably) not too large. Then the complexity of decryption of the "projection" is not much larger than that of the original scheme. Security. For the projection of HFE, we see that there exist \(a_{1},\dots ,a_{n}\in \mathbf {F}_{q^{n}}\) such that $$\begin{aligned} a_{1}F_{1}+\cdots +a_{n}F_{n}={}^{t}\!(\varTheta \tilde{S}) \begin{pmatrix}*_{d+1} &{} \\ &{} 0_{n-d-1} \end{pmatrix} (\varTheta \tilde{S}), \end{aligned}$$ where \(\tilde{S}\) is an \(n\times (n-r_{\mathrm{p}})\) matrix with \(S=(\tilde{S},*)\). Then the min-rank attack is available and its complexity is almost the same as for the original scheme. The most successful variant of HFE is probably the signature scheme HFEv- (Patarin et al. 2001), a combination of "minus" and "vinegar" of HFE, since the security against the min-rank attack is enhanced drastically without slowing down the signature generation. In fact, GeMSS (Casanova et al. 2020), based on HFEv-, was chosen as a second round candidate of NIST's project (NIST 2020). There are three kinds of GeMSS, called GeMSS, BlueGeMSS, and RedGeMSS. The major difference among these three is the degree of \(\mathscr {G}_\mathrm{v}\): the degrees are \(513(=2^{9}+1)\), \(129(=2^{7}+1)\), and \(17(=2^{4}+1)\), i.e., the values of d are 10, 8, and 5, respectively. Of course, the signature generation of RedGeMSS is the fastest, and that of BlueGeMSS is the next. Furthermore, the security against the min-rank attack is sufficient if \(r_{-},r_{\mathrm{v}}\) are sufficiently large. On the other hand, as pointed out in Hashimoto (2018) for HMFEv (Petzoldt et al. 2017), the vinegar of multi-HFE (Chen et al. 2008), the minus and the vinegar do not enhance the security against the high-rank attack. Though critical vulnerabilities of HFE variants against the high-rank attack have not been reported so far, we consider that an HFEv- with smaller d has a higher risk against the high-rank attack. We recall that Sflash (Akkar et al. 2003), a minus of the Matsumoto–Imai scheme (Matsumoto and Imai 1988), is a signature scheme selected by NESSIE (Preneel 2020) and broken by a differential attack (Fouque et al. 2005). Recently, its projections, called Pflash (Smith-Tone et al. 2015; Cartor and Smith-Tone 2017) and Eflash (Cartor and Smith-Tone 2018), were proposed. Pflash is a signature scheme with \(r_{\mathrm{p}}<r_{-}\) and Eflash is an encryption scheme with \(r_{\mathrm{p}}>r_{-}\). The complexities of signature generation and decryption are about \(q^{\min {(r_{\mathrm{p}},r_{-})}}\) times those of the Matsumoto–Imai scheme (Matsumoto and Imai 1988), so \(r_{-},r_{\mathrm{p}}\) should be taken with \(\min {(r_{\mathrm{p}},r_{-})}=O(\log _{q}n)\). It has been considered that the differential attack is not available on these schemes, and the security against the min-rank attack highly depends on \(r_{-}\). The security of Eflash is thus \(n^{O(\log _{q}n)}\).
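To see why this parameter constraint keeps the schemes usable, note that (writing \(c\) for a constant, a shorthand introduced only for this remark) $$\begin{aligned} q^{\min (r_{\mathrm{p}},r_{-})}=q^{c\log _{q}n}=n^{c}, \end{aligned}$$ so the slowdown of signature generation or decryption remains polynomial in \(n\).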
Similarly, for the encryption scheme HFEp- with \(r_{\mathrm{p}}>r_{-}\), it is easy to see that the complexity of decryption is about \(q^{r_{-}}\) times that of the original HFE, and the complexity of the min-rank attack is roughly estimated by \(O(n^{(3d+r_{-}+2)w})\). Since \(3d+r_{-}=O(\log _{q}n)\), its security is also \(n^{O(\log _{q}n)}\). 3 New Encryption Schemes In this section, we study the recently proposed encryption schemes HFERP (Ikematsu et al. 2018), ZHFE (Porras et al. 2014), EFC (Szepieniec et al. 2016), and ABC (Tao et al. 2013, 2015). 3.1 HFERP HFERP (Ikematsu et al. 2018) is an encryption scheme constructed as a "plus" and "projection" of a combination of HFE and Rainbow. We first describe a one-layer version of HFERP without "plus" and "projection". Let \(v,o,l,d_0\ge 1\) be integers, \(n:=v+o\) and \(m:=v+o+l\). Define the map \(\mathscr {G}_{0}:\mathbf {F}_{q^v}\rightarrow \mathbf {F}_{q^v}\) by $$\mathscr {G}_0(X):= \sum _{0\le i\le j\le d_{0}}\alpha _{ij}X^{q^i+q^j} +\sum _{0\le i \le d_{0}}\beta _iX^{q^i}+\gamma ,$$ where \(\alpha _{ij},\beta _i,\gamma \in \mathbf {F}_{q^v}\). The quadratic map \(G:\mathbf {F}_q^n\rightarrow \mathbf {F}_q^m\) is given as follows. $$\begin{aligned} {}^{t}\!(g_1(\mathbf {x}),\dots ,g_v(\mathbf {x}))&= (\phi _0^{-1}\circ \mathscr {G}_0\circ \phi _0)(\mathbf {x}_{0}),\\ g_{v+1}(\mathbf {x}),\dots ,g_{m}(\mathbf {x})&= \sum _{v+1\le i \le n}x_i\cdot (\text {linear form of }\mathbf {x}_{0}) +(\text {quadratic form of }\mathbf {x}_{0}), \end{aligned}$$ where \(\phi _0:\mathbf {F}_q^v\rightarrow \mathbf {F}_{q^v}\) is an \(\mathbf {F}_q\)-isomorphism and \(\mathbf {x}_{0}={}^{t}\!(x_{1},\dots ,x_{v})\). HFERP (without "plus" and "projection") is constructed as follows. Secret key. Two invertible affine maps \(S:\mathbf {F}_q^{n}\rightarrow \mathbf {F}_q^{n}\), \(T:\mathbf {F}_q^{m}\rightarrow \mathbf {F}_q^{m}\) and the quadratic map \(G:\mathbf {F}_q^{n}\rightarrow \mathbf {F}_q^{m}\). Public key. The quadratic map \(F:=T\circ G \circ S:\mathbf {F}_q^{n}\rightarrow \mathbf {F}_q^{m}\). Encryption. For a plaintext \(\mathbf{p}\in \mathbf {F}_q^{n}\), the ciphertext is \(\mathbf {c}=F(\mathbf{p})\in \mathbf {F}_q^{m}\). Decryption. For a given ciphertext \(\mathbf {c}\), compute \(\mathbf {z}={}^{t}\!(z_{1},\dots ,z_{m}):=T^{-1}(\mathbf {c})\). Let \(Z_0:=\phi _0(z_1,\dots ,z_v)\in \mathbf {F}_{q^v}\) and find \(Y_0\in \mathbf {F}_{q^v}\) such that \(\mathscr {G}_0(Y_0)=Z_0\). Put \((y_1,\dots ,y_v):=\phi _0^{-1}(Y_0)\in \mathbf {F}_q^v\) and find \(y_{v+1},\dots ,y_{n}\in \mathbf {F}_q\) with $$\begin{aligned} g_{v+1}(y_1,\dots ,y_v,y_{v+1},\dots ,y_{n})=z_{v+1},\quad \dots , \quad g_{m}(y_1,\dots ,y_v,y_{v+1},\dots ,y_{n})=z_{m}. \end{aligned}$$ The plaintext is \(\mathbf {p}=S^{-1}(y_{1},\dots ,y_{n})\). Complexity of decryption. Since the degree of \(\mathscr {G}_{0}(X)\) is at most \(2q^{d_{0}}\), the complexity of finding \(Y_{0}\) is \(O(q^{3d_{0}}+vq^{2d_{0}}\log {q})\) by Berlekamp's algorithm. We see that (10) is a system of \(o+l\) linear equations in o variables. We thus conclude that the total complexity of decryption is \(O(q^{3d_{0}}+vq^{2d_{0}}\log {q}+n^{3})\). The parameter \(d_{0}\) should be taken as \(d_{0}=O(\log _{q}n)\).
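For intuition on this estimate (a bookkeeping step, not specific to HFERP): Berlekamp's algorithm applied to a degree-\(D\) polynomial over \(\mathbf {F}_{q^{v}}\) costs \(O(D^{3}+vD^{2}\log q)\) (cf. the analogous count for ZHFE in Sect. 3.2), so substituting \(D=2q^{d_{0}}\) gives $$\begin{aligned} O\big ((2q^{d_{0}})^{3}+v(2q^{d_{0}})^{2}\log q\big )=O(q^{3d_{0}}+vq^{2d_{0}}\log q). \end{aligned}$$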
Security. Let \(\{\theta _1,\dots ,\theta _v\}\) be a basis of \(\mathbf {F}_{q^v}\) over \(\mathbf {F}_q\) and \(\varTheta _0:=\left( \theta _j^{q^{i-1}}\right) _{1\le i,j\le v}\). By the definition of G and F, we see that $$\begin{aligned} \begin{pmatrix} F_{1} \\ \vdots \\ F_{m} \end{pmatrix}= T\cdot \begin{pmatrix}\varTheta _0^{-1} &{} \\ &{} I_{o+l} \end{pmatrix} \begin{pmatrix} {}^{t}\!S {\begin{pmatrix} {}^{t}\!\varTheta _{0}\mathscr {G}_{0}^{(0)} \varTheta _{0}&{} \\ &{} 0_{o} \end{pmatrix}}S \\ \vdots \\ {}^{t}\!S {\begin{pmatrix}{}^{t}\!\varTheta _{0} \mathscr {G}_{0}^{(v-1)} \varTheta _{0} &{} \\ &{} 0_{o} \end{pmatrix}}S \\ {}^{t}\!S {\begin{pmatrix} *_{v} &{} * \\ * &{} 0_{o} \end{pmatrix}}S \\ \vdots \\ {}^{t}\!S {\begin{pmatrix} *_{v} &{} * \\ * &{} 0_{o} \end{pmatrix}}S \end{pmatrix} \end{aligned}$$ and then there exist \(a_{1},\dots ,a_{m}\in \mathbf {F}_{q^v}\) such that $$\begin{aligned} a_{1}F_{1}+\cdots +a_{m}F_{m}={}^{t}\!S \begin{pmatrix} {}^{t}\!\varTheta _{0}\mathscr {G}_{0}^{(0)} \varTheta _{0}&{} \\ &{} 0_{o} \end{pmatrix}S ={}^{t}\!S {}^{t}\!\begin{pmatrix} \varTheta _{0} &{} \\ &{} I_{o} \end{pmatrix} \begin{pmatrix} *_{d_{0}+1} &{} \\ &{} 0_{n-d_{0}-1} \end{pmatrix} \begin{pmatrix} \varTheta _{0} &{} \\ &{} I_{o} \end{pmatrix}S. \end{aligned}$$ The min-rank attack is thus available on HFERP and its complexity can be estimated by \(O(\left( {\begin{array}{c}m+d_{0}+2\\ d_{0}+2\end{array}}\right) ^w)=O(m^{(d_{0}+2)w})\) (Ikematsu et al. 2018). The situation is similar for its plus and projection. Since \(d_{0}=O(\log _{q}n)\), the security of HFERP is \(n^{O(\log _{q}n)}\), which is almost the same as that of HFE. For the minus, we can easily check that the complexity of decryption is at most \(q^{r_{-}}\) times that of the original HFERP, and the security against the min-rank attack is \(O(\left( {\begin{array}{c}m+d_{0}+2\\ d_{0}+r_{-}+2\end{array}}\right) ^w)=O(m^{(d_{0}+r_{-}+2)w}).\) This means that the security of HFERP- is also \(n^{O(\log _{q}n)}\). 3.2 ZHFE ZHFE (Porras et al. 2014) is an encryption scheme constructed from two univariate polynomials over an extension field. In this subsection, we study the simplest version of ZHFE, since the structure of the original version is not far from it. Let \(n,m,D\ge 1\) be integers with \(m=2n\) and define the quadratic forms \(\mathscr {G}_{1}(X),\mathscr {G}_{2}(X)\) of \(\bar{X}={}^{t}\!(X,X^{q},\dots ,X^{q^{n-1}})\) such that the degree of \(\varPsi (X):=X^{q}\cdot \mathscr {G}_{1}(X)+X\cdot \mathscr {G}_{2}(X)\) is at most D. It is easy to see that the coefficient matrices \(\mathscr {G}_{1}^{(0)},\mathscr {G}_{2}^{(0)}\) of \(\mathscr {G}_{1}(X),\mathscr {G}_{2}(X)\) as quadratic forms of \(\bar{X}\) are of a low-rank shape, with nonzero entries confined to their leading rows and columns (displayed as (11); in particular \(\mathrm {rank}\,\mathscr {G}_{1}^{(0)}\le d+2\)), where \(d:=\lceil \log _{q}\frac{D-q}{2}\rceil \). Denote by \(\phi _{2}:\mathbf {F}_q^{m}\rightarrow \mathbf {F}_{q^{n}}^{2}\) an \(\mathbf {F}_q\)-isomorphism and \(\mathscr {G}(X):=(\mathscr {G}_{1}(X),\mathscr {G}_{2}(X))\). ZHFE is constructed as follows. Secret key. Two invertible affine maps \(S:\mathbf {F}_q^{n}\rightarrow \mathbf {F}_q^{n}\), \(T:\mathbf {F}_q^{m}\rightarrow \mathbf {F}_q^{m}\) and the quadratic map \(G:=\phi _{2}^{-1}\circ \mathscr {G}\circ \phi : \mathbf {F}_q^{n}\rightarrow \mathbf {F}_q^{m}\). Public key. The quadratic map \(F:=T\circ G \circ S: \mathbf {F}_q^{n}\rightarrow \mathbf {F}_q^{m}\).
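As a short check (not in the original text) of where this \(d\) comes from: a diagonal monomial \(X^{2q^{i}}\) of \(\mathscr {G}_{1}\) contributes degree \(q+2q^{i}\) to \(\varPsi \), so \(\deg \varPsi \le D\) forces $$\begin{aligned} q+2q^{i}\le D \quad \Longrightarrow \quad i\le \log _{q}\frac{D-q}{2}, \end{aligned}$$ which matches the definition of \(d\); mixed monomials \(X^{q^{i}+q^{j}}\) with one small exponent may reach slightly further, which is consistent with the rank bound \(\mathrm {rank}\,\mathscr {G}_{1}^{(0)}\le d+2\) rather than \(d+1\).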
Decryption. For a given ciphertext \(\mathbf {c}\in \mathbf {F}_q^{m}\), compute \(\mathbf {z}:=T^{-1}(\mathbf {c})\). Let \((Z_1,Z_2):=\phi _2(\mathbf {z})\in \mathbf {F}_{q^{n}}^2\), and find \(Y\in \mathbf {F}_{q^{n}}\) such that \(\varPsi (Y)-Y^{q}\cdot Z_{1}-Y\cdot Z_{2}=0.\) Verify whether \(\mathscr {G}_1(Y)=Z_1\), \(\mathscr {G}_2(Y)=Z_2\) hold and put \(\mathbf {y}:=\phi ^{-1}(Y)\in \mathbf {F}_q^n\). The plaintext is \(\mathbf {p}=S^{-1}(\mathbf {y})\). Complexity of decryption. Since \(\varPsi (Y)-Y^{q}\cdot Z_{1}-Y\cdot Z_{2} =Y^{q}\cdot (\mathscr {G}_{1}(Y)-Z_{1})+Y\cdot (\mathscr {G}_{2}(Y)-Z_{2})\), at least one root \(Y\) satisfies \(\mathscr {G}_1(Y)=Z_1\), \(\mathscr {G}_2(Y)=Z_2\) if \(\mathbf{z}\in G(\mathbf {F}_{q}^{n})\). The complexity of decryption is \(O(D^3+nD^2\log {q})=O(q^{3d}+nq^{2d}\log {q})\) by Berlekamp's algorithm. The parameter d should be taken as \(d=O(\log _{q}n)\). Security. Let \(\{\theta _{1},\dots ,\theta _{n}\}\) be a basis of \(\mathbf {F}_{q^{n}}\) over \(\mathbf {F}_q\) and \(\varTheta _{2}:=\left( \theta _{j}^{q^{i-1}}\cdot I_{2}\right) _{1\le i,j\le n}\). We can easily check that $$\begin{aligned} \begin{pmatrix}F_{1} \\ \vdots \\ F_{m} \end{pmatrix} =T\varTheta _{2}^{-1} \begin{pmatrix} {}^{t}\!(\varTheta S)\mathscr {G}_{1}^{(0)}(\varTheta S) \\ {}^{t}\!(\varTheta S)\mathscr {G}_{2}^{(0)}(\varTheta S) \\ {}^{t}\!(\varTheta S)\mathscr {G}_{1}^{(1)}(\varTheta S) \\ \vdots \\ {}^{t}\!(\varTheta S)\mathscr {G}_{2}^{(n-1)}(\varTheta S) \\ \end{pmatrix} \end{aligned}$$ and then there exist \(a_{1},\dots ,a_{m}\in \mathbf {F}_{q^{n}}\) such that $$\begin{aligned} a_1F_1+\cdots +a_{m}F_{m}={}^{t}\!(\varTheta S)\mathscr {G}_1^{(0)}(\varTheta S). \end{aligned}$$ Since \(\mathrm {rank}\mathscr {G}_{1}^{(0)}\le d+2\) due to (11), the min-rank attack is available on ZHFE and its complexity can be estimated by \(O(\left( {\begin{array}{c}m+d+3\\ d+3\end{array}}\right) ^w)=O(m^{(d+3)w})\) (Perlner and Smith-Tone 2016; Cabarcas et al. 2017). Since \(d=O(\log _{q}n)\), the security of ZHFE is also \(n^{O(\log _{q}n)}\). We note that the plus and projection do not enhance the security. For the minus, we see that there exist \(a_{1},\dots ,a_{m-r_{-}},b_{0},\dots ,b_{r_{-}}\in \mathbf {F}_{q^{n}}\) such that $$\begin{aligned}&a_1F_1+\cdots +a_{m-r_{-}}F_{m-r_{-}} \\&=b_{0}{}^{t}\!(\varTheta S)\mathscr {G}_1^{(0)}(\varTheta S) +b_{1}{}^{t}\!(\varTheta S)\mathscr {G}_2^{(0)}(\varTheta S) +\cdots +b_{r_{-}}{}^{t}\!(\varTheta S)\mathscr {G}_{(r_{-}\bmod {2})+1}^{(\lfloor r_{-}/2 \rfloor )}(\varTheta S) \\&={}^{t}\!(\varTheta S)\left( \begin{array}{lll} *_{\lceil \frac{r_{-}}{2}\rceil +1} &{} * &{} * \\ * &{} *_{d-(r_{-}\bmod {2})} &{} 0 \\ * &{} 0 &{} 0 \end{array}\right) (\varTheta S). \end{aligned}$$ Since the rank of the matrix above is at most \(d+r_{-}+2\), the complexity of the min-rank attack is \(O(\left( {\begin{array}{c}m+d+3\\ d+r_{-}+3\end{array}}\right) ^w)=O((2n)^{(d+r_{-}+3)w})\). However, the complexity of decryption is at most \(q^{r_{-}}\) times that of the original ZHFE, and then the security of ZHFE- is also \(n^{O(\log _{q}n)}\). Remark that Perlner and Smith-Tone (2016) proposed a minus of ZHFE without slowing down the decryption by using a singular-type ZHFE. However, by studying the structure of such a ZHFE- carefully, we can easily check that such a minus does not enhance the security against the min-rank attack at all. 3.3 EFC EFC (Szepieniec et al. 2016) is an encryption scheme constructed from the fact that an extension field can be expressed by a set of matrices.
Let \(n,m\ge 1\) be integers with \(m=2n\), h(t) an irreducible univariate polynomial over \(\mathbf {F}_q\), and H an \(n\times n\) matrix whose characteristic polynomial is h(t). It is easy to see that \(\mathscr {H}:=\left\{ a_0I_n+a_{1}H+\cdots +a_{n-1}H^{n-1}|a_0,\dots ,a_{n-1}\in \mathbf {F}_q\right\} \) is isomorphic to \(\mathbf {F}_q[t]/\langle h(t)\rangle \simeq \mathbf {F}_{q^{n}}\). Choose \(A_1,\dots ,A_{m}\in \mathscr {H}\) and define the map \(G:\mathbf {F}_q^n\rightarrow \mathbf {F}_q^{m}\) by $$\begin{aligned} {}^{t}\!(g_1(\mathbf {x}),g_{3}(\mathbf {x}),\dots ,g_{m-1}(\mathbf {x}))&= \left( x_1A_1+x_{2}A_{3}+\cdots +x_{n}A_{m-1}\right) \mathbf {x},\\ {}^{t}\!(g_2(\mathbf {x}),g_{4}(\mathbf {x}),\dots ,g_{m}(\mathbf {x}))&= \left( x_1A_2+x_{2}A_{4}+\cdots +x_{n}A_{m}\right) \mathbf {x}. \end{aligned}$$ EFC (Szepieniec et al. 2016) is constructed as follows. Secret key. Two invertible affine maps \(S:\mathbf {F}_q^{n}\rightarrow \mathbf {F}_q^{n}\), \(T:\mathbf {F}_q^{m}\rightarrow \mathbf {F}_q^{m}\) and the quadratic map \(G:\mathbf {F}_q^{n}\rightarrow \mathbf {F}_q^{m}\) (i.e., the matrices \(A_{1},\dots ,A_{m}\)) defined above. Public key. The quadratic map \(F:=T\circ G\circ S:\mathbf {F}_q^{n}\rightarrow \mathbf {F}_q^{m}\). Decryption. For a given ciphertext \(\mathbf {c}\), compute \(\mathbf {z}={}^{t}\!(z_{1},\dots ,z_{m}):=T^{-1}(\mathbf {c})\). Solve the system of linear equations given by $$\begin{aligned} \begin{aligned}&\left( x_1A_{1}+x_{2}A_{3}+\cdots +x_nA_{m-1}\right) {}^{t}\!(z_{2},z_{4},\dots ,z_{m})\\&=\left( x_1A_{2}+x_{2}A_{4}+\cdots +x_nA_{m}\right) {}^{t}\!(z_{1},z_{3},\dots ,z_{m-1}), \end{aligned} \end{aligned}$$ and find a solution \(\mathbf {y}\) of (12) satisfying \(G(\mathbf {y})=\mathbf {z}\). The plaintext is \(\mathbf {p}=S^{-1}(\mathbf {y})\). Complexity of decryption. Since \(\mathscr {H}\) is commutative, it holds that $$\begin{aligned}&\left( x_1A_{2}+x_{2}A_{4}+\cdots +x_nA_{m}\right) {}^{t}\!(g_1(\mathbf {x}),g_{3}(\mathbf {x}),\dots ,g_{m-1}(\mathbf {x}))\\&=\left( x_1A_{1}+x_{2}A_{3}+\cdots +x_nA_{m-1}\right) {}^{t}\!(g_2(\mathbf {x}),g_{4}(\mathbf {x}),\dots ,g_{m}(\mathbf {x})). \end{aligned}$$ Then at least one of the solutions of (12) satisfies \(G(\mathbf {y})=\mathbf {z}\) if \(\mathbf{z}\in G(\mathbf {F}_q^{n})\). The equation (12) can be written as \(\left( z_1B_1+\cdots +z_{m}B_{m}\right) \mathbf {x}=0\), where \(B_1,\dots ,B_{m}\) are \(n\times n\) matrices derived from \(A_1,\dots ,A_{m}\). The complexity of decryption is thus \(O(n^{3})\). Note that, since the map G in EFC is over-defined, the complexity of the "plus" and the "projection" is almost the same as that of the original EFC, and that of the "minus" is at most \(q^{r_{-}}\) times that of the original EFC. Security. It is already known that the original EFC is insecure against the linearization attack (Szepieniec et al. 2016). We now study the security of EFC- against the min-rank attack. Let \(\theta \in \mathbf {F}_{q^{n}}\) be a root of h(t), choose a basis of \(\mathbf {F}_{q^{n}}\) over \(\mathbf {F}_q\) by \(\{\theta _1,\dots ,\theta _n\}=\{1,\theta ,\theta ^2,\dots ,\theta ^{n-1}\}\) and put \(\varTheta :=\Big (\theta _{j}^{q^{i-1}}\Big )_{1\le i,j\le n}\). Suppose that H is the companion matrix of h(t).
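As a toy illustration of \(\mathscr {H}\) (an example of ours, not taken from the scheme): for \(q=3\) and \(h(t)=t^{2}+1\), which is irreducible over \(\mathbf {F}_{3}\), the companion matrix is $$\begin{aligned} H=\begin{pmatrix} 0 &{} -1 \\ 1 &{} 0 \end{pmatrix}, \end{aligned}$$ and the nine matrices \(a_{0}I_{2}+a_{1}H\) with \(a_{0},a_{1}\in \mathbf {F}_{3}\) form a field isomorphic to \(\mathbf {F}_{9}\) (note \(H^{2}=-I_{2}\)); in particular they commute, which is exactly the property used in the decryption above.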
Since \(A_{1},\dots ,A_{m}\in \mathscr {H}\), there exist linear forms \(L_{1}(\mathbf {x}),\dots ,L_{m}(\mathbf {x})\) of \(\mathbf {x}\) over \(\mathbf {F}_q\) such that $$\begin{aligned} x_{1}A_{1}+x_{2}A_{3}+\cdots +x_{n}A_{m-1}&= L_{1}(\mathbf {x})I_{n}+L_{3}(\mathbf {x})H+\cdots +L_{m-1}(\mathbf {x})H^{n-1},\\ x_{1}A_{2}+x_{2}A_{4}+\cdots +x_{n}A_{m}&= L_{2}(\mathbf {x})I_{n}+L_{4}(\mathbf {x})H+\cdots +L_{m}(\mathbf {x})H^{n-1}. \end{aligned}$$ Define $$\begin{aligned} \mathscr {G}_{1}(X):&= g_{1}(\mathbf {x})\theta _{1}+g_{3}(\mathbf {x})\theta _{2}+\cdots +g_{m-1}(\mathbf {x})\theta _{n},\\ \mathscr {G}_{2}(X):&= g_{2}(\mathbf {x})\theta _{1}+g_{4}(\mathbf {x})\theta _{2}+\cdots +g_{m}(\mathbf {x})\theta _{n},\\ \mathcal {L}_{1}(X):&= L_{1}(\mathbf {x})\theta _{1}+L_{3}(\mathbf {x})\theta _{2}+\cdots +L_{m-1}(\mathbf {x})\theta _{n},\\ \mathcal {L}_{2}(X):&= L_{2}(\mathbf {x})\theta _{1}+L_{4}(\mathbf {x})\theta _{2}+\cdots +L_{m}(\mathbf {x})\theta _{n}, \end{aligned}$$ where \(X:=\phi (\mathbf {x})=x_{1}\theta _{1}+\cdots +x_{n}\theta _{n}\). It is easy to see that \(\mathscr {G}_{1}(X),\mathscr {G}_{2}(X)\) are quadratic forms and \(\mathcal {L}_{1}(X),\mathcal {L}_{2}(X)\) are linear forms of \(\bar{X}=\varTheta \mathbf {x}={}^{t}\!(X,X^{q},\dots ,X^{q^{n-1}})\). By the definition of G, we see that $$\begin{aligned} \begin{aligned} \varTheta {}^{t}\!(g_1(\mathbf {x}),g_{3}(\mathbf {x}),\dots ,g_{m-1}(\mathbf {x}))&= \left( \sum _{1\le i\le n} L_{2i-1}(\mathbf {x})(\varTheta H\varTheta ^{-1})^{i-1} \right) (\varTheta \mathbf {x}),\\ \varTheta {}^{t}\!(g_2(\mathbf {x}),g_{4}(\mathbf {x}),\dots ,g_{m}(\mathbf {x}))&= \left( \sum _{1\le i\le n} L_{2i}(\mathbf {x})(\varTheta H\varTheta ^{-1})^{i-1} \right) (\varTheta \mathbf {x}). \end{aligned} \end{aligned}$$ Since \(\varTheta H\varTheta ^{-1}=\mathrm {diag}{\big (\theta ,\theta ^{q},\dots ,\theta ^{q^{n-1}}\big )}\) (e.g., Horn and Johnson 1985), we have \(\mathscr {G}_{1}(X)=\mathcal {L}_{1}(X)\cdot X\), \(\mathscr {G}_{2}(X)=\mathcal {L}_{2}(X) \cdot X\) due to (13). This means that the map G can be written as \(G=\phi _2^{-1}\circ \mathscr {G}\circ \phi \), where \(\mathscr {G}(X)=(\mathscr {G}_{1}(X),\mathscr {G}_{2}(X))=(\mathcal {L}_{1}(X)\cdot X,\mathcal {L}_{2}(X) \cdot X)\), and it holds that $$\begin{aligned} \begin{pmatrix}F_{1} \\ \vdots \\ F_{m} \end{pmatrix} =T\varTheta _{2}^{-1} \begin{pmatrix} {}^{t}\!(\varTheta S)\mathscr {G}_{1}^{(0)}(\varTheta S) \\ {}^{t}\!(\varTheta S)\mathscr {G}_{2}^{(0)}(\varTheta S) \\ {}^{t}\!(\varTheta S)\mathscr {G}_{1}^{(1)}(\varTheta S) \\ \vdots \\ {}^{t}\!(\varTheta S)\mathscr {G}_{2}^{(n-1)}(\varTheta S) \end{pmatrix}. \end{aligned}$$ Then, for EFC-, there exist \(a_{1},\dots ,a_{m-r_{-}},b_{0},\dots ,b_{r_{-}}\in \mathbf {F}_{q^{n}}\) such that $$\begin{aligned}&a_1F_1+\cdots +a_{m-r_{-}}F_{m-r_{-}} \\&=b_{0}{}^{t}\!(\varTheta S)\mathscr {G}_1^{(0)}(\varTheta S) +b_{1}{}^{t}\!(\varTheta S)\mathscr {G}_2^{(0)}(\varTheta S) +\cdots +b_{r_{-}}{}^{t}\!(\varTheta S)\mathscr {G}_{(r_{-}\bmod {2})+1}^{(\lfloor r_{-}/2 \rfloor )}(\varTheta S) \\&={}^{t}\!(\varTheta S) \left( \begin{array}{ll} *_{1+\lfloor \frac{r_{-}}{2}\rfloor } &{} * \\ * &{} 0 \end{array}\right) (\varTheta S).
\end{aligned}$$ Since the rank of the matrix above is at most \(2\lfloor \frac{r_{-}}{2} \rfloor +2\), the min-rank attack is available on EFC- and its complexity can be estimated by \(O(\left( {\begin{array}{c}2n-r_{-}+2\lfloor \frac{r_{-}}{2} \rfloor +3\\ 3+2\lfloor \frac{r_{-}}{2} \rfloor \end{array}}\right) ^{w}) =O((2n)^{(r_{-}+3)w}).\) Since \(r_{-}=O(\log _{q}n)\), the security of EFC- is also \(n^{O(\log _{q}n)}\). The situation is similar for the "plus" and "projection" of EFC-. 3.4 ABC ABC (Tao et al. 2013, 2015) is an encryption scheme constructed from three polynomial matrices A, B, C. Let \(r,n,m\ge 1\) be integers with \(n=r^{2},m=2r^{2}\). For \(\mathbf {x}={}^{t}\!(x_{1},\dots ,x_{n})\), define the \(r\times r\) matrices \(A(\mathbf {x}),B(\mathbf {x}),C(\mathbf {x}),E_{1}(\mathbf {x}),E_{2}(\mathbf {x})\) by \(A(\mathbf {x}):=\left( x_{j+r(i-1)}\right) _{1\le i,j\le r}\), \(B(\mathbf {x}):=\left( b_{ij}(\mathbf {x})\right) _{1\le i,j\le r}\), \(C(\mathbf {x}):=\left( c_{ij}(\mathbf {x})\right) _{1\le i,j\le r}\), \(E_{1}(\mathbf {x}):=A(\mathbf {x})B(\mathbf {x})\) and \(E_{2}(\mathbf {x}):=A(\mathbf {x})C(\mathbf {x})\), where \(b_{ij}(\mathbf {x}),c_{ij}(\mathbf {x})\) are linear forms of \(\mathbf {x}\). The quadratic map \(G:\mathbf {F}_q^{n} \rightarrow \mathbf {F}_q^{m}\) is generated by \(E_{1}(\mathbf {x})=\left( g_{j+r(i-1)}(\mathbf {x})\right) _{1\le i,j\le r}\) and \(E_{2}(\mathbf {x})=\left( g_{n+j+r(i-1)}(\mathbf {x})\right) _{1\le i,j\le r}\). The encryption scheme ABC (Tao et al. 2013) is constructed as follows. Secret key. Two invertible affine maps \(S:\mathbf {F}_q^{n} \rightarrow \mathbf {F}_q^{n}\), \(T:\mathbf {F}_q^{m} \rightarrow \mathbf {F}_q^{m}\) and the quadratic map G defined above. Public key. The quadratic map \(F:=T\circ G\circ S:\mathbf {F}_q^{n}\rightarrow \mathbf {F}_q^{m}\). Decryption. For a given ciphertext \(\mathbf {c}\), compute \(\mathbf {z}={}^{t}\!(z_{1},\dots ,z_{m}):=T^{-1}(\mathbf {c})\) and put \(Z_{1}:=\left( z_{j+r(i-1)}\right) _{1\le i,j\le r}\), \(Z_{2}:=\left( z_{n+j+r(i-1)}\right) _{1\le i,j\le r}\). Find \(\mathbf {y}\in \mathbf {F}_q^{n}\) such that $$\begin{aligned} B(\mathbf {y})=C(\mathbf {y})Z_{2}^{-1}Z_{1}. \end{aligned}$$ (Indeed, \(Z_{1}=A(\mathbf {y})B(\mathbf {y})\) and \(Z_{2}=A(\mathbf {y})C(\mathbf {y})\) for the correct \(\mathbf {y}\), so \(Z_{2}^{-1}Z_{1}=C(\mathbf {y})^{-1}B(\mathbf {y})\).) If \(Z_{2}\) is not invertible, replace (14) by \(B(\mathbf {y})Z_{1}^{-1}Z_{2}=C(\mathbf {y})\). The plaintext is \(\mathbf {p}=S^{-1}(\mathbf {y})\). Complexity of decryption. The equation (14) yields a system of n linear equations in n variables. Then the complexity of decryption is \(O(n^{3})\). Remark that the decryption fails if \(A(S(\mathbf {p}))\) is not invertible, and the probability of this is about \(q^{-1}\). Security. It is easy to check that the coefficient matrix \(G_{1}\) of the first polynomial \(g_{1}(\mathbf {x})\) in \(G(\mathbf {x})\) is \(G_{1}=\left( \begin{array}{ll} *_{r} &{} * \\ * &{} 0_{n-r} \end{array} \right) \). Then the min-rank attack is available and its complexity is \(O(q^{2r}\cdot n^{4})\) (Tao et al. 2013). Moody et al. (2014, 2017) proposed an asymptotically optimal attack with complexity \(O(q^{r+2}\cdot n^{4})\) based on the structure of subspace differential invariants. Recently, Liu et al. (2018) proposed a key recovery attack by solving a system of linear equations derived from the construction of the polynomials, and extended this key recovery attack to the rectangular ABC (Tao et al. 2015) and the cubic ABC (Ding et al. 2014). They claimed that these attacks have complexity \(O(n^{2w})\), which is critical for the security of the ABC schemes.
On the other hand, one of the anonymous reviewers of the present paper claimed in his/her report that this attack seems doubtful. He/she may present this opinion somewhere in the near future. 4 Concluding Remarks Table 1 Signature schemes Table 2 Encryption schemes In Sect. 2, we describe the multivariate schemes UOV, Rainbow, the HFE variants, and the corresponding second round candidates of NIST's project. In Sect. 3, we discuss the practicalities of several new multivariate encryption schemes proposed recently. Tables 1 and 2 are rough sketches of the complexities of decryption/signature generation and of the major attacks on the corresponding schemes. Remark that various other attacks must also be considered for implementations. Table 1 shows that practical signature schemes can be implemented easily, since signatures can be generated in polynomial time while the proposed attacks run in exponential time. On the other hand, Table 2 shows that the issues with the practicality of the HFE variants have not been eliminated in the new encryption schemes. While selecting parameters for 80-, 100-, or 120-bit security for such encryption schemes might be possible, they will not be able to follow the future inflation of security levels. Further drastic approaches will be required to construct practical multivariate encryption schemes. M.L. Akkar, N. Courtois, L. Goubin, R. Duteuil, A fast and secure implementation of Sflash, in PKC'03. LNCS, vol. 2567 (2003), pp. 267–278 M. Bardet, J.C. Faugère, B. Salvy, B.Y. Yang, Asymptotic expansion of the degree of regularity for semi-regular systems of equations, in MEGA'05 (2005) E.R. Berlekamp, Factoring polynomials over finite fields. Bell Syst. Tech. J. 46, 1853–1859 (1967) E.R. Berlekamp, Factoring polynomials over large finite fields. Math. Comput. 24, 713–735 (1970) L. Bettale, J.C. Faugère, L. Perret, Solving polynomial systems over finite fields: improved analysis of the hybrid approach, in ISSAC'12 (2012), pp. 67–74 L. Bettale, J.C. Faugère, L. Perret, Cryptanalysis of HFE, multi-HFE and variants for odd and even characteristic. Designs, Codes Cryptogr. 69, 1–52 (2013) W. Beullens, B. Preneel, A. Szepieniec, F. Vercauteren, LUOV, an MQ signature scheme, https://www.esat.kuleuven.be/cosic/pqcrypto/luov/ D. Cabarcas, D. Smith-Tone, J.A. Verbel, Key recovery attack for ZHFE, in PQCrypto'17. LNCS, vol. 10346 (2017), pp. 289–308 R. Cartor, D. Smith-Tone, An updated security analysis of PFLASH, in PQCrypto'17. LNCS, vol. 10346 (2017), pp. 241–254 R. Cartor, D. Smith-Tone, EFLASH: a new multivariate encryption scheme, in SAC'18. LNCS, vol. 11349 (2018), pp. 281–299 A. Casanova, J.C. Faugère, G. Macario-Rat, J. Patarin, L. Perret, J. Ryckeghem, GeMSS: a great multivariate short signature, https://www-polsys.lip6.fr/Links/NIST/GeMSS.html C.H.O. Chen, M.S. Chen, J. Ding, F. Werner, B.Y. Yang, Odd-char multivariate hidden field equations (2008), http://eprint.iacr.org/2008/543 M.-S. Chen, A. Hülsing, J. Rijneveld, S. Samardjiska, P. Schwabe, From 5-pass MQ-based identification to MQ-based signatures, in Asiacrypt'16. LNCS, vol. 10032 (2016), pp. 135–165 M.-S. Chen, A. Hülsing, J. Rijneveld, S. Samardjiska, P. Schwabe, MQDSS, Post-quantum signature, http://mqdss.org/contact.html C.M. Cheng, Y. Hashimoto, H. Miura, T. Takagi, A polynomial-time algorithm for solving a class of underdetermined multivariate quadratic equations over fields of odd characteristics, in PQCrypto'14. LNCS, vol. 8772 (2014), pp. 40–58 J. Ding, M.-S. Chen, A. Petzoldt, D. Schmidt, B.-Y.
Yang, Rainbow, https://csrc.nist.gov/CSRC/media/Projects/Post-Quantum-Cryptography/documents/round-2/submissions/Rainbow-Round2.zip J. Ding, T.J. Hodges, Inverting HFE systems is quasi-polynomial for all fields, in Crypto'11. LNCS, vol. 6841 (2011), pp. 724–742 J. Ding, A. Petzoldt, L.-C. Wang, The cubic simple matrix encryption scheme, in PQCrypto'14. LNCS, vol. 8772 (2014), pp. 76–87 J. Ding, D. Schmidt, Rainbow, a new multivariate polynomial signature scheme, in ACNS'05. LNCS, vol. 3531 (2005), pp. 164–175 J. Ding, Z. Zhang, J. Deaton, K. Schmidt, Vishakha, A new attack on the LUOV schemes, in Second PQC Standardization Conference (2019), https://csrc.nist.gov/events/2019/second-pqc-standardization-conference V. Dubois, N. Gama, The degree of regularity of HFE systems, in Asiacrypt'10. LNCS, vol. 6477 (2010), pp. 557–576 J.C. Faugère, A new efficient algorithm for computing Gröbner bases (\(F_4\)). J. Pure Appl. Algebra 139, 61–88 (1999) J.C. Faugère, A. Joux, Algebraic cryptanalysis of Hidden Field Equations (HFE) using Gröbner bases, in Crypto'03. LNCS, vol. 2729 (2003), pp. 44–60 P.A. Fouque, L. Granboulan, J. Stern, Differential cryptanalysis for multivariate schemes, in Eurocrypt'05. LNCS, vol. 3494 (2005), pp. 341–353 L. Granboulan, A. Joux, J. Stern, Inverting HFE is quasipolynomial, in Crypto'06. LNCS, vol. 4117 (2006), pp. 345–356 Y. Hashimoto, Multivariate public key cryptosystems, in Mathematical Modelling for Next-Generation Cryptography (Springer, Berlin, 2017), pp. 17–42 Y. Hashimoto, High-rank attack on HMFEv. JSIAM Lett. 10, 21–24 (2018) Y. Hashimoto, Key recovery attack on Circulant UOV/Rainbow. JSIAM Lett. 11, 45–48 (2019) Y. Hashimoto, Y. Ikematsu, T. Takagi, Chosen message attack on multivariate signature ELSA at Asiacrypt, in IWSEC'18. LNCS, vol. 11049 (2018), pp. 3–18 R.A. Horn, C.R. Johnson, Matrix Analysis (Cambridge University Press, Cambridge, 1985) M.-D.A. Huang, M. Kosters, Y. Yang, S.L. Yeo, On the last fall degree of zero-dimensional Weil descent systems. J. Symb. Comput. 87, 207–226 (2018) Y. Ikematsu, R.A. Perlner, D. Smith-Tone, T. Takagi, J. Vates, HFERP - a new multivariate encryption scheme, in PQCrypto'18. LNCS, vol. 10786 (2018), pp. 396–416 A. Kipnis, J. Patarin, L. Goubin, Unbalanced oil and vinegar signature schemes, in Eurocrypt'99. LNCS, vol. 1592 (1999), pp. 206–222, extended in http://www.goubin.fr/papers/OILLONG.PDF (2003) A. Kipnis, A. Shamir, Cryptanalysis of the HFE public key cryptosystem by relinearization, in Crypto'99. LNCS, vol. 1666 (1999), pp. 19–30 A. Kipnis, A. Shamir, Cryptanalysis of the oil and vinegar signature scheme, in Crypto'98. LNCS, vol. 1462 (1998), pp. 257–267 J. Liu, Y. Yu, B. Yang, J. Jia, S. Wang, H. Wang, Structural key recovery of simple matrix encryption scheme family. Comput. J. 61, 1880–1896 (2018) T. Matsumoto, H. Imai, Public quadratic polynomial-tuples for efficient signature-verification and message-encryption, in Eurocrypt'88. LNCS, vol. 330 (1988), pp. 419–453 H. Miura, Y. Hashimoto, T. Takagi, Extended algorithm for solving underdefined multivariate quadratic equations, in PQCrypto'13. LNCS, vol. 7932 (2013), pp. 118–135 D. Moody, R. Perlner, D. Smith-Tone, An asymptotically optimal structural attack on the ABC multivariate encryption scheme, in PQCrypto'14. LNCS, vol. 8772 (2014), pp. 180–196 D. Moody, R. Perlner, D. Smith-Tone, Improved attacks for characteristic-2 parameters of the cubic ABC simple matrix encryption scheme, in PQCrypto'17. LNCS, vol. 10346 (2017), pp.
255–271 NIST, Post-quantum cryptography standardization, https://csrc.nist.gov/Projects/Post-Quantum-Cryptography/Post-Quantum-Cryptography-Standardization NIST, Post-quantum cryptography, round 2 submissions, https://csrc.nist.gov/Projects/Post-Quantum-Cryptography/Round-2-Submissions J. Patarin, Cryptanalysis of the Matsumoto and Imai public key scheme of Eurocrypt'88, in Crypto'95. LNCS, vol. 963 (1995), pp. 248–261 J. Patarin, Hidden fields equations (HFE) and isomorphisms of polynomials (IP): two new families of asymmetric algorithms, in Eurocrypt'96. LNCS, vol. 1070 (1996), pp. 33–48 J. Patarin, The oil and vinegar signature scheme, in the Dagstuhl Workshop on Cryptography (1997) J. Patarin, N. Courtois, L. Goubin, Quartz, \(128\)-bit long digital signatures, in CT-RSA'01. LNCS, vol. 2020 (2001), pp. 282–297 Z. Peng, S. Tang, Circulant UOV: a new UOV variant with shorter private key and faster signature generation. KSII Trans. Int. Inf. Syst. 12, 1376–1395 (2018) R. Perlner, D. Smith-Tone, Security analysis and key modification for ZHFE, in PQCrypto'16. LNCS, vol. 9606 (2016), pp. 197–212 A. Petzoldt, M.S. Chen, J. Ding, B.Y. Yang, HMFEv - an efficient multivariate signature scheme, in PQCrypto'17. LNCS, vol. 10346 (2017), pp. 205–223 J. Porras, J. Baena, J. Ding, ZHFE, a new multivariate public key encryption scheme, in PQCrypto'14. LNCS, vol. 8772 (2014), pp. 229–245 B. Preneel, NESSIE project announces final selection of crypto algorithms, https://www.cosic.esat.kuleuven.be/nessie/deliverables/press_release_feb27.pdf K. Sakumoto, T. Shirai, H. Hiwatari, Public-key identification schemes based on multivariate quadratic polynomials, in Crypto'11. LNCS, vol. 6841 (2011), pp. 706–723 K.-A. Shim, C.-M. Park, N. Koo, An existential unforgeable signature scheme based on multivariate quadratic equations, in Asiacrypt'17. LNCS, vol. 10624 (2017), pp. 37–64 D. Smith-Tone, M.-S. Chen, B.-Y. Yang, PFLASH - secure asymmetric signatures on smart cards, in Lightweight Cryptography Workshop (2015), http://csrc.nist.gov/groups/ST/lwc-workshop2015/papers/session3-smith-tone-paper.pdf A. Szepieniec, J. Ding, B. Preneel, Extension field cancellation: a new central trapdoor for multivariate quadratic systems, in PQCrypto'16. LNCS, vol. 9606 (2016), pp. 182–196 C. Tao, A. Diene, S. Tang, J. Ding, Simple matrix scheme for encryption, in PQCrypto'13. LNCS, vol. 7932 (2013), pp. 231–242 C. Tao, H. Xiang, A. Petzoldt, J. Ding, Simple matrix - a multivariate public key cryptosystem (MPKC) for encryption. Finite Fields Their Appl. 35, 352–368 (2015) E. Tomae, C. Wolf, Solving underdetermined systems of multivariate quadratic equations revisited, in PKC'12. LNCS, vol. 7293 (2012), pp. 156–171 J. Vates, D. Smith-Tone, Key recovery attack for all parameters of HFE-, in PQCrypto'17. LNCS, vol. 10346 (2017), pp. 272–288 B.Y. Yang, J.M. Chen, Building secure tame-like multivariate public-key cryptosystems: the new TTS, in ACISP'05. LNCS, vol. 3574 (2005), pp. 518–531 The author would like to thank the anonymous reviewer(s) for reading the previous draft and giving helpful comments. He was supported by JST CREST no. JPMJCR14D6 and JSPS Grant-in-Aid for Scientific Research (C) no. 17K05181. Department of Mathematical Sciences, University of the Ryukyus, Nishihara-cho, Okinawa, 903-0213, Japan Yasufumi Hashimoto Correspondence to Yasufumi Hashimoto.
About this paper Hashimoto, Y. (2021). Recent Developments in Multivariate Public Key Cryptosystems. In: Takagi, T., Wakayama, M., Tanaka, K., Kunihiro, N., Kimoto, K., Ikematsu, Y. (eds) International Symposium on Mathematics, Quantum Theory, and Cryptography. Mathematics for Industry, vol 33. Springer, Singapore. https://doi.org/10.1007/978-981-15-5191-8_16 Print ISBN: 978-981-15-5190-1 Online ISBN: 978-981-15-5191-8
\documentclass[a4paper]{article}
\usepackage{amsthm}
\usepackage{amsmath}
\usepackage{graphicx}
\usepackage{enumitem}
\usepackage{multirow}
\usepackage[colorinlistoftodos]{todonotes}
\title{A Statistical Analysis of the Simpson College Football Offense}
\author{Erik Hall}
\begin{document}
\maketitle
\begin{abstract}
This project explores the predictability of the play-calling of the Simpson College football offense during the years 2012--2014. The models used to assess the predictability include simple analysis as well as linear and logistic regression.
\end{abstract}
\section{Introduction}
In the years of 2012 and 2013, the Simpson College football team achieved records of 6-4 and 7-3 overall. Both of these seasons marked the highest win totals since the start of head coach Jim Glogowski's career at Simpson. Then in 2014, with very high expectations from both the team itself and the Iowa Conference, the team posted a 3-7 record overall. The decline in success is easily attributed to a large number of injuries as well as the graduation of a four-year letter winner, all-conference, school record-breaking quarterback. While this is the case, perhaps the decline in success is due to other factors. This is where statistical analysis may assist in the search for factors contributing to the decline in success. With the focus being on the offense, the goal is to determine whether the play-calling and the statistics associated with the play-calling were predictable. In other words, the goal is to see if the play-calling tendencies of the 2012 and 2013 seasons were predictable in a way that had a negative effect on the 2014 season.
\section{The Collection of Data}
Many athletic teams at the high school, collegiate, and professional levels take advantage of filming their own practices and games in order to evaluate their players' abilities. With the recent advancement of technology, these athletic programs now have access to software programs capable of storing the film recorded at games and practices. These software programs are also capable of storing relevant data associated with the given plays found on the film. The data has a large number of categories that plays can be organized into, and teams can pick and choose which categories to fill out when filming. In this application of sports statistics, the Simpson College football team uses a film software program known as Hudl. The data from Hudl used for this project was entered by hand on the sidelines of football games. It is important to recognize that manually entered data will have some errors associated with it, and this case is no different. Due to such errors, some plays were not recorded and therefore were left out of the data collection.
\section{A Further Explanation of Hudl}
To give more context to what Hudl actually is, we can delve into it a little more. An example of what one might see in Hudl is shown in Figure 1. The most obvious part of the example is the video portion of the screen, but for the purposes of our analyses we are more concerned with what is directly below the video. Looking closer, we see a wide range of categories that each film clip is split into. These categories range from the play number of the current game, to down and distance, to the play type. Note that the yard line category has negative numbers as possible values.
The explanation for this is that the yard line is recorded relative to which half of the field the team possessing the ball is on: if a team is on its own half of the field, the yard line is listed as negative, whereas if the team is on the opponent's side of the field, the yard line is listed as positive. For our analysis of the data, the categories describing plays found within Hudl will be used as the factors for prediction.
\begin{figure}[h]
\centering
\includegraphics[width=15cm,height=10cm]{hudl.jpg}
\caption{\label{fig:1}This is an example of what one could see when working with the film software program Hudl.}
\end{figure}
\section{Methods for Prediction}
In terms of measuring the predictability of play-calling, there will be three main methods of analysis: a simple statistical analysis of box score data, a quantitative analysis through linear regression, and a qualitative analysis through logistic regression. Due to the large number of observations, the computer software program R will be used for the calculations. The use of both linear regression and logistic regression requires the data being analyzed to be split into a training set and a test set. The training set is what each model is ``trained'' on, whereas the test set is what the model is ``tested'' on. Generally we are more concerned with the results on the test set, because that data is new to the model. Note that since we are concerned with the predictability of the 2014 season, the data will be split so that the 2012 and 2013 seasons are used as the training set and the 2014 season is used as the test set.
\section{Simplistic Statistical Analysis}
Before we look into more advanced statistical analysis, it is important to recognize that simpler techniques can sometimes give more insight into the data. With that being said, a summary of data obtained from the box scores of games in the 2012, 2013, and 2014 seasons can be seen in Table 1. Note that the statistics found in the table are considered the scoring statistics, or in other words the statistics associated with the productivity of the offense. Given the differences in win totals, it is somewhat surprising to see such similarity in the data for each season. Even with the similarities, there are some glaring differences that separate the 2014 season from the others in a negative way: the 2014 team averaged only 20.9 points per game and had a third down conversion rate of 34\%. From a coach's perspective it is reasonable to expect a low third down conversion rate to lead to fewer scores, considering that the offense will find itself unable to sustain drives and therefore unable to score.
\begin{table}[h]
\centering
\begin{tabular}{l|l|l|l}
 & 2012 & 2013 & 2014 \\ \hline
Average Starting Position (Yard Line) & 33.19 & 31.81 & 30.74 \\ \hline
Points Per Game & 27.5 & 32.7 & 20.9 \\ \hline
Average Yards Gained Per Play & 5.06 & 5.16 & 5.30 \\ \hline
Number of Turnovers & 24 & 15 & 20 \\ \hline
Red Zone Percentage & 59\% & 77\% & 74\% \\ \hline
3rd Down Percentage & 46\% & 74\% & 34\% \\ \hline
Average Time of Possession & 30:49 & 32:22 & 31:20 \\ \hline
\end{tabular}
\caption{This is the summary of the statistical analysis done on the scoring statistics.}
\end{table}
Given this information, there are a few things we can take away that give us more insight for the more advanced analysis techniques. The most important item of information from the simple analysis is that the third down conversion percentage hints at predictable play-calling. This is because a low third down conversion rate suggests difficult distances to go on third down, and an offense ends up in difficult third down situations when it is not successful on the previous downs. The lack of success on previous downs can come from a defense watching film and learning tendencies on given downs; if a defense knows what a potential play could be, it can put the offense in situations that create low chances of success. Ultimately, what we can take away from our simple analysis is that the play-calling may be predictable, and thus is something to look into. Before we look more into the advanced techniques, there is a simple logical explanation of the changes in success that we can consider. For this explanation we will look at the 2014 season only. The 2014 season began with a three game win streak against non-conference opponents, which was followed by a seven game losing streak against conference opponents. Based on how teams analyze film, it can be somewhat expected that a team will do well in the majority of its non-conference games, considering that those opponents are not played annually, unlike conference opponents. The reason it is reasonable to expect this is that non-conference teams have little film of their opponent available to them. This can be especially noticeable when a team plays a non-conference opponent for the first time. This explanation depends on the amount of film shared between coaches, but oftentimes teams only have access to film of their own games and of the opponent's games played prior to the matchup. Ultimately, the possible explanation is that Simpson had an advantage in non-conference games because the opposing teams did not have the same opportunities as conference teams to analyze Simpson's predictability in terms of film. Therefore Simpson may have been more likely to struggle in conference games.
\section{Linear Regression}
\subsection{Distance Predicting Gain or Loss}
With the limited number of quantitative variables found in Hudl, there have only been a few meaningful calculations. The most notable in terms of predictability is the linear regression predicting the gain or loss of a play, with the distance to go on a given down as the predictor. The results of the linear regression from R can be seen in Figure 2. Note that distance has a significant p-value, suggesting the significance of the variable. The p-value significance codes can be seen at the bottom of the figure.
For a given p-value to be considered at least somewhat significant, its value must be at most 0.1. As the p-value becomes smaller, the variable associated with it becomes more significant. For this linear regression, R gives the equation
\begin{eqnarray*}
\hat{y} \approx 3.97357 + 0.18085x.
\end{eqnarray*}
\begin{figure}[h]
\centering
\includegraphics[width=10cm,height=5cm]{Capture.JPG}
\caption{\label{fig:2}This is the output from R for the linear regression with distance as the predictor.}
\end{figure}
Note this equation is for the seasons of 2012 and 2013. To give context to this equation, let us look at an example. Consider a situation of 2nd down with 6 yards to go. In this case it is expected that the offense will gain about 5.05 yards, since we simply substitute 6 in for $x$. Thus in this particular situation, the offense is not expected to gain the yards necessary for a first down. With that being said, the equation above expects the offense to gain a first down if $0 \leq x \leq 4$. Note that there are some limitations to this equation; one assumption is that the distance to go is discrete, which is due to the data being discrete. In terms of predictability, there is an important piece of information that can be drawn from this particular model: for the 2012 and 2013 seasons, if a defense could put the offense in a position in which more than 4 yards were needed for the first down, it could be expected that the offense would not gain the needed yards. This suggests play-calling that does not have plays designed to gain more than 5 yards at a time. While this is a very notable result, a couple of issues arise. First, the predictability relates to our training set of the 2012 and 2013 seasons. This is an issue because we are more concerned with the predictability of the 2014 season, although it is important to note that earlier seasons can oftentimes affect the outcomes of later seasons in terms of play-calling. The other issue is that when running a linear regression on the test set, distance is no longer a significant factor according to the significance codes of R. The output of the linear regression with distance predicting the yards gained or lost on the test set can be seen in Figure 3. So while it is true that distance is no longer technically a significant predictor, there is something of note to consider. If we examine the output of the linear regression further, we see that the p-value for distance is close to being considered significant. With such a small difference preventing distance from being a significant predictor, it is safe to consider the predictor significant under a minor but important assumption. Recall that the predictor was significant for the training set, but also recall that the training set had more observations than the test set. The assumption is that if there existed more observations for the 2014 season, then distance would have a p-value that would be considered significant by R. We can go about verifying our assumption of significance by running a linear regression on the combination of the training and test sets. The R output for the linear regression done on the combination of the sets can be seen in Figure 4. As we review the output from R, we see that we now have a significant p-value of 0.00395. Something to note here is that this p-value is more significant than when we ran the linear regression on only the training set. This suggests that distance is a significant factor in the test set as well, since the combination of the training and test sets showed significance.
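To make the workflow concrete, a minimal R sketch of this regression is shown below; the data frame and column names (\texttt{train}, \texttt{yards}, \texttt{distance}) are hypothetical placeholders for the Hudl export, not the names actually used in this project.
\begin{verbatim}
# Fit yards gained/lost against distance to go (2012-2013 training set)
fit <- lm(yards ~ distance, data = train)
summary(fit)  # coefficients and p-values, as in Figure 2

# Expected gain on 2nd down and 6: roughly 3.97 + 0.18 * 6
predict(fit, newdata = data.frame(distance = 6))
\end{verbatim}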
With all of this being said, let us refocus on what we were attempting to show in the analysis. In short, we have shown that in each of the seasons under consideration, the yards gained by the offense on a given down could be predicted by the distance to go on that down. Thus an opposing defense could be more inclined to bring pressures or blitzes on early downs, knowing that the offense will struggle with distances of 5 yards or greater to go.
\begin{figure}[h]
\centering
\includegraphics[width=10cm,height=5cm]{problem.JPG}
\caption{\label{fig:3}This is the output from R for the linear regression with distance as the predictor on the test set.}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=10cm,height=5cm]{solution.JPG}
\caption{\label{fig:4}This is the output from R for the linear regression with distance as the predictor on the training and test sets.}
\end{figure}
\section{Logistic Regression}
\subsection{Down Predicting Play Type}
More meaningful calculations can be done with the Hudl data when using the qualitative variables. With that being said, we can use logistic regression to look for other potentially predictable tendencies. The first we will look into involves the use of down as a factor predicting play type. Before we run the regression, it is worth noting that a coach's intuition often says that down can predict run or pass, so we can go in with the expectation of seeing significance for the down variable. Similarly to when we ran linear regression earlier, we will first run the logistic regression on the training set. The output that R gives for the regression can be seen in Figure 5. Just as in linear regression, a low p-value indicates a significant variable, and in this case we see that down is a very significant variable for predicting play type. Note that the output from R is now a little different, since we are dealing with qualitative variables: the ``Estimate'' column of the output differs from what we saw with linear regression. The reason for this difference is that the logistic regression model uses a fitting method called maximum likelihood, whereas linear regression uses the least squares method. For our use of logistic regression, the maximum likelihood fitting method uses a logistic function to determine probability. The function that is used is
\begin{eqnarray*}
P(x) \approx \dfrac{e^{\beta_0+\beta_1x_1+\cdots+\beta_px_p}}{1+e^{\beta_0+\beta_1x_1+\cdots+\beta_px_p}}.
\end{eqnarray*}
For our application of logistic regression, we simply substitute the values found in our output into the function, with $x$ being the given down. Note that more than one predictor can be used in logistic regression, hence the additional variables up to variable $p$. When we substitute the values for a first down situation, the resulting probability is about 0.52. To give more context, the value 0.52 is the predicted probability that the play type on first down will be run. To find the probability of a pass play we simply take the complement of the value for a run; thus in this case the probability of a pass play on first down is about 0.48. The summary of the probabilities of run or pass on each down can be seen in Table 2. As the downs progress, it appears that the likelihood of a pass play rises.
From a coach's intuition this is somewhat expected, because third down can generally be classified as a ``passing down'' since it is more likely to gain yards through passing rather than running. While this information only applies to the training set, it is important to note that predictability in previous years can lead to predictability in current years. Now that we have a good background for how logistic regression works on the training set in terms of down predicting play type, we can shift our attention towards the test set. The output from R for the logistic regression of down predicting play type on the test set can be seen in Figure 6. We see again that down is considered a significant predictor for the regression, but an issue arises when calculations are attempted with the logistic function. The issue can most likely be attributed to the differences in the coefficients in the ``Estimate'' column: we see that in Figure 6 the intercept value is much greater than the intercept in Figure 5. These differences cause the estimated probabilities of run and pass plays to be skewed, such that each down is predicted to have a probability of at least 0.95 for run plays. Concluding whether or not down is significant is therefore difficult. If we go by the p-value of down, though, we can consider the predictor significant, with the warning that the estimated probabilities will be skewed. This is something to look further into in future work.
\begin{table}[h]
\centering
\begin{tabular}{l|l|l}
Down & Probability of Run & Probability of Pass \\ \hline
First & 0.52 & 0.48 \\ \hline
Second & 0.39 & 0.61 \\ \hline
Third & 0.28 & 0.72 \\ \hline
Fourth & 0.19 & 0.81 \\ \hline
\end{tabular}
\caption{These are the predicted probabilities of run or pass given the down.}
\end{table}
\begin{figure}[h]
\centering
\includegraphics[width=10cm,height=5cm]{logistic1.JPG}
\caption{\label{fig:5}This is the output from R for the logistic regression with down predicting play type on the training set.}
\end{figure}
\begin{figure}[h]
\centering
% graphic file missing in the extracted source
\caption{\label{fig:6}This is the output from R for the logistic regression with down as the predictor on the test set.}
\end{figure}
\subsection{Down and Distance Predicting Play Type}
We have some idea that down may potentially predict the play type, even with possibly inaccurate results for the test set. With that being the case, we can make the regression a little more specific by adding distance as a predictor, so that we can look at specific situations. Just as we have done with the other regressions, we run the model on the training set first. The output for the logistic regression on the training set can be seen in Figure 7. Just as when down was the only predictor, we see that both down and distance are significant given their p-values. Rather than showing a table of possible probabilities, we will look at a specific situation, due to the large number of possible situations that exist. Consider the situation of third down and fifteen yards to go. Before we look at the results, we can form an expectation to see if the results match it. Since there are a large number of yards to be gained with a limited number of downs, we can expect a passing play more often than a run in this situation. Substituting both 3 and 15 into our logistic function, we get a probability of run of about 0.175, and thus the probability of pass is 0.825. We see that our expectation of a pass play is met, with the probability of a pass play at $82.5\%$. As with our previous logistic regression, we see significance for our predictors on the training set.
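A corresponding R sketch for this model is given below; as before, the data frame and column names (\texttt{train}, \texttt{playtype}, \texttt{down}, \texttt{distance}) are hypothetical placeholders, and whether the returned probability refers to run or pass depends on how the response factor is coded.
\begin{verbatim}
# Logistic regression of play type on down and distance (training set)
fit2 <- glm(playtype ~ down + distance, data = train, family = binomial)
summary(fit2)  # coefficient estimates and p-values, as in Figure 7

# Predicted probability for 3rd down and 15 to go,
# cf. the worked example above (about 0.825 for a pass)
predict(fit2, newdata = data.frame(down = 3, distance = 15),
        type = "response")
\end{verbatim}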
We now run the logistic regression with down and distance predicting play type on our test set. The output from R for this regression can be seen in Figure 8. When we look over the results, we see that the down variable has a higher p-value but is still potentially considered significant, and that distance no longer has a p-value that allows the predictor to be considered significant. We saw a similar situation in our linear regression setting, in which the training set was significant but the test set was not. Recall that to get around that we combined the sets, ran the regression again, and were able to conclude the significance of our variables. We attempt the same strategy here by running the logistic regression on both the training and test sets. The results of the regression can be seen in Figure 9. Looking over the results, we see that the down variable has an improved p-value while the distance variable's p-value does not improve. This suggests that, for the test set, we cannot conclude that down and distance predicted play type for the 2014 season. Given these findings, the regression suggests that play-calling, in terms of play type, was not very predictable for the 2014 season. If we consider the findings on our training set a bit more, we can theorize that predictable tendencies from earlier seasons may have had an effect on the success of the 2014 season. In other words, the 2014 season may not have been predictable because defenses were able to apply strategies based on predictable tendencies from earlier seasons, such that the offense changed its play-calling style, which led to less success. \caption{\label{fig:7}This is the output from R for the logistic regression with down and distance as the predictors on the training set.} \caption{\label{fig:8}This is the output from R for the logistic regression with down and distance as the predictors on the test set.} \caption{\label{fig:9}This is the output from R for the logistic regression with down and distance as the predictors on both the training and test sets.} \subsection{Personnel Predicting Play Type} Of all of the regressions run so far, most are comprehensible in that the terms being used can be understood by the average watcher of football. This case is a bit different, though: ``personnel" is a term commonly used among football players and coaches to describe formations. In our case personnel refers to the formations of Simpson's offense. Examples of personnel groupings include 10, 22, and 12, among many others. As you can see, personnel involves a two-digit numbering system. The first number is the number of running backs involved in the formation, and the second number is the number of tight ends. For the purposes of this project, a running back is a player in the backfield within the tackle-box who is not the quarterback, whereas a tight end is a player on the line of scrimmage who is considered pass eligible. Examples of 21 and 11 personnel can be seen in Figure 10, and a small sketch of the numbering itself follows below. Note that in each example diagram Y represents a tight end, R represents a running back, and F represents a fullback. There are other players in each diagram, but for the purposes of determining personnel we are only concerned with tight ends and running backs.
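As a concrete illustration of the two-digit system, the short sketch below builds a personnel label from counts of running backs and tight ends. The function name and usage are ours, purely for illustration.
\begin{verbatim}
def personnel(num_rb, num_te):
    # First digit: running backs; second digit: tight ends
    return str(num_rb) + str(num_te)

print(personnel(2, 1))  # "21": two backs, one tight end
print(personnel(1, 1))  # "11": one back, one tight end
print(personnel(1, 0))  # "10": one back, no tight ends
\end{verbatim}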
\includegraphics[width= 10cm,height =5cm]{personnelex.JPG} \caption{\label{fig:10}These are examples of formations for 21 personnel and 11 personnel.} Before we look at the regression analysis, we can give more background on why it can be beneficial to find the correlation between personnel and play type. Oftentimes the formation of the offense can give the defense a hint about the play type that will be run. For example, if the offense is in a formation with no true receivers, it could be expected that the play type will be run, since the offense is better equipped to run the ball in that situation. This raises the question of why we are looking at personnel rather than formation. The reason is that personnel categorizes the formations based on the number of tight ends and running backs in the formation; therefore personnel encompasses multiple formations rather than looking at individual formations. There is a disadvantage in analyzing only individual formations in the regression setting, because some formations are predictable on their own without any analysis. For example, consider what is known as the victory formation. This formation is generally used when a team is winning toward the very end of the game and there is no need to try to gain more yards, so out of the victory formation the quarterback simply takes a knee to run time off the clock. Thus in that case a run can be predicted very easily. Other formations within offenses are designed for specific plays, and since we want predictions about things we do not already know, it is better to analyze multiple formations at once through the use of personnel. Now that we have a good background on what personnel is, we can begin the logistic regression analysis. Before we look at the results from R, note that the full output will not be shown, since there are 15 different personnel categories; only the useful results will be shown. The output for logistic regression with personnel predicting play type on the training set can be seen in Figure 11. We see that both 10 and 11 personnel are considered significant based on their p-values in the output. Compared to our other logistic regressions, these results are a little more difficult to interpret. To determine the probability of pass or run for a personnel grouping, we substitute a 1 for the $x$ value associated with that grouping in the logistic function, and 0 for the other $x$ values. In both 10 and 11 personnel situations, the regression predicts that a pass play is more likely, with 10 personnel having about a 0.83 probability of passing and 11 personnel about 0.64. A possible explanation for the likelihood of passing is that there are not many running backs in either grouping, and therefore the offense is better equipped to throw the ball. We now run the logistic regression on our test set. The output from the regression can be seen in Figure 12. After some immediate analysis we see that 10 and 11 personnel are no longer significant, but 12 personnel is considered significant. Some issues arise similar to our logistic regression involving down and distance: the predicted probability of play type in this situation is 0.995 in favor of a run play. So, as with our most recent logistic regression, the model may not be a good fit for the test set.
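Because personnel is categorical, it has to be dummy-encoded before the logistic fit. The sketch below shows one way to do this in Python; the data frame and column names are hypothetical stand-ins for our Hudl export, and we fit without an intercept so that each coefficient maps directly onto one personnel grouping (a slightly different coding than R's default, but it makes the substitution described above literal).
\begin{verbatim}
import pandas as pd
import statsmodels.api as sm

plays = pd.read_csv("training_plays.csv")  # hypothetical file name

# One indicator column per personnel grouping; with no intercept,
# P(run | grouping g) is the logistic function of g's coefficient.
dummies = pd.get_dummies(plays["personnel"].astype(str),
                         prefix="p").astype(float)
model = sm.Logit(plays["play_type"], dummies).fit()

# Probability of a run from 10 personnel: set p_10 = 1, all others 0.
row = pd.DataFrame([{c: 0.0 for c in dummies.columns}])
row["p_10"] = 1.0
print(model.predict(row))  # text reports about 0.17 run / 0.83 pass
\end{verbatim}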
\includegraphics[width= 10cm,height =5cm]{personnel1.JPG} \caption{\label{fig:11}This is the output for the logistic regression with personnel predicting play type on the training set.} \caption{\label{fig:12}This is the output for the logistic regression with personnel predicting play type on the test set.} \section{Future Work} It is important to note that, in all of this analysis, only one kind of classification or qualitative analysis was done for this project. While logistic regression can give useful outputs, we have seen through the analysis of the data in this application that it may not always be the best model. Other classification methods to consider include linear discriminant analysis, quadratic discriminant analysis, and k-nearest neighbors. These methods may provide more insight for prediction, but each may have issues as well. A potential issue with each model is that as the accuracy of prediction goes up, the interpretability of the model goes down. In other words, each model may be highly successful, but the results may be very difficult to interpret. \section{Conclusion} As we have seen, there are many pieces of information that suggest predictable play-calling may have occurred during the 2012-2014 seasons. For logistic regression specifically, it appears that the play type of any given play could have been predicted with better probability than simply guessing during the 2012 and 2013 seasons. Given that the goal was to show that the decline in success was due to something other than inexperienced players, it is somewhat difficult to pull a conclusion from the data analysis done on the test set during this project. The models used in this project do not show a lot of solid evidence suggesting predictable play-calling during the 2014 season. Perhaps the previous seasons were predictable in a way that kept the 2014 team from being successful, because opposing teams already had an idea of what was coming. It is also very difficult to isolate one issue in a losing season: athletic teams have many moving parts, and it is difficult to determine which sources contributed to a decline in success. Maybe the previous teams were simply more talented and were able to overcome predictability because they were so skilled. Then, once new players were leading operations, the predictability became more evident, and coaches were required to change their game plan, which could lead to a decline in success since the changes were made mid-season. If the game plan worked in previous seasons, it would be reasonable to think it would work in the 2014 season, but a change in game plan would create a new learning curve. While the goal was to get away from the excuse of injuries, maybe injuries are what caused the change in game plan, and thus we saw less predictability. While the project may not have been as successful as desired, perhaps it has shown a different perspective on what could cause a decline in success. \section{Works Cited} \begin{itemize} \item simpsonathletics.com \item The Simpson College football playbook \item Hudl.com \item Data for this project was made accessible by Ted Haag \end{itemize}
Šindelář, Jan; Vajda, Igor; Kárný, Miroslav: Stochastic control optimal in the Kullback sense. (English). Kybernetika, vol. 44 (2008), issue 1, pp. 53-60 MSC: 49N35, 60G35, 90D60, 91A60, 93E03, 93E20, 94A17 | MR 2405055 | Zbl 1145.93053 Keywords: Kullback divergence; minimization; stochastic controller The paper solves the problem of minimization of the Kullback divergence between a partially known and a completely known probability distribution. It considers two probability distributions of a random vector $(u_1, x_1, \ldots, u_T, x_T)$ on a sample space of $2T$ dimensions. One of the distributions is known, the other is known only partially. Namely, only the conditional probability distributions of $x_\tau$ given $u_1, x_1, \ldots, u_{\tau-1}, x_{\tau-1}, u_{\tau}$ are known for $\tau = 1, \ldots, T$. Our objective is to determine the remaining conditional probability distributions of $u_\tau$ given $u_1, x_1, \ldots, u_{\tau-1}, x_{\tau-1}$ such that the Kullback divergence of the partially known distribution with respect to the completely known distribution is minimal. An explicit solution of this problem was found previously for Markovian systems in Kárný [6]. The general solution is given in this paper. [1] Aoki M.: Optimization of Stochastic Systems: Topics in Discrete-Time Systems. Academic Press, New York – London 1967 MR 0234749 | Zbl 0168.15802 [2] Åström K. J.: Introduction to Stochastic Control. Academic Press, New York – San Francisco – London 1970 Zbl 1191.93141 [3] Bertsekas D. P.: Dynamic Programming and Stochastic Control. Second edition. Athena Scientific, Belmont, Mass. 2000 MR 2182753 | Zbl 0549.93064 [4] Clark D.: Advances in Model-Based Predictive Control. Oxford University Press, Oxford 1994 [5] Cover T. M., Thomas J. A.: Elements of Information Theory. Second edition. Wiley-Interscience, New York 2006 MR 2239987 | Zbl 1140.94001 [6] Kárný M.: Towards fully probabilistic control design. Automatica 32 (1996), 12, 1719–1722 MR 1427142 | Zbl 0868.93022 [7] Kulhavý R.: A Kullback–Leibler distance approach to system identification. In: Preprints of the IFAC Symposium on Adaptive Systems in Control and Signal Processing (C. Bányász, ed.), Budapest 1995, pp. 55–66 [8] Kullback S.: Information Theory and Statistics. Wiley, New York and Chapman & Hall, London 1967 MR 0103557 | Zbl 0897.62003 [9] Kullback S., Leibler R.: On information and sufficiency. Ann. Math. Statist. 22 (1951), 79–87 MR 0039968 | Zbl 0042.38403 [10] Kumar P. R., Varaiya P.: Stochastic Systems: Estimation, Identification and Adaptive Control. Prentice Hall, Englewood Cliffs, N. J. 1986 Zbl 0706.93057 [11] Kushner H.: Introduction to Stochastic Control. Holt, Rinehard and Winston, New York 1971 MR 0280248 | Zbl 0293.93018 [12] Martin J. J.: Bayesian Decision Problems and Markov Chains. Wiley, New York 1967 MR 0221709 | Zbl 0164.50102 [13] Meditch J. S.: Stochastic Optimal Linear Estimation and Control. Mc. Graw Hill, New York 1969 Zbl 0269.93061 [14] Vajda I.: Theory of Statistical Inference and Information. Mathematical and statistical methods. Kluwer Academic Publishers, Dordrecht, Boston – London 1989 Zbl 0711.62002
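In the standard notation of Kullback and Leibler [9] (the symbols $f$ and $g$ below are ours, not the paper's), the quantity being minimized over the unknown conditionals $f(u_\tau \mid u_1, x_1, \ldots, u_{\tau-1}, x_{\tau-1})$ can be written as $$ D(f\,\|\,g) = \int f(u_1, x_1, \ldots, u_T, x_T)\, \log \frac{f(u_1, x_1, \ldots, u_T, x_T)}{g(u_1, x_1, \ldots, u_T, x_T)}\, \mathrm{d}(u_1, x_1, \ldots, u_T, x_T), $$ where $g$ is the completely known distribution and $f$ the partially known one.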
RETRACTED ARTICLE: Sign-changing solutions to Schrödinger-Kirchhoff-type equations with critical exponent Liping Xu1 & Haibo Chen2 This article was retracted on 12 April 2017 In this paper, we study the following Schrödinger-Kirchhoff-type equation: $$ \textstyle\begin{cases} -(a+b\int_{\mathrm{R}^{3}}|\nabla u|^{2}\,dx)\triangle u+u= k(x)|u|^{2^{*}-2}u+\mu h(x)u \quad \text{in } \mathrm{R}^{3}, \\ u\in H^{1}(\mathrm{R}^{3}), \end{cases} $$ where \(a, b, \mu>0\) are constants, and \(2^{*}=6\) is the critical Sobolev exponent in three spatial dimensions. Under appropriate assumptions on nonnegative functions \(k(x)\) and \(h(x)\), we establish the existence of positive and sign-changing solutions by variational methods. In this paper, we investigate the following Schrödinger-Kirchhoff-type problem: $$ \textstyle\begin{cases}-(a+b\int_{\mathrm{R}^{3}}|\nabla u|^{2}\,dx)\triangle u+u= k(x)|u|^{2^{*}-2}u+\mu h(x)u \quad \text{in } \mathrm{R}^{3}, \\ u\in H^{1}(\mathrm{R}^{3}), \end{cases} $$ where \(a, b>0\) are constants, and \(2^{*}=6\) is the critical Sobolev exponent in dimension three. We assume that μ and the functions \(k(x)\) and \(h(x)\) satisfy the following hypotheses: (\(\mu_{1}\)): \(0<\mu<\tilde{\mu}\), where μ̃ is defined by $$\tilde{\mu}:=\inf_{u\in H^{1}(\mathrm{R}^{3})\setminus\{0\}}\biggl\{ \int_{\mathrm{R}^{3}}\bigl(a|\nabla u|^{2}+|u|^{2} \bigr)\,dx: \int_{\mathrm{R}^{3}}h(x)|u|^{2}\,dx=1\biggr\} ; $$ (k1): \(k(x)\geq0\), \(\forall x\in\mathrm{R}^{3}\); (k2): there exist \(x_{0}\in\mathrm{R}^{3}\), \(\sigma_{1}>0\), \(\rho_{1}>0\), and \(1\leq\alpha<3\) such that \(k(x_{0})= \max_{x\in \mathrm{R}^{3}}k(x)\) and $$\bigl\vert k(x)-k(x_{0})\bigr\vert \leq\sigma_{1} \vert x-x_{0}\vert ^{\alpha}\quad \text{for } \vert x-x_{0}\vert < \rho_{1}; $$ (h1): \(h(x)\geq0\) for any \(x\in \mathrm{R}^{3}\) and \(h(x)\in L^{\frac{3}{2}}(\mathrm{R}^{3})\); (h2): there exist \(\sigma_{2}>0\) and \(\rho_{2}>0\) such that \(h(x)\geq \sigma_{2}|x-x_{0}|^{-\beta}\) for \(|x-x_{0}|<\rho_{2}\). The Kirchhoff-type problem is related to the stationary analogue of the equation $$u_{tt}-\biggl(a+b \int_{\Omega}|\nabla u|^{2}\,dx\biggr)\triangle u= f(x,u) \quad \text{in } \Omega, $$ where Ω is a bounded domain in \(\mathrm{R}^{N}\), u denotes the displacement, \(f(x,u)\) is the external force, and b is the initial tension, whereas a is related to the intrinsic properties of the string (such as Young's modulus). Equations of this type arise in the study of string or membrane vibration and were proposed by Kirchhoff in 1883 (see [1]) to describe the transversal oscillations of a stretched string, particularly taking into account the subsequent change in string length caused by oscillations. Kirchhoff-type problems are often referred to as being nonlocal because of the presence of the integral over the entire domain Ω, which provokes some mathematical difficulties. Similar nonlocal problems also model several physical and biological systems where u describes a process that depends on the average of itself, for example, the population density; see [2, 3]. Kirchhoff-type problems have received much attention. Some important and interesting results can be found in, for example, [4–6] and the references therein. The solvability of the following Schrödinger-Kirchhoff-type equation (1.2) has also been well studied in general dimension by various authors: $$ -\biggl(a+b \int_{\mathrm{R}^{N}}|\nabla u|^{2}\,dx\biggr)\triangle u+V(x)u=f(x,u) \quad \text{in } \mathrm{R}^{N}.
$$ For example, Wu [7] and many others [8–13], using variational methods, proved the existence of nontrivial solutions to (1.2) with subcritical nonlinearities. Li and Ye [14] obtained the existence of a positive solution for (1.2) with critical exponents. More recently, Wang et al. [15] and Liang and Zhang [16] proved the existence and multiplicity of positive solutions of (1.2) with critical growth and a small positive parameter. The problem of finding sign-changing solutions is a very classical one. In general, this problem is much more difficult than finding a mere solution. There are several abstract theories and methods for studying sign-changing solutions; see, for example, [17, 18] and the references therein. In recent years, Zhang and Perera [19] obtained sign-changing solutions of (1.2) with superlinear or asymptotically linear terms. More recently, Mao and Zhang [20] used minimax methods and invariant sets of descent flow to prove the existence of nontrivial solutions and sign-changing solutions for (1.2) without the P.S. condition. Motivated by the works described, in this paper our aim is to study the existence of positive and sign-changing solutions for problem (1.1). The method is inspired by Hirano and Shioji [21] and Huang et al. [22]; however, their arguments cannot be directly applied here. To the best of our knowledge, there are very few works up to now studying sign-changing solutions for Schrödinger-Kirchhoff-type problems with critical exponent, that is, problem (1.1). Our main results are as follows. Theorem 1.1. Assume that (\(\mu_{1}\)), (k1), (k2), and (h1)-(h2) hold. Then, for \(1<\beta<3\), problem (1.1) possesses at least one positive solution. Theorem 1.2. Assume that (\(\mu_{1}\)), (k1), (k2), and (h1)-(h2) hold. Then, for \(\frac{3}{2}<\beta<3\), problem (1.1) possesses at least one sign-changing solution. \(H^{1}(\mathrm{R}^{3})\) is the Sobolev space equipped with the norm \(\|u\|^{2}_{H^{1}(\mathrm{R}^{3})}=\int_{\mathrm{R}^{3}}{(|\nabla u|^{2}+|u|^{2})\,dx}\). We define \(\|u\|^{2}:=\int_{\mathrm{R}^{3}}{(a|\nabla u|^{2}+|u|^{2})\,dx}\) for \(u\in H^{1}(\mathrm{R}^{3})\). Note that \(\|\cdot\|\) is an equivalent norm on \(H^{1}(\mathrm{R}^{3})\). For any \(1\leq s\leq\infty\), \(\|u\|_{L^{s}}:=(\int_{\mathrm{R}^{3}}|u|^{s} \,dx)^{\frac{1}{s}}\) denotes the usual norm of the Lebesgue space \(L^{s}(\mathrm{R}^{3})\). By \(D^{1,2}(\mathrm{R}^{3})\) we denote the completion of \(C_{0}^{\infty}(\mathrm{R}^{3})\) with respect to the norm \(\|u\|^{2}_{D^{1,2}(\mathrm{R}^{3})}:=\int_{\mathrm{R}^{3}}|\nabla u|^{2} \,dx\). S denotes the best Sobolev constant defined by \(S=\inf_{u\in D^{1,2}(\mathrm{R}^{3})\setminus\{0\}}\frac{\int_{\mathrm{R}^{3}}|\nabla u|^{2} \,dx}{(\int_{\mathrm{R}^{3}} u^{6} \,dx)^{\frac{1}{3}}}\). \(C>0\) denotes various positive constants. The outline of the paper is given as follows. In Section 2, we present some preliminary results. In Sections 3 and 4, we give proofs of Theorems 1.1 and 1.2, respectively. The variational framework and preliminaries In this section, we give some preliminary lemmas and the variational setting for (1.1). It is clear that problem (1.1) is the Euler-Lagrange equation of the functional \(I:H^{1}(\mathrm{R}^{3})\rightarrow\mathrm{R}\) defined by $$ I(u)=\frac{1}{2}\|u\|^{2}+\frac{b}{4}\biggl( \int_{\mathrm{R}^{3}}|\nabla u|^{2}\,dx\biggr)^{2}-\frac{1}{6} \int_{\mathrm{R}^{3}}k(x)|u|^{6}\,dx-\frac{\mu}{2} \int_{\mathrm{R}^{3}}h(x)|u|^{2}\,dx.
$$ Obviously, I is a well-defined \(C^{1}\) functional and satisfies $$\begin{aligned} \bigl\langle I'(u),v\bigr\rangle =& \int_{\mathrm{R}^{3}}{(a\nabla u\nabla v+uv)}\,dx+b \int_{\mathrm{R}^{3}}|\nabla u|^{2}\,dx \int_{\mathrm {R}^{3}}\nabla u\nabla v\,dx \\ &{}- \int_{\mathrm{R}^{3}}\bigl({k(x)|u|^{4}uv}+\mu h(x)uv\bigr)\,dx \end{aligned}$$ for \(v\in H^{1}(\mathrm{R}^{3})\). It is well known that \(u\in H^{1}(\mathrm {R}^{3})\) is a critical point of the functional I if and only if u is a weak solution of (1.1). Assume that (h1) holds. Then the function \(\psi _{h}:u\in H^{1}(\mathrm{R}^{3})\mapsto\int_{\mathrm{R}^{3}} h(x)u^{2}\,dx\) is weakly continuous, and for each \(v\in H^{1}(\mathrm{R}^{3})\), \(\varphi_{h}:u\in H^{1}(\mathrm{R}^{3})\mapsto\int_{\mathrm {R}^{3}}h(x)uv\,dx\) is also weakly continuous. The proof of Lemma 2.1 is a direct conclusion of [23], Lemma 2.13. Assume that (h1) holds. Then the infimum $$\tilde{\mu}:= \inf_{u\in H^{1}(\mathrm {R}^{3})\setminus\{0\}} \biggl\{ \int_{\mathrm{R}^{3}}\bigl(a|\nabla u|^{2}+|u|^{2} \bigr)\,dx: \int_{\mathrm {R}^{3}}h(x)|u|^{2}\,dx=1\biggr\} $$ is achieved. The proof of Lemma 2.2 is the same as that of [24], Lemma 2.5. Here we omit it for simplicity. □ Assume that (k1), (h1), and (\(\mu_{1}\)) hold. Then the functional I possesses the following properties. There exist \(\rho, \gamma>0\) such that \(I(u)\geq\gamma\) for \(\| u\|=\rho\). There exists \(e\in H^{1}(\mathrm{R}^{3})\) with \(\|e\|>\rho\) such that \(I(e)<0\). By Lemma 2.2 and the Sobolev inequality we obtain $$I(u)\geq\frac{1}{2}\|u\|^{2}-C\|u\|^{6}- \frac{\mu}{2\tilde{\mu}}\|u\| ^{2}=\|u\|^{2}\biggl( \frac{1}{2}-\frac{\mu}{2\tilde{\mu}}-C\|u\|^{4}\biggr). $$ Set \(\|u\|=\rho\) small enough such that \(C\rho^{4}\leq\frac {1}{4}(1-\frac{\mu}{\tilde{\mu}})\). Then we have $$ I(u)\geq\frac{1}{4}\biggl(1-\frac{\mu}{\tilde{\mu}}\biggr)\rho^{2}. $$ Choosing \(\gamma=\frac{1}{4}(1-\frac{\mu}{\tilde{\mu}})\rho^{2}\), we complete the proof of (1). For \(t>0\) and some \(u_{0}\in H^{1}(\mathrm{R}^{3})\) with \(\|u_{0}\|=1\), it follows from (h1) and (\(\mu_{1}\)) that $$I(tu_{0})\leq\frac{1}{2}t^{2}\|u_{0} \|^{2}+\frac{b}{4}t^{4}\biggl( \int_{\mathrm {R}^{3}}|\nabla u_{0}|^{2}\,dx \biggr)^{2}-\frac{t^{6}}{6} \int_{\mathrm{R}^{3}}k(x)|u_{0}|^{6}\,dx, $$ which implies that \(I(tu_{0})<0\) for \(t>0\) large enough. Hence, we can take an \(e=t_{1}u_{0}\) for some \(t_{1}>0\) large enough, and (2) follows. □ Next, we define the Nehari manifold N associated with I by $${N}:=\bigl\{ u\in H^{1}\bigl(\mathrm{R}^{3}\bigr)\setminus\{0 \}:G(u)=0\bigr\} ,\quad \text{where } G(u)=\bigl\langle I'(u),u\bigr\rangle . $$ Now we state some properties of N. Assume that (\(\mu_{1}\)) is satisfied. Then the following conclusions hold. For all \(u\in H^{1}(\mathrm{R}^{3})\setminus\{0\}\), there exists a unique \(t(u)>0\) such that \(t(u)u\in{N}\). Moreover, \(I(t(u))u= \max_{t\geq0}I(tu)\). \(0< t(u)<1\) in the case \(\langle I'(u),u\rangle<0\); \(t(u)>1\) in the case \(\langle I'(u),u\rangle>0\). \(t(u)\) is a continuous functional with respect to u in \(H^{1}(\mathrm{R}^{3})\). \(t(u)\rightarrow+\infty\) as \(\|u\|\rightarrow0\). The proof is similar to that of [22], Lemma 2.4, and is omitted here. □ Positive solution In order to deduce Theorem 1.1, the following lemmas are important. Borrowing an idea from Lemma 3.6 in [14], we obtain the first result. 
For \(s, t>0\), the system $$ \textstyle\begin{cases} f(t,s)=t-aS(\frac{s+t}{\lambda})^{\frac{1}{3}}=0, \\ g(t,s)=s-bS^{2}(\frac{s+t}{\lambda})^{\frac{2}{3}}=0, \end{cases} $$ has a unique solution \((t_{0},s_{0})\), where \(\lambda>0\) is a constant. Moreover, if $$ \textstyle\begin{cases} f(t,s)\geq0, \\ g(t,s)\geq0, \end{cases} $$ then \(t\geq t_{0}\) and \(s\geq s_{0}\), where \(t_{0}=\frac{abS^{3}+a\sqrt {b^{2}S^{6}+4\lambda aS^{3}}}{2\lambda}\) and \(s_{0}=\frac{bS^{6}+2\lambda abS^{3}+b^{2}S^{3}\sqrt{b^{3}S^{6}+4\lambda aS^{3}}}{2\lambda^{2}}\). Assume that (\(\mu_{1}\)), (k1), and (h1) hold. Let a sequence \(\{u_{n}\}\subset{N}\) be such that \(u_{n}\rightharpoonup u\) in \(H^{1}(\mathrm{R}^{3})\) and \(I(u_{n})\rightarrow c\), but any subsequence of \(\{u_{n}\}\) does not converge strongly to u. Then one of the following results holds: \(c>I(t(u)u)\) in the case \(u\neq0\) and \(\langle I'(u),u\rangle <0\); \(c\geq c^{*}\) in the case \(u=0\); \(c>c^{*}\) in the case \(u\neq0\) and \(\langle I'(u),u\rangle\geq0\); where \(c^{*}=\frac{abS^{3}}{4\|k\|_{\infty}}+\frac{b^{3}S^{6}}{24\|k\| ^{2}_{\infty}}+\frac{(b^{2}S^{4}+4a\|k\|_{\infty}S)^{\frac{3}{2}}}{24\|k\| ^{2}_{\infty}}\), and \(t(u)\) is defined as in Lemma 2.4. Part of the proof is similar to that of [22], Lemma 3.1 or [25], Proposition 3.3. For the reader's convenience, we only sketch the proof. Since \(u_{n}\rightharpoonup u\) in \(H^{1}(\mathrm{R}^{3})\), we have \(u_{n}-u\rightharpoonup0\). Then by Lemma 2.1 we obtain that $$ \int_{\mathrm{R}^{3}}h(x)|u_{n}-u|^{2}\,dx \rightarrow0. $$ We obtain from the Brézis-Lieb lemma [26], (3.1), and \(u_{n}\in{N}\) that $$\begin{aligned} c+o(1) =&I(u_{n}) \\ =&I(u)+\frac{1}{2}\|u_{n}-u\|^{2}+ \frac{b}{4} \biggl( \int _{\mathrm{R}^{3}}\bigl\vert \nabla(u_{n}-u)\bigr\vert ^{2}\,dx \biggr)^{2} \\ &{}-\frac{1}{6} \int_{\mathrm {R}^{3}}k(x)|u_{n}-u|^{6}\,dx+o(1) \end{aligned}$$ $$\begin{aligned} 0 =&\bigl\langle I'(u_{n}),u_{n}\bigr\rangle \\ =&\bigl\langle I'(u),u\bigr\rangle +\|u_{n}-u \|^{2}+b \biggl( \int_{\mathrm {R}^{3}}\bigl\vert \nabla(u_{n}-u)\bigr\vert ^{2}\,dx \biggr)^{2} \\ &{}- \int_{\mathrm{R}^{3}}k(x)|u_{n}-u|^{6}\,dx+o(1). \end{aligned}$$ Up to a subsequence, we may assume that there exist \(l_{i}\geq 0\), \(i=1,2,3\), such that $$ \begin{aligned} &\|u_{n}-u\|^{2}\rightarrow l_{1},\qquad b \biggl( \int_{\mathrm{R}^{3}}\bigl\vert \nabla (u_{n}-u)\bigr\vert ^{2}\,dx \biggr)^{2}\rightarrow l_{2}, \\ &\int_{\mathrm {R}^{3}}k(x)|u_{n}-u|^{6}\,dx\rightarrow l_{3}. \end{aligned} $$ Since any subsequence of \(\{u_{n}\}\) does not converge strongly to u, we have \(l_{1}>0\). Set \(\gamma(t)=\frac{l_{1}}{2}t^{2}+\frac {l_{2}}{4}t^{4}-\frac{l_{3}}{6}t^{6}\) and \(\eta(t)=g(t)+\gamma(t)\). By (3.3) and (3.4) we have \(\eta' (1)=g'(1)+\gamma'(1)=0\), and \(t=1\) is the only critical point of \(\eta (t)\) in \((0,+\infty)\), which implies that $$ \eta(1)= \max_{t>0}\eta(t). $$ We consider three situations: (1) \(u\neq0\) and \(\langle I'(u),u\rangle<0\). Then by (3.3) and (3.4) we have $$ l_{1}+l_{2}-l_{3}>0. $$ $$ \gamma '(t)=l_{1}t+l_{2}t^{3}-l_{3}t^{5}>l_{1}t+l_{2}t^{3}-(l_{1}+l_{2})t^{5}= \bigl(1-t^{2}\bigr)\bigl[l_{1}t+(l_{1}+l_{2})t^{3} \bigr]\geq 0 $$ for any \(0< t<1\), which implies that $$ \gamma(t)>\gamma(0)=0 \quad \text{for any } t\in(0,1). $$ Since \(\langle I'(u),u\rangle<0\), by Lemma 2.4 there exists \(t(u)>0\) such that \(0< t(u)<1\). Then it follows from (3.8) that \(\gamma (t(u))>0\). 
Therefore, we obtain from (3.2) and (3.5) that \(c=\eta(1)>\eta (t(u))=g(t(u))+\gamma(t(u))>I(t(u)u)\), which implies that (1) holds. (2) \(u=0\). Then by (3.2), (3.3), and (3.4) we get $$ \textstyle\begin{cases} l_{1}+l_{2}-l_{3}=0, \\ \frac{1}{2}l_{1}+\frac{1}{4}l_{2}-\frac{1}{6}l_{3}=c. \end{cases} $$ By the definition of S we see that $$\begin{aligned}& \int_{\mathrm{R}^{3}}|\nabla u_{n}|^{2}\,dx\geq \frac{S}{\|k\| _{\infty}^{1/3}} \biggl( \int_{\mathrm{R}^{3}}k(x)|u_{n}|^{6}\,dx \biggr)^{\frac {1}{3}}, \\& b \biggl( \int_{\mathrm{R}^{3}}|\nabla u_{n}|^{2}\,dx \biggr)^{2}\geq b\frac{ S^{2}}{\|k\|_{\infty}^{2/3}} \biggl( \int_{\mathrm{R}^{3}}k(x)|u_{n}|^{6}\,dx \biggr)^{\frac{2}{3}}. \end{aligned}$$ $$l_{1}\geq a S\biggl(\frac{l_{1}+l_{2}}{\|k\|_{\infty}}\biggr)^{\frac {1}{3}} \quad \mbox{and}\quad l_{2}\geq b S^{2}\biggl(\frac{l_{1}+l_{2}}{\|k\|_{\infty}} \biggr)^{\frac{2}{3}}. $$ Obviously, if \(l_{1}>0\), then \(l_{2}, l_{3}>0\). It follows from Lemma 3.1 that $$\begin{aligned} c =&\frac{1}{3}l_{1}+\frac{1}{12}l_{2} \\ \geq&\frac{1}{3}\frac{abS^{3}+a\sqrt{b^{2}S^{6}+4\|k\|_{\infty}aS^{3}}}{2\| k\|_{\infty}}+\frac{1}{12}\frac{bS^{6}+2\|k\|_{\infty}abS^{3}+b^{2}S^{3}\sqrt {b^{3}S^{6}+4\|k\|_{\infty}aS^{3}}}{ 2\|k\|_{\infty}^{2}} \\ =&\frac{abS^{3}}{4\|k\|_{\infty}}+\frac{b^{3}S^{6}}{24\|k\|_{\infty}^{2}}+\frac{(b^{2}S^{4}+4a\|k\|_{\infty}S)^{\frac{3}{2}}}{24\|k\|_{\infty}^{2}}:=c^{*}. \end{aligned}$$ (3) \(u\neq0\) and \(\langle I'(u),u\rangle\geq0\). We prove this case in two steps. Firstly, we consider \(u\neq0\) and \(\langle I'(u),u\rangle=0\). Then from Lemma 2.3 and Lemma 2.4 we get $$ I(u)= \max_{t>0}I(tu)>0. $$ Since \(u\neq0\) and \(\langle I'(u),u\rangle=0\), as in (3.9), we obtain that $$ c=\eta(1)=I(u)+\frac{l_{1}}{3}+\frac{l_{2}}{12}>c^{*}. $$ Secondly, we prove the case \(u\neq0\) and \(\langle I'(u),u\rangle>0\). Set \(t^{**}=(\frac{l_{2}+\sqrt{l_{2}^{2}+4l_{1}l_{3}}}{2l_{3}})^{\frac{1}{2}}\). Then, \(\gamma(t)\) attains its maximum at \(t^{**}\), that is, $$\begin{aligned} \gamma\bigl(t^{**}\bigr) =& \max_{t>0}\gamma(t) \\ =&\frac{l_{1}l_{2}}{4l_{3}}+\frac{l_{2}^{2}}{24l_{3}^{2}}+\frac {(l_{2}^{2}+4l_{1}l_{3})^{\frac{3}{2}}}{24l_{3}^{2}} \\ \geq&\frac{abS^{3}}{4\|k\|_{\infty}}+\frac{b^{3}S^{6}}{24\|k\| _{\infty}^{2}}+\frac{(b^{2}S^{4}+4a\|k\|_{\infty}S)^{\frac{3}{2}}}{24\|k\| _{\infty}^{2}}=c^{*}. \end{aligned}$$ It follows from Lemma 2.4 that \(0< t^{**}<1\). Then \(I(t^{**}u)\geq0\). Therefore, by (3.2), (3.5), and (3.12) we obtain $$c=\eta(1)>\eta\bigl(t^{**}\bigr)=I\bigl(t^{**}u\bigr)+\gamma \bigl(t^{**}\bigr)\geq c^{*}. $$ The proof of Lemma 3.2 is complete. □ If the hypotheses of Theorem 1.1 hold with \(1<\beta <3\), then $$c_{1}< \frac{abS^{3}}{4\|k\|_{\infty}}+\frac{b^{3}S^{6}}{24\|k\|_{\infty}^{2}}+\frac{(b^{2}S^{4}+4a\|k\|_{\infty}S)^{\frac{3}{2}}}{24\|k\|_{\infty}^{2}}=c^{*}, $$ where \(c_{1}\) is defined by \(\inf_{u\in{N}}I(u)\). To prove this lemma, we borrow an idea employed in [22]. For \(\varepsilon,r>0\), define \(w_{\varepsilon}(x)=\frac{C\varphi (x)\varepsilon^{\frac{1}{4}}}{(\varepsilon+|x-x_{0}|^{2})^{\frac {1}{2}}}\), where C is a normalizing constant, \(x_{0}\) is given in (k2), and \(\varphi\in C_{0}^{\infty}(\mathrm{R}^{3})\), \(0\leq\varphi\leq1\), \(\varphi|_{B_{r}(0)}\equiv1\), and \(\operatorname{supp}\varphi\subset B_{2r}(0)\). 
Using the method of [25], we obtain $$ \int_{\mathrm{R}^{3}}|\nabla w_{\varepsilon}|^{2} \,dx=K_{1}+O\bigl(\varepsilon ^{\frac{1}{2}}\bigr), \qquad \int_{\mathrm{R}^{3}}|w_{\varepsilon}|^{6}\,dx=K_{2}+O \bigl(\varepsilon^{\frac{3}{2}}\bigr), $$ $$ \int_{\mathrm{R}^{3}}|w_{\varepsilon}|^{s}\, dx= \textstyle\begin{cases} K\varepsilon^{\frac{s}{4}}, & s\in[2,3), \\ K\varepsilon^{\frac{3}{4}}|\ln\varepsilon|, & s=3, \\ K\varepsilon^{\frac{6-s}{4}}, & s\in(3,6), \end{cases} $$ where \(K_{1}\), \(K_{2}\), K are positive constants. Moreover, the best Sobolev constant is \(S=K_{1}K_{2}^{-\frac{1}{3}}\). By (3.13) we have $$ \frac{\int_{\mathrm{R}^{3}}|\nabla w_{\varepsilon}|^{2} \,dx}{(\int_{\mathrm{R}^{3}} w_{\varepsilon}^{6} \,dx)^{\frac {1}{3}}}=S+O\bigl(\varepsilon^{\frac{1}{2}}\bigr). $$ By Lemma 2.4, for this \(w_{\varepsilon}\), there exists a unique \(t(w_{\varepsilon})>0\) such that \(t(w_{\varepsilon})w_{\varepsilon}\in {N}\). Thus, \(c_{1}< I(t(w_{\varepsilon})w_{\varepsilon})\). Using (2.1), for \(t>0\), since \(I(tw_{\varepsilon})\rightarrow-\infty\) as \(t\rightarrow \infty\), we easily see that \(I(tw_{\varepsilon})\) has a unique critical \(t(w_{\varepsilon})>0\) that corresponds to its maximum, that is, \(I(t_{\varepsilon}w_{\varepsilon})=\max_{t>0}I(tw_{\varepsilon})\). It follows from (1) of Lemma 2.3, \(I(tw_{\varepsilon})\rightarrow -\infty\) as \(t\rightarrow\infty\), and the continuity of I that there exist two positive constants \(t_{0}\) and \(T_{0}\) such that \(t_{0}< t_{\varepsilon}< T_{0}\). Let \(I(t_{\varepsilon}w_{\varepsilon})=F(\varepsilon)+G(\varepsilon)+H(\varepsilon)\), where $$\begin{aligned}& F(\varepsilon)=\frac{at_{\varepsilon}^{2}}{2} \int _{\mathrm{R}^{3}}|\nabla w_{\varepsilon}|^{2}\,dx+ \frac{bt_{\varepsilon}^{4}}{4}\biggl( \int_{\mathrm{R}^{3}}|\nabla w_{\varepsilon}|^{2}\,dx \biggr)^{2}-\frac{ t_{\varepsilon}^{6}}{6} \int_{\mathrm{R}^{3}}k(x_{0})|w_{\varepsilon}|^{6} \,dx, \\& G(\varepsilon)=\frac{ t_{\varepsilon}^{6}}{6} \int_{\mathrm {R}^{3}}k(x_{0})|w_{\varepsilon}|^{6} \,dx-\frac{ t_{\varepsilon}^{6}}{6} \int _{\mathrm{R}^{3}}k(x)|w_{\varepsilon}|^{6}\,dx, \end{aligned}$$ $$H(\varepsilon)=\frac{t_{\varepsilon}^{2}}{2} \int_{\mathrm {R}^{3}}|w_{\varepsilon}|^{2}\,dx- \frac{\mu t_{\varepsilon}^{2}}{2} \int _{\mathrm{R}^{3}}h(x)|w_{\varepsilon}|^{2}\,dx. $$ $$\Phi(t)=\frac{at^{2}}{2} \int_{\mathrm{R}^{3}}|\nabla w_{\varepsilon}|^{2}\,dx+ \frac{bt^{4}}{4}\biggl( \int_{\mathrm{R}^{3}}|\nabla w_{\varepsilon}|^{2}\,dx \biggr)^{2}-\frac{ t^{6}}{6} \int_{\mathrm{R}^{3}}k(x_{0})|w_{\varepsilon}|^{6} \,dx. $$ Note that \(\Phi(t)\) attains its maximum at $$t^{*}_{0}= \biggl(\frac{b(\int_{\mathrm{R}^{3}}|\nabla w_{\varepsilon}|^{2}\,dx)^{2}+\sqrt{b^{2}(\int_{\mathrm{R}^{3}}|\nabla w_{\varepsilon}|^{2}\,dx)^{4}+4a(\int_{\mathrm{R}^{3}}|\nabla w_{\varepsilon}|^{2}\,dx)^{2}\int _{\mathrm{R}^{3}}k(x_{0})|w_{\varepsilon}|^{6}\,dx}}{2\int_{\mathrm {R}^{3}}k(x_{0})|w_{\varepsilon}|^{6}\,dx} \biggr)^{\frac{1}{2}}. $$ $$ \max_{t\geq0}\Phi(t)=\Phi\bigl(t^{*}_{0}\bigr)= \frac{abS^{3}}{4\|k\| _{\infty}}+\frac{b^{3}S^{6}}{24\|k\|_{\infty}^{2}}+\frac{(b^{2}S^{4}+4a\|k\| _{\infty}S)^{\frac{3}{2}}}{24\|k\|_{\infty}^{2}}+O\bigl( \varepsilon^{\frac{1}{2}}\bigr) $$ for \(\varepsilon>0\) small enough. Then we have $$ F(\varepsilon)\leq c^{*}+O\bigl(\varepsilon^{\frac{1}{2}}\bigr). $$ By (3.36) of [22] we have $$ G(\varepsilon)\leq C\varepsilon^{\frac{1}{2}}. 
$$ From (3.38) of [22], (3.14), and the boundedness of \(t_{\varepsilon}\) we obtain $$\begin{aligned} \begin{aligned}[b] H(\varepsilon)&=\frac{t_{\varepsilon}^{2}}{2} \int_{\mathrm {R}^{3}}|w_{\varepsilon}|^{2}\,dx- \frac{\mu t_{\varepsilon}^{2}}{2} \int _{\mathrm{R}^{3}}h(x)|w_{\varepsilon}|^{2}\,dx \\ &\leq C\varepsilon^{\frac{1}{2}}-\mu C \varepsilon^{1-\frac {\beta}{2}}. \end{aligned} \end{aligned}$$ Since \(1<\beta<3\), for fixed \(\mu>0\), we obtain $$ \frac{H(\varepsilon)}{\varepsilon^{\frac{1}{2}}}\rightarrow-\infty \quad \mbox{as } \varepsilon\rightarrow0. $$ It follows from (3.17), (3.18), and (3.20) that the proof of Lemma 3.3 is complete. □ Proof of Theorem 1.1 By the definition of \(c_{1}\) there exists a sequence \(\{u_{n}\}\subset N \) such that \(I(u_{n})\rightarrow c_{1}\) as \(n\rightarrow\infty\). Then we obtain that $$ \|u_{n}\|^{2}+b\biggl( \int_{\mathrm{R}^{3}}|\nabla u_{n}|^{2}\,dx \biggr)^{2}- \int _{\mathrm{R}^{3}}\mu h(x)|u_{n}|^{2}\,dx= \int_{\mathrm{R}^{3}}k(x)|u_{n}|^{6}\,dx. $$ It follows from (3.21) and Lemma 2.2 that $$\begin{aligned} c_{1}+o(1) =&\frac{1}{3}\biggl(\|u_{n} \|^{2}-\mu \int_{\mathrm {R}^{3}}h(x)|u_{n}|^{2}\,dx\biggr)+ \biggl(\frac{b}{4}-\frac{b}{6}\biggr) \biggl( \int_{\mathrm {R}^{3}}|\nabla u_{n}|^{2}\,dx \biggr)^{2} \\ \geq&\frac{1}{3}\biggl(1-\frac{\mu}{\tilde{\mu}}\biggr)\|u_{n} \|^{2}, \end{aligned}$$ which implies the boundedness of \(\{u_{n}\}\) in \(H^{1}(\mathrm{R}^{3})\) since \(0<\mu<\tilde{\mu}\). Then there exists a subsequence of \(\{ u_{n}\} \), still denoted by \(\{u_{n}\} \), such that \(u_{n}\rightharpoonup u\) in \(H^{1}(\mathrm{R}^{3})\). By (2) of Lemma 3.2 and Lemma 3.3 we have \(u\neq0\). By the definition of \(t(u)\) we get \(t(u)u\in{N}\). So \(I(t(u)u)\geq c_{1}\). We claim that \(u_{n}\rightarrow u\) in \(H^{1}(\mathrm {R}^{3})\). Otherwise, by (1) and (3) of Lemma 3.2, we would get that \(c_{1}>I(t(u)u)\) or \(c_{1}>c^{*}\). In any case, we get a contradiction since \(c_{1}< c^{*}\). Therefore, \(\{u_{n}\}\) converges strongly to u. Thus, \(u\in {N}\) and \(I(u)=c_{1}\). By the Lagrange multiplier rule there exists \(\theta\in \mathrm{R}\) such that \(I'(u)=\theta G'(u)\) and thus $$0=\bigl\langle I'(u),u\bigr\rangle =\theta \biggl(2\|u \|^{2}+4b\biggl( \int _{\mathrm{R}^{3}}|\nabla u|^{2}\,dx\biggr)^{2}-6 \int_{\mathrm{R}^{3}} k(x)|u|^{6}\,dx-2\mu \int_{\mathrm{R}^{3}}h(x)|u|^{2}\,dx \biggr). $$ Since \(u\in N\), we get $$0=\theta \biggl(-4\biggl(\|u\|^{2}-\mu \int_{\mathrm {R}^{3}}h(x)|u|^{2}\,dx\biggr)-2b\biggl( \int_{\mathrm{R}^{3}}|\nabla u|^{2}\,dx\biggr)^{2} \biggr), $$ which implies that \(\theta=0\) and u is a nontrivial critical point of the functional I in \(H^{1}(\mathrm{R}^{3})\). Therefore, the nonzero function u can solve Eq. (1.1), that is, $$ -\biggl(a+b \int_{\mathrm{R}^{3}}|\nabla u|^{2}\,dx\biggr)\triangle u+u= k(x)|u|^{2^{*}-2}u+\mu h(x)u. $$ In (3.23), using \(u^{-}=\max\{-u,0\}\) as a test function and integrating by parts, by (k1), (h2), and (\(\mu_{1}\)) we obtain $$\begin{aligned} 0 =& \int_{\mathrm{R}^{3}}a\bigl\vert \nabla u^{-}\bigr\vert ^{2} \,dx+ \int_{\mathrm {R}^{3}}\bigl\vert u^{-}\bigr\vert ^{2}\,dx+b \int_{\mathrm{R}^{3}}\vert \nabla u\vert ^{2}\,dx \int_{\mathrm {R}^{3}}\bigl\vert \nabla u^{-}\bigr\vert ^{2}\,dx \\ &{}+ \int_{\mathrm{R}^{3}}k(x)\bigl\vert u^{-}\bigr\vert ^{2^{*}-2}\bigl\vert u^{-}\bigr\vert ^{2}\,dx+ \int_{\mathrm {R}^{3}}\mu h(x)\bigl\vert u^{-}\bigr\vert ^{2} \,dx\geq0. \end{aligned}$$ Then \(u^{-}=0\) and \(u\geq0\). 
From Harnack's inequality [27] we can infer that \(u>0\) for all \(x\in \mathrm{R}^{3}\). Therefore, u is a positive solution of (1.1). The proof is complete by choosing \(\omega_{0}=u\). □ Sign-changing solution This subsection is devoted to proving the existence of sign-changing solution of Eq. (1.1). Let \(\overline{{{N}}}=\{u=u^{+}-u^{-}\in H^{1}(\mathrm{R}^{3}):u^{+}\in{N}, u^{-}\in{N}\}\), where \(u^{\pm}=\max\{ \pm u,0\}\). If \(u^{+}\neq0\) and \(u^{-}\neq0\), then u is called a sign-changing function. We define \(c_{2}= \inf_{u\in\overline{{{N}}}}I(u)\). Assume that (\(\mu_{1}\)), (k1)-(k2), and (h1)-(h2) hold. Then for \(\frac{3}{2}<\beta<3\), \(c_{2}< c_{1}+c^{*}\). By Lemma 2.4, using first the same argument as in [22] or [28], we have that there are \(s_{1}>0\) and \(s_{2}\in \mathrm{R}\) such that $$ s_{1}\omega_{0}+s_{2}\omega_{\varepsilon}\in \overline{{N}}. $$ Next, we prove that there exists \(\varepsilon>0\) small enough such that $$ \sup_{s_{1}>0,s_{2}\in\mathrm{R}}I(s_{1}\omega_{0}+s_{2} \omega _{\varepsilon})< c_{1}+c^{*}. $$ Obviously, it follows from (2) of Lemma 2.3 that, for any \(s_{1}>0\) and \(s_{2}\in \mathrm{R}\) satisfying \(\|s_{1}\psi_{1}+s_{2}\omega_{\varepsilon}\|>\rho\), \(I(s_{1}\omega_{0}+s_{2}\omega_{\varepsilon})<0\). We only estimate \(I(s_{1}\omega_{0}+s_{2}\omega_{\varepsilon})\) for all \(\|s_{1}\omega _{0}+s_{2}\omega_{\varepsilon}\|\leq\rho\). By calculation we see that $$ I(s_{1}\omega_{0}+s_{2}\omega_{\varepsilon})=I(s_{1} \omega _{0})+\Pi_{1}+\Pi_{2}+\Pi_{3}+ \Pi_{4}+\Pi_{5}+\Pi_{6}, $$ $$\begin{aligned}& \Pi_{1}=\frac{as_{2}^{2}}{2} \int_{\mathrm {R}^{3}}\vert \nabla w_{\varepsilon} \vert ^{2} \,dx+\frac{bs_{2}^{4}}{4}\biggl( \int_{\mathrm {R}^{3}}\vert \nabla w_{\varepsilon} \vert ^{2} \,dx\biggr)^{2}-\frac{ s_{2}^{6}}{6} \int_{\mathrm {R}^{3}}k(x_{0})\vert w_{\varepsilon} \vert ^{6}\,dx, \\& \Pi_{2}=\frac{ s_{2}^{6}}{6} \int_{\mathrm{R}^{3}}k(x_{0})\vert w_{\varepsilon} \vert ^{6}\,dx-\frac{ s_{2}^{6}}{6} \int_{\mathrm{R}^{3}}k(x)\vert w_{\varepsilon} \vert ^{6} \,dx, \\& \Pi_{3}=\frac{1}{6} \int_{\mathrm{R}^{3}}k(x) \bigl(\vert s_{1}\omega _{0}\vert ^{6}+\vert s_{2}w_{\varepsilon} \vert ^{6}-\vert s_{1}\omega_{0}+s_{2}w_{\varepsilon} \vert ^{6}\bigr)\,dx, \\& \Pi_{4}=\frac{s_{2}^{2}}{2} \int_{\mathrm {R}^{3}}\vert w_{\varepsilon} \vert ^{2}\,dx- \frac{\mu s_{2}^{2}}{2} \int_{\mathrm {R}^{3}}h(x)\vert w_{\varepsilon} \vert ^{2} \,dx, \\& \Pi_{5}=\frac{b}{4}\biggl[\biggl( \int_{\mathrm{R}^{3}}\bigl\vert \nabla (s_{1} \omega_{0}+s_{2}\omega_{\varepsilon})\bigr\vert ^{2}\,dx\biggr)^{2}-\biggl( \int_{\mathrm {R}^{3}}\bigl\vert \nabla(s_{1} \omega_{0})\bigr\vert ^{2}\,dx\biggr)^{2} \\& \hphantom{\Pi_{5}={}}{}-\biggl( \int_{\mathrm{R}^{3}}\bigl\vert \nabla (s_{2} \omega_{\varepsilon})\bigr\vert ^{2}\,dx\biggr)^{2}\biggr], \end{aligned}$$ $$\Pi_{6}= \int_{\mathrm{R}^{3}}\bigl(a\nabla(s_{1}\omega_{0}) \nabla ( s_{2}\omega_{\varepsilon})+(s_{1} \omega_{0}) (s_{2}\omega_{\varepsilon})-\mu h(x) (s_{1}\omega_{0}) (s_{2}\omega_{\varepsilon}) \bigr)\,dx. $$ By (3.16) we obtain that $$ \sup_{s_{2}\in\mathrm{R}}\Pi_{1}=\frac{abS^{3}}{4\|k\| _{\infty}}+ \frac{b^{3}S^{6}}{24\|k\|_{\infty}^{2}}+\frac{(b^{2}S^{4}+4a\|k\| _{\infty}S)^{\frac{3}{2}}}{24\|k\|_{\infty}^{2}}+O\bigl(\varepsilon^{\frac{1}{2}}\bigr). $$ It follows from (3.18) that $$ \Pi_{2}\leq C\varepsilon^{\frac{1}{2}}. 
$$ From the elementary inequality $$|s+t|^{q}\geq|s|^{q}+|t|^{q}-C \bigl(|s|^{q-1}t+|t|^{q-1}s\bigr) \quad \text{for any } q\geq1, $$ the fact that \(\omega_{0}\in H^{1}(\mathrm{R}^{3})\cap L^{\infty}(\mathrm {R}^{3})\), and from (3.14) we have $$\begin{aligned} \Pi_{3} \leq& C \int_{\mathrm{R}^{3}}k(x) \bigl(|\omega_{0}|^{5}\omega _{\varepsilon}+\omega_{0}|w_{\varepsilon}|^{5}\bigr)\,dx \\ \leq&\|k\|_{\infty}\|\omega_{0}\|_{\infty}\int_{\mathrm {R}^{3}}|w_{\varepsilon}|^{5}\,dx+\|k \|_{\infty}\bigl\| \omega_{0}^{5}\bigr\| _{\infty}\int _{\mathrm{R}^{3}} w_{\varepsilon}\,dx \\ \leq& C\varepsilon^{\frac{1}{4}}. \end{aligned}$$ By (3.19) we have $$ \Pi_{4}\leq C\varepsilon^{\frac{1}{2}}-C\varepsilon ^{1-\frac{\beta}{2}}, $$ and using (3.13), we have $$\begin{aligned} \Pi_{5} \leq&\frac{b}{4}\biggl[4\biggl( \int_{\mathrm{R}^{3}}\bigl\vert \nabla (s_{1} \omega_{0})\bigr\vert ^{2}\,dx\biggr)^{2}+4 \biggl( \int_{\mathrm{R}^{3}}\bigl\vert \nabla(s_{2}\omega _{\varepsilon})\bigr\vert ^{2}\,dx\biggr)^{2} \\ &{}-\biggl( \int_{\mathrm{R}^{3}}\bigl\vert \nabla(s_{1} \omega_{0})\bigr\vert ^{2}\,dx\biggr)^{2}-\biggl( \int _{\mathrm{R}^{3}}\bigl\vert \nabla(s_{2} \omega_{\varepsilon})\bigr\vert ^{2}\,dx\biggr)^{2}\biggr] \\ =&\frac{3b}{4}\biggl( \int_{\mathrm{R}^{3}}\bigl\vert \nabla(s_{1}\omega _{0})\bigr\vert ^{2}\,dx\biggr)^{2}+ \frac{3b}{4}\biggl( \int_{\mathrm{R}^{3}}\bigl\vert \nabla(s_{2}\omega _{\varepsilon})\bigr\vert ^{2}\,dx\biggr)^{2} \\ \leq& C+C\varepsilon^{\frac{1}{2}}. \end{aligned}$$ Since \(\omega_{0}\) is a positive solution of (1.1), by the Sobolev inequality we obtain $$\begin{aligned} \Pi_{6} =&s_{1}s_{2} \int_{\mathrm{R}^{3}}k(x)|\omega_{0}|^{5}\omega _{\varepsilon}\,dx-b \int_{\mathrm{R}^{3}}\bigl\vert \nabla(s_{1} \omega_{0})\bigr\vert ^{2}\,dx \int _{\mathrm{R}^{3}}\nabla(s_{1}\omega_{0}) \nabla(s_{2}\omega_{\varepsilon}) \,dx \\ \leq&\|k\|_{\infty}\bigl\| \omega_{0}^{5}\bigr\| _{\infty}\int_{\mathrm{R}^{3}} w_{\varepsilon}\,dx+b\biggl( \int_{\mathrm{R}^{3}}\bigl\vert \nabla(s_{1}\omega _{0})\bigr\vert ^{2}\,dx\biggr)^{\frac{3}{2}}\biggl( \int_{\mathrm{R}^{3}}\bigl\vert \nabla(s_{2}\omega _{\varepsilon}) \bigr\vert ^{2} \,dx\biggr)^{\frac{1}{2}} \\ \leq& C\varepsilon^{\frac{1}{4}}. \end{aligned}$$ It follows from (4.3)-(4.9) that, for \(\frac{3}{2}<\beta<3\), $$\begin{aligned} I(s_{1}\omega_{0}+s_{2}\omega_{\varepsilon}) \leq& I(s_{1}\omega _{0})+c^{*}+C+C\varepsilon^{\frac{1}{4}}+C \varepsilon^{\frac {1}{2}}-C\varepsilon^{1-\frac{\beta}{2}} \\ < &I(s_{1}\omega_{0})+c^{*}=c_{1}+c^{*} \end{aligned}$$ as \(\varepsilon\rightarrow0\), which implies that (4.2) holds. This finishes the proof of Lemma 4.1. □ Suppose that (\(\mu_{1}\)), (k1)-(k2), and (h1)-(h2) hold. Then, for \(\frac{3}{2}<\beta<3\), there exists \(\omega_{1}\in\overline{{N}}\) such that \(I(\omega_{1})=c_{2}\). Let \(\{u_{n}\}\subset\overline{{N}}\) be such that \(I(u_{n})\rightarrow c_{2}\). Since \(u_{n}\in\overline{{N}}\), we may assume that there exist constants \(d_{1}\) and \(d_{2}\) such that \(I(u^{+}_{n})\rightarrow d_{1}\) and \(I(u^{-}_{n})\rightarrow d_{2}\) and \(d_{1}+d_{2}=c_{2}\). Then $$ d_{1}\geq c_{1},\qquad d_{2}\geq c_{1}. $$ Just as the proof of (3.22), we can prove the boundedness of \(\{u^{+}_{n}\}\) and \(\{u^{-}_{n}\}\). Going, if necessary, to a subsequence, we may assume that \(u_{n}^{\pm}\rightharpoonup u^{\pm}\) in \(H^{1}(\mathrm{R}^{3})\) as \(n\rightarrow\infty\). We claim \(u^{+}\neq0\) and \(u^{-}\neq0\). 
Arguing by contradiction, if \(u^{+}=0\) or \(u^{-}=0\), then by (4.10) and Lemma 3.2, $$c_{1}+c^{*}\leq d_{2}+d_{1}=c_{2}, $$ which contradicts Lemma 4.1. Hence, \(u^{+}\neq0\) and \(u^{-}\neq0\). We claim that \(u^{\pm}_{n}\rightarrow u^{\pm}\) strongly in \(H^{1}(\mathrm {R}^{3})\). Indeed, according to Lemma 3.2, we get one of the following: \(\{u^{+}_{n}\}\) converges strongly to \(u^{+}\); \(d_{1}>I(t(u^{+})u^{+})\); \(d_{1}> c^{*}\); and we also have one of the following: \(\{u^{-}_{n}\}\) converges strongly to \(u^{-}\); \(d_{2}>I(t(u^{-})u^{-})\); \(d_{2}> c^{*}\). We will prove that only cases (i) and (iv) hold. For example, in cases (i) and (v) or (ii) and (v), from \(u^{+}-t(u^{-})u^{-}\in\overline{{N}}\) or \(t(u^{+})u^{+}-t(u^{-})u^{-}\in\overline{{N}}\) we have $$c_{2}\leq I\bigl(u^{+}-t\bigl(u^{-}\bigr)u^{-}\bigr)=I\bigl(u^{+}\bigr)+I \bigl(-t\bigl(u^{-}\bigr)u^{-}\bigr)< d_{1}+d_{2}=c_{2} $$ $$c_{2}\leq I\bigl(t\bigl(u^{+}\bigr)u^{+}-t\bigl(u^{-}\bigr)u^{-}\bigr)=I \bigl(t\bigl(u^{+}\bigr)u^{+}\bigr)+I\bigl(-t\bigl(u^{-}\bigr)u^{-}\bigr)< d_{1}+d_{2}=c_{2}. $$ Any one of the two inequalities is impossible. In cases (i) and (vi) or (ii) and (vi) or (iii) and (vi), we have $$\begin{aligned}& c_{1}+c^{*}\leq I\bigl(u^{+}\bigr)+c^{*}< d_{1}+d_{2}=c_{2}, \\& c_{1}+c^{*}\leq I\bigl(t\bigl(u^{+}\bigr)u^{+}\bigr)+c^{*}< d_{1}+d_{2}=c_{2}, \\& c_{1}+c^{*}\leq c^{*}+c^{*}< d_{1}+d_{2}=c_{2}, \end{aligned}$$ and any one of the three inequalities is a contradiction. Therefore, we prove that only (i) and (iv) hold. Hence, we obtain that \(\{u^{+}_{n}\}\) and \(\{u^{-}_{n}\}\) converge strongly to \(u^{+}\) and \(u^{-}\), respectively, and we obtain \(u^{+}, u^{-}\in{N}\). Denote \(\omega_{1}=u^{+}-u^{-}\). Then \(\omega_{1}\in\overline{{N}}\) and \(I(\omega_{1})=d_{1}+d_{2}=c_{2}\). □ Now we show that \(\omega_{1}\) is a critical point of I in \(H^{1}(\mathrm{R}^{3})\). Arguing by contradiction, assume that \(I'(\omega_{1})\neq0\). For any \(u\in{N}\), we claim that \(\|G'(u)\|_{H^{-1}}=\sup_{\|v\|=1}|\langle G'(u),v\rangle|\neq0\). In fact, by the definition of N and Lemma 2.2, for any \(u\in{N}\), we have $$\begin{aligned} \bigl\langle G'(u),u\bigr\rangle =&2 \biggl(\|u\|^{2}- \mu \int _{\mathrm{R}^{3}}h(x)|u|^{2}\,dx+b\biggl( \int_{\mathrm{R}^{3}}|\nabla u|^{2}\,dx\biggr)^{2} \biggr)+2b\biggl( \int_{\mathrm{R}^{3}}|\nabla u|^{2}\,dx\biggr)^{2} \\ &{}-6 \int_{\mathrm{R}^{3}}k(x)|u|^{6}\,dx \\ =&2\biggl(\|u\|^{2}-\mu \int_{\mathrm{R}^{3}}h(x)|u|^{2}\,dx+b\biggl( \int_{\mathrm {R}^{3}}|\nabla u|^{2}\,dx\biggr)^{2} \biggr)+2b\biggl( \int_{\mathrm{R}^{3}}|\nabla u|^{2}\,dx\biggr)^{2} \\ &{}-6\biggl(\|u\|^{2}-\mu \int_{\mathrm{R}^{3}}h(x)|u|^{2}\,dx+b\biggl( \int_{\mathrm {R}^{3}}|\nabla u|^{2}\,dx\biggr)^{2} \biggr) \\ =&-4 \biggl(\|u\|^{2}-\mu \int_{\mathrm{R}^{3}}h(x)|u|^{2}\,dx+b\biggl( \int_{\mathrm {R}^{3}}|\nabla u|^{2}\,dx\biggr)^{2} \biggr)+2b\biggl( \int_{\mathrm{R}^{3}}|\nabla u|^{2}\,dx\biggr)^{2} \\ \leq&-4\biggl[\biggl(1-\frac{\mu}{\tilde{\mu}}\biggr)\|u\|^{2}+b\biggl( \int_{\mathrm {R}^{3}}|\nabla u|^{2}\,dx\biggr)^{2} \biggr]+2b\biggl( \int_{\mathrm{R}^{3}}|\nabla u|^{2}\,dx\biggr)^{2}< 0. \end{aligned}$$ Then we define $$\Phi(u)=I'(u)- \biggl\langle I'(u), \frac{G'(u)}{\|G'(u)\| } \biggr\rangle \frac{G'(u)}{\|G'(u)\|},\quad u\in{N}. $$ Choose \(\lambda\in(0, \min\{\|u^{+}\|,\|u^{-}\|\}/3)\) such that \(\|\Phi(v)-\Phi(u)\|\leq\frac{1}{2}\|\Phi(\omega_{1})\|\) for any \(v\in N\) with \(\|v-\omega_{1}\|\leq2\lambda\). 
Let \(\chi:N\rightarrow [0,1]\) be a Lipschitz mapping such that $$\chi(v)= \textstyle\begin{cases} 0, & v\in N \text{ with } \|v-\omega_{1}\|\geq2\lambda, \\ 1, & v\in N \text{ with } \|v-\omega_{1}\|\leq\lambda, \end{cases} $$ and for a positive constant \(s_{0}\), let \(\eta:[0,s_{0}]\times N\rightarrow N\) be the solution of the differential equation $$\eta(0,v)=0, \qquad \frac{d\eta(s,v)}{ds}=-\chi\bigl(\eta(s,v)\bigr)\Phi\bigl(\eta(s,v)\bigr) \quad \text{for } (s,v)\in[0,s_{0}]\times N. $$ We set $$\psi(\tau)=t\bigl((1-\tau)\omega_{1}^{+}+\tau\omega_{1}^{-}\bigr) \bigl((1-\tau)\omega_{1}^{+}+\tau\omega_{1}^{-}\bigr), \qquad \xi(\tau)=\eta\bigl(s_{0},\psi(\tau)\bigr) \quad \text{for } 0\leq \tau\leq1. $$ We now give the proof of the fact that \(I(\xi(\tau))< I(u)\) for some \(\tau\in(0,1)\). Obviously, if \(\tau\in(0,\frac{1}{2})\cup (\frac{1}{2},1)\), then we have \(I(\xi(\frac{1}{2}))< I(\psi(\frac{1}{2}))< I(\omega_{1})\) and \(I(\xi(\tau))\leq I(\psi(\tau))< I(\omega_{1})\). Since \(t(\xi^{+}(\tau))-t(\xi^{-}(\tau))\rightarrow-\infty\) as \(\tau\rightarrow0+0\) and \(t(\xi^{+}(\tau))-t(\xi^{-}(\tau))\rightarrow+\infty\) as \(\tau\rightarrow1-0\), there exists \(\tau_{1}\in(0,1)\) such that \(t(\xi^{+}(\tau_{1}))=t(\xi^{-}(\tau_{1}))\). Thus, \(\xi(\tau_{1})\in \overline{{N}}\) and \(I(\xi(\tau_{1}))< I(\omega_{1})\), which contradicts the definition of \(c_{2}\). Hence, we get that \(I'(\omega_{1})=0\) and \(\omega_{1}\) is a sign-changing solution of problem (1.1). The proof of Theorem 1.2 is complete. □ References [1] Kirchhoff, G: Mechanik. Teubner, Leipzig (1883) [2] Chipot, M, Lovat, B: Some remarks on non local elliptic and parabolic problems. Nonlinear Anal. 30, 4619-4627 (1997) [3] Corrêa, FJSA: On positive solutions of nonlocal and nonvariational elliptic problems. Nonlinear Anal. 59, 1147-1155 (2004) [4] He, X, Zou, W: Infinitely many positive solutions for Kirchhoff-type problems. Nonlinear Anal. 70(3), 1407-1414 (2009) [5] Cheng, B, Wu, X: Existence results of positive solutions of Kirchhoff type problems. Nonlinear Anal. 71, 4883-4892 (2009) [6] Alves, CO, Corrêa, FJSA, Ma, TF: Positive solutions for a quasilinear elliptic equation of Kirchhoff type. Comput. Math. Appl. 49, 85-93 (2005) [7] Wu, X: Existence of nontrivial solutions and high energy solutions for Schrödinger-Kirchhoff-type equations in \(\mathrm{R}^{N}\). Nonlinear Anal., Real World Appl. 12, 1278-1287 (2011) [8] He, X, Zou, W: Existence and concentration behavior of positive solutions for a Kirchhoff equation in \(\mathrm{R}^{3}\). J. Differ. Equ. 252, 1813-1834 (2012) [9] Nie, J, Wu, X: Existence and multiplicity of non-trivial solutions for Schrödinger-Kirchhoff-type equations with radial potential. Nonlinear Anal. 75, 3470-3479 (2012) [10] Liu, Z, Guo, S: Positive solutions for asymptotically linear Schrödinger-Kirchhoff-type equations. Math. Methods Appl. Sci. (2013). doi:10.1002/mma.2815 [11] Sun, J, Wu, TF: Ground state solutions for an indefinite Kirchhoff type problem with steep potential well. J. Differ. Equ. 256, 1771-1792 (2014) [12] Liu, H, Chen, H, Yuan, Y: Multiplicity of nontrivial solutions for a class of nonlinear Kirchhoff-type equations. Bound. Value Probl. 2015, 187 (2015) [13] Xu, L, Chen, H: Nontrivial solutions for Kirchhoff-type problems with a parameter. J. Math. Anal. Appl. 433, 455-472 (2016) [14] Li, G, Ye, H: Existence of positive solutions for nonlinear Kirchhoff type problems in \(\mathrm{R}^{3}\) with critical Sobolev exponent and sign-changing nonlinearities (2013).
arXiv:1305.6777v1 [math.AP] [15] Wang, J, Tian, L, Xu, J, Zhang, F: Multiplicity and concentration of positive solutions for a Kirchhoff type problem with critical growth. J. Differ. Equ. 253, 2314-2351 (2012) [16] Liang, S, Zhang, J: Existence of solutions for Kirchhoff type problems with critical nonlinearity in \(\mathrm{R}^{3}\). Nonlinear Anal., Real World Appl. 17, 126-136 (2014) [17] Ambrosetti, A, Malchiodi, A: Perturbation Methods and Semilinear Elliptic Problems on RN. Birkhäuser, Basel (2005) [18] Bartsch, T: Critical point theory on partially ordered Hilbert spaces. J. Funct. Anal. 186, 117-152 (2001) [19] Zhang, Z, Perera, K: Sign changing solutions of Kirchhoff type problems via invariant sets of descent flow. J. Math. Anal. Appl. 317(2), 456-463 (2006) [20] Mao, A, Zhang, Z: Sign-changing and multiple solutions of Kirchhoff type problems without the P.S. condition. Nonlinear Anal. 70(3), 1275-1287 (2009) [21] Hirano, N, Shioji, N: A multiplicity result including a sign-changing solution for an inhomogeneous Neumann problem with critical exponent. Proc. R. Soc. Edinb., Sect. A 137, 333-347 (2007) [22] Huang, L, Rocha, EM, Chen, J: Positive and sign-changing solutions of a Schrödinger-Poisson system involving a critical nonlinearity. J. Math. Anal. Appl. 408, 55-69 (2013) [23] Willem, M: Minimax Theorems. Birkhäuser, Boston (1996) [24] Huang, L, Rocha, EM: A positive solution of a Schrödinger-Poisson system with critical exponent. Commun. Math. Anal. 15, 29-43 (2013) [25] Chen, J, Rocha, EM: Four solutions of an inhomogeneous elliptic equation with critical exponent and singular term. Nonlinear Anal. 71, 4739-4750 (2009) [26] Brézis, H, Lieb, EH: A relation between pointwise convergence of functions and convergence of functionals. Proc. Am. Math. Soc. 88, 486-490 (1983) [27] Gilbarg, D, Trudinger, N: Elliptic Partial Differential Equations of Second Order, 2nd edn. Grundlehren Math. Wiss., vol. 224. Springer, Berlin (1983) [28] Tarantello, G: Multiplicity results for an inhomogeneous Neumann problem with critical exponent. Manuscr. Math. 81, 51-78 (1993) [29] Brézis, H, Nirenberg, L: Positive solutions of nonlinear elliptic equations involving critical Sobolev exponents. Commun. Pure Appl. Math. 36, 437-477 (1983) The authors would like to thank the referees for their valuable suggestions and comments, which led to improvement of the manuscript. Research was supported by Natural Science Foundation of China 11271372 and by Hunan Provincial Natural Science Foundation of China 12JJ2004. Department of Mathematics and Statistics, Henan University of Science and Technology, Luoyang, 471003, China Liping Xu School of Mathematics and Statistics, Central South University, Changsha, 410075, China Haibo Chen Correspondence to Liping Xu. Both authors contributed to each part of this work equally and read and approved the final version of the manuscript. This article [1] has been retracted because it was republished in error [2]. The publisher apologizes to the authors and readers for the error and for any inconvenience caused. 1. Xu, L, Chen, H: Sign-changing solutions to Schrödinger-Kirchhoff-type equations with critical exponent. Advances in Difference Equations 2016 2016:176 DOI: 10.1186/s13662-016-0864-9 2. Xu, L, Chen, H: Sign-changing solutions to Schrödinger-Kirchhoff-type equations with critical exponent. Advances in Difference Equations 2016 2016:121 DOI: 10.1186/s13662-016-0828-0 An erratum to this article is available at http://dx.doi.org/10.1186/s13662-017-1162-x. Xu, L., Chen, H.
RETRACTED ARTICLE: Sign-changing solutions to Schrödinger-Kirchhoff-type equations with critical exponent. Adv Differ Equ 2016, 176 (2016). https://doi.org/10.1186/s13662-016-0864-9 Keywords: Schrödinger-Kirchhoff-type equations; critical nonlinearity; sign-changing solutions; variational methods
A New Approach for Hierarchical Dividing to Passenger Nodes in Passenger Dedicated Line Chanchan Zhao*, Feng Liu*, and Xiaowei Hai*** Abstract: China possesses a large-scale passenger dedicated line system, with unevenly distributed passenger flow intensity and complicated relations among passenger nodes. Consequently, the significance of passenger nodes must be considered and the dissimilarity of passenger nodes analyzed when compiling passenger train operation plans and allocating transportation resources. For this purpose, the passenger nodes need to be hierarchically divided. Targeting problems in current research, such as a hierarchical dividing process vulnerable to subjective factors and prone to local optima, we propose a clustering approach based on self-organizing map (SOM) and k-means, and then use the new approach to hierarchically divide the passenger nodes of passenger dedicated lines. Specifically, objective passenger node parameters are first selected and SOM is used to give a preliminary clustering of the passenger nodes; secondly, the Davies–Bouldin index is used to determine the number of clusters of the passenger nodes; and thirdly, k-means is used to conduct accurate clustering, thus obtaining the hierarchical dividing of the passenger nodes. Through example analysis, the feasibility and rationality of the algorithm were demonstrated. Keywords: Hierarchical Dividing, K-Means, Passenger Nodes, Passenger Dedicated Line, Self-Organizing Map The first real passenger dedicated line of China is the Beijing-Tianjin Intercity Railway, launched on August 1, 2008. Recent years have seen a rapid development of passenger dedicated lines in China; by the end of 2016, China was operating over 22,000 km of passenger dedicated lines, the largest such network in the world. The Middle and Long Term Railway Network Plan [1] has pointed out that China will expand this network to 38,000 km by 2025. Under such a context, an even more extensive passenger node network will take shape, whose natural and social properties will exert a huge impact on passengers, since these nodes are closely linked to arriving, departing, and transferring passengers and to related railway technologies. Furthermore, most stations on passenger dedicated lines are located in well-developed cities or hubs with huge passenger flow, so a reasonable hierarchical dividing of passenger nodes is of great practical significance to the transportation productivity layout and passenger traffic allocation. In common hierarchical dividing approaches, the most commonly chosen parameters are average daily passenger volume, number of average daily originating trains, population, geographical location, accessibility, gross regional domestic product, city administrative level, etc., some of which are subjective and easily influenced by subjective factors during measurement. In addition, when the k-means algorithm is used for clustering, it easily falls into a local optimum because only the objective function is calculated. What is more, manual judgment is required to determine the number of clusters, and thus individual experience has a great effect on the clustering result. Based on an analysis and summary of existing hierarchical dividing approaches and targeting their defects, we propose a new approach on the SOM and k-means basis to complete the hierarchical dividing of passenger nodes in passenger dedicated lines.
The rest of this paper is organized as follows. Section 2 summarizes related work. Section 3 reviews the SOM and k-means algorithms, followed by the proposed approach in Section 4. Section 5 presents the simulation results, and Section 6 concludes the paper with a summary and future research directions.

2. Related Work

At present, there is no uniform method for the hierarchical dividing of railway passenger nodes. In practice, the division of railway passenger nodes is mainly based on the number of passenger technical operations handled by the station, with qualitative reference to the politics, economy, culture and transportation layout of the place where the station is located. The nodes are then divided into top-class, first-class, second-class, third-class, fourth-class and fifth-class stations. For the hierarchical dividing of railway passenger nodes, some scholars built importance evaluation systems for passenger nodes and used rough sets, the analytic hierarchy process and principal component analysis to calculate the importance degree of each node, setting thresholds to classify the nodes. This method relies mainly on experience; its division is strongly subjective and has low efficiency [2]. These problems can be addressed well by clustering analysis. Wang and Lyu [2] adopted a clustering method to classify railway passenger transport nodes: they first clustered the property variables by hierarchical clustering, then clustered the passenger node samples with affinity propagation according to the simplified node indexes, and finally evaluated the clustering with three effectiveness indexes (CH, KL and IGP). Gao et al. [3] performed a k-means clustering analysis of key nodes and edges in the Beijing subway network: stations (nodes) and intervals (edges) were grouped by three metrics, namely two basic topological properties (degree and betweenness) and their role in transporting people (passenger volume). Xu and Qin [4] built importance evaluation indexes for urban rail transit networks; besides degree and betweenness from complex network theory, they added PageRank values and passenger flow indicators on top of the network topology model to better evaluate the relationships between stations and to offer theoretical support for risk analysis, and they used factor analysis to calculate the importance degree of each station, illustrated with the case of the Beijing rail network. Zhou et al.
[5] analyze the significance and methods of high-speed passenger railway node classification and design a train stop plan for high-speed rail lines based on the classification; finally, they take the Beijing–Guangzhou high-speed railway as an example for an empirical study. Park and Lee [6] study the classification of subway stations and the trip behavior of subway passengers by partitioning the graph of the subway system into roughly equal groups; they propose a heuristic algorithm to partition the subway graph, illustrate the stations and edges of each group by color on a map, and analyze trip behavior through the group origin-destination matrix.

The above studies indicate that, to a certain extent, some ideas on the hierarchical dividing of passenger nodes are worth learning from. Yet some approaches cannot reflect objective conditions, since they select subjective parameters and are strongly influenced by human factors during data measurement. In addition, current studies require manual judgment to determine the number of clusters, so individual experience greatly affects the clustering result. Therefore, in this paper objective property data are used as the basis for the hierarchical dividing of passenger nodes in passenger dedicated lines, the DBI is used to determine the number of clusters, and the local optimum resulting from calculating only the objective function in the traditional k-means algorithm is avoided during clustering.

3. Review of the Self-Organizing Map and K-Means Algorithm

3.1 Self-Organizing Map

The SOM is an unsupervised clustering algorithm introduced by Kohonen [7], a neural network expert from the University of Helsinki, Finland. The SOM simulates the human brain, in which neural cells in different areas have a different "division of labor", i.e., the response characteristics differ between brain zones, a process completed automatically. The SOM divides the set of input patterns by seeking an optimal set of reference vectors. The SOM consists of two layers: an input layer and a competitive (output) layer. The input layer receives the input data and has no links among its internal nodes; the number of nodes in the competitive layer corresponds to the dimension of the pattern space after mapping; every neuron in the competitive layer is linked to, and mutually excites, its neighborhood. After training, different nodes of the competitive layer represent different dividing patterns. Fig. 1 (schematic structure of the SOM) outlines the general structure. The SOM has been widely used in network anomaly detection [8], environmental monitoring [9-11], maritime research [12-14], and so on. The SOM algorithm proposed by Kohonen [7] is shown as Algorithm 1 (Self-Organizing Map).

In step 3, the winning neuron is calculated according to Eq. (1):

$$i(x(n)) = \arg\min_{j} \left\| x(n) - \omega_{j}(n) \right\|, \quad j = 1, 2, \ldots, l$$

In step 4, the modified weights of the neurons are calculated according to Eqs. (2)–(4):

$$\omega_{j}(n+1) = \omega_{j}(n) + \eta(n)\, h_{j,i(x)}(n) \left( x(n) - \omega_{j}(n) \right)$$

$$h_{j,i(x)}(n) = \exp\left( -\frac{d_{j,i}^{2}}{2\sigma^{2}(n)} \right)$$

$$d_{j,i}^{2} = \left\| r_{j} - r_{i} \right\|^{2}$$

where η(n) is the learning rate; h_{j,i(x)}(n) is the neighborhood function, which follows a Gaussian profile; r_j is the spatial position of excitatory neuron j; and σ(n) is the width of the topological neighborhood function.

3.2 K-Means Algorithm

The k-means algorithm [15] is a classic clustering method that divides the data set into k classes of similar data on the basis of the Euclidean distance criterion. The process starts with k vectors, randomly selected from the data set and used as temporary cluster centroids. The algorithm then calculates the distances between the centroids and all vectors of the data set and associates each vector with its nearest centroid. After all data have been assigned, the new cluster centroids are calculated according to Eq. (5):

$$c_{i} = \frac{1}{m_{i}} \sum_{j=1}^{m_{i}} x_{ij}$$

where c_i is the centroid of cluster C_i and m_i is the number of data points gathered in cluster C_i. The process iterates until there is no further change in the centroids' positions. The k-means algorithm proposed by MacQueen [15] is shown as Algorithm 2 (K-Means Algorithm).

4. The Proposed Approach

In this paper we adopt a two-level approach based on the combined use of the SOM and the k-means clustering algorithm. Specifically, the SOM is first used to produce a preliminary clustering of the passenger nodes and to obtain the corresponding BMUs, preventing the effect of individual subjective factors; secondly, the DBI is used to determine the k value, i.e., the number of clusters, evading the local optimum caused by calculating only the objective function in the traditional k-means algorithm; and thirdly, the BMUs and the k value are input into the k-means algorithm to conduct accurate clustering. This approach is used below to classify the passenger nodes. Its workflow is as follows:

Step 1. Data preprocessing. Preprocessing methods include log-transformation [16,17], standard normalization [18,19], and so on. In this paper, we adopt the histogram equalization method.

Step 2. Data conversion. The preprocessed data are converted into a two-dimensional matrix for the SOM network, as illustrated in Eq. (6):

$$M = \begin{bmatrix} x_{11} & x_{12} & \dots & x_{1n} \\ x_{21} & x_{22} & \dots & x_{2n} \\ \dots & \dots & \dots & \dots \\ x_{m1} & \dots & \dots & x_{mn} \end{bmatrix}$$

Step 3. SOM training. The Gaussian-function batch training mode is used to train the SOM neural network and obtain the mapping neurons.

Step 4. Clustering number determination. The DBI [20] is used to determine the number of clusters: the number of clusters corresponding to the minimum DBI value is taken as the final clustering number. The DBI value is calculated according to Eq. (7):
$$DBI_{optimum} = \arg\min_{C} \left\{ \frac{1}{C} \sum_{k=1}^{C} \max_{l \neq k} \left\{ \frac{S(k) + S(l)}{D(k,l)} \right\} \right\}$$

where C is the number of clusters; k indexes cluster k; S(k) is the average distance of all elements of a cluster to its cluster center; and D(k,l) is the distance between the centers of clusters k and l.

Step 5. Clustering. The k value of the k-means algorithm is set to the value determined by the DBI, and the k-means algorithm is used to classify the weights of the neurons. The procedure is shown in Fig. 2 (flowchart of the proposed approach).
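As a complement to the workflow above, the following is a minimal sketch of the DBI of Eq. (7), written out directly from the definitions of S(k) and D(k,l); the inputs (a data matrix, integer cluster labels, and the cluster centers) are assumed to come from a k-means run such as the one in Step 5.

```python
# A minimal, from-the-definitions implementation of Eq. (7).
import numpy as np

def davies_bouldin(X, labels, centers):
    C = len(centers)
    # S(k): average distance of the elements of cluster k to its center
    S = np.array([np.linalg.norm(X[labels == k] - centers[k], axis=1).mean()
                  for k in range(C)])
    worst = []
    for k in range(C):
        # max over l != k of (S(k) + S(l)) / D(k, l)
        worst.append(max((S[k] + S[l]) / np.linalg.norm(centers[k] - centers[l])
                         for l in range(C) if l != k))
    return float(np.mean(worst))

# Scanning k (e.g. k = 2..8) and keeping the k with the smallest value of this
# index reproduces the cluster-number determination of Step 4.
```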
5.1 Simulation Setup

Four parameters are selected as the key influence factors for the hierarchical dividing of passenger nodes: average daily passenger volume (PassengerF), average daily originating train number (TrainN), population quantity (Population), and gross regional product (GRP). To conduct the simulation, we first normalize each parameter. The comparisons before and after normalization are shown in Figs. 3–6 (PassengerF, TrainN, Population and GRP histograms, each shown (a) before and (b) after normalization).

The size of the SOM is set to [5×8], and the map is trained with the input variables to self-organize the 50 input vectors. The number of clusters is determined automatically by the DBI value, as shown in Fig. 7 (DBI value); it is 4, corresponding to the minimum DBI value.

5.2 Results and Analysis

The component planes of Fig. 8 (U-matrix and component planes of the sample data) can be used for correlation hunting between parameters [21]. The color shades of the SOM neurons for PassengerF and TrainN exhibit a similar increase from the upper left to the lower right, which reveals that average daily passenger volume and average daily originating train number are strongly correlated. Similarly, TrainN and Population are also positively correlated; however, no clear correlation with any other parameter emerges for GRP.

The simulation results show that the passenger nodes in the passenger dedicated line are divided into four grades. The clustering result is shown in Table 1, the clustering result analysis in Table 2, and the average data of each cluster in Table 3. The following conclusions can be drawn from Tables 1–3:

• The mean values of the four parameters of Level-1 passenger nodes are all higher than those of the other three levels. Passenger nodes at this level have huge passenger volumes, great originating flows, large populations and highly developed economies. Level-1 passenger nodes should be critical to the whole railway network, serving as the nerve center of the entire railway system and linking multiple important passenger lines.

• The mean values of the four parameters of Level-2 passenger nodes are all lower than those of Level-1 nodes but higher than those of the remaining two levels. These nodes have relatively large passenger volumes, large departing flows, large populations and developed economies.

• The mean values of the four parameters of Level-3 passenger nodes are all lower than those of Level-1 and Level-2 nodes. Level-3 nodes are higher than Level-4 nodes in mean daily passenger volume, daily originating train quantity and gross regional product, but lower in mean population. These nodes are located in cities of medium population and economic scale.

• Level-4 passenger nodes are higher than Level-3 nodes (though still lower than Level-1 and Level-2) in mean population, and lower than the other three levels in the mean values of the other three parameters. These nodes are generally less prominent, with weak attraction and passenger distribution capacity.

Table 1. Clustering result (excerpt)
Passenger node | Cluster
S1  | 1
S10 | 2
S26 | 3
S35 | 4

Table 2. Clustering result analysis
Cluster | Passenger nodes | Sample size | Proportion (%) | Cumulative proportion (%)
1 | S1,S2,S3,S4,S8,S9,S11,S12 | 8 | 16 | 16
2 | S5,S6,S7,S10,S13,S14,S15,S16,S17,S18,S19 | 11 | 22 | 38
3 | S20,S21,S22,S23,S24,S25,S26,S27,S28,S29,S30,S31,S32,S33,… | … | … | …
4 | …,S43,S44,S45,S46,S47,S48,S49,S50 | … | … | …

Table 3. Average data of each cluster (excerpt)
Cluster | Average daily passenger volume (people/day) | Average daily originating train number (trains/day) | Population quantity (ten thousand people) | Gross regional product (billion yuan)
1 | 91,894 | 121 | 1,156 | 8,207
2 | 49,329 | 43 | 999 | 3,984

5.3 Comparative Performance Analysis

To compare our proposed clustering approach with other clustering methods, we apply the same data to two other algorithms: system clustering and the traditional k-means algorithm. The comparative performance analysis demonstrates the effectiveness and rationality of the newly proposed approach.

5.3.1 Compared with system clustering

We use the system clustering method to cluster the 50 passenger nodes; the result is shown in Fig. 9 (system clustering result). From Fig. 9, the sample data can be divided into three or four categories. With three categories, the first category includes only S1, the second includes S2 and S3, and the rest form the third category. With four categories, the first category includes only S1, the second includes S2 and S3, the third includes S4, S5, S6, S7 and S8, and the rest form the fourth category. When this method is used, the final number of clusters must be determined manually, so personal experience significantly affects the results. By contrast, the DBI in the clustering method proposed in this paper directly works out that the optimal number of clusters is 4. Furthermore, system clustering is based on regression analysis, which is essentially linear correlation analysis, so some errors are inevitable; the clustering method of this paper classifies the whole sample data, which reflects the essence of the problem more truly. In conclusion, our proposed approach is superior to system clustering.

5.3.2 Compared with the traditional k-means algorithm

We use the traditional k-means algorithm to cluster the 50 passenger nodes; the results are shown in Tables 4 and 5. From the clustering results in Tables 4 and 5, the clustering effect is not very satisfactory: this method puts the 50 passenger nodes essentially into two categories, the third and the fourth.
The data in these two categories account for 94% of the total. In addition, the cluster number in the traditional k-means algorithm is entered manually, while in this paper it is determined by the DBI, which avoids the local optimum caused by the k-means algorithm calculating only the objective function. In conclusion, our proposed approach is superior to the traditional k-means algorithm.

Table 4. Number of cases in each cluster (excerpt)
Cluster 1 | 1.000
Cluster 3 | 31.000
Valid | 50.000
Missing | 0.000

Table 5. Traditional k-means clustering result (excerpt)
No. | Passenger node | Cluster | Distance
1  | S1  | 1 | 0.000
2  | S2  | 2 | 3206.297
4  | S4  | 4 | 33440.595
10 | S10 | 4 | 1567.185
15 | S15 | 4 | 11113.626
26 | S26 | 3 | 5579.675
27 | S27 | 3 | 5680.704
29 | S29 | 3 | 4133.863
35 | S35 | 3 | 2540.183
40 | S40 | 3 | 4971.550

6. Conclusion

A train operation plan based on a reasonable hierarchical dividing of passenger nodes can help satisfy passenger flow better and enlarge the competitive edge of the passenger dedicated line market. In this work, we present a new approach for the hierarchical dividing of passenger nodes based on the SOM and k-means algorithms. It eliminates the individual influence and local optimum of the traditional hierarchical dividing process mentioned above, helps railway authorities better grasp the significance of passenger nodes, and guides them, in certain ways, in compiling train operation plans and conducting transportation allocation. Nevertheless, since the main purpose of this paper is to verify the reasonableness and effectiveness of the proposed approach, the index parameters selected for the hierarchical dividing of passenger nodes still need to be adjusted to the situations of specific cities; furthermore, the cluster results should be appropriately readjusted on the basis of qualitative analyses. These problems still need further study.

Acknowledgement

This work is supported by the Natural Science Foundation of Inner Mongolia (No. 2016MS0706 and 2017MS0702), the Institution of Higher Learning Science Research Project of Inner Mongolia (No. NJZY078), and the Science Research Project of Inner Mongolia University of Technology (No. ZD201522).

Chanchan Zhao: She received the M.E. in Computer Science from Taiyuan University of Technology in 2007. She is pursuing the Ph.D. degree in the School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China. Her current research interests include optimization theory and methods, wireless sensor networks, information fusion and interoperability technology.

Feng Liu: He received the Ph.D. degree from the School of Information, Renmin University of China in 2010. He is now a Professor in the School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China. His research interests include computer software, communications software and network management.

Xiaowei Hai: He received the Ph.D. degree from the School of Economics and Management, Beijing Jiaotong University in 2014. He is now an associate professor in the Management College, Inner Mongolia University of Technology, Hohhot, China. His current research interests include information management, software engineering and high-speed railway technology.

References

[1] ChinaDaily, 2016 (Online). Available: http://www.chinadaily.com.cn/opinion/2016-07/01/content_25925793.htm
[2] W. X. Wang and H. X. Lyu, "Classification of railway passenger transport nodes based on affinity propagation cluster," Application Research of Computers, vol. 33, no. 10, pp. 2926-2928, 2016.
[3] B. Gao, Y. Qin, X. M. Xiao, L.
X. Zhu, "K-means clustering analysis of key nodes and edges in Beijing subway network," Journal of Transportation Systems Engineering and Information Technology, 2014, vol. 14, no. 3, pp. 207-213. custom:[[[https://www.researchgate.net/publication/286069132_K-means_clustering_analysis_of_key_nodes_and_edges_in_Beijing_subway_network]]] 4 Y. Z. Xu, Y. Qin, "Factor analysis of key nodes in urban rail network," in Proceedings of IEEE International Conference on Intelligent Transportation Engineering, Singapore, 2016;pp. 27-31. doi:[[[10.1109/ICITE.2016.7581302]]] 5 P. F. Zhou, B. M. Han, Q. Zhang, "High-speed railway passenger node classification method and train stops scheme," Applied Mechanics and Materials, 2014, vol. 505-506, pp. 632-636. doi:[[[10.4028/www.scientific.net/amm.505-506.632]]] 6 J. S. Park, K. Lee, "Classification of the Seoul Metropolitan Subway Stations using graph partitioning," Journal of the Economic Geographical Society of Korea, 2012, vol. 15, no. 3, pp. 343-357. doi:[[[10.23841/egsk.2012.15.3.343]]] 7 T. Kohonen, "Self-organizing map," in Proceedings of the IEEE, 1990;vol. 78, no. 9, pp. 1464-1480. doi:[[[10.1109/5.58325]]] 8 F. Wang, B. L. Xu, Y. W. Qian, Y. M. Dai, Z. Q. Wang, "Anomaly Detection Model Based on Hybrid Classifiers," Journal of System Simulation, Feb. 2012, vol. 24, no. 2, pp. 854-858. custom:[[[-]]] 9 Y. H. Jin, A. Kawamura, S. C. Park, N. Nakagawa, H. Amaguchi, J. Olsson, "Spatiotemporal classification of environmental monitoring data in the Yeongsan River basin, Korea, using self-organizing maps," Journal of Environmental Monitoring, 2011, vol. 13, no. 10, pp. 2886-2894. doi:[[[10.1039/c1em10132c]]] 10 M. Alvarez-Guerra, C. Gonzalez-Pinuela, A. Andres, B. Galan, J. R. Viguri, "Assessment of self-organizing map artificial neural networks for the classification of sediment quality," Environment International, 2008, vol. 34, no. 6, pp. 782-790. doi:[[[10.1016/j.envint.2008.01.006]]] 11 K. Nishiyama, S. Endo, K. Jinno, C. B. Uvo, J. Olsson, R. Berndtsson, "Identification of typical synoptic patterns causing heavy rainfall in the rainy season in Japan by a self-organizing map," Atmospheric Research, 2007, vol. 83, no. 2-4, pp. 185-200. doi:[[[10.1016/j.atmosres.2005.10.015]]] 12 V. S. Lobo, "Application of self-organizing maps to the maritime environment," in Information Fusion and Geographical Information Systems. Heidelberg: Springer2009,, pp. 19-36. doi:[[[10.1007/978-3-642-00304-2_2]]] 13 M. Liukkonen, E. Havia, H. Leinonen, Y. Hiltunen, "Quality-oriented optimization of wave soldering process by using self-organizing maps," Applied Soft Computing, 2011, vol. 11, no. 1, pp. 214-220. doi:[[[10.1016/j.asoc.2009.11.011]]] 14 J. C. Creput, A. Hajjam, A. Koukam, O. Kuhn, "Self-organizing maps in population based metaheuristic to the dynamic vehicle routing problem," Journal of Combinatorial Optimization, 2012, vol. 24, no. 4, pp. 437-458. doi:[[[10.1007/s10878-011-9400-8]]] 15 J. B. MacQueen, "Some methods for classification and analysis of multivariate observations," in Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, CA, 1967;pp. 281-297. custom:[[[http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.308.8619]]] 16 L. Zhang, M. Scholz, A. Mustafa, R. Harrington, "Application of the self-organizing map as a prediction tool for an integrated constructed wetland agroecosystem treating agricultural runoff," Bioresource Technology, 2009, vol. 100, no. 2, pp. 559-565. 
doi: 10.1016/j.biortech.2008.06.042
[17] D. Bedoya, V. Novotny, and E. S. Manolakos, "Instream and offstream environmental conditions and stream biotic integrity: importance of scale and site similarities for learning and prediction," Ecological Modelling, vol. 220, no. 19, pp. 2393-2406, 2009. doi: 10.1016/j.ecolmodel.2009.06.017
[18] S. Greco, R. Slowinski, and I. Szczech, "Properties of rule interestingness measures and alternative approaches to normalization of measures," Information Sciences, vol. 216, pp. 1-16, 2012. doi: 10.1016/j.ins.2012.05.018
[19] H. L. Garcia and I. M. Gonzalez, "Self-organizing map and clustering for wastewater treatment monitoring," Engineering Applications of Artificial Intelligence, vol. 17, no. 3, pp. 215-225, 2004. doi: 10.1016/j.engappai.2004.03.004
[20] D. L. Davies and D. W. Bouldin, "A cluster separation measure," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 1, no. 2, pp. 224-227, 1979. doi: 10.1109/TPAMI.1979.4766909
[21] A. Hentati, A. Kawamura, H. Amaguchi, and Y. Iseri, "Evaluation of sedimentation vulnerability at small hillside reservoirs in the semi-arid region of Tunisia using the self-organizing map," Geomorphology, vol. 122, no. 1-2, pp. 56-64, 2010. doi: 10.1016/j.geomorph.2010.05.013

Revision received: June 13, 2017; accepted: June 21, 2017.
Corresponding author: Chanchan Zhao* (cczhao@imut.edu.cn)
Chanchan Zhao*, College of Information Engineering, Inner Mongolia University of Technology, Hohhot, China, cczhao@imut.edu.cn
Feng Liu*, School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China, fliu@bjtu.edu.cn
Xiaowei Hai***, School of Economics and Management, Inner Mongolia University of Technology, Hohhot, China, cczhao@imut.edu.cn
Home > Journals > Ann. Probab. > Volume 31 > Issue 2 > Article April 2003 On the splitting-up method and stochastic partial differential equations István Gyöngy, Nicolai Krylov Ann. Probab. 31(2): 564-591 (April 2003). DOI: 10.1214/aop/1048516528 We consider two stochastic partial differential equations \[ du_{\varepsilon}(t)= (L_ru_{\varepsilon}(t)+f_{r}(t)) \,dV_{\varepsilon t}^r+(M_{k}u_{\varepsilon}(t)+g_k(t))\, \circ dY_t^k, \qquad\hspace*{-5pt} \varepsilon=0,1, \] driven by the same multidimensional martingale $Y=(Y^k)$ and by different increasing processes $V_{0}^r$, $V_1^r$, $r=1,2,\ldots,d_1$, where $L_r$ and $M^k$ are second-and first-order partial differential operators and $\circ$ stands for the Stratonovich differential. We estimate the moments of the supremum in $t$ of the Sobolev norms of $u_1(t)-u_0(t)$ in terms of the supremum of the differences\break $|V^r_{0t}-V^{r}_{1t}|$. Hence, we obtain moment estimates for the error of a multistage splitting-up method for stochastic PDEs, in particular, for the equation of the unnormalized conditional density in nonlinear filtering. István Gyöngy. Nicolai Krylov. "On the splitting-up method and stochastic partial differential equations." Ann. Probab. 31 (2) 564 - 591, April 2003. https://doi.org/10.1214/aop/1048516528 First available in Project Euclid: 24 March 2003 Digital Object Identifier: 10.1214/aop/1048516528 Primary: 60H15, 65M12, 65M15, 93E11 Keywords: splitting-up, Stochastic partial differential equations Rights: Copyright © 2003 Institute of Mathematical Statistics Ann. Probab. Institute of Mathematical Statistics István Gyöngy, Nicolai Krylov "On the splitting-up method and stochastic partial differential equations," The Annals of Probability, Ann. Probab. 31(2), 564-591, (April 2003)
A fictitious play approach to large-scale optimization
Published: 2004/08/01, Updated: 2005/06/30
Marina A. Epelman, Theodore Lambert
Categories: Game Theory, Optimization of Simulated Systems
Tags: game theory, heuristics

In this paper we investigate the properties of the sampled version of the fictitious play algorithm, familiar from game theory, for games with identical payoffs, and propose a heuristic based on fictitious play as a solution procedure for discrete optimization problems of the form $\max\{u(y):y=(y^1,\ldots,y^n)\in\mathcal{Y}^1\times\cdots\times\mathcal{Y}^n\}$, i.e., in which the feasible region is a Cartesian product of finite sets $\mathcal{Y}^i,\ i\in N=\{1,\ldots,n\}$. The contributions of this paper are two-fold. In the first part of the paper we broaden the existing results on convergence properties of the fictitious play algorithm on games with identical payoffs to include an approximate fictitious play algorithm which allows for errors in players' best replies. Moreover, we introduce sampling-based approximate fictitious play which possesses the above convergence properties, and at the same time provides a computationally efficient method for implementing fictitious play. In the second part of the paper we motivate the use of algorithms based on sampled fictitious play to solve optimization problems in the above form with particular focus on the problems in which the objective function $u(\cdot)$ comes from a "black box," such as a simulation model, where significant computational effort is required for each function evaluation.

Operations Research, 53(3):477-489, 2005. Available at http://www-personal.engin.umich.edu/~mepelman/research/
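The following is a minimal sketch of sampled fictitious play for a game with identical payoffs, in the spirit of the abstract: each player best-responds to a single joint action sampled from the other players' empirical play histories. The objective u(.) below is an arbitrary, hypothetical stand-in for the "black box"; the player count, action sets and iteration count are likewise illustrative assumptions.

```python
# Sampled fictitious play on Y^1 x ... x Y^n with identical payoffs.
import random

def u(y):                                   # hypothetical black-box objective
    return -sum((yi - 3) ** 2 for yi in y)

n, choices = 4, list(range(7))              # n players, each with Y_i = {0,...,6}
history = [[random.choice(choices)] for _ in range(n)]

for _ in range(500):
    for i in range(n):
        # sample one past play of every player (the "sampled" step)
        sample = [random.choice(history[j]) for j in range(n)]
        # best reply of player i against the sampled play of the others
        best = max(choices, key=lambda a: u(sample[:i] + [a] + sample[i + 1:]))
        history[i].append(best)

joint = tuple(h[-1] for h in history)
print(joint, u(joint))                      # converges to (3, 3, 3, 3) here
```

Sampling one joint action per iteration, instead of evaluating u against the full empirical distribution, is what keeps the number of expensive black-box evaluations per iteration small.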
Feasibility study of measuring \(b\rightarrow s\gamma \) photon polarisation in \(D^0\rightarrow K_1(1270)^- e^+\nu _e\) at STCF
Yu-Lan Fan (ORCID: 0000-0001-9616-9705)1, Xiao-Dong Shi2,3, Xiao-Rong Zhou2,3 & Liang Sun1
The European Physical Journal C volume 81, Article number: 1068 (2021)

We report a sensitivity study of measuring \(b\rightarrow s\gamma \) photon polarisation in \(D^{0}\rightarrow K_1(1270)^-e^+\nu _e\) with an integrated luminosity of \(\mathscr {L}\) = 1 ab\(^{-1}\) at a center-of-mass energy of 3.773 GeV at a future Super Tau Charm Facility. More than 61,000 signal events of \(D^{0}\rightarrow K_1(1270)^-e^+\nu _e\) are expected. Based on a fast simulation software package, the statistical sensitivity for the ratio of up-down asymmetries is estimated to be \(1.5\times 10^{-2}\) by performing a two-dimensional angular analysis in \(D^{0}\rightarrow K_1(1270)^-e^+\nu _e\). Combined with measurements of the up-down asymmetry in \(B\rightarrow K_1\gamma \), the photon polarisation in \(b\rightarrow s\gamma \) can be determined model-independently.

Introduction

New physics (NP) and related phenomena beyond the Standard Model (SM) can be explored through indirect searches in \(b\rightarrow s\gamma \) processes. The photon emitted from the electroweak penguin loop in \(b\rightarrow s\gamma \) transitions is predominantly left-handed polarised in the SM. New sources of chirality breaking can strongly modify the \(b\rightarrow s\gamma \) transition, as suggested in several theories beyond the SM [1,2,3]. A representative example is the left-right symmetric model (LRSM) [4, 5], in which the photon can acquire a significant right-handed component. An observation of right-handed photon helicity would be a clear indication of NP [4]. The effective Hamiltonian of \(b\rightarrow s\gamma \) is
$$\begin{aligned} \mathscr {H}_{eff} = -\frac{4G_{F}}{\sqrt{2}}V_{tb}V^*_{ts}(C_{7L}\mathscr {O}_{7L}+C_{7R}\mathscr {O}_{7R}), \end{aligned}$$
where \(C_{7L}\) and \(C_{7R}\) are the Wilson coefficients for left- and right-handed photons, respectively. In the SM, the chiral structure of the \(W^{\pm }\) couplings to quarks leads to a dominantly polarised photon with a suppressed right-handed configuration: the photon from radiative \(\bar{B}\) (B) decays is predominantly left- (right-) handed, i.e., \(|C_{7L}|^2 \gg |C_{7R}|^2\) (\(|C_{7L}|^2 \ll |C_{7R}|^2\)). Various methods have been proposed to determine the photon polarisation in \(b\rightarrow s\gamma \). The first method [6] suggests measuring photon-helicity-dependent CP asymmetries through the time-dependent asymmetries of charged and neutral \(B(t)\rightarrow \)X\(^{CP}_{s/d}\gamma \) decays. The second method [7] is based on the \(b\rightarrow s ~l^+ l^-\) transition, where the dilepton pair originates from a virtual photon. A third method, based on \(\varLambda _b\rightarrow \varLambda \gamma \), can also be used to measure the photon polarisation directly: the forward-backward asymmetry defined in [8, 9] is proportional to the photon polarisation. Measuring the photon polarisation in radiative B decays into kaon resonance states, \(K_\mathrm{res}(\rightarrow K\pi \pi )\), is proposed in [10, 11]; the photon polarisation parameter \(\lambda _{\gamma }\) can be probed through an up-down asymmetry (\(A_\mathrm{UD}\)) of the photon momentum relative to the \(K\pi \pi \) decay plane in the \(K_\mathrm{res}\) rest frame.
The photon polarisation in \(B\rightarrow K_\mathrm{res}\gamma \) is given in terms of the Wilson coefficients [11]:
$$\begin{aligned} \mathscr {\lambda }_{\gamma } = \frac{|C_{7R}|^2-|C_{7L}|^2}{|C_{7R}|^2+|C_{7L}|^2}, \end{aligned}$$
with \(\lambda _{\gamma } \simeq -1\) for \(b\rightarrow s \gamma \) and \(\lambda _{\gamma } \simeq +1\) for \(\bar{b}\rightarrow \bar{s} \gamma \). The integrated up-down asymmetry, which is proportional to the photon polarisation parameter \(\lambda _{\gamma }\), for a radiative process proceeding through a single resonance \(K_\mathrm{res}\) is defined as [10, 11]
$$\begin{aligned} \begin{aligned} {A}_\mathrm{UD}&= \frac{\varGamma _{K_\mathrm{res}\gamma }[\cos \theta _K>0]-\varGamma _{K_\mathrm{res}\gamma }[\cos \theta _K<0]}{\varGamma _{K_\mathrm{res}\gamma }[\cos \theta _K>0]+\varGamma _{K_\mathrm{res}\gamma }[\cos \theta _K<0]}\\&= \lambda _{\gamma }\frac{3~\mathrm{Im}[\mathbf {n} \cdot (\mathbf {J}\times \mathbf {J^{*}})]}{4~|\mathbf {J}|^2}, \end{aligned} \end{aligned}$$
where \(\theta _K\) is the relative angle between the normal direction \(\mathbf {n}\) of the \(K_\mathrm{res}\) decay plane and the opposite of the photon flight direction in the \(K_\mathrm{res}\) rest frame, and \(\mathbf {J}\) denotes the \(K_\mathrm{res}\rightarrow K\pi \pi \) decay amplitude [10]. In the charm sector, radiative \(D^0\) decays into CP eigenstates are expected to allow a determination of the photon polarization by means of the charm meson's finite width difference [12]. Recently, the LHCb collaboration reported the direct observation of the photon polarisation with a significance of 5.2\(\sigma \) in the \(B^+\rightarrow K^+\pi ^-\pi ^+\gamma \) decay [13]. In the \(K\pi \pi \) mass interval [1.1, 1.3] GeV/\(c^2\), which is dominated by \(K_1(1270)\), \(A_\mathrm{UD}\) is extracted to be (6.9 ± 1.7) \(\times 10^{-2}\). However, the currently limited knowledge of the structure of the \(K\pi \pi \) mass spectrum, which includes interfering kaon resonances, prevents the translation of a measured asymmetry into an actual value for \(\lambda _{\gamma }\). To solve this dilemma, three methods have been proposed [14,15,16]. In Ref. [14], the hadronic information of the \(K\pi \pi \) system is determined using the \(B\rightarrow J/\psi K_1\rightarrow J/\psi K\pi \pi \) channel. Along the lines of the method known from \(B\rightarrow K_1(\rightarrow K\pi \pi )\gamma \) decays, the extraction of the photon polarization in \(D_{(s)}\rightarrow K_1(\rightarrow K\pi \pi )\gamma \) decays is introduced for the \(K\pi \pi \) system [15]. A novel method is proposed in Ref. [16] to determine the photon helicity in \(b\rightarrow s\gamma \) model-independently by combining \(B\rightarrow K_1\gamma \) with the semi-leptonic decay \(D\rightarrow K_1l^+\nu _l(l=\mu ^+,e^+)\), introducing a ratio of up-down asymmetries [16]. Reference [16] defines two angles \(\theta _K\) and \(\theta _l\) in \(D^0\rightarrow K_1^-e^+\nu _e\), shown in Fig. 1, and the ratio of up-down asymmetries \(A^{'}_\mathrm{UD}\) is defined as [16]
$$\begin{aligned} \begin{aligned} {A}^{'}_\mathrm{UD}&= \frac{\varGamma _{K_1^-e^+\nu _e}[\cos \theta _K>0]-\varGamma _{K_1^-e^+\nu _e}[\cos \theta _K<0]}{\varGamma _{K_1^-e^+\nu _e}[\cos \theta _l>0]-\varGamma _{K_1^-e^+\nu _e}[\cos \theta _l<0]}\\&= \frac{\mathrm{Im}[\mathbf {n}\cdot (\mathbf {J}\times \mathbf {J^{*}})]}{|\mathbf {J}|^2}. \end{aligned} \end{aligned}$$
Fig. 1. The kinematics for \(D^0\rightarrow K_1^-(K^-\pi ^+\pi ^-) e^+\nu _e\). The relative angle between the normal direction of the \(K_1^-\) decay plane and the opposite of the \(D^0\) flight direction in the \(K_1^-\) rest frame is denoted as \(\theta _K\), where the normal direction of the \(K_1^-\) decay plane is defined as \(\mathbf {p}_{\pi ,\mathrm slow}\times \mathbf {p}_{\pi ,\mathrm fast}\), with \(\mathbf {p}_{\pi ,\mathrm slow}\) and \(\mathbf {p}_{\pi ,\mathrm fast}\) the momenta of the lower- and higher-momentum pions, respectively. The angle \(\theta _l\) is the relative angle between the flight direction of the \(e^+\) in the \(e^+\nu _e\) rest frame and that of the \(e^+\nu _e\) system in the \(D^0\) rest frame [16].

Here the definition of the normal direction of the \(K_1\) decay plane is the same as in the \(B\rightarrow K_\mathrm{res}\gamma \) analysis at LHCb [13]. The photon helicity parameter of \(b\rightarrow s\gamma \) can then be extracted via [16]
$$\begin{aligned} \mathscr {\lambda }_{\gamma } = \frac{4~A_\mathrm{UD}}{3~A^{'}_\mathrm{UD}}. \end{aligned}$$
Thus the photon polarisation of \(b\rightarrow s\gamma \) can be determined model-independently by combining \(A_\mathrm{UD}^{'}\) in \(D^0\rightarrow K_1^-(\rightarrow K^-\pi ^+\pi ^-)e^+\nu _e\) with \(A_\mathrm{UD}\) in \(B^+\rightarrow K_1^+(\rightarrow K^+\pi ^-\pi ^+)\gamma \). Experimentally, the semileptonic decay \(D^0\rightarrow K_1(1270)^-\) \(e^+ \nu _e\) has been observed for the first time, with a statistical significance greater than 10\(\sigma \), using 2.93 fb\(^{-1}\) of \(e^+e^-\) collision data at \(\sqrt{s}\) = 3.773 GeV at BESIII [17]. About 109 signal events are observed, and the measured branching fraction is
$$\begin{aligned}&\mathscr {B}(D^0\rightarrow K_1(1270)^- e^+\nu _e) = (1.09\pm 0.13_{-0.16}^{+0.09}\pm 0.12)\\&\quad \times 10^{-3}, \end{aligned}$$
where the first and second uncertainties are statistical and systematic, respectively, and the third is the external uncertainty from the assumed branching fractions (BFs) of the \(K_1\) subdecays. Still, the statistics of the current BESIII data set are insufficient to measure the ratio of up-down asymmetries in \(D^0\rightarrow K_1(1270)^- e^+\nu _e\). A much larger data sample with a similarly low background level is needed to perform the angular analysis in \(D^0\rightarrow K_1(1270)^- e^+\nu _e\), which calls for the construction of a next-generation \(e^+e^-\) collider operating in the \(\tau \)-charm energy region with much higher luminosity. The Super Tau Charm Facility (STCF) is a scientific project proposed in China for the high energy physics frontier [18]. The STCF plans to produce charmed hadron pairs near the charm threshold, which allows exclusive reconstruction of their decay products with well-determined kinematics. Such samples at threshold allow a double-tag technique [19] to be employed, in which the full event can be reconstructed, and provide a unique environment to measure \(A^{'}_\mathrm{UD}\) in \(D^0\rightarrow K_1(1270)^-e^+\nu _e\) with a very low background level. In this work, we present a feasibility study of measuring the ratio of up-down asymmetries in \(D^0\rightarrow K_1(1270)^-e^+\nu _e\) at STCF. Throughout this paper, charge-conjugated modes are always implied. This paper is organised as follows: in Sect. 2, the detector concept for STCF is introduced, as well as the Monte Carlo (MC) samples used in this feasibility study. In Sect. 3, the event selection and analysis method are described. The optimisation of the detector response is elaborated in Sect. 4, and the results are presented in Sect. 5. Finally, we conclude in Sect. 6.
Detector and MC simulation

The proposed STCF is a symmetric electron-positron collider designed to provide \(e^+e^-\) interactions at a center-of-mass (c.m.) energy \(\sqrt{s}\) from 2.0 to 7.0 GeV. The peak luminosity is expected to be \(0.5\times 10^{35}\) cm\(^{-2}\)s\(^{-1}\) at \(\sqrt{s}=\) 4.0 GeV, and the integrated luminosity per year is 1 ab\(^{-1}\). Such an environment will be an important low-background playground to test the SM and probe possible new physics beyond the SM. The STCF detector is a general-purpose detector designed for an \(e^+e^-\) collider; it includes a tracking system composed of inner and outer trackers, a particle identification (PID) system with 3\(\sigma \) charged \(K/\pi \) separation up to 2 GeV/c, an electromagnetic calorimeter (EMC) with excellent energy resolution and good time resolution, a super-conducting solenoid, and a muon detector (MUD) that provides good charged \(\pi /\mu \) separation. The detailed conceptual design of each sub-detector and the expected detection efficiencies and resolutions can be found in [18, 20, 21]. Currently, the STCF detector and the corresponding offline software system are under active development. A reliable fast simulation tool for STCF has been developed [21], which takes the most common event generators as input to perform a fast and realistic simulation. The simulation includes resolution and efficiency responses for the tracking of final-state particles, the PID system, and kinematic-fit-related variables. Besides, the fast simulation provides functions for adjusting the performance of each sub-system, which can be used to optimise the detector design according to the physics requirements. This study uses MC simulated samples corresponding to 1 ab\(^{-1}\) of integrated luminosity at \(\sqrt{s}\) = 3.773 GeV. The simulation includes the beam-energy spread and initial-state radiation (ISR) in the \(e^+e^-\) annihilations, modeled with the generator kkmc [22, 23]. The inclusive MC samples consist of the production of \(D\bar{D}\) pairs, the non-\(D\bar{D}\) decays of the \(\psi (3770)\), the ISR production of the \(J/\psi \) and \(\psi (3686)\) states, and the continuum process incorporated in kkmc [22, 23]. The known decay modes are modeled with evtgen [24, 25] using BFs taken from the Particle Data Group [26], and the remaining unknown decays of the charmonium states with lundcharm [27]. Final-state radiation (FSR) from charged final-state particles is incorporated with the photos package [28]. Within the inclusive \(D\bar{D}\) MC sample, the \(D^0\) \(\rightarrow \) \(K_1(1270)^-\) \(e^+\) \(\nu _e\) decay is generated with the ISGW2 model [29] with the BF taken from Ref. [26], and the \(K_1(1270)^-\) meson is allowed to decay into all intermediate processes that result in a \(K^-\pi ^+\pi ^-\) final state. The resonance shape of the \(K_1(1270)^-\) meson is parameterised by a relativistic Breit-Wigner function. The mass and width of the \(K_1(1270)^-\) meson are fixed at the known values shown in Table 1, and the BFs of the \(K_1(1270)\) subdecays measured by Belle [30] are input to generate the signal MC events, since they give better consistency between data and MC simulation [17] than those reported in [26].

Table 1. Mass, width [26] and ratios of subdecays of \(K_1(1270)^-\) (Fit 2) [30] used in this analysis.
Fig. 2. (a) The \(M_\mathrm{miss}^2\) vs. \(M_{K\pi \pi }\) distribution of the semi-leptonic candidate events; (b), (c) the \(M_\mathrm{miss}^2\) and \(M_{K\pi \pi }\) distributions of the semi-leptonic candidate events, where the red component denotes the signal events and the other components denote the remaining background events.

Event selection and analysis

The feasibility study employs the \(e^+e^-\rightarrow \psi (3770)\rightarrow D^0 \bar{D}^0\) decay chain. The \(\bar{D}^0\) mesons are reconstructed in three channels with low background levels: \(\bar{D}^0\rightarrow K^+\pi ^-\), \(K^+\pi ^-\pi ^0\) and \(K^+\pi ^-\pi ^+\pi ^-\). These inclusively selected events are referred to as single-tag (ST) \(\bar{D}^0\) mesons. In the presence of the ST \(\bar{D}^0\) mesons, candidates for \(D^0\rightarrow K_1(1270)^-e^+\nu _e\) are selected to form double-tag (DT) events. Each charged track is required to satisfy the vertex requirement and the detector acceptance in the fast simulation. The combined confidence levels under the positron, pion and kaon hypotheses (\(CL_e\), \(CL_{\pi }\) and \(CL_{K}\), respectively) are calculated. Kaon (pion) candidates are required to satisfy \(CL_K > CL_{\pi }\) (\(CL_{\pi } > CL_K\)). Positron candidates are required to satisfy \(CL_e\) / (\(CL_e\) + \(CL_K\) + \(CL_{\pi }\)) > 0.8. To reduce the background from hadrons and muons, the positron candidate is further required to have a deposited energy in the EMC greater than 0.8 times its momentum in the MDC. The \(\pi ^0\) meson is reconstructed via the \(\pi ^0\rightarrow \gamma \gamma \) decay. The \(\gamma \gamma \) combinations with an invariant mass in the range (0.115, 0.150) GeV/\(c^2\) are regarded as \(\pi ^0\) candidates, and a kinematic fit constraining the \(\gamma \gamma \) invariant mass to the \(\pi ^0\) nominal mass [26] is performed to improve the mass resolution. The ST \(\bar{D}^0\) mesons are identified by the energy difference \(\varDelta E \equiv E_{\bar{D}^0}-E_\mathrm{beam}\) and the beam-constrained mass \(M_\mathrm{BC}\) \(\equiv \) \(\sqrt{E^2_\mathrm{beam}-|\mathbf {p}_{\bar{D}^0}|^2}\), where \(E_\mathrm{beam}\) is the beam energy, and \(E_{\bar{D}^0}\) and \(\mathbf {p}_{\bar{D}^0}\) are the total energy and momentum of the ST \(\bar{D}^0\) in the \(e^+e^-\) rest frame. If there are multiple combinations in an event, the combination with the smallest \(\varDelta E\) is chosen for each tag mode. The combinatorial backgrounds in the \(M_\mathrm{BC}\) distributions are suppressed by requiring \(\varDelta E\) within (− 29, 27), (− 69, 38) and (− 31, 28) MeV for \(\bar{D}^0\rightarrow K^+\pi ^-\), \(K^+\pi ^-\pi ^0\) and \(K^+\pi ^-\pi ^+\pi ^-\), respectively, which corresponds to about 3.5\(\sigma \) away from the fitted peak. Particles recoiling against the ST \(\bar{D}^0\) candidates are used to reconstruct candidates for the \(D^0\rightarrow K_1(1270)^- e^+\!\nu _e\) decay, where the \(K_1(1270)^-\) meson is reconstructed via its dominant decay \(K_1(1270)^-\rightarrow K^-\pi ^+\pi ^-\). It is required that there be exactly four good unused charged tracks available for this selection. The charge of the lepton candidate is required to be the same as that of the charged kaon on the tag side. The other three charged tracks are identified as a kaon and two pions, based on the same PID criteria used in the ST selection. The kaon candidate must have charge opposite to that of the positron. The main peaking background comes from misidentifying a pion as a positron, and additional criteria as in [17] are used to improve the \(\pi \)/e separation.
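The following is a minimal sketch of the tag-side kinematic variables defined above, assuming the \(\bar{D}^0\) candidate four-momentum (E, px, py, pz) has already been boosted to the \(e^+e^-\) rest frame; the units are GeV, and the candidate four-momentum below is a hypothetical illustration rather than a simulated event.

```python
# Delta E and M_BC for a tag-side D-bar candidate at sqrt(s) = 3.773 GeV.
import math

E_BEAM = 3.773 / 2.0                        # beam energy at the psi(3770)

def delta_e(p4):                            # Delta E = E_D - E_beam
    e, px, py, pz = p4
    return e - E_BEAM

def m_bc(p4):                               # M_BC = sqrt(E_beam^2 - |p_D|^2)
    e, px, py, pz = p4
    return math.sqrt(E_BEAM**2 - (px**2 + py**2 + pz**2))

cand = (1.889, 0.10, -0.05, 0.20)           # a hypothetical K+ pi- combination
print(delta_e(cand), m_bc(cand))            # keep if |Delta E| is small and
                                            # M_BC is close to the nominal D0 mass
```

Substituting the beam energy for the measured candidate energy in M_BC is the standard trick that removes most of the tag-side energy resolution from the mass variable.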
Information concerning the undetectable neutrino is inferred from the kinematic quantity \(M^2_\mathrm{miss} \equiv E^2_\mathrm{miss} - |\mathbf {p}_\mathrm{miss}|^2\), where \(E_\mathrm{miss}\) and \(\mathbf {p}_\mathrm{miss}\) are the missing energy and momentum of the signal candidate, calculated as \(E_\mathrm{miss} \equiv E_\mathrm{beam} - \sum _{j}E_j\) and \(\mathbf {p}_\mathrm{miss}\equiv -\mathbf {p}_{\bar{D}^0} - \sum _j\mathbf {p}_{j}\) in the \(e^+e^-\) center-of-mass frame. The index j sums over the \(K^-\), \(\pi ^+\), \(\pi ^-\) and \(e^+\) of the signal candidate, and \(E_j\) and \(\mathbf {p}_j\) are the energy and momentum of the j-th particle. To partially recover the energy lost to FSR and bremsstrahlung, the four-momenta of photon(s) within 5\(^\circ \) of the initial positron direction are added to the positron four-momentum. Figure 2 shows the distribution of \(M_{K^-\pi ^+\pi ^-}\) vs. \(M^2_\mathrm{miss}\) of the accepted \(D^0\rightarrow K^-\pi ^+\pi ^-e^+\nu _e\) candidate events in the MC sample after combining all tag modes. A clear signal, which concentrates around the \(K_1(1270)^-\) nominal mass in the \(M_{K^-\pi ^+\pi ^-}\) distribution and around zero in the \(M^2_\mathrm{miss}\) distribution, can be seen. The selection efficiencies of signal candidates with the ST modes \(\bar{D}^0\rightarrow K^+\pi ^-\), \(K^+\pi ^-\pi ^0\) and \(K^+\pi ^-\pi ^+\pi ^-\) are 12.11\(\%\), 6.93\(\%\) and 6.25\(\%\), respectively. In order to determine the angular distributions of \(\cos \theta _K\) and \(\cos \theta _l\), a two-dimensional (2-D) fit to \(M^2_\mathrm{miss}\) and \(M_{K^-\pi ^+\pi ^-}\) is performed to extract the signal yield in each angular bin. The 2-D fit projections onto the \(M^2_\mathrm{miss}\) and \(M_{K^-\pi ^+\pi ^-}\) distributions are shown in Fig. 3. In the fit, the 2-D signal shape is described by the MC-simulated shape extracted from the signal MC events, while the 2-D background shape is modeled by that derived from the inclusive MC sample. The smooth 2-D probability density functions of signal and background are modeled using RooNDKeysPdf [31, 32].

Fig. 3. Projections onto \(M^2_\mathrm{miss}\) (left) and \(M_{K^-\pi ^+\pi ^-}\) (right) of the DT candidate events of all three tag channels. The points with error bars are the MC sample; the blue solid, red dotted and green dashed curves are the total fit, signal and background, respectively.

The reconstruction efficiencies of signal candidates in each \(\cos \theta _K\) and \(\cos \theta _l\) interval are shown in Fig. 4. The signal reconstruction efficiency shows a clear trend of increasing monotonically with \(\cos \theta _l\), which is due to the strong correlation between \(\cos \theta _l\) and the electron momentum: \(D^0\) candidates with lower-momentum electrons are less likely to satisfy the electron tracking and PID requirements.

Fig. 4. The signal reconstruction efficiencies (in percent) in bins of \(\cos \theta _K\) and \(\cos \theta _l\). In each bin j, the signal reconstruction efficiency is obtained as \(\epsilon _\mathrm{DT}^{j} = \frac{\sum _{i}\mathscr {B}_\mathrm{ST}^{i}\epsilon _\mathrm{DT}^{ij}}{\sum _{i}\mathscr {B}_\mathrm{ST}^i}\), where \(\mathscr {B}_\mathrm{ST}^i\) denotes the known BF of tag mode i and \(\epsilon _\mathrm{DT}^{ij}\) represents the DT efficiency in bin j for tag mode i.
The signal yields in each \(\cos \theta _K\) and \(\cos \theta _l\) interval, corrected by the signal reconstruction efficiency, are fitted with the polynomial function [16]
$$\begin{aligned} \begin{aligned}&f(\cos \theta _K, \cos \theta _l; A_\mathrm{UD}^{'}, d_{+}, d_{-})= (4+d_+ + d_-)\\&\quad [1+\cos ^2\theta _{K}\cos ^2\theta _l]\\&\quad + 2(d_+-d_-) [1+\cos ^2\theta _K]\cos \theta _l\\&\quad + 2A_\mathrm{UD}^{'}(d_+-d_-) \cos \theta _K[1+\cos ^2\theta _l]\\&\quad + 4A_\mathrm{UD}^{'}(d_++d_-) \cos \theta _K\cos \theta _l\\&\quad - (4-d_+-d_-) [\cos ^2\theta _K+\cos ^2\theta _l], \end{aligned} \end{aligned}$$
where \(d_{\pm }\) are the angular coefficients, defined as
$$\begin{aligned} \begin{aligned} d_+ =\frac{|c_+|^2}{|c_0|^2}, \qquad d_- =\frac{|c_-|^2}{|c_0|^2}. \end{aligned} \end{aligned}$$
The coefficients \(c_{\pm }\) and \(c_0\) correspond to the nonperturbative amplitudes for D decays into \(K_1\) with transverse and longitudinal polarisations, respectively. The ratio of up-down asymmetries \(A_\mathrm{UD}^{'}\) can be extracted directly. Besides, the fraction of longitudinal polarisation \(\frac{|c_0|^2}{|c_0|^2+|c_+|^2+|c_-|^2}\) can be derived from the fitted \(d_{\pm }\) values. Form-factor calculations based on different approaches, such as the covariant light-front quark model (LFQM) and light-cone QCD sum rules (LCSR), yield significantly different results [33, 34].

Fig. 5. Efficiency-corrected signal yields in bins of \(\cos \theta _K\) (left) and \(\cos \theta _l\) (right). The curve is the result of the fit using the polynomial function.

Fig. 6. The optimisation of the DT efficiency versus the charged-track reconstruction efficiency (a); the optimisation of the figure-of-merit versus the photon reconstruction efficiency (b); the optimisation of the figure-of-merit versus the \(\pi ^+\rightarrow e^+\) misidentification rate (c). The red star denotes the default result.

The possibility of events migrating from a bin to a neighboring bin because of the detector resolution is assessed by calculating the full width at half maximum (FWHM) of the \(\cos \theta _{l}\) and \(\cos \theta _K\) resolutions, respectively. The FWHM values are 0.115 and 0.05, which indicates that bin-migration effects can be ignored given the larger bin width of 0.5. A 2-D \(\chi ^2\) fit to the \(\cos \theta _l\) and \(\cos \theta _K\) distributions allows the extraction of \(A_\mathrm{UD}^{'}\), and the fit projections are shown in Fig. 5. The statistical sensitivity of \(A_\mathrm{UD}^{'}\) based on the 1 ab\(^{-1}\) MC sample is thus determined to be of the order of 1.8\(\times \)10\(^{-2}\). As a cross-check, \(A_\mathrm{UD}^{'}\) is also determined with a counting method according to Eq. (4), and the corresponding result is compatible with that of the angular fit method. However, the angular fit method yields a more precise result on \(A_\mathrm{UD}^{'}\) and is taken as the nominal result.
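The following is a minimal sketch of extracting \(A_\mathrm{UD}^{'}\) by a binned 2-D chi-square fit of efficiency-corrected yields to Eq. (6). The 4×4 binning, the toy values of d+ and d-, and the overall normalisation are hypothetical placeholders used only to generate an input; they are not the values of this analysis.

```python
# Binned 2-D chi-square fit of the angular yields to Eq. (6).
import numpy as np
from scipy.optimize import minimize

edges = np.linspace(-1.0, 1.0, 5)
centers = 0.5 * (edges[:-1] + edges[1:])
cK, cl = np.meshgrid(centers, centers, indexing="ij")

def f(cK, cl, A, dp, dm):                   # the polynomial of Eq. (6)
    return ((4 + dp + dm) * (1 + cK**2 * cl**2)
            + 2 * (dp - dm) * (1 + cK**2) * cl
            + 2 * A * (dp - dm) * cK * (1 + cl**2)
            + 4 * A * (dp + dm) * cK * cl
            - (4 - dp - dm) * (cK**2 + cl**2))

def chi2(pars, yields, errors):
    A, dp, dm, norm = pars
    return np.sum(((yields - norm * f(cK, cl, A, dp, dm)) / errors) ** 2)

# toy input generated from Eq. (6) with A'_UD = 0.09, then fluctuated
rng = np.random.default_rng(1)
truth = 250.0 * f(cK, cl, 0.09, 0.6, 0.2)
yields, errors = rng.normal(truth, np.sqrt(truth)), np.sqrt(truth)
res = minimize(chi2, x0=[0.0, 0.5, 0.5, 250.0], args=(yields, errors))
print("fitted A'_UD =", res.x[0])
```

Floating the overall normalisation alongside \(A_\mathrm{UD}^{'}\) and \(d_{\pm }\) means only the shape of Eq. (6) constrains the asymmetry, mirroring the fit described in the text.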
Optimization of detector response

The main loss of signal efficiency comes from the charged-track selection, the neutral-track selection, and the identification of electrons at low momentum. These effects correspond to the sub-detectors of the tracking system, the EMC and the PID system. By studying the DT efficiencies or signal-to-background ratios for this process while varying the sub-detectors' responses, the requirements on the detector design can be optimised accordingly. With the help of the fast simulation software package, three kinds of detector response are studied, as introduced below:

a. Tracking efficiency. The tracking efficiency in the fast simulation is characterised in two dimensions: transverse momentum \(P_T\) and polar angle cos\(\theta \), which are correlated with the amount of track bending and the hit positions of tracks in the tracker system. Low-momentum tracks (\(P_T\) < 0.2 GeV/c) are difficult to reconstruct efficiently because of stronger electromagnetic multiple scattering, electric field leakage, energy loss, etc. However, with a different technique in the STCF tracking-system design, or with an advanced track-finding algorithm, the efficiency is expected to improve for low-momentum tracks. Benefiting from the flexible approach to changing the charged-track response, the detection efficiency is scaled by a factor from 1.1 to 1.5 in the fast simulation. The figure-of-merit, defined as the DT efficiency, characterising the tracking performance is shown in Fig. 6a; the DT efficiency improves significantly with the given scale factors. The momentum and position resolutions can also be optimised in the fast simulation with proper functions. However, no significant improvement is found when scanning the absolute \(\sigma _{xy}\) from 30 to 150 \(\upmu \)m and the absolute \(\sigma _z\) from 500 to 2500 \(\upmu \)m, where \(\sigma _{xy}\) and \(\sigma _z\) are the resolutions of the tracking system in the xy plane and the z direction, respectively. This can be understood since the momentum resolution is dominated by electromagnetic multiple scattering on the detector material rather than by the position resolution. Therefore, material with a low atomic number Z is required in the tracking system.

b. Photon detection efficiency. In this analysis, \(\pi ^0\)s are selected as part of the tag mode \(\bar{D}^0\rightarrow K^+\pi ^-\pi ^0\), and the \(\pi ^0\) selection also helps to suppress the main background \(D^0\rightarrow K^-\pi ^+\pi ^-\pi ^+\pi ^0\) on the signal side. The figure-of-merit \(\frac{S}{\sqrt{S+B}}\) is used to characterise the effect of the photon detection efficiency on the signal significance, where S denotes the expected signal yield of \(D^0\rightarrow K_1(1270)^-(\rightarrow K^-\pi ^+\pi ^-) e^+\nu _e\) and B denotes the background yield. The value of \(\frac{S}{\sqrt{S+B}}\) versus the scale factor of the photon detection efficiency, scanned from 1.1 to 1.5, is shown in Fig. 6b.

c. \(\pi /e\) identification. Misidentification of a pion as an electron at momenta below 0.6 GeV/c forms the main peaking backgrounds, \(D^0\rightarrow K^-\pi ^+\pi ^-\pi ^+\) and \(D^0\rightarrow K^-\pi ^+\pi ^-\pi ^+\pi ^0\). As the fast simulation provides a function for optimising the \(\pi \)/e identification, which allows the \(\pi \)/e misidentification rate to be varied, the rate at 0.2 GeV/c is scanned from 5.7\(\%\) to 0.64\(\%\), as shown in Fig. 6c. The same \(\frac{S}{\sqrt{S+B}}\) defined above is used to characterise the effect of \(\pi ^+\rightarrow e^+\) misidentification on the signal significance.
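As a rough illustration of how such scale factors feed through to the figure-of-merit, the following sketch propagates a per-track efficiency scale to the DT efficiency and to \(S/\sqrt{S+B}\). The signal yield follows the expectation quoted in the abstract, but the background yield, the fraction of affected low-momentum tracks, and the crude per-track model are all assumptions for illustration, not the fast-simulation results.

```python
# Toy propagation of a per-track efficiency scale factor to S/sqrt(S+B).
import math

S0, B0 = 61000.0, 9000.0                    # signal per abstract; background assumed

def event_scale(track_scale, n_tracks=4, frac_low=0.3):
    # crude model: only the assumed low-pT fraction of the n signal-side
    # tracks benefits from the improved tracking efficiency
    per_track = (1.0 - frac_low) + frac_low * track_scale
    return per_track ** n_tracks

for ts in (1.0, 1.1, 1.3, 1.5):
    s, b = S0 * event_scale(ts), B0 * event_scale(ts)
    print(ts, round(s / math.sqrt(s + b), 1))
```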
In summary, three sets of optimization factors for the different sub-detector responses are calculated separately. Compared with the fast simulation with default settings, the DT efficiency is improved by \(\sim \)27% if the charged-track reconstruction efficiency is scaled by a factor of 1.1, and the value of \(\frac{S}{\sqrt{S+B}}\) is improved by 4% if the photon detection efficiency is scaled by a factor of 1.1, or by 7% if the \(\pi \)/e misidentification rate is halved to 3.2%, which are reasonable assumptions for real-case scenarios. With the above three factors applied together, the DT efficiency is improved by 33\(\%\). The corresponding 2-D angular \(\chi ^2\) fit based on the updated efficiency-corrected signal yields in the different angular bins is performed; from this fit, the statistical uncertainty of the ratio of up-down asymmetries is extracted to be 1.5\(\times 10^{-2}\), i.e., improved by 17\(\%\) compared with the scenario without optimisation.

Results

With the above selection criteria and optimisation procedure, the 2-D simultaneous fit to \(M_\mathrm{miss}^2\) vs. \(M_{K\pi \pi }\) in the different intervals of \(\cos \theta _K\) vs. \(\cos \theta _l\) is performed, and the extracted semi-leptonic signal yields are used to fit the angular distribution. The sensitivity of the ratio of up-down asymmetries in \(D^0\rightarrow K_1(1270)^-e^+\nu _e\) with an integrated luminosity of 1 ab\(^{-1}\) is thereby determined to be 1.5\(\times \)10\(^{-2}\). Besides, the selection efficiency for this process at \(\sqrt{s}\) = 3.773 GeV, where the cross section for \(e^+e^-\rightarrow D^0\bar{D}^0\) is 3.6 nb [35], is studied with a large MC sample and has a negligible uncertainty. Equation (2) indicates that the Wilson coefficients can be constrained by a measurement of the photon polarisation parameter \(\lambda _{\gamma }\) and its uncertainty. Combining the uncertainty of the \(A_\mathrm{UD}\) measurement [13] with the uncertainty of the \(A_\mathrm{UD}^{'}\) measurement in this analysis, the sensitivity to \(\lambda _{\gamma }\) can be determined using Eq. (5); the sensitivity on \(A_\mathrm{UD}^{'}\) can thus be translated into constraints on the Wilson coefficients. Figure 7 depicts the dependence of the Wilson coefficients on the ratio of up-down asymmetries, using the \(A_\mathrm{UD}\) measured in the \(K\pi \pi \) mass range (1.1, 1.3) GeV/\(c^2\) [13] as input, shown as the blue solid line; the constraints corresponding to the uncertainty of \(A_\mathrm{UD}\) are shown as the green bands. The photon polarisation parameter is predicted to be \(\lambda _{\gamma } \simeq \) 1 for \(\bar{b}\rightarrow \bar{s}\gamma \) in the SM, which translates to \(A_\mathrm{UD}^{'} \simeq \) (9.2 ± 2.3)\(\times \)10\(^{-2}\), shown by the red and black solid lines in Fig. 7.

Fig. 7. Dependence of the Wilson coefficients on the ratio of up-down asymmetries, shown by the blue line; the green parts denote the uncertainties of \(A_\mathrm{UD}\); the red solid line denotes the \(A_\mathrm{UD}^{'}\) value corresponding to the photon polarisation parameter predicted in the SM, with the uncertainties of \(A_\mathrm{UD}\) indicated between the black solid lines.

For the systematic uncertainty on \(A_\mathrm{UD}^{'}\), possible sources include the electron tracking and PID efficiencies as functions of the electron momentum, which do not cancel in the \(\cos \theta _l\) distribution because of the strong correlation between \(\cos \theta _l\) and the electron momentum mentioned before. With the current binning scheme, as shown in Fig. 4, the possibility of events migrating from an angular bin to its neighbor because of the detector resolution effects on \(\cos \theta _K\) and \(\cos \theta _l\) is expected to be small, and the related systematic uncertainty should be manageable.
the possibility of some events migrating from an angular bin to its neighbor because of the detector resolution effects on \(\cos \theta _K\) and \(\cos \theta _l\) is expected to be small, and the related systematic uncertainty should be manageable. Moreover, as in the BESIII analysis [17], the signal and background shape modeling would affect the signal yields considerably in the different angular bins, due to imprecise knowledge of the \(K_1(1270)\) line shape and of background events such as \(D^0\rightarrow K^-\pi ^+\pi ^-\pi ^+\pi ^0\). Our simulation does not include non-\(K_1(1270)^-\) sources of \(K^-\pi ^+\pi ^-\) in the \(D^0 \rightarrow K^- \pi ^+ \pi ^- e^+ \nu _e\) decay, which are estimated to be at least one order of magnitude lower than our signal decay of \(K_1(1270)^-\) [17]. We expect the systematic effect of the non-\(K_1(1270)^-\) sources on \(A^{'}_\mathrm{UD}\) to be small, although detailed studies on the \(K_1(1400)^-\) contribution are needed when more data become available. Summary and prospect In this work, the statistical sensitivity of the ratio of up-down asymmetry in \(D^0\rightarrow K_1(1270)^-e^+\nu _e\), with an integrated luminosity of \(\mathscr {L}\) = 1 ab\(^{-1}\) at \(\sqrt{s}\) = 3.773 GeV and the efficiency optimised with the fast simulation, is determined to be 1.5\(\times 10^{-2}\) by performing an angular analysis. The hadronic effects in \(K_1\rightarrow K\pi \pi \) can be quantified by \(A_\mathrm{UD}^{'}\); therefore, combined with the measured up-down asymmetry \(A_\mathrm{UD}\) in \(B^+\rightarrow K_1^+(\rightarrow K^+\pi ^-\pi ^+)\gamma \) [13], the photon polarisation in \(b\rightarrow s\gamma \) can be measured to probe new physics. This manuscript has no associated data or the data will not be deposited. [Authors' comment: The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.]
[1] D. Atwood, M. Gronau, A. Soni, Phys. Rev. Lett. 79, 185 (1997)
[2] A. Paul, D.M. Straub, J. High Energy Phys. 04, 027 (2017)
[3] D. Becirevic, E. Kou, A. Le Yaouanc, A. Tayduganov, J. High Energy Phys. 08, 090 (2012)
[4] E. Kou, C.D. Lü, F.S. Yu, J. High Energy Phys. 12, 102 (2013)
[5] N. Haba, H. Ishida, T. Nakaya, Y. Shimizu, R. Takahashi, J. High Energy Phys. 03, 160 (2015)
[6] F. Muheim, Y. Xie, R. Zwicky, Phys. Lett. B 664, 174 (2008)
[7] F. Kruger, J. Matias, Phys. Rev. D 71, 094009 (2005)
[8] T. Mannel, S. Recksiegel, Acta Phys. Pol. B 28, 2489 (1997)
[9] G. Hiller, A. Kagan, Phys. Rev. D 65, 074038 (2002)
[10] M. Gronau, Y. Grossman, D. Pirjol, A. Ryd, Phys. Rev. Lett. 88, 051802 (2002)
[11] M. Gronau, D. Pirjol, Phys. Rev. D 66, 054008 (2002)
[12] S. de Boer, G. Hiller, Eur. Phys. J. C 78, 188 (2018)
[13] R. Aaij et al. (LHCb Collaboration), Phys. Rev. Lett. 112, 161801 (2014)
[14] E. Kou, A. Le Yaouanc, A. Tayduganov, Phys. Lett. B 763, 66 (2016)
[15] N. Adolph, G. Hiller, A. Tayduganov, Phys. Rev. D 99, 075023 (2019)
[16] W. Wang, F.S. Yu, Z.X. Zhao, Phys. Rev. Lett. 125, 051802 (2020)
[17] M. Ablikim et al. (BESIII Collaboration), Phys. Rev. Lett. 127, 131801 (2021)
[18] H.P. Peng, High Intensity Electron Positron Accelerator (HIEPA), Super Tau Charm Facility (STCF) in China, talk at Charm2018, Novosibirsk, Russia, May 21–25 (2018)
[19] R. Baltrusaitis et al. (MARK III Collaboration), Phys. Rev. Lett. 56, 2140 (1986)
[20] Q. Luo, D. Xu, Progress on preliminary conceptual study of HIEPA, a super tau-charm factory in China, talk at the 9th International Particle Accelerator Conference (IPAC 2018), Vancouver, British Columbia, Canada, April 29–May 4 (2018)
[21] X.-D. Shi et al., JINST 16, P03029 (2021)
[22] S. Jadach, B.F.L. Ward, Z. Was, Comput. Phys. Commun. 130, 260 (2000)
[23] S. Jadach, B.F.L. Ward, Z. Was, Phys. Rev. D 63, 113009 (2001)
[24] D.J. Lange, Nucl. Instrum. Methods A 462, 152 (2001)
[25] R.G. Ping, Chin. Phys. C 32, 599 (2008)
[26] P.A. Zyla et al. (Particle Data Group), Prog. Theor. Exp. Phys. 2020, 083C01 (2020)
[27] J.C. Chen, G.S. Huang, X.R. Qi, D.H. Zhang, Y.S. Zhu, Phys. Rev. D 62, 034003 (2000)
[28] E. Richter-Was, Phys. Lett. B 303, 163 (1993)
[29] D. Scora, N. Isgur, Phys. Rev. D 52, 2783 (1995)
[30] H. Guler et al. (Belle Collaboration), Phys. Rev. D 83, 032005 (2011)
[31] W. Verkerke, D. Kirkby, eConf C0303241, MOLT007 (2003)
[32] https://root.cern.ch/doc/master/classRooNDKeysPdf.html
[33] L. Bian, L. Sun, W. Wang, Phys. Rev. D 104, 053003 (2021)
[34] S. Momeni, R. Khosravi, J. Phys. G 46, 105006 (2019)
[35] M. Ablikim et al. (BESIII Collaboration), Chin. Phys. C 42(8), 083001 (2018)
The authors are grateful to Wei Wang, Fu-Sheng Yu, Hai-Long Ma and Xiang Pan for useful discussions. We express our gratitude to the supercomputing center of USTC and Hefei Comprehensive National Science Center for their strong support. This work is supported by the Double First-Class university project foundation of USTC and the National Natural Science Foundation of China under Project No. 11625523.
Affiliations: School of Physics and Technology, Wuhan University, Wuhan, 430072, People's Republic of China (Yu-Lan Fan, Liang Sun); State Key Laboratory of Particle Detection and Electronics, Hefei, 230026, People's Republic of China (Xiao-Dong Shi, Xiao-Rong Zhou); School of Physical Sciences, University of Science and Technology of China, Hefei, 230026, People's Republic of China (Yu-Lan Fan, Xiao-Dong Shi, Xiao-Rong Zhou, Liang Sun). Correspondence to Xiao-Rong Zhou or Liang Sun.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Funded by SCOAP3.
Fan, YL., Shi, XD., Zhou, XR. et al. Feasibility study of measuring \(b\rightarrow s\gamma \) photon polarisation in \(D^0\rightarrow K_1(1270)^- e^+\nu _e\) at STCF. Eur. Phys. J. C 81, 1068 (2021). https://doi.org/10.1140/epjc/s10052-021-09841-y
Extremal solutions for some periodic fractional differential equations Wei Zhang1, Zhanbing Bai1 & Sujing Sun2 By using the lower and upper solution method, the existence of an iterative solution for a class of fractional periodic boundary value problems, $$\begin{aligned}& D_{0+}^{\alpha}u(t)=f\bigl(t, u(t)\bigr),\quad t \in(0, h),\\& \lim_{t \to0^{+}}t^{1-\alpha}u(t) = h^{1-\alpha}u(h), \end{aligned}$$ is discussed, where \(0< h<+\infty\), \(f\in C([0, h]\times R, R)\), \(D_{0+}^{\alpha}u (t) \) is the Riemann-Liouville fractional derivative, \(0<\alpha< 1\). Different from other well-known results, a new condition on the nonlinear term is given to guarantee the equivalence between the solution of the periodic boundary value problem and the fixed point of the corresponding operator. Moreover, the existence of extremal solutions for the problem is given. Differential equations of fractional order have played a significant role in engineering, science, and pure and applied mathematics in recent years. Several researchers have studied the existence of solutions of periodic boundary value problems for fractional differential equations; see, e.g., [1–17]. Some recent contributions to the theory of initial value problems for fractional differential equations can be found in [4, 9]. In [4], by using the fixed point theorem of Schaeffer and the Banach contraction principle, Belmekki et al. obtained the Green's function and gave some existence results for the nonlinear fractional periodic problem $$\begin{aligned}& D_{0+}^{\alpha}u (t) -\lambda u(t)= f\bigl(t, u(t)\bigr),\quad t \in(0, 1]\ (0< \alpha< 1),\\& \lim_{t \to0^{+}}t^{1-\alpha}u(t) = u(1), \end{aligned}$$ where \(f: [0, 1] \times R \to R\) is continuous and the following assumptions hold: there exists a constant \(M >0\) such that $$\bigl| f(t, u) \bigr|\le M, \quad\mbox{for each } t \in(0, 1), u \in R, $$ and there exists a constant \(k > 0\) such that $$\bigl|f(t, u) - f(t, v)\bigr| \le k |u-v|, \quad\mbox{for each } t \in(0, 1), u, v \in R. $$ The above conditions (see Lemma 4.2 of [4]) are very strong. In [13], Wei et al. discussed the properties of the well-known Mittag-Leffler function, and considered the existence and uniqueness of the solution of the periodic boundary value problem for a fractional differential equation involving a Riemann-Liouville fractional derivative, $$\begin{aligned}& D_{0+}^{\alpha}u (t) = f\bigl(t, u(t)\bigr),\quad t \in(0, T)\ (0< \alpha< 1),\\& t^{1-\alpha}u(t)|_{t=0} = t^{1-\alpha}u(t) |_{t=T}, \end{aligned}$$ by using the monotone iterative method. In that result, the boundedness requirement on f in [4] and the monotonicity requirement on f in [9] were removed. However, the application of Lemma 1.1 in the proof of Theorem 3.1 was not correct, because \(\sigma(\eta)(t) \notin C[0, T]\). In other words, the definition of the operator A may not be appropriate. Consequently, while the uniqueness result was correct, the existence result for extremal solutions may be wrong.
In [14], Wei and Dong studied the existence of solutions of the following periodic boundary value problem: $$\begin{aligned}& D_{0+}^{2\alpha} u (t) = f\bigl(t, u(t), D_{0+}^{\alpha}u(t)\bigr),\quad t \in (0, T)\ (0< \alpha< 1),\\& \lim_{t \to0}t^{1-\alpha}u(t) =\lim_{t \to T} t^{1-\alpha}u(t),\\& \lim_{t \to0}t^{1-\alpha} D_{0+}^{\alpha}u(t) =\lim_{t \to T} t^{1-\alpha}D_{0+}^{\alpha}u(t), \end{aligned}$$ where \(D_{0+}^{\alpha}\) is the standard Riemann-Liouville fractional derivative, \(D_{0+}^{2\alpha}u = D_{0+}^{\alpha}(D_{0+}^{\alpha}u)\) is the sequential Riemann-Liouville fractional derivative, \(0 < T < \infty\), and f defined on \([0, T] \times R^{2}\) is continuous. The methods used in [14] are monotone iterative techniques and the Schauder fixed point theorem, under the assumption that upper and lower solutions exist. In this paper, we will focus our attention on the following problem: $$\begin{aligned}& D_{0+}^{\alpha}u(t)=f\bigl(t, u(t)\bigr),\quad t \in(0, h), \end{aligned}$$ (1.1) $$\begin{aligned}& \lim_{t \to0^{+}}t^{1-\alpha}u(t) =h^{1-\alpha} u(h), \end{aligned}$$ (1.2) where \(f\in C([0, h]\times R, R)\), \(D_{0+}^{\alpha}u (t) \) is the Riemann-Liouville fractional derivative, \(0<\alpha< 1\). The existence of the solution is obtained by the use of the upper and lower solution method, which has been used to deal with fractional initial value problems [2]. The remainder of this paper is organized as follows. In Section 2, we recall some notions and the theory of the fractional calculus. Section 3 is devoted to the study of the existence of a solution utilizing the method of upper and lower solutions. The existence of extremal solutions is given. An example is given to illustrate the main result. Given \(0 \le a < b <+\infty\) and \(r>0\), define $$C_{r}[a, b]=\bigl\{ u \mid u \in C(a, b], (t-a)^{r} u(t) \in C[a, b]\bigr\} . $$ Clearly, \(C_{r}[a, b]\) is a linear space with the usual multiplication and addition. Given \(u \in C_{r}[a, b]\), define $$\|u\| = \max_{t \in[a, b]}(t-a)^{r}\bigl|u(t)\bigr|, $$ then \((C_{r}[a, b], \|\cdot\|)\) is a Banach space. Lemma 2.1 ([13]) For \(0 < \alpha\le1\), \(\lambda\ge0\), the Mittag-Leffler type function \(E_{\alpha, \alpha}(-\lambda t^{\alpha}) \) satisfies $$0 \le E_{\alpha, \alpha}\bigl(-\lambda t^{\alpha}\bigr) < \frac{1}{\Gamma(\alpha )},\quad t \in(0, \infty). $$ Lemma 2.2 The linear periodic problem $$\begin{aligned}& D_{0+}^{\alpha}u(t) + \lambda u(t) = q(t), \end{aligned}$$ (2.1) $$\begin{aligned}& \lim_{t \to0^{+}}t^{1-\alpha}u(t) = h^{1-\alpha} u(h), \end{aligned}$$ (2.2) where \(\lambda\ge0\) is a constant and \(q \in L(0, h)\), has the following integral representation of the solution: $$u(t) = \Gamma(\alpha) h^{1-\alpha}u(h) t^{\alpha-1}E_{\alpha, \alpha}\bigl(-\lambda t^{\alpha}\bigr) + \int_{0}^{t} (t-s)^{\alpha-1}E_{\alpha, \alpha} \bigl(-\lambda (t-s)^{\alpha}\bigr)q(s)\,ds. $$ According to [8], for every initial condition $$\lim_{t \to0+}t^{1-\alpha}u(t) = u_{0} $$ the unique solution of equation (2.1) is given by $$u(t) = \Gamma(\alpha)u_{0} t^{\alpha-1}E_{\alpha, \alpha}\bigl(- \lambda t^{\alpha}\bigr) + \int_{0}^{t} (t-s)^{\alpha-1}E_{\alpha, \alpha} \bigl(-\lambda (t-s)^{\alpha}\bigr)q(s)\,ds. $$ In particular, choosing \(u_{0}\) as $$u_{0} = \frac{h^{1-\alpha}\int_{0}^{h} (h-s)^{\alpha-1} E_{\alpha, \alpha }(-\lambda(h-s)^{\alpha})q(s)\,ds}{ 1- \Gamma(\alpha)E_{\alpha, \alpha}(-\lambda h^{\alpha})}, $$ \(u(t)\) satisfies the periodic boundary condition (2.2).
That is to say that the linear periodic problem (2.1), (2.2) has the following integral representation of the solution: $$u(t) = \Gamma(\alpha) h^{1-\alpha}u(h) t^{\alpha-1}E_{\alpha, \alpha }\bigl(- \lambda t^{\alpha}\bigr) + \int_{0}^{t} (t-s)^{\alpha-1}E_{\alpha, \alpha } \bigl(-\lambda(t-s)^{\alpha}\bigr)q(s)\,ds. $$ The proof is complete. □ Lemma 2.3 Suppose that E is an ordered Banach space, \(x_{0}, y_{0} \in E\), \(x_{0} \le y_{0}\), \(D=[x_{0}, y_{0}]\), \(T: D \to E\) is an increasing completely continuous operator and \(x_{0} \le Tx_{0}\), \(y_{0} \ge Ty_{0}\). Then the operator T has a minimal fixed point \(x^{*}\) and a maximal fixed point \(y^{*}\). If we let $$x_{n} = Tx_{n-1},\qquad y_{n} = Ty_{n-1},\quad n=1, 2, 3, \ldots, $$ then $$\begin{aligned}& x_{0} \le x_{1} \le x_{2} \le\cdots\le x_{n} \le\cdots\le y_{n} \le\cdots \le y_{2} \le y_{1} \le y_{0},\\& x_{n} \to x^{*},\qquad y_{n} \to y^{*}. \end{aligned}$$ Definition 2.1 A function \(v(t) \in C_{1-\alpha}[0, h]\) is called a lower solution of problem (1.1), (1.2), if it satisfies $$\begin{aligned}& D_{0+}^{\alpha}v(t) \le f\bigl(t, v(t)\bigr),\quad t \in(0, h), \end{aligned}$$ $$\begin{aligned}& \lim_{t \to0^{+}}t^{1-\alpha}v(t) \le h^{1-\alpha} v(h). \end{aligned}$$ A function \(w(t) \in C_{1-\alpha}[0, h]\) is called an upper solution of problem (1.1), (1.2), if it satisfies $$\begin{aligned}& D_{0+}^{\alpha}w(t) \ge f\bigl(t, w(t)\bigr),\quad t \in(0, h), \end{aligned}$$ $$\begin{aligned}& \lim_{t \to0^{+}}t^{1-\alpha}w(t) \ge h^{1-\alpha} w(h). \end{aligned}$$ The main results The following assumption will be used in this section: (S1) \(f: [0, h] \times R \to R\) is continuous and there exist constants \(A, B \ge0\) and \(0 < r_{1} \le1 < r_{2}<1/(1-\alpha)\) such that for \(t \in[0, h]\) $$ \bigl|f(t, u) - f(t, v)\bigr| \le A|u-v|^{r_{1}} + B |u-v|^{r_{2}},\quad u, v \in R. $$ Theorem 3.1 Suppose (S1) holds. Then u solves problem (1.1), (1.2) if and only if it is a fixed point of the operator \(T_{\lambda}: C_{1-\alpha} [0, h] \to C_{1-\alpha} [0, h]\) defined by $$\begin{aligned} (T_{\lambda}u) (t) =& \Gamma(\alpha)h^{1-\alpha} u(h)t^{\alpha-1} E_{\alpha, \alpha}\bigl(-\lambda t^{\alpha}\bigr)\\ &{} + \int_{0}^{t} (t-s)^{\alpha-1} E_{\alpha, \alpha} \bigl(-\lambda( t-s)^{\alpha}\bigr)\bigl[ f\bigl(s, u(s)\bigr)+\lambda u(s) \bigr]\,ds, \end{aligned}$$ where \(\lambda\geq0\) is a constant. First of all, we show that the operator \(T_{\lambda}\) is well defined. Clearly \(t^{\alpha-1} E_{\alpha, \alpha}(-\lambda t^{\alpha}) \in C_{1-\alpha}[0, h]\), so it is enough to prove that for every \(u \in C_{1-\alpha}[0, h] \), the function $$\int_{0}^{t} (t-s)^{\alpha-1} E_{\alpha, \alpha} \bigl(-\lambda( t-s)^{\alpha}\bigr)\bigl[ f\bigl(s, u(s)\bigr)+\lambda u(s) \bigr]\,ds $$ belongs to \(C_{1-\alpha}[0, h]\). Taking into account that f is continuous on \([0, h] \times R\), for \(u \in C_{1-\alpha}[0, h]\), we have $$\int_{0}^{t} (t-s)^{\alpha-1} E_{\alpha, \alpha} \bigl(-\lambda( t-s)^{\alpha}\bigr)\bigl[ f\bigl(s, u(s)\bigr)+\lambda u(s) \bigr]\,ds \in C(0, h]. $$ On the other hand, under condition (S1), we have $$\bigl|f(t, u)\bigr| \le A|u|^{r_{1}} + B|u|^{r_{2}} + C, $$ where \(C= \max_{t \in[0, h]}f(t, 0)\).
By Lemma 2.1, for \(u \in C_{1-\alpha}[0, h]\), we have $$\begin{aligned} & \biggl\vert t^{1-\alpha} \int_{0}^{t} (t-s)^{\alpha-1} E_{\alpha, \alpha } \bigl(-\lambda( t-s)^{\alpha}\bigr)\bigl[ f\bigl(s, u(s)\bigr)+\lambda u(s) \bigr]\,ds\biggr\vert \\ &\quad \le t^{1-\alpha} \int_{0}^{t} (t-s)^{\alpha-1} E_{\alpha, \alpha } \bigl(-\lambda( t-s)^{\alpha}\bigr)\bigl|f\bigl(s, u(s)\bigr)+\lambda u(s)\bigr|\,ds \\ &\quad \le t^{1-\alpha} \int_{0}^{t} (t-s)^{\alpha-1} E_{\alpha, \alpha } \bigl(-\lambda( t-s)^{\alpha}\bigr) \bigl(A |u|^{r_{1}} + \lambda|u| + B |u|^{r_{2}} + C \bigr)\,ds \\ &\quad \le t^{1-\alpha} \int_{0}^{t} (t-s)^{\alpha-1} E_{\alpha, \alpha } \bigl(-\lambda( t-s)^{\alpha}\bigr) \bigl\{ A s^{(\alpha-1)r_{1}} \bigl[s^{1-\alpha }\bigl|u(s)\bigr| \bigr]^{r_{1}} \\ &\qquad{} +\lambda s^{\alpha-1} s^{1-\alpha}\bigl|u(s)\bigr| + B s^{(\alpha -1)r_{2}} \bigl[s^{1-\alpha}\bigl|u(s)\bigr| \bigr]^{r_{2}} +C \bigr\} \,ds \\ &\quad \le \frac{A \|u\|^{r_{1}} t^{1-\alpha}}{\Gamma(\alpha)} \int_{0}^{t} (t-s)^{\alpha-1} s^{(\alpha-1)r_{1}} \,ds + \frac{\lambda\|u\|t^{1-\alpha }}{\Gamma(\alpha)} \int_{0}^{t} (t-s)^{\alpha-1} s^{\alpha-1}\,ds \\ &\qquad{} + \frac{B \|u\|^{r_{2}} t^{1-\alpha}}{\Gamma(\alpha)} \int_{0}^{t} (t-s)^{\alpha-1} s^{(\alpha-1)r_{2}} \,ds + \frac{Ct}{\Gamma(\alpha+1)} \\ &\quad \le A \|u\|^{r_{1}} \frac{\Gamma((\alpha-1)r_{1}+1)}{\Gamma((\alpha -1)r_{1}+\alpha+1)} t^{(\alpha-1)r_{1} +\alpha+1-\alpha} + \lambda\|u\| \frac{\Gamma(\alpha)}{\Gamma(2\alpha)} t^{\alpha} \\ &\qquad{} +B \|u\|^{r_{2}} \frac{\Gamma((\alpha-1)r_{2}+1)}{\Gamma((\alpha -1)r_{2}+\alpha+1)} t^{(\alpha-1)r_{2} +\alpha+1-\alpha} + \frac{Ct}{\Gamma (\alpha+1)} \\ &\quad \le\frac{\Gamma[(\alpha-1)r_{1}+1]\cdot A \cdot t^{(\alpha -1)r_{1}+1}}{\Gamma[(\alpha-1)r_{1}+\alpha+1]}\|u\|^{r_{1}} + \lambda\|u\| \frac{\Gamma(\alpha)}{\Gamma(2\alpha)} t^{\alpha} \\ &\qquad{} +\frac{\Gamma[(\alpha-1)r_{2}+1]\cdot B \cdot t^{(\alpha -1)r_{2}+1}}{\Gamma[(\alpha-1)r_{2}+\alpha+1]}\|u\|^{r_{2}} + \frac{Ct}{\Gamma (\alpha+1)}. \end{aligned}$$ That is to say that $$\int_{0}^{t} (t-s)^{\alpha-1} E_{\alpha, \alpha} \bigl(-\lambda( t-s)^{\alpha}\bigr)\bigl[ f\bigl(s, u(s)\bigr)+\lambda u(s) \bigr]\,ds \in C_{1-\alpha}[0, h]. $$ The above inequalities and the assumption \(0< r_{1} \le1 < r_{2} < 1/(1-\alpha)\) imply that $$\lim_{ t\to0+} t^{1-\alpha} \int_{0}^{t} (t-s)^{\alpha-1} E_{\alpha, \alpha } \bigl(-\lambda( t-s)^{\alpha}\bigr)\bigl[ f\bigl(s, u(s)\bigr)+\lambda u(s) \bigr]\,ds=0. $$ Combining with the fact that \(\lim_{t \to0+}E_{\alpha, \alpha }(-\lambda t^{\alpha}) = E_{\alpha, \alpha}(0)=1/\Gamma(\alpha)\) yields $$\lim_{t \to0+} t^{1-\alpha}(T_{\lambda}u)(t) = h^{1-\alpha}u(h). $$ The above arguments combined with Lemma 2.2 imply that the fixed point of the operator \(T_{\lambda}\) solves the periodic boundary value problem (1.1), (1.2), and vice versa. The proof is complete. □ In the following, we consider the compactness of subsets of the space \(C_{r}[0, h]\). Let \(F \subset C_{r}[0, h]\) and \(E= \{g(t)= t^{r} u(t) \mid u(t)\in F\}\), then \(E \subset C[0, h]\). It is clear that F is a bounded set of \(C_{r}[0, h]\) if and only if E is a bounded set of \(C[0, h]\). Therefore, to prove that \(F \subset C_{r}[0, h]\) is a compact set, it is enough to prove that \(E \subset C[0, h]\) is a bounded and equicontinuous set. Theorem 3.2 Suppose (S1) holds. Then the operator \(T_{\lambda}: C_{1-\alpha}[0, h] \to C_{1-\alpha}[0, h]\) is completely continuous.
Given \(u_{n} \to u \in C_{1-\alpha}[0, h]\), with the definition of \(T_{\lambda}\), the condition (S1), and Lemma 2.1, one has $$\begin{aligned} &\|T_{\lambda}u_{n} -T_{\lambda}u\| \\ &\quad= \bigl\| t^{1-\alpha}(T_{\lambda}u_{n} - T_{\lambda}u) \bigr\| _{\infty}\\ &\quad= \max_{0 \le t \le h} \biggl\{ \bigl\vert \Gamma( \alpha)h^{1-\alpha }E_{\alpha, \alpha}\bigl(-\lambda t^{\alpha}\bigr) \bigl[u_{n}(h)-u(h)\bigr]\bigr\vert \\ &\qquad{}+ \biggl\vert t^{1-\alpha} \int_{0}^{t} (t-s)^{\alpha-1}E_{\alpha, \alpha } \bigl(-\lambda( t-s)^{\alpha}\bigr)\bigl[f(s, u_{n})-f(s, u)+ \lambda(u_{n}-u)\bigr]\,ds\biggr\vert \biggr\} \\ &\quad\le\frac{1}{\Gamma(\alpha)} \max_{0 \le t \le h} t^{1-\alpha} \int _{0}^{t} (t-s)^{\alpha-1} \bigl[A|u_{n}-u|^{r_{1}} + B|u_{n}-u|^{r_{2}} + \lambda |u_{n}-u|\bigr]\,ds \\ &\qquad{} +\|u_{n} -u\| \\ &\quad\le\frac{1}{\Gamma(\alpha)} \biggl[A \max_{0 \le t \le h} t^{1-\alpha } \int_{0}^{t} (t-s)^{\alpha-1}\cdot s^{-r_{1}(1-\alpha)} \cdot s^{r_{1}(1-\alpha)}\cdot|u_{n}-u|^{r_{1}} \,ds \\ &\qquad{} + \lambda\max_{0 \le t \le h} t^{1-\alpha} \int_{0}^{t} (t-s)^{\alpha -1}\cdot s^{-(1-\alpha)} \cdot s^{(1-\alpha)}\cdot|u_{n}-u|\,ds \\ &\qquad{} + B \max_{0 \le t \le h} t^{1-\alpha} \int_{0}^{t} (t-s)^{\alpha -1}\cdot s^{-r_{2}(1-\alpha)} \cdot s^{r_{2}(1-\alpha)}\cdot |u_{n}-u|^{r_{2}} \,ds \biggr] +\|u_{n} -u\| \\ &\quad\le\frac{1}{\Gamma(\alpha)} \biggl[A \|u_{n}-u\|^{r_{1}}\max _{0 \le t \le h} t^{1-\alpha} \int_{0}^{t} (t-s)^{\alpha-1}\cdot s^{-r_{1}(1-\alpha)}\,ds \\ &\qquad{} + \lambda\|u_{n}-u\|\max_{0 \le t \le h} t^{1-\alpha} \int_{0}^{t} (t-s)^{\alpha-1}\cdot s^{-(1-\alpha)}\,ds \\ &\qquad{} + B \|u_{n}-u\|^{r_{2}}\max_{0 \le t \le h} t^{1-\alpha} \int _{0}^{t} (t-s)^{\alpha-1}\cdot s^{-r_{2}(1-\alpha)}\,ds \biggr] +\|u_{n} -u\| \\ &\quad\le \frac{A\|u_{n}-u\|^{r_{1}} \Gamma[1-r_{1}(1-\alpha)]}{\Gamma [1-r_{1}(1-\alpha) +\alpha]} h^{1-r_{1}(1-\alpha)} + \frac{\lambda\|u_{n}-u\| \Gamma[\alpha]}{\Gamma[2\alpha]} h^{\alpha} \\ &\qquad{} +\frac{B\|u_{n}-u\|^{r_{2}} \Gamma[1-r_{2}(1-\alpha)]}{\Gamma [1-r_{2}(1-\alpha) +\alpha]} h^{1-r_{2}(1-\alpha)} +\|u_{n} -u\| \\ &\quad \to0\quad (n \to\infty). \end{aligned}$$ That is to say that \(T_{\lambda}\) is continuous. Suppose that \(F \subset C_{1-\alpha}[0, h]\) is a bounded set and there is a positive constant M such that \(\|u\| \le M\) for \(u \in F\). The proof process of Theorem 3.1 shows that \(T_{\lambda}(F) \subset C_{1-\alpha}[0, h] \) is bounded. We omit the details of the proof of the equicontinuity of \(T_{\lambda}(F)\) here and refer the reader to [2] for similar details. The proof is complete. □ Theorem 3.3 Assume that (S1) holds and \(v, w \in C_{1-\alpha}[0, h]\) are lower and upper solutions of problem (1.1), (1.2), respectively, such that $$ v(t) \le w(t),\quad 0\le t\leq h. $$ (3.2) Moreover, \(f: [0, h] \times R \to R\) satisfies $$ f(t, x) - f(t, y) + \lambda(x-y) \ge0, \quad\textit{for } v \le y \le x \le w. $$ (3.3) Then the fractional periodic boundary value problem (1.1), (1.2) has a minimal solution \(x^{*}\) and a maximal solution \(y^{*}\) such that $$x^{*} = \lim_{n \to\infty} T_{\lambda}^{n}v,\qquad y^{*} = \lim_{n \to\infty} T_{\lambda}^{n}w. $$ Clearly, if the functions v, w are lower and upper solutions (or strict) of problem (1.1), (1.2), then \(v \le T_{\lambda}v\), \(w \ge T_{\lambda}w\) (or the inequality is strict).
In fact, by the definition of the lower solution, there exist \(q(t) \ge0\) and \(\epsilon\ge0\) such that $$\begin{aligned}& D_{0+}^{\alpha}v(t) = f\bigl(t, v(t)\bigr) -q(t),\quad t \in(0, h),\\& \lim_{t \to0^{+}}t^{1-\alpha} v(t) =h^{1-\alpha}v(h) - \epsilon. \end{aligned}$$ By the use of Theorem 3.1 and Lemma 2.1, one has $$\begin{aligned} v(t) =& \Gamma(\alpha) \bigl(h^{1-\alpha}v(h) -\epsilon \bigr)t^{\alpha-1} E_{\alpha, \alpha}\bigl(-\lambda t^{\alpha}\bigr) \\ &{} + \int_{0}^{t} (t-s)^{\alpha-1} E_{\alpha, \alpha} \bigl(-\lambda( t-s)^{\alpha}\bigr)\bigl[ f\bigl(s, v(s)\bigr)+\lambda v(s)-q(s)\bigr]\,ds \\ \le&(T_{\lambda}v) (t). \end{aligned}$$ Similarly, we have \(w \ge T_{\lambda}w\). By condition (3.3) and Theorem 3.2, the operator \(T_{\lambda}: C_{1-\alpha }[0, h]\to C_{1-\alpha}[0, h]\) is an increasing completely continuous operator. Setting \(D:= [v, w]\), by the use of Lemma 2.3, the existence of \(x^{*}\), \(y^{*}\) is obtained. The proof is complete. □ The main result is a consequence of the classical monotone iterative technique [19, 20]. However, the periodic condition is not the same. Example 3.1 Consider the following periodic fractional boundary value problem: $$\begin{aligned}& D_{0+}^{\alpha}u(t)=f\bigl(t, u(t)\bigr),\quad t \in(0, h), \end{aligned}$$ (3.4) $$\begin{aligned}& \lim_{t \to0^{+}}t^{1-\alpha}u(t) = h^{1-\alpha}u(h), \end{aligned}$$ (3.5) where \(\alpha=0.3\), \(h=0.7\), \(f(t,u)=\frac{t}{10} [1+u(t)]\). Obviously, the function \(f(t, u)\) satisfies condition (3.3) and (S1), \(f(t,0)\geq0\), and \(f(t,0) \not\equiv0\) for \(t \in[0, h]\). Thus, \(v(t) \equiv0\) is a lower solution of problem (3.4), (3.5). Choosing \(u(t) = 2t^{\alpha-1}\cos(2t) +t^{\alpha}\), one can check that \(u\in C_{1-\alpha}[0, h]\) is an upper solution of problem (3.4), (3.5), and that \(v(t) \leq u(t)\) for \(t \in[0, h]\). By the use of Theorem 3.3, problem (3.4), (3.5) has at least one solution.
[1] Ahmad, B, Nieto, JJ: Existence results for nonlinear boundary value problems of fractional integro-differential equations with integral boundary conditions. Bound. Value Probl. 2009, Article ID 708576 (2009)
[2] Bai, Z: Monotone iterative method for a class of fractional differential equations. Electron. J. Differ. Equ. 2016, 6 (2016)
[3] Bai, Z: Theory and Applications of Fractional Differential Equation Boundary Value Problems. China Sci. Tech., Beijing (2013) (in Chinese)
[4] Belmekki, M, Nieto, JJ, Lopez, RR: Existence of periodic solution for a nonlinear fractional differential equation. Bound. Value Probl. 2009, Article ID 324561 (2009)
[5] Deekshitulu, GVSR: Generalized monotone iterative technique for fractional R-L differential equations. Nonlinear Stud. 16, 85-94 (2009)
[6] Dong, X, Bai, Z, Zhang, W: Positive solutions for nonlinear eigenvalue problems with conformable fractional differential derivatives. J. Shandong Univ. Sci. Technol. Nat. Sci. 35(3), 85-90 (2016) (in Chinese)
[7] Jia, M, Liu, X: Multiplicity of solutions for integral boundary value problems of fractional differential equations with upper and lower solutions. Appl. Math. Comput. 232, 313-323 (2014)
[8] Kilbas, AA, Srivastava, HM, Trujillo, JJ: Theory and Applications of Fractional Differential Equations. Elsevier, Amsterdam (2006)
[9] Lakshmikantham, V, Vatsala, AS: Basic theory of fractional differential equations. Nonlinear Anal. 69, 2677-2682 (2008)
[10] Nieto, JJ: Maximum principles for fractional differential equations derived from Mittag-Leffler functions. Appl. Math. Lett. 23, 1248-1251 (2010)
[11] Samko, SG, Kilbas, AA, Marichev, OI: Fractional Integrals and Derivatives, Theory and Applications. Gordon & Breach, Amsterdam (1993)
[12] Schneider, WR: Completely monotone generalized Mittag-Leffler functions. Expo. Math. 14, 3-16 (1996)
[13] Wei, Z, Dong, W, Che, J: Periodic boundary value problems for fractional differential equations involving a Riemann-Liouville fractional derivative. Nonlinear Anal. 73, 3232-3238 (2010)
[14] Wei, Z, Dong, W: Periodic boundary value problems for Riemann-Liouville fractional differential equations. Electron. J. Qual. Theory Differ. Equ. 2011, 87 (2011)
[15] Yin, C, Chen, Y, Zhong, S: Fractional-order sliding mode based extremum seeking control of a class of nonlinear system. Automatica 50, 3173-3181 (2014)
[16] Wu, HH, Sun, SJ: Multiple positive solutions for a fourth order boundary value via variational method. J. Shandong Univ. Sci. Technol. Nat. Sci. 33(2), 96-99 (2014) (in Chinese)
[17] Zhou, Y: Basic Theory of Fractional Differential Equations. World Scientific, Singapore (2014)
[18] Guo, D, Sun, J, Liu, Z: Functional Methods in Nonlinear Ordinary Differential Equations. Shandong Sci. Tech., Jinan (1995) (in Chinese)
[19] Ladde, GS, Lakshmikantham, V, Vatsala, AS: Monotone Iterative Techniques for Nonlinear Differential Equations. Pitman, Boston (1985)
[20] Nieto, JJ: An abstract monotone iterative technique. Nonlinear Anal. TMA 28, 1923-1933 (1997)
The authors express their sincere thanks to the anonymous reviewers for their valuable suggestions and corrections for improving the quality of the paper. This work is supported by NSFC (11571207) and the Taishan Scholar project.
Affiliations: College of Mathematics and System Science, Shandong University of Science and Technology, Qianwangang Road, Qingdao, 266590, P.R. China (Wei Zhang, Zhanbing Bai); College of Information Science and Technology, Shandong University of Science and Technology, Qianwangang Road, Qingdao, 266590, P.R. China (Sujing Sun). Correspondence to Zhanbing Bai. All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Zhang, W., Bai, Z. & Sun, S. Extremal solutions for some periodic fractional differential equations. Adv Differ Equ 2016, 179 (2016). https://doi.org/10.1186/s13662-016-0869-4 Accepted: 23 May 2016. MSC: 34A08. Keywords: fractional periodic boundary value problem; extremal solution.
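For readers who wish to experiment with the scheme of Theorem 3.3, the following is a minimal, purely illustrative Python sketch of the monotone iteration \(u_{n+1}=T_{\lambda }u_{n}\) applied to Example 3.1, starting from the lower solution \(v\equiv 0\). It is not the authors' code: it assumes \(\lambda =1\) (admissible, since \(f(t,u)=\frac{t}{10}(1+u)\) is increasing in u, so condition (3.3) holds for any \(\lambda \ge 0\)), truncates the Mittag-Leffler series, and integrates the weakly singular kernel exactly on each subinterval via the identity \(\frac{d}{dx}[x^{\alpha }E_{\alpha ,\alpha +1}(-\lambda x^{\alpha })]=x^{\alpha -1}E_{\alpha ,\alpha }(-\lambda x^{\alpha })\), freezing \(f(s,u(s))+\lambda u(s)\) at left endpoints (the first, singular panel is therefore handled only crudely).

```python
import math
import numpy as np

def ml(a, b, z, K=60):
    """Truncated series for the Mittag-Leffler function E_{a,b}(z)."""
    return sum(z ** k / math.gamma(a * k + b) for k in range(K))

alpha, h, lam = 0.3, 0.7, 1.0      # data of Example 3.1; lambda = 1 assumed
N = 200                            # number of grid intervals on [0, h]
s = np.linspace(0.0, h, N + 1)

def f(t, u):                       # nonlinearity of Example 3.1
    return (t / 10.0) * (1.0 + u)

def G(x):
    # Antiderivative of the kernel: G'(x) = x^(alpha-1) E_{alpha,alpha}(-lam x^alpha),
    # so the kernel integral over [s_j, s_{j+1}] equals G(t - s_j) - G(t - s_{j+1}).
    return x ** alpha * ml(alpha, alpha + 1.0, -lam * x ** alpha)

def T(u):
    """Discretization of the operator T_lambda of Theorem 3.1."""
    g = f(s, u) + lam * u
    Tu = np.zeros_like(u)
    for i in range(1, N + 1):
        t = s[i]
        w = G(t - s[:i]) - G(t - s[1:i + 1])      # exact kernel integrals
        Tu[i] = (math.gamma(alpha) * h ** (1.0 - alpha) * u[N]
                 * t ** (alpha - 1.0) * ml(alpha, alpha, -lam * t ** alpha)
                 + w @ g[:i])
    return Tu

u = np.zeros(N + 1)                # start from the lower solution v = 0
for n in range(400):
    u_new = T(u)
    if np.max(np.abs(u_new - u)) < 1e-6:
        break
    u = u_new
print(f"stopped after {n + 1} iterations, u(h) ~ {u[N]:.6f}")
```

By Lemma 2.3, the iterates produced this way form a monotone increasing sequence converging to the minimal solution \(x^{*}\); starting instead from a discretized upper solution would approximate \(y^{*}\).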
Latin American Economic Review Skills versus Luck: Bolivia and its recent Bonanza Rómulo A. Chumacero ORCID: orcid.org/0000-0001-7218-04401 Latin American Economic Review volume 28, Article number: 7 (2019) This paper uses different approaches to determine the contribution of internal policies and external factors to the good performance of the Bolivian economy in the recent past. It is demonstrated that the extremely favorable external conditions are mainly responsible for its bonanza, and that the domestic policies have probably caused more harm than good. The extent of the harm ranges from 2% to up to 6.1% of GDP per capita. Governments have the tendency to attribute good outcomes to their policies (skills) and bad ones to negative external shocks outside of their control (luck). That is certainly the case in Bolivia. As in many other dimensions, the economic debate regarding the sources of the economic performance during the (continuing) tenure of President Evo Morales is polarized. Opponents consider that most (if not all) of the good economic indicators are due to extraordinarily favorable external conditions.Footnote 1 Acolytes consider that a major role has to be attributed to the heterodox policies that the government has implemented, and compare recent economic indicators favorably against those of what they tend to call the "neoliberal past" (Arce 2016). Economists have long understood that one of the most prevalent and pernicious logical fallacies is to attribute a causal effect to a correlation.Footnote 2 Thus, a rigorous effort to ascertain how important skills (or lack thereof) and luck were requires more than informal comparisons, declarations, and charts.Footnote 3 Answering this question is not easy, as an obvious counterfactual is not readily available. That is, ideally, we would like to know how Bolivia would have fared if faced with the same external conditions, but with different internal policies than the ones pursued by President Morales. In experimental terms, we would like to quantify the effects of receiving the "Evo Morales treatment," when no natural control group is available. This paper attempts to do so, using three methodologies. For completeness, each one is described, justified, and implemented. Presenting more than one approach helps to narrow down the possible answers, highlights their strengths and weaknesses, and provides robustness checks. The rest of the paper is organized as follows: Sect. 2 describes and implements a methodology known as the synthetic control method. Section 3 uses a panel data model. Section 4 uses a simple dynamic stochastic general equilibrium model (DSGE). Finally, Sect. 5 concludes. Synthetic control approach Establishing a counterfactual to evaluate the relative merits of the policies implemented in Bolivia since 2006 is not simple. Ideally, we would like to have Bolivia face the same external conditions, but carry out different policies. In that case, the differences in outcomes between the treatment unit (policies implemented during Morales's Presidency) and the control unit could, in principle, be attributed to the policies. Abadie and Gardeazabal (2003) and Abadie et al. (2010) proposed a method that creates a statistical synthetic control that can be used as a counterfactual on a given treatment. Closely related to our approach, Grier and Maynard (2016) applied this method to assess the economic consequences of Venezuela's Hugo Chavez.
This section describes the methodology, discusses practical issues of its implementation, and presents its results when applied to Bolivia. As the ideal experiment, in which the systematic difference between two groups is clearly defined, is generally not available in social sciences, causal inference about the effects of events requires the development of different techniques. When there are several control and treatment units, facing similar environments, with or without common trends, a number of econometric techniques can be used to measure average treatment effects on the treated.Footnote 4 In the present circumstances, there is only one treatment unit (Bolivia, under the Presidency of Evo Morales) and no natural control units that can be used as counterfactuals. As Grier and Maynard (2016) did for Venezuela, we want to evaluate how different variables would have evolved without Evo Morales and his policies. Synthetic control methods were specifically developed to address issues like this one. Intuitively, we want to construct a synthetic Bolivia that is a weighted average of other countries, and that closely resembles conditions in Bolivia prior to the "treatment," which, in this case, is Evo Morales taking office in 2006. If the synthetic control captures other influences common to Bolivia, we use this synthetic Bolivia to compare the effect of Evo Morales on different outcomes.Footnote 5 Following Abadie and Gardeazabal (2003), let \(X_{1}\) be a k \(\times\) 1 vector of pre-treatment observations of economic and social indicators of the treatment unit (Bolivia).Footnote 6 Let \(X_{0}\) be a k \(\times\) N matrix which contains k observations of the same variables as \(X_{1}\), but for the N countries considered to build the synthetic control. Let V be a k \(\times\) k diagonal weighting matrix that reflects the relative importance of the k indicators considered in \(X_{0}\) and \(X_{1}\). Given V, the objective is to find the N \(\times\) 1 vector of weights \(W\left( V\right)\) that minimizes the objective function: $$\begin{aligned} W\left( V\right) =\underset{W}{\arg \min }\left( X_{1}-X_{0}W\right) ^{\prime }V\left( X_{1}-X_{0}W\right) , \end{aligned}$$ subject to the constraints that each element of W satisfies \(w_{i}\ge 0\) (\(i=1,2,...,N\)) and \(W^{\prime }\iota =1,\) where \(\iota\) is an N—vector of ones. As in Abadie and Gardeazabal (2003), let \(Z_{1}\) be a j \(\times\) 1 vector of pre-treatment observations of the variable that we are interested in comparing. Let \(Z_{0}\) be the j \(\times\) N matrix which contains j observations of the same variable of interest, for the N potential controls. Define \({\widehat{V}}\) as the nonnegative diagonal weighting matrix that solves: $$\begin{aligned} {\widehat{V}}=\underset{V}{\arg \min }\left( Z_{1}-Z_{0}W\left( V\right) \right) ^{\prime }\left( Z_{1}-Z_{0}W\left( V\right) \right) , \end{aligned}$$ so that \({\widehat{W}}=W\left( {\widehat{V}}\right)\) is the vector of weights that minimizes (1), given the matrix V that minimizes (2). Several observations are in order. First, given V, the minimization of (1) is simple as it corresponds to a quadratic programming problem with linear constraints that has an analytical solution. Second, although the simultaneous optimization of (1) and (2) could be conducted, the objective function is no longer quadratic, and numerical methods would be required.
Third, a simpler way to proceed is to apply a sequential approach, in which we first consider a wide variety of candidates for V, obtain the value of W that minimizes (1) for each candidate, and evaluate which of those candidates minimizes (2). This procedure is numerically more stable than the simultaneous approach, as long as the candidates for V are dense enough. This is the approach that we pursue here.Footnote 7 Once the optimal weights, \({\widehat{W}}\), are obtained, we can evaluate if, before the treatment, the synthetic control thus generated resembles the economy that we are trying to replicate. For that, we can consider the difference between \(X_{1}\) and \({\widehat{X}}_{1}\), and \(Z_{1}\) and \(\widehat{Z }_{1}\), where: $$\begin{aligned} {\widehat{X}}_{1}=X_{0}{\widehat{W}},\quad {\widehat{Z}}_{1}=Z_{0}{\widehat{W}}. \end{aligned}$$ That is, we evaluate if the weighted average of the control group (other countries) approximates well the pre-treatment characteristics of Bolivia (\(X_{1}\)), and the behavior of the pre-treatment variable of interest (\(Z_{1}\)). To evaluate the effect of the treatment, we can use the vector of weights \({\widehat{W}}\), obtained using pre-treatment information, to compare the post-treatment outcomes of the variable of interest. That is, let \(Y_{1}\) be a l \(\times\) 1 vector of post-treatment observations of the variable that we are interested in comparing. Let \(Y_{0}\) be the l \(\times\) N matrix which contains l observations of the same variable of interest for the N potential controls. The goal of approximating the behavior that the outcome variable of interest would have had in the absence of Evo Morales can be achieved by considering the counterfactual generated by our synthetic Bolivia. Thus, the effect of Evo Morales on the variable of interest can be defined as: $$\begin{aligned} \Delta =Y_{1}-{\widehat{Y}}_{1}=Y_{1}-Y_{0}{\widehat{W}}. \end{aligned}$$ To address the issue of the significance of the results, and whether the differences can be attributed to the treatment, Abadie et al. (2010) suggest performing a "placebo study." It consists of applying the synthetic control method to all the countries in the control group (that, of course, did not have Evo Morales as their president). If the placebos generate gaps similar to the ones of (3), we would conclude that the treatment (Evo Morales) did not have a significant effect on the variable of interest. On the other hand, if the treated unit displays a different behavior than the placebos, we would conclude that there is significant evidence of the effect of the treatment. Here we present the results of applying the methodology described above to evaluate the effects of the Evo Morales Presidency on (the natural log of) GDP per capita at purchasing power parity (PPP).Footnote 8 We consider the years 1992–2005 for constructing \(Z_{1}\) in the pre-treatment period, and the period 2006–2016 as the treatment period. Table 1 presents the results of applying the methodology described above to obtain the optimal weights \({\widehat{W}}\). Eight countries have positive weights to compose the synthetic control (Cote d'Ivoire, Kyrgyz Republic, Niger, Peru, Togo, UK, Uruguay, and Zimbabwe). Table 1 Optimal weights for synthetic control: GDP per capita Table 2 presents the list of pre-treatment characteristics that we seek to match.
These characteristics take into account not only conventional factors considered as determinants of economic conditions, but also other social indicators that provide a broad characterization of the pre-treatment conditions faced by Bolivia.Footnote 9 The table shows that Bolivia is considerably poorer than the simple average of the 84 countries for which the information is available, and that are used to obtain the synthetic control. It also shows that, with the possible exception of energy use, the synthetic control does a good job replicating Bolivia's pre-treatment characteristics. Suffice it to say that synthetic Bolivia has a RMSPE that is more than eleven times smaller than the average of all countries. Table 2 Pre-treatment characteristics Figures 1 and 2 display the evolution of the level of GDP per capita (at PPP) for Bolivia, the average of the 84 countries considered as possible controls, and the synthetic control. As happened with the pre-treatment characteristics, the average corresponds to more than two times the income of Bolivia, thus not being a suitable control. However, the synthetic control matches the behavior of Bolivia very closely up to the period of the treatment. After 2005, the synthetic control and actual Bolivia sharply diverge, especially up to 2014. In particular, if synthetic Bolivia were a valid counterfactual, the gap between actual Bolivia and the synthetic control is, on average, US$ 270 (at PPP).
Fig. 1 GDP per capita (at PPP): possible controls. The solid line represents Bolivia. The dashed line represents the simple average of the 84 countries considered as possible controls. The vertical line corresponds to the year prior to the treatment (2005)
Fig. 2 GDP per capita (at PPP): Bolivia and synthetic Bolivia. The solid line represents Bolivia. The dashed line represents synthetic Bolivia. The vertical line corresponds to the year prior to the treatment (2005)
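To make the sequential procedure behind Eqs. (1)–(3) concrete, the following is a minimal Python sketch, not the code used for the results above. It assumes X1 is the k-vector of Bolivia's pre-treatment characteristics, X0 the k×N matrix for the candidate controls, Z1 and Z0 the corresponding pre-treatment outcome data, and v_grid a user-supplied collection of candidate diagonals for V; all of these names are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def weights_given_V(X1, X0, v):
    """Solve Eq. (1): min (X1 - X0 w)' V (X1 - X0 w) s.t. w >= 0, sum w = 1.
    The array v holds the diagonal of V."""
    k, n = X0.shape
    def loss(w):
        d = X1 - X0 @ w
        return d @ (v * d)
    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    res = minimize(loss, np.full(n, 1.0 / n), method="SLSQP",
                   bounds=[(0.0, 1.0)] * n, constraints=cons)
    return res.x

def synthetic_control(X1, X0, Z1, Z0, v_grid):
    """Sequential search over candidate diagonal V's, as in Eq. (2):
    pick the V whose implied weights best fit the pre-treatment outcome."""
    best_w, best_loss = None, np.inf
    for v in v_grid:
        w = weights_given_V(X1, X0, v)
        loss = np.sum((Z1 - Z0 @ w) ** 2)
        if loss < best_loss:
            best_w, best_loss = w, loss
    return best_w

# The post-treatment gap of Eq. (3) is then simply: gap = Y1 - Y0 @ best_w
```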
The gray lines represent placebo tests deviations for the other countries in the data set. The graph excludes countries with pre-treatment RMSE 2.2 times higher than Bolivia. The vertical line corresponds to the year prior to the treatment (2005) Again, following Abadie et al. (2010), one final way to compare the Bolivia gap relative to the gaps in the placebo test, is to consider the ratios of the post-/pre-treatment RMSE for each country and their synthetic control. A large ratio indicates that the post-treatment period is significantly different than the pre-treatment period. Figure 4 shows that the post-treatment Bolivia is significantly different than the pre-treatment Bolivia when compared to its synthetic control. Indeed, the post-treatment RMSE is more than three times higher than the pre-treatment RMSE, pertaining to the right tail of the distribution. Ratio of post- and pre-treatment RMSE. Estimation of the empirical density of post-/pre-treatment RMSE using the Epanechnikov kernel. The vertical line corresponds to the ratio of RMSE of Bolivia Summarizing, we show that Bolivia underperformed when compared to its synthetic counterfactual, in terms of the behavior of GDP per capita. While the synthetic control replicates relatively well broad characteristics of Bolivia prior to the treatment, it performs markedly better than actual Bolivia since the year 2006. This exercise suggest that, on average, Bolivia may have lost 4.7% of GDP per capita because of the treatment (Evo Morales Presidency). Furthermore, as evidenced in Appendix 2, there is no evidence that the Morales Presidency achieved better results than its synthetic counterpart when considering outcomes such as inequality (measured by the Gini coefficient), school dropout rate, infant mortality rate, or life expectancy. Although not in the same magnitude, these results are similar in spirit, to those encountered by Grier and Maynard (2016) for Chavez's Venezuela. Panel data approach The previous section tackles the fact that there is no obvious control group with which to evaluate the effect of the Evo Morales Presidency. That is so, because the other countries differ from Bolivia, not only on who is their President, but also on their economic structure, external shocks they face, etc. The synthetic control approach deals with this problem by building an artificial control that, hopefully, resembles the actual Bolivia, prior to the treatment, and compares an outcome variable of the actual Bolivia, with the synthetic counterfactual, after the treatment on Bolivia. Under certain conditions, cross-country panel data models could be used to evaluate this same question, without having to construct a synthetic control.Footnote 14 This section describes the methodology followed. If anything, this approach can be used to assess the robustness of the results obtained above. Lucas (1987) describes methodological aspects that should be considered to evaluate alternative policies. First, describe the environment in which the agents interact, their preferences, technologies, and constraints they face. Second, determine their optimal plans or policy functions of the control variables (y). Then, alter the laws of motion of the state or forcing variables (x). Finally, evaluate how the agents react to these alterations. Under certain conditions, an econometric exercise can achieve the same goal. 
To do so, consider the conditional density as the empirical equivalent of the policy function, and the marginal density as the law of motion of the state variables. Recalling that the joint density can be written as the product of the conditional density and the marginal density, we have: $$\begin{aligned} f(y_{t},x_{t},\theta )=f(y_{t}\left| x_{t},\theta _{1}\right. )f(x_{t},\theta _{2}), \end{aligned}$$ where \(f(x_{t},\theta _{2})=\int _{-\infty }^{\infty }f(y_{t},x_{t},\theta )dy\) is the marginal density of x, and \(\theta\), \(\theta _{1}\), and \(\theta _{2}\) are the vectors of parameters of the joint, conditional, and marginal densities, respectively. Regression analysis usually focuses on statistical inference for some moments of the conditional density. Doing so while ignoring the marginal density requires \(\theta _{1}\) and \(\theta _{2}\) to be "variation free".Footnote 15 This condition is not sufficient for conducting counterfactual analysis. To do so, the conditional density (or its relevant moments) must be structurally invariant. That is, the parameters of interest in the conditional density must remain unchanged in the presence of an unstable marginal density.Footnote 16 If those conditions are met, evaluating counterfactual scenarios would imply comparing the forecasts of the variable of interest under alternative trajectories of the state variables. We are interested in evaluating how external shocks and internal conditions (or policies) affected the trajectory of the variable of interest: the (log of) GDP per capita. Considering solely information on Bolivia would be advisable if enough quality data for approximating internal conditions were available. Unfortunately, this is not the case. An alternative approach, followed here, is to consider a panel data structure. Under certain homogeneity conditions, this structure would lead not only to consistent, but also to more efficient estimates than the ones obtained with time series models for one country. Of course, panel data models tend to assume that the only source of heterogeneity (difference) among units is a fixed effect, which can be a strong assumption. Formally, we estimate the following simple dynamic specification: $$\begin{aligned} y_{i,t}=\alpha _{i}+\sum _{j=1}^{J}\delta _{j}y_{i,t-j}+\beta x_{i,t}+\gamma p_{i,t}+\phi x_{i,t}p_{i,t}+\theta t+u_{i,t} \end{aligned}$$ where \(y_{i,t}\) is the (log of) GDP per capita (at PPP) of country i in year t, \(x_{i,t}\) is a proxy for external conditions, p is a proxy for internal policies, t is a time trend, and J is the number of lags necessary to make \(u_{i,t}\) a white noise process for each country. A few comments regarding (4) are in order. First, \(\alpha _{i}\) summarizes the heterogeneity considered as a fixed effect. Second, the lags of the dependent variable are intended to capture the persistence of the series. Third, the series of interest is considered to be trend stationary.Footnote 17 Fourth, this specification allows for differences in the short- and long-run impacts of changes in the forcing variables. To conduct valid inference, this specification requires that x and p be weakly exogenous to the parameters of interest. Furthermore, to conduct counterfactual evaluations, these variables must be super exogenous to the parameters of interest. Finally, in the spirit of Chang et al.
(2009), we consider the interaction between internal and external factors, as described by the parameter \(\phi\).Footnote 18 The main idea is that good policies may act as enhancers (complements) of good external conditions.Footnote 19 The choice of a simple structure is deliberate, as there are not many observations in the panel structure for Bolivia. If the model were misspecified (as all are), we would want to avoid systematic and persistent errors for Bolivia. Thus, after the estimation of the parameters using a panel structure, we conduct exogeneity and specification tests for the residuals of Bolivia. Next, we describe the choice of variables and the results of implementing this strategy. A natural proxy for external conditions is the evolution of (the log of) the terms of trade, which we use. Internal conditions are trickier to proxy, as they should reflect policy decisions and their distortionary effects on the decisions of private agents. Furthermore, the variable(s) used to characterize this feature should have a time span useful for estimation and be comparable between countries.Footnote 20 Chumacero and Fuentes (2006) use the ratio of government expenditures to GDP as a proxy for these distortions. However, this is not a good proxy for government intervention for Bolivia in the recent past, as the distinction between government expenditures and investment has become increasingly blurred. Instead, we consider summary indexes that evaluate overall, and other types of, freedom, constructed by the Heritage Foundation. As Fig. 5 makes clear, irrespective of the index considered, Bolivians have seen their freedoms curtailed, and thus distortions increased, since Evo Morales became president.Footnote 21
Fig. 5 Internal conditions (2005 vs 2016). Comparison of freedom indexes. See Appendix 1 for details
The choice of the variable used to proxy internal conditions was made following the general-to-specific approach suggested by Hendry (1995). That is, we consider the candidates presented in Fig. 5, estimate Eq. (4), perform specification tests for each candidate, and eliminate non-significant variables. Although these variables display similar patterns (in terms of direction), the Labor Market Freedom Index best captures the interaction with the terms of trade, and satisfies the specification tests. Thus, this is the variable chosen as the proxy for p.Footnote 22 Table 3 reports the results of estimating a panel data model with fixed country effects.Footnote 23 It also reports weak exogeneity tests for the external and internal factors. In both cases, the null hypothesis of no correlation between the residuals for Bolivia of the panel data model (conditional density) and the residuals of the marginal specification for each factor, estimated solely for Bolivia using AR(1) models, is not rejected.Footnote 24 Thus, inference, conditional on contemporary internal and external factors, is valid and no further estimation issues are considered to be problematic. Table 3 Panel data results To conduct the counterfactual exercise, it is required that the variables subject to intervention be super exogenous, in the sense of Hendry (1995), to the parameters of interest. To test this, we must find evidence of structural instability in the marginal densities of the internal and external factors, while the conditional density remains stable. The AR(1) models used for the weak exogeneity tests are unstable, as judged by CUSUM tests for Bolivia, while the panel model is stable in the sample considered.
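As a rough indication of how fixed-effects estimates like those in Table 3 can be computed, here is a minimal Python sketch of the within (demeaning) estimator for Eq. (4). The construction of the regressor matrix is assumed rather than taken from the paper: X would stack the lagged dependent variable(s), the terms-of-trade proxy, the policy proxy, their interaction, and a trend.

```python
import numpy as np

def within_ols(y, X, ids):
    """Fixed-effects (within) estimator: demean y and the regressors by
    country, then run pooled OLS on the demeaned data."""
    y_w, X_w = y.astype(float).copy(), X.astype(float).copy()
    for i in np.unique(ids):
        m = ids == i          # observations belonging to country i
        y_w[m] -= y_w[m].mean()
        X_w[m] -= X_w[m].mean(axis=0)
    beta, *_ = np.linalg.lstsq(X_w, y_w, rcond=None)
    return beta

# Hypothetical usage: columns of X are [y_lag, x, p, x*p, trend],
# ids is the country identifier for each stacked observation.
# beta_hat = within_ols(y, X, ids)
```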
With this background, our counterfactual exercises ask the question: What would have happened in Bolivia if the conditions (internal and/or external) observed prior to the treatment (2005) had remained unchanged after the treatment? Table 4 shows four combinations of internal and external factors. If external factors are exogenous and cannot be affected by the government, the extremely favorable terms of trade observed during the treatment (after 2005) can be labeled as "good luck" (GL). Thus, "bad luck" (BL) is defined as facing, during the treatment (after 2005), the same level of terms of trade as in 2005. On the other hand, our proxy for internal conditions, which is consistent with the policies and statements of the Bolivian government, tends to show increasing distortions faced by the private sector. As the panel data results and economic theory tend to predict, these distortions would (generally) be welfare deteriorating and reduce consumption, investment, and formal sector employment.Footnote 25 Thus, we label as "good skills" (GS) maintaining the level of the Labor Market Freedom Index observed in 2005, and as "bad skills" (BS) the actual index, which displays a clear deterioration on this dimension.Footnote 26 Table 4 Scenarios considered As the specification of Table 3 evidences, there is an interaction between the terms of trade and labor market freedom, thus making the effects of luck and skills nonlinear. To evaluate the effect of luck (or skills) on GDP per capita, we need to consider the behavior of the skills (or luck). For example, to evaluate the marginal effect of GL, we need to keep the level of skill constant. Thus, A–C is the effect of GL, conditional on GS, and B–D is the effect of GL, conditional on BS. Conversely, A–B is the effect of GS, conditional on GL, and C–D is the effect of GS, conditional on BL. If B were an accurate description of Bolivia after the treatment, A would be the counterfactual in which Bolivia would have benefited from the same favorable external conditions, with internal policies that would have enhanced the effect of the commodity prices bonanza. With the estimates of the specification of Table 3 and Eq. (4), we can compute the expected effect of comparing scenario k with scenario l as: $$\begin{aligned} \Delta ^{k-l}&=\sum _{j=1}^{J}{\widehat{\delta }}_{j}\left( y_{i,t-j}^{k}-y_{i,t-j}^{l}\right) +{\widehat{\beta }}\left( x_{i,t}^{k}-x_{i,t}^{l}\right) \\ & \quad +{\widehat{\gamma }}\left( p_{i,t}^{k}-p_{i,t}^{l}\right) +{\widehat{\phi }} \left( x_{i,t}^{k}p_{i,t}^{k}-x_{i,t}^{l}p_{i,t}^{l}\right) , \end{aligned}$$ where \(k,l=\)A, B, C, D, and i represents Bolivia. Figure 6 presents the results of computing the differences described in (5). As intuition would prescribe, good luck is "good," unconditionally. However, good skills enhance the effects of a positive external environment. Given the observed path of the terms of trade, GDP per capita increased by up to 4% just due to this effect. On the other hand, maintaining fewer distortions, with the external commodity prices bonanza, would have increased GDP per capita by up to 2%, on average, compared to the observed scenario. Thus, this exercise shows that Bolivia's recent economic bonanza is mostly due to the extremely favorable external conditions it faced, and that, if anything, the internal factors prevented Bolivia from enjoying greater benefits.
Fig. 6 Effects of skills versus luck (fraction of GDP per capita).
Left panel: Solid line depicts the effect of GL conditional on GS (A–C); dashed line depicts the effect of GL conditional on BS (B–D). Right panel: Solid line depicts the effect of GS conditional on GL (A–B); dashed line depicts the effect of GS conditional on BL (C–D)
DSGE approach The previous sections attempt to quantify the effects of Evo Morales's Presidency relying on econometric methods that, through different means, build a counterfactual with which to compare the actual performance of Bolivia. Although these methods have their merits, they do not provide deep insights regarding the explicit mechanisms that operate. In short, these methods may help to predict, but lack the economic theory required to understand. As mentioned, Lucas (1987) proposed a different approach to evaluate counterfactual scenarios, suggesting the use of DSGE models as tools for conducting artificial experiments. This approach considers that, if there are structural parameters that are invariant to interventions, we can evaluate the effects of interventions by solving the model before the intervention, solving it again with the intervention in place (provided that it is permanent), and comparing the long-run (or steady-state) effects of the intervention. If the intervention is deemed transitory, a natural tool to tackle its effect is to map it to an impulse-response surface.Footnote 27 One weakness of the approaches followed in Sects. 2 and 3 is that the quantitative results are sample dependent. One weakness of the approach of this section is that the results are theory dependent. This, however, should not demean the elegance and boldness of this approach, as it makes transparent the structure used and the mechanisms through which an intervention operates. This section uses a deterministic version of the DSGE model developed by Chumacero et al. (2004) to address the issue of the effects of the Free Trade Agreements signed by Chile.Footnote 28 It is general enough to provide a wide variety of mechanisms to analyze, and it allows us to operationalize what we mean by luck and skills. Of course, the model is calibrated to replicate some long-run characteristics of Bolivia, and then modified to evaluate the effects of counterfactual scenarios that intend to capture luck and skills. The model considers a small open economy with firms in three sectors (exportable, importable, and non-tradable), a government, and a representative household that faces an upward-sloping supply schedule for debt. The households The economy is inhabited by a representative agent, who maximizes the value of lifetime utility as given by:Footnote 29 $$\begin{aligned} \sum _{t=0}^{\infty }\beta ^{t}u\left( c_{m,t},c_{n,t}\right) , \end{aligned}$$ where \(c_{m,t}\) and \(c_{n,t}\) represent period t consumption of an importable (m) and a non-tradable good (n). The other good produced in this economy is not consumed at home. We denote this good as the exportable good (x). The maximization of (6) is done subject to the budget constraint:Footnote 30 $$\begin{aligned}&\left( 1+\tau _{m}\right) \left( 1+\tau _{c_{m}}\right) c_{m}+\left( 1+\tau _{c_{n}}\right) c_{n}p+\left( 1+\tau _{m}\right) \left( 1+\tau _{c_{m}}\right) i+\left( 1+{\widetilde{r}}\right) b \\&\qquad \le \left( 1-\tau _{k}\right) \left( 1+\tau _{m}\right) \left( 1+\tau _{c_{m}}\right) rk+b_{+1}+F+\pi _{x}+\pi _{m}+\pi _{n}.
where \(\tau _{m}\) is an import tariff, \(\tau _{c_{n}}\) and \(\tau _{c_{m}}\) are taxes on the consumption of non-tradables and importables, p is the relative price of the non-tradable good in terms of the importable good (used as numeraire), b is the amount of foreign debt that the private agent contracted from abroad in the previous period, \({\widetilde{r}}\) is the (net) interest rate paid on that debt, \(\tau _{k}\) is a tax on capital income levied by the government, r is the rental rate of the capital stock that is given to the firms of the three sectors, \(\pi _{x}\), \(\pi _{m}\), and \(\pi _{n}\) are the profits of the exportable, importable, and non-tradable sectors, F is a lump sum transfer from the government to households, and i is investment, which satisfies the standard law of motion for capital:
$$\begin{aligned} k_{+1}=\left( 1-\delta \right) k+i, \end{aligned}$$
where \(\delta\) is the depreciation rate of the capital stock and k is the capital stock. As k is expressed in units of the importable good, it is also subject to the same taxes as the importable good destined for consumption (tariffs and the value-added tax).Footnote 31

The problem of the representative consumer can be summarized by the value function that satisfies:
$$\begin{aligned} V\left( s_{h}\right) =\max _{c_{m},c_{n},b_{+1},k_{+1}}\left\{ u\left( c_{m},c_{n}\right) +\beta \left[ V\left( s_{h,+1}\right) \right] \right\} , \end{aligned}$$
subject to (7, 8), and the perceived laws of motion of the states \(s_{h}\).Footnote 32 The first-order optimality conditions are:
$$\begin{aligned} p^{-1}&= \frac{u_{c_{m}}^{\prime }}{u_{c_{n}}^{\prime }}\frac{\left( 1+\tau _{c_{n}}\right) }{\left( 1+\tau _{m}\right) \left( 1+\tau _{c_{m}}\right) } \\ 1&= \beta \left[ \frac{u_{c_{m},+1}^{\prime }}{u_{c_{m}}^{\prime }}\frac{\left( 1+\tau _{m}\right) \left( 1+\tau _{c_{m}}\right) }{\left( 1+\tau _{m,+1}\right) \left( 1+\tau _{c_{m},+1}\right) }\left( 1+{\widetilde{r}}_{+1}\right) \right] \\ 1&= \beta \left[ \frac{u_{c_{m},+1}^{\prime }}{u_{c_{m}}^{\prime }}\left[ \left( 1-\tau _{k,+1}\right) r_{+1}+1-\delta \right] \right] . \end{aligned}$$
The first (intratemporal) optimality condition states that the relative price between importables and non-tradables (the real exchange rate) must equate the ratio of marginal utilities between the two goods. The next two (intertemporal) conditions are the standard Euler equations, which state that the marginal rate of substitution between consumption today and tomorrow must equate their relative price, evaluated at the cost of foreign borrowing and at the rate of return of capital investment, respectively.

The firms

Three sectors with an equal number of representative firms produce the exportable, importable, and non-tradable goods. All sectors require capital as the only explicit production factor.Footnote 33 Next, we state the problems faced by the firms.

The importable good

The profits of the representative firm are determined by:
$$\begin{aligned} \pi _{m}=\left( 1+\tau _{m}\right) f\left( z_{m},k_{m}\right) -\left( 1+\tau _{m}\right) \left( 1+\tau _{c_{m}}\right) rk_{m}, \end{aligned}$$
where \(z_{m}\) is a productive shock and \(k_{m}\) is the amount of capital demanded. The first-order optimality condition is:
$$\begin{aligned} f_{k_{m}}^{\prime }\left( z_{m},k_{m}\right) =\left( 1+\tau _{c_{m}}\right) r, \end{aligned}$$
which states that the marginal cost of new capital must equate its marginal value. The output of this sector can be consumed or used as capital in any of the three sectors.
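A step worth making explicit, since it is used implicitly in the calibration below: in a deterministic steady state consumption is constant, so the capital Euler equation pins down the rental rate directly. Assuming constant taxes,
$$\begin{aligned} 1 = \beta \left[ \left( 1-\tau _{k}\right) r+1-\delta \right] \quad \Longrightarrow \quad r=\frac{\beta ^{-1}-1+\delta }{1-\tau _{k}}. \end{aligned}$$
With \(\beta\) consistent with a 5% annual real interest rate and \(\delta =0.06\) (the values calibrated below), this gives \(r=(0.05+0.06)/(1-\tau _{k})\): heavier capital taxation raises the required rental rate and depresses steady-state capital in every sector.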
The exportable good

The profits of firms producing the exportable good are determined by:
$$\begin{aligned} \pi _{x}=\left( 1-\tau _{x}\right) qf\left( z_{x},k_{x}\right) -\left( 1+\tau _{m}\right) \left( 1+\tau _{c_{m}}\right) rk_{x}, \end{aligned}$$
where \(\tau _{x}\) is an export tax, q is the relative price of exportables in terms of importables, \(z_{x}\) is a productive shock, and \(k_{x}\) is the amount of (importable) capital demanded by the exportable sector.
$$\begin{aligned} \left( 1-\tau _{x}\right) qf_{k_{x}}^{\prime }\left( z_{x},k_{x}\right) =\left( 1+\tau _{m}\right) \left( 1+\tau _{c_{m}}\right) r, \end{aligned}$$
This equation presents the optimality condition equivalent to (12). The output of this sector is only consumed abroad.

The non-tradable good

$$\begin{aligned} \pi _{n}=pf\left( z_{n},k_{n}\right) -\left( 1+\tau _{m}\right) \left( 1+\tau _{c_{m}}\right) rk_{n}, \end{aligned}$$
where \(z_{n}\) is a productive shock and \(k_{n}\) is the amount of (importable) capital demanded by the sector.
$$\begin{aligned} pf_{k_{n}}^{\prime }\left( z_{n},k_{n}\right) =\left( 1+\tau _{m}\right) \left( 1+\tau _{c_{m}}\right) r, \end{aligned}$$
which states the optimality condition of the sector, and has the same interpretation as (14). The output of this sector is only consumed in the domestic economy.

The government

It is assumed that the government has no explicit objective function to maximize, but satisfies the following constraint:
$$\begin{aligned} g+F&= \tau _{m}\left( c_{m}+i-f\left( z_{m},k_{m}\right) \right) +\tau _{c_{m}}\left( 1+\tau _{m}\right) \left( c_{m}+i\right) \\&\quad +\tau _{x}qf\left( z_{x},k_{x}\right) +\tau _{c_{n}}c_{n}p+\left( 1+\tau _{m}\right) \left( 1+\tau _{c_{m}}\right) \tau _{k}rk. \end{aligned}$$
It is further assumed that a fraction \(\varkappa _{t}\) of the total government expenditures is used to consume the non-tradable good produced in the economy.

Market-clearing conditions

Define the production of the exportable, importable, and non-tradable goods by:
$$\begin{aligned} y_{x}&= f\left( z_{x},k_{x}\right) \\ y_{m}&= f\left( z_{m},k_{m}\right) \\ y_{n}&= f\left( z_{n},k_{n}\right) . \end{aligned}$$
The market-clearing conditions are:
$$\begin{aligned} py_{n}&= pc_{n}+\varkappa g, \\ -\left( b_{+1}-b\right)&= qy_{x}+y_{m}-c_{m}-\left( 1-\varkappa \right) g-k_{+1}+\left( 1-\delta \right) k-{\widetilde{r}}b, \end{aligned}$$
where the first equation describes the equilibrium in the non-tradable good market and the second the equilibrium in the importable good market, which shows that the current account balance must be compensated by the capital account balance.

To avoid having to model the world credit market, and following Bhandari et al. (1990), Turnovsky (1997), and Osang and Turnovsky (2000), we assume that the country faces an upward-sloping supply schedule for debt:
$$\begin{aligned} {\widetilde{r}}={\widetilde{r}}\left( b\right) ,\quad {\widetilde{r}}^{\prime }>0. \end{aligned}$$
Competitive equilibrium

A competitive equilibrium is a set of allocation rules \(c_{m}=C_{m}\left( s\right)\), \(c_{n}=C_{n}\left( s\right)\), \(k_{+1}=K\left( s\right)\), \(b_{+1}=B\left( s\right)\), \(k_{x,+1}=K_{x}\left( s\right)\), \(k_{n,+1}=K_{n}\left( s\right)\), and \(k_{m,+1}=K_{m}\left( s\right)\); a set of pricing functions \(r=R\left( s\right)\) and \(p=P\left( s\right)\); and the laws of motion of the exogenous state variables \(s_{+1}=S\left( s\right)\), such that:

Households solve the problem (9), taking as given s and the form of the functions \(R\left( s\right)\), \(P\left( s\right)\), and \(S\left( s\right)\), with the equilibrium solution to this problem satisfying \(c_{m}=C_{m}\left( s\right)\), \(c_{n}=C_{n}\left( s\right)\), \(k_{+1}=K\left( s\right)\), and \(b_{+1}=B\left( s\right)\).

Firms of the exportable, importable, and non-tradable sectors solve the problems (11), (13), and (15), taking as given s and the form of the functions \(R\left( s\right)\), \(P\left( s\right)\), and \(S\left( s\right)\), with the equilibrium solutions to these problems satisfying \(k_{x,+1}=K_{x}\left( s\right)\), \(k_{n,+1}=K_{n}\left( s\right)\), and \(k_{m,+1}=K_{m}\left( s\right)\).

The economy-wide resource constraints (19) hold each period, and the factor market clears:
$$\begin{aligned} K_{x}\left( s\right) +K_{n}\left( s\right) +K_{m}\left( s\right) =K\left( s\right) . \end{aligned}$$

Functional forms

With the generic model specified, we next group the functional forms in terms of preferences, production technology, government, and exogenous prices.

Preferences

We consider the following functional form:
$$\begin{aligned} u\left( c_{m,t},c_{n,t}\right) =\theta _{m}\ln c_{m,t}+\theta _{n}\ln c_{n,t}, \end{aligned}$$
with \(\theta _{m},\theta _{n}>0\) and \(\theta _{m}+\theta _{n}=1\).

Production technology

The production functions are assumed to be Cobb–Douglas:
$$\begin{aligned} f\left( z_{j,t},k_{j,t}\right) =e^{z_{j,t}}k_{j,t}^{\alpha _{j}}, \end{aligned}$$
where \(\alpha _{j}\) is the compensation for capital as a share of output of sector j for \(j=x,m,n\). As we will compare deterministic steady states to evaluate the relative importance of skills and luck, the steady-state values of the productivity shocks (\(z_{j}\)) are calibrated to match the sectorial composition of GDP in Bolivia.

Fiscal variables

We calibrate different values for government expenditures (\({\overline{g}}\)) and other fiscal variables (taxes and tariffs), depending on whether or not we activate the treatment, reflecting that distortions increased during the treatment (bad skills). The specific way in which these variables are set is discussed below.

Exogenous prices

Next, we describe the functional forms chosen for the laws of motion of two external variables: terms of trade (q) and the borrowing rate (\({\widetilde{r}}\)) discussed in (20). The steady state of terms of trade (\({\overline{q}}\)) is contingent on whether or not we activate the condition of favorable terms of trade to assess the effects of "luck". Further discussion is given below. Finally, as discussed above, we assume that the country faces an upward-sloping supply schedule for debt and model it as:
$$\begin{aligned} {\widetilde{r}}_{t+1}={\overline{r}}+\varphi \frac{b_{t}}{y_{t}}, \end{aligned}$$
where \(y_{t}\) is total output (GDP), expressed in terms of importables.
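To illustrate how these functional forms pin down allocations, the sketch below combines the steady-state rental rate derived earlier with a Cobb–Douglas first-order condition to back out a sector's capital demand. All parameter values are placeholders chosen for illustration, not the paper's calibration:

```python
import numpy as np

beta, delta, tau_k = 1 / 1.05, 0.06, 0.15      # tau_k illustrative
tau_m, tau_cm = 0.05, 0.13                      # illustrative tax wedges
r = (1 / beta - 1 + delta) / (1 - tau_k)        # steady-state rental rate

def k_demand(price, alpha, z, wedge):
    """Solve price * alpha * e^z * k**(alpha-1) = wedge * r for k."""
    return (price * alpha * np.exp(z) / (wedge * r)) ** (1 / (1 - alpha))

# Non-tradables: price = p, wedge = (1+tau_m)(1+tau_cm); exportables use
# price = (1-tau_x)*q with the same wedge; importables simplify to wedge = 1+tau_cm.
k_n = k_demand(price=1.0, alpha=0.35, z=0.2, wedge=(1 + tau_m) * (1 + tau_cm))
print(f"r = {r:.3f}, k_n = {k_n:.3f}")
```

The full equilibrium additionally requires that p, q, and debt b clear the markets in (19), which is what the numerical steady-state solution delivers.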
Calibration and results

Next, we parameterize the model, distinguishing deep parameters from those that are considered to be affected by the treatment. Table 5 presents the values of the parameters that are assumed to be unchanged by the treatment, and Table 6 the values of the parameters before and after the treatment.

Table 5 Deep parameters

The parameters \(\theta _{m}\) and \(\theta _{n}\) are chosen so as to reproduce the share of consumption of importables and non-tradables in total consumption in steady state. The subjective discount factor (\(\beta\)) was set to be consistent with a 5% annual real interest rate. The output-factor elasticities in each sector (\(\alpha\)) were set to match the sectorial shares of GDP, and consider that the exportable sector is more capital intensive than the other sectors.Footnote 34 The depreciation rate was set to 6%, while the constants of the production functions, government expenditures, and terms of trade were set to match the participation of each sector in total GDP.Footnote 35

Table 6 Values of the parameters before and after the treatment

Table 6 reports how we capture the effects of good luck and good skills. Regarding luck, it is associated with a positive terms of trade shock (increased q). During the treatment (after 2005), terms of trade were (on average) 50% higher than the average of the period before (1990–2005), and almost 40% higher (on average) than in the year before the treatment. As we are comparing steady states, it would be incorrect to assume that this increase is permanent. What we need is to obtain the level of a permanent shock that is equivalent (in present value) to a 50% temporary increase in terms of trade lasting 10 years. That is, we need to find:
$$\begin{aligned} \lambda _{q}=1.5\left( 1-\beta ^{9}\right) +\beta ^{10}\simeq 1.15, \end{aligned}$$
and consider a steady-state level of terms of trade 15% higher when we evaluate the effect of GL.
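A quick numerical check of this present-value equivalence (a sketch; the value of β is the one implied by the 5% annual real rate used in the calibration):

```python
beta = 1 / 1.05   # discount factor consistent with a 5% annual real rate

# Permanent terms-of-trade level equivalent, in present value, to a temporary
# 50% increase over a 10-year window (formula as published above):
lambda_q = 1.5 * (1 - beta**9) + beta**10
print(f"lambda_q = {lambda_q:.3f}")   # ~1.147, i.e. roughly a 15% permanent increase
```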
It is a bit more difficult to assess what the effects of the treatment were on the increased distortions faced by agents.Footnote 36 Furthermore, reliable figures for the finances of the public sector are not available.Footnote 37 What is clear is that the size of the public sector and the distortions have increased markedly, making private investment more costly, levying heavier taxes on the exportable sector, and increasing overall government expenditures. The magnitude of these changes is computed so that the share of government expenditure over GDP after the positive terms of trade shock increases by 2%.Footnote 38 This is equivalent to increasing government expenditures and taxes on capital and the exportable sector by 19%.Footnote 39

As in Sect. 3, here we conduct counterfactual experiments by changing parameters, solving the model under different configurations, and evaluating the effect on the variable of interest (what the model considers to be GDP). As mentioned, there are several ways in which the exercise could be done. One, in which modifications are considered transitory, is to shock the system with a perturbation and follow the results. In this case, the laws of motion of the states (particularly their persistence) are key. As the model lacks an analytical solution, numerical methods need to be used to solve for the optimal policies. In case second-order perturbation methods are used (as in Schmitt-Grohé and Uribe 2004), the volatility of the shocks would also be important. The other way to tackle the question is to consider that the counterfactual (or treatment) involves a permanent change in some variables. In that case, we could also evaluate the effects by comparing changes in the optimal policies, or simply evaluate the long-term effects by comparing changes in the (deterministic) steady states. The advantage of this approach is that it solves the fully fledged nonlinear steady state of the system, and does not require approximations of the policy functions. Furthermore, as far as the deterministic steady states are concerned, persistence and volatility are irrelevant. This is the approach we follow.

Concretely, we consider the same scenarios presented in Table 4. As the model is extremely nonlinear, changes in luck (skills) will have different effects on the economy depending on the skills (luck) scenario that is considered. As changes in terms of trade and/or distortions will lead to changes in allocations and relative prices, differences in GDP across scenarios are not directly comparable to the ones obtained in the previous sections. This is so because there, comparisons were made in dollars of a base year, while here they are made in GDP expressed in terms of importables. One way to make the figures comparable is to compute GDP at the prices of a baseline scenario. A natural baseline scenario would be Bolivia prior to the treatment, which in our case corresponds to scenario C.

Figure 7 presents the results of comparing the changes in the steady-state GDPs of the different scenarios, computed in terms of importables, and also at the prices of scenario C. As previously, good luck is "good" unconditionally, and good skills enhance the effects of a positive external environment. These results suggest that the long-term effects of a substantial increase in terms of trade would have led to increases in GDP of between 8.9 and 9.9% when measured at the new relative prices. When measured at constant (scenario C) prices, the increments in GDP due to favorable terms of trade are between 3.7 and 4.1%. These figures are in line with our results of Sect. 3. On the other hand, good skills (fewer distortions) would amount to increases in GDP of between 6.4 and 7.4% (depending on the level of terms of trade) when measured allowing for changes in the real exchange rate, and between 5.6 and 6.1% when measured at the prices of scenario C. Thus, this exercise provides further support to the hypothesis that Bolivia's recent economic bonanza is mostly due to the extremely favorable external conditions it faced, and that the internal factors were very costly.

Effects of skills versus luck with DSGE (fraction of GDP per capita). A–C depicts the effect of GL conditional on GS. B–D depicts the effect of GL conditional on BS. A–B depicts the effect of GS conditional on GL. C–D depicts the effect of GS conditional on BL. The lighter bars evaluate changes in GDP with the relative prices changed due to the scenarios. The darker bars correspond to the changes evaluated at the relative prices of the baseline scenario (C)

Conclusions

This paper intends to evaluate the relative importance of internal and external conditions on the economic bonanza Bolivia experienced. As the external conditions were extremely favorable, with increases (of the terms of trade) of, on average, more than 50% since 2006 with respect to 2005, or, for that matter, any other period in the past, it is tempting to conclude that this is the main culprit of the bonanza. Nevertheless, Evo Morales's government has not remained still and has also conducted major policy changes.
Acolytes of the regime conclude that it is mainly the policies that should be credited for the boom. Settling this dispute requires more than charts and graphs. As Evo Morales emerged as president of Bolivia in roughly the same period of favorable external conditions, it is not trivial to identify the causal effect of each.

This paper provides three complementary approaches to evaluate the relative importance of luck (favorable external conditions that cannot be attributed to the government) and skills (internal policies that could have fostered or lessened the effects of the external conditions). These approaches differ in their identifying assumptions, methodological approach, information used, and foundation in economic theory.

The first approach attempts to construct the counterfactual of how Bolivia would have fared since 2006 had Evo Morales (and his policies) not been in place. To construct this counterfactual, a synthetic control (composed of a weighted average of other countries) is derived. The results indicate that Evo Morales caused, on average, a loss in the level of GDP per capita of around 4.7% per year since he took office. They also suggest that conditions in other social indicators would not have been affected in his absence.

The second approach uses a panel of countries to evaluate the precise counterfactual of changing one factor (external or internal) and leaving the other constant. This approach imposes some restrictions in terms of the degree of homogeneity in responses by the countries, the proxies considered, and the requirement of super exogeneity (in the sense of Hendry). Meeting these conditions, we find an interesting nonlinearity due to the interaction between internal and external conditions. Put simply, good luck is enhanced with fewer distortions. This approach estimates that good luck provided up to 4% more GDP per capita, while the increased distortions in the internal conditions led to a decrease of up to 2% of GDP per capita. Thus, not taking into account secular conditions, the bonanza was due to external conditions, with internal conditions harming more than helping.

The third and final approach uses a DSGE model to evaluate counterfactual scenarios. The model is calibrated so as to replicate the sectorial composition of GDP, and introduces changes in the structure of the model that are intended to accommodate the internal and external conditions observed prior to the treatment (Evo Morales), changing them to compute how agents would react. This approach has the main benefit of providing an optimizing and internally consistent model that has to make the interventions explicit. It also helps to provide economic insights on the mechanisms at play. The results of this approach are qualitatively consistent with the previous findings. With GDP calculated at the constant relative prices of the period prior to the treatment, the commodity prices boom may have caused increases of up to 4.1% (apart from secular trends), while increased internal distortions might have caused around a 6.1% drop in GDP.

Concluding, all the exercises conducted lead to similar conclusions. Bolivia's recent bonanza is primarily due to incredibly favorable external conditions. If anything, the boom was not fully capitalized due to the increased distortions in internal economic policies. Thus, paraphrasing Sir Ronald Ross, the Government's "favorite scam is pretending that luck is skill."Footnote 40

Notes

See Appendix 1.
Morales (2014) edits a volume that presents several studies quantifying the magnitude of resources that Bolivia received thanks to the commodity prices boom.

The Latin phrase "post hoc, ergo propter hoc" (after this, therefore because of this) is a logical fallacy that states that because two events occurred in succession, the former event must have caused the latter event.

To my knowledge, there are no systematic studies that have tackled this question for the case of Bolivia.

Textbook treatments of alternatives are discussed in Angrist and Pischke (2009), Imbens and Rubin (2015), Lee (2016), and Pan and Bai (2015).

As pointed out by one of the referees, this method can be used to assess whether the "Evo Morales treatment" is relevant, but not to evaluate the relative importance of the external and internal factors.

The choice of \(X_{1}\) should consider characteristics that could be interpreted as fundamentals behind the output variable, or a comprehensive description of pre-treatment Bolivia.

We consider one hundred thousand candidates for V, generated using pseudo-random numbers. V is always normalized such that its trace is one.

Appendix 1 describes the datasets used.

Appendix 2 presents the results of the synthetic control approach for the Gini coefficient, primary school dropout rate, infant mortality, and life expectancy.

One issue that is particularly relevant is to assess the importance of "luck," measured by improved terms of trade. As terms of trade shocks can be considered exogenous, we could use them as another relevant characteristic for the synthetic control. Including net barter terms of trade in the 2006–2016 period as an additional X variable does not significantly alter the results reported here.

This inferential tool is also known as the "falsification" or "refutability" test, intended to evaluate whether the results obtained may be due to pure chance. See Abadie et al. (2010) for further references.

The placebo test requires performing the optimization procedure described above for each country. For computational expediency, we consider one thousand randomly generated candidates for V.

As in Abadie et al. (2010), we also considered excluding from the placebo test countries that present more than 4.5 times the RMSE of Bolivia, with similar results.

Even bigger effects, of up to 6%, were found using GDP per capita (without the PPP conversion). Although the magnitude may also vary depending on the pre-treatment characteristics included, whenever they were matched, synthetic Bolivia performed systematically better than actual Bolivia during the "Evo Morales treatment".

For example, Easterly et al. (1993) evaluate the importance of internal versus external determinants of levels and growth rates of GDP.

In this case, we say that x is "weakly exogenous" for \(\theta _{1}\). See Hendry (1995) for details.

In this case, we say that x is "super exogenous" for \(\theta _{1}\). See Hendry (1995) for details.

As is well known, the parameter associated with the trend component is super consistent. Thus, even if the variable were difference stationary, the process still has a valid representation if cointegration is present.

A dynamic structure that includes lags of x and p can be considered. However, information (particularly on proxies of p) is not abundant for Bolivia. Thus, we privilege parsimony and conduct specification tests.

As the quote attributed to the Roman philosopher Seneca states, "luck is what happens when preparation meets opportunity".
See Appendix 1 for the list of variables considered and their sources.

In fact, the 2019 report of the Heritage Foundation ranks Bolivia in place 173 out of 180 in terms of overall freedom, characterizing it as "repressed". Other countries in this category are Cuba (178), Venezuela (179), and North Korea (180).

The results are robust to the choice of proxy for internal conditions. A previous version of the paper also considered the share of government expenditures over GDP as a proxy for p, with similar results.

As should be expected, Hausman specification tests reject the null of random effects in favor of fixed effects. A LRT rejects the null of redundant fixed effects. Finally, cross-dependence tests reject the null hypothesis of no correlation among the residuals of each country (Pesaran 2015). Other specification tests for the residuals of the panel (conditional) and univariate time series (marginal) models indicate that the residuals can be broadly characterized as homoskedastic, normal, white noise processes.

In fact, distortions in the formal labor market have deteriorated so much that, according to Medina and Schneider (2018), Bolivia has the biggest informal sector in the world.

For partial equilibrium evidence, see Nogales et al. (2019). For general equilibrium evidence, see Román (2011), Vargas (2009), or Sect. 4 below.

Describing the economic policies pursued prior to 2006 as "good skills" is, admittedly, a bit of a stretch. Economic policies and internal political conditions have always produced a fragile environment in Bolivia. While corruption and distortions have always been present, these characteristics have increased substantially under the "treatment". Furthermore, expropriations and reversals of market-oriented policies have been a constant feature of the past years, as evidenced by the evolution of all the freedom indexes of the Heritage Foundation.

As noted by a referee, a third approach would require solving the entire transitions subject to (possibly) time-varying policies, and comparing the entire time series.

For completeness, the model is presented in its entirety. As noted by a referee, we focus on comparing steady states before and after the treatment. As uncertainty is not required, we consider a deterministic version of the DSGE model of Chumacero et al. (2004).

For brevity, time t subscripts are eliminated.

As pointed out by a referee, this is a simplifying assumption, as investment has tradable and non-tradable components.

We define \(s_{h}=\left( \tau _{m},\tau _{c_{m}},\tau _{c_{n}},p,{\widetilde{r}},\tau _{k},r,k,b,F,\pi _{x},\pi _{m},\pi _{n}\right)\).

This setup is consistent with a model in which labor is sector specific and static.

The long-run shares of GDP for each sector are 11% for exportables, 31% for importables, and 58% for non-tradables.

The tax on importables is set relatively low, as tariffs are low and a very dynamic smuggling sector is prevalent. Taxes on consumption of importables and non-tradables are set approximately equivalent to the value-added tax.

As discussed in Sect. 3, the indexes that proxy indicators of freedom have deteriorated by between 11 and 35 points. Furthermore, private investment and formal firms have faced expropriations. Finally, a large component of the increased distortions has taken the form of unproductive public investment and corruption. Linares (2018) shows that most of the public enterprises created under Evo Morales run deficits.
These enterprises include a cell phone and computer company, a paper company, and textile firms, among others. Public investment has also been used to build artificial grass soccer fields (a major hobby of Evo Morales) and a museum in his honor. Public investment went from US$620 million in 2005 to US$5.1 billion in 2016.

For example, the IMF, ECLAC, the Bolivian Central Bank, the National Bureau of Statistics, and the Finance Ministry have different figures for the size of the public sector in the economy. These figures range from 15% to almost 50%. The last figure is constructed by Kehoe et al. (2019).

Again, different official sources tell different stories regarding this magnitude. Public consumption and public investment have increased their participation in GDP by between 3 and 10%, depending on the source considered. Total tax revenues (as a share of GDP) have increased by 1.7% (on average) according to the Ministry of Finance.

A referee suggested using Mendoza et al. (1994) to calibrate taxes. This task proved to be impossible, as there are no reliable statistics on the different sources of tax revenue and tax bases that can be used.

Sir Ronald Ross was a British medical doctor who received the Nobel Prize for Physiology or Medicine in 1902. Humorously, his quote referred to Wall Street. Fittingly, Sir Ross was a pioneer in the systematic analysis of control and treatment groups.

We do so because one of the main contentions of the Bolivian government is that there were significant improvements in social conditions that would not have occurred otherwise.

Abbreviations

BS: bad skills
CUSUM: cumulative sum (test)
DSGE: dynamic stochastic general equilibrium model
GDP: gross domestic product
GL: good luck
GS: good skills
PPP: purchasing power parity
RMSPE: root mean squared percentage error

References

Abadie A, Diamond A, Hainmueller J (2010) Synthetic control methods for comparative case studies: estimating the effect of California's tobacco control program. J Am Stat Assoc 105(490):493–505

Abadie A, Gardeazabal J (2003) The economic costs of conflict: a case study of the Basque country. Am Econ Rev 93(1):113–32

Angrist J, Pischke J (2009) Mostly harmless econometrics. Princeton University Press, Princeton

Arce L (2016) El Modelo Económico, Social, Comunitario, Productivo Boliviano. Ministerio de Economía y Finanzas Públicas

Bhandari J, Haque N, Turnovsky S (1990) Growth, external debt, and sovereign risk in a small open economy. IMF Staff Papers 37:388–417

Chang R, Kaltani L, Loayza N (2009) Openness can be good for growth: the role of policy complementarities. J Dev Econ 90(1):33–49

Chumacero R, Fuentes R (2006) Economic growth in Latin America: structural breaks or fundamentals? Estudios de Economía 33(2):141–54

Chumacero R, Fuentes R, Schmidt-Hebbel K (2004) Free trade agreements: how big is the deal? DTBC 264, Central Bank of Chile

Easterly W, Kremer M, Pritchett L, Summers L (1993) Good policy or good luck? Country growth performance and temporary shocks. J Monet Econ 32(3):459–83

Feenstra R, Inklaar R, Timmer M (2015) The next generation of the Penn World Table. Am Econ Rev 105(10):3150–82

Grier K, Maynard N (2016) The economic consequences of Hugo Chavez: a synthetic control analysis. J Econ Behav Organ 125(1):1–21

Hendry D (1995) Dynamic econometrics. Oxford University Press, Oxford

Imbens G, Rubin D (2015) Causal inference for statistics, social, and biomedical sciences.
Cambridge University Press, Cambridge

Kehoe T, Machicado C, Peres J (2019) The monetary and fiscal history of Bolivia, 1960–2017. Working Paper 25523, National Bureau of Economic Research

Lee M (2016) Matching, regression discontinuity, difference in differences, and beyond. Oxford University Press, Oxford

Linares J (2018) Más ruido que nueces: análisis de los emprendimientos empresariales del proceso de cambio. Grupo Sobre Política Fiscal y Desarrollo 26, CEDLA

Lucas R (1987) Models of business cycles. Blackwell, Oxford

Medina L, Schneider F (2018) Shadow economies around the world: what did we learn over the last 20 years? IMF Working Paper WP/18/17

Mendoza E, Razin A, Tesar L (1994) Effective tax rates in macroeconomics: cross-country estimates of tax rates on factor incomes and consumption. J Monet Econ 34:297–323

Morales J (ed) (2014) ¿Dónde está la plata? Cuantificación de los ingresos extraordinarios que percibió Bolivia de 2006 a 2013. Fundación Milenio

Nogales R, Córdova P, Urquidi M, Rejas B (2019) On the relationship between labor market policies and outcomes in Bolivia: a search and matching approach. Estudios de Economía 46(1):61–87

Osang T, Turnovsky S (2000) Differential tariffs, growth, and welfare in a small open economy. J Dev Econ 62:315–42

Pan W, Bai H (2015) Propensity score analysis: fundamentals and developments. The Guilford Press, New York

Pesaran H (2015) Time series analysis and panel data econometrics. Oxford University Press, Oxford

Román S (2011) Costos laborales, economía informal y reformas a la legislación laboral en Bolivia. Master of Science Thesis, Universidad de Chile

Schmitt-Grohé S, Uribe M (2004) Solving dynamic general equilibrium models using a second-order approximation to the policy function. J Econ Dyn Control 28:755–75

Turnovsky S (1997) Equilibrium growth in a small economy facing an imperfect world capital market. Rev Dev Econ 1:1–22

Vargas J (2009) Bienestar y ciclos económicos en una economía con evasión y sector informal. Ph.D. Thesis, Universidad de Chile

Acknowledgements

I thank Boris Branisa, Luis Castro, Rodrigo Fuentes, Daniel Hernaiz, Carlos Gustavo Machicado, Pablo Mendieta, Alejandro Mercado, Oscar Molina, Ricardo Nogales, Antonio Saravia, Rodrigo Villareal, the participants of the 8th Bolivian Conference on Development Economics, the 2018 SECHI Conference, seminars at the Academia Boliviana de Ciencias Económicas, Universidad Católica Boliviana, and Universidad Privada Boliviana, and two anonymous referees for helpful comments and suggestions. I also thank Guillermo Gómez, Alejandra Goytia, Esteban Michel, Solange Sardán, Luis Serrudo, and Alejandro Terán for their valuable research assistance.

Department of Economics, University of Chile, Santiago, Chile

Rómulo A. Chumacero

The author is solely responsible for the article. The author read and approved the final manuscript. Correspondence to Rómulo A. Chumacero. The author declares no competing interests.

Appendix 1: The data

Table 7 lists the series used for the synthetic control approach detailed in Sect. 2, the results of which are reported in Sect. 2 (for GDP per capita) and Appendix 2 (for the Gini coefficient, primary school dropout rate, infant mortality, and life expectancy).

Table 7 Information used in Sect. 2 and Appendix 2

Table 8 lists the series used for the panel data exercise of Sect. 3.

Table 8 Information used in Sect. 3

Table 9 lists the series used for the DSGE exercise of Sect. 4.
Appendix 2: Synthetic control: other results

This Appendix presents the results of applying the synthetic control procedure to assess the effects of the treatment (Evo Morales's Presidency) on four social variables, namely the Gini coefficient, the school dropout rate, the infant mortality rate, and life expectancy, as presented in Appendix 1.Footnote 41

As in the case of GDP per capita discussed in Sect. 2, we first find the optimal weights that define the synthetic control. Although the pre-treatment characteristics are always the same (see Table 2), the optimal weights will change depending on the variable of interest (Z). That is, the synthetic control used to evaluate the effect of the treatment on, for example, the Gini coefficient will not be the same as the one used for another variable. Table 10 shows which countries are used to form the synthetic control for each variable, while Table 11 presents the list of pre-treatment characteristics that we seek to match, the values obtained for synthetic Bolivia, and the simple average of the countries considered in each exercise. The number of countries that have information on the Gini coefficient and the dropout rate for the same years as Bolivia is limited. More information is available on the infant mortality rate and life expectancy.

Table 10 Optimal weights for synthetic control: other variables

Table 11 Pre-treatment characteristics

Although the synthetic controls do a better job than the simple average of countries, the RMSPE of the synthetic controls used for the Gini coefficient and the dropout rate are substantially higher than the ones for the other two variables, which are in line with the RMSPE of the synthetic Bolivia used in Sect. 2. This means that at least the results for the Gini coefficient and the dropout rate should be viewed with caution.

Figure 8 presents the placebo tests for all the variables. From them, we gather that the treatment (Evo Morales) had no discernible effect on any of the variables considered. In particular, the behavior of the difference between actual Bolivia and synthetic Bolivia pre- and post-treatment is noisy for the case of the Gini coefficient and the dropout rate. Even though there is a slight decline in the infant mortality rate and an increase in life expectancy, the synthetic controls do not match well the behavior of these variables prior to the treatment.

Placebo tests. The bold lines represent the difference between the observed variable in Bolivia and the synthetic control. The gray lines represent placebo test deviations for the other countries in the data set. The graphs exclude countries with pre-treatment RMSE 2.2 times higher than Bolivia's. The vertical line corresponds to the year prior to the treatment (2005). Clockwise, the first panel presents the results for the Gini coefficient, the second for the dropout rate, the third for the infant mortality rate, and the fourth for life expectancy

In summary, there is no robust evidence to indicate that there were statistically significant improvements in the social indicators considered, due to the Evo Morales Presidency, when compared to the synthetic controls.

Chumacero, R.A. Skills versus Luck: Bolivia and its recent Bonanza. Lat Am Econ Rev 28, 7 (2019). doi:10.1186/s40503-019-0069-1

Keywords: Synthetic control; Panel data; DSGE
Oak Ridges Moraine Groundwater Program

Automated hydrograph separation

Hydrograph Separation

"Division of a hydrograph into direct and groundwater runoff as a basis for subsequent analysis is known as hydrograph separation or hydrograph analysis. Since there is no real basis for distinguishing between direct and groundwater flow in a stream at any instant, and since definitions of these two components are relatively arbitrary, the method of separation is usually equally arbitrary." - Linsley et al., 1975

For hydrograph separation, it is generally assumed that the total flow ($q$) at any particular time ($t$) of a streamflow hydrograph can be partitioned into two primary components:

- The slow flow component $(b)$, composed of the gradual release of water from watershed stores together with groundwater discharging into streams (the "groundwater runoff" of Linsley et al.). The slow flow component is commonly referred to as "baseflow"; and,
- The quick flow component $(f)$, which originates from rainfall and/or snow melt events (i.e., "direct runoff" in Linsley et al., 1975).

Together, the slow and quick flow components sum to total flow: $q=b+f$.

Conceptually, after a period of time following a precipitation event, streamflow continues to decrease at a predictable rate, as it is composed entirely of slowflow $(f=0)$. Upon the onset of a heavy rain event, the hydrograph rises quickly as quick flow is added to the slowflow signature. One could imagine that had this rain event never occurred, the underlying slowflow would have continued uninterrupted (as in Reed et al., 1975). The difference between total flow and this "underlying" slowflow is perceived as quickflow.

Hydrologists found the need to separate the hydrograph into its constitutive components because runoff created from a precipitation event (i.e., rainfall and/or snow melt) tends to correlate best with the quickflow component only, as opposed to the total flow hydrograph (Beven, 2012). Consequently, a number of automatic hydrograph separation routines were proposed, all being "equally arbitrary" (Linsley et al., 1975).

For many groundwater flow models in southern Ontario, it is assumed that the long-term rate of slowflow is predominantly groundwater discharge. Therefore, long-term average rates of slowflow serve as an important constraint on groundwater flow models.

Slowflow Quantification

A number of metrics associated with hydrograph separation help describe the relationship between quick and slow flow.

Baseflow Index

The first is the baseflow index $(BFI)$, which is the ratio of long-term baseflow discharge to total discharge:

\[\text{BFI}=\frac{\sum b}{\sum q}\]

Recession Coefficient $(k)$

The second is the slowflow recession coefficient $(k)$, which describes the withdrawal of water from storage within the watershed (Linsley et al., 1975). The recession coefficient is a means of determining the amount by which the rate of slowflow recedes after a given period of time, and is reasonably simulated by an exponential decay function:

\[b_t=kb_{t-1},\]

where $b_{t-1}$ represents the slow flow calculated one timestep prior to $b_t$. (Implied here is that flow measurements are reported at equal time intervals.)
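In practice, $k$ is estimated from observed recession limbs. Below is a minimal sketch of one common approach, assuming a daily-flow pandas Series (e.g., the Flow column of the hydrograph csv described at the end of this page); the function name and the choice of median are illustrative, not the program's published tooling:

```python
import pandas as pd

def estimate_recession_coefficient(q: pd.Series) -> float:
    """Estimate k from receding portions of a daily streamflow series.

    Keeps only days where flow is falling (q_t < q_{t-1}), i.e. candidate
    recession days, and returns the median ratio q_t / q_{t-1} there.
    """
    ratio = q / q.shift(1)                       # day-over-day ratio b_t / b_{t-1}
    receding = ratio[(ratio > 0.0) & (ratio < 1.0)]
    return float(receding.median())

# q = pd.read_csv("hydrograph.csv", parse_dates=["Date"], index_col="Date")["Flow"]
# k = estimate_recession_coefficient(q)
```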
Quickflow Cessation Time $(N)$

Linsley et al. (1975) also offered an approximate means of determining the time (in days) after peak flow discharge at which quickflow ceases $(f\to0)$, making total flow entirely composed of the slowflow component, whose behaviour can be predicted by the recession coefficient. As a "rule of thumb" (Linsley et al., 1975), the number of days $(N)$ after which quick flow terminates is approximated by:

\[N=0.827A^{0.2},\]

where $A$ is the watershed area (km²). The above empirical relation is included here as many of the automatic hydrograph separation algorithms discussed below utilize this approximation.

Hydrograph components and "quickflow cessation" $(N)$ are implicitly conceptualized when performing automatic hydrograph separation routines. Note for reference to the image above: "direct runoff" = quickflow, and "ground-water runoff" = slowflow. Linsley and Franzini (1964)

Digital Filters

Digital filters represent a set of automatic hydrograph separation algorithms that require no input other than the measured streamflow signal $(q)$. Considering the streamflow hydrograph as a signal is quite apt when dealing with digital filters, as they were themselves inspired by the signal-processing approach of Lyne and Hollick (1979) (Nathan and McMahon, 1990). With respect to the quick and slow hydrograph components, hydrograph separation is nothing more than the application of a low-pass filter to the total streamflow signal. Another point to note is that many authors have applied these digital filters in multiple passes, either in two passes (forward $\to$ backward) or three passes (forward $\to$ backward $\to$ forward), to increase the smoothing of the resulting slow flow signal (Chapman, 1991).

The General Form

With digital filters, there is no physical interpretation to the algorithm; it only produces a baseflow signal that resembles what one would expect. The general form of all digital filters used for hydrograph separation follows:

\[b_t = \alpha b_{t-1} + \beta\left(q_t + \gamma q_{t-1}\right),\]

where $q_{t-1}$ represents the total flow measured one timestep prior to $q_t$, and $\alpha$, $\beta$ and $\gamma$ are parameters. The above is a three-parameter equation; however, most implementations do not require every parameter to be specified or, in other cases, two or more parameters can be specified as a function of another.

Lyne and Hollick

For example, the Lyne and Hollick (1979) equation (the earliest of the digital filters used for hydrograph separation) is a one-parameter equation defined by a single smoothing parameter $a$, suggested to be set between the values of 0.9–0.95 (Nathan and McMahon, 1990), where:

\[\alpha = a \qquad \beta = \frac{1-a}{2} \qquad \gamma=1.0\]

Chapman (1991)

After noting some conceptual discrepancies with the Lyne and Hollick (1979) equation, Chapman (1991) modified the equation into a parameter-less form as a function of the recession coefficient $k$, discussed above. The Chapman (1991) algorithm takes the form:

\[\alpha = \frac{3k-1}{3-k} \qquad \beta = \frac{1-k}{3-k} \qquad \gamma=1.0\]

Chapman and Maxwell

Chapman and Maxwell (1996) later simplified the above equation by assuming that slow flow is the weighted average of quick flow and the slow flow from the previous timestep (Chapman, 1999), that is $b_t=kb_{t-1}+(1-k)f_t$, leading to:

\[\alpha = \frac{k}{2-k} \qquad \beta = \frac{1-k}{2-k} \qquad \gamma=0.0\]
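Since every filter above is an instance of the general form, a single routine suffices. Below is a minimal numpy sketch of one forward pass of the general filter, clamping baseflow so it never exceeds total flow (a standard practical constraint); the variable names and the initialization choice are ours:

```python
import numpy as np

def digital_filter_pass(q, alpha, beta, gamma):
    """One forward pass of b_t = alpha*b_{t-1} + beta*(q_t + gamma*q_{t-1}).

    q is a 1-D array of equally spaced total-flow values; baseflow is
    clamped to 0 <= b_t <= q_t at every step.
    """
    q = np.asarray(q, dtype=float)
    b = np.empty_like(q)
    b[0] = q[0]                      # initialize at total flow (one common choice)
    for t in range(1, len(q)):
        bt = alpha * b[t - 1] + beta * (q[t] + gamma * q[t - 1])
        b[t] = min(max(bt, 0.0), q[t])
    return b

# Chapman (1991), expressed through the general (alpha, beta, gamma) mapping:
def chapman_params(k):
    return (3 * k - 1) / (3 - k), (1 - k) / (3 - k), 1.0
```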
Boughton & Eckhardt

Boughton (1993) used a similar approach to Chapman and Maxwell (1996), except that it added an adjustment parameter $C$, such that $b_t=kb_{t-1}+Cf_t$. The Boughton (1993) form of the digital filter thus requires:

\[\alpha = \frac{k}{1+C} \qquad \beta = \frac{C}{1+C} \qquad \gamma=0.0\]

While also investigating the generalized digital filter, Eckhardt (2005) discovered an interpretation of the Boughton (1993) algorithm that eliminated the $C$ parameter and introduced the concept of $\text{BFI}_\text{max}$: the maximum value of the baseflow index that can be achieved using the digital filter. The Eckhardt (2005) digital filter is found by:

\[\alpha = \frac{(1-\text{BFI}_\text{max})k}{1-k\text{BFI}_\text{max}} \qquad \beta = \frac{(1-k)\text{BFI}_\text{max}}{1-k\text{BFI}_\text{max}} \qquad \gamma=0.0\]

or made equivalent to Boughton (1993) by setting:

\[C = \frac{(1-k)\text{BFI}_\text{max}}{1-\text{BFI}_\text{max}}\]

Eckhardt (2005) suggests estimates of $\text{BFI}_\text{max}=0.8$ for perennial streams; $0.5$ for ephemeral streams; and $0.25$ for perennial streams over hard-rock aquifers.

Jakeman and Hornberger

The Jakeman and Hornberger (1993) algorithm closely follows that of Boughton (1993) and Chapman and Maxwell (1996), except that it was formulated from a component of the IHACRES data-based model rather than being intended strictly for hydrograph separation (Chapman, 1999). Nonetheless, the IHACRES model can be shown to fit the general digital filter equation above, using 3 parameters, where:

\[\alpha = \frac{a}{1+C} \qquad \beta = \frac{C}{1+C} \qquad \gamma=\beta\alpha_s\]

Note that setting $\alpha_s<0$ is conceptually correct, as it implies that the rate of change of slow flow is positively correlated with the rate of change of total flow (Chapman, 1999). The suggested value is $\alpha_s=-\exp(-1/k)$.

Tularam and Ilahee

Lastly, Tularam and Ilahee (2008) most recently presented a digital filter that also resembles that of Chapman and Maxwell (1996), with the slight difference of assuming that slow flow is the weighted average of the slow flow of the previous timestep and total flow, not quick flow (i.e., $b_t=ab_{t-1}+(1-a)q_t$). This formulation is essentially the same as Lyne and Hollick (1979), with the exception that Tularam and Ilahee (2008) does not average the total flow of the current and previous timesteps. The one-parameter Tularam and Ilahee (2008) form yields:

\[\alpha = a \qquad \beta = 1-a \qquad \gamma=0.0\]

Digital filter equations in their published form:

Lyne and Hollick (1979): \[b_t = ab_{t-1} + \frac{1-a}{2}\left(q_t + q_{t-1}\right)\]

Chapman (1991): \[b_t = \frac{3k-1}{3-k}b_{t-1} + \frac{1-k}{3-k}\left(q_t + q_{t-1}\right)\]

Chapman and Maxwell (1996): \[b_t = \frac{k}{2-k}b_{t-1} + \frac{1-k}{2-k}q_t\]

Boughton (1993): \[b_t = \frac{k}{1+C}b_{t-1} + \frac{C}{1+C}q_t\]

Eckhardt (2005): \[b_t = \frac{(1-\text{BFI}_\text{max})kb_{t-1} + (1-k)\text{BFI}_\text{max}q_t}{1-k\text{BFI}_\text{max}}\]

Jakeman and Hornberger (1993): \[b_t = \frac{a}{1+C}b_{t-1} + \frac{C}{1+C}\left(q_t + \alpha_s q_{t-1}\right)\]

Tularam and Ilahee (2008): \[b_t=ab_{t-1}+(1-a)q_t\]
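Tying the pieces together, the sketch below reuses the single-pass routine (and numpy import) from the previous sketch to apply the Lyne–Hollick mapping in the three-pass (forward → backward → forward) scheme mentioned earlier, and shows the Eckhardt mapping as well. The pass count, a = 0.925 (within the suggested 0.9–0.95 range), and BFI_max default are illustrative choices:

```python
def lyne_hollick(q, a=0.925, passes=3):
    """Multi-pass Lyne and Hollick (1979) separation via the general filter.

    Each pass re-filters the previous pass's baseflow signal in the
    opposite direction, which is what smooths the result.
    """
    alpha, beta, gamma = a, (1 - a) / 2, 1.0
    b = np.asarray(q, dtype=float)
    for p in range(passes):
        direction = 1 if p % 2 == 0 else -1     # forward, backward, forward, ...
        b = digital_filter_pass(b[::direction], alpha, beta, gamma)[::direction]
    return b

def eckhardt_params(k, bfi_max=0.8):
    """Eckhardt (2005) mapping; bfi_max = 0.8 suggested for perennial streams."""
    d = 1.0 - k * bfi_max
    return (1 - bfi_max) * k / d, (1 - k) * bfi_max / d, 0.0

# b = lyne_hollick(q)
# bfi = b.sum() / np.asarray(q, dtype=float).sum()   # resulting baseflow index
```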
Moving-window methods

A second class of hydrograph separation schemes are here considered "moving window methods," also known as "manual separation techniques" in Arnold and Allen (1999). These methods do not follow an equation per se; rather, they follow a methodology based on the explicit/manual selection of discharge values assumed representative of slowflow discharge within a window of a set number of days. In total, 10 estimates of slowflow discharge are computed using variants of 4 methods. Many of these methods are included in stand-alone software packages and have been re-coded here. The methods include:

UKIH (3)

The UKIH/Wallingford (Institute of Hydrology, 1980) method operates by locating minimum discharges in a (user-specified) $N$-day window. This set of minimum discharges is then further screened, automatically, for discharges that are considered representative of "baseflow," which are deemed "turning points." Linear interpolation is then conducted between subsequent turning points, yielding the final slowflow discharge. In a similar fashion to the digital filters, this method extracts a filtered/smoothed hydrograph of total flow minima, and is therefore often also referred to as the "smoothed minima technique."

Piggott et al. (2005) discussed how the UKIH technique can yield alternate baseflow estimates depending on the origin of the $N$-day window. They proposed staggering $N$ sets of UKIH baseflow estimates to create an overall aggregate baseflow hydrograph. Three versions of this modification are included here:

- Sweeping minimum: returns the daily minimum of the staggered hydrographs;
- Sweeping maximum: returns the daily maximum of the staggered hydrographs; and,
- Sweeping median: returns the median of the $N$ staggered hydrographs.

HYSEP (3)

The HYSEP (Sloto and Crouse, 1996) method depends on the computed days of quick flow termination $N$. Like the UKIH method, the HYSEP techniques then proceed to determine minimum discharges within a $2N^\ast$-day window, where "the interval $2N^\ast$ used for hydrograph separations is the odd integer between 3 and 11 nearest to $2N$" (Sloto and Crouse, 1996). Three methods of producing baseflow estimates are computed in HYSEP and are reproduced here (a sketch of the sliding-interval variant follows this section):

- Fixed interval: baseflow is assumed to be the minimum discharge reported within sequential, non-overlapping $2N^*$-day windows. Like the UKIH method, results from the fixed interval method depend on the ("fixed") window origin;
- Sliding interval: baseflow is assumed to be the minimum discharge found within a moving $[(2N^*-1)/2]$-day window. In contrast, this method tends to yield a higher BFI; and,
- Local minimum: linearly interpolates total flow minima within a moving $[(2N^*-1)/2]$-day window.

PART (3)

The PART technique (Rutledge, 1998) aims to reproduce the conceptual hydrograph represented in the Figure above. Using quick flow termination estimates $(N)$, recession coefficients $(k)$, and the concept of the "antecedent recession requirement," a combination of forward and backward filtering techniques is used in producing the final hydrograph separation estimates. Three estimates using the PART method are produced here, based on the suggested "antecedent recession requirement" choices offered by Rutledge (1998):

…once considering the requirement of antecedent recession to be the largest integer that is less than the result of N, and once for each of the next two larger integers.

Then, "linear interpolation is used to estimate ground-water discharge during periods of surface runoff."

Clarifica

The Clarifica Inc. (2002) technique separates the total flow hydrograph by performing two sweeps on the hydrograph. The first is a 6-day moving minimum, followed by a 5-day moving average (3 days previous, 1 day ahead). This method was designed for use within southern Ontario watersheds and tends to produce higher estimates of baseflow during peak flow events.
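As referenced above, here is a minimal pandas sketch of two moving-window variants: the HYSEP-style sliding-interval minimum and the two-sweep Clarifica procedure. The window arithmetic follows the descriptions above, and the helper names are ours:

```python
import numpy as np
import pandas as pd

def hysep_window(area_km2: float) -> int:
    """2N*: the odd integer between 3 and 11 nearest to 2N, with N = 0.827*A^0.2."""
    n2 = 2.0 * 0.827 * area_km2 ** 0.2
    return int(np.clip(2 * round((n2 - 1) / 2) + 1, 3, 11))

def hysep_sliding(q: pd.Series, area_km2: float) -> pd.Series:
    """Sliding-interval baseflow: centred rolling minimum per the description above."""
    half = (hysep_window(area_km2) - 1) // 2
    return q.rolling(2 * half + 1, center=True, min_periods=1).min()

def clarifica(q: pd.Series) -> pd.Series:
    """Clarifica (2002): 6-day moving minimum, then a 5-day moving average
    spanning 3 days back and 1 day ahead of each day."""
    qmin = q.rolling(6, min_periods=1).min()
    # a trailing 5-day window on the series shifted back one day covers t-3 .. t+1
    avg = qmin.shift(-1).rolling(5, min_periods=1).mean()
    return avg.clip(upper=q)   # optional cap at total flow
```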
Physically-based digital filters

Another class of hydrograph separation routines are those deemed "physically-based" (Furey and Gupta, 2001). These methods of separation allow for additional input, such as climate conditions, to help guide the separation of the hydrograph; contrast this with the above methods, where only parameters need tuning until the desired ("arbitrary") slowflow signal is produced. Furey and Gupta (2001) presented a digital filter that was formulated this way. At the moment, physically-based digital filters have not been applied.

Curve-fitting

Another honorable mention is the procedure from the benchmark paper by Hewlett and Hibbert (1967). Here, quick flow is separated by identifying rising portions of the hydrograph that exceed a pre-defined "separation" rate, defined by the authors as 0.05 ft³/s/mi²/hr ($\approx$ 0.00055 m³/s/km²/hr) and deemed suitable for "small forested watersheds in the Appalachian-Piedmont region." This approach, too, has not been applied.

The above algorithms are available using the following jupyter script; some edits may be required to accommodate data format. In its current state, it readily reads a hydrograph .csv file with the header: "Date,Flow,Flag".

References

Arnold, J.G. and P.M. Allen, 1999. Automated methods for estimating baseflow and ground water recharge from streamflow records. Journal of the American Water Resources Association 35(2): 411-424.

Beven, K.J., 2012. Rainfall-Runoff Modelling: The Primer, 2nd ed. Wiley-Blackwell, Chichester, UK.

Boughton, W.C., 1993. A hydrograph-based model for estimating the water yield of ungauged catchments. Hydrology and Water Resources Symposium, Institution of Engineers Australia, Newcastle: 317-324.

Chapman, T.G., 1991. Comment on "Evaluation of automated techniques for base flow and recession analyses" by R.J. Nathan and T.A. McMahon. Water Resources Research 27(7): 1783-1784.

Chapman, T.G. and A.I. Maxwell, 1996. Baseflow separation - comparison of numerical methods with tracer experiments. Institute Engineers Australia National Conference. Publ. 96/05, 539-545.

Chapman, T.G., 1999. A comparison of algorithms for stream flow recession and baseflow separation. Hydrological Processes 13: 710-714.

Clarifica Inc., 2002. Water Budget in Urbanizing Watersheds: Duffins Creek Watershed. Report prepared for the Toronto and Region Conservation Authority.

Eckhardt, K., 2005. How to construct recursive digital filters for baseflow separation. Hydrological Processes 19: 507-515.

Furey, P.R. and V.K. Gupta, 2001. A physically based filter for separating base flow from streamflow records. Water Resources Research 37(11): 2709-2722.

Hewlett, J.D. and A.R. Hibbert, 1967. Factors Affecting the Response of Small Watersheds to Precipitation in Humid Areas. in: W.E. Sopper and H.W. Lull (ed.), Forest Hydrology, Pergamon, New York, N.Y., pp. 275-290.

Institute of Hydrology, 1980. Low Flow Studies report. Wallingford, UK.

Jakeman, A.J. and G.M. Hornberger, 1993. How much complexity is warranted in a rainfall-runoff model? Water Resources Research 29: 2637-2649.

Linsley, R.K. and J.B. Franzini, 1964. Water-Resources Engineering. McGraw-Hill, New York, N.Y.

Linsley, R.K., M.A. Kohler, J.L.H. Paulhus, 1975. Hydrology for Engineers, 2nd ed. McGraw-Hill, New York, N.Y.

Lyne, V. and M. Hollick, 1979. Stochastic time-variable rainfall-runoff modelling. Hydrology and Water Resources Symposium, Institution of Engineers Australia, Perth: 89-92.

Nathan, R.J. and T.A. McMahon, 1990. Evaluation of automated techniques for base flow and recession analyses. Water Resources Research 26(7): 1465-1473.

Piggott, A.R., S. Moin, C. Southam, 2005. A revised approach to the UKIH method for the calculation of baseflow. Hydrological Sciences Journal 50(5): 911-920.

Reed, D.W., P. Johnson, J.M. Firth, 1975. A Non-Linear Rainfall-Runoff Model, Providing for Variable Lag Time. Journal of Hydrology 25: 295-305.

Rutledge, A.T., 1998. Computer Programs for Describing the Recession of Ground-Water Discharge and for Estimating Mean Ground-Water Recharge and Discharge from Streamflow Records-Update. U.S. Geological Survey Water-Resources Investigations Report 98-4148.

Sloto, R.A. and M.Y. Crouse, 1996. HYSEP: A Computer Program for Streamflow Hydrograph Separation and Analysis. U.S. Geological Survey Water-Resources Investigations Report 96-4040.

Tularam, A.G. and M. Ilahee, 2008.
Exponential Smoothing Method of Base Flow Separation and its Impact on Continuous Loss Estimates. American Journal of Environmental Sciences 4(2): 136-144.

© 2022 Oak Ridges Moraine Groundwater Program
Works by Carlos Castro

Putting Continuous Metaheuristics to Work in Binary Search Spaces. Broderick Crawford, Ricardo Soto, Gino Astorga, José García, Carlos Castro & Fernando Paredes - 2017 - Complexity: 1-19.

The Extended Relativity Theory in Born-Clifford Phase Spaces with a Lower and Upper Length Scales and Clifford Group Geometric Unification. Carlos Castro - 2005 - Foundations of Physics 35 (6): 971-1041.

We construct the Extended Relativity Theory in Born-Clifford-Phase spaces with an upper R and lower length λ scales. The invariance symmetry leads naturally to the real Clifford algebra Cl and complexified Clifford ClC algebra related to Twistors. A unified theory of all Noncommutative branes in Clifford-spaces is developed based on the Moyal-Yang star product deformation quantization whose deformation parameter involves the lower/upper scale. Previous work led us to show from first principles (...) why the observed value of the vacuum energy density is given by a geometric mean relationship $$\rho \sim L_{\rm Planck}^{-2}R^{-2} = L_{P}^{-4}\left(L_{P}/R\right)^{2} \sim 10^{-122}M_{\rm Planck}^{4},$$ and can be obtained when the infrared scale R is set to be of the order of the present value of the Hubble radius. We proceed with an extensive review of Smith's 8D model based on the Clifford algebra Cl that reproduces at low energies the physics of the Standard Model and Gravity, including the derivation of all the coupling constants, particle masses, mixing angles, ..., with high precision. Geometric actions are presented like the Clifford-Space extension of Maxwell's Electrodynamics, and Brandt's action related to the 8D spacetime tangent-bundle involving coordinates and velocities. Finally we outline the reasons why a Clifford-Space Geometric Unification of all forces is a very reasonable avenue to consider and propose an Einstein-Hilbert type action in Clifford-Phase spaces as a Unified Field theory action candidate that should reproduce the physics of the Standard Model plus Gravity in the low energy limit.

Space and Time in Philosophy of Physical Science
String Theory in Philosophy of Physical Science

On Superluminal Particles and the Extended Relativity Theories. Carlos Castro - 2012 - Foundations of Physics 42 (9): 1135-1152.

Superluminal particles are studied within the framework of the Extended Relativity theory in Clifford spaces (C-spaces). In the simplest scenario, it is found that it is the contribution of the Clifford scalar component π of the poly-vector-valued momentum which is responsible for the superluminal behavior in ordinary spacetime due to the fact that the effective mass $\mathcal{M} = \sqrt{M^{2} - \pi^{2}}$ is imaginary (tachyonic). However, from the point of view of C-space, there is no superluminal (tachyonic) behavior because the true physical mass still obeys $M^2 > 0$. Therefore, there are no violations of the Clifford-extended Lorentz invariance and the extended Relativity principle in C-spaces. It is also explained why the charged muons (leptons) are subluminal while their chargeless neutrinos may admit superluminal propagation. A Born's Reciprocal Relativity theory in Phase Spaces leads to modified dispersion relations involving both coordinates and momenta, whose truncations furnish Lorentz-violating dispersion relations which appear in Finsler Geometry, rainbow-metrics models and Double (deformed) Special Relativity. These models also admit superluminal particles. A numerical analysis based on the recent OPERA experimental findings on alleged superluminal muon neutrinos is made. For the average muon neutrino energy of 17 GeV, we find a value for the magnitude $|\mathcal{M}| = 119.7~\text{MeV}$ that, coincidentally, is close to the mass of the muon, $m_{\mu} = 105.7$ MeV.

On Weyl Geometry, Random Processes, and Geometric Quantum Mechanics. Carlos Castro - 1992 - Foundations of Physics 22 (4): 569-615.

This paper discusses some of the technical problems related to a Weylian geometrical interpretation of the Schrödinger and Klein-Gordon equations proposed by E. Santamato. Solutions to these technical problems are proposed. A general prescription for finding out the interdependence between a particle's effective mass and Weyl's scalar curvature is presented which leads to the fundamental equation of geometric quantum mechanics, $$m(R)\frac{dm(R)}{dR} = \frac{\hbar^2}{c^2}$$ The Dirac equation is rigorously derived within this formulation, and further problems to be solved are proposed in the conclusion. The main one is based on obtaining the relationship between Feynman's path integral quantization method, among others, and the methods of geometric quantum mechanics. The solution of this problem will be a crucial test for this theory that attempts to "geometrize" quantum mechanics rather than the conventional approach in the past of quantizing geometry. A numerical prediction of this theory yields a $3\times 10^{-35}$ eV correction to the ground-state energy of the hydrogen atom.

Interpretations of Quantum Mechanics, Misc in Philosophy of Physical Science
Mathematical Structure of Quantum Mechanics in Philosophy of Physical Science

Born's Reciprocal Gravity in Curved Phase-Spaces and the Cosmological Constant. Carlos Castro - 2012 - Foundations of Physics 42 (8): 1031-1055.

The main features of how to build a Born's Reciprocal Gravitational theory in curved phase-spaces are developed.
By recurring to the nonlinear connection formalism of Finsler geometry a generalized gravitational action in the 8D cotangent space (curved phase space) can be constructed involving sums of 5 distinct types of torsion squared terms and 2 distinct curvature scalars ${\mathcal{R}}, {\mathcal{S}}$ which are associated with the curvature in the horizontal and vertical spaces, respectively. A Kaluza-Klein-like approach to the construction of the curvature of the 8D cotangent space and based on the (torsionless) Levi-Civita connection is provided that yields the observed value of the cosmological constant and the Brans-Dicke-Jordan Gravity action in 4D as two special cases. It is found that the geometry of the momentum space can be linked to the observed value of the cosmological constant when the curvature in $\mathit{momentum}$ space is very large, namely the small size of P is of the order of $(1/R_{\mathit{Hubble}})$. Finally we develop a Born's reciprocal complex gravitational theory as a local gauge theory in 8D of the $\mathit{deformed}$ Quaplectic group that is given by the semi-direct product of U(1,3) with the $\mathit{deformed}$ (noncommutative) Weyl-Heisenberg group involving four $\mathit{noncommutative}$ coordinates and momenta. The metric is complex with symmetric real components and antisymmetric imaginary ones. An action in 8D involving 2 curvature scalars and torsion squared terms is presented.

Physics in Natural Sciences

The Charge–Mass–Spin Relation of Clifford Polyparticles, Kerr–Newman Black Holes and the Fine Structure Constant. Carlos Castro - 2004 - Foundations of Physics 34 (7): 1091-1113.

A Clifford-algebraic interpretation is proposed of the charge, mass, spin relationship found recently by Cooperstock and Faraoni, which was based on the Kerr–Newman metric solutions of the Einstein–Maxwell equations. The components of the polymomentum associated with a Clifford polyparticle in four dimensions provide for such a charge, mass, spin relationship without the problems encountered in Kaluza–Klein compactifications which furnish an unphysically large value for the electron charge. A physical reasoning behind such charge, mass, spin relationship is provided, followed by a discussion on the geometrical derivation of the fine structure constant by Wyler, Smith, Gonzalez-Martin and Smilga. To finalize, the renormalization of electric charge is discussed and some remarks are made pertaining to the modifications of the charge–scale relationship, when the spin of the polyparticle changes with scale, that may cast some light on the alleged astrophysical variations of the fine structure constant.

Philosophy of Physics, Miscellaneous in Philosophy of Physical Science

Research, Development, and Innovation in Extremadura: A GNU/Linex Case Study. Andoni Alsonso, Luis Casas, Carlos Castro & Fernando Solís - 2004 - Philosophy Today 48 (9999): 16-22.

Nanotechnology in Applied Ethics

Extended Scale Relativity, P-Loop Harmonic Oscillator, and Logarithmic Corrections to the Black Hole Entropy. Carlos Castro & Alex Granik - 2003 - Foundations of Physics 33 (3): 445-466.

An extended scale relativity theory, actively developed by one of the authors, incorporates Nottale's scale relativity principle where the Planck scale is the minimum impassible invariant scale in Nature, and the use of polyvector-valued coordinates in C-spaces (Clifford manifolds) where all lengths, areas, volumes, …, are treated on equal footing.
We study the generalization of the ordinary point-particle quantum mechanical oscillator to the p-loop (a closed p-brane) case in C-spaces. Its solution exhibits some novel features: an emergence of two explicit scales delineating the asymptotic regimes (Planck scale region and a smooth region of a quantum point oscillator). In the most interesting Planck scale regime, the solution recovers in an elementary fashion some basic relations of string theory (including string tension quantization and string uncertainty relation). It is shown that the degeneracy of the first collective excited state of the p-loop oscillator yields not only the well-known Bekenstein–Hawking area-entropy linear relation but also the logarithmic corrections therein. In addition we obtain for any number of dimensions the Hawking temperature, the Schwarzschild radius, and the inequalities governing the area of a black hole formed in a fusion of two black holes. One of the interesting results is a demonstration that the evaporation of a black hole is limited by the upper bound on its temperature, the Planck temperature.

On Clifford Space Relativity, Black Hole Entropy, Rainbow Metrics, Generalized Dispersion and Uncertainty Relations. Carlos Castro - 2014 - Foundations of Physics 44 (9): 990-1008.

An analysis of some of the applications of Clifford space relativity to the physics behind the modified black hole entropy-area relations, rainbow metrics, generalized dispersion and minimal length stringy uncertainty relations is presented.

Thermodynamics and Statistical Mechanics in Philosophy of Physical Science

On Dark Energy, Weyl's Geometry, Different Derivations of the Vacuum Energy Density and the Pioneer Anomaly. Carlos Castro - 2007 - Foundations of Physics 37 (3): 366-409.

Two different derivations of the observed vacuum energy density are presented. One is based on a class of proper and novel generalizations of the de Sitter solutions in terms of a family of radial functions R that provides an explicit formula for the cosmological constant along with a natural explanation of the ultraviolet/infrared entanglement required to solve this problem. A nonvanishing value of the vacuum energy density of the order of $10^{-123} M_{\rm Planck}^4$ is derived in agreement with the experimental observations. A correct lower estimate of the mass of the observable universe related to the Dirac–Eddington–Weyl large number $N = 10^{80}$ is also obtained. The presence of the radial function R is instrumental to understand why the cosmological constant is not zero and why it is so tiny. Finally, we rigorously prove why the proper use of Weyl's Geometry within the context of Friedmann–Lemaître–Robertson–Walker cosmological models can account for both the origins and the value of the observed vacuum energy density. The source of dark energy is just the dilaton-like Jordan–Brans–Dicke scalar field that is required to implement Weyl invariance of the most simple of all possible actions. The full theory involving the dynamics of Weyl's gauge field Aμ is very rich and may explain also the anomalous Pioneer acceleration and the temporal variations of the fundamental constants resulting from the expansion of the Universe. This is consistent with Dirac's old idea of the plausible variation of the physical constants but with the advantage that it is not necessary to invoke extra dimensions.
On Nonlinear Quantum Mechanics, Noncommutative Phase Spaces, Fractal-Scale Calculus and Vacuum Energy. Carlos Castro - 2010 - Foundations of Physics 40 (11): 1712-1730.

A (to our knowledge) novel Generalized Nonlinear Schrödinger equation based on the modifications of Nottale-Cresson's fractal-scale calculus and resulting from the noncommutativity of the phase space coordinates is explicitly derived. The modifications to the ground state energy of a harmonic oscillator yield the observed value of the vacuum energy density. In the concluding remarks we discuss how nonlinear and nonlocal QM wave equations arise naturally from this fractal-scale calculus formalism which may have a key role in the final formulation of Quantum Gravity.

Nonlinear Dynamics in Philosophy of Physical Science
Quantum Gravity in Philosophy of Physical Science
Quantum Nonlocality in Philosophy of Physical Science
CommonCrawl
7.3 Right pyramids, right cones and spheres (EMBHZ)

A pyramid is a geometric solid that has a polygon as its base and sides that converge at a point called the apex. In other words, the sides are not perpendicular to the base. The triangular pyramid and square pyramid take their names from the shape of their base. We call a pyramid a "right pyramid" if the line between the apex and the centre of the base is perpendicular to the base. Cones are similar to pyramids except that their bases are circles instead of polygons. Spheres are solids that are perfectly round and look the same from any direction.

Surface area of pyramids, cones and spheres (EMBJ2)

Square pyramid
\(\begin{array}{r@{\;}c@{\;}l} \text{Surface area} &=& \text{area of base} \;+ \\ && \text{area of triangular sides} \\ &=& b^2 + \text{4}\left(\frac{1}{2}b{h}_{s}\right) \\ &=& b\left(b+2{h}_{s}\right) \end{array}\)

Triangular pyramid
\(\begin{array}{r@{\;}c@{\;}l} \text{Surface area} &= & \text{area of base} \;+ \\ && \text{area of triangular sides} \\ &=& \left(\frac{1}{2}b\times {h}_{b}\right)+3\left(\frac{1}{2}b\times {h}_{s}\right) \\ &=& \frac{1}{2}b\left({h}_{b}+3{h}_{s}\right) \end{array}\)

Right cone (here \(h\) denotes the slant height)
\(\begin{array}{r@{\;}c@{\;}l} \text{Surface area} &=& \text{area of base} \;+ \\ && \text{area of walls} \\ &=& \pi {r}^{2}+\frac{1}{2}\times 2\pi rh \\ &=& \pi r\left(r+h\right) \end{array}\)

Sphere
\(\text{Surface area} = 4 \pi r^2\)

Volume of pyramids, cones and spheres (EMBJ3)

Square pyramid
\(\begin{array}{r@{\;}c@{\;}l} \text{Volume} &=& \frac{1}{3}\times \text{area of base} \;\times \\ && \text{height of pyramid} \\ &=& \frac{1}{3}\times {b}^{2}\times H \end{array}\)

Triangular pyramid
\(\begin{array}{r@{\;}c@{\;}l} \text{Volume} &=& \frac{1}{3}\times \text{area of base} \;\times \\ && \text{height of pyramid} \\ &=& \frac{1}{3}\times \frac{1}{2}bh\times H \end{array}\)

Right cone
\(\begin{array}{r@{\;}c@{\;}l} \text{Volume} &=& \frac{1}{3}\times \text{area of base} \;\times \\ && \text{height of cone} \\ &=& \frac{1}{3}\times \pi {r}^{2}\times H \end{array}\)

Sphere
\(\text{Volume}=\frac{4}{3}\pi {r}^{3}\)

Worked example 4: Finding surface area and volume

The Southern African Large Telescope (SALT) is housed in a cylindrical building with a domed roof in the shape of a hemisphere. The height of the building wall is \(\text{17}\) \(\text{m}\) and the diameter is \(\text{26}\) \(\text{m}\). Calculate the total surface area of the building. Calculate the total volume of the building.

Calculate the total surface area
\begin{align*} \text{Total surface area} &= \text{area of the dome} + \text{area of the cylinder} \\ \text{Surface area} &= \left[\frac{1}{2}(4 \pi r^2)\right] + \left[2 \pi r \times h\right] \\ &= \frac{1}{2}(4 \pi)(13)^2 + 2 \pi (\text{13})(\text{17}) \\ &= \text{2 450}\text{ m$^{2}$} \end{align*}

Calculate the total volume
\begin{align*} \text{Total volume} &= \text{volume of the dome} + \text{volume of the cylinder} \\ \text{Volume} &= \left[\frac{1}{2} \times \left( \frac{4}{3} \pi r^3 \right)\right] + \left[\pi r^2 h\right] \\ &= \frac{2}{3} \pi (\text{13})^3 + \pi (\text{13})^2 (\text{17}) \\ &\approx \text{13 624}\text{ m$^{3}$} \end{align*}

Finding surface area and volume

An ice-cream cone has a diameter of \(\text{52,4}\) \(\text{mm}\) and a total height of \(\text{146}\) \(\text{mm}\).
Calculate the surface area of the ice-cream and the cone. \begin{align*} \text{Radius }&=\frac{\text{52,4}}{2} \\ &=\text{26,2}\text{ mm} \\ \text{Height of cone }&= 146 - \text{26,2} \\ &=\text{119,8}\text{ mm} \end{align*} The surface area of the ice-cream is half a sphere: \begin{align*} \text{Surface area ice-cream:} &=\dfrac{1}{2}(4 \pi r^{2}) \\ &= \dfrac{1}{2}(4 \times \pi \times (26,2)^{2}) \\ &\approx \text{4313,03 mm}^{2} \end{align*} The surface area of the cone must not include the surface area of the circular face. \begin{align*} \text{Surface area cone }&= \pi r(r + \sqrt{h^{2} + r^{2}}) - \pi r^{2} \\ &= \pi \times \text{26,2} \times (\text{26,2}+\sqrt{(119,8)^{2} + (26,2)^{2}}) - \pi \times (26,2)^{2} \\ &\approx \text{10093,76 mm}^{2} \\ \therefore \text{Surface area ice-cream and cone} &= \text{4313,03} + \text{10093,76} \\ &=\text{14406,79 mm}^{2} \\ &\approx \text{144,07}\text{ cm$^{2}$} \end{align*} Calculate the total volume of the ice-cream and the cone. \begin{align*} \text{Volume}&=\text{volume(cone)}+\text{volume}(\frac{1}{2}\text{sphere}) \\ &=\frac{1}{3}\pi r^2h+\frac{1}{2}\left(\frac{4}{3}\pi r^3\right) \\ &=\frac{1}{3}\pi(\text{26,2})^2\times\text{119,8}+\frac{2}{3}\pi(\text{26,2})^3 \\ &=\text{86 116,82}\ldots +\text{37 667,12}\ldots \\ &=\text{123 783,953}\ldots \text{ mm$^{3}$} \\ &\approx\text{124}\text{ cm$^{3}$} \end{align*} How many ice-cream cones can be made from a \(\text{5}\) \(\text{ℓ}\) tub of ice-cream (assume the cone is completely filled with ice-cream)? \begin{align*} \text{1 000}\text{ cm$^{3}$}&=\text{1}\text{ ℓ} \\ \therefore \text{5}\text{ ℓ}&= \text{5 000}\text{ cm$^{3}$}\\ \therefore \frac{\text{5 000}}{124} &\approx 40 \text{ cones} \end{align*} Consider the net of the cone given below. \(R\) is the length from the tip of the cone to its perimeter, \(P\). Determine the value of \(R\). \(R\) is the slant height. \begin{align*} R &= \sqrt{r^{2} + h^{2}} \\ &= \sqrt{(\text{26,2})^{2} + (\text{119,8})^{2}} \\ &= \text{122,631}\ldots\text{ mm} \\ &\approx \text{123}\text{ mm} \end{align*} Calculate the length of arc \(P\). \begin{align*} P &=\text{circumference of cone} \\ & = 2\pi(\text{26,2}) \\ &\approx \text{165}\text{ mm} \end{align*} Determine the length of arc \(M\). \begin{align*} M&=2\pi(123)-165\\ &=\text{608}\text{ mm} \end{align*}
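The arithmetic in these worked examples can be checked with a few lines of code. The following minimal Python sketch (ours, not part of the original text) reproduces the SALT building and ice-cream cone figures above.

```python
import math

# --- Worked example 4: the SALT building (hemisphere on a cylinder) ---
r, h = 26 / 2, 17                                  # radius and wall height in m
area = 0.5 * (4 * math.pi * r**2) + 2 * math.pi * r * h
vol = 0.5 * (4 / 3 * math.pi * r**3) + math.pi * r**2 * h
print(f"SALT surface area: {area:.0f} m^2")        # ~2450 m^2
print(f"SALT volume: {vol:.0f} m^3")               # ~13624 m^3

# --- The ice-cream cone (hemisphere on a cone) ---
r = 52.4 / 2                                       # radius in mm
h = 146 - r                                        # cone height: total minus hemisphere
slant = math.sqrt(h**2 + r**2)
area = 2 * math.pi * r**2 + math.pi * r * slant    # half sphere + cone wall (no base)
vol = math.pi * r**2 * h / 3 + 2 / 3 * math.pi * r**3
print(f"cone surface area: {area / 100:.2f} cm^2") # ~144.07 cm^2 (100 mm^2 = 1 cm^2)
print(f"cone volume: {vol / 1000:.0f} cm^3")       # ~124 cm^3 (1000 mm^3 = 1 cm^3)
print(f"cones per 5 l tub: {5000 // round(vol / 1000)}")  # ~40
```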
CommonCrawl
Remote monitoring of cardiorespiratory signals from a hovering unmanned aerial vehicle

Ali Al-Naji (ORCID: orcid.org/0000-0002-8840-9235)1,2, Asanka G. Perera1 & Javaan Chahl1,3

BioMedical Engineering OnLine volume 16, Article number: 101 (2017)

Abstract

Remote physiological measurement might be very useful for biomedical diagnostics and monitoring. This study presents an efficient method for remotely measuring heart rate and respiratory rate from video captured by a hovering unmanned aerial vehicle (UAV). The proposed method estimates heart rate and respiratory rate from signals obtained by video photoplethysmography that are synchronous with cardiorespiratory activity. Since the PPG signal is strongly affected by noise (illumination variations, the subject's motion and camera movement), we have used advanced signal processing techniques, including complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and canonical correlation analysis (CCA), to remove noise from these sources. To evaluate the performance and effectiveness of the proposed method, a set of experiments was performed on 15 healthy volunteers in a front-facing position, involving motion resulting from both the subject and the UAV, under different scenarios and different lighting conditions. The experimental results demonstrated that the proposed system, with and without the magnification process, achieves robust and accurate readings that correlate significantly with those of a standard pulse oximeter and a Piezo respiratory belt. Also, the squared correlation coefficient, root mean square error, and mean error rate yielded by the proposed method with and without the magnification process were significantly better than those of state-of-the-art methodologies, including independent component analysis (ICA) and principal component analysis (PCA).

Background

Unmanned aerial vehicles (UAVs) or drones, particularly small UAVs capable of hover, are a rapidly maturing technology with increasing numbers of innovative applications. The ability of a UAV to detect and measure the vital signs of humans could have many applications, including triage of disaster victims, detection of security threats, and deepening the context of human-machine interaction. Remote-sensing imaging systems provide a convenient way to monitor human vital signs without any physical restrictions. Imaging photoplethysmography (iPPG) is one of the most promising methods; it uses a video camera as a photodetector to detect changes in the light transmitted through or reflected from the skin due to cardiac-synchronous variations. Traditional contact monitoring methods, such as ECG, pulse oximetry, and respiratory belt transducers, require patients to wear adhesive sensors, electrodes and chest straps, potentially for a long time, which may cause discomfort, infection or adverse reactions in patients with sensitive skin (e.g., neonates or those suffering burns) [1–6]. The desire to solve the problems associated with contact monitoring systems has led to research using video cameras as non-contact sensors for monitoring of vital signs. Non-contact methods based on iPPG provide a low-cost and comfortable way to measure vital signs. For example, Takano and Ohta [7] used a time-lapse image acquired from a CCD camera to extract cardiorespiratory signals of stationary subjects under different illumination levels.
They used image processing techniques, including auto-regressive (AR) spectral analysis combined with a 1st-order derivative and a 2 Hz low pass filter, to analyse changes in the image brightness of a region of interest (ROI) around the cheek in the facial area, which allowed detection of both heart and respiratory rates. Later, Verkruysse et al. [8] were able to remotely extract PPG signals from the RGB channels obtained from a digital camera under ambient light conditions. They used a fast Fourier transform (FFT) and band-pass digital filtering on the PPG signal extracted from the human face to detect heart and respiratory rates. Similarly, in [9, 10] Poh et al. reported on the development of a non-contact and automated method for measuring the cardiac pulse from the human face recorded using a built-in webcam. They applied a blind source separation (BSS) method based on independent component analysis (ICA) to the RGB channel intensities to obtain three components, and used the Fourier transform and band-pass filtering on these components to extract the signal of interest. According to their outcomes, the green component was the best component from which to extract the cardiac pulse signal. Later, a study by Lewandowska et al. [11] proposed a contactless method for heart pulse monitoring with a webcam based on principal component analysis (PCA) to reduce computational complexity compared to the ICA used by [9, 10]. Similar to Poh's methodology, Kwon et al. [12] used the front-facing camera of a smartphone to extract the cardiac pulse signal based on frequency analysis of the PPG signal. As reported in previous studies, the main challenges in using the iPPG method were illumination variations (caused by the lighting conditions of indoor or outdoor environments, intrinsic camera noise and changes in the skin tone) and the subject's movement (of the entire head, but also facial expressions, eye blinking and speech) during the measurements. Research has been performed to address these limitations. For instance, to sidestep the challenges of illumination variations, some investigations [13–16] used the head motion generated by the pulsatile flow of blood from the heart to the head via the carotid arteries to extract the cardiac pulse signal, based on ICA [13], PCA [14, 15] and a frame subtraction method [16]. However, subject motion remained the main challenge in their results. In another example, addressing only subject motion, de Haan and Jeanne [17] extracted the cardiac pulse signals directly from RGB face image sequences captured by a digital camera using a chrominance-based iPPG method for a subject exercising on a stationary exercise bicycle and a stepping machine. According to their outcomes, the proposed method was better than the ICA used in [9, 10] and the PCA used in [11] for both stationary and moving subjects. Another study by Li et al. [18] proposed a novel heart rate measurement method to reduce the noise in the cardiac pulse signal from face video caused by both illumination variations and subject motion. They used a Normalized Least Mean Squares (NLMS) filter [19] to deal with noise caused by illumination variations, and both the Discriminative Response Map Fitting (DRMF) [20] and the Kanade-Lucas-Tomasi (KLT) algorithms [21] to reduce the noise caused by subject motion. Although their method showed promising results for heart rate under realistic human-computer interaction situations, it led to higher computational complexity than other methods. Feng et al.
[22] used an optical iPPG signal model to remove noise caused by motion artefacts from the PPG signal based on the optical properties of human skin. They proposed an adaptive colour difference method between the red and green channels acquired from a digital camera and used an adaptive bandpass filter (ABF) based on the spectral characteristics of the PPG signal to extract the cardiac pulse signal and reduce motion artefacts in the facial ROI. However, more advanced signal processing techniques are needed to improve their results, because a colour difference method and ABF may be inefficient when the noise signal falls within the frequency band of interest. Also, the performance of the optical analysis method may be affected by periodic illumination variations. Recently, Chen et al. [23] used a reflectance decomposition method on the green channel and ensemble empirical mode decomposition (EEMD) to separate the real cardiac pulse signal from the environmental illumination noise in the PPG signal from digital camera video of a human face. Their proposed approach outperformed the current state-of-the-art methods [9, 10]. However, subject respiration can affect the decomposition of facial reflectance and thus distort the signal of interest. A study by Cheng et al. [24] demonstrated the feasibility of removing the illumination variation noise from the cardiac pulse signal in the facial ROI (green channel) from webcam videos using joint blind source separation (JBSS) and EEMD. The main limitations of their study were that all subjects were asked to remain stationary and that both the facial ROI and a background ROI were assumed to have the same illumination variations. In addition, most of the previous studies considered only the motion artefacts resulting from subject movement and not those resulting from camera movement. Therefore, to remove the effects of illumination variations, subject movement and camera movement, we propose a combination of complete ensemble EMD with adaptive noise (CEEMDAN) and canonical correlation analysis (CCA) to remove the noise these effects introduce into the PPG signal, and thus present a robust non-contact method to remotely extract cardiorespiratory signals (heart rate and respiratory rate) from video sequences captured by a hovering UAV.

Experimental setup and data acquisition

Fifteen healthy participants (10 males, 5 females) with ages ranging from 2 to 40 years were enrolled in the experiment. Ethical approval was granted by the UniSA Human Research Ethics Committee and the study was carried out following the rules of the Declaration of Helsinki of 1975. Written informed consent was obtained from each participant before commencing the experiment. The experiment was performed in outdoor and indoor environments, with each subject standing at a distance of 3 m in front of the UAV camera, as shown in Fig. 1. Several videos were acquired for each subject using a hovering UAV (3DR Solo) with a GoPro Hero 4k camera at different times of day with different illumination levels. We used a replacement lens (10 MP, 5.4 mm GoPro lens) instead of the original camera lens in order to reduce fish-eye distortion. Each video was captured at 60 frames per second with a resolution of 1920 × 1080. The video acquisition time was set to 1 min; however, only the last 30 s was chosen for analysis in the MATLAB program (R2015b).
Control measurement of the reference heart and respiratory rates was performed using a finger pulse oximeter (Rossmax SA210) [25] and a Piezo respiratory belt transducer (MLT1132) [26] for validation purposes.

Pre-processing and data analysis

The system framework is composed of five steps, as shown in Fig. 2.

System overview of the proposed method

In the first step, we used an enhanced video magnification technique [27] to magnify the skin colour variation, since the variation caused by the cardiac pulse signal is very weak. Although the digital camera can reveal the iPPG signal, there is substantial noise associated with this signal caused by the effects of illumination variations, subject movement and camera movement. We also evaluated the results to examine whether the proposed system, with and without the magnification process, is more efficient than conventional measurement methods, and to assess the importance of this process in enhancing the iPPG signal. Some examples of iPPG signals acquired under different conditions are given in Fig. 3.

The iPPG signals for the facial ROI (green channel) for a subject in the case of: a stationary, b stationary with 15× magnification, c different facial expressions, d talking, and e different illumination conditions

In the second step, to select the facial ROI and deal with the problems associated with head movement, we used an enhanced face detection method proposed by Chen et al. [28] instead of the Viola–Jones method [29] used in most previous measurement methods, because it is more effective with inclined or angled faces. Also, the Chen et al. method has better performance than conventional face detection methods [18, 22, 28]. The raw iPPG signal was then obtained by averaging all the image pixel values within the facial ROI of the green channel as follows:

$$iPPG(t) = \frac{\sum\nolimits_{x,y \in ROI} I\left(x,y,t\right)}{\left|ROI\right|}$$

where I(x, y, t) is the pixel value at image location (x, y) at time t and |ROI| is the size of the facial ROI.

In the third step, we used complete ensemble EMD with adaptive noise (CEEMDAN) [30] to reduce the noise interference caused by illumination variations in the iPPG signal. CEEMDAN is an advanced signal processing method proposed by Colominas et al. [30] that improves on EMD [31] and EEMD [32] by reducing the residual noise in the intrinsic mode functions (IMFs), giving modes with more physical meaning. Like EMD, CEEMDAN decomposes the original signal into IMFs with instantaneous amplitude and frequency data. An example decomposition of \(iPPG(t)\) into eight IMFs with 200 iterations is provided in Fig. 4.

An example of CEEMDAN decomposition of the iPPG signal in the facial ROI

Three IMFs (IMF5, IMF6 and IMF7) were chosen for estimating the cardiorespiratory signals based on their frequency spectra, which correspond to the expected range of the cardiac pulse frequency band, as shown in Fig. 5.

The frequency spectrum of decomposed IMFs

Figure 5 shows the spectrum of all IMFs and which IMFs have frequency bands of interest. It is clear that the frequency bands of IMF5, IMF6 and IMF7 fall within 0.2–4 Hz, corresponding to 12–240 beats/min, whereas the frequency bands of the other IMFs fall outside this range. Therefore, only IMF5, IMF6 and IMF7 were selected as inputs for the next step; they have maximum frequency spectra at 2.7, 1.34 and 1.2 Hz, which correspond to 162, 80 and 72 beats/min respectively.
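As a rough illustration of this decomposition-and-selection step, the sketch below (assuming the PyEMD package, installed as EMD-signal, and the 60 Hz frame rate used above; the function names are ours, not the authors' code) performs the CEEMDAN decomposition and keeps only the IMFs whose dominant spectral peak falls inside the cardiorespiratory band.

```python
import numpy as np
from PyEMD import CEEMDAN   # assumes the PyEMD package (pip install EMD-signal)

FS = 60.0                   # camera frame rate in Hz, as in the recordings above

def raw_ippg(frames, rows, cols):
    """Raw iPPG per the equation above: spatial mean of the green channel over
    the ROI. frames has shape (n_frames, height, width, 3); rows/cols are slices."""
    return frames[:, rows, cols, 1].mean(axis=(1, 2))

def select_imfs(ippg, f_lo=0.2, f_hi=4.0, trials=200):
    """Decompose the iPPG trace with CEEMDAN and keep the IMFs whose dominant
    spectral peak lies in the 0.2-4 Hz (12-240 beats/min) band."""
    imfs = CEEMDAN(trials=trials)(ippg)              # shape: (n_imfs, n_samples)
    freqs = np.fft.rfftfreq(ippg.size, d=1.0 / FS)
    kept = [imf for imf in imfs
            if f_lo <= freqs[np.argmax(np.abs(np.fft.rfft(imf)))] <= f_hi]
    return np.array(kept)
```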
In the fourth step, the CCA technique is applied to the selected IMFs to remove the motion artefact components from the iPPG signal. The CCA technique can be used for blind source separation (BSS), separating a number of mixed signals [33–35]. It is based on second-order statistics and generates components derived from their uncorrelated signals, rather than the independent components used in ICA. CCA can achieve better BSS performance than ICA and has lower computational complexity [36–38]. To see how CCA works as a BSS method, let j and k be two multi-dimensional random signals, and consider the linear combinations of these signals, known as the canonical variates [33]:

$$\tilde{j} = W_{j}^{T}\left[j - \bar{j}\right],\quad \tilde{k} = W_{k}^{T}\left[k - \bar{k}\right]$$

where W_j and W_k are weighting matrices of j and k. The correlation, ρ, between these linear combinations is given by

$$\rho = \frac{W_{j}^{T} C_{jk} W_{k}}{\sqrt{W_{j}^{T} C_{jj} W_{j}\; W_{k}^{T} C_{kk} W_{k}}}$$

where \(C_{jj}\) and \(C_{kk}\) are the nonsingular within-set covariance matrices and \(C_{jk}\) is the between-sets covariance matrix. The largest canonical variates are found by maximizing ρ with respect to W_j and W_k.

The original green channel signal (a) is converted into a multichannel signal (A) using the CEEMDAN algorithm. The IMFs determined to be outside the frequency bands of interest are removed, and the remaining IMFs, determined to be within the frequency bands of interest, are used as inputs with the un-mixing matrix W of the CCA algorithm. The original multichannel signal \(\widetilde{A}\) is then reconstructed without the unwanted IMFs (artefact components) using the inverse of the un-mixing matrix \(W^{-1}\). The target single-channel signal \(\widetilde{a}\), free of the noise resulting from the effects of illumination variations, subject movement and camera movement, is then obtained by adding the new IMF components in the \(\widetilde{A}\) matrix.

In the fifth step, a fast Fourier transform (FFT) is applied to transform the \(\widetilde{a}\) signal from the time domain to the frequency domain. Two ideal band-pass filters are then applied to this signal with pass bands of 0.5–4 and 0.2–0.5 Hz, corresponding to 30–240 beats/min and 12–30 breaths/min respectively. The inverse FFT is then applied to the filtered results to obtain the cardiorespiratory signals. Finally, the heart and respiratory rates are measured using a peak detection algorithm [39].

Results

The experimental results obtained from the 15 subjects were set in four scenarios. In the first, stationary scenario, the subject stood in front of the UAV without any movement. In the second scenario, the subject was asked to display different facial expressions with some head rotation during image capture. In the third scenario, the subject was asked to remain stationary and talk normally during image capture. These three scenarios were set up in outdoor and indoor environments under ambient light conditions. The last scenario placed the imaging sessions in the indoor environment under different illumination levels. The motion artefacts resulting from the flying UAV camera were present in all proposed scenarios. The frame sequences obtained from the UAV camera for all scenarios were processed through the proposed system with and without the magnification process.
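The ideal band-pass filtering and peak counting described in the fifth step above can be sketched as follows. This is a simplified stand-in: it uses SciPy's find_peaks for peak detection, which is not necessarily the peak detection algorithm of [39].

```python
import numpy as np
from scipy.signal import find_peaks

FS = 60.0  # frames per second

def ideal_bandpass(x, f_lo, f_hi, fs=FS):
    """Ideal band-pass filter: zero every FFT bin outside [f_lo, f_hi], invert."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    spectrum[(freqs < f_lo) | (freqs > f_hi)] = 0.0
    return np.fft.irfft(spectrum, n=x.size)

def rate_per_minute(x, f_lo, f_hi, fs=FS):
    """Band-limit the denoised signal, count its peaks, convert to events/min."""
    y = ideal_bandpass(x, f_lo, f_hi, fs)
    # require at least one period of the highest pass frequency between peaks
    peaks, _ = find_peaks(y, distance=fs / f_hi)
    return len(peaks) * 60.0 * fs / x.size

# heart rate: 0.5-4 Hz (30-240 beats/min); respiration: 0.2-0.5 Hz (12-30 breaths/min)
# hr = rate_per_minute(denoised, 0.5, 4.0)
# rr = rate_per_minute(denoised, 0.2, 0.5)
```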
We evaluated the performance of the proposed system for heart and respiratory rate measurements with and without the magnification process and compared them with the measurements obtained from ICA [9, 10] and PCA [11] in the four scenarios. Also, statistical analysis based on the Bland–Altman method [40] was used to quantify the degree of agreement between these systems and the reference methods (Rossmax pulse oximeter and Piezo respiratory belt). The mean bias and standard deviation (SD) of the differences, 95% limits of agreement (±1.96 SD), the squared correlation coefficient (CC2), root mean squared error (RMSE) and mean error rate (ME) were calculated for the estimated heart and respiratory rates from the proposed systems and the reference methods for all proposed scenarios.

Heart rate measurements

In the first scenario, the statistical agreement based on Bland–Altman plots of all measuring systems against the reference method (Rossmax pulse oximeter) is shown in Fig. 6, where the x-axis indicates the mean of the measurements and the y-axis the difference between the measurements.

Bland–Altman plots between heart rate measurements obtained by the reference method and heart rates measured by a the proposed system with magnification, b the proposed system without magnification, c ICA and d PCA for the first scenario

The Bland–Altman plot for the proposed system with the magnification process (see Fig. 6a) showed a mean bias of 0.069 beats/min with a lower 95% limit of −0.52 beats/min and an upper 95% limit of +0.66 beats/min, with a CC2 of 0.9991 and a RMSE of 0.31 beats/min, whereas the Bland–Altman plot for the proposed system without the magnification process (see Fig. 6b) yielded a mean bias of 0.072 beats/min with a lower limit of −1 beats/min and an upper 95% limit of +1.2 beats/min, with a CC2 of 0.9966 and a RMSE of 0.57 beats/min. When the agreement between the heart rate measurements based on ICA was evaluated (Fig. 6c), the mean bias was 0.27 beats/min with 95% limits of agreement of −2.1 to 2.6 beats/min, and the CC2 was 0.9843 with a RMSE of 1.22 beats/min; when PCA was used instead (Fig. 6d), the statistics were a mean bias of 0.3 beats/min with 95% limits of agreement of −2.9 to 3.5 beats/min, a CC2 of 0.9712 and a RMSE of 1.64 beats/min. The Bland–Altman plots for the second scenario are shown in Fig. 7.

Bland–Altman plots between heart rate measurements obtained by the reference method and heart rates measured by a the proposed system with magnification, b the proposed system without magnification, c ICA and d PCA for the second scenario

As shown in Fig. 7a, the mean bias was 0.14 beats/min and the 95% limits of agreement were −1.3 and +1.5 beats/min, with a CC2 of 0.9945 and a RMSE of 0.73 beats/min. Figure 7b showed a mean bias of 0.19 beats/min and 95% limits of agreement of −1.8 and +2.2 beats/min, with a CC2 of 0.9891 and a RMSE of 1.02 beats/min. Using ICA (see Fig. 7c), the mean bias was 0.47 beats/min with 95% limits of agreement of −3.5 to 4.4 beats/min, the CC2 was 0.9559 and the RMSE was 2.05 beats/min, while when PCA was used instead, the statistics were a mean bias of 0.59 beats/min with 95% limits of agreement of −4.1 to 5.3 beats/min, a CC2 of 0.9383 and a RMSE of 2.44 beats/min (see Fig. 7d). The Bland–Altman plots for the third scenario are shown in Fig. 8.
Bland–Altman plots between heart rate measurements obtained by the reference method and heart rates measured by a the proposed system with magnification, b the proposed system without magnification, c ICA and d PCA for the third scenario

Figure 8a revealed a mean bias of 0.11 beats/min with 95% limits of agreement of −0.87 to 1.1 beats/min, a CC2 of 0.9973 and a RMSE of 0.51 beats/min, while Fig. 8b revealed a mean bias of 0.15 beats/min with 95% limits of agreement of −1.5 to 1.8 beats/min, a CC2 of 0.9926 and a RMSE of 0.84 beats/min. The statistics (mean bias; limits of agreement; CC2; RMSE) were 0.38; −2.5 to 3.3; 0.9759; 1.53 beats/min based on ICA (see Fig. 8c) and 0.4; −3.1 to 3.9; 0.965; 1.83 beats/min based on PCA (see Fig. 8d). The Bland–Altman plots for the fourth scenario are shown in Fig. 9.

Bland–Altman plots between heart rate measurements obtained by the reference method and heart rates measured by a the proposed system with magnification, b the proposed system without magnification, c ICA and d PCA for the fourth scenario

The Bland–Altman plot (Fig. 9a) showed statistics of 0.17, −1.6 to 1.9, 0.9917 and 0.89 beats/min for the mean bias, limits of agreement, CC2 and RMSE respectively when the proposed system with the magnification process was used, while Fig. 9b showed statistics of 0.24, −2.1 to 2.6, 0.9848 and 1.2 beats/min respectively when the proposed system without the magnification process was used instead. The statistics based on ICA were 0.58, −5.1 to 6.3, 0.9089 and 2.94 beats/min (see Fig. 9c), whereas they were 0.6, −5.7 to 6.9, 0.8887 and 3.24 beats/min based on PCA (see Fig. 9d). A performance comparison of the various measuring systems based on their RMSE values for the detection of heart rate in all proposed scenarios is shown in Fig. 10.

RMSE performance of various heart rate measuring systems for all proposed scenarios

Respiratory rate measurements

Figure 11 demonstrates the Bland–Altman plots of the respiratory rate measurements in the first scenario. The Bland–Altman plot (Fig. 11a) revealed a strong agreement between the respiratory rate measurements by the proposed system using the magnification process and the reference measurements by the Piezo respiratory belt. The mean bias was 0.066 breaths/min and the 95% limits of agreement ranged between −0.3 and 0.43 breaths/min, with a CC2 of 0.9978 and a RMSE of 0.2 breaths/min. The Bland–Altman plot (Fig. 11b) revealed a mean bias of 0.13 breaths/min, an agreement range between −0.66 and 0.93 breaths/min, a CC2 of 0.9898 and a RMSE of 0.42 breaths/min when the proposed system without the magnification process was used instead. Using ICA, as shown in Fig. 11c, the mean bias was 0.44 breaths/min with an agreement range between −1.9 and 2.8 breaths/min; the CC2 was 0.918 and the RMSE was 1.26 breaths/min. Using PCA, as shown in Fig. 11d, the mean bias was 0.62 breaths/min with an agreement range between −2.4 and 3.7 breaths/min; the CC2 was 0.8661 and the RMSE was 1.66 breaths/min.

Bland–Altman plots between respiratory rate measurements obtained by the reference method and respiratory rates measured by a the proposed system with magnification, b the proposed system without magnification, c ICA and d PCA for the first scenario

In the second scenario, Fig. 12a revealed a mean bias of 0.12 breaths/min with an agreement range between −0.62 and 0.85 breaths/min, a CC2 of 0.9913 and a RMSE of 0.39 breaths/min, while Fig.
12b revealed a mean bias of 0.2 breaths/min with an agreement range between −0.93 and 1.3 breaths/min, a CC2 of 0.9799 and a RMSE of 0.6 breaths/min. Using ICA, as shown in Fig. 12c, the statistics were a mean bias of 0.57 breaths/min; an agreement range of −2.3 to 3.4 breaths/min; a CC2 of 0.8833; and a RMSE of 1.54 breaths/min, whereas when PCA was used, the statistics were 0.94 breaths/min; an agreement range of −2.5 to 4.4 breaths/min; a CC2 of 0.8358; and a RMSE of 1.98 breaths/min, as shown in Fig. 12d.

Bland–Altman plots between respiratory rate measurements obtained by the reference method and respiratory rates measured by a the proposed system with magnification, b the proposed system without magnification, c ICA and d PCA for the second scenario

In the third scenario, Fig. 13a showed a mean bias of 0.091 breaths/min with an agreement range between −0.47 and 0.65 breaths/min, a CC2 of 0.995 and a RMSE of 0.3 breaths/min, while Fig. 13b showed a mean bias of 0.17 breaths/min with an agreement range between −0.79 and 1.1 breaths/min, a CC2 of 0.9853 and a RMSE of 0.52 breaths/min. Using ICA, as shown in Fig. 13c, the statistics were a mean bias of 0.51 breaths/min; an agreement range of −2 to 3 breaths/min; a CC2 of 0.9028; and a RMSE of 1.38 breaths/min, whereas when PCA was used, the statistics were 0.87; an agreement range of −2.3 to 4 breaths/min; a CC2 of 0.8558; and a RMSE of 1.83 breaths/min, as shown in Fig. 13d.

Bland–Altman plots between respiratory rate measurements obtained by the reference method and respiratory rates measured by a the proposed system with magnification, b the proposed system without magnification, c ICA and d PCA for the third scenario

In the fourth scenario, Fig. 14a indicated a mean bias of 0.16 breaths/min with an agreement range between −0.84 and 1.2 breaths/min, a CC2 of 0.9838 and a RMSE of 0.53 breaths/min, while Fig. 14b showed a mean bias of 0.21 breaths/min with an agreement range between −0.91 and 1.3 breaths/min, a CC2 of 0.98 and a RMSE of 0.6 breaths/min. Using ICA, as shown in Fig. 14c, the statistics were a mean bias of 0.74 breaths/min; an agreement range of −3.8 to 5.2 breaths/min; a CC2 of 0.7531; and a RMSE of 2.4 breaths/min, whereas when PCA was used, the statistics were 1.1; an agreement range of −3.8 to 5.9 breaths/min; a CC2 of 0.7366; and a RMSE of 2.69 breaths/min, as shown in Fig. 14d.

Bland–Altman plots between respiratory rate measurements obtained by the reference method and respiratory rates measured by a the proposed system with magnification, b the proposed system without magnification, c ICA and d PCA for the fourth scenario

A performance comparison of the various measuring systems based on their RMSE values for the detection of respiratory rate in all proposed scenarios is shown in Fig. 15.

RMSE performance of various respiratory rate measuring systems for all proposed scenarios

Discussion

The results show that PPG can be successfully performed from a hovering UAV if a suitably selective spatiotemporal motion detection scheme is used. The experimental results on many video sequences show that the estimated heart and respiratory rates had high agreement with the reference methods and outperformed the state-of-the-art methods (ICA and PCA) in the four different proposed scenarios. In the stationary scenario, the proposed system with the magnification process showed an excellent agreement with the reference method (CC2 = 0.9991, RMSE = 0.31 beats/min and mean error (ME) = 0.29%) with respect to the heart rate measurements and (CC2 = 0.9978, RMSE = 0.2 breaths/min and ME = 0.18%) with respect to the respiratory rate measurements.
Our proposed system without the magnification process could also measure these vital signs with very good agreement (CC2 = 0.9966, RMSE = 0.57 beats/min, and ME = 0.54% for heart rate measurements, and CC2 = 0.9898, RMSE = 0.42 breaths/min, and ME = 0.41% for the respiratory rate measurements). It is clear that our system, with and without the magnification process, reduced the mean bias, the limits of agreement and the RMSE, as well as increasing the correlation level, compared to when ICA and PCA were used instead to extract vital signs from the full face area. ICA under the first scenario had a ME of 1.18% for heart rate and 1.22% for respiratory rate, whereas PCA had a ME of 1.54% for heart rate and 1.58% for respiratory rate. In the second scenario (facial expressions and head rotation), the proposed system with the magnification process also had very good agreement with the reference method (CC2 = 0.9944, RMSE = 0.73 beats/min, and ME = 0.72%) with respect to the heart rate measurements and (CC2 = 0.9913, RMSE = 0.39 breaths/min, and ME = 0.38%) with respect to the respiratory rate measurements, which were slightly better than when we used our system without the magnification process (CC2 = 0.989, RMSE = 1.02 beats/min, and ME = 0.99% for heart rate measurements, and CC2 = 0.9799, RMSE = 0.6 breaths/min, and ME = 0.59% for the respiratory rate measurements). This is significantly better than the statistics achieved when ICA and PCA were used instead. ICA under the second scenario had a ME of 1.98% for heart rate and 1.49% for respiratory rate, whereas PCA had a ME of 2.37% for heart rate and 1.89% for respiratory rate. In the third scenario (talking), our results with and without the magnification process also had better correlation than those obtained from ICA and PCA. The statistics (CC2, RMSE, and ME) with the magnification process were 0.9973, 0.51 beats/min and 0.5% respectively for heart rate, and 0.995, 0.3 breaths/min and 0.29% for respiratory rate, whereas without the magnification process they were 0.9926, 0.84 beats/min and 0.82% for heart rate, and 0.9853, 0.52 breaths/min and 0.5% for respiratory rate. Under this scenario, the ME based on ICA was 1.5% for heart rate and 1.34% for respiratory rate, whereas the ME based on PCA was 1.79% for heart rate and 1.75% for respiratory rate. Our results with and without the magnification process under the last scenario (lighting conditions) also exhibited very good correlation and low RMSE compared to ICA and PCA, which could fail to extract the heart and respiratory rates, with low correlation levels and high RMSE. The statistics (CC2, RMSE, and ME) with the magnification process were 0.9917, 0.89 beats/min and 0.88% for heart rate, and 0.9838, 0.53 breaths/min and 0.52% for respiratory rate, whereas they were 0.9848, 1.2 beats/min and 1.18% for heart rate and 0.98, 0.6 breaths/min and 0.59% for respiratory rate without the magnification process. ICA under the fourth scenario had a ME of 2.78% for heart rate and 2.17% for respiratory rate, whereas PCA had a ME of 3.05% for heart rate and 2.49% for respiratory rate.
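The agreement statistics quoted throughout these results (mean bias, ±1.96 SD limits of agreement, CC2, RMSE and ME) are straightforward to compute from paired readings. A minimal sketch follows; note that the exact formula the authors use for ME is not spelled out in the text, so the definition below (mean absolute error relative to the reference, in percent) is an assumption.

```python
import numpy as np

def agreement_stats(estimated, reference):
    """Bland-Altman mean bias and 95% limits of agreement, plus CC2, RMSE and
    a mean error rate ME (defined here as mean |error|/reference in percent;
    an assumption, since the text does not spell out the ME formula)."""
    est = np.asarray(estimated, dtype=float)
    ref = np.asarray(reference, dtype=float)
    diff = est - ref
    bias = diff.mean()
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement
    cc2 = np.corrcoef(est, ref)[0, 1] ** 2       # squared correlation coefficient
    rmse = np.sqrt(np.mean(diff ** 2))
    me = 100.0 * np.mean(np.abs(diff) / ref)     # mean error rate in %
    return bias, loa, cc2, rmse, me
```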
For all the proposed scenarios together, our system with the magnification process presented a CC2 of 0.9956, a RMSE of 0.65 beats/min, and a ME of 0.6% for heart rate measurements, and a CC2 of 0.9919, a RMSE of 0.38 breaths/min, and a ME of 0.34% for respiratory rate measurements, whereas the results obtained without the magnification process produced a CC2 of 0.9907, a RMSE of 0.94 beats/min and a ME of 0.88% for heart rate measurements, and a CC2 of 0.9837, a RMSE of 0.5 breaths/min, and a ME of 0.52% for respiratory rate measurements. Using ICA, the statistics (CC2, RMSE, and ME) were 0.956, 2.04 beats/min and 1.86% respectively for heart rate measurements, and 0.8188, 1.97 breaths/min, and 1.77% respectively for respiratory rate measurements, whereas when PCA was used instead, they were 0.9405, 2.37 beats/min and 2.19% respectively for heart rate measurements, and 0.8164, 2.07 breaths/min, and 1.93% respectively for respiratory rate measurements.

We also tested the computational time of the proposed noise artefact removal method based on CEEMDAN-CCA against ICA and PCA. The mean computational time for CEEMDAN-CCA with 200 iterations on a 30 s iPPG signal was 1.22 s, while the means for ICA and PCA were 0.86 and 0.79 s respectively. The implementation was carried out in the MATLAB program (2015b) and run under Microsoft Windows 10 (64 bit) on a computer with an Intel Quad Core i5-4570 3.20 GHz CPU and 8.00 GB of RAM. The computational time cost is acceptable for noise artefact removal from the iPPG signal, which makes it suitable for real-time applications. It is also noted that our proposed system does not require extra hardware to stream the video, since the UAV contains software modules that facilitate communication through Wi-Fi and provide logging capability, which makes real-time processing more flexible and feasible. The potential estimation of other important vital signs, such as heart rate variability and blood oxygen saturation level (SpO2), is an important future work. The SpO2 can be extracted from the iPPG signal captured by a digital camera at two different wavelengths based on ac/dc component analysis, instead of the direct image intensity analysis of the iPPG signal used in this study.

Conclusions

For the first time, we have shown that video from a hovering UAV can be used to measure cardiorespiratory signals. We have used a combination of the CEEMDAN and CCA techniques to remove noise arising from illumination variations, subject movement and camera movement. Also, we have demonstrated that the heart and respiratory rates can be efficiently extracted by the proposed system both with and without the developed video magnification system. The experimental results obtained from 15 subjects in different scenarios showed that the estimated heart and respiratory rates were very close to the reference methods (finger pulse oximeter and Piezo respiratory belt transducer) with very low RMSE and ME. Furthermore, the proposed system significantly outperformed state-of-the-art methods such as ICA and PCA. Therefore, the proposed system is a feasible solution for removing the noise effects resulting from illumination variations, subject movement and camera movement from iPPG signals, and may be a promising approach for realistic non-contact vital sign measurement applications. Future work will consider techniques that may be more robust in the presence of UAV and target locomotion and changes in pose.
iPPG: imaging photoplethysmography
UAV: unmanned aerial vehicle
CEEMDAN: complete ensemble empirical mode decomposition with adaptive noise
CCA: canonical correlation analysis
ICA: independent component analysis
PCA: principal component analysis
FFT: fast Fourier transform
BSS: blind source separation
CC2: squared correlation coefficient
RMSE: root mean squared error
ME: mean error rate

References

Zhao F, Li M, Qian Y, Tsien JZ. Remote measurements of heart and respiration rates for telemedicine. PLoS ONE. 2013;8(10):e71384.
Kumar M, Veeraraghavan A, Sabharval A. DistancePPG: robust non-contact vital signs monitoring using a camera. Biomed Opt Express. 2015;6(5):1565–88.
Kranjec J, Beguš S, Geršak G, Drnovšek J. Non-contact heart rate and heart rate variability measurements: a review. Biomed Signal Process Control. 2014;13:102–12.
De Haan G, Van Leest A. Improved motion robustness of remote-PPG by using the blood volume pulse signature. Physiol Meas. 2014;35(9):1913.
Butler M, Crowe J, Hayes-Gill B, Rodmell P. Motion limitations of non-contact photoplethysmography due to the optical and topological properties of skin. Physiol Meas. 2016;37(5):N27.
Al-Naji A, Gibson K, Lee S-H, Chahl J. Real time apnoea monitoring of children using the Microsoft Kinect sensor: a pilot study. Sensors. 2017;17(2):286.
Takano C, Ohta Y. Heart rate measurement based on a time-lapse image. Med Eng Phys. 2007;29(8):853–7.
Verkruysse W, Svaasand LO, Nelson JS. Remote plethysmographic imaging using ambient light. Opt Express. 2008;16(26):21434–45.
Poh M-Z, McDuff DJ, Picard RW. Advancements in noncontact, multiparameter physiological measurements using a webcam. IEEE Trans Biomed Eng. 2011;58(1):7–11.
Poh M-Z, McDuff D, Picard RW. Non-contact, automated cardiac pulse measurements using video imaging and blind source separation. Opt Express. 2010;18(10):10762–74.
Lewandowska M, Rumiński J, Kocejko T, Nowak J. Measuring pulse rate with a webcam—a non-contact method for evaluating cardiac activity. In: 2011 federated conference on computer science and information systems (FedCSIS). New York: IEEE; 2011. p. 405–410.
Kwon S, Kim H, Park KS. Validation of heart rate extraction using video imaging on a built-in camera system of a smartphone. In: 2012 annual international conference of the IEEE engineering in medicine and biology society. New York: IEEE; 2012. p. 2174–2177.
Shan L, Yu M. Video-based heart rate measurement using head motion tracking and ICA. In: 2013 6th International Congress on Image and Signal Processing (CISP). New York: IEEE; 2013. p. 160–164.
Irani R, Nasrollahi K, Moeslund TB. Improved pulse detection from head motions using DCT. In: 2014 international conference on computer vision theory and applications (VISAPP), vol 3; 2014. p. 118–124.
Balakrishnan G, Durand F, Guttag J. Detecting pulse from head motions in video. In: 2013 IEEE conference on computer vision and pattern recognition (CVPR). New York: IEEE; 2013. p. 3430–3437.
Al-Naji A, Chahl J. Contactless cardiac activity detection based on head motion magnification. Int J Image Graph. 2017;17(01):1–18.
De Haan G, Jeanne V. Robust pulse rate from chrominance-based rPPG. IEEE Trans Biomed Eng. 2013;60(10):2878–86.
Li X, Chen J, Zhao G, Pietikainen M. Remote heart rate measurement from face videos under realistic situations. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2014. p. 4264–4271.
Simon H. Adaptive filter theory. Prentice Hall. 2002;2:478–81.
Asthana A, Zafeiriou S, Cheng S, Pantic M.
Robust discriminative response map fitting with constrained local models. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2013. p. 3444–3451.
Tomasi C, Kanade T. Detection and tracking of point features. Pittsburgh: School of Computer Science, Carnegie Mellon University; 1991.
Feng L, Po L-M, Xu X, Li Y, Ma R. Motion-resistant remote imaging photoplethysmography based on the optical properties of skin. IEEE Trans Circuits Syst Video Technol. 2015;25(5):879–91.
Chen D-Y, Wang J-J, Lin K-Y, Chang H-H, Wu H-K, Chen Y-S, Lee S-Y. Image sensor-based heart rate evaluation from face reflectance using Hilbert–Huang transform. IEEE Sens J. 2015;15(1):618–27.
Cheng J, Chen X, Xu L, Wang ZJ. Illumination variation-resistant video-based heart rate measurement using joint blind source separation and ensemble empirical mode decomposition. IEEE J Biomed Health Inform. 2016. doi:10.1109/JBHI.2016.2615472.
Rossmax Pulse Oximeter. https://www.medshop.com.au/products/rossmax-hand-held-pulse-oximeter-sa210.
ADInstruments. MLT1132 Piezo respiratory belt transducer. http://m-cdn.adinstruments.com/product-data-cards/MLT1132-DCW-15A.pdf.
Al-Naji A, Lee S-H, Chahl J. Quality index evaluation of videos based on fuzzy interface system. IET Image Proc. 2017;11(5):292–300.
Chen JH, Tang IL, Chang CH. Enhancing the detection rate of inclined faces. In: Trustcom/BigDataSE/ISPA, 2015 IEEE; Helsinki, Finland. New York: IEEE; 2015. p. 143–146.
Viola P, Jones M. Rapid object detection using a boosted cascade of simple features. In: Proceedings of the 2001 IEEE computer society conference on computer vision and pattern recognition (CVPR 2001), vol. 1. New York: IEEE; 2001. p. I-511–I-518.
Colominas MA, Schlotthauer G, Torres ME. Improved complete ensemble EMD: a suitable tool for biomedical signal processing. Biomed Signal Process Control. 2014;14:19–29.
Huang NE, Shen Z, Long SR, Wu MC, Shih HH, Zheng Q, Yen N-C, Tung CC, Liu HH. The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis. In: Proceedings of the Royal Society of London A: mathematical, physical and engineering sciences. The Royal Society; 1998. p. 903–995.
Wu Z, Huang NE. Ensemble empirical mode decomposition: a noise-assisted data analysis method. Adv Adapt Data Anal. 2009;1(01):1–41.
Borga M, Knutsson H. A canonical correlation approach to blind source separation. Report LiU-IMT-EX-0062, Department of Biomedical Engineering, Linköping University; 2001.
Li Y-O, Adali T, Wang W, Calhoun VD. Joint blind source separation by multiset canonical correlation analysis. IEEE Trans Signal Process. 2009;57(10):3918–29.
Liu W, Mandic DP, Cichocki A. Analysis and online realization of the CCA approach for blind source separation. IEEE Trans Neural Netw. 2007;18(5):1505–10.
Chen X, Liu A, Chiang J, Wang ZJ, McKeown MJ, Ward RK. Removing muscle artifacts from EEG data: multichannel or single-channel techniques? IEEE Sens J. 2016;16(7):1986–97.
Sweeney KT, McLoone SF, Ward TE. The use of ensemble empirical mode decomposition with canonical correlation analysis as a novel artifact removal technique. IEEE Trans Biomed Eng. 2013;60(1):97–105.
Zou L, Chen X, Servati A, Soltanian S, Servati P, Wang ZJ. A blind source separation framework for monitoring heart beat rate using nanofiber-based strain sensors. IEEE Sens J. 2016;16(3):762–72.
Jarman KH, Daly DS, Anderson KK, Wahl KL. A new approach to automated peak detection. Chemom Intell Lab Syst.
2003;69(1):61–76. Bland JM, Altman DG. Statistical methods for assessing agreement between two methods of clinical measurement. Int J Nurs Stud. 2010;47(8):931–6. Ali Al-Naji conceived the algorithm, performed the experiments, and wrote the draft manuscript. Asanka G. Perera provided technical support and data collection. Javaan Chahl supervised the work and contributed with valuable discussions and scientific advice. All authors read and approved the final manuscript. This research was partly supported by the Defence Science and Technology Organisation's Tyche program on trusted autonomy. School of Engineering, University of South Australia, Mawson Lakes, SA, 5095, Australia Ali Al-Naji, Asanka G. Perera & Javaan Chahl Electrical Engineering Technical College, Middle Technical University, Baghdad, Iraq Ali Al-Naji Joint and Operations Analysis Division, Defence Science and Technology Group, Melbourne, VIC, 3207, Australia Javaan Chahl Asanka G. Perera Correspondence to Ali Al-Naji. Al-Naji, A., Perera, A.G. & Chahl, J. Remote monitoring of cardiorespiratory signals from a hovering unmanned aerial vehicle. BioMed Eng OnLine 16, 101 (2017). https://doi.org/10.1186/s12938-017-0395-y Video magnification technique
Stochastic modeling and simulation of reaction-diffusion system with Hill function dynamics
Minghan Chen, Fei Li, Shuo Wang & Young Cao
Stochastic simulation of reaction-diffusion systems presents great challenges for spatiotemporal biological modeling and simulation. One widely used framework for stochastic simulation of reaction-diffusion systems is the reaction-diffusion master equation (RDME). Previous studies have discovered that for the RDME, when the discretization size approaches zero, the reaction time for bimolecular reactions in high-dimensional domains tends to infinity. In this paper, we demonstrate that in the 1D domain, highly nonlinear reaction dynamics given by a Hill function may also change dramatically when the discretization size falls below a critical value. Moreover, we discuss methods to avoid this problem: smoothing over space, fixed-length smoothing over space, and a hybrid method. Our analysis reveals that the switch-like Hill dynamics reduces to a linear function of the discretization size when the discretization size is small enough. The three proposed methods can correctly (to within a certain precision) simulate Hill function dynamics in the microscopic RDME system.
Cell reproduction requires elaborate spatial and temporal coordination of crucial events, such as DNA replication, chromosome segregation, and cytokinesis. In cells, protein species are well organized and regulated throughout their life cycles. Theoretical biologists have been using classic chemical reaction rate laws with deterministic ordinary differential equations (ODEs) and partial differential equations (PDEs) to model molecular concentration dynamics in spatiotemporal biological processes. However, wet-lab experiments at single-cell resolution demonstrate that biological data present considerable variation from cell to cell. The variation arises from the fact that cells are so small that there exist only one or two copies of genes, tens of mRNA molecules, and hundreds or thousands of protein molecules [1-3]. At this scale, the traditional way of modeling molecule "concentration" is not applicable. Noise in molecule populations cannot be neglected, as noise may play a significant role in the overall dynamics inside a cell. Therefore, to accurately model the cell cycle mechanism, discrete and stochastic modeling and simulation should be applied. A convenient strategy to build a stochastic biochemical model is to break a deterministic model into a list of chemical reactions and simulate them with Gillespie's stochastic simulation algorithm (SSA) [4, 5]. One of the major difficulties in this conversion strategy lies in the propensity calculation of reactions. Gillespie's SSA is well defined for mass action rate laws. However, in many biochemical models, in addition to mass action rate laws, other phenomenological reaction rate laws are often used. For example, the Michaelis-Menten equation [6] and Hill functions [7] are widely used in biological models to model the fast response to signals in regulatory control systems. Although theoretically these phenomenological rate laws may be generated from elementary reactions with mass action rate laws, in practice the detailed mechanisms behind these phenomenological rate laws are not well known and may not be very important.
Stochastic modeling and simulation with these phenomenological rate laws are sometimes inevitable. In recent years, stochastic modeling and simulation for spatiotemporal biological systems, particularly reaction-diffusion systems, have captured more and more attention. Several algorithms and tools [8-11] to model and simulate reaction-diffusion systems have been proposed. These methods can be categorized into two theoretical frameworks: the spatially and temporally continuous Smoluchowski modeling framework [12] and the compartment-based modeling framework, formulated as the spatially discretized reaction-diffusion master equation (RDME) [13, 14]. The Smoluchowski framework [12, 15, 16] stores the exact position of each molecule and is mathematically fundamental, whereas the RDME is coarse-grained and better suited for large-scale simulations [17]. In the RDME, the spatial domain is discretized into small compartments. Within each compartment, molecules are considered "well-stirred". Under the RDME scheme, diffusion is modeled as a continuous-time random walk on mesh compartments, while reactions fire only among molecules in the same compartment. The stochastic dynamics of the chemical reactions in each compartment is governed by the chemical master equation (CME) [18, 19]. Yet the CME is computationally impossible to solve for most practical problems. Stochastic simulation methods are therefore applied to generate realizations of system trajectories. It has been well established that the discretization compartment size for the RDME should be smaller than the mean free path of the reactions for the compartment to be considered well-stirred [20]. In addition, it has been proved that the RDME of bimolecular reactions in a 3D domain becomes incorrect and yields unphysical results when the discretization size approaches the microscopic scale [21-23]. In this paper, we focus on the stochastic modeling of reaction-diffusion systems with reaction rate laws given by Hill functions. In the Results section, we present our numerical analysis on a toy model of a reaction-diffusion system with Hill function dynamics. We will show that the RDME framework of the Hill function dynamics has serious simulation defects when the discretization size approaches the microscopic limit: when the discretization size is small enough, the typical switching pattern of Hill dynamics becomes linear in the input signal (and the discretization size). Later, we propose potential solutions for the discretization of reaction-diffusion systems with Hill function rate laws. Finally, we conclude this paper with a discussion on the RDME for general nonlinear functions and the hybrid method.
Caulobacter modeling
Caulobacter crescentus has captured great interest in the study of asymmetric cell division. When a Caulobacter cell divides, it produces two functionally and morphologically distinct daughter cells. The asymmetric cell division of Caulobacter crescentus requires elaborate temporal and spatial regulation [24-27]. In the literature [28-30], four essential "master regulators" of the Caulobacter cell cycle, DnaA, GcrA, CtrA and CcrM, have been identified. These master transcription regulators determine the dynamics of around 200 genes. They oscillate temporally to drive the dynamics of the cell cycle. Among them, the molecular mechanisms governing CtrA functions have been well studied. The simulation we are concerned with in this paper is also related to this CtrA module, so we give a brief introduction to it.
In swarmer cells, a two-component phosphorelay system (with both CckA and ChpT) phosphorylates CtrA. The chromosomal origin of replication (Cori) is then bound by the phosphorylated CtrA (CtrAp) to inhibit the initiation of chromosome replication [31]. Later, during the swarmer-to-stalked transition period, CtrAp gets dephosphorylated and degraded, allowing the initiation of chromosome replication again. Thus CtrA has an important impact on chromosome replication in our model, and should be well regulated. The regulation of CtrA is achieved by the histidine kinase CckA through the following pathway. An ATP-dependent protease, ClpXP, degrades CtrA [32, 33] and is localized to the stalk pole by CpdR. As the nascent stalked cell progresses through the cell cycle, CpdR is phosphorylated by CckA/ChpT, losing its polar localization, and consequently losing its ability to recruit the ClpXP protease for CtrA degradation. In addition, CtrA is reactivated through CckA/ChpT phosphorylation [34]. Moreover, the regulatory network of the histidine kinase CckA is influenced by a non-canonical histidine kinase, DivL [35]. DivL promotes CckA kinase activity, which then phosphorylates and activates CtrA in the swarmer cell. During the swarmer-to-stalked transition period, DivL activity is down-regulated, thereby inhibiting CckA kinase activity. As a result, dephosphorylation and degradation of CtrA trigger the initiation of chromosome replication. In order to study the regulatory network in Caulobacter crescentus, Subramanian et al. [26, 27] developed a deterministic model with six major regulatory proteins. The deterministic model provides robust switching between swarmer and stalked states. Figure 1 (left) demonstrates the total population change during the Caulobacter crescentus cell cycle with this deterministic model. In the swarmer stage (from t = 0 to 30 min), CtrA is phosphorylated at a high population level, which inhibits the initiation of chromosome replication. During the swarmer-to-stalked transition period (from t = 30 to 50 min), the CtrAp population quickly drops to a low level, allowing the consequent initiation of chromosome replication in the stalked stage.
Figure 1. The population oscillation of CtrAp during the Caulobacter crescentus cell cycle. The left panel shows the simulation result of the deterministic model and the right panel shows the stochastic simulation result. In the swarmer stage (t = 0 to 30 min), CtrA is phosphorylated and at a high population level, which inhibits the initiation of chromosome replication. During the swarmer-to-stalked transition (t = 30 to 50 min), the CtrAp population quickly switches to a low state, allowing the consequent initiation of chromosome replication in the stalked stage.
In stochastic simulation of the spatiotemporal model of this regulatory network, the switch of the phosphorylated CtrA (CtrAp) population from a high level in the swarmer stage to a low level in the stalked stage is not as sharp as expected, as shown in Fig. 1 (right). On the other hand, the DivL population level from the stochastic simulation seems similar to that from the deterministic simulation. A simple analysis suggests that the Hill function dynamics, which models the up-regulation of CckA kinase activity by DivL, might be the culprit. Further investigation leads to the discovery of the Hill function limitation at small discretization sizes, as analyzed in the next section.
Reaction diffusion master equation
Before we plunge into Hill functions in reaction-diffusion systems, we first briefly review mathematical modeling and simulation methods for spatially inhomogeneous stochastic systems. The dynamics of a spatially inhomogeneous stochastic system is considered to be governed by the reaction-diffusion master equation (RDME), developed in an early work of Gardiner [13]. The RDME framework partitions the spatial domain into small compartments, such that molecules within each compartment can be considered well-stirred. Assume a biochemical system of N species {S_1, S_2, ..., S_N} and M reactions within a spatial domain Ω, which is partitioned into K grids V_k, k = 1, 2, ..., K. For simplicity, we assume that the space Ω is one-dimensional (1D). Each species population, as well as each reaction in the system, has a local copy in each compartment. The state of the reaction-diffusion system at any time t is represented by the state vector X(t) = {X_{1,1}(t), X_{1,2}(t), ..., X_{1,K}(t), ..., X_{n,k}(t), ..., X_{N,K}(t)}, where X_{n,k}(t) denotes the molecule population of species S_n in the grid V_k at time t. Reactions in each compartment are governed by the chemical master equation (CME), while diffusion is modeled as a random walk across neighboring compartments. Each reaction channel R_j in any compartment k is characterized by the propensity function a_{j,k} and the state change vector ν_j ≡ (ν_{1j}, ν_{2j}, ..., ν_{Nj}). The dynamics of the diffusion of species S_i from compartment V_k to V_j is similarly formulated by the diffusion propensity function d_{i,k,j} and the diffusion state change vector μ_{k,j}. d_{i,k,j}(x) dt gives the probability that, given X_{i,k}(t) = x, one molecule of species S_i at grid V_k diffuses into grid V_j in the next infinitesimal time interval [t, t + dt). If j = k ± 1, then d_{i,k,j}(x) = (D/h²) x, where D is the diffusion rate coefficient and h is the characteristic length, also called the discretization size, of a grid; otherwise d_{i,k,j} = 0. The state change vector μ_{k,j} is a vector of length K with −1 in the k-th position, 1 in the j-th position and 0 everywhere else. With the reaction-diffusion propensity functions and state change vectors, the RDME completely depicts the dynamics of the system:
$$
\begin{aligned}
\frac{\partial P(\mathbf{x},t|\mathbf{x_0}, t_0)}{\partial t}
&= \sum_{k=1}^{K} \sum_{j=1}^{M} \left( a_{j,k}(\mathbf{x}-\nu_{j,k})\, P(\mathbf{x}-\nu_{j,k}, t|\mathbf{x_0}, t_0) - a_{j,k}(\mathbf{x})\, P(\mathbf{x}, t|\mathbf{x_0}, t_0) \right) \\
&\quad + \sum_{i=1}^{N}\sum_{k=1}^{K}\sum_{j=1}^{K} \left( d_{i,k,j}(x_{i,k}+1)\, P(x_{1,1},\ldots,x_{i,k}+1,\ldots,x_{i,j}-1,\ldots,x_{N,K}, t|\mathbf{x_0}, t_0) - d_{i,k,j}(x_{i,k})\, P(\mathbf{x},t|\mathbf{x_0}, t_0) \right),
\end{aligned}
$$
where P(x, t|x_0, t_0) denotes the probability that the system state X(t) = x, given that X(t_0) = x_0. The RDME is a set of ODEs with one equation for every possible state. It is both theoretically and computationally intractable to solve the RDME for practical biochemical systems due to the huge number of possible combinations of states. Instead of solving the RDME for the time evolution of the probabilities, we can construct numerical realizations of X(t). A popular method to construct the trajectories of a reaction-diffusion system is to simulate each diffusive jumping and chemical reaction event explicitly.
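To make the event-driven picture concrete, the following is a minimal Python sketch of Gillespie's direct method applied to the joint set of reaction and diffusion channels on a 1D mesh. It is our own illustration, not code from the paper; the function names and the reflecting-boundary choice are assumptions. Note that the per-direction jump rate D/h² matches the total two-sided rate 2D/h² used later for the interior bins of the toy model.

import numpy as np

def rdme_ssa(x0, t_end, prop_fns, stoich, D, h, rng):
    """Direct-method SSA for a 1D RDME mesh with reflecting boundaries.

    x0       : (N, K) array, initial copy numbers (species x compartments)
    prop_fns : list of M functions; prop_fns[j](x[:, k]) returns the
               propensity of reaction channel j in a single compartment
    stoich   : (M, N) array, net species change of each reaction channel
    D        : (N,) array of diffusion coefficients
    h        : compartment size
    """
    x = x0.astype(float).copy()
    N, K = x.shape
    t = 0.0
    while t < t_end:
        a_rxn = np.array([[f(x[:, k]) for k in range(K)] for f in prop_fns])
        d = (D[:, None] / h**2) * x            # per-direction jump propensities
        a_left, a_right = d[:, 1:], d[:, :-1]  # jumps k -> k-1 and k -> k+1
        flat = np.concatenate([a_rxn.ravel(), a_left.ravel(), a_right.ravel()])
        a0 = flat.sum()
        if a0 == 0.0:
            break                              # nothing can fire any more
        t += rng.exponential(1.0 / a0)         # time to the next event
        e = rng.choice(flat.size, p=flat / a0) # which event fires
        if e < a_rxn.size:                     # chemical reaction in bin k
            j, k = divmod(e, K)
            x[:, k] += stoich[j]
        elif e < a_rxn.size + a_left.size:     # molecule jumps from bin k+1 to k
            i, k = divmod(e - a_rxn.size, K - 1)
            x[i, k + 1] -= 1
            x[i, k] += 1
        else:                                  # molecule jumps from bin k to k+1
            i, k = divmod(e - a_rxn.size - a_left.size, K - 1)
            x[i, k] -= 1
            x[i, k + 1] += 1
    return x, t

Each step recomputes and samples over all channels, which costs O(MK + NK) per event; production codes such as next-subvolume implementations organize the channels more cleverly, but the sampling logic is the same.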
With enough trajectory realizations, we can derive the distribution of each state vector at different times. The RDME model has been used as an approximation of the Smoluchowski framework at the mesoscopic scale. Furthermore, researchers have discovered that in the microscopic limit, bimolecular reactions may eventually be lost when the grid size becomes infinitely small in a three-dimensional domain [21, 23]. The RDME framework requires that the two reactant molecules of a bimolecular reaction be in the same compartment in order to fire a reaction. Intuitively, with more discrete compartments it is less likely for the two molecules to encounter each other in the same compartment in a high-dimensional domain. In order to model the reaction-diffusion system with the RDME in the microscopic limit, Erban and Chapman [22] derived a mesh-dependent reaction propensity correction formula for bimolecular reactions when the discretization size h is larger than a critical size h_crit. This reaction propensity correction formula fails when the discretization size h is smaller than this critical value. Recently, Isaacson [36] proposed a convergent RDME framework (cRDME). In the cRDME framework the diffusion is modeled exactly as in the RDME, while a bimolecular reaction occurs with a nonzero propensity as long as the distance between the two reactant molecules is less than the reaction radius defined in the Smoluchowski framework. In conclusion, the discretization size for the RDME framework should be small enough to avoid discretization error. Yet when the mesh size is less than a critical value, the RDME may become inaccurate due to the loss of bimolecular reactions in high-dimensional domains. In this paper we demonstrate that the discretization size in space also has a great influence on Hill function dynamics in reaction-diffusion systems. The switch-like Hill dynamics breaks even in a 1D domain when the discretization size is small.
Hill function
The Hill function [7], as well as the Michaelis-Menten function [6], is widely used in enzyme kinetics modeling. In molecular biology, enzymes catalyze biochemical substrates into products while remaining unchanged. The enzyme kinetics reactions are usually formulated as
$$ E + S \underset{k_{-1}}{\overset{k_1}{\rightleftharpoons}} ES \overset{k_2}{\to} E + P $$
Leonor Michaelis and Maud Leonora Menten proposed the "quasi-steady state" assumption and formulated the reaction rate equation for the enzyme kinetics, which is mostly referred to as the "Michaelis-Menten" equation. With the conservation law and the quasi-steady-state assumption, the Michaelis-Menten equation is given as
$$ \frac{d[P]}{dt} = V_{max} \frac{[S]}{K_{M} + [S]}, $$
with V_{max} = k_2 [E]_0 being the maximum reaction rate and K_M = (k_{-1} + k_2)/k_1 being the Michaelis constant. Sometimes one substrate molecule has several enzyme binding sites, and multiple bindings with enzymes (cooperative binding) are required to activate the substrate:
$$ S + nE \underset{k_{-1}}{\overset{k_1}{\rightleftharpoons}} SE_n \overset{k_2}{\to} nE + P $$
In real biological models, the binding of the n enzyme molecules to a substrate does not take place at once but in a succession of steps.
Using the quasi-steady-state assumption and conservation laws, the Hill function that formulates the reaction dynamics is given as
$$ \frac{d[P]}{dt} = V_{max} \frac{[E]^{n}}{K_{m}^{n} + [E]^{n}}, $$
with V_{max} as the maximum reaction rate, K_m as the Michaelis constant, and n as the Hill coefficient. The Hill function is widely used to model a "step-regulated" reaction as an activity switch. To simplify the analysis, a toy model of a reaction-diffusion system in one dimension is constructed. As demonstrated in Fig. 2, in the toy model an enzyme species E (typically a transcription factor) is constantly synthesized and degraded. The enzyme E further up-regulates the DNA expression of a product P. The synthesis rate of P is formulated as a Hill function.
Figure 2. A simple toy model of Hill function dynamics in a 1D domain. Enzyme E is constantly synthesized and up-regulates the synthesis of product P.
Assume a spatial domain of size L is equally partitioned into K compartments, each of size h = L/K. The reactions and reaction propensities in each compartment i are
$$ \begin{array}{rclrcl} \emptyset &\to& E_{i}, & a_{1} &=& k_{s} \cdot h;\\ E_{i} &\to& \emptyset, & a_{2} &=& k_{d} \cdot E_{i};\\ \emptyset &\xrightarrow{E_{i}}& P_{i}, & a_{3} &=& k_{syn}\cdot h\, \frac{E_{i}^{4}}{(K_{m}\cdot h)^{4} + E_{i}^{4}};\\ P_{i} &\to& \emptyset, & a_{4} &=& k_{deg} \cdot P_{i};\\ E_{i} &\to& E_{i\pm 1}, & a_{5} &=& 2\frac{D_{E}}{h^{2}} E_{i};\\ P_{i} &\to& P_{i\pm 1}, & a_{6} &=& 2\frac{D_{P}}{h^{2}} P_{i};\\ \end{array} $$
The parameters k_s and k_d are the synthesis and degradation rates for the enzyme species E, and similarly k_syn and k_deg are those for the product P. K_m is the Michaelis constant in the Hill function. In the one-dimensional domain, the enzyme E is constantly synthesized and degraded. At the equilibrium state, the distribution of the total population of E is given by the Poisson distribution
$$ P_{E}(n) = \frac{\alpha^{n}}{n!} e^{-\alpha}, $$
where α = (k_s/k_d) L denotes the mean of the total number of enzyme E molecules in the domain. For an individual compartment (bin), consider the probability P_E^{(i)}(n) that bin i contains n molecules of enzyme E. At the equilibrium state, enzyme E is homogeneously distributed in the system, so the probability that a given molecule of E stays in a certain bin i is p = 1/K. The probability that, of all the E molecules in the domain, none is in bin i is
$$ \begin{array}{rcl} P_{E}^{(i)}(0) &=& P_{E}(0) + P_{E}(1)\left(1-\frac{1}{K}\right) + P_{E}(2)\left(1-\frac{1}{K}\right)^{2} + \ldots\\ &=& \sum\limits_{n=0}^{\infty} e^{-\alpha} \frac{\alpha^{n}}{n!} \left(1-\frac{1}{K}\right)^{n}\\ &=& e^{-\alpha/K}. \end{array} $$
The other probability terms are not important in the analysis. With the distribution of the enzyme molecular population, the mean reaction propensity for the synthesis of protein P in the i-th bin is
$$ \langle a^{(i)}_{syn} \rangle = k_{syn}\, h \sum_{n=0}^{\infty} \frac{n^{4}}{(K_{m}\cdot h)^{4}+ n^{4}}\, P_{E}^{(i)}(n). $$
Notice that when n = 0 the Hill function is zero, and when the discrete bin size h is small, the Hill function approaches one quickly for n ≥ 1. For example, when K_m · h ≤ 0.5 the Hill function satisfies n^4/((K_m·h)^4 + n^4) ≥ 0.94 for n ≥ 1; the short numerical check below illustrates this.
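As a concrete transcription of the propensity list (6), the Hill synthesis propensity a_3 can be written out and evaluated directly. The parameter values are those quoted in this section's figure captions; the function name is our own.

def hill_synthesis_propensity(E_i, h, k_syn=5.0, K_m=25.0, n=4):
    """Propensity a3 of product-P synthesis in one bin of size h (Eq. 6)."""
    return k_syn * h * E_i**n / ((K_m * h)**n + E_i**n)

# With K_m * h = 0.5 (e.g. K_m = 25 and h = 0.02) the switch is essentially
# all-or-nothing: the Hill factor is 0 at E_i = 0 and >= 0.94 already at E_i = 1.
for E_i in range(4):
    print(E_i, hill_synthesis_propensity(E_i, h=0.02) / (5.0 * 0.02))
# prints approximately 0.0, 0.941, 0.996, 0.999 for E_i = 0..3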
Therefore, upper and lower bounds for the product P synthesis propensity, when K_m · h ≤ 0.5, are
$$ 0.94\, k_{syn}\cdot h \sum_{n=1}^{\infty} P_{E}^{(i)}(n) \le \langle a^{(i)}_{syn} \rangle \le k_{syn}\cdot h \sum_{n=1}^{\infty} P_{E}^{(i)}(n). $$
Hence, when the discretization size h is small enough, the propensity for the product P synthesis reaction can be approximated as
$$ \begin{array}{rcl} \langle a_{syn}^{(i)} \rangle &\approx& k_{syn}\cdot h \cdot \sum\limits_{n=1}^{\infty} P_{E}^{(i)}(n)\\ &=& k_{syn} \cdot h \cdot \left(1 - P_{E}^{(i)}(0)\right)\\ &=& k_{syn} \cdot h \cdot \left(1 - e^{-\alpha/K}\right). \end{array} $$
When the discretization size h is small and K is large, the mean reaction propensity can be further approximated as
$$ \langle a_{syn}^{(i)} \rangle \approx k_{syn} \cdot h \cdot \alpha / K. $$
Notice that α/K is the mean population of enzyme E in the i-th bin. The Hill function of the product P synthesis is now reduced to a linear function of the enzyme E population in the i-th bin. Furthermore, from (12) the mean population of product P in bin i is
$$ \langle P^{(i)} \rangle = \frac{k_{syn}\cdot h}{k_{deg}}\frac{\alpha}{K}, $$
and the total product P population over all K bins is
$$ \langle P \rangle = \frac{k_{syn} \cdot L}{k_{deg}}\frac{k_{s}\cdot L}{k_{d}}\frac{1}{K} = \frac{k_{syn}}{k_{deg}}\cdot \alpha \cdot h. $$
Equation 14 shows that the total population of product P is a linear function of α, the mean population of E, and of h = L/K, the discretization size. With finer discretization, less product P is produced. Figure 3 shows the histograms and the mean values of the product P population with different discretization sizes. The histograms show that with finer discretization, the population histograms shift further to the left.
Figure 3. The histogram (left) and mean (right) population of product P with different discretization sizes. Parameters: D_E = 1.0, k_s = 2.5, k_d = 0.1, k_syn = 5.0, k_deg = 0.05, system size L = 1.0. For the histogram figure, K_m = 25.0. The log-log plot shows the mean total product population under different discretizations and different parameter sets.
The log-log plot (Fig. 3, right) shows that when the discretization size is small enough, the total product P population is a linear function of the discretization size. The slope of the log-log plot is about 1.0 at small discretization size h, regardless of K_m. Moreover, simulation results show that when the mean enzyme E population is less than the constant K_m in the Hill function (K_m > α), the population of product P increases slightly before the Hill function dynamics breaks at small discretization sizes. Note that the Hill function is convex in the enzyme E population when the enzyme E population is smaller than the Michaelis constant K_m, so the mean of the Hill term exceeds the Hill term of the mean; it is therefore reasonable that the product P population in this reaction-diffusion model increases slightly when the Michaelis constant K_m is larger than the mean enzyme E population α. The numerical analysis above makes two approximations:
$$ \left\{\begin{array}{l} \frac{n^{4}}{(K_{m}\cdot h)^{4} + n^{4}} \approx 1, \quad \text{for } n \ge 1;\\ e^{-\alpha/K} \approx 1 - \alpha/K. \end{array}\right. $$
Assuming an error tolerance of 5%, the two approximations can be simplified to
$$ \left\{\begin{array}{l} K_{m}\cdot h < 0.5,\\ \alpha/K < 1/3. \end{array}\right. $$
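These approximations can be checked without any stochastic simulation: each bin's enzyme count is Poisson with mean α/K, so the exact mean propensity (9) can be summed term by term and compared with the linear approximation (12). A minimal sketch (our illustration; scipy is assumed to be available):

import numpy as np
from scipy.stats import poisson

k_s, k_d, k_syn, L, K_m = 2.5, 0.1, 5.0, 1.0, 25.0
alpha = k_s / k_d * L                 # mean total enzyme population (= 25)

def mean_syn_propensity(K, n_max=200):
    """Exact per-bin expectation of Eq. (9) with E_i ~ Poisson(alpha/K)."""
    h = L / K
    n = np.arange(1, n_max)           # the n = 0 term vanishes
    hill = n**4 / ((K_m * h)**4 + n**4)
    return k_syn * h * np.sum(hill * poisson.pmf(n, alpha / K))

for K in [10, 100, 1000]:
    print(K, mean_syn_propensity(K), k_syn * (L / K) * alpha / K)
# The two columns converge as K grows (at K = 1000 they agree to about 1%),
# confirming the linear reduction (12) in the fine-discretization regime.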
Hence, from these two conditions, when the discretization bin number
$$ K > \max\{2LK_{m},\; 3\alpha\}, $$
the Hill dynamics reduce to a linear function. Equivalently, in order for the Hill function dynamics to work well, the discretization number K should be less than or equal to this threshold. However, the coarse discretization resulting from a small K leads to spatial error. Two potential solutions to this discretization dilemma are proposed next. From the previous analysis, the Hill dynamics in RDME systems reduces to a linear function due to the lack of intermediate states: the discrete population in each individual bin yields an integer value (0 or 1) for the Hill function. Thus a natural solution is to generate intermediate states by a smoothing technique that averages the population over neighboring bins when calculating the reaction propensity. To model an RDME system in high dimensions with fine discretization, previous studies [21] have suggested relaxing the same-compartment reaction assumption and allowing reactions within neighboring compartments. The next subsection shows that allowing reactions within neighboring compartments is equivalent to smoothing over neighboring compartments.
Smoothing over neighboring bins
A natural technique that bridges the discrete and continuous models is to smooth the spatial population by taking the average over neighboring bins. Consider first smoothing the enzyme E population within the m neighboring bins (including the bin itself) when calculating the reaction propensity. Following the previous analysis, the reaction propensity for the synthesis of product P in the i-th bin is
$$ \begin{array}{rcl} \langle \hat{a}_{syn}^{(i)} \rangle &=& k_{syn}\cdot h \sum\limits_{n = 0}^{\infty} \frac{(n/m)^{4}}{(K_{m}\cdot h)^{4} + (n/m)^{4}}\, P_{E}^{(i)}(n; m)\\ &=& k_{syn}\cdot h \sum\limits_{n = 0}^{\infty} \frac{n^{4}}{(m\cdot K_{m}\cdot h)^{4} + n^{4}}\, P_{E}^{(i)}(n; m), \end{array} $$
where P_E^{(i)}(n; m) denotes the probability that the m neighboring bins of the i-th bin have a total enzyme E population of n. The interpretation of this equation is that the synthesis reaction in the i-th bin interacts with the m neighboring bins, and the propensity is calculated based on the total enzyme E population of all these bins. By probability theory,
$$ P_{E}^{(i)}(0; m) = e^{-\alpha m/K}. $$
As before, only the term P_E^{(i)}(0; m) is important. In Eq. (18), for any fixed integer m ≥ 1, there exists an h > 0 such that m · K_m · h < 0.5 and the Hill function is still approximately one. With such a discretization size h, the product P synthesis propensity can be approximated as
$$ \begin{array}{rcl} \langle \hat{a}^{(i)}_{syn} \rangle &\approx& k_{syn}\cdot h \sum\limits_{n = 1}^{\infty} P_{E}^{(i)}(n; m)\\ &=& k_{syn}\cdot h \left(1 - P_{E}^{(i)}(0; m)\right)\\ &=& k_{syn}\cdot h \left(1 - e^{-\alpha m/K}\right)\\ &\approx& k_{syn}\cdot h \cdot \alpha \cdot m/K. \end{array} $$
Again, with a fixed smoothing bin number m, the synthesis reaction propensity becomes linear in the mean enzyme E population αm/K of the m bins, and the mean population of product P in the system is
$$ \langle P \rangle = \frac{k_{syn} \cdot L}{k_{deg}}\frac{k_{s}\cdot L}{k_{d}}\frac{m}{K}, $$
which is linear in m/K and the mean total enzyme E population α. This linear regime is reached with an h such that
$$ \begin{cases} m\cdot K_{m}\cdot h < 0.5,\\ m\cdot \alpha/K < 0.33. \end{cases} $$
A sketch of the smoothed propensity follows.
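The sketch below is our own illustration of the smoothed propensity (18); the placement of the window and its truncation at the domain ends are our choices, since the text does not fix them.

import numpy as np

def smoothed_hill_propensity(E, i, m, h, k_syn=5.0, K_m=25.0, n=4):
    """Synthesis propensity in bin i, computed from the total enzyme
    population of the m-bin neighbourhood of i (Eq. 18)."""
    K = len(E)
    half = m // 2
    lo, hi = max(0, i - half), min(K, i - half + m)
    n_tot = E[lo:hi].sum()            # total enzyme count over the window
    # identical to evaluating the Hill term at the average n_tot / m:
    return k_syn * h * n_tot**n / ((m * K_m * h)**n + n_tot**n)

Setting m = 1 recovers the original per-bin propensity a_3.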
Figure 4 plots the mean population of product P in the toy model with the smoothing technique and m = 5. Numerical results show that smoothing over a fixed number m of compartments gives a good solution for a certain range of discretization sizes. However, there always exists a small enough critical discretization size h_crit such that the Hill function dynamics reduces to a linear function when the discretization size is smaller than this h_crit. Moreover, in the scenarios where the Michaelis constant K_m is larger than the mean enzyme E population α, fixed-length smoothing gives a result closer to that of the deterministic simulation when the discretization sizes are not too small.
Figure 4. The total population of product P with different discretization sizes. Parameters: system size L = 1.0, D_E = 1.0, k_s = 2.5, k_d = 0.1, k_syn = 5.0, k_deg = 0.05. For the left figure K_m = 25.0, while for the right figure K_m = 50.0.
Convergent Hill function dynamics in reaction-diffusion systems
The previous subsection demonstrates that a sufficiently small discretization size h will still break the Hill dynamics even with the strategy of smoothing over a fixed number of bins; thus the number of bins needs to vary with the discretization size. Inspired by the convergent-RDME framework [36], a remedy for the failure of Hill function dynamics in reaction-diffusion systems is to smooth the population over bins within a certain distance. From the analysis, a small smoothing length would cause the failure of the Hill function dynamics, and a large smoothing length would degrade the spatial accuracy of the model. Based on the criteria of failure for the Hill function dynamics with fixed m, Eq. (22), we can choose the smallest m that does not result in failure of the Hill function dynamics, i.e., an m such that neither of the two assumptions in the previous analysis is valid. This choice is
$$ m = \left\lceil \max\left\{\frac{0.5}{K_{m}\cdot h}, \frac{0.33\cdot L}{\alpha \cdot h}\right\} \right\rceil. $$
Following the terminology in the convergent-RDME framework [36], the "reaction radius" ρ of the Hill function dynamics is defined as ρ = m · h, where m is given in (23) (see the short sketch following Fig. 6). Figure 5 shows numerical results for the toy model in the reaction-diffusion system with different discretization sizes and with the convergent smoothing technique (m and h related by (23)). It is clear that the convergent smoothing technique gives very good simulation results for all h values. Applying the fixed-length smoothing technique to the DivL-CckA Hill function model in the Caulobacter crescentus cell cycle results in a sharp CtrAp population change during the swarmer-to-stalked transition. Figure 6 shows the CtrAp trajectories from the deterministic model and the stochastic model simulation results. The fixed-length smoothing technique yields more CtrAp in the swarmer stage and less CtrAp in the stalked stage, producing the expected sharp CtrAp population change during the swarmer-to-stalked transition.
Figure 6. Comparison of CtrAp between the deterministic model and the stochastic simulation results. Left: CtrAp population oscillation trajectory during the Caulobacter crescentus cell cycle. Right: the histogram of the CtrAp population in the swarmer cells (t = 30 min). For model parameters, please refer to [27].
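For reference, the choice (23) of m and the resulting reaction radius ρ = m · h can be tabulated with a few lines of Python (our illustration):

import math

def smoothing_bins(h, K_m, alpha, L):
    """Smallest smoothing width m that keeps the Hill switch intact, Eq. (23)."""
    return math.ceil(max(0.5 / (K_m * h), 0.33 * L / (alpha * h)))

for h in [0.05, 0.01, 0.001]:
    m = smoothing_bins(h, K_m=25.0, alpha=25.0, L=1.0)
    print(h, m, m * h)      # h, number of bins smoothed over, reaction radius rho
# As h shrinks, m grows so that rho approaches the fixed value 0.5 / K_m,
# which is what makes the smoothed dynamics convergent.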
Motivated by the misbehavior of the DivL-CckA dynamics in the stochastic simulation of the Caulobacter crescentus cell cycle model, a study of the Hill function dynamics in reaction-diffusion systems reveals that when the discretization size is small enough, the switch-like behavior of Hill function dynamics reduces to a linear function of the input signal and the discretization size. The proposed fixed-length smoothing method, which allows chemical reactions to occur with reactant molecules within a distance of fixed length (the "reaction radius" of the Hill function dynamics), appears to give a very good remedy to this problem. It is known that in high dimensions bimolecular reactions are lost with the RDME in the microscopic limit [21]. This work shows that one-dimensional Hill function dynamics in an RDME framework poses a similar challenge when the discretization size is small enough. The conjecture is that the problem lies in the RDME requirement that reactions fire only among reactant molecules in the same discrete compartment. Furthermore, this defect of the RDME at the microscopic limit is believed to be a common scenario for all highly nonlinear reaction dynamics. Theoretical biologists have developed many highly nonlinear reaction dynamics that need special attention when converted to stochastic models. Here we extend our analysis and discuss a general situation in the stochastic simulation of reaction-diffusion systems. Suppose that we have a species X, whose population is represented by the state variable x, and a particular reaction R:
$$ \emptyset \xrightarrow{X} P, $$
in which X serves as an enzyme to produce P and the propensity function is represented by f(x). Each X molecule can diffuse in a 1D domain of small length L with diffusion coefficient D. Suppose the 1D domain is partitioned into K bins, so that the discretization size is h = L/K. The system can then be represented as a chain reaction
$$ X_{1} \underset{d}{\overset{d}{\rightleftharpoons}} X_{2} \underset{d}{\overset{d}{\rightleftharpoons}} \cdots \underset{d}{\overset{d}{\rightleftharpoons}} X_{K}, $$
where d = D/h² is the jump rate corresponding to diffusion. The reaction R of interest can fire in any of the bins with propensity f(x_i). Assume that L is small enough that D/L² is very large and d ≫ Σ_{i=1}^{K} f(x_i) regardless of K. In that case, the chain reaction system (25) can be considered a virtual fast system and the slow-scale SSA [37] can be applied. As a result, if the total population of X is n, the mean value of x_i in each bin is given by
$$ \langle x_{i} \rangle = \frac{n}{K}. $$
Then, based on the theory of the slow-scale SSA, the propensity of the corresponding synthesis reaction (24) should be
$$ \langle f(x_{i}) \rangle = \sum_{j=0}^{\infty} f(j)\, P(x_{i} = j), $$
where P(·) is the probability under the distribution to which the virtual fast system (25) converges at stochastic partial equilibrium [37]. However, the propensity function converted directly from the deterministic model has the different form f(⟨x_i⟩). Note that for a nonlinear function, such as the Hill function or the Michaelis-Menten function,
$$ \langle f(x_{i}) \rangle \neq f\left(\langle x_{i} \rangle \right). $$
Equation (28) highlights the mismatch between the RDME framework and the deterministic model.
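The inequality (28) is easy to exhibit numerically. In the sketch below (our own illustration), the per-bin enzyme count is taken to be Poisson distributed, as in the toy-model analysis, standing in for the partial-equilibrium distribution P(·):

import numpy as np
from scipy.stats import poisson

def hill(x, Km=1.25, n=4):       # Km plays the role of K_m * h in a single bin
    return x**n / (Km**n + x**n)

lam = 1.25                       # mean enzyme count per bin, <x_i>
n = np.arange(200)
mean_of_f = np.sum(hill(n.astype(float)) * poisson.pmf(n, lam))  # <f(x_i)>, Eq. (27)
f_of_mean = hill(lam)                                            # f(<x_i>)
print(mean_of_f, f_of_mean)      # about 0.43 vs 0.5: the two clearly differ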
Hybrid method
In order to have a stochastic model that is consistent with its deterministic counterpart, the propensity function should take the form f(⟨x_i⟩). This motivates us to adopt the hybrid ODE/SSA method [38] and apply it to reaction-diffusion systems. The hybrid method is based on a simple idea: it was originally presented by Haseltine and Rawlings [38], and our implementation includes some modifications to make it fit better with the root-finding function used in LSODAR [39]. Consider a system of N species (denoted by {S_1, ..., S_N}) and M reactions (denoted by {R_1, ..., R_M}). For each reaction R_j, there is a propensity function a_j(x) and a state-change vector ν_j. We partition these M reactions into two subsets. The subset S_slow contains slow reactions, with indices 1 to M_S, and is simulated by the SSA. The subset S_fast contains fast reactions, with indices M_S + 1 to M, and is formulated and solved by ODEs. The simulation of these two subsets is combined as described below. Let τ be the jump interval of the next slow (stochastic) reaction, and μ be its reaction index. Set t = 0. The hybrid method simulates the system as follows:
1) Generate two uniform random numbers, r_1 and r_2, in U(0,1).
2) Solve the ODE system for S_fast and find the root τ of the integral equation
$$ \int^{t+\tau}_{t} a_{tot}(\mathbf{x},s)\,ds + \log(r_{1}) = 0, $$
where a_tot(x, t) is the sum of the propensities of all reactions in S_slow. Because x varies with t in the ODE system, a_tot(x, t) is a function of t as well.
3) Select μ as the smallest integer satisfying
$$ \sum^{\mu}_{i=1} a_{i}(\mathbf{x},t) > r_{2}\, a_{tot}(\mathbf{x},t), $$
and update x ← x + ν_μ.
4) Return to step 1) if the stopping condition is not reached.
Note that our implementation differs from Haseltine and Rawlings' original method in step 2). Suppose that the ODE system is given by
$$ \mathbf{x}' = f(\mathbf{x}). $$
We add an integration variable z and the following equation to the ODE system:
$$ z' = a_{tot}(\mathbf{x}), \quad z(t) = \log(r_{1}), $$
where we note that log(r_1) is negative and a_tot is always nonnegative. In the hybrid simulation, for each step we start from the current time t and numerically [39] integrate the original ODEs (31) together with the extra integral equation (32). The integration stops when z(t + τ) = 0. As a result, τ is the solution to (29). This procedure can be simulated numerically using standard ODE solvers combined with root-finding functions, such as LSODAR [39]. Note that since z is an integration variable, one may choose to omit it from the error control mechanism [40]. Adding this extra variable does not greatly affect the efficiency. We applied the hybrid method to the toy model (6). In our simulation, all diffusion events are partitioned into the fast system and solved by the ODE solver LSODAR, while chemical reactions are simulated by the SSA under the hybrid framework described above. We tested the cases K_m = 10, 25, 50; Figs. 7 and 8 show the corresponding numerical results. In all three cases, the mean population remains flat even when the bin size decreases to the order of 10⁻³. In Fig. 8, the distribution of product P centers around seven molecules under the different discretization sizes, while the results from the SSA shift to the left as the discretization size decreases. A sketch of one hybrid step is given below.
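The following is a compact transcription of steps 1)-4); it is our own illustration, with scipy's solve_ivp event handling standing in for LSODAR's root finder, and the helper names are assumptions.

import numpy as np
from scipy.integrate import solve_ivp

def hybrid_step(t, x, rhs_fast, slow_props, slow_stoich, rng, t_max):
    """Advance the hybrid ODE/SSA system to the next slow-reaction firing.

    rhs_fast    : f(t, x) -> dx/dt for the fast (ODE) subsystem
    slow_props  : function x -> array of slow-reaction propensities a_i(x)
    slow_stoich : (M_S, N) state-change vectors of the slow reactions
    """
    r1, r2 = rng.random(2)

    def rhs(t, y):                    # state augmented with z (Eq. 32)
        return np.append(rhs_fast(t, y[:-1]), slow_props(y[:-1]).sum())

    def fired(t, y):                  # root of Eq. (29): z crosses zero
        return y[-1]
    fired.terminal, fired.direction = True, 1

    y0 = np.append(x, np.log(r1))     # z(t) = log(r1) < 0
    sol = solve_ivp(rhs, (t, t_max), y0, events=fired, rtol=1e-6)
    x_new = sol.y[:-1, -1]
    if sol.status == 1:               # a slow reaction fires at t + tau
        a = slow_props(x_new)
        mu = np.searchsorted(np.cumsum(a), r2 * a.sum())  # smallest mu in (30)
        x_new = x_new + slow_stoich[mu]
    return sol.t[-1], x_new           # new time and state (no firing if t_max hit)

One simplification relative to the scheme above: solve_ivp applies its error control to the augmented variable z as well, whereas the remark about [40] suggests excluding z from the error test, which requires a solver that exposes per-component control.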
Figure 7. The histogram (left) and mean (right) population of product P with different discretization sizes, simulated by the hybrid method. Parameters: D_E = 1.0, k_s = 2.5, k_d = 0.1, k_syn = 5.0, k_deg = 0.05, system size L = 1.0. For the histogram figure, K_m = 25.0. The log-log plot shows the mean total product population under different discretizations and different parameter sets.
Figure 8. The distribution of product P with different discretization sizes, simulated by the hybrid method (left) and the SSA (right). Parameters: K_m = 50.0; the rest remain the same.
Numerical results certainly suggest that the hybrid method has great potential in the stochastic simulation of reaction-diffusion systems. We note that many details still need to be studied, but that is not the focus of this paper.
Abbreviations
CME: chemical master equation
ODE: ordinary differential equation
PDE: partial differential equation
RDME: reaction-diffusion master equation
SSA: stochastic simulation algorithm
References
1. McAdams H, Arkin A. Stochastic mechanisms in gene expression. Proc Natl Acad Sci. 1997;94(3):814-9.
2. Fedoroff N, Fontana W. Small numbers of big molecules. Science. 2002;297(5584):1129-31.
3. Samoilov M, Plyasunov S, Arkin AP. Stochastic amplification and signaling in enzymatic futile cycles through noise-induced bistability with oscillations. Proc Natl Acad Sci U S A. 2005;102(7):2310-5.
4. Gillespie DT. A general method for numerically simulating the stochastic time evolution of coupled chemical reactions. J Comput Phys. 1976;22(4):403-34.
5. Gillespie DT. Exact stochastic simulation of coupled chemical reactions. J Phys Chem. 1977;81(25):2340-61.
6. Michaelis L, Menten ML. Die Kinetik der Invertinwirkung. Biochem Z. 1913;49:333-69.
7. Hill AV. The possible effects of the aggregation of the molecules of haemoglobin on its dissociation curves. J Physiol. 1910;40(Suppl):iv-vii.
8. Hattne J, Fange D, Elf J. Stochastic reaction-diffusion simulation with MesoRD. Bioinformatics. 2005;21(12):2923-4.
9. Andrews SS, Bray D. Stochastic simulation of chemical reactions with spatial resolution and single molecule detail. Phys Biol. 2004;1:137-51.
10. van Zon JS, ten Wolde PR. Green's-function reaction dynamics: a particle-based approach for simulating biochemical networks in time and space. J Chem Phys. 2005;123(23):234910.
11. Le Novère N, Shimizu TS. StochSim: modelling of stochastic biomolecular processes. Bioinformatics. 2001;17:575-6.
12. von Smoluchowski M. Zur kinetischen Theorie der Brownschen Molekularbewegung und der Suspensionen. Annalen der Physik. 1906;326(14):756-80.
13. Gardiner CW, McNeil KJ, Walls DF, Matheson IS. Correlations in stochastic theories of chemical reactions. J Stat Phys. 1976;14:307-31.
14. Nicolis G, Prigogine I. Self-organization in nonequilibrium systems: from dissipative structures to order through fluctuations. New York: Wiley-Interscience; 1977.
15. Doi M. Stochastic theory of diffusion-controlled reaction. J Phys A Math Gen. 1976;9(9):1479.
16. Keizer J. Nonequilibrium statistical thermodynamics and the effect of diffusion on chemical reaction rates. J Phys Chem. 1982;86(26):5052-67.
17. Fange D, Berg OG, Sjöberg P, Elf J. Stochastic reaction-diffusion kinetics in the microscopic limit. Proc Natl Acad Sci. 2010;107(46):19820-5.
18. McQuarrie DA. Stochastic approach to chemical kinetics. J Appl Probab. 1967;4(3):413-78.
19. Gillespie DT. A rigorous derivation of the chemical master equation. Physica A. 1992;188(1-3):404-25.
20. Baras F, Mansour MM. Reaction-diffusion master equation: a comparison with microscopic simulations. Phys Rev E. 1996;54:6139-48.
21. Isaacson SA. The reaction-diffusion master equation as an asymptotic approximation of diffusion to a small target. SIAM J Appl Math. 2009;70(1):77-111.
22. Erban R, Chapman SJ. Stochastic modelling of reaction-diffusion processes: algorithms for bimolecular reactions. Phys Biol. 2009;6(4):046001.
23. Hellander S, Hellander A, Petzold L. Reaction-diffusion master equation in the microscopic limit. Phys Rev E. 2012;85:042901.
24. Li S, Brazhnik P, Sobral B, Tyson JJ. A quantitative study of the division cycle of Caulobacter crescentus stalked cells. PLoS Comput Biol. 2008;4(1):e9.
25. Li S, Brazhnik P, Sobral B, Tyson JJ. Temporal controls of the asymmetric cell division cycle in Caulobacter crescentus. PLoS Comput Biol. 2009;5(8):e1000463.
26. Subramanian K, Paul MR, Tyson JJ. Potential role of a bistable histidine kinase switch in the asymmetric division cycle of Caulobacter crescentus. PLoS Comput Biol. 2013;9(9):e1003221.
27. Subramanian K, Paul MR, Tyson JJ. Dynamical localization of DivL and PleC in the asymmetric division cycle of Caulobacter crescentus: a theoretical investigation of alternative models. PLoS Comput Biol. 2015;11(7):e1004348.
28. Collier J, Murray SR, Shapiro L. DnaA couples DNA replication and the expression of two cell cycle master regulators. EMBO J. 2006;25(2):346-56.
29. Collier J, Shapiro L. Spatial complexity and control of a bacterial cell cycle. Curr Opin Biotechnol. 2007;18(4):333-40.
30. Holtzendorff J, Hung D, Brende P, Reisenauer A, Viollier PH, McAdams HH, et al. Oscillating global regulators control the genetic circuit driving a bacterial cell cycle. Science. 2004;304(5673):983-7.
31. Quon KC, Yang B, Domian IJ, Shapiro L, Marczynski GT. Negative control of bacterial DNA replication by a cell cycle regulatory protein that binds at the chromosome origin. Proc Natl Acad Sci. 1998;95(1):120-5.
32. McGrath PT, Iniesta AA, Ryan KR, Shapiro L, McAdams HH. A dynamically localized protease complex and a polar specificity factor control a cell cycle master regulator. Cell. 2006;124(3):535-47.
33. Jenal U, Fuchs T. An essential protease involved in bacterial cell-cycle control. EMBO J. 1998;17(19):5658-69.
34. Iniesta AA, McGrath PT, Reisenauer A, McAdams HH, Shapiro L. A phospho-signaling pathway controls the localization and activity of a protease complex critical for bacterial cell cycle progression. Proc Natl Acad Sci. 2006;103(29):10935-40.
35. Tsokos C, Perchuk B, Laub M. A dynamic complex of signaling proteins uses polar localization to regulate cell-fate asymmetry in Caulobacter crescentus. Dev Cell. 2011;20(3):329-41.
36. Isaacson SA. A convergent reaction-diffusion master equation. J Chem Phys. 2013;139(5):054101.
37. Cao Y, Gillespie DT, Petzold LR. The slow-scale stochastic simulation algorithm. J Chem Phys. 2005;122(1):014116.
38. Haseltine EL, Rawlings JB. Approximate simulation of coupled fast and slow reactions for stochastic chemical kinetics. J Chem Phys. 2002;117(15):6959-69.
39. Hindmarsh AC. ODEPACK, a systematized collection of ODE solvers. IMACS Trans Sci Comput. 1983;1:55-64.
40. Petzold LR. A description of DASSL: a differential/algebraic system solver. Proceedings of the 1st IMACS World Congress, Montreal. 1982;1:65-8.
This work was partially supported by the National Science Foundation awards DMS-1225160, CCF-0953590, CCF-1526666, and MCB-1613741. In particular, the publication of this paper is directly funded from CCF-1526666 and MCB-1613741. FL initially noticed the simulation error described in this paper. FL, MC and YC then designed the toy model and analyzed the numerical error caused by the Hill function simulation. Later, SW joined to help with the implementation of the hybrid simulation. FL and MC together drafted the manuscript, and YC gave critical revisions on the writing. All authors have read and approved the final manuscript. All authors consent to publish this work through BMC'17.
About this supplement
This article has been published as part of BMC Systems Biology Volume 11 Supplement 3, 2017: Selected original research articles from the Third International Workshop on Computational Network Biology: Modeling, Analysis, and Control (CNB-MAC 2016): systems biology. The full contents of the supplement are available online at http://bmcsystbiol.biomedcentral.com/articles/supplements/volume-11-supplement-3.
Department of Computer Science, Virginia Tech, Blacksburg, 24061, VA, USA: Minghan Chen, Fei Li, Shuo Wang & Young Cao
Correspondence to Young Cao.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Chen, M., Li, F., Wang, S. et al. Stochastic modeling and simulation of reaction-diffusion system with Hill function dynamics. BMC Syst Biol 11, 21 (2017). https://doi.org/10.1186/s12918-017-0401-9
Reaction diffusion master equation (RDME)
Stochastic simulation
First all-flavour Neutrino Point-like Source Search with the ANTARES Neutrino Telescope (1706.01857) ANTARES Collaboration: A. Albert, M. André, M. Anghinolfi, G. Anton, M. Ardid, J.-J. Aubert, T. Avgitas, B. Baret, J. Barrios-Martí, S. Basa, B. Belhorma, V. Bertin, S. Biagi, R. Bormuth, S. Bourret, M.C. Bouwhuis, H. Brânzaş, R. Bruijn, J. Brunner, J. Busto, A. Capone, L. Caramete, J. Carr, S. Celli, R. Cherkaoui El Moursli, T. Chiarusi, M. Circella, J.A.B. Coelho, A. Coleiro, R. Coniglione, H. Costantini, P. Coyle, A. Creusot, A. F. Díaz, A. Deschamps, G. De Bonis, C. Distefano, I. Di Palma, A. Domi, C. Donzaud, D. Dornic, D. Drouhin, T. Eberl, I. El Bojaddaini, N. El Khayati, D. Elsässer, A. Enzenhöfer, A. Ettahiri, F. Fassi, I. Felis, L.A. Fusco, S. Galatà, P. Gay, V. Giordano, H. Glotin, T. Grégoire, R. Gracia Ruiz, K. Graf, S. Hallmann, H. van Haren, A.J. Heijboer, Y. Hello, J.J. Hernández-Rey, J. Hößl, J. Hofestädt, C. Hugon, G. Illuminati, C.W. James, M. de Jong, M. Jongen, M. Kadler, O. Kalekin, U. Katz, D. Kießling, A. Kouchner, M. Kreter, I. Kreykenbohm, V. Kulikovskiy, C. Lachaud, R. Lahmann, D. Lefèvre, E. Leonora, M. Lotze, S. Loucatos, M. Marcelin, A. Margiotta, A. Marinelli, J.A. Martínez-Mora, R. Mele, K. Melis, T. Michael, P. Migliozzi, A. Moussa, S. Navas, E. Nezri, M. Organokov, G.E. Păvălaş, C. Pellegrino, C. Perrina, P. Piattelli, V. Popa, T. Pradier, L. Quinn, C. Racca, G. Riccobene, A. Sánchez-Losa, M. Saldaña, I. Salvadori, D. F. E. Samtleben, M. Sanguineti, P. Sapienza, F. Schüssler, C. Sieger, M. Spurio, Th. Stolarczyk, M. Taiuti, Y. Tayalati, A. Trovato, D. Turpin, C. Tönnis, B. Vallage, V. Van Elewyck, F. Versari, D. Vivolo, A. Vizzoca, J. Wilms, J.D. Zornoza, J. Zúñiga Dec. 5, 2018 hep-ex, astro-ph.IM, astro-ph.HE A search for cosmic neutrino sources using the data collected with the ANTARES neutrino telescope between early 2007 and the end of 2015 is performed. For the first time, all neutrino interactions --charged and neutral current interactions of all flavours-- are considered in a search for point-like sources with the ANTARES detector. In previous analyses, only muon neutrino charged current interactions were used. This is achieved by using a novel reconstruction algorithm for shower-like events in addition to the standard muon track reconstruction. The shower channel contributes about 23\% of all signal events for an $E^{-2}$ energy spectrum. No significant excess over background is found. The most signal-like cluster of events is located at $(\alpha,\delta) = (343.8^\circ, 23.5^\circ)$ with a significance of $1.9\sigma$. The neutrino flux sensitivity of the search is about $E^2 d\varPhi/dE = 6\cdot10^{-9} GeV cm^{-2} s^{-1}$ for declinations from $-90^\circ$ up to $-42^\circ$, and below $10^{-8} GeV cm^{-2} s^{-1}$ for declinations up to $5^{\circ}$. The directions of 106 source candidates and of 13 muon track events from the IceCube HESE sample are investigated for a possible neutrino signal and upper limits on the signal flux are determined. An algorithm for the reconstruction of neutrino-induced showers in the ANTARES neutrino telescope (1708.03649) A. Albert, M. André, M. Anghinolfi, G. Anton, M. Ardid, J.-J. Aubert, T. Avgitas, B. Baret, J. Barrios-Martí, S. Basa, B. Belhorma, V. Bertin, S. Biagi, R. Bormuth, S. Bourret, M.C. Bouwhuis, H. Brânzaş, R. Bruijn, J. Brunner, J. Busto, A. Capone, L. Caramete, J. Carr, S. Celli, R. Cherkaoui El Moursli, T. Chiarusi, M. Circella, J.A.B. Coelho, A. Coleiro, R. Coniglione, H. Costantini, P. Coyle, A. 
Creusot, A. F. Díaz, A. Deschamps, G. De Bonis, C. Distefano, I. Di Palma, A. Domi, C. Donzaud, D. Dornic, D. Drouhin, T. Eberl, I. El Bojaddaini, N. El Khayati, D. Elsässer, A. Enzenhöfer, A. Ettahiri, F. Fassi, I. Felis, L.A. Fusco, P. Gay, V. Giordano, H. Glotin, T. Grégoire, R. Gracia Ruiz, K. Graf, S. Hallmann, H. van Haren, A.J. Heijboer, Y. Hello, J.J. Hernández-Rey, J. Hößl, J. Hofestädt, C. Hugon, G. Illuminati, C.W. James, M. de Jong, M. Jongen, M. Kadler, O. Kalekin, U. Katz, D. Kießling, A. Kouchner, M. Kreter, I. Kreykenbohm, V. Kulikovskiy, C. Lachaud, R. Lahmann, D. Lefèvre, E. Leonora, M. Lotze, S. Loucatos, M. Marcelin, A. Margiotta, A. Marinelli, J.A. Martínez-Mora, R. Mele, K. Melis, T. Michael, P. Migliozzi, A. Moussa, S. Navas, E. Nezri, M. Organokov, G.E. Păvălaş, C. Pellegrino, C. Perrina, P. Piattelli, V. Popa, T. Pradier, L. Quinn, C. Racca, G. Riccobene, A. Sánchez-Losa, M. Saldaña, I. Salvadori, D. F. E. Samtleben, M. Sanguineti, P. Sapienza, F. Schüssler, C. Sieger, M. Spurio, Th. Stolarczyk, M. Taiuti, Y. Tayalati, A. Trovato, D. Turpin, C. Tönnis, B. Vallage, V. Van Elewyck, F. Versari, D. Vivolo, A. Vizzoca, J. Wilms, J.D. Zornoza, J. Zúñiga Jan. 19, 2018 physics.ins-det, astro-ph.IM Muons created by $\nu_\mu$ charged current (CC) interactions in the water surrounding the ANTARES neutrino telescope have been almost exclusively used so far in searches for cosmic neutrino sources. Due to their long range, highly energetic muons inducing Cherenkov radiation in the water are reconstructed with dedicated algorithms that allow the determination of the parent neutrino direction with a median angular resolution of about 0.4° for an $E^{-2}$ neutrino spectrum. In this paper, an algorithm optimised for accurate reconstruction of energy and direction of shower events in the ANTARES detector is presented. Hadronic showers of electrically charged particles are produced by the disintegration of the nucleus both in CC and neutral current (NC) interactions of neutrinos in water. In addition, electromagnetic showers result from the CC interactions of electron neutrinos while the decay of a tau lepton produced in $\nu_\tau$ CC interactions will in most cases lead to either a hadronic or an electromagnetic shower. A shower can be approximated as a point source of photons. With the presented method, the shower position is reconstructed with a precision of about 1 m, the neutrino direction is reconstructed with a median angular resolution between 2° and 3° in the energy range of 1-1000 TeV. In this energy interval, the uncertainty on the reconstructed neutrino energy is about 5-10%. The increase in the detector sensitivity due to the use of additional information from shower events in the searches for a cosmic neutrino flux is also presented. All-sky Search for High-Energy Neutrinos from Gravitational Wave Event GW170104 with the ANTARES Neutrino Telescope (1710.03020) ANTARES Collaboration: A. Albert, M. André, M. Anghinolfi, G. Anton, M. Ardid, J.-J. Aubert, T. Avgitas, B. Baret, J. Barrios-Martí, S. Basa, B. Belhorma, V. Bertin, S. Biagi, R. Bormuth, S. Bourret, M.C. Bouwhuis, H. Brânzaş, R. Bruijn, J. Brunner, J. Busto, A. Capone, L. Caramete, J. Carr, S. Celli, R. Cherkaoui El Moursli, T. Chiarusi, M. Circella, J.A.B. Coelho, A. Coleiro, R. Coniglione, H. Costantini, P. Coyle, A. Creusot, A.F. Díaz, A. Deschamps, G. De Bonis, C. Distefano, I. Di Palma, A. Domi, C. Donzaud, D. Dornic, D.
Drouhin, T. Eberl, I. El Bojaddaini, N. El Khayati, D. Elsässer, A. Enzenhöfer, A. Ettahiri, F. Fassi, I. Felis, L.A. Fusco, P. Gay, V. Giordano, H. Glotin, T. Grégoire, R. Gracia Ruiz, K. Graf, S. Hallmann, H. van Haren, A.J. Heijboer, Y. Hello, J.J. Hernández-Rey, J. Hößl, J. Hofestädt, C. Hugon, G. Illuminati, C.W. James, M. de Jong, M. Jongen, M. Kadler, O. Kalekin, U. Katz, D. Kießling, A. Kouchner, M. Kreter, I. Kreykenbohm, V. Kulikovskiy, C. Lachaud, R. Lahmann, D. Lefèvre, E. Leonora, M. Lotze, S. Loucatos, M. Marcelin, A. Margiotta, A. Marinelli, J.A. Martínez-Mora, R. Mele, K. Melis, T. Michael, P. Migliozzi, A. Moussa, S. Navas, E. Nezri, M. Organokov, G.E. Păvălaş, C. Pellegrino, C. Perrina, P. Piattelli, V. Popa, T. Pradier, L. Quinn, C. Racca, G. Riccobene, A. Sánchez-Losa, M. Saldaña, I. Salvadori, D. F. E. Samtleben, M. Sanguineti, P. Sapienza, F. Schüssler, C. Sieger, M. Spurio, Th. Stolarczyk, M. Taiuti, Y. Tayalati, A. Trovato, D. Turpin, C. Tönnis, B. Vallage, V. Van Elewyck, F. Versari, D. Vivolo, A. Vizzoca, J. Wilms, J.D. Zornoza, J. Zúñiga Oct. 9, 2017 astro-ph.HE Advanced LIGO detected a significant gravitational wave signal (GW170104) originating from the coalescence of two black holes during the second observation run on January 4$^{\textrm{th}}$, 2017. An all-sky high-energy neutrino follow-up search has been made using data from the ANTARES neutrino telescope, including both upgoing and downgoing events in two separate analyses. No neutrino candidates were found within $\pm500$ s around the GW event time nor any time clustering of events over an extended time window of $\pm3$ months. The non-detection is used to constrain isotropic-equivalent high-energy neutrino emission from GW170104 to less than $\sim4\times 10^{54}$ erg for a $E^{-2}$ spectrum. Letter of Intent for KM3NeT 2.0 (1601.07459) S. Adrián-Martínez, M. Ageron, F. Aharonian, S. Aiello, A. Albert, F. Ameli, E. Anassontzis, M. Andre, G. Androulakis, M. Anghinolfi, G. Anton, M. Ardid, T. Avgitas, G. Barbarino, E. Barbarito, B. Baret, J. Barrios-Martí, B. Belhorma, A. Belias, E. Berbee, A. van den Berg, V. Bertin, S. Beurthey, V. van Beveren, N. Beverini, S. Biagi, A. Biagioni, M. Billault, M. Bond, R. Bormuth, B. Bouhadef, G. Bourlis, S. Bourret, C. Boutonnet, M. Bouwhuis, C. Bozza, R. Bruijn, J. Brunner, E. Buis, J. Busto, G. Cacopardo, L. Caillat, M. Calamai, D. Calvo, A. Capone, L. Caramete, S. Cecchini, S. Celli, C. Champion, R. Cherkaoui El Moursli, S. Cherubini, T. Chiarusi, M. Circella, L. Classen, R. Cocimano, J. A. B. Coelho, A. Coleiro, S. Colonges, R. Coniglione, M. Cordelli, A. Cosquer, P. Coyle, A. Creusot, G. Cuttone, A. D'Amico, G. De Bonis, G. De Rosa, C. De Sio, F. Di Capua, I. Di Palma, A. F. Díaz García, C. Distefano, C. Donzaud, D. Dornic, Q. Dorosti-Hasankiadeh, E. Drakopoulou, D. Drouhin, L. Drury, M. Durocher, T. Eberl, S. Eichie, D. van Eijk, I. El Bojaddaini, N. El Khayati, D. Elsaesser, A. Enzenhöfer, F. Fassi, P. Favali, P. Fermani, G. Ferrara, G. Frascadore, C. Filippidis, L. A. Fusco, T. Gal, S. Galatà, F. Garufi, P. Gay, M. Gebyehu, V. Giordano, N. Gizani, R. Gracia, K. Graf, T. Grégoire, G. Grella, R. Habel, S. Hallmann, H. van Haren, S. Harissopulos, T. Heid, A. Heijboer, E. Heine, S. Henry, J. J. Hernández-Rey, M. Hevinga, J. Hofestädt, C. M. F. Hugon, G. Illuminati, C. W. James, P. Jansweijer, M. Jongen, M. de Jong, M. Kadler, O. Kalekin, A. Kappes, U. F. Katz, P. Keller, G. Kieft, D. Kießling, E. N. Koffeman, P. Kooijman, A. Kouchner, V. Kulikovskiy, R. Lahmann, P. 
Lamare, A. Leisos, E. Leonora, M. Lindsey Clark, A. Liolios, C. D. Llorens Alvarez, D. Lo Presti, H. Löhner, A. Lonardo, M. Lotze, S. Loucatos, E. Maccioni, K. Mannheim, A. Margiotta, A. Marinelli, O. Mariş, C. Markou, J. A. Martínez-Mora, A. Martini, R. Mele, K. W. Melis, T. Michael, P. Migliozzi, E. Migneco, P. Mijakowski, A. Miraglia, C. M. Mollo, M. Mongelli, M. Morganti, A. Moussa, P. Musico, M. Musumeci, S. Navas, C. A. Nicolau, I. Olcina, C. Olivetto, A. Orlando, A. Papaikonomou, R. Papaleo, G. E. Păvălaş, H. Peek, C. Pellegrino, C. Perrina, M. Pfutzner, P. Piattelli, K. Pikounis, G. E. Poma, V. Popa, T. Pradier, F. Pratolongo, G. Pühlhofer, S. Pulvirenti, L. Quinn, C. Racca, F. Raffaelli, N. Randazzo, P. Rapidis, P. Razis, D. Real, L. Resvanis, J. Reubelt, G. Riccobene, C. Rossi, A. Rovelli, M. Saldaña, I. Salvadori, D. F. E. Samtleben, A. Sánchez García, A. Sánchez Losa, M. Sanguineti, A. Santangelo, D. Santonocito, P. Sapienza, F. Schimmel, J. Schmelling, V. Sciacca, M. Sedita, T. Seitz, I. Sgura, F. Simeone, I. Siotis, V. Sipala, B. Spisso, M. Spurio, G. Stavropoulos, J. Steijger, S. M. Stellacci, D. Stransky, M. Taiuti, Y. Tayalati, D. Tézier, S. Theraube, L. Thompson, P. Timmer, C. Tönnis, L. Trasatti, A. Trovato, A. Tsirigotis, S. Tzamarias, E. Tzamariudaki, B. Vallage, V. Van Elewyck, J. Vermeulen, P. Vicini, S. Viola, D. Vivolo, M. Volkert, G. Voulgaris, L. Wiggers, J. Wilms, E. de Wolf, K. Zachariadou, J. D. Zornoza, J. Zúñiga July 26, 2016 hep-ex, physics.ins-det, astro-ph.IM, astro-ph.HE The main objectives of the KM3NeT Collaboration are i) the discovery and subsequent observation of high-energy neutrino sources in the Universe and ii) the determination of the mass hierarchy of neutrinos. These objectives are strongly motivated by two recent important discoveries, namely: 1) The high-energy astrophysical neutrino signal reported by IceCube and 2) the sizable contribution of electron neutrinos to the third neutrino mass eigenstate as reported by Daya Bay, Reno and others. To meet these objectives, the KM3NeT Collaboration plans to build a new Research Infrastructure consisting of a network of deep-sea neutrino telescopes in the Mediterranean Sea. A phased and distributed implementation is pursued which maximises the access to regional funds, the availability of human resources and the synergetic opportunities for the earth and sea sciences community. Three suitable deep-sea sites are identified, namely off-shore Toulon (France), Capo Passero (Italy) and Pylos (Greece). The infrastructure will consist of three so-called building blocks. A building block comprises 115 strings, each string comprises 18 optical modules and each optical module comprises 31 photo-multiplier tubes. Each building block thus constitutes a 3-dimensional array of photo sensors that can be used to detect the Cherenkov light produced by relativistic particles emerging from neutrino interactions. Two building blocks will be configured to fully explore the IceCube signal with different methodology, improved resolution and complementary field of view, including the Galactic plane. One building block will be configured to precisely measure atmospheric neutrino oscillations. Performance of the first prototype of the CALICE scintillator strip electromagnetic calorimeter (1311.3761) CALICE Collaboration: K. Francis, J. Repond, J. Schlereth, J. Smith, L. Xia, E. Baldolemar, J. Li, S. T. Park, M. Sosebee, A. P. White, J. Yu, G. Eigen, Y. Mikami, N. K. Watson, M. A. Thomson, D. R. Ward, D. Benchekroun, A. 
Hoummada, Y. Khoulaki, J. Apostolakis, A. Dotti, G. Folger, V. Ivantchenko, A. Ribon, V. Uzhinskiy, C. Carloganu, P. Gay, S. Manen, L. Royer, M. Tytgat, N. Zaganidis, G. C. Blazey, A. Dyshkant, J. G. R. Lima, V. Zutshi, J. -Y. Hostachy, L. Morin, U. Cornett, D. David, A. Ebrahimi, G. Falley, K. Gadow, P. Goettlicher, C. Guenter, O. Hartbrich, B. Hermberg, S. Karstensen, F. Krivan, K. Krueger, B. Lutz, S. Morozov, V. Morgunov, C. Neubueser, M. Reinecke, F. Sefkow, P. Smirnov, M. Terwort, E. Garutti, S. Laurien, S. Lu, I. Marchesini, M. Matysek, M. Ramilli, K. Briggl, P. Eckert, T. Harion, H.-Ch. Schultz-Coulon, W. Shen, R. Stamen, B. Bilki, E. Norbeck, D. Northacker, Y. Onel, G. W. Wilson, K. Kawagoe, Y. Sudo, T. Yoshioka, P. D. Dauncey, M. Wing, F. Salvatore, E. Cortina Gil, S. Mannai, G. Baulieu, P. Calabria, L. Caponetto, C. Combaret, R. Della Negra, G. Grenier, R. Han, J-C. Ianigro, R. Kieffer, I. Laktineh, N. Lumb, H. Mathez, L. Mirabito, A. Petrukhin, A. Steen, W. Tromeur, M. Vander Donckt, Y. Zoccarato, E. Calvo Alamillo, M.-C. Fouz, J. Puerta-Pelayo, F. Corriveau, B. Bobchenko, M. Chadeeva, M. Danilov, A. Epifantsev, O. Markin, R. Mizuk, E. Novikov, V. Popov, V. Rusinov, E. Tarkovsky, D. Besson, P. Buzhan, A. Ilyin, V. Kantserov, V. Kaplin, A. Karakash, E. Popova, V. Tikhomirov, C. Kiesling, K. Seidel, F. Simon, C. Soldner, L. Weuste, M. S. Amjad, J. Bonis, S. Callier, S. Conforti di Lorenzo, P. Cornebise, Ph. Doublet, F. Dulucq, J. Fleury, T. Frisson, N. van der Kolk, H. Li, G. Martin-Chassard, F. Richard, Ch. de la Taille, R. Poeschl, L. Raux, J. Rouene, N. Seguin-Moreau, M. Anduze, V. Balagura, V. Boudry, J-C. Brient, R. Cornat, M. Frotin, F. Gastaldi, E. Guliyev, Y. Haddad, F. Magniette, G. Musat, M. Ruan, T. H. Tran, H. Videau, B. Bulanek, J. Zacek, J. Cvach, P. Gallus, M. Havranek, M. Janata, J. Kvasnicka, D. Lednicky, M. Marcisovsky, I. Polak, J. Popule, L. Tomasek, M. Tomasek, P. Ruzicka, P. Sicho, J. Smolik, V. Vrba, J. Zalesak, B. Belhorma, H. Ghazlane, K. Kotera, H. Ono, T. Takeshita, S. Uozumi, D. Jeans, S. Chang, A. Khan, D. H. Kim, D. J. Kong, Y. D. Oh, M. Goetze, J. Sauer, S. Weber, C. Zeitnitz June 11, 2014 hep-ex, physics.ins-det A first prototype of a scintillator strip-based electromagnetic calorimeter was built, consisting of 26 layers of tungsten absorber plates interleaved with planes of 45x10x3 mm3 plastic scintillator strips. Data were collected using a positron test beam at DESY with momenta between 1 and 6 GeV/c. The prototype's performance is presented in terms of the linearity and resolution of the energy measurement. These results represent an important milestone in the development of highly granular calorimeters using scintillator strip technology. This technology is being developed for a future linear collider experiment, aiming at the precise measurement of jet energies using particle flow techniques. Shower development of particles with momenta from 1 to 10 GeV in the CALICE Scintillator-Tungsten HCAL (1311.3505) C. Adloff, J.-J. Blaising, M. Chefdeville, C. Drancourt, R. Gaglione, N. Geffroy, Y. Karyotakis, I. Koletsou, J. Prast, G. Vouters, J. Repond, J. Schlereth, J. Smith, L. Xia, E. Baldolemar, J. Li, S. T. Park, M. Sosebee, A. P. White, J. Yu, G. Eigen, M. A. Thomson, D. R.Ward, D. Benchekroun, A. Hoummada, Y. Khoulaki, J. Apostolakis, D. Dannheim, A. Dotti, K. Elsener, G. Folger, C. Grefe, V. Ivantchenko, M. Killenberg, W. Klempt, E. van der Kraaij, C. B. Lam, L. Linssen, A.-I. Lucaci-Timoce, A. Muennich, S. Poss, A. Ribon, A. Sailer, D. 
Schlatter, J. Strube, V. Uzhinskiy, C. Carloganu, P. Gay, S. Manen, L. Royer, M. Tytgat, N. Zaganidis, G. C. Blazey, A. Dyshkant, J. G. R. Lima, V. Zutshi, J.-Y. Hostachy, L. Morin, U. Cornett, D. David, A. Ebrahimi, G. Falley, N. Feege, K. Gadow, P. Goettlicher, C. Guenter, O. Hartbrich, B. Hermberg, S. Karstensen, F. Krivan, K. Krueger, S. Lu, B. Lutz, S. Morozov, V. Morgunov, C. Neubueser, M. Reinecke, F. Sefkow, P. Smirnov, M. Terwort, E. Garutti, S. Laurien, I. Marchesini, M. Matysek, M. Ramilli, K. Briggl, P. Eckert, T. Harion, H.-Ch. Schultz-Coulon, W. Shen, R. Stamen, B. Bilki, E. Norbeck, D. Northacker, Y. Onel, G. W. Wilson, K. Kawagoe, Y. Sudo, T. Yoshioka, P. D. Dauncey, M. Wing, F. Salvatore, E. Cortina Gil, S. Mannai, G. Baulieu, P. Calabria, L. Caponetto, C. Combaret, R. Della Negra, G. Grenier, R. Han, J-C. Ianigro, R. Kieffer, I. Laktineh, N. Lumb, H. Mathez, L. Mirabito, A. Petrukhin, A. Steen, W. Tromeur, M. Vander Donckt, Y. Zoccarato, E. Calvo Alamillo, M.-C. Fouz, J. Puerta-Pelayo, F. Corriveau, B. Bobchenko, M. Chadeeva, M. Danilov, A. Epifantsev, O. Markin, R. Mizuk, E. Novikov, V. Popov, V. Rusinov, E. Tarkovsky, N. Kirikova, V. Kozlov, P. Smirnov, Y. Soloviev, D. Besson, P. Buzhan, A. Ilyin, V. Kantserov, V. Kaplin, A. Karakash, E. Popova, V. Tikhomirov, C. Kiesling, K. Seidel, F. Simon, C. Soldner, M. Szalay, M. Tesar, L. Weuste, M. S. Amjad, J. Bonis, S. Callier, S. Conforti di Lorenzo, P. Cornebise, Ph. Doublet, F. Dulucq, J. Fleury, T. Frisson, N. van der Kolk, H. Li, G. Martin-Chassard, F. Richard, Ch.de la Taille, R. Poeschl, L. Raux, J. Rouene, N. Seguin-Moreau, M. Anduze, V. Balagura, V. Boudry, J-C. Brient, R. Cornat, M. Frotin, F. Gastaldi, E. Guliyev, Y. Haddad, F. Magniette, G. Musat, M. Ruan, T. H. Tran, H. Videau, B. Bulanek, J. Zacek, J. Cvach, P. Gallus, M. Havranek, M. Janata, J. Kvasnicka, D. Lednicky, M. Marcisovsky, I. Polak, J. Popule, L. Tomasek, M. Tomasek, P. Ruzicka, P. Sicho, J. Smolik, V. Vrba, J. Zalesak, B. Belhorma, H. Ghazlane, K. Kotera, T. Takeshita, S. Uozumi, S. Chang, A. Khan, D. H. Kim, D. J. Kong, Y. D. Oh, M. Goetze, J. Sauer, S. Weber, C. Zeitnitz Jan. 13, 2014 physics.ins-det Lepton colliders are considered as options to complement and to extend the physics programme at the Large Hadron Collider. The Compact Linear Collider (CLIC) is an $e^+e^-$ collider under development aiming at centre-of-mass energies of up to 3 TeV. For experiments at CLIC, a hadron sampling calorimeter with tungsten absorber is proposed. Such a calorimeter provides sufficient depth to contain high-energy showers, while allowing a compact size for the surrounding solenoid. A fine-grained calorimeter prototype with tungsten absorber plates and scintillator tiles read out by silicon photomultipliers was built and exposed to particle beams at CERN. Results obtained with electrons, pions and protons of momenta up to 10 GeV are presented in terms of energy resolution and shower shape studies. The results are compared with several GEANT4 simulation models in order to assess the reliability of the Monte Carlo predictions relevant for a future experiment at CLIC. Track segments in hadronic showers in a highly granular scintillator-steel hadron calorimeter (1305.7027) CALICE Collaboration: C. Adloff, J.-J. Blaising, M. Chefdeville, C. Drancourt, R. Gaglione, N. Geffroy, Y. Karyotakis, I. Koletsou, J. Prast, G. Vouters, K. Francis, J. Repond, J. Schlereth, J. Smith, L. Xia, E. Baldolemar, J. Li, S. T. Park, M. Sosebee, A. P. White, J. Yu, G. Eigen, Y. Mikami, N. 
K. Watson, G. Mavromanolakis, M. A. Thomson, D. R. Ward, W. Yan, D. Benchekroun, A. Hoummada, Y. Khoulaki, J. Apostolakis, D. Dannheim, A. Dotti, G. Folger, V. Ivantchenko, W. Klempt, E. van der Kraaij, A. -I. Lucaci-Timoce, A. Ribon, D. Schlatter, V. Uzhinskiy, C. Carloganu, P. Gay, S. Manen, L. Royer, M. Tytgat, N. Zaganidis, G. C. Blazey, A. Dyshkant, J. G. R. Lima, V. Zutshi, J. -Y. Hostachy, L. Morin, U. Cornett, D. David, G. Falley, K. Gadow, P. Göttlicher, C. Günter, O. Hartbrich, B. Hermberg, S. Karstensen, F. Krivan, K. Krüger, S. Lu, S. Morozov, V. Morgunov, M. Reinecke, F. Sefkow, P. Smirnov, M. Terwort, N. Feege, E. Garutti, S. Laurien, I. Marchesini, M. Matysek, M. Ramilli, K. Briggl, P. Eckert, T. Harion, H. -Ch. Schultz-Coulon, W. Shen, R. Stamen, B. Bilki, E. Norbeck, Y. Onel, G. W. Wilson, K. Kawagoe, Y. Sudo, T. Yoshioka, P. D. Dauncey, A. -M. Magnan, V. Bartsch, M. Wing, F. Salvatore, E. Cortina Gil, S. Mannai, G. Baulieu, P. Calabria, L. Caponetto, C. Combaret, R. Della Negra, G. Grenier, R. Han, J-C. Ianigro, R. Kieffer, I. Laktineh, N. Lumb, H. Mathez, L. Mirabito, A. Petrukhin, A. Steen, W. Tromeur, M. Vander Donckt, Y. Zoccarato, E. Calvo Alamillo, M.-C. Fouz, J. Puerta-Pelayo, F. Corriveau, B. Bobchenko, M. Chadeeva, M. Danilov, A. Epifantsev, O. Markin, R. Mizuk, E. Novikov, V. Popov, V. Rusinov, E. Tarkovsky, N. Kirikova, V. Kozlov, P. Smirnov, Y. Soloviev, P. Buzhan, A. Ilyin, V. Kantserov, V. Kaplin, A. Karakash, E. Popova, V. Tikhomirov, C. Kiesling, K. Seidel, F. Simon, C. Soldner, M. Szalay, M. Tesar, L. Weuste, M. S. Amjad, J. Bonis, S. Callier, S. Conforti di Lorenzo, P. Cornebise, Ph. Doublet, F. Dulucq, J. Fleury, T. Frisson, N. van der Kolk, H. Li, G. Martin-Chassard, F. Richard, Ch. de la Taille, R. Pöschl, L. Raux, J. Rouene, N. Seguin-Moreau, M. Anduze, V. Balagura, V. Boudry, J-C. Brient, R. Cornat, M. Frotin, F. Gastaldi, E. Guliyev, Y. Haddad, F. Magniette, G. Musat, M. Ruan, T.H. Tran, H. Videau, B. Bulanek, J. Zacek, J. Cvach, P. Gallus, M. Havranek, M. Janata, J. Kvasnicka, D. Lednicky, M. Marcisovsky, I. Polak, J. Popule, L. Tomasek, M. Tomasek, P. Ruzicka, P. Sicho, J. Smolik, V. Vrba, J. Zalesak, B. Belhorma, H. Ghazlane, K. Kotera, T. Takeshita, S. Uozumi, D. Jeans, M. Götze, J. Sauer, S. Weber, C. Zeitnitz July 29, 2013 hep-ex, physics.ins-det We investigate the three dimensional substructure of hadronic showers in the CALICE scintillator-steel hadronic calorimeter. The high granularity of the detector is used to find track segments of minimum ionising particles within hadronic showers, providing sensitivity to the spatial structure and the details of secondary particle production in hadronic cascades. The multiplicity, length and angular distribution of identified track segments are compared to GEANT4 simulations with several different shower models. Track segments also provide the possibility for in-situ calibration of highly granular calorimeters. Hadronic energy resolution of a highly granular scintillator-steel hadron calorimeter using software compensation techniques (1207.4210) CALICE Collaboration: C. Adloff, J. Blaha, J.-J. Blaising, C. Drancourt, A. Espargilière, R. Gaglione, N. Geffroy, Y. Karyotakis, J. Prast, G. Vouters, K. Francis, J. Repond, J. Smith, L. Xia, E. Baldolemar, J. Li, S. T. Park, M. Sosebee, A. P. White, J. Yu, T. Buanes, G. Eigen, Y. Mikami, N. K. Watson, T. Goto, G. Mavromanolakis, M. A. Thomson, D. R. Ward, W. Yan, D. Benchekroun, A. Hoummada, Y. Khoulaki, M. Benyamna, C. Cârloganu, F. Fehr, P. Gay, S. 
Manen, L. Royer, G. C. Blazey, A. Dyshkant, J. G. R. Lima, V. Zutshi, J.-Y. Hostachy, L. Morin, U. Cornett, D. David, G. Falley, K. Gadow, P. Göttlicher, C. Günter, B. Hermberg, S. Karstensen, F. Krivan, A.-I. Lucaci-Timoce, S. Lu, B. Lutz, S. Morozov, V. Morgunov, M. Reinecke, F. Sefkow, P. Smirnov, M. Terwort, A. Vargas-Trevino, N. Feege, E. Garutti, I. Marchesini, M. Ramilli, P. Eckert, T. Harion, A. Kaplan, H.-Ch. Schultz-Coulon, W. Shen, R. Stamen, A. Tadday, B. Bilki, E. Norbeck, Y. Onel, G. W. Wilson, K. Kawagoe, P. D. Dauncey, A.-M. Magnan, M. Wing, F. Salvatore, E. Calvo Alamillo, M.-C. Fouz, J. Puerta-Pelayo, V. Balagura, B. Bobchenko, M. Chadeeva, M. Danilov, A. Epifantsev, O. Markin, R. Mizuk, E. Novikov, V. Rusinov, E. Tarkovsky, N. Kirikova, V. Kozlov, P. Smirnov, Y. Soloviev, P. Buzhan, B. Dolgoshein, A. Ilyin, V. Kantserov, V. Kaplin, A. Karakash, E. Popova, S. Smirnov, C. Kiesling, S. Pfau, K. Seidel, F. Simon, C. Soldner, M. Szalay, M. Tesar, L. Weuste, J. Bonis, B. Bouquet, S. Callier, P. Cornebise, Ph. Doublet, F. Dulucq, M. Faucci Giannelli, J. Fleury, H. Li, G. Martin-Chassard, F. Richard, Ch. de la Taille, R. Pöschl, L. Raux, N. Seguin-Moreau, F. Wicek, M. Anduze, V. Boudry, J-C. Brient, D. Jeans, P. Mora de Freitas, G. Musat, M. Reinhard, M. Ruan, H. Videau, B. Bulanek, J. Zacek, J. Cvach, P. Gallus, M. Havranek, M. Janata, J. Kvasnicka, D. Lednicky, M. Marcisovsky, I. Polak, J. Popule, L. Tomasek, M. Tomasek, P. Ruzicka, P. Sicho, J. Smolik, V. Vrba, J. Zalesak, B. Belhorma, H. Ghazlane, T. Takeshita, S. Uozumi, J. Sauer, S. Weber, C. Zeitnitz

Sept. 27, 2012 hep-ex, physics.ins-det

The energy resolution of a highly granular 1 m^3 analogue scintillator-steel hadronic calorimeter is studied using charged pions with energies from 10 GeV to 80 GeV at the CERN SPS. The energy resolution for single hadrons is determined to be approximately 58%/sqrt(E/GeV). This resolution is improved to approximately 45%/sqrt(E/GeV) with software compensation techniques. These techniques take advantage of the event-by-event information about the substructure of hadronic showers which is provided by the imaging capabilities of the calorimeter. The energy reconstruction is improved either with corrections based on the local energy density or by applying a single correction factor to the event energy sum derived from a global measure of the shower energy density. The application of the compensation algorithms to Geant4 simulations yields resolution improvements comparable to those observed for real data.

Electromagnetic response of a highly granular hadronic calorimeter (1012.4343)

C. Adloff, J. Blaha, J.-J. Blaising, C. Drancourt, A. Espargilière, R. Gaglione, N. Geffroy, Y. Karyotakis, J. Prast, G. Vouters, K. Francis, J. Repond, J. Smith, L. Xia, E. Baldolemar, J. Li, S. T. Park, M. Sosebee, A. P. White, J. Yu, Y. Mikami, N. K. Watson, T. Goto, G. Mavromanolakis, M. A. Thomson, D. R. Ward, W. Yan, M. Benyamna, C. Cârloganu, F. Fehr, P. Gay, S.
Wilson, K. Kawagoe, S. Uozumi, J. A. Ballin, P. D. Dauncey, A. -M. Magnan, H. S. Yilmaz, O. Zorba, V. Bartsch, M. Postranecky, M. Warren, M. Wing, F. Salvatore, E. Calvo Alamillo, M.-C. Fouz, J. Puerta-Pelayo, V. Balagura, B. Bobchenko, M. Chadeeva, M. Danilov, A. Epifantsev, O. Markin, R. Mizuk, E. Novikov, V. Rusinov, E. Tarkovsky, Y. Soloviev, V. Kozlov, P. Buzhan, B. Dolgoshein, A. Ilyin, V. Kantserov, V. Kaplin, A. Karakash, E. Popova, S. Smirnov, A. Frey, C. Kiesling, K. Seidel, F. Simon, C. Soldner, L. Weuste, J. Bonis, B. Bouquet, S. Callier, P. Cornebise, Ph. Doublet, F. Dulucq, M. Faucci Giannelli, J. Fleury, G. Guilhem, H. Li, G. Martin-Chassard, F. Richard, Ch. de la Taille, R. Pöschl, L. Raux, N. Seguin-Moreau, F. Wicek, M. Anduze, V. Boudry, J-C. Brient, D. Jeans, P. Mora de Freitas, G. Musat, M. Reinhard, M. Ruan, H. Videau, B. Bulanek, J. Zacek, J. Cvach, P. Gallus, M. Havranek, M. Janata, J. Kvasnicka, D. Lednicky, M. Marcisovsky, I. Polak, J. Popule, L. Tomasek, M. Tomasek, P. Ruzicka, P. Sicho, J. Smolik, V. Vrba, J. Zalesak, B. Belhorma, H. Ghazlane, K. Kotera, M. Nishiyama, T. Takeshita, S. Tozuka, T. Buanes, G. Eigen June 8, 2011 physics.ins-det The CALICE collaboration is studying the design of high performance electromagnetic and hadronic calorimeters for future International Linear Collider detectors. For the hadronic calorimeter, one option is a highly granular sampling calorimeter with steel as absorber and scintillator layers as active material. High granularity is obtained by segmenting the scintillator into small tiles individually read out via silicon photo-multipliers (SiPM). A prototype has been built, consisting of thirty-eight sensitive layers, segmented into about eight thousand channels. In 2007 the prototype was exposed to positrons and hadrons using the CERN SPS beam, covering a wide range of beam energies and incidence angles. The challenge of cell equalization and calibration of such a large number of channels is best validated using electromagnetic processes. The response of the prototype steel-scintillator calorimeter, including linearity and uniformity, to electrons is investigated and described. A Layer Correlation technique for pion energy calibration at the 2004 ATLAS Combined Beam Test (1012.4305) E. Abat, J.M. Abdallah, T.N. Addy, P. Adragna, M. Aharrouche, A. Ahmad, T.P.A. Akesson, M. Aleksa, C. Alexa, K. Anderson, A. Andreazza, F. Anghinolfi, A. Antonaki, G. Arabidze, E. Arik, T. Atkinson, J. Baines, O.K. Baker, D. Banfi, S. Baron, A.J. Barr, R. Beccherle, H.P. Beck, B. Belhorma, P.J. Bell, D. Benchekroun, D.P. Benjamin, K. Benslama, E. Bergeaas Kuutmann, J. Bernabeu, H. Bertelsen, S. Binet, C. Biscarat, V. Boldea, V.G. Bondarenko, M. Boonekamp, M. Bosman, C. Bourdarios, Z. Broklova, D. Burckhart Chromek, V. Bychkov, J. Callahan, D. Calvet, M. Canneri, M. Capeáns Garrido, M. Caprini, L. Cardiel Sas, T. Carli, L. Carminati, J. Carvalho, M. Cascella, M.V. Castillo, A. Catinaccio, D. Cauz, D. Cavalli, M. Cavalli Sforza, V. Cavasinni, S.A. Cetin, H. Chen, R. Cherkaoui, L. Chevalier, F. Chevallier, S. Chouridou, M. Ciobotaru, M. Citterio, A. Clark, B. Cleland, M. Cobal, E. Cogneras, P. Conde Muino, M. Consonni, S. Constantinescu, T. Cornelissen, S. Correard, A. Corso Radu, G. Costa, M.J. Costa, D. Costanzo, S. Cuneo, P. Cwetanski, D. Da Silva, M. Dam, M. Dameri, H.O. Danielsson, D. Dannheim, G. Darbo, T. Davidek, K. De, P.O. Defay, B. Dekhissi, J. Del Peso, T. Del Prete, M. Delmastro, F. Derue, L. Di Ciaccio, B. Di Girolamo, S. Dita, F. 
Dittus, F. Djama, T. Djobava, D. Dobos, M. Dobson, B.A. Dolgoshein, A. Dotti, G. Drake, Z. Drasal, N. Dressnandt, C. Driouchi, J. Drohan, W.L. Ebenstein, P. Eerola, I. Efthymiopoulos, K. Egorov, T.F. Eifert, K. Einsweiler, M. El Kacimi, M. Elsing, D. Emelyanov, C. Escobar, A.I. Etienvre, A. Fabich, K. Facius, A.I. Fakhr-Edine, M. Fanti, A. Farbin, P. Farthouat, D. Fassouliotis, L. Fayard, R. Febbraro, O.L. Fedin, A. Fenyuk, D. Fergusson, P. Ferrari, R. Ferrari, B.C. Ferreira, A. Ferrer, D. Ferrere, G. Filippini, T. Flick, D. Fournier, P. Francavilla, D. Francis, R. Froeschl, D. Froidevaux, E. Fullana, S. Gadomski, G. Gagliardi, P. Gagnon, M. Gallas, B.J. Gallop, S. Gameiro, K.K. Gan, R. Garcia, C. Garcia, I.L. Gavrilenko, C. Gemme, P. Gerlach, N. Ghodbane, V. Giakoumopoulou, V. Giangiobbe, N. Giokaris, G. Glonti, T. Goettfert, T. Golling, N. Gollub, A. Gomes, M.D. Gomez, S. Gonzalez-Sevilla, M.J. Goodrick, G. Gorfine, B. Gorini, D. Goujdami, K-J. Grahn, P. Grenier, N. Grigalashvili, Y. Grishkevich, J. Grosse-Knetter, M. Gruwe, C. Guicheney, A. Gupta, C. Haeberli, R. Haertel, Z. Hajduk, H. Hakobyan, M. Hance, J.D. Hansen, P.H. Hansen, K. Hara, A. Harvey Jr., R.J. Hawkings, F.E.W. Heinemann, A. Henriques Correia, T. Henss, L. Hervas, E. Higon, J.C. Hill, J. Hoffman, J.Y. Hostachy, I. Hruska, F. Hubaut, F. Huegging, W. Hulsbergen, M. Hurwitz, L. Iconomidou-Fayard, E. Jansen, I. Jen-La Plante, P.D.C. Johansson, K. Jon-And, M. Joos, S. Jorgensen, J. Joseph, A. Kaczmarska, M. Kado, A. Karyukhin, M. Kataoka, F. Kayumov, A. Kazarov, P.T. Keener, G.D. Kekelidze, N. Kerschen, S. Kersten, A. Khomich, G. Khoriauli, E. Khramov, A. Khristachev, J. Khubua, T.H. Kittelmann, R. Klingenberg, E.B. Klinkby, P. Kodys, T. Koffas, S. Kolos, S.P. Konovalov, N. Konstantinidis, S. Kopikov, I. Korolkov, V. Kostyukhin, S. Kovalenko, T.Z. Kowalski, K. Krüger, V. Kramarenko, L.G. Kudin, Y. Kulchitsky, C. Lacasta, R. Lafaye, B. Laforge, W. Lampl, F. Lanni, S. Laplace, T. Lari, A-C. Le Bihan, M. Lechowski, F. Ledroit-Guillon, G. Lehmann, R. Leitner, D. Lelas, C.G. Lester, Z. Liang, P. Lichard, W. Liebig, A. Lipniacka, M. Lokajicek, L. Louchard, K.F. Lourerio, A. Lucotte, F. Luehring, B. Lund-Jensen, B. Lundberg, H. Ma, R. Mackeprang, A. Maio, V.P. Maleev, F. Malek, L. Mandelli, J. Maneira, M. Mangin-Brinet, A. Manousakis, L. Mapelli, C. Marques, S.Marti i Garcia, F. Martin, M. Mathes, M. Mazzanti, K.W. McFarlane, R. McPherson, G. Mchedlidze, S. Mehlhase, C. Meirosu, Z. Meng, C. Meroni, V. Mialkovski, B. Mikulec, D. Milstead, I. Minashvili, B. Mindur, V.A. Mitsou, S. Moed, E. Monnier, G. Moorhead, P. Morettini, S.V. Morozov, M. Mosidze, S.V. Mouraviev, E.W.J. Moyse, A. Munar, A. Myagkov, A.V. Nadtochi, K. Nakamura, P. Nechaeva, A. Negri, S. Nemecek, M. Nessi, S.Y. Nesterov, F.M. Newcomer, I. Nikitine, K. Nikolaev, I. Nikolic-Audit, H. Ogren, S.H. Oh, S.B. Oleshko, J. Olszowska, A. Onofre, C. Padilla Aranda, S. Paganis, D. Pallin, D. Pantea, V. Paolone, F. Parodi, J. Parsons, S. Parzhitskiy, E. Pasqualucci, S.M. Passmored, J. Pater, S. Patrichev, M. Peez, V. Perez Reale, L. Perini, V.D. Peshekhonov, J. Petersen, T.C. Petersen, R. Petti, P.W. Phillips, J. Pina, B. Pinto, F. Podlyski, L. Poggioli, A. Poppleton, J. Poveda, P. Pralavorio, L. Pribyl, M.J. Price, D. Prieur, C. Puigdengoles, P. Puzo, O. Røhne, F. Ragusa, S. Rajagopalan, K. Reeves, I. Reisinger, C. Rembser, P.A.Bruckman.de. Renstrom, P. Reznicek, M. Ridel, P. Risso, I. Riu, D. Robinson, C. Roda, S. Roe, O. Rohne, A. Romaniouk, D. Rousseau, A. Rozanov, A. Ruiz, N. 
Rusakovich, D. Rust, Y.F. Ryabov, V. Ryjov, O. Salto, B. Salvachua, A. Salzburger, H. Sandaker, C. Santamarina Rios, L. Santi, C. Santoni, J.G. Saraiva, F. Sarri, G. Sauvage, L.P. Says, M. Schaefer, V.A. Schegelsky, C. Schiavi, J. Schieck, G. Schlager, J. Schlereth, C. Schmitt, J. Schultes, P. Schwemling, J. Schwindling, J.M. Seixas, D.M. Seliverstov, L. Serin, A. Sfyrla, N. Shalanda, C. Shaw, T. Shin, A. Shmeleva, J. Silva, S. Simion, M. Simonyan, J.E. Sloper, S.Yu. Smirnov, L. Smirnova, C. Solans, A. Solodkov, O. Solovianov, I. Soloviev, V.V. Sosnovtsev, F. Spanó, P. Speckmayer, S. Stancu, R. Stanek, E. Starchenko, A. Straessner, S.I. Suchkov, M. Suk, R. Szczygiel, F. Tarrade, F. Tartarelli, P. Tas, Y. Tayalati, F. Tegenfeldt, R. Teuscher, M. Thioye, V.O. Tikhomirov, C.J.W.P. Timmermans, S. Tisserant, B. Toczek, L. Tremblet, C. Troncon, P. Tsiareshka, M. Tyndel, M. Karagoez Unel, G. Unal, G. Unel, G. Usai, R. Van Berg, A. Valero, S. Valkar, J.A. Valls, W. Vandelli, F. Vannucci, A. Vartapetian, V.I. Vassilakopoulos, L. Vasilyeva, F. Vazeille, F. Vernocchi, Y. Vetter-Cole, I. Vichou, V. Vinogradov, J. Virzi, I. Vivarelli, J.B. de Vivie, M. Volpi, T. Vu Anh, C. Wang, M. Warren, J. Weber, M. Weber, A.R. Weidberg, J. Weingarten, P.S. Wells, P. Werner, S. Wheeler, M. Wiessmann, H. Wilkens, H.H. Williams, I. Wingerter-Seez, Y. Yasu, A. Zaitsev, A. Zenin, T. Zenis, Z. Zenonos, H. Zhang, A. Zhelezko, N. Zhou

May 12, 2011 hep-ex, physics.ins-det

A new method for calibrating the hadron response of a segmented calorimeter is developed and successfully applied to beam test data. It is based on a principal component analysis of energy deposits in the calorimeter layers, exploiting longitudinal shower development information to improve the measured energy resolution. Corrections for invisible hadronic energy and energy lost in dead material in front of and between the calorimeters of the ATLAS experiment were calculated with simulated Geant4 Monte Carlo events and used to reconstruct the energy of pions impinging on the calorimeters during the 2004 Barrel Combined Beam Test at the CERN H8 area. For pion beams with energies between 20 GeV and 180 GeV, the particle energy is reconstructed within 3% and the energy resolution is improved by between 11% and 25% compared to the resolution at the electromagnetic scale.

Construction and Commissioning of the CALICE Analog Hadron Calorimeter Prototype (1003.2662)

C. Adloff, Y. Karyotakis, J. Repond, A. Brandt, H. Brown, K. De, C. Medina, J. Smith, J. Li, M. Sosebee, A. White, J. Yu, T. Buanes, G. Eigen, Y. Mikami, O. Miller, N. K. Watson, J. A. Wilson, T. Goto, G. Mavromanolakis, M. A. Thomson, D. R. Ward, W. Yan, D. Benchekroun, A. Hoummada, Y. Khoulaki, M. Oreglia, M. Benyamna, C. Cârloganu, P. Gay, J. Ha, G. C. Blazey, D. Chakraborty, A. Dyshkant, K. Francis, D. Hedin, G. Lima, V. Zutshi, V. A. Babkin, S. N. Bazylev, Yu. I. Fedotov, V. M. Slepnev, I. A. Tiapkin, S. V. Volgin, J.-Y. Hostachy, L. Morin, N. D'Ascenzo, U. Cornett, D. David, R. Fabbri, G. Falley, N. Feege, K. Gadow, E. Garutti, P. Göttlicher, T. Jung, S. Karstensen, V. Korbel, A.-I. Lucaci-Timoce, B. Lutz, N. Meyer, V. Morgunov, M. Reinecke, S. Schätzel, S. Schmidt, F. Sefkow, P. Smirnov, A. Vargas-Trevino, N. Wattimena, O. Wendt, M. Groll, R.-D. Heuer, S. Richter, J. Samson, A. Kaplan, H.-Ch. Schultz-Coulon, W. Shen, A. Tadday, B. Bilki, E. Norbeck, Y. Onel, E. J. Kim, G. Kim, D-W. Kim, K. Lee, S. C. Lee, K. Kawagoe, Y. Tamura, J. A. Ballin, P.D. Dauncey, A.-M. Magnan, H. Yilmaz, O. Zorba, V.
Bartsch, M. Postranecky, M.Warren, M. Wing, M. Faucci Giannelli, M. G. Green, F. Salvatore, R. Kieffer, I. Laktineh, M.C Fouz, D. S. Bailey, R. J. Barlow, R. J. Thompson, M. Batouritski, O. Dvornikov, Yu. Shulhevich, N. Shumeiko, A. Solin, P. Starovoitov, V. Tchekhovski, A. Terletski, B. Bobchenko, M. Chadeeva, M. Danilov, O. Markin, R. Mizuk, V. Morgunov, E. Novikov, V. Rusinov, E. Tarkovsky, V. Andreev, N. Kirikova, A.Komar, V.Kozlov, P. Smirnov, Y. Soloviev, A. Terkulov, P. Buzhan, B. Dolgoshein, A. Ilyin, V. Kantserov, V. Kaplin, A. Karakash, E. Popova, S. Smirnov, N. Baranova, E. Boos, L. Gladilin, D. Karmanov, M.Korolev, M. Merkin, A. Savin, A.Voronin, A. Topkar, A. Freyk, C. Kiesling, S. Lu, K. Prothmann, K. Seidel, F. Simon, C. Soldner, L. Weuste, B. Bouquet, S. Callier, P. Cornebise, F. Dulucq, J. Fleury, H. Li, G. Martin-Chassard, F. Richard, Ch. de la Taille, R. Poeschl, L. Raux, M. Ruan, N. Seguin-Moreau, F. Wicek, M. Anduze, V. Boudry, J-C. Brient, G.Gaycken, R. Cornat, D. Jeans, P. Mora de Freitas, G. Musat, M. Reinhard, A. Rougé, J-Ch.Vanel, H. Videau, K-H. Park, J. Zacek, J. Cvach, P. Gallus, M. Havranek, M. Janata, J. Kvasnicka, M. Marcisovsky, I. Polak, J.Popule, L. Tomasek, M. Tomasek, P. Ruzicka, P. Sicho, J. Smolik, V. Vrba, J. Zalesak, Yu. Arestov, V.Ammosov, B. Chuiko, V. Gapienko, Y. Gilitski, V.Koreshev, A. Semak, Yu. Sviridov, V. Zaets, B. Belhorma, M. Belmir, A. Baird, R. N. Halsall, S.W. Nam, I. H. Park, J.Yang, Jong-Seo Chai, Jong-Tae Kim, Geun-Bum Kim, Y. Kim, J. Kang, Y. -J.Kwon, Ilgoo Kim, Taeyun Lee, Jaehong Park, Jinho Sung, S. Itoh, K.Kotera, M. Nishiyama, T. Takeshita, S.Weber, C. Zeitnitz March 13, 2010 hep-ex, physics.ins-det An analog hadron calorimeter (AHCAL) prototype of 5.3 nuclear interaction lengths thickness has been constructed by members of the CALICE Collaboration. The AHCAL prototype consists of a 38-layer sandwich structure of steel plates and highly-segmented scintillator tiles that are read out by wavelength-shifting fibers coupled to SiPMs. The signal is amplified and shaped with a custom-designed ASIC. A calibration/monitoring system based on LED light was developed to monitor the SiPM gain and to measure the full SiPM response curve in order to correct for non-linearity. Ultimately, the physics goals are the study of hadron shower shapes and testing the concept of particle flow. The technical goal consists of measuring the performance and reliability of 7608 SiPMs. The AHCAL was commissioned in test beams at DESY and CERN. The entire prototype was completed in 2007 and recorded hadron showers, electron showers and muons at different energies and incident angles in test beams at CERN and Fermilab.
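The LED system described above measures the full SiPM response curve because a SiPM with a finite number of pixels saturates for large light signals. The sketch below illustrates the underlying idea with the commonly used single-exponential saturation model; the actual CALICE correction is derived from the measured response curves, and the function names and the pixel count used here are illustrative assumptions, not taken from the paper.

import math

def fired_pixels(n_pe, n_pixels):
    # Saturation model: a finite pixel count makes the response flatten out,
    # N_fired = N_pixels * (1 - exp(-N_pe / N_pixels)).
    return n_pixels * (1.0 - math.exp(-n_pe / n_pixels))

def linearize(n_fired, n_pixels):
    # Invert the model to recover the incident light amplitude from the
    # measured (saturated) signal.
    if n_fired >= n_pixels:
        raise ValueError("signal at or above the pixel count cannot be inverted")
    return -n_pixels * math.log(1.0 - n_fired / n_pixels)

# Illustrative 1156-pixel SiPM: 800 photoelectrons fire only ~577 pixels,
# and the inversion recovers the original amplitude.
measured = fired_pixels(800.0, 1156)
print(round(measured, 1), round(linearize(measured, 1156), 1))  # ~577.4 800.0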
Elliptic equations with transmission and Wentzell boundary conditions and an application to steady water waves in the presence of wind

Hung Le, Department of Mathematics, University of Missouri, Columbia, MO 65211, USA

Received: June 2017; Revised: January 2018; Published: April 2018

Fund Project: The first author was supported by the NNSF of China (No. 11501021), the second author was supported by the NNSF of China (No. 11301166).

In this paper, we present results about the existence and uniqueness of solutions of elliptic equations with transmission and Wentzell boundary conditions. We provide Schauder estimates and existence results in Hölder spaces. As an application, we develop an existence theory for small-amplitude two-dimensional traveling waves in an air-water system with surface tension. The water region is assumed to be irrotational and of finite depth, and we permit a general distribution of vorticity in the atmosphere.

Keywords: Elliptic equation, Wentzell condition, transmission condition, wind wave, surface tension.

Mathematics Subject Classification: 35J25, 35B45, 76B03, 76B45.

Citation: Hung Le. Elliptic equations with transmission and Wentzell boundary conditions and an application to steady water waves in the presence of wind. Discrete & Continuous Dynamical Systems, 2018, 38 (7): 3357-3385. doi: 10.3934/dcds.2018144
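For orientation, the two boundary structures named in the abstract can be illustrated by a prototypical linear model problem; the signs, coefficients, and geometry below are generic textbook choices and not necessarily those of the paper:

$$\begin{aligned} \Delta u &= f \quad &&\text{in } \Omega_1 \cup \Omega_2,\\ u|_{\Omega_1} = u|_{\Omega_2}, \qquad a_1 \partial_\nu u|_{\Omega_1} - a_2 \partial_\nu u|_{\Omega_2} &= g \quad &&\text{on the interface } \Gamma \text{ (transmission conditions)},\\ -\alpha \Delta_{\partial\Omega} u + \partial_\nu u + \beta u &= h \quad &&\text{on } \partial\Omega \text{ (Wentzell condition)}, \end{aligned}$$

where $\partial_\nu$ is the outward conormal derivative and $\Delta_{\partial\Omega}$ is the Laplace-Beltrami operator on the boundary. The defining feature of a Wentzell condition is that second-order (tangential) derivatives of $u$ appear in the boundary condition itself; in the water-wave application, roughly speaking, the air-water boundary plays the role of the interface, and the surface-tension (curvature) term supplies the second-order boundary operator.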
Figure 1. The air-water system
Welfare Impacts of Afghan Trade on the Pakistani Provinces of Balochistan and Khyber Pakhtunkhwa

Saad Shabbir, Sustainable Development Policy Institute, PK (Researcher in the Economic Growth Unit at the Sustainable Development Policy Institute)
Vaqar Ahmed (Deputy Executive Director and Head of the Economic Growth Unit at the Sustainable Development Policy Institute)

Amidst all the concerns of uncertainty over the future of Afghanistan, recent developments have given hope to the world, specifically South and Central Asia. A coalition government has now been established following the deadlock that came after the May 2014 elections. President Ashraf Ghani and Chief Executive Officer Abdullah have already signed a Bilateral Security Agreement (BSA) between Kabul and Washington, according to which 9,800 troops will remain in Afghanistan beyond 2015. Furthermore, the government of Afghanistan seeks the support of the neighbouring countries to keep peace in the region. Despite all these concrete steps, there has been an increased number of terror attacks and drone operations, which has put a big question mark over the stability of the country. How Afghanistan tackles these rising problems will be crucial in defining its future, the trickle-down effects of which will determine the stability of the Afghan-Pakistan region. Concerns about what the future holds for this region, with its long history of violence and insurgency, are currently being voiced at many levels of society, including on talk shows, at government meetings, within NGOs, and at business forums. Unlike most of the studies done on the Afghan-Pakistan region, which focus on the security of the region, this article focuses on the welfare and economic impacts of post-2014 Afghanistan on the neighbouring Pakistani provinces of Balochistan and Khyber Pakhtunkhwa, at the household level.

Keywords: Welfare, NATO, ISAF, 2014

How to Cite: Shabbir, S. and Ahmed, V., 2015. Welfare Impacts of Afghan Trade on the Pakistani Provinces of Balochistan and Khyber Pakhtunkhwa. Stability: International Journal of Security and Development, 4(1), p.Art. 6. DOI: http://doi.org/10.5334/sta.et

Published on 17 Feb 2015

The International Security Assistance Force (ISAF) had already pulled out 70 per cent of its forces by the end of May 2014, with only about 30,000 troops still remaining in Afghanistan. Due to recent developments, 9,800 foreign troops will remain in the country in 2015 to continue combat operations against insurgents, with the help of the Afghan National Army. The Russians have once again shown an interest in Afghanistan and have identified numerous investment projects. They are concerned about the security of the region, which could have direct implications for Russia (Stepanova 2013). The European Union, China, Iran, and India have also shown interest in the reconstruction and smooth transition of post-2014 Afghanistan. The decade-long war in Afghanistan and relations between Afghanistan and the above countries have implications for Pakistan, the neighbour most affected by turmoil in Afghanistan. Bilateral trade between Afghanistan and Pakistan has seen an increasing trend since the launch of the US war on terror in Afghanistan. The volume of bilateral trade between the two countries has increased from US$ 1.1 billion in 2005–06 to US$ 2.4 billion in 2011–12.
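As a quick sanity check on these headline figures, the rise from US$ 1.1 billion in 2005–06 to US$ 2.4 billion in 2011–12 corresponds to a compound annual growth rate of roughly 14 per cent. A minimal calculation (the numbers come from the text above; the helper function is ours):

def cagr(initial, final, years):
    # Compound annual growth rate implied by a start value, an end value,
    # and the number of years between them.
    return (final / initial) ** (1.0 / years) - 1.0

# Bilateral Afghanistan-Pakistan trade, US$ billion, 2005-06 to 2011-12
print(round(cagr(1.1, 2.4, 6) * 100, 1))  # ~13.9 per cent per year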
Exports from Pakistan to Afghanistan have increased from US$ 1,063.4 million to US$ 2,449 million during the same period. Figure 1 indicates a balance of trade heavily in favour of Pakistan.

Figure 1. Trade between Pakistan and Afghanistan from 2005 through 2012. Source: Pakistan Bureau of Statistics and Trade Development Authority of Pakistan (2013).

There has also been substantial change in the composition of Pakistan's exports to Afghanistan. Between 2006 and 2011 there was a decline (or stagnancy in some cases) in the export of milk products, animal and vegetable oil, tableware, and household furniture. However, the export of cement, crude oil, wheat, and rice grew several fold. This is exhibited in Table 1. The growth in exports of these items can be attributed to a rise in construction activity and reverse migration during this period.

Table 1. Pakistan's Key Exports to Afghanistan, 2006-2011 (amounts in thousands of US$). Source: State Bank of Pakistan 2013.

Item                                   2006     2007     2008     2009     2010     2011
Milk and cream & sugar                18701    36024    73369    47167    20712    15461
Animal and vegetable oil             105726    87989   111196   102265    95466   112973
Cement                                92427    77218   120524   162025   181770   222571
Crude oil                            242333   156473   433484   311431   510019   862722
Wheat and rice                           59      565     1330    14868    16875    78018
Tableware and household furniture     69463    19211    36328    21790    17009    18789

During the period NATO forces were in Afghanistan, employment in the trade, transport, warehousing, and communication (TTWC) sectors increased both in Afghanistan and in the neighbouring Pakistani provinces. According to the authors' calculations based on the Household Income and Expenditure Survey rounds of 2004–05 and 2010–11, growth in the TTWC sectors was about 290 per cent in Khyber Pakhtunkhwa and 450 per cent in Balochistan. This study focuses on three specific changes:

1. The increase in the number of households associated with TTWC sectors in the seven years from 2004 to 2011.
2. The number of TTWC sector households belonging to the lowest income quintiles defined in the Household Income and Expenditure Survey of Pakistan.
3. The growth in TTWC sector incomes compared to non-trade sectors in Khyber Pakhtunkhwa and Balochistan provinces (Pakistan).

In the post-NATO exit milieu, any disturbance that deteriorates political relations between Afghanistan and Pakistan will result in a reduction of formal bilateral trade, reduced commercial transit, a possible increase in IDPs and refugees flowing from Afghanistan to Pakistan, and a rise in the terror threat to the Pakistani population neighbouring Afghanistan. The worst affected in this scenario would be households already in the lowest income quintiles in the TTWC sectors. As we will show later, many have come out of chronic and transient poverty as a result of increased trade with Afghanistan. However, in a scenario where NATO cargo comes to an end and formal trade declines, this population is poised to slip below the poverty line. This would have a direct impact on security. It is particularly dangerous to have a population segment below the poverty line in a region that promotes and espouses militancy through madrassas1 (Hussain 2007) and has a history of militants challenging the authority of the state. A reduction in employment levels in this region will imply more youth resorting to crime and other social evils. The remainder of the paper includes a brief literature review, a look at trade and regional cooperation efforts, methodology and data, results and tables, and a conclusion.
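To make the growth figures reported in this section concrete, the short sketch below computes the fold-change of the fast-growing export lines in Table 1 and restates the TTWC growth percentages as multiples of the 2004–05 level (all input values are taken from the table and text above; the variable names are ours):

# Export values in thousands of US$, first (2006) and last (2011) columns of Table 1
exports = {
    "Cement": (92427, 222571),
    "Crude oil": (242333, 862722),
    "Wheat and rice": (59, 78018),
}
for item, (start, end) in exports.items():
    # e.g. cement ~2.4-fold, crude oil ~3.6-fold
    print(f"{item}: grew {end / start:.1f}-fold")

# Growth of X per cent means the sector reached (1 + X/100) times its old size.
for province, growth_pct in [("Khyber Pakhtunkhwa", 290), ("Balochistan", 450)]:
    print(f"{province}: TTWC households about {1 + growth_pct / 100:.1f}x the 2004-05 level")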
Brief Literature Review

There is hardly any research that looks at the welfare implications of trade between Afghanistan and Pakistan. The studies conducted so far by the commerce ministries of both countries mainly discuss historical trends and barriers to trade. However, the private sector has shown active interest in exploring new trade possibilities between Pakistan and Afghanistan. A recent study by Hamid & Hayat (2012) explains that Afghanistan is one of Pakistan's major trading partners, with a long period of undocumented trade until the fall of the Taliban regime in 2001. Between 2002 and 2010 there was a seven-fold increase in Pakistan's exports to Afghanistan, and by 2010 Afghanistan was Pakistan's third largest export market, accounting for 7.9 per cent of the latter's total exports. In a briefing paper for the Planning Commission of Pakistan, Ahmed (2010) notes that Afghanistan had signed transit trade agreements not only with Pakistan but also with Uzbekistan, Iran, Turkmenistan, and Tajikistan. However, Pakistan holds the largest share (34 per cent) of transit imports and exports.

Other studies conducted by the World Bank, USAID, and PITAD focus on issues that obstruct trade flows and suggest policies to overcome these obstructions. The main issues highlighted are a lack of reliable road or rail infrastructure, transportation bottlenecks, procedural inefficiencies, Pakistani businessmen's lack of awareness of the market potential in Afghanistan, and, last but not least, border tensions. There has been no research so far that studies the impact of growth in the trade, transport, warehousing, and communication (TTWC) sectors on the economic welfare of local populations in the Pakistani provinces that border Afghanistan.

However, studies show that Pakistan has suffered economically and socially as a result of the three million Afghan refugees that have entered Pakistan over time (Rais 2009). Up until 2013, estimates show that Pakistan had lost over US$ 90 billion in the 'war against terror.' Moreover, civilian casualties were reported to be 51,000, with 11,000 in the year 2009 alone (Sultan et al. 2013). With so much damage already inflicted upon the two countries, it has been difficult to move forward diplomatically. Cross-border raids and assassinations of senior officials have tainted any efforts to pursue peace talks (Hussain 2011). With a four-decade-long history of mistrust between the two neighbouring Muslim countries, Pakistan looked to the United States to ensure a peaceful transition. That too now seems to be a lost cause after the 2012 Chicago Summit, where President Obama failed to recognize the harms suffered by Pakistan during the decade-long war in Afghanistan. According to Lodhi (2012), the Summit talks fortified the impression that the US is more interested in an exit plan than in lasting peace in the region. Furthermore, much to Pakistan's dismay, the long-pending Bilateral Security Agreement (BSA) was signed between Kabul and Washington in September 2014.

In recent months there has been a drastic increase in terror attacks, which has certainly caused alarm in Pakistan and the rest of the region. The situation in Afghanistan is evolving quickly. The Afghan National Army (ANA) is unprepared to combat the insurgents alone, which leads to the question: is Afghanistan headed towards another civil war? If so, the increased influx of refugees to Pakistan poses a great threat to the economy of this region (Iqbal 2011).
Repeated efforts have been made to initiate peace talks between Kabul and the Taliban to ensure a smooth transition after the NATO drawdown, but there is little hope that anything conclusive will be reached (Xiangyu et al. 2013). We believe that the Pakistani provinces studied in our analysis - Balochistan and Khyber Pakhtunkhwa - are prone to trade-related shocks in the event of any security disturbance in Afghanistan or closure of transit trade. It is important to bridge the gap in the literature by assessing how dependent these populations are on these important sectors, which in turn are reliant upon future political stability in Afghanistan.

Trade and Regional Cooperation

Afghanistan is a landlocked country located between central Asia and Europe. It has an area of 650,000 square kilometres and borders Pakistan, Tajikistan, Iran, Turkmenistan, Uzbekistan, and China. According to the World Population Review (2014), the population of Afghanistan by the end of 2013 was estimated to be 30 million. Despite the fact that only 12 per cent of its land is arable, more than 85 per cent of the Afghan population is dependent on agriculture (Government of Afghanistan 2010). The country's major crops include wheat, rice, barley, maize, fruits and nuts, and cotton; livestock is also an integral part of the economy. Afghanistan is a mineral-rich country with untapped natural resource deposits estimated to be worth US$ 1 trillion, according to senior US government officials (Risen 2010). Pakistan is Afghanistan's single largest trading partner, followed by Iran, China, and India. Recently, imports from Kazakhstan have increased substantially, rising from around half a billion USD in 2008 to 3.2 billion USD (Parto et al. 2012).

Pakistan, situated to the east of Afghanistan, has an area of 796,000 square kilometres. It borders Iran, Afghanistan, China, India, and the Arabian Sea. According to the UNDP (2013), Pakistan is the sixth most populous country in the world, with a population exceeding 180 million people. Pakistan, once a predominantly agricultural economy, is now semi-industrialized, with growth centres located along the Indus River. However, the principal economic sector continues to be agriculture, which employs the largest share of the labour force. Major crops include wheat, cotton, rice, vegetables, and fruit. The largest manufacturing sector in Pakistan is the textile industry, which has also generated huge employment opportunities for skilled and unskilled labour. Pakistan's top five trading partners are the United States, China, the United Arab Emirates, Afghanistan, and the United Kingdom (Government of Pakistan 2012).

Afghanistan Transit Trade Agreement (ATTA)

Since the inception of the country of Pakistan there have been many bilateral and multilateral trade agreements signed with neighbouring Afghanistan to facilitate and promote direct trade and transit trade. The earliest, the Afghanistan Transit Trade Agreement (ATTA), was signed in 1965 between the governments of Pakistan and Afghanistan (Government of Pakistan 1965). The first article of this agreement guaranteed the freedom of transit trade to both countries. 'Traffic in transit trade' was also defined under this agreement. The transit routes were identified as Peshawar-Torkham and Chaman-Spin Boldak, with a provision to add future trade corridors. Both countries also agreed upon the development of the Kabul-Torkham-Peshawar trade route with a railway line extension.
It was also agreed that liaison officers would be appointed in each country for efficient communication and planning to further facilitate trade. Officials from the two countries would meet on an annual basis to review the working of the agreement (Swan et al. 1993).

South Asian Association for Regional Cooperation (SAARC)

Pakistan, along with Bangladesh, Bhutan, India, the Maldives, Nepal, and Sri Lanka, signed the charter of the South Asian Association for Regional Cooperation (SAARC) in 1985. The main objectives of this charter, drafted in strict adherence to the principles of the United Nations Charter, were to promote the welfare of the people of South Asia, accelerate economic growth, enhance mutual trust amongst the member nations, and extend assistance in social and economic fields. It was agreed that the principles of the association would be consistent with already existing bilateral and multilateral agreements. Furthermore, the association was not to be mistaken for a substitute for bilateral or multilateral cooperation but seen as a complement to them. Annual meetings of the member states were also envisaged by the charter. In 2004, at the 12th SAARC summit in Pakistan, the South Asian Free Trade Area (SAFTA) agreement (SAARC 2004) was signed by all member states. This agreement created a free trade area for the entire region, home to 1.6 billion people at the time. The aim was to reduce customs duties to zero by the year 2016. A working plan for SAFTA was also discussed, according to which the developing countries - i.e. India, Pakistan, and Sri Lanka - were to bring their duties down to 20 per cent within two years. The least developed countries - i.e. Bhutan, Bangladesh, Nepal, and the Maldives - had an additional three years to do the same. In 2005, before the implementation of SAFTA, Afghanistan applied to join SAARC. In 2007, after much deliberation among the SAARC member states, Afghanistan was welcomed as the eighth member. SAFTA was ratified in 2009 by India and Pakistan and in May 2011 by Afghanistan.

Pak-Afghan Joint Economic Commission

In 1992 Afghanistan and Pakistan established the Pak-Afghan Joint Economic Commission (JEC) to promote trade, deliberate on issues related to bilateral and transit trade, and strengthen their economic relationship. The JEC meetings have been held alternately in Kabul and Islamabad. The 8th meeting of the JEC was held in January 2012 in Islamabad. The delegations from both countries were led by their respective finance ministers. During the meeting the delegates discussed the implementation status of decisions taken at the previous JEC meeting and further developed cooperation in the areas of economics and trade. Problems faced by Afghan and Pakistani traders, put forth at the previous meeting, were duly addressed and solutions were implemented. Amongst those solutions were the Web Based One Customs System (WeBOC) (used to exchange information to facilitate transit trade), insurance guarantees by the customs security,2 and permission for non-containerized carriage of vehicles to be used by Afghan traders. With the introduction of the new transit system at Torkham, Pakistani traders would no longer be required to acquire transit permits from the Afghan authorities. The ban on timber imports from Afghanistan was also lifted (MoC 2012). The 9th JEC meeting was held in Kabul from 22-24 February 2014. The agenda points were categorized by theme, including Trade & Commerce, Reconstruction Activities in Afghanistan, Water & Energy, and Industry & Agriculture.
The Afghan authorities simplified the visa process for Pakistanis working in Afghanistan to address issues raised by Pakistan at the previous JEC meeting. Despite financial constraints, Pakistan increased its investment in development projects from US$ 385 million to US$ 500 million. An increase in the number of students to be admitted to an already implemented scholarship program (from 2,000 to 3,000) was in the approval process. Moreover, the disciplines covered by the scholarship program would now also include the applied sciences in addition to the social and natural sciences. The final version of the Electronic Data Interchange (EDI) was approved to be launched in March 2014. After a re-evaluation of the cost of the Torkham-Jalalabad Additional Carriage Way (ACW) project, it was agreed to resume work by June 2014 and finish it before the end of 2015. For the motorway project between Peshawar and Kabul, a team of consultants was to share its report by March 2014. Before the next JEC meeting an additional transit-trade corridor through Bannu-Ghulam Khan-Khost could also be examined, whereas the Sherkhan-Ninjpayan route, originally identified as the CAREC corridor, would potentially be used as a road link connecting Pakistan, Afghanistan, and Tajikistan. The railway link connecting Peshawar and Jalalabad was approved by the government of Pakistan, with construction set to begin in June 2014. It was mutually agreed that the progress of the TAPI project was on schedule. However, immediate resolutions were demanded on CASA-1000 and the Joint Hydro Power Project on the Kunar River so that feasibility reports could be presented to the Board of Directors of the World Bank for approval in March 2014 (MoC 2014).

Central Asia Regional Economic Cooperation (CAREC)

In 1997 the Central Asia Regional Economic Cooperation (CAREC) Program was initiated with the aim of increasing trade and cooperation between the People's Republic of China, Kazakhstan, the Kyrgyz Republic, and Uzbekistan. The program was principally supported by the Asian Development Bank (ADB). The European Bank for Reconstruction and Development, the International Monetary Fund, the Islamic Development Bank, the United Nations Development Program, and the World Bank are also partners in the program. The objective was to integrate the economies of these Eurasian countries to facilitate rapid growth. Under the theme of 'Good Neighbours, Good Partners, Good Prospects', the CAREC program aims to reduce poverty and reach higher levels of economic growth as member countries build on their neighbours' strengths rather than working in isolation. In 2005 Afghanistan became a member of CAREC; five years later, in 2010, Pakistan also joined. To date, CAREC has ten member countries, namely Afghanistan, Azerbaijan, the People's Republic of China, Kazakhstan, the Kyrgyz Republic, Mongolia, Pakistan, Tajikistan, Turkmenistan, and Uzbekistan. A strategic framework for CAREC-2020 - endorsed in 2011 - was an enhancement of the earlier Comprehensive Action Plan (CAP), which defined priorities of cooperation in the areas of transport, trade policy, trade facilitation, and energy. The following year the implementation of the CAREC-2020 strategic framework was endorsed under the Wuhan Action Plan. Priority projects in the transportation sector in Afghanistan under CAREC-2020 include the construction of the Shirkhan Bandar-Kunduz-Kholam-Naibabad-Andkhoy-Herat railway line and the Bala Murgab-Leman road.
In Pakistan, prioritized projects include the reconstruction of a number of expressways and the construction of a new motorway connecting Karachi, Hub Dureji, and Sehwan (CAREC 2012). There are a total of 108 investment projects under the Transport and Trade Facilitation Strategy Implementation Action Plan, projected to cost US$ 38.8 billion. The rail link connecting Afghanistan to Uzbekistan was set to be completed by the end of 2014. Future planned links would connect Pakistan, Tajikistan, and Turkmenistan. The Wuhan Action Plan calls for energy cooperation in the region, exploring interconnection options along the Central Asia-South Asia energy corridor. The plan also calls for the design of a regulatory framework for energy trade, as well as a push to shift to renewable energy.

Central Asia South Asia (CASA-1000)

The Central Asia South Asia Regional Electricity Market (CASAREM), popularly known as CASA-1000, is an agreement between Afghanistan, the Kyrgyz Republic, Pakistan, and Tajikistan. In 2004 a report submitted to the World Bank (Markandya et al. 2004) noted that South Asia (Afghanistan and Pakistan in particular) contained the largest markets for electricity exported from Central Asia (the Kyrgyz Republic and Tajikistan). In 2008-09, before CASA-1000 was inked, an initial feasibility report was prepared, financed by the Asian Development Bank. The report analysed the practicality of exporting 1,000 MW of electricity from Central Asia to South Asia. The following year, in 2010, a techno-economic feasibility report was added to the initial report to take environmental and social costs into consideration. After much deliberation by the Inter-Governmental Council (IGC), the Islamic Development Bank, the World Bank, and the four concerned countries, the report was finalised in February 2011. The updated report, prepared by SNC Lavalin International (2011), predicts an annual summer surplus of 3.75 TWh in Tajikistan and 2.15 TWh in Kyrgyzstan by the year 2016. In September 2011 a memorandum of understanding was signed in Bishkek, Kyrgyz Republic, binding the Central Asian states of the Kyrgyz Republic and Tajikistan to supply surplus energy to Afghanistan and Pakistan. The proposed route to transport the surplus energy would run from Datka (Kyrgyzstan) to Peshawar (Pakistan), passing through Khoudjand and Sangtuda (Tajikistan), then the Salang Pass and Kabul (Afghanistan). The project is estimated to cost roughly US$ 950 million. The long-term objective is to supply clean hydropower from Central Asian countries to South Asian countries during the summer, a period during which the latter face chronic electricity shortages. In January 2014 the four member states met in Almaty to discuss project financing at length. Tajikistan and Kyrgyzstan briefed the Joint Working Group (JWG) on progress made in reducing the financing gap; the delegation from Pakistan stressed its desire to expedite this process. At the last meeting, held in Washington, the National Transmission Companies (NTCs) of the respective countries committed to resolving Power Purchase Agreement (PPA) issues by the end of March 2014; tariff structures were eventually agreed upon in December 2014 and financing has been approved.

Afghanistan Pakistan Transit Trade Agreement

In October 2010 the commerce ministers of Afghanistan and Pakistan, Dr Anwar ul Haq and Makhdoom Amin Fahim respectively, signed the Afghanistan Pakistan Transit and Trade Agreement (APTTA) in Kabul (MoC 2010). The agreement came into force on 12 June 2011.
Under this agreement Afghan trucks were allowed to transport Afghan products destined for the (immense) markets of India and China (as well as the rest of the world) to the Karachi seaports of Port Qasim and Gwadar. APTTA replaced the earlier ATTA agreement (signed in 1965) following repeated concerns raised by Pakistan over its misuse. According to APTTA, the sole entry point for Afghan trade cargo into Pakistan was Port Qasim in Karachi. The exit points were Torkham (in the North West Frontier Province [NWFP], since renamed Khyber Pakhtunkhwa [KP]) and Chaman (Balochistan province). APTTA was intended to make it easier and cheaper for Afghan goods to reach foreign markets. The agreement allowed Afghan goods access to two of the world's largest markets, India and China, with populations of 1.1 billion and 1.2 billion respectively. APTTA is expected to increase Afghanistan's exports and reduce delays at borders, thereby making Afghan products more competitive, attractive, and affordable abroad. Customs practices in place at the borders will be modernized and simplified. Additionally, Pakistani exports will have easier access to Central Asian and European markets. As a consequence, the service sectors of both countries should experience a boom, increasing employment levels. APTTA guarantees freedom of transit between Afghanistan and Pakistan. Along the approved corridors, Afghan trucks are allowed to transit into Pakistan. In addition, Afghans have the freedom to choose their type of vehicle as long as they remain on specified routes within specified times. Furthermore, port and rail fees will be the same for national and international vehicles, without discrimination. Freight forwarders and transport operators are now at liberty to establish businesses in Pakistan to facilitate transit trade activities. Drivers' licenses and vehicle documents issued in Pakistan or Afghanistan will henceforth be recognized by both countries. Special offices will be set up to respond to questions and concerns from nationals of either country regarding this issue. All duties were annulled for goods in transit. The only fees will be charges incurred when scanning and weighing goods, as well as tolls for bridges, tunnels, roads, and parking. The rates charged for goods in transit will be non-discriminatory and reasonable. A coordination authority known as the Afghanistan Pakistan Transit Trade Coordination Authority (APTTCA), made up of government officials and private sector stakeholders from both countries, will oversee the implementation of the agreement. In the event of any disputes arising, a panel consisting of members from the two countries and a neutral third country will examine the case and implement a solution. The decision will be final and binding. APTTA encourages economic growth for both countries by building stronger commercial relationships between Afghanistan and Pakistan. It also strengthens economic ties and ensures the smooth and efficient movement of goods through both countries and across the region. The agreement makes the South Asian Free Trade Agreement (SAFTA) easier and cheaper to implement; SAFTA, in turn, will lower tariffs for Afghan goods across the region. APTTA will also contribute to greater regional security as Afghanistan and Pakistan work together for mutual economic growth and prosperity.

Pak-Afghan Joint Chamber of Commerce & Industry

The first Board meeting of the Joint Chamber was held in Karachi on 13 March 2012.
During the meeting Zubair Motiwala from Pakistan was unanimously elected as President of the Chamber, while Khan Jan Alkozai was elected as Co-President.

This study employs three methods - two quantitative and one qualitative - to assess the impact of post-2014 Afghanistan on Pakistan, as shown in Figure 2. Firstly, we use the Household Income Expenditure Survey (HIES) from Pakistan for the years 2004-05 and 2010-11 to form a descriptive analysis of the welfare of households associated with TTWC sectors. Secondly, using the same data set, we estimate an econometric relationship between household income and its determinants. Lastly, two focus group discussions with government officials, security analysts, economic think tanks, journalists, producers, and traders form the qualitative part of this study.

Methodological Framework

In order to assess the impact of TTWC sectors on the local population, we use HIES data from Pakistan for the years 2004-05 and 2010-11. We limit our analysis to Khyber Pakhtunkhwa and Balochistan. We compare population numbers, household characteristics, and welfare indicators of respondents associated with the trade, transport, warehousing, and communication (TTWC) sectors in both years. Our descriptive analysis of HIES data specifically addresses the following three questions: (1) Has employment during the period 2004-11 grown faster for the lowest quintiles? (2) Have real incomes in TTWC sectors increased more than those in non-trade sectors in Khyber Pakhtunkhwa and Balochistan? (3) Do households associated with TTWC sectors have higher mean incomes than households that are not associated with TTWC sectors in Khyber Pakhtunkhwa and Balochistan? Two focus group discussions were conducted to gain insight into possible economic and security risks for Pakistan in the post-2014 Afghanistan milieu. The first group - which focused on economic impacts - was composed of representatives from the Ministry of Commerce, economic think tanks that have worked closely on issues related to Afghanistan-Pakistan relations, traders and producers who have been trading actively with Afghanistan, senior journalists, and chamber of commerce members. The second group focused on security risks for Pakistan and was composed of security analysts (including retired army generals), senior journalists, and members of economic think tanks. The findings of these sessions were then processed using the qualitative analysis tool NVivo.

Results and Tables

Quantification of Welfare Impacts

In Table 2 we compare employment levels in TTWC sectors for Khyber Pakhtunkhwa and Balochistan between the years 2004-05 and 2010-11. Other prominent characteristics of these households are also provided. A cursory look at these descriptive statistics reveals that the number of households associated with TTWC sectors grew substantially during this period: by 290 per cent in Khyber Pakhtunkhwa and 460 per cent in Balochistan. In both provinces, monthly per capita consumption by households associated with these sectors doubled. We also note that employment levels in TTWC sectors are very near those of the traditional agriculture and service sectors in these two provinces. Both regions lack industrial infrastructure, and manufacturing activity is therefore less prominent here than in other parts of Pakistan.

Table 2: Salient features of households (HHs) in Khyber Pakhtunkhwa and Balochistan. Source: WDI and authors' own calculations from HIES 2004-05 and 2010-11.
                                                    Khyber Pakhtunkhwa        Balochistan
                                                    2004-05     2010-11     2004-05     2010-11
Households in TTWC sectors                              586       2,287         456       2,557
Sample population dependent on TTWC3                  3,926      15,094       2,873      15,853
Actual number of HHs the sample represents (TTWC)   209,286     737,742     130,286     639,250
Actual population the sample represents (TTWC)      981,550   2,515,700     718,200   1,761,489
Monthly per capita consumption (food)                 1,133       2,731       1,236       2,566
Key sources of occupation (no. of HHs):
  Agriculture, forestry & fishing                       630       2,714         746       4,682
  Social and personal service/education                 455         727         403         890
  Wholesale & retail trade                              435       1,310         300       1,449
  Construction                                          213       1,357         130       1,208
  Land transport                                        151         720         156         864
Mean years of schooling                                 8.6        9.14         8.6        8.76
Infant mortality rate                                    81          72          81          72

We also note in Table 3 that per capita incomes across both provinces neighbouring Afghanistan increased in both nominal and real terms. More importantly, non-trade incomes remained lower than incomes associated with the trade sector.

Table 3: Employment and incomes in Balochistan and Khyber Pakhtunkhwa. Source: Authors' calculations from HIES 2004-05 and 2010-11.

                                                 2004-05       2010-11
HHs in survey dependent on TTWC                    1,042         4,844
Actual HHs dependent on TTWC                     315,757     1,670,344
Mean per capita income in TTWC (nominal)       PKR 5,998    PKR 11,711
Mean per capita income in TTWC (real)          PKR 5,552     PKR 5,685
Mean per capita income in non-TTWC (nominal)   PKR 5,472    PKR 10,824
Mean per capita income in non-TTWC (real)      PKR 5,065     PKR 5,254

Poverty-reducing impact of trade

Growth in TTWC sectors will only benefit the poor if jobs are created for the lowest income quintiles. We see in Tables 4 and 5 that this is precisely the case. The highest increase in employment is seen in the 1st and 2nd income quintiles (the poorest segments). These increased employment levels in the poorest quintiles are seen in both Khyber Pakhtunkhwa and Balochistan. We should also note that these calculations do not accurately represent employment in the informal TTWC sector, which has also grown immensely as a result of the formal sector's growth.

Table 4: Household distribution in the TTWC sector in Khyber Pakhtunkhwa, by income quintile. Source: SDPI calculations from HIES.

Income quintile                  1st     2nd     3rd     4th     5th
2004                              73      31      51      72      92
2011                             516     402     313     164     340
Change in employment (no.)      +443    +371    +262     +92    +248

Table 5: Household distribution in the TTWC sector in Balochistan, by income quintile. Source: SDPI calculations from HIES. (The 2011 row is implied by the 2004 levels and the reported changes.)

Income quintile                  1st     2nd     3rd     4th     5th
2004                              59      71      43       3     107
2011                             611     441     305     348     403
Change in employment (no.)      +552    +370    +262    +345    +296

There is evidence that while the TTWC sector has expanded in both conflict-ridden provinces (disproportionately benefiting poor households), manufacturing activity has contracted substantially. The Asian Development Report of 2010, entitled 'Post-Conflict Needs Assessment', explains that terrorism-related threats have brought several industries to a complete standstill. Furthermore, several manufacturing and processing industries, including mining, are now operating at less than 10 per cent of pre-conflict levels. More worrying is that agriculture and livestock, traditionally a principal source of livelihood for the local population in these two provinces, are now vulnerable to conflict and local political instability (Pasha 2013). Despite several investment exhibitions in various parts of Pakistan, domestic investors are not inclined to invest in agriculture and livestock processing in Khyber Pakhtunkhwa and Balochistan.

Regression results

Our aim in this section is to examine the role of the TTWC sector in overall household-level incomes in Balochistan and Khyber Pakhtunkhwa.
While the log of household income is the dependent variable, our independent variables include household-level characteristics such as education, age, region, access to information, assets owned by the household, the number of persons in the household, and whether the household is associated with TTWC sectors. The regression equation is as follows:

$$\ln Y = \hat{\alpha} + \hat{\beta}_1\, age + \hat{\beta}_2\, age^2 + \hat{\beta}_3\, education + \hat{\beta}_4\, telephone + \hat{\beta}_5\, motorcycle + \hat{\beta}_6\, tv + \hat{\beta}_7\, npers + \hat{\beta}_8\, httwc + \hat{\beta}_9\, urban + \varepsilon \quad (M1)$$

where Y is the cumulative income of the household, age is the age of the head of the household, education is the number of years of education the head of the household has completed, telephone is a binary variable equal to 1 if the household owns a telephone, motorcycle is a binary variable equal to 1 if the household owns a motorcycle, tv captures access to information and is a binary variable equal to 1 if the household owns a television, npers is the number of persons in the household, httwc is a binary variable equal to 1 if the household's income is generated from trade or trade-related services (TTWC), and urban is a binary variable equal to 1 if the household is located in an urban area. The Ordinary Least Squares (OLS) estimates for both years are shown in Figures 3 and 4 (an illustrative estimation sketch is also provided below, after the discussion of focus group perceptions). We observe that in both provinces and in both time periods, overall household incomes are positively and significantly affected if the household belongs to an occupational sector under TTWC.

Figure 3: Regression results for 2004-05. Figure 4: Regression results for 2010-11.

Perceptions on Economic & Security Risks

The data from the two focus group discussions on economic and security implications were analysed using NVivo software. The frequency of the word 'India' pointed to major concerns about the role of India in the region after the exit of ISAF. As seen in Figure 5 below, the trader community, which has good relations with Afghan locals and their markets, was most concerned about the role of India. As a member of the trader community commented: '… interest was observed when the Indian government facilitated the correspondence of tender agreements between Indian companies and Afghanistan's government.'

Figure 5: Economic implications post-2014.

Pakistani manufacturers and economic think tanks were moderately concerned about competition from India penetrating Afghan markets. One of the participants said, 'We have a comparative advantage over India due to logistics.' The government and the media did not consider India to be a threat to the Pakistani economy. Even though Iran has taken a major share of Pakistan's cement and steel exports to Afghanistan, the Pakistani government discarded any claims that this change has had or will have an effect on the country's economy: 'Steel and cement demand has already been reduced. The trade route has also been diverted through Iran. But where is the major impact?'
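As a complement to the regression results above, the following is a minimal sketch of how specification (M1) could be estimated by OLS in Python. It is an illustration only, not the authors' original estimation routine: the file name hies_extract.csv and all column names (income, age, education, telephone, motorcycle, tv, npers, httwc, urban) are hypothetical placeholders standing in for the actual HIES variable names.

```python
# Minimal sketch: OLS estimation of the household-income equation (M1).
# Assumes a hypothetical CSV extract of HIES with one row per household
# and strictly positive incomes; every column name is a placeholder.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hies_extract.csv")  # hypothetical file name

# Dependent variable: log of cumulative household income.
df["ln_income"] = np.log(df["income"])
df["age_sq"] = df["age"] ** 2  # quadratic term for the household head's age

# telephone, motorcycle, tv, httwc, and urban are assumed to be coded 0/1,
# matching the binary variables described in the text.
model = smf.ols(
    "ln_income ~ age + age_sq + education + telephone + motorcycle"
    " + tv + npers + httwc + urban",
    data=df,
)
results = model.fit()
print(results.summary())

# The coefficient on httwc gives the (approximate, for small values)
# percentage income premium associated with TTWC employment. Estimating
# the model separately on the 2004-05 and 2010-11 samples would yield the
# two sets of results of the kind reported in Figures 3 and 4.
```

In the same spirit, a one-line group comparison such as df.groupby("httwc")["income"].mean() would reproduce the kind of TTWC versus non-TTWC mean-income contrast reported in Table 3.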
The security analysts unequivocally expressed their concerns over Indian involvement in the peace of the region post-2014, as shown in Figure 6. One of the discussants said, 'The Indians ignore Pakistan. The economies of India and Pakistan are completely different. India has a much bigger purse.' When asked about the interests of the Pakistani army regarding trade with India, a security analyst remarked that 'The army is on board to trade with India. You cannot go on forever isolating yourself from the region.' Representatives from the media were the only other participants to echo the concerns of the security analysts. A media representative urged, 'A national decision is required about India and Pakistan on whether we should proceed with bilateral or trilateral relations. How far is it important for us to sell the argument that western unrest affects relations with India?' The producers and the representatives of economic think tanks did not participate during the security discussion.

Figure 6: Security implications post-2014.

Conclusion

This paper establishes two key conclusions. NATO cargo and commercial trade with Afghanistan are important for the large portion of Pakistan's population employed in the trade, transport, warehousing, and communication sectors. During the period 2004-2011, employment in TTWC sectors increased in Balochistan and Khyber Pakhtunkhwa provinces. This increase in employment was particularly significant in the poorest quintiles. During the same period, real incomes in TTWC sectors increased more than those in non-trade sectors in Khyber Pakhtunkhwa and Balochistan. We also know from the secondary literature that the trade sector in these provinces has a higher mean income than non-trade sectors. Similarly, the income gap between the trade and non-trade sectors widened in favour of the trade sectors. Afghan refugees sheltered in Pakistan currently number well above three million, and this number was expected to rise significantly following the Pakistani military operations in North Waziristan in June 2014 and the exit of ISAF from Afghanistan in 2015. Changes of government in Afghanistan, India, Iran, and Pakistan provide a renewed opportunity to revive diplomatic negotiations after a long period of stagnation. The employment vacuum that will likely form in Khyber Pakhtunkhwa and Balochistan after the NATO drawdown could be filled by increased transit trade to Afghanistan from India. Pakistan, as a corridor economy, should cash in on its geographical location, an inherent competitive advantage it possesses over Iran. Rigid foreign policies must now be revisited to encourage greater trade and investment cooperation, in turn paving the way for dialogue on other issues. Despite persistent efforts to enhance regional cooperation, we are still far from the target. Peace and security remain imperative for the smooth flow of merchandise and people across the region. The issue of security cannot be solved by the efforts of a single country; it needs rather to be dealt with collectively as a region, and this does not seem likely to happen in the immediate future. The governments of Pakistan and Afghanistan should work in harmony to overcome the economic and security challenges shared by both countries.

Notes

1. Madrassas are institutes for religious education.

2. Because of the sensitivity of the border area between Afghanistan and Pakistan, trading consignments were often damaged or stolen. Therefore, there are multiple insurance guarantees on goods traded as well as on the trucks transporting these goods.
Furthermore, it was decided not to allow open consignments to cross these borders, in order to minimize theft of goods. These open consignments are called 'non-containerized' carriages. Containerized carriages have proper seals that are broken at the destination of exports to confirm safety.

3. This is the population living in the households mentioned above.

References

Ahmed, V (2010). Afghanistan-Pakistan Transit Trade Agreement. Munich Personal RePEc Archive (MPRA).
CAREC (2012). Implementing CAREC 2020 Strategic Framework: Wuhan Action Plan. Wuhan.
Government of Afghanistan (2010). Afghanistan Statistical Yearbook 2009-10. Central Statistics Organization.
Government of Pakistan (1965). Afghanistan Transit Trade Agreement. Ministry of Commerce.
Government of Pakistan (2012). Pakistan Statistical Yearbook. Pakistan Bureau of Statistics.
Hamid, N and Hayat, S (2012). The Opportunities and Pitfalls of Pakistan's Trade with China and Other Neighbors. The Lahore Journal of Economics: 271-292.
Hussain, Z (2007). Frontline Pakistan: The Struggle with Militant Islam. London: I.B. Tauris & Co Ltd.
Hussain, Z (2011). Sources of Tension in Afghanistan and Pakistan: A Regional Perspective. CIDOB Policy Research Project.
Iqbal, K (2011). Cross-border Implications of Afghan Drawdown. In: Afghanistan after NATO Withdraws, Asian Conflict Reports. Council for Asian Transnational Threat Research.
Lodhi, M (2012). Exit plan but no strategy. The News International, Opinion.
Markandya, A, Sachdeva, A, Iskakov, M, Krishnaswamy, V, Nikolov, N and Pedroso, S (2004). Central Asia: Regional Electricity Exports Potential Study. The World Bank.
MoC (Ministry of Commerce, Pakistan) (2010). Afghanistan-Pakistan Transit Trade Agreement. Kabul.
MoC (Ministry of Commerce, Pakistan) (2012). Pakistan-Afghanistan Joint Economic Commission Session. Islamabad, Pakistan.
MoC (Ministry of Commerce, Pakistan) (2014). Pakistan-Afghanistan Joint Economic Commission, 9th Session. Kabul, Afghanistan.
Parto, S, Winters, J, Saadat, E, Usyan, M and Hozyainova, A (2012). Afghanistan and Regional Trade: More, or Less, Imports from Central Asia? Working Paper No. 3. University of Central Asia.
Pasha, H A (2013). Economy of Tomorrow: A Case Study of Pakistan. FES Islamabad.
Rais, R B (2009). Recovering the Frontier State: War, Ethnicity, and State in Afghanistan. Lexington Books.
Risen, J (2010). U.S. Identifies Vast Mineral Riches in Afghanistan. The New York Times, June 13, 2010.
SAARC (2004). 12th SAARC Summit. Islamabad.
SNC Lavalin International (2011). Central Asia-South Asia Electricity Transmission and Trade (CASA-1000): Project Feasibility Update.
Stepanova, E (2013). Afghanistan after 2014: The Way Forward for Russia. IFRI.
Sultan, M, Hashmi, A, Hasan, A and Khokar, E N (2013). Pakistan - The Afghan Question and Priorities. In: Sultan, M, Hashmi, A and Abbasi, M A (eds.), Afghanistan 2014: The Decision Point.
Swan, S K, Syatauw, J J G and Pinto, M C W (1993). Asian Yearbook of International Law.
UNDP (United Nations Development Program) (2013). UNDP Human Development Report. Canada.
World Population Review (2014). Afghanistan Population. Available at: http://worldpopulationreview.com/countries/afghanistan-population/.
Xiangyu, Z, Chunyan, Z and Yufan, Z (2013). Political Reconciliation in Afghanistan: Progress, Challenges and Prospects. Institute of Strategic Studies Islamabad.

Shabbir, S and Ahmed, V (2015). Welfare Impacts of Afghan Trade on the Pakistani Provinces of Balochistan and Khyber Pakhtunkhwa. Stability: International Journal of Security and Development, 4(1): Art. 6. DOI: http://doi.org/10.5334/sta.et
(shrink) Association, Synonymity, and Directionality in False Recognition.Moshe Anisfeld & Margaret Knapp - 1968 - Journal of Experimental Psychology 77 (2):171.details Conscious and Unconscious Memory in Philosophy of Cognitive Science Might There Be a Medical Conscience?Nir Ben-Moshe - 2019 - Bioethics 33 (7):835-841.details I defend the feasibility of a medical conscience in the following sense: a medical professional can object to the prevailing medical norms because they are incorrect as medical norms. In other words, I provide an account of conscientious objection that makes use of the idea that the conscience can issue true normative claims, but the claims in question are claims about medical norms rather than about general moral norms. I further argue that in order for this line of reasoning to (...) succeed, there needs to be an internal morality of medicine that determines what medical professionals ought to do qua medical professionals. I utilize a constructivist approach to the internal morality of medicine and argue that medical professionals can conscientiously object to providing treatment X, if providing treatment X is not in accordance with norms that would have been constructed, in light of the end of medicine, by the appropriate agents under the appropriate conditions. (shrink) The Internal Morality of Medicine: A Constructivist Approach.Nir Ben-Moshe - 2019 - Synthese 196 (11):4449-4467.details Physicians frequently ask whether they should give patients what they want, usually when there are considerations pointing against doing so, such as medicine's values and physicians' obligations. It has been argued that the source of medicine's values and physicians' obligations lies in what has been dubbed "the internal morality of medicine": medicine is a practice with an end and norms that are definitive of this practice and that determine what physicians ought to do qua physicians. In this paper, I defend (...) the claim that medicine requires a morality that is internal to its practice, while rejecting the prevalent characterization of this morality and offering an alternative one. My approach to the internal morality of medicine is constructivist in nature: the norms of medicine are constructed by medical professionals, other professionals, and patients, given medicine's end of "benefitting patients in need of prima facie medical treatment and care." I make the case that patients should be involved in the construction of medicine's morality not only because they have knowledge that is relevant to the internal morality of medicine—namely, their own values and preferences—but also because medicine is an inherently relational enterprise: in medicine the relationship between physician and patient is a constitutive component of the craft itself. The framework I propose provides an authoritative morality for medicine, while allowing for the incorporation, into that very morality, of qualified deference to patient values. (shrink) Set Theory, Logic and Their Limitations.Moshe Machover - 1996 - Cambridge University Press.details This is an introduction to set theory and logic that starts completely from scratch. The text is accompanied by many methodological remarks and explanations. 
$5.96 used $45.94 new $45.95 from Amazon Amazon page A Cognitive Neuroscience Hypothesis of Mood and Depression.Moshe Bar - 2009 - Trends in Cognitive Sciences 13 (11):456.details Aspects of Consciousness in Philosophy of Mind Localizing the Cortical Region Mediating Visual Awareness of Object Identity.Moshe Bar & Irving Biederman - 1999 - Proceedings of the National Academy of Sciences of the United States of America 96 (4):1790-1793.details Neural Correlates of Visual Consciousness in Philosophy of Cognitive Science Evaluation Anxiety.Moshe Zeidner, Gerald Matthews, A. J. Elliot & C. S. Dweck - 2005 - In Andrew J. Elliot & Carol S. Dweck (eds.), Handbook of Competence and Motivation. The Guilford Press.details Emotion and Consciousness in Psychology in Philosophy of Cognitive Science $21.18 used (collection) Amazon page
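The Felsenthal, Machover & Zwicker entry above compares power indices such as the Banzhaf measure. As a concrete illustration appended here (it is not part of the original listing), this minimal sketch computes the normalized Banzhaf index of a weighted simple voting game; the quota and weights are invented examples.

```python
from itertools import combinations

def banzhaf(weights, quota):
    """Normalized Banzhaf index of a weighted simple voting game.

    A coalition wins iff its total weight meets the quota. Voter i is a
    'swing' in a coalition S (not containing i) if S loses but S + {i} wins.
    """
    n = len(weights)
    swings = [0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(len(others) + 1):
            for coalition in combinations(others, r):
                without = sum(weights[j] for j in coalition)
                if without < quota <= without + weights[i]:
                    swings[i] += 1
    total = sum(swings)  # assumed nonzero, i.e. the game is not degenerate
    return [s / total for s in swings]

# Quota 51 with weights 50, 49, 1: the voters with weights 49 and 1 turn
# out to be equally powerful despite their very different weights.
print(banzhaf([50, 49, 1], 51))  # [0.6, 0.2, 0.2]
```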
European Radiology Experimental
A new method for estimating patient body weight using CT dose modulation data
Dominic Gascho, Lucia Ganzoni, Philippe Kolly, Niklaus Zoelch, Gary M. Hatch, Michael J. Thali & Thomas D. Ruder
European Radiology Experimental volume 1, Article number: 23 (2017)
Body weight (BW) is a relevant metric in emergency care. However, visual/physical methods to estimate BW are unreliable. We have developed a method for estimating BW based on effective mAs (mAseff) from computed tomography (CT) dose modulation. The mAseff of CT examinations was correlated with the BW of 329 decedents. Linear regression analysis was used to calculate an equation for BW estimation based on the results of decedents with a postmortem interval (PMI) < 4 days (n = 240). The equation was applied to a validation group of 125 decedents. Pearson correlation and t-test statistics were used. We found an overall strong correlation between mAseff and BW (r = 0.931); r values ranged from 0.854 for decedents with PMI ≥ 4 days to 0.966 for those with PMI < 4 days; among the latter group, r was 0.974 for females and 0.960 for males, and 0.969 in the presence and 0.966 in the absence of metallic implants (all correlations with p values < 0.001). The estimated BW was equal to 3.732 + (0.422 × mAseff) – (3.108 × sex index), where the sex index is 0 for males and 1 for females. The validation group showed a strong correlation (r = 0.969) between measured BW and the predicted BW, without significant differences overall (p = 0.119) as well as in female (p = 0.394) and in male decedents (p = 0.196). No outliers were observed. CT dose modulation is a rapid and reliable method for BW estimation with potential use in clinical practice, in particular in emergency settings.
Key points:
CT using dose modulation can be used to estimate BW.
Effective mAs values showed a strong correlation with measured BW.
An equation can be calculated to estimate BW.
This method has potential use in emergency settings.
The estimation of body weight (BW) is a relevant issue in emergency care, as accurate drug dosing [1, 2], such as in thrombolysis of acute ischaemic stroke [3, 4] or the dosage of contrast media [5, 6], is related to BW. Patients in emergency care may be unresponsive and thus unable to state their BW, and visual estimates of BW are unreliable [1, 7, 8]. A few methods to estimate BW (beyond a simple visual estimate), applicable to both the living and the dead, are mentioned in the literature [2, 9]. Recording the BW of a decedent prior to autopsy is a standard procedure in forensic medicine [10, 11]. However, these methods yield moderate accuracy [2] or are technically challenging and time consuming [9]. Therefore, developing a new approach for BW estimation is a relevant issue. At our institute of forensic medicine, we use a calibrated floor scale to measure BW accurately. Additionally, each decedent undergoes computed tomography (CT) as a supplement to autopsy. Postmortem CT exams utilize tube current modulation [12]. A main purpose of tube current modulation is the adjustment of dose exposure to body anatomy, yielding almost constant image noise along the scan [13, 14]. By measuring beam attenuation during the localizer scan, automated dose modulation calculates a dose distribution based on a reference value of mAs, i.e. a user-selected reference mAs value (mAsref), and on body anatomy.
The shape and size of a typical adult with a BW of 70–80 kg serve as the reference for this technique. Thus, an increased tube current is applied for overweight people (higher attenuation detected in the localizer) and a decreased tube current for underweight people (lower attenuation detected in the localizer) [15]. Since dose modulation adjusts the dose exposure according to individual deviations from the ideal patient and the reference standard of 70–80 kg [13, 14], we assumed that the adjusted mAs values over the whole body (effective mAs, mAseff) may correlate with the BW of adults. The aim of this study was to evaluate the correlation between mAseff values and measured BW in order to develop a linear regression equation for BW estimation in adults.
Scan data were acquired as part of forensic judicial investigations. Data usage is conformant with Swiss laws and ethical standards, as approved by the Ethics Committee of the Canton of Zurich (written approval, KEK ZH-Nr. 2015-0686). We reviewed all cases that underwent postmortem whole body CT between September 2015 and June 2016 (n = 459). Exclusion criteria were: age < 17 years (n = 15), use of non-standard scan parameters for research purposes (n = 20), and dismembered corpses (n = 95). Thus, the final study population consisted of 329 decedents (105 females and 224 males) with a mean age of 59.0 ± 18.0 years (mean ± standard deviation [SD]; range 18–95 years). Taking into consideration that decomposition- or putrefaction-related changes usually start to appear 72 h after demise [16], the study population was divided into two groups with different postmortem intervals (PMI): 240 decedents with a PMI < 4 days (78 females and 162 males) and 89 decedents with a PMI ≥ 4 days (27 females and 62 males). The former group was further subdivided according to sex (78 females and 162 males) and according to the presence of metallic medical implants (38 with and 202 without).
After evaluation of the data distribution using the Kolmogorov–Smirnov test, Pearson's correlation coefficient (r) was used to assess the correlation between measured BW and mAseff for each group and subgroup. The p values of the correlations were also calculated. Linear regression analyses were used to create a model for estimating BW based on mAseff, taking sex and implants into consideration. The group with PMI < 4 days was used for the calculation of an equation for BW estimation; the calculated constant and the unstandardized coefficients (B) were used to develop the equation. According to the multivariate linear regression analysis, sex and/or implants were taken into account for the equation. Further, the standard error of the estimate (SEE) was calculated. The final equation was applied to a validation group, which included all cases between December 2016 and March 2017 (n = 204). Exclusion criteria were the same as mentioned above, with the addition of a PMI ≥ 4 days. The final validation group consisted of 125 decedents (43 females and 82 males) with a mean age of 56.4 ± 18.3 years (range 18–96 years). After evaluation of the data distribution using the Kolmogorov–Smirnov test, the Student t-test was applied to reveal significant differences between actual BW and the BW predicted by the linear regression equation. All CT exams utilized automated dose modulation and were performed at the request of local legal authorities.
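The analysis plan above (normality screening, Pearson correlations, a multivariate fit with sex as a dummy variable, and a t-test on the validation set) is straightforward to reproduce. The authors worked in R; purely as an illustration, here is a minimal Python/SciPy sketch in which the arrays are invented placeholders, not the study data.

```python
import numpy as np
from scipy import stats

# Placeholder arrays: measured body weight (kg), effective mAs, and a
# sex index (0 = male, 1 = female) for one PMI group. Not study data.
bw = np.array([62.0, 81.5, 70.2, 95.0, 55.3])
mas_eff = np.array([140.0, 190.0, 160.0, 230.0, 120.0])
sex = np.array([0, 0, 1, 0, 1])

# Normality check (the paper uses the Kolmogorov-Smirnov test).
print(stats.kstest(stats.zscore(bw), "norm"))

# Pearson correlation between measured BW and effective mAs.
r, p = stats.pearsonr(mas_eff, bw)
print(f"r = {r:.3f}, p = {p:.4g}")

# Multivariate linear regression BW ~ mAs_eff + sex via least squares;
# the first coefficient is the constant, the remaining ones are the
# unstandardized coefficients (B) used to build the equation.
X = np.column_stack([np.ones_like(mas_eff), mas_eff, sex])
coef, *_ = np.linalg.lstsq(X, bw, rcond=None)
print("constant, B(mAs_eff), B(sex):", coef)

# Paired comparison of actual and predicted BW, in the spirit of the
# Student t-test the paper applies to the validation group.
predicted = X @ coef
print(stats.ttest_rel(bw, predicted))
```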
Imaging protocol
Postmortem CT was performed on a 128-slice scanner (SOMATOM Definition Flash, Siemens Healthcare, Forchheim, Germany) using the dose modulation technique (CARE Dose 4D™, Siemens Healthcare, Forchheim, Germany). The CT scan protocol included frontal and lateral localizers (topogram or scout view) using 120 kVp and 35 mA. Dose modulation was based on attenuation measurements automatically taken during the lateral localizer. The whole body scan was performed according to the calculated dose distribution and the initial reference mAs value (mAsref). The scan parameters of the whole body CT were as follows: reference tube current 400 mAsref; tube voltage 120 kVp; rotation time 0.5 s; pitch 0.35; acquisition 128 × 0.6 mm. The actual tube current levels were based on dose modulation with an average adaptation to patient size. After each scan, the effective mAs value (mAseff), reflecting the effective dose exposure of the whole body scan, was documented in an automated dose report.
Descriptive data
The actual mAseff values and the CT examination data were extracted from the dose reports for each decedent; these reports were automatically generated after completion of the scan by the CT control software (syngo CT 2012B, release VA44A, Siemens Healthcare, Forchheim, Germany) and automatically sent to and stored in our data archive (syngo.share View, release VA21E, ITH icoserve technology for healthcare GmbH, Innsbruck, Austria). Sex, age (years), actual BW (kg), and estimated time of death were taken from our digital case archive (IBM Notes® 9, release 9.0, Armonk, NY, USA). PMI in days was calculated as the time period between the estimated time of death and the CT examination date. The presence or absence of metallic medical implants (orthopaedic implants and pacemakers) was noted by reviewing all image data. Actual BW measurements (kg) on the readout of the calibrated floor scale (MultiRange ID5, Mettler-Toledo International Inc., Columbus, Ohio, US) were documented during body intake at our institution, according to our routine protocol. All statistical analyses were computed using dedicated software (R version 3.3.2, R Core Team, R Foundation for Statistical Computing, Vienna, Austria). A p value < 0.05 was considered statistically significant.
The study population yielded a mean actual BW of 73.8 ± 20.1 kg (mean ± SD; range 18–137 kg) and a mean mAseff of 165.8 ± 46.4 (range 30–294). The correlation between measured BW and mAseff was stronger for PMI < 4 days (r = 0.966) than for PMI ≥ 4 days (r = 0.854). The descriptive data and statistical analyses of the study group and of all subgroups are listed in Table 1. The Kolmogorov–Smirnov test showed normal data distributions for all groups and subgroups except females (mAseff, p = 0.002; weight, p = 0.001) and males (weight, p = 0.032) with a PMI < 4 days. The correlation was found to be strong for both females (r = 0.974) and males (r = 0.960). The same applied to the subgroups with implants (r = 0.969) and without implants (r = 0.966). All correlation coefficients were statistically significant (p < 0.001). Correlations between mAseff and measured BW for the study population, for PMI < 4 days and for PMI ≥ 4 days are illustrated in Fig. 1.
Table 1. Descriptive data and statistical analyses of the study population and of subgroups.
Fig. 1. Several outliers are visible for the study population (a). However, decedents with a PMI < 4 days (b) showed a strong correlation between measured BW and mAseff values. Of note, all outliers of the study population (a) can be assigned to decedents with PMI ≥ 4 days (c).
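One bookkeeping step under "Descriptive data" deserves a concrete illustration before the regression results: PMI in days is the interval between the estimated time of death and the CT examination date, and it drives the grouping used throughout. A minimal sketch follows, with a hypothetical Case record and invented field values; none of this comes from the paper.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Case:
    # Hypothetical record assembled from the dose report and case archive.
    estimated_time_of_death: datetime
    ct_exam_time: datetime
    mas_eff: float
    bw_kg: float

    @property
    def pmi_days(self) -> float:
        """Postmortem interval: time between estimated death and CT exam."""
        delta = self.ct_exam_time - self.estimated_time_of_death
        return delta.total_seconds() / 86400.0

cases = [
    Case(datetime(2016, 1, 1, 8), datetime(2016, 1, 3, 9), 172.0, 74.5),
    Case(datetime(2016, 2, 2, 6), datetime(2016, 2, 8, 10), 201.0, 77.0),
]

short_pmi = [c for c in cases if c.pmi_days < 4]   # used to fit the equation
long_pmi = [c for c in cases if c.pmi_days >= 4]   # analysed separately
print(len(short_pmi), len(long_pmi))
```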
Multivariate linear regression analysis for PMI < 4 days taking into account mAseff (p < 0.001), sex (p < 0.001) and implants (p = 0.271) revealed that sex was a significant factor, whereas implants were not. Therefore, the implants variable was not included in the equation. Based on the results of the multivariate linear regression analysis for PMI < 4 days (constant = 3.732, p = 0.007) taking into account mAseff (B = 0.422, p < 0.001) and sex (B = −3.108, p < 0.001), we propose the following linear regression equation to estimate BW:
$$ \mathrm{Estimated}\;\mathrm{BW}=3.732+\left(0.422\times {\mathrm{mAs}}_{\mathrm{eff}}\right)-\left(3.108\times \mathrm{sex}\;\mathrm{index}\right) $$
where the sex index is 0 for males and 1 for females. The SEE was 4.82.
The validation group yielded a mean actual BW of 74.8 ± 16.7 kg (mean ± SD; range 32–128 kg) and a mean mAseff of 169.5 ± 38.3 (range 65–274). The mean predicted BW calculated by the equation was 74.2 ± 16.6 kg (range 28.1–119.4 kg). Descriptive data of the validation group and of all subgroups are listed in Table 2. The statistical evaluation of the data distribution showed normal distributions for the main group and both subgroups. The actual BW and the BW predicted by the equation were strongly correlated (r = 0.969; women, r = 0.972; men, r = 0.960). The coefficient of determination (R²) was 0.938. The validation group showed no outliers (maximum deviation ±9 kg; mean deviation −0.6 kg; Fig. 2). The Student t-test revealed no statistically significant difference between actual BW and predicted BW for the validation group (p = 0.119; females, p = 0.394; males, p = 0.196).
Table 2. Descriptive data and statistical analyses for the validation group.
Fig. 2. Applying the equation to the validation group revealed a strong correlation between actual BW and predicted BW. The validation group showed no outliers. The coefficient of determination (R²) was 0.938.
This study presents a reliable method to estimate BW using CT dose modulation through a simple equation. We found a strong correlation between BW, measured with the standard scale, and mAseff values based on CT dose modulation. The proposed equation, taking into account mAseff and sex, fits 93.8% (R² = 0.938) of the data regarding decedents with PMI < 4 days, without any outliers in the validation group. Thus, a rapid and robust method to determine the BW of non-decomposed human decedents is now available. In the forensic setting, this could have value in situations of equipment failure, data loss, or if images were evaluated in isolation. Moreover, this method may have potential in clinical radiology, as whole body CT has gained increasing importance in emergency settings such as polytrauma [17, 18, 19, 20] or other conditions. Notably, our equation was derived from data obtained with the CT scanner and the protocol we used and may not provide the same results when different CT models from other vendors and other protocols are used. However, this study clearly describes how institutes can calculate an equation for their own whole body CT unit and protocol.
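Applied directly, the published coefficients reduce to a one-line function. The following sketch is illustrative only, not software from the study, and the example inputs are invented.

```python
def estimate_bw(mas_eff: float, female: bool) -> float:
    """Estimated body weight (kg) from the published regression:
    BW = 3.732 + 0.422 * mAs_eff - 3.108 * sex_index,
    with sex_index = 0 for males and 1 for females."""
    sex_index = 1.0 if female else 0.0
    return 3.732 + 0.422 * mas_eff - 3.108 * sex_index

# Invented example input: an effective mAs of 170 read off the dose report.
print(round(estimate_bw(170.0, female=False), 1))  # 75.5 kg
print(round(estimate_bw(170.0, female=True), 1))   # 72.4 kg
```

As stressed above, the constant and the B values are tied to this scanner, protocol and 400 mAsref setting; another institution would refit them on its own dose-report data.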
The study population was divided into cases with PMI < 4 days and cases with PMI ≥ 4 days because of decomposition- or putrefaction-related changes. This temporal separation was chosen based on the experience of our forensic pathologists. Although decomposition depends on several factors [21], in our temperate climate generalized bloating usually starts to appear 72 h after demise [16]. For this study, the chosen cut-off therefore seemed appropriate. All decedents with PMI < 4 days showed an excellent correlation between mAseff and BW (r = 0.966). By contrast, the correlation was weaker in decedents with PMI ≥ 4 days (r = 0.854), probably due to decomposition- or putrefaction-related changes (e.g. gaseous distention or putrefaction fluid). It is conceivable that decedents with a shorter PMI (or living patients) may show an even higher correlation between mAseff and BW.
In the field of postmortem imaging, Jackowski et al. [9] presented a method using postmortem CT. The method was derived from a study by Abe et al. [22], who calculated a soft tissue multiplication factor for detecting whole body skeletal muscle mass in the living. Based on 50 cases (30 adults and 20 paediatric cases) with a short PMI (not specified more precisely), Jackowski et al. [9] calculated a multiplication factor to estimate the BW of decedents based on whole body segmentation. However, whole body segmentation requires specialized skills and software as well as additional image processing steps, and can be time consuming. Conversely, the use of dose-modulated mAs and an equation enables rapid BW estimation.
Rapid BW calculation based on dose modulation for adult patients may show potential in emergency radiology with respect to drug dosage or the dosage of contrast media, which are usually based on patient BW. Fernandes et al. [1] demonstrated that 33% of estimates from physicians and nurses deviate by more than 10% from the actual BW of ambulatory patients (indicated with a 95% confidence interval). As mentioned by the authors, BW estimates for patients in the supine position may be even less accurate. An equation by Buckley et al. [2] yielded greater accuracy compared to visual BW estimates made by physicians and nurses, but deviations greater than ±10 kg from measured BW still occurred in 15% of male patients and 27% of female patients. Thus, the authors recommended their linear regression equation only for male patients when patients are not able to state their BW. By contrast, the present study revealed strong correlations for both females and males with a PMI < 4 days. However, the data of each of these two subgroups were not normally distributed; therefore, the results are less robust. The mean BW of males (80.0 kg) was in the range of the standard reference patient BW of 70–80 kg used in the dose modulation software, and this subgroup revealed a strong correlation (r = 0.960). Despite the fact that the mean BW of females (68.4 kg) was below this 70–80 kg reference range, the correlation was also strong (r = 0.974). Although metallic implants affect x-ray attenuation [23], the correlation for decedents with implants (r = 0.969) was nearly equal to that for decedents without implants (r = 0.966). In contrast to sex, taking implants into account was not statistically significant in the multivariate linear regression analysis; thus, the presence or absence of metallic implants was not taken into account in the linear regression equation. We hypothesize that small medical devices may also have little influence on the correlation.
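Against the ±10 kg yardstick used in this literature, a candidate estimator can be summarized with a few lines of code; the arrays below are invented examples, not study data.

```python
import numpy as np

def deviation_report(actual_kg, predicted_kg):
    """Mean deviation, maximum-magnitude deviation, and the fraction of
    cases whose deviation exceeds +/-10 kg."""
    dev = np.asarray(predicted_kg, dtype=float) - np.asarray(actual_kg, dtype=float)
    return {
        "mean_deviation_kg": float(dev.mean()),
        "max_abs_deviation_kg": float(np.abs(dev).max()),
        "fraction_beyond_10kg": float((np.abs(dev) > 10.0).mean()),
    }

# Invented example values.
print(deviation_report([70.0, 82.0, 55.0], [68.5, 84.0, 57.5]))
```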
In our study, the applied dose modulation (CARE Dose 4D™) was used with an average adaptation to patient BW. CARE Dose 4D™ also allows for different adaptation strengths regarding patient size (very strong, strong, weak, and very weak), which can be selected for slim or obese adult patients. Different adaptation settings result in different mAseff values [15]. Therefore, changes in the adaptation options would result in different correlations between mAseff and patient BW. We hypothesize that separate equations for slim or obese patients, using weak or strong adaptations respectively, would result in more precise BW estimations. Further, dose modulation was based on a lateral whole body localizer. Our postmortem CT protocol included first a frontal localizer and then a lateral localizer. The correlation between BW and dose modulation based on attenuation measurements from the frontal localizer was not evaluated in this study. However, this study clearly describes the calculation of an equation for BW estimation based on mAseff values, which can easily be calculated for any clinical CT protocol using dose modulation.
Admittedly, this study has several limitations when considering the clinical perspective. First, our results are based on a standardized postmortem CT protocol according to the literature [12]. Radiation dose to the decedent can be neglected in postmortem imaging; therefore, a high mAsref value of 400 is standard for whole body scans. Further studies are required regarding mAseff values from clinical protocols. Second, the estimation of BW based on CT using dose modulation requires a whole body scan; therefore, this approach is limited to patients, such as polytrauma patients, who undergo whole body CT scans. Third, automatic exposure control systems are available from several CT vendors [13, 14], but dose modulation strategies vary between vendors. The results of this study are based on the dose modulation strategy of a single vendor. However, we hypothesize that other vendors' systems provide similar correlations, which can be investigated in the same way as in the present study.
To summarize, this study demonstrates a rapid and reliable method for BW estimation. Given the lack of reliable methods for practitioners to estimate patient BW based on visual parameters or physical exam, BW estimation based on CT dose modulation may have potential use in clinical radiology and for polytrauma patients. Certainly, further studies are required.
References
1. Fernandes CM, Clark S, Price A, Innes G (1999) How accurately do we estimate patients' weight in emergency departments? Can Fam Physician 45:2373–2376
2. Buckley RG, Stehman CR, Dos Santos FL et al (2012) Bedside method to estimate actual body weight in the emergency department. J Emerg Med 42:100–104
3. Lorenz MW, Graf M, Henke C et al (2007) Anthropometric approximation of body weight in unresponsive stroke patients. J Neurol Neurosurg Psychiatry 78:1331–1336
4. Breuer L, Nowe T, Huttner HB et al (2010) Weight approximation in stroke before thrombolysis: the WAIST-Study: a prospective observational "dose-finding" study. Stroke 41:2867–2871
5. Bae KT, Tao C, Gürel S et al (2007) Effect of patient weight and scanning duration on contrast enhancement during pulmonary multidetector CT angiography. Radiology 242:582–589
6. Bae KT (2010) Intravenous contrast medium administration and scan timing at CT: considerations and approaches. Radiology 256:32–61
7. Hall WL II, Larkin GL, Trujillo MJ et al (2004) Errors in weight estimation in the emergency department: comparing performance by providers and patients. J Emerg Med 27:219–224
8. Menon S, Kelly A-M (2005) How accurate is weight estimation in the emergency department? Emerg Med Australas 17:113–116
9. Jackowski C, Schwendener N, Zeyer-Brunner J, Schyma C (2015) Body weight estimation based on postmortem CT data: validation of a multiplication factor. Int J Legal Med 129:1121–1125
10. Saukko P, Knight B (2015) Chapter 1: The forensic autopsy. In: Saukko P, Knight B (eds) Knight's forensic pathology, 4th edn. CRC Press–Taylor & Francis, Boca Raton, FL, pp 1–54
11. Di Maio VJ, Di Maio D (2001) Appendix: The autopsy report. In: Di Maio VJ, Di Maio D (eds) Forensic pathology, 2nd edn. CRC Press–Taylor & Francis, Boca Raton, FL, pp 549–551
12. Flach PM, Gascho D, Schweitzer W et al (2014) Imaging in forensic radiology: an illustrated guide for postmortem computed tomography technique and protocols. Forensic Sci Med Pathol 10:583–606
13. Kalra MK, Maher MM, Toth TL et al (2004) Techniques and applications of automatic tube current modulation for CT. Radiology 233:649–657
14. McCollough CH, Bruesewitz MR, Kofler JM (2006) CT dose reduction and dose management tools: overview of available options. RadioGraphics 26:503–512
15. Söderberg M, Gunnarsson M (2010) The effect of different adaptation strengths on image quality and radiation dose using Siemens CARE Dose 4D. Radiat Prot Dosimetry 139:173–179
16. Di Maio VJ, Di Maio D (2001) Chapter 2: Time of death. In: Di Maio VJ, Di Maio D (eds) Forensic pathology, 2nd edn. CRC Press–Taylor & Francis, Boca Raton, FL, pp 21–41
17. Poletti P-A, Wintermark M, Schnyder P, Becker CD (2002) Traumatic injuries: role of imaging in the management of the polytrauma victim (conservative expectation). Eur Radiol 12:969–978
18. Linsenmaier U, Krötz M, Häuser H et al (2002) Whole-body computed tomography in polytrauma: techniques and management. Eur Radiol 12:1728–1740
19. Huber-Wagner S, Lefering R, Qvick L-M et al (2009) Effect of whole-body CT during trauma resuscitation on survival: a retrospective, multicentre study. Lancet 373:1455–1461
20. Wurmb TE, Quaisser C, Balling H et al (2011) Whole-body multislice computed tomography (MSCT) improves trauma care in patients requiring surgery after multiple trauma. Emerg Med J 28:300–304
21. Zhou C, Byard RW (2011) Factors and processes causing accelerated decomposition in human cadavers: an overview. J Forensic Leg Med 18:6–9
22. Abe T, Kearns CF, Fukunaga T (2003) Sex differences in whole body skeletal muscle mass measured by magnetic resonance imaging and its distribution in young Japanese adults. Br J Sports Med 37:436–440
23. Barrett JF, Keat N (2004) Artifacts in CT: recognition and avoidance. RadioGraphics 24:1679–1691
Author affiliations
Department of Forensic Medicine and Imaging, Institute of Forensic Medicine, University of Zurich, 8057 Zurich, Switzerland: Dominic Gascho, Lucia Ganzoni, Niklaus Zoelch, Michael J. Thali & Thomas D. Ruder
Department of Clinical Research, University of Bern, 3008 Bern, Switzerland: Philippe Kolly
Hospital of Psychiatry, Department of Psychiatry, Psychotherapy and Psychosomatics, University of Zurich, 8032 Zurich, Switzerland: Niklaus Zoelch
Center for Forensic Imaging, Departments of Radiology and Pathology, University of New Mexico School of Medicine, Albuquerque, NM 87102, USA: Gary M. Hatch
Institute of Diagnostic, Interventional, and Pediatric Radiology, University Hospital Bern, 3010 Bern, Switzerland: Thomas D. Ruder
Contributions: DG drafted the manuscript, performed the computed tomography scans and participated in performing the statistical analysis. DG and TR conceived of the study and coordinated it. LG carried out the data collection. PK performed the statistical analysis. LG, PK, NZ, GH and TR reviewed the manuscript. MT provided technical equipment. All authors read and approved the final manuscript.
Correspondence to Dominic Gascho.
Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite as: Gascho, D., Ganzoni, L., Kolly, P. et al. A new method for estimating patient body weight using CT dose modulation data. Eur Radiol Exp 1, 23 (2017). https://doi.org/10.1186/s41747-017-0028-z
Keywords: Dose modulation; Emergency radiology; Virtopsy
tan luxe gradual tan drops Ex 8.1, 2 If the diagonals of a parallelogram are equal, then show that it is a rectangle. Opposite angles are equal (angles "a" are the same, and angles "b" are the same) Angles "a" and "b" add up to 180°, so they are supplementary angles. Opposite angles of parallelogram are equal (D = B). We prove that one of its interior angles is 90 . A rhombus has four equal sides and its diagonals bisect each other at right angles as shown in Figure 1. a 6 8 1 3 34 4 9 10 20 Figure 1: Rhombus Figure 2: Input file "diagonals.txt" Write a complete Object-Oriented Program to solve for the area and perimeter of Rhombus. Call the intersection point of both diagonals P. Start with just the parallelogram, just draw in the diagonal from A to C, then draw a line from B to P. This line from B to P is half of the second diagonal. In mathematics, the simplest form of the parallelogram law (also called the parallelogram identity) belongs to elementary geometry. Hence, the formula of the perimeter of a trapezium is given by. In the figure above drag any vertex to reshape the parallelogram and convince your self this is so. In a parallelogram, the diagonals bisect each other. Its diagonals bisect each other.. Repeaters, Vedantu To see that they are, remove triangle ABC from your diagram, and redraw it so that AC is the base, and is drawn … The diagonals of a parallelogram bisect each other and each one separates the parallelogram into two congruent triangles. Is Square and Rectangle Considered as a Parallelogram? Need assistance? Measure of one angle of a parallelogram is 800 . If one angle is 90 degrees, then all other angles are also 90 degrees. In a parallelogram, the diagonals bisect each other. A rhombus has four equal sides and its diagonals bisect each other at right angles as shown in Figure 1. a 6 8 1 3 34 4 9 10 20 Figure 1: Rhombus Figure 2: Input file "diagonals.txt" Write a complete Object-Oriented Program to solve for the area and perimeter of Rhombus. English, 12.11.2020 01:00. Here, a,b,c and d are the sides of the trapezium. As we know, there are two diagonals for a parallelogram, which intersects each other. The adjacent angles of the parallelogram are supplementary. Given: Let ABCD be a parallelogram All four sides are equal, and the diagonals are perpendicular. The opposite sides are parallel. Once we show that ΔAOD and ΔCOB are congruent, we will have the proof needed, not just for AO=OC, but for both diagonals, since BO and OD are also corresponding sides of these same congruent triangles. In ∆PQR and ∆QPS, we have. Rectangles are a special type of parallelogram, in which all the interior angles measure 90°. One interesting thing about the shape is that there is a formula for calculating its area and … Thus, since sides and are parallel and of equal length, they can be represented by the same vector , despite the fact that they are in different places on the diagram. Become our . An equivalent condition is that opposite sides are parallel (a square is a parallelogram), that the diagonals perpendicularly bisect each other, and are of equal length. It implies that two adjacent angles are supplementary. Solution- Given, Base = 5cm and Height = 7 cm. ... there are two diagonals for a parallelogram, which intersects each other. 
AD = BC //Opposite sides of a parallelogram are equal in size (4) ∠OBC ≅ ∠ODA //Alternate Interior Angles … Area of a Parallelogram – Explanation & Examples As the name suggests, a parallelogram is a quadrilateral formed by two pairs of parallel lines. (Their sum equal to 180 degrees.) AB = DC The opposite sides of a parallelogram are congruent. In Euclidean geometry, a parallelogram is a simple (non-self-intersecting) quadrilateral with two pairs of parallel sides. 2. Rectangles are a special type of parallelogram. Davneet Singh is a graduate from Indian Institute of Technology, Kanpur. asked Apr 30, 2017 in Mathematics by sforrest072 (128k points) edited Jan 3, 2019 by Vikash Kumar. We use these notations for the sides: AB, BC, CD, DA. A kite is a quadrilateral with two pairs of adjacent and congruent (equal- length) sides. a and b are the length of the first and second pair of the sides of a kite. The diagonal of a parallelogram separates it into two congruent triangles. Moreover, how much is a parallelogram equal to? Parallelogram law. So, ABCD is a parallelogram with one angle 90 Show that if the diagonals of a quadrilateral are equal and bisect each other at right angles, then it is a square. Quadrilateral Theorem:Diagonal of parallelogram are equal n intersect at 90 degrees then it's square A quadrilateral is a square if and only if it is both a rhombus and a rectangle (i.e., four equal sides and four equal angles). Rectangle. 5.7k views. A kite has two pairs of each side. 2. The sum of interior angles that they form with each other is 360 degrees. Parallelogram. Theorem 3. Opposite sides of parallelogram are equal (AB = DC). The trapezium is of three different types namely: Isosceles Trapezium - The legs or non parallel sides of an isosceles trapezium are equal in length. As we know diagonals of a kite are perpendicular. 1800-212-7858 / 9372462318. Pro Lite, CBSE Previous Year Question Paper for Class 10, CBSE Previous Year Question Paper for Class 12. In ABC and DCB, A parallelogram is a quadrilateral whose opposite sides are parallel and equal. Square: a special type of parallelogram that has all sides congruent and the opposite ones are parallel too with two equal diagonals. That is, each diagonal cuts the other into two equal parts. Conversely, if the diagonals in a quadrilateral bisect each other, then it is a parallelogram. Solution. As you reshape the parallelogram at the top of the page, note how the opposite sides are always the same length. In the figure below, side BC is equal to AD in length and side AB is equal to CD in length. It differs from rectangle in terms of measure of angles at the corners. Now, The area of a parallelogram is the area occupied by it in a two-dimensional plane. A parallelogram which has all sides congruent can be considered as a rhombus.A parallelogram that has all angles at right angles and the diagonals are equal will be considered as a rectangle. Show that the diagonals of a square are equal and bisect each other at right angles. A parallelogram is a quadrilateral with two pairs of opposite, parallel sides. Contact. In mathematics, the simplest form of the parallelogram law (also called the parallelogram identity) belongs to elementary geometry.It states that the sum of the squares of the lengths of the four sides of a parallelogram equals the sum of the squares of the lengths of the two diagonals. Diagonals: Each diagonal cuts the other diagonal into two equal parts, as in the diagram below. 
So if opposite sides of a quadrilateral are parallel, then the quadrilateral is a parallelogram. Diagonals of a parallelogram are the segments which connect the opposite corners of the figure. The opposite or facing sides of a parallelogram are of equal length and the opposite angles of a parallelogram are of equal measure. Each pair of co-interior angles are supplementary, because two right angles add to a straight angle, so the opposite sides of a rectangle are parallel. If the diagonals of a parallelogram are equal, then it is a rectangle; If the diagonals of a parallelogram are perpendicular to each other, then it is a rhombus; If the diagonals of a parallelogram are equal and perpendicular, then it is a square ∵ In a parallelogram, its diagonals bisect each other at right angles ∴ Its diagonals are perpendicular ∵ Its diagonals are equal → … A trapezium is a quadrilateral which has one pair of opposite sides parallel. In the figure below are 4 types of parallelograms. Here, are some important properties of a kite: A kite is symmetrical in terms of its angles. Now we need to prove diagonals bisect each other i.e. Mathematics, 12.11.2020 01:00. To prove: ABCD is a rectangle Login to view more pages. 10:00 AM to 7:00 PM IST all days. The magnitude or measure of this planar region is called its area. Play with a Parallelogram: NOTE: Squares, Rectangles and Rhombuses are all Parallelograms! Which of the following quadrilateral is a regular quadrilateral? In a parallelogram where one pair of sides are … The congruence of opposite sides and opposite angles is a direct consequence of the Euclidean parallel postulate and neither condition can be proven without … Rhombus. Sorry!, This page is not available for now to bookmark. The properties of parallelograms can also be applied on rhombi. A parallelogram with diagonals unequal, but bisect each other at 900. c. A parallelogram in which adjacent sides are equal and diagonals are equal and bisect each other at 900 LEVEL 2 Q2. A quadrilateral is a square if and only if it is both a rhombus and a rectangle (four equal sides and four equal angles). ← Prev Question Next Question → +2 votes . The opposite sides are equal and parallel; the opposite angles are also equal. The perimeter of a kite is calculated by finding out the sum of the length of each pair of the equal sides of a kite. The parallel sides of a trapezium are called bases whereas non-parallel sides of a trapezium are called legs. All the sides and angles of a scalene trapezium are of different measures. In other words the diagonals intersect each other at the half-way point. The sum of two adjacent angles is equal to 180°. A parallelogram has 4 sides. Let's play with the simulation given below to better understand a parallelogram and its properties. 2 B = 180 Sides of a parallelogram. Q3.The … So we can conclude that if each is equal to x. The area of the trapezium can be calculated by taking the average of the two bases of a trapezium and multiplying by its altitude. diagonals equal parallelogram proof prove; Home. Rhombus. AC = DB BLOG. A parallelogram is a quadrilateral with opposite sides parallel (and therefore opposite angles equal). Pro Lite, NEET The total distance around the outside of a kite is known as the perimeter of a kite. area of parallelogram with diagonals formula . This means that a rectangle is a parallelogram, so: Its opposite sides are equal and parallel. The shape has the rotational symmetry of the order two. The main diagonal of a kite bisects the other diagonal. 
The opposite sides of a parallelogram are equal in length. The diagonals of a parallelogram bisect each other. What are the measures of the remaining angles? To explore these rules governing the sides of a parallelogram use Math Warehouse's interactive parallelogram. 13 can be represented vectorially as . Let PQRS be a parallelogram. Given: Let ABCD be a parallelogram where AC = BD To prove: ABCD is a rectangle Proof: Rectangle is a parallelogram with one angle 90 We prove that one of its interior angles is 90 . or own an. In ∆ADB and ∆BCA, AD = BC | Opposite sides of a parallelogram are equal AB = BA | Common DB = CA | Given ∴ ∆ADB ≅ ∆BCA | SSS congruence rule In the above figure, we can see sides AB and CD are parallel to each other whereas sides BC and AD are non-parallel. T. Tangeton. A rhombus is a special type of parallelogram. (This fact is often exploited by carpenters.) On signing up you are confirming that you have read and agree to We all know that a parallelogram is a convex polygon with 4 edges and 4 vertices. The adjacent angles are supplementary. Solution: As it is mentioned that it is a square so, all sides are equal. Question 4. Scalene Trapezium - All the sides and angles of a scalene trapezium are of different measures. The parallel … Interior angles: Opposite angles are equal as can be seen below. The diagonals are equal and the adjacent sides are equal. Click to see full answer. So, we can say that area of a figure is a number (in some unit) associated with the … In this lesson, we will prove that in a parallelogram, each diagonal bisects the other diagonal. The properties of parallelograms can be applied on rhombi. Note that all the rectangles are parallelograms but the reverse of this is not true. Doubtnut is better on App. The pair of opposite sides are equal and they are equal in length. That diagonal property is separable. Main & Advanced Repeaters, Vedantu Find the perimeter of kite whose sides are 21cm and 15cm. Proof: Rectangle is a parallelogram with one angle 90 If the diagonals of a parallelogram are equal, then it is a rectangle; If the diagonals of a parallelogram are perpendicular to each other, then it is a rhombus; If the diagonals of a parallelogram are equal and perpendicular, then it is a square ∵ In a parallelogram, its diagonals bisect each other at right angles ∴ Its diagonals are perpendicular ∵ Its diagonals are equal → By using rule 3 above Teachoo is free. Opposite angles are equal; Diagonals dividing the parallelogram form two congruent (identical) triangles; Diagonals bisect (cut each other in half) each other; Consecutive angles are supplementary; All interior angles will be right-angles if any one of the angles is a right angle ; However, the individual properties of different parallelograms are the one which set them … Franchisee/Partner … Consider the following figure: Proof: In \(\Delta AEB\) and \(\Delta DEC\), we have: \[\begin{align} where AC = BD Perimeter of a parallelogram = 2(a+b) Here, a and b are the length of the equal sides of the parallelogram. Adjacent angles add up to 180 degrees therefore adjacent angles are supplementary angles. If any of the angles of a parallelogram is a right angle, then its other angles will also be a right angle. Diagonals of a parallelogram The use of vectors is very well illustrated by the following rather famous proof that the diagonals of a parallelogram mutually bisect one another. They have a special property that we will prove here: the diagonals of rectangles are equal in length. 
A rhombus has four sides of equal … Here, a and b are the length of the equal sides of the parallelogram. The trapezium is also known as a trapezoid. So if you have a parallelogram with perpendicular diagonals, it has to be a rhombus. In an isosceles parallelogram, we have, Pair of non-parallel sides are perpendicular, 3. Terms of Service. See Diagonals of a parallelogram for an interactive demonstration of this. $$\triangle ACD\cong \triangle ABC$$ If we have a parallelogram where all sides are congruent then we have what is called a rhombus. Le périmètre et l'aire d'une figure plane permettent de caractériser sa taille. It has two pairs of equal angles. Let's prove to ourselves that if we have two diagonals of a quadrilateral that are bisecting each other, that we are dealing with a parallelogram. Subscribe to our Youtube Channel - https://you.tube/teachoo, Ex 8.1, 2 Triangles can be used to prove this rule about the opposite sides. What do we call parallel sides of the trapezium, 4. Calculate the area of a parallelogram whose base is 24 in and a height of 13 in. Therefore, x=90. Sometimes, the parallelogram is also considered as a trapezoid with two of its sides parallel. if the diagonal of a parallelogram are equal then show that it is a rectangle - Mathematics - TopperLearning.com | t9wm7h22. Here, are the different properties of parallelogram, The opposite sides of a parallelogram are congruent, The opposite angles of a parallelogram are congruent, The consecutive angles of a parallelogram are supplementary, The diagonal of a parallelogram always bisect each other, Each diagonal of a parallelogram bisect it into two congruent triangles. Monday, 14 December 2020 / Published in Uncategorized. With that being said, I was wondering if within parallelogram the diagonals bisect the angles which the meet. Au programme du chapitre, le périmètre et l'aire d'un rectangle, d'un parallélogramme, d'un trapèze, d'un triangle et … The rectangle is a special case of a parallelogram in which measures of its every interior angle is 90 degree. The legs or non parallel sides of an isosceles trapezium are equal in length. The geometrical figures such as square and rectangle are both considered as parallelograms as the opposite sides of the square are parallel to each other and the diagonals of the square bisect each other. This property of a parallelogram is proved true using the ASA congruency condition in a parallelogram. Hence, the area of a parallelogram is 35 sq cm. This proves that opposite angles in any parallelogram are equal. The perimeter of a trapezium is calculated by adding all its sides. The kite can be seen as a pair of congruent triangles with a common base. Suppose, the diagonals intersect each other at an angle y, then the area of the parallelogram is given by: Area = ½ × d 1 × d 2 … What are the Important Formulas of Parallelogram Kite, and Trapezium Which is Mostly Used? They have a special property that we will prove here: the diagonals of rectangles are equal in length. Parallelograms have opposite interior angles that are congruent, and the diagonals of a parallelogram bisect each other. I will point out, though, it is possible to have an irregular quadrilateral that has perpendicular diagonals. Rectangles are a special type of parallelogram, in which all the interior angles measure 90°. A parallelogram in which diagonals are equal, but adjacent sides are not equal. The h is the distance between the two parallel sides which represent the height of the trapezium. 
Prove that if in two triangles,two angles and the included side of one triangle are equal to two angles and the included side of the other triangle,then two triangles are congruent. A line that intersects another line segment and separates it into two equal parts is called a bisector. In Euclidean geometry, a parallelogram is a simple (non-self-intersecting) quadrilateral with two pairs of parallel sides. How many pairs of equal opposite angles. [Image will be Uploaded Soon] If all sides of the parallelogram are equal then the shape we have is called a rhombus. Rectangles are a special type of parallelogram. The three different types of the parallelogram are: The trapezium is a type of quadrilateral with two of its sides parallel. Answers. Thus ABCD is a parallelogram with one angle = 90 degrees. The legs or non parallel sides of an isosceles trapezium are congruent. Prove that the sum of the squares of the diagonals of parallelogram is equal to the sum of the squares of its … A parallelogram is a quadrilateral with two of its sides parallel. Right Trapezium - A right trapezium includes at least two right angles. Example: If the base of a parallelogram is equal to 5 cm and the height is 3 cm, then find its area. Free PDF Download of CBSE Maths Multiple Choice Questions for Class8 with Answers Chapter 3 Understanding Quadrilaterals. This magnitude or measure is always expressed with the help of a number (in some unit) such as 5 cm2 , 8 m2 , 3 hectares etc. Maths MCQs for Class 8 Chapter Wise with Answers PDF Download was Prepared Based on Latest Exam Pattern. In a parallelogram, the opposite sides are equal in length and opposite angles are equal in measure, while […] Contact us on below numbers. To Find the Weight of a Given Body Using Parallelogram Law of Vectors, Solutions – Definition, Examples, Properties and Types, Vedantu This proves that opposite angles in any parallelogram are equal. Vedantu academic counsellor will be calling you shortly for your Online Counselling session. Consider the following figure: Proof: In \(\Delta AEB\) and \(\Delta DEC\), we have: \[\begin{align} asked Sep 22, 2018 in Class IX Maths by muskan15 ( -3,443 points) quadrilaterals Basically you are asking if the area of ABP and BCP are equal. ABCD is a rectangle. It implies that kite is. Teachoo provides the best content available! The diagonals of a parallelogram bisect each other in two equal halves. This is not just any shape as the edges form angles at the points of intersection. High School Math / Homework Help. Each diagonal of a parallelogram separates it into two congruent triangles. (iii) The diagonals are unequal and the adjacent sides are equal. Trigonometry. The diagonals of a parallelogram bisect each other. But sum of these angles is 180 as they are adjacent angles of llgm. Conversely, if the diagonals in a quadrilateral bisect each other, then it is a parallelogram. If a parallelogram has all its sides equal and one of its diagonal is equal to a side, show that its diagonals are in the ratio √3:1 . Trapezium. Diagonals of rectangle bisect each other. From the other. This proves that the opposite angles in a parallelogram are also equal. Parallelogram area is measured by multiplying base into height.The perimeter which is the distance around the edges is measured by multiplying 2 into (base + side length). ABC DCB If the diagonals of a parallelogram are equal, then show that it is a rectangle. In any parallelogram, the diagonals (lines linking opposite corners) bisect each other. 
BC = BC (common side). As per the formula, Area = 5 × 3 = 15 sq. cm. A parallelogram is a 4-sided geometric figure whose opposite sides are parallel and equal to each other in length. The area of a parallelogram relies on its base and height. A parallelogram is a quadrilateral with two of its sides parallel. Solution: given, length of base = 5 cm and height = 3 cm. If we have a quadrilateral where one pair and only one pair of sides are parallel, then we have what is called a trapezoid. The diagonal of the parallelogram divides the shape into two congruent triangles, with AB = DC. Since B + C = 180 and B = C, we get B = 180/2 = 90. The perimeter of a parallelogram is the total distance around its boundary. Students can solve NCERT Class 8 Maths Understanding Quadrilaterals MCQs to check their preparation level. The area of a parallelogram is the area it occupies in a two-dimensional plane. The opposite sides and angles of a parallelogram are equal. In a kite, the angles where the pairs of unequal sides meet are equal. So we are going to assume that the two diagonals bisect each other; therefore 2x = 180. The opposite sides, being parallel and equal, form equal angles on the opposite sides. The smaller diagonal of a kite divides it into two isosceles triangles. When a parallelogram is divided into two triangles, the alternate angles across the common side (here, the diagonal) are equal. Squares, rectangles, and rhombuses are all parallelograms. In the interactive figure, drag any of the vertices a, b, c, and d to reshape the parallelogram and observe how the figure changes. The diagonals of a parallelogram bisect each other, and side AB is equal to CD in length. Note that it is possible for an irregular quadrilateral to have perpendicular diagonals without being a rhombus. In a kite, one diagonal bisects the other into two equal parts. Free PDF download of CBSE Maths Multiple Choice Questions for Class 8 with Answers, Chapter 3, Understanding Quadrilaterals.
Another example: find the area of a parallelogram whose base is 5 cm and whose height is 7 cm; by the formula, Area = b × h = 35 sq. cm, in agreement with the worked result quoted earlier. The parallelogram law (also called the parallelogram identity) belongs to elementary geometry. The diagonals of a parallelogram are the segments which connect its opposite corners, and they intersect each other at the half-way point. A parallelogram with one right angle is a rectangle, and the sum of the interior angles of any quadrilateral is 360 degrees. If the diagonals of a parallelogram are perpendicular, it is a rhombus; if they are equal, it is a rectangle; conversely, the diagonals of a rectangle are equal in length and bisect each other. A trapezium is a quadrilateral with exactly one pair of opposite, parallel sides; its non-parallel sides are called legs, and in an isosceles trapezium the legs are congruent. All sides of a square are equal, its opposite sides are parallel, all of its angles are right angles, and it is the regular quadrilateral. A parallelogram has rotational symmetry of order two. Adjacent angles of a parallelogram are supplementary (A + D = 180°), so if one angle is a right angle then all four are right angles, and if one angle measures 60° then the adjacent angle measures 120°. There are two diagonals in a parallelogram, each dividing it into two congruent triangles, and the diagonals of a rhombus are perpendicular bisectors of each other. In this lesson we also saw that the area of a parallelogram is base times height, whereas the area of a trapezium is its height times the average of the two parallel sides, and the total distance around the outside of a parallelogram is known as its perimeter. To convince yourself that these properties hold, use an interactive parallelogram tool and drag the vertices to reshape the figure.
CommonCrawl
Formation and Evolution of Galaxies: Observations in Infrared and other Wavelengths Contribution of the IAC to space missions: Developments for SPICA and Athena, Herschel post-operations and multi-frequency scientific exploitation IAC contribution to space and far-IR missions: Participation in SPICA, Herschel post-operations and multiwavelength scientific exploration IAC contribution to Space missions: developments for SPICA and Athena and multiwavelength scientific exploitation of Herschel and other extragalactic surveys This IAC research group carries out several extragalactic projects in different spectral ranges, using space as well as ground-based telescopes, to study the cosmological evolution of galaxies and the origin of nuclear activity in active galaxies. The group is a member of the international consortium which built the SPIRE instrument for the Herschel Space Observatory and of the European consortium which is developing the SAFARI instrument for the infrared space telescope SPICA of the space agencies ESA and JAXA. The main projects in 2018 were: a) High-redshift galaxies and quasars with far-infrared emission discovered with the Herschel Space Observatory in the HerMES and Herschel-ATLAS Key Projects. b) Sloan Digital Sky Survey IV: BELLS GALLERY galaxies and very luminous Lyman alpha emitting galaxies. c) Participation in the development of the SAFARI instrument, one of the European contributions to the SPICA infrared space telescope. d) Discovery of the most distant individual star ever observed, in one of the fields of the "HST Frontier Fields". e) Search for supernovae in distant, gravitationally lensed galaxies. f) Several studies with GTC of absorption line systems in the line of sight to red quasars. Pérez Fournon Stefan Geier GRANTECAN S.A. Herschel SPIRE, HerMES, Herschel-ATLAS, SPICA, SAFARI, BELLS GALLERY, SERVS, DEEPDRILL, SDSS-IV and SHARDS Frontier Fields Marques-Chaves et al. (2018) present a study of the submillimeter galaxy HLock01 at z = 2.9574, one of the brightest gravitationally lensed sources discovered in the Herschel Multi-tiered Extragalactic Survey. Detailed analysis of the high signal-to-noise ratio (SNR) rest-frame UV GTC OSIRIS spectrum shows complex kinematics of the gas. Rigopoulou et al. (2018), using new Herschel spectroscopic observations of key far-infrared fine structure lines of the z ∼ 3 galaxy HLSW-01, derive gas-phase metallicities and find that the metallicities of z ∼ 3 submm-luminous galaxies are consistent with solar metallicities and that they appear to follow the mass–metallicity relation expected for z ∼ 3 systems. Cornachione et al. (2018) present a morphological study of 17 lensed Lyα emitter (LAE) galaxies of the BELLS GALLERY sample. The analysis combines the magnification effect of strong galaxy–galaxy lensing with the high resolution of the Hubble Space Telescope to achieve a physical resolution of ~80 pc for this 2 < z < 3 LAE sample. Oteo et al. (2018) report the identification of an extreme protocluster of galaxies in the early universe whose core (nicknamed Distant Red Core, DRC, because of its very red color in Herschel SPIRE bands) is formed by at least 10 dusty star-forming galaxies (DSFGs), spectroscopically confirmed to lie at z = 4.002 via detection of emission lines with ALMA and ATCA. Kelly et al. (2018) report the discovery of an individual star, Icarus, at redshift z = 1.49, magnified by more than ×2,000 by gravitational lensing of the galaxy cluster MACS J1149+222.
Icarus is located in a spiral galaxy that is so far from Earth that its light has taken 9 billion years to reach us. Refereed Properties of slowly rotating asteroids from the Convex Inversion Thermophysical Model Context. Recent results for asteroid rotation periods from the TESS mission showed how strongly previous studies have underestimated the number of slow rotators, revealing the importance of studying those targets. For most slowly rotating asteroids (those with P > 12 h), no spin and shape model is available because of observation selection effects Marciniak, A. et al. 2021A&A...654A..87M Preparing for LSST data. Estimating the physical properties of z < 2.5 main-sequence galaxies Aims: We study how the upcoming Legacy Survey of Space and Time (LSST) data from the Vera C. Rubin Observatory can be employed to constrain the physical properties of normal star-forming galaxies (main-sequence galaxies). Because the majority of the observed LSST objects will have no auxiliary data, we use simulated LSST data and existing real Riccio, G. et al. 2021A&A...653A.107R The UV-brightest Lyman continuum emitting star-forming galaxy We report the discovery of J0121+0025, an extremely luminous and young star-forming galaxy (MUV = -24.11, log[L_Lyα / erg s⁻¹] = 43.8) at z = 3.244 showing copious Lyman continuum (LyC) leakage (f_esc,abs ≈ 40 per cent). High signal-to-noise ratio rest-frame UV spectroscopy with the Gran Telescopio Marques-Chaves, R. et al. 2021MNRAS.507..524M Exploring nine simultaneously occurring transients on April 12th 1950 Nine point sources appeared within half an hour on a region within ∼10 arcmin of a red-sensitive photographic plate taken in April 1950 as part of the historic Palomar Sky Survey. All nine sources are absent on both previous and later photographic images, and absent in modern surveys with CCD detectors which go several magnitudes deeper. We Villarroel, Beatriz et al. 2021NatSR..1112794V The GADOT Galaxy Survey: Dense Gas and Feedback in Herschel-selected Starburst Galaxies at Redshifts 2 to 6 We report the detection of 23 OH+ 1 → 0 absorption, emission, or P-Cygni-shaped lines and CO(J = 9→8) emission lines in 18 Herschel-selected z = 2-6 starburst galaxies with the Atacama Large Millimeter/submillimeter Array and the NOrthern Extended Millimeter Array, taken as part of the Gas And Dust Over cosmic Time Galaxy Survey. We find that the Riechers, Dominik A. et al. 2021ApJ...913..141R Activity of the Jupiter co-orbital comet P/2019 LD2 (ATLAS) observed with OSIRIS at the 10.4 m GTC Context. The existence of comets with heliocentric orbital periods close to that of Jupiter (i.e., co-orbitals) has been known for some time. Comet 295P/LINEAR (2002 AR2) is a well-known quasi-satellite of Jupiter. However, their orbits are not long-term stable, and they may eventually experience flybys with Jupiter at very close range, close Licandro, J. et al. 2021A&A...650A..79L Detection of spectral variations of Anomalous Microwave Emission with QUIJOTE and C-BASS Anomalous Microwave Emission (AME) is a significant component of Galactic diffuse emission in the frequency range 10–60 GHz and a new window into the properties of sub-nanometre-sized grains in the interstellar medium. We investigate the morphology of AME in the ≈10° diameter λ Orionis ring by combining intensity data from the QUIJOTE Cepeda-Arroita, R. et al.
2021MNRAS.503.2927C Detection of an ionized gas outflow in the extreme UV-luminous star-forming galaxy BOSS-EUVLG1 at z = 2.47 BOSS-EUVLG1 is the most ultraviolet (UV) and Lyα luminous galaxy to be going through a very active starburst phase detected thus far in the Universe. It is forming stars at a rate of 955 ± 118 M⊙ yr⁻¹. We report the detection of a broad Hα component carrying 25% of the total Hα flux. The broad Hα line traces a fast and massive ionized gas outflow Álvarez-Márquez, J. et al. 2021A&A...647A.133A A hyperluminous obscured quasar at a redshift of z ≈ 4.3 In this work we report the discovery of the hyperluminous galaxy HELP_J100156.75+022344.7 at a photometric redshift of z ≈ 4.3. The galaxy was discovered in the Cosmological Evolution Survey (COSMOS) field, one of the fields studied by the Herschel Extragalactic Legacy Project (HELP). We present the spectral energy distribution (SED) of the Efstathiou, Andreas et al. 2021MNRAS.503L..11E 28-40 GHz variability and polarimetry of bright compact sources in the QUIJOTE cosmological fields We observed 51 sources in the Q-U-I JOint TEnerife (QUIJOTE) cosmological fields that were brighter than 1 Jy at 30 GHz in the Planck Point Source Catalogue (version 1), with the Very Large Array at 28-40 GHz, in order to characterize their high-radio-frequency variability and polarization properties. We find a roughly lognormal distribution of Perrott, Yvette C. et al. 2021MNRAS.502.4779P Probing the existence of a rich galaxy overdensity at z = 5.2 We report the results of a pilot spectroscopic program of a region at z = 5.2 in the GOODS-N field containing an overdensity of galaxies around the well-known submillimetre galaxy (SMG) HDF850.1. We have selected candidate cluster members from the 25 medium-band optical photometric catalogue of the project SHARDS (Survey for High-z Absorption Red Calvi, Rosa et al. Close-up view of a luminous star-forming galaxy at z = 2.95 Exploiting the sensitivity of the IRAM NOrthern Extended Millimeter Array (NOEMA) and its ability to process large instantaneous bandwidths, we have studied the morphology and other properties of the molecular gas and dust in the star forming galaxy, H-ATLAS J131611.5+281219 (HerBS-89a), at z = 2.95. High angular resolution (0.″3) images reveal a Berta, S. et al. 2021A&A...646A.122B Rise of the Titans: Gas Excitation and Feedback in a Binary Hyperluminous Dusty Starburst Galaxy at z ∼ 6 We report new observations toward the hyperluminous dusty starbursting major merger ADFS-27 (z = 5.655), using the Australia Telescope Compact Array (ATCA) and the Atacama Large Millimeter/submillimeter Array (ALMA). We detect CO (J = 2 → 1), CO (J = 8 → 7), CO (J = 9 → 8), CO (J = 10 → 9), and H2O (3₁₂ → 2₂₁) emission, and a P Cygni-shaped OH+ (11 2021ApJ...907...62R PSR B0656+14: the unified outlook from the infrared to X-rays We report detection of PSR B0656+14 with the Gran Telescopio Canarias in narrow optical F657, F754, F802, and F902 and near-infrared JHKs bands. The pulsar detection in the Ks band extends its spectrum to 2.2 μm and confirms its flux increase towards the infrared. We also present a thorough analysis of the optical spectrum obtained by us with Zharikov, S. et al. 2021MNRAS.502.2005Z First survey of phase curves of V-type asteroids The V-type asteroids are of major scientific interest as they may sample multiple differentiated planetesimals. Determination of their physical properties is crucial for understanding the diversity and multiplicity of planetesimals.
Previous studies have suggested distinct polarimetric behaviours for the V-type asteroids. Similarly to phase Oszkiewicz, Dagmara et al. 2021Icar..35714158O A Spitzer survey of Deep Drilling Fields to be targeted by the Vera C. Rubin Observatory Legacy Survey of Space and Time The Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST) will observe several Deep Drilling Fields (DDFs) to a greater depth and with a more rapid cadence than the main survey. In this paper, we describe the 'DeepDrill' survey, which used the Spitzer Space Telescope Infrared Array Camera (IRAC) to observe three of the four currently Lacy, M. et al. 2021MNRAS.501..892L Tracing the evolution of dust-obscured activity using sub-millimetre galaxy populations from STUDIES and AS2UDS We analyse the physical properties of 121 SNR ≥ 5 sub-millimetre galaxies (SMGs) from the STUDIES 450 μm survey. We model their UV-to-radio spectral energy distributions using MAGPHYS+photo-z and compare the results to similar modelling of an 850 μm-selected SMG sample from AS2UDS, to understand the fundamental physical differences between the two Dudzevičiūtė, U. et al. 2021MNRAS.500..942D Spectroscopic classification of a complete sample of astrometrically-selected quasar candidates using Gaia DR2 Here we explore the efficiency and fidelity of a purely astrometric selection of quasars as point sources with zero proper motions in the Gaia data release 2 (DR2). We have built a complete candidate sample including 104 Gaia-DR2 point sources, which are brighter than 20th magnitude in the Gaia G-band within one degree of the north Galactic pole Heintz, K. E. et al. Magnetism Science with the Square Kilometre Array The Square Kilometre Array (SKA) will answer fundamental questions about the origin, evolution, properties, and influence of magnetic fields throughout the Universe. Magnetic fields can illuminate and influence phenomena as diverse as star formation, galactic dynamics, fast radio bursts, active galactic nuclei, large-scale structure, and Dark Bracco, Andrea et al. 2020Galax...8...53H Spin rates of V-type asteroids Context. Basaltic V-type asteroids play a crucial role in studies of Solar System evolution and planetesimal formation. Comprehensive studies of their physical, dynamical, and statistical properties provide insight into these processes. Thanks to wide surveys, currently there are numerous known V-type and putative V-type asteroids, allowing a 2020A&A...643A.117O OMAIRA GONZÁLEZ MARTÍN: "The Canary Observatories have been essential in the advance of the study of nuclear activity in galaxies" Physical properties and evolution of Massive Stars This project aims at the search for, observation, and analysis of massive stars in nearby galaxies to provide a solid empirical ground to understand their physical properties as a function of those key parameters that govern their evolution (i.e. mass, spin, metallicity, mass loss, and binary interaction). Massive stars are central objects to Simón Díaz Milky Way and Nearby Galaxies The general aim of the project is to research the structure, evolutionary history and formation of galaxies through the study of their resolved stellar populations, both from photometry and spectroscopy.
The group's research concentrates on the nearest objects, namely the Local Group galaxies, including the Milky Way and M33, under the hypothesis López Corredoira CHromospheric magnetic fields in fLAREs and their evolution CHLARE This project aims to study the variations of the solar magnetic field in flares, the most energetic events in our solar system. Flares accelerate charged particles into space, which may adversely affect satellites and Earth's technology. Despite their clear importance for today's technology, the timing and positioning when flares occur are so far Christoph Alexander Kuckein
CommonCrawl
Nutrition & Metabolism The glucose ketone index calculator: a simple tool to monitor therapeutic efficacy for metabolic management of brain cancer Joshua J Meidenbauer1, Purna Mukherjee1 & Thomas N Seyfried1 Nutrition & Metabolism volume 12, Article number: 12 (2015) Metabolic therapy using ketogenic diets (KD) is emerging as an alternative or complementary approach to the current standard of care for brain cancer management. This therapeutic strategy targets the aerobic fermentation of glucose (Warburg effect), which is the common metabolic malady of most cancers including brain tumors. The KD targets tumor energy metabolism by lowering blood glucose and elevating blood ketones (β-hydroxybutyrate). Brain tumor cells, unlike normal brain cells, cannot use ketone bodies effectively for energy when glucose becomes limiting. Although plasma levels of glucose and ketone bodies have been used separately to predict the therapeutic success of metabolic therapy, daily glucose levels can fluctuate widely in brain cancer patients. This can create difficulty in linking changes in blood glucose and ketones to efficacy of metabolic therapy. A program was developed (Glucose Ketone Index Calculator, GKIC) that tracks the ratio of blood glucose to ketones as a single value. We have termed this ratio the Glucose Ketone Index (GKI). The GKIC was used to compute the GKI for data published on blood glucose and ketone levels in humans and mice with brain tumors. The results showed a clear relationship between the GKI and therapeutic efficacy using ketogenic diets and calorie restriction. The GKIC is a simple tool that can help monitor the efficacy of metabolic therapy in preclinical animal models and in clinical trials for malignant brain cancer and possibly other cancers that express aerobic fermentation. Dietary therapy using ketogenic diets is emerging as an alternative or complementary approach to the current standard of care for brain cancer management. Prognosis remains poor for malignant gliomas in both children and adults [1-5]. Although genetic heterogeneity is extensive in malignant gliomas [6-8], the Warburg effect (aerobic fermentation of glucose) is a common metabolic malady expressed in nearly all neoplastic cells of these and other malignant tumors [9-11]. Aerobic fermentation (Warburg effect) is necessary to compensate for the insufficiency of mitochondrial oxidative phosphorylation in the cells of most tumors [9,12-14]. Mitochondrial structure and function are abnormal in malignant gliomas from both mice and humans [15-19]. Normal brain cells gradually transition from the metabolism of glucose to the metabolism of ketone bodies (primarily β-hydroxybutyrate and acetoacetate) for energy when circulating glucose levels become limiting [20,21]. Ketone bodies are derived from fatty acids in the liver and are produced to compensate for glucose depletion during periods of food restriction [20]. Ketone bodies bypass the glycolytic pathway in the cytoplasm and are metabolized directly to acetyl CoA in the mitochondria [22]. Tumor cells are less capable than normal cells of metabolizing ketone bodies for energy due to their mitochondrial defects [2,12,23]. Therapies that can lower glucose and elevate ketone bodies will place more energy stress on the tumor cells than on the normal brain cells [12,24]. This therapeutic strategy is illustrated conceptually in Figure 1, as we previously described [25].
However, daily activities and emotional stress can cause blood glucose levels to vary, making it difficult for some people to enter the predicted zone of metabolic management [26]. A more stable measure of systemic energy metabolism is therefore needed to predict metabolic management of tumor growth. The ratio of blood glucose to the blood ketone body β-hydroxybutyrate (β-OHB) is a clinical biomarker that could provide a better indication of metabolic management than could measurement of either blood glucose or ketone body levels alone. Relationship of plasma glucose and ketone body levels to brain cancer management. The glucose and ketone (β-OHB) values are within normal physiological ranges under fasting conditions in humans. We refer to this state as the zone of metabolic management. As blood glucose falls and blood ketones rise, an individual is predicted to reach the zone of metabolic management. Tumor progression is predicted to be slower within the metabolic target zone than outside of the zone. This can be tracked utilizing the Glucose Ketone Index. The dashed lines signify the variability that could exist among individuals in reaching a GKI associated with therapeutic efficacy. The 'Glucose Ketone Index' (GKI) was created to track the zone of metabolic management during brain tumor therapy. The GKI is a biomarker that refers to the molar ratio of circulating glucose over β-OHB, which is the major circulating ketone body. A mathematical tool called the Glucose Ketone Index Calculator (Additional file 1) was developed that can calculate the GKI and monitor changes in this parameter on a daily basis (Equation 1). The GKIC generates a single value that can assess the relationship of the major fermentable tumor fuel (glucose) to the non-fermentable fuel (ketone bodies). Because many commercial blood glucose monitors give outputs in mg/dL, rather than millimolar (mM), the GKIC converts the units to millimolar. Included in the program is a unit converter for both glucose and ketones (β-OHB), which can convert glucose and ketone values from mg/dL to mM and from mM to mg/dL (Equations 2, 3, 4, 5). The molecular weights used for calculations in the GKIC are 180.16 g/mol for glucose and 104.1 g/mol for β-OHB, which is the major circulating ketone body measured in most commercial testing kits. The unit converter allows for compatibility with a variety of glucose and ketone testing monitors.
$$ \left[\mathrm{Glucose\ Ketone\ Index}\right]=\frac{\left[\mathrm{Glucose}\ \left(\mathrm{mg/dL}\right)\right]/18.016\ \left(\mathrm{g}\cdot\frac{\mathrm{dL}}{\mathrm{mol}}\right)}{\left[\mathrm{Ketone}\ \left(\mathrm{mM}\right)\right]} $$
$$ \left[\mathrm{Glucose}\ \left(\mathrm{mg/dL}\right)\right]=\left[\mathrm{Glucose}\ \left(\mathrm{mM}\right)\right]\times 18.016\ \left(\mathrm{g}\cdot\frac{\mathrm{dL}}{\mathrm{mol}}\right) $$
$$ \left[\mathrm{Glucose}\ \left(\mathrm{mM}\right)\right]=\frac{\left[\mathrm{Glucose}\ \left(\mathrm{mg/dL}\right)\right]}{18.016\ \left(\mathrm{g}\cdot\frac{\mathrm{dL}}{\mathrm{mol}}\right)} $$
$$ \left[\mathrm{Ketone}\ \left(\mathrm{mg/dL}\right)\right]=\left[\mathrm{Ketone}\ \left(\mathrm{mM}\right)\right]\times 10.41\ \left(\mathrm{g}\cdot\frac{\mathrm{dL}}{\mathrm{mol}}\right) $$
$$ \left[\mathrm{Ketone}\ \left(\mathrm{mM}\right)\right]=\frac{\left[\mathrm{Ketone}\ \left(\mathrm{mg/dL}\right)\right]}{10.41\ \left(\mathrm{g}\cdot\frac{\mathrm{dL}}{\mathrm{mol}}\right)} $$
The GKIC can set a target GKI value to help track therapeutic status. Daily GKI values can be plotted to allow visual tracking of progress against an initial index value over monthly periods. Entrance into the zone of metabolic management would be seen when the GKI value falls below the set target value (as illustrated in Figure 2). Additionally, the GKIC can track the number of days that an individual falls within the predicted target zone. The Glucose Ketone Index Calculator tracking an individual's GKI. The individual glucose and ketone values are displayed, along with the corresponding GKI values. The GKI values are plotted over the course of a month (black line), whereas the GKI target value (1.0) is plotted as a red line. We consider GKI values approaching 1.0 as potentially most therapeutic. The GKIC was used to estimate the GKI for humans and mice with brain tumors that were treated with either calorie restriction or ketogenic diets from five previously published reports (Table 1). The first clinical study evaluated two pediatric patients; one with an anaplastic astrocytoma, and another with a cerebellar astrocytoma [27]. Both individuals were placed on a ketogenic diet for eight weeks. During the 8-week treatment period, GKI dropped from about 27.5 to about 0.7 – 1.1 in the patients. The patient with the anaplastic astrocytoma, who did not have a response to prior chemotherapy, had a 21.7% reduction in fluorodeoxyglucose uptake at the tumor site (no chemotherapy during diet). The patient with the cerebellar astrocytoma received standard chemotherapy concomitant with the ketogenic diet. Fluorodeoxyglucose uptake at the tumor site in this patient was reduced by 21.8%. Quality of life was markedly improved in both children after initiation of the KD [27]. Table 1 Low Glucose Ketone Index values are related to improved prognoses in humans and mice with brain tumors The second clinical study evaluated a 65-yr-old woman with glioblastoma multiforme [28]. The patient was placed on a calorie-restricted ketogenic diet (600 kcal/day) concomitant with standard chemotherapy and radiation, without dexamethasone, for eight weeks. The patient's GKI decreased from 37.5 to 1.4 in the first three weeks of the diet.
No discernible brain tumor tissue was detected with MRI in the patient at the end of eight weeks of the calorie restricted ketogenic diet. It is also important to mention that the patient was free of symptoms while she adhered to the KD. Tumor recurrence occurred 10 weeks after suspension of the ketogenic diet. The third study, a preclinical mouse study, evaluated the effects of diets on an orthotopically implanted CT-2A syngeneic mouse astrocytoma in C57BL/6 J mice [29]. Mice were implanted with tumors and fed one of four diets for 13 days: 1) standard diet fed unrestricted, 2) calorie restricted standard diet, 3) ketogenic diet fed unrestricted, or 4) calorie restricted ketogenic diet. The mice fed a standard unrestricted diet and a ketogenic diet had rapid tumor growth after 13 days, with GKIs of 15.2 and 11.4, respectively. The group fed a calorie restricted standard diet had a significant decrease in tumor volume after 13 days, along with a GKI of 3.7. The group fed a calorie restricted ketogenic diet also had a significant decrease in tumor volume, along with a GKI of 4.4. The fourth study evaluated the effects of diets on an orthotopically implanted CT-2A syngeneic mouse astrocytoma in C57BL/6 J mice and an orthotopically implanted U87-MG human xenograft glioma in BALBc/6-severe combined immunodeficiency (SCID) mice [30]. Tumors were implanted and grown in the mice for three days prior to diet initiation. After three days, mice were maintained on one of three diets for 8 days: 1) standard diet fed unrestricted, 2) ketogenic diet fed unrestricted, or 3) calorie restricted ketogenic diet. Tumor weights at the end of 8 days were reduced only in the mice that were fed a calorie restricted diet and experienced a significant decrease in GKI. Groups of mice that did not have a reduction in tumor weight had GKIs that ranged from 9.6 to 70.0. The groups of mice that had a reduction in tumor weight had GKIs that ranged from 1.8 to 4.4. The fifth study evaluated the effects of diet and radiation on mouse GL261 glioma implanted intracranially in albino C57BL/6 J mice [31]. The mice were implanted with tumors, and three days later they were placed on either a standard diet fed unrestricted or a ketogenic diet fed unrestricted. Mice were also assigned to groups that either received or did not receive concomitant radiation therapy. Without radiation, mice that were fed a ketogenic diet had a GKI of 6.4 and had a median survival of 28 days, compared to a GKI of 50.0 and median survival of 23 days for the standard diet group. With radiation, mice that were fed a ketogenic diet had a GKI of 5.7 and a median survival of 200+ days, compared to a GKI of 32.3 and median survival of 41 days for the standard diet group. In addition to these studies, Table 2 shows a clear association of the GKI to the therapeutic action of calorie restriction against distal invasion, proliferation, and angiogenesis in the VM-M3 model of glioblastoma. The data for the GKI in Table 2 were computed from those mice that were measured for both glucose and ketones in comparison with the other biomarkers as previously described [32]. When viewed collectively, the results from the published reports show a clear relationship between the GKI and efficacy of metabolic therapy using either the KD or calorie restriction. Therapeutic efficacy of the KD or calorie restriction is greater with lower GKI values than with higher values. The results suggest that GKI levels that approach 1.0 are therapeutic for managing brain tumor growth.
Further studies will be needed to determine those GKI values that can most accurately predict efficacy during metabolic therapy involving diet or procedures that lower glucose and elevate ketone bodies. Table 2 Linking the Glucose Ketone Index (GKI) to the therapeutic action of calorie restriction against distal invasion, proliferation, and angiogenesis in the VM-M3 model of glioblastoma We present evidence showing that the GKI can predict success for brain cancer management in humans and mice using metabolic therapies that lower blood glucose and elevate blood ketone levels. Besides ketogenic diets, other dietary therapies, such as calorie restriction, low carbohydrate diets, and therapeutic fasting, can also lower blood glucose and elevate β-OHB levels and can have anti-tumor effects [24,33-38]. The GKIC was developed to more reliably and simply predict therapeutic management for brain cancer patients under these dietary states than could measurements of either blood glucose or ketones alone. The data presented in Tables 1 and 2 support this prediction. Although the GKI is simple in concept, it has not been used previously to gauge success of various metabolic therapies based on inverse changes in glucose and ketone body metabolism. As brain tumor cells are dependent on glucose for survival and cannot effectively use ketone bodies as an alternative fuel, a zone of metabolic management can be achieved under conditions of low glucose and elevated ketones. Ketone bodies also prevent neurological symptoms associated with hypoglycemia, such as neuroglycopenia, which allows blood glucose levels to be lowered even further [22,39]. Hence, ketone body metabolism can protect normal brain cells under conditions that target tumor cells [40]. The zone of metabolic management is considered the therapeutic state that places maximal metabolic stress on tumor cells while protecting the health and vitality of normal cells [41]. We have presented substantial data showing that the GKI is validated in several mouse studies. We feel that prospective validation of the GKIC will be obtained from future studies using ketogenic diet therapy in humans with brain cancer and possibly other cancers that cannot effectively metabolize β-OHB for energy, and depend upon glucose for survival. The GKI can be useful in determining the success of dietary therapies that shift glucose- and lactate-based metabolism to ketone-based metabolism. As a shift toward ketone-based metabolism underscores the utility of many dietary therapies in treating metabolic diseases [41,42], the GKI can be used in determining the therapeutic success of shifting metabolism in individual patients. The GKI can therefore be used to study the effectiveness of dietary therapy in clinical trials of patients under a range of dietary conditions, with a composite primary endpoint consisting of lowering the subjects' GKI. This will allow investigators to parse the effects of successful dietary intervention on disease outcome from unsuccessful dietary intervention. Recent clinical studies assessing the effects of dietary therapy on brain cancer progression have not measured both blood glucose and ketone bodies throughout the study periods [43,44]. Future clinical studies that intend to assess the effect of dietary therapy on brain tumor progression should measure both blood glucose and ketones, as these markers are necessary to connect dietary therapy to therapeutic efficacy.
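To make the arithmetic of Equations 1–5 concrete, a minimal Python sketch of the index and its unit conversions follows. This is only an illustration of the published formulas, not the authors' GKIC spreadsheet itself; the function names and the example reading are illustrative choices, while the 1.0 target mirrors the red line of Figure 2.

# Minimal sketch (Python) of the GKI arithmetic in Equations 1-5.
# Dividing mg/dL by 18.016 converts glucose to mM (180.16 g/mol, scaled
# for dL); 10.41 plays the same role for beta-hydroxybutyrate (104.1 g/mol).
GLUCOSE_FACTOR = 18.016
KETONE_FACTOR = 10.41

def glucose_mgdl_to_mm(glucose_mgdl):
    return glucose_mgdl / GLUCOSE_FACTOR          # Equation 3

def ketone_mgdl_to_mm(ketone_mgdl):
    return ketone_mgdl / KETONE_FACTOR            # Equation 5

def gki(glucose_mgdl, ketone_mm):
    # Equation 1: molar ratio of circulating glucose to beta-OHB
    return glucose_mgdl_to_mm(glucose_mgdl) / ketone_mm

def in_management_zone(glucose_mgdl, ketone_mm, target=1.0):
    # True when a reading sits at or below the GKI target line of Figure 2
    return gki(glucose_mgdl, ketone_mm) <= target

# Hypothetical 2-3 hour postprandial reading: 90 mg/dL glucose, 5.0 mM ketones
print(round(gki(90.0, 5.0), 2))                   # -> 1.0, at the target

Logging the output of gki() twice daily, as the text recommends, reproduces the kind of monthly tracking curve shown in Figure 2.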
Preclinical studies have demonstrated a clear linkage between GKI and therapeutic efficacy. The GKI will be an important biomarker to measure in future rigorously designed and powered clinical studies in order to demonstrate whether there is a linkage between GKI and therapeutic efficacy, as the few case reports in the literature suggest. The zone of metabolic management is likely entered with GKI values between 1 and 2 for humans. Optimal management is predicted for values approaching 1.0, and blood glucose and ketone values should be measured 2–3 hours postprandial, twice a day if possible. This will allow individuals to connect their dietary intake to changes in their GKI. As an example, Figure 2 uses the GKIC to track the GKI values of an individual on a ketogenic diet, with a target GKI of 1.0. When an individual's GKI falls below the line denoting the target metabolic state, the zone of metabolic management is achieved. Further studies will be needed to establish the validity of the predicted zone of management. It has not escaped our attention that the GKIC could have utility not only for managing brain cancer and possibly other cancers dependent on glucose and aerobic fermentation for survival, but also for managing other diseases or conditions where the ratio of glucose to ketone bodies could be therapeutic. Such diseases and conditions include Alzheimer's disease, Parkinson's disease, traumatic brain injury, chronic inflammatory disease, and epilepsy [41]. For example, the ketogenic diet has long been recognized as an effective therapeutic strategy for managing refractory seizures in children [45,46]. Therapeutic success in managing generalized idiopathic epilepsy in epileptic EL mice can also be seen when applying the GKI to the data presented on glucose and β-OHB [47]. Healthy individuals can utilize the GKIC to help prevent diseases and disorders, and to manage general wellness. Further studies will be needed to determine the utility of the GKIC for predicting therapeutic success in the metabolic management of disease. Fisher PG, Buffler PA. Malignant gliomas in 2005: where to GO from here? JAMA. 2005;293:615–7. Seyfried TN, Marsh J, Mukherjee P, Zuccoli G, D'Agostino DP. Could metabolic therapy become a viable alternative to the standard of care for managing glioblastoma? US Neurology. 2014;10:48–55. Armstrong GT, Phillips PC, Rorke-Adams LB, Judkins AR, Localio AR, Fisher MJ. Gliomatosis cerebri: 20 years of experience at the Children's Hospital of Philadelphia. Cancer. 2006;107:1597–606. Artico M, Cervoni L, Celli P, Salvati M, Palma L. Supratentorial glioblastoma in children: a series of 27 surgically treated cases. Childs Nerv Syst. 1993;9:7–9. Harbaugh KS, Black PM. Strategies in the surgical management of malignant gliomas. Semin Surg Oncol. 1998;14:26–33. Johnson BE, Mazor T, Hong C, Barnes M, Aihara K, McLean CY, et al. Mutational analysis reveals the origin and therapy-driven evolution of recurrent glioma. Science. 2014;343:189–93. Brennan CW, Verhaak RG, McKenna A, Campos B, Noushmehr H, Salama SR, et al. The somatic genomic landscape of glioblastoma. Cell. 2013;155:462–77. Patel AP, Tirosh I, Trombetta JJ, Shalek AK, Gillespie SM, Wakimoto H, et al. Single-cell RNA-seq highlights intratumoral heterogeneity in primary glioblastoma. Science. 2014;344:1396–401. Ferreira LM. Cancer metabolism: the Warburg effect today. Exp Mol Pathol. 2010;89:372–80. Seyfried TN, Mukherjee P. Targeting energy metabolism in brain cancer: review and hypothesis. Nutr Metab (Lond). 2005;2:30.
Seyfried TN, Flores R, Poff AM, D'Agostino DP, Mukherjee P. Metabolic therapy: a new paradigm for managing malignant brain cancer. Cancer Lett. 2014;356:289–300. Seyfried TN, Flores RE, Poff AM, D'Agostino DP. Cancer as a metabolic disease: implications for novel therapeutics. Carcinogenesis. 2014;35:515–27. Warburg O. On the origin of cancer cells. Science. 1956;123:309–14. Warburg O. On the respiratory impairment in cancer cells. Science. 1956;124:269–70. Kiebish MA, Han X, Cheng H, Chuang JH, Seyfried TN. Cardiolipin and electron transport chain abnormalities in mouse brain tumor mitochondria: lipidomic evidence supporting the Warburg theory of cancer. J Lipid Res. 2008;49:2545–56. Arismendi-Morillo GJ, Castellano-Ramirez AV. Ultrastructural mitochondrial pathology in human astrocytic tumors: potentials implications pro-therapeutics strategies. J Electron Microsc (Tokyo). 2008;57:33–9. Deighton RF, Le Bihan T, Martin SF, Gerth AM, McCulloch M, Edgar JM, et al. Interactions among mitochondrial proteins altered in glioblastoma. J Neurooncol. 2014;118:247–56. Oudard S, Boitier E, Miccoli L, Rousset S, Dutrillaux B, Poupon MF. Gliomas are driven by glycolysis: putative roles of hexokinase, oxidative phosphorylation and mitochondrial ultrastructure. Anticancer Res. 1997;17:1903–11. Sipe JC, Herman MM, Rubinstein LJ. Electron microscopic observations on human glioblastomas and astrocytomas maintained in organ culture systems. Am J Pathol. 1973;73:589–606. Cahill Jr GF, Veech RL. Ketoacids? Good medicine? Trans Am Clin Climatol Assoc. 2003;114:149–61. discussion 162–143. Krebs HA, Williamson DH, Bates MW, Page MA, Hawkins RA. The role of ketone bodies in caloric homeostasis. Adv Enzyme Reg. 1971;9:387–409. Veech RL, Chance B, Kashiwaya Y, Lardy HA, Cahill Jr GF. Ketone bodies, potential therapeutic uses. IUBMB Life. 2001;51:241–7. Fine EJ, Miller A, Quadros EV, Sequeira JM, Feinman RD. Acetoacetate reduces growth and ATP concentration in cancer cell lines which over-express uncoupling protein 2. Cancer Cell Int. 2009;9:14. Klement RJ, Kammerer U. Is there a role for carbohydrate restriction in the treatment and prevention of cancer? Nutr Metab. 2011;8:75. Seyfried TN, Kiebish M, Mukherjee P, Marsh J. Targeting energy metabolism in brain cancer with calorically restricted ketogenic diets. Epilepsia. 2008;49 Suppl 8:114–6. Goetsch VL, Wiebe DJ, Veltum LG, Van Dorsten B. Stress and blood glucose in type II diabetes mellitus. Behav Res Ther. 1990;28:531–7. Nebeling LC, Miraldi F, Shurin SB, Lerner E. Effects of a ketogenic diet on tumor metabolism and nutritional status in pediatric oncology patients: two case reports. J Am Coll Nutr. 1995;14:202–8. Zuccoli G, Marcello N, Pisanello A, Servadei F, Vaccaro S, Mukherjee P, et al. Metabolic management of glioblastoma multiforme using standard therapy together with a restricted ketogenic diet: Case Report. Nutr Metab (Lond). 2010;7:33. Seyfried TN, Sanderson TM, El-Abbadi MM, McGowan R, Mukherjee P. Role of glucose and ketone bodies in the metabolic control of experimental brain cancer. Br J Cancer. 2003;89:1375–82. Zhou W, Mukherjee P, Kiebish MA, Markis WT, Mantis JG, Seyfried TN. The calorically restricted ketogenic diet, an effective alternative therapy for malignant brain cancer. Nutr Metab (Lond). 2007;4:5. Abdelwahab MG, Fenton KE, Preul MC, Rho JM, Lynch A, Stafford P, et al. The ketogenic diet is an effective adjuvant to radiation therapy for the treatment of malignant glioma. PLoS One. 2012;7:e36197. 
Shelton LM, Huysentruyt LC, Mukherjee P, Seyfried TN. Calorie restriction as an anti-invasive therapy for malignant brain cancer in the VM mouse. ASN Neuro. 2010;2:e00038. Fine EJ, Segal-Isaacson CJ, Feinman RD, Herszkopf S, Romano MC, Tomuta N, et al. Targeting insulin inhibition as a metabolic therapy in advanced cancer: a pilot safety and feasibility dietary trial in 10 patients. Nutrition. 2012;28:1028–35. Klement RJ. Calorie or carbohydrate restriction? The ketogenic diet as another option for supportive cancer treatment. Oncologist. 2013;18:1056. Klement RJ, Champ CE. Calories, carbohydrates, and cancer therapy with radiation: exploiting the five R's through dietary manipulation. Cancer Metastasis Rev. 2014;33:217–29. Longo VD, Mattson MP. Fasting: molecular mechanisms and clinical applications. Cell Metab. 2014;19:181–92. Raffaghello L, Safdie F, Bianchi G, Dorff T, Fontana L, Longo VD. Fasting and differential chemotherapy protection in patients. Cell Cycle. 2010;9:4474–6. Woolf EC, Scheck AC. The ketogenic diet for the treatment of malignant glioma. J Lipid Res. 2015;56:5–10. Willemsen MA, Soorani-Lunsing RJ, Pouwels E, Klepper J. Neuroglycopenia in normoglycaemic patients, and the potential benefit of ketosis. Diabet Med. 2003;20:481–2. Maalouf M, Rho JM, Mattson MP. The neuroprotective properties of calorie restriction, the ketogenic diet, and ketone bodies. Brain Res Rev. 2009;59:293–315. Seyfried TN. Ketone strong: emerging evidence for a therapeutic role of ketone bodies in neurological and neurodegenerative diseases. J Lipid Res. 2014;55:1815–17. Meidenbauer JJ, Ta N, Seyfried TN. Influence of a ketogenic diet, fish-oil, and calorie restriction on plasma metabolites and lipids in C57BL/6 J mice. Nutr Metab. 2014;11:23. Rieger J, Bahr O, Maurer GD, Hattingen E, Franz K, Brucker D, et al. ERGO: a pilot study of ketogenic diet in recurrent glioblastoma. Int J Oncol. 2014;44:1843–52. Champ CE, Palmer JD, Volek JS, Werner-Wasik M, Andrews DW, Evans JJ, et al. Targeting metabolism with a ketogenic diet during the treatment of glioblastoma multiforme. J Neurooncol. 2014;117:125–31. Freeman JM, Kossoff EH. Ketosis and the ketogenic diet, 2010: advances in treating epilepsy and other disorders. Adv Pediatr. 2010;57:315–29. Hartman AL, Vining EP. Clinical aspects of the ketogenic diet. Epilepsia. 2007;48:31–42. Mantis JG, Centeno NA, Todorova MT, McGowan R, Seyfried TN. Management of multifactorial idiopathic epilepsy in EL mice with caloric restriction and the ketogenic diet: role of glucose and ketone bodies. Nutr Metab (Lond). 2004;1:11. This work was supported, in part, by the National Institutes of Health (HD-39722, NS-55195 and CA-102135), a grant from the American Institute of Cancer Research, and the Boston College Expense Fund (TNS). The authors would like to thank Madam Trudy Dupont for providing valuable data to develop the GKIC. Biology Department, Boston College, Chestnut Hill, MA, 02467, USA Joshua J Meidenbauer, Purna Mukherjee & Thomas N Seyfried Joshua J Meidenbauer Purna Mukherjee Thomas N Seyfried Correspondence to Thomas N Seyfried. JM and TNS developed the Glucose Ketone Index Calculator and wrote the paper together. PM provided the data for Table 2 and helped with editing and data presentation. All authors read and approved the final manuscript. Instructions for calculating the GKI using a blood glucose and ketone monitor.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (https://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data. Meidenbauer, J.J., Mukherjee, P. & Seyfried, T.N. The glucose ketone index calculator: a simple tool to monitor therapeutic efficacy for metabolic management of brain cancer. Nutr Metab (Lond) 12, 12 (2015). https://doi.org/10.1186/s12986-015-0009-2 Accepted: 24 February 2015 Beta-hydroxybutyrate Calorie restriction Metabolic therapy Warburg effect Ketone bodies
CommonCrawl
Automated extraction and validation of children's gait parameters with the Kinect Saeid Motiian1, Paola Pergami2, Keegan Guffey3, Corrie A Mancinelli4 & Gianfranco Doretto1 BioMedical Engineering OnLine volume 14, Article number: 112 (2015) Gait analysis for therapy regimen prescription and monitoring requires patients to physically access clinics with specialized equipment. The timely availability of such infrastructure at the right frequency is especially important for small children. Besides being very costly, this is a challenge for many children living in rural areas. This is why this work develops a low-cost, portable, and automated approach for in-home gait analysis, based on the Microsoft Kinect. A robust and efficient method for extracting gait parameters is introduced, which copes with the high variability of noisy Kinect skeleton tracking data experienced across the population of young children. This is achieved by temporally segmenting the data with an approach based on coupling a probabilistic matching of stride template models, learned offline, with the estimation of their global and local temporal scaling. A preliminary study conducted on healthy children between 2 and 4 years of age is performed to analyze the accuracy, precision, repeatability, and concurrent validity of the proposed method against the GAITRite when measuring several spatial and temporal children's gait parameters. The method has excellent accuracy and good precision in segmenting temporal sequences of body joint locations into stride and step cycles. Also, the spatial and temporal gait parameters, estimated automatically, exhibit good concurrent validity with those provided by the GAITRite, as well as very good repeatability. In particular, on a range of nine gait parameters, the relative and absolute agreements were found to be good and excellent, and the overall agreements were found to be good and moderate. This work enables and validates the automated use of the Kinect for children's gait analysis in healthy subjects. In particular, the approach makes a step forward towards developing a low-cost, portable, parent-operated in-home tool for clinicians assisting young children. The effectiveness of a rehabilitation regimen can be ensured only if an appropriate monitoring of progress is implemented. This is true even more so for developing children, where detection of gait abnormalities, as well as the adoption of a therapy to correct them, must be validated in a continuous and timely manner to ensure success [1, 2]. Therapy adjustment and gait evaluation in children are further complicated by the natural changes in their motor development, and by their limited ability to provide feedback as precisely as adults, sometimes forcing practitioners to rely on subjective parental information, thus highlighting even further the importance of relying on suitable unbiased assessment tests. Gait analysis methods [3] are a common way to quantify and assess human locomotion. They have been used successfully as research and clinical tools in many patient populations, including children with cerebral palsy [4], individuals with spinal cord injury [5], or under rehabilitation after stroke [6], and elderly people at risk of falls [7]. Although very useful, gait analysis requires specialized equipment used by expert technicians, typically present in academic research laboratories or large hospitals [8], which poses the problem of timely accessibility of such infrastructure.
In addition, costs associated with the setup and administration of gait assessments are reported to be fairly high [9], making it even more difficult to routinely monitor the progress of patients undergoing therapy. The GAITRite system [10], a walkway with a grid of sensors, is an extensively validated gait analysis tool for both adults [11–14] and children [15–17], which is widely used by practitioners. It provides for the automatic computation of several spatial and temporal gait parameters. Compared to very accurate three-dimensional gait analysis systems (e.g., the Vicon [18]), the GAITRite is easier to operate (especially with children), costs less, has smaller space requirements, and yet is very effective in tracking patient progress. However, it remains a large and expensive device meant to be operated by technicians. This becomes a problem, especially in rural areas, where it is difficult for many families to bring their children into a facility with the appropriate personnel and equipment to detect, monitor and correct gait abnormalities. The availability of an inexpensive, portable, in-home alternative to the GAITRite that is operable by parents would potentially allow clinicians to remotely monitor patients' progress, and to deliver state-of-the-art low-cost healthcare to an underserved population. In this work, the Microsoft Kinect [19] is leveraged as a very low-cost sensing device, capable of tracking 20 different body joint locations over time at video rate [20], and it is proposed for children's gait analysis. To this end, a framework for the automated extraction of gait parameters from Kinect data is developed, and validated on healthy children. Providing accurate and precise measures of gait parameters requires facing the main challenge of designing algorithms that are robust to large amounts of articulated body tracking noise, and that can deal with the variability of tracking data across the population of young children, and across different age groups. Enabling the implementation of a portable and low-cost system, instead, requires designing computationally efficient algorithms, because of the limited computing power of such platforms. The proposed framework for estimating gait parameters addresses both of the challenges outlined above. It introduces robust algorithms for the automatic calibration and segmentation of temporal sequences, generated by the 3D locations of body joints. The segmentation accurately decomposes sequences into snippets, corresponding to the strides of the walking child. This is achieved by a probabilistic matching of stride template models, learned offline from training data, coupled with the joint estimation of the global and local temporal scaling of the templates. Computational efficiency, instead, is achieved by augmenting the approach with subsequence matching techniques. The framework is evaluated in two ways. First, the accuracy and precision in detecting specific temporal instants of the gait cycle are studied. Those include the heel strikes and toe-offs that segment the child's walk into stride and step cycles. Second, by conducting a study with healthy children, the validity of the gait parameters estimated automatically is established against those computed by the GAITRite, and the repeatability of the approach is also analyzed. Several approaches have been developed for gait analysis outside the clinic [3].
There is a large category of portable approaches based on wearable sensors, such as accelerometers, gyroscopes, pressure sensors, ultrasonic sensors, and others. Some of them can lead to cheaper systems [21]; however, they require downloading data to perform the analysis unless additional hardware for wireless data collection is incorporated, and multiple sensors are needed for the analysis of multiple gait parameters. In addition, sensors must be placed correctly and securely, and can be susceptible to noise and interferences due to external factors [3]. Also, it can be very inconvenient for children to wear additional devices, especially those that entail wearing instrumented shoes [22], as further explained below. Currently, there is no clear evidence of a simple, inexpensive system based on wearable sensors that is suitable for children's gait analysis. Marker-less vision-based gait analysis approaches are another popular low-cost alternative [23]. They have been studied extensively by the computer vision community for human activity analysis [24] and biometric recognition [25]. Usually, they are based on multiple cameras and can work effectively as fixed in-home installations for the continuous monitoring of gait in elderly patients [26]. However, they require a complex setup with a calibration process and are not adequate to become simple, parent-operated devices. Other marker-less approaches include those based on time-of-flight cameras, infrared thermography, and pulse-Doppler radars [3, 27]. Those are either too expensive, or not portable and too complex to set up. On the other hand, the Microsoft Kinect (which for Xbox One [28] uses an inexpensive time-of-flight camera, as opposed to the methods referred to in [3]), with its software development kit (SDK) makes available a technology for 3D articulated body tracking [20] that is safe, inexpensive, comes in a small package, is straightforward to set up and operate (no need for camera calibration, fixed installation, or wearing additional sensors), and is pervasive. Therefore, it offers the opportunity to address the need for a low-cost parent-operated tool for in-home monitoring of gait in children during rehabilitation interventions. This work takes a step forward towards fulfilling this need by introducing and validating a methodology for extracting children's gait parameters in healthy subjects fully automatically from Kinect tracking data. The Kinect has been used in several clinical applications related to gait disorders and mobility analysis. It has been used for interventions on the balance ability of injured young male athletes [29], and its reliability and validity for assessing the standing balance was established in [30]. In [31] it was found that for the majority of the considered foot posture index items, the Kinect was more reliable than the traditional visual assessment. More specifically to functional assessment, [32] introduces a methodology to use the Kinect for mapping gait parameters to the Timed-Up-and-Go (TUG) mobility test, and [33] reports a validation and reproducibility study against a standard marker-based system for functional assessment activities. Similarly, [34] also considers the TUG test, but they develop a novel algorithm for using the Kinect from the side view, which is particularly suitable for this test, and is capable of locating and tracking up to six joints of a human body.
Related to this line of work, [35] focusses on establishing the concurrent validity of the Kinect against a 3D motion analysis system for assessing the kinematic strategies of postural control. Compared to the above approaches, ours differs substantially, in that it focusses on developing and validating the extraction of spatiotemporal children's gait parameters in a fully automated fashion. More closely related to rehabilitation, the Kinect has been assessed for rehabilitating young adults with motor impairments [36] and with cerebral palsy [37], both in school settings. [38], instead, assessed the concurrent validity of the Kinect for gait retraining using the lateral trunk lean modification model. For patients affected by stroke, [39] developed an automated method for measuring the quality of movements in clinically-relevant terms, and [40] examined the reliability of spatiotemporal gait parameters as well as other standard tests, such as the functional reach test, the step test, the 10 m walk test, and the TUG test. For patients with Parkinson's disease, [41] established the accuracy of the Kinect in measuring clinically relevant movements, while [42, 43] developed algorithms aimed at extracting gait parameters to be used for automatically recognizing individuals suspected of having the disease. In patients with multiple sclerosis, [44] showed that ambulation tests using the Kinect are feasible, and can detect clinical gait disturbances. Further references can be found in [45, 46], which review the technical and clinical impact of the Kinect in physical therapy and rehabilitation, with an emphasis on patients with neurological disorders as well as elderly patients. The studies above do not involve young children, and have very different goals from those of this work. Kinect-based methods have been used before in clinical applications involving children (e.g., in serious games for rehabilitation [47] and learning [48]), but never for children's gait analysis. More precisely, Stone and Skubic [49, 50] were the first to advocate the use of the Kinect for clinical gait analysis, and applied it for continuous in-home gait monitoring of elderly people. Their approach detected footfalls by analyzing the portion of the foreground depth maps close to the ground plane. The main drawbacks of this approach are the limited number of gait parameters being monitored, as well as a fixed installation, requiring the intrinsic and extrinsic calibration of the Kinect. Gabel et al. [51], instead, proposed an easier-to-use approach that also provided a broader set of gait parameters. Those were estimated with a supervised learning method, where an ensemble of regression trees mimics the behavior of pressure sensors attached to the heels and toes of a subject wearing instrumented shoes. However, an appropriate clinical assessment of gait requires the patients to walk barefoot, as the pronounced altering effects of shoes on gait parameters are well known, and have been clearly defined in a pediatric population [52]. Therefore, Gabel's approach is unsuited for this specific clinical application in children, and this work proposes a framework based on a probabilistic matching of stride templates, with no shod feet requirements. Other Kinect-based approaches include [53–58], but they are very limited. Sun et al. [53] uses an autoregressive moving average model with a Kalman filter for predicting the temporal series of the distances between Kinect and lower extremity markers. Gianaria et al. [55] and Staranowicz et al.
[56] report simple methods for computing only the stride length and the walking speed. Pfister et al. [57] provides a way for estimating only the stride timing and two other body flexion parameters of a person on a treadmill. Auvinet et al. [58] focusses only on improving the accuracy of the heel strike estimation of a person on a treadmill. Clark et al. [54] uses a very simple method for computing parameters, based on thresholding the local velocity of the foot and ankle joints. Those approaches have been tested with adults, and have never been subjected to the high degree of variability and noise typical of skeleton tracking sequences acquired from walking children. It is very difficult to cope with such severe conditions when relying on straight peak detection or thresholding. In contrast, the proposed approach performs a robust matching of probabilistic stride template models, allowing for accurate identification of heel strike and toe-off instants. Also [59] uses templates for the step segmentation of signals collected from gyroscopes attached to instrumented shoes. However, their data is not vector valued, the templates are deterministic, and straight subsequence dynamic time warping [60] is used for template matching. Here, instead, the Kinect skeleton data is multidimensional, the templates are probabilistic, and the matching jointly estimates the global uniform temporal scaling [61], as well as the local non-uniform temporal scaling (under the form of dynamic time warping (DTW) [62]), of the templates, thus allowing for large adjustments in the length and shape of the detected strides. In particular, the approach brings together, for the first time, probabilistic multidimensional uniform and non-uniform scaling with subsequence DTW techniques for computational efficiency. Some previous Kinect methods have been compared against other systems. For instance, [41, 49, 54, 56–58] compare their approaches with the Vicon. However, only [54, 57] and [41] present a complete study of the concurrent validity of the methodology, while none of them are concerned with children's gait analysis. Also in this work we validate the proposed approach by studying its concurrent validity against the GAITRite, which is a previously validated system even for children [15–17]. The GAITRite is very easy to set up and use with barefoot children, and has small space requirements. The next section describes a computationally efficient algorithm we introduce for the temporal segmentation of data acquired by the Kinect, based on which a fully automated procedure for computing gait parameters is developed. This is described in the Methods section, along with a study conducted on healthy children for establishing the concurrent validity of the proposed approach.
Temporal segmentation based on stride template models
In order to compute the gait parameters from a Microsoft Kinect observing a walking child, we analyze the raw skeleton tracking data it acquires. Specifically, as will become clearer in later sections, we need to automatically identify when each stride starts and ends. The estimation of such instants requires the design of a temporal segmentation algorithm that can cope with the high variability of the raw data, while being computationally efficient. This section introduces such an algorithm, which will then be leveraged in the Methods section.
The raw tracking data acquired by the Kinect consists of a temporal sequence of length n, given by \(\mathbf {x}_1, \ldots , \mathbf {x}_n\), or \(\mathbf {x}_{1:n}\) for short, which is referred to as a trial walk. At time t, \(\mathbf {x}_t \, =\, [ x_{1,t}; \ldots ; x_{20,t}] \in \mathbb {R}^{60}\) represents a skeleton vector, collecting the 3D positions of the 20 skeleton joints depicted in Fig. 1. The positions are assumed to be measured with respect to a canonical reference frame, which is attached to the walking child, and therefore is independent from the reference frame of the Kinect. The Methods section will explain how such a reference frame can be computed automatically. In the sequel, the notations \(\mathbf {x}_{\cdot : n}\), \(\mathbf {x}_{1:\cdot }\), or \(\mathbf {x}_{\cdot : \cdot }\), mean that the initial, final, or both time instants are not needed, or cannot be specified, depending on the context.
Skeleton. Graphical representation of the 20 joints composing the skeleton model used by the Kinect SDK for tracking the motion of a person
In order to automatically identify when a stride starts and ends, we take the approach of looking for the subsequence \(\mathbf {x}_{t_s:t_e}\) (starting at \(t_s\) and ending at \(t_e\)), of a trial walk \(\mathbf {x}_{1:n}\), that best matches a stride template model $$\begin{aligned} \mathcal {T}_m = (\varvec{\mu }_1, {\Lambda }_1), \cdots , (\varvec{\mu }_{m}, {\Lambda }_{m}), \end{aligned}$$ consisting of a sequence of \(m\in \mathcal {M}\) pairs. Each pair \(\varvec{\mu }_t\) and \({\Lambda }_t\) has the meaning of mean and covariance of a random vector that models the variability of the skeleton vector \(\mathbf {x}_t\), at time t of a stride. The set \(\mathcal {M}\) represents the possible temporal scales of the templates. Each scale m identifies a different template \(\mathcal {T}_m\). The Methods section will explain how stride template models are learned from training data. In the remaining part of this section we explain how we estimate \(t_s\) and \(t_e\) with different approaches. We begin with the simplest case where the template scale m is assumed to be known, then we progressively improve the method by modeling uniform and non-uniform temporal scaling, and finally we provide a computationally efficient approach that models both types of scaling variabilities.
Constant stride time case
If the length of the strides were known to be m, the simplest way to find the subsequence \(\mathbf {x}_{t_s:t_e}\) (where in this case \(t_e = t_s+m-1\)), that best matches a template \(\mathcal {T}_m\), would be to look for the one (or equivalently, to look for \(t_s\)) that minimizes the distance $$\begin{aligned} D_E(\mathbf {x}_{t_s: \cdot }, \mathcal {T}_m) = \sum _{i=1}^m \Vert \mathbf {x}_{t_s+i-1} - \varvec{\mu }_i \Vert ^2 \; , \end{aligned}$$ where \(\Vert \cdot \Vert\) denotes the Euclidean norm. However, this approach would lead to a poor estimation due to the large amount of noise in the skeleton positions, and the large variability of joint trajectories across different subjects. Indeed, the Euclidean distance treats every joint position independently and in the same way, whereas the joints have different variances and are correlated. An approach that takes those issues into account entails modeling the likelihood probability distribution of a subsequence, given the stride template, \(p(\mathbf {x}_{t_s:\cdot } | \mathcal {T}_m)\), and then estimating \(t_s\) in the maximum likelihood (ML) sense.
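For reference, the naive criterion (2) amounts to the following sliding-window search. This is a minimal sketch only, not the authors' implementation; the function name and array layout are illustrative:

```python
import numpy as np

def match_constant_stride(x, mu):
    """Baseline matcher: slide a fixed-length template over the trial walk
    and return the window minimizing the Euclidean distance D_E (Eq. 2).
    x:  (n, 60) array of skeleton vectors in the canonical frame.
    mu: (m, 60) array of template means (stride length m assumed known)."""
    n, m = x.shape[0], mu.shape[0]
    costs = [np.sum((x[t:t + m] - mu) ** 2) for t in range(n - m + 1)]
    t_s = int(np.argmin(costs))      # 0-based start of the best window
    return t_s, t_s + m - 1          # (t_s, t_e)
```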
Given the statistical model for \(\mathcal {T}_m\), the ML estimation is equivalent to looking for \(t_s\) that minimizes the distance $$\begin{aligned} D_L(\mathbf {x}_{t_s:\cdot }, \mathcal {T}_m) = \sum _{i=1}^m (\mathbf {x}_{t_s+i-1} - \varvec{\mu }_i)^{\top } {\Lambda }_i^{-1}(\mathbf {x}_{t_s+i-1} - \varvec{\mu }_i) = \sum _{i=1}^m d_{M_i}(\mathbf {x}_{t_s+i-1}), \end{aligned}$$ where \(d_{M_i}(\mathbf {x}_t)\) is the Mahalanobis distance of the skeleton vector \(\mathbf {x}_t\), from the template component \(\varvec{\mu }_i\), according to \({\Lambda }_i\).
Uniform temporal scaling
Gait differences between children correspond to skeleton trajectories exhibiting a variability in the uniform temporal scaling [61] (i.e., the global linear enlargement or shrinking of the time axis), such that relying on the assumption of a known equal length for \(\mathcal {T}_m\), like in (3), will lead to inaccurate segmentations. This issue is addressed by augmenting (3) with the estimation of the amount of scaling to be applied. This is done by looking for the best matching subsequence \(\mathbf {x}_{t_s:t_e}\) that minimizes the following ML uniform scaling distance $$\begin{aligned} US_L(\mathbf {x}_{t_s:\cdot }, \mathcal {T}_{\cdot }) = \min _{m \in \mathcal {M}} \frac{1}{m} D_L(\mathbf {x}_{t_s:\cdot }, \mathcal {T}_m), \end{aligned}$$ where the factor 1/m has been introduced to make every scaling equally likely. This approach would provide the best template size \(\tilde{m}\), and time \(t_s\).
Non-uniform temporal scaling
Even after modeling uniform scaling, the residual temporal scaling variability, or so-called non-uniform scaling, can still be too significant to be captured only by the amplitude variation allowed in (3). This is due to local variability of gait cycles in a person, to large amounts of noise in the joint trajectories, and to local variability of skeleton trajectories of children across different age groups. Non-uniform scaling can be handled by locally stretching the time axis, and dynamic time warping (DTW) [62] is known to be a good tool for doing so. DTW allows local flexibility in aligning time series, enabling the matching of sequences with tolerance of small local misalignments, thus achieving the goal of an accurate segmentation. The ML estimation (3) can be augmented by modeling non-uniform scaling effects with DTW. To illustrate this, the warping path \(p = (p_1, \cdots , p_w)\), where \(p_l = (n_l, m_l)\), is introduced, which defines a mapping between the elements of two sequences. Assuming that v and m are the lengths of the sequences, then it must be that \(p_1 = (1,1)\), \(p_w = (v, m)\), \(n_l \ge n_{l-1}\), \(m_l \ge m_{l-1}\), and \(\max (m,v) \le w \le m+v-1\).
Therefore, the joint estimation of the non-uniform scaling and the ML subsequence \(\mathbf {x}_{t_s : t_e}\) relies on minimizing the distance $$\begin{aligned} DTW_L(\mathbf {x}_{t_s:t_e}, \mathcal {T}_m) = \min _{p} \sum _{l=1}^w d_{M_{m_l}}(\mathbf {x}_{t_s+n_l-1}), \end{aligned}$$ where, for each \(t_s\) and \(t_e\), p is optimized with dynamic programming, with complexity of O(vm) [62] with \(v = t_e - t_s +1\), using this recursive definition of \(DTW_L\) $$\begin{aligned} DTW_L( \varnothing , \varnothing )&= 0 \\ DTW_L(\mathbf {x}_{t_s:t_e}, \varnothing )&= DTW_L( \varnothing , \mathcal {T}_m) = \infty \\ DTW_L(\mathbf {x}_{t_s:t_e}, \mathcal {T}_m)&= d_{M_{m}}(\mathbf {x}_{t_e}) + \min \left\{ \begin{array}{l} DTW_L(\mathbf {x}_{t_s:t_e-1}, \mathcal {T}_m) \\ DTW_L(\mathbf {x}_{t_s:t_e}, \mathcal {T}_{m,m-1} ) \\ DTW_L(\mathbf {x}_{t_s:t_e-1}, \mathcal {T}_{m,m-1} ) \end{array} \right. \end{aligned}$$ In (6) the notation \(\mathcal {T}_{m,i}\) indicates the subsequence of \(\mathcal {T}_m\) up to the i-th pair.
Joint uniform and non-uniform scaling
The framework expected to provide the best segmentation accuracy combines the ML estimation of a subsequence with the uniform and non-uniform scaling. This is done by replacing \(D_L\) with \(DTW_L\) in (4), which gives an extension of the criterion used in [63], here referred to as ML scaling and time warping matching (SWM), which estimates the matching subsequence \(\mathbf {x}_{t_s:t_e}\) that minimizes the following distance $$\begin{aligned} SWM_L(\mathbf {x}_{t_s:t_e}, \mathcal {T}_{\cdot }) = \min _{m \in \mathcal {M}} \frac{1}{m} DTW_L(\mathbf {x}_{t_s:t_e}, \mathcal {T}_m). \end{aligned}$$ Besides \(t_s\) and \(t_e\), this approach also provides the optimal template size \(m^*\). The computational complexity analysis with respect to m and n provides insight into the criteria described so far. In particular, finding the matching subsequence with (2) or (3) implies testing for every \(t_s\), which requires O(n) operations. \(US_L\) (4), requires testing for every \(t_s\) and for all the \(| \mathcal {M} |\) templates, leading to \(O(n|\mathcal {M}|)\) operations. \(DTW_L\) (5), requires O(vm) operations, but a subsequence is found by testing every combination of \(t_s\) and \(t_e\), requiring a total of \(O(n^3m)\) operations. Finally, for every pair of \(t_s\) and \(t_e\), \(SWM_L\) tests \(| \mathcal {M} |\) different templates, leading to a complexity of \(O(n^3 m |\mathcal {M}|)\). Therefore, (7) leads to the highest computational complexity, which can quickly become impractical as soon as the length of the trial walk increases or the dependency on m and \(|\mathcal {M}|\) is not kept under control.
Efficient joint uniform and non-uniform scaling
Here the computational efficiency of (7) is improved by exploiting subsequence matching techniques, which do not require testing for every pair \(t_s\) and \(t_e\). Those include a subsequence DTW (SDTW) approach [60, 64], which computes the warping path p, the starting and ending times \(t_s\) and \(t_e\), and the DTW distance of the best matching subsequence.
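As a concrete illustration of this strategy, the sketch below implements a subsequence DTW with Mahalanobis local costs, together with the scan over template scales; it anticipates the recursion and criterion formalized next as (8)–(10). It assumes diagonal covariances \({\Lambda }_i\) for speed, and all function and variable names are illustrative rather than the authors' implementation:

```python
import numpy as np

def sdtw_ml(x, mu, lam_inv):
    """Subsequence DTW with Mahalanobis local costs (in the spirit of SDTW_L).
    x:       (n, d) trial walk in the canonical frame.
    mu:      (m, d) template means.
    lam_inv: (m, d) inverse variances (diagonal-covariance assumption).
    Returns (cost, t_s, t_e), 1-based, of the best matching subsequence."""
    n, m = x.shape[0], mu.shape[0]
    # local costs c[t, i] ~ d_{M_{i+1}}(x_{t+1}) under diagonal covariances
    c = np.array([[float(np.sum(lam_inv[i] * (x[t] - mu[i]) ** 2))
                   for i in range(m)] for t in range(n)])
    D = np.full((n + 1, m + 1), np.inf)
    D[:, 0] = 0.0                       # free start anywhere in the trial
    start = np.zeros((n + 1, m + 1), dtype=int)
    start[:, 0] = np.arange(n + 1)      # 0-based index of first matched frame
    for t in range(1, n + 1):
        for i in range(1, m + 1):
            moves = [(t - 1, i - 1), (t - 1, i), (t, i - 1)]  # diagonal first
            k = int(np.argmin([D[a, b] for a, b in moves]))
            D[t, i] = c[t - 1, i - 1] + D[moves[k]]
            start[t, i] = start[moves[k]]
    t_e = int(np.argmin(D[1:, m])) + 1  # best ending time
    return float(D[t_e, m]), int(start[t_e, m]) + 1, t_e

def sswm(x, templates):
    """Scan over template scales: pick the scale minimizing the
    length-normalized subsequence DTW cost (mirrors criterion (9))."""
    best = None
    for mu, lam_inv in templates:       # one (mu, lam_inv) pair per scale m
        cost, t_s, t_e = sdtw_ml(x, mu, lam_inv)
        score = cost / mu.shape[0]
        if best is None or score < best[0]:
            best = (score, t_s, t_e, mu.shape[0])
    return best                         # (normalized cost, t_s, t_e, m*)
```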
The ML extension of SDTW, indicated with \(SDTW_L\), is computed by solving the following recursion $$\begin{aligned} D_S( \varnothing , \varnothing )&= D_S(\mathbf {x}_{\cdot :t}, \varnothing ) = 0 \\ D_S( \varnothing , \mathcal {T}_{m,i})&= \infty \\ D_S(\mathbf {x}_{\cdot :t}, \mathcal {T}_{m,i})&= d_{M_{i}}(\mathbf {x}_{t}) + \min \left\{ \begin{array}{l} D_S(\mathbf {x}_{\cdot :t-1}, \mathcal {T}_{m,i} ) \\ D_S(\mathbf {x}_{\cdot :t}, \mathcal {T}_{m,i-1}) \\ D_S(\mathbf {x}_{\cdot :t-1}, \mathcal {T}_{m,i-1} ) \end{array} \right. \\ SDTW_L(\mathbf {x}_{\cdot : \cdot }, \mathcal {T}_m )&= \min _t D_S ( \mathbf {x}_{\cdot : t }, \mathcal {T}_m ), \end{aligned}$$ where \(D_S(\mathbf {x}_{\cdot :t}, \mathcal {T}_{m,i})\) is a matrix storing the cost accumulated so far by the best warping path that includes the mapping element (t, i). Equation (8) is solved with dynamic programming, with a complexity of O(nm) [64]. Compared with minimizing \(DTW_L\) and checking for every pair of \(t_s\) and \(t_e\), the complexity has improved by a factor of \(n^2\), which is remarkable. The efficiency of computing the best matching subsequence \(\mathbf {x}_{t_s:t_e}\) through (7) improves greatly by replacing \(DTW_L\) with \(SDTW_L\), leading to the new ML subsequence scaling and time warping matching (SSWM) criterion, given by $$\begin{aligned} SSWM_L(\mathbf {x}_{\cdot :\cdot }, \mathcal {T}_{\cdot }) = \min _{m \in \mathcal {M}} \frac{1}{m} SDTW_L(\mathbf {x}_{\cdot :\cdot }, \mathcal {T}_m). \end{aligned}$$ If \(m^*\) is the optimal stride template size provided by (9) (which is supposed to be equal to the one provided by (7)), then, according to SDTW [60, 64], \(t_e\) is given by $$\begin{aligned} t_e = \arg \min _t D_S ( \mathbf {x}_{\cdot : t }, \mathcal {T}_{m^*} ). \end{aligned}$$ While computing the recursion (8), a warping matrix is populated, which allows tracing the path p from the end \(p_w = (t_e,m^*)\), back to the beginning \(p_1 = (t_s,1)\), from which \(t_s\) is readily available. The fundamental advantage of using (9) versus (7) is that the computational complexity of \(SSWM_L\) is \(O(nm|\mathcal {M}|)\), which improves by a factor of \(n^2\) against \(SWM_L\), enabling the implementation of the approach on a low-cost platform with limited computing power.
This section leverages the technique we developed previously, and introduces a fully automatic system for gait analysis based on the Kinect. The system is also validated against the GAITRite with a study conducted on healthy children. This is the first time the Kinect is validated for children's gait analysis in healthy subjects. The validation process requires simultaneous measurements of gait parameters to be acquired by a previously validated tool that acts as the criterion (the GAITRite), and by the new system to be validated (based on the Kinect). The chosen criterion is particularly well suited to work with children, and does not interfere with the Kinect acquisitions. The remainder of the section describes the details of the study and of the new gait analysis system.
Materials: GAITRite
A GAITRite system (v3.9 [19]) was used. It consists of an electronic roll-up walkway connected to a laptop computer with a USB interface cable. The walkway is approximately 520 cm long, with an active sensor area that is 427 cm long and 61 cm wide, containing 16,128 pressure sensors arranged in a grid pattern with a spatial resolution of 1.27 cm.
Data from the activated sensors is collected and transferred to the personal computer through a serial port connection. The sampling frequency of the system is 80 Hz.
Materials: Kinect
The Microsoft Kinect is a sensing device designed to allow controller-free game play on the Microsoft Xbox. Here the first generation of Kinect was used [19], also known as Kinect for Xbox 360, or sometimes Kinect v1. The sensor contains an RGB as well as an infrared (IR) camera and an IR light emitter. The emitter projects a known pattern onto the scene, based on which the pixel intensities of the images captured by the IR camera are decoded into depth distances. Therefore, the Kinect captures standard video data, as well as depth data at 30 frames per second, encoded in an 11-bit image with resolution of \(640\times 480\) pixels. The Kinect SDK, of which the version 1.5 was used, gives access to the raw RGB and depth data, and also to a 3D virtual skeleton of the body of the people appearing in the scene [20]. See Fig. 1. The SDK maintains skeleton tracking at video rate, within a depth range of approximately 0.7–6 m. The setup of the GAITRite and two Kinect sensors is depicted in Fig. 2. In order to allow the subjects to perform a full walkthrough of the walkway with a free exit, the front-view Kinect was placed at the end of the GAITRite and closer to one of the corners. Moreover, it was positioned 0.5 m from the walkway edge to allow for a high overlap of its tracking range with the walkway extension. The second Kinect was looking at the walkway from the side. It was positioned approximately 1.5 m from the side walkway edge. Its purpose was to provide data for future use, and for supporting the manual annotation of the heel strike and toe-off instants, as will be explained later. However, we stress the fact that the side-view Kinect was not used for 3D skeleton tracking. Only the front-view Kinect was devoted to that purpose. So, the side-view Kinect is used only for providing a better data visualization during the annotation phase, and the gait analysis is performed solely with data collected by the front-view Kinect. Both Kinects were mounted on tripods at a height of 1.3 m.
Experimental setup. Layout of the GAITRite walkway with the positions and fields of view of the front-view and side-view Kinects. The front-view Kinect performs the fitting and tracking of a skeleton model composed of 20 joints, depicted in Fig. 1
Following the West Virginia University Institutional Review Board approval, 25 child subjects (15 females and 10 males) were recruited to participate in a data collection study. These were healthy children with no known gait abnormalities. Their average age (\(\pm\) standard deviation) was \(3.26 \pm 0.96\) years, with a range from 2 to 4 years. Their average leg length was \(43.15\pm 5.64\) cm. They appeared for the collection at the Pediatric and Adolescent Group Practice of the Physician Office Center of the West Virginia University Hospitals. Written informed consent was obtained from the parents of each subject prior to data collection.
Experimental protocol
For every subject the data collection began with the acquisition of anthropometric measurements such as leg length, which is required by the GAITRite software. Subjects were instructed to walk barefoot over the GAITRite mat at their usual comfortable walking speed, and they were given the opportunity to perform practice walks to familiarize themselves with the procedure.
In order to minimize the acceleration and deceleration effects, the subjects started the walking trials 2 m before and finished 2 m after the mat. At least three trials were recorded for each subject, in order to aggregate enough step cycles captured by the front-view Kinect for the computation of the gait parameters. The data recording from the GAITRite and the two Kinects was performed simultaneously by a single laptop workstation. In particular, we developed an application capable of recording temporally synchronized data streams coming from the front-view and side-view Kinects. However, skeleton tracking was performed by, and recorded from, only the front-view Kinect.
Gait parameters
The GAITRite computes a number of temporal and spatial gait parameters. Figure 3 summarizes the definitions of the temporal parameters. In particular, with respect to the i-th stride cycle of the right foot, for a subject with no gait abnormalities, \(t_{H_i}^r\) represents the time that the mat first senses the right heel, so it is the right heel strike first contact. Similarly, \(t_{H_i}^l\) is the left heel strike first contact. Moreover, \(t_{T_i}^r\) represents the time that the mat stops sensing the right forefoot, so it is the right toe-off last contact. Similarly, \(t_{T_i}^l\) is the left toe-off last contact. Unless otherwise specified, those quantities are always measured in seconds, and from them it is possible to compute several temporal parameters. This work has considered the ones defined below. The step time, S, is the time elapsed from the heel strike of one foot to the heel strike of the opposite foot. If k stride cycles are available, for the right foot, \(S^r\) is computed as $$\begin{aligned} S^r = \frac{1}{k} \sum _{i=1}^k (t_{H_{i+1}}^r - t_{H_i}^l). \end{aligned}$$ The stride time, R, is the time elapsed from the heel strikes of two consecutive footfalls of the same foot. If k right stride cycles are available, \(R^r\) is computed as $$\begin{aligned} R^r = \frac{1}{k} \sum _{i=1}^k (t_{H_{i+1}}^r - t_{H_i}^r). \end{aligned}$$ The cadence, C, is the number of steps taken in one minute (counting both feet), and is given by \(C \, = \, 60/R^r+60/R^l\).
Temporal parameters. Summary of the definitions of the temporal gait parameters. A low signal represents a foot touching the ground, and a high signal means it is not touching. Ascending (red) and descending (blue) fronts identify toe-off and heel strike instants, respectively
The swing time, W, is the time elapsed from the toe-off of the current footfall to the heel strike of the next footfall of the same foot. If k right stride cycles are available, \(W^r\) is given by $$\begin{aligned} W^r = \frac{1}{k} \sum _{i=1}^k (t_{H_{i+1}}^r - t_{T_{i}}^r). \end{aligned}$$ The GAITRite also computes a number of spatial parameters. Many of them rely on the position of the heel centers \(y_{H_i}\), estimated from the footprint revealed by the pressure sensors when the foot is flat and touching the mat (see Fig. 4). This work has considered the spatial gait parameters defined below, which are based on the heel center positions, where, unless otherwise specified, every length is measured in centimeters.
Spatial parameters. Summary of the definitions of the spatial gait parameters, based on the geometric position of the heel centers
The stride length, L, is the distance between the heel centers of two consecutive footprints of the same foot.
For instance, if k right stride cycles are available, \(L^r\) is computed as $$\begin{aligned} L^r = \frac{1}{k} \sum _{i=1}^k \Vert y_{H_{i+1}}^r - y_{H_{i}}^r \Vert . \end{aligned}$$ Given the stride length and the stride time, the average velocity, V, is computed as the average stride length divided by the average stride time, i.e., \(V=(L^r+L^l)/(R^r+R^l)\). The step length, D, requires the line of progression, which is defined by the segment obtained by connecting the heel centers of two consecutive footprints of the same foot, e.g., \(y_{H_{i-1}}^l\) and \(y_{H_{i}}^l\) (see Fig. 4). Then, the step length of the right foot is the distance between \(y_{H_{i-1}}^l\) and the projection of \(y_{H_i}^r\) on the line of progression. Analytically, when k right stride cycles are available, \(D^r\) is given by $$\begin{aligned} D^r = \frac{1}{k} \sum _{i=1}^k \frac{ ( y_{H_{i}}^r - y_{H_{i-1}}^l )^{\top } ( y_{H_{i}}^l - y_{H_{i-1}}^l )}{\Vert y_{H_{i}}^l - y_{H_{i-1}}^l \Vert } . \end{aligned}$$ Finally, although the parameters have been introduced for the right foot (superscript r), they are also valid for the left foot with a careful substitution of the superscripts (from r to l) and adjustment of the indices. Moreover, all the parameters could be averaged between the right and left foot, besides being computed for each of them separately.
Extraction of gait parameters with GAITRite
From the recorded spatio-temporal occurrence of footprints, the proprietary GAITRite software automatically computes the heel strikes, the toe-offs, and other temporal instants, as well as the heel centers and other geometric properties of the footprints. Those are then used for computing several gait parameters, including those defined in the previous section.
Manual extraction of gait parameters from Kinect data
An annotation tool was developed to visualize the data acquired during trial walks, and to allow a human annotator to conveniently record the video frame numbers corresponding to the time instants of the heel strikes \(\tilde{t}_{H_i}\), and the toe-offs \(\tilde{t}_{T_i}\). The tool was developed using Matlab, and allows opening, visualizing and scrolling through three streams of data at the same time. Those streams correspond to (a) the RGB data coming from the front-view Kinect (see left of Fig. 5), (b) the RGB data coming from the side-view Kinect (see right of Fig. 5), and (c) the skeleton data coming from the front-view Kinect (see Fig. 6). Therefore, for a given frame number t, the annotation tool shows three views, corresponding to (a), (b), and (c). The user can scroll through the time axis back and forth using the arrow keys. Doing so increases and decreases the frame number t, and the three data views change accordingly. The tool allows the user to quickly label specific frame numbers as right/left toe-off, or as right/left heel strike. This functionality is used by a human annotator who carefully observes the three views (a), (b), and (c), and visually identifies and labels the frame numbers corresponding to heel strike and toe-off instants. After annotating the entire dataset, we realized that having the side view was very helpful. On the other hand, we found the skeleton view less useful, since the data appeared to be too noisy to visually assess the occurrence of heel strikes and toe-offs with accuracy. The annotation process produces a set of pairs \(\{(\tilde{t}_{H_i}, \tilde{t}_{T_i})\}\) that can be used for computing the temporal parameters defined previously.
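For instance, a minimal sketch of this computation from the annotated frame numbers could look as follows; the assumed array layout and the function name are illustrative, and the 30 Hz frame rate converts frames to seconds:

```python
import numpy as np

def right_temporal_parameters(tH_r, tH_l, tT_r, fps=30.0):
    """Right-foot step time S^r, stride time R^r, and swing time W^r
    (in the spirit of Eqs. 11-13) from annotated frame numbers.
    Assumed layout: tH_r holds k+1 right heel strikes; tH_l holds the k
    left heel strikes with tH_l[i] between tH_r[i] and tH_r[i+1]; tT_r
    holds the k right toe-offs with tT_r[i] after tH_r[i]."""
    hr = np.asarray(tH_r, float) / fps
    hl = np.asarray(tH_l, float) / fps
    tr = np.asarray(tT_r, float) / fps
    S_r = np.mean(hr[1:] - hl)      # heel strike to opposite heel strike
    R_r = np.mean(np.diff(hr))      # consecutive same-foot heel strikes
    W_r = np.mean(hr[1:] - tr)      # toe-off to next same-foot heel strike
    return S_r, R_r, W_r            # cadence would be 60/R_r + 60/R_l
```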
The spatial parameters, instead, require the heel center positions, which are estimated as follows. Let y(t) indicate the 3D coordinates at time t, of a point attached to a foot such that at foot flat \(y(t) = y_{H_i}\), i.e., y(t) is the heel center position when the foot is flat. Notice that the position of y at heel strike, \(y( t_{H_i} )\), and at foot flat, \(y_{H_i}\), are almost the same. In addition, \(y( t_{H_i} )\) can be approximated by the coordinates of the closest skeleton joint, which is the ankle, given by \(y_{a,t_{H_i}}\). Therefore, spatial parameters are estimated with the heel centers \(\{\tilde{y}_{{H_i}} \}\), computed by approximating \(\tilde{y}_{{H_i}}\) with \(y_{a,t_{H_i}}\). This has limited impact on the parameters, because they entail computing distances between heel centers at foot flat, which are almost identical to distances between the same foot points at heel strike. Finally, we will show later that the set \(\{(\tilde{t}_{H_i}, \tilde{t}_{T_i})\}\) is also used as training labels for learning the stride template models.
Kinect views. Two frames captured by the RGB cameras of the front-view Kinect (left) and the side-view Kinect (right) during a trial walk
Skeleton data. Fraction of a skeleton time series \(\mathbf {y}_{1:n}\), including a right swing cycle acquired with the Microsoft Kinect. The body parts are shown in blue, the left leg in red, and the right leg in green. The data was acquired with the front-view Kinect
Automatic extraction of gait parameters from Kinect data
Given Kinect skeleton tracking data, this section introduces a fully automated approach for estimating the heel strike and toe-off instants, as well as the heel centers, from which temporal and spatial gait parameters can be computed. For a trial walk of length n, such tracking data is given by \(\mathbf {y}_1, \cdots , \mathbf {y}_n\), or \(\mathbf {y}_{1:n}\) for short. At time t, \(\mathbf {y}_t \, = \, [ y_{1,t}; \cdots ; y_{20,t}] \in \mathbb {R}^{60}\) represents a skeleton vector, collecting the 3D positions of the 20 skeleton joints, with respect to the Kinect reference frame. Estimating the heel strike and toe-off instants entails the temporal segmentation of the trial walk \(\mathbf {y}_{1:n}\), which could be attained with the automatic procedure described in the previous section, by finding the subsequences of \(\mathbf {y}_{1:n}\) that match the template models. However, this idea cannot be directly applied, unless we first design the following: (a) a procedure for mapping trial walk data, expressed with respect to the Kinect reference frame, onto data expressed with respect to the canonical reference frame, where the stride templates are defined; (b) a procedure for learning the stride templates; (c) a robust temporal segmentation that identifies all the heel strike and toe-off instants. The following sections will address those steps, and also the final step of estimating the heel centers.
Canonical reference frame
From \(\mathbf {y}_{1:n}\) a canonical reference frame, independent from the Kinect reference frame and robust to noise, is estimated as follows. All the joint positions \(\{ y_{i,t} \}\) are collected into a matrix \(Y = [y_{1,1}, y_{2,1}, \cdots ]\), and treated as a point cloud. After removing the mean from Y, the principal components are computed via singular value decomposition (SVD) [65]. The first principal component (p.c.) is parallel to the ground plane, and identifies the average direction of progression (green line in Fig. 7a).
This is because the cloud is elongated in the walking direction of the subject and is typically extending for more than 3 m along a roughly straight line. The second p.c., instead, is perpendicular to the ground plane (red lines in Fig. 7). This is because the projection of the cloud onto the plane perpendicular to the first p.c. appears elongated towards the vertical extension of the body of a subject, which is always greater than the horizontal, and enjoys the right-left symmetry. See Fig. 7b. The second p.c., oriented towards the outside of the ground plane, is the first axis \(u_1\), of the canonical reference frame. This method is quite robust to large amounts of noise and tracking errors. In addition, the joints corresponding to hands, wrists, and elbows are removed from Y to make the estimation of \(u_1\) robust to unusual and asymmetric arm movements during a trial walk.
Skeleton point cloud. a Point cloud of the 3D joint positions, downsampled for visualization purposes. Each blue asterisk is a point. The green line is the first principal component (p.c.) of the cloud. The red line is the second p.c. b Point cloud projected onto the plane perpendicular to the first p.c. The second p.c. (red line) indicates the direction normal to the ground plane
At time t, the second axis \(u_{2,t}\) of the canonical reference frame, points along the current direction of progression of the subject, and is computed as follows. From \(\mathbf {y}_{t}\), a skeleton center point \(y_{c,t}\) is computed by averaging the joints given by the right hip, the left hip, and the center hip. Thus, the point cloud \([y_{c,t-\tau }\), \(\cdots\), \(y_{c,t+\tau }]\) is elongated in the current direction of progression, which can be computed via SVD after removing the mean of the cloud. In particular, \(u_{2,t}\) is computed from the first singular vector, after orienting it in the direction of progression of the subject, projecting it onto the ground plane defined by \(u_1\), and setting its norm to 1. The third axis is simply computed by the cross product \(u_{3,t} \, = \, u_1 \times u_{2,t}\). Finally, the origin of the canonical reference frame must be independent from the origin of the Kinect reference frame, and it is defined as the projection of the skeleton center point \(y_{c,t}\) onto the ground plane. Therefore, to map \(\mathbf {y}_t\) onto \(\mathbf {x}_t { = } [ x_{1,t}; \cdots ; x_{20,t}] \in \mathbb {R}^{60}\), where every joint position \(x_{i,t}\) is expressed in the canonical reference frame, let us define \(U_t = [ u_1, u_{2,t}, u_{3,t} ] \in \mathbb {R}^{3 \times 3}\), and let \(y_{0,t}\) be the lowest joint of \(\mathbf {y}_t\) along \(u_1\), which is touching the ground plane. Then, \(x_{i,t}\) is related to \(y_{i,t}\) as follows $$\begin{aligned} x_{i,t} = U_t^{\top } y_{i,t} - \left[ \begin{array}{c} u_1^{\top } y_{0,t} \\ u_{2,t}^{\top } y_{c,t} \\ u_{3,t}^{\top } y_{c,t} \end{array} \right]. \end{aligned}$$ We stress the fact that mapping the trial walk onto the canonical reference frame is a fully automatic process, and that the entire gait analysis framework never requires any form of (intrinsic or extrinsic) calibration of the Kinect. Also, the mapping assumes that a trial walk occurs roughly on a straight line, regardless of whether the Kinect is strictly in frontal position, as long as the skeleton tracking can be performed with sufficient accuracy.
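A minimal sketch of this axis estimation is given below. It assumes the hand, wrist, and elbow joints were already removed from the global cloud, and that sign conventions (e.g., which way the Kinect's vertical axis points) are handled as indicated in the comments; function and variable names are illustrative:

```python
import numpy as np

def canonical_axes(Y, yc, t, tau=3):
    """Estimate the canonical frame axes u_1, u_{2,t}, u_{3,t} via SVD.
    Y:  (N, 3) point cloud of joint positions over the whole trial walk
        (hand, wrist, and elbow joints removed beforehand).
    yc: (n, 3) per-frame skeleton center points (mean of the hip joints).
    t:  current frame index; tau: half-width of the local window."""
    # u_1: second principal component of the full cloud (normal to ground);
    # its sign (pointing away from the floor) depends on the sensor setup.
    _, _, Vt = np.linalg.svd(Y - Y.mean(axis=0), full_matrices=False)
    u1 = Vt[1]
    # u_{2,t}: first p.c. of the local center-point cloud, oriented along
    # the walking direction, projected onto the ground plane, normalized.
    lo, hi = max(t - tau, 0), min(t + tau + 1, len(yc))
    C = yc[lo:hi]
    _, _, Wt = np.linalg.svd(C - C.mean(axis=0), full_matrices=False)
    v = Wt[0]
    if v @ (yc[hi - 1] - yc[lo]) < 0:   # orient along the progression
        v = -v
    v = v - (v @ u1) * u1               # project onto the ground plane
    u2 = v / np.linalg.norm(v)
    u3 = np.cross(u1, u2)               # third axis by cross product
    return u1, u2, u3
```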
Finally, large deviations from a straight trial walk trajectory could be handled with a more complex mapping procedure, which is beyond the scope of this work.
Learning the stride template models
From each training trial walk \(\mathbf {x}_{1:n}\), using the heel strike annotations obtained manually, the subsequences representing single stride cycles are extracted. If \(\overline{m}\) and \(\sigma\) are the rounded mean and standard deviation of the lengths of the subsequences, template models are learned for each integer dimension \(m\in \mathcal {M}\), where \(\mathcal {M} = \{ \overline{m} -2 \sigma , \overline{m} - 2 \sigma +1, \cdots , \overline{m} + 2 \sigma \}\). This guarantees that about 95 % of strides will have a length in the range covered by \(\mathcal {M}\). For a dimension m, the subsequences are resampled to a length m with spline interpolation, and divided into the sets of right and left strides. For each set and time instant the mean and covariance are computed, generating the right and left stride template models $$\begin{aligned} \mathcal {T}_m^r =&\, (\varvec{\mu }_1^r, {\Lambda }_1^r), \cdots , (\varvec{\mu }_{m}^r, {\Lambda }_{m}^r) \nonumber \\ \mathcal {T}_m^l =&\, (\varvec{\mu }_1^l, {\Lambda }_1^l), \cdots , (\varvec{\mu }_{m}^l, {\Lambda }_{m}^l). \end{aligned}$$ Throughout the paper, the superscripts r and l are used only when indicating right or left is strictly needed. Figure 8 shows the plots of the means of the stride templates for the ankle joints. Within a stride template, there is a time index corresponding to the toe-off \(t_T\). This is computed by averaging the toe-off annotations obtained after having resampled the stride cycle subsequences to a length m. Finally, we note that learning the stride template models is a data-driven process that needs to be performed only once. This means that a user of the proposed gait analysis approach would not need to collect data, perform annotations, and learn the stride models, because they would already be provided as part of the system.
Stride templates. Plots of the means of the ankle joint positions of the right (a, b), and left (c, d) templates. The coordinates along the \(u_1\), \(u_2\), and \(u_3\) axes are shown in red, green, and blue, respectively. The left template is essentially the right template circular-shifted by the left step time. They have been learned from different data sets, and show minor differences
Temporal segmentation
Given a test trial walk \(\mathbf {x}_{1:n}\) and the stride templates (17), computing the temporal segmentation entails estimating how many right and left stride cycles are present, and when each of them starts and ends. This will tell where the heel strike and toe-off instants are located. After estimating the subsequence \(\mathbf {x}_{t_s:t_e}\) that best matches a stride template according to (9) and (10), other subsequences, supposedly corresponding to additional stride cycles, are estimated by examining the other local minima of \(D_S ( \mathbf {x}_{\cdot : t }, \mathcal {T}_{m^*} )\). In particular, a time \(t_{e_i}\) of a local minimum \(D_S ( \mathbf {x}_{\cdot : t_{e_i} }, \mathcal {T}_{m^*} )\) is accepted as the ending time of the i-th stride if, for every previously accepted ending time \(t_{e_j}\), either \(t_{e_i} \le t_{e_{j}} -2m^*/3\) or \(t_{e_i} \ge t_{e_{j}} +2m^*/3\), and if \(D_S ( \mathbf {x}_{\cdot : t_{e_i} }, \mathcal {T}_{m^*} )/m^* < \gamma\).
This ensures that \(t_{e_i}\) is sufficiently far away from the ending times observed so far, \(\{ t_{e_j} \}\), and that the normalized DTW distance of the subsequence from the template \(\mathcal {T}_{m^*}\) is below a given threshold \(\gamma\). In addition, ending times are sequentially accepted by searching for minima in directions expanding from the initial ending time. This makes the subsequences correspond to contiguous strides. Ending times are no longer accepted if \(t_{e_i} \le t_{e_j} -4 m^*/3\), or \(t_{e_i} \ge t_{e_j} +4 m^*/3\), assuming that the search was expanding in the decreasing or increasing time direction, respectively, and \(t_{e_j}\) is the ending time at the boundary of the expansion. The number N of accepted ending times \(T_e = \{ t_{e_j} \}\) is the number of stride cycles found in the trial walk. Figure 9 summarizes the temporal segmentation procedure, named TrialWalkSegmentation, which includes the estimation of contiguous strides as explained next. The algorithm has to be repeated twice: once for the right and once for the left foot.
TrialWalkSegmentation Algorithm. Algorithm that summarizes the steps necessary for segmenting a trial walk \(\mathbf {x}_{1:n}\), into strides delimited by heel strike instants \(T_H\), and toe-off instants \(T_T\). The TrialWalkSegmentation algorithm has to be executed with the right (left) stride template models to estimate the right (left) stride segmentation instants
Heel strike and toe-off instants
The N identified subsequences are not guaranteed to be "perfectly" contiguous, whereas for consecutive strides of the same foot it should be that \(t_{s_{i+1}} = t_{e_{i}}+1\). This can be ensured by composing a new template model by concatenating N templates \(\mathcal {T}_{m^*} \oplus \cdots \oplus \mathcal {T}_{m^*}\) and matching it against the trial walk by computing \(SDTW_L(\mathbf {x}_{\cdot :\cdot }, \mathcal {T}_{m^*} \oplus \cdots \oplus \mathcal {T}_{m^*} )\). The set of heel strikes \(T_H = \{ t_{H_i} \}\) is obtained by mapping, through the estimated warping path, the beginning of each template onto the trial walk. Similarly, the set of toe-off instants \(T_T =\{ t_{T_i} \}\) is estimated by mapping the toe-off instants of each template. This procedure, indicated as contiguous \(SDTW_L\), or \(CSDTW_L\), is depicted in Fig. 10 and allows a very precise contiguous estimation of the heel strikes and toe-offs for each foot.
Contiguous estimation of time instants. a Second coordinate of the right ankle, extracted from \(\mathcal {T}_{m^*}^r\). b Accumulated cost matrix \(D_S ( \mathbf {x}_{\cdot :t}, \mathcal {T}_{m^*}^r)\). Four local minima along the top edge identify the ending times of four matching subsequences. Four traced-back paths identify the starting times. c Second coordinate of the right ankle extracted from \(\mathbf {x}_{1:n}\). \(N=4\) right strides with length \(m^*\) are identified, and two gaps between matching subsequences are formed. The green dots represent the ground-truth segmentation. d Second coordinate of the right ankle, extracted from the concatenation of four templates \(\bigoplus _{i=1}^4 \mathcal {T}_{m^*}^r\). e Accumulated cost matrix \(D_S ( \mathbf {x}_{\cdot :t}, \bigoplus _{i=1}^4 \mathcal {T}_{m^*}^r)\). The minimum along the top edge identifies the ending time of four right strides. The traced-back path identifies the starting time.
Heel strike and toe-off instants are identified by mapping them from the time domain of the concatenated templates (d), to the time domain of the trial walk (c), according to the warping path (red lines)
Heel centers
The heel centers are estimated by projecting the ankle joint positions onto the ground plane at the heel strike instants \(\{ t_{H_i} \}\). Therefore, if \(y_{0,t_{H_i}}\) are the coordinates of a skeleton point touching the ground plane at time \(t_{H_i}\), and \(y_{a,t_{H_i}}\) are the coordinates of an ankle joint at the same time, then the corresponding heel center coordinates, expressed in the Kinect reference frame, are given by $$\begin{aligned} y_{H_i} = U_{t_{H_i}} \left[ \begin{array}{c} u_1^{\top } y_{0,t_{H_i}} \\ u_{2,t_{H_i}}^{\top } y_{a,t_{H_i}} \\ u_{3,t_{H_i}}^{\top } y_{a,t_{H_i}} \end{array} \right]. \end{aligned}$$ For any given subject, step-by-step gait parameters computed from all the trial walks were averaged. Means and standard deviations (SD) for the system to be validated and the criterion were calculated. Bland and Altman plots were generated to provide a visual representation of the heteroscedasticity of the data [66]. The normal distribution of the data was tested with a Kolmogorov-Smirnov test. Agreement between the average parameters from the Kinect and GAITRite devices was assessed using Bland-Altman bias and limits of agreement (LoA), computed according to [67], Pearson's correlation (\(\rho\)) [68], the concordance correlation coefficient (CCC) [69, 70], and intra-class correlation (ICC) [71]. Pearson's correlation and CCC assess the relative and overall agreement, respectively, between the two methods. In particular, while the Pearson's correlation focusses on precision, CCC assesses both precision and deviation from the line of identity (accuracy). A visual representation of this assessment is also provided by the associated scatter plots. ICC coefficients of the type (2, k) with absolute agreement (as previously reported in [12, 72]), were used to further evaluate the level of agreement between methods. A repeatability analysis for the Kinect is performed by computing gait parameters as averages over single trial walks. Repeatability coefficients are computed by considering pairs of trial walks from the same subject, and are expressed in absolute value (as 2 times the SD [66]), as well as as a percentage of the mean.
Automatic estimation
The approach is evaluated with leave-one-subject-out cross-validation. This means that the trial walks of each subject are processed with the template models learned from the trial walks of all the remaining subjects. The manual estimates of the heel strike and toe-off instants are used as labels for learning the templates, and for performance evaluation of the automatic segmentation. The average length of a stride is \(\overline{m} = 25\) frames, the template models are learned for each dimension m in the range [15, 35], and \(\tau\) is set to 3. The automatic trial walk segmentation is evaluated by computing the Rand index [73] and the accuracy on detection (AoD) [74], which here is defined as follows. Let \(\mathbf {t} = [t_s,t_e]\) indicate the support of a subsequence \(\mathbf {x}_{t_s:t_e}\), and let \(\mathbf {g} = [g_s, g_e]\) indicate the corresponding ground-truth support.
The percentage of overlap between the supports is defined as $$\begin{aligned} P_{\mathbf {t},\mathbf {g}} = \frac{\min \{t_e,g_e\} - \max \{t_s,g_s\} + 1}{\max \{t_e,g_e\} - \min \{t_s,g_s\} + 1} \; , \end{aligned}$$ when \(\min \{t_e,g_e\} \ge \max \{t_s,g_s\}\), otherwise \(P_{\mathbf {t},\mathbf {g}} = 0\). AoD is the average overlapping percentage. If \(\mathcal {P} = \{ P_{\mathbf {t},\mathbf {g}} \}\) is the set of all the overlapping percentages, then \(\text {AoD} = 1/|\mathcal {P}| \sum _{P \in \mathcal {P}} P\). While the Rand index and the AoD measure the accuracy of the temporal segmentation, the standard deviation of the estimation error \(t_{\cdot }-g_{\cdot }\), where \(t_{\cdot }\) and \(g_{\cdot }\) are corresponding starting or ending times, is indicative of the precision of the instant estimates and is also computed. Table 1 reports the AoD, the Rand index, and the SD of the instant estimation error for several approaches. For a trial walk, \(US_L\) provides a template length \(\tilde{m}\), which is used to estimate the following non-overlapping subsequences in a greedy fashion by minimizing \(D_L(\mathbf {x}_{t_s:\cdot }, \mathcal {T}_{\tilde{m}})\). \(SDTW_L\) segments the same trial walk with templates of length \(\tilde{m}\). The fourth row of Table 1 corresponds to using \(CSDTW_L\) with the template length set to \(\tilde{m}\). \(SSWM_L\), instead, provides the optimal template size \(m^*\), for any given trial walk, which is also used by \(CSDTW_L\) in the last row of Table 1. By all metrics, \(SSWM_L\) is the best approach for proposing the optimal template size, \(m^*\), and number of strides, N, to be used in the contiguous refinement \(CSDTW_L\). Thus, the combination of \(SSWM_L\) and \(CSDTW_L\) represents the automatic segmentation method of choice, and is referred to as Kinect-A. Finally, in all experiments, \(\gamma\) was set to 1.
Table 1 Temporal segmentation
Figure 10 shows how Kinect-A computes the heel strike and toe-off instants in two steps. The first one is summarized by Fig. 10a–c, where \(SSWM_L\) computes the optimal length \(m^*\), and N subsequences potentially separated by gaps. The second step is summarized by Fig. 10c–e, where \(CSDTW_L\) with parameters \(m^*\) and N, computes N contiguous stride subsequences. The green dots represent the ground-truth segmentation. The final segmentation, defined by the red lines, shows qualitatively a clear improvement with respect to the initial segmentation with gaps. Note that Fig. 10a, c, d only show the plots of one coordinate component of the ankle joint. However, the algorithms use the coordinates of all the leg joints and the center hip joint. For a typical trial walk, the number of contiguous strides was found to be \(N=4\), which means that the front-view Kinect records useful skeleton tracking data for about 3 m. However, trial walks with 5 and 3 strides were also found, as this number also depends on the speed and the leg length of the subject. The level of agreement between the manual estimation approach (Kinect-M) and the GAITRite, and between Kinect-A and the GAITRite is evaluated. The gait parameters under consideration are the left and right step time, the cadence, the average swing time, the left and right stride length, the left and right step length, and the velocity. Figures 11, 12 and 13a, b show the Bland and Altman plots, where for each gait parameter the Kinect-M plot and the Kinect-A plot are next to each other to facilitate their visual comparison.
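For reference, the Bland-Altman bias and limits of agreement shown in these plots follow the standard construction. The following is a minimal sketch (the function name is hypothetical; per-subject averaged parameters are assumed as input):

```python
import numpy as np

def bland_altman(kinect, gaitrite):
    """Bland-Altman bias and limits of agreement between paired
    per-subject gait parameter estimates (one value per subject)."""
    k = np.asarray(kinect, float)
    g = np.asarray(gaitrite, float)
    d = k - g                          # per-subject differences
    bias = d.mean()
    half_width = 1.96 * d.std(ddof=1)  # limits: bias +/- 1.96 SD
    return bias, bias - half_width, bias + half_width
```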
All data were normally distributed (p-value \(>0.0002\), and \(>0.0015\) for Kinect-M right step length), and exhibited a mean bias but no heteroscedasticity and no proportional error. Bland–Altman bias and limits of agreement (LoA) are reported in Table 2 for Kinect-M, and Table 3 for Kinect-A. Figs. 13c, d, 14 and 15, instead, show the scatter plots, where, again, for each gait parameter the Kinect-M plot and the Kinect-A plot are next to each other to facilitate their visual comparison. Table 4 reports the means and standard deviations (SDs) of the parameters for the three methods.

Bland and Altman plots. On the left side: a, c, e, g show the comparison between GAITRite (criterion) and Kinect-M, where gait parameters are estimated with heel strike and toe-off instants computed manually. On the right side: b, d, f, h show the comparison between GAITRite (criterion) and Kinect-A, where gait parameters are computed fully automatically. Solid lines indicate the mean difference between criterion and the system to be validated. Dashed lines indicate the limits of agreement (\(\pm 1.96\) SD). The parameters compared are left step time, left step length, right step time, and right step length

Bland and Altman plots. On the left side: a, c, e, g show the comparison between GAITRite (criterion) and Kinect-M, where gait parameters are estimated with heel strike and toe-off instants computed manually. On the right side: b, d, f, h show the comparison between GAITRite (criterion) and Kinect-A, where gait parameters are computed fully automatically. Solid lines indicate the mean difference between criterion and the system to be validated. Dashed lines indicate the limits of agreement (\(\pm 1.96\) SD). The parameters compared are swing time, cadence, left stride length, and right stride length

Table 2 Agreement and repeatability—Kinect-M

Table 3 Agreement and repeatability—Kinect-A

Table 4 Gait parameter statistics

Bland and Altman plots and scatter plots for the velocity. On the left side: a, c show the comparison between GAITRite (criterion) and Kinect-M, where the velocity is estimated with heel strike and toe-off instants computed manually. On the right side: b, d show the comparison between GAITRite (criterion) and Kinect-A, where the velocity is computed fully automatically. Solid lines indicate the mean difference between criterion and the system to be validated for the Bland and Altman plots, as well as the linear best-fit for the scatter plot. Dashed lines indicate the limits of agreement (\(\pm 1.96\) SD) for the Bland and Altman plot, as well as the identity line for the scatter plot

Scatter plots. On the left side: a, c, e, g show the comparison between GAITRite (criterion) and Kinect-M, where gait parameters are estimated with heel strike and toe-off instants computed manually. On the right side: b, d, f, h show the comparison between GAITRite (criterion) and Kinect-A, where gait parameters are computed fully automatically. Solid lines indicate the linear best-fit. Dashed lines indicate the identity line. The parameters compared are left step time, left step length, right step time, and right step length

Scatter plots. On the left side: a, c, e, g show the comparison between GAITRite (criterion) and Kinect-M, where gait parameters are estimated with heel strike and toe-off instants computed manually. On the right side: b, d, f, h show the comparison between GAITRite (criterion) and Kinect-A, where gait parameters are computed fully automatically. Solid lines indicate the linear best-fit.
Dashed lines indicate the identity line. The parameters compared are swing time, cadence, left stride length, and right stride length

Tables 2 and 3 report additional agreement parameters for Kinect-M and Kinect-A, respectively. Levels of agreement are considered to be excellent, good, moderate, or modest if \(\rho\), CCC, or ICC are greater than 0.9, 0.8, 0.7, or 0.5, respectively. For Kinect-M, most parameters show excellent relative agreement (\(\rho >0.9\)), and good to excellent overall agreement (CCC \(>0.8\)), with mostly excellent absolute agreement (ICC \(>0.9\)). For Kinect-A the relative agreement is mostly good and excellent (\(\rho >0.8\)), with moderate and good overall agreement (CCC \(>0.7\)), and with good and excellent absolute agreement (ICC \(>0.8\)). The repeatability test shows that with probability greater than 95 %, the measurement of a parameter will differ from the previously measured value by less than the amounts reported in Tables 2 and 3. For Kinect-M in particular, the repeatability is very good for most of the parameters (<15 % of the mean), and good (<20 % of the mean) for the right step time and the swing time. The same behavior is observed for Kinect-A. Table 1 confirms the importance of the design choices made to address the challenge of performing an accurate segmentation in the presence of the very high variability of the temporal trajectories of skeleton vectors in children. In particular, \(US_L\) shows the poorest performance because it only models uniform scaling. \(SDTW_L\) adds to \(US_L\) the ability to account for non-uniform scaling, and leads to an improvement. \(CSDTW_L\), instead, forces the strides to be contiguous, further improving the performance. The first step of Kinect-A improves results even more because uniform and non-uniform scaling are handled jointly by \(SSWM_L\), not separately (\(US_L\) followed by \(SDTW_L\)). Finally, the second step of Kinect-A (last row of Table 1) refines the segmentation by imposing contiguous strides. Note that \(SSWM_L\) outperforms not only the two-step \(US_L\!-\!SDTW_L\), but also their contiguous refinement (fourth row of Table 1). Overall, the accuracy of Kinect-A is excellent (AoD and Rand index \({>}0.9\)), and the precision of the instant estimates is good (i.e., around or less than 20 % of the means in Table 4, 95 % of the time). Kinect-A is also computationally efficient. Indeed, with a Matlab implementation on a low-end PC, the running time of \(SSWM_L\) applied to a trial walk with length \(n=135\) is 4.11 s, and the running time of \(CSDTW_L\) is 7.05 s. On the other hand, \(SWM_L\) takes 75 min even when the length of the matching subsequence is constrained in the range \([\lfloor 0.8~m \rfloor , \lceil 1.2m \rceil ]\), and the template has length m. Therefore, \(SSWM_L\) provides a remarkable 1000× speedup factor, which is essential for implementing Kinect-A in a low-cost platform with limited computing power. Kinect-M represents an upper bound on the agreement, and Kinect-A approaches it with an average percentage deterioration of 5.5 % for the relative agreement, of 6.1 % for the overall agreement, and of 4.5 % for the absolute agreement. The Bland-Altman bias, instead, on average changes only by 2.18 % of the mean of the corresponding GAITRite parameter. In terms of PE, there is an average deterioration of 2.76 percentage points.
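For reference, the distinction between \(\rho\) (precision only) and the CCC (precision plus accuracy) can be sketched as follows. This is the simple non-repeated-measures form of Lin's coefficient [69], not the repeated-measures estimator of [70]; the function and variable names are ours.

```python
import numpy as np

def pearson_and_ccc(x, y):
    """Pearson's correlation (precision only) and Lin's concordance correlation
    coefficient (precision plus accuracy) for paired measurements."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                     # population variances, as in Lin (1989)
    sxy = ((x - mx) * (y - my)).mean()            # covariance
    rho = sxy / np.sqrt(vx * vy)
    ccc = 2.0 * sxy / (vx + vy + (mx - my) ** 2)  # penalizes deviation from the identity line
    return rho, ccc

# A constant offset leaves rho at 1 but lowers the CCC:
print(pearson_and_ccc([1.0, 2.0, 3.0], [1.5, 2.5, 3.5]))  # rho = 1.0, CCC < 1.0
```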
Overall, this means that Kinect-A can reach levels of agreement very close to those achievable by a manual inspection of Kinect data, which is extremely encouraging. The temporal parameters are those that exhibit more deterioration, especially the swing time. This is probably due to the limit imposed by the temporal resolution of the skeleton tracking, which is 30 frames per second. Kinect-A repeatability on average deteriorates only by 0.71 points compared to Kinect-M, which is remarkable. In particular, it remains very good even when the agreement with the GAITRite decreases a bit more, as for the right step length. For temporal parameters the repeatability worsens on average by 1.2 points, and by only 0.23 points for spatial parameters. This highlights that temporal resolution affects repeatability, as is also suggested by comparing the repeatability of cadence and swing time. The former is better because it is less sensitive to the resolution, since it is related to measuring time intervals much larger than those measured for the swing time. Finally, we note that very good repeatability parameters, as often observed in both Kinect-M and Kinect-A, are also indicative of the fact that differences between trial walks of the same subject are limited. Agreement and repeatability are affected by temporal resolution and skeleton tracking quality. However, while temporal resolution appears to have a stronger impact on the Kinect-A performance with respect to Kinect-M, this is not the case for the agreement with the GAITRite in general. Indeed, spatial parameters have worse levels of agreement than temporal parameters, highlighting that tracking quality, rather than temporal resolution, is likely responsible for this difference. This section describes the major limitations of the proposed approach, which might suggest future directions of investigation. An important aspect that has not been fully studied is the effect of various sources of noise on the gait parameter estimation. The Kinect skeleton tracking data is affected by noise in the spatial and temporal domain. In this work we acquired data with the default joint filtering option of the SDK turned on to filter out small jitters and maintain a very low latency. This allows smoothing the spatial noise across different frames to minimize jittering and stabilize the joint positions over time. In addition, the temporal sampling of the Kinect was assumed to be deterministic, with a frequency of 30 Hz. However, the sampling has a Gaussian jitter, as also reported in [75, 76]. For example, [75] reports a sampling period with mean 33.4 ms, and SD 3.7 ms. Although a full investigation of the temporal jittering effects should be addressed in future research, a very simplified analysis allows gauging to what extent jittering affects our approach. For example, if we are measuring a stride time of 0.8 s (essentially the average stride time of our population), we expect to sample the stride 24 times. Therefore, by assuming the sequence of sampling periods to be made of independent and identically distributed Gaussian variables, the stride time becomes a Gaussian variable with mean \(24 \times 33.4\) ms, and SD \(\sqrt{24} \times 3.7\) ms. However, according to (12), the average stride time R is computed over 3 trials, each of which has an average of 4 strides. Therefore, R is a Gaussian variable with mean \(24 \times 33.4\) ms, and SD \(\sqrt{24} \times 3.7 / \sqrt{3 \times 4} = \sqrt{2} \times 3.7\) ms.
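As a quick numeric check of this error propagation (a minimal sketch; the constants come from the text and [75], the variable names are ours):

```python
import math

period_mean_ms, period_sd_ms = 33.4, 3.7     # Kinect sampling period stats from [75]
samples_per_stride = 24                      # ~0.8 s stride sampled at ~30 Hz
trials, strides_per_trial = 3, 4             # averaging scheme of Eq. (12)

stride_mean = samples_per_stride * period_mean_ms             # 801.6 ms
stride_sd = math.sqrt(samples_per_stride) * period_sd_ms      # sqrt(24) x 3.7 ms
avg_sd = stride_sd / math.sqrt(trials * strides_per_trial)    # = sqrt(2) x 3.7 ms
print(f"CV of R: {100 * avg_sd / stride_mean:.2f} %")         # prints ~0.65 %
```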
This means that R has a coefficient of variation due to the temporal jitter of 0.65 %, which is small, suggesting that a fixed sampling frequency of 30 Hz is a plausible working assumption, as confirmed by the promising validation results. The Kinect skeleton tracking data is also affected by the distance between the Kinect and the individual. The further away the individual is, the lower the tracking accuracy. Therefore, single cycle step lengths or step times will be affected by greater errors if they correspond to step cycles at the beginning of the trial walk, which is further away, whereas those corresponding to later steps will be more accurate. However, since gait parameters are computed by averaging over several step cycles, this levels off much of the error induced by the dependency of the tracking accuracy on distance. While this might sound reasonable and intuitive, a thorough investigation of this dependency should be addressed in future work. Another issue left unexplored is the effect of stratification. The stride template models are learned with data from the entire children age range (2–4 years). Therefore, as long as the child being tested has an age within that range, Kinect-A is expected to work. While this is a strength of the approach, it would still be possible to learn different stride template models for different age ranges, or for different ranges of children's leg length. In this way, a more specific template model could be preselected based on the child's age, or could even be automatically selected, based on the automatic estimation of the leg length from the Kinect skeleton tracking data. A future investigation should establish whether using stratified template models will significantly increase the accuracy and precision of the approach. Although the Kinect has had a powerful impact on several clinical applications [45, 46], updated technology might further expand it, even for gait analysis applications. It is expected that improvements in the temporal resolution and in the quality of the skeleton tracking, coming with the updated versions of Kinect [28], will produce better concurrent validity and repeatability. Determining the size of such improvements, and to what extent Kinect-A can be used to replicate the large set of parameters computed by the GAITRite, will be the subject of future research. An important future direction for expanding the horizon of Kinect-A is its application to an adult population. In principle, this could be done as long as stride template models are learned for this specific case. However, the size of adults leads to proportional increases in stride length, and to a reduced number of strides captured by the system during a single trial walk. Therefore, this aspect, as well as the different probability distribution of the skeleton tracking information, will have a nontrivial effect on the gait parameters and will need to be investigated. Finally, we stress the fact that this study has introduced Kinect-A for children's gait analysis, but the validation has been limited to healthy subjects. Therefore, perhaps the most relevant extension of Kinect-A should be pursued with the goal of performing children's gait analysis on any subject, regardless of health status. This work has proposed the Kinect-A method for the automated estimation of children's gait parameters, based on the Microsoft Kinect, and has assessed its concurrent validity against the GAITRite on healthy subjects.
The core of Kinect-A is based on bringing together maximum likelihood estimation, uniform and non-uniform scaling estimation, and subsequence matching principles. This approach has demonstrated the ability to cope with the high variability of healthy children's skeleton tracking data acquired by the Kinect by providing excellent temporal segmentation accuracy, and good precision, computed against the ground truth obtained with the specialized manual annotation procedure of Kinect-M. Moreover, the approach is computationally efficient, with low computing power needs. A study conducted with healthy children has shown that Kinect-A has good concurrent validity against the GAITRite, as well as very good repeatability. In particular, on a range of 9 gait parameters, the relative and absolute agreements were found to be good and excellent, and the overall agreements were found to be good and moderate. Moreover, we found that the agreement and repeatability parameters of Kinect-A very closely approached those of Kinect-M, which represents an upper bound. In particular, the agreement is found to have an average percentage deterioration of \(5.37~\%\), and the repeatability is found to deteriorate by 0.71 points on average. Despite the limited evaluation conditions based on healthy subjects, the results obtained with Kinect-A represent a step forward in that they encourage further development, with the goal of deploying a fully functional low-cost, parent-operable, portable system for in-home monitoring of gait in children (age 2–4 years), which can operate in actual rehabilitation intervention scenarios.

Law M, King G, Russell D, MacKinnon E, Hurley P, Murphy C. Measuring outcomes in children's rehabilitation: a decision protocol. Archiv Phys Med Rehab. 1999;80(6):629–36.
Majnemer A. Benefits of using outcome measures in pediatric rehabilitation. Phys Occup Therap Pediatr. 2010;30(3):165–7.
Muro-de-la-Herran A, Garcia-Zapirain B, Mendez-Zorrilla A. Gait analysis methods: an overview of wearable and non-wearable systems, highlighting clinical applications. Sensors. 2014;14(2):3362–94.
van den Noort JC, Ferrari A, Cutti AG, Becher JG, Harlaar J. Gait analysis in children with cerebral palsy via inertial and magnetic sensors. Med Biol Eng Comp. 2013;51(4):377–86.
Hamers FPT, Koopmans GC, Joosten EAJ. Catwalk-assisted gait analysis in the assessment of spinal cord injury. J Neurotrauma. 2006;23(3–4):537–48.
Belda-Lois J-M, Mena-del Horno S, Bermejo-Bosch I, Moreno JC, Pons JL, Farina D, Iosa M, Molinari M, Tamburella F, Ramos A, Caria A, Solis-Escalante T, Brunner C, Rea M. Rehabilitation of gait after stroke: a review towards a top-down approach. J Neuroeng Rehab. 2011;8:66.
Barak Y, Wagenaar RC, Holt KG. Gait characteristics of elderly people with a history of falls: a dynamic approach. Phys Therap. 2006;86(11):1501–10.
Toro B, Nester CJ, Farren PC. The status of gait assessment among physiotherapists in the United Kingdom. Archiv Phys Med Rehab. 2003;84(12):1878–84.
Simon SR. Quantification of human motion: gait analysis-benefits and limitations to its application to clinical problems. J Biomech. 2004;37(12):1869–80.
GAITRite. CIR Systems Inc., Havertown, PA.
Cutlip RG, Mancinelli C, Huber F, DiPasquale J. Evaluation of an instrumented walkway for measurement of the kinematic parameters of gait. Gait Posture. 2000;12:134–8.
Bilney B, Morris M, Webster K. Concurrent related validity of the GAITRite walkway system for quantification of the spatial and temporal parameters of gait. Gait Posture. 2003;17(1):68–74.
Menz HB, Latt MD, Tiedemann A, Mun San Kwan M, Lord SR. Reliability of the GAITRite walkway system for the quantification of temporo-spatial parameters of gait in young and older people. Gait Posture. 2004;20(1):20–5.
Webster KE, Wittwer JE, Feller JA. Validity of the gaitrite walkway system for the measurement of averaged and individual step parameters of gait. Gait Posture. 2005;22(4):317–21.
Thorpe DE, Dusing SC, Moore CG. Repeatability of temporospatial gait measures in children using the GAITRite electronic walkway. Archiv Phys Med Rehab. 2005;86(12):2342–6.
Dusing SC, Thorpe DE. A normative sample of temporal and spatial gait parameters in children using the GAITRite electronic walkway. Gait Posture. 2007;25(1):135–9.
Sorsdahl AB, Moe-Nilssen R, Strand LI. Test-retest reliability of spatial and temporal gait parameters in children with cerebral palsy as measured by an electronic walkway. Gait Posture. 2008;27(1):43–50.
Vicon. OMG PLC, UK.
Kinect for Xbox 360. Microsoft Corporation, Redmond.
Shotton J, Fitzgibbon A, Cook M, Sharp T, Finocchio M, Moore R, Kipman A, Blake A. Real-time human pose recognition in parts from single depth images. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2011. p. 1297–304.
Macleod CA, Conway BA, Allan DB, Galen SS. Development and validation of a low-cost, portable and wireless gait assessment tool. Med Eng Phys. 2014;36(4):541–6.
Crea S, Donati M, De Rossi SMM, Oddo CM, Vitiello N. A wireless flexible sensorized insole for gait analysis. Sensors. 2014;14(1):1073–93.
Moeslund TB, Hilton A, Krüger V. A survey of advances in vision-based human motion capture and analysis. Comp Vision Image Underst. 2006;104(2–3):90–126.
Aggarwal JK, Ryoo MS. Human activity analysis: a review. ACM Comp Surveys. 2011;43(3):16:1–16:43.
Yoo JH, Nixon M. Automated markerless analysis of human gait motion for recognition and classification. ETRI J. 2011;33(3):259–66.
Wang F, Stone E, Skubic M, Keller JM, Abbott C, Rantz M. Toward a passive low-cost in-home gait assessment system for older adults. IEEE J Biomed Health Inform. 2013;17(2):346–55.
Wang F, Skubic M, Rantz M, Cuddihy PE. Quantitative gait measurement with pulse-doppler radar for passive in-home gait assessment. IEEE Trans Biomed Eng. 2014;61(9):2434–43.
Kinect for Xbox One. Microsoft Corporation, Redmond.
Vernadakis N, Derri V, Tsitskari E, Antoniou P. The effect of Xbox Kinect intervention on balance ability for previously injured young competitive male athletes: a preliminary study. Phys Therap Sport Off J Assoc Chart Physiotherap Sports Med. 2014;15(3):148–55.
Yang Y, Pu F, Li Y, Li S, Fan Y, Li D. Reliability and validity of Kinect RGB-D sensor for assessing standing balance. IEEE Sensors J. 2014;14(5):1633–8.
Mentiplay BF, Clark RA, Mullins A, Bryant AL, Bartold S, Paterson K. Reliability and validity of the Microsoft Kinect for evaluating static foot posture. J Foot Ankle Res. 2013;6(1):14.
Stone EE, Skubic M. Mapping Kinect-based in-home gait speed to TUG time: a methodology to facilitate clinical interpretation. In: International Conference on Pervasive Computing Technologies for Healthcare, 2013. p. 57–64.
Bonnechère B, Jansen B, Salvia P, Bouzahouene H, Omelina L, Moiseev F, Sholukha V, Cornelis J, Rooze M, Sint Jan S. Validity and reliability of the Kinect within functional assessment activities: comparison with standard stereophotogrammetry. Gait Posture. 2014;39(1):593–8.
Cippitelli E, Gasparrini S, Spinsante S, Gambi E. Kinect as a tool for gait analysis: validation of a real-time joint extraction algorithm working in side view. Sensors (Basel, Switzerland). 2015;15(1):1417–34.
Clark RA, Pua Y-H, Fortin K, Ritchie C, Webster KE, Denehy L, Bryant AL. Validity of the Microsoft Kinect for assessment of postural control. Gait Posture. 2012;36(3):372–7.
Chang Y-J, Chen S-F, Huang J-D. A Kinect-based system for physical rehabilitation: a pilot study for young adults with motor disabilities. Res Develop Disabil. 2011;32(6):2566–70.
Chang Y-J, Han W-Y, Tsai Y-C. A Kinect-based upper limb rehabilitation system to assist people with cerebral palsy. Res Develop Disabil. 2013;34(11):3654–9.
Clark RA, Pua Y-H, Bryant AL, Hunt MA. Validity of the Microsoft Kinect for providing lateral trunk lean feedback during gait retraining. Gait Posture. 2013;38(4):1064–76.
Olesh EV, Yakovenko S, Gritsenko V. Automated assessment of upper extremity movement impairment due to stroke. PloS One. 2014;9(8):e104487.
Clark RA, Vernon S, Mentiplay BF, Miller KJ, McGinley JL, Pua YH, Paterson K, Bower KJ. Instrumenting gait assessment using the Kinect in people living with stroke: reliability and association with balance tests. J Neuroeng Rehab. 2015;12:15.
Galna B, Barry G, Jackson D, Mhiripiri D, Olivier P, Rochester L. Accuracy of the Microsoft Kinect sensor for measuring movement in people with Parkinson's disease. Gait Posture. 2014;39(4):1062–8.
Procházka A, Vyšata O, Vališ M, Ťupa O, Schätz M, Mařík V. Bayesian classification and analysis of gait disorders using image and depth sensors of Microsoft Kinect. Digital Signal Processing. 2015.
Procházka A, Vyšata O, Vališ M, Ťupa O, Schätz M, Mařík V. Use of the image and depth sensors of the Microsoft Kinect for the detection of gait disorders. Neur Comp Appl. 2015;26(7):1621–9.
Behrens J, Pfüller C, Mansow-Model S, Otte K, Paul F, Brandt AU. Using perceptive computing in multiple sclerosis—the Short Maximum Speed Walk test. J Neuroeng Rehab. 2014;11:89.
Hondori HM, Khademi M. A review on technical and clinical impact of microsoft kinect on physical therapy and rehabilitation. J Med Eng. 2014;2014:1–16.
Webster D, Celik O. Systematic review of kinect applications in elderly care and stroke rehabilitation. J NeuroEng Rehab. 2014;11(108):1–24.
Luna-Oliva L, Ortiz-Gutiérrez RM, Cano-de la Cuerda R, Piédrola RM, Alguacil-Diego IM, Sánchez-Camarero C, Martínez Culebras MC. Kinect Xbox 360 as a therapeutic modality for children with cerebral palsy in a school environment: a preliminary study. Neuro Rehab. 2013;33(4):513–21.
Altanis G, Boloudakis M, Retalis S, Nikou N. Children with motor impairments play a kinect learning game: first findings from a pilot case in an authentic classroom environment. J Interact Design Architect. 2013;19:91–104.
Stone E, Skubic M. Evaluation of an inexpensive depth camera for in-home gait assessment. J Ambien Intel Smart Environ. 2011;3(4):349–61.
Stone E, Skubic M. Unobtrusive, continuous, in-home gait measurement using the microsoft kinect. IEEE Trans Biomed Eng. 2013;60(10):2925–32.
Gabel M, Renshaw E, Schuster A, Gilad-Bachrach R. Full body gait analysis with kinect. In: IEEE International Conference of the Engineering in Medicine and Biology Society. 2012.
Wegener C, Hunt A, Vanwanseele B, Burns J, Smith R. Effect of children's shoes on gait: a systematic review and meta-analysis. J Foot Ankle Res. 2011;4(1):3.
Sun B, Liu X, Wu X, Wang H. Human gait modeling and gait analysis based on Kinect. In: IEEE International Conference on Robotics and Automation, 2014. p. 3173–3178.
Clark RA, Bower KJ, Mentiplay BF, Paterson K, Pua YH. Concurrent validity of the Microsoft Kinect for assessment of spatiotemporal gait variables. J Biomech. 2013;46(15):2722–5.
Gianaria E, Balossino N, Grangetto M, Lucenteforte M. Gait characterization using dynamic skeleton acquisition. In: IEEE International Workshop on Multimedia Signal Processing, 2013. p. 440–45.
Staranowicz A, Brown GR, Mariottini G. Evaluating the Accuracy of a Mobile Kinect-based Gait-monitoring System for Fall Prediction. In: ACM International Conference on PErvasive Technologies Related to Assistive Environments, 2013. p. 57–1574.
Pfister A, West AM, Bronner S, Noah JA. Comparative abilities of Microsoft Kinect and Vicon 3D motion capture for gait analysis. J Med Eng Technol. 2014;38(5):274–80.
Auvinet E, Multon F, Aubin CE, Meunier J, Raison M. Detection of gait cycles in treadmill walking using a kinect. Gait and Posture. 2014.
Barth J, Oberndorfer C, Kugler P, Schuldhaus D, Winkler J, Klucken J, Eskofier B. Subsequence dynamic time warping as a method for robust step segmentation using gyroscope signals of daily life activities. In: IEEE International Conference of the Engineering in Medicine and Biology Society, 2013. p. 6744–6747.
Müller M. Information Retrieval for Music and Motion. Germany: Springer; 2007.
Keogh E, Palpanas T, Zordan VB, Gunopulos D, Cardle M. Indexing large human-motion databases. Proc Int Conf Very Large Data Bases. 2004;30:780–91.
Rabiner L, Juang B-H. Fundamentals of Speech Recognition. NJ: Prentice Hall; 1993.
Fu AWC, Keogh E, Lau LY, Ratanamahatana CA, Wong RCW. Scaling and time warping in time series querying. VLDB J. 2008;17(4):899–921.
Sakurai Y, Faloutsos C, Yamamuro M. Stream monitoring under the time warping distance. In: IEEE International Conference on Data Engineering, 2007. p. 1046–55.
Golub GH, Van Loan CF. Matrix computations, 3rd edn. The Johns Hopkins University Press, MD. 1996.
Bland JM, Altman DG. Statistical methods for assessing agreement between two methods of clinical measurement. Lancet. 1986;1(8476):307–10.
Bland JM, Altman DG. Agreement between methods of measurement with multiple observations per individual. J Biopharm Stat. 2007;17(4):571–82.
Bland JM, Altman DG. Calculating correlation coefficients with repeated observations: Part 2-Correlation between subjects. BMJ (Clinical research ed.). 1995;310(6980):633.
Lin LI. A concordance correlation coefficient to evaluate reproducibility. Biometrics. 1989;45(1):255–68.
Carrasco JL, Phillips BR, Puig-Martinez J, King TS, Chinchilli VM. Estimation of the concordance correlation coefficient for repeated measures using SAS and R. Comp Methods Programs Biomed. 2013;109(3):293–304.
Shrout PE, Fleiss JL. Intraclass correlations: uses in assessing rater reliability. Psychol Bull. 1979;86(2):420–8.
Hartmann A, Luzi S, Murer K, de Bie RA, de Bruin ED. Concurrent validity of a trunk tri-axial accelerometer system for gait analysis in older adults. Gait Posture. 2009;29(3):444–8.
Rand WM. Objective criteria for the evaluation of clustering methods. J Am Stat Assoc. 1971;66(336):846–50.
Niennattrakul V, Wanichsan D, Ratanamahatana CA. Accurate subsequence matching on data stream under time warping distance. PAKDD Workshop. 2009;5669:156–67.
Elgendi M, Picon F, Magnenat-Thalmann N, Abbott D. Arm movement speed assessment via a Kinect camera: a preliminary study in healthy subjects. BioMed Eng OnLine. 2014;13:88.
Webster D, Celik O. Experimental evaluation of microsoft kinect's accuracy and capture rate for stroke rehabilitation applications. In: IEEE Haptics Symposium; 2014. p. 455–60.
SM developed the software for the automated gait analysis algorithm and processed the data. PP and GD conceived the study. PP, CAM and GD designed the experimental setup and protocol. KG and PP led the data collection. PP and CAM critically revised the manuscript. GD designed the automated gait analysis approach, the statistical analysis, and drafted the manuscript. All authors have read and approved the final manuscript. The authors are grateful to Patrick Hathaway for helping with the initial experimental setup for the data collection.

Lane Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown, WV, USA: Saeid Motiian & Gianfranco Doretto
Department of Pediatrics, West Virginia University School of Medicine, Morgantown, WV, USA: Paola Pergami
Department of Biology, West Virginia University, Morgantown, WV, USA: Keegan Guffey
Division of Physical Therapy, West Virginia University School of Medicine, Morgantown, WV, USA: Corrie A Mancinelli
Correspondence to Gianfranco Doretto.

Motiian, S., Pergami, P., Guffey, K. et al. Automated extraction and validation of children's gait parameters with the Kinect. BioMed Eng OnLine 14, 112 (2015). https://doi.org/10.1186/s12938-015-0102-9
Accepted: 15 November 2015
Keywords: Children's gait analysis, GAITRite, Concurrent validity, Dynamic time warping
Inventory of aspen trees in spruce dominated stands in conservation area

Matti Maltamo1, Annukka Pesonen2, Lauri Korhonen1, Jari Kouki1, Mikko Vehmas3 & Kalle Eerikäinen4

Forest Ecosystems volume 2, Article number: 12 (2015)

The occurrence of aspen trees increases the conservation value of mature conifer dominated forests. Aspens typically occur as scattered individuals among major tree species, and therefore the inventory of aspens is challenging. We characterized aspen populations in a boreal nature reserve using the diameter distribution, spatial pattern, and the forest attributes: volume, number of aspens, number of large aspen stems and basal area median diameter. The data were collected from three separate forest stands in Koli National Park, eastern Finland. At each site, we measured the breast height diameter and coordinates of each aspen. The comparison of inventory methods for aspens within the three stands was based on simulations with the mapped field data. We mimicked stand level inventory by locating varying numbers of fixed area circular plots both systematically and randomly within the stands. Additionally, we tested whether the use of airborne laser scanning (ALS) data as auxiliary information would improve the accuracy of the stand level inventory by applying probability proportional to size sampling to assist the selection of field plot locations. The results showed that the aspens were always clustered, and the diameter distributions indicated different stand structures in the three investigated forest stands. The reliability of the volume and the number of large aspen trees varied from relative root mean square error figures above 50% with fewer sample plots (5–10) to values of 25%–50% with 10 or more sample plots. Stand level inventory estimates were also able to detect the spatial pattern and the shape of the diameter distribution. In addition, ALS-based auxiliary information could be useful in guiding the inventories, but caution should be used when applying the ALS-supported inventory technique. This study characterized European aspen populations for the purposes of monitoring and management of boreal conservation areas. Our results suggest that if the number of sample plots is adequate, i.e. 10 or more, stand level inventory will provide sufficiently accurate forest attribute estimates in conservation areas (minimum accuracy requirement of RMSE% is 20%–50%). Even for the more ecologically valuable attributes, such as the diameter distribution, spatial pattern and large aspens, the estimates are acceptable for conservation purposes.

One of the most interesting minor tree species in boreal forests of northern Europe is the European aspen (Populus tremula L.). The importance of aspen is closely related to its biodiversity values because it hosts particularly diverse groups of associated species, many of which are threatened in Fennoscandia (Esseen et al. 1992; Kouki et al. 2004). In addition, large-sized aspens have generally disappeared from managed forests because they have low economic value and are intermediate hosts of the pine rust fungus (Melampsora pinitorqua [Braun] Rostr.) that causes serious damage to young pine stands (Kurkela 1973; Heliövaara and Väisänen 1984). Although aspen is a typical species in post-disturbance, early successional stages, recent studies have indicated that aspen can maintain its populations in natural old-growth coniferous forests for up to several hundred years, even though they may slowly decline in abundance (Lilja et al. 2006; Vehmas et al. 2009b).
In particular, old and large-sized aspen trees, which are most valuable for biodiversity, are mostly found in mature and old-growth mixed forests where they grow in small groups or as scattered individuals (Tikka 1954; Syrjänen et al. 1994). Because the spatiotemporal continuity of ecologically important characteristics is regarded as important for conservation purposes (Stokland et al. 2002; Kouki et al. 2004), the ability to inventory and monitor aspen trees is essential for the management of conservation areas. Several variables can be used to describe aspens in stand-level forest inventories. First, the existence of aspen can be recorded. Secondly, details on the amount and size of aspen trees are of interest; in stand-level inventories, they are usually described using basal area, mean diameter, and mean height (Koivuniemi and Korhonen 2006). Thirdly, from a biodiversity point of view, information on size variation and spatial distribution is highly relevant (Kouki et al. 2004). The determination of the spatial distribution requires that the trees are individually mapped, which is usually impracticable in field surveys, except for research purposes. Correspondingly, tree height distributions are usually not assessed due to the laborious field measurements, whereas diameter distributions can be obtained. With this information, indicators of the naturalness of the forest structure of a given aspen population, such as the shape of the diameter distribution, can be assessed. Furthermore, it is easy to define the proportion of large aspens when their diameters at breast height (dbh) are, for instance, greater than 25 cm. The problem related to the assessment of aspen in stand-level inventories is that the low density of the aspen trees results in high sampling errors. It is also possible that aspens are not separated from other economically less-important deciduous species in tree stock descriptions for forest management. In validation studies of the inventories by compartments, the root mean square errors (RMSEs) obtained for the total growing stock volume have ranged from 15%−38% (Poso 1983; Haara and Korhonen 2004). However, species-specific errors are considerably higher, being 29%, 43% and 65% for Scots pine (Pinus sylvestris L.), Norway spruce (Picea abies L.), and the group consisting of silver birch (Betula pendula Roth) and downy birch (B. pubescens Ehrh), respectively (Haara and Korhonen 2004). While the errors are usually acceptable for the dominant coniferous species, minor deciduous tree species are described too inaccurately for many purposes. For European aspen, the relative RMSE can be several hundreds of percent (Arto Haara, personal comm.). Airborne laser scanning (ALS)-based technology has been successfully applied to stand-level inventories during recent years (Næsset 2007; Maltamo and Packalen 2014). Forest characteristics are usually estimated with 100% coverage of the inventory area by utilising the area-based approach (ABA), i.e. statistical relationships between forest attributes and ALS metrics at the plot level. The first applications estimated forest characteristics as a whole, but the inventory system by Packalén and Maltamo (2007), which also utilizes aerial photographs and relies on non-parametric imputation, was the first species-specific estimator for forest attributes. However, deciduous tree species are usually pooled into one single group (Packalén and Maltamo 2007).
Thus, the previously developed ALS based inventory approaches are not appropriate for providing species-specific information on aspen. In studies by Breidenbach et al. (2010) and Pippuri et al. (2013), species-specific ALS inventory has also been applied to identify aspens, but the RMSE values have been over 100%. ALS can also provide information about individual trees, which can be aggregated to the stand level. In a study by Säynäjoki et al. (2008), single aspen trees were detected from dense ALS data. The inventory system was, however, rather complex, including, for instance, visual interpretation of aerial images to separate coniferous trees from deciduous trees. As a result, the classification accuracy of large (dbh > 25 cm) aspen trees was 78.6%. In addition, discrimination of aspen can be difficult because the ALS intensity metrics that are important in species detection overlap with those of spruce and birch (Ørka et al. 2007; Korpela et al. 2010). Despite the recent advances in tree-level identification, it is challenging to obtain stand-level information on aspen in remote sensing-based forest inventories. Correspondingly, the accuracy estimates have been rather low for aspen, or it has been completely ignored in traditional field inventories. Since the trend in forest inventories is toward remote sensing applications, rare and scattered tree species, such as aspen, could become neglected in inventories. On the other hand, there is an increasing need for forest inventory information on aspen, especially in conservation areas where the occurrence and long-term persistence of scattered aspen trees may be crucial for many conservation-dependent species. The goal of this study was to characterise aspen populations in a boreal nature reserve. The study data are based on mapped individual aspens in three separate spruce dominated forest stands. In this unique data set the aspen populations have developed without the effects of active forest silviculture during recent decades. We characterised aspen using the diameter distribution, the spatial pattern of trees, and the forest attributes volume (V, m3∙ha–1), number of stems (N, ha–1), number of stems of large aspens (N dbh > 25 cm, ha–1) and basal area median diameter (D g M, cm). Furthermore, we applied stand level inventory simulations to examine the accuracy of the estimates of these characteristics. Finally, probability proportional to size (PPS) sampling using ALS metrics as auxiliary information was evaluated as a method to improve the inventory estimates.

Study area and field measurements

The study area was located in Koli National Park (NP) in eastern Finland (29°50′E, 63°5′N). The area is characterised as a highly variable boreal landscape, where the altitude varies from 94−347 m above sea level (Lyytikäinen 1991; Kärkkäinen 1994). The area lies in the transition zone between the southern and middle boreal vegetation zones (Kalliola 1973). Most forests in the area are dominated by Norway spruce (Picea abies L. Karst.) and Scots pine (Pinus sylvestris L.) with a highly variable admixture of silver birch, downy birch, European aspen, and grey alder (Alnus incana [L.] Moench) (Lyytikäinen 1991; Grönlund and Hakalisto 1998). In a study by Vehmas et al. (2009b), the historical continuity of aspen was studied based on inventory registers, and some areas where large aspens have survived since 1910 were found within the current Koli NP. Three of the largest of these stands were selected for this study (Figures 1 and 2).
Other stands were very small, included only a few aspens, or had a highly irregular shape. The total area of forest stand 1 was 8.05 ha, whereas stands 2 and 3 covered 5.96 and 12.93 ha of forest, respectively (Table 1). Within the three stands, both the dbh and the GPS position were recorded for all living aspen trees having a dbh larger than 5 cm in 2006. Stem volumes of the tallied aspens were calculated using Laasasenaho's (1982) volume function for Scots pine, since published equations were not available for European aspen (Kinnunen et al. 2007). The choice of volume model was based on expert opinion. Sum characteristics were converted to per-hectare levels, and D g M was calculated for the three stands (Table 1). In addition, the total stand volume and basal area were taken from the existing stand register data and updated to the aspen measurement date (see Vehmas et al. 2009b).

The location of the study area.

The study stands (A = stand 1, B = stand 2, C = stand 3) with aspen tree locations.

Table 1 Areas and forest attributes for European aspen and stand totals for the three study stands

Additionally, 15 rectangular sample plots located in Koli NP that did not overlap with the three stands previously described were used to find the ALS metric that correlated best with aspen volume. These data were used in the PPS sampling. These plots were originally established to examine single-tree detection of aspen from remote sensing data and included at least one aspen tree (Säynäjoki et al. 2008). The dbh was measured and stem volumes were calculated for all trees. More detailed information on the data obtained from the 15 sample plots can be found in Säynäjoki et al. (2008).

Laser data

The geo-referenced ALS point cloud data from Koli NP were collected on 13 July 2005 using an Optech ALTM 3100 scanner operating at a mean altitude of 900 m above ground level (a.g.l.), which resulted in a nominal sampling density of ca. 4 measurements∙m–2. Both the first and last pulse data were recorded, and the last pulse data were employed to generate a digital terrain model (DTM) by the method explained in Axelsson (2000) using a grid cell size of 1 m. Above-ground heights (i.e., canopy heights) for the laser points were obtained by subtracting the DTM at the corresponding location. In this study, the pulse data obtained with the ALS sensor were reclassified to "first echo" or "last echo". It is worth noting here that the original single echoes were duplicated to both the first and last echo classes, whereas the intermediate echoes were completely ignored. For more details on the original ALS data, see Vehmas et al. (2009a). The height distribution of the first and last pulse canopy height hits was used to calculate plot-wise percentiles for the 0, 1, 5, 10, 20, …, 90, 95, 99, and 100% heights (h 0, h 1, …, h 100) (Næsset 2004), and cumulative proportional crown densities (p 0, p 1, …, p 100) were calculated for the respective quantiles. The height distributions contained only those laser points that were classified as above-ground hits; a threshold value of 0.1 m was used. The h 5, for example, denotes the height at which the accumulation of laser hit heights in the vegetation was 5%, and, correspondingly, p 5 denotes the proportion of laser hits that accumulated at the 5% height.
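As an illustration of how such plot-level metrics can be derived from the above-ground echo heights, the sketch below computes the height percentiles described above. The density computation shown (share of canopy echoes below fixed fractions of the maximum height) is one common variant and may differ in detail from the definition used here; all function and variable names are ours.

```python
import numpy as np

def als_height_metrics(echo_heights, threshold=0.1):
    """Percentile heights (h0...h100) and cumulative crown densities (p0...p100)
    for one plot, from echo heights above ground in meters."""
    h = np.asarray(echo_heights, dtype=float)
    canopy = np.sort(h[h > threshold])           # above-ground hits only (0.1 m threshold)
    q = [0, 1, 5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 95, 99, 100]
    heights = {f"h{p}": float(np.percentile(canopy, p)) for p in q}
    # Densities here: share of canopy echoes at or below fixed fractions of the
    # maximum height (one common variant; the study's exact definition may differ).
    dens = {f"p{p}": float((canopy <= heights["h100"] * p / 100).mean()) for p in q}
    return {**heights, **dens}
```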
In addition, the following variables were calculated for each sample plot: the laser pulse intensities accumulating in the percentiles (i 10, i 30, …, i 90), the average intensity value of the above-ground hits, the proportion of ground hits versus canopy hits using a threshold value of 0.1 m (veg), and the average height (h mean) and standard deviation (h sd) of the above-ground hits. The intensity values were used as outputted by the sensor, without calibration. All metrics were calculated separately for the first and the last pulse data.

Stand level inventory

The methods for aspen inventory were studied based on simulations using the field data from the three stands where all aspens were mapped. We simulated stand level inventory by placing circular plots of size 400 m2 (radius 11.28 m) into the stands both systematically and randomly. The size of the plot was chosen to correspond to the grid cell size in the PPS sampling (see the methodology below). Five, ten, fifteen, or twenty plots were located in each study stand for different sampling intensities. All sampling alternatives were repeated 2500 times. In the simulations, plots were only included if the centre point of the plot was within the study stand. For plots located at the edge of the stand, an edge correction was applied by multiplying the attribute value of an edge plot by its expansion factor (Beers 1966):
$$ \mathrm{Attribute}={\mathrm{Attribute}}_{\mathrm{Edge}\ \mathrm{plot}}\times \frac{\mathrm{Plot}\ \mathrm{size}}{\mathrm{Edge}\ \mathrm{plot}\ \mathrm{size}} $$
where attribute is the attribute value after correction, attributeEdge plot is the attribute value of the edge plot, plot size is the size of the sample plot, i.e., 400 m2, and edge plot size is the area of the edge plot within the stand. This correction was made for the sum attributes V, N, and N dbh>25cm but not for D g M. This edge correction is slightly biased but leads to considerably more accurate results than not applying any correction (Schreuder et al. 1993). Finally, the estimates of the forest attributes were calculated as sample means for each sample. Furthermore, we also tested whether the use of ALS data as auxiliary information would improve the accuracy of the stand level inventory by applying PPS sampling. The basic idea of this approach is to use the ALS metric to guide the selection of field plot locations (Pesonen et al. 2010a, b). We applied the same numbers of sample plots as in the case of systematic and random sampling, but the sampling probabilities varied according to the ALS information. This was done to choose the most promising locations for aspen plots. First, probability layers were produced, i.e., the auxiliary data values were directly calculated for the whole stand, which was divided into a grid of 20 m × 20 m sample units. When applying PPS sampling, the sample units were square and are referred to as grid cells. ALS based auxiliary data values were calculated for each sample unit (i = 1, …, N grid, where N grid is the total number of sample units in a stand), and the probabilities of each unit i being selected were determined. The selection probabilities for the sample units (p i) were calculated by dividing the auxiliary data value x i for the sample unit i by the sum of the auxiliary data values over the whole area of the probability layer (p i = x i/∑x i). These selection probabilities were finally utilised in sampling 5, 10, 15 or 20 sample units. The calculations were repeated 2500 times.
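A minimal sketch of this plot-selection step is given below. The auxiliary values, the sampling-without-replacement choice, and the function names are our assumptions; the study's exact estimator is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2500)

def pps_select_cells(aux_values, n_plots):
    """Select grid-cell indices with probability proportional to an
    ALS-derived auxiliary value x_i, i.e. p_i = x_i / sum(x_i)."""
    x = np.asarray(aux_values, dtype=float)
    p = x / x.sum()                      # selection probabilities over the probability layer
    return rng.choice(x.size, size=n_plots, replace=False, p=p)

# Hypothetical stand of 150 cells of 20 m x 20 m; auxiliary value = squared
# mean echo height per cell; draw 10 field plot locations:
aux = rng.uniform(5.0, 25.0, size=150) ** 2
selected = pps_select_cells(aux, n_plots=10)
print(sorted(selected.tolist()))
```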
Shape of the diameter distribution

In stands 1 and 3 the shape of the diameter distribution estimated using the simulated fixed-radius, plot-based inventory approach was compared with the actual empirical distribution according to the developed rules. The unimodal form of the diameter distribution (stand 2) was not considered. In the comparison of measured and estimated diameter distributions, the goal was to examine whether the sampled distributions followed the underlying actual size distribution of aspen. The sampled diameter distributions were determined in 5- (bimodal stand) or 10-cm (descending stand) diameter classes within the range from 10−95 cm (see Figure 3 for the actual distributions). For descending diameter distributions, the following rule was applied:

Number of stems in the 10–20-cm dbh class > number of stems in the 20–30-cm dbh class > number of stems in the 30–40-cm dbh class.

Diameter distribution of stands 1–3 (A = stand 1, B = stand 2, C = stand 3)

If this rule was fulfilled by the estimate, it was classified as a realistic estimate of the underlying empirical distribution. Correspondingly, in the case of the bimodal distribution, the following rule was applied:

The first mode in the distribution is within the dbh classes from 10−20 cm, and the second mode in the distribution is after the 25–30-cm dbh class.

Spatial pattern of aspens

The spatial pattern of the aspens within the three study stands was determined by applying Ripley's K(t) function (Ripley 1981). It describes the expected number of trees at distance t from a randomly selected tree. If the value of the function is larger than what would be expected based on random spacing, the spatial pattern is clustered; if smaller, it is systematic. We applied the library spatstat (Baddeley and Turner 2005) in the statistical software R to calculate the K(t) values for each of the three stands. Isotropic correction was applied to minimize edge effects in the calculation (Ripley 1988). The spatial patterns derived for the entire stands were compared with estimates obtained from the simulated samples. Therefore, a Fisher index (I) was calculated for each stand-wise simulation using the following equation:
$$ I=\frac{s_n^2}{\overline{n}}, $$
where \( {s}_n^2 \) is the variance of the plot-wise numbers of aspens in the sample of 20 plots and \( \overline{n} \) is the mean of the plot-wise numbers of aspens. I values greater than 1 indicate clustered spatial patterns.

Reliability characteristics

The simulation results were validated in terms of relative RMSE
$$ RMSE\%=\frac{100}{y} \times \sqrt{\frac{{\displaystyle \sum _{j=1}^N{\left( y-\frac{{\displaystyle \sum _{i=1}^n{\widehat{y}}_{ij}}}{n}\right) }^2}}{N}} $$
and bias
$$ bias\%=\frac{100}{y} \times \frac{{\displaystyle \sum _{j=1}^N\left( y-\frac{{\displaystyle \sum _{i=1}^n{\widehat{y}}_{ij}}}{n}\right) }}{N} $$
where N is the number of simulations, y is the observed value for the stand, \({\widehat{y}}_{ij}\) is the predicted value for sample plot i in simulation j, and n is the number of sample plots in one sample. Finally, in the case of the ALS-guided inventory, the relative improvement in the volume estimate compared to selecting sample units of 20 m × 20 m with equal probabilities was calculated.

Reliability figures for attributes of simulated stand level inventories

In general, the results are more accurate in terms of RMSE% when the number of sample plots increases (Table 2). An exception is stand 3 with systematic placement of plots, where accuracy decreased in the case of 20 plots.
This is related to the shape of the stand and, thus, to the decreased possibility of locating the systematic sample plot network in the narrow, densely stocked southern part of the stand in the simulations. In general, the results are also slightly more accurate for systematic than for random plot locations, especially in the case of stands 1 and 2. The biases are in most cases below 2%, and there are only a few cases where the values are over 5%.

Table 2 Relative RMSE and bias (in brackets) values of the forest attributes in three study stands

In the case of the RMSE% of V, which is usually regarded as the most important stand attribute, the figures are rather high with smaller numbers of sample plots and still remain approximately at the level of 25%–40% even with 20 sample plots (Table 2). In stand 2 the RMSE% values were larger for V and also for N compared to stands 1 and 3. This outcome may be related to the smaller quantities of aspen in stand 2 (see Table 1). From the ecological point of view, N dbh>25cm is the most important forest attribute. Especially in stand 1, but also in stand 2, most of the aspens had dbh values of less than 25 cm (Table 1, Figure 3), and correspondingly the RMSE% figures are high. On the other hand, the RMSE% figures are lower in stand 3, where the diameter distribution (Figure 3C) shows that a considerable proportion of the aspens is in dbh classes larger than 25 cm. Finally, in general, the results are most accurate for D g M. In the case of PPS sampling, the chosen auxiliary information metric from ALS was the square of the mean height of laser echoes (h mean 2), based on the correlation estimate (0.76) between aspen V and this ALS metric in the 15 sample plots of Koli NP. Corresponding correlations between this ALS metric and grid cell level values in the study area were also calculated, and the effect of the PPS sampling on the reliability of the V estimates in general is presented in Table 3. As shown, the correlation was close to zero in stand 2, and the effect of PPS sampling was negative in this case. Regarding the two other stands, the correlations between the ALS metric and volume were greater than 0.3, and the improvements in the volume estimates were more than 10% and 3%, respectively. The minor improvement in stand 3 may be related to the existence of very large aspens.

Table 3 Correlation and the improvement in the RMSE of V (%) due to the use of ALS auxiliary information in PPS sampling

Shape of the diameter distribution estimate

The shape of the diameter distribution of aspen was unimodal and skewed to the right in stand 2, descending in stand 1, and bimodal in stand 3 (Figure 3A–C). The shape of the sample plot based diameter distribution estimates obtained from the simulations was examined in stands 1 and 3, including the ecologically interesting descending and bimodal distributions, respectively. The examination was implemented by classifying the diameter distribution estimate of each simulation according to the rules presented in the methods. In stand 1 the proportion of fixed-radius plot estimates that correctly classified the descending structure ranged from 50%−80% for 5−20 plots (Table 4). This was the case both for systematically and randomly located plots. For stand 3 with a bimodal structure, the proportions of correctly classified plots with different numbers of sample plots corresponded to those of stand 1, but the success rates were lower.
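The classification rules stated in the methods can be coded compactly, as in the sketch below. The local-maximum definition of a mode and the class indexing are our reading of the rules, since the mode detection is not spelled out in the text.

```python
def local_maxima(counts):
    """Indices of local maxima in a histogram of stem counts."""
    n = len(counts)
    return [i for i in range(n)
            if (i == 0 or counts[i] > counts[i - 1])
            and (i == n - 1 or counts[i] > counts[i + 1])]

def is_descending(c10):
    """Rule for stand 1: stems in the 10-20 cm class > 20-30 cm > 30-40 cm
    (c10 holds counts in 10-cm dbh classes starting at 10 cm)."""
    return c10[0] > c10[1] > c10[2]

def is_bimodal(c5):
    """Rule for stand 3: first mode within 10-20 cm, second mode after the
    25-30 cm class (c5 holds counts in 5-cm dbh classes starting at 10 cm,
    so class k covers [10 + 5k, 15 + 5k) cm)."""
    modes = local_maxima(c5)
    return len(modes) >= 2 and modes[0] <= 1 and modes[-1] > 3
```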
Table 4 Proportion (%) of correctly classified diameter distribution types in 2500 simulations

Spatial pattern of trees

The analysis based on the mapped aspen data with Ripley's K-function showed that in all three stands the aspens are clustered, because the expected number of other trees close to each tree is larger than a Poisson distribution would suggest (Figure 4A–C). Correspondingly, the analysis based on the sampling simulations and Fisher's index showed that in each case the average value indicated a clustered spatial pattern (Table 5). Also the proportion of simulations showing a clustered spatial pattern was always over 50%, even with just five sample plots.

Ripley's K function for stands 1–3 (A = stand 1, B = stand 2, C = stand 3). The dashed line describes the expected number of trees based on a Poisson distribution within radius r, and the solid line the estimate obtained using Ripley's K function and isotropic edge correction.

Table 5 Average values of the Fisher index and proportion of clustered spatial patterns of aspen trees in 2500 simulations

This study considered stand level aspen populations in a boreal nature reserve. The analysis was based on the diameter distribution, the spatial pattern of aspen trees, and the reliability figures of the forest attribute estimates of a stand level inventory. Our unique data included mapped aspen trees in three spruce dominated forest stands where local aspen populations have survived during the past 100 years (Vehmas et al. 2009b). These stands also represent favourable growing environments for aspen. The proportion of aspen in these stands is considerably higher than the average value of 1.5% in Finland (Tomppo et al. 2001), being 16.3%, 5.5% and 16.2% of basal area in stands 1, 2 and 3, respectively. These statistics are for the forest area in general, but in conservation areas the proportion of aspen is usually considerably larger. It should also be noted that our study stands were rather large compared to the average stand size, which is ca. 2 ha in southern Finland. However, with respect to the state-owned forests and conservation areas where aspen is common, the stand sizes used in this study were broadly similar. The acceptable level of forest attribute results is, of course, dependent on the need for information, but according to inventory by compartments, the RMSE values of V for deciduous tree species in mixed stands should be between 20%−50% in Finnish conditions (Uuttera et al. 2002). However, in previous studies the RMSE% figures have been higher (65%) for deciduous tree species (birches) (Haara and Korhonen 2004). Our results showed, in general, that with lower numbers of sample plots the RMSE figures are over 50%, but the requirement set by Uuttera et al. (2002) can be met by measuring 10 or more plots. Between the three stands, the differences in accuracy were caused by the amount of aspen growing stock, the sizes of the stands (i.e., sampling intensity) and small differences in the spatial patterns of trees. These aspects are, to some extent, inversely related. For example, in stand 2, the sampling intensity was highest, but the RMSE figures were still the highest. This is most likely due to the low amount of aspen in the growing stock. We also guided random sample plot placement by applying auxiliary ALS information with PPS sampling. This kind of approach has previously been applied in studies by Pesonen et al. (2010a, b) in the estimation of quantities of coarse woody debris (CWD).
The chosen ALS metric, the square of the mean height of laser echoes, emphasized the grid cells with the tallest trees. In our case, this technique decreased sampling efficiency in stand 2. This was due to the negative correlation between aspen volume and the square of the mean height of laser echoes, i.e., the tallest trees in stand 2 are not aspens. This can be considered a drawback of the approach: if the pre-information concerning the chosen variable does not hold true, the benefit is completely lost. In our case, the correlation between aspen volume and the ALS metric-derived mean height was very strong in the 15 large sized fixed-area aspen sample plots, which were earlier used in single tree-based aspen detection from ALS data in the same Koli area (Säynäjoki et al. 2008), but obviously this kind of information cannot be generalised without the risks associated with extrapolation. With respect to aspen populations in conservation areas, detailed information on the diameter distribution (e.g., the number of large aspens and the shape of the distribution) is also of primary interest because many aspen-associated species are highly specialised to specific tree properties (Kouki et al. 2004; Sahlin and Ranius 2009). While descending diameter distribution shapes are interpreted as indicators of uneven-aged stand structure and, thus, may reflect the continuity of aspen populations, bimodal distributions reveal the existence of more than one aspen layer, which is usually also strongly related to the stand age structure. Information on both of these distribution types can be utilised in the management of conservation areas, and without this information, the management lacks primary attributes characterising the stands. Regarding our results on mimicking distribution types with 10 fixed-radius plots, the proportions of correctly described diameter distributions were about 65% and 50% for the descending and bimodal diameter distributions, respectively. These proportions can be further increased by more than 10 percentage points by increasing the number of sample plots. The same trend also holds for N(dbh>25 cm): 10 or more measured sample plots may be required to reach the 25%–50% level of RMSE%. During the last fifteen years, numerous field-based sampling methods for assessing different sparse populations have been presented (e.g. Holopainen et al. 2006; Ringvall et al. 2007; Gove et al. 2013). Although aspen populations are sparse in general, this is not the case in our study data. The abundance of aspen in our stands is comparable to the abundance of birch in Finland, which constitutes 17% of the growing stock and is the third most frequent tree species of the country (Metsätilastollinen vuosikirja 2013). In such conditions, the use of sparse population inventory methods may lead to costly and time-consuming fieldwork. Since there are numerous sampling methods for sparse populations, the suitability of some of these, such as the parallel strips suggested by Marquardt et al. (2012), could be investigated for our data; that is, in any case, a topic for future studies. According to the analysis of spatial patterns, the aspens were strongly clustered in all three stands. This is in line with previous findings (e.g. Syrjänen et al. 1994). In general, clustered spatial patterns of trees make inventory more challenging (Pippuri et al. 2012), which is consistent with the RMSE% levels of our study.
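The Fisher index used in the sampling simulations above reduces to the variance-to-mean ratio of the per-plot stem counts: under complete spatial randomness (a Poisson process) its expectation is 1, and values above 1 indicate clustering. A minimal sketch with made-up plot counts:

```python
import numpy as np

def fisher_index(plot_counts):
    """Fisher's index of dispersion: sample variance of the number of
    stems per plot divided by the mean."""
    plot_counts = np.asarray(plot_counts, dtype=float)
    return plot_counts.var(ddof=1) / plot_counts.mean()

counts = np.array([0, 7, 1, 0, 12])  # aspen stems on five sample plots
idx = fisher_index(counts)
print(idx, "clustered" if idx > 1 else "random/regular")
```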
On the other hand, it is worth noting that the clustered spatial pattern of the aspens was also successfully identified from the sample plots without information on tree locations. This is an important outcome for planning aspen inventories. In our study, remote sensing was only applied as auxiliary information in PPS sampling. However, in earlier studies, individual aspens have been detected from ALS data, or they have been part of tree stock descriptions in area-based approaches. The problems related to single-tree detection include the typically very low general detection rate and the overlap of aspen intensity values with those of other tree species, such as birch and pine. Also, the large crowns of mature aspen trees may cause difficulties for interpretation when the crown sizes of other trees are considerably smaller. On the other hand, when successful, single-tree detection would reveal unforeseen information on aspen crowns. Here, we did not apply single-tree detection, since the technique was already tested with data from Koli NP by Säynäjoki et al. (2008). In the case of ABA, earlier studies have reported poor accuracy for the estimates obtained for aspen. In our case, this approach was inapplicable, since the number of measured training plots available in Koli NP was not adequate and the forest vertical structure and tree species composition outside Koli NP are considerably different.
Conclusions
This study characterized European aspen populations for the purposes of the monitoring and management of boreal conservation areas. Our results suggest that if the number of sample plots is adequate, i.e. 10 or more using a plot size of 400 m², a stand level inventory will provide sufficiently accurate forest attribute estimates in conservation areas (minimum accuracy requirement of RMSE% is 20%–50%). Even for the more ecologically valuable attributes, such as the diameter distribution, the spatial pattern, and large aspens, the estimates are acceptable for conservation purposes. Between the three stands, the differences in accuracy were caused by the amount of aspen growing stock, the sizes of the stands, and small differences in the spatial patterns of trees. ALS-based auxiliary information might also be useful in guiding the inventory. However, there is still the major risk that relying on ALS may decrease accuracy. Completely remote sensing-based inventory applications for such detailed attributes obtainable for aspens must still await further development of sensors and algorithms, such as multispectral ALS or the combination of ALS and hyperspectral data.
References
Axelsson P (2000) DEM generation from laser scanner data using TIN models. In: The International Archives of the Photogrammetry and Remote Sensing, vol 33, Part B4/1, Amsterdam, pp 110–117
Baddeley A, Turner R (2005) Spatstat: an R package for analyzing spatial point patterns. J Stat Soft 12:1–42
Beers TW (1966) The direct correction for boundary-line slopover in horizontal point sampling. Research Progress Report 224, Purdue University, Agricultural Experiment Station, Lafayette, Indiana, p 8
Breidenbach J, Næsset E, Lien V, Gobakken T, Solberg S (2010) Prediction of species-specific forest inventory attributes using a nonparametric semi-individual tree crown approach based on fused airborne laser scanning and multispectral data. Remote Sens Environ 114:911–924
Esseen PA, Ehnström B, Ericson L, Sjöberg K (1992) Boreal forests—the focal habitats of Scandinavia. In: Hansson L (ed) Ecological Principles of Nature Conservation.
Elsevier Applied Science, London
Gove JH, Ducey MJ, Valentine HT, Williams MS (2013) A comprehensive comparison of perpendicular distance sampling methods for sampling downed coarse woody debris. Forestry 86:129–143
Grönlund A, Hakalisto S (1998) Management of traditional rural landscapes in Koli National Park. Separate plan of Koli National Park. North Karelia Regional Environment Centre, Joensuu, Regional environmental publications 104, pp 81
Haara A, Korhonen KT (2004) Kuvioittaisen arvioinnin luotettavuus. Metsätieteen aikakauskirja 4(2004):489–508
Heliövaara K, Väisänen R (1984) Effects of modern forestry on northwestern European forest invertebrates: a synthesis. Acta For Fenn 189:1–32
Holopainen M, Leino O, Kämäri H, Talvitie M (2006) Drought damage in the park forests of the city of Helsinki. Urban For Urban Gree 4:75–83
Kalliola R (1973) Suomen kasvimaantiede. Werner Söderström Osakeyhtiö, Porvoo
Kärkkäinen S (1994) Herb-rich forest vegetation of the Koli area. Publication of the Water and Environment Administration: series A 172, pp 51
Kinnunen J, Maltamo M, Päivinen R (2007) Standing-volume estimates of forests in Russia: how accurate is the published data? Forestry 80:53–64
Koivuniemi J, Korhonen KT (2006) Inventory by compartments. In: Kangas A, Maltamo M (eds) Forest Inventory. Methodology and Applications. Managing Forest Ecosystems, vol 10, Springer, Dordrecht, pp 271–278
Korpela I, Ørka H-O, Maltamo M, Tokola T, Hyyppä J (2010) Tree species classification in airborne LiDAR data: influence of stand and tree factors, intensity normalisation, and sensor type. Silva Fenn 44:319–339
Kouki J, Arnold K, Martikainen P (2004) Long-term persistence of aspen, a key host for many threatened species, is endangered in old-growth conservation areas in Finland. J Nat Conserv 12:41–52
Kurkela T (1973) Epiphytology of Melampsora rusts of Scots pine (Pinus sylvestris L.) and aspen (Populus tremula L.). The Finnish Forest Research Institute Research Report 79, pp 68
Laasasenaho J (1982) Taper curve and volume functions for pine, spruce, and birch. Commun Inst For Fenn 108:74
Lilja S, Wallenius T, Kuuluvainen T (2006) Structural characteristics and dynamics of old Picea abies forests in northern boreal Fennoscandia. EcoScience 13:181–192
Lyytikäinen A (1991) Kolin luonto, maisema ja kulttuurihistoria. Kolin luonnonsuojelututkimukset. Vesi- ja ympäristöhallituksen monistesarja 308
Maltamo M, Packalen P (2014) Species-specific management inventory in Finland. In: Maltamo M, Næsset E, Vauhkonen J (eds) Forestry Applications of Airborne Laser Scanning: Concepts and Case Studies. Managing Forest Ecosystems, vol 27. Springer, Dordrecht, pp 241–252
Marquardt T, Temesgen H, Eskelson BNI, Anderson P (2012) Evaluation of sampling methods to quantify abundance of hardwoods and snags within conifer dominated riparian zones. Ann For Sci 69:821–828
Metsätilastollinen vuosikirja (2013) http://www.metla.fi/julkaisut/metsatilastollinenvsk. Accessed 20 Dec 2013
Næsset E (2004) Practical large-scale forest stand inventory using a small-footprint airborne scanning laser. Scand J For Res 19:164–179
Næsset E (2007) Airborne laser scanning as a method in operational forest inventory: status of accuracy assessments accomplished in Scandinavia. Scand J For Res 22:433–442
Ørka HO, Næsset E, Bollandsås OM (2007) Utilising airborne laser intensity for tree species classification.
In: International Archives of the Photogrammetry, Remote Sensing, and Spatial Information Sciences, vol 36, Part 3/W52, pp 300–304
Packalén P, Maltamo M (2007) The k-MSN method in the prediction of species-specific stand attributes using airborne laser scanning and aerial photographs. Remote Sens Environ 109:328–341
Pesonen A, Kangas A, Maltamo M, Packalén P (2010a) Different sources of auxiliary information in coarse woody debris inventory. Forest Ecol Manag 259:1890–1899
Pesonen A, Maltamo M, Kangas A (2010b) The comparison of airborne laser scanning-based probability layers as auxiliary information for assessing coarse woody debris. Int J Remote Sens 31:1245–1259
Pippuri I, Kotamaa E, Maltamo M, Peltola H, Packalén P (2012) Exploring horizontal area-based metrics to discriminate the spatial pattern of trees and need for first thinning using airborne laser scanning. Forestry 85:305–314
Pippuri I, Maltamo M, Packalen P, Mäkitalo J (2013) Predicting species-specific basal areas in urban forests using airborne laser scanning data and existing stand register data. Eur J For Res 132:999–1012
Poso S (1983) Basic features of forest inventory by compartments. Silva Fenn 17:313–349
Ringvall A, Snäll T, Ekström M, Ståhl G (2007) Unrestricted guided transect sampling for surveying sparse species. Can J For Res 37:2575–2586
Ripley BD (1981) Spatial statistics. John Wiley & Sons, New York
Ripley BD (1988) Statistical inference for spatial processes. Cambridge University Press, Cambridge
Sahlin E, Ranius T (2009) Habitat availability in forests and clearcuts for saproxylic beetles associated with aspen. Biodivers Conserv 18:621–638
Säynäjoki R, Packalén P, Maltamo M, Vehmas M, Eerikäinen K (2008) Detection of aspens using high-resolution aerial laser scanning data and digital aerial images. Sensors 8:5038–5055
Schreuder HT, Gregoire TG, Wood GB (1993) Sampling Methods for Multiresource Forest Inventory. John Wiley & Sons, New York
Stokland JN, Holien H, Gaarder G (2002) Arealtall for boreal regnskog i Norge 2002. NIJOS-rapport 2:1–20
Syrjänen K, Kalliola R, Puolasmaa A, Mattson J (1994) Landscape structure and forest dynamics in sub-continental Russian European taiga. Ann Zool Fenn 31:19–34
Tikka PS (1954) Structure and quality of aspen stands. I. Structure. Commun Inst For Fenn 44:1–33
Tomppo E, Henttonen H, Tuomainen T (2001) Valtakunnan metsien 8. inventoinnin menetelmä ja tulokset Metsäkeskuksittain Pohjois-Suomessa 1992–94 sekä tulokset Etelä-Suomessa 1986–92 ja koko maassa 1986–94. Metsätieteen aikakauskirja B/2001:99–248
Uuttera J, Hiltunen J, Rissanen P, Anttila P, Hyvönen P (2002) Uudet kuvioittaisen arvioinnin menetelmät – Arvio soveltuvuudesta yksityismaiden metsäsuunnittelu. Metsätieteen aikakauskirja 3(2002):523–531
Vehmas M, Eerikäinen K, Peuhkurinen J, Packalén P, Maltamo M (2009a) Airborne laser scanning-based identification of herb-rich mature forests in the Koli National Park, eastern Finland. Forest Ecol Manag 257:46–53
Vehmas M, Kouki J, Eerikäinen K (2009b) Long-term spatiotemporal dynamics and historical continuity of European aspen (Populus tremula L.) stands in Koli National Park, eastern Finland. Forestry 82:135–148
This work was supported by the strategic funding of the University of Eastern Finland. We thank Ms Anne Nylander for her help with the compilation of the figures.
University of Eastern Finland, School of Forest Science, P.O.
Box 111, FI-80101, Joensuu, Finland: Matti Maltamo, Lauri Korhonen & Jari Kouki
Blom Kartta Oy, Kauppakatu 15, 80100, Joensuu, Finland: Annukka Pesonen
City of Joensuu, 80100, Joensuu, Finland: Mikko Vehmas
Natural Resources Institute Finland, Joensuu Unit, P.O. Box 68, FI-80101, Joensuu, Finland: Kalle Eerikäinen
Correspondence to Matti Maltamo.
MM participated in all phases of the study. AP calculated the results concerning the sampling simulations and participated in writing the corresponding parts of the study. LK calculated the results concerning the spatial pattern of trees and participated in writing the corresponding parts of the study. JK was responsible for the ecological part of the Background and Discussion. MV conducted the fieldwork and the ALS metrics analysis. KE participated in the planning and writing of the study. All authors have read and commented on the manuscript. All authors read and approved the final manuscript.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Maltamo, M., Pesonen, A., Korhonen, L. et al. Inventory of aspen trees in spruce dominated stands in conservation area. For. Ecosyst. 2, 12 (2015). https://doi.org/10.1186/s40663-015-0037-4
Keywords: Diameter distribution; Historical continuity; Populus tremula L.; Spatial arrangement; Stand characteristics
BayesCCE: a Bayesian framework for estimating cell-type composition from DNA methylation without the need for methylation reference
Elior Rahmani (ORCID: orcid.org/0000-0002-9017-2070)1, Regev Schweiger2, Liat Shenhav1, Theodora Wingert3, Ira Hofer3, Eilon Gabel3, Eleazar Eskin1,4 & Eran Halperin1,3,4
We introduce a Bayesian semi-supervised method for estimating cell counts from DNA methylation by leveraging easily obtainable prior knowledge on the cell-type composition distribution of the studied tissue. We show mathematically and empirically that alternative methods which attempt to infer cell counts without a methylation reference only capture linear combinations of cell counts rather than providing one component per cell type. Our approach allows the construction of components such that each component corresponds to a single cell type, and it provides a new opportunity to investigate cell compositions in genomic studies of tissues for which this was not possible before.
Background
DNA methylation status has become a prominent epigenetic marker in genomic studies, and genome-wide DNA methylation data have become ubiquitous in the last few years. Numerous recent studies provide evidence for the role of DNA methylation in cellular processes and in disease (e.g., in multiple sclerosis [1], schizophrenia [2], and type 2 diabetes [3]). Thus, DNA methylation status holds great potential for better understanding the role of epigenetics, potentially leading to better clinical tools for diagnosing and treating patients. In a typical DNA methylation study, we obtain a large matrix in which each entry corresponds to a methylation level (a number between 0 and 1) at a specific genomic position for a specific individual. This level is the fraction of the probed DNA molecules that were found to have an additional methyl group at the specific position for the specific individual. Essentially, these methylation levels represent, for each individual and for each site, the probability that a given DNA molecule is methylated. While simple in principle, methylation data are typically complicated owing to various biological and non-biological sources of variation. Particularly, methylation patterns are known to differ between different tissues and between different cell types. As a result, when methylation levels are collected from a complex tissue (e.g., blood), the observed levels for an individual reflect a mixture of methylation signals coming from different cell types, weighted according to mixing proportions that depend on the individual's cell-type composition. Thus, it is challenging to interpret methylation signals coming from heterogeneous sources. One notable challenge in working with heterogeneous methylation levels has been highlighted in the context of epigenome-wide association studies (EWAS), where data are typically collected from heterogeneous samples. In such studies, we typically search for rows of the methylation matrix (each corresponding to one genomic position) that are significantly correlated with a phenotype of interest across the samples in the data. In this case, unless accounted for, correlation of the phenotype of interest with the cell-type composition of the samples may lead to numerous spurious associations and potentially mask true signal [4]. In addition to its importance for a correct statistical analysis, knowledge of the cell-type composition may provide novel biological insights by studying cell compositions across populations.
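To illustrate the confounding issue described above, a per-site association test is commonly adjusted by including the cell proportions as covariates in a linear model. The Python sketch below uses entirely synthetic data and hypothetical variable names; it is not taken from any particular EWAS pipeline.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n, k = 200, 6
r = rng.dirichlet(np.ones(k) * 5, size=n)   # cell proportions (n-by-k)
y = rng.normal(size=n)                      # phenotype of interest
# Methylation at one CpG, driven here by cell composition plus noise.
o = 0.5 + r @ rng.uniform(-0.2, 0.2, size=k) + rng.normal(0, 0.02, size=n)

# Drop one proportion column: the k proportions sum to 1 and would
# otherwise be collinear with the intercept.
X = sm.add_constant(np.column_stack([y, r[:, :-1]]))
fit = sm.OLS(o, X).fit()
print(fit.pvalues[1])   # adjusted p-value for the phenotype term
```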
In principle, one can use high-resolution cell counting for obtaining knowledge about the cell composition of the samples in a study. However, unfortunately, such cell counting for a large cohort may be costly and often logistically impractical (e.g., in some tissues, such as blood, reliable cell counting can be obtained from fresh samples only). Due to the pressing need to overcome this limitation, the development of computational methods for estimating cell-type composition from methylation data has become a key interest in epigenetic studies. Several such methods have been suggested in the past few years [5–10], some of which aim at explicitly estimating cell-type composition, while others aim at the more specific goal of correcting methylation data for the potential cell-type composition confounder in association studies. These methods take either a supervised approach, in which reference data of methylation patterns from sorted cells (methylomes) are obtained and used for predicting cell compositions [5], or an unsupervised approach (reference-free) [6–10]. The main advantage of the reference-based method is that it provides direct (absolute) estimates of the cell counts, whereas, as we demonstrate here, current reference-free methods are only capable of inferring components that capture linear combinations of the cell counts. Yet, the reference-based method can only be applied when relevant reference data exist. Currently, reference data only exist for the blood [11], breast [12], and brain [13], for a small number of individuals (e.g., six samples in the blood reference [11]). Moreover, the individuals in most available data sets do not match the reference individuals in their methylation-altering factors, such as age [14], gender [15, 16], and genetics [17]. This problem was recently highlighted in a study in which the authors showed that an available blood reference collected from adults failed to estimate cell proportions of newborns [18]. Furthermore, in a recent work, we showed evidence from multiple data sets that a reference-free approach can provide substantially better correction for cell composition when compared with the reference-based method [19]. It is therefore often the case that unsupervised methods are either the only option or a better option for the analysis of EWAS. In contrast to the reference-based approach, the reference-free methods, although they can in principle be applied to any tissue, do not provide direct estimates of the cell-type proportions. Previously proposed reference-free methods allow us to infer a set of components, or general axes, which were shown to capture linear combinations of the cell-type proportions [8, 9]. Another more recent reference-free method was designed to infer cell-type proportions; however, as we show here, it only provides components that form linear combinations of the cell-type proportions rather than direct estimates [10]. While such linearly correlated components are useful in linear analyses such as linear regression, unlike cell proportions, they cannot be used in any nonlinear downstream analysis or for studying individual cell types (e.g., studying alterations in cell composition across conditions or populations). Cell proportions may provide novel biological insights and contribute to our understanding of disease biology, and we therefore need targeted methods that are practical and low in cost for estimating cell counts.
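For intuition on the reference-based approach discussed above, each sample's cell proportions can be estimated by regressing its methylation profile on reference methylomes under simplex constraints. The following sketch of this constrained least squares idea uses synthetic inputs; it is not the actual implementation of [5].

```python
import numpy as np
from scipy.optimize import minimize

def estimate_proportions(o, M):
    """Fit one sample's profile o (m sites) to reference mean
    methylomes M (m-by-k) by least squares, constraining the
    proportions to be non-negative and to sum to one."""
    k = M.shape[1]
    objective = lambda r: np.sum((o - M @ r) ** 2)
    constraints = {"type": "eq", "fun": lambda r: r.sum() - 1.0}
    res = minimize(objective, np.full(k, 1.0 / k),
                   bounds=[(0.0, 1.0)] * k,
                   constraints=constraints, method="SLSQP")
    return res.x

rng = np.random.default_rng(3)
M = rng.uniform(0, 1, size=(100, 6))       # synthetic reference
r_true = rng.dirichlet(np.ones(6) * 5)
o = M @ r_true + rng.normal(0, 0.01, 100)  # observed mixed profile
print(np.round(estimate_proportions(o, M), 3), np.round(r_true, 3))
```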
In an attempt to address the limitations of previous reference-free methods and to provide cell count estimates rather than linear combinations of the cell counts, we propose an alternative Bayesian strategy that utilizes prior knowledge about the cell-type composition of the studied tissue. We present a semi-supervised method, BayesCCE (Bayesian Cell Count Estimation), which encodes experimentally obtained cell count information as a prior on the distribution of the cell-type composition in the data. As we demonstrate here, the required prior is substantially easier to obtain compared with standard reference data from sorted cells. We can estimate this prior from general cell counts collected in previous studies, without the need for corresponding methylation data or any other genomic data. We evaluate our method using four large methylation data sets and simulated data and show that our method produces a set of components that can be used as cell count estimates. We observe that each component of BayesCCE can be regarded as corresponding to scaled values of a single cell type (i.e., high absolute correlation with one cell type, but not necessarily good estimates in absolute terms). We find that BayesCCE provides a substantial improvement in correlation with the cell counts over existing reference-free methods (in some cases a 50% improvement). We also consider the case where both methylation and cell count information are available for a small subset of the individuals in the sample, or for a group of individuals from external data. Notably, existing reference-based and reference-free methods for cell-type estimation completely ignore this potential information. In contrast, our method is flexible and allows the incorporation of such information. Specifically, we show that our proposed Bayesian model can leverage such additional information for imputing missing cell counts in absolute terms. Testing this scenario on both real and simulated data, we find that measuring cell counts for a small group of samples (a few dozen) can lead to a further significant increase in the correlation of BayesCCE's components with the cell counts.
Results
Benchmarking existing reference-free methods for capturing cell-type composition
We first demonstrate that existing reference-free methods can infer components that are correlated with the tissue composition of DNA methylation data collected from heterogeneous sources. For this experiment, as well as for the rest of the experiments in this paper, we used four large publicly available whole-blood methylation data sets: a data set by Hannum et al. [20] (n=650), a data set by Liu et al. [21] (n=658), and two data sets by Hannon et al. [22] (n=638 and n=665; denoted Hannon et al. I and Hannon et al. II, respectively). In addition, we simulated data based on a reference data set of methylation levels from sorted leukocyte cells [11] (see the "Methods" section; an illustrative sketch of this kind of simulation is given at the end of this paragraph). While cell counts were known for each sample in the simulated data, cell counts were not available for the real data sets. We therefore estimated the cell-type proportions of six major blood cell types (granulocytes, monocytes, and four subtypes of lymphocytes: CD4+, CD8+, B cells, and natural killer cells) based on a reference-based method [5], which was shown to reasonably estimate leukocyte cell proportions from whole-blood methylation data collected from adult individuals [18, 23, 24].
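A simulation of the kind described above can be sketched as mixing sorted-cell methylomes with Dirichlet-distributed proportions and adding noise. The full procedure is specified in the "Methods" section; the following sketch uses synthetic stand-ins for the reference and is only an illustration.

```python
import numpy as np

def simulate_mixtures(M, alpha, n, sigma, rng):
    """Simulate an m-by-n matrix of mixed methylation levels,
    O = M R^T + noise, with the rows of R drawn from Dirichlet(alpha)."""
    R = rng.dirichlet(alpha, size=n)                 # n-by-k proportions
    O = M @ R.T + rng.normal(0, sigma, size=(M.shape[0], n))
    return np.clip(O, 0, 1), R                       # beta values in [0,1]

rng = np.random.default_rng(4)
M = rng.uniform(0, 1, size=(500, 6))   # stand-in for sorted-cell means
O, R = simulate_mixtures(M, alpha=np.ones(6) * 3, n=300,
                         sigma=0.02, rng=rng)
print(O.shape, R.shape)
```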
Due to the absence of large publicly available data sets with measured cell counts, these reference-based estimates were considered the ground truth for evaluating the performance of the different methods. For benchmarking the performance of existing methods, we considered three reference-free methods, all of which were shown to generate components that capture cell-type composition information from methylation: ReFACTor [8], non-negative matrix factorization (NNMF) [9], and MeDeCom [10]. Although the reference-free methods can potentially allow the detection of more cell types than the set of predefined cell types in the reference-based approach, we evaluated six components of each of the reference-free methods, six being the number of estimated cell types composing the ground truth. We found all methods to capture a large portion of the cell composition information in all data sets; particularly, we observed that ReFACTor performed considerably better than NNMF and MeDeCom on all occasions (Additional file 1: Figure S1). In spite of the fact that all three methods can capture a large portion of the cell composition variation, each component provided by these methods is a linear combination of the cell types in the data rather than an estimate of the proportions of a single cell type. As a result, as we show next, in general, these methods perform poorly when their components are considered as estimates of cell type proportions. ReFACTor was not designed for estimating cell proportions but rather for providing orthogonal principal components of the data that together capture variation in cell compositions. In contrast, NNMF and MeDeCom, which extends the underlying model in NNMF, were designed to provide estimates of cell type proportions. In addition to empirical support from the data, as we report next, we also provide a mathematical proof for the non-identifiable nature of the NNMF model, which drives solutions towards undesired linear combinations of cell-type proportions rather than direct estimates of cell-type proportions (see the "Methods" section; a numeric illustration is sketched at the end of this subsection).
BayesCCE: a Bayesian semi-supervised approach for capturing cell-type composition
Every method that has been developed so far for capturing cell composition signal from methylation can be classified as either reference-based, wherein a reference of methylation patterns of sorted cells is used, or reference-free, wherein cell composition information is inferred in an unsupervised manner. Our proposed method, BayesCCE, combines elements from the underlying models of previous reference-free methods with further assumptions. BayesCCE does not use standard reference data of methylation levels from sorted cells, but rather it leverages relatively weak prior information about the distribution of cell-type composition in the studied tissue. This allows BayesCCE to direct the solution towards the inference of one component for each cell type that is encoded in the prior information. BayesCCE is fully described in the "Methods" section. In order to evaluate BayesCCE, we obtained prior information about the distribution of leukocyte cell-type proportions in blood using high-resolution blood cell counts that were previously measured in 595 adult individuals (see the "Methods" section). In concordance with the estimated cell-type proportions used as the ground truth, we first considered the assumption of six constituting cell types in blood tissue (k=6). We applied BayesCCE on each of the four data sets and evaluated the resulting components.
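As noted above, the non-identifiability of the NNMF model can be illustrated numerically: any non-negative matrix T with unit row sums that keeps M(T^T)^{-1} within [0,1] yields an alternative solution with an identical fit, in which the recovered proportions are merely linear combinations of the true ones. A sketch on synthetic M and R:

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, k, eps = 200, 50, 3, 0.1
M = rng.uniform(0.2, 0.8, size=(m, k))       # cell-type methylomes
R = rng.dirichlet(np.ones(k) * 4, size=n)    # true proportions
O = M @ R.T                                  # noiseless mixtures

# A non-negative matrix with unit row sums, close to the identity:
T = (1 - eps) * np.eye(k) + (eps / k) * np.ones((k, k))
R2 = R @ T                         # still non-negative, rows sum to 1
M2 = M @ np.linalg.inv(T)          # T is symmetric here, so T^T = T
print(np.allclose(M2 @ R2.T, O))                 # True: identical fit
print(R2.min() >= 0, np.allclose(R2.sum(1), 1))  # constraints hold
print(M2.min() >= 0, M2.max() <= 1)              # still valid methylomes
```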
We observed that each time BayesCCE produced a set of six components such that each component was correlated with one of the cell types, as desired (Fig. 1 and Additional file 1: Tables S1 and S2). Specifically, we found the mean absolute correlation values across all six cell types to be 0.58, 0.63, 0.45, and 0.45 in the Hannum et al., Liu et al., Hannon et al. I, and Hannon et al. II data sets, respectively. We note, however, that the assignment of components into corresponding cell types could not be automatically determined by BayesCCE. In addition, in general, the BayesCCE components were not in the right scale of their corresponding cell types (i.e., each component represented the proportions of one cell type up to a multiplicative constant and the addition of a constant). These limitations are expected due to the nature of the prior information used by BayesCCE. For more details about the assignment of components into cell types and the evaluation measurements, see the "Methods" section, as well as the illustrative sketch at the end of this subsection.
BayesCCE captures cell-type proportions in four data sets under the assumption of six constituting cell types in the blood (k=6): granulocytes, monocytes, and four subtypes of lymphocytes (CD4+, CD8+, B cells, and NK cells). The BayesCCE estimated components were linearly transformed to match their corresponding cell types in scale (see the "Methods" section). For convenience of visualization, we only plot the results of 100 randomly selected samples for each data set.
We next considered a simplifying assumption of only three constituting cell types in blood tissue (k=3): granulocytes, lymphocytes, and monocytes. We applied BayesCCE on each of the four data sets and observed high correlations between the estimated components of granulocytes and the granulocyte levels (r≥0.91 in all data sets) and between the estimated components of lymphocytes and the lymphocyte levels (r≥0.87 in all data sets), yet much lower correlations for monocytes (r≤0.27 in all data sets; Additional file 1: Figure S2 and Tables S1 and S2). We note that poor performance in capturing some cell type may be partially driven by inaccuracies introduced by the reference-based estimates, which are used as the ground truth in our experiments. Notably, three recent studies, which consisted of samples for which both methylation levels and cell count measurements were available, demonstrated that while the reference-based estimates of the overall lymphocyte and granulocyte levels were found to be highly correlated with the true levels, the accuracy of estimated monocytes was found to be substantially lower [8, 18, 25]. This may explain the low correlations we report for monocytes in our experiments. Low correlations with some of the cell types may be driven by various reasons, such as utilizing an inappropriate reference or failing to perform a good feature selection. We later provide a more detailed discussion about these issues. For assessing the performance of BayesCCE in light of previous reference-free methods, we sub-sampled the data and generated ten data sets of 300 randomly selected samples from each one of the four data sets. In addition, we simulated ten data sets of similar size (n=300; see the "Methods" section). Figure 2 demonstrates a significant and substantial improvement in performance for BayesCCE over existing methods under the assumption of six constituting cell types (k=6). Repeating the same set of experiments while assuming three constituting cell types (k=3) revealed similar results (Additional file 1: Figure S3).
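As an aside, assigning each estimated component to a cell type in evaluations of this kind can be posed as a maximum-weight bipartite matching on the absolute correlations. The sketch below illustrates this on synthetic data; the exact assignment and evaluation procedures are described in the "Methods" section.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_components(components, cell_props):
    """Assign components to cell types by maximizing the total
    absolute Pearson correlation; both inputs are n-by-k."""
    k = components.shape[1]
    corr = np.corrcoef(components.T, cell_props.T)[:k, k:]  # k-by-k
    rows, cols = linear_sum_assignment(-np.abs(corr))
    return cols, np.abs(corr[rows, cols])

rng = np.random.default_rng(6)
props = rng.dirichlet(np.ones(4) * 5, size=100)
# Components: scaled, shifted, permuted proportions plus noise.
comps = props[:, [2, 0, 3, 1]] * 1.7 + 0.1 + rng.normal(0, 0.01, (100, 4))
assignment, abs_corrs = match_components(comps, props)
print(assignment, np.round(abs_corrs.mean(), 2))  # mean absolute corr.
```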
The performance of existing reference-free methods and BayesCCE under the assumption of six constituting cell types in blood (k=6): granulocytes, monocytes, and four subtypes of lymphocytes (CD4+, CD8+, B cells, and NK cells). For each method, box plots show for each data set the performance across ten sub-sampled data sets (n=300), with the median indicated by a horizontal line. For each of the methods, ReFACTor, NNMF, MeDeCom, and BayesCCE, we considered a single component per cell type (see the "Methods" section). Additionally, we considered the scenario of cell count imputation wherein cell counts were known for 5% of the samples (n=15; BayesCCE imp) and the scenario wherein samples from external data with both methylation levels and cell counts were used in the analysis (n=15; BayesCCE imp ext). Top panel: mean absolute correlation (MAC) across all cell types. Bottom panel: mean absolute error (MAE) across all cell types. For BayesCCE imp and BayesCCE imp ext, the MAC and MAE values were calculated while excluding the samples with assumed known cell counts.
BayesCCE impute: cell count imputation
We next considered a scenario in which cell counts are known for a small subset of the samples in the data. This problem can be viewed as a problem of imputing missing cell count values (see the "Methods" section). We repeated all previous experiments, only this time we assumed that cell counts were known for a randomly selected 5% of the samples in each data set. As opposed to the previous experiments, in which each one of the BayesCCE components constituted a scaled estimate of the proportions of one of the cell types, incorporating samples with known cell counts allowed BayesCCE to produce components that form absolute estimates of the cell type proportions (i.e., not scaled components, but components with low absolute error compared with the true proportions). Moreover, in contrast to previous experiments, each component was now automatically assigned to its corresponding cell type. Under the assumption of six constituting cell types in blood tissue (k=6), we observed a substantial improvement of up to 58% in mean absolute correlation values compared with our previous experiments (Fig. 3 and Additional file 1: Tables S1 and S2). Specifically, we found the mean absolute correlation values across all six cell types to be 0.71, 0.66, 0.56, and 0.71 in the Hannum et al., Liu et al., Hannon et al. I, and Hannon et al. II data sets, respectively. In addition, in contrast to our previous experiments, the inclusion of some cell counts resulted in a low mean absolute error, which reflects a correct scale for the components. We observed similar results when assuming three constituting cell types (k=3), providing an improvement of up to 28% in correlation and a substantial decrease in absolute errors compared with the previous experiments (Additional file 1: Figure S4 and Tables S1 and S2).
BayesCCE captures cell-type proportions in four data sets under the assumption of six constituting cell types in blood (k=6): granulocytes, monocytes, and four subtypes of lymphocytes (CD4+, CD8+, B cells, and NK cells), and assuming known cell counts for a randomly selected 5% of the samples in the data. All correlations were calculated while excluding the samples with assumed known cell counts.
For convenience of visualization, we only plot the results of 100 randomly selected samples for each data set.
In the absence of cell counts for a subset of the individuals in the data, we can incorporate into the analysis external data of samples for which both cell counts and methylation levels (from the same tissue) are available. We repeated all previous experiments again (k=3 and k=6), only this time, for each data set, we added a randomly selected subset of samples from one of the other data sets (5% of the original sample size) and used both their methylation levels and cell-type proportions in the analysis. Specifically, we used randomly selected samples and corresponding estimates of cell-type proportions from the Hannon et al. I data set for the experiments in all three other data sets, and samples from the Hannon et al. II data set for the experiment with the Hannon et al. I data set. In order to pool samples from two data sets together, we considered only the intersection of CpG sites that were available for analysis in the two data sets. In addition, unlike in the previous experiments, here we potentially introduce new batch effects into the analysis, as in each experiment the original sample is combined with external data. We therefore accounted for the new batch information by adding it as a new covariate into BayesCCE. As in the case of known cell counts for a subset of the samples, we found that the inclusion of external samples with both methylation and cell counts substantially improved the performance in terms of correlation and absolute errors (Additional file 1: Figures S5 and S6 and Tables S1 and S2). These results clearly show that estimates can be dramatically more accurate given measured cell counts for as few as a few dozen samples in the data (or such samples from external data). For a more thorough assessment of performance, we again applied BayesCCE on the same sub-sampled data sets (n=300), while assuming known cell counts for a subset of the samples. In one scenario, we assumed cell counts were known for 5% of the samples in each data set (n=15), and in a second scenario, we included in the analysis methylation levels and cell-type proportions of 15 samples from external data. These experiments revealed in most cases a substantial improvement in correlation over a standard execution of BayesCCE (i.e., without inclusion of cell counts) and revealed in all cases a substantial improvement in the mean absolute error. The results are summarized in Fig. 2 for the case of six constituting cell types (k=6) and in Additional file 1: Figure S3 for the case of three constituting cell types (k=3). We further tested the performance of BayesCCE as a function of the number of samples for which cell counts are available. Remarkably, we found that known cell counts for only a few dozen of the samples are needed in order to achieve the maximal improvement in performance; including more samples with known cell counts did not provide a further improvement (Fig. 4). In addition, we evaluated the performance of BayesCCE as a function of the sample size. Interestingly, while performance did not improve by increasing the sample size beyond a few hundred samples in the case of unknown cell counts, we found that knowledge of cell counts for as few as 15 samples in the data allowed a monotonic improvement in performance in larger sample sizes (Additional file 1: Figure S7).
Performance of BayesCCE as a function of the number of samples for which cell counts are known, under the assumption of six constituting cell types in blood (k=6): granulocytes, monocytes, and four subtypes of lymphocytes (CD4+, CD8+, B cells, and NK cells). Presented are the medians of the mean absolute correlation values (MAC; in blue) and the medians of the mean absolute error values (MAE; in red) across the six cell types. Error bars indicate the range of MAC and MAE values across ten different executions for each number of samples with known cell counts. In every execution, samples with known cell counts were randomly selected, and all MAC and MAE values were calculated while excluding the samples with assumed known cell counts.
Finally, we considered an alternative approach for verifying the results of BayesCCE. Although our study aims at estimating cell-type proportions without the need for reference methylation data, BayesCCE jointly learns cell-type composition and cell-type-specific mean methylation levels (methylomes). Hence, as a by-product of the BayesCCE algorithm, we also obtain cell-type-specific methylomes across the CpG sites selected by BayesCCE as part of its feature selection process (see the "Methods" section). Our experiments found BayesCCE to provide one component per cell type; however, these components are not necessarily appropriately scaled, which implies that the estimated cell-type-specific methylation profiles are also not necessarily calibrated. Nevertheless, in the scenario where cell counts were known even for a small subset of the individuals in the study, BayesCCE provided calibrated cell count estimates. In such cases, we therefore expect BayesCCE to provide calibrated cell-type-specific methylation profiles. Using correlation maps, for each of the four whole-blood methylation data sets we analyzed, we verified high similarity between the cell-type-specific methylomes obtained by BayesCCE and those estimated from reference methylation data collected from sorted blood cells [11] (Additional file 1: Figure S8). In spite of an overall high similarity between these two approaches, the correlation patterns detected by BayesCCE did not perfectly match those estimated using the reference data. While this may demonstrate the expected accuracy limitations of BayesCCE to some extent, we also attribute these imperfect matches, at least in part, to inaccuracies introduced by the reference data set, owing to the fact that it was constructed only from a small group of individuals (n=6), who do not represent well all the individuals in other data sets in terms of methylome altering factors such as age [14], gender [15, 16], and genetics [17].
Robustness of BayesCCE to biases introduced by the cell composition prior
BayesCCE relies on prior information about the distribution of the cell-type composition in the studied tissue. In practice, the available prior information may not always precisely reflect the cell composition distribution of the individuals in the study. For instance, in a case/control study design, cases may demonstrate altered cell compositions compared with healthy individuals. Therefore, in this scenario, a prior estimated from a healthy population (or a sick population) is expected to deviate from the actual distribution in the sample. This potential problem is clearly not limited to case/control studies, but also applies to studies with quantitative phenotypes, if these are correlated with changes in cell composition of the studied tissue.
In principle, we can address this issue by incorporating several appropriate priors and assigning different priors to different individuals in the study. However, in practice, population-specific priors may be hard to obtain, mainly owing to the fact that numerous known and unknown factors can affect cell composition. We revisited our analysis from the previous subsections in an attempt to assess the robustness of BayesCCE to non-informative or misspecified priors. A desired behavior would allow BayesCCE to overcome a bias introduced by a prior which does not accurately represent all the individuals in the sample. Particularly, we considered three whole-blood case/control data sets, two schizophrenia data sets by Hannon et al., and a rheumatoid arthritis data set by Liu et al., all of which are expected to demonstrate differences in blood cell composition between cases and controls [26, 27]. In fact, in our analysis, we had an inherently misspecified prior since we learned the prior from hospital patients (outpatients), who are, overall, expected to represent a sick population better than a more general population. Specifically, out of the 595 individuals used for learning the prior, 64% are known to have taken at least one medication at the time of blood draw for cell counting and 24% were admitted to the hospital due to various conditions within 2 months before or after the time of their blood draw (70.4% either were admitted or took medications). We expect these conditions to be correlated with alterations in blood cell composition, and therefore, the prior information we used is expected to represent a deviation from a healthy population and, as a result, to misrepresent at least the control individuals in the case/control data sets we analyzed. We further considered an additional fourth data set by Hannum et al., which was originally studied in the context of aging (age range 19–101, mean 64.03, SD 14.73). Our prior was calculated using a sample with a different age distribution (range 20–88, mean 49.19, SD 16.69), thus potentially misrepresenting the cell composition distribution in the Hannum et al. data to some extent. Remarkably, we found the cell composition estimates given by BayesCCE to effectively detect differences between populations in the data sets, in spite of using a single prior estimated from one particular population. Specifically, we found that BayesCCE correctly detected the cell types which differentiate between cases and controls and between young and older populations; notably, in some of the data sets, we found BayesCCE to demonstrate some differences between cases and controls which were not captured by the reference-based estimates (Fig. 5). For example, NK cell abundance is known to change in aging in a process known as NK cell immunosenescence [28, 29], and monocyte levels are known to increase in RA patients compared with healthy individuals [30–32]. These differences in cell populations were detected by BayesCCE but not by the reference-based method, thus suggesting that BayesCCE could uncover signal which was undetected by the reference-based method (Fig. 5). That said, some other cell composition differences that were reported by BayesCCE but not by the reference-based method, or vice versa, may be the result of inaccuracies introduced by BayesCCE. Quantifying more accurately and reliably to what extent each method can detect cell composition differences would require several large data sets with known cell counts.
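Case/control comparisons of this kind (Fig. 5, left side) amount to per-cell-type two-sample t-tests with Bonferroni adjustment. A sketch on synthetic proportions, with made-up Dirichlet parameters:

```python
import numpy as np
from scipy.stats import ttest_ind

def cell_composition_tests(props, is_case, names):
    """Two-sample t-test per cell type, Bonferroni-adjusted."""
    k = props.shape[1]
    for h in range(k):
        t, p = ttest_ind(props[is_case, h], props[~is_case, h])
        print(f"{names[h]}: t={t:.2f}, adjusted p={min(1.0, p * k):.3g}")

rng = np.random.default_rng(7)
names = ["Gran", "Mono", "CD4", "CD8", "B", "NK"]
controls = rng.dirichlet([58, 8, 15, 9, 5, 5], size=150)
cases = rng.dirichlet([64, 10, 12, 7, 4, 3], size=150)  # shifted mixture
props = np.vstack([cases, controls])
is_case = np.arange(300) < 150
cell_composition_tests(props, is_case, names)
```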
The robustness of BayesCCE to prior misspecification and its ability to capture population-specific variability in cell-type composition, under the assumption of six constituting cell types in blood (k=6): granulocytes, monocytes, and four subtypes of lymphocytes (CD4+, CD8+, B cells, and NK cells). Left side: t test results (presented by the negative log of the Bonferroni-adjusted p values) for the difference in proportions of each cell type between cases and controls. Right side: the Dirichlet parameters of estimated cell counts stratified by cases and controls; red dashed rectangles emphasize the high similarity in the estimated case/control-specific cell composition distributions yielded by the different methods, regardless of the prior used ("prior"). Results are presented for four different data sets and using cell count estimates obtained by four approaches: the reference-based method, BayesCCE, BayesCCE with known cell counts for 5% of the samples (BayesCCE imp), and BayesCCE with 5% additional samples with both known cell counts and methylation from external data (BayesCCE imp ext). For the Hannum et al. data set, for the purpose of presentation, cases were defined as individuals with age above the median age in the study. In the evaluation of BayesCCE imp and BayesCCE imp ext, samples with assumed known cell counts were excluded before calculating p values and fitting the Dirichlet parameters.
In addition, for each data set, we estimated the distribution of white blood cells based on the BayesCCE cell count estimates, and verified the ability of BayesCCE to correctly capture two distinct distributions (cases and controls or young and older individuals), regardless of the single distribution encoded by the prior information (Fig. 5). While BayesCCE provides one component per cell type, these components are not necessarily appropriately scaled to provide cell count estimates in absolute terms. Therefore, for the latter analysis, we considered only the scenarios in which cell counts are known for a small number of individuals. We further evaluated the scenario in which two different population-specific prior distributions are available: one prior for cases and another for controls in the case/control studies, and one for young and another for older individuals in the aging study. For the purpose of this experiment, we estimated the priors using the reference-based estimates of a subset of the individuals (5% of the sample size) that were then excluded from the rest of the analysis. Interestingly, we found the inclusion of two prior distributions to provide no clear improvement over using a single general prior (Additional file 1: Table S3), further confirming the robustness of BayesCCE to inaccuracies introduced by the prior information due to cell composition differences between populations. Finally, we evaluated the effect of incorporating noisy priors on the performance of BayesCCE by considering a range of possible priors with different levels of inaccuracies, including a non-informative prior (Additional file 1: Figure S9). Not surprisingly, we observed that given cell counts for a small subset of samples, BayesCCE was overall robust to prior misspecification, which did not result in a substantially reduced performance even given a non-informative prior. In the absence of known cell counts, the performance of BayesCCE decreased somewhat; however, it remained reasonable even in the scenario of a non-informative prior.
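Population-specific Dirichlet parameters of the kind shown in Fig. 5 (right side) can be fitted to a matrix of proportions by, for example, a simple method-of-moments approximation, as sketched below on synthetic data; we do not claim this is the estimator used for the figure, and an MLE fit could be used instead.

```python
import numpy as np

def fit_dirichlet_moments(P):
    """Method-of-moments Dirichlet fit to an n-by-k matrix of
    proportions: alpha = s * mean, with the precision s estimated
    from the per-component means and variances."""
    mean = P.mean(axis=0)
    var = P.var(axis=0, ddof=1)
    s = np.mean(mean * (1 - mean) / var - 1)  # averaged precision estimate
    return s * mean

rng = np.random.default_rng(8)
alpha_true = np.array([60.0, 8.0, 15.0, 9.0, 5.0, 4.0])
P = rng.dirichlet(alpha_true, size=500)   # e.g., measured cell counts
print(np.round(fit_dirichlet_moments(P), 1))
```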
In fact, even with a non-informative prior, BayesCCE overall performed better than the competing reference-free methods (ReFACTor, NNMF, and MeDeCom). We attribute this result to the combination of the constraints defined in BayesCCE with the sparse low-rank assumption it takes, which seems to handle the high-dimensional nature of the computational problem more efficiently (see the "Methods" section). We note that in the presence of a non-informative prior, BayesCCE conceptually reduces to the performance of ReFACTor, and therefore, it captures the same cell composition variability in the data. Yet, owing to the additional constraints, BayesCCE is able to go beyond ReFACTor by capturing a set of components such that each component corresponds to one cell type.
Discussion
We introduce BayesCCE, a Bayesian method for estimating cell-type composition from heterogeneous methylation data without the need for methylation reference. We show mathematically and empirically the non-identifiable nature of the more straightforward reference-free NNMF approach for inferring cell counts, which tends to provide only linear combinations of the cell counts. In contrast, while we do not provide conditions for the uniqueness of a BayesCCE solution, our empirical evidence from multiple data sets clearly demonstrates the success of BayesCCE in providing desirable results of one component per cell type by leveraging readily obtainable prior information from previously collected data. The parameters of the prior required by BayesCCE can be estimated by utilizing previous studies that collected cell counts from the tissue of interest. In our evaluation of the method, we used whole-blood methylation data, and we considered the classical definition of leukocyte cell types, which relies on cell surface markers. Considering other definitions of cell types is of potential interest; particularly, it would be interesting to examine to what extent BayesCCE and the reference-free methods can capture cell-type composition following a methylation-based definition of cell types (i.e., when defining cell types according to their methylation patterns). Since BayesCCE captures cell composition variation under the classical definition of cell types by using the most dominant components of variation in the data, the main cell types of a natural methylation-based definition are expected to be a linear combination of the cell types under the classical definition. Much like in the experiments we presented here, wherein, given a prior about the distribution of the cell types, BayesCCE directed the solution towards an appropriate linear transformation, we would expect BayesCCE to perform similarly in the case of a methylation-based definition of cell types (given appropriate prior information about the distribution of cell types). Nevertheless, obtaining such a definition and evaluating BayesCCE under that definition would require obtaining appropriate single-cell methylation data, which is currently scarcely available. Moreover, deriving an actual meaningful definition of cell types given such data is a non-trivial problem. Therefore, until such a definition and appropriate data are available, we are bound to consider the classical definition of cell types. Since BayesCCE requires a prior which can be estimated from previously collected cell counts without the need for any other genomic data, obtaining such a prior is relatively easy for many tissues, such as the brain [33], heart [34], and adipose tissue [35].
Particularly, such data should be substantially easier to obtain compared to reference data from sorted cells for the corresponding tissues. Ideally, in order to learn the prior, one would want to use cell counts coming from the same population as the target population. Nevertheless, empirically, we observe that BayesCCE leverages the prior to direct the solution while still allowing enough flexibility, which makes it robust even to substantial deviations of the prior from the true underlying cell composition distribution. In fact, our results demonstrate that BayesCCE handles biases introduced by the prior remarkably well. Particularly, it allows capturing differences in cell compositions between different populations in the same study, thus providing an opportunity to study cell composition differences between different populations even in the absence of a methylation reference. Since no large data sets with measured cell counts are currently publicly available, we used a supervised method [5] for obtaining cell-type proportion estimates, which were used as the ground truth in our experiments. Even though the method used for obtaining these estimates was shown to reasonably estimate leukocyte cell proportions from whole-blood methylation data in several independent studies [18, 23, 24], these estimates may have introduced biases into the analysis. Particularly, any inaccuracies introduced by the reference-based method could have directly affected the results of our evaluation. Our results indicate that such inaccuracies are more likely in some particular cell types than in others. Failing to accurately estimate a particular cell type may result from various causes. Notably, utilizing inappropriate reference data or failing to select a set of informative features that mark a particular cell type may dramatically affect its estimated values. Other reasons which are not methodological may also lead to inaccuracies of the estimates. For example, two cell types with very similar methylation patterns will be hardly distinguishable. In spite of the potential pitfalls of using estimates as a baseline for evaluation, we believe that our results on several independent data sets, including simulated data, and the use of a prior estimated from a large data set of high-resolution cell counts, provide compelling evidence for the utility of BayesCCE. We further demonstrate that imputation of cell counts can be highly accurate when cell counts are available for some of the samples in the data. Particularly, based on our experiments, only as few as a few dozen samples with known cell counts are needed in order to substantially improve performance. Moreover, in the general setup of BayesCCE, where no cell counts are known, each component corresponds to one cell type; however, it is not necessarily in the right scale, and there is no automatic way to determine the identity of that cell type. In contrast, in the case of cell count imputation, where cell counts are known for a subset of the samples, the assignment of components into cell types is straightforward. In addition, as we showed, BayesCCE is able to reconstruct cell counts up to a small absolute error (i.e., each component is scaled to form cell proportion estimates of one particular known cell type). We note that in our evaluation of BayesCCE, we considered only whole-blood data sets. Studying other tissues or biological conditions is clearly of interest.
However, in the absence of other tissue-specific methylation references that have been clearly shown to yield reasonable cell-type proportion estimates, an evaluation of performance on tissues other than whole blood would not be reliable. We therefore opted to focus on evaluating the performance of BayesCCE using multiple large whole-blood data sets. Importantly, beyond its potential utility for complex biological scenarios in which reference data are unavailable, BayesCCE may also provide an opportunity to improve cell count estimates in whole-blood studies in which the currently available reference data are not appropriate. Notably, in a recent work, we have shown using multiple whole-blood data sets that ReFACTor outperforms the reference-based method in correcting for cell composition [19]. Differences in performance between ReFACTor (upon which BayesCCE relies for obtaining a starting point that captures the cell composition variation in the data) and the reference-based method are expected to be especially large in studies where the available reference data do not represent the individuals in the study well. We argue that this is likely to be the typical case, as the current go-to whole-blood reference consists of only six individuals [11], who represent a very specific and narrow population in terms of methylome-altering factors, such as age [14], gender [15, 16], and genetics [17]. That said, large data sets with experimentally measured cell counts are required in order to fully investigate and demonstrate these claims. We further note that in our benchmarking of BayesCCE against existing reference-free methods, we considered only a subset of the methods available in the literature. Other reference-free methods have been suggested in the context of accounting for cell composition in methylation data; however, these do not provide explicit components, but rather only implicitly account for cell composition variability in association studies. While in principle these methods could be modified to produce components, in this work, we focused only on methods that can be readily used to provide explicit components for evaluation. We further note that several supervised and unsupervised decomposition methods have been suggested for estimating cell composition from gene expression [36–40]. However, these were tailored to gene expression data and, to the best of our knowledge, none of them takes into account prior knowledge about the cell composition distribution as BayesCCE does. It remains of interest to investigate whether BayesCCE can be adapted for estimating cell composition from gene expression without the need for purified expression profiles. Finally, our approach is based on finding a suitable linear transformation of the components found by ReFACTor [8]. It is therefore important to follow the guidelines for the application of ReFACTor, such as the incorporation of methylation-altering covariates; these guidelines were recently highlighted elsewhere [19, 41]. Since BayesCCE relies on the ReFACTor components, it is limited by their quality; in particular, if the variability of some cell type is not captured by ReFACTor, BayesCCE will not be able to estimate that cell type well.
Such an outcome is possible in scenarios where the variation of a particular cell type is substantially weaker than other sources of variation in the data (unrelated to cell-type composition); we note, however, that this potential limitation is not exclusive to ReFACTor or BayesCCE but is rather a general limitation of all existing reference-free methods. If used merely for correcting for a potential cell-type composition confounder in methylation data, BayesCCE will effectively provide the same result as ReFACTor. Since ReFACTor does not allow direct cell count estimates to be inferred, but rather linear transformations of those, we suggest using BayesCCE when a study of individual cell types is performed and ReFACTor therefore cannot be used. In case merely a correction for cell composition is desired, we suggest using BayesCCE when cell counts are known for a subset of the samples, and ReFACTor otherwise.

Conclusions

We introduce a Bayesian method for estimating cell-type composition from heterogeneous methylation data using a prior on the cell composition distribution. In contrast to previous methods, BayesCCE generates components such that each component corresponds to a single cell type. These components allow researchers to perform types of downstream analyses that are not possible with previous reference-free methods, which essentially capture linear combinations of several cell types in each component they provide. Based on our results, which show a further substantial improvement when some cell counts are incorporated into the analysis, we recommend that in future studies either cell counts be measured for at least a couple of dozen of the samples or external data of samples with measured cell counts be utilized.

Notations and related work

Let \(O\in \mathbb {R}^{m\times n}\) be an m sites by n samples matrix of DNA methylation levels coming from a heterogeneous source consisting of k cell types. For methylation levels, we consider what is commonly referred to as beta-normalized methylation levels, which are defined for each sample at each site as the proportion of methylated probes out of the total number of probes; put differently, \(O_{ji}\in[0,1]\) for each site j and sample i. We denote \(M\in \mathbb {R}^{m\times k}\) as the matrix of cell-type-specific mean methylation levels for each site, and we denote the row of this matrix corresponding to the jth site by \(M_{j,\cdot}\). Additionally, we denote \(R\in \mathbb {R}^{n\times k}\) as the matrix of cell-type proportions of the samples in the data. A common model for observed mixtures of DNA methylation is
$$ O_{ji} = M_{j,\cdot}\, R_{i}^{T} + \epsilon_{ji} \qquad\qquad (1) $$
$$ \epsilon_{ji} \sim N\left(0,\sigma^{2}\right) \qquad\qquad (2) $$
$$ \forall i\, \forall h: R_{ih}\geq 0 \qquad\qquad (3) $$
$$ \forall i: \sum\limits_{h=1}^{k} R_{ih}=1 \qquad\qquad (4) $$
$$ \forall j\, \forall h: 0\leq M_{jh}\leq 1 \qquad\qquad (5) $$
where the error term \(\epsilon_{ji}\) models measurement noise and other possible unmodeled factors. The constraints in (3) and in (4) require the cell proportions to be non-negative and to sum to one in each sample, and the constraints in (5) require the cell-type-specific mean levels to be in the range [0,1]. This model was initially suggested for DNA methylation in the context of reference-based estimation of cell proportions by Houseman et al. [5]. We are interested in estimating R.
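To make the mixture model above concrete, here is a minimal sketch (Python/NumPy; the dimensions and all names are illustrative and are not taken from the original implementation) of generating an observed matrix O satisfying the constraints in (1)–(5):

```python
# Minimal sketch: generate data under O = M R^T + E with the model constraints.
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 1000, 200, 6                 # sites, samples, cell types

# Cell-type-specific mean methylation levels, 0 <= M_jh <= 1  (constraint 5)
M = rng.uniform(0.05, 0.95, size=(m, k))

# Cell proportions: rows non-negative and summing to one  (constraints 3-4)
R = rng.dirichlet(alpha=np.ones(k), size=n)        # n x k

# Observed mixtures with i.i.d. Gaussian noise, clipped to the beta scale
sigma = 0.01
O = np.clip(M @ R.T + rng.normal(0.0, sigma, size=(m, n)), 0.0, 1.0)

assert np.allclose(R.sum(axis=1), 1.0)
```

Note that the actual simulation procedure used in this study (see "Data simulation" below) draws individual-specific cell-type methylomes rather than a single fixed M; the sketch above fixes M across individuals for simplicity.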
Taking a standard maximum likelihood approach for fitting the model results in the following optimization problem:
$$ \hat{R},\hat{M}= \underset{R,M}{\text{argmin}} \;\; \left\| O- M R^{T}\right\|_{F}^{2} \qquad\qquad (6) $$
$$ \text{s.t.} \;\; \forall i\, \forall h: R_{ih}\geq 0 \qquad\qquad (7) $$
$$ \forall i: \sum\limits_{h=1}^{k} R_{ih}=1 \qquad\qquad (8) $$
$$ \forall j\, \forall h: 0\leq M_{jh}\leq 1 \qquad\qquad (9) $$
where \(\|\cdot \|_{F}^{2}\) is the squared Frobenius norm. The reference-based method [5] first obtains an estimate of M from reference methylation data collected from sorted cells of the cell types composing the studied tissue. Once an estimate of M is fixed, R can be estimated by solving a standard quadratic program. If the matrix M is unknown, which is the reference-free version of the problem, the above formulation can be regarded as a version of the non-negative matrix factorization (NNMF) problem. NNMF has been applied in several areas of biology; notably, the problem of inferring cell-type composition from methylation data has recently been formulated as an NNMF problem [9]. In order to optimize the model, the authors used an alternating optimization procedure in which M or R is optimized while the other is kept fixed. However, as demonstrated by the authors [9], this solution results in the inference of linear combinations of the cell proportions R. Put differently, more than one component of the NNMF is required for explaining each cell type in the data. This was recently further highlighted and explained using geometric considerations [10], which nicely showed the non-identifiable nature of the NNMF model in (6) in the case that a perfect factorization of O into M,R exists (i.e., O=MR^T). However, in practice, a perfect factorization never exists in real biological data. Thus, in addition to empirical evidence from several data sets on which we apply the NNMF method (see the "Results" section), in the next subsection we provide a mathematical proof of the non-identifiability of the NNMF model in (6) under a more general case, where a perfect factorization does not necessarily exist. In an attempt to overcome the non-identifiability of the model in (6) and to provide cell-type proportions when reference methylation data are not available, a modification of the NNMF model has recently been suggested [10]. The resulting method, MeDeCom, solves the optimization of the NNMF model while including an additional penalty term in the objective function. Derived from biological knowledge about mean methylation levels, the penalty negatively weights mean methylation levels diverging from the known bimodal behavior of methylation levels, wherein CpGs tend to be overall methylated or overall unmethylated [10]. While the modified objective suggested in MeDeCom overcomes the non-identifiability of the NNMF model for a given weight of the penalty (λ), it is not entirely clear how to select λ. To circumvent this problem, the authors proposed a cross-validation procedure for the selection of λ. However, our empirical results from four large whole-blood methylation data sets, as well as from simulated data, show sub-optimal performance for MeDeCom, similar to the solutions of the simpler NNMF model. Our results suggest that the modification introduced by MeDeCom may not effectively avoid the non-identifiable nature of the NNMF model, possibly due to insufficient prior information or an inability to effectively determine an appropriate value for λ.
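To make the alternating optimization scheme discussed above concrete, the following is a minimal sketch of a reference-free NNMF fit for the problem in (6). It is not the RefFreeEWAS implementation: in particular, the simplex constraint on the rows of R is enforced here crudely, by a non-negative fit followed by renormalization, whereas the exact subproblem is a quadratic program; all names are illustrative.

```python
# Minimal sketch: alternating minimization for O ~ M R^T with
# 0 <= M <= 1 and rows of R on the probability simplex (crudely enforced).
import numpy as np
from scipy.optimize import nnls, lsq_linear

def alternating_nnmf(O, k, n_iter=30, seed=0):
    m, n = O.shape
    rng = np.random.default_rng(seed)
    R = rng.dirichlet(np.ones(k), size=n)                 # n x k
    M = np.clip(O @ R @ np.linalg.inv(R.T @ R), 0, 1)     # m x k warm start
    for _ in range(n_iter):
        # M-step, site by site: min ||O_j - R M_j||^2 s.t. 0 <= M_jh <= 1
        for j in range(m):
            M[j] = lsq_linear(R, O[j], bounds=(0.0, 1.0)).x
        # R-step, sample by sample: non-negative fit, then renormalize rows
        for i in range(n):
            r, _ = nnls(M, O[:, i])
            R[i] = r / max(r.sum(), 1e-12)
    return M, R
```

Even a perfectly converged solution of this problem need not be unique, as the construction in the next subsection shows.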
Another recent reference-free method for estimating cell composition in methylation data, ReFACTor [8], performs an unsupervised feature selection step followed by a principal component analysis (PCA). Similarly to the NNMF solution, ReFACTor is an unsupervised method, and it only finds principal components (PCs) that form linear combinations of the cell proportions rather than directly estimating the cell proportion values [8].

Non-identifiability of the NNMF model

We hereby show by construction the non-identifiable nature of the NNMF model in (6). For this proof, instead of the constraints in (9), we consider a slightly modified version of the constraints:
$$ \forall j\, \forall h: 0 < M_{jh} < 1 \qquad\qquad (10) $$
While in theory we may have an equality (i.e., M_{jh}=0 or M_{jh}=1), in practice, such sites are typically not measured or are excluded from the analysis, since they would not demonstrate any variability. Let \(\hat{R},\hat{M}\) be a solution to the problem in (6). We claim that there exist \(\tilde{R}\neq \hat{R}, \tilde{M}\neq \hat{M}\) such that \(\left\| O - \hat{M}\hat{R}^{T}\right\|_{F}^{2} = \left\| O - \tilde{M}\tilde{R}^{T}\right\|_{F}^{2}\) and the constraints in (7), (8), and (10) are satisfied. Let 0<c<1, and define \(Q\in\mathbb{R}^{k\times k}\) to be the identity matrix up to two entries: \(Q_{11}=1-c, Q_{12}=c\). It follows that \(Q^{-1}\) is also the identity matrix up to two entries: \(Q^{-1}_{11}=\frac{1}{1-c}, Q^{-1}_{12}=\frac{c}{c-1}\). Denote \(\tilde{R}=\hat{R}Q\) and \(\tilde{M}=\hat{M}\left(Q^{-1}\right)^{T}\); we get that
$$ \left\| O - \tilde{M}\tilde{R}^{T}\right\|_{F}^{2} = \left\| O - \hat{M}\left(Q^{-1}\right)^{T} Q^{T}\hat{R}^{T}\right\|_{F}^{2} = \left\| O - \hat{M}\hat{R}^{T}\right\|_{F}^{2}. $$
The constraints in (7) hold since \(\tilde{R}_{ih}\ge 0\) for each \(1\le i\le n, 1\le h\le k\) (for \(0<c<1\), every entry of \(\tilde{R}\) is a non-negative combination of entries of \(\hat{R}\)). The constraints in (8) hold since for each \(1\le i\le n\)
$$ \sum\limits_{h=1}^{k}\tilde{R}_{ih} = \sum\limits_{h=1}^{k}\sum\limits_{l=1}^{k}\hat{R}_{il}Q_{lh} = (1-c)\hat{R}_{i1} + c\hat{R}_{i1} + \sum\limits_{h=2}^{k}\hat{R}_{ih} = \sum\limits_{h=1}^{k}\hat{R}_{ih} = 1. $$
In addition, \(\tilde{M}^{T}_{hj}\in(0,1)\) for \(2\le h\le k, 1\le j\le m\), since these rows of \(\tilde{M}^{T}\) coincide with those of \(\hat{M}^{T}\). In order to completely satisfy the constraints in (10), we also require these constraints to be satisfied for \(h=1, 1\le j\le m\). It is easy to see that for each j the latter is satisfied if
$$ 0 < c < \min\left\{ \frac{1-\hat{M}^{T}_{1j}}{1-\hat{M}^{T}_{2j}},\; \frac{\hat{M}^{T}_{1j}}{\hat{M}^{T}_{2j}} \right\}. $$
Therefore, we can simply select a value of c in the range
$$ 0 < c < \min_{j}\left\{ \min\left\{ \frac{1-\hat{M}^{T}_{1j}}{1-\hat{M}^{T}_{2j}},\; \frac{\hat{M}^{T}_{1j}}{\hat{M}^{T}_{2j}} \right\}\right\}. $$
Note that we necessarily have either
$$ 0 < \min_{j}\left\{ \min\left\{ \frac{1-\hat{M}^{T}_{1j}}{1-\hat{M}^{T}_{2j}},\; \frac{\hat{M}^{T}_{1j}}{\hat{M}^{T}_{2j}} \right\}\right\} < 1 \quad\text{or}\quad 0 < \min_{j}\left\{ \min\left\{ \frac{1-\hat{M}^{T}_{2j}}{1-\hat{M}^{T}_{1j}},\; \frac{\hat{M}^{T}_{2j}}{\hat{M}^{T}_{1j}} \right\}\right\} < 1. $$
In the latter case, we can switch the positions of the first two columns of M. Equality of the minimum to 1 in both cases would mean that \(M_{1}=M_{2}\), which would mean that the problem is non-identifiable anyway, as the first two cell types cannot be distinguished in this scenario. As a result of the above, the constraints in (10) can be satisfied for a range of values of c. \(\square\)
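The construction in the proof is easy to verify numerically. The sketch below (Python/NumPy; all names illustrative) builds \(\tilde{R}\) and \(\tilde{M}\) from a random feasible solution and checks that the fit and the constraints in (7), (8), and (10) are preserved:

```python
# Minimal sketch: numerical check of the non-identifiability construction.
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 500, 100, 4
M_hat = rng.uniform(0.1, 0.9, size=(m, k))     # strictly inside (0, 1)
R_hat = rng.dirichlet(np.ones(k), size=n)
O = M_hat @ R_hat.T

# Admissible range for c, per the bound derived in the proof
MT = M_hat.T
c_max = np.min(np.minimum((1 - MT[0]) / (1 - MT[1]), MT[0] / MT[1]))
c = 0.5 * min(c_max, 1.0)

Q = np.eye(k); Q[0, 0] = 1 - c; Q[0, 1] = c
R_til = R_hat @ Q
M_til = M_hat @ np.linalg.inv(Q).T

print(np.allclose(M_til @ R_til.T, O))                       # same fit
print(np.all(R_til >= 0), np.allclose(R_til.sum(1), 1.0))    # (7) and (8)
print(np.all((M_til > 0) & (M_til < 1)))                     # (10)
```

All three checks print True, illustrating that the two factorizations are indistinguishable by the objective in (6).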
We suggest a more detailed model by adding a prior on R and taking into account potential covariates. Specifically, we assume that
$$ R_{i}^{T} \sim \text{Dirichlet}(\alpha_{1},\ldots,\alpha_{k}) \qquad\qquad (11) $$
where \(\alpha_{1},\ldots,\alpha_{k}\) are assumed to be known. In practice, the parameters are estimated from external data in which the cell-type proportions of the studied tissue are known. Such experimentally obtained cell-type proportions were used to test the appropriateness of the Dirichlet prior for describing the cell composition distribution (data not shown). We further consider additional factors of variation affecting the observed methylation levels, beyond variation in cell-type composition. Specifically, denote \(X\in\mathbb{R}^{n\times p}\) as a matrix of p covariates for each individual and \(S\in\mathbb{R}^{m\times p}\) as a matrix of the corresponding effects of the p covariates on each of the m sites. As before, we are interested in estimating R, the cell-type proportions of the k cell types. Deriving a maximum likelihood-based solution for this model and repeating the constraints for completeness result in the following optimization problem:
$$ \hat{R},\hat{M},\hat{S} = \underset{R,M,S}{\text{argmin}}\;\; \frac{1}{2\sigma^{2}}\left\| O-M R^{T} -S X^{T}\right\|_{F}^{2} - \sum\limits_{h=1}^{k}(\alpha_{h}-1)\sum\limits_{i=1}^{n}\log(R_{ih}) \qquad\qquad (12) $$
$$ \text{s.t.}\;\; \forall i\, \forall h: R_{ih}\geq 0 \qquad\qquad (13) $$
$$ \forall i: \sum\limits_{h=1}^{k} R_{ih}=1 \qquad\qquad (14) $$
$$ \forall j\, \forall h: 0\leq M_{jh}\leq 1 \qquad\qquad (15) $$
Our intuition in this model is that since the priors on R are estimated from real data, incorporating them will push the solution of the optimization towards estimates of R that are closer to the true values, rather than a linear combination of them. Our algorithm uses ReFACTor as a starting point. Specifically, we estimate R by finding an appropriate linear transformation of the ReFACTor principal components (ReFACTor components). In principle, any of the reference-free methods we examined (ReFACTor, NNMF, and MeDeCom) could be used as the starting point for our method. However, we found that ReFACTor captures a larger portion of the cell composition variance compared with the alternatives (Additional file 1: Figure S1). Applying ReFACTor to our input matrix O, we get a list of t sites that are expected to be the most informative with respect to the cell composition in O. Let \(\tilde{O}\in\mathbb{R}^{t\times n}\) be a truncated version of O containing only the t sites selected by ReFACTor. We apply PCA to \(\tilde{O}\) to get \(L\in\mathbb{R}^{t\times d}\) and \(P\in\mathbb{R}^{n\times d}\), the loadings and scores of the first d ReFACTor components.
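As an illustration of obtaining such a starting point, the sketch below selects t candidate sites and computes loadings L and scores P of the first d components via an SVD. The site ranking used here (by variance) is only a stand-in chosen for brevity; ReFACTor's actual feature selection is its own sparse-PCA-based procedure [8].

```python
# Minimal sketch: truncate O to t sites and take the first d components.
import numpy as np

def starting_point(O, t=500, d=6):
    # O: m sites x n samples, beta-normalized
    idx = np.argsort(O.var(axis=1))[-t:]            # stand-in site selection
    O_til = O[idx]                                   # t x n truncated matrix
    Xc = O_til - O_til.mean(axis=1, keepdims=True)   # center each site
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    L = U[:, :d] * S[:d]                             # t x d loadings
    P = Vt[:d].T                                     # n x d component scores
    return O_til, L, P
```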
Then, we reformulate the original optimization problem in terms of linear transformations of L and P as follows:
$$ \hat{A},\hat{V},\hat{B} = \underset{A,V,B}{\text{argmin}}\;\; \frac{1}{2\sigma^{2}}\left\|\tilde{O} - LA V^{T}P^{T} -LBX^{T}\right\|_{F}^{2} - \sum\limits_{h=1}^{k}(\alpha_{h}-1)\sum\limits_{i=1}^{n}\log\left(\sum\limits_{l=1}^{d}P_{il}V_{lh}\right) \qquad\qquad (16) $$
$$ \text{s.t.}\;\; \forall i\, \forall h: \sum\limits_{l=1}^{d}P_{il}V_{lh} \geq 0 \qquad\qquad (17) $$
$$ \forall i: \sum\limits_{h=1}^{k}\sum\limits_{l=1}^{d}P_{il}V_{lh}=1 \qquad\qquad (18) $$
$$ \forall j\, \forall h: 0 \leq \sum\limits_{l=1}^{d} L_{jl}A_{lh} \leq 1 \qquad\qquad (19) $$
where \(A\in\mathbb{R}^{d\times k}\) is a transformation matrix such that \(\tilde{M}=LA\) (\(\tilde{M}\) being a truncated version of M with the t sites selected by ReFACTor), \(V\in\mathbb{R}^{d\times k}\) is a transformation matrix such that \(R=PV\), and \(B\in\mathbb{R}^{d\times p}\) is a transformation matrix such that LB corresponds to the effects of each covariate on the methylation levels at each site. The constraints in (17) and in (18) correspond to the constraints in (13) and in (14), and the constraints in (19) correspond to the constraints in (15). Given \(\hat{V}\), we simply return \(\hat{R}=P\hat{V}\) as the estimated cell proportions. Note that in the new formulation we are now required to learn only d(2k+p) parameters (d, k, and p being small constants), a dramatically decreased number of parameters compared with the original problem, which requires nk+m(k+p) parameters. By taking this approach, we assume that \(\tilde{O}\) contains a low-rank structure that captures the cell composition using d orthogonal vectors. While a natural value for d would be k, d is not restricted to equal k. In particular, in cases where substantial additional cell composition signal is expected to be captured by later ReFACTor components (i.e., components beyond the first k), we would expect to benefit from increasing d. Clearly, increasing d too far is expected to result in overfitting and thus a decrease in performance. Finally, taking into account covariates with potentially dominant effects in the data should alleviate the risk of introducing noise into \(\hat{R}\) in the case of a mixed low-rank structure of cell composition signal and other unwanted variation in the data. We note, however, that, similar to the case of correlated explanatory variables in regression, considering covariates that are expected to be correlated with the cell-type composition may result in underestimation of A and V and therefore in a decrease in the quality of \(\hat{R}\).
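The following is a minimal sketch of fitting the reformulated problem (16) with a general-purpose constrained solver. It is a simplified stand-in for the actual BayesCCE optimization (which, as noted under "Implementation and practical issues" below, was solved with Matlab's fmincon); the initialization, solver options, and epsilon handling here are illustrative.

```python
# Minimal sketch: solve for A (d x k), V (d x k), B (d x p) in problem (16).
import numpy as np
from scipy.optimize import minimize

def fit_bayescce_like(O_til, L, P, X, alpha, sigma2, eps=1e-4):
    t, d = L.shape
    n, k, p = P.shape[0], len(alpha), X.shape[1]

    def unpack(z):
        A = z[:d * k].reshape(d, k)
        V = z[d * k:2 * d * k].reshape(d, k)
        B = z[2 * d * k:].reshape(d, p)
        return A, V, B

    def objective(z):
        A, V, B = unpack(z)
        resid = O_til - L @ A @ V.T @ P.T - L @ B @ X.T
        R = np.maximum(P @ V, eps)                     # guard the log term
        prior = np.sum((alpha - 1) * np.log(R).sum(axis=0))
        return resid.ravel() @ resid.ravel() / (2 * sigma2) - prior

    cons = [
        # (17), relaxed: cell proportions bounded away from zero
        {'type': 'ineq', 'fun': lambda z: (P @ unpack(z)[1]).ravel() - eps},
        # (18): rows of R = PV sum to one
        {'type': 'eq', 'fun': lambda z: (P @ unpack(z)[1]).sum(axis=1) - 1.0},
        # (19): mean methylation levels LA within [0, 1]
        {'type': 'ineq', 'fun': lambda z: (L @ unpack(z)[0]).ravel()},
        {'type': 'ineq', 'fun': lambda z: 1.0 - (L @ unpack(z)[0]).ravel()},
    ]
    z0 = np.full(d * (2 * k + p), 0.1)   # d(2k+p) parameters, as in the text
    res = minimize(objective, z0, method='SLSQP', constraints=cons,
                   options={'maxiter': 200})
    A, V, B = unpack(res.x)
    return P @ V                          # estimated cell proportions R-hat
```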
Imputing cell counts using a subset of samples with measured cell counts

In practice, we observe that each of BayesCCE's components corresponds to a linear transformation of one cell type rather than to an estimate of that cell type in absolute terms; that is, it still lacks the right scaling (multiplication by a constant and addition of a constant) for transforming it into cell-type proportions. Furthermore, we would like the ith BayesCCE component to correspond to the ith cell type described by the prior through the \(\alpha_i\) parameter; empirically, this is not necessarily the case, especially in scenarios where some of the \(\alpha_i\) values are similar. In order to address these two caveats, we suggest incorporating measured cell counts for a subset of the samples in the data. Assume we have \(n_0\) reference samples in the data with known cell counts \(R^{(0)}\) and \(n_1\) samples with unknown cell counts \(R^{(1)}\) (\(n=n_0+n_1\)). This problem can be regarded as an imputation problem, in which we aim to impute cell counts for the samples with unknown cell counts. We can find \(\hat{M}\) by solving the problem in (12) under the constraints in (15) for the \(n_0\) reference samples, replacing R with \(R^{(0)}\) and keeping it fixed. Then, given \(\hat{M}\), we can solve the problem in (16), after replacing LA with \(\hat{M}\) (i.e., we now find only V and B), under the following constraints:
$$ \forall (1 \leq i \leq n_{0})\, \forall h: \sum\limits_{l=1}^{d}P_{il}^{(0)}V_{lh} = R^{(0)}_{ih} \qquad\qquad (20) $$
$$ \forall (1 \leq i \leq n_{1})\, \forall h: \sum\limits_{l=1}^{d}P_{il}^{(1)}V_{lh} \geq 0 \qquad\qquad (21) $$
$$ \forall (1 \leq i \leq n_{1}): \sum\limits_{h=1}^{k} \sum\limits_{l=1}^{d}P_{il}^{(1)}V_{lh} = 1 \qquad\qquad (22) $$
where \(P^{(0)}\) contains the \(n_0\) rows of P corresponding to the reference samples and \(P^{(1)}\) contains the \(n_1\) rows corresponding to the remaining samples. In this case, both the problem of estimating M and the problem of solving (16) while keeping \(\hat{M}\) fixed are convex: the first takes the form of a standard quadratic program, and the latter results in an optimization problem of the sum of two convex terms under linear constraints. Using \(\hat{M}\), estimated from cell counts and corresponding methylation levels of a group of samples, and adding the constraints in (20) are expected to direct the inference of R towards a set of components such that each one corresponds to one known cell type with a proper scale. We note that, given an estimate \(\hat{M}\) as described above, we could also solve the problem in (12) directly rather than the problem in (16). This approach may be more desirable in cases where P does not effectively capture the cell composition variation in the data. In the context of our study, however, it is not possible to reliably evaluate the approach of directly solving the problem in (12), owing to the fact that the ground truth we use for evaluation is based on the same matrix M. Specifically, in this case, the cell proportions of the reference individuals are expected to recover the same matrix M that was used for computing the ground-truth proportions of the non-reference individuals. As a result, the estimated proportions of the non-reference individuals would be exactly the ground truth used in the evaluation (up to a statistical error arising from the estimation of M), regardless of the true accuracy of the estimate \(\hat{M}\) with respect to the true M and regardless of the true accuracy of the cell proportion estimates.
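For the first step above (estimating \(\hat{M}\) from the reference samples), each site decouples into a small bounded least-squares problem, i.e., a standard quadratic program. A minimal sketch, with illustrative names:

```python
# Minimal sketch: estimate M site by site given known cell counts R0.
import numpy as np
from scipy.optimize import lsq_linear

def estimate_M_from_reference(O0, R0):
    # O0: m sites x n0 reference samples; R0: n0 x k known cell proportions
    m, k = O0.shape[0], R0.shape[1]
    M_hat = np.empty((m, k))
    for j in range(m):
        # min_{M_j} || O0[j] - R0 @ M_j ||^2  s.t.  0 <= M_jh <= 1
        M_hat[j] = lsq_linear(R0, O0[j], bounds=(0.0, 1.0)).x
    return M_hat
```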
Implementation and practical issues

We estimate \(\sigma^2\) in (16) as the mean squared error of predicting \(\tilde{O}\) with P and X. The \(\alpha_1,\ldots,\alpha_k\) Dirichlet parameters of the prior can be estimated from cell counts using maximum likelihood estimators. In practice, we add a column of ones to both L and P in (16) in order to ensure feasibility of the problem; these constant columns are used to compose the mean methylation level per site across all cell types and the mean cell proportion in each cell type across all samples. In addition, we slightly relax some of the constraints in the problem to avoid problems due to numerical instability and inconsistent noise. First, the inequality constraints in (17) and in (21) are changed to require the cell proportions to be greater than \(\epsilon>0\), as required by the logarithm term in the objective (\(\epsilon=0.0001\)). In addition, we do not impose the equality constraints in (18) and in (22) but rather allow a small deviation from equality (5%), and, given cell counts for a subset of the samples, we allow a small deviation from the equality constraints in (20) owing to expected inaccuracies of cell count measurements (1%). The last two relaxations are required for assuring a feasible solution, owing to the fact that we fit \(\hat{M},\hat{R}\) jointly. Specifically, since the starting point for the optimization is essentially a set of principal components (given by ReFACTor), which are not guaranteed to capture only cell composition variation, obtaining linear transformations that precisely satisfy the constraints is, in practice, expected to be the exception rather than the rule. We verified this empirically (data not shown), and we further observed that these relaxations eventually result in feasible solutions that typically concentrate tightly around the original constraints. We performed all the experiments in this paper using a Matlab implementation of BayesCCE. Specifically, we solved the optimization problems in BayesCCE using the fmincon function with the default interior-point algorithm, and we used the fastfit [42] Matlab package for calculating maximum likelihood estimates of the Dirichlet priors. All executions of BayesCCE required less than an hour (and typically several minutes) on a 64-bit Mac OS X computer with a 3.1 GHz processor and 16 GB of RAM. The corresponding code is available at https://github.com/cozygene/bayescce.

Evaluation of performance

The fraction of cell composition variation (\(R^2\)) captured by each of the reference-free methods, ReFACTor, NNMF, and MeDeCom, was computed for each cell type using a linear predictor fitted with the first k components provided by each method. In order to evaluate the performance of BayesCCE, for each component i, we calculated its absolute correlation with the ith cell type and reported the mean absolute correlation (MAC) across the k estimated cell types. While the Dirichlet prior assigns a specific parameter \(\alpha_h\) to each cell type h, empirically, we observed that in the case of k=6 with no known cell counts for a subset of the samples, the ith BayesCCE component did not necessarily correspond to the ith cell type. Put differently, the labels of the k cell types had to be permuted before calculating the MAC. In this case, we considered the permutation of the labels which resulted in the highest MAC to be the correct permutation. In the remaining cases (all the experiments using k=3 and all the experiments using k=6 with known cell counts for a subset of the samples), we did not apply such a permutation. For evaluating ReFACTor, NNMF, and MeDeCom, reference-free methods which do not attribute their components to specific cell types in any scenario, we considered for each method, in all experiments, the permutation of its components leading to the highest MAC when comparing with BayesCCE. In addition, we considered the absolute error of the estimates from the ground truth as an additional quality measure. We calculated the mean absolute error (MAE) across the k estimated cell types. When calculating absolute errors for the ReFACTor components, we scaled each ReFACTor component to be in the range [0,1].
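For concreteness, here is a minimal sketch of the MAC computation described above, including the search over label permutations (for k=6 this is only 720 permutations); the component columns are assumed to be non-constant:

```python
# Minimal sketch: mean absolute correlation (MAC), maximized over permutations.
import numpy as np
from itertools import permutations

def mac(R_true, R_est, permute=True):
    # R_true, R_est: n samples x k (cell types / estimated components)
    k = R_true.shape[1]
    def score(perm):
        return np.mean([abs(np.corrcoef(R_true[:, h], R_est[:, perm[h]])[0, 1])
                        for h in range(k)])
    if not permute:
        return score(tuple(range(k)))
    return max(score(perm) for perm in permutations(range(k)))
```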
Implementation and application of the reference-free and reference-based methods

We calculated the ReFACTor components for each data set using the parameters k=6 and t=500, following the default implementation and recommended guidelines of ReFACTor as described in the GLINT tool [41] and in a recent work [19], while accounting for the known covariates in each data set. More specifically, in the Hannum et al. data [20], we accounted for age, sex, ethnicity, and batch information; in the Liu et al. data [21], we accounted for age, sex, smoking status, and batch information; and in the two Hannon et al. data sets [22], we accounted for age, sex, and case/control status. We used the first six ReFACTor components (d=6) for the simulated data, in order to accommodate the number of simulated cell types, and the first ten components (d=10) for real data, as real data are typically more complex and are therefore more likely to contain substantial signal in later components. The NNMF components were computed for each data set using the default setup of the RefFreeEWAS R package, applied to the subset of the 10,000 most variable sites in the data set, as performed in the NNMF paper by the authors [9]. Similarly, the MeDeCom components were computed for each data set using the default setup of the MeDeCom R package [10], applied to the subset of the 10,000 most variable sites in the data set, as repeatedly running the method on the entire set of CpGs proved to be computationally prohibitive. The regularization parameter λ was selected according to a minimum cross-validation error criterion, as instructed in the MeDeCom package. We used the GLINT tool [41] for estimating blood cell-type proportions for each one of the data sets, according to the Houseman et al. method [5], using 300 highly informative methylation sites defined in a recent study [24] and using reference data collected from sorted blood cells [11]. We evaluated the performance of BayesCCE using a total of six data sets, as described below. For the real data experiments, we downloaded four publicly available Illumina 450K DNA methylation array data sets from the Gene Expression Omnibus (GEO) database: a data set by Hannum et al. (accession GSE40279) from a study of aging rates [20], a data set by Liu et al. (accession GSE42861) from a recent association study of DNA methylation with rheumatoid arthritis [21], and two data sets by Hannon et al. (accessions GSE80417 and GSE84727; denoted Hannon et al. I and Hannon et al. II) from a recent association study of DNA methylation with schizophrenia [22]. We preprocessed the data according to a recently suggested normalization pipeline [43]. Specifically, we retrieved and processed raw IDAT methylation files using R and the minfi R package [44] as follows. We removed 65 single nucleotide polymorphism (SNP) markers and applied the Illumina background correction to all intensity values, while separately analyzing probes coming from autosomal and non-autosomal chromosomes. We used a detection p value threshold of \(10^{-16}\) for intensity values, setting probes with p values above this threshold to missing values. Based on these missing values, we excluded samples with call rates below 95%. Since IDAT files were not made available for the Hannum et al. data set, we used the methylation intensity levels published by the authors.
For data normalization, following the same suggested pipeline [43], we performed a quantile normalization of the methylation intensity values, subdivided by probe type, probe sub-type, and color channel. Beta-normalized methylation levels were eventually calculated based on the intensity levels (according to the recommendation by Illumina). In addition, we excluded probes with over 10% missing values and used the "impute" R package for imputing the remaining missing values. Additionally, using GLINT [41], we excluded from each data set all CpGs coming from the non-autosomal chromosomes, as well as polymorphic and cross-reactive sites, as was previously suggested [45]. We further removed outlier samples and samples with missing covariates. In more detail, we removed six samples from the Hannum et al. data set and two samples from the Liu et al. data set, which demonstrated extreme values in their first two principal components (over four empirical standard deviations). Furthermore, we removed from the Liu et al. data set two additional remaining samples that were regarded as outliers in the original study of Liu et al., and we removed from the Hannon et al. data sets samples with missing age information. The final numbers of samples remaining for analysis were n=650, n=658, n=638, and n=656, and the numbers of CpGs remaining were 382,158, 376,021, 381,338, and 382,158, for the Hannum et al. data set, the Liu et al. data set, and the Hannon et al. I and Hannon et al. II data sets, respectively. For learning prior information about the distribution of blood cell-type proportions, we used electronic medical record (EMR)-based study data acquired via the previously published perioperative data warehouse (PDW) of the Department of Anesthesiology and Perioperative Medicine at UCLA [46]. The PDW is a structured reporting schema that contains all the relevant clinical data entered into an EMR via the use of Clarity, the relational database created by EPIC (EPIC Systems, Verona, WI) for data analytics and reporting. We used high-resolution cell count measurements from adult individuals (n=595) for fitting a Dirichlet distribution. The resulting parameters of the prior were 15.0727, 1.8439, 2.5392, 1.7934, 0.7240, and 0.7404 for granulocytes, monocytes, CD4+, CD8+, B cells, and NK cells, respectively. The parameters of the prior calculated for the case of three assumed cell types (k=3) were 7.7681, 0.9503, and 2.9876 for granulocytes, monocytes, and lymphocytes, respectively.
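The Dirichlet maximum likelihood fit referred to above can be computed with the fixed-point iteration of Minka [42]; the analyses here used the fastfit Matlab package, and the following is only a minimal sketch of that estimator:

```python
# Minimal sketch: maximum likelihood Dirichlet parameters from proportions.
import numpy as np
from scipy.special import digamma, polygamma

def inv_digamma(y, iters=20):
    # Newton inversion of the digamma function (initialization per Minka)
    x = np.where(y >= -2.22, np.exp(y) + 0.5, -1.0 / (y - digamma(1.0)))
    for _ in range(iters):
        x = x - (digamma(x) - y) / polygamma(1, x)
    return x

def fit_dirichlet(Pm, iters=200):
    # Pm: n samples x k observed cell proportions (rows on the simplex)
    logp_bar = np.log(np.maximum(Pm, 1e-12)).mean(axis=0)
    alpha = np.ones(Pm.shape[1])
    for _ in range(iters):
        # Fixed point: psi(alpha_h_new) = psi(sum(alpha)) + mean log p_h
        alpha = inv_digamma(digamma(alpha.sum()) + logp_bar)
    return alpha
```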
Finally, for generating simulated data sets and for generating correlation maps of cell-type-specific methylomes, we used publicly available methylation reference data of sorted cell types collected from six individuals from whole-blood tissue (GEO accession GSE35069) [11].

Data simulation

We simulated data following a model that was previously described in detail elsewhere [8]. Briefly, we used methylation levels from sorted blood cells [11] and, assuming normality, estimated maximum likelihood parameters for each site in each cell type. Cell-type-specific DNA methylation data were then generated for each simulated individual from normal distributions with the estimated parameters, conditional on the range [0,1], for six cell types and for each site. Cell proportions for each individual were generated using a Dirichlet distribution with the same parameters used in the real data analysis. Eventually, observed DNA methylation levels were composed from the cell-type-specific methylation levels and the cell proportions of each individual, and random normal noise was added to every data entry to simulate technical noise (σ = 0.01). To simulate inaccuracies of the prior, the Dirichlet parameters required by BayesCCE were learned from the cell-type proportions of 50 samples generated at random from a Dirichlet distribution using the parameters learned from real data.

References

Koch MW, Metz LM, Kovalchuk O. Epigenetic changes in patients with multiple sclerosis. Nat Rev Neurol. 2013; 9(1):35–43. Ikegame T, Bundo M, Sunaga F, Asai T, Nishimura F, Yoshikawa A, Kawamura Y, Hibino H, Tochigi M, Kakiuchi C, et al. DNA methylation analysis of BDNF gene promoters in peripheral blood cells of schizophrenia patients. Neurosci Res. 2013; 77(4):208–14. Toperoff G, Aran D, Kark JD, Rosenberg M, Dubnikov T, Nissan B, Wainstein J, Friedlander Y, Levy-Lahad E, Glaser B, et al. Genome-wide survey reveals predisposing diabetes type 2-related DNA methylation variations in human peripheral blood. Hum Mol Genet. 2012; 21(2):371–83. Jaffe AE, Irizarry RA. Accounting for cellular heterogeneity is critical in epigenome-wide association studies. Genome Biol. 2014; 15(2):R31. Houseman EA, Accomando WP, Koestler DC, Christensen BC, Marsit CJ, Nelson HH, Wiencke JK, Kelsey KT. DNA methylation arrays as surrogate measures of cell mixture distribution. BMC Bioinforma. 2012; 13(1):86. Houseman EA, Molitor J, Marsit CJ. Reference-free cell mixture adjustments in analysis of DNA methylation data. Bioinformatics. 2014; 30(10):1431–9. Zou J, Lippert C, Heckerman D, Aryee M, Listgarten J. Epigenome-wide association studies without the need for cell-type composition. Nat Methods. 2014; 11(3):309–11. Rahmani E, Zaitlen N, Baran Y, Eng C, Hu D, Galanter J, Oh S, Burchard EG, Eskin E, Zou J, et al. Sparse PCA corrects for cell type heterogeneity in epigenome-wide association studies. Nat Methods. 2016; 13(5):443–5. Houseman EA, Kile ML, Christiani DC, Ince TA, Kelsey KT, Marsit CJ. Reference-free deconvolution of DNA methylation data and mediation by cell composition effects. BMC Bioinforma. 2016; 17(1):259. Lutsik P, Slawski M, Gasparoni G, Vedeneev N, Hein M, Walter J. MeDeCom: discovery and quantification of latent components of heterogeneous methylomes. Genome Biol. 2017; 18(1):55. Reinius LE, Acevedo N, Joerink M, Pershagen G, Dahlén SE, Greco D, Söderhäll C, Scheynius A, Kere J. Differential DNA methylation in purified human blood cells: implications for cell lineage and studies on disease susceptibility. PLoS ONE. 2012; 7(7):e41361. Teschendorff AE, Gao Y, Jones A, Ruebner M, Beckmann MW, Wachter DL, Fasching PA, Widschwendter M. DNA methylation outliers in normal breast tissue identify field defects that are enriched in cancer. Nat Commun. 2016; 7:10478. Guintivano J, Aryee MJ, Kaminsky ZA. A cell epigenotype specific model for the correction of brain cellular heterogeneity bias and its application to age, brain region and major depression. Epigenetics. 2013; 8(3):290–302. Horvath S. DNA methylation age of human tissues and cell types. Genome Biol. 2013; 14(10):R115. Singmann P, Shem-Tov D, Wahl S, Grallert H, Fiorito G, Shin SY, Schramm K, Wolf P, Kunze S, Baran Y, et al. Characterization of whole-genome autosomal differences of DNA methylation between men and women. Epigenetics Chromatin. 2015; 8(1):1–13. Yousefi P, Huen K, Davé V, Barcellos L, Eskenazi B, Holland N.
Sex differences in DNA methylation assessed by 450 K BeadChip in newborns. BMC Genomics. 2015; 16(1):1. Rahmani E, Shenhav L, Schweiger R, Yousefi P, Huen K, Eskenazi B, Eng C, Huntsman S, Hu D, Galanter J, et al. Genome-wide methylation data mirror ancestry information. Epigenetics Chromatin. 2017; 10(1):1. Yousefi P, Huen K, Quach H, Motwani G, Hubbard A, Eskenazi B, Holland N. Estimation of blood cellular heterogeneity in newborns and children for epigenome-wide association studies. Environ Mol Mutagen. 2015; 56(9):751–8. Rahmani E, Zaitlen N, Baran Y, Eng C, Hu D, Galanter J, Oh S, Burchard E, Eskin E, Zou J, et al. Correcting for cell-type heterogeneity in DNA methylation: a comprehensive evaluation. Nat Methods. 2017; 14(3):218. Hannum G, Guinney J, Zhao L, Zhang L, Hughes G, Sadda S, Klotzle B, Bibikova M, Fan JB, Gao Y, et al. Genome-wide methylation profiles reveal quantitative views of human aging rates. Mol Cell. 2013; 49(2):359–67. Liu Y, Aryee MJ, Padyukov L, Fallin MD, Hesselberg E, Runarsson A, Reinius L, Acevedo N, Taub M, Ronninger M, et al. Epigenome-wide association data implicate DNA methylation as an intermediary of genetic risk in rheumatoid arthritis. Nat Biotechnol. 2013; 31(2):142–7. Hannon E, Dempster E, Viana J, Burrage J, Smith AR, Macdonald R, St Clair D, Mustard C, Breen G, Therman S, et al. An integrated genetic-epigenetic analysis of schizophrenia: evidence for co-localization of genetic associations and differential DNA methylation. Genome Biol. 2016; 17(1):176. Koestler DC, Christensen BC, Karagas MR, Marsit CJ, Langevin SM, Kelsey KT, Wiencke JK, Houseman EA. Blood-based profiles of DNA methylation predict the underlying distribution of cell types: a validation analysis. Epigenetics. 2013; 8(8):816–26. Koestler DC, Jones MJ, Usset J, Christensen BC, Butler RA, Kobor MS, Wiencke JK, Kelsey KT. Improving cell mixture deconvolution by identifying optimal DNA methylation libraries (IDOL). BMC Bioinforma. 2016; 17(1):1. Cardenas A, Allard C, Doyon M, Houseman EA, Bakulski KM, Perron P, Bouchard L, Hivert MF. Validation of a DNA methylation reference panel for the estimation of nucleated cells types in cord blood. Epigenetics. 2016; 11(11):773–9. Lai CY, Scarr E, Udawela M, Everall I, Chen WJ, Dean B. Biomarkers in schizophrenia: a focus on blood based diagnostics and theranostics. World J Psychiatry. 2016; 6(1):102. Tekeoğlu İ, Gürol G, Harman H, Karakeçe E, Çiftçi İH. Overlooked hematological markers of disease activity in rheumatoid arthritis. Int J Rheum Dis. 2016; 19(11):1078–82. Solana R, Alonso M, Pena J. Natural killer cells in healthy aging. Exp Gerontol. 1999; 34(3):435–43. Solana R, Mariani E. NK and NK/T cells in human senescence. Vaccine. 2000; 18(16):1613–20. Kawanaka N, Yamamura M, Aita T, Morita Y, Okamoto A, Kawashima M, Iwahashi M, Ueno A, Ohmoto Y, Makino H. CD14+, CD16+ blood monocytes and joint inflammation in rheumatoid arthritis. Arthritis Rheumatol. 2002; 46(10):2578–86. Wijngaarden S, Van Roon J, Bijlsma J, Van De Winkel J, Lafeber F. Fc γ receptor expression levels on monocytes are elevated in rheumatoid arthritis patients with high erythrocyte sedimentation rate who do not use anti-rheumatic drugs. Rheumatology. 2003; 42(5):681–8. Iwahashi M, Yamamura M, Aita T, Okamoto A, Ueno A, Ogawa N, Akashi S, Miyake K, Godowski PJ, Makino H. Expression of Toll-like receptor 2 on CD16+ blood monocytes and synovial tissue macrophages in rheumatoid arthritis. Arthritis Rheumatol. 2004; 50(5):1457–67.
Azevedo FA, Andrade-Moraes CH, Curado MR, Oliveira-Pinto AV, Guimarães DM, Szczupak D, Gomes BV, Alho AT, Polichiso L, Tampellini E, et al. Automatic isotropic fractionation for large-scale quantitative cell analysis of nervous tissue. J Neurosci Methods. 2013; 212(1):72–8. Pinto AR, Ilinykh A, Ivey MJ, Kuwabara JT, D'Antoni ML, Debuque R, Chandran A, Wang L, Arora K, Rosenthal NA, et al. Revisiting cardiac cellular composition. Circ Res. 2016; 118(3):400–9. Divoux A, Tordjman J, Lacasa D, Veyrie N, Hugol D, Aissat A, Basdevant A, Guerre-Millo M, Poitou C, Zucker JD, et al. Fibrosis in human adipose tissue: composition, distribution, and link with lipid metabolism and fat mass loss. Diabetes. 2010; 59(11):2817–25. Lu P, Nakorchevskiy A, Marcotte EM. Expression deconvolution: a reinterpretation of DNA microarray data reveals dynamic changes in cell populations. Proc Natl Acad Sci. 2003; 100(18):10370–5. Abbas AR, Wolslegel K, Seshasayee D, Modrusan Z, Clark HF. Deconvolution of blood microarray data identifies cellular activation patterns in systemic lupus erythematosus. PLoS ONE. 2009; 4(7):e6098. Kuhn A, Thu D, Waldvogel HJ, Faull RL, Luthi-Carter R. Population-specific expression analysis (PSEA) reveals molecular changes in diseased brain. Nat Methods. 2011; 8(11):945–7. Zuckerman NS, Noam Y, Goldsmith AJ, Lee PP. A self-directed method for cell-type identification and separation of gene expression microarrays. PLoS Comput Biol. 2013; 9(8):e1003189. Steuerman Y, Gat-Viks I. Exploiting gene-expression deconvolution to probe the genetics of the immune system. PLoS Comput Biol. 2016; 12(4):e1004856. Rahmani E, Yedidim R, Shenhav L, Schweiger R, Weissbrod O, Zaitlen N, Halperin E. GLINT: a user-friendly toolset for the analysis of high-throughput DNA-methylation array data. Bioinformatics. 2017; 33(12):1870–2. Minka T. Estimating a Dirichlet distribution. Technical report, MIT. 2000. Lehne B, Drong AW, Loh M, Zhang W, Scott WR, Tan ST, Afzal U, Scott J, Jarvelin MR, Elliott P, et al. A coherent approach for analysis of the Illumina HumanMethylation450 BeadChip improves data quality and performance in epigenome-wide association studies. Genome Biol. 2015; 16(1):37. Aryee MJ, Jaffe AE, Corrada-Bravo H, Ladd-Acosta C, Feinberg AP, Hansen KD, Irizarry RA. Minfi: a flexible and comprehensive bioconductor package for the analysis of infinium DNA methylation microarrays. Bioinformatics. 2014; 30(10):1363–9. Chen Y-a, Lemire M, Choufani S, Butcher DT, Grafodatskaya D, Zanke BW, Gallinger S, Hudson TJ, Weksberg R. Discovery of cross-reactive probes and polymorphic CpGs in the Illumina Infinium HumanMethylation450 microarray. Epigenetics. 2013; 8(2):203–9. Hofer IS, Gabel E, Pfeffer M, Mahbouba M, Mahajan A. A systematic approach to creation of a perioperative data warehouse. Anesth Analg. 2016; 122(6):1880–4. Rahmani E, Schweiger R, Shenhav L, Wingert T, Hofer I, Gabel E, Eskin E, Halperin E. BayesCCE: a Bayesian framework for estimating cell-type composition from DNA methylation without the need for methylation reference. Zenodo. 2018. https://doi.org/10.5281/zenodo.1293009. Rahmani E, Schweiger R, Shenhav L, Wingert T, Hofer I, Gabel E, Eskin E, Halperin E. BayesCCE: a Bayesian framework for estimating cell-type composition from DNA methylation without the need for methylation reference. GitHub repository. 2018. https://github.com/cozygene/BayesCCE. This research was partially supported by the Edmond J. Safra Center for Bioinformatics at Tel Aviv University. E.H., E.R., L.S., and R.S.
were supported in part by the Israel Science Foundation (grant 1425/13), and E.H., L.S., and R.S. by the United States Israel Binational Science Foundation grant 2012304. E.H. and E.R. were partially supported by the National Science Foundation (NSF) grant 1705197. E.R. and L.S. were supported by Len Blavatnik and the Blavatnik Research Foundation. R.S. was supported by the Colton Family Foundation. E.E. was supported by the National Science Foundation grants 0513612, 0731455, 0729049, 0916676, 1065276, 1302448, 1320589, and 1331176, and National Institutes of Health grants K25-HL080079, U01-DA024417, P01-HL30568, P01-HL28481, R01-GM083198, R01-ES021801, R01-MH101782, and R01-ES022282. The data sets analyzed in this study are available in the Gene Expression Omnibus (GEO) repository under the following accession IDs: GSE35069 [11], GSE40279 [20], GSE42861 [21], GSE80417 [22], and GSE84727 [22]. The BayesCCE source code has been deposited in Zenodo (https://doi.org/10.5281/zenodo.1293009) [47] and is available from GitHub (https://github.com/cozygene/BayesCCE) under the GPL-3 license [48].

Review history

The review history is included as Additional file 2.

Author information: Elior Rahmani, Liat Shenhav, Eleazar Eskin, and Eran Halperin: Department of Computer Science, University of California Los Angeles, Los Angeles, CA, USA. Regev Schweiger: Blavatnik School of Computer Science, Tel Aviv University, Tel Aviv, Israel. Theodora Wingert, Ira Hofer, and Eilon Gabel: Department of Anesthesiology and Perioperative Medicine, University of California Los Angeles, Los Angeles, CA, USA. Eleazar Eskin: Department of Human Genetics, University of California Los Angeles, Los Angeles, CA, USA.

ER and EH conceived and designed the project. ER performed the data analysis. RS, LS, and EE contributed expertise. TW, IH, and EG generated and contributed data. ER and EH drafted the manuscript. All authors read and approved the manuscript. Correspondence to Eran Halperin.

Additional file 1: Supplementary Tables and Figures. (PDF 1277 kb) Additional file 2: Review history. (DOCX 91 kb)

Rahmani, E., Schweiger, R., Shenhav, L. et al. BayesCCE: a Bayesian framework for estimating cell-type composition from DNA methylation without the need for methylation reference. Genome Biol 19, 141 (2018). doi:10.1186/s13059-018-1513-2

Keywords: Cell-type composition, Tissue heterogeneity, Cell counts, Bayesian model, Epigenome-wide association studies
Jacobian

2010 Mathematics Subject Classification: Primary: 26B10 Secondary: 26B15

Jacobian Matrix

Also called Jacobi matrix. Let $U\subset \mathbb R^n$, $f: U\to \mathbb R^m$ and assume that $f$ is differentiable at the point $y\in U$. The Jacobi matrix of $f$ at $y$ is then the matrix
\begin{equation}\label{e:Jacobi_matrix} Df|_y := \left( \begin{array}{llll} \frac{\partial f^1}{\partial x_1} (y) & \frac{\partial f^1}{\partial x_2} (y)&\qquad \ldots \qquad & \frac{\partial f^1}{\partial x_n} (y)\\ \frac{\partial f^2}{\partial x_1} (y) & \frac{\partial f^2}{\partial x_2} (y)&\qquad \ldots \qquad & \frac{\partial f^2}{\partial x_n} (y)\\ \\ \vdots & \vdots & &\vdots\\ \\ \frac{\partial f^m}{\partial x_1} (y) & \frac{\partial f^m}{\partial x_2} (y)&\qquad \ldots \qquad & \frac{\partial f^m}{\partial x_n} (y) \end{array}\right)\, , \end{equation}
where $(f^1, \ldots, f^m)$ are the coordinate functions of $f$ and $x_1,\ldots, x_n$ denote the standard system of coordinates in $\mathbb R^n$.

Jacobian determinant

Also called Jacobi determinant. If $U$, $f$ and $y$ are as above and $m=n$, the Jacobian determinant of $f$ at $y$ is the determinant of the Jacobian matrix \eqref{e:Jacobi_matrix}. Some authors use the same name for the absolute value of this determinant. If $U$ is an open set and $f$ a locally invertible $C^1$ map, the absolute value of the Jacobian determinant gives the infinitesimal dilatation of the volume element in passing from the variables $x_1, \ldots, x_n$ to the variables $f_1,\ldots, f_n$. Therefore the Jacobian determinant plays a crucial role when changing variables in integrals; see Sections 3.2 and 3.3 of [EG] (see also Differential form and Integration on manifolds).

Generalizations of the Jacobian determinant

The Jacobian determinant can also be generalized to the case where the dimension of the target differs from that of the domain (see Section 3.2 of [EG]). More precisely, let $f$, $U$, $n$, $m$ and $y$ be as above. If $m<n$, the Jacobian of $f$ at $y$ is given by the square root of the determinant of $Df_y\cdot (Df_y)^t$ (where $Df_y^t$ denotes the transpose of the matrix $Df_y$); if $m>n$, the Jacobian of $f$ at $y$ is given by the square root of the determinant of $(Df_y)^t\cdot Df_y$. These generalizations play a key role in the Coarea formula and the Area formula, respectively. An important characterization of the Jacobian is then given by the Cauchy-Binet formula: $Jf(y)^2$ is the sum of the squares of the determinants of all maximal square minors of $Df|_y$, i.e., the $n\times n$ minors when $m>n$ and the $m\times m$ minors when $m<n$ (cp. with Theorem 4 in Section 3.2.1 of [EG]).

Jacobian variety

See Jacobi variety.
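As a small numerical illustration of the generalized Jacobian and the Cauchy-Binet characterization above, consider a map $f:\mathbb R^2\to\mathbb R^3$ (so $m>n$); the example map below is chosen for illustration only (Python/NumPy):

```python
# Numerical check: J f = sqrt(det(Df^t Df)) equals the Cauchy-Binet sum.
import numpy as np
from itertools import combinations

def Df(x, y):
    # Jacobi matrix of f(x, y) = (x*y, x + y, x**2) at the point (x, y)
    return np.array([[y,   x  ],
                     [1.0, 1.0],
                     [2*x, 0.0]])

J = Df(1.0, 2.0)                                  # 3 x 2 matrix
jac = np.sqrt(np.linalg.det(J.T @ J))             # generalized Jacobian

# Sum of squares of all 2 x 2 minors of Df (rows chosen 2 at a time)
minors = [np.linalg.det(J[list(rows), :]) for rows in combinations(range(3), 2)]
assert np.isclose(jac**2, sum(d**2 for d in minors))   # Cauchy-Binet
```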
Rudin, "Principles of mathematical analysis", Third edition, McGraw-Hill (1976) MR038502 Zbl 0346.2600 [Si] L. Simon, "Lectures on geometric measure theory", Proceedings of the Centre for Mathematical Analysis, 3. Australian National University. Canberra (1983) MR0756417 Zbl 0546.49019 [Sp] M. Spivak, "Calculus on manifolds" , Benjamin/Cummings (1965) MR0209411 Zbl 0141.05403 Jacobian. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Jacobian&oldid=28782 This article was adapted from an original article by V.A. Il'in (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article Retrieved from "https://encyclopediaofmath.org/index.php?title=Jacobian&oldid=28782" Real functions
Nonlinear Dynamics, October 2015, Volume 82, Issue 1–2, pp 39–52

Adaptive fractional-order switching-type control method design for 3D fractional-order nonlinear systems

Chun Yin, Yuhua Cheng, YangQuan Chen, Brandon Stark, Shouming Zhong

In this paper, an adaptive sliding mode technique based on a fractional-order (FO) switching-type control law is designed to guarantee robust stability for uncertain 3D FO nonlinear systems. A novel FO switching-type control law is proposed to ensure the existence of the sliding motion in finite time. Appropriate adaptive laws are shown to tackle the uncertainty and external disturbance. A formula for the reaching time is derived and analyzed, and the reachability analysis is visualized to show how to obtain a shorter reaching time. A stability criterion for the FO sliding mode dynamics is derived based on the indirect approach to Lyapunov stability. Advantages of the proposed control scheme are illustrated through numerical simulations.

Keywords: Fractional-order switching-type control law, Sliding mode control, Reaching time, 3D fractional-order nonlinear system, Adaptive sliding mode technique

This work was supported by the National Basic Research Program of China (Nos. 61462065 and 51407024) and ZYGX2015KYQD020.

Appendix: Proof of Lemma 2.1

For any \(t > 0\), there exists a time interval \((t_k,t_{k + 1} ]\) such that \(t \in (t_k,t_{k + 1} ]\) and \(\sigma (t')\ge 0, \forall t' \in (t_k,t_{k + 1} ]\) if \(\sigma (t)>0\), or \(\sigma (t') \le 0, \forall t'\in (t_k,t_{k + 1} ]\) if \(\sigma (t)<0\). Furthermore, there exists a finite partition \( 0=t_0<t_1<t_2<\cdots <t_{k-1} < t_{k}\) such that: 1) for every interval \((t_i,t_{i + 1} ]\) \((i=0,1,\ldots,k-1)\), either \(\sigma (t') \le 0, \forall t'\in (t_i,t_{i + 1} ]\) or \(\sigma (t') \ge 0, \forall t'\in (t_i,t_{i + 1} ]\); and 2) for every two adjacent intervals \((t_{i },t_{i+1} ]\) and \((t_{i+1},t_{i + 2} ]\), \(\sigma (t') \le 0,\forall t' \in (t_{i+1} ,t_{i + 2} ]\) if \(\sigma (t') \ge 0,\forall t' \in (t_{i },t_{i+1} ]\), and \(\sigma (t') \ge 0,\forall t' \in (t_{i+1},t_{i + 2} ]\) if \(\sigma (t') \le 0,\forall t' \in (t_{i },t_{i+1} ]\). Moreover, we require that the initial time in every \((t_i,t_{i+1}]\) is not equal to zero. From the integral properties, denoting \(t_0=0\), one has
$$ D_t^{\bar{\beta}} \,\mathrm{sgn}(\sigma (t)) = \frac{ f_0 (t) + f_1 (t) + \cdots + f_k (t) }{\Gamma (1 - \bar{\beta} )}, $$
where \(f_i (t) = \frac{d}{dt}\int_{t_i }^{t_{i + 1} } \frac{\mathrm{sgn}(\sigma (\tau ))}{(t - \tau )^{\bar{\beta}} }\,d\tau\), \((i = 0,1,2, \ldots ,k - 1)\), and \(f_k (t) = \frac{d}{dt}\int_{t_k }^t \frac{\mathrm{sgn}(\sigma (\tau ))}{(t - \tau )^{\bar{\beta}} }\,d\tau\). First, we consider \(\sigma (t)>0\). From the above analysis, one has \(\sigma (t') \ge 0,\forall t' \in (t_k,t_{k + 1} ]\). There exist \(t_k=t_{k0} <t_{k1} <t_{k2}< \cdots <t_{kl_k - 1} <t_{kl_k } =t\) in \((t_k,t]\) such that \((t_k,t] = (t_{k0},t_{k1} ] \cup (t_{k1} ,t_{k2} ] \cup \cdots \cup (t_{kl_k - 1},t_{kl_k } ]\).
Moreover, \(\sigma (t')\ge 0, \forall t' \in (t_{k0},t_{k1} ]\), where \(\sigma (t') = 0\) occurs only at isolated points \(t'\); \(\sigma (t')\equiv 0, \forall t' \in (t_{k1},t_{k2} ]\); \(\sigma (t')\ge 0, \forall t' \in (t_{k2},t_{k3} ]\), where \(\sigma (t') = 0\) occurs only at isolated points \(t'\); \(\ldots\); \(\sigma (t')\equiv 0, \forall t' \in (t_{kl_k-2},t_{kl_k-1} ]\); and \(\sigma (t')\ge 0, \forall t' \in (t_{kl_k-1},t_{kl_k} ]\), where \(\sigma (t') = 0\) occurs only at isolated points \(t'\). In addition, we require that the initial time in \((t_{kj},t_{kj+1}]\), \((j=0,1,\ldots,l_{k}-1)\), is not zero. Thus, one has
$$ \frac{d}{dt}\int_{t_{kl_k - 1} }^t \frac{\mathrm{sgn}(\sigma (\tau ))}{(t - \tau )^{\bar{\beta}} }\,d\tau = (t - t_{kl_k - 1} )^{-\bar{\beta}}, $$
$$ \frac{d}{dt}\int_{t_{kl_k - 2} }^{t_{kl_k - 1} } \frac{\mathrm{sgn}(\sigma (\tau ))}{(t - \tau )^{\bar{\beta}} }\,d\tau = \frac{d}{dt}\int_{t_{kl_k - 2} }^{t_{kl_k - 1} } \frac{0}{(t - \tau )^{\bar{\beta}} }\,d\tau = 0, $$
$$ \frac{d}{dt}\int_{t_{kl_k - 3} }^{t_{kl_k - 2} } \frac{\mathrm{sgn}(\sigma (\tau ))}{(t - \tau )^{\bar{\beta}} }\,d\tau = (t - t_{kl_k - 3} )^{-\bar{\beta}} - (t - t_{kl_k - 2} )^{-\bar{\beta}}, $$
and so on; one can conclude that
$$ f_k (t) = (t - t_{k0} )^{-\bar{\beta}} - (t - t_{k1} )^{-\bar{\beta}} + (t - t_{k2} )^{-\bar{\beta}} - (t - t_{k3} )^{-\bar{\beta}} + \cdots + (t - t_{kl_k - 1} )^{-\bar{\beta}}. $$
Since \((t - t_{kj} )^{-\bar{\beta}}\) is an increasing function of \(t_{kj}\), \( f_k (t) \ge (t - t_{k} )^{-\bar{\beta}}\). Then, we discuss \(f_i (t)\). When \(\sigma (t') \ge 0,\forall t' \in (t_i,t_{i + 1} ]\), there exists a time partition \((t_i,t_{i + 1} ] = (t_{i0} ,t_{i1} ] \cup (t_{i1},t_{i2} ] \cup \cdots \cup (t_{il_i - 1} ,t_{il_i } ]\), in which \(t_{i0} = t_i\) and \(t_{il_i } = t_{i + 1}\). Moreover, \(\sigma (t')\ge 0, \forall t' \in (t_{i0},t_{i1} ]\), where \(\sigma (t') = 0\) occurs only at isolated points \(t'\); \(\sigma (t')\equiv 0, \forall t' \in (t_{i1},t_{i2} ]\); \(\sigma (t')\ge 0, \forall t' \in (t_{i2},t_{i3} ]\), where \(\sigma (t') = 0\) occurs only at isolated points \(t'\); and so on. We also require that the initial time in \((t_{ij},t_{ij+1}]\), \((j=0,1,\ldots,l_{i}-1)\), is not zero. Considering \((t_{il_i - 1},t_{il_i } ]\), there are two possibilities: either \(\sigma (t') \ge 0,\forall t' \in (t_{il_i - 1},t_{il_i } ]\) or \(\sigma (t') \equiv 0,\forall t' \in (t_{il_i - 1},t_{il_i } ]\). Hence, \(f_i (t)\) can be calculated similarly to \(f_k (t)\), yielding \( f_i (t) \ge (t - t_i )^{-\bar{\beta}} - (t - t_{i + 1} )^{-\bar{\beta}}\) or \( f_i (t) \ge (t - t_{i0} )^{-\bar{\beta}} - (t - t_{il_i -1} )^{-\bar{\beta}}\). When \(\sigma (t') \le 0,\forall t' \in (t_i,t_{i + 1} ]\), there exists a partition \((t_i,t_{i + 1} ] = (t_{i0} ,t_{i1} ] \cup (t_{i1},t_{i2} ] \cup \cdots \cup (t_{il_i - 1} ,t_{il_i } ]\), in which \(t_{i0} = t_i\) and \(t_{il_i } = t_{i + 1}\).
Moreover, \(\sigma(t') \le 0\) for \(t' \in (t_{i0}, t_{i1}]\), with \(\sigma(t') = 0\) holding only at isolated points \(t'\); \(\sigma(t') \equiv 0\) for \(t' \in (t_{i1}, t_{i2}]\); \(\sigma(t') \le 0\) for \(t' \in (t_{i2}, t_{i3}]\), with \(\sigma(t') = 0\) holding only at isolated points \(t'\); and so on. We also require that the initial time of each \((t_{ij}, t_{ij+1}]\), \(j = 0, 1, \ldots, l_i-1\), is not zero. Considering \((t_{il_i-1}, t_{il_i}]\), there are two possibilities: either \(\sigma(t') \le 0\) for all \(t' \in (t_{il_i-1}, t_{il_i}]\), or \(\sigma(t') \equiv 0\) for all \(t' \in (t_{il_i-1}, t_{il_i}]\). Hence, one can conclude that

$$\begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t}\int_{t_{i0}}^{t_{i1}} \frac{\operatorname{sgn}(\sigma(\tau))}{(t-\tau)^{\bar{\beta}}}\,\mathrm{d}\tau &= (t - t_{i1})^{-\bar{\beta}} - (t - t_{i0})^{-\bar{\beta}}, \\ \frac{\mathrm{d}}{\mathrm{d}t}\int_{t_{i1}}^{t_{i2}} \frac{\operatorname{sgn}(\sigma(\tau))}{(t-\tau)^{\bar{\beta}}}\,\mathrm{d}\tau &= \frac{\mathrm{d}}{\mathrm{d}t}\int_{t_{i1}}^{t_{i2}} \frac{0}{(t-\tau)^{\bar{\beta}}}\,\mathrm{d}\tau = 0, \end{aligned}$$

and so on; one has

$$\begin{aligned} f_i(t) &= (t - t_{i1})^{-\bar{\beta}} - (t - t_{i0})^{-\bar{\beta}} + \cdots + (t - t_{il_i})^{-\bar{\beta}} - (t - t_{il_i-1})^{-\bar{\beta}} > 0, \quad \text{or} \\ f_i(t) &= (t - t_{i1})^{-\bar{\beta}} - (t - t_{i0})^{-\bar{\beta}} + \cdots + (t - t_{il_i-1})^{-\bar{\beta}} - (t - t_{il_i-2})^{-\bar{\beta}} > 0. \end{aligned}$$

So, one can conclude that \(\sum_{i=0}^{k} f_i(t) > 0\). Thus, we have

$$\begin{aligned} D^{\bar{\beta}} \operatorname{sgn}(\sigma(t)) = \frac{f_0(t) + f_1(t) + \cdots + f_k(t)}{\Gamma(1-\bar{\beta})} > 0. \end{aligned}$$

Next, we consider the second case, \(\sigma(t) < 0\). Similar to the first case, we have

$$\begin{aligned} D^{\bar{\beta}} \operatorname{sgn}(\sigma(t)) = \frac{f_0(t) + f_1(t) + \cdots + f_k(t)}{\Gamma(1-\bar{\beta})} < 0. \end{aligned}$$
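Because \(\operatorname{sgn}(\sigma(\tau))\) is piecewise constant, the Riemann–Liouville derivative above reduces to the closed-form sums of \((t - t_{ij})^{-\bar{\beta}}\) terms derived in the proof. The following sketch (not from the paper; \(\sigma(t) = \sin t\), \(\bar{\beta} = 0.5\), and the breakpoints are illustrative choices) evaluates those sums numerically and checks that the derivative carries the sign of \(\sigma(t)\), as the lemma asserts:

```python
from math import gamma, pi, sin

def frac_deriv_sgn(breaks, signs, t, beta):
    """Riemann-Liouville D^beta of sgn(sigma(t)) when sgn(sigma) is piecewise
    constant: signs[j] holds on (breaks[j], breaks[j+1]], with breaks[0] = 0
    and the final interval running from breaks[-1] up to t."""
    total = 0.0
    # interior intervals: d/dt of the integral gives a difference of powers
    for j in range(len(signs) - 1):
        total += signs[j] * ((t - breaks[j]) ** -beta
                             - (t - breaks[j + 1]) ** -beta)
    # final interval (breaks[-1], t]: d/dt gives a single power term
    total += signs[-1] * (t - breaks[-1]) ** -beta
    return total / gamma(1.0 - beta)

# sigma(t) = sin(t) changes sign at pi and 2*pi
beta = 0.5
for t, breaks, signs in [(8.0, [0.0, pi, 2 * pi], [1, -1, 1]),
                         (4.5, [0.0, pi], [1, -1])]:
    d = frac_deriv_sgn(breaks, signs, t, beta)
    print(f"t={t}: sigma>0? {sin(t) > 0}, D^beta sgn = {d:+.4f}")
```

Both test points reproduce the lemma's conclusion: the fractional derivative is positive where \(\sigma(t) > 0\) (at \(t = 8\)) and negative where \(\sigma(t) < 0\) (at \(t = 4.5\)).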
1. School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu, People's Republic of China; 2. Mechatronics, Embedded Systems and Automation (MESA) Lab, School of Engineering, University of California, Merced, Merced, USA; 3. School of Mathematics Science, University of Electronic Science and Technology of China, Chengdu, People's Republic of China

Yin, C., Cheng, Y., Chen, Y. et al. Nonlinear Dyn (2015) 82: 39. https://doi.org/10.1007/s11071-015-2136-8. Publisher: Springer Netherlands
Ultraviolet Germicidal Irradiation Handbook, DOI:10.1007/978-3-642-01999-9_10. In book: Ultraviolet Germicidal Irradiation Handbook (pp. 233-254). Chapter: UV Surface Disinfection. Wladyslaw J. Kowalski, Sanuvox.

The disinfection of surfaces is perhaps the simplest and most predictable application of ultraviolet germicidal radiation. UV is highly effective at controlling microbial growth and at achieving sterilization of most types of surfaces. Early applications included equipment sterilization in the medical industry. Modern applications include pharmaceutical product disinfection, area disinfection, cooling coil and drain pan disinfection, and overhead UV systems for surgical suites. Such applications often involve using bare UV lamps and as such there may be UV hazards associated with them. Cooling coil disinfection with ultraviolet light has proven so effective that such installations often pay for themselves in short order. The use of UV is fairly common in the packaging industry and in the food processing industry, where it is sometimes used for irradiating the surfaces of foodstuffs. Lower-room UV systems are not common, although they have been used in hospitals in the past. This chapter provides basic design information for each type of surface disinfection system based on theoretical analysis and field testing results. Good design practices are discussed and general guidelines are provided.

... Ultraviolet from sunlight is the main factor that destroys microbial control agents. The activity of microbial control agents, including Baculovirus, can be inactivated by the ultraviolet B (280-320 nm wavelength) component of sunlight [6,7]. Thus, in order to improve the pathogenicity of Baculovirus, ultraviolet protectants derived from natural ingredients have been used as additives. ... The wavelengths used for the spectrophotometric analysis in this research range from 190 to 420 nm because this covers the ultraviolet spectrum. UV radiation consists of UVA (320-400 nm), UVB (280-320 nm), and UVC (200-280 nm) [6]. This research showed that all of the A. atlas cocoon extract concentration variations (0.5, 1, 2, and 2.5%) can be read across the 190-420 nm wavelength (λ) range. ... death. The primary dimers formed in DNA due to UV exposure are thymine dimers [6,20]. UV-B radiation is one of the most important components of solar energy that can cause the formation of three types of DNA damage, namely cyclobutane pyrimidine dimers (CPD), pyrimidine 6-4 pyrimidone photoproducts (6-4 PPs), and Dewar isomers [21]. ...

UV Protectant Ability of Attacus atlas L. (Lepidoptera: Saturniidae) Sericin Extract to Increase Nucleopolyhedrovirus Effectiveness against Beet Army Worm, Spodoptera exigua (Hübner) (Lepidoptera: Noctuidae). Hana Widiawati, Sukirno Sukirno, Sumarmi Sumarmi, Ignatius Sudaryadi.

Spodoptera exigua (Lepidoptera: Noctuidae) is a common pest known for attacking the shallot crop. Baculovirus (Nucleopolyhedrovirus: NPV) is a biological agent that is widely used as a pest control agent. However, the activity of NPV deteriorates when applied in the field due to the influence of ultraviolet from the sun. The Attacus atlas silkworm cocoon has UV protectant ability due to the presence of sericin protein. Thus, the aim of this study is to determine the degree of UV protectant ability from A.
atlas cocoon extract for NPV to prevent its deterioration and to increase its effectiveness against the S. exigua pest. The stages of this study consist of UVB irradiation of A. atlas cocoon extract as an NPV protectant and a pathogenicity test of NPV activity against first-instar larvae of S. exigua. Experiments were carried out by exposing NPV to UVB irradiation for 0, 1, 2, 3, and 4 weeks with the addition of A. atlas cocoon extract in varied concentrations (0, 0.5, 1, 2, and 2.5%). The concentrations were derived from 7.5 g of A. atlas cocoon diluted with 1 g of TRO and 150 ml of distilled water. The results proved the UV protectant ability of the A. atlas cocoon for NPV. After a week of UVB irradiation, the NPV applied in A. atlas cocoon extract (0-2.5%) caused larval mortalities of 43.33%, 76.67%, 60%, 65%, and 80%, respectively. After 2 weeks of UVB irradiation, the NPV applied in A. atlas cocoon extract (0-2.5%) caused larval mortalities of 46.67%, 78.33%, 71.67%, 66.67%, and 66.67%, respectively. Thus, for further applications, the findings of this research need to be tested under actual field conditions.

... In addition to medications, vaccinations and personal protective equipment, disinfection measures are an important element in the fight against all infections. Irradiation with ultraviolet radiation in the UVC spectral range of 200-280 nm is one of the oldest and most effective disinfection techniques [3], which can quickly reduce pathogens by several orders of magnitude, even at very low irradiation doses [4,5]. The effect is based on the destruction of the DNA and RNA of bacteria, fungi and viruses [3][4][5], including coronaviruses like SARS-CoV-2 [4,6-8]. Unfortunately, radiation of these wavelengths also causes harm to human DNA, which is why UVC radiation should not be applied in the presence of humans. ... In combination with the violet LEDs, the antimicrobial effect of the presented test illumination is about 10 times stronger than with pure white light (2,400 lux) alone, but 3.5 hours of irradiation are still required for a 90% bacterial reduction. This is still very long compared to typical UVC disinfection durations of minutes to seconds [5][6][7][8][30][31][32][33][34][35][36][37][38][39]. However, for less time-critical applications, such as the disinfection of work areas or rooms overnight, such irradiation with white-violet light is conceivable and, in contrast to UVC emitters, without any relevant risk to humans. ...

Surface disinfection with white-violet illumination device. Martin Hessling, Tobias Meurle, Katharina Hönes.

The spread of infections, as in the coronavirus pandemic, leads to the desire to perform disinfection measures even in the presence of humans. UVC radiation is known for its strong antimicrobial effect, but it is also harmful to humans. Visible light, on the other hand, does not affect humans, and laboratory experiments have already demonstrated that intense visible violet and blue light has a reducing effect on bacteria and viruses. This raises the question of whether the development of pathogen-reducing illumination is feasible for everyday applications.
For this purpose, a lighting device with white and violet LEDs is set up to illuminate a work surface with 2,400 lux of white light and additionally with up to 2.5 mW/cm² of violet light (405 nm). Staphylococci are evenly distributed on the work surface and the decrease in staphylococci concentration is observed over a period of 46 hours. In fact, the staphylococci concentration decreases, but with the white illumination a 90% reduction occurs only after 34 hours; with the additional violet illumination the necessary irradiation time is shortened to approx. 3.5 hours. Increasing the violet component probably increases the disinfection effect, but the color impression moves further away from white, and the low disinfection durations of UVC radiation nevertheless cannot be achieved, even with very high violet emissions.

... While UVC is a fairly common technique for disinfection (inactivation of vegetative bacteria and viruses) of water (Masschelein and Rice 2016) and air (Kesavan and Sagripanti 2014), e.g., aerosols and air ducts (Kowalski 2009), it is not widely used commercially on surfaces for the inactivation of bacterial spores (e.g., those of B. anthracis). Most commercial UV germicidal equipment for surface disinfection is used for building and ventilation system surfaces or for dental and medical equipment. ... UVC inactivation rates for microbes in air are typically much higher than for surfaces (Kowalski 2009). Spores of bacteria are generally 5-10 times more resistant to UVC than their corresponding vegetative cells (Coohill and Sagripanti 2008). ... Decontamination efficacy is strongly dependent on the material with which the microorganisms are associated, and most if not all UVC studies reported in the literature typically use only laboratory substrates such as glass or filters, rather than relevant realistic materials. Our study also examined the effect of relative humidity (RH) on UVC inactivation efficacy, which is another data gap (Kowalski 2009). ...

Inactivation of Bacillus anthracis and Bacillus atrophaeus Spores on Different Surfaces with Ultraviolet Light Produced with a Low-Pressure Mercury Vapor Lamp or Light Emitting Diodes. Joseph P Wood, M. Worth Calfee, Vipin K Rastogi.

Aims: To obtain quantitative efficacy data of two ultraviolet light (UVC) technologies for surface inactivation of Bacillus anthracis Ames and Bacillus atrophaeus spores. Methods and results: Spores were deposited onto test coupons and controls of four different materials, via liquid suspension or aerosol deposition. The test coupons were then exposed to UVC light from either a low-pressure mercury vapor lamp or a system comprised of light emitting diodes, with a range of dosages. Positive controls were held at ambient conditions and not exposed to UVC light. Following exposure to UVC, spores were recovered from the coupons and efficacy was quantified in terms of log10 reduction (LR) in the number of viable spores compared to that from positive controls. Conclusions: Decontamination efficacy varied by material and UVC dosage (efficacy up to 5.7 LR was demonstrated). There was no statistical difference in efficacy between the two species or between inoculation methods. Efficacy improved for the LED lamp at lower relative humidity, but this effect was not observed with the mercury vapor lamp.
Significance and impact of study: This study will be useful in determining whether UVC could be used for the inactivation of B. anthracis spores on different surface types.

... Here k is a microorganism susceptibility constant (m²/J; sometimes denoted Z) that is species dependent and experimentally derived. Typical values for a range of microorganisms are presented in a number of sources, with Kowalski (2009) presenting the most comprehensive compilation of published values from numerous experimental studies. Equation 1 represents a first-order decay assumption that is realistic for many microorganisms, including Mycobacterium tuberculosis and other Mycobacterium species in air (Kowalski 2009). Other models incorporating a threshold dose or two-stage decay characteristics have been proposed for certain pathogens (Kowalski 2009). ...

Modeling infection risk and energy use of upper-room Ultraviolet Germicidal Irradiation systems in multi-room environments. Amirul Khan, Catherine Noakes, C.A. Gilkeson.

The effectiveness of ultraviolet irradiation at inactivating airborne pathogens is well proven, and the technology is also commonly promoted as an energy-efficient way of reducing infection risk in comparison to increasing ventilation. However, determining how and where to apply upper-room Ultraviolet Germicidal Irradiation devices for the greatest benefit is still poorly understood. This article links multi-zone infection risk models with energy calculations to assess the potential impact of an Ultraviolet Germicidal Irradiation installation across a series of inter-connected spaces, such as a hospital ward. A first-order decay model of ultraviolet inactivation is coupled with a room air model to simulate patient room and whole-ward level disinfection under different mixing and ultraviolet field conditions. Steady-state computation of quanta concentrations is applied to the Wells-Riley equation to predict likely infection rates. Simulation of a hypothetical ward demonstrates the relative influence of different design factors for susceptible patients co-located with an infectious source or in nearby rooms. In each case, energy requirements are calculated and compared to achieving the same level of infection risk through improved ventilation. Ultraviolet devices are seen to be most effective where they are located close to the infectious source; however, when the location of the infectious source is not known, locating devices in patient rooms is likely to be more effective than installing them in connecting corridor or communal zones.
Results show an ultraviolet system may be an energy-efficient solution to controlling airborne infection, particularly in semi-open hospital environments, and considering the whole ward rather than just a single room at the design stage is likely to lead to a more robust solution.

... In recent years, interest has grown in the potential for photochemical air filters, such as UV-C irradiation, to supplement or replace fine particle filtration for microbial control. These devices are capable of destroying or decomposing microorganisms and potentially volatile organic compounds (VOCs), and they may have energy performance benefits over conventional filters (Blatt 2006; Kowalski 2009). In-duct devices typically comprise one or more UV-C lamps mounted within the HVAC system to create a UV irradiation field inside an airflow duct. Microorganisms contained in the air passing through the UV field incur DNA damage proportional to the UV irradiance, time of exposure, and species of microorganism; with sufficient exposure, the damage may be lethal, rendering microorganisms inactive. The technology has also shown reduction of bacteria concentrations on surfaces after UV-C has been installed within a ventilation system (Taylor et al. 1995), leading to applications for reducing biofouling of cooling coils and the potential for improving system energy efficiency (Blatt 2006; Kowalski 2009; Lee et al. 2009). With increasing application of in-duct UV-C systems, it is important to accurately quantify the technology's performance, and appropriate analysis and test mechanisms must be set in place. ... microorganisms in the UV field within a duct or device, microorganism susceptibility to UV irradiation, air velocity, air temperature, humidity, reflectivity of duct or device internal surfaces, velocity profile, air mixing, and lamp position. A number of studies have explored the influence of some of these parameters through mathematical modeling. Kowalski (2009) predicted UV device performance with average irradiation fields determined using a view factor approximation, and Lau (2009) explored the effects of airflow velocity and temperature on lamp output. However, these models assumed a fully mixed airflow and did not consider the 3D flow-UV field interaction that happens in a real case. ...

Computational fluid dynamics analysis to assess performance variability of in-duct UV-C systems. Azael Capetillo, Andrew Sleigh.

UV-C is becoming a mainstream air sterilization technology and is marketed in the form of energy-saving and infection-reduction devices. An accurate rating of device performance is essential to ensure appropriate microbial reduction yet avoid wastage of energy due to over-performance. This article demonstrates the potential benefits from using computational fluid dynamics to assess performance. A computational fluid dynamics model was developed using discrete ordinate irradiation modeling and Lagrangian particle tracking to model airborne microorganisms. The study calculates the UV dose received by airborne particles in an in-duct UV system based on published EPA experimental tests for single-, four-, and eight-lamp devices.
Whereas the EPA tests back-calculated UV dose from measured microorganism inactivation data, the computational fluid dynamics model directly computes UV dose, then determines inactivation of microorganisms. Microorganism inactivation values compared well between the computational fluid dynamics model and the EPA tests, but differences between UV dosages were found due to uncertainty in microorganism UV susceptibility data. The study highlighted the need for careful consideration of test microorganisms and a reliable dataset of UV susceptibility values in air to assess performance. Evaluation of the dose distribution demonstrated the importance of creating an even UV field to minimize the risk of ineffective sterilization of some particles while not delivering excessive energy to others.

... Airborne microorganisms from the occupied zone can be transported to the upper zone by ventilation flows and convection currents, passing the microorganisms through the UV field. Exposure of microorganisms to UV-C light at a wavelength close to 254 nm damages the microorganism's DNA; at a sufficient dose this can be lethal, effectively killing the microorganism [12]. Several experimental studies have verified the disinfection performance of UVGI for a range of microorganisms in various settings [13-16], and a recent study conducted in a clinical setting demonstrated a 70% reduction in TB transmission risk following the installation of an upper-room UVGI system [17]. One of the major difficulties of implementing upper-room UVGI systems is that the disinfection performance relies on the room airflow patterns, which are responsible for transporting the microorganisms through the UV field. ... The parameter D is cumulative and depends on the 3D UV field present within the space as well as the path taken by a given microorganism. For a known pathogen, the dose received by a microorganism can be used, together with the microorganism susceptibility [12], to calculate the expected microorganism survival fraction, and hence the reduction in airborne concentrations. For many hospital environments the actual pathogens present are unknown, so the dose is a good parameter to represent the effectiveness of a UVGI system without requiring knowledge of microorganism species. ... Here C0 represents the initial microorganism concentration and the term k (m²/J) represents the microorganism susceptibility constant [12]. Microorganism susceptibilities are determined experimentally, and published values can vary substantially, which may be due to the experimental conditions or the particular strain of microorganism used. ...

Computational fluid dynamics modelling and optimisation of an upper-room ultraviolet germicidal irradiation system in a naturally ventilated hospital ward. INDOOR BUILT ENVIRON. MAI Khan.

Ultraviolet germicidal irradiation (UVGI) has been shown to be an effective technology for reducing the airborne bioburden in indoor environments and is already advocated as a potential infection control measure for healthcare settings. However, much of the understanding of UVGI performance is based on experimental studies or numerical simulation in mechanically ventilated environments. This study considers the application of an upper-room UVGI system in a naturally ventilated multi-bed hospital ward. A computational fluid dynamics model is used to simulate a Nightingale-type hospital ward with wind-driven cross-ventilation and three wall-mounted UVGI fixtures.
A parametric study considering 50 different fixture configurations and three ventilation rates was carried out using a design of experiments approach. Each configuration was assessed by calculating the UV dose distribution over the ward and at each bed. Results show that dose is influenced by the location of the fixtures and the ventilation regime. Thermal effects are likely to be important at low ventilation rates and may reduce UV effectiveness. A metamodel-based numerical optimisation was applied at a ventilation rate of 6 air changes per hour. In this case, the optimum result is achieved when UVGI fixtures are mounted on the leeward wall at their lowest mounting height.

... The device employed was an upper-room ultraviolet germicidal irradiation (UVGI) system. UVGI devices make use of light in ultraviolet wavelengths, specifically in the germicidal range of 200-320 nm, to disinfect air and surfaces [4]. UVGI disinfects by causing photochemical changes in the deoxyribonucleic acid (DNA) of a microorganism, thus destroying its ability to reproduce. ... The germicidal effects of UV irradiation have been recognized for many decades; in 1932, Ehrismann and Noethling [5] identified the germicidal effectiveness of UV to peak at 253.7 nm, while today it is estimated to be at approximately 260-265 nm. This wavelength corresponds to the peak of UV absorption by bacterial DNA [4], although this varies between species. An upper-room UVGI system is one where UV fixtures are used to create a zone of UV irradiation in the upper portion of a room, well above head height. ... One major advantage of this system is that the disinfection device is continuously operational within the room, where the source of hazardous microbes typically exists. Furthermore, the system can be retrofitted to most existing rooms, is relatively inexpensive and has been proven to be effective against a wide variety of airborne viruses and bacteria (e.g. [4], [6], [7], [8] and [9]). Specifically quantifying the effectiveness of an upper-room UVGI system is difficult, due to the intricacy of the biological processes involved and the vast number of variables associated with its operation. ...

Experimentally Evaluating the Effectiveness of an Upper-Room UVGI System. Ann McDonagh.

An experimental investigation was carried out to determine the effectiveness of an Ultraviolet Germicidal Irradiation (UVGI) system for eradicating airborne pathogens in an indoor environment. Experimental and environmental conditions were varied and the resultant inactivated percentage of Staphylococcus aureus was measured. Results indicate that it is paramount to keep experimental parameters constant to achieve reliable and comparable results. In particular, the sampling plates used in the Andersen sampler must have a consistent depth of nutrient agar, and sufficient time (~40 mins) must be given for steady-state conditions to be achieved prior to the commencement of air sampling. Furthermore, changes in environmental conditions such as ventilation regime and ventilation rate were found to significantly influence the determination of the effectiveness of an upper-room UVGI system, with a ventilation regime of in low, out high resulting in an average of 16% higher microorganism inactivation than a regime of in high, out low. The environmental conditions for which the device was deemed most effective, i.e.
which resulted in the highest percentage of airborne microorganisms inactivated (96.0 ± 3.2%), were a ventilation rate of 3 air changes per hour (ACH) with a ventilation regime of in low, out high.

... In-duct UVGI systems can also be deployed to treat air streams as they pass through HVAC ductwork, and potentially reduce the respiratory diseases that are transmitted through the ductwork. Since air is recirculated a number of times, an overall increase in removal rate is expected for in-duct air disinfection as compared to a single-pass system [9]. This paper will focus on the performance and economics of in-duct UVGI systems in treating air streams by taking into consideration the dynamic environmental conditions as experienced in the AHU of a VAV system. ... Relative humidity might have an effect on the UV susceptibility of microorganisms. However, some results indicate a positive influence and others show a negative influence [9]; the effect of relative humidity is therefore not considered here. In this paper, the survival of a population of microorganisms exposed to UVC is approximated by a single-stage exponential decay equation: ... The design of UVGI systems depends greatly on the target (group of) microorganisms to be treated. In fact, Kowalski [9] suggests that it is more convenient and definitive to use "dose" as a design parameter, for which D90 values (the dose for 90% inactivation) of many infectious agents have been published. For example, if a UVGI system is designed to deliver a certain dose, any infectious agent with a lower D90 will be inactivated by at least 90% in a single pass (a short numerical sketch of this D90 arithmetic follows the excerpts below). ...

Effects of installation location on performance and economics of in-duct ultraviolet germicidal irradiation systems for air disinfection. BUILD ENVIRON. Bruno Lee, William Parry Bahnfleth.

... Conventionally it can be measured with the spherical actinometry method [34,35]. In actual conditions, the reflectivity of a wall for 254-nm ultraviolet radiation mainly depends on the wall material surface [25,32,36]. For most painted walls, the reflection of 254-nm ultraviolet radiation is on the order of 5% [32]. ... Likewise, the comparison of our model and the existing bare-lamp model was also discussed. In the reported bare-lamp model, the spatial radiance is calculated as [30,36]: ... In practice, non-ideal situations are encountered. Generally, the reflectivity of a polished aluminum reflector ranges from 0.79 to 0.91 [36,37]. In order to investigate the effect of possible reflectivity with our reflector, additional results for f = 0.9 are studied in detail. ...

A new mathematical model for irradiance field prediction of upper-room ultraviolet germicidal systems. C. L. Wu, Yi Yang, S.L. Wong, Alvin C.K. Lai.

There has been an increasing interest in the use of upper-room ultraviolet germicidal irradiation (UVGI) systems because of their proven disinfection effect for airborne microorganisms. To better design and explore further potential applications of UVGI systems, it is of critical importance to predict the spatial UV intensity in enclosures. In this paper, we developed a new mathematical model to predict spatial radiation intensity for upper-room ultraviolet germicidal irradiation systems. The detailed geometries of the lamp and the reflector were removed and replaced by introducing a fictitious irradiation surface near the louver slots. The view factor approach was applied to evaluate the UV irradiance in a three-dimensional space with different louver configurations.
With this approach no detailed meshing of the fixture is required, and this leads to significant simplification of the entire system from a modeling perspective. To validate the model, experiments were performed in a full-scale environmentally controlled chamber in which one UVGI fixture was mounted on a sidewall. The UV irradiance was measured by a radiometer. The results predicted by the present model agree very well with the experimental measurements. Factors affecting the accuracy of the model were also discussed.

... It has been extensively used in the disinfection of equipment, glassware and air by the food and medical industries for many years. Low-pressure mercury (Hg) lamps are often called "germicidal" because most of their total radiation energy is at a wavelength of 253.7 nm, which is near the maximum for germicidal effectiveness, hence their usefulness in the control of microorganisms (Kowalski 2009). While the application of hormesis is used in the context of this chapter on the benefits to plant tissues, hormetic responses have also been shown in bacteria, fungi, animals and humans (Shama, 2007). ... These oxidation stresses become severe with the physiological age of the tissue, and it responds by generating an array of detoxifying mechanisms (antioxidants and enzymes) against free radical attack. It is documented that short-wavelength radiation exerts two pronounced effects on plant metabolism, viz.: at low intensities, it may give rise to an enhancement of secondary stress metabolites which can protect the plant from free radical damage, while at high intensities, it can cause an inhibition of these substances, often leading to detrimental effects on the plant (Kowalski, 2009). Certain plants have repair mechanisms which involve the production of secondary stress metabolites, for example, antioxidant compounds such as carotenoids, phenols, flavonoids, and polyamines. ...

Significance of UV-C Hormesis and Its Relation to Some Phytochemicals in Ripening and Senescence Process. Rohanie Maharaj, Mohammed Ayoub.

... The UVGI room used UV-C rays with a 254 nm wavelength, which is effective for disinfecting air, water, and surfaces [1]. The UVGI room has its advantages, which are a short disinfection duration, a huge capacity in one cycle, ease of use, and relatively affordable preparation and maintenance. In a previous study, it was known that the UVGI room of RSCM had the ability to disinfect the SARS-CoV-2 virus using a 1 J/cm² dose [2]. The safety standard for an N95 respirator mask according to NIOSH is that the mask retains the ability to filter >95%. ...

Filtration effectiveness of N95 medical mask exposed to repeated ultraviolet germicidal irradiation room. Ratna Dwi Restuti, Harim Priyono, Tara Candida Mariska, Joedo Prihartanto.

Background: The global Coronavirus disease (COVID-19) pandemic has created shortages of personal protective equipment (PPE), including N95 respirator medical masks. Ultraviolet Germicidal Irradiation (UVGI) is an effective way to disinfect N95 masks before reuse. The UVGI chamber is an effective method of disinfection against SARS-CoV-2; however, its effect on the filtration ability of N95 medical masks is still uncertain. Purpose: To evaluate the filtration effectiveness of N95 masks after repeated UV-C irradiation in the UVGI chamber.
Method: This was a parallel two-group experimental study to see the effect of repeated UVGI exposure on the filtration of two types of N95 medical masks (types 8210 and 1860), with 25 masks in each group, using an aerosol particle counter, after 10 cycles of repeated UVGI exposure in the UVGI chamber of the ORL-HNS Department, Dr. Cipto Mangunkusumo Hospital. Result: There were no significant differences in the filtration effectiveness of the two types of N95 medical masks after repeated UVGI exposure of up to 10 cycles, and there was no significant change in the filtration ability of the N95 medical masks after repeated UVGI exposure. Conclusion: The filtration of N95 medical mask types 8210 and 1860 was maintained at >95% after repeated UVGI exposure with cumulative doses of 10,126-16,200 mJ/cm² in the UVGI chamber of the ORL-HNS Department, Dr. Cipto Mangunkusumo Hospital.

... Since the occurrence of the COVID-19 pandemic, a lot of effort has been dedicated to the development of photonic devices for fighting against SARS-CoV-2, both for virus diagnostics and for inactivation purposes. The use of ultraviolet germicidal irradiation (UVGI, 200-280 nm) for sanitization purposes dates back to the beginning of the 20th century, and its efficacy against bacteria, viruses, and fungi is well known [1]. ...

UV-C LED sources design and characterization. Sarah Bollanti, G. Di Giorgio, Paolo Di Lazzaro, Daniele Murra.

Ultraviolet C-band (UV-C) sources based on LED arrays, for near-field irradiation purposes, have been designed, realized, and thoroughly characterized both from an optical and a thermal point of view. Here we report the main theoretical and experimental results and discuss the preliminary applications of these sources.

... Ultraviolet (UV) germicidal activity was first reported in the nineteenth century [21,22]. UV light encompasses three wavelength ranges: 400 to 315 nm (UV-A), 315 to 280 nm (UV-B), and 280 to 200 nm (UV-C). ...
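Several of the excerpts above rest on the same exponential dose–response bookkeeping: a published D90 fixes the susceptibility constant k via k = ln(10)/D90, and the cumulative dose then sets the survival fraction S = exp(−kD). A minimal sketch of that arithmetic (the D90 and per-pass dose below are assumed illustrative values, not figures quoted in the excerpts):

```python
import math

def k_from_d90(d90):
    """Susceptibility constant k (m^2/J) from a published D90 dose (J/m^2)."""
    return math.log(10) / d90

def survival(dose, k):
    """Single-stage exponential decay: surviving fraction after a dose."""
    return math.exp(-k * dose)

# Assumed values: D90 = 20 J/m^2, 60 J/m^2 delivered per duct pass
k = k_from_d90(20.0)
for n_passes in (1, 2, 3):  # recirculated air accumulates dose on each pass
    s = survival(60.0 * n_passes, k)
    print(f"{n_passes} pass(es): survival {s:.1e}, "
          f"log reduction {-math.log10(s):.1f}")
```

With these numbers each pass delivers three times the D90 dose, so every pass adds three log orders of inactivation, which is the sense in which recirculation increases the overall removal rate relative to a single-pass system.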
Development of an Ultraviolet-C Irradiation Room in a Public Portuguese Hospital for Safe Re-Utilization of Personal Protective Respirators. Int J Environ Res Publ Health. Jorge Padrão, Talita Nicolau, Helena Felgueiras, Andrea Zille.

Almost two years have passed since COVID-19 was officially declared a pandemic by the World Health Organization. However, it still holds a tight grasp on the entire human population. Several variants of concern, one after another, have spread throughout the world. The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) omicron variant may become the fastest-spreading virus in history. Therefore, it is more than evident that the use of personal protective equipment (PPE) will continue to play a pivotal role during the current pandemic. This work depicts an integrative approach attesting to the effectiveness of ultraviolet-C (UV-C) energy density for the sterilization of personal protective equipment, in particular FFP2 respirators used by the health care staff in intensive care units. It is increasingly clear that this approach should not be limited to health care units. Due to the record-breaking spreading rates of SARS-CoV-2, it is apparent that the use of PPE, in particular masks and respirators, will remain a critical tool to mitigate future pandemics. Therefore, similar UV-C disinfecting rooms should be considered for use within institutions and companies and even incorporated within household devices to avoid PPE shortages and, most importantly, to reduce environmental burdens.

... [2,3] In particular, microbiological studies have shown that the exposure of microorganisms and nonliving organisms, such as viruses, to UV-C radiation results in photochemical changes to nucleic acids, which impairs their ability to reproduce and renders them inactive. [4][5][6] Furthermore, a precise UV-C radiation dose could effectively be used to decompose microplastics in wastewater treatment plants. [7,8] Accordingly, the study of UV-C radiation plays an important role in meeting the demands of various applications. ...

AC-Driven Ultraviolet-C Electroluminescence from an All-Solution-Processed CaSiO3:Pr3+ Thin Film Based on a Metal-Oxide-Semiconductor Structure. Mohammad M Afandi, Hyeonwoo Kang, Taewook Kang, Js Kim.

An ultraviolet (UV) light source is continuously required for sterilization applications as well as for industrial value. In particular, research on materials and devices emitting UV-C radiation in the range from 210 to 280 nm is very meaningful and challenging work. Herein, UV-C electroluminescence (EL) from an all-solution-processed CaSiO3:Pr3+ (CSO) thin film is reported for the first time. The CSO thin film is formed on a Si substrate (size of 13 × 13 mm²), and structurally, the UV-C EL device has a metal-oxide-semiconductor (MOS) shape consisting of CSO and interlayered SiOx of 100 and 150 nm thickness, respectively, on Si. The emission and electrical properties of the UV-C EL device are investigated under an alternating-current system. The results reveal UV-C emission peaking at 276 nm attributed to the 4f5d-3H(F)j transition of Pr3+ ions within CSO, with a maximum output optical power of 8.37 µW cm⁻² (power efficiency of 0.15%) at an operating voltage of 40 Vop (50 Hz). The work can provide a feasible method for realizing large-area UV-C-emitting devices based on the MOS structure.
... The accuracy of the experimental data was verified again by the derived kinetic expression. As shown in Table 2, by comparing the linear relationship between inactivation rate and UV dose fitted by the data points, it can be seen that the Z value (reaction rate coefficient) differed among the different kinds of microorganisms in the process of photocatalytic inactivation, and the Z value also reflects, to a certain extent, the difficulty of microbial inactivation, which is related to the microstructure of the cells [22]. In our study, the Z value of E. coli was the highest (19.48×10⁻³ cm²/μJ), while the Z value of the A. versicolor spore was the lowest (3.02×10⁻³ cm²/μJ). ...

Photocatalytic disinfection of different airborne microorganisms by TiO2/MXene filler: Inactivation efficiency, energy consumption and self-repair phenomenon. Liming Liu, Azhar Laghari, Ge Meng, Yimei Xue.

Bacteria, fungi and viruses are airborne microorganisms which can survive and spread in the form of aerosols, posing a serious threat to human health. A dynamic continuous-flow photocatalytic reactor containing TiO2/MXene filler was developed to study the inactivation characteristics of four different microorganisms treated by ultraviolet light and photocatalysis. By establishing a kinetic fitting model (of the form −lg(Npc/N0) = εAbΦNE1IrtPC + A), it was found that there were differences in the inactivation efficiency of airborne microorganisms with different microstructures. The introduction of catalysis greatly reduced the energy consumption of disinfection. The electrical efficiency per log order (EE/O) of UV254 inactivation of E. coli was reduced from 0.012-0.015 kW·h·m⁻³ to 0.0016-0.0040 kW·h·m⁻³. Furthermore, the self-repair phenomenon was not obvious within a very short time (40 h) under UV irradiation, but microbial activity continued to decline after photocatalytic treatment. Short-wave ultraviolet had stronger penetration than long-wave ultraviolet. High radiation intensity can provide more photons and produce more reactive oxygen species (ROS) for photocatalysis, while excessively high humidity (RH = 95%) inhibits it. An appropriate residence time (4.3 s) can efficiently treat airborne microorganisms at higher concentrations (10⁹ CFU·m⁻³). These external factors affected the photocatalytic disinfection process of different kinds of microorganisms. ...

... Since the Pr³⁺ ion is located at a highly symmetrical lattice site, it provides a suitable crystal field for the efficient 4f¹5d¹→4f² interconfigurational transition of Pr³⁺ with deep-UV emission. In response to 240-nm light excitation, the LCGO:Pr³⁺ yields a broadband deep-UV emission (~240-330 nm) that overlaps well with the germicidal effectiveness curve (~220-280 nm) [39] and the broadband UVB phototherapy window (290-320 nm) [40]. ...
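The Z values and EE/O figures quoted above are two views of the same first-order model: Z (like k) sets the dose needed for a given log reduction, while EE/O expresses the electrical cost per log order in a continuous-flow reactor. A rough sketch using the Z values from the excerpt (the reactor power and flow are hypothetical, and the standard EE/O definition for flow-through systems is assumed):

```python
import math

def d90_from_z(z):
    """Dose (uJ/cm^2) for one log of inactivation under S = exp(-Z*D)."""
    return math.log(10) / z

def ee_per_order(power_kw, flow_m3_per_h, log_reduction):
    """Electrical energy per log order (kWh/m^3) for a continuous-flow reactor."""
    return power_kw / (flow_m3_per_h * log_reduction)

# Z values from the excerpt, in cm^2/uJ
print(f"E. coli D90       ~ {d90_from_z(19.48e-3):5.0f} uJ/cm^2")
print(f"A. versicolor D90 ~ {d90_from_z(3.02e-3):5.0f} uJ/cm^2")

# Hypothetical reactor: 40 W lamp, 60 m^3/h airflow, 3-log target
print(f"EE/O ~ {ee_per_order(0.040, 60.0, 3.0):.5f} kWh/m^3")
```

The roughly sixfold ratio between the two D90 values mirrors the excerpt's point that the Z value reflects how difficult a given microorganism is to inactivate.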
Specifically, the Pr3+ substitution in LCGO is beneficial to activating defect site reconstruction including the generation of cation defects and the decrease of oxygen vacancies. Regulation of different traps in LCGO:Pr3+ presents persistent luminescence and photo-stimulated luminescence in a synergetic fashion. Moreover, the up-conversion luminescence appears with the aid of the 4f discrete energy levels of Pr3+ ions, wherein incident visible light is partially converted into germicidal deep-UV radiation. The multi-responsive character enables LCGO:Pr3+ to response to convenient light sources including X-ray tube, standard UV lamps, blue and near-infrared lasers. Thus, a dual-mode optical conversion strategy for inactivating bacteria is fabricated, and this multi-responsive deep-UV emitter offers new insights into developing UV light sources for sterilization applications. Heterovalent substituting in trap-mediated host lattice also provides a methodological basis for the construction of multi-mode luminescent materials. ... Se divide en 4 grupos principales, ver Fig.1 [5], cada uno con un efecto germicida diferente: UV-A (315-400nm), UV-B (280-315nm), UV-C (200-280nm), UV en vacío (100-200nm). El uso de radiación UV es conocido desde hace más de 60 años para la eliminación de bacterias y en particular la radiación UV-C de 200nm a 280nm para la eliminación de virus en general [6] [7]. Actualmente se emplea radiación UV-C para esterilización de: quirófanos, instrumental, de aire en Aire Acondicionadores y de agua en plantas de tratamiento [8][9] [10]. ... Radiación ultravioleta C aplicada a la desinfección de ambulancias Eduardo Manzano Martín Ferreira Daniela Cudmani Miguel Angel Cabrera Surfaces and air inside ambulances can be contaminated by viruses such as SARS-CoV-2. Chemicals are generally used for disinfection. The process can be complemented with ultraviolet C radiation (UVC). For this, the available technologies were studied and UVC spectral irradiance measurements were carried out. The doses required for virus inactivation and the most effective way to irradiate surfaces were stud-d. Different configurations in ambulances and materials used were analysed and studied their spectral reflectance and transmittance properties in UVC. With that information, a 3D interior ambulance model was built to render images with a program of the irradiance UVC distribution of a prototype. The simulated results were compared with real tests carried out under different irradiations doses and bacteriological examinations. Finally adjustments were made in the irradiation exposure times to achieve a more effective system based in the prototype. (PDF) Radiación ultravioleta C aplicada a la desinfección de ambulancias. Available from: https://www.researchgate.net/publication/350103362_Radiacion_ultravioleta_C_aplicada_a_la_desinfeccion_de_ambulancias [accessed Mar 16 2021]. ... UV irradiation is germicidal mainly because its wavelength corresponds to the peak UV absorption of protein, ribonucleic acid (RNA) and deoxyribonucleic acid (DNA). UV absorption into pathogen genomes triggers radiolytic cleavage or radical reactions, causing changes in the pathogen's nucleic acids that eventually lead to inactivation (2,3). UV rays that inactivate double-stranded DNA viruses are thought to also inactivate single-stranded RNA genomes such as coronaviruses (4,5). ... 
Controversies on the Use of Ultraviolet Rays for Disinfection During the COVID-19 Pandemic. Chii Chii Chew, Philip Rajan.

During the coronavirus disease 2019 (COVID-19) pandemic, the use of ultraviolet (UV) rays to disinfect skin areas, clothes and other objects at the entry/exit points of public spaces has been widely discussed by stakeholders. While ultraviolet germicidal irradiation (UVGI) has been shown to effectively inactivate coronaviruses, including severe acute respiratory syndrome coronavirus (SARS-CoV)-1 and Middle East respiratory syndrome coronavirus (MERS-CoV), no specific evidence proves that it effectively inactivates the new SARS-CoV-2 virus that causes COVID-19. Because UV rays damage human tissue, UVGI should be used with caution and not directly on human skin. Various guidelines recommend that UVGI should not be used as a sole agent for disinfecting surfaces or objects but as an adjunct to the latest standard disinfecting procedures.

... The genome of SARS-CoV-2 has a 79.5-82% homology to SARS-CoV [8,9]. UVC susceptibility studies of SARS-CoV have been described in the literature [10]. Therefore, we can infer the UVC susceptibility of SARS-CoV-2. ...

A Scalable Method for Ultraviolet C Disinfection of Surgical Facemasks Type IIR and Filtering Facepiece Particle Respirators 1 and 2. Ivar Lede, Karina Nolte, René Kroes.

Due to the SARS-CoV-2 pandemic, a shortage of personal protective equipment, including surgical facemasks and Filtering Facepiece Particle Respirators, has occurred. SARS-CoV-2 has a 79.5-82% homology to SARS-CoV. The UVC sensitivity of SARS-CoV is described in the literature. We have performed UVC transmission measurements of surgical facemasks and respirators. In addition, we performed UVC disinfection experiments with S. aureus on surgical facemasks and respirators. Results show that we can achieve an 8-log reduction of S. aureus in the inner layers of FFP1 respirators and on the exterior of surgical facemasks. Furthermore, we showed a 7-log reduction of S. aureus in the inner layers of FFP2 respirators. We conclude that UVC disinfection is an effective, safe and scalable method for the reuse of surgical facemasks and respirators.

... This capacity to reflect is highly dependent on the material of the surfaces. For example, organic material will absorb the penetration and block reflection of UVC, which is why surfaces should be cleaned manually to remove organic substances before decontamination [6]. ...
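The facemask study above pairs UVC transmission measurements with disinfection tests of the inner mask layers; the link between the two is simple attenuation bookkeeping. A toy sketch, assuming each layer transmits a fixed fraction of the incident UVC and that inactivation follows the first-order model (all numbers are hypothetical, not the study's measurements):

```python
import math

def inner_layer_dose(surface_dose, layer_transmittance, n_layers):
    """Dose reaching the n-th inner layer when each layer passes a fixed
    fraction of the incident UVC (Beer-Lambert-style attenuation)."""
    return surface_dose * layer_transmittance ** n_layers

def log_reduction(dose, k):
    """log10 reduction under first-order inactivation S = exp(-k * dose)."""
    return k * dose / math.log(10)

# Hypothetical: 1000 J/m^2 at the surface, 30% transmittance per layer,
# assumed k = 0.05 m^2/J for the test organism
for n in (0, 1, 2):
    d = inner_layer_dose(1000.0, 0.3, n)
    print(f"layer {n}: dose {d:7.1f} J/m^2, "
          f"log reduction {log_reduction(d, 0.05):.1f}")
```

The steep geometric falloff is why the surface dose must sit far above the nominal D90: achieving a multi-log reduction in the innermost layers requires overdosing the exterior accordingly.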
The UVC-dose indicator's colour change corresponded with the commercially radiometer readings. Conclusions: The amount of UVC radiation that is received in surfaces depends on their locations in the room (ie distance from the UVC emitter) and whether any objects shadow the light. In this study we suggest that quality controls should be used to assure that enough UVC radiation reaches all surfaces. ... The second candidate was ultraviolet (UV) germicidal light. UV is highly effective at controlling microbial growth and at achieving disinfection at most types of surfaces (Kowalski 2009). UV radiation in the wavelength range of 250 ± 10 nm (UV-C) is lethal to most micro-organisms, i.e. bacteria, viruses, protozoa, mycelial fungi, yeasts and algae. ... Effectiveness of UV-C light irradiation on disinfection of an eSOS® smart toilet evaluated in a temporary settlement in the Philippines INT J ENVIRON HEAL R Fiona Zakaria Bertin Harelimana Josip Ćurko Damir Brdjanovic Ultraviolet germicidal (short wavelength UV-C) light was studied as surface disinfectant in an Emergency Sanitation Operation System® smart toilet to aid to the work of manual cleaning. The UV-C light was installed and regulated as a self-cleaning feature of the toilet, which automatically irradiate after each toilet use. Two experimental phases were conducted i.e. preparatory phase consists of tests under laboratory conditions and field testing phase. The laboratory UV test indicated that irradiation for 10 min with medium–low intensity of 0.15–0.4 W/m2 could achieve 6.5 log removal of Escherichia coli. Field testing of the toilet under real usage found that UV-C irradiation was capable to inactivate total coliform at toilet surfaces within 167-cm distance from the UV-C lamp (UV-C dose between 1.88 and 2.74 mW). UV-C irradiation is most effective with the support of effective manual cleaning. Application of UV-C for surface disinfection in emergency toilets could potentially reduce public health risks. ... Typical values for a range of microroganisms are presented in a number of sources including Kowalski (2009) and Noakes et al (2004). Equation (1) represents a first order decay assumption which is realistic for many microorganisms; other models incorporating a threshold dose or two-stage decay characteristics have been proposed for certain pathogens (Kowalski 2009). ... Optimizing upper-room UVGI systems for infection risk and energy MAI Khan The effectiveness of UV-C irradiation at inactivating airborne pathogens is well proven, and the technology is already advocated for control of some respiratory diseases such as Tuberculosis. UV-C air disinfection is also commonly promoted as an energy efficient way of reducing infection risk in comparison to increasing ventilation. However determining how and where to apply UVGI devices for the greatest benefit is still poorly understood. This paper focuses on upper-room UVGI systems, where microorganism inactivation is accomplished by passing contaminated room air through an open UV field above the heads of occupants. Multi-zone models are developed to assess the potential impact of a UVGI installation across a series of inter-connected spaces such as a hospital ward; this may comprise rooms for one or more patients that are all connected to a common zone that may be a corridor or may act as a communal space, housing fore xample the nurses station. Simulation of dose couples the ventilation, air mixing and upper-zone average field to explore factors influencing device coverage. 
A first-order decay model of UV inactivation is coupled with the room air model to simulate patient-room and whole-ward disinfection under different mixing and UV field conditions. Steady-state computations of quanta concentrations are applied to the Wells-Riley equation to predict likely infection rates. Simulation of a hypothetical ward demonstrates the relative benefits of different system options for susceptible patients co-located with an infectious source or in nearby rooms. In each case energy requirements are also calculated and compared to achieving the same level of risk through improved ventilation. A design-of-experiments technique is applied to sample the design space and explore the most effective system design for a given scenario. Devices are seen to be most effective where they are located close to the infectious source. However, results show that when the location of the infectious source is not known, locating devices in patient rooms is likely to be more effective than installing them in connecting corridors or communal zones.
... These systems produce an irradiance field which is limited to the upper air zone in the room of interest. Provided that the wavelength of the UV field is close to 254 nm, this has the potential to disinfect the air by killing bacteria, viruses and fungal spores which pass through the field [5]. Numerous experimental studies have verified the disinfection performance of UVGI for a range of microorganisms in various settings [6][7][8][9]. ...
A Computational Study of UV disinfection performance within a naturally ventilated hospital ward
The airborne transmission of pathogens including tuberculosis and influenza poses a significant threat to human health. This is especially the case in healthcare settings such as hospital wards, which inevitably contain a high concentration of viruses and bacteria. These have the potential to infect both patients with weakened immune systems and healthcare workers. In order to reduce the infection risk, improvements in hospital ward design and the application of disinfection systems can offer significant benefits. One such strategy, upper-room Ultraviolet Germicidal Irradiation (UVGI), relies on a collimated irradiance field which works in conjunction with ventilation patterns to disinfect the air. The focus of this study is to predict UVGI system performance within a naturally ventilated hospital ward, for a range of ambient conditions, using Computational Fluid Dynamics (CFD). A computer model of an open-plan six-bed Nightingale-style hospital ward was generated based on the dimensions of a former hospital building situated in Bradford, UK. The ward has a total volume of 200 m³; natural ventilation is supplied through three casement windows, and a further three openings on the leeward side ensure steady cross-ventilation. Boundary conditions are based on experimental measurements of the ventilation rate, which were determined using a tracer technique. An experimentally determined irradiance field is included in the model and stored as a fixed-value scalar field. A total of fifty steady-state CFD simulations show that disinfection performance depends on the ventilation rate, the degree of mixing present and the position of the UVGI fixture within the ward. The results underline the potential performance gains from UVGI installations and how they could be integrated within existing healthcare facilities as an infection control measure. ...
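The two modelling abstracts above combine a first-order UV inactivation assumption with the Wells-Riley infection-risk equation. As a concrete illustration, here is a minimal Python sketch of that machinery for a single well-mixed zone, with upper-room UVGI folded in as an equivalent air-change rate; every parameter value is hypothetical and chosen only to make the comparison runnable.

```python
import math

def wells_riley_infection_risk(
    infectors: int,          # number of infectious occupants
    quanta_per_hour: float,  # quanta generation rate per infector (hypothetical)
    breathing_rate: float,   # susceptible pulmonary ventilation, m^3/h
    exposure_hours: float,
    room_volume: float,      # m^3
    ach_ventilation: float,  # air changes per hour from ventilation
    ach_uv: float = 0.0,     # equivalent air changes per hour from upper-room UVGI
) -> float:
    """Steady-state Wells-Riley risk in a single well-mixed zone.

    Upper-room UVGI is folded in as an 'equivalent air-change rate',
    the usual first-order treatment in which the inactivation rate is
    proportional to the average fluence rate. Numbers are illustrative.
    """
    removal_ach = ach_ventilation + ach_uv
    # steady-state quanta concentration, quanta/m^3
    c_ss = infectors * quanta_per_hour / (removal_ach * room_volume)
    inhaled_quanta = c_ss * breathing_rate * exposure_hours
    return 1.0 - math.exp(-inhaled_quanta)

# Illustrative comparison: ventilation alone vs. ventilation plus UVGI
base = wells_riley_infection_risk(1, 10, 0.5, 8, 200, 3)
with_uv = wells_riley_infection_risk(1, 10, 0.5, 8, 200, 3, ach_uv=6)
print(f"risk, ventilation only: {base:.3f}")   # ~0.064
print(f"risk, ventilation + UVGI: {with_uv:.3f}")  # ~0.022
```

Treating UVGI as extra "equivalent air changes" is the simplification these papers build on; the multi-zone and CFD studies above replace the single well-mixed zone with coupled zones or a full flow field.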
... Germicidal ultraviolet radiation is the radiation normally generated by lamps with peak emission at a wavelength of 253.7 nm (UV-C) and has germicidal action. It kills or neutralizes bacteria, viruses and other primitive organisms (Wladyslaw, 2009). Unlike other disinfectants, which act chemically, ultraviolet radiation acts by physical means, targeting mainly the nucleic acids of microorganisms and promoting photochemical reactions that inactivate viruses and bacteria (Daniel et al., 2001). ...
ÁGUA LIMPA SOLAR: DESINFECÇÃO ULTRAVIOLETA DE ÁGUA PARA CONSUMO ATRAVÉS DE SISTEMA DE BAIXO CUSTO UTILIZANDO ENERGIA SOLAR FOTOVOLTAICA [Solar Clean Water: ultraviolet disinfection of drinking water with a low-cost system using photovoltaic solar energy]
Lucas Rafael do Nascimento, Mauricio Guarnieri, Jair Urbanetz, Ricardo Rüther
... It has been verified that ultraviolet germicidal irradiation (UVGI) does disinfect specific bioaerosols in laboratory experimentation (Kujundzic et al., 2006; Xu et al., 2005). By destroying the DNA of microorganisms, UVGI can inactivate bioaerosols and stop their reproduction (Kowalski, 2009). UV light has been installed in the upper part of rooms to reduce the concentration of airborne culturable microorganisms in the indoor environment (Peccia et al., 2001). ...
A Field Test to Performance of Upper-room UVGI in Elementary School
Chunxiao Su, Josephine Lau, Shawn G Gibbs
... Usually the tube was made from fused quartz, which is transparent to UV light. The main property of a germicidal lamp is its 254 nm emission, which is effective for the destruction of most microorganisms (Kowalski 2009; Miller, Linnes et al. 2013). Figure 1 shows the experimental setup for this experiment. Four types of fluorescent lamps, a 10 W Reno blacklight lamp, a 10 W Sonic aquarium lamp, a 10 W Senkyo germicidal lamp and a 10 W Hitachi insect lamp, were placed in a box of 10×12×12 cm. ...
Ultraviolet (UV) Light Spectrum of Fluorescent Lamps
Abd. Rahman Tamuri, Assyafiq Muhamad, Sarah Akmal Yaacob, Mat Daud
Ultraviolet light is electromagnetic radiation in the range of 100 nm to 400 nm. The UV light spectrum consists of electromagnetic waves with frequencies higher than those humans perceive as the colour violet. These frequencies are invisible to most humans except those with aphakia (the absence of the lens of the eye). There are two sources of UV light: natural and artificial. Sunlight is the main source of natural UV light, while artificial UV can be generated by fluorescent lamps, gas-discharge lamps, lasers, and LEDs. In this paper, the spectra of the common ultraviolet lamps on the market, such as the blacklight lamp, germicidal lamp, aquarium lamp and insect lamp, are discussed. Each lamp was placed in a box of 10×12×12 cm and its spectrum detected by a spectrometer connected directly to a computer via USB. The results show that the main spectral lines of all the fluorescent lamps are 254 nm, 313 nm, 404 nm, 437 nm, and 546 nm. Information on the various UV lamp spectra is very important for the user: it can be used to prevent overexposure of humans to UV light, which may potentially cause skin cancer.
... RNA, causing them to form covalent bonds with each other and interrupting hydrogen bonds with adenine bases in the cDNA/RNA strand (13, 18).
Pyrimidine dimers of thymine/uracil bases distort the shape of DNA/RNA, altering the double-helical structure and preventing the cell from accurately transcribing or replicating its genetic material, which ultimately leads to the death of the cell (13, 18, 19). Extending the irradiation time increased the IE; in the droplets, absorption of UV by the high water content (13, 18) and shielding of viruses near the center of the aggregate likely also contribute to this trend. ...
Effects of Relative Humidity and Spraying Medium on UV Decontamination of Filters Loaded with Viral Aerosols
APPL ENVIRON MICROB; Myung-Heui Woo, Adam Grippin, Diandra Anwar, Joseph Wander
Although respirators and filters are designed to prevent the spread of pathogenic aerosols, a stockpile shortage is anticipated during the next flu pandemic. Contact transfer and reaerosolization of collected microbes from used respirators are also a concern. An option to address these potential problems is UV irradiation, which inactivates microbes by dimerizing thymine/uracil in nucleic acids. The objective of this study was to determine the effects of transmission mode and environmental conditions on decontamination efficiency by UV. In this study, filters were contaminated by different transmission pathways (droplet and aerosol) using three spraying media (deionized water [DI], beef extract [BE], and artificial saliva [AS]) under different humidity levels (30% [low relative humidity {LRH}], 60% [MRH], and 90% [HRH]). UV irradiation at constant intensity was applied for two time intervals at each relative humidity condition. The highest inactivation efficiency (IE), around 5.8 logs, was seen for DI aerosols containing MS2 on filters at LRH after applying a UV intensity of 1.0 mW/cm² for 30 min. The IE of droplets containing MS2 was lower than that of aerosols containing MS2. Absorption of UV by high water content and shielding of viruses near the center of the aggregate are considered responsible for this trend. Across the different media, IEs in AS and in BE were much lower than in DI for both aerosol and droplet transmission, indicating that solids present in AS and BE exhibited a protective effect. For particles sprayed in a protective medium, RH is not a significant parameter.
Design of the System for the Analysis of Disinfection in Automated Guided Vehicle Utilisation
Štefan Mozol, Martin Krajčovič, Ľuboslav Dulina, Matus Oravec
The article's main goal is to describe the design of a system for analysing disinfection automated guided vehicle (AGV) utilisation so that the optimal number of AGVs can be determined. Simulation was used as the system's main tool, allowing a relatively objective imitation of real system behaviour. With the proposed system, it is possible to determine the utilisation of AGVs and the number of AGVs needed to disinfect the premises through superstructure platforms. In the simulation model, two main modes of disinfection by ground AGVs were tested: a regular circuit carried out at specific intervals, and a dynamic evaluation of each area and its possible contamination, in which an instruction to disinfect is triggered once an area reaches a certain threshold. Experiments were carried out for different numbers of AGVs, with and without restriction of entry in the presence of the patient, and for a combination of specialised AGVs.
Based on the results, we can conclude that the use of only surface-disinfecting AGVs is limited by the movement of patients and does not bring the same results as a combination of surface- and air-disinfecting specialised AGVs.
A UVC LED Disinfection Closet for Reuse of Protective Coats
Xing Qiu, Jeffery C. C. Lo, Yuanjie Cheng, Shi-Wei Ricky Lee
Experimental validation of determinants of UV sensitivity using synthetic DNA
M. Otaki, Y. Higashino, Y. Yamada
Treatment with ultraviolet (UV) light has been shown to be effective for disinfection of pathogens. For pathogenic risk management, it would be useful to estimate the UV sensitivity of unknown pathogenic microorganisms from their nucleotide sequences. In this study, we designed and used synthetic DNA molecules, 100–150 bases in length, to investigate not only the DNA sequences but also the secondary structures of the DNA as determinants of the UV sensitivity of microorganisms. We showed that DNA in which UV irradiation had induced dimers was undetectable by quantitative polymerase chain reaction (qPCR) across all DNA bases. The undetectability ratio of PCR after UV irradiation indicated that UV reactivity tends to increase with the number of consecutive thymine pairs within the DNA. In addition, UV reactivity was not influenced by increasing the number of complementary bonds in the DNA or the estimated free energy, indicating that complementary bonds in the DNA have no influence on UV reactivity. The results of this study provide important information about the factors determining the UV sensitivity of microorganisms, such as the number of consecutive thymine pairs within the DNA, which was the main driver regardless of the presence of secondary structures.
Optical technologies for antibacterial control of fresh meat on display
LWT-FOOD SCI TECHNOL; Shirly Marleny Lara Perez, Daniel José Chianfrone, Vanderlei Bagnato, Kate C. Blanco
Ozone and ultraviolet light are techniques used for microbiological control in foods that rely on different mechanisms of action, so their antibacterial effects can complement each other. This study aimed to evaluate the complementarity of these antimicrobial techniques for the food safety of beef contaminated with Escherichia coli. Treatments of aqueous ozone and UV-C were evaluated in cycles, each cycle consisting of a light dose of 69 mJ/cm² and 30 s of ozone spray at a concentration of 0.9 ppm; the time between cycles was one hour, and the cycle was repeated ten times. The total E. coli reduction of 1.7 log corresponds to the colonies removed by the treatments relative to the proliferation observed without treatment. The techniques were also evaluated in isolation: UV-C light achieved a significant reduction, while aqueous ozone maintained the microbial load, controlling proliferation. The organoleptic properties of the meat were evaluated by checking pH, protein quantification, and lipid oxidation. The treatments did not cause significant changes in the meat samples, showing that these technologies have the potential to preserve food by preventing exponential proliferation of microorganisms without modifying its organoleptic properties.
Chemical vapor deposition and its application in surface modification of nanoparticles
Xinhe Zhao, Chao Wei, Zuoqi Gai, Xiaojie Ren
Nanomaterials have diverse applications in electronics, catalysis, energy, materials chemistry and even biology due to the special properties endowed by their high specific surface area.
However, nanoparticles easily agglomerate and lose their original properties, which has become one critical issue limiting their application in nanotechnology. Various surface modification methods are used to reduce their surface energy and prevent agglomeration. While physical methods use surfactants for this purpose, chemical methods are more favorable, since strong covalent bonds are more durable under a wide range of conditions. Among these chemical modifications, chemical vapor deposition is extensively studied. Here we introduce nanomaterial characterization and review the different categories of chemical vapor deposition methods that have been used for nanomaterial surface modification. We show that photo-induced chemical vapor deposition (PICVD) is an attractive strategy that can be carried out under normal temperature and pressure conditions; it may be the best candidate for wide use in large-scale processes. Finally, we discuss the factors affecting the functionalization process of PICVD and the research progress of its application in surface modification of nanoparticles.
UVC-LED Irradiation Effectively Inactivates Aerosolized Viruses, Bacteria, and Fungi in a Chamber-Type Air Disinfection System
Do-Kyun Kim, Dong-Hyun Kang
The United Nations Environment Programme (UNEP) convened the Minamata Convention on Mercury in 2013 to ban mercury-containing products in order to protect human and environmental health. It takes effect in 2020, discontinuing the use of low-pressure mercury lamps, so new UV-emitting sources have to replace this conventional technology. However, UV germicidal irradiation (UVGI) systems still use conventional UV lamps, and no research has been conducted on air disinfection using UVC LEDs. The research reported here investigated the inactivation of aerosolized microorganisms, including viruses, bacteria, and fungi, with a UVC LED module. The results can be utilized as a primary database for replacing conventional UV lamps with UVC LEDs, a novel type of UV emitter. Implementation of UVC LED technology is expected to significantly reduce the extent of global mercury contamination, and this study provides important baseline data to help ensure a healthier environment and increased health for humanity.
Evaluation of a pulsed xenon ultraviolet light device for isolation room disinfection in a United Kingdom hospital
AM J INFECT CONTROL; Ian Hosein, Rosie Madeloso, Wijayaratnam Nagaratnam, Chetan Jinadatha
Background: Pathogen transmission from contaminated surfaces can cause hospital-associated infections. Although pulsed xenon ultraviolet (PX-UV) light devices have been shown to decrease hospital room bioburden in the United States, their effectiveness in United Kingdom (UK) hospitals is less understood. Methods: Forty isolation rooms at Queens Hospital (700 beds) in North London, UK, were sampled for aerobic bacteria after patient discharge, after manual cleaning with a hypochlorous acid-troclosene sodium solution, and after PX-UV disinfection. PX-UV device efficacy on known organisms was tested by exposing inoculated agar plates in a nonpatient care area. Turnaround times for device usage were recorded, and a survey of hospital staff perceptions of the device was undertaken. Results: After PX-UV disinfection, bacterial contamination measured in colony-forming units (CFU) decreased by 78.4%, a 91% reduction from the initial bioburden level prior to terminal cleaning.
PX-UV exposure resulted in a 5-log CFU reduction for multidrug-resistant organisms (MDROs) on spiked plates. The average device turnaround time was 1 hour, with minimal impact on patient throughput. Ward staff were enthusiastic about device deployment, and device operators reported physical comfort in usage. Conclusions: PX-UV use decreased bioburden in patient discharge rooms and on agar plates spiked with MDROs. The implementation of the PX-UV device was well received by hospital cleaning and ward staff, with minimal disruption to patient flow.
Decontamination Efficiency of a DBD Lamp Containing an UV-C Emitting Phosphor
Bruno Caillier, José Maurício A. Caiut, Cristina Muja, Ph. Guillot
Among the various physical and chemical agents, UV radiation is an important route for inactivation of resistant microorganisms. The present study introduces a new mercury-free Dielectric Barrier Discharge (DBD) flat lamp, whose biocidal action comes from the UV emission produced, following plasma excitation, by a rare-earth phosphor obtained by spray pyrolysis. In this study, the emission intensity of the prototype lamp was tuned by controlling gas pressure and electrical power, with 500 mbar and 15 W corresponding to optimal conditions. To characterize the prototype lamp, the energetic output, the temperature increase following lamp ignition and the ozone production of the source were measured. The bactericidal experiments showed excellent results for several gram-positive and gram-negative bacterial strains, demonstrating the high decontamination efficiency of the DBD flat lamp. Finally, study of the external morphology of the microorganisms after exposure to the UV emission suggested that mechanisms other than bacterial DNA damage could be involved in the inactivation process.
Comparison of a continuous flow dipper well and a reduced water dipper well combined with ultraviolet radiation for control of microbial contamination
FOOD CONTROL; Kristen E Gibson, Giselle Almeida
Continuous flow (CF) dipper wells, or small countertop sinks, are used in the foodservice industry for rinsing utensils such as stirring spoons and dishers. These dipper wells are designed with continuous flow not only to rinse and clean but also to control the buildup of microorganisms. Here, we evaluate a reduced water (RW) dipper well – with and without ultraviolet subtype C (UV-C) disinfection – for control and inactivation of Escherichia coli present on a stainless steel utensil. Overall, the RW dipper well (with and without UV-C) performed significantly better than the CF dipper well for removal of E. coli in 10% skim milk medium at various exposure and rinse times. More specifically, at 5, 10, and 30 s, the RW dipper well without UV-C achieved 1.04, 1.72, and 2.03 greater log10 (CFU/ml) reduction in E. coli compared to the CF dipper well at the same treatment times, respectively. When combined with UV-C, the RW dipper well increased the reduction of E. coli by 0.36–1.68 log10 (CFU/ml) over prolonged use (i.e. 2 h continuous use). Moreover, the RW dipper well combined with UV-C may provide a preventative step to reduce the growth and/or persistence of bacteria on the utensil as well as in the dipper well reservoir, especially for E. coli in 10% skim milk medium. To our knowledge this is the first study to evaluate the efficacy of dipper wells – both RW and CF systems – in the removal of E. coli on a stainless steel utensil.
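Several of the abstracts above report results as log10(CFU/ml) reductions. For readers unused to the convention, this tiny Python helper shows how such figures are computed from before/after plate counts; the counts in the example are invented for illustration.

```python
import math

def log10_reduction(cfu_before: float, cfu_after: float) -> float:
    """Log10 reduction between pre- and post-treatment counts (CFU/ml).

    A 2-log reduction means 99% of organisms were removed or inactivated,
    a 3-log reduction 99.9%, and so on.
    """
    if cfu_before <= 0 or cfu_after <= 0:
        raise ValueError("counts must be positive; substitute the detection "
                         "limit for zero counts")
    return math.log10(cfu_before / cfu_after)

# Invented numbers only: 5.0e6 CFU/ml before treatment, 4.7e4 CFU/ml after
print(round(log10_reduction(5.0e6, 4.7e4), 2))  # ~2.03 log reduction
```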
Evaluation of a Pulsed-Xenon Ultraviolet Room Disinfection Device for Impact on Hospital Operations and Microbial Reduction
INFECT CONT HOSP EP; Mark Stibich, Julie Stachowiak, Benjamin D Tanner, Roy F Chemaly
This study evaluated the use of pulsed-xenon ultraviolet (PX-UV) room disinfection by sampling frequently touched surfaces in vancomycin-resistant enterococci (VRE) isolation rooms. The PX-UV system showed a statistically significant reduction in microbial load and eliminated VRE on sampled surfaces when using a 12-minute multiposition treatment cycle.
Ultraviolet Radiation—An Effective Bactericide for Fresh Meat
R. A. Stermer, Margaret Lasater-Smith, C.F. Brasington
Ultraviolet radiation (UV), with principal energy at a wavelength of 253.7 nm, was effective in destroying bacteria on the surface of fresh meat. A radiation dose of 150 mW·s/cm² (275 µW/cm² for 550 s) reduced bacteria on smooth-surface meat (beef plate) by about 2 log cycles (99% "kill"). A further increase in dose to 500 mW·s/cm² (275 µW/cm² for 1800 s) reduced the bacteria level one additional log cycle. Since UV radiation does not penetrate most opaque materials, it was less effective on rough-surface cuts of meat such as round steak, because bacteria were partly shielded from the radiation. Unlike gamma (ionizing) radiation, UV had no deleterious effects on color (Hunter "a", redness) or general appearance. UV treatment chambers could be easily installed in new or existing meat processing facilities at relatively low cost. Experimental results indicate that UV irradiation of meat carcasses could effectively extend the lag phase of bacterial multiplication until adequate cooling has occurred.
The ultraviolet susceptibility of aerosolised microorganisms and the role of photoreactivation
L.A. Fletcher, Clive B Beggs, Kevin Kerr
A number of different factors have contributed to an increased awareness of the threat posed by pathogenic microorganisms in indoor and outdoor environments, and this in turn has led to renewed interest in ultraviolet germicidal irradiation (UVGI) as a potential control measure. This paper presents the results of a series of experiments carried out to determine the UV susceptibility of aerosolised Serratia marcescens and the effect of increased relative humidity on the UV susceptibility constant. Photoreactivation is a light-induced DNA repair mechanism which enables microorganisms to recover from sub-lethal UV doses. Although this is an important issue, it is often overlooked and not taken into account when estimating the UV susceptibility of microorganisms. This paper also presents data regarding the photoreactivation potential of Serratia marcescens and the effect that this can have on the UV susceptibility constant. INTRODUCTION: In recent years a number of factors have stimulated an increased awareness of the presence of potentially pathogenic bioaerosols in indoor and outdoor environments and the detrimental health effects associated with them (Lin & Li, 2002). One of the main driving forces has been the re-emergence of tuberculosis as a major health concern, mainly in the developing world but more recently also in the more developed countries of Europe. The situation has been compounded by the rise in the number of multi-drug-resistant strains of Mycobacterium tuberculosis, which has made antibiotic treatment of the disease increasingly problematic. There has also been an increase in the number of nosocomial infections (i.e.
those acquired in hospital); in the UK alone it is estimated that 1 in 10 patients will acquire some kind of infection during their stay in hospital (Beggs, 2000). Traditionally it has been thought that the main source of infection in healthcare facilities was person-to-person contact. However, it is now believed that the airborne transmission route accounts for as much as 10% of all nosocomial infections (Beggs, 2000).
Repair and regrowth of Escherichia coli after low- and medium-pressure ultraviolet disinfection
Water Sci Tech Water Supply; Jiangyong Hu, Xiaona Chu, Puay Hoon Elaine Quek, Xiaonan Tan
Ultraviolet (UV) light disinfection has increasingly been used as an alternative to conventional chlorine disinfection, as it has been found to be a more efficient disinfection method. Because UV disinfection only damages the nucleic acids of microorganisms to prevent replication, there is a possibility of microorganisms repairing the damaged sites. Few studies have investigated the reactivation of microorganisms after exposure to medium-pressure UV disinfection, so such reactivation needs to be studied as medium-pressure lamps gain popularity. In addition, disinfection by-products (DBPs) produced by UV disinfection have recently been discovered and may serve as a carbon source in the finished water, resulting in regrowth of bacteria. It is therefore important to know the regrowth potential of bacteria in the presence of DBPs. In this study, the repair and regrowth of Escherichia coli after UV disinfection were investigated. Results showed that E. coli underwent photorepair (up to 5 log under fluorescent light conditions) more significantly than dark repair (up to 0.8 log in terms of bacterial count increase). Repair was generally found to be higher at low doses. At the same UV dose, medium-pressure UV irradiation appears better able to keep repair to a lesser extent. In addition, bacterial regrowth potential was studied with the addition of DBPs typically found in UV processes, such as acetic acid and formaldehyde. The maximum increase in bacterial count was found to be 0.3 log. Generally, the level of regrowth was insignificant compared with the increase in bacterial count due to repair.
Effects of Relative Humidity on the Ultraviolet Induced Inactivation of Airborne Bacteria
Jordan Peccia, Holly M. Werth, Shelly L Miller, Mark T Hernandez
Using ultraviolet germicidal irradiation (UVGI) as an engineering control against infectious bioaerosols necessitates a clear understanding of environmental effects on inactivation rates. The response of aerosolized Serratia marcescens, Bacillus subtilis, and Mycobacterium parafortuitum to ultraviolet irradiation was assessed at different relative humidity (RH) levels in a 0.8 m³ completely mixed chamber. Bioaerosol response was characterized by physical factors, including median cell aerodynamic diameter and cell water sorption capacity, and by natural decay and UV-induced inactivation rates as determined by direct microscopic counts and standard plate counts. All organisms tested sorbed water from the atmosphere at RH levels between 20% and 95% (up to 70% of dry cell mass at 95% RH); however, no concomitant change in median aerodynamic diameter in this RH range was observed. Variations in ultraviolet spherical irradiance were minor and not statistically significant in the 20–95% RH range.
Cell water sorption and inactivation response was similar for each of the pure cultures tested: when RH exceeded approximately 50%, sorption increased markedly and a sharp concurrent drop in UV-induced inactivation rate was observed.
A Genomic Model for the Prediction of Ultraviolet Inactivation Rate Constants for RNA and DNA Viruses
A mathematical model is presented to explain the ultraviolet susceptibility of viruses in terms of genomic sequences that have a high potential for photodimerization. The specific sequences with high dimerization potential include doublets of thymine (TT), thymine-cytosine (TC), cytosine (CC), and triplets composed of single purines combined with pyrimidine doublets. The complete genomes of 49 animal viruses and bacteriophages were evaluated using base-counting software to establish the frequencies of dimerizable doublets and triplets. The model also accounts for the effects of ultraviolet scattering. Constants defining the relative lethality of the four dimer types were determined via curve-fitting. A total of 77 water-based UV rate constant data sets were used to represent 22 DNA viruses, and a total of 70 data sets to represent 27 RNA viruses. Predictions are provided for dozens of viruses of importance to human health that have not previously been tested for UV susceptibility.
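The genomic model above rests on counting dimerizable sequences in a genome. The Python sketch below illustrates only that counting step, for the doublets the abstract names (TT, TC, CC); the triplet counts, fitted lethality constants and scattering correction of the published model are not reproduced here.

```python
def dimerizable_doublet_frequencies(genome: str) -> dict:
    """Per-base frequencies of the dimerizable doublets named in the
    abstract above (TT, TC, CC), counting overlapping occurrences.

    Only the counting step is sketched; the published model also counts
    purine-pyrimidine-pyrimidine triplets, fits relative lethality
    constants per dimer type, and corrects for UV scattering.
    """
    seq = genome.upper().replace("U", "T")  # treat RNA uracil like thymine
    doublets = ("TT", "TC", "CC")
    counts = {d: 0 for d in doublets}
    for i in range(len(seq) - 1):
        pair = seq[i:i + 2]
        if pair in counts:
            counts[pair] += 1
    n = max(len(seq), 1)
    return {d: counts[d] / n for d in doublets}

# Toy sequence, for illustration only
print(dimerizable_doublet_frequencies("ATTTCGCCAUU"))
```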
Rayleigh-Debye-Gans as a model for continuous monitoring of biological particles: Part I, assessment of theoretical limits and approximations
OPT EXPRESS; Alicia Garcia-Lopez, Arthur Snider, Luis H. Garcia-Rubio
A rapid tool for the characterization of submicron particles is light spectroscopy. Rayleigh-Debye-Gans and Mie theories provide light scattering solutions that can be evaluated within the time constants required for continuous real-time monitoring applications, such as the characterization of biological particles. A multiwavelength assessment of Rayleigh-Debye-Gans theory for spheres was conducted over the UV-Vis wavelength range, where strict adherence to the limits of the theory at a single wavelength could not be met. Reported corrections to the refractive indices were developed to extend the range of application of the Rayleigh-Debye-Gans approximation. The results of this study show that there is considerable disagreement between Rayleigh-Debye-Gans and Mie theory across the UV-Vis spectrum.
Carcinogens Enhance Survival of UV-Irradiated Simian Virus 40 in Treated Monkey Kidney Cells: Induction of a Recovery Pathway?
Alain Sarasin, Philip Courtland Hanawalt
Treatment of monkey kidney cells with low doses of carcinogen enhances the survival of UV-irradiated simian virus 40 (SV40). This is true for compounds with UV-like effects (metabolites of aflatoxin B1, N-acetoxyacetylaminofluorene) and compounds with x-ray-like effects (methyl methanesulfonate, ethyl methanesulfonate). This phenomenon resembles the UV-reactivation of viruses in eukaryotic cells. The carcinogen-induced enhancement of the survival of UV-irradiated SV40 is correlated with the inhibition of host-cell DNA synthesis, suggesting that the inhibition is an inducing agent. An enhancement of UV-irradiated SV40 survival is also obtained in cells treated with hydroxyurea or cycloheximide for long enough that there is still inhibition of host DNA synthesis during the early stage of SV40 infection. We hypothesize that treatment of host cells with carcinogens induces a new recovery pathway that facilitates the replication of damaged DNA, bypassing the lesions and resulting in the enhanced survival of UV-irradiated SV40. This inducible process might represent the expression of "SOS repair" functions in eukaryotic cells, analogous to the previously demonstrated induction of SOS repair in bacteria after UV or carcinogen treatment.
Fungi Isolated in Culture from Soils of the Nevada Test Site
L. W. Durrell, Lora Mangum Shields
Ultraviolet light in water and wastewater sanitation
W.J. Masschelein, R.G. Rice
Several general books are available on ultraviolet light and its applications. However, this is the first comprehensive monograph that deals with its application to water and wastewater treatment. There is rapidly growing interest in using UV light in water sanitation due to increased knowledge of the potential health and environmental impacts of disinfection byproducts. Ultraviolet Light in Water and Wastewater Sanitation integrates the fundamental physics applicable to water and wastewater sanitation, the engineering aspects, and practical experience in the field. The text analyzes the concerns associated with this application of UV light and brings together comprehensive information on the presently available UV technologies applicable to water and wastewater treatment, including: lamp technologies, criteria for evaluation and choice of technology; fundamental principles; performance criteria for disinfection; design criteria and methods; synergistic use of UV and oxidants (advanced oxidation); and functional requirements and potential advantages and drawbacks of the technique. It is the only treatise currently available combining fundamental knowledge, recommendations for design, evaluations of performance, and future prospects for this application. Water and wastewater treatment professionals, water utility employees, governmental regulators, and chemists will find this book an essential and unique reference for a technology which has received growing regulatory acceptance.
Comparative effectiveness of UV wavelengths for the inactivation of Cryptosporidium parvum oocysts in water
WATER SCI TECHNOL; Karl G Linden, Gwy-Am Shin, Mark Sobsey
Cryptosporidium parvum oocysts in water were exposed to distinct wavelength bands of collimated-beam ultraviolet (UV) radiation across the germicidal UV wavelength range (210–295 nm) emitted from a medium-pressure (MP) mercury vapour lamp. The dose of UV radiation transmitted through each narrow bandpass filter was measured using potassium ferrioxalate actinometry. Oocyst infectivity was determined using a cell culture assay, and titre was expressed as an MPN. The log10 inactivation for each band of radiation was determined for a dose of 2 mJ/cm². Doses at all wavelengths between 250 and 275 nm resulted in approximately 2 log10 inactivation of Cryptosporidium parvum oocyst infectivity, while doses at wavelengths above and below this range were less effective. Because polychromatic radiation from MP UV lamps had about the same germicidal activity between 250 and 275 nm for inactivation of oocyst infectivity, there was no unique advantage of MP UV over low-pressure (LP) UV except for the simultaneous delivery of a wide range of germicidal wavelengths.
Nucleic Acid Structure
Wilhelm Guschlbauer
Air Contamination Control in Hospitals
Joseph R. Luciano
Radiative heat transfer
Michael Modest
Absorption and scattering of light by small particles
C. Bohren, D. Huffman
Persistence and survival of saprophytic fungi antagonistic to Botrytis cinerea on kiwifruit leaves
Kirsty Sarah Helen Boyd-Wilson, Joanne Perry, Monika Walter
Position Paper on the Use of Ultraviolet Lights in Biological Safety Cabinets
Jyl Burgener
Comprehensive Virology
H. Fraenkel-Conrat, R.R. Wagner
'Comprehensive Virology 12' deals with several special groups of viruses showing properties that set them apart from the main virus families. The book comprises 5 chapters, all written by specialists, in which 5 groups of viruses are discussed: chapter 1, the viruses of invertebrates; chapter 2, the viruses of fungi; chapter 3, the cyanophages and viruses of eukaryotic algae; chapter 4, the viruses of fungi capable of replication in bacteria; and chapter 5, a view of lipid-containing bacteriophages. Each chapter has an impressive number of references. An index concludes this 12th volume in a series of 15.
A comparative report on infection of thoracoplasty wounds
R H Overholt, R H Betts
Effect of ultra-violet irradiation of air on incidence of infections in an infant's hospital
F. Delmundo, C.F. McKhann
Two-dimensional angular scattering measurements of single airborne microparticles
Stephen Holler, Yongle Pan, Jerold R. Bottiger, Richard K. Chang
The detection and characterization of microparticles, particularly airborne biological particles, is currently of great interest. We present a novel technique for recording the 2D angular scattering pattern from a single airborne microparticle. Angular scattering measurements were performed in both the near-forward and near-backward regions for a variety of particles, including ethanol droplets, single polystyrene latex (PSL) spheres, PSL clusters, and clusters of Bacillus subtilis spores, all of various sizes. Because the angular scattering pattern is sensitive to size, shape and refractive index, the angular features associated with clusters may be used to better characterize such airborne microparticles. A watershed image-processing routine has also been implemented; through this routine, the number of intensity patches per solid angle is found to increase with cluster diameter.
A STUDY INTO THE EFFICACY OF ULTRAVIOLET DISINFECTION CABINETS FOR STORAGE OF AUTOCLAVED PODIATRIC INSTRUMENTS PRIOR TO USE, IN COMPARISON WITH CURRENT PRACTICES
Light Scattering by Viral Suspensions
LIMNOL OCEANOGR; William Balch, James Vaughn, James Novotny, Amanda Ashe
Viruses represent one of the most abundant ocean-borne particle types and have significant potential for affecting optical backscattering. Experiments addressing the light-scattering properties of viruses have heretofore not been conducted. Here we report the results of laboratory experiments in which the volume-scattering functions of several bacterial viruses (bacteriophages) were measured at varying concentrations with a laser light-scattering photometer using a He-Ne and/or Argon ion laser (632.8 and 514.0 nm, respectively). Four bacterial viruses of varying size were examined, including the coliphages MS-2 (capsid size 25–30 nm) and T-4 (capsid size ∼100 nm), and marine phages isolated from Saco Bay, Maine (designation Y-1, capsid size 50–80 nm) and Boothbay Harbor, Maine (designation C-2, capsid size ∼110 nm). Volume-scattering functions (VSFs) were fitted with the Beardsley-Zaneveld function and then integrated in the backward direction to calculate the backscattering cross section.
This was compared to the virus geometric cross section as determined by transmission electron microscopy and field-flow fractionation. Typical backscattering efficiencies varied from 20 × 10⁻⁶ to 1,000 × 10⁻⁶. Data on particle size and backscattering efficiencies were incorporated into Mie scattering calculations to estimate the refractive index of the viruses. The median relative refractive index of the four viruses was ∼1.06. The results presented here suggest that viruses, while highly abundant in the sea, are not a major source of backscattering.
UV Intensity Measurement and Modelling and Disinfection Performance Prediction for Irradiation of Solid Surfaces with UV Light
FOOD BIOPROD PROCESS; D. W. M. Gardner, Gilbert Shama
UV intensities on the surfaces of solid cylinders were measured in a disinfection chamber by means of a UV bioassay. The bioassay employed spores of the bacterium Bacillus subtilis (ATCC 6633) which had been deposited onto coupons of filter paper. These experimental measurements were then compared to surface intensity predictions obtained using an extense source with spherical emission (ESSE) model. Good agreement was obtained between the two sets of data, with the predicted values lying between 75% and 95% of the experimental ones. The model was also applied to the prediction of UV intensities on the surfaces of an object of slab geometry travelling through a conceptualized disinfection facility having the configuration of a tunnel in which UV sources were arranged on the walls. By assuming that the object was uniformly contaminated with spores of B. subtilis, estimates of the extent of disinfection achieved as the object travelled along the tunnel were made using previously published inactivation data for the spores. The versatility of the ESSE model was demonstrated by presenting three-dimensional surface intensity plots of the various slab surfaces for both horizontal and vertical arrangements of UV sources in the tunnel. The model described here could prove useful in optimizing the arrangement and number of UV sources in surface disinfection facilities.
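Gardner and Shama predict surface intensities with the extense-source ESSE model, which integrates over the lamp geometry. As a much cruder stand-in, the sketch below estimates irradiance from an idealised point source with inverse-square spreading and a cosine incidence factor; it is explicitly not the ESSE model, and the lamp power and distance are hypothetical.

```python
import math

def irradiance_point_source(power_uv_w: float, distance_m: float,
                            incidence_deg: float = 0.0) -> float:
    """Rough irradiance (W/m^2) on a surface from an idealised point source.

    Inverse-square spreading over a sphere, with a cosine factor for
    oblique incidence. Real lamp models (such as the extense-source ESSE
    model discussed above) integrate over the lamp geometry instead;
    this is only a toy stand-in.
    """
    if distance_m <= 0:
        raise ValueError("distance must be positive")
    spherical_area = 4.0 * math.pi * distance_m ** 2
    return (power_uv_w / spherical_area) * math.cos(math.radians(incidence_deg))

# Illustrative: a lamp emitting 10 W of UV-C, surface 1.5 m away, normal incidence
print(f"{irradiance_point_source(10.0, 1.5):.2f} W/m^2")  # ~0.35 W/m^2
```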
Effects of UV and Phototoxins on Selected Fungal Pathogens of Citrus
Archana Asthana, R W Tuveson
Photons in the UV region of the spectrum are important for organisms since they are energy-rich and strongly absorbed by biological molecules, with the potential to react with membranes, enzymes, and nucleic acids. These wavelengths can also be absorbed by specific molecules that undergo conversion to a more reactive state (light activation), which can then damage molecules of critical physiological function (phototoxicity). The importance of pigments in two genera of Citrus pathogens, Fusarium and Penicillium, was assessed for the ability to protect against inactivation by UV-A, -B, and -C and by two phototoxins activated by UV-A. Pigment-deficient mutants of both genera were isolated following UV-C mutagenesis. Direct exposure of fungal spores in suspension, of wild type and pigment-deficient mutants, was carried out under the appropriate light source. The UV-A-activated phototoxins investigated were alpha-terthienyl (alpha-T), which produces predominantly singlet oxygen (¹O₂), an excited state of oxygen that causes chiefly membrane damage, and 8-methoxypsoralen (8-MOP), which induces cycloadduct formation in DNA. For both genera, UV-A and UV-B alone were ineffective in causing inactivation of conidia at the fluences tested. Using appropriate Escherichia coli tester strains, it was demonstrated that the UV-B source was capable of inducing DNA lesions leading to lethality, presumably cyclobutane dimers in large measure. The carotenoids in one of the Fusarium species did not appreciably protect against lethal damage induced by UV-C, but the pigments of both Penicillium species were presumably able to screen UV-C and offer protection. It is assumed that the carotenoids in the wild-type Fusarium species protected against UV-A-activated alpha-T damage by quenching singlet oxygen. The blue-green pigment(s) in P. italicum prevent DNA damage caused by 8-MOP, most probably by screening the UV-A wavelengths necessary to activate the phototoxin.
Poliovirus double-stranded RNA: Inactivation by ultraviolet light
J M Bishop, Nancy Quintrell, Gebhard Koch
THE FEASIBILITY OF USING THE MIE THEORY FOR THE SCATTERING OF LIGHT FROM SUSPENSIONS OF SPHERICAL BACTERIA
V. G. Petukhov
The distribution of the energy of light waves during interaction with suspensions of spherical bacteria obeys the Mie theory of scattering. In the visible range of the spectrum, the Rayleigh-Gans theory cannot always be used to describe the scattering of light by bacterial suspensions.
The International Ultraviolet Explorer
ESA BULL-EUR SPACE; Ferdinando Duccio Macchetto, M. V. Penston
The scientific goals, instrumentation, and observational routine of the International Ultraviolet Explorer are described. The IUE, launched January 26, 1978, is in geosynchronous orbit, and there is continuous communication with ESA's ground station at Villafranca near Madrid and NASA's ground station at Goddard Space Flight Center during the observing shifts. Since the satellite can be commanded and data can be received in real time, the ground-based observer can make step-by-step decisions about the observing program in the same way as at a ground-based observatory. Among other goals, the project seeks to obtain high-resolution spectra of stars of all spectral types and to study gas streams in and around some binary systems. The format of the spectrum, which consists of a series of adjacent spectral orders displayed one above another in a raster-like pattern, makes efficient use of the sensitive area of the SEC Vidicon television tubes used to integrate and record the spectrum. Preliminary studies are surveyed.
Effect of High Doses of High and Low Intensity UV Irradiation on Surface Microbiological Counts and Storage-Life of Fish
J FOOD SCI; Yao-wen Huang, Romeo T. Toledo
Ultraviolet (UV) irradiation at 254 nm and doses of 300 mW·s/cm² from a photochemical reactor (16.6 min at 300 µW/cm²) or 4.8 W·s/cm² from a high-intensity UV-C lamp (40 s at 120–180 mW/cm²) reduced surface microbial counts on mackerel by two to three log cycles. UV-treated mackerel wrapped in 1 mil polyethylene and packed in -1°C ice had at least a 7-day longer shelf life than conventional ice-packed untreated controls. Spray washing with water containing 10 ppm chlorine, by itself or in combination with UV irradiation, was necessary to reduce surface counts on rough-surfaced fish to the same extent as on smooth-surfaced fish. When UV-irradiated and packed in 0°C ice, surface microbial counts on vacuum-packaged mackerel lagged 4 days behind those on mackerel wrapped in 1 mil polyethylene.
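The Stermer and Huang abstracts both express UV dose as irradiance multiplied by exposure time (for example, 275 µW/cm² for 550 s ≈ 150 mW·s/cm²). The small Python helper below performs that conversion and the inverse time-for-dose calculation, checked against the figures quoted in those abstracts.

```python
def uv_dose_mws_per_cm2(irradiance_uw_per_cm2: float, seconds: float) -> float:
    """UV dose in mW*s/cm^2 from irradiance (uW/cm^2) and exposure time (s)."""
    return irradiance_uw_per_cm2 * seconds / 1000.0

def exposure_seconds(target_dose_mws_per_cm2: float,
                     irradiance_uw_per_cm2: float) -> float:
    """Exposure time (s) needed to reach a target dose at a given irradiance."""
    return target_dose_mws_per_cm2 * 1000.0 / irradiance_uw_per_cm2

# Figures quoted in the abstracts above:
print(uv_dose_mws_per_cm2(275, 550))        # ~151, reported as ~150 mW*s/cm^2
print(uv_dose_mws_per_cm2(300, 16.6 * 60))  # ~299, reported as 300 mW*s/cm^2
print(exposure_seconds(500, 275))           # ~1818 s, reported as 1800 s
```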
An action spectrum for cell killing and pyrimidine dimer formation in Chinese hamster V-79 cells
Robert Rothman, R B Setlow
Abstract—We have determined action spectra for pyrimidine dimer formation and loss of colony-forming ability in Chinese hamster V-79 cells and have found a very strong correlation between the two. These data are consistent with the notion that damage to DNA is the principal cause of cell death and that the most important type of damage is the pyrimidine dimer. While the shape of the V-79 spectra mimics that of action spectra for bacteria, phage, and purified DNA, V-79 cells are about twice as sensitive to radiation at long wavelengths, relative to the sensitivity at 265 nm. However, if the action spectra are normalized to 297 nm, a wavelength included in the solar spectrum, the two sets of action spectra coincide at wavelengths relevant to human skin cancer. Thus an action spectrum based on microorganisms should be adequate for extrapolation to humans in terms of risk due to ozone depletion.
The destruction of spores of Bacillus subtilis by the combined effects of hydrogen peroxide and ultraviolet light
LETT APPL MICROBIOL; William M Waites, Stephen Ernest Harding, D.R. Fowler
Ultraviolet light irradiation of bacterial spores in the presence of hydrogen peroxide has been shown to produce synergistic kills when compared with ultraviolet light (u.v.) and hydrogen peroxide used sequentially. This combined use has been patented for the commercial sterilization of packaging before filling with UHT-processed products. Previous results have shown that lamps producing u.v. light with a maximum output at about 254 nm were extremely effective. Results obtained using a synchrotron radiation source to produce a narrow band of irradiation now show that the greatest kill of spores of Bacillus subtilis in the presence of hydrogen peroxide is obtained with radiation at ~270 nm. Such results suggest that the action of the u.v. light is not directly on the spore DNA but may be related to the production of free hydroxyl radicals from hydrogen peroxide.
The ultraviolet absorbance spectrum of coliform bacteria and its relationship to astronomy
F. Hoyle, Nalin Chandra Wickramasinghe, E. R. Jansz, P. M. Jayatissa
The ultraviolet absorbance spectrum of a mixed culture of coliform bacilli is obtained and shown to be remarkably similar to recent measurements of the extinction curve of starlight.
Mathematical Modeling of Ultraviolet Germicidal Irradiation for Air Disinfection
Quant Microbiol; D. L. Witham, Thomas S Whittam
A comprehensive treatment of the mathematical basis for modeling the disinfection process for air using ultraviolet germicidal irradiation (UVGI). A complete mathematical description of the survival curve is developed that incorporates both a two-stage inactivation curve and a shoulder. A methodology for evaluating the three-dimensional intensity fields around UV lamps and within reflective enclosures is summarized, enabling determination of the UV dose absorbed by aerosolized microbes. The results of past UVGI studies on airborne pathogens are tabulated. The airborne rate constant for Bacillus subtilis is confirmed based on the results of an independent test. A re-evaluation of data from several previous studies demonstrates the application of the shoulder and two-stage models.
The methods presented here will enable accurate interpretation of experimental results involving aerosolized microorganisms exposed to UVGI and the associated relative humidity effects.
Strahlung und antagonistische Wirkung [Radiation and antagonistic action]
S. Prát
Light scattering by microorganisms in the open ocean
PROG OCEANOGR; Dariusz Stramski, Dale A. Kiefer
Recent enumeration and identification of marine particles less than 2 µm in diameter suggest that they may be the major source of light scattering in the open ocean. The living components of these small particles include viruses, heterotrophic and photoautotrophic bacteria, and the smallest eukaryotic cells. In order to examine the relative contribution of these (and other) microorganisms to scattering, we have calculated a budget for both the total scattering and backscattering coefficients (at 550 nm) of suspended particles. This budget is determined by calculating the product of the numerical concentration of particles of a given category and the scattering cross-section of that category. Values for this product are then compared to values for the particulate scattering coefficients predicted by the models of GORDON and MOREL (1983) and MOREL (1988).
UV hormesis in fruits: A concept ripe for commercialisation
TRENDS FOOD SCI TECH; Peter G Alderson
Hormesis is the application of potentially harmful agents at low doses to living organisms in order to induce stress responses. When fruit are exposed to low doses of UV, a number of changes are induced, including the production of anti-fungal compounds and delays in ripening. Both of these responses could be exploited by the horticultural sector to reduce postharvest losses. We review the results of UV treatment of a variety of fruits and the work done in identifying chemical changes in them. The prospects for treating fruits with UV on a commercial scale are considered.
Nanoparticle sizing with a resolution beyond the diffraction limit using UV light scattering spectroscopy
OPT COMMUN; Kun Chen, Alexey Kromin, M. P. Ulmer, Vadim Backman
We investigated the detection of dielectric nanoparticles using static light scattering spectroscopy (LSS) in the UV range (from 250 to 390 nm). The light scattered by polystyrene nanospheres in the backward direction was collected by means of an optical fiber probe and a charge-coupled device (CCD) spectrograph. The size distributions of the nanoparticles were obtained by a discrete inversion of the backscattering spectra using a theoretical model based on Mie theory. Our results show that UV LSS can be used to measure the sizes of nanoparticles with an accuracy far exceeding the diffraction limit and to study subwavelength structures at the nanometer scale. This technique may find scientific and industrial applications including the study of macromolecular complexes at the nanoscale, detection and identification of viral particles, non-invasive probing of nanoscale surface structures, and monitoring the processing of pharmaceutical nanoparticles.
Re-use of wastewater: Preventing the recovery of pathogens by using medium-pressure UV lamp technology
Ben F. Kalisvaart
Ultraviolet (UV) light has become widely accepted as an alternative to chlorination or ozonation for wastewater disinfection. There are now over 2,000 wastewater treatment plants worldwide using either low- or medium-pressure UV technology.
Recent studies investigating UV lamp technology, configuration, cleaning requirements and ageing, as well as long-term performance tests, have demonstrated beyond any doubt the effectiveness of UV in inactivating pathogens in wastewater. Research has also shown that, to ensure permanent inactivation and prevent the recovery of microorganisms following exposure to UV, a broad, "polychromatic" spectrum of UV wavelengths is necessary. These wavelengths inflict irreparable damage not only on cellular DNA but on other molecules, such as enzymes, as well. Only medium-pressure UV lamps produce the necessary broad range of wavelengths; low-pressure lamps emit a single wavelength peak which only affects DNA. Polychromatic medium-pressure UV light is so effective because of the lamp's exceptionally high UV energy output at specific wavelengths across the UV spectrum. It has been shown, for example, that pathogenic E. coli O157:H7 was able to repair the damage caused by low-pressure UV, but no repair was detected following exposure to UV from medium-pressure lamps.
Radiation Biology
Alison P. Casarett
Molecular photobiology: inactivation and recovery
Kendric C. Smith
User Guide for the Discrete Dipole Approximation Code DDSCAT 7.3
B. T. Draine, Piotr J. Flatau
DDSCAT is an open-source Fortran-90 software package applying the discrete dipole approximation to calculate scattering and absorption of electromagnetic waves by targets with arbitrary geometries and complex refractive index. The targets may be isolated entities (e.g., dust particles), but may also be 1-d or 2-d periodic arrays of "target unit cells", allowing calculation of absorption, scattering, and electric fields around arrays of nanostructures. The theory of the DDA and its implementation in DDSCAT is presented in Draine (1988) and Draine & Flatau (1994), and its extension to periodic structures (and near-field calculations) in Draine & Flatau (2008). DDSCAT includes support for MPI, OpenMP, and the Intel Math Kernel Library (MKL), and supports calculations for a variety of target geometries. Target materials may be both inhomogeneous and anisotropic, and it is straightforward for the user to "import" arbitrary target geometries into the code. DDSCAT automatically calculates total cross sections for absorption and scattering and selected elements of the Mueller scattering intensity matrix. This User Guide explains how to use DDSCAT to carry out electromagnetic scattering calculations. DDfield, a Fortran-90 code to calculate E and B at user-selected locations near the target, is included in the distribution. A number of changes have been made since the previous release, DDSCAT 7.0.
The survival of bacteria in dust. II. The effect of atmospheric humidity on the survival of bacteria in dust
J Hyg; O M Lidwell, E. J. L. Lowbury
Dust from scarlet-fever wards was exposed to a controlled range of atmospheric humidities by enclosure in metal boxes containing anhydrous calcium chloride and saturated solutions of potassium carbonate, sodium nitrite, potassium bromide and sodium sulphate. The death rate of total organisms, Staphylococcus aureus and Streptococcus pyogenes in the dust was assessed by periodic sampling of series of twenty 10 mg portions. A positive correlation between atmospheric humidity and death rate was observed for the three groups of organisms counted in three specimens of dust.
Photochemistry and photobiology of nucleic acids
Edited by Shih Yi Wang. Two volumes: v.1, Chemistry; v.2, Photobiology.
The Biological Effects of Ultraviolet Radiation
Walter Harm
The Effects of UV-C on Biological Contamination of AHU's in a Commercial Office Building: Preliminary Results
Richard Shaughnessy, Christine A Rogers, Estelle Levetin
Light Scattering by Small Particles
PHYS TODAY; Henk C. van de Hulst
The Physical State of Viral Nucleic Acid and the Sensitivity of Viruses to Ultraviolet Light
Andrew Rauth
Ultraviolet light action spectra in the range 2250 to 3020 Å have been determined for the plaque-forming ability of the following bacteriophage and animal viruses: T-2, φX-174, R-17, fr, MS2, 7-S, fd, vesicular stomatitis, vaccinia, encephalomyocarditis, reovirus-3, and polyoma. Absolute quantum yields for the plaque-forming ability of MS2, fr, fd, φX-174, and T-2 were determined over the range 2250 to 3020 Å. Relative quantum yields for plaque-forming ability indicated that viruses with single-stranded nucleic acid were on average ten times more sensitive to UV than double-stranded viruses. In addition, for ten of the twelve viruses a relation existed between the shape of their action spectra and the stranded state of their nucleic acid. The ratio of the inactivation cross-section at 2650 Å to that at 2250 Å was 1.0 for single-stranded viruses and 2.0 for viruses with double-stranded nucleic acid. The above relations depended on the stranded state of the nucleic acid, not on the ribose or deoxyribose form of the sugar present.
Evaluation of ultraviolet radiation and dust control measures in control of respiratory disease at a Naval training center
W R Miller, E T Jarrett
Studies of the control of acute respiratory diseases among naval recruits I. A review of a four-year experience with ultraviolet irradiation and dust suppressive measures, 1943–1947
Am J Hyg; T L Willmon, A Hollaender, A D Langmuir
Action Spectra for the Ultraviolet and Visible Light Inactivation of Phage T7: Effect of Host-Cell Reactivation
Meyrick J. Peak, Jennifer G. Peak
Kinetics of the inactivation of phage T7 by six ultraviolet and visible light wavelengths in the far UV (below 320 nm) and the near UV (above 320 nm) were studied, with and without host-cell reactivation. Inactivation was always exponential at the three shorter wavelengths (254, 313, and 334 nm), whereas at the longer wavelengths (365, 405, and 460 nm) a small shoulder (extrapolation number <2) was consistently obtained. The host-cell reactivation sector was prominent with 254 and 313 nm radiation, reduced with 334 nm, and either trivial or absent with the three longer wavelengths. Action spectra for the inactivation revealed small shoulders in the near-UV region, both with and without host-cell reactivation. A comparison of single-strand break (alkali-labile bond) induction by 365 nm radiation in phage T4 with lethality in phage T7 revealed that a frequency of 0.3 single-strand breaks may occur per lethal hit.
On the B to A conformation change of the double helix
James M. Eyster, Earl W Prohofsky
We have investigated the B to A conformation change of DNA double helices by a new method, "soft-mode analysis." We find theoretically that a mode does soften when the vibrational normal modes are perturbed by increasing the electrostatic interaction between the unbalanced charges on atoms in the double helix.
The same mode also softens for enhanced van der Waals interactions. The mode softening indicates the onset of conformation change. The enhancing of the electrostatic and van der Waals interactions mimics the effect of decreasing the polar nature of the solvent or water of hydration associated with the B conformation DNA. We discuss qualitatively the concept of soft modes and their relation to conformation change as well as their applicability to macromolecules. We discuss previous work in which the normal vibrational modes have been calculated. We also discuss the displacement which comes from the soft mode and show that it correlates very well with that expected for the B to A conformation change.

Evaluation of an ultraviolet disinfection unit · December 1987 · The Journal of Prosthetic Dentistry · Robert J. Boylan, Gary Goldstein, Allan Schulman

Challenges of Combined Sewer Overflow Disinfection by Ultraviolet Light Irradiation · July 2001 · Critical Reviews in Environmental Science and Technology · Izabela Wojtenko, Mary K. Stinson, Richard Field
This article examines the performance and effectiveness of ultraviolet (UV) irradiation for disinfection of combined sewer overflow (CSO). Due to the negative impact of conventional water disinfectants on aquatic life, new agents (e.g., UV light) are being investigated for CSO. This low-quality water with high flow rates, volumes, and suspended solids content requires the use of high-rate techniques for its disinfection. Although many pilot-scale studies have investigated UV irradiation as an alternative technology, to date no full-scale CSO treatment facilities in the United States are using UV light. A survey of the major pilot-scale studies investigating UV light as a CSO disinfectant suggests that UV light irradiation, correctly applied, is an effective alternative to chlorination for CSO. The success of disinfecting with UV light seems to be strongly dependent on water quality. Thus, pretreatment of CSO prior to disinfection is a major prerequisite to ensure UV light effectiveness.

Peter A. Lambert
Contents: Introduction; Radiation energy; Radiation sources; Sensitivity and resistance of microorganisms to radiation; Mechanisms of lethal action; Choice of radiation dose; Control procedures; Uses of ionizing radiation; Ultraviolet radiation; Survival curves following UV radiation; Sensitivity to ultraviolet radiation; Target site and inactivation; Repair mechanisms; Effect of ultraviolet radiation on bacterial spores; Practical uses of ultraviolet radiation; Other forms of radiation used in disinfection and sterilization; Conclusions; References

Bromate Reduction by Ultraviolet Light Irradiation Using Medium Pressure Lamp · August 2013 · International Journal of Environmental Studies · Nasr Bensalah, Xu Liu, Ahmed Abdel-Wahab
Bromate reduction in water by ultraviolet irradiation using medium-pressure lamps (UV-M) emitting light in the range of 200–600 nm has been investigated. Effects of certain experimental parameters including the initial bromate concentration, UV light intensity, initial pH, and presence of dissolved organic and inorganic carbon and nitrate on the kinetics and efficiency of UV irradiation for bromate reduction were evaluated.
Experimental results showed that UV-M irradiation achieved complete destruction of bromate and almost total conversion of bromate into bromide for initial bromate concentrations ranging from 10 to 1000 μg/L under different pH conditions. A simple kinetic model for bromate destruction was developed. Bromate decay with time during UV-M irradiation follows pseudo-first-order kinetics. The observed rate constant (kobs) decreased with increasing bromate concentrations up to 100 μg/L and then became constant at 0.058 min⁻¹ for higher bromate concentrations. Increasing the UV light intensity resulted in an increase of the rate of bromate destruction; kobs increased linearly with increasing light intensity. A UV dose of 1000 mJ/cm² was sufficient to reduce the bromate concentration to less than 10 μg/L within 10 min. The presence of dissolved organic carbon or carbonate/bicarbonate slowed the bromate reduction rate due to absorption of a non-negligible fraction of UV light by these compounds. The presence of nitrate affects both the kinetics and the efficiency of bromate reduction.
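The pseudo-first-order model quoted in this abstract is easy to state in code. The sketch below is ours, not the authors': it assumes the decay law C(t) = C0·exp(−kobs·t), uses the plateau rate constant 0.058 min⁻¹ quoted above, and picks an illustrative starting concentration.

```python
import math

def bromate_concentration(c0, k_obs, t):
    """Pseudo-first-order decay: C(t) = C0 * exp(-k_obs * t)."""
    return c0 * math.exp(-k_obs * t)

K_OBS = 0.058   # min^-1, the plateau rate constant from the abstract
C0 = 100.0      # ug/L, an illustrative initial concentration (our assumption)

for t in (0, 10, 20, 40):
    c = bromate_concentration(C0, K_OBS, t)
    print(f"t = {t:2d} min -> C = {c:6.1f} ug/L")
```

Under this model the half-life is ln 2 / kobs ≈ 12 min; real performance also depends on dose, pH and the scavengers listed above.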
Asymptotic lower bound on the algebraic immunity of random balanced multi-output Boolean functions
Claude Carlet and Brahim Merabet
LAGA, Universities of Paris 8 and Paris 13, CNRS, France
Department of Algebraic and Number Theory, University of Sciences and Technology, Houari Boumedienne, Algiers, Algeria, and University of Kasdi Merbah, Ouargla, Algeria
May 2013, 7(2): 197-217. doi: 10.3934/amc.2013.7.197
Received January 2013; Published May 2013

This paper extends the work of F. Didier (IEEE Transactions on Information Theory, Vol. 52(10): 4496-4503, October 2006) on the algebraic immunity of random balanced Boolean functions, into an asymptotic lower bound on the algebraic immunity of random balanced multi-output Boolean functions. (A small brute-force illustration of computing algebraic immunity is sketched after the list of related articles below.)

Keywords: Reed-Muller codes, algebraic immunity, Boolean functions, erasure channel, vectorial functions, generalized Hamming distances.
Mathematics Subject Classification: Primary: 58F15, 58F17; Secondary: 53C3.
Citation: Claude Carlet, Brahim Merabet. Asymptotic lower bound on the algebraic immunity of random balanced multi-output Boolean functions. Advances in Mathematics of Communications, 2013, 7 (2): 197-217. doi: 10.3934/amc.2013.7.197

References:
F. Armknecht, C. Carlet, P. Gaborit, S. Kunzli, W. Meier and O. Ruatta, Efficient computation of algebraic immunity for algebraic and fast algebraic attacks, in (2006), 147.
C. Carlet, A method of construction of balanced functions with optimum algebraic immunity, in (2008).
C. Carlet, Boolean functions for cryptography and error correcting codes, in (2010), 257.
C. Carlet and K. Feng, An infinite class of balanced functions with optimal algebraic immunity, good immunity to fast algebraic attacks and good nonlinearity, in (2008), 425.
N. Courtois and W. Meier, Algebraic attacks on stream ciphers with linear feedback, in (2003), 345.
D. K. Dalai, K. C. Gupta and S. Maitra, Cryptographically significant Boolean functions: construction and analysis in terms of algebraic immunity, in (2005), 98. doi: 10.1007/11502760_7.
F. Didier, A new bound on the block error probability after decoding over the erasure channel, IEEE Trans. Inform. Theory, 52 (2006), 4496. doi: 10.1109/TIT.2006.881719.
K. Feng, Q. Liao and J. Yang, Maximal values of generalized algebraic immunity, Des. Codes Crypt., 50 (2009), 243. doi: 10.1007/s10623-008-9228-0.
R. G. Gallager, "Information Theory and Reliable Communication," John Wiley and Sons, 1968.
M. Liu, Y. Zhang and D. Lin, Perfect algebraic immune functions, in (2012), 172.
F. J. MacWilliams and N. J. Sloane, "The Theory of Error-Correcting Codes," Amsterdam, 1977.
W. Meier, E. Pasalic and C. Carlet, Algebraic attacks and decomposition of Boolean functions, in (2004), 474. doi: 10.1007/978-3-540-24676-3_28.
V. K. Wei, Generalized Hamming weights for linear codes, IEEE Trans. Inform. Theory, 37 (1991), 1412. doi: 10.1109/18.133259.

Related articles:
Olav Geil, Stefano Martin. Relative generalized Hamming weights of q-ary Reed-Muller codes. Advances in Mathematics of Communications, 2017, 11 (3): 503-531. doi: 10.3934/amc.2017041
Sihem Mesnager, Gérard Cohen. Fast algebraic immunity of Boolean functions.
Advances in Mathematics of Communications, 2017, 11 (2): 373-377. doi: 10.3934/amc.2017031
Daniele Bartoli, Adnen Sboui, Leo Storme. Bounds on the number of rational points of algebraic hypersurfaces over finite fields, with applications to projective Reed-Muller codes. Advances in Mathematics of Communications, 2016, 10 (2): 355-365. doi: 10.3934/amc.2016010
Martino Borello, Olivier Mila. Symmetries of weight enumerators and applications to Reed-Muller codes. Advances in Mathematics of Communications, 2019, 13 (2): 313-328. doi: 10.3934/amc.2019021
Jian Liu, Sihem Mesnager, Lusheng Chen. Variation on correlation immune Boolean and vectorial functions. Advances in Mathematics of Communications, 2016, 10 (4): 895-919. doi: 10.3934/amc.2016048
Andreas Klein, Leo Storme. On the non-minimality of the largest weight codewords in the binary Reed-Muller codes. Advances in Mathematics of Communications, 2011, 5 (2): 333-337. doi: 10.3934/amc.2011.5.333
Sara D. Cardell, Joan-Josep Climent. An approach to the performance of SPC product codes on the erasure channel. Advances in Mathematics of Communications, 2016, 10 (1): 11-28. doi: 10.3934/amc.2016.10.11
Constanza Riera, Pantelimon Stănică. Landscape Boolean functions. Advances in Mathematics of Communications, 2019, 13 (4): 613-627. doi: 10.3934/amc.2019038
Claude Carlet, Serge Feukoua. Three basic questions on Boolean functions. Advances in Mathematics of Communications, 2017, 11 (4): 837-855. doi: 10.3934/amc.2017061
Carolyn Mayer, Kathryn Haymaker, Christine A. Kelley. Channel decomposition for multilevel codes over multilevel and partial erasure channels. Advances in Mathematics of Communications, 2018, 12 (1): 151-168. doi: 10.3934/amc.2018010
Joan-Josep Climent, Diego Napp, Raquel Pinto, Rita Simões. Decoding of $2$D convolutional codes over an erasure channel. Advances in Mathematics of Communications, 2016, 10 (1): 179-193. doi: 10.3934/amc.2016.10.179
David Keyes. $\mathbb F_p$-codes, theta functions and the Hamming weight MacWilliams identity. Advances in Mathematics of Communications, 2012, 6 (4): 401-418. doi: 10.3934/amc.2012.6.401
Yang Yang, Xiaohu Tang, Guang Gong. Even periodic and odd periodic complementary sequence pairs from generalized Boolean functions. Advances in Mathematics of Communications, 2013, 7 (2): 113-125. doi: 10.3934/amc.2013.7.113
Claude Carlet, Khoongming Khoo, Chu-Wee Lim, Chuan-Wen Loe. On an improved correlation analysis of stream ciphers using multi-output Boolean functions and the related generalized notion of nonlinearity. Advances in Mathematics of Communications, 2008, 2 (2): 201-221. doi: 10.3934/amc.2008.2.201
Peter Beelen, David Glynn, Tom Høholdt, Krishna Kaipa. Counting generalized Reed-Solomon codes. Advances in Mathematics of Communications, 2017, 11 (4): 777-790. doi: 10.3934/amc.2017057
Ayça Çeşmelioğlu, Wilfried Meidl. Bent and vectorial bent functions, partial difference sets, and strongly regular graphs. Advances in Mathematics of Communications, 2018, 12 (4): 691-705. doi: 10.3934/amc.2018041
Yu Zhou. On the distribution of auto-correlation value of balanced Boolean functions. Advances in Mathematics of Communications, 2013, 7 (3): 335-347. doi: 10.3934/amc.2013.7.335
JiYoon Jung, Carl Mummert, Elizabeth Niese, Michael Schroeder. On erasure combinatorial batch codes. Advances in Mathematics of Communications, 2018, 12 (1): 49-65. doi: 10.3934/amc.2018003
Alonso Sepúlveda Castellanos. Generalized Hamming weights of codes over the $\mathcal{GH}$ curve.
Advances in Mathematics of Communications, 2017, 11 (1): 115-122. doi: 10.3934/amc.2017006
Sihong Su. A new construction of rotation symmetric bent functions with maximal algebraic degree. Advances in Mathematics of Communications, 2019, 13 (2): 253-265. doi: 10.3934/amc.2019017
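As promised above, here is a small brute-force sketch of the notion at the heart of the paper. It is our illustration, not the authors' method or code: the algebraic immunity of a Boolean function f is the least degree d such that f or f⊕1 admits a nonzero annihilator g (a function with f·g = 0) of degree at most d, and for tiny n it can be found by linear algebra over GF(2). All function and variable names below are ours.

```python
import itertools
import numpy as np

def gf2_rank(m):
    """Rank of a 0/1 matrix over GF(2) by Gaussian elimination."""
    m = (m.copy() % 2).astype(np.uint8)
    rank = 0
    rows, cols = m.shape
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if m[r, c]), None)
        if pivot is None:
            continue
        m[[rank, pivot]] = m[[pivot, rank]]   # move pivot row into place
        for r in range(rows):
            if r != rank and m[r, c]:
                m[r] ^= m[rank]               # eliminate column c elsewhere
        rank += 1
    return rank

def monomials(n, d):
    """All monomials of degree <= d in n variables, as tuples of variable indices."""
    return [s for k in range(d + 1) for s in itertools.combinations(range(n), k)]

def has_annihilator(support, n, d):
    """True iff some nonzero g with deg(g) <= d vanishes on every point of `support`.
    Assumes `support` is nonempty (f not constant)."""
    mons = monomials(n, d)
    rows = [[int(all(x[i] for i in mon)) for mon in mons] for x in support]
    # A nonzero g exists iff the monomial-evaluation columns are linearly dependent.
    return gf2_rank(np.array(rows, dtype=np.uint8)) < len(mons)

def algebraic_immunity(truth_table, n):
    """Least d such that f or f+1 has a nonzero annihilator of degree <= d."""
    points = list(itertools.product((0, 1), repeat=n))
    supp = [x for x, v in zip(points, truth_table) if v]
    zeros = [x for x, v in zip(points, truth_table) if not v]
    for d in range(n + 1):
        if has_annihilator(supp, n, d) or has_annihilator(zeros, n, d):
            return d
    return n

# Majority on 3 variables is known to reach the optimal value ceil(n/2) = 2.
n = 3
tt = [int(sum(x) >= 2) for x in itertools.product((0, 1), repeat=n)]
print(algebraic_immunity(tt, n))  # -> 2
```

This exhaustive approach scales only to very small n; the paper's point is precisely to bound the quantity asymptotically for random balanced (multi-output) functions instead of computing it.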
PF3 electron geometry

Phosphorus trifluoride is the name of PF3. It's a gas that is known for its toxicity. The valence shell electron pair repulsion (VSEPR) theory is a model used to predict 3-D molecular geometry based on the number of valence-shell electron pairs around the atoms in a molecule or ion: the model assumes that electron pairs arrange themselves so as to be as far apart as possible. The region in space occupied by a pair of electrons is termed the domain of the electron pair, and the domain is related to the orbitals holding the electrons. Using the VSEPR theory, the electron bond pairs and lone pairs on the centre atom help us predict the shape of a molecule. To apply it, first draw out the Lewis structure for the molecule, then count the electron pairs around the central atom, treating both double and triple bonds as if they were single electron pairs; pairs of non-bonding electrons on the central atom behave much like the regions of negative charge associated with the bonded atoms. The electron geometry counts all domains, including lone pairs; the molecular geometry describes only the positions of the atoms. The two are the same when every electron group bonds two atoms together, while the presence of unbonded lone-pair electrons gives a different molecular geometry and electron geometry.

In the molecule of PF3, the phosphorus atom is the central atom, surrounded by three fluorine atoms. Fluorine needs one electron to complete its octet, so each P–F bond is a single bond, and P has a lone pair of electrons in addition to the three F attached, making four electron domains in all. The electron geometry ("electronic domain geometry") for PF3 is therefore tetrahedral, while the molecular geometry is trigonal pyramidal, with the partial charge distribution on the phosphorus. The lone pair takes up more space than a bonding pair: in PF3 the lone pair on the phosphorus pushes the P–F bonding electrons away from itself, resulting in an F–P–F bond angle of 97.8°, which is appreciably smaller than the ideal bond angle of 109.5°. (In OPF3, the lone pair is replaced with a P–O bond, which occupies less space than the lone pair in PF3, so the distortion is smaller.) The molar mass of PF3 is 30.97 g/mol (P) + 3 × 19.00 g/mol (F) ≈ 87.97 g/mol. Because of the lone pair and the resulting asymmetry, PF3 is polar.

Electronegativity also matters for polarity: it is a well-known fact that the larger the electronegativity difference between bonded atoms, the greater the chance of polarity. The molecular geometry of PCl3, like that of PF3, is trigonal pyramidal with the partial charge distribution on the phosphorus; phosphorus has an electronegativity value of 2.19 and chlorine comes with 3.16, so the end difference of 0.97 is quite significant and the molecule is polar.

Some comparison cases:
Linear electron geometry: a ball-and-stick model of a linear AX2 compound has the two X atoms 180° away from one another.
The geometry of the BF3 molecule is trigonal planar: three atoms around one atom in the middle, with the peripheral atoms all in one plane at 120° bond angles, making an equilateral triangle; the charge distribution is symmetric, so BF3 is nonpolar (eg = trigonal planar, mg = trigonal planar).
The molecular geometry of PF5 is trigonal bipyramidal with symmetric charge distribution; therefore this molecule is nonpolar.
The molecular geometry of AlBr3 is trigonal planar with symmetric charge distribution around the central atom; therefore this molecule is nonpolar.
SiF4: eg = tetrahedral, mg = tetrahedral.
An octahedral electron arrangement with one lone pair becomes a square pyramid, and this square pyramidal shape is distorted: the lone-pair electrons push the fluorines away, distorting the base of the pyramid.
The ammonia molecule (NH3) is, like PF3, trigonal pyramidal.
For the azide ion N3−, 3 nitrogens and a negative charge give 16 electrons total: we have a central nitrogen double-bonded to two separate nitrogens (completing the central atom's octet), so the ion is linear.

Worked problems:
Determine the electron geometry, molecular geometry and idealized bond angles for each molecule: PF3, SBr2, CHCl3, CS2. In which cases do you expect deviations from the idealized bond angle (1. PF3, 2. SBr2, 3. CH3Br, 4. BCl3)? PF3 (tetrahedral / trigonal pyramidal, idealized 109.5°) and SBr2 (tetrahedral / bent, idealized 109.5°) deviate because of lone pairs on the central atom; CHCl3 (tetrahedral / tetrahedral), CS2 (linear / linear), CH3Br and BCl3 (trigonal planar) do not.
Determine the electron geometry (eg), molecular geometry (mg) and polarity of SO3. A) eg = tetrahedral, mg = trigonal pyramidal, polar; B) eg = tetrahedral, mg = tetrahedral, nonpolar; C) eg = trigonal planar, mg = trigonal planar, nonpolar; D) eg = trigonal bipyramidal, mg = trigonal planar, polar; E) eg = trigonal pyramidal, mg = bent, nonpolar. The answer is C.
Determine the electron geometry (eg) and molecular geometry (mg) of the underlined carbon in CH3CN: the nitrile carbon has two electron domains, so both are linear.

Predicting molecular geometry: here is a chart that describes the usual geometry for molecules based on their bonding behavior, given as a code sketch below.
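The chart can be captured in a small lookup table keyed by the number of bonded atoms and lone pairs on the central atom. This is our illustrative sketch, not code from any chemistry package; all names are ours.

```python
# (bonded atoms, lone pairs) -> (electron geometry, molecular geometry)
VSEPR = {
    (2, 0): ("linear", "linear"),
    (3, 0): ("trigonal planar", "trigonal planar"),
    (2, 1): ("trigonal planar", "bent"),
    (4, 0): ("tetrahedral", "tetrahedral"),
    (3, 1): ("tetrahedral", "trigonal pyramidal"),
    (2, 2): ("tetrahedral", "bent"),
    (5, 0): ("trigonal bipyramidal", "trigonal bipyramidal"),
    (5, 1): ("octahedral", "square pyramidal"),
    (6, 0): ("octahedral", "octahedral"),
}

def geometry(bonded, lone):
    eg, mg = VSEPR[(bonded, lone)]
    return f"eg = {eg}, mg = {mg}"

print("PF3 :", geometry(3, 1))  # eg = tetrahedral, mg = trigonal pyramidal
print("SBr2:", geometry(2, 2))  # eg = tetrahedral, mg = bent
print("PF5 :", geometry(5, 0))  # eg = trigonal bipyramidal, mg = trigonal bipyramidal
```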
The life and numbers of Fibonacci
R. Knott and the Plus team
Submitted by plusadmin on November 4, 2013

For a brief introduction to the Fibonacci sequence, see here.

Fibonacci is one of the most famous names in mathematics. This would come as a surprise to Leonardo Pisano, the mathematician we now know by that name. And he might have been equally surprised that he has been immortalised in the famous sequence – 0, 1, 1, 2, 3, 5, 8, 13, ... – rather than for what is considered his far greater mathematical achievement – helping to popularise our modern number system in the Latin-speaking world.

The Roman Empire left Europe with the Roman numeral system which we still see, amongst other places, in the copyright notices after films and TV programmes (2013 is MMXIII). The Roman numerals were not displaced until the mid 13th Century AD, and Leonardo Pisano's book, Liber Abaci (which means "The Book of Calculations"), was one of the first Western books to describe their eventual replacement.

Leonardo Fibonacci c1175-1250.

Leonardo Pisano was born late in the twelfth century in Pisa, Italy: Pisano in Italian indicated that he was from Pisa, in the same way Mancunian indicates that I am from Manchester. His father was a merchant called Guglielmo Bonaccio and it's because of his father's name that Leonardo Pisano became known as Fibonacci. Centuries later, when scholars were studying the handwritten copies of Liber Abaci (as it was published before printing was invented), they misinterpreted part of the title – "filius Bonacci" meaning "son of Bonaccio" – as his surname, and Fibonacci was born.

Fibonacci (as we'll carry on calling him) spent his childhood in North Africa where his father was a customs officer. He was educated by the Moors and travelled widely in Barbary (Algeria), and was later sent on business trips to Egypt, Syria, Greece, Sicily and Provence. In 1200 he returned to Pisa and used the knowledge he had gained on his travels to write Liber Abaci (published in 1202) in which he introduced the Latin-speaking world to the decimal number system. The first chapter of Part 1 begins: "These are the nine figures of the Indians: 9 8 7 6 5 4 3 2 1. With these nine figures, and with this sign 0 which in Arabic is called zephirum, any number can be written, as will be demonstrated."

Italy at the time was made up of small independent towns and regions and this led to use of many kinds of weights and money systems. Merchants had to convert from one to another whenever they traded between these systems. Fibonacci wrote Liber Abaci for these merchants, filled with practical problems and worked examples demonstrating how simply commercial and mathematical calculations could be done with this new number system compared to the unwieldy Roman numerals. The impact of Fibonacci's book as the beginning of the spread of decimal numbers was his greatest mathematical achievement. However, Fibonacci is better remembered for a certain sequence of numbers that appeared as an example in Liber Abaci.

A page of Fibonacci's Liber Abaci from the Biblioteca Nazionale di Firenze showing the Fibonacci sequence (in the box on the right).

The problem with rabbits

One of the mathematical problems Fibonacci investigated in Liber Abaci was about how fast rabbits could breed in ideal circumstances. Suppose a newly-born pair of rabbits, one male, one female, are put in a field. Rabbits are able to mate at the age of one month so that at the end of its second month a female can produce another pair of rabbits.
Suppose that our rabbits never die and that the female always produces one new pair (one male, one female) every month from the second month on. The puzzle that Fibonacci posed was... How many pairs will there be in one year?

At the end of the first month, they mate, but there is still only 1 pair. At the end of the second month the female produces a new pair, so now there are 2 pairs of rabbits. At the end of the third month, the original female produces a second pair, making 3 pairs in all. At the end of the fourth month, the original female has produced yet another new pair, and the female born two months ago produced her first pair also, making 5 pairs.

Now imagine that there are x_n pairs of rabbits after n months. The number of pairs in month n+1 will be x_n (in this problem, rabbits never die) plus the number of new pairs born. But new pairs are only born to pairs at least 1 month old, so there will be x_{n-1} new pairs. So we have x_{n+1} = x_n + x_{n-1}, which is simply the rule for generating the Fibonacci numbers: add the last two to get the next. Following this through you'll find that after 12 months (or 1 year), there will be 233 pairs of rabbits.

Bees are better

The rabbit problem is obviously very contrived, but the Fibonacci sequence does occur in real populations. Honeybees provide an example. In a colony of honeybees there is one special female called the queen. The other females are worker bees who, unlike the queen bee, produce no eggs. The male bees do no work and are called drone bees. Males are produced by the queen's unfertilised eggs, so male bees only have a mother but no father. All the females are produced when the queen has mated with a male and so have two parents. Females usually end up as worker bees but some are fed with a special substance called royal jelly which makes them grow into queens ready to go off to start a new colony when the bees form a swarm and leave their home (a hive) in search of a place to build a new nest. So female bees have two parents, a male and a female, whereas male bees have just one parent, a female.

Let's look at the family tree of a male drone bee. He has 1 parent, a female. He has 2 grandparents, since his mother had two parents, a male and a female. He has 3 great-grandparents: his grandmother had two parents but his grandfather had only one. How many great-great-grandparents did he have? Again we see the Fibonacci numbers:

Number of:      | parents | grandparents | great-grandparents | great-great-grandparents | great-great-great-grandparents
of a MALE bee   | 1       | 2            | 3                  | 5                        | 8
of a FEMALE bee | 2       | 3            | 5                  | 8                        | 13

Spirals and shells

Bee populations aren't the only place in nature where Fibonacci numbers occur, they also appear in the beautiful shapes of shells. To see this, let's build up a picture starting with two small squares of size 1 next to each other. On top of both of these draw a square of size 2 (=1+1). We can now draw a new square – touching both one of the unit squares and the latest square of side 2 – so having sides 3 units long; and then another touching both the 2-square and the 3-square (which has sides of 5 units). We can continue adding squares around the picture, each new square having a side which is as long as the sum of the latest two squares' sides. This set of rectangles whose sides are two successive Fibonacci numbers in length and which are composed of squares with sides which are Fibonacci numbers, we will call the Fibonacci Rectangles. If we now draw a quarter of a circle in each square, we can build up a sort of spiral.
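The add-the-last-two rule is one line of code. Here is a minimal sketch (ours, purely for illustration) reproducing the monthly pair counts above:

```python
def rabbit_pairs(months):
    """Pairs at the end of each month in Fibonacci's rabbit problem.
    Month 1 has 1 pair and month 2 has 2; after that, x[n+1] = x[n] + x[n-1]."""
    counts = [1, 2]
    while len(counts) < months:
        counts.append(counts[-1] + counts[-2])
    return counts

print(rabbit_pairs(12))
# [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233] -> 233 pairs after a year
```

The same recurrence generates the drone bee's ancestor counts in the table above (1, 2, 3, 5, 8 for a male; 2, 3, 5, 8, 13 for a female) and the side lengths of the squares in the Fibonacci Rectangles.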
The spiral is not a true mathematical spiral (since it is made up of fragments which are parts of circles and does not go on getting smaller and smaller) but it is a good approximation to a kind of spiral that does appear often in nature. Such spirals are seen in the shape of shells of snails and sea shells. The image below of a cross-section of a nautilus shell shows the spiral curve of the shell and the internal chambers that the animal using it adds on as it grows. The chambers provide buoyancy in the water.

Fibonacci numbers also appear in plants and flowers. Some plants branch in such a way that they always have a Fibonacci number of growing points. Flowers often have a Fibonacci number of petals; daisies can have 34, 55 or even as many as 89 petals!

A particularly beautiful appearance of Fibonacci numbers is in the spirals of seeds in a seed head. The next time you see a sunflower, look at the arrangements of the seeds at its centre. They appear to be spiralling outwards both to the left and the right. At the edge of this picture of a sunflower, if you count those curves of seeds spiralling to the left as you go outwards, there are 55 spirals. At the same point there are 34 spirals of seeds spiralling to the right. A little further towards the centre and you can count 34 spirals to the left and 21 spirals to the right. The pair of numbers (counting spirals curving left and curving right) are (almost always) neighbours in the Fibonacci series.

The same happens in many seed and flower heads in nature. The reason seems to be that this arrangement forms an optimal packing of the seeds so that, no matter how large the seed head, they are uniformly packed at any stage, all the seeds being the same size, no crowding in the centre and not too sparse at the edges. Nature seems to use the same pattern to arrange petals around the edge of a flower and to place leaves round a stem. What is more, all of these maintain their efficiency as the plant continues to grow and that's a lot to ask of a single process! So just how do plants grow to maintain this optimality of design?

Golden growth

Botanists have shown that plants grow from a single tiny group of cells right at the tip of any growing plant, called the meristem. There is a separate meristem at the end of each branch or twig where new cells are formed. Once formed, they grow in size, but new cells are only formed at such growing points. Cells earlier down the stem expand and so the growing point rises. Also, these cells grow in a spiral fashion: it's as if the meristem turns by an angle, produces a new cell, turns again by the same angle, produces a new cell, and so on. These cells may then become a seed, a new leaf, a new branch, or perhaps on a flower become petals and stamens.

The leaves here are numbered in turn – each is exactly 0.618 of a clockwise turn (222.5°) from the previous one.

The amazing thing is that a single fixed angle of rotation can produce the optimal design no matter how big the plant grows. The principle that a single angle produces uniform packings no matter how much growth appears was suspected as early as last century but only proved mathematically in 1993 by Stéphane Douady and Yves Couder, two French mathematicians. Making 0.618 of a turn before producing a new seed (or leaf, petal, etc) produces the optimal packing of seeds no matter the size of the seed head. But where does this magic number 0.618 come from?
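The rotate-and-grow rule just described is easy to simulate. The sketch below is ours, under one standard modelling assumption not stated in the article: placing seed k at a radius proportional to √k keeps the packing density roughly uniform.

```python
import math

GOLDEN_TURN = 0.618034  # fraction of a full turn between successive seeds

def seed_positions(n):
    """(x, y) positions of n seeds: each seed sits GOLDEN_TURN of a turn past
    the previous one, at radius sqrt(k) (a standard uniform-density choice)."""
    pts = []
    for k in range(n):
        angle = 2 * math.pi * GOLDEN_TURN * k
        r = math.sqrt(k)
        pts.append((r * math.cos(angle), r * math.sin(angle)))
    return pts

print(seed_positions(4))
```

Swapping GOLDEN_TURN for 1/2, 12/25, 3/5 or the decimal part of pi reproduces the two-armed, revolving, five-armed and seven-armed patterns described in the captions below.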
The golden ratio

If we take the ratio of two successive numbers in Fibonacci's series, dividing each by the number before it, we will find the following series of numbers: 1/1 = 1, 2/1 = 2, 3/2 = 1.5, 5/3 = 1.666..., 8/5 = 1.6, 13/8 = 1.625, 21/13 = 1.61538...

If you plot a graph of these values you'll see that they seem to be tending to a limit, which we call the golden ratio (also known as the golden number and golden section).

Ratio of successive Fibonacci terms.

It has a value of (1 + √5)/2 (approximately 1.618034) and is often represented by a Greek letter Phi, written as Φ. The closely related value which we write as φ, a lowercase phi, is just the decimal part of Phi, namely 0.618034... (φ = Φ − 1, which also equals 1/Φ), the number that accounts for the spirals in the seedheads and the arrangements of leaves in many plants. But why do we see phi in so many plants?

The number Phi (1.618034...), and therefore also phi (0.618034...), are irrational numbers: they can't be written as a simple fraction. Let's see what would happen if the meristem in a seed head instead turned by some simpler number, for example the fraction 1/2. After two turns through half of a circle we would be back to where the first seed was produced. Over time, turning by half a turn between seeds would produce a seed head with two arms radiating from a central point, leaving lots of wasted space.

A seed head produced by 0.5=1/2 turns between seeds: alternate seeds line up.
A seed head produced by 0.48=12/25 turns between seeds: the seeds form two revolving arms.
A seed head produced by 0.6=3/5 turns between seeds: the seeds form 5 straight arms.
Pi turns between seeds produces seven spiralling arms.

Something similar happens for any other simple fraction of a turn: seeds grow in spiral arms that leave a lot of space between them (the number of arms is the denominator of the fraction). So the best value for the turns between seeds will be an irrational number. But not just any irrational number will do. For example, the seed head created with pi turns per seed seems to have seven spiralling arms of seeds. This is because 22/7 is a very good rational approximation of pi. What is needed in order not to waste space is an irrational number that is not well approximated by a rational number. And it turns out that Phi (1.618034...) and its decimal part phi (0.618034...) are the "most irrational" of all irrational numbers. (You can find out why in Chaos in number land: the secret life of continued fractions.) This is why a turn of Phi gives the optimal packing of seeds and leaves in plants. It also explains why the Fibonacci numbers appear in the leaf arrangements and as the number of spirals in seedheads. Adjacent Fibonacci numbers give the best approximations of the golden ratio. They take turns at being the denominator of the approximations and define the number of spirals as the seed heads increase in size.

How did so many plants discover this beautiful and useful number, Phi? Obviously not from solving the maths as Fibonacci did. Instead we assume that, just as the ratio of successive Fibonacci numbers eventually settles on the golden ratio, evolution gradually settled on the right number too. The legacy of Leonardo Pisano, aka Fibonacci, lies in the heart of every flower, as well as in the heart of our number system.

If you have enjoyed this article you might like to visit Fibonacci Numbers and the Golden Section.

This article is based on material written by Dr R.
Knott, who was previously a lecturer in the Department of Computing Studies at the University of Surrey. Knott started the website on Fibonacci Numbers and the Golden Section back in 1996 as an experiment at using the web to inspire and encourage more maths investigations both inside and outside of school time. It has since grown and now covers many other subjects, all with interactive elements and online calculators. Although now retired, Knott still maintains and extends the web pages. He is currently a Visiting Fellow at the University of Surrey and gives talks all over the country to schools, universities, conferences and maths societies. He also likes walking, mathematical recreations, growing things to eat and cooking them.

Comments

Submitted by Anonymous on May 4, 2011: Very interesting stuff, thank you.

Submitted by Anonymous on June 6, 2011: Interesting, indeed! Now I know where the producers of the IQ tests got their number series questions.

Submitted by Anonymous on June 14, 2011: It's in fact the golden mean sequence used in sacred geometry. Phi is what Fibonacci used to make the mathematical formula. He didn't invent the pattern itself. That was brought here. This pattern will be above us cosmically to view in 2012.

Submitted by Anonymous on November 19, 2011: Please tell more - what do you mean above us? Exactly when will this occur? I love this stuff! Love maths.

Submitted by Anonymous on February 17, 2013: I've got a maths homework to research famous mathematicians and do a presentation on one, and I chose Fibonacci. This has really helped me, thanks a lot.

Submitted by Anonymous on January 11, 2013: So when did we see this pattern? It's 2013 now and I didn't hear anything about this?? Jane @ Arraial d'Ajuda

Submitted by Anonymous on August 2, 2013: Thank you. This was the most complete report I have read on the Fibonacci. I needed this. Creating the code for secret messages today, how would it look using the Fibonacci sequence and the encryption at Kryptos (CIA) superimposed on a Möbius circle? Anyone who comes up with a solution will be acknowledged in my new novel. Thanks.

Submitted by Anonymous on July 12, 2011: My name is Saksham Goyal from India. We could code like this: 1st letter a as 3rd Fibonacci term 1, 2nd letter b as 4th Fibonacci term 2; for spaces we can use 0. For a very long number such as 75025 (26th, for z) we can use 7+5+0+2+5 = 19. In case of same sums, like 75025 and 2584 (18th), we can either write the full number or write 19 to base 18 or to base 26 (i.e. in subscript). In case you like this, please email me at sakshamgoyal06@yahoo.com

Reply (cryptos): Well worth thinking about. Indeed it crossed my mind when I was composing a short story. I would say non-verbal signals such as musical tones would be my choice, and a good ear training of course.

Submitted by Anonymous on December 1, 2011: I knew about "The Golden Ratio" before but never understood how and why this is applicable to the "Fibonacci Sequence"... Thanks a lot for your explanation. Also, nature loves mathematics and it is again proved by your example.

Submitted by Anonymous on March 18, 2012: This helped me understand more about Fibonacci and his sequence of numbers for my report on him..
Thanks :)

Hybrid Fibonacci number sequence - Submitted by Anonymous on March 6, 2013: For those that think the Fibonacci number (Fn) sequence is just a coincidence, there are MANY "non" math equations and sums on the "insides". For example, combining 3 consecutive Fn creates a new hybrid Fibonacci number that still sums in a sequence!! (btw, a "0"* is added where needed.) Ex: 0,1,1,2,3,5,8 become 011, 112, 123, 235, 358, etc.: 011+112=123, 20305+30508=50813*, 50813+81321=132134, 81321+132134=213455, 132134+213455=345589, 21034055+34055089=55089144*. This works forwards and backwards. It also works with the "sister" Lucas numbers (Ln) sequence, and of course adding 2 hybrid Fn equals a hybrid Ln and all its variations... and yes, they still sum to the golden ratio!! The Fn are the most unique sequence there is. IMO it is the key to everything.

Asking for more info: Hi, I like to study Fibonacci numbers and would like to ask you more about this comment, if you don't mind... pls contact me linelites 'at' gmail.com

Your site was a saviour: Your site was a saviour to me and my project partner; we had researched why Fibonacci had formed this sort of formula. Thanks a lot.

Reply: You know who else is your savior? The author of this article.

Fibonacci numbers - Submitted by Anonymous on September 9, 2013: The Fibonacci sequence is interesting. For fun and learning, I did some C++ codes for it here: http://cppstudent.blogspot.com/2013/09/fibonacci-using-recursion.html

Fibonacci's "Liber Abaci" and "casting out the nines" - Submitted by Anonymous on November 4, 2013: Fibonacci also described the method of "casting out the nines" (to check the accuracy of arithmetical calculations) in his book "Liber Abaci" in 1202. See http://www.significancemagazine.org/details/webexclusive/1382001/One-sma... (A small code illustration of this check appears below.)

What are the 2 major works of Fibonacci?

Why this number appears - Submitted by Anonymous on July 9, 2015: It has already been seen that the Fibonacci sequence or golden ratio appears in nature, as we can see in many examples. However, I would like to know if there is an explanation why this specific number appears. Is there any plausible mathematical explanation for it? Why has nature chosen this constant number?

Reply: Actually there is. If you go on YouTube and search up Vi Hart, go and watch her videos on Fibonacci and plants. Basically plants have a hormone that tells them to grow, and they grow away from each other; if you look at the picture in the article above you can see that happen. And it just so happens that it happens in Fibonacci numbers. Vi Hart's videos will explain it in more depth.

Submitted by Marianne on November 26, 2015: The explanation is in the last paragraphs of this article. It is connected to the irrationality of phi.

The kind of number system Europe used before Fibonacci - Submitted by Anonymous on February 5, 2016: What was the kind of number system Europe used before Fibonacci introduced the new number system?

Roman Numerals - Submitted by Anonymous on April 6, 2016: It was the Roman numeral system, which was very messy when doing long calculations.

This piece - Submitted by roy on June 21, 2016: A true romantic discourse on Fibonacci numbers, excellent.

Submitted by Anynonymus on August 18, 2016: This was extremely helpful, thanks!!!!

Submitted by Bedazzle on October 18, 2016: How was the mathematical rule for the Fibonacci pattern derived?
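To illustrate the casting-out-nines check mentioned in the "Liber Abaci" comment above, here is a tiny sketch of ours (purely illustrative):

```python
def digit_root(n):
    """Iterated digit sum of a nonnegative integer: n mod 9, with 9 for nonzero multiples of 9."""
    if n == 0:
        return 0
    return 9 if n % 9 == 0 else n % 9

def check_product(a, b, claimed):
    """Casting out nines: a necessary (but not sufficient) check that a*b == claimed."""
    return digit_root(digit_root(a) * digit_root(b)) == digit_root(claimed)

print(check_product(127, 46, 5842))  # True: 127*46 = 5842 passes the check
print(check_product(127, 46, 5843))  # False: the digit roots disagree, so it is wrong
```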
A question about rabbits - Submitted by Stephen Bartels on May 19, 2017: In this paragraph: "Now imagine that there are x_n pairs of rabbits after n months. The number of pairs in month n+1 will be x_n (in this problem, rabbits never die) plus the number of new pairs born. But new pairs are only born to pairs at least 1 month old, so there will be x_{n-1} new pairs." You say that new pairs are only born to pairs at least 1 month old, but is this correct? Earlier you state that pairs can mate at 1 month but only produce new pairs at 2 months. Therefore new pairs are only born to pairs at least 2 months old (not 1 month old). If this is so, how does it affect your formula?

Answer to a question about rabbits: He actually mentions this and takes it into his calculations.

Fibonacci sequence - Submitted by ALAA on August 4, 2017: Hi, I think that I discovered a new sequence related to the Fibonacci sequence. You might know that the Fibonacci sequence starts with 0 and 1 and the following number is the sum of the previous 2; every time you go further in the sequence, the ratio of two consecutive numbers gets nearer to the golden ratio (phi). But you can start with any two numbers, not only 0 and 1, for example (2, 6; 490, 10; 56, 56... etc.) or two similar numbers, and the ratio of two consecutive numbers is also the golden ratio. If we think deeper, we can start with phi and phi as the first two numbers, and the ratio of two consecutive numbers (if you choose them far away from the beginning) is also approximately phi. But if you look at the numbers of this sequence, an amazing pattern appears. The first 4 or 5 numbers are ordinary, but the 5th or 6th numbers are the beginning of the pattern. The digits after the decimal point of these numbers are as follows: 0,9,0,9,0,99,00,99,00,99,000,999,000… and so on!!!

Submitted by matus hromec on March 2, 2018: One of the best articles that I ever read.

Submitted by Rajan L on October 14, 2018: Hi, can you explain Fibonacci retracements and how they work?

Fibonacci retracement - Submitted by Liam Morris on January 7, 2019: In reply to the man's question 10/18, about Fibonacci retracement, this is an expression in the investment industry referring to the % of a recent movement in an investment's market price that is 'retraced' after a recognized reversal in the security's price. Technical analysts within the investment management world use the Fibonacci ratio, and variations, to aid in forecasting the probability of the 'next turn' in the dynamic market price of a specified security or market index. It is only one of many types of calculations and geometric observations investment analysts use in forecasting 'probable' turning points (or reversals) in investment prices. For example, if stock A's recent price movement was downward by $10, before it turned, or bounced, then the 'retracement' of the $10 is what is monitored with Fibonacci ratios to determine the next possible reversal of the stock price. In this example, if one utilizes the most popular Fibonacci ratio of .618, then the retracement of the $10 in our example might be $6.18, the amount the security's price may advance before reversing again, as in a stair-step formation. As most people are aware, stock market prices don't go straight up or down over time; they go up and down in a stair-step fashion, and it's the Fibonacci ratios that help analysts determine these possible changes in the current trend.
In the real world, investments don't turn on the Fibonacci-calculated pricing, but they do seem to turn at certain variations of the Fibonacci ratios, i.e., the square root of .618, or Fib squared, or even the reciprocal of Fib compared to 1, i.e., .382 (1 − .618). None of the various Fib ratios guarantee a turning point in a stock's price, but they identify possible pricing points that wouldn't exist otherwise. It's not certain when Leonardo's famous ratio started to be utilized in the investment industry, but that would be something to investigate. Liam Morris, Fib Fan

Fibonacci number - Submitted by V.S.Gopalakrishnan on October 15, 2018: Actually the Fibonacci number was nothing original. When he was in North Africa, he studied translated ancient Indian mathematical books from which he learnt the concept. After he went back to Italy, Europe called the numbers Fibonacci numbers.
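The arithmetic in the retracement comment above is simple enough to sketch. This is our illustration of the convention described there, not trading advice or any library's API:

```python
def retracement_levels(low, high, ratios=(0.382, 0.5, 0.618)):
    """Price levels at which a move from `low` to `high` has retraced each ratio.
    The ratios are the conventional ones named in the comment; nothing here
    predicts an actual market turn."""
    move = high - low
    return {r: round(high - r * move, 2) for r in ratios}

# The comment's example: a $10 move; a 0.618 retracement of $10 is $6.18.
print(retracement_levels(90.0, 100.0))
# {0.382: 96.18, 0.5: 95.0, 0.618: 93.82}
```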
Volume 84, Numbers 3-4, 2018

Béla Szőkefalvi-Nagy Medal 2018

On the set of principal congruences in a distributive congruence lattice of an algebra
Gábor Czédli
Abstract. Let $Q$ be a subset of a finite distributive lattice $D$. An algebra $A$ \emph{represents the inclusion $Q\subseteq D$ by principal congruences} if the congruence lattice of $A$ is isomorphic to $D$ and the ordered set of principal congruences of $A$ corresponds to $Q$ under this isomorphism. If there is such an algebra for \emph{every} subset $Q$ containing $0$, $1$, and all join-irreducible elements of $D$, then $D$ is said to be \emph{fully (A1)-representable}. We prove that every fully (A1)-representable finite distributive lattice is planar and it has at most one join-reducible coatom. Conversely, we prove that every finite planar distributive lattice with at most one join-reducible coatom is \emph{fully chain-representable} in the sense of a recent paper of G. Grätzer. Combining the results of this paper with another result of the present author, it follows that every fully (A1)-representable finite distributive lattice is ``fully representable'' even by principal congruences of \emph{finite lattices}. Finally, we prove that every \emph{chain-representable} inclusion $Q\subseteq D$ can be represented by the principal congruences of a finite (and quite small) algebra.
DOI: 10.14232/actasm-017-538-7
AMS Subject Classification (1991): 06B10
Keyword(s): distributive lattice, principal lattice congruence, congruence lattice, chain-representability
Received June 1, 2017 and in final form February 8, 2018. (Registered under 38/2017.)

Cross-connections and variants of the full transformation semigroup
P. A. Azeef Muhammed
Abstract. Cross-connection theory propounded by Nambooripad describes the ideal structure of a regular semigroup using the categories of principal left (right) ideals. A variant $\mathscr{T}_X^\theta $ of the full transformation semigroup $(\mathscr{T}_X,\cdot )$ for an arbitrary $\theta\in \mathscr{T}_X$ is the semigroup $\mathscr{T}_X^\theta = (\mathscr{T}_X,\ast )$ with the binary operation $\alpha\ast \beta = \alpha\cdot \theta\cdot \beta $ where $\alpha, \beta\in \mathscr{T}_X$. In this article, we describe the ideal structure of the regular part $\mathrm{Reg}(\mathscr{T}_X^\theta )$ of the variant of the full transformation semigroup using cross-connections. We characterize the constituent categories of $\mathrm{Reg}(\mathscr{T}_X^\theta )$ and describe how they are \emph{cross-connected} by a functor induced by the sandwich transformation $\theta $. This leads us to a structure theorem for the semigroup and gives the representation of $\mathrm{Reg}(\mathscr{T}_X^\theta )$ as a cross-connection semigroup. Using this, we give a description of the biordered set and the sandwich sets of the semigroup.
DOI: 10.14232/actasm-017-044-z
AMS Subject Classification (1991): 20M10, 20M17, 20M50
Keyword(s): regular semigroup, full transformation semigroup, cross-connections, normal category, variant
Received June 30, 2017, and in revised form February 12, 2018. (Registered under 44/2017.)

Finiteness of the nearring of congruence preserving and 0-preserving functions of an expanded group
Gary L. Peterson, Stuart D. Scott
Abstract. In this paper we shall obtain that the nearring $C_0(V)$ of congruence preserving functions that are 0-preserving of a tame $N$-module $V$ of a nearring $N$ is finite when $N$ is finite.
As a consequence, $C_0(V)$ of an expanded group $\langle V,+,F\rangle $ is finite when the nearring of 0-preserving polynomial functions $P_0(V)$ of $\langle V,+,F\rangle $ is finite. We then go on to obtain further consequences of this result.
AMS Subject Classification (1991): 16Y30; 08A40
Keyword(s): nearring, expanded group, tame module, polynomial functions, congruence preserving functions, endomorphism nearring
Received July 10, 2017 and in final form May 20, 2018. (Registered under 49/2017.)

Quasi-units as orthogonal projections
Zsigmond Tarcsay, Tamás Titkos
Abstract. The notion of quasi-unit has been introduced by Yosida in unital Riesz spaces. Later on, a fruitful potential-theoretic generalization was obtained by Arsove and Leutwiler. Due to the work of Eriksson and Leutwiler, this notion also turned out to be an effective tool in investigating the extreme structure of operator segments. This paper has multiple purposes which are interwoven, and are intended to be equally important. On the one hand, we identify quasi-units as orthogonal projections acting on an appropriate auxiliary Hilbert space. As projections form a lattice and are extremal points of the effect algebra, we conclude the same properties for quasi-units. Our second aim is to apply these results for nonnegative sesquilinear forms. Constructing an order-preserving bijection between operator and form segments, we provide a characterization of being extremal in the convexity sense, and we give a necessary and sufficient condition for the existence of the greatest lower bound of two forms. Closing the paper we revisit some statements by using the machinery developed by Hassi, Sebestyén, and de Snoo. It will turn out that quasi-units are exactly the closed elements with respect to the antitone Galois connection induced by parallel addition and subtraction.
AMS Subject Classification (1991): 47A07, 47B65
Keyword(s): quasi-unit, orthogonal projection, extreme points, Galois connection
Received December 30, 2017 and in final form June 29, 2018. (Registered under 88/2017.)

On order automorphisms of the effect algebra
Roman Drnovšek
Abstract. We give short proofs of two descriptions given by Šemrl of order automorphisms of the effect algebra. This sheds new light on both formulas, which look quite complicated. Our proofs rely on Molnár's characterization of order automorphisms of the cone of all positive operators.
Keyword(s): self-adjoint operator, operator interval, effect algebra, order isomorphism, operator monotone function
Received November 15, 2017, and in revised form January 13, 2018. (Registered under 8/2018.)

$C_0$-semigroups of holomorphic Carathéodory isometries in reflexive TRO
László L. Stachó
Abstract. We refine earlier results concerning the structure of strongly continuous one-parameter semigroups ($C_0$-SGR) of holomorphic Carathéodory isometries of the unit ball in infinite-dimensional reflexive TROs (ternary rings of operators). We achieve finite algebraic formulas for them in terms of joint boundary fixed points and Möbius charts.
AMS Subject Classification (1991): 47D03, 32H15, 46G20
Keyword(s): Carathéodory distance, isometry, fixed point, holomorphic map, $C_0$-semigroup, infinitesimal generator, JB*-triple, Möbius transformation, Cartan factor, ternary ring of operators (TRO)
Received January 16, 2018 and in final form April 26, 2018. (Registered under 11/2018.)

Maps between the positive definite cones of operator algebras preserving a norm of a geodesic correspondence
Lajos Molnár
Abstract.
We prove that any bijective map between the positive definite cones of von Neumann algebras which preserves a certain unitarily invariant norm of a particular weighted geometric mean of elements is essentially (up to two-sided multiplication by an invertible positive element) equal to the restriction of a Jordan *-isomorphism between the algebras.
DOI: 10.14232/actasm-018-514-x
AMS Subject Classification (1991): 47B49; 46L40, 47A64
Keyword(s): positive definite cone, operator means, geodesic correspondence, preservers
Received January 18, 2018 and in final form February 23, 2018. (Registered under 14/2018.)

Lebesgue type decompositions for linear relations and Ando's uniqueness criterion
Seppo Hassi, Zoltán Sebestyén, Henk de Snoo
Abstract. A linear relation, i.e., a multivalued operator $T$ from a Hilbert space $\mathcal{H}$ to a Hilbert space $\mathcal{K}$, has Lebesgue type decompositions $T=T_{1}+T_{2}$, where $T_{1}$ is a closable operator and $T_{2}$ is an operator or relation which is singular. There is one canonical decomposition, called the Lebesgue decomposition of $T$, whose closable part is characterized by its maximality among all closable parts in the sense of domination. All Lebesgue type decompositions are parametrized, which also leads to necessary and sufficient conditions for the uniqueness of such decompositions. Similar results are given for weak Lebesgue type decompositions, where $T_1$ is just an operator without being necessarily closable. Moreover, closability is characterized in different useful ways. In the special case of range space relations the above decompositions may be applied when dealing with pairs of (nonnegative) bounded operators and nonnegative forms as well as in the classical framework of positive measures.
AMS Subject Classification (1991): 47A05, 47A06, 47A65; 46N30, 47N30
Keyword(s): regular relations, singular relations, (weak) Lebesgue type decompositions, uniqueness of decompositions, domination of relations and operators, closability
Received January 11, 2018 and in final form April 30, 2018. (Registered under 7/2018.)

Almost everywhere convergence of multiple operator averages for affine semigroups
Takeshi Yoshimoto
Abstract. This paper projects another affine case study in the program of analyzing multiparameter a.e. convergence, based on Sucheston's type convergence principles. An affine semigroup is considered as a natural extension of strongly continuous semigroups of linear operators on $L_{p}$ spaces. We prove some affine extensions of multiparameter martingale theorems, multiparameter ergodic theorems, and multiparameter ergodic theorems for the so-called nonlinear sums. Moreover, an affine (nonlinear) generalization is given of Berkson--Bourgain--Gillespie's theorem concerning the connection between the ergodic Hilbert transform and the ergodic theorem for power-bounded invertible linear operators on $L_{p}$ ($1< p< \infty $) spaces. In addition, the random ergodic Hilbert transforms will be established. We improve the local ergodic theorem of McGrath concerning strongly continuous $m$-parameter semigroups of positive linear operators in a more general affine setting. We shall also show that the Sucheston convergence principle is very effective even in yielding a multiparameter generalization of Starr's theorem. The final section includes some examples.
AMS Subject Classification (1991): 47A35, 40H05; 40G10 Keyword(s): affine semigroup, compound semigroup, ergodic Hilbert transform, random ergodic Hilbert transform, Cotlar's theorem, Berkson-Bourgain-Gillespie's theorem, Sucheston's type convergence principle, Orlicz class, multiparameter martingale theorem, nonlinear sum, ergodic theorem for affine semigroups, Abelian ergodic theorem for affine semigroups, Starr's theorem Received February 15, 2016 and in final form February 26, 2018. (Registered under 10/2016.)

Property $(UW {\scriptstyle\Pi })$ and localized SVEP
Pietro Aiena, Mohammed Kachad
Abstract. Property $(UW_{\Pi })$ for a bounded linear operator $T\in L(X)$ on a Banach space $X$ is a variant of Browder's theorem, and means that the points $\lambda $ of the approximate point spectrum for which $\lambda I-T$ is upper semi-Weyl are exactly the spectral points $\lambda $ such that $\lambda I-T$ is Drazin invertible. In this paper we investigate this property, and we give several characterizations of it by using typical tools from local spectral theory. We also relate this property to some other variants of Browder's theorem (or Weyl's theorem). AMS Subject Classification (1991): 47A53, 47A10, 47A11 Keyword(s): property $(UW {\scriptstyle\Pi })$, SVEP Received September 26, 2016 and in final form May 15, 2018. (Registered under 53/2016.)

Weyl's theorem and Putnam's inequality for class $p$-$wA(s,t)$ operators
M. H. M. Rashid, Muneo Chō, T. Prasad, Kotaro Tanahashi, Atsushi Uchiyama
Abstract. In this paper, we study spectral properties of class $p$-$wA(s,t)$ operators with $0< p\leq1$ and $0< s,t,s+t\leq1$. We show that Weyl's theorem and Putnam's inequality hold for class $p$-$wA(s,t)$ operators. DOI: 10.14232/actasm-017-020-y AMS Subject Classification (1991): 47A10, 47A20, 47B20 Keyword(s): class $p$-$wA(s, t)$, normaloid, reguloid, Weyl's theorem, Putnam's inequality Received March 28, 2017, and in revised form November 24, 2017. (Registered under 20/2017.)

Sharpness results concerning finite differences in Fourier analysis on the circle group
Rodney Nillsen, Susumu Okada
Abstract. Let $G$ denote the group $\mathbb{R}$ or $\mathbb{T}$, let $\iota $ denote the identity element of $G$, and let $s\in\mathbb{N}$ be given. Then, a \emph{difference of order} $s$ is a function $f\in L^2(G)$ for which there are $a\in G$ and $g \in L^2({G})$ such that $f= (\delta_{\iota }-\delta_{a})^s\ast g$. Let ${\cal D}_s(L^2(G))$ be the vector space of functions that are finite sums of differences of order $s$. It is known that if $f\in L^2(\mathbb{R})$, then $f\in{\cal D}_s(L^2(\mathbb{R}))$ if and only if $\int_{-\infty }^{\infty }|{\widehat f}(x)|^2|x|^{-2s}dx< \infty $. Also, if $f\in L^2(\mathbb{T})$, then $f\in{\cal D}_s(L^2(\mathbb{T}))$ if and only if ${\widehat f}(0)=0$. Consequently, ${\cal D}_s(L^2(G))$ is a Hilbert space in a (possibly) weighted $L^2$-norm. It is known that every function in ${\cal D}_s(L^2(G))$ is a sum of $2s+1$ differences of order $s$. However, there are functions in ${\cal D}_s(L^2(\mathbb{R}))$ that are not a sum of $2s$ differences of order $s$, and we call the latter type of fact a \emph{sharpness result}. In ${\cal D}_1(L^2(\mathbb{T}))$, it is known that there are functions that are not a sum of two differences of order one.
A main aim here is to obtain new sharpness results in the spaces ${\cal D}_s(L^2(\mathbb{T}))$ that complement the results known for $\mathbb{R}$, but also to present new results in ${\cal D}_s(L^2(\mathbb{T}))$ that do not correspond to known results in ${\cal D}_s(L^2(\mathbb{R}))$. Some results are obtained using connections with Diophantine approximation. The techniques also use combinatorial estimates for potentials arising from points in the unit cube in Euclidean space, and make use of subtraction sets in arithmetic combinatorics. AMS Subject Classification (1991): 42A16, 42A38 Keyword(s): Fourier transform, finite differences, subspaces of $L^2(\mathbb{T})$, combinatorial inequalities, badly approximable vectors in $\mathbb{R}^n$, sharpness results, Sobolev spaces Received April 7, 2017 and in final form May 22, 2018. (Registered under 22/2017.)

On the characterizations of some distinguished subclasses of Hilbert space operators
C. Bouraya, A. Seddik
Abstract. In this note, we present several characterizations of some distinguished classes of bounded Hilbert space operators (self-adjoint operators, normal operators, unitary operators, and isometry operators) in terms of operator inequalities. Keyword(s): closed range operator, Moore-Penrose inverse, group inverse, self-adjoint operator, unitary operator, normal operator, partial isometry operator, isometry operator, operator inequality Received April 8, 2017, and in final form November 25, 2017. (Registered under 23/2017.)

$k$-quasi-$A(n)$ and $k$-quasi-$*$-$A({n})$ composition and weighted composition operators on $L^{2}(\mu )$
Anuradha Gupta, Renu Chugh, Jagjeet Jakhar
Abstract. In this paper, we discuss the conditions under which composition operators and weighted composition operators become quasi-$A(n)$ operators, quasi-$*$-$A(n)$ operators, $k$-quasi-$A(n)$ operators and $k$-quasi-$*$-$A(n)$ operators in terms of the Radon--Nikodym derivative $h_n$. AMS Subject Classification (1991): 47B33, 47B20; 46C05 Keyword(s): composition operators, weighted composition operators, quasi-$A(n)$ operators, quasi-$*$-$A(n)$ operators, $k$-quasi-$A(n)$ operators and $k$-quasi-$*$-$A(n)$ operators Received May 23, 2017, and in revised form November 27, 2017. (Registered under 32/2017.)

Compact operators with BMO symbols on multiply-connected domains
Roberto Raimondo
Abstract. In this paper we study the problem of the boundedness and compactness of the Toeplitz operator $T_{\varphi }$ on $L_{a}^{2}(\Omega )$, where $\Omega $ is a multiply-connected domain and $\varphi $ is not bounded. We find a necessary and sufficient condition when the symbol is in $\mathcal{BMO}$. For this class we also show that the vanishing of the Berezin transform at the boundary is a necessary and sufficient condition for compactness. The same characterization is shown to hold when we analyze operators which are finite sums of finite products of Toeplitz operators with unbounded symbols. AMS Subject Classification (1991): 47B35; 47B38 Keyword(s): Bergman space, Toeplitz operator, Berezin transform Received May 29, 2017 and in final form September 2, 2018. (Registered under 33/2017.)

Solvability of generalized third-order coupled systems with two-point boundary conditions
Feliz Minhós, Infeliz Coxe
Abstract.
In this paper we consider the nonlinear third-order coupled system composed of the differential equations
\[ \begin{cases} -u'''(t)=f\left(t,u(t),u'(t),u''(t),v(t),v'(t),v''(t)\right),\\ -v'''(t)=h\left(t,u(t),u'(t),u''(t),v(t),v'(t),v''(t)\right), \end{cases} \]
with $f,h\colon[0,1] \times\mathbb {R}^{6}\rightarrow\mathbb {R}$ continuous functions, and the boundary conditions
\[ \begin{cases} u(0)=u'(0)=u'(1)=0,\\ v(0)=v'(0)=v'(1)=0. \end{cases} \]
We remark that the nonlinearities can depend on all derivatives of both unknown functions, which is new in the literature, as far as we know. This is achieved by an adequate auxiliary integral problem with a truncation, applying the lower and upper solutions method with bounded perturbations. The main theorem is an existence and localization result, which provides some qualitative data on the system solution, such as sign, variation, and bounds, as can be seen in the example. AMS Subject Classification (1991): 34B15, 34B27, 34L30 Keyword(s): coupled systems, Green functions, Nagumo-type condition, coupled lower and upper solutions Received May 29, 2017 and in final form June 10, 2018. (Registered under 35/2017.)

Some convergence theorems for multipliers on commutative Banach algebras
Heybetkulu Mustafayev
Abstract. Let $A$ be a complex commutative semisimple Banach algebra and let $T$ be a power bounded multiplier of $A$. This paper is concerned with finding necessary and sufficient conditions for the convergence of the sequence $\left\{ T^{n}a\right\}$ $(a\in A)$ in $A$. Some related problems are also discussed. AMS Subject Classification (1991): 46HXX, 43A20, 43A22 Keyword(s): commutative Banach algebra, multiplier, set of synthesis, convergence Received June 23, 2017, and in revised form December 6, 2017. (Registered under 41/2017.)

Generalized almost contact structures on Lie algebroids
E. Peyghan, C. Arcuş, A. Baghban, E. Sharahi
Abstract. We consider the direct sum of a Lie algebroid structure with its dual space and equip this bigger space with a contact form, called the generalized almost contact structure, and characterize these structures in terms of contact morphisms. Attaching an almost generalized complex structure, we recover properties such as normality conditions in a direct way and study some metric aspects of these spaces. AMS Subject Classification (1991): 53D10 Keyword(s): generalized almost contact structure, Lie algebroid, metric structure, normality conditions Received April 24, 2017, and in revised form September 21, 2017. (Registered under 27/2017.)

Characterization of probability measures on locally compact Abelian groups via Q-independence
B. L. S. Prakasa Rao
Abstract. We obtain a characterization of probability measures on a locally compact Abelian group $X$ based on linear forms of $Q$-independent random elements taking values in $X$, generalizing the earlier work of the author in [12]. AMS Subject Classification (1991): 60B15, 62E10 Keyword(s): $Q$-independence, characterization, locally compact Abelian group Received April 27, 2017 and in final form April 18, 2018. (Registered under 30/2017.)
Content-Based Image Retrieval Using Multi-Resolution Multi-Direction Filtering-Based CLBP Texture Features and Color Autocorrelogram Features
Journal of Information Processing Systems. ISSN: 2092-805X. 2020;16(4):991-1000
Hee-Hyung Bu*, Nam-Chul Kim**, Byoung-Ju Yun** and Sung-Ho Kim*
Corresponding Authors: Nam-Chul Kim** (nckim@knu.ac.kr) and Sung-Ho Kim* (shkim@knu.ac.kr)
Hee-Hyung Bu*, School of Computer Science and Engineering, Kyungpook National University, Daegu, Korea, hhbu@knu.ac.kr
Nam-Chul Kim**, School of Electronic Engineering, Kyungpook National University, Daegu, Korea, nckim@knu.ac.kr
Byoung-Ju Yun**, School of Electronic Engineering, Kyungpook National University, Daegu, Korea, bjisyun@ee.knu.ac.kr
Sung-Ho Kim*, School of Computer Science and Engineering, Kyungpook National University, Daegu, Korea, shkim@knu.ac.kr
Revision received: February 22, 2019. Accepted: March 8, 2019.

Abstract: We propose a content-based image retrieval system that uses a combination of the completed local binary pattern (CLBP) and the color autocorrelogram. CLBP features are extracted on a multi-resolution multi-direction (MRMD) filtered domain of the value component. Color autocorrelogram features are extracted in the two dimensions of the hue and saturation components. Experimental results show that the proposed method yields substantial improvement over the methods that use only partial features employed in the proposed method. It is also superior to the conventional CLBP, the color autocorrelogram using R, G, and B components, and the multichannel decoded local binary pattern, which is one of the latest methods.

Keywords: Autocorrelogram, Content-Based Image Retrieval, MRMD CLBP, Multi-Resolution Multi-Direction Filter

1. Introduction

Recently, content-based image retrieval (CBIR) systems have been developed by global IT companies. The Google Search engine supports CBIR, but it is weak with rotation- and scale-variant images; it cannot even retrieve images under complex rotations. Bixby on the Samsung Galaxy S8 also supports CBIR for pictures, but when retrieving images under complex rotations on a cellphone, the retrieved images are seldom similar. Existing methods usually extract features related to texture and color information because these are considered essential to the human visual system for object recognition. Research on texture has usually been conducted in the frequency domain. Because the spatial frequency domain of an image represents the rate of change of pixel values, the variation of edges can be determined. In particular, because high and low frequencies can be separated, high-frequency components, which include most edge variation, are employed in many research areas. Representative frequency-domain transforms include the Gabor transform [1], wavelet transform [2], and Fourier transform [3]. Research on color, including the color histogram [4] and the color autocorrelogram [5], has also been conducted. The color histogram is popular because it is based on statistics: it does not consider local relations, it is measured over global areas, and it has the advantage of rotation and scale invariance. It has been adopted in many studies because of its simplicity and wide applicability. As a method that employs color distance, the color autocorrelogram adds distance information to the color histogram. Recently, the major goal of image retrieval studies has been rotation and scale invariance for rotated and scaled variants of images.
Examples of such methods are the rotation- and scale-invariant Gabor features for texture image retrieval proposed by Han and Ma [6]; the color autocorrelogram and block difference of inverse probabilities-block variation of local correlation coefficient in the wavelet domain proposed by Chun et al. [7]; the texture feature extraction method for rotation- and scale-invariant image retrieval proposed by Rahman et al. [8]; the rotation-invariant textural feature extraction for image retrieval using eigenvalue analysis of intensity gradients and multi-resolution analysis proposed by Gupta et al. [9]; the rotation-invariant texture retrieval considering the scale dependence of Gabor wavelets proposed by Li et al. [10]; and the CBIR using combined color and texture features extracted by multi-resolution multi-direction (MRMD) filtering proposed by Bu et al. [11]. However, among the latest retrieval methods, some are not rotation-invariant yet show good performance on databases composed of photographs taken by photographers on the ground. One of them is the retrieval method using the multichannel decoded local binary pattern (LBP) proposed by Dubey et al. [12]. In image retrieval, a number of aspects need to be considered. In this paper, we consider two: (1) features should carry little redundant information, and (2) the dimension of the feature vector should not be very large. The color features employed in this paper are extracted from autocorrelograms [5] by using the distance of colors in the chrominance space consisting of the hue (H) and saturation (S) color components. The texture features are extracted from the completed local binary pattern (CLBP) based on MRMD filtering [11] in the luminance space of the value (V) color component. The MRMD filters allow easy extraction of rotation-invariant features. CLBP [13] generalizes the LBP proposed by Ojala et al. [14] and yields more texture information than LBP. Employing the HSV color space is more efficient for image retrieval than the RGB color space because texture information is contained in the V component (luminance) and color information is contained in the H and S components (chrominance). This paper combines the CLBP texture features based on MRMD filtering and the color autocorrelogram features. This yields high retrieval performance while the feature dimension is kept moderate. Moreover, the amount of redundant information is reduced by using the HSV color space to separate the luminance V component for the texture features from the chrominance H and S components for the color features. Furthermore, the proposed CLBP is scale-based, unlike the conventional CLBP, which is distance-based. The proposed method is explained in Section 2. The experiment and results are discussed in Section 3. Finally, the conclusion is presented in Section 4.

2. The Proposed Texture and Color Feature Extraction

In this paper, a CBIR system using CLBP based on MRMD filtering and the color autocorrelogram is proposed. The CLBP based on MRMD filtering is described in Section 2.1. The color autocorrelogram is the same as the method proposed in our previous paper [15]. Fig. 1 shows the block diagram of the proposed image retrieval system.
Fig. 1. Block diagram of the proposed CBIR system.

2.1 Texture Feature Extraction Using MRMD Filtering-Based CLBP

2.1.1 CLBP_S feature extraction in MRMD filtering

Texture features are extracted using CLBP based on MRMD filtering in the $V$ component. CLBP_S corresponds to RULBP [4].
The CLBP_S feature extraction on the MRMD filtered domain includes four steps, as follows:

Step 1: Converts the RGB query image $I$ to an HSV image to obtain the $V$ component image $I_{V}$.

Step 2: Conducts MRMD high-pass filtering [11] with directions at a resolution $r$ on the image $I_{V}$ to get the filtered images $y_{r, \theta}$. The resolution level $r$ is one of $r \in\{1,2, \ldots, M\}$, where $M$ is the number of resolution levels, and the directions are expressed as $\theta=(2 \pi n) / N$, where $n \in \{0,1,2, \ldots, N-1\}$ and $N$ is the number of directions.

Step 3: Creates a binary image from the pixel values of the filtered images. The outcome is the LBP of the image $I_{V}$. The LBP based on MRMD filtering can be expressed as

$$LBP_{N, 2^{r-1}}(p)=\sum_{n=0}^{N-1} s\left(y_{r, \theta_{n}}(p)\right) \cdot 2^{n} \qquad (1)$$

where $s(x)=\begin{cases} 1, & x \geq 0 \\ 0, & x<0 \end{cases}$, $\theta_{n}$ refers to the $n$-th direction, and $p$ is the pixel position.

Step 4: Normalizes the RULBP histogram. The RULBP has $N+2$ bins at each resolution level, where $N$ is the total number of directions. The RULBP on the MRMD filtered domain is expressed as

$$RULBP_{N, 2^{r-1}}(p)=\begin{cases} \sum_{n=0}^{N-1} s\left(y_{r, \theta_{n}}(p)\right), & \text{if } U\left(LBP_{N, 2^{r-1}}(p)\right) \leq 2 \\ N+1, & \text{otherwise} \end{cases} \qquad (2)$$

where $U\left(LBP_{N, 2^{r-1}}(p)\right)=\sum_{n=0}^{N-1}\left|s\left(y_{r, \theta_{n}}(p)\right)-s\left(y_{r, \theta_{n-1}}(p)\right)\right|$ refers to the sum of bit changes in the LBP. The normalized RULBP histogram is expressed as

$$H_{r}(i)=\frac{1}{|P|} \sum_{p \in P} \delta\left(RULBP_{N, 2^{r-1}}(p)-i\right) \qquad (3)$$

where $i \in\{0,1,2, \ldots, N, N+1\}$, $|P|$ stands for the size of $P$ (the size of the image), and $\delta$ refers to the Kronecker delta. As an outcome, the extracted total feature dimension is $M \times (N + 2)$.

2.1.2 CLBP_M feature extraction in MRMD filtering

CLBP_M stands for the RULBP of magnitude images. The CLBP_M feature extraction on the MRMD filtered domain includes four steps, as follows:

Step 1–2: Steps 1 and 2 are the same as in the RULBP procedure.

Step 3: Evaluates the average $\mu_{r}$ of the absolute values at the same pixel position over the directions at a resolution $r$ in the filtered images. Then, compares each absolute value with the average $\mu_{r}$. The result is the CLBP_M of the image $I_{V}$. The CLBP_M (MLBP) on the MRMD filtered domain is expressed as

$$MLBP_{N, 2^{r-1}}(p)=\sum_{n=0}^{N-1} t\left(\left|y_{r, \theta_{n}}(p)\right|, \mu_{r}(p)\right) \cdot 2^{n} \qquad (4)$$

$$\mu_{r}(p)=\operatorname{mean}_{\theta_{n} \in \Theta}\left[\left|y_{r, \theta_{n}}(p)\right|\right], \quad t(x, c)=\begin{cases} 1, & x \geq c \\ 0, & x<c \end{cases} \qquad (5)$$

Step 4: Creates and normalizes the RUMLBP histogram from CLBP_M. The RUMLBP has $N+2$ bins per level, where $N$ is the total number of directions.
The RUMLBP on the MRMD filtered domain is expressed as

$$RUMLBP_{N, 2^{r-1}}(p)=\begin{cases} \sum_{n=0}^{N-1} t\left(\left|y_{r, \theta_{n}}(p)\right|, \mu_{r}(p)\right), & \text{if } U\left(MLBP_{N, 2^{r-1}}(p)\right) \leq 2 \\ N+1, & \text{otherwise} \end{cases} \qquad (6)$$

The normalized RUMLBP histogram with CLBP_M is the same as in Eq. (3).

2.1.3 CLBP_C feature extraction in MRMD filtering

CLBP_C [13] is a feature related to a center pixel, but in this paper, CLBP_C is related to the value averaged over all directions instead of the center. The CLBP_C operator gives a histogram resulting from the comparison between each value averaged over all directions and the global average. Two bins are used per resolution level; thus, the total feature dimension is $2M$. The CLBP_C feature extraction on the MRMD filtered domain includes four steps, as follows:

Step 1–2: Steps 1 and 2 are the same as in the RULBP procedure.

Step 3: Creates the average image $I_{\mu, r}$ by computing, at each pixel position, the average over the directions of the absolute values of the filtered images at a resolution $r$. Then, evaluates $\mu\left(I_{\mu, r}\right)$, the global average of the image $I_{\mu, r}$, and compares each value of $I_{\mu, r}$ with the global average $\mu\left(I_{\mu, r}\right)$. The CLBP_C on the MRMD filtered domain is expressed as

$$CLBP\_C_{N, 2^{r-1}}(p)=t\left(I_{\mu, r}(p), \mu\left(I_{\mu, r}\right)\right), \quad t(x, c)=\begin{cases} 1, & x \geq c \\ 0, & x<c \end{cases} \qquad (7)$$

$$I_{\mu, r}(p)=\operatorname{mean}_{\theta \in \Theta}\left[\left|y_{r, \theta}(p)\right|\right], \qquad \mu\left(I_{\mu, r}\right)=\operatorname{mean}_{p \in P}\left[I_{\mu, r}(p)\right] \qquad (8)$$

Step 4: Creates and normalizes the CLBP_C histogram:

$$H_{r}(i)=\frac{1}{|P|} \sum_{p \in P} \delta\left(CLBP\_C_{N, 2^{r-1}}(p)-i\right) \qquad (9)$$

where $i \in\{0,1\}$, $|P|$ stands for the size of $P$ (the size of the image), and $\delta$ stands for the Kronecker delta. As an outcome, the extracted total feature dimension is $2M$. A sketch of these histogram computations is given below, after the experimental setup.

3. Experiment and Results

The experiment is conducted in two groups over 6 databases: Corel [16] and VisTex [17]; Corel_MR and VisTex_MR with scale-variant images; and Corel_MD and VisTex_MD with rotation-variant images. The first group compares the proposed method against the methods that use only partial features employed in the proposed method. The second group compares the CLBP method and the color autocorrelogram method that use the R, G, and B components against the proposed method, which uses the H, S, and V components. Similarity is measured with a Mahalanobis distance [18] in which each component is normalized by its standard deviation. The image retrieval performance is evaluated by precision and recall [19]. The precision is computed as the percentage of relevant images among the retrieved images for a query image. The recall is computed as the percentage of retrieved relevant images over the total relevant images for a query image. In this experiment, the proposed method has 152 dimensions: CLBP_S (40), CLBP_M (40), CLBP_C (8), and color autocorrelogram (64), as shown in Table 1 (given at the end of this paper).
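To make the Section 2.1 pipeline concrete, the following is a minimal Python sketch of the CLBP_S (RULBP) and CLBP_M (RUMLBP) histograms of Eqs. (1)-(6) for one resolution level. It is illustrative only: the MRMD filter bank itself is defined in [11] and is assumed here to be precomputed, and the array layout and function names are ours, not the authors'.

```python
import numpy as np

def rulbp_histogram(filtered):
    """CLBP_S (RULBP) histogram for one resolution level.

    `filtered` is an (N, H, W) array holding the MRMD directional
    high-pass responses y_{r,theta_n} of the V-component image.
    """
    N = filtered.shape[0]
    s = (filtered >= 0).astype(np.int64)            # sign bits s(y), Eq. (1)
    # U(LBP): circular count of bit changes over the N directions
    U = np.abs(s - np.roll(s, 1, axis=0)).sum(axis=0)
    codes = np.where(U <= 2, s.sum(axis=0), N + 1)  # Eq. (2)
    hist = np.bincount(codes.ravel(), minlength=N + 2)
    return hist / codes.size                        # Eq. (3): N+2 bins

def rumlbp_histogram(filtered):
    """CLBP_M (RUMLBP) histogram: the same scheme applied to magnitudes,
    thresholded at the directional mean mu_r(p), Eqs. (4)-(6)."""
    N = filtered.shape[0]
    mag = np.abs(filtered)
    t = (mag >= mag.mean(axis=0)).astype(np.int64)  # t(|y|, mu_r(p))
    U = np.abs(t - np.roll(t, 1, axis=0)).sum(axis=0)
    codes = np.where(U <= 2, t.sum(axis=0), N + 1)
    hist = np.bincount(codes.ravel(), minlength=N + 2)
    return hist / codes.size
```

Concatenating these per-level histograms over the M resolution levels gives the M(N+2)-dimensional CLBP_S and CLBP_M parts of the feature vector.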
Fig. 2 shows the precision versus recall comparing the partial-feature methods with the proposed method over the 6 databases. Fig. 3 shows the precision versus recall comparing the methods using the R, G, and B components with the proposed method over the 6 databases.
Fig. 2. The precision versus recall for comparing the separate methods employed in the proposed method with the proposed method for 6 databases: (a) Corel, (b) VisTex, (c) Corel_MR, (d) VisTex_MR, (e) Corel_MD, and (f) VisTex_MD.
Fig. 3. The precision versus recall for comparing the existing CLBP method and color autocorrelogram method using R, G and B components with the proposed method for 6 databases: (a) Corel, (b) VisTex, (c) Corel_MR, (d) VisTex_MR, (e) Corel_MD, and (f) VisTex_MD.

The average gains of the proposed method over the partial-feature methods are also investigated. In the first experiment, the average gains are 26.5% and 14.17% on Corel and 31.15% and 20.97% on VisTex; 24.75% and 12.56% on Corel_MR and 33.4% and 21.91% on VisTex_MR; and 24.4% and 12.11% on Corel_MD and 35.45% and 23.13% on VisTex_MD, respectively. In the second experiment, the average gains of the proposed method over the methods using the R, G, and B components are 22.96% and 12.88% on Corel and 9.3% and 6.96% on VisTex; 18.25% and 9.45% on Corel_MR and 11.01% and 7.51% on VisTex_MR; and 18.15% and 9.44% on Corel_MD and 15.16% and 10.42% on VisTex_MD, respectively. As a result, the proposed method is superior to the methods using partial features employed in the proposed method as well as to the CLBP and color autocorrelogram methods using the R, G, and B components. Additionally, we compare the retrieval performance of the proposed method to that of the multichannel decoded LBP on the Corel-1K database under the same conditions as [12]. The proposed method shows a precision of 78.3%, which is 3.1% higher than that of the latter (74.93%).

4. Conclusion

In this paper, the combined method of CLBP based on MRMD filtering and the color autocorrelogram is proposed. CLBP features are extracted in an MRMD filtered domain of the V component. Color autocorrelogram features are extracted in the two dimensions of the H and S components. In the experiments, the proposed method is compared with the separate methods employed in the proposed method, the CLBP method, the color autocorrelogram method using the R, G, and B components, and the multichannel decoded LBP method. As a result, the proposed method outperforms the three conventional methods. Our future research will include developing a scale-invariant feature extraction method efficient for various scale-variant images.

Acknowledgement. This study was supported by the BK21 Plus project (SW Human Resource Development Program for Supporting Smart Life) funded by the Ministry of Education, School of Computer Science and Engineering, Kyungpook National University, Korea (No. 21A20131600005).

Hee-Hyung Bu. She received B.S., M.S., and Ph.D. degrees in Computer Engineering from Mokpo National University (Jeonnam, Korea), Chonnam National University (Gwangju, Korea), and Kyungpook National University (Daegu, Korea) in 2004, 2006, and 2013, respectively. Since September 2019, she has been with the School of Computer Science & Engineering at Kyungpook National University, Daegu, Korea, where she is currently an invited professor. Her research interests include image retrieval, video compression, and image processing.

Nam-Chul Kim. He received a B.S. degree in Electronic Engineering from Seoul National University in 1978, and M.S. and Ph.D.
degrees in Electrical Engineering from the Korea Advanced Institute of Science and Technology, Seoul, Korea, in 1980 and 1984, respectively. Since March 1984, he has been with the School of Electronics Engineering at Kyungpook National University, Daegu, Korea, where he is currently a full professor. During 1991-1992, he was a visiting scholar in the Department of Electrical and Computer Engineering, Syracuse University, Syracuse, NY, USA. His research interests are image processing and computer vision, biomedical image processing, and image and video coding.

Byoung-Ju Yun. He received the Ph.D. degree in electrical engineering and computer science from the Korea Advanced Institute of Science and Technology, Daejeon, South Korea, in 2002. From 1996 to May 2003, he was with SK Hynix Semiconductor Inc., where he worked as a senior engineer. From June 2003 to February 2005, he was with the Center for Next Generation Information Technology, Kyungpook National University, where he worked as an assistant professor. Since March 2005, he has been with the School of Electronics Engineering, where he works as an invited professor. His current research interests include image processing, color consistency, multimedia communication systems, HDR color image enhancement, biomedical image processing, and HCI.

Sung-Ho Kim. He received his B.S. degree in Electronics from Kyungpook National University, Korea in 1981, and his M.S. and Ph.D. degrees in Computer Science from the Korea Advanced Institute of Science and Technology, Korea in 1983 and 1994, respectively. He has been a faculty member of the School of Computer Science & Engineering at Kyungpook National University since 1986. His research interests include real-time image processing and telecommunication, multimedia systems, etc.

References
[1] Z. Tang, M. Ling, H. Yao, Z. Qian, X. Zhang, J. Zhang, S. Xu, "Robust image hashing via random Gabor filtering and DWT," Computers, Materials and Continua, vol. 55, no. 2, pp. 331-344, 2018.
[2] L. Chen, H. C. Chen, Z. Li, Y. Wu, "A fusion approach based on infrared finger vein transmitting model by using multi-light-intensity imaging," Human-centric Computing and Information Sciences, vol. 7, no. 35, 2017.
[3] S. Akbarov, M. Mehdiyev, "The interface stress field in the elastic system consisting of the hollow cylinder and surrounding elastic medium under 3D non-axisymmetric forced vibration," CMC-Computers, Materials & Continua, vol. 54, no. 1, pp. 61-81, 2018.
[4] E. Hadjidemetriou, M. D. Grossberg, S. K. Nayar, "Multiresolution histograms and their use for texture classification," in Proceedings of the 3rd International Workshop on Texture Analysis and Synthesis, Nice, France, 2003.
[5] J. Huang, S. R. Kumar, M. Mitra, W. J. Zhu, R. Zabih, "Image indexing using color correlograms," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, San Juan, Puerto Rico, 1997, pp. 762-768.
[6] J. Han, K. K. Ma, "Rotation-invariant and scale-invariant Gabor features for texture image retrieval," Image and Vision Computing, vol. 25, no. 9, pp. 1474-1481, 2007. doi: 10.1016/j.imavis.2006.12.015
[7] Y. D. Chun, N. C. Kim, I. H. Jang, "Content-based image retrieval using multiresolution color and texture features," IEEE Transactions on Multimedia, vol. 10, no. 6, pp. 1073-1084, 2008. doi: 10.1109/TMM.2008.2001357
[8] M. H. Rahman, M. R. Pickering, M. R. Frater, D. Kerr, "Texture feature extraction method for scale and rotation invariant image retrieval," Electronics Letters, vol. 48, no. 11, pp. 626-627, 2012.
[9] R. D. Gupta, J. K. Dash, M. Sudipta, "Rotation invariant textural feature extraction for image retrieval using eigen value analysis of intensity gradients and multi-resolution analysis," Pattern Recognition, vol. 46, no. 12, pp. 3256-3267, 2013. doi: 10.1016/j.patcog.2013.05.026
[10] C. Li, G. Duan, F. Zhong, "Rotation invariant texture retrieval considering the scale dependence of Gabor wavelet," IEEE Transactions on Image Processing, vol. 24, no. 8, pp. 2344-2354, 2015. doi: 10.1109/TIP.2015.2422575
[11] H. H. Bu, N. C. Kim, C. J. Moon, J. H. Kim, "Content-based image retrieval using combined color and texture features extracted by multi-resolution multi-direction filtering," Journal of Information Processing Systems, vol. 13, no. 3, pp. 464-475, 2017. doi: 10.3745/JIPS.02.0060
[12] S. R. Dubey, S. K. Singh, R. K. Singh, "Multichannel decoded local binary patterns for content-based image retrieval," IEEE Transactions on Image Processing, vol. 25, no. 9, pp. 4018-4032, 2016. doi: 10.1109/TIP.2016.2577887
[13] Z. Guo, L. Zhang, D. Zhang, "A completed modeling of local binary pattern operator for texture classification," IEEE Transactions on Image Processing, vol. 19, no. 6, pp. 1657-1663, 2010. doi: 10.1109/TIP.2010.2044957
[14] T. Ojala, M. Pietikainen, T. Maenpaa, "Multiresolution gray-scale and rotation invariant texture classification with local binary patterns," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 7, pp. 971-987, 2002. doi: 10.1109/TPAMI.2002.1017623
[15] H. H. Bu, N. C. Kim, K. W. Park, S. H. Kim, "Content-based image retrieval using combined texture and color features based on multi-resolution multi-direction filtering and color autocorrelogram," Journal of Ambient Intelligence and Humanized Computing, 2019. doi: 10.1007/s12652-019-01466-0
[16] Y. D. Chun, S. Y. Seo, N. C. Kim, "Image retrieval using BDIP and BVLC moments," IEEE Transactions on Circuits and Systems for Video Technology, vol. 13, no. 9, pp. 951-957, 2003. doi: 10.1109/TCSVT.2003.816507
[17] R. Pickard, C. Graszyk, S. Mann, J. Wachman, L. Pickard, and L. Campbell, 1995 (Online). Available: https://vismod.media.mit.edu/vismod/imagery/VisionTexture/vistex.html
[18] W. Y. Ma, B. S. Manjunath, "A comparison of wavelet transform features for texture image annotation," in Proceedings of International Conference on Image Processing, Washington, DC, 1995, pp. 256-259.
[19] D. Comaniciu, P. Meer, K. Xu, D. Tyler, "Retrieval performance improvement through low rank corrections," in Proceedings of the IEEE Workshop on Content-Based Access of Image and Video Libraries (CBAIVL), Fort Collins, CO, 1999, pp. 50-54.

Table 1. Color spaces and dimensions of retrieval methods used in the experiments
Method                         | Color space | Dimension
CLBP                           | RGB         | 186
Color autocorrelogram          | RGB         | 216 (6×6×6)
CLBP + Color autocorrelogram   | RGB         | 250 (186, 64)
RULBP (CLBP_S)                 | V           | 40
CLBP_M                         | V           | 40
CLBP_C                         | V           | 8
Color autocorrelogram          | HS          | 64
Proposed                       | HSV         | 152 (88, 64)
Solving a system of temporal non-linear (reaction-diffusion) PDEs over a region using Neumann conditions

I am trying to solve a system of PDEs with non-linear terms:

$\frac{\partial a(x,y,z,t)}{\partial t}=\color{red}{-\tau_2\, a(x,y,z,t)\, h(x,y,z,t)}+\tau_1\, d(x,y,z,t) \\ \frac{\partial b(x,y,z,t)}{\partial t}=\color{red}{-\tau_2\, b(x,y,z,t)\, i(x,y,z,t)}+\tau_1\, e(x,y,z,t) \\ \frac{\partial c(x,y,z,t)}{\partial t}=\color{red}{-\tau_2\, c(x,y,z,t)\, g(x,y,z,t)}+\tau_1\, f(x,y,z,t) \\ \frac{\partial d(x,y,z,t)}{\partial t}=\color{red}{\tau_2\, a(x,y,z,t)\, h(x,y,z,t)}-\tau_1\, d(x,y,z,t) \\ \frac{\partial e(x,y,z,t)}{\partial t}=\color{red}{\tau_2\, b(x,y,z,t)\, i(x,y,z,t)}-\tau_1\, e(x,y,z,t) \\ \frac{\partial f(x,y,z,t)}{\partial t}=\color{red}{\tau_2\, c(x,y,z,t)\, g(x,y,z,t)}-\tau_1\, f(x,y,z,t) \\ \frac{\partial g(x,y,z,t)}{\partial t}=\color{blue}{\mathscr{D}\, \nabla^2_{\{x,y,z\}} g(x,y,z,t)}+\tau_3\, a(x,y,z,t)-\frac{g(x,y,z,t)}{\tau_4}+\tau_1\, f(x,y,z,t) \\ \frac{\partial h(x,y,z,t)}{\partial t}=\color{blue}{\mathscr{D}\, \nabla^2_{\{x,y,z\}} h(x,y,z,t)}+\tau_3\, b(x,y,z,t)-\frac{h(x,y,z,t)}{\tau_4}+\tau_1\, d(x,y,z,t) \\ \frac{\partial i(x,y,z,t)}{\partial t}=\color{blue}{\mathscr{D}\, \nabla^2_{\{x,y,z\}} i(x,y,z,t)}+\tau_3\, c(x,y,z,t)-\frac{i(x,y,z,t)}{\tau_4}+\tau_1\, e(x,y,z,t)$

with non-linear terms in $\color{red}{red}$ and spatial (diffusion) terms in $\color{blue}{blue}$:

pdes = {
  Derivative[0, 0, 0, 1][a][x, y, z, t] == 0.05*d[x, y, z, t] - 0.05*a[x, y, z, t]*h[x, y, z, t],
  Derivative[0, 0, 0, 1][b][x, y, z, t] == 0.05*e[x, y, z, t] - 0.05*b[x, y, z, t]*i[x, y, z, t],
  Derivative[0, 0, 0, 1][c][x, y, z, t] == 0.05*f[x, y, z, t] - 0.05*c[x, y, z, t]*g[x, y, z, t],
  Derivative[0, 0, 0, 1][d][x, y, z, t] == -0.05*d[x, y, z, t] + 0.05*a[x, y, z, t]*h[x, y, z, t],
  Derivative[0, 0, 0, 1][e][x, y, z, t] == -0.05*e[x, y, z, t] + 0.05*b[x, y, z, t]*i[x, y, z, t],
  Derivative[0, 0, 0, 1][f][x, y, z, t] == -0.05*f[x, y, z, t] + 0.05*c[x, y, z, t]*g[x, y, z, t],
  Derivative[0, 0, 0, 1][g][x, y, z, t] == 100*a[x, y, z, t] + 0.05*f[x, y, z, t] + 0.05*(Derivative[0, 0, 2, 0][g][x, y, z, t] + Derivative[0, 2, 0, 0][g][x, y, z, t] + Derivative[2, 0, 0, 0][g][x, y, z, t]),
  Derivative[0, 0, 0, 1][h][x, y, z, t] == 100*b[x, y, z, t] + 0.05*d[x, y, z, t] + 0.05*(Derivative[0, 0, 2, 0][h][x, y, z, t] + Derivative[0, 2, 0, 0][h][x, y, z, t] + Derivative[2, 0, 0, 0][h][x, y, z, t]),
  Derivative[0, 0, 0, 1][i][x, y, z, t] == 100*c[x, y, z, t] + 0.05*e[x, y, z, t] + 0.05*(Derivative[0, 0, 2, 0][i][x, y, z, t] + Derivative[0, 2, 0, 0][i][x, y, z, t] + Derivative[2, 0, 0, 0][i][x, y, z, t])}

with the following initial conditions:

initcs = {
  a[x, y, z, 0] == (Sqrt[40/Pi])/E^(40*((0.5 + x)^2 + y^2 + z^2)),
  b[x, y, z, 0] == (Sqrt[40/Pi])/E^(40*(x^2 + y^2 + z^2)),
  c[x, y, z, 0] == (Sqrt[40/Pi])/E^(40*((-0.5 + x)^2 + y^2 + z^2)),
  d[x, y, z, 0] == 0, e[x, y, z, 0] == 0, f[x, y, z, 0] == 0,
  g[x, y, z, 0] == 0, h[x, y, z, 0] == 0, i[x, y, z, 0] == 0}

If I solve this in a cubic region I DO get an answer (although it tells me that the step size might be too large):

sol = NDSolve[Flatten[{pdes, initcs}], {a, b, c, d, e, f, g, h, i}, {x, -1, 1}, {y, -1, 1}, {z, -1, 1}, {t, 0, 1}]

To plot:

Export["disks.gif", ListDensityPlot3D /@ Transpose[sol[[1, 9, 2]]["ValuesOnGrid"], {2, 3, 4, 1}]]

However, I want to solve it in a specific region (a complex curved region).
Let's take a cuboid region as an example, since it should give the exact same solution:

sol2 = NDSolve[Flatten[{pdes, initcs}], {a, b, c, d, e, f, g, h, i}, {x, y, z} \[Element] Cuboid[{-1, -1, -1}, {1, 1, 1}], {t, 0, 1}]

This gives me an error, even though it is the exact same problem:

NDSolve::femnonlinear: Nonlinear coefficients are not supported in this version of NDSolve.

Why does the second method not work when the first one does? How can I solve my problem?

Edit: It has been suggested that I look at the excellent answer of user21 on solving the Navier-Stokes equations. This seems like the right way to start, but it solves the steady state instead of the time-resolved solution required here. After linearization (see chapters 4 and 5) I come to:

alfabet = {a, b, c, d, e, f, g, h, i};
coords = {x, y, z};
rulefunct = # -> #[x, y, z] & /@ alfabet;
alfabet2 = alfabet /. rulefunct;
F = {#} & /@ -{a*h - τ1 d, b*i - τ1 e, c*g - τ1 f, -a*h + τ1 d, -b*i + τ1 e, -c*g + τ1 f, τ3*g, τ3 h, τ3 i} /. rulefunct /. {τ1 -> 1, τ3 -> 1};
A = Table[-D[F[[α]], alfabet2[[β]]], {α, 9}, {β, 9}];
σ = -Normal[SparseArray[Table[{i, i, j, j} -> -d, {i, 7, 9}, {j, 1, 3}] // Flatten[#, 1] &]];
Γ = Join[ConstantArray[0, {6, 3}], Table[-d D[alfabet2[[α]], coords[[β]]], {α, 7, 9}, {β, 3}]];
τ = IdentityMatrix[9]

to implement in:

nr = ToNumericalRegion[Ball[]];
vd = NDSolve`VariableData[{"DependentVariables", "Space"} -> {alfabet, coords}];
sd = NDSolve`SolutionData["Space" -> nr];
nlPdeCoeff = InitializePDECoefficients[vd, sd,
  "LoadCoefficients" -> (*F*) F,
  "LoadDerivativeCoefficients" -> (*gamma*) Γ,
  "ReactionCoefficients" -> (*a*) A,
  "DampingCoefficients" -> IdentityMatrix[9],
  "DiffusionCoefficients" -> σ]

I do not yet see a way to give the right initial conditions (and initialize a 4D region?) such that the coefficients can indeed be scalar, given that I require the temporal solution.

Tags: differential-equations, regions, nonlinear, finite-element-method — asked by Ruud3.1415

Comments:
- The error message is quite clear: FEM still doesn't support nonlinear coefficients. (zhk, Jul 10 '17 at 11:40)
- In sol, NDSolve doesn't use FEM but by default uses MOL; that's why it solves the system with some warnings. (zhk, Jul 10 '17 at 12:03)
- "Or is the method of lines unavailable for arbitrary shapes?" No, it's the "TensorProductGrid" method that is unavailable for arbitrary shapes. "TensorProductGrid" and "FiniteElement" can both be used for spatial discretization of "MethodOfLines"; the former can handle nonlinear coefficients but can't handle irregular domains, while the latter can handle irregular domains but cannot handle nonlinear coefficients (at least for now). Here is a related post. (xzczd, Jul 22 '17 at 5:46)
- If you want to solve a nonlinear PDE in an irregular domain, then some relatively low-level programming is needed. There already exist several examples on this site. You can start from this post. (xzczd, Jul 22 '17 at 5:48)
- BTW, if low-level FEM programming is too hard, there exists another possible method, which is not that accurate but may be acceptable: please check this answer. (xzczd, Jul 27 '17 at 12:04)

Answer: In version 12.0 you can do this:

Needs["NDSolve`FEM`"]
mesh = ToElementMesh[Cuboid[{-1, -1, -1}, {1, 1, 1}], "MeshOrder" -> 1];
sol = NDSolve[Flatten[{pdes, initcs}], {a, b, c, d, e, f, g, h, i}, Element[{x, y, z}, mesh], {t, 0, 1}][[1]];
fun = i[x, y, z, t] /. sol;
DensityPlot3D[Evaluate[fun /.
t -> 1], {x, y, z} \[Element] mesh]
Electronic Research Archive, June 2020, 28(2): 961-976. doi: 10.3934/era.2020051

An adaptive edge finite element method for the Maxwell's equations in metamaterials

Hao Wang 1, Wei Yang 1 and Yunqing Huang 2
1. Hunan Key Laboratory for Computation and Simulation in Science and Engineering, School of Mathematics and Computational Science, Xiangtan University, Xiangtan 411105, Hunan, China
2. Key Laboratory of Intelligent Computing & Information Processing of Ministry of Education, School of Mathematics and Computational Science, Xiangtan University, Xiangtan 411105, Hunan, China

Received February 2020. Revised April 2020. Published June 2020. Figures: 16. Tables: 1.

Abstract: In this paper, we study an adaptive edge finite element method for time-harmonic Maxwell's equations in metamaterials. A-posteriori error estimators based on the recovery type and residual type are proposed, respectively. Based on our a-posteriori error estimators, the adaptive edge finite element method is designed and applied to simulate the backward wave propagation, electromagnetic splitter, rotator, concentrator and cloak devices. Numerical examples are presented to illustrate the reliability and efficiency of the proposed a-posteriori error estimators for the adaptive method.

Keywords: Maxwell's equations, wave source terms, a-posteriori error estimator, adaptive edge finite element method, metamaterial media.
Mathematics Subject Classification: 78M10, 65N30.
Citation: Hao Wang, Wei Yang, Yunqing Huang. An adaptive edge finite element method for the Maxwell's equations in metamaterials. Electronic Research Archive, 2020, 28 (2): 961-976. doi: 10.3934/era.2020051
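The adaptive method the abstract describes follows the usual solve-estimate-mark-refine loop driven by element-wise estimators; the paper's own estimators are the recovery- and residual-type quantities $\eta^{r0}_{K_l}$, $\eta^{r1}_{K_l}$, $\eta^{r2}_{K_l}$ appearing in the figure captions below. As an illustration only, here is a generic Python sketch of the standard Dörfler bulk-marking step that such loops typically use; the function name, the `theta` parameter, and the NumPy interface are our assumptions, not code from the paper.

```python
import numpy as np

def dorfler_mark(eta, theta=0.5):
    """Bulk (Dörfler) marking: return indices of a smallest element set M
    with sum_{K in M} eta_K^2 >= theta * sum_K eta_K^2.

    `eta` holds per-element a-posteriori indicators (recovery- or
    residual-type); this is a generic sketch, not code from the paper.
    """
    eta = np.asarray(eta, dtype=float)
    order = np.argsort(eta)[::-1]               # largest indicators first
    csum = np.cumsum(eta[order] ** 2)
    m = int(np.searchsorted(csum, theta * csum[-1])) + 1
    return order[:m]                            # elements to refine
```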
Figure 1. Example 4.2: The first line and the second line are the real values of $E_1$ and the meshes, respectively. From left to right: $8510$ Ndof (for the initial mesh), $133620$ Ndof (by using $\eta^{r0}_{K_l}$) after $14$ refinements, $139743$ Ndof (by using $\eta^{r1}_{K_l}$) and $132334$ Ndof (by using $\eta^{r2}_{K_l}$) with the same times of $12$ refinements

Figure 2. Example 4.2: Snapshots of numerical solution and adaptive meshes for the real values of $E_1$ after $10$ refinements. First two columns: $142833$ Ndof (by using $\eta^{r1}_{K_l}$); the last two columns: $138064$ Ndof (by using $\eta^{r2}_{K_l}$)

Figure 3. Example 4.3: The first is the initial mesh with $6090$ Ndof and the last three are the real values of $E_1$ based on the initial mesh

Figure 4. Example 4.3: The first line and the second line are the real values of $E_1$ and the meshes with $(x_0, y_0) = (1, 1.45)$ and $m_p = 2$, respectively. From left to right: $299395$ Ndof (using $\eta^{r0}_{K_l}$) after $18$ refinements, $315182$ Ndof (by using $\eta^{r1}_{K_l}$) and $273473$ Ndof (by using $\eta^{r2}_{K_l}$) with the same times of $13$ refinements

Figure 5.
Example 4.3: Snapshots of numerical solution and adaptive meshes for the real values of $E_1$ after $13$ refinements with $(x_0, y_0) = (1, 1.45)$ and $m_p = 2$. First two columns: $303142$ Ndof (by using $\eta^{r1}_{K_l}$); the last two columns: $278572$ Ndof (by using $\eta^{r2}_{K_l}$)

Figure 6. Example 4.3: The first line and the second line are the real values of $E_1$ and the meshes with $(x_0, y_0) = (-1, 1.45)$ and $m_p = -2$, respectively. From left to right: $265615$ Ndof (by using $\eta^{r0}_{K_l}$) after $21$ refinements, $282153$ Ndof (by using $\eta^{r1}_{K_l}$) and $237085$ Ndof (by using $\eta^{r2}_{K_l}$) with the same times of $14$ refinements

Figure 7. Example 4.3: Snapshots of numerical solution and adaptive meshes for the real values of $E_1$ after $14$ refinements with $(x_0, y_0) = (-1, 1.45)$ and $m_p = -2$. First two columns: $282422$ Ndof (by using $\eta^{r1}_{K_l}$); the last two columns: $245356$ Ndof (by using $\eta^{r2}_{K_l}$)

Figure 8. Example 4.3: The first line and the second line are the real values of $E_1$ and the meshes, respectively. From left to right: $323340$ Ndof (by using $\eta^{r0}_{K_l}$) after $21$ refinements, $306934$ Ndof (by using $\eta^{r1}_{K_l}$) and $280265$ Ndof (by using $\eta^{r2}_{K_l}$) with the same times of $13$ refinements

Figure 10. Example 4.4: The first line, the second line and the third line are the real values of $E_1$, the real values of $E_2$ and the meshes, respectively. From left to right: $344405$ Ndof (by using $\eta^{r0}_{K_l}$) after $3$ refinements, $344620$ Ndof (by using $\eta^{r1}_{K_l}$) and $332141$ Ndof (by using $\eta^{r2}_{K_l}$) with the same times of $17$ refinements

Figure 11. Example 4.4: The first column, the second column and the third column are snapshots of numerical solution for the real values of $E_1$, $E_2$ and adaptive meshes, respectively. The first line: $393423$ Ndof after $23$ refinements (by using $\eta^{r1}_{K_l}$); the second line: $416438$ Ndof after $21$ refinements (by using $\eta^{r2}_{K_l}$)

Figure 12. Example 4.5: The first line and the second line are the real values of $E_2$ and the meshes, respectively. From left to right: $794533$ Ndof (by using $\eta^{r0}_{K_l}$) after $4$ refinements, $506294$ Ndof (by using $\eta^{r1}_{K_l}$) after $23$ refinements and $505234$ Ndof (by using $\eta^{r2}_{K_l}$) after $17$ refinements

Figure 13. Example 4.5: Snapshots of numerical solution and adaptive meshes for the real values of $E_2$. First two columns: $468957$ Ndof (by using $\eta^{r1}_{K_l}$) after $28$ refinements; the last two columns: $656397$ Ndof (by using $\eta^{r2}_{K_l}$) after $25$ refinements

Figure 14. Example 4.6: The computational domain for the cloak simulation

Figure 15. Example 4.6: The first line and the second line are the real values of $E_2$ and the meshes, respectively. From left to right: $445224$ Ndof (by using $\eta^{r0}_{K_l}$) after $126$ refinements, $323420$ Ndof (by using $\eta^{r1}_{K_l}$) after $10$ refinements, $120272$ Ndof (by using $\eta^{r2}_{K_l}$) after $30$ refinements

Figure 16. Example 4.6: Snapshots of numerical solution and adaptive meshes for the real values of $E_2$. First two columns: $291690$ Ndof (by using $\eta^{r1}_{K_l}$) after $10$ refinements; the last two columns: $78497$ Ndof (by using $\eta^{r2}_{K_l}$) after $34$ refinements

Table 1.
The Discrete $ l_2 $ errors and convergence rate $ h $ $ ||R(\mu^{-1}\nabla\times \boldsymbol{E}) - \mu^{-1}\nabla\times \boldsymbol{E}||_{l_2} $ Rate $ ||R(\varepsilon \boldsymbol{E})-\varepsilon \boldsymbol{E}||_{l_2} $ Rate 1/2 1.72241533259 $ \ast $ 0.65798419344 $ \ast $ 1/4 1.43147288047 0.2669 0.33590028941 0.9700 1/16 0.33780141151 1.6256 0.02622591011 1.9271 Download as excel Hsueh-Chen Lee, Hyesuk Lee. An a posteriori error estimator based on least-squares finite element solutions for viscoelastic fluid flows. Electronic Research Archive, 2021, 29 (4) : 2755-2770. doi: 10.3934/era.2021012 Hatim Tayeq, Amal Bergam, Anouar El Harrak, Kenza Khomsi. Self-adaptive algorithm based on a posteriori analysis of the error applied to air quality forecasting using the finite volume method. Discrete & Continuous Dynamical Systems - S, 2021, 14 (7) : 2557-2570. doi: 10.3934/dcdss.2020400 Georg Vossen, Stefan Volkwein. Model reduction techniques with a-posteriori error analysis for linear-quadratic optimal control problems. Numerical Algebra, Control & Optimization, 2012, 2 (3) : 465-485. doi: 10.3934/naco.2012.2.465 Patrick Henning, Mario Ohlberger. A-posteriori error estimate for a heterogeneous multiscale approximation of advection-diffusion problems with large expected drift. Discrete & Continuous Dynamical Systems - S, 2016, 9 (5) : 1393-1420. doi: 10.3934/dcdss.2016056 Martin Burger, José A. Carrillo, Marie-Therese Wolfram. A mixed finite element method for nonlinear diffusion equations. Kinetic & Related Models, 2010, 3 (1) : 59-83. doi: 10.3934/krm.2010.3.59 Gang Bao, Mingming Zhang, Bin Hu, Peijun Li. An adaptive finite element DtN method for the three-dimensional acoustic scattering problem. Discrete & Continuous Dynamical Systems - B, 2021, 26 (1) : 61-79. doi: 10.3934/dcdsb.2020351 Michael Hintermüller, Monserrat Rincon-Camacho. An adaptive finite element method in $L^2$-TV-based image denoising. Inverse Problems & Imaging, 2014, 8 (3) : 685-711. doi: 10.3934/ipi.2014.8.685 Yi Shi, Kai Bao, Xiao-Ping Wang. 3D adaptive finite element method for a phase field model for the moving contact line problems. Inverse Problems & Imaging, 2013, 7 (3) : 947-959. doi: 10.3934/ipi.2013.7.947 Liupeng Wang, Yunqing Huang. Error estimates for second-order SAV finite element method to phase field crystal model. Electronic Research Archive, 2021, 29 (1) : 1735-1752. doi: 10.3934/era.2020089 B. L. G. Jonsson. Wave splitting of Maxwell's equations with anisotropic heterogeneous constitutive relations. Inverse Problems & Imaging, 2009, 3 (3) : 405-452. doi: 10.3934/ipi.2009.3.405 Jiann-Sheng Jiang, Chi-Kun Lin, Chi-Hua Liu. Homogenization of the Maxwell's system for conducting media. Discrete & Continuous Dynamical Systems - B, 2008, 10 (1) : 91-107. doi: 10.3934/dcdsb.2008.10.91 Jiangxing Wang. Convergence analysis of an accurate and efficient method for nonlinear Maxwell's equations. Discrete & Continuous Dynamical Systems - B, 2021, 26 (5) : 2429-2440. doi: 10.3934/dcdsb.2020185 Kun Wang, Yinnian He, Yueqiang Shang. Fully discrete finite element method for the viscoelastic fluid motion equations. Discrete & Continuous Dynamical Systems - B, 2010, 13 (3) : 665-684. doi: 10.3934/dcdsb.2010.13.665 Yacheng Liu, Runzhang Xu. Wave equations and reaction-diffusion equations with several nonlinear source terms of different sign. Discrete & Continuous Dynamical Systems - B, 2007, 7 (1) : 171-189. doi: 10.3934/dcdsb.2007.7.171 Philip Trautmann, Boris Vexler, Alexander Zlotnik. 
CommonCrawl
Spectrum sensing using low-complexity principal components for cognitive radios Zeba Idrees1, Farrukh A Bhatti2 & Adnan Rashdi1 The principal component (PC) algorithm has recently been shown to be a very accurate blind detection technique in comparison with other covariance-based detection algorithms. However, it also has a higher complexity owing to the computation of the eigenvectors. We propose a low-complexity Lanczos principal component (LPC) algorithm that utilizes the Lanczos iterative method to compute the eigenvectors. In comparison with the PC algorithm, the proposed LPC algorithm offers a significant reduction in complexity while giving a similar detection performance. The low complexity of the LPC algorithm allows for the use of a larger covariance matrix, which further improves the detection performance. The maximum-minimum eigenvalue (MME) algorithm is also included in the comparison, and it gives an inferior performance compared to both the PC and LPC algorithms. All the algorithms were tested with experimental data on a universal software radio peripheral (USRP) testbed controlled by GNU Radio software. Cognitive radio has the ability to communicate over the unused frequency spectrum intelligently and adaptively. Spectrum sensing in a cognitive radio (CR) is crucial in generating awareness about the radio environment [1]. Blind detection methods such as covariance-based detection (CBD) algorithms enable signal detection in low signal-to-noise ratio (SNR) conditions without relying on prior knowledge of the primary user's (PU) signal. CBD techniques also overcome the issue of noise power uncertainty that exists in an energy detector [2, 3]. These methods use the covariance and variances of the received signal and do not require information about the noise variance. The performance of CBD algorithms is associated with the number of samples involved in the detection. However, using a large number of samples also increases the sensing time and complexity [4–8]. Recently, principal component analysis (PCA) has been applied for spectrum sensing in cognitive radios [9–11]. The principle of dimension reduction has been used in [12] to devise a PC algorithm that outperforms other CBD algorithms, such as maximum-minimum eigenvalue (MME), maximum eigenvalue detection (MED), and energy with maximum eigenvalue (EME). PCA reduces the dimensionality of the data and retains the most significant components that account for the greatest variation of the original data [13–16]. However, the PC algorithm also has the highest complexity in comparison with other CBD techniques. As envisaged in the Internet of Things (IoT), the number of things (or devices) connected to the network might exceed the number of human users. The same idea also drives the research on 5G networks, where the network capacity will be enhanced 1000-fold. Opportunistic spectrum access may help in this scenario, where multiple overlaid devices try to access the spectrum. Spectrum sensing can help eradicate the collisions and excessive contention delay experienced under dense node deployment. Such devices/sensors are embedded computing platforms; hence, energy efficiency is their major concern. Therefore, low-complexity, energy-efficient spectrum sensing algorithms are vital for implementation in such devices. In this paper, we first analyse the performance of the PC algorithm while considering the effect of dimension reduction on the detection performance and the related complexity.
Next, we describe the proposed Lanczos principal component (LPC) algorithm, which is an energy-efficient version of the PC algorithm. The LPC can be used in low-powered devices (e.g. sensors employed in the IoT) for blind signal detection. The performance of LPC is compared with the PC as well as the MME algorithm. MME has been included in the comparison as it is the best-known CBD technique. All the algorithms were tested with actual wireless microphone signals using a universal software radio peripheral (USRP2) testbed and GNU Radio software. The specific contributions of this paper are as follows. The detection performance of the PC algorithm is analysed under a low SNR (<−15 dB) scenario while varying the number of principal components included in the decision test statistic. The effect of dimension reduction on the sensing performance is also considered, along with the complexity involved in each case. A low-complexity LPC algorithm is proposed that employs an iterative approach to compute the principal components and achieves a detection performance similar to the PC algorithm with a reduced complexity. This reduction in complexity saves sensing time and improves energy efficiency. The performance of the proposed LPC algorithm is compared with the PC and the MME algorithms while using actual signals. In addition, the computational complexity of all three algorithms (MME, PC and LPC) is computed and compared mathematically and graphically. All the algorithms have been evaluated under both single and multiple receive antenna systems with actual wireless microphone signals. The experimental setup is established using USRP2 and GNU Radio software. System model, detection with MME and PCA We have considered both a single and a multiple receive antenna system. \(f_{s}\) is the signal's sampling rate and \(f_{s} \gg w\), where w is the received signal's bandwidth. The signal is over-sampled to get a high correlation between the samples. Let \(T_{s} = 1/f_{s}\) be the sampling period. We define \({y(n)\triangleq y({nT}_{s})}\), \({s(n)\triangleq s({nT}_{s})}\) and \({w(n)\triangleq w(nT_{s})}\), where y(n) is the received signal after passing through the channel, s(n) is the PU's signal and w(n) is the additive white Gaussian noise (AWGN). w(n) follows a normal distribution \({w(n)\sim \mathcal {N}(0,1)}\). Complex baseband samples from a single receive antenna system can be represented as $$ \textbf{y}(n) =\left[y_{1}(n),y_{1}(n-1),\cdot \cdot \cdot,y_{1}(n-N+1)\right], \qquad (1) $$ where N is the total number of samples used in making a single decision about the signal's presence under both hypotheses. For a multiple receive antenna system with M radio frequency (RF) front ends, the complex baseband samples are represented as $$ \begin{aligned} \textbf{y}(n)=\left[y_{1}(n),\cdot\cdot\cdot,y_{M}(n),y_{1}(n-1)\cdot\cdot\cdot,y_{M}(n-1),\right.\\ \left. y_{1}(n-N/M+1)\cdot\cdot\cdot,y_{M}(n-N/M+1)\right]. \end{aligned} \qquad (2) $$ Spectrum sensing can be expressed as $$ y(n)=\left\{ \begin{array}{ll} w(n):\qquad\qquad\qquad H_{0} \\ s(n)+w(n):\qquad \quad H_{1}, \\ \end{array}\right. \qquad (3) $$ where \(H_{0}\) is the null hypothesis, indicating the absence of the PU's signal, and \(H_{1}\) is the alternative hypothesis, indicating the presence of the signal. The probability of detection (\(P_{d}\)) and the probability of false alarm (\(P_{fa}\)) characterize the sensing performance, where \(P_{d} = \Pr(H_{1}|H_{1})\) and \(P_{fa} = \Pr(H_{1}|H_{0})\). The MME algorithm calculates the eigenvalues (λ) and finds the ratio of the maximum to the minimum eigenvalue [7].
$$ \lambda_{\text{max}} / \lambda_{\text{min}}> \psi \qquad (4) $$ The PU's signal exists if the ratio is greater than ψ, where ψ is the threshold set according to the desired probability of false alarm. MME is a blind detection algorithm without the noise uncertainty issue, but the calculation of eigenvalues by the conventional method is computationally intensive. PC Algorithm Consider L consecutive outputs, where L is the smoothing factor, with N samples involved in making a detection decision. The data set consists of complex baseband samples that follow a normal distribution. The k=2M dimensional data set in matrix form is $$ \mathbf{X}=\left[\textsc{x}(n),\textsc{x}(n-1),...., \textsc{x}(n-N+1)\right]. \qquad (5) $$ We have a finite number of samples, which requires a sample covariance matrix instead of the statistical covariance matrix; expressing \(\textsc{x}(n-i)\) as \(\textsc{x}_i\), the covariance matrix is $$ \mathbf{R}(N)=\frac{1}{N}\sum_{i=1}^{N}\textsc{x}_{i} \textsc{x}_{i}^{T}. \qquad (6) $$ Let \(\lambda_1 \ge \lambda_2 \ge \lambda_3 \ge \ldots \ge \lambda_k\) be the eigenvalues of R such that $$ |\mathbf{R}-\lambda \mathbf{I}|=0, \qquad (7) $$ where \(\mathbf{I}\) is the identity matrix with the same dimension as the covariance matrix R, and \(\gamma_1, \gamma_2, \gamma_3, \ldots, \gamma_k\) are the characteristic vectors of R. To form a feature matrix, we first need to select G, the number of most significant eigenvectors corresponding to the highest eigenvalues, where 1≤G≤k. Principal components associated with eigenvalues larger in magnitude than the average of the eigenvalues are taken. The transformation of the original data set is $$ \mathbf{p}_{i}= \mathbf{F}^{T} \mathbf{x}_{i}, \qquad (8) $$ where F is the feature matrix containing the G most significant eigenvectors and i=1,2,...,N. The complete jth PC can be expressed as \(\mathbf{P}_j =[p_{j1},p_{j2},\ldots,p_{jN}]\), j=1,2,...,G. The set of PCs with no redundant data is \(\mathbf{P}=[\mathbf{P}_1,\mathbf{P}_2,\ldots,\mathbf{P}_G]^T\), with G row vectors each containing N entries. $$ \sum_{i=1}^{N} p_{ji}^{2} =\lambda_{j}. \qquad (9) $$ Equation (9) shows that the jth PC gives the distribution of the energy given by the jth eigenvalue [15]. The test statistic T to distinguish between signal and noise is as follows [12]: $$ T=\frac{1}{N} \sum_{j=1}^{G} \left(p_{j1}^{2} +p_{j2}^{2} +p_{j3}^{2}+\ldots+ p_{jN}^{2}\right) > \psi. \qquad (10) $$ Here \(p_{ji}\) is the ith element of the jth PC and ψ is the detection threshold, determined empirically at a desired probability of false alarm. The PU exists if T>ψ. Performance analysis of the PC algorithm Wireless microphone signals were used to emulate the PU, and simulations were done using N samples. The value of N is set to 20,000. Figure 1 represents the probability of detection versus SNR curves with a varying number of PCs. It can be seen from the figure that the probability of detection improves as we include more principal components for detection; in Fig. 1, 'pc' represents the number of principal components included in the test statistic. It is also clear that the first two PCs retain the most useful information compared to the last ones. There is an improvement in detection performance when moving from one to two and from two to three PCs, but no significant improvement is observed when we include all four PCs; that is the point where we encounter redundant data that can be discarded to save sensing time. This supports the statement that variables with little variance can be discarded without significantly influencing the total variance, thereby reducing the number of variables. The use of a smaller number of PCs not only reduces the sensing time but also loses some useful information, which results in a lower probability of detection in our case.
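For concreteness, a minimal NumPy sketch of the detection steps in Eqs. (5)–(10) (the function and variable names are ours, not the paper's; the rows of X are assumed real-valued, e.g., the stacked k = 2M construction above):

```python
import numpy as np

def pc_test_statistic(X, G):
    """PC detection statistic for X of shape (k, N): k-dimensional samples,
    N snapshots. A sketch of Eqs. (5)-(10), not the authors' code."""
    k, N = X.shape
    R = (X @ X.T) / N                          # sample covariance matrix, Eq. (6)
    evals, evecs = np.linalg.eigh(R)           # direct (non-iterative) eigendecomposition
    F = evecs[:, np.argsort(evals)[::-1][:G]]  # feature matrix: G leading eigenvectors
    P = F.T @ X                                # principal components, Eq. (8)
    return float(np.sum(P ** 2)) / N           # test statistic T, Eq. (10)

# Decide H1 (PU present) if pc_test_statistic(X, G) > psi, with psi set
# empirically at the desired probability of false alarm.
```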
As we increase the number of PCs, the detection time also increases, because more operations are involved in calculating the test statistic, which increases the complexity of the algorithm. Analysis of the PC algorithm. Improvement in probability of detection with an increasing number of principal components at 10 % probability of false alarm and N=20,000 Proposed LPC algorithm From the performance analysis of the PC algorithm, it is observed that the detection performance can be improved by including more PCs while using a larger covariance matrix. However, doing so increases the complexity. In this section, we propose the LPC algorithm, which uses an iterative method to compute the eigenvectors required to generate the PCs; the use of this approach significantly reduces the complexity. The proposed method performs much faster than the existing method, as it only computes the eigenvectors corresponding to the highest eigenvalues, whereas the direct method computes all the eigenvectors, thus wasting resources in computing insignificant eigenvectors. This approach is more efficient as it obviates the calculation of all the eigenvectors and the subsequent sorting at the end. Advanced algorithms such as Lanczos and Arnoldi save this data and use the Gram-Schmidt process or the Householder algorithm to reorthogonalize the vectors into a basis spanning the Krylov subspace corresponding to the matrix. As the matrix size increases, the direct method becomes very slow and is therefore not feasible in practice, while the proposed method only calculates the desired eigenvectors via an iterative approach. There are many iterative approaches, such as the Arnoldi algorithm, the Jacobi-Davidson algorithm and the Lanczos algorithm [17]. We found the Lanczos algorithm appropriate for our scenario as it has the least convergence time compared to the other approaches [17]. A disadvantage of this algorithm is that the number of iterations can be large. To cater for this issue, a variation of the Lanczos algorithm known as the implicitly restarted Lanczos algorithm (IRLA) is used to compute the desired eigenvectors [18]. Implicit restarting (IR) extracts the useful information from a large Krylov subspace and resolves the storage issue and the difficulties associated with the standard approach. IR does this by compressing the useful data into a fixed-size k-dimensional subspace. IRLA is summarized in Algorithm 1 [18]. A is the symmetric matrix of interest, v is the starting vector, and \(T_k \in \Re^{k \times k}\) is real, symmetric and tridiagonal with nonnegative subdiagonal elements. \(V_k \in C^{n \times k}\) (the columns of \(V_k\)) are the Lanczos vectors. Here k represents the desired number of eigenvectors to calculate and r is the residual. The selection of the shifts \(\mu_j\) depends upon the user's required set of vectors: o steps of shifted QR iterations are applied to \(T_m\) using \(\mu_1, \mu_2, \ldots, \mu_o\) as shifts, and \(\beta_k = T_m(k+1,k) = 0\) if we use exact shifts. LPCA can be summarized as follows: (1) calculate the covariance matrix as in (6); (2) decompose the covariance matrix via the implicitly restarted Lanczos algorithm as described in Algorithm 1; (3) generate the principal components; (4) calculate the test statistic T as in (10); (5) decide between \(H_1\) and \(H_0\) by comparing T with a predetermined threshold (empirically determined at the desired probability of false alarm).
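A minimal sketch of these five steps, assuming SciPy's ARPACK-based eigsh as the implicitly restarted Lanczos solver (this substitution and all names are ours; the paper's Algorithm 1 is its own implementation):

```python
import numpy as np
from scipy.sparse.linalg import eigsh  # ARPACK: implicitly restarted Lanczos

def lpc_test_statistic(X, G):
    """LPC sketch: identical to the PC statistic, except that only the G
    leading eigenvectors are computed, iteratively, instead of all of them."""
    k, N = X.shape
    R = (X @ X.T) / N                 # step (1): sample covariance matrix, Eq. (6)
    _, F = eigsh(R, k=G, which='LA')  # step (2): G largest-eigenvalue eigenvectors only
    P = F.T @ X                       # step (3): principal components
    T = float(np.sum(P ** 2)) / N     # step (4): test statistic, Eq. (10)
    return T                          # step (5): compare with the empirical threshold
```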
As with the other covariance-based detection techniques, the complexity of the PC algorithm comprises two major steps: one is the computation of the covariance matrix as in (6), and the other is the decomposition of the covariance matrix, in our case to calculate the eigenvectors. As the covariance matrix is block Toeplitz and Hermitian, we only need to evaluate its first block. Calculation of the covariance matrix requires \(O(M^2 L N_s)\) multiplications and \(O(M^2 L (N_s-1))\) additions; here M is the number of receive antennas, L is the smoothing factor and \(N_s\) is the number of samples [4]. Eigendecomposition of the covariance matrix requires \(O(M^3 L^3)\) multiplications and additions [5]. Therefore, the total complexity of the eigenvalue-based detection algorithm (i.e. MME) is \(O(M^2 L N_s)+O(M^3 L^3)\). Hence, after sorting, the total complexity of the PC algorithm becomes \(O(M^2 L N_s)+O(M^3 L^3)+O(L^2)\); the complexity involved in the generation of the principal components is negligible compared to the calculation and decomposition of the covariance matrix. The use of the iterative approach reduces the complexity by up to \(2L^2\) [17]. The complexity of the proposed LPC algorithm is reduced to \(O(M^2 L N_s)+O(M^3 L)\). Figure 2 shows the log-scale plot of the complexity of all three algorithms (MME, PCA and LPCA). Log-scale plot of complexity. Complexity plot of the proposed LPCA, the existing PC and the MME method Experimental setup The algorithms were tested using the USRP testbed. Two receiver systems, with a single antenna (M=1) and multiple antennas (M=2), were set up with the help of USRP2 kits for receiving correlated wireless signals, as shown in Fig. 3. Transmission parameters were set as in [12]. USRP and USRP2 are widely used hardware platforms in the field of cognitive radios and SDR, provided by Ettus Research [19]. Each consists of a motherboard and a selectable RF daughterboard, along with a gigabit ethernet port that can be attached to a host computer for further processing. A WBX daughterboard was connected to the motherboard. It has a single transmit and receive antenna and covers a wide frequency range of 50 MHz to 2.2 GHz [19]. One of the advantages of USRP2 is that it works with GNU Radio, open-source software with an abundance of resources, which simplifies its usage [20]. Signal reception and down-conversion were performed by the RF daughterboard. Afterwards, the gigabit ethernet was used to pass the down-converted signal to the host computer for further processing. A master/slave configuration was established to connect the two USRPs. The host computer was connected to the master USRP through an ethernet interface. The two USRP kits were interconnected via a MIMO cable that enables synchronization between them. In addition, it also transfers down-converted signals from slave to master. Synchronization is achieved by feeding two reference signals to the master USRP: a frequency reference is provided by a 10-MHz signal, and the sample time is synchronized by a one pulse per second (1 PPS) signal. In our experimental setup, the PU was emulated using a Rohde & Schwarz SMF100A microwave signal generator that transmits an FM signal at a frequency of 410 MHz with a bandwidth of 200 kHz [21]. The SNR was varied by changing the transmit power of the signal generator. The sampling rate of the received signal was 6.25 megasamples per second. A high correlation between the samples is achieved by over-sampling.
The received signals were fed to the host computer through an ethernet cable for subsequent performance analysis. The SNR was measured by turning off the transmitter and recording the noise at each RF front end. The power of the signal at the Mth front end is calculated by \({P_{M}=1/S\sum _{n=1}^{S}|X_{M}(n)|^{2}}\), where S is the total number of samples used for computing the power, \(P_{M,1}\) represents the power of the signal and \(P_{M,0}\) the power of the noise. $$ \mathrm{SNR}_{M}=10\log_{10}\left[(P_{M,1} - P_{M,0})/P_{M,0}\right]. \qquad (11) $$ Experimental setup. Experimental setup with two receive antennas Results and discussions This section describes the performance comparison of the LPC, PC and MME algorithms. Figure 4 shows the performance of the three algorithms in terms of probability of detection vs. SNR at 10 % probability of false alarm for the single (M=1) and multiple (M=2) receive antenna cases. The probability of detection improves for the M=2 case, as the use of multiple antennas at the receiver overcomes the effect of channel fading and also enhances the robustness against interference [22]. The antennas were configured for correlated reception (i.e. antenna spacing \(< \lambda_c/2\)), which enhances the correlation between the received samples. The results obtained from the PC and LPC algorithms were with G=2, and the number of samples N used in making a single detection decision was kept constant for all three algorithms. It was observed that the proposed method (LPCA) gives a detection performance similar to that of the PC method with significantly reduced complexity, and it also outperforms the MME algorithm. There is no effect on the detection test statistic other than the reduction in computational complexity; this allows the inclusion of more PCs, which ultimately improves the detection performance. The proposed algorithm reduces the complexity by using an iterative approach and a subspace method. An improvement in the probability of detection is attained using LPCA; this improvement is achieved by using a covariance matrix of order L=6 instead of L=4, while in the case of the PC and MME algorithms L=4. With these values, the complexity of the PC algorithm is \(L^3+L^2=80\) and MME has complexity \(L^3=64\), while that of the LPCA is L=6; the proposed LPCA gives better detection with low complexity, even when using a larger covariance matrix. Performance comparison. Comparison of performance in terms of probability of detection for the single and multiple receive antenna cases at 10 % probability of false alarm, G=2, N=60,000 The direct method becomes slow as the matrix size increases; therefore, it is not feasible in practice. With the direct method, while obtaining the required eigenvectors, we also get a series of vectors that are finally discarded; this can result in a large amount of disregarded information. LPCA overcomes this issue and makes things simpler. Receiver operating curves for MME, PC and LPCA are shown in Fig. 5 at SNRs of −15 and −20 dB with L=4 and N=60,000. The detection threshold was empirically calculated by considering the normalized histogram of the test statistic under \(H_0\) and corresponds to a probability of false alarm ranging from 0 to 1. It can be seen from the figure that the receiver operating characteristics (ROC) of both PC and LPCA are almost the same (Fig. 5). As expected, the proposed algorithm gives the same performance as the PC algorithm with a reduced complexity, and significantly exceeds the sensing requirements defined by the FCC, that is, achieving 90 % \(P_d\) with 10 % \(P_{fa}\) at an SNR of −12 dB for wireless microphone signals.
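The empirical threshold selection just described admits a short sketch (assuming T_h0 holds test-statistic values recorded with the transmitter off; the function and names are ours, not the paper's):

```python
import numpy as np

def empirical_threshold(T_h0, pfa=0.1):
    """Pick psi so that Pr(T > psi | H0) = pfa, using the empirical
    distribution (normalized histogram) of the statistic under H0."""
    return float(np.quantile(np.asarray(T_h0), 1.0 - pfa))
```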
ROC. Receiver operating curve: probability of detection (\(P_d\)) vs. probability of false alarm (\(P_f\)) at SNRs of −15 and −20 dB Complexity is a major issue in blind signal detection algorithms that are based on a covariance matrix. The use of a large number of received samples increases the size of the covariance matrix and, as a result, the complexity. In this paper, we proposed a novel algorithm for blind signal detection, i.e. the LPC algorithm, whose iterative nature reduces the complexity and saves sensing time. LPC achieves the same detection performance as PC, yet its complexity is significantly less than that of the PC algorithm. Thus, LPC can be used even in low-powered devices for blind signal detection. The performance of LPC is compared with the PC as well as the MME algorithm. The proposed method gives the best sensing performance while reducing the complexity from \(O(L^3+L^2)\) to \(O(L)\). All the algorithms are tested with actual wireless microphone signals while using a USRP2 testbed and GNU Radio software. In the future, these algorithms can be tested on a stand-alone platform for real-time performance evaluation.
1. S Haykin, DJ Thomson, JH Reed, Spectrum sensing for cognitive radio. Proc. IEEE 97(5), 849–877 (2009).
2. DD Ariananda, M Lakshmanan, H Nikookar, in Cognitive Radio and Advanced Spectrum Management, 2009. CogART 2009. Second International Workshop On. A survey on spectrum sensing techniques for cognitive radio (IEEE, 2009), pp. 74–79.
3. T Yücek, H Arslan, A survey of spectrum sensing algorithms for cognitive radio applications. Commun. Surv. Tutorials IEEE 11(1), 116–130 (2009).
4. Y Zeng, Y-C Liang, Spectrum-sensing algorithms for cognitive radio based on statistical covariances. Vehicular Technol. IEEE Trans. 58(4), 1804–1815 (2009).
5. Y Zeng, Y-C Liang, Eigenvalue-based spectrum sensing algorithms for cognitive radio. Commun. IEEE Trans. 57(6), 1784–1793 (2009).
6. Y Zeng, CL Koh, Y-C Liang, in Communications, 2008. ICC'08. IEEE International Conference On. Maximum eigenvalue detection: theory and application (IEEE, 2008), pp. 4160–4164.
7. Y Zeng, Y-C Liang, in Personal, Indoor and Mobile Radio Communications, 2007. PIMRC 2007. IEEE 18th International Symposium On. Maximum-minimum eigenvalue detection for cognitive radio (IEEE, 2007), pp. 1–5.
8. Y Zeng, Y-C Liang, R Zhang, Blindly combined energy detection for spectrum sensing in cognitive radio. Signal Process. Lett. IEEE 15, 649–652 (2008).
9. AM Rao, B Karthikeyan, GRK DipayanMazumdar, Energy detection technique for spectrum sensing in cognitive radio. SAS_TECH 9 (2010).
10. S Hou, R Qiu, M Bryant, M Wicks, in IEEE Waveform Diversity and Design Conference, 514. Spectrum sensing in cognitive radio with robust principal component analysis (2012).
11. Y Han, H Lee, J Lee, in Vehicular Technology Conference (VTC Fall), 2013 IEEE 78th. Spectrum sensing using robust principal component analysis for cognitive radio (IEEE, 2013), pp. 1–5.
12. FA Bhatti, GB Rowe, KW Sowerby, in Wireless Communications and Networking Conference (WCNC), 2012 IEEE. Spectrum sensing using principal component analysis (IEEE, 2012), pp. 725–730.
13. J Ma, GY Li, BHF Juang, Signal processing in cognitive radio. Proc. IEEE 97(5), 805–823 (2009).
14. JE Jackson, A User's Guide to Principal Components, vol. 587 (John Wiley & Sons, Canada, 2005).
15. BC Moore, Principal component analysis in linear systems: controllability, observability, and model reduction. Automatic Control IEEE Trans. 26(1), 17–32 (1981).
16. JF Hair, WC Black, BJ Babin, RE Anderson, RL Tatham, Multivariate Data Analysis, vol.
6 (Pearson Prentice Hall, Upper Saddle River, NJ, 2006).
17. A Ikram, A Rashdi, in Communications (APCC), 2012 18th Asia-Pacific Conference On. Complexity analysis of eigenvalue based spectrum sensing techniques in cognitive radio networks (IEEE, 2012), pp. 290–294.
18. D Calvetti, L Reichel, DC Sorensen, An implicitly restarted Lanczos method for large symmetric eigenvalue problems. Electronic Trans. Numeric. Anal. 2(1), 21 (1994).
19. Ettus Research. http://www.ettus.com/.
20. GNU Radio. http://www.gnuradio.org/.
21. C Clanton, M Kenkel, Y Tang, Wireless microphone signal simulation method. IEEE 802.22-07/0124r0 (2007).
22. Y-C Liang, G Pan, Y Zeng, in Global Telecommunications Conference (GLOBECOM 2010), 2010 IEEE. On the performance of spectrum sensing algorithms using multiple antennas (IEEE, 2010), pp. 1–5.
Department of Electrical Engineering, National University of Sciences and Technology (NUST), Islamabad, H-12 Sector, 44000, Pakistan: Zeba Idrees & Adnan Rashdi. Electrical Engineering Department, Institute of Space Technology, Islamabad, 44000, Pakistan: Farrukh A Bhatti. Correspondence to Zeba Idrees. Zeba Idrees received her B.E. degree in telecommunication engineering from the Govt. College University Faisalabad, Pakistan, and a Masters degree in electrical engineering from the National University of Sciences and Technology (NUST), Islamabad, Pakistan, in 2012 and 2014, respectively. She possesses professional and research experience of more than 2 years in academia as well as industry. During her research period, she conducted research on cognitive radio networks. She has a research interest in the areas of wireless communications and signal processing. Farrukh A Bhatti received his B.E. and M.S. degrees in avionics engineering and electrical engineering from the National University of Sciences and Technology, Pakistan, in 2004 and 2008, respectively. He received his Ph.D. degree in electrical and electronic engineering from The University of Auckland, New Zealand, in 2014. During his Ph.D., he undertook research field trips to Industrial Research Ltd, Wellington, and the Wireless @ Virginia Tech research group, Virginia Polytechnic Institute and State University, Blacksburg, VA, USA. Presently, he is an Assistant Professor in the department of electrical engineering at the Institute of Space Technology, Islamabad, Pakistan. His research interests include advanced cellular networks, multiple-antenna systems, software-defined radios and cognitive radio networks. Adnan Rashdi received his B.Sc, M.Sc and Ph.D degrees in Electrical Engineering from the University of Engineering & Technology (UET), Lahore, Pakistan, in 1994, 2003 and 2008, respectively. His research interests include software defined radios, cognitive radios, wired and wireless multiuser communication systems, signal processing and optimization techniques. He has published many research papers in international journals and conference proceedings. He has established the SDR Lab at the Military College of Signals (MCS) and is heading the SDR Research Group. Currently he is working as an assistant professor at the National University of Sciences and Technology (NUST). Idrees, Z., Bhatti, F.A. & Rashdi, A. Spectrum sensing using low-complexity principal components for cognitive radios. J Wireless Com Network 2015, 184 (2015). doi:10.1186/s13638-015-0412-4 Keywords: spectrum sensing; covariance-based detection
CommonCrawl
A non-local problem for the Fokker-Planck equation related to the Becker-Döring model Joseph G. Conlon 1 and André Schlichting 2 University of Michigan, Department of Mathematics, Ann Arbor, MI 48109-1109, USA; Universität Bonn, Institut für Angewandte Mathematik, Endenicher Allee 60, 53129 Bonn, Germany Discrete & Continuous Dynamical Systems, April 2019, 39(4): 1821-1889. doi: 10.3934/dcds.2019079 Received November 2017, Revised September 2018, Published January 2019 This paper concerns a Fokker-Planck equation on the positive real line modeling nucleation and growth of clusters. The main feature of the equation is the dependence of the driving vector field and boundary condition on a non-local order parameter related to the excess mass of the system. The first main result concerns the well-posedness and regularity of the Cauchy problem. The well-posedness is based on a fixed point argument, and the regularity on Schauder estimates. The first a priori estimates yield Hölder regularity of the non-local order parameter, which is improved by an iteration argument. The asymptotic behavior of solutions depends on some order parameter $ \rho $ depending on the initial data. The system shows different behavior depending on a value $ \rho_s>0 $, determined from the potentials and diffusion coefficient. For $ \rho \leq \rho_s $, there exists an equilibrium solution $ c^{\rm{eq}}_{(\rho)} $. If $ \rho\le\rho_s $ the solution converges strongly to $ c^{\rm{eq}}_{(\rho)} $, while if $ \rho > \rho_s $ the solution converges weakly to $ c^{\rm{eq}}_{(\rho_s)} $. The excess $ \rho - \rho_s $ gets lost due to the formation of larger and larger clusters. In this regard, the model behaves similarly to the classical Becker-Döring equation. The system possesses a free energy, strictly decreasing along the evolution, which establishes the long time behavior. In the subcritical case $ \rho<\rho_s $ the entropy method, based on suitable weighted logarithmic Sobolev inequalities and interpolation estimates, is used to obtain explicit convergence rates to the equilibrium solution. The close connection of the presented model and the Becker-Döring model is outlined by a family of discrete Fokker-Planck type equations interpolating between both of them. This family of models possesses a gradient flow structure, emphasizing their commonality. Keywords: Non-linear non-local PDE, Fokker-Planck equation, coarsening, convergence to equilibrium, entropy method, gradient flow. Mathematics Subject Classification: Primary: 35Q84; Secondary: 35F05, 35K55, 37D35, 82C26, 82C70. Citation: Joseph G. Conlon, André Schlichting. A non-local problem for the Fokker-Planck equation related to the Becker-Döring model. Discrete & Continuous Dynamical Systems, 2019, 39 (4): 1821-1889. doi: 10.3934/dcds.2019079
CommonCrawl
History of Science and Mathematics Stack Exchange
Introduction of the Gravitational constant
The constant G in Newton's law $F = G m_1m_2/r^2$ is, as far as I know, absent from Newton's work - who introduced this constant?
It is implicitly present in Newton's work. So one did not have to "introduce" it. Once you choose the units, you get a constant. At the time of Newton, they avoided writing constants depending on the units because the units were not firmly established. They preferred to phrase the laws in terms of proportionality. – Alexandre Eremenko Jan 25 '16 at 22:40
That's what I mean by absent, and my question is who introduced it, in your terms, explicitly. – user2255 Jan 26 '16 at 6:41
I don't know, but I suppose that physical units were standardized only at the time of the French revolution. – Alexandre Eremenko Jan 26 '16 at 21:32
In theoretical physics research, publication of papers, and graduate (usually) level textbooks, a number of physical constants are set to 1, the integer 1 -- that is, no units. Obviously this changes the units of most if not all of the other values of a given equation, but it simplifies the mathematical work. In quantum field theory work, it is common to set $\hbar=1$ and $c=1$. In cosmology it is common to set $c=1$ and $G=1$. – K7PEH Jan 28 '16 at 19:25
As a constant using ordinary units (e.g., metric), the gravitational constant G doesn't occur until late in the 19th century. Scientists in Newton's time and into the 18th century were quite happy to work in terms of proportionalities and ratios; the constant of proportionality never needed to be written down. For example, one does not see a gravitational constant, in any form, in Cavendish's description of his experiments to "determine the density of the Earth" (1798). This practice of not using a gravitational constant with regard to earthly matters continued throughout much of the 19th century, e.g., Pratt (1855). One does see something much akin to the Newtonian gravitational constant in the works of Laplace (1799) and Gauss (1809). Using modern nomenclature, Newton's law of gravitation using the gravitational constant of Laplace and Gauss is $$F = k^2 \frac{m_1 m_2}{r^2}$$ The key differences between that and $F=G \frac{m_1 m_2}{r^2}$ are that Laplace's and Gauss's $k$ is the square root of the Newtonian constant, and that the system of units is more apropos to modeling the solar system. Gauss explicitly specified his system of units: one mean solar day as the unit of time, one solar mass as the unit of mass, and one astronomical unit (the mean distance between the Earth and the Sun) as the unit of length. The Gaussian gravitational constant has a numerical value of 0.01720209895, as reported by Gauss. This value persisted as a defined constant until very recently (2012, and perhaps later). Aside: Note that making $k$ a defined constant effectively made the astronomical unit a derived quantity, divorcing it from the size of the Earth's orbit. Keeping $k$ at the value established by Gauss was standard practice throughout the 19th and 20th centuries. The push to measure $G$ using earthly units didn't occur until after physicists saw the value of the metric system and its predecessors, and was largely driven by electromagnetism.
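A quick numerical check of that conversion, assuming modern values for the astronomical unit, day, and solar mass (the constants below are ours, not from the historical sources):

```python
k = 0.01720209895     # Gauss's constant, AU^(3/2) / (day * Msun^(1/2)), as reported by Gauss
AU = 1.495978707e11   # meters (modern value, an assumption here)
DAY = 86400.0         # seconds
M_SUN = 1.9885e30     # kilograms (modern value, an assumption here)

G_solar = k ** 2      # G in AU^3 / (Msun * day^2), approximately 2.959e-4
G_SI = G_solar * AU ** 3 / (M_SUN * DAY ** 2)
print(G_SI)           # ~6.67e-11 m^3 kg^-1 s^-2 -- the familiar Newtonian constant
```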
A flurry of publications occurred late in the 19th century regarding the Newtonian gravitational constant, apparently starting with Cornu and Baille (1873). (The reason I wrote "apparently" is because I can't access that paper, and because Poynting describes that paper as "brief".) Judging a book by its cover (or a scientific paper by its title), the title "A new determination of the constant of attraction and of the mean density of the Earth" certainly does suggest that Cornu and Baille made the connection between assessing the mean density of the Earth and the gravitational constant. Boys (1889) makes the connection very explicit; Boys announced he was going to use a Cavendish experiment to measure the gravitational constant as the primary goal. That this measurement also would yield an estimate of the density of the Earth was secondary.
C.V. Boys (1889), "On the Cavendish experiment", Proc. R. Soc. London, 46, 253-268.
C.V. Boys (1894), "The Newtonian constant of gravitation", Notices R. Inst. 14, 353-377.
H. Cavendish (1798), "Experiments to determine the Density of the Earth", Phil. Trans. R. Soc. London, 88, 469-526.
A. Cornu et B. Baille (1873), "Détermination nouvelle de la constante de l'attraction et de la densité moyenne de la Terre", Comptes Rendus lxxvi, 954-8.
Gauss (1809), "Theoria motus corporum coelestium in sectionibus conicis solem ambientium".
Laplace (1799), "Traité de mécanique céleste".
Poynting (1894), "The mean density of the Earth".
– David Hammen
According to Wikipedia: "one of the first references to G is in 1873, 75 years after Cavendish's work." Newton assumed an inverse square law, as had already been proposed. Inverse square laws are usually due to spherical propagation: something is emitted in all directions from a point, so that at later times it is distributed on the surface of a sphere. This implies velocity, which Newton ignored (or assumed was too fast to mention). He could have conjectured that gravity was spherically propagated, and used $4\pi\cdot r \cdot r$ instead of just $r\cdot r$. G is not merely a constant of proportionality. It has units which deserve further attention. It is roughly $(c\cdot c \cdot R / M)/(4\pi)$, where $c=3\mathrm{E}8 ~\textrm{m/s}$ is the speed of light (or gravity); $R=4.6\mathrm{E}26~\textrm{meters}$ is the radius of the visible universe; $M=3\mathrm{E}52~\textrm{kg}$; and $4\pi$ is the neglected spherical factor.
– amI
Please get rid of the last paragraph. That's just woo. – David Hammen Jan 29 '16 at 9:26
Yes, but less than 1E-11 woo, which makes it worth consideration, unless you explain... – amI Feb 3 '16 at 18:00
Unless you can find a citation for that woo in peer reviewed journals, it's just woo. There are lots of citations that claim that G truly is a constant, meaning that it doesn't vary in time. We happen to live at a time where your woo is approximately true, to within an order of magnitude or so. It's numerological woo. – David Hammen Feb 3 '16 at 18:08
Those citations only claim that G has been constant within a factor of 2. To say that 'now' is so special of a coincidence is harder to believe than that there is new physics to find. – amI Feb 3 '16 at 18:27
Nonsense. Let's use different sets of units. In natural units, the speed of light is 1, the radius of the observable universe is $2.7\times10^{61}$ planck lengths, and the mass of the observable universe is $1.5\times10^{62}$ planck masses.
Your expression yields a value of 0.014 for G. It should be 1 in this system. Using the astronomical unit as the unit of length, the day as the unit of time, and the mass of the Sun as the unit mass, your expression yields a value of $1.4\times10^{12}$. The correct value of G in this system is $2.959\times10^{-4}$. – David Hammen Feb 4 '16 at 16:23
CommonCrawl
Intermittent Hormone Therapy Models Analysis and Bayesian Model Comparison for Prostate Cancer S. Pasetto ORCID: orcid.org/0000-0002-4926-37041, H. Enderling1,2,3, R. A. Gatenby1,4 & R. Brady-Nicholls1 Bulletin of Mathematical Biology volume 84, Article number: 2 (2022) The prostate is an exocrine gland of the male reproductive system dependent on androgens (testosterone and dihydrotestosterone) for development and maintenance. First-line therapy for prostate cancer includes androgen deprivation therapy (ADT), depriving both the normal and malignant prostate cells of androgens required for proliferation and survival. A significant problem with continuous ADT at the maximum tolerable dose is the emergence of cancer cell resistance. In recent years, intermittent ADT has been proposed as an alternative to continuous ADT, limiting toxicities and delaying time-to-progression. Several mathematical models with different biological resistance mechanisms have been considered to simulate intermittent ADT response dynamics. We present a comparison between 13 of these intermittent dynamical models and assess their ability to describe prostate-specific antigen (PSA) dynamics. The models are calibrated to longitudinal PSA data from the Canadian Prospective Phase II Trial of intermittent ADT for locally advanced prostate cancer. We perform Bayesian inference and model analysis over the models' space of parameters on- and off-treatment to determine each model's strength and weakness in describing the patient-specific PSA dynamics. Additionally, we carry out a classical Bayesian model comparison on the models' evidence to determine the models with the highest likelihood to simulate the clinically observed dynamics. Our analysis identifies several models with critical abilities to disentangle between relapsing and not relapsing patients, together with parameter intervals where the critical points' basin of attraction might be exploited for clinical purposes. Finally, within the Bayesian model comparison framework, we identify the most compelling models in the description of the clinical data. The prostate is an exocrine gland of most mammals' male reproductive system. The normal prostate is dependent on androgens, specifically testosterone and 5α-dihydrotestosterone (DHT), for development and maintenance (Feldman and Feldman 2001). Prostate carcinoma (PCa) results from the abnormal growth of tissue from the prostate's epithelial cells, which might induce metastasis in bones and lymph nodes. PCa is the second most common cancer in the USA and the second leading cause of cancer-related death after lung cancer (Siegel et al. 2021). The average male is 70 years of age at the time of diagnosis, with a strong asymmetry of the distribution biased toward older ages. PCa risk is often influenced by genetics. Men with a first-degree relative with PCa are twice as likely to develop it themselves; men with high blood pressure are also at higher risk of PCa. Treatment options typically include surgery, radiotherapy, high-intensity focused ultrasound, chemotherapy, and hormonal therapy. Screening for PCa is commonly performed through rectal examination or the noninvasive blood biomarker prostate-specific antigen (PSA), although its efficiency remains controversial (Lin et al. 2008).
Today, more robust marker indicators, such as the overexpression of prostate cancer gene 3 (PCA3) obtained from the messenger RNA (mRNA) in the urine, are considered better suited to monitoring the cancer evolution (Bussemakers et al. 1999, p. 3; Laxman et al. 2008; Neves et al. 2008; Hessels and Schalken 2009, p. 3; Borros 2009; Qin et al. 2020). PSA is a measure of a hematic enzyme produced by the prostate. PSA levels between 4.0 and 6.5 µg L−1 are generally considered normal (with a strong dependence on age). PSA is naturally present in the serum, and usually only a small amount of the prostate's PSA leaks into the blood. Hence, high levels are an indication of prostatic hyperplasia or cancer. Since prostate cells and their malignant counterparts require androgen stimulation to grow, prostate cancer can be treated by androgen deprivation therapy (ADT), a type of hormone therapy. This therapy reduces androgen-dependent (AD) cancer cells by preventing their growth and inducing cellular apoptosis. Unfortunately, treating with ADT often results in a relapse in the form of hormone-refractory PCa due to the selection for the androgen-independent (AI) cells. Intermittent androgen deprivation (IAD) therapy, whereby treatment is cycled on and off, is often used as an alternative to ADT to delay treatment resistance. In IAD, androgen deprivation therapy is administered until a patient experiences a remission and then is withheld until the disease progresses up to a certain level. Clinical studies have shown that patients are responsive to multiple hormone therapy cycles, eventually delaying the emergence of androgen independence (Klotz et al. 1986; Larry Goldenberg et al. 1995; Bruchovsky et al. 2006). We consider models of intermittent therapy due to clinical interest and solve the inference problem using longitudinal PSA data from the Canadian Prospective Phase II Trial of IAD for locally advanced prostate cancer. This work aims to present the first systematic comparative study of IAD models, emphasizing their ability to disentangle relapsing and non-relapsing patients, and to compare the models in the Bayesian framework. The goal is to detect the single model (or the group of models) that best represents the information in the considered dataset and, therefore, if possible, the most promising biological framework underlying it. A general and historical review of the available prostate cancer models can be found elsewhere (Phan et al. 2020). In Sect. 2, we present the data included in our analysis. In Sect. 3, we introduce the statistical framework used to analyze the data. Section 4 presents an analysis of the models and their performance over the dataset utilizing the framework considered. Section 5 compares the performance of all the models, and Sect. 6 concludes and discusses the paper's findings and future developments. Data Cohort We consider data from the Canadian Prospective Phase II Trial of intermittent ADT for biochemically recurrent prostate cancer (Bruchovsky et al. 2006, 2008). The total patient number is Npat = 101. Their median pretreatment serum testosterone is 13.0 µg L−1, ranging between 0.4 and 23.0 µg L−1. Over a maximum of \(n = 5\) intermittent ADT cycles, a median of 35.1–36.0 weeks is spent on-treatment (depending on n), and 25.6–53.7 weeks (e.g., n = 5 and n = 1, respectively) are off-treatment during the 6-year study. An example of a PSA profile for an individual patient is shown in Fig. 1a.
This patient responded to treatment during the first two treatment cycles (\(\tau_{1}\) and \(\tau_{2}\)) and progressed in his third cycle of treatment (\(\tau_{3}\)). The oscillatory dynamics demonstrate the effect of the intermittent treatment, with a decrease in PSA during treatment and an increase once treatment is turned off. Each data point is assigned an error of 1 day in time (i.e., the time resolution of the dataset) and a maximal PSA error of \(e_{\text{max}} = 0.1\) µg L−1, assumed from the literature (Borros 2009).

Model data. a PSA data for patient #33 from tmin = 88 [day] to tmax = 941 [day]. Black dots indicate PSA values (error bars are omitted due to little variability), orange points indicate where PSA was collected, graphically represented as an orange continuous box function and evidenced in this example panel only by yellow shaded areas. Treatment intervals are labeled τ1, τ2 and τ3. t* is the first minimum of PSA in τ1. b Distribution of the number of data points per patient. The original data are shown by the red dashed lines, while the selected subset of patients used in this analysis is shown in the yellow shaded region (Color figure online)

We set the minimal PSA detection threshold equal to \({\Delta }_{1} = 0.1\) µg L−1, i.e., any patient datum below this threshold is set to \(0.1\) µg L−1. Patients whose per-day PSA fluctuation stays below \(2.0\) µg L−1, i.e., the natural per-day fluctuation of the KLK3 glycoprotein (PSA) in a typical man (Morgentaler and Conners 2015), are excluded because such small fluctuations are considered natural rather than pathological. To consider only PSA concentrations above Poisson noise, patients with fewer than \({\Delta }_{2} = \sqrt {N_{\text{pat}}}\) (i.e., the sample shot/Poisson noise) data points are also excluded. These exclusions result in our analysis considering data from 89 (Npat = 89) rather than 101 patients. The patients' distribution per number of data points after the selection process is shown in Fig. 1b, compared to the original distribution. A code sketch of these selection rules is given below.

The PSA trend shown in Fig. 1a is modeled through the interplay between cellular populations, i.e., with a compartment modeling approach. An androgen-dependent set of \(N_{D} \ge 1\) cell populations (each with a concentration \(n_{D,k} = n_{D,k}(t)\), \(k = 1, \ldots, N_{D}\), representing the compartment concentration [µg L−1], \(t \in {\mathbb{R}}\) time [day]) is assumed to contribute to the oscillatory behavior of PSA. Additionally, a set of time-dependent androgen-independent cell populations, \(N_{I} \ge 0\) populations \(n_{I,l} = n_{I,l}(t)\), \(l = 0, \ldots, N_{I}\), also contributes to the PSA profile, such that the PSA concentration \(c_{\text{PSA}}\) is given by \(c_{\text{PSA}} = f(n_{D,k}, n_{I,l})\), where \(f \in C^{0}\) is a continuous (not necessarily smooth) function determined by a suitably designed ODE system. Any further dependence on space, temperature, and pressure is generally neglected in the IAD models' compartment approach.
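The cohort selection rules above reduce to a few lines of code. The following is a minimal sketch, not the authors' pipeline: the container layout and our reading of the fluctuation criterion (exclude a patient whose PSA never fluctuates beyond the natural per-day level) are assumptions.

```python
import numpy as np

PSA_FLOOR = 0.1        # Delta_1 [µg/L]: PSA detection threshold
FLUCT_CUTOFF = 2.0     # [µg/L]: natural per-day PSA fluctuation
N_PATIENTS = 101       # original cohort size

def select_patients(cohort):
    """Apply the exclusion rules of Sect. 2 to {patient_id: (t, psa)},
    where t (days) and psa (µg/L) are 1-D numpy arrays."""
    min_points = np.sqrt(N_PATIENTS)          # Delta_2: Poisson-noise cutoff
    selected = {}
    for pid, (t, psa) in cohort.items():
        psa = np.maximum(psa, PSA_FLOOR)      # clip data below the threshold
        per_day = np.abs(np.diff(psa)) / np.diff(t)
        if per_day.max() < FLUCT_CUTOFF:      # fluctuations look physiological
            continue
        if len(psa) < min_points:             # too few points vs shot noise
            continue
        selected[pid] = (t, psa)
    return selected
```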
Furthermore, the function \(f\) introduced above is often assumed to be a linear combination of the \(N_{D}\) and \(N_{I}\) compartments, e.g., \(c_{\text{PSA}} = \sum_{k} w_{k} n_{D,k} + \sum_{l} w_{l} n_{I,l}\) for some weights \(w_{k}\) and \(w_{l}\). By assuming ADT to be highly effective in the first treatment interval \(\tau_{1}\), we can set \(n_{D}(t \in \tau_{1}) \cong 0\) for some t as an initial condition (hereafter i.c.). This approach does not necessarily hold for \(\tau_{i}\) with \(i > 1\): generally, \(\forall i\) where \(c_{\text{PSA}} \cong 0\), we can equally well assume this setting for the i.c. of the \(n_{D,k} = n_{D,k}(t)\) equations. Equivalently, we can assume that a non-holonomic (i.e., inequality) condition, \(n_{D}(t) \le n_{I}(t)\) for some \(t \in \tau_{1}\), holds for the fitting procedure at the beginning of the patient time series. Furthermore, in most of the models considered here, these considerations are articulated with the addition of a few extra equations that interpret, at a local or global level in the parameter space, the contribution to \(c_{\text{PSA}}(t)\) by the androgen quota, cellular plasticity, stem cell populations, or other model specificities. Finally, we know from biological arguments that, under treatment, the models' equations are designed to allow \(c_{\text{PSA}}\) to tend asymptotically to the value \(c_{\text{PSA}} = 0\). Any model that does not permit the phase state \(c_{\text{PSA}} = c_{\text{PSA}}(t)\) to reach approximately null values for any \(t\), i.e., \(\nexists t \in {\mathbb{R}}: c_{\text{PSA}}(t) \cong 0\), would fail to reproduce the patients whose first treatment is successful (see Fig. 1a). We elaborate more on this in Sect. 4. It is therefore worth investigating whether the models allow for stationary equilibria outside the treatment intervals, and then whether any of the patients' best-fit values fall close to those equilibria (when they exist). This behavior would imply a stationary or recurrent solution for the dynamics and, therefore, a constrained PSA evolution if this "basin of attraction" is reachable in a biological time of interest. We stress that this mathematical behavior does not imply that the patient can effectively reach the equilibrium point on the biological timescale of interest, nor that the point is plausible regarding toxicity levels.

Bayesian Inference

The Bayesian regression approach stems from the concept of probability as a measure of the plausibility of a model given the information in the data presented above. First, we encode the prior state of knowledge about the parameters considered, \({\bf{p}} = \{p_{1}, p_{2}, \ldots\}\), into a prior distribution function \(\Pr({\bf{p}}|I)\), where \(I\) represents any available information. Typically, this can be achieved with a flat, uniform, non-informative prior at the beginning, or with a sharper prior once the model is better trained. We return to this point in Sect. 3.1. Secondly, we consider the dataset, \(D\), through the likelihood \(L({\bf{p}}, D) = \Pr(D|{\bf{p}}, I)\).
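These two ingredients translate directly into code. The sketch below assumes a Gaussian likelihood for the PSA residuals and a bounded flat prior (consistent with Sect. 3.1); the names and the `model(p, t)` placeholder, standing for any of the ODE models of Sect. 4, are ours.

```python
import numpy as np

def log_prior(p, lo, hi):
    """Bounded flat prior Pr(p|I): constant inside the box, -inf outside."""
    p = np.asarray(p)
    return 0.0 if np.all((p > lo) & (p < hi)) else -np.inf

def log_likelihood(p, t, y, err, model):
    """Gaussian log-likelihood L(p, D) for PSA data y(t) with errors err."""
    r = (y - model(p, t)) / err
    return -0.5 * np.sum(r ** 2 + np.log(2.0 * np.pi * err ** 2))

def log_posterior(p, t, y, err, model, lo, hi):
    """Unnormalized log Pr(p|D, I): prior and likelihood combined,
    as discussed in the inference step next."""
    lp = log_prior(p, lo, hi)
    if not np.isfinite(lp):
        return lp
    return lp + log_likelihood(p, t, y, err, model)
```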
Finally, the inference problem is solved by studying the probability distribution function encoding the knowledge of the prior and the information contained in the likelihood of the data, \(\Pr({\bf{p}}|D, I) \propto \Pr({\bf{p}}|I) L({\bf{p}}, D)\). Standard techniques to achieve this result are fully analytical (e.g., for some linear regressions), approximated (e.g., asymptotic approximation, Laplace approximation, Gaussian approximation, etc.), iterative (e.g., Levenberg–Marquardt), or fully numerical (e.g., simulated annealing, genetic algorithms). The choice between these techniques depends on the nature of the problem. Here, we start with the Laplace approximation with hyperparameters (Hutter et al. 2011; Murphy 2012; Theodoridis 2015), as a few of the mathematical models considered herein are nested, to solve the inference problem (i.e., to search for the optimal set of parameters \({\bf{p}}\) that best represents the data). In order to confirm the inference results and to perform the Bayesian model comparison numerically, we test the results both against the nested sampling approach to the global likelihood (hereafter evidence) (Skilling 2004; Mukherjee et al. 2006; Feroz and Hobson 2008) and with the differential evolution search (Feoktistov 2006; Goode and Annin 2015), with scaling factors up to aggressive values (\(\le 0.9\)) and crossover probabilities (\(\ge 0.1\)). For the Bayesian model comparison part of our work (Sect. 5), the nested sampling approach embeds the results in a natural framework. Finally, we note that substantial limitations in the fitting procedure come from the sparse and irregular temporal sampling of the clinical data. This irregularity impacts the parameter space exploration due to the lack of conditions on the PSA trend's derivative. The partial derivative \(\partial_{t} c_{\text{PSA}}({\bf{p}}; t)\) is not smooth, thus precluding some straightforward optimization techniques based on the PSA curves' gradients or convexity (Theodoridis 2015).

The Priors \(\Pr({\bf{p}}|I)\)

While robust approximations or numerical tools have been adopted for the Bayesian framework, special attention is paid to the use of priors. As mentioned, Bayesian inference requires the priors, \(\Pr({\bf{p}}|I)\), for parameter estimation. With initially unknown priors, we implement uniform priors over the parameters' full ranges (Fig. 2a). By requiring all model parameters to be positive, we can assume the Heaviside step function \(\theta = \theta({\bf{p}})\) as an (unnormalized) prior; this is generally referred to as an "improper prior": as it is unbounded above, it cannot be normalized and therefore has no mean, standard deviation, median, or quantiles. We set an upper bound for each parameter, \(p < p_{\text{max}}\), with \(p_{\text{max}} < +\infty\) \(\forall p\) strictly. An alternative functional form tested is the non-informative Jeffreys prior, \(\Pr({\bf{p}}|I) \propto \sqrt{\det({\text{F}}({\bf{p}}))}\), with \({\text{F}}\) the Fisher information matrix (Jeffreys 1946) and "det" the matrix determinant.

Model prior development. This example refers to the model by Hirata et al. 2010 and its 13 defining parameters. A similar technique is adopted for the other models. a Initial bounded flat prior. b
Evolution of prior development for γDon [day−1] as the number of patients analyzed is increased (Npat = {10,25,60,72}, respectively). c Final priors for the remaining 12 parameters (colors correspond to those shown in panel a) (Color figure online)

Beyond testing flat/Jeffreys priors, in the numerical nested sampling approach we explore the parameter space logarithmically to avoid divergences; once we reach a statistically significant sample, i.e., above the Poisson noise fluctuation (\(\sim\sqrt{N_{\text{pat}}}\)) shaping the posterior PDF, we adopt the resulting posterior as a prior for the subsequent patients analyzed in the dataset; finally, we iterate, implementing a recursive determination of the prior (Fig. 2b, c). Further details can be found in Pasetto et al. (2021), where we discuss Bayesian analysis of retrospective data to guide clinical decisions.

Analysis of IADT Mathematical Models

Here we consider only IAD models due to current clinical interest. Each model is presented and justified in a biological and mathematical sense in the original paper where it was first presented, and we refer the reader to those papers for detailed model derivations. Similarly, the sensitivity analysis of the model parameters is presented in each paper individually, and we elaborate on it here only where necessary. We will refer to the set of relapsing patients as the \({\Omega}\)-set and to the set of non-relapsing patients as the \(\neg{\Omega}\)-set.

We parametrize the individual IAD data with a patient-specific control function \(T_{ps}\) defined as follows: \(T_{ps}(t) = \sum\nolimits_{i=1}^{n} {\bf{1}}_{\tau_{i}}(t)\), \(0 < t \in [t_{\text{min}}, t_{\text{max}}]\), with \(t_{\text{min}}\) and \(t_{\text{max}}\) the minimum and maximum patient-specific times under consideration (e.g., Fig. 1a), \(t_{\text{min}}\) generally falling after the first treatment drop; \(n \ge 1\) is the number of intervals \(\tau_{i}\) considered. \(\tau_{i} \subseteq \left] t_{\text{min}}, t_{\text{max}} \right[ \forall i\) is referred to as the "ith treatment cycle," and \({\bf{1}}_{\tau_{i}}\) is the indicator function of the interval \(\tau_{i}\) (defined as \({\bf{1}}_{\tau_{i}} = 1\) for \(t \in \tau_{i}\), 0 otherwise); a code sketch of \(T_{ps}\) is given below. For modeling purposes, the weights/errors \(e_{i}\) for each datum \(i\) have been assigned either uniformly, \(e_{i} = {\text{cnst}}. \forall i\), or with a linearly decreasing relevance from the last PSA concentration peak \(\hat{c}_{\text{PSA}}\) (e.g., in \(\tau_{3}\) of Fig. 1), with \(e_{i} = |c_{{\text{PSA}}i} - \hat{c}_{\text{PSA}}|_{t=\hat{t}}\), i.e., at \(t = \hat{t}\), and \(e_{i} = |\hat{c}_{\text{PSA}}| - |\hat{t} - t| + |c_{{\text{PSA}}i}|\) for \(t \ne \hat{t}\). Finally, we performed a sensitivity analysis on all the models included here. Comments on the technique adopted are technical and left to Supplement A.

Ideta et al. (2008)

The model by Jackson (2004) can be considered the continuous ADT model prototype. Its extension to the IAD therapy of interest here was presented by Ideta et al. (2008). In this model (hereafter, I08), the authors drop the dependence of Jackson's model on the spatial distribution, which is only of theoretical interest and not resolved in clinical PSA data.
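Since \(T_{ps}\) enters every model that follows, it is worth sketching once in code. The intervals below are hypothetical, loosely patterned after Fig. 1a.

```python
import numpy as np

def T_ps(t, cycles):
    """Patient-specific control function of Sect. 2.1: sum of indicator
    functions over the treatment intervals tau_i. Returns 1.0 on-treatment
    and 0.0 off-treatment."""
    t = np.asarray(t, dtype=float)
    on = np.zeros_like(t)
    for t_start, t_stop in cycles:
        on += (t >= t_start) & (t < t_stop)
    return np.clip(on, 0.0, 1.0)   # guard, although intervals do not overlap

# Hypothetical three-cycle schedule [days]
tau = [(88.0, 300.0), (450.0, 620.0), (780.0, 941.0)]
print(T_ps([100.0, 400.0, 500.0], tau))   # -> [1. 0. 1.]
```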
Model simulations predict that intermittent ADT can only prevent progression if normal androgen levels decrease the growth rate of AI cells, which may be biologically unlikely since AI cells have androgen receptors with increased sensitivity (Grossmann et al. 2001). We consider the I08 model in the following form:

$$\begin{aligned} \frac{{\text{d}}n_{D}}{{\text{d}}t} &= (\gamma_{D} - \delta_{D} - \mu_{DI})\, n_{D}, \\ \frac{{\text{d}}n_{I}}{{\text{d}}t} &= \mu_{DI}\, n_{D} + (\gamma_{I} - \delta_{I})\, n_{I}, \end{aligned}$$

with initial conditions \(n_{D}(t_{0D}) = n_{D0}\), \(n_{I}(t_{0I}) = n_{I0}\). As previously mentioned, \(n_{D}\) and \(n_{I}\) are the androgen-dependent and -independent population numbers of cells (or concentrations). \(\gamma_{i}\) and \(\delta_{i}\), \(i \in \{D, I\}\), are the growth and apoptosis rates for AD and AI cells, given, respectively, by:

$$\begin{aligned} \gamma_{D} &= \gamma_{D\max}\left(\gamma_{DA} + (1 - \gamma_{DA})\frac{c_{A}}{c_{A} + k_{DA\gamma}}\right), & \gamma_{I} &= 1 - \left(1 - \frac{\delta_{IA}}{\gamma_{IA}}\right)\frac{c_{A}}{c_{A0}}, \\ \delta_{D} &= \delta_{D\max}\left(\delta_{DA} + (1 - \delta_{DA})\frac{c_{A}}{c_{A} + k_{DA\delta}}\right), & \delta_{I} &= 1. \end{aligned}$$

In Eq. (2), \(\gamma_{D\text{max}}\) and \(\delta_{D\text{max}}\) are the maximal AD proliferation and apoptosis rates, \(\delta_{DA}\) is a control parameter for the effect of low androgen levels on the AD apoptosis rate, \(k_{DA\gamma} \ne 0\) is the AD half-saturation rate, and \(k_{DA\delta} \ne 0\) sets the AD apoptosis rate dependence on androgen. Finally, \(\delta_{IA}\) and \(\gamma_{IA} \ne 0\) modulate the AI death and growth rates in hormonally failing patients. Mutations from AD to AI cells are allowed at a mutation rate:

$$\mu_{DI} = \mu_{DI\text{max}}\left(1 - \frac{c_{A}}{c_{A0}}\right);$$

thus, the mutation rate vanishes as the androgen concentration (here normalized to its homeostatic level \(c_{A0} \ne 0\)) approaches \(c_{A0}\), and reaches its maximum value \(\mu_{DI\text{max}}\) under full androgen suppression. A decoupled ODE model of the serum androgen concentration under treatment, \(c_{A}\), is given by:

$$\frac{{\text{d}}c_{A}}{{\text{d}}t} = \delta_{cA}(c_{A0} - c_{A}) - \delta_{cA} c_{A0} T_{ps},$$

with initial condition \(c_{A}(t_{0\text{A}}) = c_{A0} \ne 0\), where \(\delta_{cA}\) is the androgen clearance rate. Here \(T_{ps} = T_{ps}(t)\) is the patient treatment-specific function defined in Sect. 2.1. Finally, the PSA concentration of interest to us, \(c_{\text{PSA}}\), is a linear combination with weights \(w_{i}\) of the population densities:

$$c_{\text{PSA}} = \sum_{i \in \{D, I\}} w_{i} n_{i}.$$

Based on the original analysis of Ideta et al. and the available dataset, we explored two versions of this model: one where \(\delta_{IA} = \gamma_{IA}\) in Eq. (2), i.e., \(\gamma_{I} = {\text{cnst}}.\) (hereafter model I08A), and the original form of the equations (\(\delta_{IA} \ne \gamma_{IA}\), hereafter model I08B).

I08A in the Context of the Data

We note that the system of equations (hereafter SoE) composed of Eqs. (1)–(4) decouples in the androgen concentration \(c_{A}\).
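A minimal numerical integration of Eqs. (1)–(5) can be sketched as follows. The parameter values are purely illustrative, not the best-fit values discussed below.

```python
import numpy as np
from scipy.integrate import solve_ivp

def i08_rhs(t, y, p, Tps):
    """Right-hand side of the I08 SoE, Eqs. (1)-(4); y = (n_D, n_I, c_A)."""
    nD, nI, cA = y
    gD = p['gDmax'] * (p['gDA'] + (1 - p['gDA']) * cA / (cA + p['kDAg']))
    dD = p['dDmax'] * (p['dDA'] + (1 - p['dDA']) * cA / (cA + p['kDAd']))
    gI = 1.0 - (1.0 - p['dIA'] / p['gIA']) * cA / p['cA0']   # Eq. (2)
    dI = 1.0
    mu = p['muDImax'] * (1.0 - cA / p['cA0'])                # Eq. (3)
    return [(gD - dD - mu) * nD,                             # Eq. (1)
            mu * nD + (gI - dI) * nI,
            p['dcA'] * (p['cA0'] - cA) - p['dcA'] * p['cA0'] * Tps(t)]  # Eq. (4)

# Illustrative (hypothetical) parameters and a single on/off treatment cycle
p = dict(gDmax=0.025, dDmax=0.020, gDA=0.1, dDA=0.2, kDAg=2.0, kDAd=0.5,
         dIA=0.020, gIA=0.025, muDImax=5e-5, cA0=12.0, dcA=0.08)
Tps = lambda t: 1.0 if t < 300.0 else 0.0
sol = solve_ivp(i08_rhs, (0.0, 600.0), [15.0, 0.1, p['cA0']],
                args=(p, Tps), max_step=1.0)
c_psa = 1.0 * sol.y[0] + 1.0 * sol.y[1]    # Eq. (5) with unit weights w_i
```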
The analysis of the system results in a line of infinitely many equilibria on the intersection of the plane \(n_{D} = 0\) with the plane \(c_{A} = c_{{\text{A}}0} - c_{{\text{A}}0} T_{ps}\) in the space of phase-state variables \((n_{D}, n_{I}, c_{A})\). Thus, \(c_{A} = c_{{\text{A}}0}\) off-treatment and \(c_{A} = 0\) on-treatment. Standard linear stability analysis (Wiggins 2003) shows that the Jacobian of the system produces a null generalized eigenvalue \(\lambda_{1} = 0\), a negative one \(\lambda_{2} = -\delta_{A}\), and a more complicated third generalized eigenvalue that takes, off-treatment, the elegant form: \(\lambda_{3}^{\text{off}} = \gamma_{\text{Dmax}} + \frac{(\gamma_{\text{DA}} - 1)\gamma_{\text{Dmax}} k_{{\text{D}}\gamma/2}}{c_{{\text{A}}0} + k_{{\text{D}}\gamma/2}} - \delta_{\text{Dmax}} - \frac{(\delta_{\text{DA}} - 1)\delta_{\text{Dmax}} k_{{\text{D}}\delta/2}}{c_{{\text{A}}0} + k_{{\text{D}}\delta/2}}.\) The sign of \(\lambda_{3}^{\text{off}}\) can be evaluated for the best-fit parameter values resulting from the inference work of Sect. 3 on the patient cohort considered here (Sect. 2), and it turns out to be positive for all patients. Therefore, the above-found lines of equilibria represent an unstable 1D manifold, and further investigations (e.g., in the context of center manifold theory) are not of additional interest to us.

We are instead more interested in further exploiting the characteristics of the present dataset in the context of this model by using the decoupled nature of the serum androgen concentration \(c_{A}\). All the patients are considered from their first cycle of treatment, starting with \(T_{ps}(t) = 1\) for \(t \in \tau_{1}\). Hence, we can emulate with a Heaviside step function, \(T_{ps} = \theta(-t)\), a cycle of treatment followed by the off-treatment period for a suitable cyclic interval (on–off, on–off, on–off, and so forth) around the off-treatment start, set at \(t = 0\). Within this approach, the general solution of Eq. (4) is algebraic and reads (writing \(\delta_{A} \equiv \delta_{cA}\)):

$$c_{A}(t) = c_{{\text{A}}0}\, e^{-\delta_{A}(t+1)}\left(e^{\delta_{A}}\,\theta(t)\left(e^{\delta_{A} t} - 1\right) + 1\right).$$

This solution is monotonic on each of the two on/off-treatment phases because the derivative \({\text{d}}c_{A}/{\text{d}}t = c_{{\text{A}}0}\delta_{A} e^{-\delta_{A}(t+1)}(e^{\delta_{A}}\theta - 1)\) is null neither for \(t < 0\), i.e., on-treatment, nor for \(t \ge 0\), off-treatment. By splitting the treatment into on/off-time, we can always invert the map \(c_{A} = c_{A}(t)\) piecewise into \(t = t(c_{A})\). For example, in our case it reads \(t = -\frac{1}{\delta_{A}}\log\left(\frac{c_{A}}{c_{A0}}\right) - 1\) on-treatment and \(t = \frac{1}{\delta_{A}}\log\frac{c_{A0}(e^{-\delta_{A}} - 1)}{c_{A} - c_{A0}}\) off-treatment, for \(c_{A} \ne c_{A0}\), \(c_{A} \ne 0\) and \(\delta_{A} \ne 0\). We exploit Eq. (6) to obtain the probability distribution function (PDF) of the orbits over all the sets of patients by remapping each cycle onto the phase-space section \((O, n_{D}, n_{I})\).
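Equation (6) is easy to verify against a direct integration of Eq. (4) with \(T_{ps} = \theta(-t)\); a short numerical check, with a hypothetical clearance rate, is sketched below.

```python
import numpy as np
from scipy.integrate import solve_ivp

cA0, dA = 12.0, 0.08   # hypothetical homeostatic level and clearance rate

def cA_closed(t):
    """Closed-form Eq. (6): decay on-treatment (t < 0), relaxation back
    to cA0 off-treatment (t >= 0)."""
    t = np.asarray(t, dtype=float)
    step = (t > 0).astype(float)
    return cA0 * np.exp(-dA * (t + 1)) * (np.exp(dA) * step * (np.exp(dA * t) - 1) + 1)

# Direct integration of Eq. (4); on-treatment starts at t = -1, where c_A = cA0
rhs = lambda t, c: [dA * (cA0 - c[0]) - dA * cA0 * (1.0 if t < 0 else 0.0)]
sol = solve_ivp(rhs, (-1.0, 40.0), [cA0], dense_output=True, max_step=0.05)
tt = np.linspace(-1.0, 40.0, 400)
# Maximum discrepancy is small (set by the solver tolerance)
print(np.max(np.abs(sol.sol(tt)[0] - cA_closed(tt))))
```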
We take advantage of the sharp passage of \(c_{A}\) from its homeostatic value \(c_{A0}\) to null and vice versa, in conjunction with the bijective map just found. Figure 3a shows the \(c_{A}\) profile for a representative patient. While time is a monotonically increasing function, the map we are considering is one-to-one only over the treatment cycle \(T_{ps} = 1\) and the off-cycle \(T_{ps} = 0\), respectively, and on these two tracks we can write the SoE as \(n_{D} = n_{D}(t(c_{A})) = n_{D}(c_{A})\) and \(n_{I} = n_{I}(c_{A})\). As \(c_{A}\) switches sharply between \(c_{A} = c_{A0}\) and \(c_{A} = 0\), we can limit ourselves to a first-order solution of the SoE. After simple algebra, we arrive at the approximate solution of the SoE in the form:

$$\begin{aligned} n_{D} &\simeq n_{D0} - \frac{n_{D0}}{c_{A0}\delta_{A}}(c_{A} - c_{A0})\left(\frac{\gamma_{D\max}(c_{A0} + \gamma_{DA} k_{D\gamma/2})}{c_{A0} + k_{D\gamma/2}} - \frac{\delta_{D\max}(c_{A0} + \delta_{DA} k_{D\delta/2})}{c_{A0} + k_{D\delta/2}}\right), \\ n_{I} &\simeq n_{I0}, \end{aligned}$$

to first order in \(c_{A}\) (and where \(\simeq\) means asymptotic-to). As evident, the second population remains close to its initial value \(n_{I0}\), while the first is perturbed away from it, suggesting that we can first sample the PDF of the dataset for fixed values of \(n_{I}\) around \(n_{I0}\) and then investigate the PDF sampled from the best fits obtained by the trial patients with Eq. (7). The results are shown in Fig. 3b. The trends of the two distributions, for patients developing resistance and for continuing responders, are comparable above the starting value \(n_{D} = n_{D0}\), while they diverge for smaller values of \(n_{D}\). Because we assume \(n_{D}\) to be a proxy for \(c_{\text{PSA}}\) at small values of \(n_{I}\), as evinced from Eqs. (5) and (7), if the model correctly interprets the data, then a patient with an initial PSA drop below 10% of its initial value is highly likely to be a continuing responder. The risk of resistance development grows to about 50% when the initial drop in PSA is around 30%.

Ideta et al. model analysis results. a Evolution of the normalized androgen concentration cA (normalized to its homeostatic value cA0; left y-axis) as a function of time (blue curve) for a representative patient. For completeness, the PSA profile is also reported in light red (right y-axis). b Normalized androgen-dependent probability distribution functions on the cohort of best-fit parameters for progressive (\(\Omega\), red) and responsive (\(\neg\Omega\), blue) sets (Color figure online)

I08B in the Context of the Data

For I08B, where \(\delta_{IA} \ne \gamma_{IA}\), the ratio \(\delta_{IA}/\gamma_{IA}\) appearing in Eq. (2) evidences the structural non-identifiability of the SoE. The treatment of the equilibria and their stability is more straightforward in this model form than in I08A. The only equilibrium point is \(\{n_{D}, n_{I}, c_{A}\}_{\text{eq}} = \{0, 0, c_{{\text{A}}0} - c_{{\text{A}}0} T_{ps}\}\), with generalized eigenvalues \(\lambda_{i}\) of the Jacobian at the equilibrium given by \(\lambda_{1} = (T_{ps} - 1)(\gamma_{\text{IA}} - \delta_{\text{IA}})\gamma_{IA}^{-1}\) with \(\gamma_{IA} \ne 0\), \(\lambda_{2} = \lambda_{2}^{{\text{I}}08{\text{A}}}\) and \(\lambda_{3} = \lambda_{3}^{{\text{I}}08{\text{A}}}\).
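The eigenvalue signs quoted for I08A/I08B are easy to double-check numerically. The sketch below builds a finite-difference Jacobian at the candidate equilibrium and reuses `i08_rhs` and the illustrative parameters `p` from the I08 sketch above, so the resulting signs reflect those hypothetical parameters rather than the best-fit values.

```python
import numpy as np

def num_jacobian(f, x, eps=1e-6):
    """Central finite-difference Jacobian of a field f: R^n -> R^n at x."""
    x = np.asarray(x, dtype=float)
    J = np.zeros((x.size, x.size))
    for j in range(x.size):
        dx = np.zeros(x.size)
        dx[j] = eps
        J[:, j] = (np.asarray(f(x + dx)) - np.asarray(f(x - dx))) / (2 * eps)
    return J

# Off-treatment field evaluated at the equilibrium (n_D, n_I, c_A) = (0, 0, cA0)
f_off = lambda y: np.asarray(i08_rhs(0.0, y, p, lambda t: 0.0))
lam = np.linalg.eigvals(num_jacobian(f_off, [0.0, 0.0, p['cA0']]))
print(np.sort(lam.real))   # eigenvalue signs at the off-treatment equilibrium
```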
Following the I08A assumptions, we investigate the model under the conditions \(\delta_{D\text{max}} > \gamma_{D\text{max}}\), \(\delta_{DA} > 1\), and by requiring that \(\mu_{\text{max}} < \gamma_{I} - \delta_{I}\) to avoid the annihilation of the populations. Under these conditions, we can prove that \(\lambda_{1} \le 0\) and \(\lambda_{2} \le 0\), and that for \(\lambda_{3}\) the same considerations hold as for I08A, due to the non-stable nature of the resulting equilibrium manifold. Analogous considerations on the non-/relapsing treatment hold for I08B as for I08A, but with a more straightforward treatment for I08B than for I08A: the two equilibria at homeostasis, \(c_{A} = c_{A0}\), and at null androgen concentration, \(c_{A} = 0\), attract the dynamics as in I08A and self-explain the orbit profiles. Therefore, identical results to those from the inference of I08B on the patients' trials can be obtained for the PDF, and they are not depicted again.

Eikenberry et al. (2010)

The model developed by Eikenberry et al. (2010, hereafter E10) was an attempt to describe the interaction between testosterone (T, the primary androgen in the serum), its conversion by the enzyme 5α-reductase to dihydrotestosterone (DHT), and their binding (T:AR and DHT:AR) with the androgen receptors (AR) in the prostate. Because of model E10's versatility, we have included it in the IAD treatment model comparison. Of note, the authors did not propose the model to fit data, and here we reinterpret E10 beyond the scope of the original paper. The effect of intermittent ADT is assumed to act through the time modulation of testosterone. While a linear relation between testosterone and PSA level might not be readily available from the literature (Elzanaty et al. 2017), we recast the testosterone concentration \(n_{T}\) in E10 as follows:

$$\frac{{\text{d}}n_{T}}{{\text{d}}t} = n_{T}\left(\delta_{T} - \frac{\mu_{cat} n_{5\alpha}}{k_{M} + n_{T}} - \kappa_{T:R} n_{R}\right) + \delta_{T:R} q_{T:R} - (T_{ps} - 1)\Upsilon(n_{S}),$$

which we couple with the original system of equations:

$$\begin{aligned} \frac{{\text{d}}n_{R}}{{\text{d}}t} &= n_{R}(\gamma_{R} - \delta_{R} - \kappa_{DHT} n_{DHT} - \kappa_{T:R} n_{T}) + \delta_{DHT:R} q_{DHT:R} + \delta_{T:R} q_{T:R}, \\ \frac{{\text{d}}n_{DHT}}{{\text{d}}t} &= \frac{\mu_{cat} n_{5\alpha} n_{T}}{k_{M} + n_{T}} - n_{DHT}(\delta_{DHT} + \kappa_{DHT} n_{R}) + \delta_{DHT:R} q_{DHT:R}, \\ \frac{{\text{d}}q_{T:R}}{{\text{d}}t} &= \kappa_{T:R} n_{R} n_{T} - \delta_{T:R} q_{T:R}, \\ \frac{{\text{d}}q_{DHT:R}}{{\text{d}}t} &= \kappa_{DHT} n_{DHT} n_{R} - \delta_{DHT:R} q_{DHT:R}, \end{aligned}$$

with five nominal initial conditions: \(n_{R0} = n_{R}(t_{0R})\), \(n_{T0} = n_{T}(t_{0T})\), \(n_{DHT0} = n_{DHT}(t_{0DHT})\), \(q_{T:R0} = q_{T:R}(t_{0T:R})\) and \(q_{DHT:R0} = q_{DHT:R}(t_{0DHT:R})\). Here, the treatment function \(T_{ps}\) modulates the testosterone influx into the prostate, the function \(\Upsilon(n_{S})\), original to E10 and adopted here, where \(n_{\text{S}}\) is the testosterone serum concentration.
Furthermore, we consider the androgen receptor concentration \(n_{R}\) and the dihydrotestosterone concentration \(n_{DHT}\), together with two quota concentrations, \(q_{T:R}\) and \(q_{DHT:R}\) (Droop 1968), here taken to be the T:AR and DHT:AR complex concentrations, respectively. \(\gamma_{R}\) is the AR production rate, \(\delta_{R}\) the AR degradation rate, \(\delta_{T}\) the testosterone-specific degradation rate, and \(\delta_{DHT}\) the dihydrotestosterone degradation rate. The mass-action constants for the androgen-dependent component (testosterone) and dihydrotestosterone binding the AR are \(\{\kappa_{a}^{T}, \kappa_{d}^{T}, \kappa_{a}^{\text{DHT}}, \kappa_{d}^{\text{DHT}}\}\), and the \(5\alpha\)-reductase converts T to DHT by Michaelis–Menten enzyme kinetics with concentration \(n_{5\alpha}\), turnover number \(\mu_{cat}\), and constant \(k_{M} \ne 0\).

The Model in the Context of the Data

If we set \(a \equiv \mu_{\text{cat}} n_{5\alpha} - \delta_{T} k_{M}\), \(b \equiv (1 - T_{ps})\Upsilon(n_{s})\), and \({\Delta} \equiv \sqrt{(a + b)^{2} - 4b\delta_{T} k_{M}}\), then two critical points can be isolated at the intersection of the nullcline hyperplanes of the phase space. On-treatment, the critical points \(\{n_{R}, n_{T}, n_{DHT}, q_{T:R}, q_{DHT:R}\}_{\text{eq}}^{(1,2)} = \left\{0, 0, 0, \frac{-a - b \mp \Delta}{2\delta_{T}}, \frac{-a + b \pm \Delta}{2\delta_{DHT}}\right\}\) hold as soon as \(\mp a + b + {\Delta} \le 0 \wedge \mp a + {\Delta} \le b\). While only the second of these equilibria is of biological interest, it is not a stable equilibrium. Obtaining the complete set of generalized eigenvalues requires a cumbersome solution of three cubic equations, yet the stability check requires much less effort once we realize that one of the generalized eigenvalues from the characteristic equations reads simply \(\delta_{T} - \frac{4\delta_{T}^{2}\mu_{\text{cat}} k_{M} n_{5\alpha}}{(a + b - \Delta - 2\delta_{T} k_{M})^{2}}\), where \(a + b - {\Delta} - 2\delta_{T} k_{M} \ne 0\), and that it proves to be always positive for all the inference results in the trial patients.

Finally, we note that the model could represent an essential instrument for investigating the relapse mechanism evidenced in some patients, which remains one of the goals of this work for its potential clinical implications. By inspecting the phase-state space, we identify three out of five state variables with a striking separation between \({\Omega}\) and \(\neg{\Omega}\). Figure 4a shows the 3D probability distribution function of \(n_{T}\), \(n_{R}\), and \(q_{T:R}\). The density map of the temporal evolution of the \({\Omega}\) and \(\neg{\Omega}\) sets clusters (over the orbital evolution spanned by the patients analyzed) on well-distinct areas of the phase space, splitting in the \(n_{T}\) vs. \(n_{R}\) plane and at least partially in the orthogonal \(q_{T:R}\) direction.

Eikenberry et al. model analysis results. a Probability distribution function of the \(q_{T:R}\), nT, and nR space. The isocontours for the Pr of \(\Omega\) and \(\neg\Omega\) sets are shown in blue and red, respectively. A few isocontours are shown at the border-slicing planes for Pr = {0.1,0.68,0.95}. b–d Sensitivity of cPSA in response to changes in the normalized values of \(q_{T:R}\), nT and nR for a representative patient.
The optimal-fit cPSA dynamics (i.e., for optimal parameters \(\widehat{\bf{p}}\)) and corresponding data are shown by the red curve and black dots with error bars, respectively; dashed green curves and the corresponding shadows show the sensitivity of cPSA when the parameters are increased/decreased by Δ (Color figure online)

In Fig. 4, panels b, c, and d, we exploited the Direct Differential Method (DDM) for sensitivity analysis to track the time dependence of the sensitivity \(S_{c_{\text{PSA}}j} \equiv \frac{\partial c_{\text{PSA}}(t, \hat{\bf{p}})}{\partial p_{j}}\), computed at the best-fit parameter values \(\hat{\bf{p}}\) for \({\bf{p}} = \{n_{T0}, n_{R0}, q_{T:R0}\}\), respectively. As shown in Fig. 4b–d, a slight variation of the parameters does not dramatically affect the trend of \(c_{\text{PSA}}\); thus, the sensitivity of \(c_{\text{PSA}}\) to these parameters is minimal. This result shows that the PDF of the combination of parameters investigated might be an excellent tool to explore the origin of resistance with the E10 model. The sensitivities were computed using the DDM, as mentioned at the beginning of Sect. 4 and reported in more detail in Supplement A. As evident from Fig. 4b–d, different parameters have different sensitivities on different phases of the orbit, with \(n_{T0}\) more sensitive under treatment and \(n_{R}\) or \(q_{T:R}\) more sensitive out of treatment. DDM not only demonstrates the stability of the results obtained but also adds extra information on when a model is sensitive to a parameter change. This result is significant when dealing with models with varying behavior on- and off-treatment.

Hirata et al. (2010)

A series of studies (Tanaka et al. 2010; Hirata et al. 2012; Hirata and Aihara 2015) accompanies the model by Hirata et al. (2010) (hereafter model H10), which was designed to capture intermittent ADT dynamics. The model is based on the coupled AD–AI cell populations, supplemented with a population of irreversibly androgen-independent cells, AI-Irr, representing the first three-compartment model in the literature (Fig. 5a). Here we report the mathematical formulation in the proposed framework's formalism and refer to the original paper for a detailed model description. With our generalized notation, the SoE reads:

$$\begin{aligned} \frac{{\text{d}}n_{D}}{{\text{d}}t} &= n_{D}\left(T_{ps}(\gamma_{D}^{\text{on}} - \gamma_{D}^{\text{off}}) + \gamma_{D}^{\text{off}}\right) + \mu_{ID}(1 - T_{ps})\, n_{I}, \\ \frac{{\text{d}}n_{I}}{{\text{d}}t} &= \mu_{DI} T_{ps}\, n_{D} + n_{I}\left(T_{ps}(\gamma_{I}^{\text{on}} - \gamma_{I}^{\text{off}}) + \gamma_{I}^{\text{off}}\right), \\ \frac{{\text{d}}n_{Irr}}{{\text{d}}t} &= \mu_{DIrr} T_{ps}\, n_{D} + \mu_{IIrr} T_{ps}\, n_{I} + n_{Irr}\left(T_{ps}(\gamma_{Irr}^{\text{on}} - \gamma_{Irr}^{\text{off}}) + \gamma_{Irr}^{\text{off}}\right), \end{aligned}$$

with \(n_{D}(t_{0D}) = n_{D0}\), \(n_{I}(t_{0I}) = n_{I0}\), \(n_{Irr}(t_{0Irr}) = n_{Irr0}\), where the terms retain the identical biological meaning as previously described, and the irreversible and reversible changes in the AI cell population are considered through the relative growth rates \(\gamma_{i}^{\text{on}/\text{off}}\) on- and off-treatment, with \(i \in \{D, I, Irr\}\).
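Because Eq. (10) is linear with piecewise-constant coefficients, each on- or off-treatment interval can be propagated exactly with a matrix exponential. A minimal sketch with purely illustrative rates:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative H10 rates [1/day]; all values hypothetical
g_on  = np.array([-0.020,  0.008, 0.010])   # gamma_i^on,  i in {D, I, Irr}
g_off = np.array([ 0.015, -0.010, 0.012])   # gamma_i^off
mDI, mID, mDIrr, mIIrr = 1e-4, 5e-5, 1e-5, 1e-5

def h10_matrix(on_treatment):
    """Constant system matrix of Eq. (10) for a given treatment phase."""
    A = np.diag(g_on if on_treatment else g_off)
    if on_treatment:        # mutations D->I, D->Irr, I->Irr act on-treatment
        A[1, 0] = mDI
        A[2, 0] = mDIrr
        A[2, 1] = mIIrr
    else:                   # reversion I->D acts off-treatment only
        A[0, 1] = mID
    return A

# Exact propagation of (n_D, n_I, n_Irr) over alternating 200-day phases
n = np.array([10.0, 0.5, 0.01])
for on_treatment in (True, False, True):
    n = expm(200.0 * h10_matrix(on_treatment)) @ n
print(n, "PSA proxy:", n.sum())   # Eq. (5) with unit weights
```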
The serum concentration is computed as in Eq. (5) with \(i \in \{D, I, Irr\}\).

Hirata et al. model analysis results. a Model schematic: under treatment (yellow arrows) and off-treatment (violet arrow). b PDF on the nI and nD space as phase-space density histograms for resistant (\(\Omega\), red) and responsive (\(\neg\Omega\), blue) patients. c Flex in the PSA profile under treatment and off-treatment in the nD, nI and nIrr phase space for patient #33. Colors match the sketch of panel a. d PSA density profile (red curve), with data points with error bars (black dots). The nI, nD and nIrr populations are shown by the dashed blue, dashed green, and dashed cyan curves, respectively. Yellow lines along the x-axis show the intervals of treatment (Color figure online)

During both on- and off-treatment cycles, nullcline analysis leads to \(\{n_{D}, n_{I}, n_{\text{Irr}}\}_{\text{eq}} = \{0, 0, 0\}\) as the only equilibrium point. By setting \(a \equiv \gamma_{D}^{\text{off}} + \gamma_{I}^{\text{off}}\), \(b \equiv \gamma_{D}^{\text{on}} + \gamma_{I}^{\text{on}}\), \(c \equiv \gamma_{D}^{\text{off}} - \gamma_{I}^{\text{off}}\) and \(d \equiv \gamma_{D}^{\text{on}} - \gamma_{I}^{\text{on}}\), with the discriminant \({\Delta}\) implicitly defined by the relation \({\Delta}^{2} = c^{2} + T_{ps}\left(T_{ps}\left((c - d)^{2} - 4\mu_{DI}\mu_{ID}\right) - 2c(c - d) + 4\mu_{DI}\mu_{ID}\right)\), we can write the generalized eigenvalues of the Jacobian at the equilibrium as \(\lambda_{1} = \gamma_{Irr}^{\text{off}} + T_{ps}(\gamma_{Irr}^{\text{on}} - \gamma_{Irr}^{\text{off}})\) and the other two, \(\lambda_{2,3}\), in the compact form \(\lambda_{2,3} = \frac{1}{2}\left(T_{ps}(b - a) + a \pm {\Delta}\right)\). This result implies that the equilibrium is stable on-treatment and unstable off-treatment.

The phase space shows that responsive and resistant patients cluster differently in the phase-state variables. Figure 5b shows that the probability density function for the best fits groups around the initial values \(n_{I} \cong n_{I0}\) and \(n_{Irr} \cong 2.1\, n_{Irr0}\). Thus, the irreversible component of the model offers a potential tool to disentangle patient responses from the model fitting. As the resistant patients are expected to increase their irreversible cell component (i.e., asymptotically \(n_{Irr} \succ n_{Irr0}\), with "\(\succ\)" meaning asymptotically greater), we note that \(n_{I} \ll n_{I0}\) in responsive patients. The model structure allows for the simulation of various PSA profiles thanks to the new degree of freedom introduced by the third-compartment equation. Figure 5c shows the phase-space plane for an example taken from the \({\Omega}\)-set of patients (patient #33), while Fig. 5d shows the quality of the captured PSA concentration \(c_{\text{PSA}}\) profile achieved by this model.

Portz et al. (2012)

The Portz et al.
(2012) model is based on the cell quota concept (Droop 1968), which is modeled as:

$$\frac{{\text{d}}q_{i}}{{\text{d}}t} = \frac{v_{\text{max}}(q_{\text{max}} - q_{i})(1 - T_{ps})}{(q_{\text{max}} - q_{i\text{min}})(k_{q/2} - T_{ps} + 1)} - \delta_{q} q_{i} + \gamma_{\text{max}}(q_{i\text{min}} - q_{i}),$$

with \(q(t_{0i}) = q_{0i}\) for \(i \in \{D, I\}\). The cell quota grows at up to the maximum cell quota rate \(\gamma_{\text{max}}\) and degrades at a constant rate \(\delta_{q}\), with \(q_{\text{max}}\) representing the shared maximum cell quota, \(v_{\text{max}}\) the maximum cell quota uptake rate, \(q_{i\text{min}} < q_{\text{max}}\) the minimum cell quota for androgen, and \(1 \ne k_{q/2} > 0\) the uptake-rate half-saturation level (Packer et al. 2011). The authors allow mutation between both cell populations, from AD to AI and vice versa, at rates \(\mu_{DI}\) and \(\mu_{ID}\) given, respectively, by Hill equations of index \(m = 2\):

$$\mu_{DI}(q) = \mu_{DI\text{max}}\frac{k_{DI/2}^{m}}{q^{m} + k_{DI/2}^{m}}, \qquad \mu_{ID}(q) = \mu_{ID\text{max}}\frac{q^{m}}{q^{m} + k_{ID/2}^{m}},$$

where \(\mu_{DI\text{max}}\) is the maximum AD-to-AI mutation rate, \(\mu_{ID\text{max}}\) is the maximum AI-to-AD mutation rate, and \(k_{DI/2}\) and \(k_{ID/2}\) are the cell mutation-rate half-saturation levels. The model follows the evolution of the AD/AI cell populations, \(n_{D}\) and \(n_{I}\), respectively, with the following equations:

$$\begin{aligned} \frac{{\text{d}}n_{D}}{{\text{d}}t} &= n_{D}\left(-\delta_{D} - \mu_{DI\text{max}}\frac{k_{DI/2}^{2}}{k_{DI/2}^{2} + q_{D}^{2}} + \gamma_{\text{max}}\left(1 - \frac{q_{D\text{min}}}{q_{D}}\right)\right) + \mu_{ID\text{max}}\, n_{I}\frac{q_{I}^{2}}{k_{ID/2}^{2} + q_{I}^{2}}, \\ \frac{{\text{d}}n_{I}}{{\text{d}}t} &= n_{I}\left(-\delta_{I} - \mu_{ID\text{max}}\frac{q_{I}^{2}}{k_{ID/2}^{2} + q_{I}^{2}} + \gamma_{\text{max}}\left(1 - \frac{q_{I\text{min}}}{q_{I}}\right)\right) + \mu_{DI\text{max}}\, n_{D}\frac{k_{DI/2}^{2}}{k_{DI/2}^{2} + q_{D}^{2}}, \end{aligned}$$

for \(q_{i}(t) \ne 0\ \forall t\) and i.c. \(n_{D}(t_{0D}) = n_{D0}\) and \(n_{I}(t_{0I}) = n_{I0}\). The cell apoptosis and proliferation rates are given, respectively, by \(\delta_{i}\) and \(\gamma_{i}\) for \(i \in \{D, I\}\). The authors model the quota for the AD and AI cell populations independently. In general, we assume \(q_{I\text{min}} < q_{D\text{min}}\) to ensure that AI cells have a greater proliferation capacity in low-androgen environments, and \(n_{D}(t_{0D}) \cong 0\) with \(t_{0D}\) soon after treatment, as well as \(n_{I}(t_{0I}) \cong 0\) at \(t_{0I}\) at the beginning of the first treatment. Furthermore, a common maximum proliferation rate \(\gamma_{\text{max}}\) between the two populations is assumed.
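The two Hill terms of Eq. (12) trade off as the quota varies; a minimal sketch, with hypothetical rates and half-saturation levels:

```python
import numpy as np

def mu_DI(q, mu_max=2e-4, k=1.0, m=2):
    """AD -> AI mutation rate of Eq. (12): largest when the quota q is low."""
    return mu_max * k**m / (q**m + k**m)

def mu_ID(q, mu_max=1e-4, k=1.0, m=2):
    """AI -> AD reversion rate of Eq. (12): largest when the quota q is high."""
    return mu_max * q**m / (q**m + k**m)

q = np.array([0.1, 1.0, 10.0])
print(mu_DI(q), mu_ID(q))   # each rate crosses half its maximum at q = k
```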
Both AD and AI cells produce PSA at a baseline rate \(\gamma_{\text{PSA}0}\) under the androgen dependence specified by:

$$\frac{{\text{d}}c_{\text{PSA}}}{{\text{d}}t} = n_{D}\left(\gamma_{\text{PSA}0} + \frac{\gamma_{\text{PSA},D}\, q_{D}^{2}}{k_{\text{PSA},D/2}^{2} + q_{D}^{2}}\right) - c_{\text{PSA}}\delta_{\text{PSA}} + n_{I}\left(\gamma_{\text{PSA}0} + \frac{\gamma_{\text{PSA},I}\, q_{I}^{2}}{k_{\text{PSA},I/2}^{2} + q_{I}^{2}}\right),$$

with \(c_{\text{PSA}}(t_{0\text{PSA}}) = c_{\text{PSA}0}\), and where \(k_{\text{PSA},i/2}\) are the half-saturation rates and \(\gamma_{\text{PSA},i}\) the growth rates, for \(i \in \{D, I\}\). Several variants of this quota model can be found in the literature; in the present work, we consider only two of them. A detailed comparison between Hirata et al. (2010) and Portz et al. (2012) can be found elsewhere (Everett et al. 2014). The model's complexity is demonstrated with a tube plot (Fig. 6a, b).

Portz et al. model analysis results. a Quotas for dependent and independent cells shaping the tube plot of panel b for a particular patient. The yellow dotted lines represent the on-treatment periods, and blue and green are the model's independent and dependent quota levels, respectively. b The quota profiles represented as a cross section. The orbital evolution corresponds to the gray box time interval in panel a. The blue and green arrows represent the independent and dependent quota levels of the model, respectively. On the orbit section, the tube cross section has been computed considering qD along the normal N and qI along the binormal B and by solving the Frenet–Serret formulas (assuming the vector field of the system under consideration to be along the tangent T). c, d The distribution of the eigenvalues \(\lambda_{1}^{\text{off}}\) and \(\lambda_{2}^{\text{off}}\) off-treatment for models P12B and P12A, respectively (Color figure online)

P12A in the Context of the Data

The model is an extension of the model by Ideta et al., shown in Sect. 4.1, where the equation of the quota decouples from the behavior of the two cell populations. Nevertheless, the quota evolution \(q = q(t)\), common to \(n_{D}\) and \(n_{I}\), is generally smoother than \(c_{A} = c_{A}(t)\) in I08A or I08B, hence not justifying the approximations worked out for those models. In P12A, the only equilibrium point is at \(\{n_{D}, n_{I}\}_{\text{eq}}^{\text{on}/\text{off}} = \{0, 0\}\), and for the decoupled quota equation at \(q_{\text{eq}}^{\text{on}} = \frac{\gamma_{\text{Dmax}}\, q_{\text{min}}}{\gamma_{\text{Dmax}} + \delta_{q}}\) and \(q_{\text{eq}}^{\text{off}} = \frac{\gamma_{D\text{max}}(k_{q/2} + 1)\, q_{\text{min}}(q_{\text{max}} - q_{\text{min}}) + q_{\text{max}} v_{\text{max}}}{(k_{q/2} + 1)(\gamma_{D\text{max}} + \delta_{q})(q_{\text{max}} - q_{\text{min}}) + v_{\text{max}}}\) for the on/off-treatments, respectively. The eigenvalues at this equilibrium point are real and negative along the directions of \(n_{D}\) and \(n_{I}\): \(\lambda_{i} \in {\mathbb{R}}_{0}^{-}\), for \(i = 1, 2\), both on- and off-treatment.
In the decoupled \(q\) direction, the generalized eigenvalues \(\lambda_{3}^{\text{on}} = -(\gamma_{D\text{max}} + \delta_{q})\) and \(\lambda_{3}^{\text{off}} = \lambda_{3}^{\text{on}} - \frac{v_{\text{max}}}{(k_{q/2} + 1)(q_{\text{max}} - q_{\text{min}})}\) are always negative, leading to a node (attractor). Nevertheless, we note from the plot in Fig. 6c how the best-fit solutions obtained from our inference work for all patients with this model fall in the area where \(\lambda_{i} > 0\) for both \(i = 1\) and \(i = 2\), i.e., we are never in the presence of an attractor (off-treatment). Therefore, patient dynamics never intercept the area of the parameter space, defined by the corresponding hyperplane, that would (eventually asymptotically) lead to the annihilation of the \(n_{D}\) and \(n_{I}\) cell populations, i.e., a steady state or a reduction of the disease below the detection threshold. This plot is compared with that of the companion model in the next section, of which P12A is a simplification.

P12B in the Context of the Data

In this model, the authors extend the use of the quota concept to both \(n_{D}\) and \(n_{I}\) individually, i.e., fully exploiting Eq. (11), while retaining the same proliferation rate \(\gamma_{\text{max}}\). The large number of parameters required by the model makes the posterior maximization time-consuming and computationally expensive in the Bayesian framework, especially in a fully numerical nested sampling approach (Skilling 2004) or using a differential evolution optimization tool (Feoktistov 2006; Goode and Annin 2015). For this reason, a first inference approach was performed within the Laplace approximation and followed up at the patient-specific level where judged necessary.

As in P12A, the P12B critical points are \(\{n_{D}, n_{I}, c_{\text{PSA}}\}_{\text{eq}} = \{0, 0, 0\}\) both on- and off-treatment, while for the decoupled quota equations the stability points are found at \(q_{i,\text{eq}}^{\text{on}} = \frac{\gamma_{\text{max}}}{\delta_{q} + \gamma_{\text{max}}} q_{i\text{min}}\) and \(q_{i,\text{eq}}^{\text{off}} = a^{-1}\left[\gamma_{\text{max}}\, q_{i\text{min}}(k_{q/2} + 1)(q_{\text{max}} - q_{i\text{min}}) + q_{\text{max}} v_{\text{max}}\right]\), with \(a \equiv v_{\text{max}} - (k_{q/2} + 1)(\delta_{q} + \gamma_{\text{max}})(q_{i\text{min}} - q_{\text{max}}) \ne 0\), \(\delta_{q} \ne 0\), \(\gamma_{\text{max}} \ne 0\), and \(i \in \{I, D\}\). As in P12A, the three generalized eigenvalues of the Jacobian at the equilibrium are always negative. Equations for the generalized eigenvalues \(\lambda_{i}^{\text{on}/\text{off}}\), \(i = 1, 2\), along the quota directions are analytically available but slightly cumbersome; more interesting is the plot of \(\lambda_{i}^{\text{off}}\), \(i = 1, 2\), shown in Fig. 6d. The P12B solutions place a small number of patients in the region, inaccessible to P12A, of doubly negative generalized eigenvalues (orange square in Fig. 6c, d). In this zone of the P12B parameter space, off-treatment, the model predicts a constrained (or asymptotically constrainable) tumor cell population. Finally, we note that P12A is nested in P12B. Thus, P12B always obtains a better score on the same data representation but suffers from overfitting.
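To see how a penalized score can flag such nested-model overfitting, a crude BIC-style comparison is sketched below. It is only a stand-in for the evidence-based comparison of Sect. 5, and the maximized log-likelihood values are invented for illustration.

```python
import numpy as np

def bic(max_log_like, n_params, n_data):
    """Bayesian information criterion (lower is better): a rough proxy
    for the negative log-evidence used in Sect. 5."""
    return -2.0 * max_log_like + n_params * np.log(n_data)

n_data = 40                      # data points in one patient's PSA series
print(bic(-55.2, 9, n_data))     # P12A-like: fewer parameters
print(bic(-53.8, 14, n_data))    # P12B-like: better fit, heavier penalty
```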
We investigate this problem and offer a solution in Sect. 5 in the context of the Bayesian model comparison.

Morken et al. (2014)

In Morken et al. (2014), the authors extend model P12B by adding ADT-induced apoptosis of prostate cancer cells in addition to the inhibition of their growth and proliferation. Therefore, the model (hereafter M14) implements the per capita mortality of the androgen-dependent and -independent populations introduced in the previous section with the equation:

$$\delta_{i}(q_{i}) = \delta_{i\max}\frac{k_{i/2}^{2}}{q_{i}^{2} + k_{i/2}^{2}},$$

where \(k_{i/2}\), for \(i \in \{D, I\}\), are the apoptosis half-saturation levels for the dependent and independent populations, respectively. We consider the SoE in the form:

$$\begin{aligned} \frac{{\text{d}}n_{D}}{{\text{d}}t} &= n_{D}\left(-\delta_{D} - \frac{\delta_{D\text{max}} k_{D\delta/2}^{2}}{k_{D\delta/2}^{2} + q_{D}^{2}} - \frac{k_{DI/2}^{2}\mu_{DI\text{max}}}{k_{DI/2}^{2} + q_{D}^{2}} + \gamma_{\text{max}}\left(1 - \frac{q_{D\text{min}}}{q_{D}}\right)\right) + \frac{\mu_{ID\text{max}}\, n_{I} q_{I}^{2}}{k_{ID/2}^{2} + q_{I}^{2}}, \\ \frac{{\text{d}}n_{I}}{{\text{d}}t} &= \frac{k_{DI/2}^{2}\mu_{DI\text{max}}\, n_{D}}{k_{DI/2}^{2} + q_{D}^{2}} + n_{I}\left(-\delta_{I} - \frac{\delta_{I\text{max}} k_{I\delta/2}^{2}}{k_{I\delta/2}^{2} + q_{I}^{2}} - \frac{\mu_{ID\text{max}}\, q_{I}^{2}}{k_{ID/2}^{2} + q_{I}^{2}} + \gamma_{\max}\left(1 - \frac{q_{I\text{min}}}{q_{I}}\right)\right), \end{aligned}$$

for \(q_{i}(t) \ne 0\ \forall t\) and i.c. \(n_{D}(t_{0D}) = n_{D0}\) and \(n_{I}(t_{0I}) = n_{I0}\), together with the equivalent of Eq. (14):

$$\frac{{\text{d}}c_{\text{PSA}}}{{\text{d}}t} = -c_{\text{PSA}}\delta_{\text{PSA}} + n_{D}\left(\gamma_{\text{PSA}0} + \frac{\gamma_{\text{PSA},D}\, q_{D}^{2}}{k_{\text{PSA},D/2}^{2} + q_{D}^{2}}\right) + n_{I}\left(\gamma_{\text{PSA}0} + \frac{\gamma_{\text{PSA},I}\, q_{I}^{2}}{k_{\text{PSA},I/2}^{2} + q_{I}^{2}}\right),$$

with i.c. \(c_{\text{PSA}}(t_{0\text{PSA}}) = c_{\text{PSA}0}\). The same notation as in the models by Portz et al. is followed and not repeated here. The analytical treatment is analogous to P12B but enriched in the variety of its dynamics by the extra parameters introduced in Eq. (15), although without changing the equilibrium points. Our model analysis did not report other notable features.

Baez and Kuang (2016)

The model by Baez and Kuang (2016) presents a variant of the P12A model that is able to fit PSA and androgen dynamics simultaneously, thus improving PSA trend forecasting. Two models are presented in the authors' work and considered here.
The first (hereafter B16A) is a single-population model of cellular concentration \(n\), coupled with two further equations: one for \(\delta_{\text{max}}\), the time-dependent (over a timescale \(\tau_{\delta_{\text{max}}}\)) maximum baseline cell death rate, and one for the PSA concentration \(c_{\text{PSA}}\):

$$\begin{aligned} \frac{{\text{d}}n}{{\text{d}}t} &= n\left(-n\delta - \frac{k_{n/2}\,\delta_{\text{max}}}{q + k_{n/2}} - \frac{\gamma_{\text{max}}\, q_{\text{min}}}{q} + \gamma_{\text{max}}\right), \\ \frac{{\text{d}}\delta_{\text{max}}}{{\text{d}}t} &= -\tau_{\delta_{\text{max}}}\,\delta_{\text{max}}, \\ \frac{{\text{d}}c_{\text{PSA}}}{{\text{d}}t} &= q(\gamma_{\text{PSA}1}\, n + \gamma_{\text{PSA}0}) - \delta_{\text{PSA}}\, c_{\text{PSA}}, \end{aligned}$$

and a decoupled equation for the androgen level:

$$\frac{{\text{d}}q}{{\text{d}}t} = \gamma(q_{\text{max}} - q) - \gamma_{\text{max}}(q - q_{\text{min}}),$$

with \(n(t_{0n}) = n_{0}\), \(c_{\text{PSA}}(t_{0\text{PSA}}) = c_{\text{PSA}0}\), \(\delta_{\text{max}}(t_{0\delta\text{max}}) = \delta_{0\text{max}}\), and \(q(t_{0q}) = q_{0} > 0\) strictly. The quota \(q \ne 0\ \forall t\) is produced at a rate \(\gamma = \gamma_{1} T_{ps} + \gamma_{2}\). In the same work, the authors also presented a two-population model tracking both the sensitive \(n_{D}\) and independent \(n_{I}\) cell evolution (hereafter B16B). By implementing their SoE within the approximation that all the cells have, on average, the same mass and density, we can recast their SoE in the form:

$$\begin{aligned} \frac{{\text{d}}n_{D}}{{\text{d}}t} &= n_{D}\left(-\frac{\delta_{D\text{max}}\, k_{D/2}}{q + k_{D/2}} - \frac{k_{DI/2}\,\mu_{DI\text{max}}}{q + k_{DI/2}} + \gamma_{\text{max}}\left(1 - \frac{q_{D\text{min}}}{q}\right)\right) - \delta_{D}\, n_{D}^{2}, \\ \frac{{\text{d}}n_{I}}{{\text{d}}t} &= \frac{k_{DI/2}\,\mu_{DI\text{max}}\, n_{D}}{q + k_{DI/2}} + n_{I}\left(\gamma_{\text{max}}\left(1 - \frac{q_{I\text{min}}}{q}\right) - \frac{\delta_{I\text{max}}\, k_{I/2}}{q + k_{I/2}}\right) - \delta_{I}\, n_{I}^{2}, \\ \frac{{\text{d}}q}{{\text{d}}t} &= -q(\gamma_{2} + \gamma_{\text{max}} + \gamma_{1} T_{ps}) + \frac{\gamma_{\text{max}}(q_{D\text{min}}\, n_{D} + q_{I\text{min}}\, n_{I})}{n_{D} + n_{I}} + q_{\text{max}}(\gamma_{2} + \gamma_{1} T_{ps}), \\ \frac{{\text{d}}c_{\text{PSA}}}{{\text{d}}t} &= q(\gamma_{\text{PSA}0} + \gamma_{\text{PSA}1}(n_{D} + n_{I})) - \delta_{\text{PSA}}\, c_{\text{PSA}}, \end{aligned}$$

for \(n_{i}\), \(i \in \{D, I\}\), never simultaneously null, with initial conditions \(n_{D}(t_{0D}) = n_{D0}\), \(n_{I}(t_{0I}) = n_{I0}\), \(q(t_{0q}) = q_{0}\), and \(c_{\text{PSA}}(t_{0\text{PSA}}) = c_{\text{PSA}0}\). The maximum AD-to-AI mutation rate is given by \(\mu_{DI\text{max}}\).
Furthermore, because AI cells, \(n_{I}\), proliferate at lower androgen levels, it is assumed that \(q_{I\text{min}} < q_{D\text{min}}\), and \(\delta_{D\text{max}} > \delta_{I\text{max}}\) because independent cells are less susceptible to apoptosis by androgen deprivation than sensitive cells.

B16A in the Context of the Data

The decoupled quota equation presents an equilibrium at \(q_{\text{eq}} = \frac{\gamma_{\text{max}}(q_{\text{min}} - q_{\text{max}})}{\gamma_{\text{max}} + (\gamma_{2} + \gamma_{1} T_{ps})} + q_{\text{max}}\) when \(\gamma_{\text{max}} + (\gamma_{2} + \gamma_{1} T_{ps}) \ne 0\), belonging to the positive hyper-quadrant of the phase space (i.e., it is of biological interest). The remaining set in Eq. (18) shows two equilibria: \(\{n, \delta_{\text{max}}, c_{\text{PSA}}\}_{\text{eq}}^{(1)} = \left\{0, 0, \frac{\gamma_{\text{PSA}0}\, q_{\text{eq}}}{\delta_{\text{PSA}}}\right\}\), which is always in the positive quadrant of the phase space of interest, and \(\{n, \delta_{\text{max}}, c_{\text{PSA}}\}_{\text{eq}}^{(2)} = \left\{\frac{\gamma_{\text{max}}(q_{\text{eq}} - q_{\text{min}})}{q_{\text{eq}}\,\delta}, 0, \frac{\delta\,\gamma_{\text{PSA}0}\, q_{\text{eq}} + \gamma_{\text{max}}\,\gamma_{\text{PSA}1}(q_{\text{eq}} - q_{\text{min}})}{\delta_{\text{PSA}}\,\delta}\right\}\), with \(q_{\text{eq}} \ne 0\), \(\delta_{\text{PSA}} \ne 0\) and \(\delta \ne 0\), which is also biologically meaningful. By studying the generalized eigenvalues, we see that the first equilibrium presents three negative generalized eigenvalues and a fourth that is always positive (i.e., it is a saddle point); the second equilibrium point produces the eigenvalues \(\lambda_{1}^{(2)} = \gamma_{\text{max}}\left(\frac{q_{\text{min}}}{q_{\text{eq}}} - 1\right)\), \(\lambda_{2}^{(2)} = -\delta_{\text{PSA}}\), \(\lambda_{3}^{(2)} = -\tau_{\delta_{\text{max}}}\) and \(\lambda_{4}^{(2)} = -\gamma_{\text{max}} - (\gamma_{2} + \gamma_{1} T_{ps})\), which are all always negative, thus representing a stable point of attraction. Due to the stability of the second equilibrium (on- and off-treatment), it is worth investigating the proximity of the patients' orbits to the equilibria on the Poincaré sections involving the PSA concentration \(c_{\text{PSA}}\) obtained from Eq. (18). Nevertheless, the low quality of the likelihood, \(L({\bf{p}}) = \Pr(D|{\bf{p}}, I)\) (see Sect. 3), in the \({\Omega}\)-set of patients demotivates further analysis. A single population \(n\) seems not to adequately capture disease progression, which remains the primary focus of our work; this makes the model less attractive for clinical implications, and it is therefore not pursued further here.

B16B in the Context of the Data

The model presents a cubic dependence on \(q\) and a quadratic one on \(n_{D}\). We select to investigate only the null equilibrium point of the independent and dependent cells. It is evident that \(n_{D} = 0\) is an equilibrium for the first of Eq. (20).
Therefore, by assuming \(n_D = 0\) (and \(n_I > 0\) strictly), we can confirm the existence of two equilibria. The first is located at \(\left\{n_I, q, c_{\mathrm{PSA}}\right\}_{\mathrm{eq}}^{(1)} = \left\{0,\; q_{\max} + \frac{\gamma_{\max}\left(q_{I\min} - q_{\max}\right)}{\gamma_{\max} + (\gamma_2 + \gamma_1 T_{ps})},\; \frac{\gamma_{\mathrm{PSA}0}}{\delta_{\mathrm{PSA}}}\, q_{\mathrm{eq}}\right\}\), for \(\gamma_{\max} + (\gamma_2 + \gamma_1 T_{ps}) \ne 0\) and \(\delta_{\mathrm{PSA}} \ne 0\), which is of biological interest. The second, algebraically more cumbersome, reduces its nonnegativity condition to the simple one \(\delta_{I\max} + \frac{\gamma\,\gamma_{\max}\left(q_{I\min} - q_{\max}\right)}{\gamma_{\max}\, q_{I\min} + \gamma\, q_{\max}} + \frac{\gamma\,\gamma_{\max}\left(q_{I\min} - q_{\max}\right)}{k_{I/2}\left(\gamma + \gamma_{\max}\right)} \le 0\), which is verified for all studied patients. Again, as explored for the previous models, we are interested in the existence of negative generalized eigenvalues of the Jacobian at the equilibria off-treatment, i.e., a point of equilibrium with an asymptotically constrained expansion of the tumoral cell population. Despite the model complexity, it is easy to prove numerically that the Jacobian at both equilibrium points has at least one positive generalized eigenvalue, making these points saddle points that are not of interest to us.

Elishmereni et al. (2016)

The Elishmereni et al. (2016) model accounts for two dynamics: the disease dynamics, represented by PSA used as a proxy for tumor volume, and the pharmacology dynamics combined with the emergence of resistant cells through an androgen receptor-independent \(n_I\) and a testosterone androgen receptor-dependent \(n_{IAR}\) mechanism.
The PSA concentration \(c_{\mathrm{PSA}}\), of interest to us, is governed by the following numerically highly complex SoEFootnote 8:
$$\begin{aligned} \frac{\mathrm{d}c_{\mathrm{PSA}}}{\mathrm{d}t} &= \widehat{\underline{c}}_{\mathrm{PSA}}\,\gamma_{t\mathrm{PSA}}\,\min\left(\gamma_{\mathrm{PSA}}\, c_{\mathrm{PSA}}^K,\; \frac{\log 2}{\gamma_{\mathrm{PSA}\max}}\right) + \eta_{T,\mathrm{PSA}}\left(c_{\mathrm{PSA}} - \frac{\tilde c_{\mathrm{PSA}}}{2}\right)^{+}\left(\eta_{I,T}\, R_I\, \widehat{\underline{c}}_{\mathrm{PSA}} + n_T - 1\right),\\ \frac{\mathrm{d}n_T}{\mathrm{d}t} &= \frac{\gamma_T\left(1 - T_{ps}\right)}{\eta_{H,T}\, H + 1} - \gamma_T\, n_T,\\ \frac{\mathrm{d}H}{\mathrm{d}t} &= T - \frac{\delta_T\, l_H^{\max}\, H\, e^{R_{T:AR}}}{e^{R_{T:AR}} + l_H^{\max}},\\ \frac{\mathrm{d}R_{T:AR}}{\mathrm{d}t} &= \gamma_{T:AR}\, T\, \widehat{\underline{R}}_{T:AR},\\ \frac{\mathrm{d}R_I}{\mathrm{d}t} &= \gamma_I\, T\, \widehat{\underline{R}}_I,\\ \frac{\mathrm{d}K}{\mathrm{d}t} &= -\rho_K,\\ \frac{\mathrm{d}T}{\mathrm{d}t} &= -\delta_T\, T, \end{aligned}$$
with \(c_{\mathrm{PSA}}(t_{0\mathrm{PSA}}) = c_{\mathrm{PSA}0}\), \(n_T(t_{0n_T}) = n_{T0}\), \(H(t_{0H}) = H_0\), \(R_{T:AR}(t_{0T:AR}) = R_{T:AR0}\), \(R_I(t_{0I}) = R_{I0}\), \(K(t_{0K}) = K_0\), and \(T(t_{0T}) = T_0\), where \((x)^+ = x\,\theta(x)\) is the ramp/positive function of the generic \(x\), with \(\theta\) the previously introduced Heaviside step function. In the above equations, \(\gamma_{\mathrm{PSA}\max}\) is the limit to the PSA growth rate, \(\rho_K\) the \(K\) growth rate, \(\eta_{T,\mathrm{PSA}}\) the effect of testosterone \(T\) on the PSA growth, \(\gamma_T\) the instantaneous rate of change in \(T\), and \(\eta_{H,T}\) the effect on \(T\) of the intermediate components \(H\) (e.g., bound androgen receptor AR), which share the clearance rate \(\delta_T\). \(\gamma_{T:AR}\) is the rate of resistance increase, \(\gamma_I\) the rate of resistance increase for the testosterone-AR-independent paths \(R_I\), and \(\eta_{I,T}\) rules the effect of \(R_I\) on the PSA growth. The growth rate of \(c_{\mathrm{PSA}}\) is given by
$$\gamma_{\mathrm{PSA}} = \begin{cases} 1 & c_{\mathrm{PSA}} > c_{t\mathrm{PSA}},\\ \sigma_{\mathrm{PSA}} + \left(1 - \sigma_{\mathrm{PSA}}\right)\dfrac{c_{\mathrm{PSA}}}{c_{t\mathrm{PSA}}} & c_{\mathrm{PSA}} \le c_{t\mathrm{PSA}}, \end{cases}$$
where \(\sigma_{\mathrm{PSA}}\) governs the steepness of the linear growth relation and \(c_{t\mathrm{PSA}}\) is the PSA threshold for switching into quiescent mode. Finally, control limits \(l_i\), \(i \in \{\mathrm{PSA}, H, n_I, n_{IAR}\}\), are added by hand to handle system divergences with a "manual" bounding scheme (\(\underline{\hat f}_i \equiv \frac{(l_{i\max} - f_i)^+}{l_{i\max}}\) for the generic function \(f_i\)).
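A short sketch of the E16 auxiliary functions just defined may help: the ramp \((x)^+ = x\,\theta(x)\), the "manual" bounding factor \(\underline{\hat f} = (l_{\max} - f)^+/l_{\max}\), and the piecewise growth rate \(\gamma_{\mathrm{PSA}}\). The threshold values in the example call are illustrative only, not the paper's.

```python
# Sketch of the E16 auxiliary functions; threshold values are illustrative.
def ramp(x):
    # (x)^+ = x * Heaviside(x): positive part of x.
    return x if x > 0.0 else 0.0

def bound(f, l_max):
    # "Manual" bounding factor: equals 1 when f = 0, falls to 0 at f = l_max,
    # and stays 0 beyond, preventing the bounded term from diverging.
    return ramp(l_max - f) / l_max

def gamma_psa(c_psa, c_t_psa=4.0, sigma_psa=0.2):
    # Full growth above the threshold; linearly reduced growth
    # ("quiescent mode") below it, as in the piecewise definition above.
    if c_psa > c_t_psa:
        return 1.0
    return sigma_psa + (1.0 - sigma_psa) * c_psa / c_t_psa
```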
In practice, the dynamics of the system is designed so that the instantaneous androgen rate of change \(\gamma_T\) is saturated by a control coefficient \(\eta_{T,\mathrm{PSA}}\) through an intermediary delaying effect ruled by a delay-modeling function \(H\) over the ADT therapy, with \(T\) the therapy function with scale factor \(\delta_{\mathrm{ADT}}\), and a double mechanism for the androgen-independent cell population depending on \(\eta_{I,T}\) and not depending on \(n_I\), the androgen receptor (with the respective scale factors \(\gamma_I\) and \(\gamma_{T:AR}\)). The system has no equilibria influencing its dynamics, as is evident from the 6th of Eq. (21). Further analysis is done in Sect. 5 to determine how well the model performs in the Bayesian model comparison.

Zhang et al. (2017)

Zhang et al. (2017) present a three-population competition model, based on Lotka-Volterra (LV) dynamics, where androgen-dependent \(n_D\), androgen-producing \(n_P\), and androgen-independent cells \(n_I\) are considered. Basing the approach on game theory, the authors derive a competition matrix \(\alpha = \alpha_{ij}\), \(i, j \in \{D, I, P\}\), based on the parametrization of growth rates \(\gamma_i\) and carrying capacities \(K_i\) with \(i \in \{D, I, P\}\), resulting in this set of algebraic-differential equations:
$$\begin{aligned} \frac{\mathrm{d}n_D}{\mathrm{d}t} &= \gamma_D\, n_D\left(1 - \frac{\alpha_{11} n_D + \alpha_{12} n_P + \alpha_{13} n_I}{n_P\left(\beta - T_{ps} + 1\right)}\right),\\ \frac{\mathrm{d}n_P}{\mathrm{d}t} &= \gamma_P\, n_P\left(1 - \frac{\alpha_{21} n_D + \alpha_{22} n_P + \alpha_{23} n_I}{K_P}\right),\\ \frac{\mathrm{d}n_I}{\mathrm{d}t} &= \gamma_I\, n_I\left(1 - \frac{\alpha_{31} n_D + \alpha_{32} n_P + \alpha_{33} n_I}{K_I}\right), \end{aligned}$$
where ADT is modeled as decreasing the carrying capacity with \(\beta < 1\), or as supporting androgen-dependent cells with \(\beta > 1\). The authors considered several constraints, derived from the literature and the researchers' experience, to shape the influence of the model parameters: \(\alpha_{ii} = 1\;\forall i\), \(\alpha_{31} > \alpha_{21}\), \(\alpha_{32} > \alpha_{12}\), \(\alpha_{13} > \alpha_{23}\), \(\alpha_{13} > \alpha_{21}\), \(\alpha_{32} > \alpha_{31}\), and \(\alpha_{ij} \in \left]0, 1\right[\;\forall i \ne j\). Finally, the PSA dynamics is governed by:
$$\frac{\mathrm{d}c_{\mathrm{PSA}}}{\mathrm{d}t} = \sum_{i \in \{D, P, I\}} n_i - \delta\, c_{\mathrm{PSA}},$$
with \(\delta\) the PSA clearance rate. With the coupling of Eq. (24), the system presents four equilibria, but only two are of biological interest: \(\left\{n_D, n_P, n_I, c_{\mathrm{PSA}}\right\}_{\mathrm{eq}}^{(1)} = \left\{0, k_P, 0, \frac{k_P}{\delta}\right\} \in \mathbb{R}_0^{4+}\) and \(\left\{n_D, n_P, n_I, c_{\mathrm{PSA}}\right\}_{\mathrm{eq}}^{(2)} = \Big\{\frac{k_P\left(\beta - \alpha_{12} - T_{ps} + 1\right)}{\alpha_{21}\left(\beta - \alpha_{12} - T_{ps} + 1\right) + 1},\; \frac{k_P}{\alpha_{21}\left(\beta - \alpha_{12} - T_{ps} + 1\right) + 1},\; 0,\; \frac{k_P\left(\beta - \alpha_{12} - T_{ps} + 2\right)}{\delta + \delta\,\alpha_{21}\left(\beta - \alpha_{12} - T_{ps} + 1\right)}\Big\}\) where these ratios exist.
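Before examining the stability of these equilibria, here is a minimal sketch of the LV right-hand side above. The competition coefficients and rates are placeholders chosen to respect the stated constraints (\(\alpha_{ii}=1\), off-diagonal entries in \(]0,1[\) with the listed orderings), not fitted values.

```python
# Sketch of the Zhang et al. (2017) LV right-hand side; values are placeholders
# that satisfy the ordering constraints, not fits.
import numpy as np
from scipy.integrate import solve_ivp

alpha = np.array([[1.0, 0.7, 0.8],   # rows: D, P, I; alpha_ii = 1
                  [0.4, 1.0, 0.5],
                  [0.6, 0.9, 1.0]])
gamma = np.array([0.03, 0.03, 0.03])  # gamma_D, gamma_P, gamma_I
K_P, K_I, beta, delta = 1e4, 1e4, 0.9, 0.5

def rhs(t, y, T_ps):
    nD, nP, nI, cPSA = y
    n = np.array([nD, nP, nI])
    # n_D's effective carrying capacity shrinks under ADT via beta - T_ps + 1.
    K = np.array([nP * (beta - T_ps + 1.0), K_P, K_I])
    dn = gamma * n * (1.0 - (alpha @ n) / K)
    dc = n.sum() - delta * cPSA
    return [*dn, dc]

# Constant on-treatment run (T_ps = 1); initial values are arbitrary.
sol = solve_ivp(rhs, (0.0, 1000.0), [2000.0, 3000.0, 100.0, 10.0],
                args=(1.0,), max_step=1.0)
```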
For the first equilibrium, the eigenvalues of the Jacobian are positive in the \(n_D\), \(n_P\) and \(c_{\mathrm{PSA}}\) phase space and are therefore of marginal interest. Conversely, by setting \(a \equiv 1 + \beta\), \(b \equiv \beta - \alpha_{12} + 1\), \(d \equiv \alpha_{21}\left(\beta - \alpha_{12} + 1\right) + 1\) and \(e \equiv \beta + \alpha_{21}\left(\beta - \alpha_{12} + 1\right)^2 - \alpha_{12} + 1\), together with the squared discriminant \(\Delta^2 = \left(e\gamma_D + \beta\gamma_P + \gamma_P\right)^2 - 4ade\,\gamma_D\gamma_P\), we can write the four eigenvalues of the Jacobian for the second equilibrium off-treatment as: \(\lambda_1^{(2)\mathrm{off}} = -\delta\), \(\lambda_2^{(2)\mathrm{off}} = \gamma_I - \frac{\gamma_I\, k_P\left(b\,\alpha_{31} + \alpha_{32}\right)}{d\, k_I}\), \(\lambda_3^{(2)\mathrm{off}} = -\frac{a\gamma_P + \Delta + e\gamma_D}{2ad}\) and \(\lambda_4^{(2)\mathrm{off}} = \frac{\Delta - a\gamma_P - e\gamma_D}{2ad}\), where the ratios exist. These are always negative for the fitted parameters, hence representing a stable equilibrium and opening the possibility of achieving an equilibrium off-treatment.

Phan et al. (2019)

The model (hereafter P19) presented by Phan et al. (2019) is a variant of the work of Sect. 4.6 (Baez and Kuang 2016), in which a third population of weakly dependent cells, \(n_{wD}\), is added to investigate the influence of the extra degrees of freedom introduced by the new population. The death term is also adapted from Eq. (16). Retaining the notation used in Sect. 4.6, we can recast the model in the following form:
$$\begin{aligned} \frac{\mathrm{d}n_D}{\mathrm{d}t} &= n_D\left(-\frac{\delta_{D\max}\, k_{D/2}}{q + k_{D/2}} - \frac{2 k_{DI/2}\,\mu_{DI\max}}{q + k_{DI/2}} + \gamma_{\max}\left(1 - \frac{q_{D\min}}{q}\right)\right) + \frac{k_{DI/2}\,\mu_{DI\max}\, n_{wD}}{q + k_{DI/2}} - \delta_D n_D^2,\\ \frac{\mathrm{d}n_{wD}}{\mathrm{d}t} &= n_{wD}\left(-\frac{2 k_{DI/2}\,\mu_{DI\max}}{q + k_{DI/2}} - \frac{\delta_{wD\max}\, k_{wD/2}}{q + k_{wD/2}} + \gamma_{\max}\left(1 - \frac{q_{wD\min}}{q}\right)\right) + \frac{k_{DI/2}\,\mu_{DI\max}\, n_D}{q + k_{DI/2}} - \delta_{wD} n_{wD}^2,\\ \frac{\mathrm{d}n_I}{\mathrm{d}t} &= \frac{k_{DI/2}\,\mu_{DI\max}\left(n_D + n_{wD}\right)}{q + k_{DI/2}} + n_I\left(\gamma_{\max}\left(1 - \frac{q_{I\min}}{q}\right) - \frac{\delta_{I\max}\, k_{I/2}}{q + k_{I/2}}\right) - \delta_I n_I^2,\\ \frac{\mathrm{d}q}{\mathrm{d}t} &= -q\left(\gamma_2 + \gamma_1 T_{ps} + \gamma_{\max}\right) + \frac{\gamma_{\max}\left(q_{D\min} n_D + q_{I\min} n_I + q_{wD\min} n_{wD}\right)}{n_D + n_I + n_{wD}} + q_{\max}\left(\gamma_2 + \gamma_1 T_{ps}\right),\\ \frac{\mathrm{d}c_{\mathrm{PSA}}}{\mathrm{d}t} &= q\left(\gamma_{\mathrm{PSA}0} + \gamma_{\mathrm{PSA}1}\left(n_D + n_I + n_{wD}\right)\right) - \delta_{\mathrm{PSA}}\, c_{\mathrm{PSA}}, \end{aligned}$$
with initial conditions
\(n_D(t_{0D}) = n_{D0}\), \(n_{wD}(t_{0wD}) = n_{wD0}\), \(n_I(t_{0I}) = n_{I0}\), \(q(t_{0q}) = q_0\), \(c_{\mathrm{PSA}}(t_{0\mathrm{PSA}}) = c_{\mathrm{PSA}0}\), together with the required biological inequalities \(q_{D\min} > q_{wD\min}\) and \(q_{D\min} > q_{I\min}\).

P19 in the Context of the Data

The idea of a third population is not new and was already advanced with success in the model by Hirata et al. (2010). Nevertheless, the structure of the equations above is very different from that of the Hirata et al. model in Eq. (10), with significantly more parameters that are not readily justifiable given the quality of the present dataset. Similar considerations were already worked out by Phan et al. We remark only that the complexity of the analysis, already evident in Sect. 4.6.2, is pushed further in this context, where only numerical investigation of the equilibria and their stability is available. The only off-treatment equilibrium accessible by the orbits is the one at \(\left\{n_D, n_{wD}, n_I, q, c_{\mathrm{PSA}}\right\}_{\mathrm{eq}}^{\mathrm{off}} = \left\{0, 0, 0, \frac{\gamma_{\max}\, q_{I\min} + \gamma_2\, q_{\max}}{\gamma_2 + \gamma_{\max}}, \frac{\gamma_{\mathrm{PSA}0}\left(\gamma_{\max}\, q_{I\min} + \gamma_2\, q_{\max}\right)}{\delta_{\mathrm{PSA}}\left(\gamma_2 + \gamma_{\max}\right)}\right\}\), with \(\delta_{\mathrm{PSA}} \ne 0\), which is always positive, with always-negative eigenvalues \(\lambda_1^{\mathrm{off}} = -\gamma_2 - \gamma_{\max}\) and \(\lambda_2^{\mathrm{off}} = -\delta_{\mathrm{PSA}}\). This is of limited biological interest, as it is not compatible with the irreversible nature of \(n_I\), except by surgical castration.

Brady-Nicholls et al. (2020)

The Brady-Nicholls et al. (2020) model (hereafter B20) is based on the hypothesis that prostate cancer stem cell enrichment induces resistance. The model correlates stem cell proliferation with serum PSA through an SoE for the prostate cancer stem cells \(n_S\), the non-stem (differentiated) cells \(n_D\), and the PSA serum concentration \(c_{\mathrm{PSA}}\). We report the system in the following way:
$$\begin{aligned} \frac{\mathrm{d}n_S}{\mathrm{d}t} &= \frac{p_S \log(2)\, n_S^2}{n_D + n_S},\\ \frac{\mathrm{d}n_D}{\mathrm{d}t} &= \log(2)\, n_S\left(1 - \frac{p_S\, n_S}{n_D + n_S}\right) - \delta_D\, T_{ps}\, n_D,\\ \frac{\mathrm{d}c_{\mathrm{PSA}}}{\mathrm{d}t} &= \gamma_{\mathrm{PSA}}\, n_D - \delta_{\mathrm{PSA}}\, c_{\mathrm{PSA}}, \end{aligned}$$
with initial conditions \(n_S(t_{0S}) = n_{S0}\), \(n_D(t_{0D}) = n_{D0}\) and \(c_{\mathrm{PSA}}(t_{0\mathrm{PSA}}) = c_{\mathrm{PSA}0}\). It is assumed that stem cells divide at rate \(\log(2)\), and the division is either symmetric, yielding two stem cells (Enderling 2015), or asymmetric, where the stem cell produces one stem and one differentiated cell. The parameter that governs this effect is \(p_S\). The PSA production rate of differentiated cells and the PSA clearance rate are given by \(\gamma_{\mathrm{PSA}}\) and \(\delta_{\mathrm{PSA}}\), respectively, and \(T_{ps}\) is the patient-specific treatment function (see Sect. 2.1).
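The B20 system is small enough that a complete integration sketch fits in a few lines; the parameter values below are illustrative, not the fitted ones.

```python
# Sketch of the B20 stem-cell system; parameter values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

pS, dD, gPSA, dPSA = 0.02, 0.05, 0.5, 0.3
LOG2 = np.log(2.0)

def rhs(t, y, T_ps):
    nS, nD, c = y
    tot = nS + nD
    dnS = pS * LOG2 * nS**2 / tot                       # symmetric division
    dnD = LOG2 * nS * (1.0 - pS * nS / tot) - dD * T_ps * nD
    dc = gPSA * nD - dPSA * c
    return [dnS, dnD, dc]

# One year on-treatment (T_ps = 1); state ordering [n_S, n_D, c_PSA].
sol = solve_ivp(rhs, (0.0, 365.0), [10.0, 100.0, 5.0], args=(1.0,), max_step=1.0)
```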
When off-treatment (\(T_{ps}(t) = 0\)), the SoE presents an infinite set of equilibrium points at the intersection of the plane \(n_S(t) = 0\) with the surface given by \(c_{\mathrm{PSA}}(t) = \frac{\gamma_{\mathrm{PSA}}\, n_D(t)}{\delta_{\mathrm{PSA}}}\), conditional on \(n_D \ne 0\) and \(\delta_{\mathrm{PSA}} \ne 0\); the generalized eigenvalues of the Jacobian consist of a double-zero pair, \(\lambda_1 = 0\) and \(\lambda_2 = 0\), and a third negative eigenvalue \(\lambda_3 = -\delta_{\mathrm{PSA}}\). A standard center manifold computation (Wiggins 2003) shows slow 2D-manifold dynamics that can be integrated to prove that the equilibria are unstable, and therefore not of interest.

Bayesian Model Comparison

Perhaps the most vital feature of the Bayesian framework, and the reason for its increasing popularity, is its innate model comparison ability, based on logic as an instrument for selection. We exploit this feature here, using the Bayes factor to compare the different models in their ability to simulate the data. It should be noted that this framework innately penalizes models based on the number of parameters required, a phenomenon sometimes referred to as the Occam's razor factor (Jefferys and Berger 1992). Starting from the classical Bayes theorem, the Bayes factor \(\beta_{ij}\) for PSA model \(M_i\) over PSA model \(M_j\) is computed from the ratio of the probabilities of the two models (the odds ratio, \(O_{ij}\)):
$$O_{ij} = \frac{\Pr\left(M_i | I\right) \Pr\left(D | M_i, I\right)}{\Pr\left(M_j | I\right) \Pr\left(D | M_j, I\right)} = \frac{\Pr\left(M_i | I\right)}{\Pr\left(M_j | I\right)}\,\beta_{ij},$$
such that, because \(\sum_{i=1}^{N_m} \Pr\left(M_i | D, I\right) = 1\) (with \(N_m\) the number of models to compare), if we are interested in how a model \(M_i\) compares to a reference model, say \(M_1\), we arrive at
$$\Pr\left(M_i | D, I\right) = \frac{O_{i1}}{\sum_{j=1}^{N_m} O_{j1}}.$$
We implement Eq. (28) to compare, one patient at a time, one model against all the other models individually. For example, we implement the comparison between \(M_1\) and every other \(M_2\) as \(\Pr\left(M_2 | D, I\right) = \frac{1}{1 + O_{21}^{-1}}\), and we proceed iteratively. We first explore the Laplace approximation framework under the assumption of equally prioritized models, i.e., assuming that no previous preference can be accorded to any of the PSA models considered. We can exploit the asymptotic approximation (Murphy 2012; Theodoridis 2015) to the global likelihood, i.e., the evidence of the \(i\)th model, \(\Pr\left(D | M_i\right)\), writing
$$\Pr\left(D | M_i\right) = \int \mathrm{d}\mathbf{p}\, \Pr\left(\mathbf{p} | M_i, I\right) L\left(\mathbf{p}\right) \cong \Pr\left(\hat{\mathbf{p}} | M_i\right) L\left(\hat{\mathbf{p}}\right) \sqrt{\det\left(F\left(\hat{\mathbf{p}}\right)\right)},$$
with \(F\) the information matrix introduced in Sect. 3.1. A classical result of Bayesian analysis is to consider the limit of the previous expression for an increasing number of data points (\(N_p \to \infty\)) and flat priors, i.e., to compute the popular BIC index (as opposed to the AIC; Akaike 1974; Schwarz 1978).
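In practice, the conversion from per-model log-evidences to posterior model probabilities under Eq. (28) is a few lines of code. The log-evidence values below are made up purely for illustration; with equal model priors, the prior terms cancel in the odds ratios.

```python
# Sketch: posterior model probabilities from log-evidences under equal priors.
import numpy as np

log_evidence = np.array([-120.3, -118.9, -119.5, -125.0])  # one entry per model
# Work in log space and subtract the maximum for numerical stability.
w = np.exp(log_evidence - log_evidence.max())
post = w / w.sum()   # Pr(M_i | D, I); equal priors cancel in the odds ratios
print(post)
```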
As the number of patient data points is often limited (\(N_p \ll \infty\)) and we make explicit use of priors, BIC or AIC indices are not justifiable for model comparison. Instead, we build up a model-of-models function (Pasetto et al. 2021) to encode prior information as soon as it is available. Furthermore, as introduced above, we verified the Laplace approximation with a fully numerical integration based on nested sampling algorithms (Skilling 2004; Mukherjee et al. 2006; Feroz and Hobson 2008), i.e., a numerical technique designed explicitly to compute the global likelihood of models with different degrees of freedom.

Single Patient Comparison Results

Figure 7a shows an example of the quality of the model calibration achieved by the Bayesian posterior inference introduced in Sect. 3, applied to the parameter inference problem for all the models. The simulated disease dynamics vary significantly between the different models, and discrepancies between different models and patient data may indicate likely or unlikely biological mechanisms driving individual patients' resistance.

Fig. 7: Bayesian model comparison results. a Best fits for the 13 models analyzed for a representative patient. The black squares are the error bars, enhanced with a more prominent marker for visibility; yellow lines along the x-axis represent the on-treatment periods. b Model log-evidence comparison on patient #60, with error bars as obtained by the nested-sampling technique. The color range shows the best-performing model (yellow) and fades to the worst-performing model (gray). c Unnormalized posterior PDF for the best-performing model, E16, with credible intervals as black segments over the x-axis. d Comparison of the normalized log-evidence over all patient data. (The color scheme is consistent with panel b) (Color figure online)

Model evidence (Fig. 7b) demonstrates that no single model represents all patient data accurately, suggesting either that different biology drives individual patients' responses or that no model correctly captures the PSA dynamics. It may also imply that the PSA dynamics alone may be insufficient to discriminate between the different biological models. For some patients, model selection identifies models with a higher probability than others, but the selection varies on a per-patient basis. As a classical proof-of-concept of the Bayesian technology employed, we report in Fig. 7c, for the best-performing model, E16, on patient #60, the unnormalized marginalized posterior PDF for each parameter. The PDFs are mostly unimodal (but not for all parameters; see Sect. 6), suggesting that this model represents the patient fairly well and that the Laplace approximation can be justified. The credible intervals for the log-parameters are also plotted, superimposed on the x-axis.

Overall Model Selection

We calculate the Bayesian maximum a posteriori performance over all patients for each model (Fig. 7d); the Elishmereni et al. (2016) model performs marginally better on most patients. This result does not surprise us, as it is a model designed around clinical necessities, i.e., it was crafted with careful handling of the medical treatment. Nevertheless, as mentioned before, in the case of model comparison on a patient-to-patient basis, we could not identify a model that performed statistically better than the others and that would thus indicate the correct biological mechanics governing PSA dynamics. Figure 7d shows that E16 is preferred on only 10% of the patients, and eight of the 13 models have scores above 8%.
Conclusion and Discussion

This work considers several mathematical models (Table 1) to simulate the PSA dynamics of prostate cancer response to IADT in a prospective clinical trial. We exploit Bayesian continuous and discrete inference to interpret the data and identify the model with the highest likelihood of simulating the clinically observed dynamics. Using the PSA biomarker and the comparison between the different models, we 1) identify several models that can separate, through the model fitting, responding patients from patients who develop resistance to intermittent ADT, and 2) perform the Bayesian model comparison and demonstrate that the model by Elishmereni et al. (2016) performs slightly better than the others, i.e., as a better representative of most patients in the trial. Nevertheless, as evidenced in the example of Fig. 7c, the marginalized posterior PDF is often not optimally single-peaked, casting doubt on attempts to use this model to solve forecasting problems. While we have focused on the models' inference to evaluate the possible connection with their underpinning biology, we will explore the potential and limitations of the models' forecasting ability to predict clinical PSA trends in a follow-up paper (Pasetto et al. 2021, in preparation).

Table 1 Model compartment sketches with phase variables and parameters modeled in the inference process

The models analyzed herein all use longitudinal PSA data to infer the biological mechanisms underlying the observed PSA dynamics. PSA alone limits the potential of the presented approach and did not identify a single dominant model. Further information is necessary to simulate accurately, and ultimately predict, patient-specific PSA trajectories and the corresponding biological drivers of resistance. PSA alone might not be a helpful biomarker due to several dominant environmental factors, outside the models' scopes, that influence its evolution under treatment. The use of PSA as a surrogate marker for prostate cancer burden is indeed controversial. Overexpression of the PCA3 gene, obtained from the mRNA in urine samples, has been proposed as better suited to monitoring the cancer evolution (Bussemakers et al. 1999; Laxman et al. 2008; Neves et al. 2008; Hessels and Schalken 2009; Borros 2009). Two alternative directions might improve our understanding of PSA as a prostate cancer monitoring biomarker. On one side, a deeper understanding of the connection between PSA and tumor burden through model investigation might present the opportunity for a new class of models. Recently, the role of immature blood vessels formed under angiogenesis cues has been investigated to explain the relation between an increasing tumor burden contemporaneous with a decreasing PSA concentration (Barnaby et al. 2021). Additionally, models that include both PSA and androgen concentrations might present some advantages in the future. The modest but significant evidence for the E16 model over the other models might indicate a more critical relevance of dormancy, whose biology and mathematics are undoubtedly worth deeper understanding. Exploring PSA model probability distributions to disentangle responsive and resistant patient cohorts in a clinical setting could be investigated through cross-correlations with PCA3 biomarkers.
Such a cross-correlation would provide independent verification of the analytical findings herein, which remain, for the moment, data-driven and therefore entirely dependent on the one dataset utilized for all discussed models. Alternatively, PSA could be a perfect biomarker, but inter-patient heterogeneity in resistance mechanisms may prevent identifying a single model for all patients. Additionally, different resistance mechanisms may evolve in an individual patient, with their respective contributions to the observed response dynamics changing during therapy. More complex models and dynamic adaptive weighting of different variables, terms, and parameters may be necessary. Such models, however, would be non-identifiable with the presently available data. A close dialogue between biologists, statisticians, and mathematical and genitourinary oncologists may help identify which data should be collected in future clinical studies to help detangle the complex prostate cancer response dynamics to intermittent ADT. While the Bayesian framework is an invaluable tool to estimate model parameters and fit model dynamics to clinical measurements, the goodness of a fit informs neither the reliability of the estimated parameters nor the likelihood that a model represents the data for valid biological reasons. Relatively invariant PSA profiles can be obtained over a significant range of each parameter, as is the case for a weakly sensitive, highly non-identifiable parameter. This fact is often omitted in the modeling literature, where results are often presented without structural or practical identifiability analysis. Many of the models discussed herein have not been shown to be structurally identifiable, which jeopardizes any attempt to claim the practical identifiability of the inference performed herein. Nevertheless, we stress that a model's value may also be found in its interpretative role (Enderling and Wolkenhauer 2021). The complexity of the mechanisms involved in the biological responses to intermittent ADT can be captured correctly for a single patient but missed for others. Therefore, the model comparison is not intended to provide an absolute ranking; instead, it provides an instrument to explore the different biological mechanisms implemented in mathematical models of clinically observed treatment response and progression dynamics. We have performed a sensitivity analysis for all the models included in the paper; however, as this analysis overlaps with the original papers' work, we do not include those results here. Our sensitivity analysis is motivated as follows. (1) We want to understand the dependence of our results on the parameters. For example, if we claim the possibility of splitting relapsing (\(\Omega\)) and non-relapsing (\(\neg\Omega\)) patients by exploiting some specific model parameter combination, then the robustness of our result with respect to the sensitivity of that same parameter is worth investigating, to assign it the correct relevance and to evaluate its applicability to clinical tumor forecasting. (2) The technique implemented for the sensitivity analysis investigates the parameter sensitivity along the best-fit orbital integration, i.e., over the available longitudinal data. This approach enhances our understanding of when a particular \(\Omega/\neg\Omega\) segregating technique is more useful, on- or off-treatment, with consequent indications on the role that a model's splitting potential might or might not have (and when) on a per-patient basis.
(3) Continuous but non-differentiable functions might need particular attention in the computation of the sensitivities, because the sensitivities are defined through the Jacobian matrix. This issue represents a current research field, often omitted in the mathematical oncology literature, that is worth being brought to light. Therefore, in what follows, we exploit the direct differential method (DDM) for sensitivity analysis (Gu and Wang 2013) to track the time dependence of the sensitivity \(S_{ij} = \frac{\partial x_i(t, \hat{\mathbf{p}})}{\partial p_j}\), where in general \(x_i = c_{\mathrm{PSA}}\) and \(p_j\) is the generic parameter of the particular model under examination. For a generic vector field \(\frac{\partial \mathbf{x}(\mathbf{p}; t)}{\partial t} = f(\mathbf{x}, \mathbf{p})\) with \(\mathbf{x}(t_0, \mathbf{p}) = \mathbf{x}_0\), we couple the integration of the SoE defining the model with:
$$\frac{\partial S_{ij}\left(t, \hat{\mathbf{p}}\right)}{\partial t} = \frac{\partial f_i\left(x_k\left(t, \hat{\mathbf{p}}\right), \hat{\mathbf{p}}\right)}{\partial x_k\left(t, \hat{\mathbf{p}}\right)}\, S_{kj}\left(t, \hat{\mathbf{p}}\right) + \frac{\partial f_i}{\partial p_j}.$$
Generalized sensitivity (Stechlinski et al. 2018), based on the concept of the generalized derivative for non-smooth \(c_{\mathrm{PSA}}\) profiles (Clarke 1990) and used because of the loss of differentiability at the treatment switching points \(T_{ps} = \{0, 1\}\), has also been considered. We will not report the DDM analysis unless it is relevant to strengthen our specific results, and we refer to the original model papers for general sensitivity analyses of the presented models.
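The coupled integration behind the DDM is compact in code. The sketch below uses a scalar toy model, \(\mathrm{d}x/\mathrm{d}t = -p\,x\), so both Jacobians are analytic; for the PSA models, the Jacobians would instead be evaluated (symbolically or by automatic differentiation) at the best-fit parameters.

```python
# Sketch of the direct differential method (DDM): the model ODE is augmented
# with the sensitivity ODE dS/dt = (df/dx) S + df/dp, shown for a toy model.
from scipy.integrate import solve_ivp

def rhs(t, y, p):
    x, S = y
    f = -p * x        # model right-hand side
    df_dx = -p        # Jacobian with respect to the state
    df_dp = -x        # Jacobian with respect to the parameter
    return [f, df_dx * S + df_dp]

p_hat = 0.3
# S(t0) = 0 because the initial condition does not depend on p here.
sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], args=(p_hat,))
# sol.y[1] tracks S(t) = dx(t; p)/dp along the best-fit orbit; for this toy
# model the exact answer is S(t) = -t * exp(-p*t), which the solver reproduces.
```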
Footnotes

We use compact set notation here: e.g., \(0 < t \in [t_{\min}, t_{\max}]\) means all the possible positive values of \(t\) between \(t_{\min}\) and \(t_{\max}\), i.e., \(0 < t_{\min} \le t \le t_{\max}\). Open brackets exclude the borders: e.g., \(\tau_i \subseteq\, ]t_{\min}, t_{\max}[\) (used soon after) means that the interval \(\tau_i\), e.g., \(\tau = [a, b]\), is properly included between \(t_{\min}\) and \(t_{\max}\), with the limits \(t_{\min}\) and \(t_{\max}\) excluded: \(t_{\min} < \tau_i < t_{\max}\). This allows us to work with the domain of existence of the indicator functions while arbitrarily truncating it at \(t_{\min}\) or \(t_{\max}\). Note that in our case the indicator function \(\mathbf{1}_{\tau_i}\) is a non-continuous scalar function, traditionally indicated with bold characters even though it is not a matrix or a vector.

Note that, in general, \(t_{0D} \ne t_{0I}\), for the reasons seen in Sect. 2.2.

Note that we could consider the resulting SoE as a function of the variable \(c_A\) to reach a fully algebraic solution of the system by taking the ratio \(\frac{\mathrm{d}n_D}{\mathrm{d}c_A} / \frac{\mathrm{d}n_I}{\mathrm{d}c_A}\). Nevertheless, it is more fruitful to look at the trend of Eq. (6) as obtained by the best-fit procedure introduced in the next section.

Note on the statistical analysis: details of the statistical implementation are omitted to focus the present work on the mathematical biology aspects pertinent to the work and the journal, but they are available upon request from the authors.

Due to the complexity of the model, inference approximations analogous to those for P12B have been used in this analysis.

Through algebraic manipulators such as Mathematica or Maple, it is possible to show that the characteristic equation of the system for \(c_{\mathrm{PSA}}\) is algebraic of order 12; a complete investigation of its numerical roots is beyond the scope of the present paper.

Note on the integrators: details of the numerical integration are omitted to focus the present work on the mathematical biology aspects pertinent to the work and the journal, but they are available upon request from the authors.

References

Akaike H (1974) A new look at the statistical model identification. IEEE Trans Autom Control 19:716–723. https://doi.org/10.1109/TAC.1974.1100705

Baez J, Kuang Y (2016) Mathematical models of androgen resistance in prostate cancer patients under intermittent androgen suppression therapy. Appl Sci 6:352. https://doi.org/10.3390/app6110352

Barnaby JP, Sorribes IC, Jain HV (2021) Relating prostate-specific antigen leakage with vascular tumor growth in a mathematical model of prostate cancer response to androgen deprivation. Comput Syst Oncol 1:e1014. https://doi.org/10.1002/cso2.1014

Borros A (2009) Clinical significance of measuring prostate-specific antigen. Lab Med 40:487–491. https://doi.org/10.1309/LMEGGGLZ2EDWRXUK

Brady-Nicholls R, Nagy JD, Gerke TA et al (2020) Prostate-specific antigen dynamics predict individual responses to intermittent androgen deprivation. Nat Commun 11:1750. https://doi.org/10.1038/s41467-020-15424-4

Bruchovsky N, Klotz L, Crook J et al (2006) Final results of the Canadian prospective phase II trial of intermittent androgen suppression for men in biochemical recurrence after radiotherapy for locally advanced prostate cancer. Cancer 107:389–395. https://doi.org/10.1002/cncr.21989

Bruchovsky N, Klotz L, Crook J, Phillips N, Abersbach J, Goldenberg SL (2008) Quality of life, morbidity, and mortality results of a prospective phase II study of intermittent androgen suppression for men with evidence of prostate-specific antigen relapse after radiation therapy for locally advanced prostate cancer. Clin Genitourin Cancer 6(1):46–52. https://doi.org/10.3816/CGC.2008.n.008

Bussemakers MJ, van Bokhoven A, Verhaegh GW et al (1999) DD3: a new prostate-specific gene, highly overexpressed in prostate cancer. Cancer Res 59:5975–5979

Clarke FH (1990) Optimization and nonsmooth analysis. Society for Industrial and Applied Mathematics, Philadelphia

Droop MR (1968) Vitamin B12 and marine ecology. IV. The kinetics of uptake, growth and inhibition in Monochrysis lutheri. J Mar Biol Assoc UK 48:689–733. https://doi.org/10.1017/S0025315400019238

Eikenberry SE, Nagy JD, Kuang Y (2010) The evolutionary impact of androgen levels on prostate cancer in a multi-scale mathematical model. Biol Direct 5:24. https://doi.org/10.1186/1745-6150-5-24

Elishmereni M, Kheifetz Y, Shukrun I et al (2016) Predicting time to castration resistance in hormone sensitive prostate cancer by a personalization algorithm based on a mechanistic model integrating patient data. Prostate 76:48–57. https://doi.org/10.1002/pros.23099

Elzanaty S, Rezanezhad B, Dohle G (2017) Association between serum testosterone and PSA levels in middle-aged healthy men from the general population. Curr Urol 10:40–44. https://doi.org/10.1159/000447149

Enderling H (2015) Cancer stem cells: small subpopulation or evolving fraction? Integr Biol (Camb) 7:14–23. https://doi.org/10.1039/c4ib00191e
Enderling H, Wolkenhauer O (2021) Are all models wrong? Comput Syst Oncol 1:e1008. https://doi.org/10.1002/cso2.1008

Everett RA, Packer AM, Kuang Y (2014) Can mathematical models predict the outcomes of prostate cancer patients undergoing intermittent androgen deprivation therapy? Biophys Rev Lett 09:173–191. https://doi.org/10.1142/S1793048014300023

Feldman BJ, Feldman D (2001) The development of androgen-independent prostate cancer. Nat Rev Cancer 1:34–45. https://doi.org/10.1038/35094009

Feoktistov V (2006) Differential evolution: in search of solutions. Springer, New York

Feroz F, Hobson MP (2008) Multimodal nested sampling: an efficient and robust alternative to Markov Chain Monte Carlo methods for astronomical data analyses. Mon Not R Astron Soc 384:449–463. https://doi.org/10.1111/j.1365-2966.2007.12353.x

Goode SW, Annin SA (2015) Differential equations and linear algebra. Prentice Hall, Upper Saddle River

Grossmann ME, Huang H, Tindall DJ (2001) Androgen receptor signaling in androgen-refractory prostate cancer. J Natl Cancer Inst 93:1687–1697. https://doi.org/10.1093/jnci/93.22.1687

Gu Q, Wang G (2013) Direct differentiation method for response sensitivity analysis of a bounding surface plasticity soil model. Soil Dyn Earthq Eng 49:135–145. https://doi.org/10.1016/j.soildyn.2013.01.028

Hessels D, Schalken JA (2009) The use of PCA3 in the diagnosis of prostate cancer. Nat Rev Urol 6:255–261. https://doi.org/10.1038/nrurol.2009.40

Hirata Y, Aihara K (2015) Ability of intermittent androgen suppression to selectively create a non-trivial periodic orbit for a type of prostate cancer patients. J Theor Biol 384:147–152. https://doi.org/10.1016/j.jtbi.2015.08.010

Hirata Y, Bruchovsky N, Aihara K (2010) Development of a mathematical model that predicts the outcome of hormone therapy for prostate cancer. J Theor Biol 264:517–527. https://doi.org/10.1016/j.jtbi.2010.02.027

Hirata Y, Tanaka G, Bruchovsky N, Aihara K (2012) Mathematically modelling and controlling prostate cancer under intermittent hormone therapy. Asian J Androl 14:270–277. https://doi.org/10.1038/aja.2011.155

Hutter F, Hoos HH, Leyton-Brown K (2011) Sequential model-based optimization for general algorithm configuration. In: Coello CAC (ed) Learning and intelligent optimization. Springer, Berlin, pp 507–523

Ideta AM, Tanaka G, Takeuchi T, Aihara K (2008) A mathematical model of intermittent androgen suppression for prostate cancer. J Nonlinear Sci 18:593–614. https://doi.org/10.1007/s00332-008-9031-0

Jackson TL (2004) A mathematical investigation of the multiple pathways to recurrent prostate cancer: comparison with experimental data. Neoplasia 6:697–704. https://doi.org/10.1593/neo.04259

Jefferys WH, Berger JO (1992) Ockham's razor and Bayesian analysis. Am Sci 80:64–72

Jeffreys H (1946) An invariant form for the prior probability in estimation problems. Proc R Soc Lond Ser A Math Phys Sci 186:453–461. https://doi.org/10.1098/rspa.1946.0056

Klotz LH, Herr HW, Morse MJ, Whitmore WF (1986) Intermittent endocrine therapy for advanced prostate cancer. Cancer 58:2546–2550. https://doi.org/10.1002/1097-0142(19861201)58:11%3c2546::AID-CNCR2820581131%3e3.0.CO;2-N

Goldenberg SL, Bruchovsky N, Gleave ME et al (1995) Intermittent androgen suppression in the treatment of prostate cancer: a preliminary report. Urology 45:839–845. https://doi.org/10.1016/S0090-4295(99)80092-2
Laxman B, Morris DS, Yu J et al (2008) A first-generation multiplex biomarker analysis of urine for the early detection of prostate cancer. Cancer Res 68:645–649. https://doi.org/10.1158/0008-5472.CAN-07-3224

Lin K, Lipsitz R, Miller T, Janakiraman S (2008) Benefits and harms of prostate-specific antigen screening for prostate cancer: an evidence update for the U.S. Preventive Services Task Force. Ann Intern Med 149:192. https://doi.org/10.7326/0003-4819-149-3-200808050-00009

Morgentaler A, Conners W (2015) Testosterone therapy in men with prostate cancer: literature review, clinical experience, and recommendations. http://www.ajandrology.com/article.asp?issn=1008-682X;year=2015;volume=17;issue=2;spage=206;epage=211;aulast=Morgentaler. Accessed 8 Jun 2020

Morken JD, Packer A, Everett RA et al (2014) Mechanisms of resistance to intermittent androgen deprivation in patients with prostate cancer identified by a novel computational method. Cancer Res 74:3673–3683. https://doi.org/10.1158/0008-5472.CAN-13-3162

Mukherjee P, Parkinson D, Liddle AR (2006) A nested sampling algorithm for cosmological model selection. ApJ 638:L51. https://doi.org/10.1086/501068

Murphy KP (2012) Machine learning: a probabilistic perspective. The MIT Press, Cambridge

Neves AF, Araújo TG, Biase WKFS et al (2008) Combined analysis of multiple mRNA markers by RT-PCR assay for prostate cancer diagnosis. Clin Biochem 41:1191–1198. https://doi.org/10.1016/j.clinbiochem.2008.06.013

Packer A, Li Y, Andersen T et al (2011) Growth and neutral lipid synthesis in green microalgae: a mathematical model. Bioresour Technol 102:111–117. https://doi.org/10.1016/j.biortech.2010.06.029

Phan T, Crook SM, Bryce AH et al (2020) Review: mathematical modeling of prostate cancer and clinical application. Appl Sci 10:2721. https://doi.org/10.3390/app10082721

Phan T, He C, Martinez A et al (2019) Dynamics and implications of models for intermittent androgen suppression therapy. Math Biosci Eng 16:187–204. https://doi.org/10.3934/mbe.2019010

Portz T, Kuang Y, Nagy JD (2012) A clinical data validated mathematical model of prostate cancer growth under intermittent androgen suppression therapy. AIP Adv 2:011002. https://doi.org/10.1063/1.3697848

Qin Z, Yao J, Xu L et al (2020) Diagnosis accuracy of PCA3 level in patients with prostate cancer: a systematic review with meta-analysis. Int Braz J Urol 46:691–704. https://doi.org/10.1590/S1677-5538.IBJU.2019.0360

Schwarz G (1978) Estimating the dimension of a model. Ann Stat 6:461–464. https://doi.org/10.1214/aos/1176344136

Siegel RL et al (2021) Cancer statistics, 2021. CA Cancer J Clin. https://doi.org/10.3322/caac.21654. Accessed 26 Jan 2021

Skilling J (2004) Nested sampling. AIP Conf Proc 735:395–405. https://doi.org/10.1063/1.1835238

Stechlinski P, Khan KA, Barton PI (2018) Generalized sensitivity analysis of nonlinear programs. SIAM J Optim 28:272–301. https://doi.org/10.1137/17M1120385

Tanaka G, Hirata Y, Goldenberg SL et al (2010) Mathematical modelling of prostate cancer growth and its application to hormone therapy. Proc R Soc A 368:5029–5044. https://doi.org/10.1098/rsta.2010.0221

Theodoridis S (2015) Machine learning: a Bayesian and optimization perspective, 1st edn. Academic Press Inc, Orlando

Wiggins S (2003) Introduction to applied nonlinear dynamical systems and chaos, 2nd edn. Springer, New York
Zhang J, Cunningham JJ, Brown JS, Gatenby RA (2017) Integrating evolutionary dynamics into treatment of metastatic castrate-resistant prostate cancer. Nat Commun 8:1–9. https://doi.org/10.1038/s41467-017-01968-5

Acknowledgements

Dr. S. Pasetto thanks Prof. D. Crnojevic, Prof. S. Ross, and Dr. S. Smit for stimulating discussions, technical support, or careful reading of an early version of this manuscript. The research reported in this publication was supported by the Ocala Royal Dames for Cancer Research, Inc., the Jayne Koskinas Ted Giovanis Foundation for Health and Policy, a Maryland private foundation dedicated to effecting change in health care for the public good, and the National Cancer Institute of the National Institutes of Health under Award Numbers R21CA234787 and U54CA143970. The content is solely the authors' responsibility and does not necessarily represent the official views of the National Institutes of Health. The opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and not necessarily those of the Jayne Koskinas Ted Giovanis Foundation for Health and Policy, its directors, officers, or staff.

Author information

Department of Integrated Mathematical Oncology, H. Lee Moffitt Cancer Center and Research Institute, 12902 Magnolia Drive, Tampa, FL 33612, USA: S. Pasetto, H. Enderling, R. A. Gatenby & R. Brady-Nicholls

Department of Radiation Oncology, H. Lee Moffitt Cancer Center and Research Institute, 12902 Magnolia Drive, Tampa, FL 33612, USA: H. Enderling

Department of Genitourinary Oncology, H. Lee Moffitt Cancer Center and Research Institute, 12902 Magnolia Drive, Tampa, FL 33612, USA

Department of Radiology, H. Lee Moffitt Cancer Center and Research Institute, 12902 Magnolia Drive, Tampa, FL 33612, USA: R. A. Gatenby

Correspondence to S. Pasetto or R. Brady-Nicholls.

Cite this article: Pasetto, S., Enderling, H., Gatenby, R.A. et al. Intermittent Hormone Therapy Models Analysis and Bayesian Model Comparison for Prostate Cancer. Bull Math Biol 84, 2 (2022). https://doi.org/10.1007/s11538-021-00953-w
Logical connection of Newton's Third Law to the first two

The first and second laws of motion are obviously connected. But it seems to me that the third law is not related to the first two, at least logically. (In Kleppner's Mechanics the author states that the third law is a necessity to make sense of the second law. It didn't make sense to me, though. I'll post the excerpt if anyone would like to see it.)

EDIT: Excerpt from Introduction to Mechanics by Kleppner & Kolenkow (1973), p. 60:

Suppose that an isolated body starts to accelerate in defiance of Newton's second law. What prevents us from explaining away the difficulty by attributing the acceleration to carelessness in isolating the system? If this option is open to us, Newton's second law becomes meaningless. We need an independent way of telling whether or not there is a physical interaction on a system. Newton's third law provides such a test. If the acceleration of a body is the result of an outside force, then somewhere in the universe there must be an equal and opposite force acting on another body. If we find such a force, the dilemma is resolved; the body was not completely isolated. [...] Thus Newton's third law is not only a vitally important dynamical tool, but it is also an important logical element in making sense of the first two laws.

newtonian-mechanics forces

Hi Ron, and welcome to Physics Stack Exchange! I think it would help if you can quote the piece from Kleppner that you're talking about, as long as it's not too long. – David Z♦ Dec 11 '11 at 3:21

Thanks! I had already edited the original post and added the excerpt. I hope it isn't too long. – Ron Dec 11 '11 at 6:06

It is not clear to me what exactly your question is. – student Dec 11 '11 at 9:52

There is a historical context which hasn't been addressed by other answers to date, and it may play an important part in the seemingly redundant statement of the first and second laws. Previous to Newton, much thought on the nature of motion in Europe could be traced back to Aristotle, who claimed that bodies in motion naturally tended to return to rest (nor is that stupid: try sliding a book on a table). The first law is a bald contradiction of this doctrine. – dmckee♦ Dec 11 '11 at 17:45

Newton makes an argument that a violation of the third law would produce a violation of the first law, in the case of attractive forces. This is in the scholium following the statement of the laws of motion, at "In attractions, I briefly demonstrate ..." en.wikisource.org/wiki/… – Ben Crowell May 28 '13 at 0:36

Newton's First and Second Laws relate forces acting on a single system to conservation or changes of that system's momentum. They say nothing about the nature of these forces or their origin; the forces could come "out of nowhere" and Laws 1 and 2 would still hold. The Third Law, however, indicates that all forces, or "actions", are just one side of an interaction. This view, that systems act on each other by either attracting or repelling each other, still holds even in circumstances where other aspects of Newtonian mechanics don't (i.e. relativistic or quantum mechanics).
All forces observed from Newton's time until today are still modeled as the results of the fundamental interactions. One could say that the First Law describes the nature of momentum, the Third Law the nature of forces, and the Second Law the link between the two. The First and Third Laws thus provide the setting where the Second Law is stated. – Dalker

Wow! Thanks! This post made a lot of sense. I agree. Newton, as you have posted, describes the nature of momentum and also, I think, in hindsight he gives a definition of a force, i.e. that which changes the momentum of the object. – Ron Dec 12 '11 at 11:08

Kleppner's statement is carefully worded so that he is not claiming (1) that the third law follows from the first and second, nor (2) that the third law is necessary in order to make sense of the first and second laws. He's simply saying that there's a connection, not that they are conjoined twins. In my opinion he is still overstating the connection. For examples of experiments that directly test the third law, see Bartlett 1986 and Kreuzer 1968. Battat 2007 is a test of the first law that is independent of the third. Kleppner says:

Suppose that an isolated body starts to accelerate in defiance of Newton's second law. What prevents us from explaining away the difficulty by attributing the acceleration to carelessness in isolating the system? If this option is open to us, Newton's second law becomes meaningless. We need an independent way of telling whether or not there is a physical interaction on a system. Newton's third law provides such a test.

The three experiments I've described all happen to be gravitational experiments. In principle, we could worry that a non-null result from an experiment such as Battat's could be interfered with by gravitational forces from distant bodies -- gravity is, after all, a long-range force. But these distant bodies would have to be unknown, or else we could account for their effects. If they were unknown, then Kleppner's proposed test doesn't work: we can't check whether they're accelerating due to the third-law partner of the force they exerted on our experiment.

Bartlett and van Buren, Phys. Rev. Lett. 57 (1986) 21, summarized in Will, http://relativity.livingreviews.org/Articles/lrr-2006-3/

Battat 2007, http://arxiv.org/abs/0710.0702

Kreuzer, "Experimental measurement of the equivalence of active and passive gravitational mass," Phys. Rev. 169 (1968) 1007, http://bit.ly/13Z6XAm

– Ben Crowell

Newton's third law of motion gives meaning to the first two laws by restricting what type of fundamental forces act between particles. This restriction gives meaning to the "force" described in the first two laws. Taken by themselves (by which I mean without any reference to any explicit form for the fundamental forces acting between particles), the third law is what gives the first two laws any predictive power. This is what Mr. Kleppner is referring to when he says that Newton's third law is "an important logical element in making sense of the first two laws." For example, suppose you are watching two balls float in outer space, ball A and ball B. You see ball A accelerate towards ball B. Using Newton's first law you know there is a force acting on ball A from ball B. Using Newton's second law you know that the force is along a vector connecting the two balls. Finally, using Newton's third law you can predict that ball B should also be accelerating towards ball A. And you can test that prediction.
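To spell that prediction out in symbols (my own notation, not part of the original answer): the third law states $\vec F_{AB} = -\vec F_{BA}$, so combining it with the second law gives

$$m_A \vec a_A = -\, m_B \vec a_B \quad\Longrightarrow\quad \vec a_B = -\frac{m_A}{m_B}\,\vec a_A, \qquad \frac{d}{dt}\left(m_A \vec v_A + m_B \vec v_B\right) = 0,$$

i.e. ball B must accelerate towards ball A with a magnitude fixed by the mass ratio, and the total momentum of the pair is conserved, which is exactly what one can go out and test.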
So, the third law is what gives the first two laws any predictive power. I would argue that instead of Newton's third law it is the explicit form of the fundamental forces which "really" gives the first two laws their meaning. In this case, Newton's third law of motion is just a restriction on what form those fundamental forces can take. To illustrate this idea, suppose we want to use Newton's laws to do some science. Newton's first law states: A body in motion will stay in motion unless acted upon by an external force. Since we don't know what a force is yet, it could be anything, and this statement can be rephrased as: A body in motion will stay in motion unless it doesn't. You can see why this is not useful. Newton's second law of motion states: The acceleration of a body is parallel and proportional to the force exerted on the body and inversely proportional to the mass of the body. Again, without a definition of force, this statement is useless. Now, suppose we have a definition for a force, such as gravity: $F_{gravity} = G \frac{m_{1} m_{2}}{r^2}$. Now the first two laws have meaning. We can use our definition of force to predict the motion of a particle due to the gravitational attraction of some other body and then go out and test it! Newton's third law of motion states: Any force exerted by body A on body B implies an equal and opposite force on body B by body A. Given a description of all the fundamental forces acting between particles, we don't need Newton's third law. Instead, Newton's third law is telling us how these forces act, namely symmetrically with respect to both particles. You can see this reflected in the mathematical formula for the force of gravity: switching $m_1$ and $m_2$ you get the same force.

I think you make here two interesting points: Newton's second law is an incomplete law (it contains a definitional element; you still have to work out the explicit form the force takes in a particular circumstance), and Newton's third law poses some restrictions on the mathematical form of the force, since interactions between two particles are symmetric with respect to both. My question is: does Newton's third law somehow incorporate the classical principle of relativity? – quark1245 Dec 11 '11 at 9:32

Today, Newton's first law is often interpreted (or extended) as the statement that inertial reference systems exist. For example, let me cite from Jose, Saletan: "Classical Dynamics: A Contemporary Approach":

There exist certain frames, called inertial, with the following two properties. Property A) Every isolated particle moves in a straight line in such a frame. Property B) If the notion of time is quantified by defining the unit of time so that one particular isolated particle moves at constant velocity in this frame, then every other isolated particle moves at constant velocity in this frame.

Otherwise the first law would be a trivial consequence of the second one... Concerning the connection between the second and the third law, note that one has to define the word "mass". This is sometimes done via the third law. If you do so, you need the third law just to understand the variables that occur in the second law. Note that there were many critiques in the history of physics concerning the logical status of Newton's laws, and there were many attempts to make it logically clearer. – student
The second law does not imply the first. The second law only says that F=0 implies a=0, but that does not mean that the velocity is constant, merely that the acceleration is zero; if you have a nonzero jerk, then the acceleration can change. Jumping from a pointwise zero acceleration to a constant velocity is just like a student analyzing projectile motion, noting that the velocity is zero at the top and then assuming the projectile stays there forever. The student ignored the possibility of a nonzero acceleration; you ignored the possibility of a nonzero jerk. – Timaeus Dec 29 '14 at 1:59

@Timaeus Well the law would say that a=0 everywhere. Wouldn't this imply that velocity is constant? – timur Sep 23 '17 at 17:08
The Tortuous Geometry of the Flat Torus Take a square sheet of paper. Can you glue opposite sides without ever folding the paper? This is a conundrum that many of the greatest modern mathematicians, like Gauss, Riemann, and Mandelbrot, couldn't figure out. While John Nash did answer yes, he couldn't say how. After 160 years of research, Vincent Borrelli and his collaborators have finally provided a revolutionary and breathtaking example of a bending of a square sheet of paper! And it is spectacularly beautiful! March 9, 2014 | Article | Computer Science, Fractals, Geometry, Mathematics, Topology | Lê Nguyên Hoang | 16903 views In 2012, mathematics gave birth to a new baby. And she is beautiful! Her parents, Vincent Borrelli, Saïd Jabrane, Francis Lazarus, Boris Thibert and Damien Rohmer, who formed the Hévéa project, have named her the first $\mathcal C^1$ isometric embedding of the flat square torus. Sexy, right? Have a look at the first video ever taken of her: OK, her name is a bit long so I'll just call her the Hévéa Torus. She is beautiful indeed… But what is she? Amazingly, she was first imagined a century and a half ago, by her ancestors Carl Friedrich Gauss and Bernhard Riemann, in 1854. But it took a century for her great-grandfather John Nash to actually prove that she could one day be conceived, although he didn't specify how. Another 20 years later, her grandfather Benoît Mandelbrot laid the foundations of a new kind of geometry which hinted at an actual possible conception. But, weirdly enough, it took the advent of computers to finally procreate her, after 5 long years of top-level mathematics research! And yet, it all started with a very simple problem… Bending the Flat Square Torus These days, I'm spending way too much time playing the game on the right called Netwalk, where a network needs to be built. The game is played within a square. One tricky aspect is that whenever you get out of the square on the right, you reappear on the left. Like in PACMAN, or in these awesome games. What do these games have to do with the video above? Amazingly, the network universe and the Hévéa Torus are geometrically identical! You're kidding, right? I mean, on the right we have a square, and the Hévéa Torus is… well, a weird twisted shape! You're right. From our perspective, these are very different. But, from the perspective of some being stuck within these 2D worlds, there is absolutely no difference! The Netwalk world and the Hévéa Torus are intrinsically mathematically (nearly) identical. OK… I admit, there's a slight difference in terms of accelerations in these geometries. We'll get to that… Humm… I have the biggest trouble imagining myself living in the Hévéa Torus… You're not the only one! It took two of the greatest giants of mathematics to figure out what it meant to live within a torus. The first one is Carl Friedrich Gauss, also known as the Prince of Mathematics, who famously proved the Theorema Egregium, which you can learn more about by reading Scott's article on non-Euclidean geometry. But, more importantly, it was Bernhard Riemann, Gauss' disciple, who unlocked the wider geometry of so-called manifolds by providing a powerful new picture of what geometry could be. What is this Riemann's picture of geometry you're talking about? In short, a Riemannian manifold is a space such that each local neighborhood of a point of that space looks flat. The torus of the video is an example of a 2-dimensional manifold, also known as a surface.
Around each point, if you zoom sufficiently, your surface will look like a 2-dimensional sheet of paper. And, on this sheet of paper, lengths and angles are the same as on actual sheets of paper! More precisely, a scalar product must be defined on each local neighborhood. In the middle of the image above, we have a 2D section of a 6D Calabi-Yau manifold, which is widely studied by string theorists. Find out more about Riemannian geometry with my article on the spacetime of general relativity. Now, involving lengths and angles makes things complicated… so let's start with a simpler kind of manifold: the topological torus. The topological torus? Topology is the art of forgetting the geometrical concepts of angles and lengths. This means that topology allows stretching sheets of paper without effectively changing them. Sure! Topologically, it is straightforward to transform the Netwalk square into a torus. We merely have to glue together opposite sides. This is what's done below: I've tried to make a torus with the Netwalk square but I miserably failed after hours of ridiculous attempts… Sorry for that! This procedure of gluing is also described by this beautiful video by Geometric Animations: Find out more about these gluing operations with my article on the Poincaré conjecture. You can also learn more with my article on topology. Nash's Isometric Embedding The trouble is that, as you can feel while watching this video, it seems impossible to glue opposite sides without stretching the square we started with. Is that much harder? It is! This no-stretching requirement corresponds to an isometry. To get a sense of the constraint that isometry represents, check this awesome video by Colm Kelleher on TedEd, which explains how isometry can prevent the tip of your pizza from falling down: As the video explains, it's impossible to bend a sheet of paper into a sphere, or into a potato chip. This is because of the curvatures of these shapes. And the classical smooth donut-like torus is highly curved too, so it's impossible to bend the flat Netwalk world into that classical torus. An isometric bending of the Netwalk world requires transforming it into a non-curved torus! Hummm… I'm now beginning to think it's impossible to bend a square into a torus! You're not the only one! 60 years ago, most mathematicians even thought it was impossible! But no one could prove it. In fact, when deeply annoyed by his young, brilliant and pretentious colleague, MIT mathematician Warren Ambrose rashly retorted to him: "If you're so good, why don't you solve the [100-year-old isometric] embedding problem for manifolds", of which the bending of the Netwalk world with no stretching is an example. Ambrose was hoping it would thereby keep his colleague busy for the rest of his life. But shortly later, that colleague was claiming he had solved it! Mathematically, an embedding is an injective map of a manifold into some larger space, typically $\mathbb R^n$, where $n$ is strictly greater than the dimension of the manifold. This map needs to be an immersion, which means that its derivatives (which are linear applications) must always be injective. Who was that colleague? He was a founding father of game theory, a future winner of the Nobel prize in economics and the future hero of the movie A Beautiful Mind… John Forbes Nash. And he immediately solved a 100-year-old problem? Not really… As I said, the young Nash was a pretentious douchebag.
After all, he had just completed a revolutionary 28-page PhD thesis, and he was visibly smarter than most of his colleagues. So, he figured out that only a prestigious problem was worthy of his time. One that would mark History. So, to test whether the isometric embedding problem was of that kind, Nash looked at the reactions of his colleagues as he told them he had cracked it. And surely enough, it seemed that solving the isometric embedding problem would mark History. So, Nash decided to give it a try. Did Nash actually solve it? Amazingly, he did! Nash might have been showy, but he was also definitely a world-class mathematician. So what's the answer? Can a square be bent into a torus? The answer is yes. It can. Sort of… I've explained it all in this Science4All video: I uploaded the above video the day before Nash sadly passed away in a car accident. Given how little praise I gave in the video above, I made another tribute video to honour this beautiful mind. So, as it turned out, Nash proved that we could, provided the bent torus had an infinite number of points at which acceleration could not be defined! Here's an illustration of what that means. Imagine a skater rolling down a slope: In the first case, there is a huge discontinuity in the slope at the green point. In some sense, the slope cannot be defined at that point. In the third case, the slope is so smooth that the skater doesn't feel anything special at this point. Finally, and most interestingly, in the second case, at the green point, the skater will suddenly feel a force acting on him which was not there before. Nash proved that this second case must happen at infinitely many points on the bent torus. Mathematically, this second case corresponds to the green point having no second derivative. You can learn about the basics of derivatives in my article on differential calculus. Nash's claim means that Newton's laws of mechanics would make no sense at infinitely many points of the bent torus, kind of like general relativity no longer makes sense at the singularities hidden inside black holes! Waw! That sounds sick! And hard to imagine! I know! In fact, it was so sick that Nash couldn't provide an example of a bent square torus! His proof was not constructive. He merely showed that there must be a way to bend a square into a torus, but didn't say how… Mandelbrot's Fractals Weirdly enough, it's much easier to imagine a bent torus for which acceleration is undefinable everywhere. Really? That sounds so weird… To understand what that could mean, let's look at some other sick geometrical objects: non-differentiable continuous curves. Do you mean curves which are continuous everywhere but differentiable nowhere? Is that even possible? The first examples of such curves were given in the 19th century. They were called monsters. I guess that, historically, one of the first examples of a monster was that of the function $\sum 2^{-n} \cos(4^n x)$. But the more popular example nowadays is the Koch snowflake, as beautifully explained in the awesome show Fractals – The Hidden Dimension by NOVA. Or, even better, the curve formed by the British coastline: Oh! So this is what fractals are: non-differentiable continuous curves! Exactly! And it took another genius to unveil the magic of fractal geometry. This genius is Benoît Mandelbrot. Learn more with Thomas' article on fractals.
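To see how quickly such fractal constructions blow lengths up, here is a minimal sketch (a simple illustration of the standard Koch refinement, not part of the Hévéa construction): each refinement replaces every segment by four segments a third as long, so the total length is multiplied by 4/3 at each step and diverges.

```python
# Length of the Koch curve after n refinements, starting from a unit segment.
# Each refinement replaces every segment by 4 segments, each 1/3 as long.
length = 1.0
for n in range(11):
    print(f"step {n:2d}: {4**n:8d} segments, total length = {length:.4f}")
    length *= 4 / 3
```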
Curiously though, Mandelbrot's ideas weren't popular at first. His first papers on this new kind of geometry were so unusual that the mathematical community rejected them. Referees claimed that the papers, despite displaying pretty images, did not contain actual mathematics. Because the community was not ready for his new ideas, Mandelbrot decided to write a book on his own. This best-seller was so amazing that it quickly became iconic, not just for young mathematicians, but in popular culture as well! Now, the thing with fractals is that it's very easy to make a shape of any length. For instance, you can draw a "curve" that looks exactly like a circle, but actually has a length of 4. This is what Vihart did in this awesome video: So, following Mandelbrot's (and Vihart's) ideas, it's not too hard to imagine a torus whose lengths perfectly match those of the Netwalk world. In other words, through a fractal process, it's (at least theoretically) possible to describe a simple procedure to fold the square of the Netwalk world into an origami torus-like "zigfinihedron" which glues opposite sides. So why did it take 5 years to actually construct it? Look carefully… The Hévéa Torus is not a zigfinihedron! What the Hévéa team was searching for was a smoother bending of the torus which, although it had no second derivatives, still had continuous tangent planes at all points! And the trouble with fractals is that they are precisely rough objects with no such continuous tangent planes… So why did you bring that up? Somehow we need a trade-off between the smoothness of classical surfaces and the roughness of fractals. This trade-off is what the Hévéa team called smooth fractals. The Hévéa Torus The idea of smooth fractals isn't far from what was hinted at by John Nash. His embedding theorem stated that if a manifold could be topologically embedded in a higher-dimensional space, as we have done for the Netwalk world, then this embedding could be smoothly corrugated like Vihart's zigfinigons to end up isometric, while having tangent planes at all points. What's more, Nash proved that the amplitudes of the corrugations could be as small as we wanted. Technically, Nash still required $\mathcal C^1$ continuity, which means that tangent spaces must vary continuously. Moreover, the initial embedding needs to be a shrinking of the manifold. Furthermore, Nash only proved this result provided that the dimension of the manifold was at least 2 smaller than the dimension of the space it was embedded in (and it would thus not work for our torus). The refinement to all embeddings is due to Nicolaas Kuiper, and is known as the Nash-Kuiper theorem. Finally, Nash also proved a $\mathcal C^k$ isometric embedding theorem, which says that we can still embed a manifold even with greater smoothness, provided the higher space we embed our manifold in is of much greater dimension. So, following Nash and Mandelbrot's ideas, the Hévéa team added several layers of corrugation to a classical torus embedding. This is what's illustrated below, where the first 3 layers have been added sequentially: All the images of this section are taken from the Hévéa project webpage. Let's have a closer look! Just like Vihart's drawings, the Hévéa Torus is theoretically obtained by adding an infinite number of such corrugations. But, computationally, the Hévéa team merely added 5 layers, as any further layer would in fact be imperceptible given the resolution of the image.
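Before going on, here is a toy numerical sketch of the key mechanism (with made-up amplitudes and frequencies, not the Hévéa team's actual parameters): superimposing oscillations whose amplitudes shrink faster than their wavelengths makes the length of a curve grow, while keeping the slopes, and hence $\mathcal C^1$ smoothness, under control.

```python
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 20001)

def corrugated_circle(layers, amp0=0.2, freq0=8):
    """Radius profile of a circle with `layers` superimposed corrugations.

    Amplitudes shrink by a factor of 4 per layer while frequencies only
    grow by a factor of 3, so each layer's slope contribution amp*freq
    forms a convergent geometric series: the limit curve remains C^1,
    even though its length keeps increasing.
    """
    r = np.ones_like(t)
    amp, freq = amp0, freq0
    for _ in range(layers):
        r += amp * np.sin(freq * t)
        amp /= 4.0
        freq *= 3
    return r

for layers in range(6):
    r = corrugated_circle(layers)
    x, y = r * np.cos(t), r * np.sin(t)
    length = np.hypot(np.diff(x), np.diff(y)).sum()
    print(f"{layers} layers: length = {length:.3f}")
```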
But is the Hévéa Torus really a Netwalk world with no stretching? That's hard to verify… but yes. Below is an image which displays the correspondence between lines in the Netwalk world and lines in the Hévéa Torus: Amazingly, the black and green loops actually have the same lengths in both figures. Granted, it's a bit less obvious on the Hévéa Torus, but that's because the black loop has a more fractal-like structure, which makes it longer than it seems to be! Wait… Is the Hévéa Torus really smooth? Yes! The key to obtaining the smoothness of our corrugated torus is to have the amplitudes of successive corrugations decrease faster than their "wavelengths". By contrast, Vihart's drawings had rather one-to-one ratios. For instance, at 3:50, she divided the amplitudes of the corrugations of her curves by 2 while she made twice as many of them. Instead, by carefully adjusting the faster decrease of the amplitudes of successive corrugations, the Hévéa team managed to guarantee the $\mathcal C^1$ continuity of the Hévéa Torus! I'm not sure this convinces me… One way to see it is to take the shape of the black loop of the figure above (which corresponds to a meridian), in the 3D space the Hévéa Torus is embedded in. Let's compare that to a Koch snowflake. Importantly, as opposed to the non-differentiable Koch snowflake, the meridian of the Hévéa Torus is still smooth enough to have (continuous) tangents at all points. Instead of recapitulating, I'll let the great James Grime sum up all we've discussed here: Now, the result of the Hévéa project is a spectacular achievement of over 160 years of mathematical pondering. Obviously, this article only mentions the few mathematicians who made the most stunning breakthroughs in this investigation. But, unfortunately, it also dramatically fails to account for the hundreds of other mathematicians who have shaped and reshaped our intuitions of curves and surfaces, in a deeper but less obvious fashion. For me, this amazing silent build-up is the source of the magic of mathematics. Think about it. Thanks to many unsung mathematicians, we now have in front of our eyes an object that the great Gauss, Riemann, Nash and Mandelbrot couldn't even imagine, even though they had been searching for it all along. I hope you feel privileged and amazed by the following precious images of the Hévéa Torus… I should say that even though combining Nash's and Mandelbrot's ideas sounds reasonably doable, it is actually a huge and difficult endeavor. Once again, the Hévéa team had to spend 5 years to come up with the Hévéa Torus, after several unsuccessful attempts. It was a tedious and tricky work, which involved very modern mathematics, including and especially Mikhaïl Gromov's homotopy principle. This gives me one last occasion to stress how big an achievement the conception of the Hévéa Torus is. Congrats to the Hévéa team! Let me leave you with my favorite image of the Hévéa Torus, taken from its interior, with light coming from the opposite side… Astonishing! One last thought… Can you believe that cutting through the Hévéa Torus twice yields a square?
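A side remark to close with: while no $\mathcal C^2$ isometric embedding of the flat torus exists in 3D, there is a perfectly smooth one in 4D, and it is easy to check numerically. The sketch below (an illustration of this classical fact, not of the Hévéa construction) verifies that the map $f(u,v) = (\cos u, \sin u, \cos v, \sin v)$ preserves the lengths of tiny steps on the square.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(u, v):
    # The standard flat (Clifford) torus in R^4.
    return np.array([np.cos(u), np.sin(u), np.cos(v), np.sin(v)])

# Length preservation: for a tiny step (du, dv), the 4D displacement
# |f(u+du, v+dv) - f(u, v)| should equal sqrt(du^2 + dv^2).
h = 1e-6
for _ in range(5):
    u, v = rng.uniform(0, 2 * np.pi, size=2)
    du, dv = h * rng.standard_normal(2)
    step_4d = np.linalg.norm(f(u + du, v + dv) - f(u, v))
    step_2d = np.hypot(du, dv)
    print(f"{step_4d:.6e}  vs  {step_2d:.6e}")
```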
References: Hévéa project home page. "Flat tori in three-dimensional space and convex integration" on PNAS. "Gnash, un tore plat" on Images des Maths.
PM2.5 exposure and anxiety in China: evidence from the prefectures. Buwei Chen1, Wen Ma1, Yu Pan2, Wei Guo3 (ORCID: orcid.org/0000-0002-4888-4816) & Yunsong Chen1. BMC Public Health volume 21, Article number: 429 (2021). Anxiety disorders are among the most common mental health concerns today. While numerous factors are known to affect anxiety disorders, the ways in which environmental factors aggravate or mitigate anxiety are not fully understood. Baidu is the most widely used search engine in China, and a large amount of data on internet behavior indicates that anxiety is a growing concern. We reviewed the annual Baidu Indices of anxiety-related keywords for cities in China from 2013 to 2018 and constructed anxiety indices. We then employed a two-way fixed effect (FE) model to analyze the relationship between PM2.5 exposure and anxiety at the prefectural level. The results indicated that there was a significant positive association between PM2.5 and the anxiety index. The anxiety index increased by 0.1565258 for every unit increase in the PM2.5 level (P < 0.05), which suggested that current PM2.5 levels in China pose a considerable risk to mental health. The enormous impact of PM2.5 exposure indicates that the macroscopic environment can shape individual mentality and social behavior, and that it can be extremely destructive in terms of societal mindset. Anxiety is characterized by inner discomfort, and it is usually accompanied by agitated behavior due to disproportionate concerns over safety, the future, personal fate, and the fates of others [1]. Anxiety affects all populations, and it has long been among the most prevalent and debilitating psychiatric conditions worldwide [2]. Stein et al. (2020) found that anxiety disorders were the sixth leading cause of long-term disability in high-income, low-income, and middle-income countries [3]. Its impacts on health, physical function, and economic output are highly detrimental [4]. Many previous studies on anxiety have focused on the influence of factors at the individual level [5, 6]. However, in his discussion of anxiety research methodology, Hunt (1999) described anxiety as a reaction to dangerous situations and a product of social change [7]. In cultural terms, May (2010) described anxiety as a spreading uneasiness [8]. In times of great social change, individuals who anticipate negative events feel insecure [9]. This type of insecurity is a form of anxiety. Since significant social change affects everyone within a society, anxiety builds at the individual level and eventually affects society at the macroscopic level. Unlike individual anxiety, macroscopic or societal anxiety is a property of societal structure that is broadly distributed among various groups. The influences of social and economic factors on anxiety at the macroscopic level are the best understood (e.g., Gough, 2009; Viseu et al., 2018) [10, 11]; however, environmental factors that aggravate or mitigate anxiety have received less attention. Environmental pollution is among the most serious problems facing populations worldwide. Particulate matter in the atmosphere with a diameter of 2.5 μm or less is referred to as PM2.5, which is an important measure of air pollution and haze [12]. PM2.5 is a health hazard because it can readily enter the lungs [13,14,15]. PM2.5 exposure has been linked to changes in the central nervous system associated with mental disorders [16,17,18].
In support of the Air Pollution Prevention and Control Action Plan promulgated by the State Council of China in 2013, a series of stringent clean air actions was implemented from 2013 to 2017, and PM2.5 concentrations across the country decreased rapidly as a result [19]. Given the issues mentioned above, we investigated whether PM2.5 levels had a direct impact on societal anxiety in the contexts of gross domestic product (GDP), housing prices, the rate of urbanization, internet development, medical resources, and health service levels. China has experienced rapid economic development and monumental social change. However, the health and longevity of Chinese citizens have not increased significantly [20]. The incidence of mental illness has increased markedly along with rapid economic development [21]. Anxiety in particular has become a common social phenomenon. Huang et al. (2019) confirmed that the prevalence of anxiety disorders in China was 4.98% in 2017, which was significantly higher than in surveys conducted in the 1980s and 1990s [22]. Anxiety is often stigmatized in China, which makes affected individuals unwilling to reveal their psychological status. Thus, using traditional survey methods and clinical protocols to assess the extent of anxiety in China would likely yield inaccurate results. Analyzing internet search behavior to detect anxiety has unique advantages over traditional survey methods and medical tests. A research subject can privately search for relevant information in the absence of a third party, which dramatically reduces underreporting relative to traditional survey methods. Similar methods are often used to reveal attitudes or behaviors that are difficult to capture in an academic setting [17, 18, 23]. Baidu is the most widely used Web search engine in China, with over 80% of the market share [24]. We compiled a list of anxiety-related terms used for internet searches on Baidu. The wide use of Baidu in China therefore makes its search queries a representative basis for constructing anxiety indicators. We then examined whether and how local PM2.5 levels were associated with anxiety levels using a nationally representative panel of data collected in 297 Chinese cities in the years from 2013 to 2018. The data used for this analysis were obtained from multiple sources. Baidu search terms submitted in the years from 2013 to 2018 were used to construct anxiety indices for cities at the prefectural level. We collected average monthly PM2.5 data from China's Air Quality Online Monitoring and Analysis Platform and used them to calculate the annual PM2.5 levels (footnote 1). Data about economic and urban development in the years from 2013 to 2017 were obtained from the statistical yearbooks of Chinese cities, while the 2018 data were obtained from provincial statistical yearbooks. Data on medical resources and health service levels in the years from 2013 to 2018 were obtained from the China Health Statistics Yearbook. We then constructed a representative panel of data for 293 prefecture-level cities and four municipalities for the years from 2013 to 2018. The total sample size was 1782. We first constructed a list of words related to anxiety (Appendix, Table 3). The selection of words was based on life experience and the Baidu demand map (footnote 2), which showed terms that frequently appeared together with "anxiety" in online searches.
These terms were related to specific psychological and social pressures that could cause anxiety. For example, individuals usually associate hair loss (tuo fa) with anxiety due to the stigmatization of baldness. Internet search behavior thus reflected the psychological state of the individual and the influence of negative social attitudes. The Baidu Index is the official database and data-sharing platform of the Baidu search engine, and it reflects the behavior of Baidu users. The Baidu Index provides detailed information about keywords that appear in internet searches. We collected the annual Baidu Indices for all keywords entered by internet searchers in each city in the years from 2013 to 2018 and standardized the indices by including each anxiety-related keyword. The anxiety index for each city was the ratio of the Baidu Index summation to the city's population. We could review data for a given day, week, month, or year at both the provincial and national levels. Selecting keywords for online searches is an active and problem-driven process. Although we could not measure anxiety directly, the keywords were subjective choices based on concerns about specific problems. Users may have been experiencing such problems themselves, or they may have had friends or relatives with psychological issues. This motivated users to "Baidu it" to obtain explanation or relief. Internet users comprise only a portion of the Chinese population; therefore, sampling bias was inevitable. However, Baidu has a wide range of users, and the data reflect real online search behavior. We thus concluded that the data were sufficiently representative for our research. Explanatory variables: PM2.5 was the key explanatory variable in this study. Others have investigated the relationship between pollution and health, but the relationship between PM2.5 and anxiety is an important open issue. We controlled for related variables, particularly variables associated with the economy and urban development. The GDP is the sum of all productive activity in a region over a given period of time based on market prices. We used the annual GDPs of prefecture-level cities as indicators of economic development. The GDP of each city encompassed output in administrative regions, urban areas, municipal districts, and the county. We evaluated the per-capita GDPs and obtained similar results. Soaring housing costs in first- and second-tier cities are a source of anxiety for many residents [25,26,27]; therefore, we also used the average annual housing price in each prefecture-level city as an indicator. The housing data can be found at gotohui.com (footnote 3), a platform for housing-price queries. The ratio of the urban population to the total population in a region reflects its degree of urbanization. The urbanization ratio of each prefecture-level city was thus used as an index to quantify urbanization. Broadband internet access was based on the number of users who subscribed to a telecommunications enterprise at the end of the reporting period. Residents in China can access the internet wirelessly or through connections such as XDSL, FTTX+LAN, dedicated LAN lines, and WLAN. The proportion of internet broadband users in the population of each prefecture-level city was used as a measure of internet development. Information about broadband internet access and the average yearly population of each prefecture-level city was obtained from the China City Statistical Yearbook. In addition to these variables of major interest, we also measured local medical resources and health service levels using the number of medical institutions, the total number of hospital beds, and the number of physicians in each prefecture-level city. The descriptive statistics for the variables are shown in Table 1. Table 1 Descriptive Statistics for Variables
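As a minimal sketch of how the anxiety index described above could be assembled, with hypothetical file and column names rather than the authors' actual data schema:

```python
import pandas as pd

# Hypothetical inputs: one row per (city, year, keyword) with its annual
# Baidu Index, plus a city-year population table. All names illustrative.
baidu = pd.read_csv("baidu_index.csv")   # columns: city, year, keyword, index
pop = pd.read_csv("population.csv")      # columns: city, year, population

# Sum the indices of all anxiety-related keywords per city-year...
total = baidu.groupby(["city", "year"])["index"].sum().reset_index(name="index_sum")

# ...and divide by city population to get the anxiety index.
panel = total.merge(pop, on=["city", "year"])
panel["anxiety"] = panel["index_sum"] / panel["population"]
```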
Modeling strategy: We employed a two-way fixed effect (FE) regression model to examine how PM2.5 and related macroscopic factors might affect anxiety in China. The two-way fixed effects regression model with unit and time fixed effects is a default methodology for estimating causal effects from panel data while adjusting for unobserved unit-specific and time-specific confounders at the same time [28]. This made it easy to rule out any confounding effects from time-invariant factors at the city level in our study. The model is represented by Eq. 1:

$$\begin{aligned} \mathrm{Anxiety}_{it} = {} & \beta_0 + \beta_1 \mathrm{GDP}_{it} + \beta_2 \mathrm{price}_{it} + \beta_3 \mathrm{urban}_{it} + \beta_4 \mathrm{internet}_{it} + \beta_5 \mathrm{PM2.5}_{it} \\ & + \beta_6 \mathrm{institution}_{it} + \beta_7 \mathrm{bed}_{it} + \beta_8 \mathrm{physician}_{it} + \upsilon_i + \delta_{it}, \\ & i = 1, 2, \ldots, n; \quad t = 1, 2, \ldots, T \end{aligned} \tag{1}$$

where $\mathrm{Anxiety}_{it}$ is the dependent variable representing the level of anxiety in city $i$ at time $t$. The larger the anxiety index, the higher the level of anxiety. GDP is the regional gross domestic product, which reflects the level of urban economic development. Price represents the average annual cost of housing in the city; urban is its urbanization level; and internet represents the level of internet development in the city. PM2.5 is the key independent variable, which represents the annual PM2.5 level in city $i$ in year $t$. Institution, bed, and physician represent the number of medical institutions, the total number of hospital beds, and the number of physicians in the city, respectively. The random variable $\upsilon_i$ is an unobservable effect term that reflects individual heterogeneity, and $\delta_{it}$ is a disturbance term that varies over time.
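A sketch of how such a two-way FE model could be estimated, assuming the panel assembled above; the linearmodels package is one common choice, and all variable names here are illustrative rather than the authors' actual code:

```python
from linearmodels.panel import PanelOLS

# `panel` is assumed to carry one row per city-year with the regressors below.
df = panel.set_index(["city", "year"])

# EntityEffects and TimeEffects implement the two-way (city and year) FE
# structure described in the text around Eq. 1.
model = PanelOLS.from_formula(
    "anxiety ~ gdp + price + urban + internet + pm25"
    " + institution + bed + physician"
    " + EntityEffects + TimeEffects",
    data=df,
)
result = model.fit(cov_type="clustered", cluster_entity=True)
print(result.params["pm25"])  # analogue of beta_5 in Eq. 1
```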
Spatiotemporal trends in anxiety: Anxiety in China fluctuated, but it followed an increasing trend overall. The anxiety index was approximately 2.7-fold higher in 2018 than it was in 2013. The spatial distributions of anxiety in the years from 2013 to 2018 are shown in Fig. 1. The anxiety indices of Guangdong Province in southern China and of Shanghai, Jiangsu, and Zhejiang in eastern China were the highest in the country. The anxiety indices of Beijing and Tianjin were also relatively high. Fig. 1: Spatial Distributions of Anxiety Indices in China. Note: the maps are generated using GIS 10.5. In 2013, anxiety was either severe or moderate in 40 Chinese cities (Fig. 1). The number of coastal cities with moderate anxiety continued to increase in 2014, and moderate anxiety was detected in a small number of inland cities. By the end of 2014, a total of 65 cities were severely or moderately anxious. Anxiety was moderate or severe in 98 cities in 2015. Moderate anxiety continued to increase in coastal areas, and more inland cities north of the Yangtze River appeared to be moderately anxious. In 2016, severe or moderate anxiety was detected in 88 coastal cities. While the number of cities with moderate anxiety decreased overall, the number of inland cities with moderate anxiety rose slightly. A total of 137 cities experienced severe or moderate anxiety in 2017, and the number of moderately anxious cities in coastal areas increased. Moderate anxiety was also detected in cities in central and southwest China. A total of 139 cities experienced severe or moderate anxiety in 2018. The number of coastal cities with moderate anxiety fell slightly, but anxiety continued to increase in the southwestern and southern regions. Results of two-way FE modeling: The two-way FE modeling results are shown in Table 2. Our results were not consistent with those of a previous study [25], as there was no significant association between housing cost and the anxiety index. There was a significant correlation between the anxiety index and the urbanization rate (P < 0.1): a one-unit increase in the urbanization rate was associated with a decrease of 1.97668 in the anxiety index. After controlling for other factors, we found that the anxiety index decreased by 0.0009126 for every unit increase in hospital beds. Table 2 Results of Two-way FE Modeling. The model also demonstrated that PM2.5 exposure was positively and significantly associated with the anxiety index. The anxiety index increased by 0.1565258 for every unit increase in the PM2.5 level (P < 0.05). According to China's Air Quality Online Monitoring and Analysis Platform, it is quite normal for the PM2.5 level to increase from 50 μg·m−3 to 100 μg·m−3 within one week. Our results indicated that an increase of this magnitude would raise the anxiety index by approximately eight units. The effect of PM2.5 exposure on anxiety levels was thus considerable, and it impacted the anxiety index to a much greater extent than the other related factors. Anxiety is becoming increasingly common in China as social transition progresses. We examined changes in urban anxiety and the social and environmental factors that influenced it during social transition in China. We used our Baidu Index of anxiety-related words to construct the anxiety indices and included information about the GDPs, housing costs, urbanization rates, internet development levels, medical institutions, total hospital beds, numbers of physicians, and PM2.5 levels in 297 cities in the years from 2013 to 2018 to develop a representative model. Urban anxiety fluctuated, but the observed trend was an overall increase. The number of cities with moderate anxiety continued to increase over time. Cities with anxiety indices greater than zero experienced either moderate or severe anxiety, and the number of inland cities in this category gradually increased. Two-way FE modeling revealed that PM2.5 levels were significantly and positively correlated with anxiety. After controlling for related factors, we found that the anxiety indices rose with increases in the PM2.5 index. PM2.5 levels had a much stronger influence on the anxiety indices than the other factors, which indicated that PM2.5 was the most important contributor to anxiety. Studying anxiety at the macroscopic level has great practical value. The social structure of China is undergoing significant changes, and social priorities are shifting. Access to education, medical care, pensions, and social security have become sources of anxiety (footnote 4). Understanding anxiety in this context will facilitate smooth social transition in China.
Social transition involves transformations at both the material level and the level of social mentality. A positive social mentality will be conducive to positive social change, while a negative social mentality will have adverse effects. Crime and violence are more likely when anxiety is a common societal affliction [29]. Thus, relieving anxiety serves a vital purpose in promoting positive and healthy social development. Using big data to develop indices for anxiety measurement was an innovative way to compensate for the deficiencies of existing methods used to measure anxiety. Zhou (2014) has shown that anxiety is a type of social mentality with emergent properties [30]. In other words, anxiety at the macroscopic level arises from anxiety at the individual level. However, societal anxiety cannot be reduced to the individual level, because it has unique characteristics and functions. Wilkinson (2001) states that when individual anxiety becomes universal, it will cause social tension and negatively impact smooth societal operation [31]. A problem with current anxiety research is that theoretical analyses are much more common than empirical studies. Empirical studies often assume that a coalescence of personal anxiety represents societal anxiety. However, the explanatory power of macroscopic anxiety is reduced when it is defined this way. To address this problem, we compiled a Baidu Index of anxiety-related words to represent anxiety. The index reflected both individual anxiety and the impact of individual stress as a social phenomenon. Constructing an anxiety index this way provided a better representation of anxiety at the macroscopic level. Current empirical research primarily addresses stress in specific groups based on the answers to questionnaires. Only a few studies have been conducted from a macro-sociological perspective, and most of them have been theoretical in nature. To perform an empirical analysis with a macroscopic focus, we analyzed macro-socioeconomic factors that affected anxiety and examined them by constructing a panel model. The limitations of this study warrant discussion, and they provide direction for future research in this area. Although Baidu is the most popular search engine and accounts for more than 80% of the search market share in China, we acknowledge that the extent of Baidu coverage across different areas of China might affect the results of our study. The factors that influence societal anxiety include transitions in social structure, social risk, social security, political reform, modern technology, and cultural values. We were limited by the amount of available data and the difficulty of identifying influencing factors. Thus, we only considered PM2.5, GDPs, housing costs, urbanization rates, levels of internet development, medical resources, and health service levels. Since we studied short-term anxiety over five years, some of the macroscopic factors related to crisis and reform may not have had much impact on anxiety. This limitation influenced our selection of measures in this study. Although anxiety at the macroscopic level can be assumed to have characteristics that differ from individual anxiety, internet users do not represent all members of society. Despite its limitations, our analysis has generated some intriguing questions about quantifying anxiety and using large datasets to study macroscopic influences. Among the macroscopic factors we examined, PM2.5 was the most important atmospheric pollutant associated with anxiety.
We used PM2.5 levels as air pollution indicators and analyzed the relationship between PM2.5 exposure and anxiety. We found that PM2.5 exposure and anxiety were significantly correlated. Anxiety is a condition that affects society as a whole. The enormous impact of PM2.5 exposure indicates that the macroscopic environment can shape individual mentality and social behavior, and that it can be extremely destructive in terms of societal mindset. Pollution is a great societal hazard in both spiritual and material terms. Analyzing the associations between air pollution and anxiety in more detail could provide solutions to alleviate anxiety in China.

Availability of data: All data are available from the corresponding author on reasonable request.

Footnotes: 1. See https://www.aqistudy.cn 2. See https://index.baidu.com/v2/main/index.html#/demand/焦虑?words=焦虑 3. See https://www.gotohui.com 4. https://news.gmw.cn/2019-08/22/content_33097077.htm

Abbreviations: FE: fixed effect; GDP: gross domestic product; DSL: digital subscriber line; WLAN: wireless local area network

References:
1. Seligman ME, Walker EF, Rosenhan DL. Abnormal psychology. New York: W.W. Norton & Company; 2001.
2. Najafipour H, Banivaheb G, Sabahi A, Naderi N, Nasirian M, Mirzazadeh A. Prevalence of anxiety and depression symptoms and their relationship with other coronary artery disease risk factors: a population-based study on 5900 residents in Southeast Iran. Asian J Psychiatr. 2016;20:55–60.
3. Stein DJ, Medeiros LF, Caumo W, Torres IL. Transcranial direct current stimulation in patients with anxiety: current perspectives. Neuropsychiatr Dis Treat. 2020;16:161–9.
4. Chisholm D, Sweeny K, Sheehan P, Rasmussen B, Smit F, Cuijpers P, Saxena S. Scaling-up treatment of depression and anxiety: a global return on investment analysis. Lancet Psychiatry. 2016;3(5):415–24.
5. Link BG, Lennon MC, Dohrenwend BP. Socioeconomic status and depression: the role of occupations involving direction, control, and planning. Am J Sociol. 1993;98(6):1351–87.
6. Macher D, Paechter M, Papousek I, Ruggeri K. Statistics anxiety, trait anxiety, learning behavior, and academic performance. Eur J Psychol Educ. 2012;27(4):483–98.
7. Hunt A. Anxiety and social explanation: some anxieties about anxiety. J Soc Hist. 1999;32(3):509–28.
8. May R. The meaning of anxiety. New York: WW Norton & Company; 2010.
9. Barlow DH. Unraveling the mysteries of anxiety and its disorders from the perspective of emotion theory. The American Psychologist. 2000;55(11):1247–63.
10. Gough N. No country for young people?: anxieties in Australian society and education [Australian Association for Research in Education President's address 2008]. Aust Educ Res. 2009;36(2):1–19.
11. Viseu J, et al. Relationship between economic stress factors and stress, anxiety, and depression: moderating role of social support. Psychiatry Res. 2018;268:102–7.
12. Guo W, Tan Y, Yin X, Sun Z. Impact of PM2.5 on second birth intentions of China's floating population in a low fertility context. Int J Environ Res Public Health. 2019;16(21):4293.
13. Laumbach R, et al. A controlled trial of acute effects of human exposure to traffic particles on pulmonary oxidative stress and heart rate variability. Particle and Fibre Toxicology. 2014;11(1):45.
14. Mirabelli MC, et al. Air quality awareness among U.S. adults with respiratory and heart disease. Am J Prev Med. 2018;54(5):679–87.
15. Smelser NJ, editor. International Encyclopedia of the Social & Behavioral Sciences. New York: Elsevier; 2001.
16. Calderón-Garcidueñas L, et al. Air pollution and your brain: what do you need to know right now. Primary Health Care Research & Development. 2015;16(4):329–45.
17. Chen Y, et al. The association between PM2.5 exposure and suicidal ideation: a prefectural panel study. BMC Public Health. 2020;20(1):293.
18. He G, et al. The association between PM2.5 and depression in China. Dose-Response. 2020;18(3). https://doi.org/10.1177/1559325820942699.
19. Zhang Q, Zheng Y, Tong D, Shao M, Wang S, Zhang Y, et al. Drivers of improved PM2.5 air quality in China from 2013 to 2017. Proc Natl Acad Sci. 2019;116(49):24463–9.
20. United Nations Development Program (UNDP). Human Development Reports. 2018. Retrieved from http://hdr.undp.org/en/data.
21. Graham C, Zhou S, Zhang J. Happiness and health in China: the paradox of progress. Brookings Institution Working Paper. 2015.
22. Huang Y, et al. Prevalence of mental disorders in China: a cross-sectional epidemiological study. Lancet Psychiatry. 2019;6(3):211–24.
23. Huang X, Zhang L, Ding Y. The Baidu Index: uses in predicting tourism flows—a case study of the Forbidden City. Tour Manag. 2017;58:301–6.
24. Li K, Liu M, Feng Y, Ning C, Ou W, Sun J, et al. Using Baidu search engine to monitor AIDS epidemics and inform targeted intervention of HIV/AIDS in China. Sci Rep. 2019;9(1):1–12.
25. Zeng L. On social mentality: Chinese anxiety. Cross-Cultural Communication. 2014;10(4):159–63.
26. Zhou M, Guo W. Fertility intentions of having a second child among the floating population in China: effects of socioeconomic factors and home ownership. Population, Space and Place. 2020;26(2):e2289.
27. Zhou M, Guo W. Comparison of second-child fertility intentions between local and migrant women in urban China: a Blinder–Oaxaca decomposition. J Ethnic Migration Studies. 2020. https://doi.org/10.1080/1369183X.2020.1778456.
28. Somaini P, Wolak FA. An algorithm to estimate the two-way fixed effects model. Journal of Econometric Methods. 2016;5(1):143–52.
29. Hollway W, Jefferson T. The risk society in an age of anxiety: situating fear of crime. Br J Sociol. 1997;48(2):255–66.
30. Zhou X. Social mentality and Chinese feeling in the era of transformation: a dialogue with the paper "social mentality": social psychological research on transitional society (in Chinese). Soc Stud. 2014;29(4):1–23.
31. Wilkinson I. Anxiety in a risk society. Psychology Press; 2001.

Acknowledgements: The authors acknowledge support from grants from the Major Project of the National Social Science Fund of China under Grant No. 19ZDA149, the National Social Science Fund of China under Grant No. 20VYJ039, and the National Natural Science Foundation of China under Grant No. 71921003. The funders had no role in study design, data collection and analysis, decision to publish, or interpretation and preparation of the manuscript. Buwei Chen and Wen Ma are co-first authors.

Author affiliations: Department of Sociology, School of Social and Behavioral Sciences, Nanjing University, Nanjing, 210023, Jiangsu Province, China (Buwei Chen, Wen Ma & Yunsong Chen); JD.com Retail, Technology and Data Center, Transaction Product Department, Core Transaction Product Group, Beijing, China (Yu Pan); Center on Population, Environment, Technology, and Society (C-PETS), School of Social and Behavioral Sciences, Nanjing University, Nanjing, 210023, Jiangsu Province, China (Wei Guo).

Author contributions: YC and WG conceived and designed the study. BC and WM helped to collect the data and provided suggestions for interpreting the findings. YP, BC, and WG conducted the data analysis. YC, WG, and BC prepared the first draft of the manuscript. WG, YC, and BC helped to revise the paper. All authors read the final version of the manuscript and approved it for submission.
Correspondence to Wei Guo or Yunsong Chen. Ethics approval and consent to participate: not applicable; this analysis is based on search engine data at the prefectural level, and no individual information is required. Funding: this work was supported by the Major Project of the National Social Science Fund of China under Grant No. 19ZDA149, the National Social Science Fund of China under Grant No. 20VYJ039, and the National Natural Science Foundation of China under Grant No. 71921003. Appendix: Table 3 List of Anxiety-related Words. Cite this article: Chen, B., Ma, W., Pan, Y. et al. PM2.5 exposure and anxiety in China: evidence from the prefectures. BMC Public Health 21, 429 (2021). https://doi.org/10.1186/s12889-021-10471-y. Keywords: Baidu Index; two-way FE model.
On optimal control problem for an ill-posed strongly nonlinear elliptic equation with $p$-Laplace operator and $L^1$-type of nonlinearity. Discrete & Continuous Dynamical Systems - B, March 2019, 24(3): 1273-1295. doi: 10.3934/dcdsb.2019016. Peter I. Kogut 1 and Olha P. Kupenko 2,3. 1. Oles Honchar Dnipro National University, Department of Differential Equations, Gagarin av., 72, 49010 Dnipro, Ukraine. 2. Dnipro University of Technology, Department of System Analysis and Control, Yavornitskii av., 19, 49005 Dnipro, Ukraine. 3. Institute for Applied System Analysis, National Academy of Sciences and Ministry of Education and Science of Ukraine, Peremogy av., 37/35, IASA, 03056 Kyiv, Ukraine. To the memory of our big Friend and Teacher V. S. Mel'nik. Received December 2017; Revised March 2018; Published January 2019. Abstract: We study an optimal control problem for one class of non-linear elliptic equations with a $p$-Laplace operator and an $L^1$-type nonlinearity. We deal with the case of nonlinearity in which we cannot expect the state equation to have a solution for every given control. After defining a suitable functional class in which we look for solutions, we reformulate the original problem and prove the existence of optimal pairs. In order to ensure the validity of this reformulation, we provide its substantiation using a special family of fictitious optimal control problems. The idea of involving fictitious optimization problems was mainly inspired by the brilliant book of V.S. Mel'nik and V.I. Ivanenko, "Variational Methods in Control Problems for the Systems with Distributed Parameters", Kyiv, 1998. Keywords: Existence result, optimal control, $p$-Laplace operator, elliptic equation, fictitious control. Mathematics Subject Classification: Primary: 49J20, 49K20; Secondary: 58J37. Citation: Peter I. Kogut, Olha P. Kupenko. On optimal control problem for an ill-posed strongly nonlinear elliptic equation with $p$-Laplace operator and $L^1$-type of nonlinearity. Discrete & Continuous Dynamical Systems - B, 2019, 24 (3): 1273-1295. doi: 10.3934/dcdsb.2019016.
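For readers outside the field, the operator in the title is the standard $p$-Laplacian; the following is a textbook definition, not a quotation from the paper:

```latex
% Definition of the p-Laplace operator, for p > 1:
\[
  \Delta_p u \;=\; \operatorname{div}\!\bigl( |\nabla u|^{p-2}\,\nabla u \bigr),
\]
% which reduces to the classical Laplacian when p = 2. A schematic model of
% the ill-posed state equations studied in this setting (the exact problem
% statement is in the paper itself) is -\Delta_p u = f(x,u) + g, with f of
% L^1 type in u.
```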
References:
[1] L. Boccardo and F. Murat, Almost everywhere convergence of the gradients of solutions to elliptic and parabolic equations, Nonlinear Anal., Theory, Methods, Appl., 19 (1992), 581-597. doi: 10.1016/0362-546X(92)90023-8.
[2] E. Casas, O. Kavian and J. P. Puel, Optimal control of an ill-posed elliptic semilinear equation with an exponential nonlinearity, ESAIM: Control, Optimization and Calculus of Variations, 3 (1998), 361-380. doi: 10.1051/cocv:1998116.
[3] E. Casas, P. I. Kogut and G. Leugering, Approximation of optimal control problems in the coefficient for the $p$-Laplace equation. I. Convergence result, SIAM Journal on Control and Optimization, 54 (2016), 1406-1422. doi: 10.1137/15M1028108.
[4] S. Chandrasekhar, An Introduction to the Study of Stellar Structures, Dover Publications, Inc., New York, N. Y., 1957.
[5] M. G. Crandall and P. H. Rabinowitz, Some continuation and variational methods for positive solutions of nonlinear elliptic eigenvalue problems, Arch. Rational Mech. Anal., 58 (1975), 207-218. doi: 10.1007/BF00280741.
[6] J. Dolbeault and R. Stańczy, Non-existence and uniqueness results for supercritical semilinear elliptic equations, Annales Henri Poincaré, 10 (2010), 1311-1333. doi: 10.1007/s00023-009-0016-9.
[7] R. Ferreira, A. De Pablo and J. L. Vazquez, Classification of blow-up with nonlinear diffusion and localized reaction, J. Differential Equations, 231 (2006), 195-211. doi: 10.1016/j.jde.2006.04.017.
[8] D. A. Franck-Kamenetskii, Diffusion and Heat Transfer in Chemical Kinetics, Second edition, Plenum Press, 1969.
[9] H. Fujita, On the blowing up of the solutions to the Cauchy problem for $u_t = \Delta u + u^{1+\alpha}$, J. Fac. Sci. Univ. Tokyo Sect. IA, Math., 13 (1966), 109-124.
[10] T. Gallouët, F. Mignot and J. P. Puel, Quelques résultats sur le problème $-\Delta u = \lambda e^u$, C. R. Acad. Sci. Paris, Série I, 307 (1988), 289-292.
[11] I. M. Gelfand, Some problems in the theory of quasi-linear equations, Amer. Math. Soc. Transl., Ser. 2, 29 (1963), 295-381. doi: 10.1090/trans2/029/12.
[12] V. I. Ivanenko and V. S. Mel'nik, Variational Methods in Control Problems for the Systems with Distributed Parameters, Naukova Dumka, Kyiv, 1988 (in Russian).
[13] D. Kinderlehrer and G. Stampacchia, An Introduction to Variational Inequalities and Their Applications, Academic Press, New York, 1980.
[14] P. I. Kogut and G. Leugering, Optimal Control Problems for Partial Differential Equations on Reticulated Domains. Approximation and Asymptotic Analysis, Series: Systems and Control, Birkhäuser, Boston, 2011. doi: 10.1007/978-0-8176-8149-4.
[15] P. I. Kogut, R. Manzo and A. O. Putchenko, On approximate solutions to the Neumann elliptic boundary value problem with non-linearity of exponential type, Boundary Value Problems, 2016 (2016), 1-32. doi: 10.1186/s13661-016-0717-1.
[16] P. I. Kogut and A. O. Putchenko, On approximate solutions to one class of non-linear Dirichlet elliptic boundary value problems, Visnyk DNU. Series: Mathematical Modelling, Dnipropetrovsk: DNU, 24 (2016), 27-25.
[17] P. I. Kogut and V. S. Mel'nik, On one class of extremum problems for nonlinear operator systems, Cybern. Syst. Anal., 34 (1998), 894-904.
[18] P. I. Kogut and V. S. Mel'nik, On weak compactness of bounded sets in Banach and locally convex spaces, Ukrainian Mathematical Journal, 52 (2001), 837-846. doi: 10.1007/BF02591778.
[19] P. I. Kogut and O. P. Kupenko, On attainability of optimal solutions for linear elliptic equations with unbounded coefficients, Visnyk DNU. Series: Mathematical Modelling, Dnipropetrovsk: DNU, 20 (2012), 63-82.
[20] O. P. Kupenko and R. Manzo, On optimal controls in coefficients for ill-posed non-linear elliptic Dirichlet boundary value problems, Discrete and Continuous Dynamical Systems. Series B, 23 (2018), 1363-1393. doi: 10.3934/dcdsb.2018155.
[21] J.-L. Lions, Some Methods of Solving Non-Linear Boundary Value Problems, Dunod-Gauthier-Villars, Paris, 1969.
[22] F. Mignot and J. P. Puel, Sur une classe de problèmes non linéaires avec nonlinéarité positive, croissante, convexe, Comm. in PDE, 5 (1980), 791-836. doi: 10.1080/03605308008820155.
[23] I. Peral, Multiplicity of Solutions for the p-Laplacian, Second School of Nonlinear Functional Analysis and Applications to Differential Equations, Miramare-Trieste, 1997.
[24] R. G. Pinsky, Existence and nonexistence of global solutions for $u_t = \Delta u + a(x) u^p$ in $\mathbb{R}^d$, J. of Differential Equations, 133 (1997), 152-177. doi: 10.1006/jdeq.1996.3196.
[25] D. H. Sattinger, Monotone methods in nonlinear elliptic and parabolic boundary value problems, Indiana Univ. Math. J., 21 (1972), 979-1000. doi: 10.1512/iumj.1972.21.21079.
Mathematical Control & Related Fields, 2021, 11 (1) : 169-188. doi: 10.3934/mcrf.2020032 Jingrui Sun, Hanxiao Wang. Mean-field stochastic linear-quadratic optimal control problems: Weak closed-loop solvability. Mathematical Control & Related Fields, 2021, 11 (1) : 47-71. doi: 10.3934/mcrf.2020026 Arthur Fleig, Lars Grüne. Strict dissipativity analysis for classes of optimal control problems involving probability density functions. Mathematical Control & Related Fields, 2020 doi: 10.3934/mcrf.2020053 Fuensanta Andrés, Julio Muñoz, Jesús Rosado. Optimal design problems governed by the nonlocal $ p $-Laplacian equation. Mathematical Control & Related Fields, 2021, 11 (1) : 119-141. doi: 10.3934/mcrf.2020030 2019 Impact Factor: 1.27 Peter I. Kogut Olha P. Kupenko
CommonCrawl
2.1: Linear Functions Representing Linear Functions Representing a Linear Function in Word Form Representing a Linear Function in Function Notation Representing a Linear Function in Tabular Form Representing a Linear Function in Graphical Form Determining whether a Linear Function Is Increasing, Decreasing, or Constant Calculating and Interpreting Slope Writing the Point-Slope Form of a Linear Equation Writing the Equation of a Line Using a Point and the Slope Writing the Equation of a Line Using Two Points Writing and Interpreting an Equation for a Linear Function Modeling Real-World Problems with Linear Functions Represent a linear function. Determine whether a linear function is increasing, decreasing, or constant. Interpret slope as a rate of change. Write and interpret an equation for a linear function. Graph linear functions. Determine whether lines are parallel or perpendicular. Write the equation of a line parallel or perpendicular to a given line. Just as with the growth of a bamboo plant, there are many situations that involve constant change over time. Consider, for example, the first commercial maglev train in the world, the Shanghai MagLev Train (Figure \(\PageIndex{1}\)). It carries passengers comfortably for a 30-kilometer trip from the airport to the subway station in only eight minutes. Figure \(\PageIndex{1}\): Shanghai MagLev Train (credit: "kanegen"/Flickr) Suppose a maglev train were to travel a long distance, and that the train maintains a constant speed of 83 meters per second for a period of time once it is 250 meters from the station. How can we analyze the train's distance from the station as a function of time? In this section, we will investigate a kind of function that is useful for this purpose, and use it to investigate real-world situations such as the train's distance from the station at a given point in time. The function describing the train's motion is a linear function, which is defined as a function with a constant rate of change, that is, a polynomial of degree 1. There are several ways to represent a linear function, including word form, function notation, tabular form, and graphical form. We will describe the train's motion as a function using each method. Let's begin by describing the linear function in words.
For the train problem we just considered, the following word sentence may be used to describe the function relationship. The train's distance from the station is a function of the time during which the train moves at a constant speed plus its original distance from the station when it began moving at constant speed. The speed is the rate of change. Recall that a rate of change is a measure of how quickly the dependent variable changes with respect to the independent variable. The rate of change for this example is constant, which means that it is the same for each input value. As the time (input) increases by 1 second, the corresponding distance (output) increases by 83 meters. The train began moving at this constant speed at a distance of 250 meters from the station. Another approach to representing linear functions is by using function notation. One example of function notation is an equation written in the form known as the slope-intercept form of a line, where \(x\) is the input value, \(m\) is the rate of change, and \(b\) is the initial value of the dependent variable. \[\begin{align} &\text{Equation form } &y=mx+b \\ &\text{Function notation } &f(x)=mx+b \end{align}\] In the example of the train, we might use the notation \(D(t)\) in which the total distance \(D\) is a function of the time \(t\). The rate, \(m\), is 83 meters per second. The initial value of the dependent variable \(b\) is the original distance from the station, 250 meters. We can write a generalized equation to represent the motion of the train. \[D(t)=83t+250\] A third method of representing a linear function is through the use of a table. The relationship between the distance from the station and the time is represented in Figure \(\PageIndex{2}\). From the table, we can see that the distance changes by 83 meters for every 1 second increase in time. Figure \(\PageIndex{2}\): Tabular representation of the function \(D\) showing selected input and output values Can the input in the previous example be any real number? No. The input represents time, so while nonnegative rational and irrational numbers are possible, negative real numbers are not possible for this example. The input consists of non-negative real numbers. Another way to represent linear functions is visually, using a graph. We can use the function relationship from above, \(D(t)=83t+250\), to draw a graph, represented in Figure \(\PageIndex{3}\). Notice the graph is a line. When we plot a linear function, the graph is always a line. The rate of change, which is constant, determines the slant, or slope of the line. The point at which the input value is zero is the vertical intercept, or y-intercept, of the line. We can see from the graph in Figure \(\PageIndex{3}\) that the y-intercept in the train example we just saw is \((0,250)\) and represents the distance of the train from the station when it began moving at a constant speed. Figure \(\PageIndex{3}\): The graph of \(D(t)=83t+250\). Graphs of linear functions are lines because the rate of change is constant. Notice that the graph of the train example is restricted, but this is not always the case. Consider the graph of the line \(f(x)=2x+1\). Ask yourself what numbers can be input to the function, that is, what is the domain of the function? The domain is comprised of all real numbers because any number may be doubled, and then have one added to the product. Note: Linear Function A linear function is a function whose graph is a line.
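The linear model is easy to check numerically. Here is a minimal Python sketch of the train's distance function (the function name is ours, chosen only for illustration):

def train_distance(t):
    """Distance in meters from the station, t seconds after the train reaches constant speed."""
    return 83 * t + 250  # slope m = 83 m/s, initial value b = 250 m

for t in range(4):
    print(t, train_distance(t))  # 0 250, 1 333, 2 416, 3 499

Each one-second increase in the input produces the same 83-meter increase in the output, which is exactly the constant rate of change shown in the tabular representation above.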
Linear functions can be written in the slope-intercept form of a line \[f(x)=mx+b\] where \(b\) is the initial or starting value of the function (when the input is \(x=0\)), and \(m\) is the constant rate of change, or slope of the function. The y-intercept is at \((0,b)\). Example \(\PageIndex{1}\): Using a Linear Function to Find the Pressure on a Diver The pressure, \(P\), in pounds per square inch (PSI) on the diver in Figure \(\PageIndex{4}\) depends upon her depth below the water surface, \(d\), in feet. This relationship may be modeled by the equation, \(P(d)=0.434d+14.696\). Restate this function in words. Figure \(\PageIndex{4}\): (credit: Ilse Reijs and Jan-Noud Hutten) To restate the function in words, we need to describe each part of the equation. The pressure as a function of depth equals four hundred thirty-four thousandths times depth plus fourteen and six hundred ninety-six thousandths. The initial value, 14.696, is the pressure in PSI on the diver at a depth of 0 feet, which is the surface of the water. The rate of change, or slope, is 0.434 PSI per foot. This tells us that the pressure on the diver increases 0.434 PSI for each foot her depth increases. The linear functions we used in the two previous examples increased over time, but not every linear function does. A linear function may be increasing, decreasing, or constant. For an increasing function, as with the train example, the output values increase as the input values increase. The graph of an increasing function has a positive slope. A line with a positive slope slants upward from left to right as in Figure \(\PageIndex{5}\)(a). For a decreasing function, the slope is negative. The output values decrease as the input values increase. A line with a negative slope slants downward from left to right as in Figure \(\PageIndex{5}\)(b). If the function is constant, the output values are the same for all input values so the slope is zero. A line with a slope of zero is horizontal as in Figure \(\PageIndex{5}\)(c). Figure \(\PageIndex{5}\): Three graphs depicting an increasing function, a decreasing function, and a constant function. The slope determines if the function is an increasing linear function, a decreasing linear function, or a constant function. \(f(x)=mx+b\) is an increasing function if \(m>0\). \(f(x)=mx+b\) is a decreasing function if \(m<0\). \(f(x)=mx+b\) is a constant function if \(m=0\). Example \(\PageIndex{2}\): Deciding whether a Function Is Increasing, Decreasing, or Constant Some recent studies suggest that a teenager sends an average of 60 texts per day. For each of the following scenarios, find the linear function that describes the relationship between the input value and the output value. Then, determine whether the graph of the function is increasing, decreasing, or constant. a. The total number of texts a teen sends is considered a function of time in days. The input is the number of days, and output is the total number of texts sent. b. A teen has a limit of 500 texts per month in his or her data plan. The input is the number of days, and output is the total number of texts remaining for the month. c. A teen has an unlimited number of texts in his or her data plan for a cost of $50 per month. The input is the number of days, and output is the total cost of texting each month. Analyze each function. a. The function can be represented as \(f(x)=60x\) where \(x\) is the number of days. The slope, 60, is positive so the function is increasing.
This makes sense because the total number of texts increases with each day. b. The function can be represented as \(f(x)=500−60x\) where \(x\) is the number of days. In this case, the slope is negative so the function is decreasing. This makes sense because the number of texts remaining decreases each day and this function represents the number of texts remaining in the data plan after \(x\) days. c. The cost function can be represented as \(f(x)=50\) because the number of days does not affect the total cost. The slope is 0 so the function is constant. In the examples we have seen so far, we have had the slope provided for us. However, we often need to calculate the slope given input and output values. Given two values for the input, \(x_1\) and \(x_2\), and two corresponding values for the output, \(y_1\) and \(y_2\)—which can be represented by a set of points, \((x_1,y_1)\) and \((x_2,y_2)\)—we can calculate the slope \(m\), as follows \[m= \dfrac{\text{change in output (rise)}}{ \text{change in input (run)}} = \dfrac{{\Delta}y}{ {\Delta}x} = \dfrac{y_2−y_1}{x_2−x_1}\] where \({\Delta}y\) is the vertical displacement and \({\Delta}x\) is the horizontal displacement. Note that in function notation, the two corresponding values for the output \(y_1\) and \(y_2\) for the function \(f\) are \(y_1=f(x_1)\) and \(y_2=f(x_2)\), so we could equivalently write \[m=\dfrac{f(x_2)-f(x_1)}{x_2-x_1}\] Figure \(\PageIndex{6}\) indicates how the slope of the line between the points, \((x_1,y_1)\) and \((x_2,y_2)\), is calculated. Recall that the slope measures steepness. The greater the absolute value of the slope, the steeper the line is. Figure \(\PageIndex{6}\): The slope of a function is calculated by the change in \(y\) divided by the change in \(x\). It does not matter which coordinate is used as the \((x_2,y_2)\) and which is the \((x_1,y_1)\), as long as each calculation is started with the elements from the same coordinate pair. Are the units for slope always \(\frac{\text{units for the output}}{ \text{units for the input}}\)? Yes. Think of the units as the change of output value for each unit of change in input value. An example of slope could be miles per hour or dollars per day. Notice the units appear as a ratio of units for the output per units for the input. Calculate Slope The slope, or rate of change, of a function \(m\) can be calculated according to the following: \[m=\dfrac{\text{change in output (rise)}}{\text{change in input (run)}}=\dfrac{{\Delta}y}{{\Delta}x}=\dfrac{y_2-y_1}{x_2-x_1}\] where \(x_1\) and \(x_2\) are input values, \(y_1\) and \(y_2\) are output values. Given two points from a linear function, calculate and interpret the slope. Determine the units for output and input values. Calculate the change of output values and change of input values. Interpret the slope as the change in output values per unit of the input value. Example \(\PageIndex{3}\): Finding the Slope of a Linear Function If \(f(x)\) is a linear function, and \((3,−2)\) and \((8,1)\) are points on the line, find the slope. Is this function increasing or decreasing? The coordinate pairs are \((3,−2)\) and \((8,1)\). To find the rate of change, we divide the change in output by the change in input. \[m=\dfrac{\text{change in output (rise)}}{\text{change in input (run)}}=\dfrac{1-(-2)}{8-3}=\dfrac{3}{5}\] We could also write the slope as \(m=0.6\). The function is increasing because \(m>0\).
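The slope formula translates directly into code. A minimal Python sketch (the helper name slope is ours):

def slope(p1, p2):
    """Rate of change of the line through points p1 = (x1, y1) and p2 = (x2, y2)."""
    (x1, y1), (x2, y2) = p1, p2
    return (y2 - y1) / (x2 - x1)

print(slope((3, -2), (8, 1)))   # 0.6, i.e. 3/5, as in Example 3
print(slope((8, 1), (3, -2)))   # 0.6 again: the order of the points does not matter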
As noted earlier, the order in which we write the points does not matter when we compute the slope of the line as long as the first output value, or y-coordinate, used corresponds with the first input value, or x-coordinate, used. Exercise \(\PageIndex{1}\): If \(f(x)\) is a linear function, and \((2, 3)\) and \((0,4)\) are points on the line, find the slope. Is this function increasing or decreasing? \(m=\frac{4−3}{0−2} =\frac{1}{-2}=-\frac{1}{2}\); decreasing because \(m<0\). Example \(\PageIndex{4}\): Finding the Population Change from a Linear Function The population of a city increased from 23,400 to 27,800 between 2008 and 2012. Find the change of population per year if we assume the change was constant from 2008 to 2012. The rate of change relates the change in population to the change in time. The population increased by \(27,800−23,400=4400\) people over the four-year time interval. To find the rate of change, divide the change in the number of people by the number of years. \[\dfrac{4,400 \text{ people}}{4 \text{ years}} =1,100 \dfrac{\text{people}}{\text{year}}\] So the population increased by 1,100 people per year. Because we are told that the population increased, we would expect the slope to be positive. This positive slope we calculated is therefore reasonable. Exercise \(\PageIndex{2}\): The population of a small town increased from 1,442 to 1,868 between 2009 and 2012. Find the change of population per year if we assume the change was constant from 2009 to 2012. \(m=\frac{1,868−1,442}{2012−2009} = \frac{426}{3} =\text{ 142 people per year}\) Up until now, we have been using the slope-intercept form of a linear equation to describe linear functions. Here, we will learn another way to write a linear function, the point-slope form. \[y-y_1=m(x-x_1)\] The point-slope form is derived from the slope formula. \[ \begin{align} &m=\dfrac{y-y_1}{x-x_1} &\text{assuming }x{\neq}x_1 \\ &m(x-x_1)=\dfrac{y-y_1}{x-x_1}(x-x_1) &\text{Multiply both sides by }(x-x_1). \\ &m(x-x_1)=y-y_1 &\text{Simplify} \\ &y-y_1=m(x-x_1) &\text{Rearrange} \end{align}\] Keep in mind that the slope-intercept form and the point-slope form can be used to describe the same function. We can move from one form to another using basic algebra. For example, suppose we are given an equation in point-slope form, \(y−4=− \frac{1}{2}(x−6)\). We can convert it to the slope-intercept form as shown. \[\begin{align} y-4&=-\dfrac{1}{2}(x-6) \\ y-4&=-\dfrac{1}{2}x+3 &\text{Distribute the }-\dfrac{1}{2}. \\ y&=-\dfrac{1}{2}x+7 &\text{Add 4 to each side.}\end{align}\] Therefore, the same line can be described in slope-intercept form as \(y=-\dfrac{1}{2}x+7\).
\[\begin{align} y−1&=2(x−4) \\ y−1&=2x−8 &\text{Distribute the 2.} \\ y&=2x−7 &\text{Add 1 to each side.} \end{align}\] Both equations, \(y−1=2(x−4)\) and \(y=2x–7\), describe the same line. See Figure \(\PageIndex{7}\). Example \(\PageIndex{5}\): Writing Linear Equations Using a Point and the Slope Write the point-slope form of an equation of a line with a slope of 3 that passes through the point \((6,–1)\). Then rewrite it in the slope-intercept form. Let's figure out what we know from the given information. The slope is 3, so \(m=3\). We also know one point, so we know \(x_1=6\) and \(y_1 =−1\). Now we can substitute these values into the general point-slope equation. \[\begin{align} y-y_1&=m(x-x_1) \\ y−(−1)&=3(x−6) &\text{Substitute known values.} \\ y+1&=3(x−6) &\text{Distribute −1 to find point-slope form.} \end{align}\] Then we use algebra to find the slope-intercept form. \[\begin{align} y+1&=3(x−6) \\ y+1&=3x−18 &\text{Distribute 3.} \\ y&=3x−19 &\text{Simplify to slope-intercept form.} \end{align}\] Exercise \(\PageIndex{3}\): Write the point-slope form of an equation of a line with a slope of –2 that passes through the point \((–2, 2)\). Then rewrite it in the slope-intercept form. \(y−2=−2(x+2)\) ; \(y=−2x−2\) The point-slope form of an equation is also useful if we know any two points through which a line passes. Suppose, for example, we know that a line passes through the points \((0, 1)\) and \((3, 2)\). We can use the coordinates of the two points to find the slope. \[\begin{align} m&=\dfrac{y_2-y_1}{x_2-x_1} \\ &=\dfrac{2-1}{3-0} \\ &=\dfrac{1}{3} \end{align}\] Now we can use the slope we found and the coordinates of one of the points to find the equation for the line. Let's use \((0,1)\) for our point. \[\begin{align} y-y_1&=m(x-x_1) \\ y-1&=\dfrac{1}{3}(x-0) \end{align}\] As before, we can use algebra to rewrite the equation in the slope-intercept form. \[\begin{align} y-1&=\dfrac{1}{3}(x-0) \\ y-1&=\dfrac{1}{3}x &\text{Distribute the }\dfrac{1}{3}. \\ y&=\dfrac{1}{3}x+1 &\text{Add 1 to each side.} \end{align}\] Both equations describe the line shown in Figure \(\PageIndex{8}\). Example \(\PageIndex{6}\): Writing Linear Equations Using Two Points Write the point-slope form of an equation of a line that passes through the points \((5,1)\) and \((8, 7)\). Then rewrite it in the slope-intercept form. Let's begin by finding the slope. \[\begin{align} m&=\dfrac{y_2-y_1}{x_2-x_1} \\ &=\dfrac{7-1}{8-5} \\ &=\dfrac{6}{3} \\ &= 2 \end{align}\] So \(m=2\). Next, we substitute the slope and the coordinates for one of the points into the general point-slope equation. We can choose either point, but we will use \((5,1)\). \[\begin{align} y-y_1&=m(x-x_1) \\ y-1&=2(x-5) \end{align}\] The point-slope equation of the line is \(y–1=2(x–5)\). To rewrite the equation in slope-intercept form, we use algebra. \[\begin{align} y-1&=2(x-5) \\ y-1&=2x-10 \\ y&=2x-9 \end{align}\] The slope-intercept equation of the line is \(y=2x–9\). Exercise \(\PageIndex{4}\): Write the point-slope form of an equation of a line that passes through the points \((–1,3)\) and \((0,0)\). Then rewrite it in the slope-intercept form. \(y−0=−3(x−0)\); \(y=−3x\) Now that we have written equations for linear functions in both the slope-intercept form and the point-slope form, we can choose which method to use based on the information we are given. That information may be provided in the form of a graph, a point and a slope, two points, and so on. Look at the graph of the function \(f\) in Figure \(\PageIndex{9}\).
Figure \(\PageIndex{9}\): Graph depicting how to calculate the slope of a line. We are not given the slope of the line, but we can choose any two points on the line to find the slope. Let's choose \((0,7)\) and \((4, 4)\). We can use these points to calculate the slope. \[\begin{align} m&=\dfrac{y_2-y_1}{x_2-x_1} \\ &=\dfrac{4-7}{4-0} \\&=-\dfrac{3}{4}\end{align}\] Now we can substitute the slope and the coordinates of one of the points into the point-slope form. \[\begin{align} y-y_1&=m(x-x_1) \\ y-4&=-\dfrac{3}{4}(x-4) \end{align}\] If we want to rewrite the equation in the slope-intercept form, we would find \[\begin{align} y-4&=-\dfrac{3}{4}(x-4) \\ y-4 &=-\dfrac{3}{4}x+3 \\ y&=-\dfrac{3}{4}x+7\end{align}\] If we wanted to find the slope-intercept form without first writing the point-slope form, we could have recognized that the line crosses the y-axis when the output value is 7. Therefore, \(b=7\). We now have the initial value \(b\) and the slope \(m\) so we can substitute \(m\) and \(b\) into the slope-intercept form of a line. Figure \(\PageIndex{10}\) So the function is \(f(x)=−\frac{3}{4}x+7\), and the linear equation would be \(y=−\frac{3}{4}x+7\). Given the graph of a linear function, write an equation to represent the function. Identify two points on the line. Use the two points to calculate the slope. Determine where the line crosses the y-axis to identify the y-intercept by visual inspection. Substitute the slope and y-intercept into the slope-intercept form of a line equation. Example \(\PageIndex{7}\): Writing an Equation for a Linear Function Write an equation for a linear function given a graph of \(f\) shown in Figure \(\PageIndex{11}\). Identify two points on the line, such as \((0, 2)\) and \((−2,−4)\). Use the points to calculate the slope. \[\begin{align} m&=\dfrac{y_2-y_1}{x_2-x_1} \\ &=\dfrac{-4-2}{-2-0} \\ &=\dfrac{-6}{-2} \\ &=3 \end{align}\] Substitute the slope and the coordinates of one of the points into the point-slope form. \[\begin{align} y-y_1&=m(x-x_1) \\ y-(-4)&=3(x-(-2)) \\ y+4 &= 3(x+2)\end{align}\] We can use algebra to rewrite the equation in the slope-intercept form. \[\begin{align} y+4&= 3(x+2) \\ y+4&= 3x+6 \\ y & = 3x + 2 \end{align}\] This makes sense because we can see from Figure \(\PageIndex{12}\) that the line crosses the y-axis at the point \((0, 2)\), which is the y-intercept, so \(b=2\). Figure \(\PageIndex{12}\): Graph of an increasing line with points at \((0, 2)\) and \((-2, -4)\). Example \(\PageIndex{8}\): Writing an Equation for a Linear Cost Function Suppose Ben starts a company in which he incurs a fixed cost of $1,250 per month for the overhead, which includes his office rent. His production costs are $37.50 per item. Write a linear function \(C\) where \(C(x)\) is the cost for \(x\) items produced in a given month. The fixed cost is present every month, $1,250. The costs that can vary include the cost to produce each item, which is $37.50 for Ben. The variable cost, called the marginal cost, is represented by 37.5. The cost Ben incurs is the sum of these two costs, represented by \(C(x)=1250+37.5x\). If Ben produces 100 items in a month, his monthly cost is represented by \[\begin{align} C(100)&=1250+37.5(100) \\ &=5000 \end{align}\] So his monthly cost would be $5,000. Example \(\PageIndex{9}\): Writing an Equation for a Linear Function Given Two Points If \(f\) is a linear function, with \(f(3)=−2\), and \(f(8)=1\), find an equation for the function in slope-intercept form. We can write the given points using coordinates.
\[\begin{align} f(3)&= -2{\rightarrow}(3,-2) \\ f(8)&=1{\rightarrow}(8,1) \end{align}\] We can then use the points to calculate the slope. \[\begin{align} m&=\dfrac{y_2-y_1}{x_2-x_1} \\ &=\dfrac{1-(-2)}{8-3} \\ &=\dfrac{3}{5} \end{align}\] \[\begin{align} y-y_1&=m(x-x_1) \\ y-(-2)&=\dfrac{3}{5}(x-3) \end{align}\] \[\begin{align} y+2&=\dfrac{3}{5}(x-3) \\ y+2&=\dfrac{3}{5}x-\dfrac{9}{5} \\ y&=\dfrac{3}{5}x-\dfrac{19}{5} \end{align}\] Exercise \(\PageIndex{5}\): If \(f(x)\) is a linear function, with \(f(2)=–11\), and \(f(4)=−25\), find an equation for the function in slope-intercept form. \(y=−7x+3\) In the real world, problems are not always explicitly stated in terms of a function or represented with a graph. Fortunately, we can analyze the problem by first representing it as a linear function and then interpreting the components of the function. As long as we know, or can figure out, the initial value and the rate of change of a linear function, we can solve many different kinds of real-world problems. Given a linear function \(f\) and the initial value and rate of change, evaluate \(f(c)\). Determine the initial value and the rate of change (slope). Substitute the values into \(f(x)=mx+b\). Evaluate the function at \(x=c\). Example \(\PageIndex{10}\): Using a Linear Function to Determine the Number of Songs in a Music Collection Marcus currently has 200 songs in his music collection. Every month, he adds 15 new songs. Write a formula for the number of songs, \(N\), in his collection as a function of time, \(t\), the number of months. How many songs will he own in a year? The initial value for this function is 200 because he currently owns 200 songs, so \(N(0)=200\), which means that \(b=200\). The number of songs increases by 15 songs per month, so the rate of change is 15 songs per month. Therefore we know that \(m=15\). We can substitute the initial value and the rate of change into the slope-intercept form of a line. We can write the formula \(N(t)=15t+200\). With this formula, we can then predict how many songs Marcus will have in 1 year (12 months). In other words, we can evaluate the function at \(t=12\). \[\begin{align} N(12)&=15(12)+200 \\ &=180+200 \\ &= 380 \end{align}\] Marcus will have 380 songs in 12 months. Notice that \(N\) is an increasing linear function. As the input (the number of months) increases, the output (number of songs) increases as well. Example \(\PageIndex{11}\): Using a Linear Function to Calculate Salary Plus Commission Working as an insurance salesperson, Ilya earns a base salary plus a commission on each new policy. Therefore, Ilya's weekly income, \(I\), depends on the number of new policies, \(n\), he sells during the week. Last week he sold 3 new policies, and earned $760 for the week. The week before, he sold 5 new policies and earned $920. Find an equation for \(I(n)\), and interpret the meaning of the components of the equation. The given information gives us two input-output pairs: \((3,760)\) and \((5,920)\). We start by finding the rate of change. \[\begin{align} m&=\dfrac{920-760}{5-3} \\ &=\dfrac{$160}{2 \text{ policies}} \\ &=$80 \text{ per policy} \end{align}\] Keeping track of units can help us interpret this quantity. Income increased by $160 when the number of policies increased by 2, so the rate of change is $80 per policy. Therefore, Ilya earns a commission of $80 for each policy sold during the week. We can then solve for the initial value.
\[\begin{align} I(n)&=80n+b \\ 760&=80(3)+b \text{ when } n=3, I(3)=760 \\ 760-80(3)&=b \\ 520 & =b \end{align}\] The value of \(b\) is the starting value for the function and represents Ilya's income when \(n=0\), or when no new policies are sold. We can interpret this as Ilya's base salary for the week, which does not depend upon the number of policies sold. We can now write the final equation. \[I(n)=80n+520\] Our final interpretation is that Ilya's base salary is $520 per week and he earns an additional $80 commission for each policy sold. Example \(\PageIndex{12}\): Using Tabular Form to Write an Equation for a Linear Function Table \(\PageIndex{1}\) relates the number of rats in a population to time, in weeks. Use the table to write a linear equation. Table \(\PageIndex{1}\)
w, number of weeks: 0, 2, 4, 6
P(w), number of rats: 1000, 1080, 1160, 1240
We can see from the table that the initial value for the number of rats is 1000, so \(b=1000\). Rather than solving for \(m\), we can tell from looking at the table that the population increases by 80 for every 2 weeks that pass. This means that the rate of change is 80 rats per 2 weeks, which can be simplified to 40 rats per week. \[P(w)=40w+1000\] If we did not notice the rate of change from the table we could still solve for the slope using any two points from the table. For example, using \((2,1080)\) and \((6,1240)\) \[\begin{align} m&=\dfrac{1240-1080}{6-2} \\ &=\dfrac{160}{4} \\ &= 40\end{align}\] Is the initial value always provided in a table of values like Table \(\PageIndex{1}\)? No. Sometimes the initial value is provided in a table of values, but sometimes it is not. If you see an input of 0, then the initial value would be the corresponding output. If the initial value is not provided because there is no value of input on the table equal to 0, find the slope, substitute one coordinate pair and the slope into \(f(x)=mx+b\), and solve for \(b\). Exercise \(\PageIndex{5}\): A new plant food was introduced to a young tree to test its effect on the height of the tree. Table \(\PageIndex{2}\) shows the height of the tree, in feet, \(x\) months since the measurements began. Write a linear function, \(H(x)\), where \(x\) is the number of months since the start of the experiment. \(x\) \(H(x)\) \(H(x)=0.5x+12.5\)
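The procedure used in Examples 9 through 12, finding the slope from two points and then solving for the initial value, can be captured in a short Python sketch (the helper name is ours):

def linear_from_points(p1, p2):
    """Return (m, b) with f(x) = m*x + b passing through points p1 and p2."""
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / (x2 - x1)
    b = y1 - m * x1          # substitute one point into y = m*x + b and solve for b
    return m, b

print(linear_from_points((2, 1080), (6, 1240)))  # (40.0, 1000.0): P(w) = 40w + 1000
print(linear_from_points((3, 760), (5, 920)))    # (80.0, 520.0):  I(n) = 80n + 520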
slope-intercept form of a line: \(f(x)=mx+b\) slope: \(m=\dfrac{\text{change in output (rise)}}{\text{change in input (run)}}=\dfrac{{\Delta}y}{{\Delta}x}=\dfrac{y_2-y_1}{x_2-x_1}\) point-slope form of a line: \(y−y_1 =m(x-x_1)\) The ordered pairs given by a linear function represent points on a line. Linear functions can be represented in words, function notation, tabular form, and graphical form. The rate of change of a linear function is also known as the slope. An equation in the slope-intercept form of a line includes the slope and the initial value of the function. The initial value, or y-intercept, is the output value when the input of a linear function is zero. It is the y-value of the point at which the line crosses the y-axis. An increasing linear function results in a graph that slants upward from left to right and has a positive slope. A decreasing linear function results in a graph that slants downward from left to right and has a negative slope. A constant linear function results in a graph that is a horizontal line. Analyzing the slope within the context of a problem indicates whether a linear function is increasing, decreasing, or constant. The slope of a linear function can be calculated by dividing the difference between y-values by the difference in corresponding x-values of any two points on the line. The slope and initial value can be determined given a graph or any two points on the line. One type of function notation is the slope-intercept form of an equation. The point-slope form is useful for finding a linear equation when given the slope of a line and one point. The point-slope form is also convenient for finding a linear equation when given two points through which a line passes. The equation for a linear function can be written if the slope \(m\) and initial value \(b\) are known. A linear function can be used to solve real-world problems. A linear function can be written from tabular form. 1 www.chinahighlights.com/shang...glev-train.htm 2 www.cbsnews.com/8301-501465_1...ay-study-says/ decreasing linear function: a function with a negative slope: If \(f(x)=mx+b\), then \(m<0\). increasing linear function: a function with a positive slope: If \(f(x)=mx+b\), then \(m>0\). linear function: a function with a constant rate of change that is a polynomial of degree 1, and whose graph is a straight line. point-slope form: the equation for a line that represents a linear function of the form \(y−y_1=m(x−x_1)\). slope: the ratio of the change in output values to the change in input values; a measure of the steepness of a line. slope-intercept form: the equation for a line that represents a linear function in the form \(f(x)=mx+b\). y-intercept: the value of a function when the input value is zero; also known as initial value. Jay Abramson (Arizona State University) with contributing authors. Textbook content produced by OpenStax College is licensed under a Creative Commons Attribution License 4.0 license. Download for free at https://openstax.org/details/books/precalculus.
CommonCrawl
Journal of the Mathematical Society of Japan
A generalization of the Shestakov-Umirbaev inequality
Shigeru Kuroda
We give a generalization of the Shestakov-Umirbaev inequality which plays an important role in their solution of the Tame Generators Problem on the automorphism group of a polynomial ring. As an application, we give a new necessary condition for endomorphisms of a polynomial ring to be invertible, which implies Jung's theorem in the case of two variables.
J. Math. Soc. Japan, Volume 60, Number 2 (2008), 495-510. First available in Project Euclid: 30 May 2008. doi:10.2969/jmsj/06020495. https://projecteuclid.org/euclid.jmsj/1212156660
Primary: 14R10: Affine spaces (automorphisms, embeddings, exotic structures, cancellation problem). Secondary: 12H05: Differential algebra [See also 13Nxx]. Keywords: polynomial automorphism; Tame Generators Problem.
CommonCrawl
Integral points on elliptic curves and $3$-torsion in class groups
by H. A. Helfgott and A. Venkatesh
We give new bounds for the number of integral points on elliptic curves. The method may be said to interpolate between approaches via diophantine techniques and methods based on quasi-orthogonality in the Mordell-Weil lattice. We apply our results to break previous bounds on the number of elliptic curves of given conductor and the size of the $3$-torsion part of the class group of a quadratic field. The same ideas can be used to count rational points on curves of higher genus.
H. A. Helfgott. Affiliation: Department of Mathematics, Yale University, New Haven, Connecticut 06520. Address at time of publication: Département de mathématiques et de statistique, Université de Montréal, CP 6128 succ Centre-Ville, Montréal QC H3C 3J7, Canada. Email: helfgott@dms.umontreal.ca
A. Venkatesh. Affiliation: Department of Mathematics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139–4307. Address at time of publication: Institute for Advanced Study, Einstein Drive, Princeton, NJ 08540. Email: akshay@ias.edu
Received by editor(s): May 21, 2004. Published electronically: January 19, 2006.
Additional Notes: The second author was supported in part by NSF grant DMS-0245606. The copyright for this article reverts to public domain 28 years after publication.
MSC (2000): Primary 11G05, 11R29; Secondary 14G05, 11R11.
CommonCrawl
Why do differential forms have a much richer structure than vector fields? I apologize in advance because this question might be a bit philosophical, but I do think it is probably a genuine question with non-vacuous content. We know as a fact that differential forms have a much richer structure than vector fields; to name a few constructions that are built on forms but not on vectors, we have: (1) The exterior derivative, and hence Stokes' theorem, de Rham cohomology, etc. (2) Integration. (3) Functoriality, i.e. we can always pull back a differential form but we cannot always push forward a vector field. However, I feel it's somewhat paradoxical considering the fact that differential forms are defined to be the dual of vector fields, and this gives me the intuition that they should be almost "symmetric". Clearly this intuition is in fact far off the mark. But why? I mean, is there an at least heuristic argument to show, just by looking at the definition from scratch, that differential forms and vector fields must be very "asymmetric"? differential-geometry differential-forms Jia Yiyang
@MikeMiller: you are right, I should've been more accurate. As for (3) functoriality, I still would like to take it as an example of the richness of the structure of diff forms, rather than the reason for it, because I don't see a way to "see" the functoriality from the definition of diff forms. I in fact don't know what kind of answer will satisfy me, but I certainly hope what I said doesn't make an answer impossible :) – Jia Yiyang
Um, @Mike, what is a covector field? – Ted Shifrin
@TedShifrin: differential 1-form I suppose. – Jia Yiyang
Uh huh, which is a $1$-form. :) – Ted Shifrin
The overwhelming preference to work with $k$-covector fields ("differential forms") stems from a few basic facts: First, you might know of $\nabla$ from vector calculus. It is related to the exterior derivative $d$ in the sense that you can do $\nabla \wedge$ on a covector field and it is equivalent to $d$. $\nabla$ itself transforms as a covector does, and so it takes 1-covectors to 2-covectors, $k$-covectors to $k+1$-covectors (and these are all fields, of course). So there is a very convenient element of closure under the operation. Second, integration on a manifold naturally involves the tangent $k$-vector of the manifold. This is something traditional differential forms notation tends to gloss over. When you see, for example, something like this: $$\int f \, \mathrm dx^1 \wedge \mathrm dx^2$$ It really means this: $$\int f \, (\mathrm dx^1 \wedge \mathrm dx^2)(e_1 \wedge e_2) \, dx^1 \, dx^2$$ For this reason, the basis covectors $\mathrm dx^i$ should not be confused with the differentials $dx^i$. Further, that we use $e_1 \wedge e_2$ here, and not $e_2 \wedge e_1$, reflects an implicit choice of orientation, which is usually picked by convention from the ordering of the basis, but this need not always be the case. The tangent $k$-vector, and especially its orientation, must necessarily be considered in these integrals. So why does this make $k$-covector-fields preferred? Because the action of these fields on the manifolds' tangent $k$-vectors is inherently nonmetrical. So, differential forms allow you to do a lot of calculus without imposing a metric. This point, however, is somewhat obfuscated when you introduce the Hodge star operator and interior differentials, for these are metrical.
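Concretely, the pairing \((\mathrm dx^1 \wedge \mathrm dx^2)(e_1 \wedge e_2)\) above can be checked in a few lines. A tiny numpy sketch, using the standard determinant formula for the wedge of two covectors (representing covectors as plain arrays is just a convenient choice for illustration):

import numpy as np

# Covectors as 1-D arrays acting on vectors via the dot product.
dx1 = np.array([1.0, 0.0])   # dx^1: picks out the first component
dx2 = np.array([0.0, 1.0])   # dx^2: picks out the second component

def wedge(a, b):
    """(a ^ b)(u, v) = a(u) b(v) - a(v) b(u), the 2-covector built from a and b."""
    return lambda u, v: (a @ u) * (b @ v) - (a @ v) * (b @ u)

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
form = wedge(dx1, dx2)
print(form(e1, e2))   # 1.0: the pairing the integral notation leaves implicit
print(form(e2, e1))   # -1.0: swapping the tangent 2-vector's orientation flips the sign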
Then, you get a big problem with differential forms: by working exclusively with $k$-covector fields, and expunging all reference to $k$-vector fields, the treatment when we do have a metric is extremely ham-fisted. Yes, you can do everything with wedges, exterior derivatives, and Hodge stars. But it makes much more sense to use corresponding grade-lowering operations and derivatives instead. Geometric calculus does this, but let me get to that in a moment. Regarding the pushforward vs. pullback, I must confess a lack of understanding. I do not see why we would want to pull covectors back from a target manifold while insisting we must push vectors forward. I'm very familiar with the mathematics: that under a smooth map, the adjoint Jacobian transforms covectors from the target cotangent space to the original, and the inverse Jacobian does the same for vectors. Perhaps it has to do with defining the pushforward as the inverse of this inverse. Now, do all these remarks put together mean that $k$-vector fields are inherently disadvantaged, or less rich, than $k$-covector fields? I would say no. I mentioned geometric calculus earlier: it is the originator of the $\nabla \wedge$ notation that I used earlier, and it handles $k$-vector fields just fine. Geometric calculus is the calculus that goes with clifford algebra, and you may find it illuminating. Many of the theorems and results of differential forms translate to geometric calculus and to $k$-vector fields. Stokes' theorem? Used extensively. de Rham cohomology? Most of the same results apply. My point above about differential forms integrals using tangent $k$-vectors implicitly? That comes from geometric calculus, too, where the tangent $k$-vector is not digested "trivially" and you have to look at all the metrical ways in which it might interact with the vector field you're integrating. A grade-lowering derivative is natural to use with $k$-vector fields. In geometric calculus, this is notated as $\nabla \cdot$. You can see that successive chains of $\nabla \cdot$ continually lower the grade of a field, just as successive exterior derivatives raise it. My ultimate point is that, when you do have a metric, it's quite nonsensical to treat everything as a differential form instead of using $k$-vector fields when appropriate. I feel the tendency to do this in physics divorces students from a lot of the vector calculus they had learned, unnecessarily so. I can't speak to mathematics courses, but I imagine some of that criticism applies, too. Now, there are some properties of covector fields and exterior derivatives that are nicer than working with vector fields. For instance, under a map $f(x) = x'$ with adjoint Jacobian $\overline f$, it's true that $\overline f(\nabla ' \wedge A') = \nabla \wedge A$ for some covector field $A$. That's a very convenient result, and there's no correspondingly nice identity for vector fields.
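The Jacobian remark can be made concrete in a few lines of numpy; the example map and the point are arbitrary choices, so this is only a sketch:

import numpy as np

# Smooth map f(x, y) = (x*y, x + y), with Jacobian evaluated at p = (2, 3).
J = np.array([[3.0, 2.0],    # row 1: d(xy)/dx = y = 3,  d(xy)/dy = x = 2
              [1.0, 1.0]])   # row 2: d(x+y)/dx = 1,     d(x+y)/dy = 1

v = np.array([1.0, -1.0])      # a tangent vector at p: pushed forward by J
alpha = np.array([5.0, 0.5])   # a covector at f(p): pulled back by the adjoint J.T

v_push = J @ v                 # pushforward lives at f(p)
alpha_pull = J.T @ alpha       # pullback lives at p

# Defining duality: <pullback(alpha), v> = <alpha, pushforward(v)>
print(alpha_pull @ v, alpha @ v_push)   # both print 5.0

Note that J.T always carries covectors from the target back to the source, while recovering a vector at p from one at f(p) needs J to be invertible; this is the asymmetry the question asks about.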
Muphrid
Nicely put. A rich answer.
Thanks for the informative answer, +1. To summarize your point: diff forms and multi-vector fields aren't really that "asymmetric" despite the conventional preference for diff forms; geometric calculus (whatever it means) is an alternative. So would you say the preference for diff forms is a historical accident, that it is only because the language of diff forms got popular earlier than geometric calculus? – Jia Yiyang
Yes, that's my position. And to be fair, vanilla tensor calculus handles vector fields on the same footing, too, but that gives up some of the cleanliness of notation in forms or GC parlance. – Muphrid
Can you elaborate a bit on the distinction between differentials and covectors? And I don't really understand what $$\int f \, (\mathrm dx^1 \wedge \mathrm dx^2)(e_1 \wedge e_2) \, dx^1 \, dx^2$$ means... – goblin GONE
To illustrate my confusion. When we're young, we ask people "why not write $\int_{x=a}^b f(x)$ instead of the more mystical and mystifying $\int_{a}^bf(x) dx$," and they say "because $dx$ can be thought of as an infinitesimal element." Then we think about non-standard analysis just a little bit and it becomes pretty clear this is a dumb justification. Then we get a bit older, and we ask more people about the $dx$, and eventually we get the answer: "because $f(x) dx$ is a differential form!" You seem to be saying that, actually, it's not. Is that right? – goblin GONE
You can think of vector fields as the "Lie algebra of the diffeomorphism group"; that is, you can think of a vector field as an infinitesimal diffeomorphism. This in particular explains why you shouldn't expect vector fields to be functorial, because diffeomorphism groups are not functorial, but why you should expect vector fields to act on various other objects attached to a manifold via Lie derivative (this is some kind of "infinitesimal functoriality" for these objects). You can think of $1$-forms as the "universal derivatives" of functions; in fact you can think of differential forms on a smooth manifold $X$ as being like the free commutative differential graded algebra generated by $C^{\infty}(X)$ (although I think this is slightly wrong as stated), which in particular explains morally why taking differential forms ought to be functorial. Because vector fields act by derivations on $C^{\infty}(X)$ this explains in some sense why vector fields pair with $1$-forms, but not why this pairing is perfect. I suspect that this is a special fact about smooth manifolds and is false in greater generality, suitably interpreted. Qiaochu Yuan
Thanks and +1. This looks like good stuff but the terminologies in the 2nd paragraph are beyond my mathematical background. – Jia Yiyang
@Jia: for a simpler thing than what I said in the second paragraph see en.wikipedia.org/wiki/K%C3%A4hler_differential for starters (although it is slightly false that $\Omega^1(X)$ is the Kahler differentials of $C^{\infty}(X)$; one needs a notion of "smooth Kahler differential" instead). – Qiaochu Yuan
There's also something to say about Hochschild homology vs. cohomology but I've probably already gone too far. – Qiaochu Yuan
Well, certainly over my head for now but thanks for the extra information! – Jia Yiyang
Short answer: things that look like functions are very convenient, because we can do algebra, calculus, and such with them. Things that look like geometry are less convenient to work with. Tangent vectors on a manifold $M$ closely relate to differentiable curves $\mathbf{R} \to M$. Infinitesimal segments of such curves give one of the visualizations of the meaning of a tangent vector; more rigorously, any tangent vector on $M$ can be identified with the image of the standard tangent vector $\partial/\partial x$ on $\mathbf{R}$ at the origin. Cotangent vectors -- i.e.
differential $1$-forms -- on a manifold $M$ closely relate to differentiable functions $M \to \mathbf{R}$. All of the things I said about tangent vectors apply in dual form to cotangent vectors. However, we can already see one glaring asymmetry: we have a structure entirely internal to $M$ that is closely related to differentiable functions $M \to \mathbf{R}$: specifically, the notion of differentiable scalar fields.

This asymmetry was already seen in elementary calculus: functions are interesting, intervals less so. In higher dimensions, curves can be interesting, but they are studied primarily through the functions defining them and how they relate to those functions. (Although see things like topology, homotopy, and homology for ways in which curves in a space can be made into a more primary object of study.) While tangent vectors seem a pleasing idea from an external point of view and are important to the relationship between various manifolds, they play a much less significant role internally to a manifold, predominantly serving to act as the dual space to cotangent vectors.

If you have a metric then $k$-vector fields are isomorphic to $k$-forms, so in this setting they do not have richer structure. The reason we prefer forms is that we can do a lot of operations on them without a metric. The simplest example is integration on a line segment. Visualize this line segment as a curved piece of elastic band in space. Suppose we have a function that assigns a real value to each point on the band. Does it make sense to talk about an integral of this function that is invariant under diffeomorphisms? It doesn't, because stretching the band in a particular area increases the contribution of that area, thus changing the value of the integral. Suppose instead we put a discrete set of dots on the band. No matter how we stretch it, the total number of dots between two points on the band does not change. So if we put a dot density on the band then we have a quantity that can be integrated invariantly under diffeomorphism. A dot density can be given by a function $f(x_1, x_2)$ giving the number of dots between $x_1$ and $x_2$. Now letting $x_2$ approach $x_1$ we define $$g(x)(v) = \lim_{\epsilon \to 0} \frac{f(x, x+ \epsilon v)}{\epsilon}.$$ This is a 1-form giving the dots in a small interval $(x, x+dx)$ as $g(x)(dx)$. We can find the dots between $x_1$ and $x_2$ as the integral of $g(x)(dx)$. This is why forms are the right quantity to integrate. In higher dimensions $g(x, y)(dx, dy)$ gives the number of dots in a small square at a point $(x, y)$.

The reason we can always pull back forms but not always push forward vectors is not due to an asymmetry in vectors/forms but due to an asymmetry in functions: each $x$ has a unique $y$ but not every $y$ has a unique $x$. You can always push forward a vector along $f^{-1}$ whenever that inverse exists. In particular, you can push forward and pull back along diffeomorphisms. – Jules (edited by Alex Provost)

I will add that smooth $k$-forms on a smooth manifold $M$ carry the additional structure of a graded algebra, with product given by the wedge product, and for which the exterior derivative and interior multiplication ("contraction") are anti-derivations of degree 1 and -1, respectively, which square to zero, while (the Lie algebra of) smooth vector fields on a smooth manifold does not carry a nontrivial grading. There is still a product structure on vector fields, given by the Lie bracket (the Lie derivative of one field along another), turning them into a non-associative algebra, but there is no grading a priori. – Reginald Anderson
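For concreteness, the graded-algebra structure referred to here amounts to the following standard identities (not part of the answer above, but textbook facts): for a $k$-form $\alpha$ and an $l$-form $\beta$,

$$\alpha \wedge \beta = (-1)^{kl}\, \beta \wedge \alpha, \qquad d(\alpha \wedge \beta) = d\alpha \wedge \beta + (-1)^{k}\, \alpha \wedge d\beta,$$

and, for a vector field $X$,

$$d \circ d = 0, \qquad \iota_X \circ \iota_X = 0, \qquad \mathcal{L}_X = d\,\iota_X + \iota_X\, d \quad \text{(Cartan's formula)},$$

so $d$ raises degree by one, the contraction $\iota_X$ lowers it by one, both are anti-derivations squaring to zero, and the Lie derivative is built out of the two. Nothing analogous grades the Lie bracket of vector fields.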
I was thinking about this very topic on a walk a few weeks ago. There is a hierarchy of possible structures you can put on a manifold, each of which allows you to define a richer set of natural covariant/coordinate-free operations on the manifold: you can start with only differential structure; to this you can add a Riemannian metric; and finally you can add a connection. While tensor algebra in its full form requires all three, the "alternating" part, exterior calculus, does not: differential structure and a metric are enough to define the differential, codifferential, and Hodge star, which gives you de Rham cohomology, Stokes's theorem, etc. (And as Muphrid has pointed out in his answer, if you don't even have a metric, you still have the differential, though I would argue that almost all applications people have in mind for differential forms, including those listed in your question, also require the Hodge star.)

Consider for instance the second derivative of functions. The alternating part of the second derivative, $d^2f=0$, is automatically covariant without needing any notion of connection. The symmetric part, the Hessian, $\nabla \nabla f$, can only be defined in a coordinate-free way by means of a connection.

So to answer your question, I wouldn't say that differential forms have richer structure than vector fields; after all they are equivalent via the musical isomorphisms. But I would say that exterior calculus, being the part of tensor calculus that is automatically coordinate-free without additional structure in the form of a connection, is more fundamental, or at least more basic, than the full tensor calculus.
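The $d^2 f = 0$ claim in the last answer is a one-line coordinate computation, worth spelling out: writing $df = \partial_i f \, dx^i$,

$$d(df) = \partial_j \partial_i f \; dx^j \wedge dx^i = \sum_{j<i} \left( \partial_j \partial_i f - \partial_i \partial_j f \right) dx^j \wedge dx^i = 0,$$

because the symmetric mixed partials cancel against the antisymmetric wedge. The coefficients $\partial_i \partial_j f$ on their own do not transform tensorially under a change of coordinates, which is exactly why the Hessian $\nabla \nabla f$ needs a connection to correct the transformation law.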
The "noble experiment" in Tampa: A study of prohibition in urban America.
Alduino, Frank William., Florida State University

Prohibition sprang forth from the Progressive Era--the widespread reform movement that swept across the United States at the turn of the century. Responding to the dramatic changes in American society since the end of the Civil War, the Progressive movement encompassed a wide array of individuals and groups advocating a far-reaching program of economic, political, and social reform. For over forty years temperance zealots strived to impose their values on the whole of American society, particularly on the rapidly expanding immigrant population. These alien newcomers epitomized the transformation of the country from rural to urban, from agricultural to industrial.

Rapidly-expanding urban centers were often the battleground between prohibitionists and supporters of the whiskey traffic. European immigrants, retaining their traditional values, gravitated to metropolitan areas such as Boston, New York, and Chicago. With the opening of the cigar industry in the mid-1880s, Tampa, Florida also began attracting large numbers of immigrants. Because of its pluralistic composition, the city might serve as a microcosm of the national struggle between the "wet" and "dry" forces.

Using newspapers, oral interviews, and other primary materials, this study traces the various aspects of the prohibition movement in the city of Tampa. In addition, it details other peripheral areas associated with the advent of the Eighteenth Amendment including the drug and alien trades. Finally, this study examines the lengthy efforts to repeal the "Noble Experiment" and return legalized drinking back to Tampa.

The "talk" of returning women graduate students: An ethnographic study of reality construction.
McKenna, Alexis Yvonne., Florida State University

This study looked at women's internal experience of graduate school. In particular, it focused on the experience of women returning full-time to graduate school after an extended time-out for careers and/or family. The questions examined were: (1) how do returning women "name and frame" their experience? (2) what, if any, is the relationship between the way the women "name and frame" their experience and their response to it?
and, (3) what role does the researcher-as-interviewer play in the construction of the data?

Data were collected through a series of three ethnographic interviews with 12 returning women, ranging in age from 28 to 50. Two of the twelve women were single, two were widowed, seven were divorced and one was divorced and remarried. Eight of the women had children.

Analysis of the data showed that returning women, as a group, "named and framed" their experience in terms of change. Some women wanted to change self-image or self-concept while others wanted to acquire a new set of skills or credentials. Individually, the women "named and framed" their experiences in terms of an internalized "meaning-making map" acquired in the family of origin but modified through adult experiences. This "map" told them who they were and what kind of a life they could have. It gave their "talk" and behavior a consistency that could be recognized; it could make life easier or harder. A woman who felt she must "prove" herself, for example, found graduate school more difficult than a woman who wanted to "work smart."

The researcher-as-interviewer influenced the construction of data through her presence as well as through the kinds of questions she asked. The women understood and gave meaning to their experiences through the process of explaining them to the interviewer. The insights gained through this process of "shared talk" influenced future action and decisions.

(oxygen-16 + thorium-232) incomplete fusion followed by fission at 140 MeV.
Gavathas, Evangelos P., Florida State University

Cross sections for incomplete fusion followed by fission have been measured for the reaction ($^{16}$O + $^{232}$Th) at 140 MeV. In plane and out of plane measurements were made of cross sections for beamlike fragments in coincidence with fission fragments. The beamlike fragments were detected with the Florida State large acceptance Bragg curve spectrometer. The detector was position sensitive in the polar direction. The beamlike particles observed in coincidence with fission fragments were He, Li, Be, B, C, N and O. Fission fragments were detected by three surface barrier detectors using time of flight for particle identification. The reaction cross section due to incomplete fusion is 747 $\pm$ 112 mB, or 42% of the total fission cross section. The strongest incomplete fusion channels were the helium and carbon channels. The average transferred angular momentum for each incomplete fusion channel was calculated using the $Q_{\mathrm{opt}}$ model of Wilczynski, and the angular correlation was calculated using the saddle point transition state model. The K distribution was determined from the Rotating Liquid Drop model. The theoretical angular distributions were fitted to the experimental angular distributions with the angular momentum J and the dealignment factor $\alpha_o$ as free parameters. The fitted parameter J was in excellent agreement with the $Q_{\mathrm{opt}}$ model predictions.
The conclusions of this study are that the incomplete fusion cross section is a large part of the total cross section, and that the saddle point transition state model adequately describes the observed angular correlations for fission following incomplete fusion.

125-Iodine: a probe in radiobiology.
Warters, Raymond Leon

14-3-3 and aggresome formation: implications in neurodegenerative diseases.
Jia, Baohui, Wu, Yuying, Zhou, Yi

Protein misfolding and aggregation underlie the pathogenesis of many neurodegenerative diseases. In addition to chaperone-mediated refolding and proteasomal degradation, the aggresome-macroautophagy pathway has emerged as another defense mechanism for sequestration and clearance of toxic protein aggregates in cells. Previously, the 14-3-3 proteins were shown to be indispensable for the formation of aggresomes induced by mutant huntingtin proteins. In a recent study, we have determined that 14-3-3 functions as a molecular adaptor to recruit chaperone-associated misfolded proteins to dynein motors for transport to aggresomes. This molecular complex involves a dimeric binding of 14-3-3 to both the dynein-intermediate chain (DIC) and an Hsp70 co-chaperone Bcl-2-associated athanogene 3 (BAG3). As 14-3-3 has been implicated in various neurodegenerative diseases, our findings may provide mechanistic insights into its role in managing misfolded protein stress during the process of neurodegeneration.

PMC4189886, PMID 24549097

14-3-3 protein targets misfolded chaperone-associated proteins to aggresomes.
Xu, Zhe, Graham, Kourtney, Foote, Molly, Liang, Fengshan, Rizkallah, Raed, Hurt, Myra, Wang, Yanchang, Wu, Yuying, Zhou, Yi

The aggresome is a key cytoplasmic organelle for sequestration and clearance of toxic protein aggregates. Although loading misfolded protein cargos onto dynein motors has been recognized as an important step in the aggresome formation process, the molecular machinery that mediates the association of cargos with the dynein motor is poorly understood. Here, we report a new aggresome-targeting pathway that involves isoforms of 14-3-3, a family of conserved regulatory proteins. 14-3-3 interacts with both the dynein-intermediate chain (DIC) and an Hsp70 co-chaperone Bcl-2-associated athanogene 3 (BAG3), thereby recruiting chaperone-associated protein cargos to dynein motors for their transport to aggresomes.
This molecular cascade entails functional dimerization of 14-3-3, which we show to be crucial for the formation of aggresomes in both yeast and mammalian cells. These results suggest that 14-3-3 functions as a molecular adaptor to promote aggresomal targeting of misfolded protein aggregates and may link such complexes to inclusion bodies observed in various neurodegenerative diseases.

doi: 10.1242/jcs.126102, PMC3772389, PMID 23843611

14-3-3 proteins are required for hippocampal long-term potentiation and associative learning and memory.
Qiao, Haifa, Foote, Molly, Graham, Kourtney, Wu, Yuying, Zhou, Yi

14-3-3 is a family of regulatory proteins highly expressed in the brain. Previous invertebrate studies have demonstrated the importance of 14-3-3 in the regulation of synaptic functions and learning and memory. However, the in vivo role of 14-3-3 in these processes has not been determined using mammalian animal models. Here, we report the behavioral and electrophysiological characterization of a new animal model of 14-3-3 proteins. These transgenic mice, considered to be a 14-3-3 functional knock-out, express a known 14-3-3 inhibitor in various brain regions of different founder lines. We identify a founder-specific impairment in hippocampal-dependent learning and memory tasks, as well as a correlated suppression in long-term synaptic plasticity of the hippocampal synapses. Moreover, hippocampal synaptic NMDA receptor levels are selectively reduced in the transgenic founder line that exhibits both behavioral and synaptic plasticity deficits. Collectively, our findings provide evidence that 14-3-3 is a positive regulator of associative learning and memory at both the behavioral and cellular level.

doi: 10.1523/JNEUROSCI.4393-13.2014, PMC3972712, PMID 24695700

14-3-3 proteins in neurological disorders.
Foote, Molly, Zhou, Yi

14-3-3 proteins were originally discovered as a family of proteins that are highly expressed in the brain. Through interactions with a multitude of binding partners, 14-3-3 proteins impact many aspects of brain function including neural signaling, neuronal development and neuroprotection. Although much remains to be learned and understood, 14-3-3 proteins have been implicated in a variety of neurological disorders based on evidence from both clinical and laboratory studies.
Here we will review previous and more recent research that has helped us understand the roles of 14-3-3 proteins in both neurodegenerative and neuropsychiatric diseases.

PMC3388734, PMID 22773956

14-3-3τ promotes surface expression of Cav2.2 (α1B) Ca2+ channels.
Liu, Feng, Zhou, Qin, Zhou, Jie, Sun, Hao, Wang, Yan, Zou, Xiuqun, Feng, Lingling, Hou, Zhaoyuan, Zhou, Aiwu, Zhou, Yi, Li, Yong

Surface expression of voltage-gated Ca(2+) (Cav) channels is important for their function in calcium homeostasis in the physiology of excitable cells, but whether or not and how the α1 pore-forming subunits of Cav channels are trafficked to the plasma membrane in the absence of the known Cav auxiliary subunits, β and α2δ, remains mysterious. Here we showed that 14-3-3 proteins promoted functional surface expression of the Cav2.2 α1B channel in transfected tsA-201 cells in the absence of any known Cav auxiliary subunit. Both the surface to total ratio of the expressed α1B protein and the current density of voltage step-evoked Ba(2+) current were markedly suppressed by the coexpression of a 14-3-3 antagonist construct, pSCM138, but not its inactive control, pSCM174, as determined by immunofluorescence assay and whole cell voltage clamp recording, respectively. By contrast, coexpression with 14-3-3τ significantly enhanced the surface expression and current density of the Cav2.2 α1B channel. Importantly, we found that between the two previously identified 14-3-3 binding regions at the α1B C terminus, only the proximal region (amino acids 1706-1940), closer to the end of the last transmembrane domain, was retained by the endoplasmic reticulum and facilitated by 14-3-3 to traffic to the plasma membrane. Additionally, we showed that the 14-3-3/Cav β subunit coregulated the surface expression of Cav2.2 channels in transfected tsA-201 cells and neurons. Altogether, our findings reveal a previously unidentified regulatory function of 14-3-3 proteins in promoting the surface expression of Cav2.2 α1B channels.

doi: 10.1074/jbc.M114.567800, PMC4317001, PMID 25516596
Crop nutrient management using Nutrient Expert improves yield, increases farmers' income and reduces greenhouse gas emissions

Tek B. Sapkota (ORCID: 0000-0001-5311-0586), Mangi L. Jat (ORCID: 0000-0003-0582-1126), Dharamvir S. Rana, Arun Khatri-Chhetri, Hanuman S. Jat, Deepak Bijarniya, Jhabar M. Sutaliya, Manish Kumar, Love K. Singh, Raj K. Jat, Kailash Kalvaniya, Gokul Prasad, Harminder S. Sidhu, Munmun Rai, T. Satyanarayana & Kaushik Majumdar

Scientific Reports volume 11, Article number: 1564 (2021)

Reduction of excess nutrient application and balanced fertilizer use are the key mitigation options in agriculture. We evaluated Nutrient Expert (NE) tool-based site-specific nutrient management (SSNM) in rice and wheat crops by establishing 1594 side-by-side comparison trials with farmers' fertilization practices (FFP) across the Indo-Gangetic Plains (IGP) of India. We found that NE-based fertilizer management can lower global warming potential (GWP) by about 2.5% in rice, and between 12 and 20% in wheat, over FFP. More than 80% of the participating farmers increased their crop yield and farm income by applying the NE-based fertilizer recommendation. We also observed that the increases in crop yield and the reductions in fertilizer consumption and associated greenhouse gas (GHG) emissions from using NE were significantly influenced by the crop type, agro-ecology, soil properties and farmers' current level of fertilization. Adoption of NE-based fertilizer recommendation practice in all rice and wheat acreage in India would translate into 13.92 million tonnes (Mt) more rice and wheat production with 1.44 Mt less N fertilizer use, and a reduction in GHG of 5.34 Mt CO2e per year over farmers' current practice. Our study establishes the utility of NE to help implement SSNM in smallholder production systems for increasing crop yields and farmers' income while reducing GHG emissions.
In recent years, the potential to mitigate climate change by improving nutrient use efficiency (NUE) in croplands has received considerable attention in the agricultural research and policy agendas1,2,3. The use of chemical fertilizers, nitrogen (N) in particular, in crop production is at the center of managing both food security and environmental problems4,5,6. Enhancing crop yields through increased use of nutrients is essential to meet current as well as future food demand7. On the other hand, because fertilizer application in croplands is a major source of anthropogenic nitrous oxide (N2O) emissions8, reducing greenhouse gas (GHG) emissions through proper fertilizer management is essential to address agriculture's contribution to climate change2. Moreover, excess and improper use of nutrients in crop production have large cost implications for the farmers9. Therefore, improving NUE in croplands provides the opportunity to address the triple challenge of food security, farmers' livelihood and environmental protection, globally.

Site-Specific Nutrient Management (SSNM) involves optimizing nutrient inputs considering demand (plant needs) and supply (from the soil's indigenous sources) of the nutrients according to their variation in time and space, thereby ensuring field-specific nutrient management in a particular cropping season10,11. Various technologies and practices, such as the Chlorophyll Meter12, Leaf Color Chart13 and GreenSeeker14, and decision support systems, for instance Nutrient Expert (NE) (http://software.ipni.net) and Rice Crop Manager (http://cropmanager.irri.org/home), are available for helping farmers to implement SSNM and improve NUE9,15. The NE tool was developed to implement crop nutrient management specific to farmers' fields with or without a soil test16. Although a few studies have evaluated the agronomic and environmental performance of NE17,18,19, it has not been evaluated on a large number of farms with varying agro-climatic conditions and across various levels of crop intensification. Farmers' participatory trials are useful to assess the utility of the tool and also enable farmers to make informed decisions for crop nutrient management. This study presents results from a large number of on-farm participatory trials (1594 paired comparisons) comparing farmers' fertilizer practices (FFP) versus NE-based nutrient management in terms of fertilizer inputs, yields, economic returns, and GHG emissions in rice and wheat fields in India. Rice and wheat are the major crops grown in India and consume 50% of the fertilizer used in the country20. The study in India was particularly important because India accounts for 14% of total fertilizer use globally20 but its NUE is one of the lowest in the world21. This is mainly driven by imbalanced and inadequate use of nutrients, given that the government's fertilizer subsidy is skewed toward nitrogenous fertilizer over other nutrients22. The results of this study provide rich information to agriculture and fertilizer policymakers to enable them to design fertilizer use and distribution policies and farmers' support programs in the IGP and other parts of the country.

Fertilizer input and crop yield

The use of the NE tool significantly reduced the amount of nitrogen (N) used in both rice and wheat crops compared to the FFP (Fig. 1a). Potash (K2O) input was significantly higher under NE than under FFP (Fig. 1b), whereas phosphorus (P2O5) input was significantly lower under NE-based recommendations than FFP, except for rice in Western IGP (Fig. 1c).
The N, P2O5, and K2O application rates in each comparison trial in rice and wheat fields under NE and FFP in the study areas are presented in Supplementary Figure S1. Farmers in the Western IGP were either applying high rates of P2O5 in rice cultivation or none at all. We also observed that many farmers in the IGP region were not using K2O on rice and wheat crops.

[Figure 1] Rate of nitrogen (a), potash (b) and phosphorus (c) application and grain yield (d) from rice and wheat under Nutrient Expert (NE) and Farmers' Fertilizer Practices (FFP) in the study areas. Values are averages of all the comparison trials over the study period. Within each pair, bars bearing different lowercase letters are significantly different from each other based on the paired t-test (p = 0.05). The error bar shows the standard deviation. IGP = Indo-Gangetic Plains.

The NE-based recommendation significantly reduced N application over the FFP in the wheat crop more than in rice, and in the Western IGP more than in the Eastern IGP (Fig. 2, left panel). The effect of NE in reducing N application over the FFP was influenced by farmers' fertilizer application rates. The reduction in N rate by NE over FFP was higher in the cases where farmers' N application rate was higher, and the effect gradually diminished as the rate of farmers' N application decreased (Fig. 2, left panel). Similarly, the effect of NE in reducing the N application rate was higher where farmers were applying between 41 and 70 kg P2O5 ha−1 than in the cases where farmers were not applying it or were applying smaller quantities, i.e., < 40 kg of P2O5 ha−1. N reduction was also higher in the cases where farmers were not applying K2O than in the cases where farmers were applying K2O (Fig. 2, left panel).

[Figure 2] Percent change in nitrogen use rate (left panel) and grain yields (right panel) due to NE-based fertilizer management compared to FFP, segregated by crop types, agro-ecological zones and farmers' fertilizer application rates. On the Y-axis, the numbers outside parentheses denote the number of villages and the ones inside parentheses show the number of pairs analyzed. Error bars indicate 95% confidence intervals (CI). The changes are considered significant when the 95% CI does not overlap zero, and the effects of categories within a parameter are significantly different when their 95% CIs do not overlap with each other. IGP = Indo-Gangetic Plains.

Overall, NE-based fertilizer management significantly increased the yield of both rice and wheat in both agro-ecologies as compared to FFP (Fig. 1d). However, the increment was highest in rice in Eastern IGP. When separated by agro-ecology, the yield increment due to NE was higher in Eastern IGP than in Western IGP (Fig. 2, right panel). Although the yield response to NE-based fertilizer management was not significant under different rates of fertilizer (N, P2O5 and K2O) application by farmers, yield improvement due to NE was comparatively high where farmers used less K2O (Fig. 2, right panel). Out of 135 pair comparisons in the rice fields of Eastern IGP, NE yielded higher than FFP in all cases, with less N input than FFP in 49% of cases and with more N input than FFP in 50% of cases (Supplementary Fig. S2, upper left). In the case of wheat in this agro-ecology and compared to FFP, NE increased crop yield in 78% of cases out of 116 pair comparisons, in 65% of cases with reduced N input and in the remaining 13% of cases with increased N input (Supplementary Fig. S2, lower left).
Here, compared to FFP, NE reduced yield in 20% of cases, mostly with reduced N input. In Western IGP, on the other hand, compared to FFP, NE increased rice and wheat yield in 83% out of 595 pair comparisons and in 91% out of 748 pair comparisons, respectively, mostly with reduced N input (Supplementary Fig. S2, right panels).

The farm gate GWP from rice production in the study area ranged from 2463 to 5482 kg CO2e ha−1, whereas that from wheat production ranged from 287 to 2463 kg CO2e ha−1. Similarly, the GHG emission intensity of rice ranged from 509 to 1606 kg CO2e tonne−1 grain yield and that of wheat ranged from 71 to 769 kg CO2e tonne−1 grain yield. Total GWP and GHG emission intensity of rice and wheat production were significantly lower under NE-based fertilizer management than under FFP in both agro-ecologies (Fig. 3A,B).

[Figure 3] Total global warming potential (A) and emission intensity (B) from rice and wheat production under NE and FFP in the study areas. Values are averages of all the comparison trials over the study period. Within each pair, bars bearing different lowercase letters are significantly different from each other based on a paired t-test (p = 0.05). Error bar shows the standard deviation. IGP = Indo-Gangetic Plains.

NE-based fertilizer management reduced GWP more in wheat than in rice, and more in Western IGP than in Eastern IGP (Fig. 4, left panel). When separated by farmers' fertilizer rate, the effect of NE-based fertilizer recommendations in reducing total GWP was significantly smaller with lower applications of N and P2O5 (< 40 kg P2O5 ha−1) than in the cases where farmers were applying higher rates of these fertilizers (Fig. 4, left panel). Similarly, the effect of NE in reducing GHG intensity was higher in wheat than in rice and higher in Eastern IGP than in Western IGP (Fig. 4, right panel). The effect of NE in reducing total GHG emission intensity was significantly smaller with lower applications of N and P2O5 (< 40 kg P2O5 ha−1) than in cases where farmers were applying higher rates, i.e., > 175 kg N and > 40 kg P2O5 ha−1 (Fig. 4, right panel).

[Figure 4] Percent change in global warming potential (GWP, left panel) and emission intensity (right panel) due to NE-based fertilizer management compared to FFP, segregated by crop types, agro-ecological zones and farmers' fertilizer application rates. On the Y-axis, the numbers outside parentheses denote the number of villages and the ones inside parentheses show the number of pairs analyzed. Error bars indicate 95% confidence intervals (CI). The changes are considered significant when the 95% CI does not overlap zero, and the effects of categories within a parameter are significantly different when their 95% CIs do not overlap with each other. IGP = Indo-Gangetic Plains.

Cost–benefit of change in fertilizer rate

The cost of fertilizer application under NE-based recommendations in rice and wheat crops was largely affected by the FFP in the different agro-ecological zones. NE significantly increased the cost of fertilizer for both rice and wheat in Western IGP (Fig. 5a). In Eastern IGP, the cost of fertilizer for rice was not significantly different between NE and FFP, whereas that for wheat was significantly higher under FFP (Fig. 5a). The percent increment in fertilizer cost due to NE over FFP was higher in rice than in wheat (Fig. 6, left panel). When separated by agro-ecologies, compared to FFP, NE significantly increased the cost of fertilizer in Western IGP but reduced it in Eastern IGP, although not significantly.
Compared to FFP, NE increased the total cost of fertilizer slightly more in cases where farmers' N rate was < 175 kg ha−1 than in cases where farmers' N rate was > 175 kg ha−1 (Fig. 6, left panel). The total cost of fertilizer due to NE significantly increased in cases where farmers did not apply any P2O5 or K2O, and the total cost actually decreased in cases where farmers applied > 70 kg P2O5 and > 10 kg K2O ha−1 (Fig. 6, left panel).

[Figure 5] Cost of fertilizer and income from yield in rice and wheat production under Nutrient Expert (NE) and Farmers' Fertilizer Practice (FFP) in the study areas. Values are averages of all the comparison trials over the study period. Within each pair, bars bearing different lowercase letters are significantly different from each other based on the paired t-test (p = 0.05). Error bar shows standard deviation. IGP = Indo-Gangetic Plains.

[Figure 6] Percent change in fertilizer cost (left panel) and revenue from yield (right panel) due to NE-based fertilizer management compared to FFP, segregated by crop types, agro-ecological zones and fertilizer application rates. On the Y-axis, the numbers outside parentheses denote the number of villages and the ones inside parentheses show the number of pairs analyzed. Error bars indicate 95% confidence intervals (CI). The changes are considered significant when the 95% CI does not overlap zero, and the effects of categories within a parameter are significantly different when their 95% CIs do not overlap with each other. IGP = Indo-Gangetic Plains.

Except for rice in Western IGP, NE significantly increased the gross income from crop production compared to FFP (Fig. 5b). When separated by agro-ecological zones, the revenue increase due to NE over FFP was significantly higher in Eastern IGP than in Western IGP (Fig. 6, right panel). Similarly, the percentage of revenue increase from NE over FFP was higher in cases where farmers' N application rate was lower (< 125 kg N ha−1) than in cases where farmers' N application rate was higher (> 175 kg N ha−1).

Use of the Nutrient Expert tool for SSNM

This study presented the benefits of using the NE tool for SSNM in rice and wheat crops in two different agro-ecological zones in India. The NE-based nutrient management not only increased the grain yield of rice and wheat crops but also decreased the total N and P2O5 applied in the fields (Fig. 1). The results of increased yields and decreased N rates through NE-based nutrient management are in agreement with other findings in the region19 that also reported increased yield and reduced consumption of N and P2O5 through NE-based fertilizer recommendation. The increased crop yield and reduced fertilizer consumption under NE can be attributed to increased NUE, as NE gives dynamic fertilizer recommendations based on growing conditions, the soil's indigenous nutrient supply and residual nutrients from previous crops, thereby minimizing the loss of nutrients. Through on-farm comparison of various nutrient management strategies, Sapkota et al.19 reported significant improvement in the use efficiency of N and P2O5 through NE-based fertilizer management in wheat, thereby increasing yield and profitability. In addition, the more balanced nutrition under NE might have led to increases in NUE through more vigorous plant growth and greater tolerance against biotic and abiotic stresses. In general, farmers do not apply K fertilizer, resulting in imbalanced nutrition, which ultimately reduces the efficiency of the other applied nutrients.
Xu et al.23 also reported increased NUE and crop yield due to NE over farmers' practice of fertilizer management in spring maize in Northeast China. The magnitude of the effect of NE in improving NUE, thereby decreasing the use of fertilizers and/or increasing yields, varied by crop, agro-ecology and farmers' current fertilizer rates. The regional variations in N reduction and yield gains (Fig. 2) were largely influenced by the current level of crop intensification and yield. The net gain through NE-based fertilizer recommendations is probably due to the increased partial factor productivity of N, which can be attributed to the balanced use of nutrients in rice and wheat crops. The majority of rice and wheat farmers in the IGP and elsewhere in South Asia apply N fertilizer following blanket recommendations based upon crop response data averaged over large geographic areas24. Crop fertilization following such blanket recommendations results in under-fertilization in some areas and over-fertilization in others. The NE-based recommendations help to overcome this problem by considering field conditions, management practices and crop characteristics for nutrient application.

This study presented interesting results on N rate change and yield response across the agro-ecologies. Most farmers (64% of rice growers and 77% of wheat growers) realized yield increases despite the reduction in N application on both rice and wheat crops (Fig. S2). Only about 15% of rice growers and 12% of wheat growers experienced yield losses due to NE, but with decreased N application, probably because these farmers were applying N beyond the economic optimum. Increased N application had positive yield impacts for 27% of rice and 7% of wheat growers, mainly in Eastern IGP (Fig. S2). This shows an opportunity for NE to close yield gaps in low-input areas. Further, the larger percentage reduction in N by NE under high farmer N rates (Fig. 2, left panel), achieved with comparable yield improvement (Fig. 2, right panel), suggests that the benefit of NE over FFP comes both from reducing the N rate and from improving yield in high-input areas, and mainly from increasing yield in low-input areas. Overall, NE-based fertilizer management in rice–wheat systems can reduce N fertilizer by ca. 18% (ca. 10% in Eastern IGP and ca. 25% in Western IGP; Fig. 2, left panel) without compromising yield (Fig. 2, right panel). Our analysis further shows that SSNM through NE increased rice and wheat production by about 4–12% in India (Fig. 2, right panel). These results reveal that NE-based SSNM has great potential for improving yields and NUE in rice and wheat crops to close the existing yield gaps and reduce excess N application. Moreover, efficient N fertilizer management strategies through adoption of NE-based recommendations could be one of the sustainable intensification pathways for the rice–wheat system in the IGP and similar agro-ecologies in the region and beyond.

Cost implications of NE use

This study showed that NE-based fertilizer recommendation has both positive and negative impacts on the total cost of fertilizer use. More than 50% of farmers (55% in rice and 51% in wheat) experienced an increase in the total cost of fertilizer over FFP (Supplementary Fig. S3). This also varied across the agro-ecological zones, again mainly due to different levels of crop intensification. Many farmers in the study areas either skip P2O5 and K2O altogether or apply them at low rates.
NE balances the fertilizer use with adequate application of P2O5 and K2O, which increased the total cost of fertilizer use despite the reduction in N application (Fig. 6, left panel). However, we also observed that, compared to FFP, NE reduced the total fertilizer cost for many farmers (45% in rice and 49% in wheat). These farmers were already using some amount of P2O5 and K2O, and therefore the cost decrease came mainly from reduced N application under NE. Most farmers experienced a yield gain and, therefore, higher income from grain yield, whether their total fertilizer cost increased or decreased (Fig. 6, right panel, and Fig. S2). In cases where the yield gain was achieved through increased fertilizer use, the increase in fertilizer cost was compensated by the yield gains. A few farmers in our study realized a double gain, i.e., a decrease in fertilizer cost as well as yield gains. This is probably due to balanced fertilization with adequate application of potassium under NE-based fertilizer management, which improved the NUE and increased the grain yields, thereby resulting in positive net returns. The revenue gains being larger than the added fertilizer cost indicates a large yield gap and a huge potential to close this gap through better fertilizer management. Our results are in agreement with Xu et al.23, who reported that the increase in gross return above fertilizer cost due to NE in spring maize was mainly due to the increase in grain yield.

GHG mitigation potential

This study showed there is a large potential for reducing excess N application in rice and wheat fields with the use of NE-based fertilizer recommendations (Fig. 1a). Application of N fertilizer is typically a main driver of N2O fluxes from rice–wheat systems25. The GHG emission reduction due to NE-based fertilizer management was higher in wheat than in rice (Fig. 4), for two main reasons. First, farmers generally apply higher doses of N fertilizer in wheat than in rice (Fig. 1a), and the N reduction due to NE-based fertilizer management was higher in wheat than in rice (Fig. 2, left panel). This results in more GHG reduction in wheat. Second, the fertilizer-induced field emissions of N2O would be higher in upland crops such as wheat than in lowland crops such as rice26. Therefore, even with the same level of fertilizer N reduction through NE, the percentage of GHG emission reduction would be higher in wheat than in rice. This emission reduction potential also varied spatially depending on the current level of fertilizer use (Fig. 4, left panel). The reduction in both GWP and GHG emission intensity by NE was higher in cases where farmers were applying higher rates of N and P2O5 (Fig. 4). This was mainly due to the reduction in fertilizer use by NE in these cases (Fig. 2, left panel) and, subsequently, the reduction in fertilizer-induced emissions of N2O and CO2. The emission intensity in both crops decreased under NE-based recommendations (Fig. 3, upper panel) due to the partial or combined effect of the reduction in N application and the yield gain by NE (Figs. 1 and 2). NE reduced GHG emission intensity more in wheat than in rice (Fig. 4, right panel). This is because NE resulted in a higher reduction in N rates and higher yield increments in wheat than in rice (Fig. 2). Similarly, NE reduced GHG emission intensity more in Eastern IGP than in Western IGP (Fig. 4, right panel), mainly because NE resulted in larger yield gains in Eastern IGP compared to Western IGP (Fig. 2, right panel).
These results demonstrate the importance of NE for closing yield gaps in low-input production systems. The magnitude of GHG reduction by NE in our study (ca. 2.5% in rice and 12–20% in wheat) was lower than that reported by Zhang et al.27 (ca. 45%) from winter wheat in North-central China. This was mainly because farmers in North-central China commonly apply higher doses of N (> 300 kg N ha−1)27,28. Thus, the magnitude of N reduction through adoption of NE is higher in such over-fertilized areas, and so is the magnitude of fertilizer-induced GHG savings.

Implications of NE-based nutrient management

Mineral fertilizers play an important role in increasing crop production and securing the food security of a growing population. However, excessive or imbalanced use of fertilizer not only increases the production cost to farmers but also contributes to environmental pollution. Therefore, in intensive crop-growing areas such as the Indian IGP, N fertilizer must be applied judiciously to balance optimum yield against the cost of fertilizer and the negative environmental effects of excess N application. Agriculture is the second largest source of GHG emissions in India, accounting for ~ 18% of gross national emissions. Identifying high-yield, low-emission pathways for the country's cereal production is key for reducing agriculture's contribution to total GHG emissions24. India recently declared a voluntary goal of reducing the emission intensity of its gross domestic product by 35% over the 2005 level, by 2030 (India's NDC submitted to the UNFCCC). Reduction in excess nutrient application and balanced fertilizer use are the key mitigation options in Indian agriculture29,30. This option can contribute to reducing large amounts of GHG emissions from the agriculture sector while bringing gains in crop yield and income for most farmers.

Soil test-based fertilizer recommendations are difficult to use for smallholder farms in South Asia because of constraints such as available testing facilities, farmers' access to such facilities, cost, and timeliness. Given the situation, a science-based, reliable and practically feasible site-specific fertilizer recommendation method is required to respond to the low NUE caused by imbalanced fertilization practices. Decision support systems (DSS) are increasingly being used to facilitate the application of improved nutrient management practices in farmers' fields. Thus, scaling the use of NE-based SSNM can partially address the challenge of increasing food production to meet the growing food demand while reducing agricultural emissions, particularly in areas where crop yield gaps and agricultural emissions are high. NE-based nutrient recommendations can be scaled up through government extension systems and schemes (e.g. the Soil Health Card Scheme of India: https://www.soilhealth.dac.gov.in/). Based on the authors' experience in the region, NE is easy for farmers to learn and can be used through their Android cell phones. Many progressive farmers in the study area are already using NE not only on their own farms but also on the farms of fellow farmers.

The implications of NE-based fertilizer management in terms of yield, N consumption and GHG emissions are tremendously high in countries like India. In 2016–2017, India produced 109.7 and 98.51 Mt of rice and wheat, respectively (https://eands.dacnet.nic.in/). If our observed on-farm yield increases of rice and wheat through NE-based fertilizer management over FFP represent the total rice and wheat area in India, this will translate into the production of 8.5 and 5.4 Mt of additional rice and wheat, respectively, without additional production costs. Annual N fertilizer consumption in India was about 17.4 Mt in 2016–2017 (ref. 31). Assuming 50% of this total N is used for rice and wheat production20, the estimated N fertilizer savings due to NE-based fertilizer management in rice and wheat in India will be about 1.44 Mt, with huge implications for costs and GHG savings. Through a bottom-up analysis using a large number of datasets in India, Sapkota et al. calculated fertilizer-related emissions from rice and wheat to be 558 and 775 kg CO2e per ha, respectively29. If our results of GHG emission savings of 2.5% in rice and 20% in wheat due to NE-based fertilizer management could be achieved in all rice and wheat areas in the country, this would translate into GHG savings of 5.2 Mt CO2e, i.e., 0.61 Mt CO2e from rice and 4.63 Mt CO2e from wheat. However, this would also increase the consumption of K2O, with huge implications for the production and import of K fertilizer and associated costs, and this trade-off warrants further study.
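The arithmetic behind this extrapolation is simple enough to sketch. In the snippet below, the harvested-area figures (roughly 44 Mha of rice and 30 Mha of wheat) are our own assumptions for illustration, since areas are not stated above; all other inputs are taken from the text:

# Back-of-the-envelope check of the national extrapolation (illustrative;
# the area figures are assumptions, all other numbers come from the text).
n_use_2016_17 = 17.4            # Mt N applied nationally in 2016-17
n_to_rice_wheat = 0.5           # assumed share of N going to rice + wheat
n_saving = 1.44                 # Mt N saved under NE, from the text
print(f"N saving = {n_saving / (n_use_2016_17 * n_to_rice_wheat):.0%} "
      "of rice-wheat N use")    # ~17%, consistent with the ca. 18% above

area_ha = {"rice": 44e6, "wheat": 30e6}           # ASSUMED harvested areas
emis_kg_per_ha = {"rice": 558, "wheat": 775}      # fertilizer-related kg CO2e/ha
ghg_saving_frac = {"rice": 0.025, "wheat": 0.20}  # NE savings observed here

for crop in ("rice", "wheat"):
    saved_mt = area_ha[crop] * emis_kg_per_ha[crop] * ghg_saving_frac[crop] / 1e9
    print(f"{crop}: ~{saved_mt:.2f} Mt CO2e saved per year")
# Prints ~0.61 Mt for rice and ~4.65 Mt for wheat, matching the 0.61 and
# 4.63 Mt CO2e quoted above to within rounding of the assumed areas.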
We conducted this research both in high-input (Western IGP) and low-input (Eastern IGP) production systems in the major rice–wheat belt of India and covered a sufficiently large number of farmers (ca. 1600 pair comparisons) to make it representative of major rice–wheat growing areas in the region. Given the level of implications in terms of yields, total N, P2O5 and K2O consumption and GHG emissions, NE-based fertilizer management certainly merits further scientific investigation and policy consideration.

This study evaluated NE-based site-specific nutrient management vis-à-vis farmers' fertilizer practice in rice and wheat in both high-input and low-input production systems across the rice–wheat belt of India through large numbers of on-farm comparison trials. Overall, NE-based recommendations reduced N input by 15–35%, increased grain yield by 4–8% and reduced global warming potential by 2–20%. The study also shows that NE-based SSNM is more important for closing the yield gap in low-input systems and for decreasing nutrient input and minimizing nutrient loss in high-input systems. Adoption of NE-based site-specific nutrient management across all rice and wheat growing areas in India would translate into additional grain production of 13.92 Mt, an N consumption reduction of 1.44 Mt and total GHG savings of 5.24 Mt CO2e per year, with some additional use of K fertilizer. In smallholder production systems, where soil testing of each field is nearly impossible, a simple decision support tool such as NE could be helpful to promote site-specific nutrient management, contributing to both food security and environmental sustainability goals.

On-farm comparison trials

We conducted this study in farmers' fields in the states of Punjab, Haryana and Bihar in India (Fig. S4). Punjab and Haryana represent high-input production systems typical of the Western Indo-Gangetic Plains (IGP), whereas Bihar represents relatively low-input production systems typical of the Eastern IGP. In general, intensive mechanized production makes the cereal systems of the IGP GHG-emission intensive. The Western IGP is characterized by a semi-arid climate with mean annual rainfall varying from 544 to 970 mm.
The climate in the Eastern IGP is characterized by hot and humid summers and cold winters, with an average annual rainfall of 1350 mm, 70% of which falls between July and September. An overview of the agro-ecological conditions of the study sites is given in Supplementary Table 1.

We conducted on-farm comparison trials for four years during the 2013–2014 to 2016–2017 cropping seasons. Altogether, we conducted 1594 pair comparison trials (1094 in Haryana, 245 in Punjab and 251 in Bihar), 730 trials for rice and 864 trials for wheat. Each trial had two paired plots – one with a fertilizer recommendation determined by NE and one with FFP. We used basic information about the plots, such as soil characteristics, yield and nutrients applied to previous crops, together with information about the present crop and the target yield, to estimate nutrient recommendations from the NE software (http://software.ipni.net). NE estimates the attainable yield utilizing the information provided by farmers about growing conditions, determines the nutrient balance in the cropping system based on yield and nutrients applied to previous crops, combines such information with soil characteristics to predict the crop response to N, P and K, and generates nutrient recommendations specific to that field16. The plot size ranged from 1000 to 2000 m2 in Haryana and Punjab, and from 500 to 1500 m2 in Bihar. All practices, except fertilizer management, were similar for paired plots within the comparison trials. The participating farmers primarily managed the plots. The researchers consulted with farmers to calculate NE-based recommendations using the Nutrient Expert tool and collected relevant data from the trials.

Crop management in the field

Most farmers adopted intensive tillage practices, i.e., conventional tillage (CT: two harrowings/rotavator passes, two plowings using a tine cultivator, and one field leveling using a wooden plank), whereas some farmers adopted zero tillage (ZT) for growing rice and wheat. In the CT system, rice was established by transplanting 25–30-day-old seedlings in puddled (wet tillage) soil. In the CT system, wheat was established either by broadcasting the seeds in the field after land preparation or by drilling using a seed-cum-fertilizer drill. In the ZT system, both rice and wheat were seeded using a zero-till planter or a turbo Happy Seeder32 without preparatory tillage. Depending upon water availability and farmers' preference, some farmers kept their rice fields continuously flooded, whereas some followed alternate wetting and drying cycles. In Punjab and Haryana, rice fields received 20–25 irrigations per season, whereas in Bihar, farmers applied only 4–8 irrigations depending upon rainfall. In general, wheat received four irrigations of 6–7 cm each at 20–25, 45–50, 75–80 and 110–120 days after sowing.

We calculated NE-based fertilizer recommendations using the farm management information provided by the farmers, supplemented with the soil and climatic conditions of the field. NE-based fertilizer recommendations varied from farm to farm depending upon soil type, cropping history and management practices, whereas a farmer's fertilizer practice was as per his/her prevailing practice. Farmers' fertilizer management practices also varied from farm to farm depending upon farmers' knowledge of fertilizer management, their purchasing power and so on. For both crops, the total amounts of P2O5 and K2O and 15–20% of the N were applied as basal fertilizer using di-ammonium phosphate and muriate of potash. The remaining amount of N was top-dressed in two equal splits 20–25 and 40–50 days after seeding/transplanting using urea under both NE and FFP. Figure 1 shows the average N, P2O5 and K2O rates under NE and FFP for both crops in both agro-ecologies, and Supplementary Fig. S1 presents their pairwise distribution.
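While NE's internal model is more elaborate than any single formula, the budgeting idea behind SSNM recommendations of this kind can be sketched as a simple nutrient balance: the fertilizer requirement is the gap between crop demand at the attainable yield and the soil's indigenous supply, inflated by the expected recovery efficiency of the applied nutrient. The function and numbers below are hypothetical placeholders for illustration only, not NE's actual algorithm or parameter values:

# Minimal sketch of SSNM-style N budgeting (illustrative only; NE itself
# also accounts for yield response, residual nutrients, interactions, etc.).
def fertilizer_n_rate(target_yield_t_ha, n_uptake_kg_per_t,
                      indigenous_n_kg_ha, recovery_efficiency):
    """Fertilizer N (kg/ha) = (crop N demand - indigenous supply) / recovery."""
    demand = target_yield_t_ha * n_uptake_kg_per_t   # N the crop must take up
    gap = max(demand - indigenous_n_kg_ha, 0.0)      # what the soil cannot supply
    return gap / recovery_efficiency                 # scale up for uptake losses

# Hypothetical wheat field: 5 t/ha target, ~25 kg N uptake per tonne of grain,
# 60 kg/ha indigenous N supply, 50% recovery of applied N:
print(fertilizer_n_rate(5.0, 25.0, 60.0, 0.5))       # -> 130.0 kg N/ha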
The remaining N was top-dressed in two equal splits at 20–25 and 40–50 days after seeding/transplanting using urea, under both NE and FFP. Figure 1 shows the average N, P2O5 and K2O rates under NE and FFP for both crops in both agro-ecologies, and Supplementary Fig. S1 presents their pairwise distribution. We recorded and compiled all the management practices in each farmer's field, such as tillage and residue management, nutrient and water management, as well as crop protection. We obtained climate information (temperature and rainfall) for each farm from the nearest agricultural science center. We obtained site-specific soil data, such as texture, soil organic carbon, soil pH and bulk density, from the International Soil Reference and Information Centre (https://www.isric.org/explore/soilgrids) [33]. We also recorded the amount of fuel and electricity consumed for various farm operations during the entire crop cycle. At maturity, we recorded the grain yield (at 13% moisture content) by harvesting three 3-m2 quadrats in each plot.

Estimation of GHG emissions and global warming potential

We estimated GHG emissions from each plot using the CCAFS Mitigation Options Tool [34], hereafter referred to as CCAFS-MOT, which combines several empirical models to estimate GHG emissions from different land uses. The tool recognizes context-specific factors that influence GHG emissions, such as soil and climate, production inputs and management practices. To estimate total GHG emissions from the production systems, i.e., the global warming potential (GWP), we converted all GHG emissions into CO2 equivalents (CO2e) using 100-year global warming potentials of 28 for CH4 and 265 for N2O [35]. We then divided the total GWP by the grain yield to determine GHG emission intensity.

Estimation of fertilizer costs and income from crop yield

As everything except nutrient management was similar within a comparison pair, we used only the fertilizer cost and the income from yield for the comparison between NE and FFP. The year-wise fertilizer costs and grain prices used for the economic analysis are provided in Supplementary Table 2. The fertilizer cost was estimated using the market rate of the respective fertilizer for the respective year, obtained from the Fertilizer Association of India (https://www.faidelhi.org/; Supplementary Table 2). We calculated the income from grain yield by multiplying the total grain yield by the minimum support price (MSP) for the respective year (Supplementary Table 2). The MSP is an agricultural product price at which the Food Corporation of India (FCI) purchases directly from farmers (http://fci.gov.in).

We conducted paired t-test comparisons of the variables of interest using CoStat software [36]. As we had pair comparisons of NE versus FFP in each farmer's field, the paired t-test is appropriate for examining the difference in means. Once the effect of NE over FFP in terms of fertilizer rate, yield, GHG emissions, fertilizer cost and income was determined through the paired t-test, these variables were also subjected to a meta-analysis to determine the influence of various agro-climatic conditions and management factors (e.g., crop type, agro-climatic zone, farmers' fertilizer rate) on the effectiveness of NE over FFP. For this, we considered each on-farm comparison trial (characterized by crop, location, year, soil properties, management information and so on) as a data point.
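A minimal sketch of the statistical comparison described above, assuming the trial data are available as paired per-plot arrays (the example values are hypothetical). `scipy.stats.ttest_rel` is the standard paired t-test, and the log response ratio matches the effect-size formula given in the next paragraph.

```python
import numpy as np
from scipy import stats

# Hypothetical paired yields (t/ha) for NE and FFP plots on the same farms.
yield_ne = np.array([6.1, 5.8, 6.4, 5.9, 6.2])
yield_ffp = np.array([5.7, 5.6, 6.0, 5.8, 5.9])

# Paired t-test: appropriate because NE and FFP sit side by side on each farm.
t_stat, p_value = stats.ttest_rel(yield_ne, yield_ffp)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Effect size as the log response ratio, lnR = ln(mean_NE / mean_FFP),
# as used in the meta-analysis described next.
ln_r = np.log(yield_ne.mean() / yield_ffp.mean())
print(f"lnR = {ln_r:.3f}  (~{100*(np.exp(ln_r)-1):.1f}% change, back-transformed)")
```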
All the trials within a location that were similar in the above-mentioned characteristics constituted a study (a village had one or more studies), and all pair-comparison trials within a specific study were considered replications and used to calculate the standard deviation and effect size in the meta-analysis. We performed the meta-analysis using MetaWin 2.1 in two stages [37,38]. First, we calculated the effect size for each study as the natural log of the response ratio (lnR) using the following equation [39]:

$$\text{Effect size} = \ln R = \ln\left[\frac{X_{\mathrm{NE}}}{X_{\mathrm{FFP}}}\right]$$

where $X_{\mathrm{NE}}$ is the mean of a response variable (yield, N rate, GHG emission intensity, global warming potential) under NE, and $X_{\mathrm{FFP}}$ is the mean of the same variable under FFP. This ratio is comparable between studies, and the logarithmic transformation ensures that variability in the ratio's denominator has no greater influence on the metric than variability in its numerator. We then combined the effect sizes from the studies using a mixed-effect model to calculate the cumulative effect size and the 95% confidence intervals (CIs) through bootstrapping with 4999 iterations [40]. The mixed-effect model is a random-effect meta-analytic model for categorical data [37], assuming random variation among studies within a group and fixed variation between groups. We considered the cumulative effect significant if the CIs did not overlap zero, and the effect sizes among categories significantly different if their CIs did not overlap. For ease of interpretation, we back-transformed the results and reported them as the percentage change caused by NE in relation to FFP. We considered a difference significant only when p values were < 0.05.

References

1. Ross, K. et al. Enhancing NDCs: Opportunities in Agriculture (World Resources Institute and Oxfam, Washington DC, 2019).
2. Carlson, K. M. et al. Greenhouse gas emissions intensity of global croplands. Nat. Clim. Change 7, 63–78 (2017).
3. Kanter, D. R., Zhang, X., Mauzerall, D. L., Malyshev, S. & Shevliakova, E. The importance of climate change and nitrogen use efficiency for future nitrous oxide emissions from agriculture. Environ. Res. Lett. 11, 094003 (2016).
4. IPCC. IPCC Special Report on Climate Change, Desertification, Land Degradation, Sustainable Land Management, Food Security, and Greenhouse Gas Fluxes in Terrestrial Ecosystems. Summary for Policymakers (Intergovernmental Panel on Climate Change, Geneva, Switzerland, 2019).
5. Reynolds, T. W. et al. Environmental impacts and constraints associated with the production of major food crops in Sub-Saharan Africa and South Asia. Food Secur. 7, 795–822 (2015).
6. Spiertz, J. H. J. Nitrogen, sustainable agriculture and food security: A review. Sustain. Agric. 30, 635–651 (2009).
7. Garnett, T. et al. Sustainable intensification in agriculture: Premises and policies. Science 341, 33–34 (2013).
8. Sutton, M. A. et al. Our Nutrient World: The Challenge to Produce More Food and Energy with Less Pollution (Centre for Ecology and Hydrology, Edinburgh, on behalf of the Global Partnership on Nutrient Management and the International Nitrogen Initiative, 2013).
9. Good, A. G. & Beatty, P. H. Fertilizing nature: A tragedy of excess in the commons. PLoS Biol. 9, 1–9 (2011).
10. Buresh, R. J. & Witt, C. Site-specific nutrient management. In Fertilizer Best Management Practices: General Principles, Strategy for Their Adoption and Voluntary Initiatives vs Regulations. Paper presented at the IFA International Workshop on Fertilizer Best Management Practices, 7–9 March 2007, Brussels, Belgium (International Fertilizer Industry Association, 2007).
11. Dobermann, A. & Witt, C. The evolution of site-specific nutrient management in irrigated rice systems of Asia. In Increasing Productivity of Intensive Rice Systems Through Site-Specific Nutrient Management (eds Dobermann, A., Witt, C. & Dawe, D.) 410 (Science Publishers Inc. and International Rice Research Institute (IRRI), 2004).
12. Shapiro, C. A. et al. Using a Chlorophyll Meter to Improve N Management (University of Nebraska–Lincoln Extension Bulletin G, Lincoln, 2013).
13. LCC. Leaf Color Chart (LCC) (2020). http://www.knowledgebank.irri.org/step-by-step-production/growth/soil-fertility/leaf-color-chart.
14. Bijay-Singh et al. Site-specific fertilizer nitrogen management in irrigated transplanted rice (Oryza sativa) using an optical sensor. Precis. Agric. 16, 455–475 (2015).
15. Sapkota, T. B. et al. Precision nutrient management under conservation agriculture-based cereal systems in South Asia. In Climate Change and Agricultural Development: Improving Resilience Through Climate Smart Agriculture, Agroecology and Conservation (ed. Nagothu, U. S.) 322 (Taylor & Francis Group, Routledge, 2016).
16. Pampolino, M. F., Witt, C., Pasuquin, J. M., Johnston, A. & Fisher, M. J. Development approach and evaluation of the Nutrient Expert software for nutrient management in cereal crops. Comput. Electron. Agric. 88, 103–110 (2012).
17. Xu, X. et al. Methodology of fertilizer recommendation based on yield response and agronomic efficiency for rice in China. Field Crops Res. 206, 33–42 (2017).
18. Jat, R. D. et al. Conservation agriculture and precision nutrient management practices in maize–wheat system: Effects on crop and water productivity and economic profitability. Field Crops Res. 222, 111–120 (2018).
19. Sapkota, T. B. et al. Precision nutrient management in conservation agriculture based wheat production of Northwest India: Profitability, nutrient use efficiency and environmental footprint. Field Crops Res. 155, 233–244 (2014).
20. Heffer, P., Gruere, A. & Roberts, T. Assessment of Fertilizer Use by Crop at the Global Level (International Fertilizer Association (IFA) and International Plant Nutrition Institute (IPNI), 2017).
21. Farnworth, C. R., Stirling, C., Sapkota, T. B., Jat, M. L. & Misiko, M. Gender and inorganic nitrogen: What are the implications of moving towards a more balanced use of nitrogen fertilizer in the tropics? Int. J. Agric. Sustain. 15, 136–152 (2017).
22. Singh, V. K., Dwivedi, B. S., Shukla, A. K., Chauhan, Y. S. & Yadav, R. L. Diversification of rice with pigeonpea in a rice–wheat cropping system on a Typic Ustochrept: Effect on soil fertility, yield and nutrient use efficiency. Field Crops Res. 92, 85–105 (2005).
23. Xu, X. et al. Narrowing yield gaps and increasing nutrient use efficiencies using the Nutrient Expert system for maize in Northeast China. Field Crops Res. 194, 75–82 (2016).
24. Sapkota, T. B. et al. Identifying optimum rates of fertilizer nitrogen application to maximize economic return and minimize nitrous oxide emission from rice–wheat systems in the Indo-Gangetic Plains of India. Arch. Agron. Soil Sci. 00, 1–16 (2020).
25. Reay, D. S. et al. Global agriculture and nitrous oxide emissions. Nat. Clim. Change 2, 410–416 (2012).
26. Albanito, F. et al. Direct nitrous oxide emissions from tropical and sub-tropical agricultural systems: A review and modelling of emission factors. Sci. Rep. 7, 1–12 (2017).
27. Zhang, J. J. et al. Nutrient Expert improves nitrogen efficiency and environmental benefits for winter wheat in China. Agron. J. 110, 696–706 (2018).
28. Cui, Z. et al. On-farm evaluation of an in-season nitrogen management strategy based on soil Nmin test. Field Crops Res. 105, 48–55 (2008).
29. Sapkota, T. B. et al. Cost-effective opportunities for climate change mitigation in Indian agriculture. Sci. Total Environ. 655, 1342–1354 (2019).
30. Bordoloi, N., Baruah, K. K. & Hazarika, B. Fertilizer management through coated urea to mitigate greenhouse gas (N2O) emission and improve soil quality in agroclimatic zone of Northeast India. Environ. Sci. Pollut. Res. 27, 11919–11931 (2020).
31. Tewatia, R. K. & Chanda, T. K. Trends in fertilizer nitrogen production and consumption in India. In The Indian Nitrogen Assessment: Sources of Reactive Nitrogen, Environmental and Climate Effects, Management Options and Policies (eds Abrol, Y. P. et al.) 45–56 (Woodhead Publishing, Duxford, United Kingdom, 2017).
32. Sidhu, H. S. et al. Development and evaluation of the Turbo Happy Seeder for sowing wheat into heavy rice residues in NW India. Field Crops Res. 184, 201–212 (2015).
33. Hengl, T. et al. SoilGrids250m: Global gridded soil information based on machine learning. PLoS ONE 12, e0169748 (2017).
34. Feliciano, D., Nayak, D. R., Vetter, S. H. & Hillier, J. CCAFS-MOT: A tool for farmers, extension services and policy-advisors to identify mitigation options for agriculture. Agric. Syst. 154, 100–111 (2017).
35. IPCC. Climate Change 2013: The Physical Science Basis. Working Group I Contribution to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Chapter 8: Anthropogenic and Natural Radiative Forcing (Intergovernmental Panel on Climate Change, 2013).
36. CoHort Software. Monterey, CA, USA (2017). http://www.cohort.com. Accessed 17 July 2017.
37. Rosenberg, M. S., Adams, D. C. & Gurevitch, J. MetaWin: Statistical Software for Meta-analysis, Version 2.0 (Sinauer Associates, Sunderland, 2000).
38. Chakraborty, D. et al. A global analysis of alternative tillage and crop establishment practices for economically and environmentally efficient rice production. Sci. Rep. 7, 1–11 (2017).
39. Hedges, L. V., Gurevitch, J. & Curtis, P. S. The meta-analysis of response ratios in experimental ecology. Ecology 80, 1150–1156 (1999).
40. Adams, D. C., Gurevitch, J. & Rosenberg, M. S. Resampling tests for meta-analysis of ecological data. Ecology 78, 1277–1283 (1997).

Acknowledgements

This work was carried out by the International Maize and Wheat Improvement Center (CIMMYT) in collaboration with farmers, and funded by the CGIAR research programs (CRPs) on Climate Change, Agriculture and Food Security (CCAFS) and Wheat Agri-Food System (WHEAT) and the Indian Council of Agricultural Research (ICAR) under window 3. CCAFS' work is supported by CGIAR Fund Donors and through bilateral funding agreements. For details please visit https://ccafs.cgiar.org/donors. The views expressed in this paper cannot be taken to reflect the official opinions of these organizations. The dataset associated with this manuscript will be available together with its supplementary materials.

Author affiliations

International Maize and Wheat Improvement Center (CIMMYT), El Batan, Mexico: Tek B. Sapkota
International Maize and Wheat Improvement Center (CIMMYT), New Delhi, India: Mangi L. Jat, Kailash Kalvaniya, Gokul Prasad & Munmun Rai
International Rice Research Institute (IRRI), NASC Complex, New Delhi, 110012, India: Dharamvir S. Rana
CGIAR Research Program on Climate Change, Agriculture and Food Security (CCAFS), CIAT-Bioversity Alliance, Cali, Colombia: Arun Khatri-Chhetri
ICAR-Central Soil Salinity Research Institute (CSSRI), Karnal, Haryana, India: Hanuman S. Jat
International Maize and Wheat Improvement Center (CIMMYT), CSSRI, Karnal, India: Deepak Bijarniya
CCS Haryana Agriculture University, Hisar, Haryana, India: Jhabar M. Sutaliya
International Maize and Wheat Improvement Center (CIMMYT), Borlaug Institute for South Asia (BISA), Ludhiana, Punjab, India: Manish Kumar, Love K. Singh & Harminder S. Sidhu
International Maize and Wheat Improvement Center (CIMMYT), Borlaug Institute for South Asia (BISA), Pusa, Samastipur, Bihar, India: Raj K. Jat
International Plant Nutrition Institute (IPNI), Gurgaon, Haryana, 122001, India: T. Satyanarayana
African Plant Nutrition Institute (APNI) & Université Mohammed VI Polytechnique, Benguérir, Morocco

M.L.J. and T.B.S. conceptualized the idea. M.L.J. designed the experiment and coordinated field work. H.S.J., D.B., J.M.S., M.K., L.K.S., R.K.J., K.K. and H.S.S. implemented the on-farm trials and collected data. T.B.S., D.S.R. and G.P. analyzed the data. T.B.S. and A.K.C. drafted the manuscript. All authors reviewed the manuscript and contributed.

Correspondence to Mangi L. Jat.

Sapkota, T.B., Jat, M.L., Rana, D.S. et al. Crop nutrient management using Nutrient Expert improves yield, increases farmers' income and reduces greenhouse gas emissions. Sci. Rep. 11, 1564 (2021). https://doi.org/10.1038/s41598-020-79883-x
On curvature: cars, clothoids and cartography

Issah Merchant discusses the geometric principles behind, and real-world applications of, curvature

A roundabout constructed using clothoids or Euler spirals. Image: Anders Sandberg, CC BY-NC 2.0.

by Issah Merchant. Published on 5 March 2020.

Rollercoasters and railways were originally created without using curvature calculations. People on them would experience undesirable jerk (a sharp change in curvature produces a sudden change in centripetal force, which is felt as jerk). You may still experience this on old railways. The image below depicts a centrifugal railway that was constructed in 1846. Nowadays, rollercoaster and railway designers use a type of curve called a clothoid or Euler spiral to make the change in curvature less abrupt, for example when riding a loop-the-loop. I will later mention a couple of applications of Euler spirals. So curvature is clearly an important concept. Let's get to grips with how it works, and where we should consider it.

A sketch of a centrifugal railway in Manchester, 1904.

Suppose you're in a car that travels along a meandering, curved road. The fact that the road is curved affects your car, and you feel the effect of its curvature, which can be defined as the degree to which the road deviates from a straight line. Since following a curved line means changing direction, you undergo a change in velocity. The formula below combines the slope ($\mathrm{d}y/\mathrm{d}x$) and its rate of change ($\mathrm{d}^2y/\mathrm{d}x^2$) in a way that expresses this 'degree of curvature', which we typically denote with the Greek letter $\kappa$, 'kappa'. We will derive this formula later. $$\kappa = \frac{\frac{\mathrm{d}^2y}{\mathrm{d}x^2}}{\left(1 + \left(\frac{\mathrm{d}y}{\mathrm{d}x}\right)^2 \right)^{3/2}}$$

The osculating circle to the curve $C$ at the point $P$, with radius $r$.

Curvature can be expressed in terms of the radius of an osculating circle, the circle that best fits a curved line at a given point (it is tangent to the curve there and matches its bending). The image on the right shows the osculating circle at the point $P$ on the curve $C$. The radius of the circle is $r$. Different degrees of curvature result in circles with varying radii. This is a good way to describe how much you're turning as you travel along the curve. Thinking back to our example of a car driving along the road, a small turning (osculating) circle means that the car is making a sharp turn. Curvature is the reciprocal of the radius of this circle. Since curvature is a measure of how much something turns, a sharper turn should mean greater curvature. But a sharper turn produces a smaller radius, so we take the value of curvature to be the reciprocal. By contrast, a relatively straight road would trace out a circle with a larger radius, corresponding to a small curvature.

Proof of the curvature formula

Before we look at some more applications of curvature, let's derive the algebraic expression for $\kappa$. Well, actually we will derive the expression for the radius of curvature $\rho$ ('rho') and then we just have $\kappa = 1/\rho$ by the argument above. We want to see how $\rho$ changes as we move along the curve, which has arc-length $s$ and angle $\theta$ measured from the horizontal. By Pythagoras' theorem, the change in arc length is $\mathrm{d}s^2 = \mathrm{d}x^2 + \mathrm{d}y^2$; that is, this is the expression for how the length of the curve $s$ changes as we adjust $x$ and $y$ by some small amounts $\mathrm{d}x$ and $\mathrm{d}y$.
By trigonometry, we can express $x$ and $y$ in terms of $\rho$ and $\theta$: $x = \rho \cos{\theta}$ and $y = \rho \sin{\theta}$. The product rule applied to these two expressions gives us $$ \frac{\mathrm{d}x}{\mathrm{d}\theta} = \cos{\theta} \frac{\mathrm{d} \rho}{\mathrm{d}\theta} - \rho \sin{\theta}, \\ \frac{\mathrm{d}y}{\mathrm{d}\theta} = \sin{\theta} \frac{\mathrm{d} \rho}{\mathrm{d}\theta} + \rho \cos{\theta}.$$ Multiplying through by $\mathrm{d}\theta$ and combining these in the arc-length formula, we find $$\mathrm{d}s^2 =\mathrm{d}\rho^2 + \rho^2 \mathrm{d} \theta^2,$$ where for infinitesimal changes along the curve the change $\mathrm{d}\rho$ is negligible, so that $\mathrm{d}s = \rho\, \mathrm{d}\theta$. So we just need to find $\rho = \mathrm{d}s/\mathrm{d}\theta$. But we already know $\mathrm{d}s$! It remains to find $\mathrm{d}\theta$, that is, how the angle changes as we move along the curve. Again, trigonometry helps us out. We know that $\tan{\theta} = \mathrm{d}y/\mathrm{d}x$. Differentiating both sides with respect to $x$, and using a trig identity (a nice exercise!) we get: $$\left(1 + \left(\frac{\mathrm{d}y}{\mathrm{d}x}\right)^2\right) \frac{\mathrm{d}\theta}{\mathrm{d}x} = \frac{\mathrm{d}^2y}{\mathrm{d}x^2}.$$ Combining the expressions for $\mathrm{d}\theta$ and $\mathrm{d}s$ we get: $$\rho = \frac{\mathrm{d}s}{\mathrm{d}\theta} = \frac{\left(1 + \left(\frac{\mathrm{d}y}{\mathrm{d}x}\right)^2 \right)^{3/2}}{ \frac{\mathrm{d}^2y}{\mathrm{d}x^2} },$$ which is just the reciprocal of the expression for $\kappa$ that we started with. Phew! Now let's think about why curvature is important in the real world, and some of the places where designers have to take it into account.

The Mercator projection

Throughout the 16th century, cartographers created maps of the world which included distortions of land. In 1569 the Flemish geographer Gerardus Mercator came up with a (not-so-perfect but still pretty good) solution to the problem of mapping the Earth, a three-dimensional planet, onto a two-dimensional map. It is known as the Mercator projection.

The Mercator projection distorts countries that are close to the poles. Image: Strebe, CC BY-SA 3.0.

A map is a two-dimensional object, so it is flat with zero curvature. But the Gaussian curvature of the globe isn't zero, so you can't lay it flat without distortions. This means it is impossible to create a flat map of the round Earth without distorting it in some way. There is an inevitable sacrifice: either the relative areas of locations are distorted, depending on their proximity to the poles, or their relative positions are, which results in countries being jumbled. The Mercator projection can be conceptualised as projecting the globe onto a cylinder, then unrolling it to create a flat surface. Suppose you had a globe with a lightbulb in the middle of it. The lightbulb emanates light and maps countries onto the cylinder through shadows and areas of light. We can then unroll the cylinder to produce a two-dimensional map. Distortion of area occurs as you get closer to the poles, because the 'shadow' produced by the land as it is projected onto the cylinder is larger there than at the equator. This is particularly true for countries such as Greenland, which appear to be larger than they actually are. While this is a disadvantage of the Mercator projection, it is advantageous for navigation as it preserves direction and keeps the shape of countries. The website thetruesize.com allows you to play around with the Mercator projection, moving countries around to get a sense of how distorted they are.
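As a quick sanity check on the curvature formula derived above, here is a short SymPy sketch (my own verification, not part of the original article). For a circle of radius 5 the curvature should have magnitude 1/5 everywhere; the minus sign simply records that the upper semicircle is concave down, since the article's formula is the signed curvature.

```python
import sympy as sp

x = sp.symbols('x')

def kappa(y):
    """Curvature from the formula derived above: y'' / (1 + y'^2)^(3/2)."""
    return sp.diff(y, x, 2) / (1 + sp.diff(y, x)**2)**sp.Rational(3, 2)

# Upper half of a circle of radius 5: |curvature| should be 1/5 everywhere.
semicircle = sp.sqrt(25 - x**2)
for x0 in (0, 1, 3):
    print(kappa(semicircle).subs(x, x0))   # -1/5 each time

# A parabola y = x^2: kappa = 2/(1 + 4x^2)^(3/2), largest at the vertex.
print(sp.simplify(kappa(x**2)))
```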
The Euler spiral

On the other hand, if we sacrifice location and instead attempt to create a map which gives accurate relative sizes of land, we could cut up the globe into an Euler spiral. An Euler spiral is a curve whose curvature changes linearly with its length. To make an Euler spiral from a globe, start at the North pole and cut around the globe in a spiral, moving downwards as you go. You'll end up with a spiral shape, which you can flatten out. The distortion of the area of locations decreases by the inverse square of the number of spirals. That is, the amount the globe has to deform to be represented on a two-dimensional plane tends to zero as we make infinitely many spirals. Numberphile have a video explaining this.

A double-ended Euler spiral. As the number of loops grows, the curve tends towards the points marked with a cross. Image: AdiJapan, CC BY-SA 3.0.

Euler spirals are used by engineers in road design. They allow for smoother transition curves and reduce the deceleration needed as cars change direction onto another road. An object travelling in a circle experiences centripetal acceleration (towards the centre of the osculating circle), and Euler spirals are useful because they allow a vehicle to change direction with a centripetal acceleration that increases linearly (gently) with the curve length. You can imagine that if one were to turn abruptly off the tangent, it would strain the vehicle and the passenger would feel an uncomfortable change in centripetal acceleration (jerk). This is the same principle at play in the rollercoaster I mentioned earlier.

More curves

Other applications of curvature include car design and graphic design. The French engineer Pierre Bézier created a type of parametric curve while designing cars for Renault in the 1960s. Bézier curves are created by using 'control points'. The number of control points is equal to one more than the curve's order (so four points for a cubic curve). A quadratic Bézier curve is described by three control points: $P_0$, $P_1$ and $P_2$. Starting at $P_0$, the curve moves towards $P_2$, passing close to $P_1$. The Bézier curve is defined in such a way that the curve gets close to the control points while maintaining a smooth change in curvature, so that it still looks natural. Specifically, the curve is constructed by moving along a line (the green line below) that connects points sliding along the straight lines between successive control points (the grey lines). If the curve is described by a parameter $t$, then the relative position along each of the construction lines is the same (watch how all the dots move as $t$ increases in the animation below; at $t = 0.5$ the green dots are halfway between the control points, and the black dot which traces the curve is halfway along the green line). Bézier curves are also used in graphic design to make complex shapes, draw smoother lines and create the feeling of realistic motion.

The construction of a quadratic Bézier curve.

In conclusion, the use of calculus has made the concept of curvature mathematically explicit, and this has given us smooth rollercoasters, allowed us to understand maps with greater accuracy, and much more besides.

Issah Merchant

Issah Merchant is a Year 11 student at Harrow School, where he is taught by Natalya Silcott. At school, Issah has given talks on topology and the Euler characteristic, the Riemann hypothesis, and Euler's identity.
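To make the quadratic Bézier construction described above concrete, here is a minimal sketch of the interpolation scheme (my own illustration; the control-point values are arbitrary examples):

```python
import numpy as np

def quadratic_bezier(p0, p1, p2, t):
    """Point at parameter t, built exactly as in the animation:
    interpolate along P0-P1 and P1-P2, then interpolate between those."""
    a = (1 - t) * p0 + t * p1   # point on the first construction line
    b = (1 - t) * p1 + t * p2   # point on the second construction line
    return (1 - t) * a + t * b  # point on the green line, tracing the curve

p0, p1, p2 = np.array([0.0, 0.0]), np.array([1.0, 2.0]), np.array([2.0, 0.0])
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(t, quadratic_bezier(p0, p1, p2, t))
# At t = 0.5 each construction point is halfway along its line, as in the figure.
```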
Quantum information processing, science of

The theoretical, experimental and technological areas covering the use of quantum mechanics for communication and computation. Quantum information processing includes investigations in quantum information theory, quantum communication, quantum computation, quantum algorithms and their complexity, and quantum control. The science of quantum information processing is a highly interdisciplinary field. In the context of mathematics it is stimulating research in pure mathematics (e.g. coding theory, $*$-algebras, quantum topology) as well as requiring and providing many opportunities for applied mathematics.

The science of quantum information processing emerged from the recognition that usable notions of information need to be physically implementable. In the 1960s and 1970s, researchers such as R. Landauer, C. Bennett, C. Helstrom, and A. Holevo realized that the laws of physics give rise to fundamental constraints on the ability to implement and manipulate information. Landauer repeatedly stated that "information is physical", providing impetus to the idea that it should be possible to found theories of information on the laws of physics. This is in contrast to the introspective approach which led to the basic definitions of computer science and information theory as formulated by A. Church, A.M. Turing, C. Shannon and others in the first half of the 20th century (cf. also [[Turing machine|Turing machine]]).

Early work in studying the physical foundations of information focused on the effects of energy limitations and the need for dissipating heat in computation and communication. Beginning with the work of S. Wiesner on applications of quantum mechanics to [[Cryptography|cryptography]] in the late 1960s, it was realized that there may be intrinsic advantages to using quantum physics in information processing. Quantum cryptography and quantum communication in general were soon established as interesting and non-trivial extensions of classical communication based on bits. That quantum mechanics may be used to improve the efficiency of algorithms was first realized when attempts at simulating quantum mechanical systems resulted in exponentially complex algorithms compared to the physical resources associated with the system simulated. In the 1980s, P. Benioff and R. Feynman introduced the idea of a quantum computer for efficiently implementing quantum physics simulations. Models of quantum computers were developed by D. Deutsch, leading to the formulation of artificial problems that could be solved more efficiently by quantum than by classical computers. The advantages of quantum computers became widely recognized when P. Shor (1994) discovered that they can be used to efficiently factor large numbers — a problem believed to be hard for classical deterministic or probabilistic computation and whose difficulty underlies the security of widely-used public-key encryption methods.
Subsequent work established principles of quantum error-correction to ensure that quantum information processing was robustly implementable. See [[#References|[a3]]], [[#References|[a1]]] for introductions to quantum information processing and a quantum mechanics tutorial.

In the context of quantum information theory, information in the sense of Shannon is referred to as classical information. The fundamental unit of classical information is the bit, which can be understood as an ideal system in one of two states or configurations, usually denoted by $0$ and $1$. The fundamental units of quantum information are qubits (short for quantum bits), whose states are identified with all "unit superpositions" of the classical states. It is common practice to use the bra-ket conventions for denoting states. In these conventions, the classical configurations are denoted by $| 0 \rangle$ and $| 1 \rangle$, and superpositions are formal sums $\alpha | 0 \rangle + \beta | 1 \rangle$, where $\alpha$ and $\beta$ are complex numbers satisfying $| \alpha | ^ { 2 } + | \beta | ^ { 2 } = 1$. The states $| 0 \rangle$ and $| 1 \rangle$ represent a standard orthonormal basis of a two-dimensional [[Hilbert space|Hilbert space]]. Their superpositions are unit vectors in this space. The state space associated with $n > 1$ qubits is formally the tensor product of the Hilbert spaces of each qubit. This state space can also be obtained as an extension of the state space of $n$ classical bits by identifying the classical configurations with a standard orthonormal basis of a $2 ^ { n }$-dimensional Hilbert space.

Access to qubit states is based on the postulates of quantum mechanics, with the additional restriction that elementary operations are local in the sense that they apply to one or two qubits at a time. Most operations can be expressed in terms of standard measurements of a qubit and two-qubit quantum gates. The standard qubit measurement has the effect of randomly projecting the state of the qubit onto one of its classical states; this state is an output of the measurement (accessible for use in a classical computer if desired). For example, using the tensor product representation of the state space of several qubits, a measurement of the first qubit is associated with the two projection operators $P _ { 0 } ^ { ( 1 ) } = P _ { 0 } \otimes I \otimes \ldots$ and $1 - P _ { 0 } ^ { ( 1 ) }$, where $P _ { 0 } | 0 \rangle = | 0 \rangle$ and $P _ { 0 } | 1 \rangle = 0$. If $\psi$ is the initial state of the qubits, then the measurement outcome is $0$ with probability $p _ { 0 } = \| P _ { 0 } ^ { ( 1 ) } \psi \| ^ { 2 }$, in which case the new state is $P _ { 0 } ^ { ( 1 ) } \psi / \sqrt{ p _ { 0 } }$, and the outcome is $1$ with probability $1 - p _ { 0 } = \| P _ { 1 } \psi \| ^ { 2 }$, with new state $P _ { 1 } \psi / \sqrt{ 1 - p _ { 0 } }$, where $P _ { 1 } = 1 - P _ { 0 } ^ { ( 1 ) }$. This is a special case of a von Neumann measurement.

A general two-qubit quantum gate is associated with a [[Unitary operator|unitary operator]] $U$ acting on the state space of two qubits. Thus, $U$ may be represented by a $4 \times 4$ unitary matrix in the standard basis of two qubits. The quantum gate may be applied to any two chosen qubits. For example, if the state of $n$ qubits is $\psi$ and the gate is applied to the first two qubits, then the new state is given by $( U \otimes I \otimes \ldots ) \psi$.
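These postulates translate directly into small linear-algebra computations. The following sketch (an illustration of mine, not part of the original article) applies the standard measurement of the first of two qubits and then a two-qubit gate, taking the CNOT gate as the example $U$:

```python
import numpy as np

# Standard basis projector P0 = |0><0| and the 2x2 identity
P0 = np.array([[1, 0], [0, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Two-qubit state psi = (|00> + |11>)/sqrt(2)
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

# Measure the first qubit: P0^(1) = P0 (x) I
P0_1 = np.kron(P0, I2)
p0 = np.linalg.norm(P0_1 @ psi) ** 2        # outcome 0 with probability 1/2
post = P0_1 @ psi / np.sqrt(p0)             # post-measurement state: |00>
print(p0, post)

# A two-qubit gate U (here CNOT) applied to the two qubits: psi -> U psi
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
print(CNOT @ psi)                           # (|00> + |10>)/sqrt(2)
```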
Another important operation of quantum information processing is preparation of the $| 0 \rangle$ state of a qubit, which can be implemented in terms of a measurement and subsequent applications of a gate depending on the outcome. Most problems of theoretical quantum information processing can be cast in terms of the elementary operations above, restrictions on how they can be used and an accounting of the physical resources or cost associated with implementing the operations. Since classical information processing may be viewed as a special case of quantum information processing, problems of classical information theory and computation are generalized and greatly enriched by the availability of quantum superpositions. The two main problem areas of theoretical quantum information processing are quantum computation and quantum communication.

In studies of quantum computation (cf. [[Quantum computation, theory of|Quantum computation, theory of]]) one investigates how the availability of qubits can be used to improve the efficiency of algorithmic problem solving. Resources counted include the number of quantum gates applied and the number of qubits accessed. This can be done by defining and investigating various types of quantum automata, most prominently quantum Turing machines, and studying their behaviour using approaches borrowed from the classical theory of automata and languages. It is convenient to combine classical and quantum automata, for example by allowing a classical computer access to qubits as defined above, and then investigating the complexity of algorithms by counting both classical and quantum resources, thus obtaining trade-offs between the two. Most of the complexity classes for classical computation have analogues for quantum computation, and an important research area is concerned with establishing relationships between these complexity classes (cf. also [[Complexity theory|Complexity theory]]). Corresponding to the classical class $\mathcal{P}$ of polynomially decidable languages is the class of languages decidable in bounded-error quantum polynomial time, $\mathbf{BQP}$. While it is believed that $\mathcal{P}$ is properly contained in $\mathbf{BQP}$, whether this is so is at present (2000) an open problem. $\mathbf{BQP}$ is known to be contained in the class $\mathcal{P} ^ { \# \mathcal{P} }$ (languages decidable in classical polynomial time given access to an oracle for computing the permanent of $0$-$1$ matrices), but the relationship of $\mathbf{BQP}$ to the important class of nondeterministic polynomial time languages $\cal N P$ is not known (cf. also [[NP|$\cal N P$]]).

In quantum communication one considers the situation where two or more entities with access to local qubits can make use of both classical and quantum (communication) channels for exchanging information (cf. also [[Quantum communication channel|Quantum communication channel]]). The basic operations now include the ability to send classical bits and the ability to send quantum bits. There are two main areas of investigation in quantum communication. The first aims at determining the advantages of quantum communication for solving classically posed communication problems, with applications to [[Cryptography|cryptography]] and to distributed computation. The second is concerned with establishing relationships between different types of communication resources, particularly with respect to noisy quantum channels, thus generalizing classical communication theory (cf.
also [[Shannon theorem|Shannon theorem]]). Early investigations of quantum channels focused on using them for transmitting classical information by encoding a source of information (cf. [[Information, source of|Information, source of]]) with uses of a quantum channel (cf. [[Quantum communication channel|Quantum communication channel]]). The central result of these investigations is Holevo's bound (1973) on the amount of classical information that can be conveyed through a quantum channel. Asymptotic achievability of the bound (using block coding of the information source) was shown in the closing years of the 20th century. With some technical caveats, the bound and its achievability form a quantum information-theoretic analogue of Shannon's capacity theorem for classical communication channels. Quantum cryptography, distributed quantum computation and quantum memory require transmitting (or storing) quantum states. As a result it is of great interest to understand how one can communicate quantum information through quantum channels. In this case, the source of information is replaced by a source of quantum states, which are to be transmitted through the channel with high fidelity. As in the classical case, the state is encoded before transmission and decoded afterwards. There are many measures of fidelity which may be used to evaluate the quality of the transmission protocol. They are chosen so that a good fidelity value implies that with high probability, quantum information processing tasks behave the same using the original or the transmitted states. A commonly used fidelity measure is the Bures–Uhlmann fidelity, which is an extension of the Hilbert space norm to probability distributions of states (represented by density operators). In most cases, asymptotic properties of quantum channels do not depend on the details of the fidelity measure adopted. To improve the reliability of transmission over a noisy quantum channel, one uses quantum error-correcting codes to encode a state generated by the quantum information source with multiple uses of the channel (cf. also [[Error-correcting code|Error-correcting code]]). The theory of quantum codes can be viewed as an extension of classical coding theory. Concepts such as minimum distance and its relationship to error correction generalize to quantum codes. Many results from the classical theory, including some linear programming upper bounds and the Gilbert–Varshamov lower bounds on the achievable rates of classical codes, have their analogues for quantum codes. In the classical theory, linear codes are particularly useful and play a special role. In the quantum theory, this role is played by the stabilizer or additive quantum codes, which are in one-to-one correspondence with self-dual (with respect to a specific symplectic inner product) classical $\operatorname {GF} _ { 2 }$-linear codes over $\operatorname{GF} _ { 4 }$ (cf. [[Finite field|Finite field]]). The capacity of a quantum channel with respect to encoding with quantum codes is not as well understood as the capacity for transmission of classical information. The exact capacity is known only for a few special classes of quantum channels. Although there are information-theoretic upper bounds, they depend on the number of channel instances, and whether or not they can be achieved is an open problem (as of 2000). 
A further complication is that the capacity of quantum channels depends on whether one-way or two-way classical communication may be used to restore the transmitted quantum information [[#References|[a7]]]. The above examples illustrate the fact that there are many different types of information utilized in quantum information theory, making it a richer subject than classical information theory.

Another physical resource whose properties appear to be best described by information-theoretic means is quantum entanglement. A quantum state of more than one quantum system (e.g. two qubits) is said to be entangled if the state cannot be factorized as a product of states of the individual quantum systems. Entanglement is believed to play a crucial role in quantum information processing, as demonstrated by its enabling role in effects such as quantum key distribution, superdense coding, quantum teleportation, and quantum error-correction. Beginning in 1995, an enormous amount of effort has been devoted to understanding the principles governing the behaviour of entanglement. This has resulted in the discovery of connections between quantum entanglement and classical information theory, the theory of positive mappings [[#References|[a2]]] and majorization [[#References|[a4]]] (cf. also [[Majorization ordering|Majorization ordering]]).

The investigation of quantum channel capacity, entanglement and many other areas of quantum information processing involves various quantum generalizations of the notion of [[Entropy|entropy]], most notably the von Neumann entropy. The von Neumann entropy is defined as $H ( \rho ) = - \operatorname { Tr } \rho \operatorname { log } _ { 2 } ( \rho )$ for density operators $\rho$ ($\rho$ is positive Hermitian and of trace $1$); note the minus sign, which makes $H$ nonnegative. It has many (but not all) of the properties of the classical information function $H ( . )$ (cf. [[Information, amount of|Information, amount of]]). Understanding these properties has been crucial to the development of quantum information processing (see [[#References|[a3]]], [[#References|[a6]]], [[#References|[a5]]] for reviews). Probably the most powerful known result about the von Neumann entropy is the strong subadditivity inequality. Many of the bounds on quantum communication follow as easy corollaries of strong subadditivity. Whether still more powerful entropic inequalities exist is not known (as of 2000).

An important property of both classical and quantum information is that although it is intended to be physically realizable, it is abstractly defined and therefore independent of the details of a physical realization. It is generally believed that qubits encapsulate everything that is finitely realizable using accessible physics. This belief implies that any information processing implemented by available physical systems using resources appropriate for those systems can be implemented as efficiently (with at most polynomial overhead) using qubits. It is noteworthy that there is presently (2000) no proof that information processing based on quantum field theory (cf. [[Quantum field theory|Quantum field theory]]) is not more efficient than information processing with qubits. Furthermore, the as-yet unresolved problem of combining quantum mechanics with general relativity in a theory of quantum gravity prevents a fully satisfactory analysis of the information processing power afforded by fundamental physical laws.
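A small numerical illustration of the von Neumann entropy just defined (again my own sketch, not part of the original article): a pure state has $H(\rho)=0$, while the maximally mixed qubit has $H(\rho)=1$, the entropy of a fair coin.

```python
import numpy as np

def von_neumann_entropy(rho):
    """H(rho) = -Tr(rho log2 rho), computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # 0 * log 0 = 0 by convention
    return float(-np.sum(evals * np.log2(evals)))

pure = np.array([[1, 0], [0, 0]], dtype=float)   # |0><0|, a pure state
mixed = np.eye(2) / 2                            # maximally mixed qubit

print(von_neumann_entropy(pure))    # 0.0
print(von_neumann_entropy(mixed))   # 1.0
```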
Much effort in the science of quantum information processing is being expended on developing and testing the technology required for implementing it. An important task in this direction is to establish that quantum information processing can be implemented robustly in the presence of noise. At first it was believed that this was not possible. Arguments against the robustness of quantum information were based on the apparent relationship to analogue computation (due to the continuity of the amplitudes in the superpositions of configurations) and the fact that it seemed difficult to observe quantum superpositions in nature (due to the rapid loss of phase relationships, called decoherence). However, the work on quantum error-correcting codes rapidly led to the realization that, provided the physical noise behaves locally and is not too large, it is at least in principle possible to process quantum information fault tolerantly. Research in how to process quantum information reliably continues; the main problem is improving the estimates on the maximum amount of tolerable noise for general models of quantum noise and for the types of noise expected in specific physical systems. Other issues include the need to take into consideration restrictions imposed by possible architectures and interconnection networks. There are many physical systems that can potentially be used for quantum information processing [[#References|[a8]]]. An active area of investigation involves determining the general mathematical features of quantum mechanics required for implementing quantum information. More closely tied to existing experimental techniques are studies of specific physical systems. In the context of communication, optical systems are likely to play an important role, while for computation there are proposals for using electrons or nuclei in solid state, ions or atoms in electromagnetic traps, excitations of superconductive devices, etc. In all of these, important theoretical issues arise. These issues include how to optimally use the available means for controlling the quantum systems (quantum control), how to best realize quantum information (possibly indirectly), what architectures can be implemented, how to translate abstract sequences of quantum gates to physical control actions, how to interface the system with optics for communication, refining the theoretical models for how the system is affected by noise and thermodynamic effects, and how to reduce the effects of noise. ====References==== <table><tr><td valign="top">[a1]</td> <td valign="top"> J. Gruska, "Quantum computing" , McGraw-Hill (1999)</td></tr><tr><td valign="top">[a2]</td> <td valign="top"> M. Horodecki, P. Horodecki, R. Horodecki, "Separability of mixed states: necessary and sufficient conditions" ''Phys. Lett. A'' , '''223''' : 1–2 (1996) pp. 1–8</td></tr><tr><td valign="top">[a3]</td> <td valign="top"> M.A. Nielsen, I.L. Chuang, "Quantum computation and quantum information" , Cambridge Univ. Press (2000)</td></tr><tr><td valign="top">[a4]</td> <td valign="top"> M.A. Nielsen, "A partial order on the entangled states" ''quant-ph/9811053'' (1998)</td></tr><tr><td valign="top">[a5]</td> <td valign="top"> M. Ohya, D. Petz, "Quantum entropy and its use" , Springer (1993)</td></tr><tr><td valign="top">[a6]</td> <td valign="top"> A. Wehrl, "General properties of entropy" ''Rev. Mod. Phys.'' , '''50''' (1978) pp. 221</td></tr><tr><td valign="top">[a7]</td> <td valign="top"> C.H. Bennett, D.P. DiVincenzo, J.A. Smolin, W.K. 
Wootters, "Mixed state entanglement and quantum error-correcting codes" ''Phys. Rev. A'' , '''54''' (1996) pp. 3824–3851</td></tr><tr><td valign="top">[a8]</td> <td valign="top"> Special focus issue, "Experimental proposals for quantum computation" ''Fortschr. Phys.'' , '''48''' (2000) pp. 767–1138</td></tr></table> Template:TEX (view source) Return to Quantum information processing, science of. Quantum information processing, science of. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Quantum_information_processing,_science_of&oldid=50044 This article was adapted from an original article by E.H. KnillM.A. Nielsen (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article Retrieved from "https://encyclopediaofmath.org/wiki/Quantum_information_processing,_science_of"
How to define a bijection between $(0,1)$ and $(0,1]$?

How to define a bijection between $(0,1)$ and $(0,1]$? Or any other open and closed intervals?

If the intervals are both open, like $(-1,2)$ and $(-5,4)$, I do a cheap trick (don't know if that's how you're supposed to do it): I make a function $f : (-1, 2)\rightarrow (-5, 4)$ of the form $f(x)=mx+b$ by \begin{align*} -5 = f(-1) &= m(-1)+b \\ 4 = f(2) &= m(2) + b \end{align*} Solving for $m$ and $b$ I find $m=3$ and $b=-2$, so $f(x)=3x-2$. Then I show that $f$ is a bijection by showing that it is injective and surjective.

Similar question on Quora: quora.com/What-is-a-bijection-between-0-2-and-0-2?share=1 – Martin Sleziak Feb 3 '16 at 18:48

Choose an infinite sequence $(x_n)_{n\geqslant1}$ of distinct elements of $(0,1)$. Let $X=\{x_n\mid n\geqslant1\}$, hence $X\subset(0,1)$. Let $x_0=1$. Define $f(x_n)=x_{n+1}$ for every $n\geqslant0$ and $f(x)=x$ for every $x$ in $(0,1)\setminus X$. Then $f$ is defined on $(0,1]$ and the map $f:(0,1]\to(0,1)$ is bijective. To sum up, one extracts a copy of $\mathbb N$ from $(0,1)$ and one uses the fact that the map $n\mapsto n+1$ is a bijection between $\mathbb N\cup\{0\}$ and $\mathbb N$.

Did

One can choose $x_n=1/n$ to avoid Axiom of Choice ^^ – Gaston Burrull Apr 1 '13 at 7:45

@GastónBurrull Well, "choose" does not refer to the axiom of choice here but simply to the fact that any such sequence (for example the one you suggest) does the job. (But perhaps your comment was tongue-in-cheek... :-)) – Did Apr 1 '13 at 8:07

It's like the Hilbert hotel. If you slide all the guests to the right, you gain an extra empty room. If you slide all the guests to the left, you gain an extra unroomed guest. This trick underlies all bijections among open, closed, or half-open intervals of reals. – user4894 Aug 22 '16 at 17:59

@Parcly That anybody would decide to edit a 5 years old answer, only to replace the (correct) command \mathbb by the (deprecated in LaTeX for 20 years) command \Bbb, is beyond me. – Did Dec 6 '17 at 11:22

Try something like the function in the following picture: If you only have to show that such a bijection exists, you can use the Cantor–Bernstein theorem and $(0,1)\subseteq (0,1] \subseteq (0,2)$. See also open and closed intervals have the same cardinality at PlanetMath.

Martin Sleziak

What about continuous bijection? will it exist? – Math geek Jun 9 '19 at 7:43

@Mathgeek Using the fact that a continuous injective real function is monotone should help you in showing that any such bijection cannot be continuous. You can have a look at some similar questions on this site, such as Continuous bijection from $(0,1)$ to $[0,1]$ (and other posts linked there) or Continuous, bijective function from $f:[0,1)\to \mathbb{R}$. – Martin Sleziak Jun 9 '19 at 7:54

Thank you very much :) – Math geek Jun 9 '19 at 15:23

Despite the function not being continuous, we can actually construct one of the aforementioned bijections as a pointwise limit of continuous functions, e.g., $$\lim_{n\to\infty}x+\frac{x^{2}}{1-x}\cos\left(\frac{\pi}{x}\right)^{2n}$$ – Jam Feb 29 '20 at 21:35

Let $A=\{\frac{1}{2},\frac{1}{3},\ldots\}$ and $B=\{1,\frac{1}{2},\frac{1}{3},\ldots\}$.
Define $f:A\rightarrow B$ such that $f(\frac{1}{n})=\frac{1}{n-1}$. It is easy to show that $f$ is a bijection. Then define a function $g:(0,1) \rightarrow (0,1]$ such that $g(x)=x$ if $x$ is not in $A$, otherwise $g(x)=f(x)$. Then $g$ is a required bijection from $(0,1)$ to $(0,1]$.

Remark: We can always solve this kind of question by picking a countable proper subset from (say) $(0,1)$, then defining a bijection $f$ so that the image of $f$ is a little bit bigger than its domain, and then defining a function which is equal to $f$ on the picked countable set and the identity function outside that set.

thanks that makes it a lot clearer for me. If I was to follow that logic for let's say $[0,1]^2$ and $[0,2]^2$ would I solve for $[0,1]$ and $[0,2]$ first by letting A = {0,1} and b = {0,2} and then f: A->B ? – user1411893 Jun 20 '12 at 12:35

Between $[0,1]^2$ and $[0,2]^2$, I would rather use the bijection $(x,y)\mapsto(2x,2y)$. – Did Jun 20 '12 at 14:53

We will show that both sets are in bijection with $S^1\times \mathbb{Z}$. Consider $(0,1)$. This is in bijection with $\mathbb{R}$ (for example, scale the interval to $(-\pi/2, \pi/2)$ and apply the tangent function). We can map $\mathbb{R}$ to $S^1\times \mathbb{Z}$ bijectively using the map $t\rightarrow (e^{2\pi i t},\lfloor t \rfloor)$. Any set homeomorphic to $(0,1]$ can be put into bijection with $S^1$ using the map $t\rightarrow e^{2\pi i t}$. It remains to show that $(0,1]$ is in bijection with countably many copies of itself. To see this, note that the map $x\rightarrow -\frac{1}{x}$ takes $(0,1]$ to $(-\infty, -1]$, and consider the partition $$\cdots (-4,-3],\ (-3,-2],\ (-2,-1].$$ This seems unnecessarily complicated, and I think you can just map both sets to $\mathbb{R}$ and circumvent the circle stuff, but this is how I figured it out.

Potato

+1. Actually, my mental image of this is a bijection with $(0,1]\times\mathbb Z$ rather than with $S^1\times\mathbb Z$ but this construction is definitely worthwhile to remember. – Did Jun 21 '12 at 6:56

Here is another way to say the same thing. I reverse the half-open intervals. I want to show that the intervals $A=\left[ 0,\infty \right)$ and $B= \left( -\infty,\infty \right)$ are in bijective correspondence. By chopping up in half-open pieces (of length one) like $\ldots, \left[ 3,4 \right) , \left[ 4,5 \right) , \left[ 5,6 \right) ,\ldots$, the interval $A$ is naturally in correspondence $A \approx \left[ 0,1 \right) \times\mathbb{N}$, while for $B$ that is $B \approx \left[ 0,1 \right) \times\mathbb{Z}$. But since it is known that $\mathbb{N}$ and $\mathbb{Z}$ are "equal", we are done. – Jeppe Stig Nielsen Jun 30 '15 at 14:44

I thought to supplement Did's answer with this picture that I sketched. The blue line represents the set $\color{#0073CF}{(0,1) - \{x_n\}^{\infty}_{n \geq 1}}$. The orange circles are elements of the infinite sequence $\color{#FF4F00}{X = \{x_n\}^{\infty}_{n \geq 1}}$, but I plotted only 4 (I chose 4 arbitrarily) circles because it is impossible to plot all elements of an infinite sequence. Subscripts (the order of the points) were arbitrarily assigned to each orange point.
So the depiction above of $f : (0,1] \rightarrow (0,1) $ can be defined with this formula: $f(\color{#FF4F00}{x_n}) = \color{#FF4F00}{x_{n + 1}} \quad \forall \;n \geq 0$ and $f(\color{#0073CF}{x}) = \color{#0073CF}{x} \qquad \quad \forall \; \color{#0073CF}{x \in {(0,1) - \{x_n\}^{\infty}_{n \geq 1}}}$.

NNOX Apps

Clearly, $(\mathbb{R}-\mathbb{Q})\cap [0,1]=(\mathbb{R}-\mathbb{Q})\cap (0,1)$. So, set some enumeration for the rationals on $[0,1]$, $(r_{n})_{n \ge 1}$, with $r_1 = 0$ and $r_2 = 1$. Thus, define a function $f:(0,1) \to (0,1]$ to act like the identity on the set of irrationals and, on the set of rationals, set $f(r_j)=r_{j-1}$ for all $j \ge 3$. This is of course a bijection.

leticia

I think for the set of rationals the new enumeration would be $f(r_j)=x_j$ for $j\geq3$, where $(x_n)$ is the identity mapping on the set of rationals on $(0,1)$ – jaggu Sep 22 '16 at 10:48

This seems to mimic an answer posted four years earlier, only with mistakes which make that the construction does not work. (@upvoter Why the upvote?) – Did Apr 15 '17 at 10:24

If $A$ is a countably infinite set and $a$ is any element in $A$, then there is a bijection between $A \setminus \{a\}$ and $A$. Or, equivalently, if $A$ is contained strictly in $B$, $b$ is in $B \setminus A$, and $A$ is countably infinite, then there is a bijection between $A \cup \{b\}$ and $A$.

To prove the second claim, for example: denote $A = \{a_n \mid n \in \mathbb{N}\}$, and let $f(a_0) = b$ and $f(a_{n+1}) = a_n$ for $n \geq 0$; then $f$ is a bijection from $A$ onto $A \cup \{b\}$. (Similar to a previous answer for your question.)

If $A$ is uncountable, you can choose a countable subset $A'$ of $A$ and do a similar thing to add an element to $A'$. (There is a similar proof for removing an element from $A$, by choosing the element you wish to remove to be in $A'$.)

In this way, you can add or remove any finite number of elements from an infinite set $A$ and still have a bijection between the two sets. This is a famous construction, known as "Hilbert's Hotel" after the German mathematician David Hilbert; you can read more about it on Wikipedia. (In fact: (1) if $A$ is countably infinite, you can even add countably infinitely many elements to $A$ and there is still a bijection, and sometimes you can also do this for removing; (2) if $A$ is uncountable, you can always do this for both adding and removing a countably infinite set to/from $A$.)

Benjamin

The technique here is to apply the (abstract) proof of the Schröder–Bernstein theorem to this situation. Let $A = (0,1)$ and $B = (0,1]$. Let $f: A \to B$ be the inclusion mapping $f(x) = x$ and $g: B \to A$ be given by $$ g(x) = \frac{x}{2}$$ Both of these mappings are injective, so we can use the proof technique to build a bijective mapping $h$ between $A$ and $B$. The number $1 \in B = (0,1]$ is a B-stopper; let $$ D = \{\, \frac{1}{2^n} \, | \, n \text{ is a positive integer}\,\}$$ Let $h$ multiply each number in $D$ by $2$ and be equal to $f$ at all other numbers. The function $h: A \to B$ is a bijection.

CopyPasteIt
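The countable-shift idea running through these answers is concrete enough to execute. Here is a small sketch (my own illustration) of the map $g:(0,1)\to(0,1]$ from the $A=\{\frac12,\frac13,\ldots\}$ answer above, using exact rationals; inputs not of the form $1/n$ simply pass through unchanged:

```python
from fractions import Fraction

def g(x):
    """Bijection (0,1) -> (0,1]: shift on A = {1/2, 1/3, ...}, identity elsewhere."""
    if isinstance(x, Fraction) and x.numerator == 1 and x.denominator >= 2:
        return Fraction(1, x.denominator - 1)   # f(1/n) = 1/(n-1)
    return x

print(g(Fraction(1, 2)))   # 1      (hits the extra endpoint)
print(g(Fraction(1, 3)))   # 1/2
print(g(Fraction(2, 5)))   # 2/5    (fixed: not of the form 1/n)
```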
Motional emf and induced electric fields

A book I am referring to states: "...When the loop moves toward the magnet, it is the magnetic force which drives the charge to flow. But what causes the induced current in a stationary loop when a magnet moves towards it? A magnetic field cannot exert force on a stationary conductor. Whenever a magnetic field is varying with time, an induced electric field E is produced in any closed path in matter or empty space."

Then it goes on to state that this induced field is non-conservative because the closed-loop integral $\oint \vec E \cdot d\vec l$ is non-zero. So, is the former case, where the loop moves in a stationary magnetic field, different? Is the electric field in the loop due to "motional emf" conservative? The book also, at one point, expresses the electric field due to motional emf as a scalar potential gradient. However, motional emf does sound similar to induced emf. My question is: are the E due to motional emf and the induced E different or not, and why?

Cyka

Motional EMF is due to the magnetic field acting on the moving charges in the moving wire. Induced EMF is due to an actual nonconservative electric field that really exists, even in empty space, when magnetic fields are changing in time. They are as different as can be, and the electric ones always satisfy the rule for stationary wires, but if you want a corresponding Faraday's law for motional EMF you have to apply it only to thin wires that can keep the charges in the wire. It is a sometimes rule, not an always rule. The magnetic force can lead to a charge imbalance on moving wires. – Timaeus Sep 24 '15 at 21:20

And that charge imbalance can cause a conservative electric field, but that field is just keeping the charges inside the wire. Now since the wire is moving you can do work on something and keep it in the wire. But an EMF is not defined as the work done per unit charge. In statics it happens to equal the work per unit charge, but that isn't the definition. – Timaeus Sep 24 '15 at 21:21

An EMF from a source is defined as a force per unit charge line-integrated around the instantaneous position of a thin wire, so for an electromagnetic source: $$\mathscr E=\oint_{\partial S(t_0)} \left(\vec E + \vec v \times \vec B\right)\cdot d \vec l,$$ where $S(t_0)$ is a surface enclosed by the wire at time $t=t_0$ and the partial means the boundary, so $\partial S(t_0)$ is the instantaneous path of the wire itself at $t=t_0$. The $\vec v$ is the velocity of the actual charges. Note that this is not necessarily the work done on the charges if the wire is moving, since the wire moves in a different direction than the charges do when there is a current. Now, if the wire is thin, the charge stays in the wire, and there are no magnetic charges, we get $$-\oint_{\partial S(t_0)} \left(\vec v \times \vec B\right)\cdot d \vec l=\frac{d}{dt}\left.\iint_{S(t)}\vec B(t_0)\cdot \vec n(t)\,dS(t)\right|_{t=t_0}.$$ And regardless of magnetic charges, or thin wires, or whether charges stay in the wires, we always get $$\oint_{\partial S(t_0)} \vec E\cdot d \vec l=\iint_{S(t_0)}\left.-\frac{\partial \vec B(t)}{\partial t}\right|_{t=t_0}\cdot \vec n(t_0)\,dS(t_0).$$ So combined together we get: $$\mathscr E=\oint_{\partial S(t_0)} \left(\vec E + \vec v \times \vec B\right)\cdot d \vec l=-\left.\left(\frac{d}{dt}\Phi_B\right)\right|_{t=t_0}.$$ The force due to the motion of the wire is purely magnetic, and the force due to the time rate of change of the magnetic field is purely electric.

And the work done is an entirely different question from the EMF. The work, for a motional EMF, happens when a Hall voltage is produced.

So, is the former case, where the loop moves in a stationary magnetic field, different?

A moving wire feels a magnetic force, and magnetic forces can be a source term in an EMF.

Is the electric field in the loop due to "motional emf" conservative?

Motional EMF is not caused by electric forces; it is caused by magnetic forces. Since magnetic forces depend on velocity, the word "conservative" does not even apply: the force depends on the velocity, not merely the path, and magnetic forces don't do work.

And the book also, at one point, expresses the electric field due to motional emf as a scalar potential gradient.

If the wire develops a Hall voltage due to the magnetic force, then the charge distribution for the Hall voltage would set up an electrostatic force, which is conservative. In particular, if the magnetic field is not changing, then the electric field is conservative.

However, motional emf does sound similar to induced emf.

When you compute the magnetic flux at two times, the term $-\vec B \cdot \hat n \, dA$ can change for two reasons: a changing loop and a time-changing magnetic field. You really get both effects from the product rule for derivatives. The one from the time-changing magnetic field becomes equal to the circulation of the electric force per unit charge. The one from the time-changing loop becomes equal to the circulation of the magnetic force per unit charge.

My question is: are the E due to motional emf and the induced E different or not, and why?

The electric field is conservative if the magnetic field is not changing in time. And if the magnetic field is not changing in time, the EMF is due solely to the moving charges in the moving wire interacting with a magnetic field.

Timaeus
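As a numeric sanity check of the flux rule above (my own toy setup, not from the answer: a rigid rectangular loop of width $w$ and height $h$ sliding at speed $v$ through the static field $\vec B = B_0 x\,\hat z$), the motional term $\oint (\vec v \times \vec B)\cdot d\vec l$ does match $-d\Phi_B/dt$:

```python
# Rigid rectangular loop moving in +x through a static, nonuniform field
# B = (0, 0, B0*x); compare the motional EMF with -dPhi/dt.
import numpy as np

B0, v, w, hgt = 2.0, 3.0, 0.5, 0.4   # field gradient, loop speed, width, height
a0 = 1.0                             # left edge of the loop at t = 0

def Bz(x):
    return B0 * x                    # static field, z-component only

def flux(t, n=4000):
    a = a0 + v * t                   # loop has translated to [a, a + w]
    xs = np.linspace(a, a + w, n, endpoint=False) + w / (2 * n)  # midpoints
    return hgt * Bz(xs).sum() * (w / n)   # Phi = integral of Bz over the loop

dt = 1e-6
emf_flux = -(flux(dt) - flux(-dt)) / (2 * dt)   # -dPhi/dt by central difference

# Motional EMF: v x B = -v*Bz(x) y-hat, so only the two vertical sides
# contribute (traversed in opposite y-directions for a counterclockwise loop).
emf_motional = -v * Bz(a0 + w) * hgt + v * Bz(a0) * hgt

print(emf_flux, emf_motional)        # both approximately -B0*v*w*hgt = -1.2
```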
Let's restate Faraday's law of induction carefully in integral form: $$\varepsilon = -\frac{d}{dt} \int_S \vec B \cdot d\vec a = - \frac{d \Phi}{dt},$$ where $C$ is a closed curve and $S$ is any smooth surface whose boundary is $C$. So as you can see, no matter how the generation of the electric field is interpreted (take the two scenarios in your question), the emf $\varepsilon$ is NEVER 0 unless the magnetic flux is not changing. In both of your scenarios, the magnetic flux is changing, and in fact at the SAME rate! That is how they produce the same effect.

As Timaeus pointed out, in the case of the stationary loop, there seems to be an electric field generated in the loop that drives the current, while in the case of the moving loop, the magnetic force does the job. However, these interpretations are two sides of the same coin: relativity. From a relativistic point of view, magnetic fields are really born from principles of relativity + electric fields. What is regarded as an electrical effect in the stationary-loop case transforms into a magnetic effect in the moving-loop case. But since the two scenarios in your question are really just a switch between two inertial frames, there shouldn't be any difference in the outcome of the experiments (it would be weird if there were... Einstein would be shocked out of his grave). From that perspective it is also clear that the emf $\varepsilon$ should be the same in both scenarios.

So motional emf and induced emf are really just two descriptions of the same effect, which is summarized by Faraday's law of induction. The fact that the two views give the same result is a manifestation of the relativistic nature of magnetic fields.

For more information on deriving magnetism from electricity, and on field transformation laws, I refer you to chapters 5 and 6 of Edward Purcell's E and M textbook: http://www.amazon.com/Electricity-Magnetism-Edward-M-Purcell/dp/1107014026/ref=sr_1_2?ie=UTF8&qid=1443095756&sr=8-2&keywords=purcell+e+and+m

Zhengyan Shi
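And viewed from the loop's rest frame (continuing the same toy numbers as in the sketch above, again my own illustration, not part of the answer): the loop sits still while the field pattern drifts past it, and now the EMF comes entirely from the induced electric field, $\oint \vec E \cdot d\vec l = -\iint \partial_t B_z\, dA$, with the same value as before.

```python
# Stationary loop, time-varying field Bz(x, t) = B0*(x + v*t): the field
# pattern drifting in -x is equivalent to the loop moving in +x above.
import numpy as np

B0, v, w, hgt = 2.0, 3.0, 0.5, 0.4
a = 1.0                                  # loop edges fixed at [a, a + w] for all t

def Bz(x, t):
    return B0 * (x + v * t)

def flux(t, n=4000):
    xs = np.linspace(a, a + w, n, endpoint=False) + w / (2 * n)
    return hgt * Bz(xs, t).sum() * (w / n)

dt = 1e-6
emf_flux = -(flux(dt) - flux(-dt)) / (2 * dt)

dBdt = B0 * v                            # dBz/dt, uniform over this loop
emf_induced = -dBdt * (w * hgt)          # -integral of dBz/dt over the area

print(emf_flux, emf_induced)             # both approximately -1.2, as before
```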
When the loop moves in a constant magnetic field, the emf around the loop is due to the magnetic force on the moving charges (in the thin wire of the loop) as the loop moves. When the loop is stationary and the magnetic field changes, the emf is due to the electric force from the nonconservative electric field that is associated with the changing magnetic field. The equation you wrote is for the case of the stationary loop. And an emf is not defined as the line integral of the electric field; if it were, then you wouldn't get an emf in the case of a thin loop moving in a uniform B field. – Timaeus Sep 24 '15 at 14:40

The equation I wrote works for either case, I believe. In the stationary loop case, the magnetic flux changes because the moving magnet produces a changing magnetic field at the location of the loop. In the moving loop case, the magnetic flux changes because the loop travels through space, where the magnetic field varies from point to point. And I believe emf is well defined as the line integral of the E field in both cases. Regarding your last line, when there is a uniform B field, there is INDEED no emf, because the magnetic forces on different charge carriers would cancel out. – Zhengyan Shi Sep 24 '15 at 19:06

You believe wrong. The EMF is not defined as the line integral of the electric field. And the magnetic field can be both uniform and constant, and a moving/deforming thin conducting loop can still feel an EMF. If you have a uniform B field and a conductor on a conducting rail that completes a conducting loop, then you get an EMF as the enclosed area expands. If the conductors are thin enough (and the charges are compelled to stay in the conductor), then the EMF equals the rate of change of the magnetic flux. But that isn't the definition of an EMF. – Timaeus Sep 24 '15 at 19:12

An EMF is the line integral of the force per unit charge in the instantaneous direction of the loop. The force can include magnetic forces when the loop is moving, because then you can have a magnetic force in the direction the wire is instantaneously pointing. So: magnetic field now and a moving wire, or current position of the wire and time rate of change of magnetic flux. You can even say the electric EMF is proportional to the flux of $\partial \vec B/\partial t$, because it is true, and the magnetic EMF is solely due to the magnetic field now and the velocity of the wire now. – Timaeus Sep 24 '15 at 19:16

Oh, sorry about the uniform B-field thing. I thought you were referring to a globally uniform B-field, in which no emf is generated... You are absolutely right to make clear the distinction between E and B fields in the two reference frames, but my point was that fundamentally these fields are connected (by the Faraday tensor) in relativity, and the distinction is really just an apparent effect. I fixed my answer based on some of the confusions you pointed out. Thank you very much! – Zhengyan Shi Sep 24 '15 at 20:30
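To put numbers on the rail example from the comments (uniform, constant B, a bar sliding on rails, growing enclosed area; once more my own illustration with made-up values):

```python
# Bar sliding on rails in a uniform, constant field: Phi = B * L * x(t),
# so -dPhi/dt reproduces the textbook motional EMF of magnitude B*L*v.
B, L, v = 0.8, 0.3, 2.0            # field strength, rail separation, bar speed

def flux(t):
    return B * L * (1.0 + v * t)   # enclosed area L * x(t), with x(0) = 1

dt = 1e-6
emf = -(flux(dt) - flux(-dt)) / (2 * dt)
print(emf, -B * L * v)             # both -0.48
```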
How would Bezout have proved Bezout's theorem?

Asked 1 year, 5 months ago

How would Bezout have proved Bezout's theorem bounding the number of points in the intersection of two plane (polynomial) curves in $\mathbb{R}^2$? I have looked at a couple of modern algebraic treatments of Bezout's theorem. For instance, I am familiar with Fulton's Algebraic Curves and his treatment of Bezout's theorem therein. While a purely algebraic approach is nice and has its uses, I feel that that proof and related modern proofs must be very different from Bezout's. I expect that Bezout's proof could not have used much beyond rudimentary calculus; in particular, I doubt he would have used the projective plane, exact sequences, or local rings. Given that he was an 18th-century mathematician, it is also unclear to me whether he could even have used the Fundamental Theorem of Algebra, which makes me uncertain that he would have proved it using resultants, e.g., some version of a proof outlined on the Wikipedia page: https://en.wikipedia.org/wiki/B%C3%A9zout%27s_theorem . Alternatively, my question is: what is the most classically analytic or down-to-earth proof of Bezout's theorem? Regarding analytic approaches to Bezout's theorem, I think that the Griffiths–Harris proof (around page 171 therein) is analytic, but not classical. Bonus points: If such a proof is substantially different from that in Fulton's Algebraic Curves, then can you describe how the proofs relate (assuming that they do) or how the classical proof inspires the modern proof? ag.algebraic-geometry ca.classical-analysis-and-odes K Hughes

The proof appears in Bézout's own book Théorie générale des équations algébriques (1779). A 2002 English translation by Eric Feron is available -- the theorem is presented in paragraph 47, which is on page 24 of the translation. Have you read this? It seems like a natural place to start... – Carl-Fredrik Nyberg Brodda

Also, the statement "I doubt he would have used [...] exact sequences or local rings" has my vote for the understatement of the century (or two centuries, to be exact) :-) – Carl-Fredrik Nyberg Brodda

The Fundamental Theorem of Algebra can be considered as a 1-dimensional version of Bezout, and it is absolutely essential. Without it, you may only get an inequality. Resultants and elimination theory is a natural and classical approach. – Oleg Eroshkin

Thank you for that reference! Unfortunately, I cannot access the book without paying for it. However, searching for an arxiv version led me to the following reference which appears helpful: arxiv.org/pdf/1606.03711.pdf – K Hughes

drive.google.com/file/d/1cYPvDagNHsM39ngUjr--HftY44Zznmz5/view Bezout's proof was not rigorous. A simple proof can be obtained using resultants. – Alexandre Eremenko
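As a small illustration of the resultant/elimination route the comments mention (my own toy example using SymPy's resultant, not part of the thread): eliminating $y$ from two plane curves of degrees $m$ and $n$ leaves a univariate polynomial of degree at most $mn$, which is exactly the Bezout bound.

```python
# Circle (degree 2) and parabola (degree 2): the resultant in y is a degree-4
# polynomial in x, matching the Bezout bound 2 * 2 = 4.
from sympy import symbols, resultant, Poly

x, y = symbols('x y')

f = x**2 + y**2 - 1        # circle
g = y - x**2               # parabola

r = resultant(f, g, y)     # eliminates y; roots of r are x-coords of intersections
print(r)                   # x**4 + x**2 - 1
print(Poly(r, x).degree()) # 4
```

Over $\mathbb{C}$, counting multiplicity, all four intersections are present; over $\mathbb{R}$, the setting of the question, one only gets an upper bound (here the curves meet in two real points).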