paper (string, lengths 9–16) | proof (string, lengths 0–131k)
---|---|
math/0004181
|
MATH has the form MATH. Let MATH be the partial isometry defining MATH and note that MATH for all MATH because of REF . Thus MATH is a unitary in MATH which is MATH-invariant modulo MATH and satisfies that MATH where MATH. Thanks to REF we have that MATH for all MATH. Since MATH (with convergence in the strict topology) we find that MATH modulo MATH for all MATH, and hence in fact for all MATH. This proves the lemma.
|
math/0004181
|
We are going to use REF. Let MATH, where MATH is the element represented by the identity map of MATH and MATH is the stabilising MATH-homomorphism. By REF it suffices to identify the image of MATH under the NAME isomorphism MATH and show that the image of that element is not changed under the map we are trying to prove is always the identity. This is what we do. Under the isomorphism MATH, coming from NAME, the image of MATH is represented by the asymptotic homomorphism MATH arising by applying the NAME construction to the NAME extension tensored with MATH : MATH . In other words, if MATH is the Busby invariant of REF , the image of MATH in MATH is MATH. For each separable MATH-subalgebra MATH we let MATH denote the inclusion. Then the boundary map MATH is given by MATH where MATH denotes the composition product in MATH-theory. Hence MATH is the element MATH with the property that MATH for all large enough MATH. Let MATH be the natural embedding. By the naturality of the NAME construction, MATH for all separable MATH-subalgebras MATH which contain MATH. Since MATH is equivariantly homotopic to the identity map, we have that MATH so we conclude that MATH. Hence MATH by REF . Thus the image of MATH in MATH under the composite map is MATH. The proof is complete.
|
math/0004181
|
If MATH in MATH, there is a path MATH, of asymptotic homomorphisms MATH such that MATH and MATH and a unit sequence MATH in MATH such that MATH connects MATH to MATH. By REF we may assume that MATH is an equi-homotopy and it is then easy to see that REF is a strong homotopy. By REF we conclude from this that MATH in MATH. But MATH in MATH by REF . Hence MATH in MATH and MATH in MATH. To complete the proof it suffices to show that MATH is injective. However, MATH is an equivariant MATH-isomorphism and therefore MATH is an isomorphism. The injectivity of MATH follows from the weak stability of MATH : There is a MATH-invariant isometry MATH such that MATH is an equivariant MATH-automorphism MATH and MATH. Since MATH induces the identity map on MATH we see that MATH is an isomorphism.
|
math/0004181
|
Consider an extension MATH and assume that MATH in MATH. With the notation from REF we find that MATH. But then REF implies that MATH in MATH. By REF this yields the conclusion that MATH in MATH. Thus MATH is injective. But MATH is weakly stable so the result follows.
|
math/0004181
|
Consider another discretization MATH of MATH and define MATH by MATH . There is then a MATH-extension MATH such that MATH. It suffices to show that MATH is unitarily equivalent to an asymptotically split MATH-extension. Define MATH such that MATH . There is then a MATH-extension MATH such that MATH. MATH is clearly unitarily equivalent (via a MATH-invariant unitary) to MATH, so it suffices to show that MATH is asymptotically split. For each MATH we define MATH by MATH . Then MATH is a discrete asymptotic homomorphism such that MATH, MATH, MATH and MATH modulo MATH. By convex interpolation and an obvious application of the MATH-algebra MATH we get an asymptotic homomorphism MATH such that MATH for all MATH.
|
math/0004181
|
CASE: Assuming REF there is an asymptotically split extension MATH such that MATH and MATH are unitarily equivalent. By REF this implies that MATH and MATH are strongly homotopic. Then MATH and MATH are also strongly homotopic, where MATH inverts the orientation of the suspension. REF follows by observing that MATH is strongly homotopic to MATH. CASE: REF follows because an invariant isometry in MATH can be connected to MATH via a strictly continuous path of MATH-invariant isometries, compare for example, REF. CASE: REF follows from REF .
|
math/0004181
|
We may assume that MATH is not homotopic to MATH. Let MATH, be MATH-homomorphisms with orthogonal ranges, both homotopic to the identity map. Then MATH and MATH are homotopic to MATH, and in particular non-zero. Let MATH be a non-zero positive element in the range of MATH and let MATH be a positive lift of MATH. By spectral theory MATH contains a projection MATH with non-zero image in MATH. Since MATH for all MATH, we conclude that MATH is non-zero in MATH. It follows that there is an isometry MATH with infinite dimensional co-kernel such that MATH. Set MATH.
|
math/0004181
|
It follows from REF that MATH is strongly homotopic to MATH for any MATH-homomorphism MATH. By using that the unitary group of MATH is norm-connected, it follows from this and REF that MATH is naturally isomorphic to MATH. Since MATH is naturally isomorphic to MATH by REF , this completes the proof.
|
math/0004183
|
Assume MATH is reducible. Let MATH be a sphere that does not bound a ball on either side. MATH cannot be disjoint from MATH or else it would bound a ball on the side that does not contain MATH. Assume MATH intersects MATH minimally and transversally. The intersection will consist of a union of circles. Let MATH be one of these circles that is innermost on MATH (any circle that bounds a disk on MATH disjoint from all the other circles of intersection). Without loss of generality assume MATH. MATH cannot be trivial on MATH since MATH is minimal. MATH, however, must be trivial in MATH, so it must divide MATH into two pieces, one containing both points of MATH and the other consisting of an annulus running from MATH to MATH. This disk on MATH bounded by MATH shows that MATH bounds a disk in the exterior of MATH. This, however, means that MATH surgery on MATH leaves MATH unchanged instead of turning it into an unknot, yielding the desired contradiction.
|
math/0004183
|
Because the MATH have linking number REF with MATH we can find a NAME surface for MATH disjoint from the MATH. Let MATH be a minimal genus NAME surface for MATH in the link complement. We supplement the notation introduced in REF. Recall MATH is the link exterior of MATH. Let MATH be the corresponding link of MATH components in MATH. MATH is the torus boundary component in MATH corresponding to MATH. Let MATH refer to the manifold obtained by filling MATH along an essential simple closed curve of slope MATH in MATH. When MATH, MATH is a link exterior. Let MATH be the corresponding link in MATH. Let MATH be the image of MATH in MATH and MATH be the image of MATH in MATH. We now prove REF by induction on MATH. If MATH is ever a disk then REF is clearly true, so we will assume that MATH is not a disk throughout the proof. The base case: Let MATH be a strongly REF-trivial knot. This means that MATH has unknotting number REF and there is one linking circle MATH that dictates a crossing change that unknots MATH. By REF if MATH is reducible, then MATH is the unknot. As in the proof of REF , MATH bounds a disk in the complement of MATH, so MATH is the unlink on two components. Therefore MATH, being least genus, must be a disk, which is a contradiction, verifying the claim for MATH reducible and MATH. We may assume MATH is irreducible to complete the base case. MATH is the unknot. Since MATH is not a disk, it is no longer norm minimizing after the filling. Thus by REF MATH is norm minimizing under any other filling of MATH. In particular MATH is NAME norm minimizing for MATH, which is just MATH. Thus, MATH is a least genus NAME surface for MATH. The inductive step: Now we assume that if MATH has a strong MATH-trivializer MATH and MATH is a NAME surface for MATH disjoint from MATH, which is minimal genus among all such surfaces, then MATH is also a minimal genus NAME surface for MATH, and we show that the same must be true for any strong MATH-trivializer for MATH. Again by REF if MATH is reducible, MATH must be the unknot. As in previous arguments, the separating sphere MATH must intersect at least one MATH in a curve that is essential on MATH. Without loss of generality, we may assume that MATH is such a disk. Then MATH bounds a disk in the complement of MATH. Since MATH forms a strong MATH-trivializer for MATH, the induction assumption implies MATH bounds a disk MATH disjoint from MATH. Since MATH bounds a disk disjoint from MATH, MATH can clearly be chosen to be disjoint from MATH as well, but this contradicts the assumption that MATH was minimal genus and not a disk. We may now finish the proof of REF knowing that MATH is irreducible. MATH is an unknot in the link MATH. MATH is a strong MATH-trivializer for MATH in MATH. The inductive assumption means that MATH bounds a disk in the exterior of MATH. This disk is in the same class as MATH in MATH, thereby showing that MATH is not NAME norm minimizing. Thus, by REF MATH remains norm minimizing under any other filling of MATH. In particular MATH is NAME norm minimizing in MATH. Thus, MATH is a least genus NAME surface for MATH in the complement of MATH. MATH, however, forms a strong MATH-trivializer for MATH in MATH. By the inductive assumption, MATH must be NAME norm minimizing for MATH in the knot complement as well as the link complement.
|
math/0004183
|
Let MATH be strongly MATH-trivial with MATH-trivializers MATH. Let MATH be a minimal genus NAME surface for MATH disjoint from MATH as in REF . MATH has genus MATH. Each linking circle MATH bounds a disk MATH that intersects MATH in an arc running between the two points of MATH and perhaps some simple closed curves. Simple closed curves inessential in MATH can be eliminated since MATH is incompressible. Any essential curves MATH must be parallel to MATH in MATH. These curves can be removed one at a time using the annulus running from MATH to the outermost MATH to reroute MATH, decreasing the number of intersections. Thus, if MATH is assumed to have minimal intersection with each of the MATH, then it intersects each one in an arc which we shall call MATH as in REF . Each MATH is essential in MATH. Otherwise MATH would bound a disk disjoint from MATH and the crossing change along MATH would fail to unknot MATH. MATH is never parallel on MATH to MATH for MATH. If MATH is parallel on MATH to MATH, there must be an annulus running from MATH to MATH in the link exterior. Recall that we adopted the convention that MATH and MATH are each oriented so that MATH surgery results in the appropriate crossing changes. The two tori cannot have opposite orientations, or else MATH fillings on both MATH and MATH are the same as MATH fillings on both; thus, instead of unknotting MATH, changing both crossings leaves MATH knotted. If the two tori have the same orientation we could replace MATH with a single torus MATH obtained by cutting and pasting of the two tori along the annulus. Now MATH filling for MATH and MATH filling for MATH is equivalent to MATH filling on MATH, while MATH filling on both MATH and MATH is equivalent to MATH filling on MATH. This implies that MATH fails to be norm-minimizing after both MATH and MATH filling of MATH. This contradicts REF , completing the proof of the lemma. Then MATH is a collection of embedded arcs on MATH, no two of which are parallel. An NAME characteristic argument shows that MATH. Since the arcs are in one to one correspondence with the linking circles, a strong MATH-trivializer can produce at most MATH linking circles for MATH, finally proving REF .
|
math/0004183
|
The link consists of the arcs MATH, together with short segments from MATH connecting the end points of the segments (and disjoint from the end points of the other segments). The base case is trivial because MATH contains a NAME link of MATH components: the NAME link. MATH is obtained from MATH by doubling one of the components of a NAME link of MATH components. This yields a NAME link of MATH components. As a result of the NAME structure in MATH, any MATH edges from MATH can be disjointly embedded on a disk bounded by MATH. So MATH forms an unlink with any proper subset of MATH. We can use this fact to show that MATH are a MATH-trivializer for MATH. Let MATH be any nontrivial subset of MATH. Let MATH be the complement of MATH. If we take MATH together with MATH, and do MATH surgery on each component of MATH, the resulting knot is an unknot. This is because it is exactly the same as if we took MATH and did MATH surgery on each of the components of MATH. Since MATH is a nontrivial subset, MATH is a proper subset. MATH, together with the linking circles in MATH, therefore forms an unlink, so each of the components of MATH bounds a disk disjoint from MATH and doing MATH surgery on these linking circles leaves MATH unchanged. Now that we know that MATH is strongly MATH-trivial, we need only show MATH is a non-trivial knot. Assume MATH is trivial. By REF , MATH bounds a disk MATH in the complement of MATH. Since MATH was obtained from MATH by spinning along the MATH's, the exteriors of the two links are homeomorphic, and therefore MATH must bound a disk MATH also disjoint from MATH (note that one could even prove that both MATH and MATH are unlinks). MATH intersects each MATH in REF points, so as before we may assume MATH is an arc for each MATH, but these arcs must, of course, be isotopic to the MATH, which in turn shows that the MATH can be disjointly embedded on MATH, proving that MATH is planar and not NAME, the desired contradiction. Thus, MATH is a strongly MATH-trivial knot, but not the unknot.
|
math/0004184
|
We take the inner product of REF with MATH and integrate over MATH. By the divergence theorem and the incompressibility, we are left with MATH . The NAME and NAME inequalities give MATH so by cancellation of MATH, MATH . An integration over MATH, taking sup in MATH of MATH, gives MATH . Next we integrate the inequality MATH over the interval MATH, MATH . By REF we get MATH . Finally, we let MATH, and choose MATH sufficiently large to get MATH . This implies that there is a set of positive NAME measure in every interval MATH such that MATH for MATH in this set.
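The chain of estimates above is the classical energy method. For orientation only, here is a hedged sketch in generic notation (not the redacted computation): the standard L2 energy estimate for an incompressible system of Navier-Stokes type, which the divergence-theorem and cancellation steps above mirror.

```latex
% Hedged sketch in generic notation (an illustration, not the paper's redacted
% estimate): L^2 energy balance for u_t + (u.grad)u = nu*Lap(u) - grad(p) + f,
% div u = 0, where the transport and pressure terms vanish by incompressibility
% and the divergence theorem.
\frac{1}{2}\frac{d}{dt}\|u\|_{L^2}^2 + \nu\|\nabla u\|_{L^2}^2
  = \int_\Omega f\cdot u\,dx
  \le \|f\|_{L^2}\,\|u\|_{L^2}
  \le \frac{\nu\lambda_1}{2}\|u\|_{L^2}^2 + \frac{1}{2\nu\lambda_1}\|f\|_{L^2}^2 .
```

Integrating in time and applying the Poincare inequality, with the first eigenvalue of the negative Laplacian entering as above, then produces bounds of the type displayed in the proof.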
|
math/0004184
|
The subscript MATH indicates that the solutions are only weakly continuous in MATH. We take the inner product of REF with MATH and integrate over MATH. Using NAME 's inequality we obtain MATH . Now MATH by the NAME inequalities, where MATH is a constant. Thus, MATH . An application of the inequality MATH, on the right hand side, gives MATH by another application of the inequality above with MATH. From REF we get, again by the same inequality, MATH . Adding these inequalities, applying the NAME inequality and repeating the use of the inequality above results in MATH where MATH and MATH are constants. Using NAME 's inequality and REF we find that MATH since MATH . Now the point is that if the coefficient MATH is small, or the forcing is small and we have waited a sufficiently long time to let the initial data MATH decay, then the inequality gives us a bound on MATH. The argument is as follows. We integrate the inequality above over MATH to get MATH where MATH. Now assume that MATH assumes its maximum in the interval MATH at MATH. Then MATH . Now put MATH and MATH . Then the inequality above can be written as MATH . The graph of the function MATH is shown in REF . It is concave and attains its maximum at MATH. By REF there exists a MATH such that MATH and we can choose MATH so small that MATH lies between zero and MATH, see REF . Moreover, MATH can never reach MATH, because MATH . It is clear that we can choose MATH so large that MATH . Moreover, notice that the derivative of MATH in REF is positive at the point where MATH first reaches MATH, that is, MATH, which gives the bound MATH where MATH is arbitrarily small. The only question left is whether the initial data makes sense as a function in MATH. But we have already used above that, by REF , there exists a sequence MATH such that MATH. We now choose the initial time to be the smallest MATH. Then we let this MATH be the new MATH. Now we apply the local existence REF and the above bound to get global existence. We recall the definition of an absorbing set from CITE. A set MATH is an absorbing set if for every bounded set MATH there exists a time MATH such that MATH implies that MATH, if MATH. The estimate in REF shows that the weak flow of the weak solution of the NAME equations has an absorbing set in MATH. But we have shown in addition that there exists a MATH such that the flow is a continuous flow of strong solutions for MATH and has the absorbing set MATH. Moreover, since MATH is compactly embedded in MATH it follows that the NAME equation has a global attractor in MATH. In fact one can now show, see CITE, that the solutions are spatially smooth and thus MATH. It follows, see Hale CITE, CITE, CITE and CITE, that the NAME equation has a global attractor consisting of spatially smooth solutions. It is also clear that if MATH is small then MATH satisfies the above bound and the solutions have global existence for MATH.
|
math/0004184
|
We consider MATH . The second equation in REF gives MATH . We multiply this equation by MATH, integrate over MATH and use the divergence theorem, to get MATH . Thus, by NAME 's inequality MATH where again MATH is the first eigenvalue of the negative Laplacian with vanishing boundary conditions on MATH. Consequently MATH which shows that MATH for all MATH, if MATH . Similarly we get that MATH if MATH and we conclude that MATH .
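The passage from the differential inequality to the decay statement is a standard Poincare/Gronwall argument; as a hedged illustration in generic symbols (kappa a diffusion coefficient, lambda_1 the first eigenvalue named above, theta the quantity being estimated):

```latex
% Hedged illustration, generic symbols (not the redacted quantities):
% Poincare turns the dissipation into a linear damping term, and Gronwall
% then gives exponential decay.
\frac{1}{2}\frac{d}{dt}\|\theta\|_{L^2}^2
  \le -\kappa\,\|\nabla\theta\|_{L^2}^2
  \le -\kappa\,\lambda_1\,\|\theta\|_{L^2}^2
\quad\Longrightarrow\quad
\|\theta(t)\|_{L^2} \le \|\theta(0)\|_{L^2}\, e^{-\kappa\lambda_1 t}.
```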
|
math/0004184
|
We recall the relationship between MATH and MATH . The maximum principle in REF , for T, MATH, implies that MATH . Thus MATH . Now consider the system REF . We multiply the first equation by MATH and integrate over MATH. Integration by parts and NAME 's inequality give MATH . Thus by the use of the bound for MATH above, and NAME 's inequality, we get MATH . Integration in t then gives MATH where MATH. We combine the bounds for MATH and MATH to get the absorbing set MATH in MATH, where MATH and MATH is arbitrarily small. The last statement of the lemma is proven by a straightforward application of REF , to REF , if we recall that the nonlinear term did not play a role in the proof of REF . Namely, for the first equation in REF , MATH and for the second equation MATH . We divide these by the decay coefficients MATH and MATH, respectively and add them. This produces the bound on the MATH norm for the sequence MATH.
|
math/0004184
|
The maximum principle in REF , for T, MATH, implies a maximum principle for MATH, MATH by the relationship between MATH and MATH . This implies that MATH . The rest of the proof, using this bound on MATH, is similar to the proof of the local existence of the NAME equation. CITE can be consulted for details. Then given the local solution MATH the pressure is recovered as in REF .
|
math/0004184
|
A proof of REF can be found in CITE.
|
math/0004184
|
The subscript MATH indicates that the weak solutions are only weakly continuous in MATH. We multiply the second equation in REF above by MATH and treat it in the same way as MATH in the proof of REF to get MATH and by adding and subtracting MATH and applying NAME 's inequality, we get MATH . Then integrating with respect to t, we get MATH . Next we multiply the u equation in REF by MATH and integrate over MATH. By integration by parts and NAME 's inequality MATH by the NAME inequalities, where we have used that the MATH NAME norm is bounded by MATH by NAME 's inequality. We use NAME 's inequality to eliminate MATH, namely MATH so MATH and by NAME 's inequality MATH . Then we integrate the equation MATH where MATH. We integrate from the initial time MATH, to get MATH by the above inequality for MATH. Now if MATH attains its maximum on the interval MATH at MATH, then MATH where we have used the bound MATH from REF and the inequality MATH. Thus, if we put MATH, we obtain the inequality MATH . The constants MATH and MATH are MATH and MATH . This means that for a large aspect ratio MATH, MATH becomes small. Now we repeat the arguments from the proof of REF and conclude that the function MATH is concave and attains its maximum at MATH. This maximum is MATH so that MATH cannot escape beyond its maximum, see REF . Moreover, arguing as in REF , we conclude that the derivative of MATH is positive at the point where MATH first reaches MATH and this gives us the bound MATH . The last step is to get a bound for MATH. We multiply the MATH equation in REF by MATH, to get MATH by NAME 's inequality MATH by NAME 's inequality MATH by NAME 's and NAME 's inequalities, because MATH as above, and because MATH by two applications of interpolation, where MATH is small. Thus MATH by NAME 's inequality. Now moving the MATH term over to the left hand side of the inequality and applying NAME 's inequality, we get MATH since MATH by the inequality MATH. Thus MATH where MATH. We can now put the estimates for MATH and MATH together to get an absorbing set MATH where MATH is arbitrarily small for MATH large enough. Namely, by REF , there exists a MATH such that MATH and the global bound on the MATH norm holds for MATH. Combined with the local existence REF , the a priori bound above now gives the existence of a global solution and an absorbing set in MATH, for MATH. The existence of a global attractor in MATH then follows from the compact embedding of MATH in MATH, see Hale CITE, CITE, CITE and CITE.
|
math/0004184
|
REF follows from REF if we set the viscosity and heat conductivity in REF equal to MATH and MATH, respectively.
|
math/0004184
|
In the three-dimensional case the proof is similar to the proof of REF but simpler. First we multiply the first equation in REF by MATH and integrate by parts to get MATH after integration in MATH, where MATH. In CITE we show that MATH where MATH denotes the projection onto the divergence-free part of MATH. Hence, MATH . This means that for MATH sufficiently large the forcing in the first equation of REF becomes small. Thus we obtain the existence of a unique global solution MATH to the first equation of REF , by REF , and an absorbing set in this space. Since MATH we also obtain global existence for MATH in the second equation of REF , just as in the proof of REF , and the existence of an absorbing set in this space as well. The existence of the attractors follows from the fact that the equations possess an absorbing set in MATH which is compactly embedded in MATH.
|
math/0004184
|
By REF we have MATH and MATH . All terms involving spatial derivatives will vanish by the MATH-periodicity on the unit torus and by the incompressibility of MATH. Moreover, MATH, for MATH, so the system reduces to MATH and MATH . We solve this system of ODE's with the initial condition MATH to get the trivial solution, MATH for all MATH.
|
math/0004184
|
We apply the NAME inequality to MATH which, by the result of REF , gives MATH . A change of variables MATH yields MATH . We note that the constant MATH will be the same for all MATH-cubes in the interior of MATH. For MATH-cubes intersecting MATH, MATH will vanish at at least one point, and the usual NAME inequality applies. Let MATH denote the maximum of all the constants for these cubes. A summation over all MATH-cubes gives MATH where MATH. Finally, an integration with respect to MATH gives the desired inequality.
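For orientation, the scaling step can be illustrated as follows (a hedged sketch in generic notation; the actual exponents and constants are hidden by the redaction): the change of variables reduces the inequality on a small cube to the unit cube, so the constant is the same for every interior cube and only a power of the side length appears.

```latex
% Hedged illustration, generic notation: Poincare's inequality on a cube
% Q_eps of side eps, obtained from the unit cube by the scaling x = eps*y.
% C_1 is the Poincare constant of the unit cube.
\Big(\int_{Q_\varepsilon} |u - \bar u_{Q_\varepsilon}|^2\,dx\Big)^{1/2}
  \;\le\; C_1\,\varepsilon\,\Big(\int_{Q_\varepsilon} |\nabla u|^2\,dx\Big)^{1/2},
\qquad
\bar u_{Q_\varepsilon} = \frac{1}{|Q_\varepsilon|}\int_{Q_\varepsilon} u\,dx .
```

Summing such estimates over all cubes covering the domain then gives a global inequality with a single constant, as in the argument above.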
|
math/0004184
|
Let us consider the first equation in REF . We multiply by MATH and integrate over MATH. By the incompressibility we get, using NAME 's inequality, MATH . By REF , see also the proof of REF , MATH is bounded, and assuming that MATH is a time where MATH is finite, see REF , we can absorb the term MATH into the time integral using REF . This gives the estimate MATH so that MATH . Now we recall that MATH by REF (or REF ). Thus, by REF , MATH . An application of REF once again gives the desired result, that is, MATH . If the initial data MATH and MATH are bounded in MATH, independently of MATH, then C will also be independent of MATH .
|
math/0004184
|
By NAME 's inequality, REF , it immediately follows that MATH .
|
math/0004184
|
REF follows from REF by duality, and REF follows by REF .
|
math/0004184
|
By assumption the initial data admit unique two-scale limits. Consider now the first equation of REF , MATH . Let MATH and choose test functions MATH. By the results of REF, all the sequences MATH, MATH, MATH and MATH are uniformly bounded in MATH. Therefore, according to REF , they admit two-scales limits. In order to identify these limits we multiply each of these terms by the smooth compactly supported test function MATH. For the time derivative we get (recall that MATH) MATH . Sending MATH and integrating by parts yield, MATH . For the second (inertial) term we consider MATH . By considering MATH to be a test function in the second term on the right hand side this term immediately passes to zero in the two-scale sense. For the first term the NAME 's inequality yields MATH where all the norms are in MATH. In order to pass to the limit in the right hand side we consider the usual MATH-mollifications MATH and MATH of MATH and MATH, respectively. Since the mollified functions pass strongly to MATH and MATH, respectively, as MATH, we have, for MATH sufficiently small, say MATH and for every MATH in MATH where MATH is arbitrarily small, independently of MATH. This inequality still holds true if we take the supremum in MATH over MATH. Thus, for every MATH, the right hand side will tend to zero as MATH tends to MATH, by the uniqueness of the two-scale limit MATH of MATH. Consequently, we have proved that MATH in the two-scale sense. For the third term we get, by the divergence theorem, MATH . Sending MATH yields, after applying the divergence theorem again, MATH . For the right hand side we immediately get MATH . For the fourth term (the pressure) we have to be a bit more careful. Let us multiply the first equation of REF by MATH. For the pressure term we get MATH . A passage to the limit, and an application of the divergence theorem, using the fact that all other terms vanish, gives MATH which implies that MATH does not depend on MATH. We now add the local incompressibility assumption on the test functions MATH, that is, MATH and multiply the pressure term by MATH and apply the divergence theorem, MATH . A passage to the limit, and an application of the divergence theorem, gives MATH . Collecting all two-scale limits on the right hand side gives MATH . Since MATH we can argue as in REF and conclude that there exists a local pressure gradient MATH given by MATH . Let us now consider the second equation of REF . We already know that the sequence MATH is uniformly bounded in MATH. We multiply by MATH as above and for the time derivative we get MATH . By letting MATH we get MATH . For the non-linear term we have MATH . The first term on the right hand side immediately passes to zero. For the second term we argue as above and consider the difference MATH . By considering MATH to be a test function in the second term, this term immediately passes to zero in the two-scale sense. For the first term we get, by the NAME 's inequality, MATH where the norm is in MATH. We introduce, as above, mollifiers and consider the sequence MATH which converges to MATH strongly as MATH. Arguing as above, we choose MATH sufficiently small to get MATH where MATH is arbitrarily small. Thus, by sending MATH, MATH in the two-scale sense. For the third term we get, by the divergence theorem, MATH . We let MATH and get, by the divergence theorem, MATH . The right hand side of the second equation in REF will vanish since MATH is of order MATH. 
By REF the system REF has a unique solution MATH and, thus, by uniqueness, REF is the two-scale homogenized limit of the system REF . Also, by uniqueness, the whole sequence converges to its two-scale limit and the theorem is proven.
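Since the whole passage to the limit rests on two-scale convergence, it may help to recall the standard definition (a textbook statement in generic notation, included for orientation; it is not the redacted REF):

```latex
% Textbook definition of two-scale convergence (generic notation; Y is the
% unit periodicity cell, phi a smooth test function, periodic in its second slot).
u_\varepsilon \xrightarrow{\;2\text{-scale}\;} u_0(x,y)
\;\Longleftrightarrow\;
\lim_{\varepsilon\to 0}\int_\Omega u_\varepsilon(x)\,
  \varphi\!\Big(x,\frac{x}{\varepsilon}\Big)\,dx
= \int_\Omega\!\int_Y u_0(x,y)\,\varphi(x,y)\,dy\,dx
\quad \text{for all admissible } \varphi .
```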
|
math/0004185
|
For convenience we omit the index MATH and will write simply MATH and MATH instead of MATH and MATH. We believe that no confusion with the system REF can arise. Let MATH and MATH. The system MATH can be rewritten in the form MATH . Consider the system ``orthogonal'' to REF MATH . It is evident that the right-hand vector fields of REF are orthogonal at each point. This implies that REF also has a unique equilibrium point at the origin, and the other trajectories of REF cross all cycles of REF orthogonally, at a unique point. For if any trajectory of REF intersects some cycle of REF at more than one point, then there is a point on this cycle where the right-hand vectors of REF are tangent to each other. Let MATH be a certain nonzero solution of REF and MATH denote its maximal interval of existence. Let MATH denote the trajectory of MATH. Without loss of generality we can assume that MATH crosses the cycles of REF from inside to outside as MATH increases (otherwise one needs to consider the system MATH instead of REF ). It follows from NAME - NAME theory that MATH as MATH and MATH as MATH. Indeed, system REF cannot have closed curves (cycles) because then they would have to intersect the cycles of REF at more than one point. Hence the MATH-limit set of MATH is not empty and may consist only of the origin. For the same reasons the MATH-limit set of MATH must be empty, that is, MATH as MATH. It is clear that when time MATH increases from MATH to MATH, the solution MATH crosses each cycle of REF . We have also that MATH because MATH remains bounded as MATH. Let MATH denote the solution of REF with initial condition MATH. We define the mapping MATH by the formula MATH . It is clear that MATH because the cycle MATH intersects MATH transversally. Let MATH be the mapping which to every MATH associates the time MATH such that MATH. Since MATH is differentiable and MATH (it is not an equilibrium point) we have MATH. Finally we define on MATH the function MATH by the equality MATH . We point out some properties of MATH. CASE: MATH, CASE: MATH, CASE: MATH, thus MATH is an integral for system REF on MATH, CASE: equation MATH defines a unique cycle of REF (indeed, the curve MATH intersects all cycles of REF with increasing MATH at only one point; hence MATH monotonically increases along MATH; therefore each level curve of MATH consists of a unique cycle). CASE: MATH. (Indeed, MATH). By setting MATH we construct a continuous positive-definite integral for REF on the whole of MATH (we naturally assume MATH). It is clear that MATH . Since the differentiability of MATH can fail only at MATH, let us consider the behaviour of the gradient MATH as MATH. There are two possibilities. CASE: MATH. CASE: MATH. In the first case we define MATH. We thus have MATH and MATH is therefore the required integral for REF . In the second case we will construct the integral MATH in the form MATH where a scalar function MATH is chosen to smooth out the discontinuity of MATH at the origin. To do this we first define the function MATH for MATH. We have MATH, since the sets MATH are compact and depend continuously on MATH. The following properties are evident: CASE: MATH. CASE: MATH. CASE: MATH. Finally we define MATH. It is easy to see that MATH and MATH. We set MATH and verify that MATH is the required integral for REF . All we need is to check that MATH exists at MATH. But this is fulfilled, because MATH . We point out some properties of MATH. CASE: MATH, where MATH or MATH. CASE: MATH, that is, MATH is positive definite. CASE: MATH monotonically increases along MATH. 
CASE: each level curve MATH defines a unique cycle of REF . The theorem is proved.
|
math/0004185
|
Obviously, MATH for MATH. It remains to prove that MATH. Let us use the notation from the proof of REF . Changing in REF the coordinates MATH and MATH into the standard polar coordinates MATH, MATH, we get for MATH . Obviously, MATH and MATH. Then NAME 's lemma implies MATH where MATH. Hence, MATH. Let MATH denote the solution passing through the initial point MATH at MATH. Let MATH be its period. It is clear that MATH. On the other hand, MATH. For any MATH the solution MATH belongs to a certain compact set; therefore MATH is uniformly bounded with respect to MATH. If MATH as MATH, then MATH . This contradiction proves the proposition.
|
math/0004185
|
We shall find a comparison equation for MATH. We have MATH, where MATH denotes the inner product. This gives for MATH (we can assume that MATH for sufficiently large MATH, because MATH). Let us consider the comparison equation MATH . It is well-known (see CITE) that every function MATH satisfying REF does not exceed the solution MATH of REF with initial condition MATH. Thus the existence of bounded solutions of REF will imply the same for REF . Let MATH . Since MATH, the function MATH is monotone increasing and invertible. The solution of REF with initial condition MATH has the form MATH . There are two possibilities. CASE: MATH. Then all solutions of REF must be bounded. For if some solution MATH as MATH, then the left-hand side of REF tends to MATH whereas the right-hand side tends to MATH. This is a contradiction. CASE: MATH. Fix MATH and take MATH so large that MATH. Then the solution of REF satisfying the initial condition MATH is bounded. Otherwise, letting MATH in REF and assuming that MATH, we get the impossible equality MATH. Thus REF has bounded solutions in both cases. So the theorem is proved.
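The step invoked as well-known is the classical comparison principle for scalar differential inequalities; for the reader's convenience, a standard formulation in generic symbols (not the redacted ones) is:

```latex
% Standard comparison lemma, generic symbols: f continuous and locally
% Lipschitz in its second argument on the interval considered.
u'(t) \le f\big(t,u(t)\big),\quad v'(t) = f\big(t,v(t)\big),\quad u(t_0)\le v(t_0)
\;\;\Longrightarrow\;\;
u(t)\le v(t)\ \text{for all } t\ge t_0 \text{ in the common interval of existence.}
```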
|
math/0004185
|
As it follows from REF the system REF written in ``quasi-polar" coordinates satisfies all conditions of REF .
|
math/0004185
|
We have MATH . Assumption MATH implies that MATH converges. Hence the limit MATH exists. The theorem is proved.
|
math/0004185
|
It follows immediately from REF.
|
math/0004185
|
Let MATH denote any bounded solution of REF . REF implies the convergence MATH as MATH. Every solution of REF lying on the invariant torus MATH has the form MATH, where MATH is the initial value. It is necessary and sufficient to prove the existence of a MATH such that MATH . Thus it is sufficient to prove that the limit value MATH exists. Since MATH, it is sufficient to prove that the obtained integral converges. By REF we have MATH . There is a constant MATH such that MATH for MATH. Hence MATH . Next we need to estimate MATH. First of all we notice that MATH . Since MATH, it satisfies the NAME inequality with some constant MATH (on a compact set MATH in MATH such that MATH for MATH ) MATH . It follows from REF that MATH. REF implies the convergence of MATH . Hence the limit MATH exists. This proves the theorem.
|
math/0004188
|
Multiplying both sides of REF by MATH, we reduce REF to the identity MATH . By NAME 's formula MATH the right-hand side of REF can be transformed into MATH . Thus, we need to verify that MATH, which is equivalent to MATH, which is true because [REF ] MATH .
|
math/0004188
|
Multiplying both sides of REF by MATH and summing on MATH, we find: MATH where we used the two handy formulae: MATH . Formulae REF convert the identity REF into MATH where MATH. Each side of the equality REF vanishes for MATH, so it's equivalent to the MATH-derivative MATH of it: MATH where we used REF . Since the right-hand side of REF is MATH, we need to verify that MATH, and this is REF for MATH.
|
math/0004188
|
Each side of the identity REF vanishes for MATH, so we apply MATH to it to make it simpler-looking. We get: MATH because, as is easily verified, MATH . We thus arrive at the identity MATH . Denote MATH . We shall prove that MATH REF then results from REF when MATH. To prove REF we use induction on MATH. For MATH, REF is obviously true. Now, denote the right-hand side of REF by MATH. Then MATH where MATH . Thus, MATH .
|
math/0004188
|
By REF , the LHS of REF is MATH .
|
math/0004188
|
Since MATH we have MATH . Now, MATH . Since MATH, the exponents MATH are, modulo MATH, just a permutation of the exponents MATH. Therefore, the expressions MATH are, since MATH, just a permutation of the expressions MATH modulo MATH. This proves REF . Now, for any MATH, coprime to MATH or not, MATH so that, by REF , for a coprime to MATH, MATH . This proves REF . Finally, if MATH is divisible by MATH, MATH, then MATH so that MATH . This proves REF .
|
math/0004188
|
First, since MATH is a prime, MATH . Next, it's easy to verify that MATH . Taking MATH and using REF , we get MATH . Therefore, MATH .
|
math/0004188
|
The LHS of the congruence REF is MATH . Since MATH is coprime to MATH, the set MATH is, modulo MATH, a permutation of the set MATH.
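The permutation fact used here is elementary; a minimal sketch in Python (with an arbitrarily chosen prime for illustration, not a value taken from the paper) checks that multiplication by a unit permutes the nonzero residues modulo a prime:

```python
# Minimal sketch, illustrative values only (p = 13 is not from the paper):
# for a prime p and any a coprime to p, k -> a*k (mod p) permutes {1, ..., p-1}.
from math import gcd

def multiplication_permutes_residues(a: int, p: int) -> bool:
    """Return True if {a*k mod p : k = 1..p-1} equals {1, ..., p-1}."""
    assert gcd(a, p) == 1, "a must be coprime to p"
    return {a * k % p for k in range(1, p)} == set(range(1, p))

p = 13
assert all(multiplication_permutes_residues(a, p) for a in range(1, p))
print(f"multiplication by any unit permutes the nonzero residues mod {p}")
```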
|
math/0004188
|
The congruence MATH has exactly two solutions: MATH so that MATH . Otherwise, MATH . For each such pair MATH we have: MATH . Therefore, for all such MATH pairs combined, MATH where MATH . Thus, MATH . In addition, MATH and MATH . Combining formulae REF - REF we obtain REF . Since REF follows from REF .
|
nlin/0004008
|
REF follows from REF - REF, and REF. Similarly, REF follows from REF - REF, and REF. Relations REF - REF are obvious from REF.
|
nlin/0004008
|
REF are verified using REF - REF, and REF follows by combining REF, and REF. Clearly MATH is meromorphic on MATH by REF. Since MATH one infers that MATH is meromorphic on MATH if the zeros of MATH are all simple. This follows from REF by restricting MATH to a sufficiently small neighborhood MATH of MATH such that MATH for all MATH and all MATH, and similarly, by restricting MATH to a sufficiently small neighborhood MATH of MATH such that MATH for all MATH and all MATH. REF follows from REF after replacing MATH by the right-hand side of REF and utilizing REF in the MATH-integral and REF in the MATH-integral. REF - REF immediately follow from REF - REF, and REF.
|
nlin/0004008
|
The existence of these asymptotic expansions (in terms of local coordinates MATH near MATH and local coordinate MATH near MATH) is clear from the explicit form of MATH in REF. Insertion of the polynomials MATH, MATH, and MATH, then in principle, yields the explicit expansion coefficients in REF - REF. However, this is a cumbersome procedure, especially with regard to the next to leading coefficients in REF - REF. Much more efficient is the actual computation of these coefficients utilizing the NAME REF . Indeed, inserting the ansatz MATH into REF and comparing the first two leading powers of MATH immediately yields REF. Similarly, the ansatz MATH inserted into REF immediately produces REF. In exactly the same manner, inserting the ansatz MATH and the ansatz MATH into REF immediately yields REF, respectively.
|
nlin/0004008
|
REF follow from REF noting MATH .
|
nlin/0004008
|
REF , and REF imply MATH . Using MATH by REF, one concludes REF. Similarly, one derives from REF, and REF, MATH . Since MATH by REF, one arrives at REF. REF are derived analogously. In order to conclude REF, one first needs to investigate the case where MATH hits one of the branch points MATH and hence the right-hand sides of REF vanish. Thus we suppose that MATH for some MATH, MATH and some MATH. Introducing MATH for MATH in an open neighborhood of MATH, REF become MATH . Since by hypothesis the right-hand sides of REF are nonvanishing, one arrives at REF.
|
nlin/0004008
|
REF follow from REF by comparing powers of MATH and MATH, using REF. REF follow from taking MATH in REF, again using REF. Finally, REF follow from MATH, MATH and REF.
|
nlin/0004008
|
It suffices to insert REF into the system REF - REF.
|
nlin/0004008
|
Define the polynomial MATH . Using REF and MATH (by differentiating REF with respect to MATH) one then computes MATH . In order to investigate the leading-order term with respect to MATH of MATH we first study the leading-order MATH-behavior of MATH, and MATH. Writing (compare REF - REF) MATH a comparison of leading powers with respect to MATH in REF, and REF yields MATH . Since REF can be rewritten in the form MATH a comparison of REF then yields MATH and hence MATH . Insertion of REF, and REF into REF then yields MATH . Thus, REF prove MATH for some MATH (independent of MATH), implying MATH . Taking MATH in REF, observing that MATH is independent of MATH by REF, then shows that MATH and hence MATH on MATH because of REF. Thus, MATH . Differentiating REF with respect to MATH, inserting REF, then yields MATH and we proved REF - REF. Next, combining REF, and REF one computes MATH . Since clearly MATH a comparison of REF yields MATH for some MATH (independent of MATH), and hence REF (except for MATH). A comparison of powers of MATH in REF then yields REF. Next, we restrict MATH a bit further and introduce MATH by the requirement that MATH remain distinct and also distinct from MATH for MATH, that is, we suppose MATH . Differentiating REF with respect to MATH inserting REF then yields MATH . Since the zeros of MATH and MATH are disjoint by REF (compare also REF), MATH necessarily must be of the form MATH for some MATH (independent of MATH) and REF inserted into REF then yields MATH . Since MATH combining REF, taking MATH in REF, observing REF, results in MATH and hence in MATH . Using REF - REF then extend by continuity from MATH to MATH. This proves REF (except for MATH). A comparison of powers of MATH in REF then yields REF. Taking MATH in REF, observing REF, then proves REF. Finally, computing the partial MATH-derivative of MATH and separately the partial MATH-derivative of MATH, utilizing REF, and REF - REF then shows MATH and hence MATH .
|
nlin/0004008
|
Abbreviating MATH one infers from MATH (compare REF), and REF, that MATH . Next, observing the fact that MATH, it becomes a straightforward matter to derive REF - REF. For simplicity we just focus on the expansion of MATH as MATH; the rest is completely analogous. Using MATH and REF - REF, one computes by comparison with REF, MATH . This proves REF. Similarly, one calculates MATH, proving REF.
|
nlin/0004008
|
Given the expansions REF of MATH near MATH and MATH, REF are standard facts following from NAME interpolation results of the type (see, for example, CITE) MATH .
|
nlin/0004008
|
Using REF (compare REF) one obtains MATH and hence MATH . Here we used REF to convert the directional derivatives MATH and MATH, MATH into MATH and MATH derivatives. Since by REF exactly the same REF apply to MATH, insertion of REF, and REF (and their MATH analogs) into REF proves REF - REF.
|
nlin/0004008
|
A comparison of REF - REF - REF yields MATH and hence MATH proves REF. Insertion of REF into the leading asymptotic term of REF - REF then yields REF - REF.
|
nlin/0004008
|
Introducing MATH with an appropriate normalization MATH (which is MATH-independent) to be determined later, we next intend to prove that MATH . A comparison of REF, and REF shows that MATH and MATH share the identical essential singularity near MATH. Next we turn to the local behavior of MATH with respect to its zeros and poles. We temporarily restrict MATH to MATH such that for all MATH, MATH for all MATH, MATH. Then, arguing as in the paragraph following REF , one infers from REF that MATH where MATH. Applying REF then proves REF for MATH. By continuity this extends to MATH as long as MATH remains nonspecial. Finally we determine MATH. A comparison of REF, and REF yields MATH and MATH . Thus, REF implies MATH and REF yields MATH . In order to reconcile the two REF for MATH, it suffices to recall the linear dependence of the divisors MATH and MATH, that is, MATH to conclude that MATH and hence equality of the right-hand sides of REF. This proves REF.
|
nlin/0004008
|
Since MATH for all MATH, there is nothing to prove in the special case MATH. Hence we assume MATH. Let MATH be a fixed branch point of MATH and suppose that MATH is special. Then by REF there is a pair MATH such that MATH where MATH. Let MATH so that MATH. Then MATH and hence by REF , MATH . Since, by REF , MATH is nonspecial and MATH, REF contradicts REF . Thus, MATH is nonspecial.
|
nlin/0004041
|
Let us take MATH in the previous theorem. Since MATH, it follows that all conditions of REF are satisfied.
|
physics/0004057
|
Taking the derivative with respect to MATH, for given MATH and MATH, one obtains MATH since the marginal distribution satisfies MATH. MATH are the normalization NAME multipliers for each MATH. Setting the derivatives to zero and writing MATH, we obtain REF . When varying the normalized MATH, the variations MATH and MATH are linked through MATH from which REF follows. The positivity of MATH is then a consequence of the concavity of the rate distortion function (see, for example, REF of Ref. CITE).
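For orientation, in the standard rate-distortion setting that this variational computation parallels (generic symbols, not the redacted ones), setting the derivative of the Lagrangian to zero gives the familiar exponential form with a per-sample partition function:

```latex
% Hedged sketch in standard rate-distortion notation (an assumption about the
% redacted symbols, not a verbatim transcription): d(x,\tilde x) is the
% distortion, beta the Lagrange multiplier, Z(x,beta) the partition function.
p(\tilde x\mid x)=\frac{p(\tilde x)}{Z(x,\beta)}\,
  \exp\!\big(-\beta\,d(x,\tilde x)\big),
\qquad
Z(x,\beta)=\sum_{\tilde x} p(\tilde x)\,\exp\!\big(-\beta\,d(x,\tilde x)\big).
```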
|
physics/0004057
|
First we note that the conditional distribution of MATH on MATH follows from the NAME chain condition MATH. The only variational variables in this scheme are the conditional distributions, MATH, since the other unknown distributions are determined from them through NAME 's rule and consistency. Thus we have MATH and MATH . The above equations imply the following derivatives with respect to MATH, MATH and MATH . Introducing NAME multipliers, MATH for the information constraint and MATH for the normalization of the conditional distributions MATH at each MATH, the Lagrangian, REF , becomes MATH . Taking derivatives with respect to MATH for given MATH and MATH, one obtains MATH . Substituting the derivatives from Eqs. REF and rearranging, MATH . Notice that MATH is a function of MATH only (independent of MATH) and thus can be absorbed into the multiplier MATH. Introducing MATH, we finally obtain the variational condition: MATH which is equivalent to REF for MATH, MATH with MATH the normalization (partition) function.
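The resulting condition has the structure of a self-consistent equation in which the effective distortion is a relative entropy; as a hedged illustration in standard notation (this assumes the redacted quantities are the usual conditional distributions and is not a verbatim transcription of the paper):

```latex
% Hedged sketch, standard notation (an assumption about the redacted symbols):
% the variational condition with a Kullback-Leibler effective distortion and
% Z(x,beta) the normalization (partition) function mentioned above.
p(\tilde x\mid x)=\frac{p(\tilde x)}{Z(x,\beta)}\,
  \exp\!\Big(-\beta\,D_{\mathrm{KL}}\big[\,p(y\mid x)\,\big\|\,p(y\mid\tilde x)\,\big]\Big).
```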
|
physics/0004057
|
For lack of space we can only outline the proof. First we show that the equations are indeed satisfied at the minima of the functional MATH (known to physicists as the ``free energy''). This follows from REF when applied to MATH with the convex sets of MATH and MATH, as for the BA algorithm. Then the second part of the lemma is applied to MATH, which is an expected relative entropy. REF minimizes the expected relative entropy with respect to variations in the convex set of the normalized MATH. Denoting by MATH and by MATH the normalization NAME multipliers, we obtain MATH . The expected relative entropy becomes MATH, which gives REF , since MATH are independent for each MATH. REF also have the interpretation of a weighted average of the data conditional distributions that contribute to the representative MATH. To prove the convergence of the iterations it is enough to verify that each of the iteration steps minimizes the same functional, independently, and that this functional is bounded from below as a sum of two non-negative terms. The only point to notice is that when MATH is fixed we are back to the rate distortion case with fixed distortion matrix MATH. The argument in CITE for the BA algorithm applies here as well. On the other hand we have just shown that the third equation minimizes the expected relative entropy without affecting the mutual information MATH. This proves the convergence of the alternating iterations. However, the situation here is similar to the EM algorithm and the functional MATH is convex in each of the distributions independently but not in the product space of these distributions. Thus our convergence proof does not imply uniqueness of the solution.
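Since the convergence argument is about alternating minimization of a single bounded functional, as for the BA algorithm, a minimal code sketch of such an alternating scheme may help fix ideas. This is an illustrative implementation under the assumption that the update steps take the standard self-consistent form sketched above; it is not a transcription of the paper's (redacted) equations:

```python
# Minimal sketch of Blahut-Arimoto-style alternating iterations for an
# information-bottleneck-type problem. Assumption (not from the redacted text):
# the updates are the standard ones for q(t|x), q(t) and q(y|t).
import numpy as np

def alternating_iterations(p_xy, n_clusters, beta, n_iter=200, seed=0):
    """p_xy: joint distribution over (x, y) as a 2-D array; returns q(t|x)."""
    rng = np.random.default_rng(seed)
    p_x = p_xy.sum(axis=1)                       # marginal p(x)
    p_y_x = p_xy / p_x[:, None]                  # conditional p(y|x)
    q_t_x = rng.random((len(p_x), n_clusters))
    q_t_x /= q_t_x.sum(axis=1, keepdims=True)    # random initial q(t|x)
    for _ in range(n_iter):
        q_t = q_t_x.T @ p_x                                       # q(t)
        q_y_t = (q_t_x * p_x[:, None]).T @ p_y_x / q_t[:, None]   # q(y|t)
        # effective distortion: KL[p(y|x) || q(y|t)] for every pair (x, t)
        kl = np.array([[np.sum(p_y_x[i] * np.log(p_y_x[i] / q_y_t[t]))
                        for t in range(n_clusters)]
                       for i in range(len(p_x))])
        q_t_x = q_t[None, :] * np.exp(-beta * kl)
        q_t_x /= q_t_x.sum(axis=1, keepdims=True)  # normalization (partition function)
    return q_t_x

# Tiny synthetic example (illustrative data only):
p_xy = np.array([[0.30, 0.10], [0.10, 0.30], [0.05, 0.15]])
p_xy /= p_xy.sum()
print(alternating_iterations(p_xy, n_clusters=2, beta=5.0))
```

Each update step decreases the same bounded functional, so the iterations converge; as noted above, this does not by itself give uniqueness of the fixed point.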
|
quant-ph/0004017
|
At deposit time NAME sends NAME one qubit MATH, which might be entangled with the qubits MATH that NAME holds. Let us denote the reduced density matrix of MATH by MATH. At revealing time, NAME may choose whether she wants to bias the result towards MATH, in which case she applies the generalized measurement MATH, or towards MATH in which case she applies MATH. The measurements MATH and MATH do not change the reduced density matrix MATH of NAME, but rather give different ways to realize MATH as a mixture of pure states, and give NAME information about the value that NAME actually gets to see in this mixture. Now, we even go further and give NAME complete freedom to choose the way she realizes the reduced density matrix MATH of NAME as a mixture, and we give her the knowledge of NAME 's value for free. Let us say that when NAME applies MATH, the reduced density matrix MATH is realized as the mixture MATH, and when NAME applies MATH the reduced density matrix MATH is realized as the mixture MATH. Now, let us focus on the zero strategy. Say NAME realizes MATH as MATH. When the MATH'th event happens, NAME 's strategy tells her to send two qubits MATH to NAME that are supposed to hold classical MATH values for MATH and MATH. NAME then measures MATH and MATH in the MATH basis. Now, if one of MATH is not a classical bit, then NAME can measure it herself in the MATH basis, and get a mixture over classical bits. Furthermore, we can push all the probabilistic decisions into the mixture MATH. Thus, without loss of generality, we can assume NAME 's answers MATH and MATH are classical bits that are determined by the event MATH. Let us denote by MATH the vector MATH where MATH are NAME 's answers when event MATH occurs. Without loss of generality we may assume MATH; otherwise we know NAME immediately rejects. The probability NAME discovers that NAME is cheating is then MATH, and the overall probability NAME detects that NAME is cheating is MATH . Let us define the density matrix MATH. MATH. MATH. Therefore MATH . Now, by NAME inequality, MATH and the claim follows. Similarly, if NAME tries to bias the result towards MATH, MATH ends up in the mixture MATH, and when MATH occurs NAME sends MATH to NAME that correspond to a vector MATH. We define MATH to be the reduced density matrix MATH. As before, MATH. Hence, MATH. To conclude the proof, we establish the following claim: Let MATH and MATH be density matrices corresponding to mixtures over MATH. Let MATH be the probability of MATH or MATH in the first mixture, and MATH be the probability of MATH or MATH. Similarly let MATH and MATH be the corresponding quantities for the second mixture. Then MATH. We show that we can distinguish the mixtures with probability at least MATH when we measure them according to the basis MATH. If we do the measurement on a qubit whose state is the reduced density matrix MATH we get the MATH answer with probability MATH, while if we do the measurement on a qubit whose state is the reduced density matrix MATH we get the MATH answer with probability MATH. The difference is MATH, where we used MATH. Altogether we get MATH as desired. Putting it together: MATH . That is, MATH.
|
quant-ph/0004017
|
We first represent the strategy of an honest NAME in quantum language. Consider two maximally parallel purifications MATH and MATH of MATH and MATH, where MATH and MATH are density matrices of the register MATH, and the purifications are states on a larger NAME space MATH. By CITE, MATH. At preparation time, NAME prepares the state MATH on MATH and one extra qubit MATH. NAME then sends the register MATH to NAME. At revealing time, NAME measures the qubit MATH in the MATH basis, to get a bit MATH. The state of registers MATH is now MATH. NAME then applies a unitary transformation MATH on register MATH, which rotates her state MATH to the state MATH . This is possible by REF . After applying MATH, NAME measures register MATH in the computational basis and sends NAME the bit MATH and the outcome of the second measurement, MATH. This strategy is similar to the honest strategy, except that NAME does not know which bit and state are sent until revealing time. We can also assume without loss of generality that the maximally parallel purifications satisfy that MATH is real and positive. This can be assumed since otherwise we could multiply MATH by an overall phase without changing the reduced density matrix and the absolute value of the inner product. To cheat, NAME creates the encoding MATH and sends register MATH to NAME. NAME 's one strategy is also as described above. The zero strategy, on the other hand, is a slight modification of the honest strategy. At revealing time, NAME measures the control qubit MATH in the MATH basis, where MATH and MATH, MATH. If the outcome is a projection on MATH, NAME sends MATH and proceeds according to the MATH honest protocol, that is, applies MATH to register MATH, measures in the computational basis and sends the result to NAME. If the outcome is a projection on MATH, NAME proceeds according to the MATH honest protocol. Let us now compute NAME 's advantage and NAME 's probability of getting caught cheating. We can express MATH as: MATH . Hence, the probability NAME sends MATH in the zero strategy is MATH. We conclude: NAME 's advantage is MATH. We now prove that the detection probability is at most MATH. The state of MATH, conditioned on the first measurement yielding MATH, can be written as MATH where MATH is the probability NAME sends MATH in the zero strategy. The above state can be written as MATH . The rest of the protocol involves NAME 's rotation of the state by MATH, then NAME 's measurement of the register MATH and NAME 's measurement of the register MATH. The entire process can be treated as a generalized measurement on this state, where this measurement is a projection onto one of two subspaces, the ``cheating NAME'' and the ``honest NAME'' subspaces. We know that MATH lies entirely in the honest NAME subspace, and thus the probability that NAME is caught, conditioned that MATH was projected on MATH, is at most MATH. In the same way, when we condition on a projection on MATH, NAME 's state can be written as MATH, which gives a probability of detection which is at most MATH. Adding the conditional probabilities together we get that the detection probability is at most MATH.
|
quant-ph/0004017
|
We first describe a general scenario. NAME is honest and sends MATH to NAME. NAME has an ancilla MATH. NAME applies some unitary transformation MATH acting on the registers MATH and MATH. Let us denote MATH. NAME then sends register MATH to NAME, and keeps register MATH to himself. We want to show that if MATH contains much information about MATH then NAME detects NAME cheating with a good probability. We can express MATH as a superposition, MATH where we have used the basis MATH, MATH, for MATH. In this representation, the probability MATH that NAME is caught cheating is: MATH which in particular implies that MATH. We now want to express NAME 's advantage. Let MATH REF be the reduced density matrix of the register MATH conditioned on the event that MATH (MATH). Then, MATH. NAME 's advantage is at most the trace distance between MATH and MATH, and we want to bound it from above. The triangle inequality gives: MATH. As the trace norm of two pure states MATH and MATH is MATH, and using REF , we get: MATH . We now claim: MATH. Thus, altogether, MATH which completes the proof.
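The inner-product step uses a standard identity for pure states; for reference, a textbook fact stated in generic notation (the paper's own normalization convention is hidden by the redaction):

```latex
% Textbook identity, generic notation: the trace norm of the difference of two
% pure-state projectors; the trace distance (half of it) is sqrt(1-|<psi|phi>|^2).
\big\|\,|\psi\rangle\langle\psi| - |\phi\rangle\langle\phi|\,\big\|_{\mathrm{tr}}
  \;=\; 2\sqrt{\,1-|\langle\psi|\phi\rangle|^{2}\,}.
```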
|
quant-ph/0004017
|
We will prove that all the unprimed MATH vectors lie in one bunch of small width, using the unitarity of MATH. The unitarity of MATH implies that MATH. We can express MATH as in REF . We get: MATH . Substituting the values MATH for actual values, and noticing that MATH, we in particular get the following equations: MATH where MATH, MATH and we write MATH if MATH. A partial result can already be derived from what we have so far. By REF , we note that the length of the primed MATH vectors is at most MATH. Inserting this into REF , we get that MATH and similarly MATH are close to MATH up to terms of order MATH. This is weaker than the result which we want to achieve in REF , which is closeness to MATH up to order MATH terms. If we stop here, the closeness of the unprimed MATH vectors up to order MATH implies that NAME 's information is at most of the order of MATH. Note, however, that so far all we have used is unitarity, and we have not used the particular properties of the set of vectors we use in the protocol. In the rest of the proof, we will use the symmetry in protocol REF to improve on this partial result, and to show that NAME 's information is at most of the order of MATH. Basically, the symmetry which we will use is the fact that the vectors in the protocol can be paired into orthogonal vectors. We proceed as follows. The idea is to express REF as inequalities involving only the distances between two MATH vectors, MATH and then to solve the set of the four inequalities to give an upper bound on the pairwise distances. This will imply a bound on the inner products, MATH, by the following connection: MATH, where MATH denotes the real part of the complex number MATH. MATH. We denote: MATH . Let MATH REF be the sum of the left (right) hand side of the last four equations. MATH. MATH and now we can apply REF . Expressing the left hand side of the equations in terms of MATH and MATH might look a bit more complicated, and this is where we invoke the symmetric properties of the protocol, namely REF . MATH . We first look at the LHS of REF + REF . By adding MATH (due to REF ) and by using the fact that MATH we get that the LHS of these two equations contributes MATH. Similarly, the LHS of REF + REF is MATH. Altogether, MATH. Combining REF with our knowledge that MATH we get: MATH . We want to show that MATH are all of the order of MATH. Define MATH. For MATH, MATH. Since all terms on the left hand side are positive, we have for each of MATH an upper bound in terms of MATH: MATH . Thus, MATH. Solving the quadratic equation MATH for MATH we get MATH . Finally, MATH where the third inequality is true due to REF . Similarly, we have the same lower bound for MATH, which implies REF .
|
quant-ph/0004017
|
We will show MATH . Thus, MATH, where the last equality is due to REF . Since MATH, we get MATH, as desired.
|
quant-ph/0004017
|
We express MATH, where MATH is a pure state. We further express each MATH in the eigenbasis MATH: MATH . Applying MATH, this state is taken to: MATH . The reduced density matrix of the register MATH, in the case of the event MATH, is: MATH and altogether, MATH. To complete the proof we just notice that MATH. The proof for MATH is similar.
|
quant-ph/0004017
|
Say NAME sent NAME the state MATH. We can express it as MATH, where MATH and MATH. NAME applies MATH on MATH and gets MATH . Therefore, if we measure the last qubit, then with probability MATH we end up in MATH, and with probability MATH we end up in MATH (normalized). Thus the density matrix of MATH after tracing out the last qubit is: MATH . To find the probability that NAME does not detect NAME cheating, we calculate MATH. We get: MATH . The probability of NAME detecting an error is thus MATH.
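As a hedged illustration of this calculation (the dimensions, the random state and the reference vector below are assumptions, not the protocol's), one can measure the last qubit of a small pure state numerically and read off both the outcome probabilities and the probability that a projective check onto a fixed state passes.

```python
# Hedged sketch: measure the last qubit of a bipartite pure state and compute
# the probability that a verification measurement onto a fixed |psi> succeeds.
import numpy as np

rng = np.random.default_rng(1)

def normalize(v):
    return v / np.linalg.norm(v)

d = 4                                    # dimension of the kept register (assumption)
state = normalize(rng.normal(size=2 * d) + 1j * rng.normal(size=2 * d))
m = state.reshape(d, 2)                  # rows: kept register, columns: last qubit

# Probabilities of the two outcomes when the last qubit is measured.
outcome_probs = [np.linalg.norm(m[:, b]) ** 2 for b in (0, 1)]

# Density matrix of the kept register after tracing out the last qubit.
rho = m @ m.conj().T

# Probability that a projective check onto a hypothetical reference state passes.
psi = normalize(rng.normal(size=d) + 1j * rng.normal(size=d))
p_pass = float(np.real(psi.conj() @ rho @ psi))
print("outcome probabilities:", outcome_probs, "detection probability:", 1 - p_pass)
```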
|
quant-ph/0004032
|
Let MATH and denote by MATH the orthogonal projection onto MATH. For all MATH and MATH we have MATH . Since MATH and MATH are in MATH and MATH is square integrable modulo MATH, we have MATH . Explicitly, MATH . For any MATH the function MATH defined in REF is in MATH and we get MATH . We claim that MATH . Indeed MATH and the function MATH is in MATH (see, for instance, CITE). Hence its product with MATH is in MATH and the claim follows by NAME 's theorem. Now we can apply REF to the function MATH to conclude that MATH where MATH, and MATH denotes the convolution. In particular MATH. Letting MATH run over a sequence of functions on MATH which forms an approximate identity (see, for example, CITE), one can prove that MATH in MATH (see the remark below) and, since MATH is closed, MATH. This shows that MATH.
|
quant-ph/0004032
|
From REF above it follows that the set MATH is a resolution of the identity of MATH and MATH is an isotypic representation. Hence, MATH is a resolution of the identity in MATH if and only if MATH and, in this case, MATH is isotypic. Conversely, assume that MATH is an isotypic representation. Let MATH be an irreducible subrepresentation of MATH; then MATH is square integrable modulo the centre and, by REF , MATH. Since MATH is isotypic and MATH is equivalent to a subrepresentation of MATH, MATH is equivalent to MATH, so that MATH. Since MATH is a direct sum of copies of MATH, it follows that MATH.
|
quant-ph/0004032
|
It is known, see for example CITE, that MATH is informationally complete if and only if it separates the set of trace class operators. Let MATH be a trace class operator, then MATH for any MATH if and only if MATH for MATH-almost all MATH. Observing that MATH for all MATH such that MATH, this last condition is equivalent to MATH for MATH-almost all MATH. Since the map MATH is continuous, the lemma is proved.
|
quant-ph/0004032
|
Let MATH be a trace class operator, and consider the decompositions of MATH and MATH as given in REF, that is, MATH and MATH . Since MATH, it follows that MATH for all MATH. Given MATH, using the orthogonality relations REF , one has MATH since MATH converges in MATH to MATH, as shown in REF, and MATH is bounded. Hence MATH and, if MATH for all MATH, then MATH for MATH-almost all MATH. On the other hand, if MATH is a basis of MATH, MATH where the double series converges in MATH. Since the set MATH is orthonormal in MATH, the condition MATH for MATH-almost all MATH implies MATH for all MATH, that is, MATH and this proves that MATH is informationally complete.
|
quant-ph/0004072
|
This fact is a simple consequence of the linearity of quantum mechanics. Suppose we had such an operation and that MATH and MATH are distinct. Then, by the definition of the operation, MATH . (Here, and frequently below, I omit normalization, which is generally unimportant.) But by linearity, MATH . This differs from REF by the cross term MATH .
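A small self-contained illustration of this linearity argument (a hypothetical cloning-style map on a single qubit, chosen here only for concreteness and not taken from the paper): defining the map on basis states and extending it linearly produces exactly the missing cross terms when it is applied to a superposition.

```python
# Hedged toy example of the linearity argument.
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Linear extension of a hypothetical "cloner" defined on the basis states.
def clone_linear(psi):
    a, b = psi          # psi = a|0> + b|1>
    return a * np.kron(ket0, ket0) + b * np.kron(ket1, ket1)

plus = (ket0 + ket1) / np.sqrt(2)
linear_result = clone_linear(plus)       # (|00> + |11>)/sqrt(2)
desired_clone = np.kron(plus, plus)      # (|00> + |01> + |10> + |11>)/2

# False: the linearly extended map misses the cross terms |01> and |10>.
print(np.allclose(linear_result, desired_clone))
```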
|
quant-ph/0004072
|
Suppose MATH is a basis for MATH and MATH is a basis for MATH. By setting MATH and MATH equal to the basis elements and to the sum and difference of two basis elements (with or without a phase factor MATH), we can see that REF is equivalent to MATH where MATH is a Hermitian matrix independent of MATH and MATH. Suppose REF holds. We can diagonalize MATH. This involves choosing a new basis MATH for MATH, and the result is REF , and REF . The arguments before the theorem show that we can measure the error, determine it uniquely (in the new basis), and invert it (on the coding space). Thus, we have a quantum error-correcting code. Now suppose we have a quantum error-correcting code, and let MATH and MATH be two distinct codewords. Then we must have MATH for all MATH. That is, REF must hold. If not, MATH changes the relative size of MATH and MATH. Both MATH and MATH are valid codewords, and MATH where MATH is a normalization factor and MATH . The error MATH will actually change the encoded state, which is a failure of the code, unless MATH.
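The condition just derived can be checked mechanically for a concrete code; the following sketch uses the standard three-qubit bit-flip code and single bit-flip errors purely as an assumed illustration (the statement above is about general codes and error sets) and verifies that each matrix of inner products is a multiple of the identity.

```python
# Hedged sketch: the code and error set below are standard textbook choices,
# not taken from this paper.
import numpy as np
from itertools import product

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def kron_all(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Codewords |0_L> = |000>, |1_L> = |111> of the three-qubit bit-flip code.
basis = np.eye(8)
codewords = [basis[:, 0], basis[:, 7]]

# Error set: identity and the three single bit flips.
errors = [kron_all(ops) for ops in
          [(I2, I2, I2), (X, I2, I2), (I2, X, I2), (I2, I2, X)]]

ok = True
for Ea, Eb in product(errors, repeat=2):
    # Matrix of inner products <c_i| Ea^dagger Eb |c_j> over the two codewords.
    M = np.array([[ci @ Ea.conj().T @ Eb @ cj for cj in codewords] for ci in codewords])
    # Error-correction condition: M must equal a constant C_ab times the identity.
    ok = ok and np.allclose(M, M[0, 0] * np.eye(2))
print("error-correction condition satisfied:", ok)
```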
|
cond-mat/0005026
|
MATH is a continuous and positive function that satisfies the variational equation MATH with MATH. Let MATH. Since MATH we see that MATH on MATH, that is, MATH is subharmonic on MATH. Hence MATH achieves its maximum on the boundary of MATH, where MATH, so MATH is empty.
|
cond-mat/0005026
|
Obviously, the infimum is achieved for a multiple of a characteristic function for some measurable set MATH. If MATH denotes the NAME measure of MATH, then MATH . Now MATH, with equality for MATH .
|
cond-mat/0005026
|
Immediate consequences of REF , using that MATH and MATH.
|
cond-mat/0005026
|
It is clear that MATH. For the other direction, we use MATH as a test function for MATH, where MATH . Note that MATH and MATH. Therefore MATH where we have used convexity for the last term. Moreover, MATH as long as MATH. So we have MATH . Optimizing over MATH gives as a final result MATH .
|
cond-mat/0005026
|
With the required properties of MATH, REF holds. Using this and REF one easily verifies REF . Moreover, MATH is a minimizing sequence for the functional in question, so we can conclude as in REF that it converges to MATH strongly in MATH, proving REF . (Remark: in REF there is a misprint; instead of MATH one should have MATH on the left side.) To see REF let us define MATH . We can write MATH with MATH . By assumption, MATH for some MATH with MATH. Because MATH for all MATH, we see from REF that MATH converges to some MATH as MATH. Moreover, we can conclude that the support of MATH is, for large MATH, contained in some bounded set MATH independent of MATH. Therefore MATH by dominated convergence, so MATH is equal to the MATH of REF . Now MATH with MATH . Again MATH for some MATH with MATH. By REF we thus have MATH with MATH.
|
cond-mat/0005026
|
The upper bound is trivial. Because MATH, defined in REF , converges uniformly to MATH and MATH as MATH, we have the lower bound MATH for some MATH.
|
cond-mat/0005026
|
Since MATH for all MATH we have MATH with MATH . Choosing MATH gives the desired result.
|
cs/0005001
|
The lemma can be proved directly by considering the worst case: even then, we can always divide the rectangle into MATH regions of unit size, which means the above difference always vanishes.
|
cs/0005001
|
REF follows immediately for MATH because, for any choice of partition, each anti-A-noise-concentrated block of size MATH can at best ``contaminate" MATH equal-size regions of MATH. It is a simple matter to confirm that the conclusion is valid for MATH as well. REF comes from the fact that at most MATH of the MATH regions can be contaminated when MATH.
|
cs/0005001
|
To prove REF , we must show that, among all the MATH different partitions that the Shifting Strategy can possibly generate, each anti-A-noise-concentrated block of size MATH is capable of contaminating a total of MATH different regions. Once this is done, the NAME Hole Principle CITE ensures that there is at least one partition in which the existing MATH anti-A-noise-concentrated blocks contaminate at most MATH regions. We prove this for MATH first. Among all possible MATH partitions, MATH partitions divide the block into REF regions, MATH of them divide the block into REF regions, while MATH of the partitions cannot divide the block into more than one region. Summing them up, the noise-concentrated block is divided into MATH different regions. REF illustrates the three cases above for MATH and MATH, for example. We see that the partitions through the solid dots of the figure divide the noise block into REF regions, those through the crossed points divide it into REF regions, while those through the hollow dots may not divide the block at all. For MATH, we enumerate all the MATH possible partitions. We know that the block would be divided into MATH, MATH, MATH, MATH, MATH, MATH; MATH, MATH, MATH, MATH, MATH, MATH; MATH, MATH, MATH, MATH, MATH, MATH; MATH; MATH, MATH, MATH, MATH, MATH, MATH regions, respectively. Summing up all the terms above by means of REF , we see that the block is divided into MATH different regions over all the MATH different partitions. REF follows from REF since the condition that the number of noise-contaminated regions be less than REF of the total number of regions is a sufficient condition for the original candidate selection of MATH.
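The counting step can be reproduced in a simplified one-dimensional setting (the argument above concerns two-dimensional regions; the shift size and block position below are arbitrary assumptions): sum, over all shifted partitions, the number of cells a single block meets, then apply the pigeonhole principle to pick a good shift.

```python
# 1-D toy illustration of the counting/pigeonhole step; all numbers are assumptions.
k = 4            # cell size = number of shifted partitions
block = (7, 9)   # a noise-concentrated block occupying positions 7..9 (length 3 <= k)

def cells_met(start, end, offset, cell):
    """Number of cells [offset + i*cell, offset + (i+1)*cell) that the block intersects."""
    return (end - offset) // cell - (start - offset) // cell + 1

counts = [cells_met(block[0], block[1], s, k) for s in range(k)]
total = sum(counts)
print("cells met per shift:", counts, "summed over all shifts:", total)
# Pigeonhole: at least one shift meets at most total // k cells (here 6 // 4 = 1).
print("best shift meets at most", min(counts), "cells")
```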
|
cs/0005009
|
For a variable MATH, we define MATH as the maximum depth of nested modal operators in MATH. Obviously, MATH holds. Also, if MATH, then MATH. Hence each path MATH in MATH induces a sequence MATH of natural numbers. MATH is a tree with root MATH; hence the longest path in MATH starts with MATH and its length is bounded by MATH. Successors in MATH are only generated by the MATH-rule. For a variable MATH this rule will generate at most MATH successors for each MATH. There are at most MATH such formulae in MATH. Hence the out-degree of MATH is bounded by MATH, where MATH is a bound on the largest number that may appear in MATH if binary coding is used.
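For concreteness, the depth measure on which this bound rests can be computed by a short recursion over the formula; the tuple encoding of formulas below is an assumption made for illustration, not the paper's syntax.

```python
# Hedged sketch: maximum nesting depth of modal operators in an NNF-style formula.
def modal_depth(phi):
    op = phi[0]
    if op == 'atom':
        return 0
    if op == 'not':
        return modal_depth(phi[1])
    if op in ('and', 'or'):
        return max(modal_depth(phi[1]), modal_depth(phi[2]))
    if op in ('box', 'dia'):          # (graded) modal operators add one level
        return 1 + modal_depth(phi[-1])
    raise ValueError(f"unknown connective {op!r}")

# Example: dia(p and box q) has modal depth 2.
phi = ('dia', ('and', ('atom', 'p'), ('box', ('atom', 'q'))))
print(modal_depth(phi))   # -> 2
```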
|
cs/0005009
|
The sequence of rule applications induces a sequence of trees. The depth and the out-degree of these trees are bounded in MATH by REF . For each variable MATH the label MATH is a subset of the finite set MATH. Each application of a rule either CASE: adds a constraint of the form MATH and hence adds an element to MATH, or CASE: adds fresh variables to MATH and hence adds additional nodes to the tree MATH. Since constraints are never deleted and variables are never identified, an infinite sequence of rule applications must either lead to an arbitrarily large number of nodes in the trees, which contradicts their boundedness, or lead to an infinite label of one of the nodes MATH, which contradicts MATH.
|
cs/0005009
|
MATH for any rule MATH implies MATH; hence each model of MATH is also a model of MATH. Consequently, we only need to show the other direction. CASE: Let MATH be a model of MATH and let MATH be the constraint that triggers the application of the MATH-rule. The constraint MATH implies MATH. This implies MATH for MATH. Hence MATH is also a model of MATH. CASE: Firstly, we consider the MATH-rule. Let MATH be a model of MATH and let MATH be the constraint that triggers the application of the MATH-rule. MATH implies MATH. This implies MATH or MATH. Without loss of generality we may assume MATH. The MATH-rule may choose MATH, which implies MATH, and hence MATH is a model for MATH. Secondly, we consider the MATH-rule. Again let MATH be a model of MATH and let MATH be the constraint that triggers the application of the MATH-rule. Since the MATH-rule is applicable, we have MATH. We claim that there is a MATH with MATH . Before we prove this claim, we show how it can be used to finish the proof. The world MATH is used to ``select" a choice of the MATH-rule that preserves satisfiability: Let MATH be an enumeration of the set MATH. We set MATH . Obviously, MATH is a model for MATH (since MATH is a fresh variable and MATH satisfies MATH), and MATH is a possible result of the application of the MATH-rule to MATH. We will now come back to the claim. It is obvious that there is a MATH with MATH and MATH that is not contained in MATH, because MATH. Yet MATH might appear as the image of an element MATH such that MATH but MATH. Now, MATH and MATH implies MATH. This is due to the fact that the constraint MATH must have been generated by an application of the MATH-rule, because it was not an element of the initial c.s. The application of this rule was suspended until neither the MATH- nor the MATH-rule is applicable to MATH. Hence, if MATH is an element of MATH now, then it was already in MATH when the MATH-rule that generated MATH was applied. The MATH-rule guarantees that either MATH or MATH is added to MATH. Hence MATH. This contradicts MATH because, under the assumption that MATH is a model of MATH, this would imply MATH, while we initially assumed MATH.
|
cs/0005009
|
The satisfiability of MATH implies that MATH is also satisfiable. By REF there is a sequence of applications of the optimised rules which preserves the satisfiability of the c.s. By REF any sequence of applications must be finite. No generated c.s. (including the last one) may contain a clash because this would make it unsatisfiable.
|
cs/0005009
|
Let MATH be a complete and clash-free c.s. generated by applications of the optimised rules. We will show that the canonical model MATH together with the identity function is a model for MATH. Since MATH was generated from MATH and the rules do not remove constraints from the c.s., MATH. Thus MATH is also a model for MATH with MATH. By construction of MATH, REF are trivially satisfied. It remains to show that MATH implies MATH, which we will show by induction on the norm MATH of a formula MATH. The norm MATH for formulae in NNF is inductively defined by: MATH . This definition is chosen such that it satisfies MATH for every formula MATH. CASE: The first base case is MATH for MATH. MATH implies MATH and hence MATH. The second base case is MATH. Since MATH is clash-free, this implies MATH and hence MATH. This implies MATH. CASE: MATH implies MATH. By induction, we have MATH and MATH holds and hence MATH. The case MATH can be handled analogously. CASE: MATH implies MATH because otherwise the MATH-rule would be applicable and MATH would not be complete. By induction, we have MATH for each MATH with MATH. Hence MATH and thus MATH. CASE: MATH implies MATH because MATH is clash-free. Hence it is sufficient to show that MATH holds. On the contrary, assume MATH holds. Then there is a variable MATH such that MATH and MATH while MATH. For each variable MATH with MATH either MATH or MATH. This implies MATH and, by the induction hypothesis, MATH holds which is a contradiction.
|
cs/0005009
|
Let MATH be the MATH-formula to be tested for satisfiability. We can assume MATH to be in NNF because the transformation of a formula to NNF can be performed in linear time and space. The key idea for the NAME implementation is the trace technique CITE, that is, it is sufficient to keep only a single path (a trace) of MATH in memory at a given stage, provided the c.s. is generated in a depth-first manner. This has already been the key to a NAME upper bound for MATH and MATH in CITE. To do this we need to store the values for MATH for each variable MATH in the path, each MATH which appears in MATH and each MATH. By storing these values in binary form, we are able to keep information about exponentially many successors in memory while storing only a single path at a given stage. Consider the algorithm in REF , where MATH denotes the set of relation names that appear in MATH. It re-uses the space needed to check the satisfiability of a successor MATH of MATH once the existence of a complete and clash-free ``subtree" for the constraints on MATH has been established. This is admissible since the optimised rules will never modify this subtree once it is completed. Neither do constraints in this subtree have an influence on the completeness or the existence of a clash in the rest of the tree, with the exception that constraints of the form MATH for MATH-successors MATH of MATH contribute to the value of MATH. These numbers play a role both in the definition of a clash and in the applicability of the MATH-rule. Hence, in order to re-use the space occupied by the subtree for MATH, it is necessary and sufficient to store these numbers. Let us examine the space usage of this algorithm. Let MATH. The algorithm is designed to keep only a single path of MATH in memory at a given stage. For each variable MATH on a path, constraints of the form MATH have to be stored for formulae MATH. The size of MATH is bounded by MATH and hence the constraints for a single variable can be stored in MATH bits. For each variable, there are at most MATH counters to be stored. The numbers to be stored in these counters do not exceed the out-degree of MATH, which, by REF , is bounded by MATH. Hence each counter can be stored using MATH bits when binary coding is used to represent the counters, and all counters for a single variable require MATH bits. Due to REF , the length of a path is limited by MATH, which yields an overall memory consumption of MATH.
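The trace technique itself can be seen in a much-simplified, self-contained form for plain modal logic K (no graded modalities, hence no counters; the tuple encoding of formulas is an assumption): the recursion below explores one successor at a time and discards its subtree when the call returns, so only the current path is ever held in memory.

```python
# Hedged sketch of the trace technique for K-satisfiability (NNF formulas as
# nested tuples); this is an illustration, not the paper's algorithm, which
# additionally maintains binary counters for graded modalities.
def k_sat(formulas):
    formulas = set(formulas)
    # Propositional saturation: conjunctions deterministically, disjunctions by branching.
    for f in list(formulas):
        if f[0] == 'and':
            return k_sat((formulas - {f}) | {f[1], f[2]})
        if f[0] == 'or':
            rest = formulas - {f}
            return k_sat(rest | {f[1]}) or k_sat(rest | {f[2]})
    # Clash test on literals.
    atoms = {f[1] for f in formulas if f[0] == 'atom'}
    negated = {f[1] for f in formulas if f[0] == 'natom'}
    if atoms & negated:
        return False
    # One successor per diamond; its subtree is checked and then forgotten,
    # so only the current path (the trace) occupies memory.
    boxed = {f[1] for f in formulas if f[0] == 'box'}
    return all(k_sat({f[1]} | boxed) for f in formulas if f[0] == 'dia')

# Example: <>p and []~p is unsatisfiable.
print(k_sat({('and', ('dia', ('atom', 'p')), ('box', ('natom', 'p')))}))   # -> False
```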
|
cs/0005009
|
The sequence of rule applications induces a sequence of trees. As before, the depth and out-degree of these trees are bounded in MATH by REF . For each variable MATH, MATH is a subset of the finite set MATH. Each application of a rule either CASE: adds a constraint of the form MATH and hence adds an element to MATH, or CASE: adds fresh variables to MATH and hence adds additional nodes to the tree MATH, or CASE: adds a constraint to a node MATH and deletes all subtrees rooted at successors of MATH. Assume that the algorithm does not terminate. Due to the mentioned facts this can only be because of an infinite number of deletions of subtrees. Each node can of course only be deleted once, but the successors of a single node may be deleted several times. The root of the completion tree cannot be deleted because it has no predecessor. Hence there are nodes which are never deleted. Choose one of these nodes MATH with maximum distance from the root, that is, which has a maximum number of ancestors in MATH. Suppose that MATH's successors are deleted only finitely many times. This cannot be the case because, after the last deletion of MATH's successors, the ``new" successors were never deleted and thus MATH would not have maximum distance from the root. Hence MATH triggers the deletion of its successors infinitely many times. However, the MATH-rule is the only rule that leads to a deletion, and it simultaneously leads to an increase of MATH, namely by the missing concept which caused the deletion of MATH's successors. This implies the existence of an infinite increasing chain of subsets of MATH, which is clearly impossible.
|
cs/0005009
|
Let MATH be a complete and clash-free c.s. obtained by a sequence of rule applications from MATH. We show that the canonical structure MATH is indeed a model of MATH, where the canonical structure for MATH is defined as in REF . Note that we need the condition ``MATH iff MATH" to make sure that all information from the c.s. is reflected in the canonical structure. By induction over the norm of formulae MATH as defined in the proof of REF, we show that, for a complete and clash-free c.s. MATH, MATH implies MATH. The only interesting cases are when MATH starts with a modal operator. CASE: MATH implies MATH because MATH is complete. Hence, there are MATH distinct variables MATH with MATH and MATH for each MATH and MATH. By induction, we have MATH and MATH and hence MATH. CASE: MATH implies, for any MATH and any MATH with MATH, MATH or MATH. For any predecessor of MATH, this is guaranteed by the MATH-rule; for any successor of MATH, it is guaranteed by the MATH-rule, which is suspended until no non-generating rules can be applied to MATH or any predecessor of MATH, together with the reset-restart mechanism that is triggered by constraints ``moving upwards" from a variable to its predecessor. We show that MATH: assume MATH. This implies the existence of some MATH with MATH for each MATH and MATH but MATH. This implies MATH, which, by induction, yields MATH, in contradiction to MATH. Since constraints for the initial variable MATH are never deleted from MATH, we have that MATH and hence MATH and MATH.
|
cs/0005009
|
Let MATH be a model for MATH and MATH the set of relations that occur in MATH together with their inverses. We use MATH to guide the application of the non-deterministic completion rules by incrementally defining a function MATH mapping variables from the c.s. to elements of MATH. The function MATH will always satisfy the following conditions: MATH . Claim: whenever MATH holds for a c.s. MATH and a function MATH, and a rule is applicable to MATH, then it can be applied in a way that maintains MATH. CASE: The MATH-rule: if MATH, then MATH. This implies MATH for MATH, and hence the rule can be applied without violating MATH. CASE: The MATH-rule: if MATH, then MATH. This implies MATH or MATH. Hence the MATH-rule can add a constraint MATH with MATH such that MATH still holds. CASE: The MATH-rule: obviously, either MATH or MATH for any variable MATH in MATH. Hence, the rule can always be applied in a way that maintains MATH. Deletion of nodes does not violate MATH. CASE: The MATH-rule: if MATH, then MATH. This implies MATH. We claim that there is an element MATH such that MATH . We will come back to this claim later. Let MATH be an enumeration of the set MATH . The MATH-rule can add the constraints MATH as well as MATH to MATH. If we set MATH, then the obtained c.s. together with MATH satisfies MATH. Why does there exist an element MATH that satisfies MATH? Let MATH be an arbitrary element with MATH and MATH that appears as an image of an arbitrary element MATH with MATH for some MATH. REF of MATH implies that MATH for any MATH, and also MATH must hold, as can be seen as follows: assume MATH. This implies MATH: either MATH; then, in order for the MATH-rule to be applicable, no non-generating rule, and in particular not the MATH-rule, is applicable to MATH or its ancestors, which implies MATH. If not MATH, then MATH must have been generated by an application of the MATH-rule to MATH. In order for this rule to be applicable, no non-generating rule may have been applicable to MATH or any of its ancestors. This implies that MATH already held at the time MATH was generated, and hence the MATH-rule ensures MATH. In any case MATH holds, and together with REF of MATH this implies MATH, which contradicts MATH. Together this implies that, whenever an element MATH with MATH and MATH is assigned to a variable MATH with MATH, then it must be assigned to a variable that contributes to MATH. Since the MATH-rule is applicable, there are fewer than MATH such variables and hence there must be an unassigned element MATH as required by MATH. This concludes the proof of the claim. The claim yields the lemma as follows: obviously, MATH holds for the initial c.s. MATH, if we set MATH for an element MATH with MATH (such an element must exist because MATH is a model for MATH). The claim implies that, whenever a rule is applicable, then it can be applied in a manner that maintains MATH. REF yields that each sequence of rule applications must terminate, and also each c.s. for which MATH holds is necessarily clash-free. It cannot contain a clash of the form MATH because this would imply MATH and MATH. Nor can it contain a clash of the form MATH and MATH, because MATH is an injective function on MATH and preserves all relations in MATH. Hence MATH implies MATH, which cannot be the case since MATH.
|
cs/0005009
|
Consider the algorithm in REF , where MATH denotes all intersections of relations that occur in MATH. Like the algorithm for MATH, it re-uses the space used to check for the existence of a complete and clash-free ``subtree" for each successor MATH of a variable MATH. Counter variables are used to keep track of the values MATH for all relevant MATH and MATH. This can be done in polynomial space. Resetting a node and restarting the generation of its successors is achieved by resetting all successor counters. Note how the predecessor of a node is taken into account when initialising the counter variables. Since the length of paths in a c.s. is polynomially bounded in MATH and all necessary book-keeping information can be stored in polynomial space, this proves the lemma.
|
cs/0005009
|
The DL MATH is a syntactic restriction of the DL MATH, which, in turn, is a syntactical variant of MATH. Hence, the MATH-algorithm can immediately be applied to MATH-concepts.
|
cs/0005010
|
Note that the deductive closure of the reduct MATH is a closure, and note that for every MATH that is a closure, the deductive closure of MATH is a subset of MATH.
|
cs/0005010
|
Note that the operator MATH is monotonic and that it has a fixed point MATH for any closure MATH. Hence, MATH for all MATH and therefore MATH. Since MATH, taken as a function of MATH, defines a closure, MATH. Thus, MATH follows.
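Since the argument turns on iterating a monotone operator up to its least fixed point, a self-contained sketch of that computation may help; the concrete operator below (an immediate-consequence-style operator of a tiny definite program) is a hypothetical example, not the paper's operator.

```python
# Hedged sketch: least fixed point of a monotone operator on finite sets of atoms,
# obtained by iteration from the empty set.
def least_fixed_point(op, bottom=frozenset()):
    current = bottom
    while True:
        nxt = op(current)
        if nxt == current:
            return current
        current = nxt

# Hypothetical example operator for the definite program:  a.   b :- a.   c :- b, d.
rules = [('a', []), ('b', ['a']), ('c', ['b', 'd'])]
def t(interpretation):
    return frozenset(head for head, body in rules
                     if all(atom in interpretation for atom in body))

print(sorted(least_fixed_point(t)))   # -> ['a', 'b']
```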
|
cs/0005010
|
Let MATH be a logic program that does not contain any choice rules and let MATH be two stable models of MATH. As MATH since MATH is anti-monotonic in its first argument, MATH.
|
cs/0005010
|
Let MATH be the atoms not covered by MATH. We prove the claim by induction on the size of MATH. Assume that the set MATH. Then, MATH covers MATH by REF and MATH returns true if and only if MATH returns false. By REF, CREF, and CREF, this happens precisely when there is a stable model of MATH agreeing with MATH. Assume MATH. If MATH returns true, then MATH returns false and, by REF and CREF, there is no stable model agreeing with MATH. On the other hand, if MATH returns false and MATH covers MATH, then MATH returns true and, by REF and CREF, there is a stable model that agrees with MATH. Otherwise, induction together with EREF and EREF shows that MATH or MATH returns true if and only if there is a stable model agreeing with MATH.
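The property being decided here, the existence of a stable model agreeing with a partial assignment, can be checked against the textbook definition by brute force; the program and encoding below are illustrative assumptions, and this naive enumeration is of course not the search procedure analysed in the proof.

```python
# Hedged sketch: enumerate stable models of a tiny ground normal program via the
# standard reduct construction (M is stable iff M equals the least model of the
# reduct of the program with respect to M). The example program is an assumption.
from itertools import combinations

# Rules are (head, positive_body, negative_body).
rules = [('a', [], ['b']),    # a :- not b.
         ('b', [], ['a'])]    # b :- not a.
atoms = sorted({'a', 'b'})

def least_model(definite_rules):
    model, changed = set(), True
    while changed:
        changed = False
        for head, body in definite_rules:
            if head not in model and all(x in model for x in body):
                model.add(head)
                changed = True
    return model

def is_stable(candidate):
    reduct = [(h, pos) for h, pos, neg in rules if not set(neg) & candidate]
    return least_model(reduct) == candidate

stable = [set(c) for r in range(len(atoms) + 1)
          for c in combinations(atoms, r) if is_stable(set(c))]
print(stable)   # -> [{'a'}, {'b'}]
```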
|
cs/0005010
|
Recall from REF that MATH is monotonic and that its least fixed point is equal to MATH. Hence, if MATH is a stable model of MATH, then for any MATH, MATH. Let the stable model MATH agree with the set MATH. As MATH, the first claim follows. Notice that MATH . If for all MATH, MATH, then for all MATH, MATH and consequently MATH. This proves the second claim. If MATH and there is only one MATH for which MATH, then MATH. If MATH for some literal MATH and if MATH agrees with MATH, then MATH, which is a contradiction. Thus, the third claim holds. Finally, if MATH and there is a literal MATH such that for some MATH, MATH, then by the first claim MATH, which is a contradiction. Therefore, the fourth claim is also true.
|
cs/0005010
|
Observe that the function MATH is monotonic and that the function MATH is anti-monotonic. Hence, MATH are monotonic with respect to MATH. Assume that there exists MATH such that MATH for only one MATH and MATH. If MATH and MATH, then MATH . Consequently, both MATH and therefore MATH . It follows that MATH. Thus, MATH is monotonic and has a least fixed point. Finally, notice that MATH has the same fixed points as MATH. By the definition of MATH, MATH. Thus, MATH implies MATH .
|