http://mathoverflow.net/questions/45089/non-algebraizable-formal-scheme/45090
## Non-algebraizable Formal Scheme?

What is an example of a formal scheme that is not algebraizable? Recall that, if $X$ is a locally noetherian scheme and $Z$ is a closed subset (of the underlying topological space), then one can form the formal completion of $X$ along $Z$, sometimes denoted $X_{/Z}$. This is a formal scheme whose underlying topological space is $Z$. What is a formal scheme that is not of this form?

Update: Emerton and Francesco Polizzi suggested several examples that arise in the study of deformations of varieties with trivial canonical bundle. It'd be nice to see some more elementary, explicit examples as well.

Update 2: In comments, Francesco Polizzi mentioned that further examples can be found in [Hironaka–Matsumura, "Formal functions and formal embeddings", J. Math. Soc. Japan 20, Theorem 5.3.3] and [Hartshorne, Ample subvarieties of algebraic varieties, p. 205].

This is too long to fit into comments:

@FP: Thanks! I'm not sure I quite follow the argument for non-algebraizability in the book. Sernesi states that, if $X \to \text{Spec}(\bar{A})$ is an algebraization, then $X$ would admit a non-trivial line bundle "since $X$ is of finite type over an integral scheme." Furthermore, he states that this line bundle can be chosen to "correspond to a Cartier divisor whose support does not contain $X_{s}$ [the special fiber] and has nonempty intersection with $X_{s}$." (Note: the notation $X$, $X_s$ is different in the text.) It is not clear to me why such a line bundle exists: $\mathbb{A}^n$ is a finite type scheme over an integral scheme that has no non-trivial line bundles. I understand how this shows that there is no algebraization by an $\bar{A}$-projective scheme, but why is there no algebraization by an arbitrary scheme?

I was a little nervous about the argument (Raynaud has an example of a family of abelian varieties over a nodal curve with non-projective total space), but my concern was needless. Here is one argument. Let $X_0/\mathbb{C}$ be an algebraic $K3$ surface. We assume algebraizability and derive a contradiction. The statement about existence of non-algebraic deformations is very strong: in fact, there exists a first-order deformation $f_1 \colon X_1 \to \text{Spec}(\mathbb{C}[t]/(t^2))$ with the property that the restriction to $X_0$ of any line bundle $L_1$ on $X_1$ is numerically trivial. We use this deformation to derive a contradiction. By definition, there exists a morphism $f_1 \colon \text{Spec}(\mathbb{C}[t]/(t^{2})) \to \text{Spec}(\mathbb{C}[[x_1, \dots, x_{20}]])$ with the property that the versal deformation restricts to $X_1$. Now factor this morphism as $\text{Spec}(\mathbb{C}[t]/(t^{2})) \to \text{Spec}(\mathbb{C}[[t]]) \to \text{Spec}(\mathbb{C}[[x_1, \dots, x_{20}]])$ (by lifting the images of $x_1, \dots, x_{20}$ under $f_1^{*}$). If $X_{t} \to \text{Spec}(\mathbb{C}[[t]])$ is the restriction of the versal deformation, then the generic fiber is an algebraic $K3$ surface, hence admits an ample line bundle. The total space $X_{t}$ of the family is regular, so it is possible to extend this line bundle to a line bundle $L_{t}$ on $X_{t}$. But then the restriction of $L_{t}$ to the special fiber is not numerically trivial (by flatness); however, no such line bundle can even lift to first order. Contradiction.
- Dear jlk, As a commentary on the example of Francesco Polizzi below: the general yoga, when one looks at formal deformations, is that the picture in formal geometry should be the same as the picture in complex analytic geometry: so the complex analytic K3s form a 20-dim'l space (of which the Specf$(\overline{A})$ in Francesco's answer is a formal neighbourhood around the point corresponding to his initially chosen $K3$ surface $X$), while the algebraic K3s lie in a collection of 19-dim'l subfamilies (so the algebraizable locus in Francesco's $\mathcal X$ is codimension 1; if one looks at the ... – Emerton Nov 6 2010 at 19:59
- The CY examples are obviously great and important, but here's a stupider one. Let $(R,m)$ be a DVR, and let $f(t)$ be a power series over $R$, convergent for the $m$-adic topology. Then $f$ defines maps $\mathbb{A}^1 \to \mathbb{A}^1$ over $R/m^i$ which are compatible. Hence, their graphs glue to give a formal closed subscheme of $\mathbb{A}^1 \times \mathbb{A}^1$ over $R$, and this formal subscheme is not algebraic. In fewer words, Chow's lemma fails horribly and easily for non-proper maps. – Bhargav Nov 6 2010 at 20:14
- @jlk: if $\mathcal{X}$ were algebraizable, it would be projective *over* $\textrm{Spec}(\bar{A})$ (since every algebraic $K3$ surface is projective). It follows that there exists a line bundle $\mathcal{L}$ which is $f$-ample etc etc... – Francesco Polizzi Nov 6 2010 at 21:09
- Dear jlk, In regard to your question "As an aside, ..." (which I hadn't noticed before now), I see that you have answered it in the most recent addition to your question. When one is in a context in which "algebraic" is more general than "projective", so that algebraicity can't be tested by deforming an ample line bundle, I'm not sure if there are other general principles one can apply to test for (non)-algebraicity. (None are coming to mind, but maybe someone else will have something to suggest. In fact, perhaps you could ask this as a separate question ... .) – Emerton Nov 7 2010 at 1:57
- Dear Bhargav & AByer: Bhargav's construction has projection to the first factor identifying his formal scheme with a suitable completion of the affine line, so it is identified with the $m_R$-adic completion of the affine line (and so doesn't give an example of the sort requested in the question). Meanwhile, AByer's construction seems to want to be the zero locus of $\prod_{n \ge 1} (y - x^n)$ in the $x$-adic completion of $k[x,y]$, but this makes no sense since the terms in the product don't tend $x$-adically to 1. So I am confused; what is meant by "union"? – BCnrd Nov 7 2010 at 19:39

## 3 Answers

I think the following should work. Let $X$ be a smooth, complex, projective $K3$ surface, and let $\bar{A}$ be the base of the formal semi-universal deformation of $X$. It is well known that $\bar{A}=\mathbb{C}[[X_1, \ldots, X_{20}]]$. Let $\mathcal{X} \to \textrm{Specf}(\bar{A})$ be the corresponding formal scheme. Then $\mathcal{X}$ is not algebraizable. Roughly speaking, the reason is that the general deformation of $X$ is a $K3$ surface which is *not* algebraic. For a complete proof, see [Sernesi, Deformations of algebraic schemes, Example 2.5.12].

EDIT. As is also remarked in Sernesi's book, this example shows that a smooth, complex, projective variety $X$ need not have an algebraic formally versal deformation, even if the functor $\textrm{Def}_X$ is prorepresentable and unobstructed.

- Yes, this is the first example that came to my mind! – Emerton Nov 6 2010 at 19:52
- I guess so. The universal deformation of an abelian variety of dimension $> 1$ would be another, I guess. – Emerton Nov 6 2010 at 19:56
- The question asked for non-algebraization as an abstract (locally noetherian) scheme, not equipped with auxiliary structure (such as a map to a specific affine scheme). By replacing $\mathbf{C}$ with $\mathbf{Q}$, perhaps an approximation argument can prove that if the above example admits algebraization as an abstract scheme then it also does as a proper flat scheme over the indicated base (hence a contradiction, as explained above). – BCnrd Nov 7 2010 at 13:54
- Dear Francesco: Do you see what such an approximation argument might be? I thought for a little bit and didn't see how to make it work. I have a vague recollection that Artin constructed examples of non-algebraizable formal singularities (perhaps over any alg. closed field of char. 0?), but I don't remember anything more about that. – BCnrd Nov 7 2010 at 19:41
- Dear Brian and jlk, I do not see how to make an approximation argument work either. About Artin's examples, I looked for them but I could not find the exact reference. However, during my search I met further examples of non-algebraizable formal schemes; maybe (if you do not know them already) you could find them interesting. The references are [Hironaka–Matsumura, "Formal functions and formal embeddings", J. Math. Soc. Japan 20, Theorem 5.3.3] and [Hartshorne, Ample subvarieties of algebraic varieties, p. 205]. Regards, Francesco – Francesco Polizzi Nov 8 2010 at 9:04

Bhargav's example is really an example of a non-algebraic formal subscheme of the affine plane. Such examples are ubiquitous in foliation theory: a differential equation and, more generally, a foliation on a (smooth) algebraic variety has local leaves which are smooth formal schemes. This follows from the formal Frobenius theorem (in positive characteristic, the foliation needs to have $p$-curvature zero). Sometimes these leaves are the formal completions of an algebraic subvariety, but often not. However, these leaves are isomorphic, as formal schemes, to the formal completion at the origin of an affine space; from the intrinsic point of view, they thus are algebraizable. The theorems of Hironaka, Matsumura, and Hartshorne to which Francesco Polizzi refers are in the same spirit, but concern formal subschemes along an algebraic subvariety. They don't apply to formal subschemes based at a point. Actually, Arakelov geometry allows one to establish analogues of these theorems and algebraize some formal subschemes based at a point (e.g. leaves of a foliation). See papers of Bost (Publ. Math. IHÉS, vol. 93, 2001), and of Bost and myself (Manin Festschrift, 2010).

- This is ACL's post. I just fixed a typo. – Harry Gindi Nov 9 2010 at 10:41
- @ACL: Interesting. Thank you for the response. – jlk Nov 9 2010 at 19:05

I find it more or less illusory to ask for non-algebraizable formal schemes which would not fall within the scope of deformation theory. Indeed, a formal scheme $\hat X$ over $\mathbb{C}[[t]]$, say, is nothing but a family of schemes $(X_n)$, where $X_n$ is a scheme over $\mathbb{C}[t]/(t^{n+1})$, together with isomorphisms of $X_n$ with $X_{n+1}\otimes \mathbb{C}[t]/(t^{n+1})$. On the other hand, I wonder whether classical examples of non-algebraic analytic spaces, or algebraic spaces, could be constructed in the category of formal schemes, but I have no precise answer to give.

- I was discussing this with jlk and Bhargav yesterday, and Bhargav pointed out to me that there are interesting formal schemes which are not deformations in this sense. Indeed, if $X$ is a curve over $k$, $L$ is a line bundle on $X$ of positive degree, and $\hat{X}$ is the formal neighborhood of $X$ in the total space of $L$, then $\hat{X}$ is a formal scheme which does not have any map to $\mathrm{Spec}\, k[[t]]$ for which the preimage of the closed point is $X$. – David Speyer Nov 9 2010 at 14:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 79, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9380967617034912, "perplexity_flag": "head"}
http://mathoverflow.net/questions/65731/high-multiplicity-eigenvalue-implies-symmetry
## High multiplicity eigenvalue implies symmetry?

It is well known that on any compact Riemannian symmetric space $X$, the eigenvalues of the Laplacian have very high multiplicity (comparable with the Weyl bound), and the resulting actions $\operatorname{Isom}(X)\to \operatorname{SO}(W_\lambda)$ for eigenspaces $W_\lambda$ give many representations of the Lie group $\operatorname{Isom}(X)$.

Suppose one has an unknown compact Riemannian manifold $X$ ($n=\dim X$), but where the eigenspaces of the Laplacian have large dimension (I don't have a precise definition of "large" here; the weakest definition would probably be something like $\dim W_\lambda>1$ for infinitely many $\lambda$. I'd be happy even with a much stronger assumption, say $\dim W_\lambda$ is at least $\epsilon$ times the Weyl bound $\operatorname{const}\cdot\lambda^{(n-1)/2}$ infinitely often). Can one conclude that $X$ is a symmetric space, or close to one in some sense?

EDIT: I would be interested in any result which takes as a hypothesis some assumption of large multiplicity in the Laplace spectrum, and whose conclusion is some sort of symmetry of the underlying manifold.

- Wouldn't even a nontrivial $S^1$-action on $X$ imply an infinite number of multiple eigenvalues? – Tom Goodwillie May 23 2011 at 4:09
- My guess is that: No. There are inverse spectral results, so that any sequence of eigenvalues corresponds to a space. Just figure out a sequence that does not arise as the eigenvalues of a symmetric space and you are done. – Helge May 23 2011 at 5:47
- @Tom Goodwillie, Yes, I guess the multiplicity has to be on the large end for anything like this to be true. I would be interested in any result which takes some multiplicity assumptions as hypotheses and whose conclusion is some sort of symmetry of the manifold. – unknown (google) May 23 2011 at 14:53
- The multiplicities, even for a compact symmetric space, need not be all that high. For example, consider the symmetric space $X = \mathbb{R}^n/\Lambda$, where $\Lambda\subset\mathbb{R}^n$ is a generic lattice. For nearly all lattices, the multiplicities of the eigenvalues of the Laplacian will be at most $2$, since the only other vector in $\Lambda$ that has the same length as any given $v\in\Lambda$ will be $-v$. I think you probably meant to consider only Riemannian symmetric spaces of compact type. – Robert Bryant May 25 2011 at 14:11
- This is not exactly the same thing, but there is a paper of Sogge and Zelditch at ams.org/mathscinet-getitem?mr=1924569 which links maximal eigenfunction growth (in the sense that the sup norm of $L^2$-normalised eigenfunctions is as large as possible given the energy level) with being a Zoll manifold (or, more generally, having a positive measure set of closed geodesics of a given length). – Terry Tao Mar 17 2012 at 21:30

## 4 Answers

One situation where people have thought hard about this issue is the multiplicity of the spectrum of the Laplacian on the modular surface. This is a notoriously difficult problem. Conjecturally, the (discrete) spectrum is simple, but as far as I know, the best known bounds on the multiplicity of a Maass cusp form of eigenvalue $\lambda$ are somewhere in the neighborhood of $O(\lambda^{1/2}/\log \lambda)$. (Weyl's law in dimension $2$ says that the number of eigenvalues of size at most $N$ is $O(N)$.) See e.g. section 4 of the following survey of Peter Sarnak: http://www.math.princeton.edu/sarnak/baltimore.pdf So I would say that some of the stronger versions of your conjecture are almost certainly out of reach.

- Well, the funny thing is that a result of this sort DOES hold for the length spectrum: Takeuchi (I believe) showed that a surface has a linear number of traces smaller than $N$ if and only if it is arithmetic, which is morally a closely related result. – Igor Rivin Mar 18 2012 at 2:54
- @Igor: The most natural way to estimate the multiplicity of the eigenvalues is to use the trace formula, which relates it to a sum over the closed geodesics (i.e. the length spectrum). But the problem is that you get a sum over exponentially many geodesics, and you end up trying to add up $e^\lambda$ terms of size $O(1)$, while the actual answer is also $O(1)$. This is why it is so hard. – Alex Eskin Mar 18 2012 at 3:32

My guess is No. You do not need a one-parameter Lie group of symmetry to have infinitely many double eigenvalues. Just one involution suffices. And one involution is not enough to make a symmetric space. For instance, every complex curve whose equation is real has this property; then the involution is complex conjugation. For most such curves, there is no other non-trivial involution.

I might be totally confused, but in this paper of Hubert Goldschmidt (Infinitesimal isospectral deformations of symmetric spaces), he seems to construct the very deformations of the title. It is not obvious that these can be integrated, but if they can be, this gives you a family of manifolds isospectral to the symmetric spaces and not symmetric, which would provide a negative answer to the OP (note that if you read Goldschmidt's paper, you will see that various spaces are shown NOT to be so deformable).

My answer is rather a question. Suppose that $(M,g)$ is a compact Riemannian manifold and for any positive integer $N$ there exists an eigenvalue of the Laplacian that has multiplicity $>N$. Can one conclude that the group of isometries of $(M,g)$ has positive dimension?

- The Poincaré homology sphere $P$ is a counterexample, right? It's the quotient of $S^3$ by a finite group $G=2.A_5$ acting freely by isometries. Any automorphism of $P$ lifts to its universal cover $S^3$, and is thus a coset of $2.A_5$ in the normalizer of $2.A_5$ in ${\rm Aut}(S^3) = O_4({\bf R})$; but this normalizer is $2.A_5$ itself, so ${\rm Aut}(P)$ is trivial. But for even $d$ the space $H_d$ of harmonic polynomials of degree $d$ has a $2.A_5$-invariant subspace that is an eigenspace for the Laplacian on $P$ of dimension asymptotic to $\dim(H_d)/60 \rightarrow \infty$ with $d$. – Noam D. Elkies Mar 17 2012 at 22:50
- The Poincaré sphere is also a Seifert manifold. I have to admit that I am a bit confused about how the Seifert structure fits in the above picture. – Liviu Nicolaescu Mar 18 2012 at 0:32
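As a concrete illustration of Robert Bryant's comment about flat tori (my addition, not from the thread): on the square torus $\mathbb{R}^2/\mathbb{Z}^2$ the multiplicity of the Laplace eigenvalue $4\pi^2 n$ is the number of integer vectors $v$ with $|v|^2=n$, which is frequently much larger than the multiplicity $2$ one gets from a generic lattice. A short sketch in Python:

```python
from collections import Counter

# Eigenfunctions on R^2 / Z^2 are exp(2*pi*i*<v, x>) for v in Z^2, with
# eigenvalue 4*pi^2*|v|^2, so the multiplicity of 4*pi^2*n is the number of
# integer vectors (a, b) with a^2 + b^2 = n.
B = 30  # search box; the counts below are exact for n <= B^2
counts = Counter(a * a + b * b for a in range(-B, B + 1) for b in range(-B, B + 1))

for n in [1, 2, 5, 25, 325]:
    print(n, counts[n])
# multiplicities 4, 4, 8, 12, 24 -- well above the generic bound of 2 that
# holds for a lattice with no special arithmetic.
```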
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 47, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9344762563705444, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/183117/extension-of-cheegers-inequality-with-distinguished-vertices
Extension of Cheeger's inequality with distinguished vertices

The standard Cheeger inequality for a graph $G$ states that $\frac{1}{2}\lambda \le \phi(G) \le \sqrt{2\lambda}$, where $\lambda$ is the second smallest eigenvalue of the normalized Laplacian of $G$, and $\phi(G)$ is the conductance of $G$. Now suppose we are also given two special vertices $s$ and $t$, and would like to consider only cuts that separate $s$ and $t$. Can we have a similar result relating the conductance in this special case and some Rayleigh coefficient of the normalized Laplacian of the graph?
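Not part of the question, but as a quick numerical sanity check of the standard inequality (my addition; the example graph is an arbitrary choice), one can compute $\lambda$ and $\phi(G)$ by brute force on a small graph:

```python
import itertools
import numpy as np

# Two triangles {0,1,2} and {3,4,5} joined by the single edge (2,3).
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
n = 6
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1

deg = A.sum(axis=1)
D_inv_sqrt = np.diag(1 / np.sqrt(deg))
L = np.eye(n) - D_inv_sqrt @ A @ D_inv_sqrt          # normalized Laplacian
lam = np.sort(np.linalg.eigvalsh(L))[1]              # second smallest eigenvalue

def conductance(S):
    S = set(S)
    cut = sum(1 for u, v in edges if (u in S) != (v in S))
    vol = sum(deg[v] for v in S)
    return cut / min(vol, deg.sum() - vol)

phi = min(conductance(S)
          for k in range(1, n)
          for S in itertools.combinations(range(n), k))

print(f"lambda = {lam:.4f}, phi(G) = {phi:.4f}")
print("Cheeger bounds hold:", lam / 2 <= phi <= np.sqrt(2 * lam))
```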
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9368805885314941, "perplexity_flag": "head"}
http://mathhelpforum.com/number-theory/55041-partitions-n-into-non-negative-powers-2-a.html
# Thread:

1. ## Partitions of n into non-negative powers of 2.

10. (Solved) Let $b(n)$ denote the number of partitions of $n$ into non-negative powers of $2$. Prove that:
$$\sum^\infty_{n=0}b(n)q^n=\prod^\infty_{n=0}(1-q^{2^n})^{-1}$$
OK, as I said, I have solved that question.

10. Using question 10, prove the following three identities:
a. $b(2n+1)=b(2n)$
b. $b(2n)=b(2n-1)+b(n)$
c. $b(n)\equiv 0 \pmod 2$ for $n>1$

I am struggling with this. I think identity a is fairly obvious because for each partition of $2n$ we create a partition of $2n+1$ by adding the only odd part that is available ($2^0$). I think identity c follows from identities a and b by induction. However, I have no idea how to tackle identity b.

2. Your ideas for parts a) and c) are correct. For part b), let $2n=\sum 2^{m_j}$. If all $m_j$ are $> 0$, then $2n=2 \sum 2^{m_j -1}$, and $\sum 2^{m_j - 1}$ is a partition of $n$. If at least one of the $m_j$ is $0$, say $m_1=0$, then $2n=1 + \sum_{j \neq 1} 2^{m_j}$ and $\sum_{j \neq 1} 2^{m_j}$ is a partition of $2n - 1$, and you're done. This also tells us how to find the partitions of $2n$ when the partitions of $n$ and $2n - 1$ are given. For example, for $n = 3$: partitions of 3: 1 + 1 + 1, 1 + 2. Partitions of 5: 1 + 2 + 2, 1 + 4, 1 + 1 + 1 + 2, 1 + 1 + 1 + 1 + 1. Now, to find the partitions of $2n = 6$, we multiply every partition of $n = 3$ by 2, and add 1 unit to every partition of $2n - 1 = 5$, to get all 6 partitions of 6: 2 + 2 + 2, 2 + 4, 1 + 1 + 2 + 2, 1 + 1 + 4, 1 + 1 + 1 + 1 + 2, 1 + 1 + 1 + 1 + 1 + 1.

3. b) We have:
$$\sum\limits_n b(n) \cdot x^{2n} = \prod\limits_n \left(\tfrac{1}{1 - x^{2^{n + 1}}}\right) = (1 - x) \cdot \prod\limits_n \left(\tfrac{1}{1 - x^{2^n}}\right) = (1 - x) \cdot \sum\limits_n b(n) \cdot x^n,$$
i.e.
$$\sum\limits_n b(n) \cdot x^{2n} = \sum\limits_n \left[b(n) - b(n - 1)\right] \cdot x^n.$$
Thus
$$b(n) - b(n - 1) = \begin{cases} 0 & \text{if } n \text{ is odd,} \\ b\left(\tfrac{n}{2}\right) & \text{if } n \text{ is even,} \end{cases}$$
which proves both (a) and (b).
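As a numerical cross-check of the identities above (my addition, not from the thread), here is a small sketch that computes $b(n)$ from the recurrences and compares against a brute-force count:

```python
from functools import lru_cache

# Binary-partition counts b(n) via the thread's recurrences:
#   b(0) = 1, b(2n+1) = b(2n), b(2n) = b(2n-1) + b(n).
@lru_cache(maxsize=None)
def b(n):
    if n == 0:
        return 1
    if n % 2 == 1:
        return b(n - 1)
    return b(n - 1) + b(n // 2)

# Independent brute-force count of partitions of n into powers of 2.
def b_brute(n, largest=None):
    if largest is None:
        largest = 1
        while largest * 2 <= n:
            largest *= 2
    if n == 0:
        return 1
    if largest == 1:
        return 1
    return sum(b_brute(n - k * largest, largest // 2) for k in range(n // largest + 1))

assert all(b(n) == b_brute(n) for n in range(1, 200))
assert all(b(n) % 2 == 0 for n in range(2, 200))   # identity (c)
print([b(n) for n in range(11)])   # [1, 1, 2, 2, 4, 4, 6, 6, 10, 10, 14]
```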
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9128924012184143, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/29887/is-a-vector-space-only-defined-for-one-field
# Is a vector space only defined for one field?

Suppose a vector space V is defined on a field F. Does this at all imply that V is also defined on all fields, or does it only dictate that V is defined on F (and could also work with other fields if proven)? I realize it's sort of silly to assume anything in math, but my confusion comes from examples of vector spaces that I've seen, such as n-tuples of a field with coordinate-wise addition and scalar multiplication holding for any arbitrary field. Thanks a lot.

- A finite vector space (finite dimensional over a finite field) cannot be a vector space over, say, $\mathbb{R}$. – yoyo Mar 30 '11 at 15:16
- Ah, that makes sense. Thanks. – Nick Van Hoogenstyn Mar 31 '11 at 8:03

## 2 Answers

A priori, if we have an abelian group $V$ (the abelian group structure provides the addition of a vector space), and we give it the structure of a vector space over a field $F$, then we only know how to make $V$ a vector space over $F$, and over any subfield of $F$. This is because when we give $V$ the structure of a vector space over $F$, the information we have specified is how to multiply elements of $V$ by elements of $F$. If $L\subset F$ is a subfield of $F$, then we already know how to define multiplication of elements of $V$ by elements of $L$: elements of $L$ are also elements of $F$, and we just use our definition for them!

For example, the collection of ordered pairs of complex numbers, $V=\mathbb{C}^2$, is an abelian group under the usual addition $$(\alpha_1,\alpha_2)+(\beta_1,\beta_2)=(\alpha_1+\beta_1,\alpha_2+\beta_2) \text{ for all }(\alpha_1,\alpha_2),(\beta_1,\beta_2)\in V .$$ It can be given the structure of a vector space over $\mathbb{C}$ by defining $$\lambda(\alpha_1,\alpha_2)=(\lambda\alpha_1,\lambda\alpha_2)\text{ for all }\lambda\in\mathbb{C},\,\,(\alpha_1,\alpha_2)\in V.$$ But, now that we've done that, it is also a vector space over $\mathbb{R}$, which is a subfield of $\mathbb{C}$ - we know how to multiply elements of $V$ by real numbers because we have already specified how to multiply by complex numbers.

However, the abelian group $V$ cannot be given the structure of a vector space over $\mathbb{Z}/p\mathbb{Z}$ where $p$ is a prime number, which is a field that is not a subfield of $\mathbb{C}$. This is because we would have to have $$p\cdot (\alpha_1,\alpha_2)=(p\alpha_1,p\alpha_2)=0$$ for any $(\alpha_1,\alpha_2)\in V$, which is false.

Finally, I would point out that even if $L$ is not a subfield of $F$, that doesn't prevent $V$ from also being able to be given the structure of a vector space over $L$. In our example of $V=\mathbb{C}^2$, suppose we had originally specified that $V$ was to be considered as a vector space over $\mathbb{R}$. That is, suppose we had said, "Here is our abelian group $V=\mathbb{C}^2$, and we make it into a vector space over $\mathbb{R}$ by defining $$c(\alpha_1,\alpha_2)=(c\alpha_1,c\alpha_2)\text{ for all }c\in\mathbb{R},\,\,(\alpha_1,\alpha_2)\in V.$$" This wouldn't change the fact that it can also be given the structure of a vector space over $\mathbb{C}$, in a way that agrees with the original structure over $\mathbb{R}$, even though $\mathbb{C}$ is a larger field than $\mathbb{R}$.

- I didn't totally follow the example you gave of a field for which the vector space did NOT hold, but I get what you're trying to say and it answered my question. Thanks! – Nick Van Hoogenstyn Mar 30 '11 at 8:37
- I don't understand what it means to say that "$V$ is not a vector space over $\mathbb{Z}/p\mathbb{Z}$": I would say that $V$ cannot naturally be given the structure of a vector space over $\mathbb{Z}/p\mathbb{Z}$. – Qiaochu Yuan Mar 30 '11 at 14:48
- @Qiaochu - well, I would agree that the set $V$ can non-naturally be made into a $\mathbb{Z}/p\mathbb{Z}$-vector space, but what I was saying was that the abelian group $V$ is torsion-free - doesn't that mean it's impossible to give it a $\mathbb{Z}/p\mathbb{Z}$-vector space structure? – Zev Chonoles♦ Mar 30 '11 at 17:37
- Sure, but I didn't see you say "this abelian group" or "this set." In either case I am somewhat uncomfortable with the use of the word "is" in such situations. It hides a lot of subtleties. – Qiaochu Yuan Mar 30 '11 at 17:43
- @Qiaochu: Fair point; I was glossing over this point given the level of the OP, but I'll edit. – Zev Chonoles♦ Mar 30 '11 at 18:01

Many of the constructions of vector spaces do not depend on the specific field. Category theory can help to explain why this is, since the constructions are really category-theoretic constructions which don't use any property of the field. However, some constructions do use properties of the field, and there isn't any obvious analogue over other fields. For example, smooth functions $\mathbb{R} \to \mathbb{R}$ form a real vector space, and there isn't a natural analogue over the field with $2$ elements.

If you have fields $K \subset L$ (such as $\mathbb{R} \subset \mathbb{C}$) then for any $K$-vector space $V$, there is a natural way to construct an $L$-vector space, $V \otimes_K L$. This is generated by elements of the form $v \otimes \ell$ where $v\in V$ and $\ell \in L$, with rules including $v_1 \otimes \ell + v_2 \otimes \ell = (v_1+v_2)\otimes \ell$, $v \otimes \ell_1 + v\otimes \ell_2 = v\otimes (\ell_1+\ell_2)$, and $k v \otimes \ell = v \otimes k \ell$. Tensor products over $K$ of $K$-vector spaces are always $K$-vector spaces, but these also have the structure of an $L$-vector space because you can multiply the second coordinate by a scalar from $L$. You can identify $V$ with the set of elements of the form $v \otimes 1$. Also, if you have fields $K \subset L$, then any $L$-vector space is also a $K$-vector space.

- I didn't give this answer because the OP specified that he wanted to know what fields $V$ was defined over, not ways of making new vector spaces from $V$ that would be defined over other fields. But nonetheless it is certainly worth mentioning. – Zev Chonoles♦ Mar 30 '11 at 6:46
- And I also know nothing of category theory! :) But I'll take your word that this is valid. – Nick Van Hoogenstyn Mar 30 '11 at 7:51
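The first answer's $\mathbb{C}^2$-over-$\mathbb{R}$ example can be made concrete numerically. The following sketch (my addition, not from the thread) shows "restriction of scalars": viewing $\mathbb{C}^2$ as a 4-dimensional real vector space, multiplication by a fixed complex scalar becomes a real-linear $4\times 4$ matrix.

```python
import numpy as np

# C^2 is a 2-dimensional C-vector space, but the same abelian group is a
# 4-dimensional R-vector space with basis (1,0), (i,0), (0,1), (0,i).
def realify(v):
    """Flatten a vector in C^2 to its coordinates in R^4."""
    return np.array([v[0].real, v[0].imag, v[1].real, v[1].imag])

def real_matrix_of_scalar(lam):
    """Multiplication by the complex scalar lam, written as a 4x4 real matrix."""
    a, b = lam.real, lam.imag
    block = np.array([[a, -b], [b, a]])   # multiplication by a+bi on R^2 ~ C
    return np.block([[block, np.zeros((2, 2))], [np.zeros((2, 2)), block]])

v = np.array([1 + 2j, 3 - 1j])
lam = 2 - 1j
# Complex scalar multiplication in C^2 agrees with the real 4x4 matrix on R^4.
assert np.allclose(realify(lam * v), real_matrix_of_scalar(lam) @ realify(v))
print(real_matrix_of_scalar(lam))
```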
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 73, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9570497274398804, "perplexity_flag": "head"}
http://crypto.stackexchange.com/tags/notation/hot
# Tag Info

## Hot answers tagged notation

### What exactly is a negligible (and non-negligible) function?

In perfectly secret schemes like the one-time pad, the probability of success does not improve with greater computational power. However, in modern cryptographic schemes, we generally do not try to achieve perfect secrecy (yes, governments may use the one-time pad, but this is generally not practical for the average user). In fact, given unbounded ...

### Why does key generation take an input $1^k$, and how do I represent it in practice?

The $1^k$ is a formalism that's only there to make the theoreticians happy. You can safely ignore it. When you actually implement the cryptosystem, you don't try to pass the string $1^k$; instead, you pass $k$, the security parameter (a representation of how much cryptographic strength is desired from the key generation algorithm). I wish I could leave it ...

### ECIES protocol - what does the || operation mean?

In terms of algorithms, it usually means concatenate - as in, join two binary (or otherwise defined) strings together (in this order). For Wikipedia's cryptography articles that's nearly always the case. When it is on the left side of an assignment, this means split the string from the right side into these component strings. This only makes sense if the ...

### What do $0^n$ and $1^n$ mean in cryptography?

Without seeing the entire formal construction: It seems like they wanted different strings. Meaning they needed $f_x(a)||f_x(b)$ where $a≠b$. The easiest way to express this is using the all-$0$ and all-$1$ strings, but any other pair of distinct strings of that length would yield the same effect. As to why they wanted this: They're using a PRF twice to ...

### Why is a non fixed-length encryption scheme worse than a fixed-length one?

The encryption scheme in the experiment you describe does not have to be fixed-length. We simply require that the two messages the adversary sends to its oracle have the same length. The restriction is on the adversary, not on the encryption algorithm. So why do we put this requirement on the adversary? The reason is that in every practical encryption ...

### Why is a non fixed-length encryption scheme worse than a fixed-length one?

You had your finger on it: you do know something about the encryption of two messages of different length before they are actually encrypted, namely the length of the corresponding ciphertexts. If the setting in which you're using your encryption scheme allows for a maximum message length then you can always pad to make every ciphertext the same size ...

### What does the expression $1^n$ mean as a function argument?

If you got an expression that resembles $\{1\}^n$ (or $1^n$) at a place in a surrounding expression where you would expect an $n$-bit bit string to be, the $\{1\}^n$ expression means a string of $n$ bits, each with the bit value $1$. Conversely, $\{0\}^n$ means a string of $n$ zero-valued bits, and $\{0,1\}^n$ just means any bit string of length $n$. In ...

### What does $(\mathbb{Z}_n^*)^2$ mean?

$\mathbb Z_n^*$ is a mathematical notation for the multiplicative group of integers modulo $n$. In other words, it is the set of integers that are relatively prime to $n$, all taken modulo $n$ (excluding zero). The $*$ symbol is commonly used for denoting "the set of elements in the multiplicative group", which in this case means "the set of elements that ...

### What do $0^n$ and $1^n$ mean in cryptography?

$0^n$ means a string of $n$ zeros (the $n$-bit string that is all zeros). $1^n$ means a string of $n$ ones. Why were these used? There's nothing special about $0^n$ or $1^n$, in this context. They could have used any pair of two constant $n$-bit strings, as long as the two strings were not the same. $0^n$ and $1^n$ is a convenient choice of two strings ...

### What exactly is a negligible (and non-negligible) function?

Very good explanation. I would like to add that you will see negligible functions also in other proofs. One example is pseudorandom strings. If an attacker looks at a string, he should only be able to decide if this string is pseudo-random or "real random" with probability (distribution) 1/2 + negl(n). He can always toss a coin (that gives him probability ...
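A small illustrative sketch of three of the notations discussed above (my addition, not from the answers): the constant strings $0^n$ and $1^n$, concatenation ($||$), and how a negligible function such as $2^{-k}$ compares with an inverse polynomial.

```python
# 0^n, 1^n, and || made concrete; the parameter choices below are arbitrary.
n = 16

zeros = "0" * n               # the string 0^n
ones = "1" * n                # the string 1^n
concatenated = zeros + ones   # "||" is just concatenation of the two strings

print(concatenated)

for k in (10, 20, 40, 80):
    negligible = 2.0 ** -k        # negligible: eventually below 1/k^c for every constant c
    inverse_poly = 1.0 / k ** 3   # non-negligible: an inverse polynomial
    print(k, negligible, inverse_poly)
```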
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 39, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9255374073982239, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/6325?sort=oldest
## Equations for Integrable Systems

So, let's say we have a symplectic variety over $\mathbb{C}$, $M$, of dimension $2n$, and $f_1,\ldots,f_n$ Poisson-commuting functions with $df_1\wedge\ldots\wedge df_n$ generically nonzero. Further assume that the fibers of the map $f:M\to\mathbb{C}^n$ determined by the $f_i$ are open subsets of abelian varieties and the vector fields $X_{f_i}$ are linear. We call such a thing an algebraically completely integrable Hamiltonian system.

Now, I'm told that there's a definition of integrable system in PDEs that acts as some sort of stability condition, though I don't understand it. I've also been told that there's a way to construct, from an integrable system of PDEs, an integrable Hamiltonian system (just drop the condition that fibers be in abelian varieties), and that these two types of objects should be equivalent.

My questions:

1) What's the correct formulation for PDEs to make something like this work out? (I know virtually nothing about PDEs, and would be quite grateful just to be pointed at a good reference, if there's too much I need to read to get a quick, understandable answer.)

2) Is there a general method of going from an algebraically completely integrable Hamiltonian system, which is algebro-geometric in nature, to working out the PDEs explicitly? Does it help if the symplectic variety is known to be the cotangent bundle of something? How about if the base is unirational? rational?

## 3 Answers

I'm no expert on this, but since nobody answers: one book which should definitely help is this big introductory one by Babelon, Bernard and Talon. There's also a very technical older paper by Ben-Zvi and Frenkel where apparently some sort of general construction is made, and there's a more readable paper by Inoue, Vanhaecke and Yamazaki on algebraic complete integrability and integrable hierarchies of PDEs intended for your second question (especially section 6 for the relationship). Just glancing at all this I'm not so sure a general explicit method to obtain the PDEs exists (I could be wrong) but some cases seem to be understood. Hope this helps...

Many definitions for "integrable" equations exist; your current definition is quite limiting. I would say what you are describing are a subset of Hamiltonian systems. When most people say "integrable" in PDEs and/or dynamical systems, they usually mean equations for which the Inverse Scattering Transform can be used to construct an analytic solution, which is a much larger class of problems than Hamiltonian systems.

- The question is intentionally restrictive. I'm only actually interested in a very special class of systems (in fact, algebraic ones, which are cotangent bundles of unirational varieties) and was hoping that restricting would be more likely to get me the result I'm looking for. – Charles Siegel Nov 23 2009 at 20:01

Regarding 2), I'm not an expert in algebraic integrable systems, but shouldn't it be the other way around? That is, given an integrable evolutionary system of PDEs which is further assumed to be Hamiltonian, you can consider the fixed points of (linear combinations of) the associated higher flows, and then the equations describing these fixed points (these equations are also referred to as the (higher) stationary equations) are Hamiltonian and integrable themselves. As for a reference on the things just described, you can try this book and references therein.
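As a standard worked illustration of the passage described in the last answer (my addition, not from the thread, and using the simplest possible reduction of KdV rather than a genuinely higher flow): the traveling-wave reduction of KdV is a one-degree-of-freedom Hamiltonian system whose level sets are affine elliptic curves, matching the abelian-variety fibers in the question. Here $c$ is the wave speed and $a$, $E$ are integration constants.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Traveling-wave (simplest stationary) reduction of KdV, $u_t = 6uu_x - u_{xxx}$.
Substituting $u(x,t)=v(\xi)$ with $\xi=x-ct$ into $u_t = 6uu_x - u_{xxx}$ gives
$-cv' = 6vv' - v'''$, and one integration yields
\[
  v'' = 3v^2 + cv + a .
\]
This is Hamilton's equation for one degree of freedom,
\[
  H(v,p) = \tfrac12 p^2 - v^3 - \tfrac{c}{2}v^2 - av, \qquad p = v',
\]
and each level set $\{H=E\}$, i.e.\ $p^2 = 2v^3 + cv^2 + 2av + 2E$, is an affine piece
of an elliptic curve, with $v$ expressible via the Weierstrass $\wp$-function.
\end{document}
```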
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9507361650466919, "perplexity_flag": "head"}
http://mathoverflow.net/questions/72904?sort=newest
## Can a Vitali set be Lebesgue measurable? (ZF)

Here is the definition of Lebesgue measure. The standard proof that Vitali sets are not Lebesgue measurable uses countable additivity of Lebesgue measure, which is not a theorem of ZF. (In particular, it is consistent that the real line is a countable union of countable sets, and thus a countable union of measure zero sets.) Since ZF does prove that Lebesgue measure is super-additive, that proof can be easily adapted to show in ZF that if a Vitali set is measurable, then its measure is zero. By the Caratheodory construction, this is equivalent to having outer measure zero.

Does ZF prove that all Vitali sets have positive outer measure? If no, does ZF prove "if there exists a Vitali set, then there exists a Vitali set with positive outer measure"?

- People seem to be answering a different question than what was asked... – Harry Altman Aug 15 2011 at 11:56
- Now when you say measurable, in ZF+$\lnot$Countable choice for $\Bbb R$: do you mean measures by Borel codes, or the usual Borel measure without the sigma-additivity? Also, what is the outer measure in this case? – Asaf Karagila Mar 3 at 20:25
- I mean Caratheodory measurable w.r.t. Lebesgue outer measure. – Ricky Demer Mar 3 at 20:59
- Yes, but suppose that the real numbers are a countable union of countable sets; requiring countable subadditivity of the outer measure immediately makes all sets measure zero. – Asaf Karagila Mar 4 at 16:24

## 2 Answers

Sigma additivity is not the crucial issue here. In $\mathbb{R}^3$, for example, the Banach–Tarski decomposition can divide the sphere up into five pieces (four non-measurable parts, and one point), and via translations and rotations only, create two spheres the same size as the original. This instance involves only finite additivity. Sigma additivity is required for $\mathbb{R}^1$ and $\mathbb{R}^2$, but that is not the essential issue of non-measurability.

- It is an essential issue of defining the measure, thus an essential issue of non-measurability. – Asaf Karagila Mar 4 at 20:04
- Are you saying that your argument only uses a finite redistribution? – François G. Dorais♦ Mar 4 at 21:05

The Vitali set can never be "made measurable", primarily due to translation invariance. As you recall, the Vitali construction yields a countable collection of sets whose union is $[0,1)$. However, through simple translations of these same sets, they can be redistributed so that their union is now $[0,2)$, or $[0,n)$, or any rational length. Since the union of these sets can have differing measures, the measure of these sets must therefore remain undefined.

- This only applies if the measure is sigma-additive, which cannot be proven in the complete absence of any form of choice. You need at least countable choice or something like that, I think. – Johannes Hahn Mar 3 at 17:02
- @Johannes: Do you mean, "which you think cannot be proven in ..."? If you have a proof that that cannot be proven, then I'd like a link or sketch. – Ricky Demer Mar 3 at 21:02
- There is a model of ZF (without choice) in which the real line is the union of countably many countable sets. This is Theorem 10.6 in Jech's book "The Axiom of Choice"; the result is due to Feferman and Lévy, very shortly after Cohen's proof of the independence of AC from ZF. Of course in such a model Lebesgue measure cannot be countably additive. – Andreas Blass Mar 3 at 21:43
- Oh yeah, I'd completely forgotten that one. – Ricky Demer Mar 3 at 22:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9392454028129578, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/218340/landaus-theorem-on-tournaments?answertab=active
# Landau's Theorem on tournaments

There is a theorem of Landau related to tournament theory. It looks like the sequence $(0, 1, 3, 3, 3)$ satisfies all three conditions from the theorem, but it seems that it is not possible for 5 people to play a tournament in such a way (if there are no ties). Did I miss something?

- Are you sure your question is related to Mathematica (TM) "the software"? – belisarius Oct 21 '12 at 22:41

## 2 Answers

Player $A$ loses to everyone. Player $B$ beats player $A$ and loses to everyone else. Players $C$, $D$, and $E$ beat each other cyclically, like rock-paper-scissors.

- There is no requirement that the graph be acyclic. – Rahul Narain Oct 21 '12 at 23:16

Draw $K_5$, the complete graph on $5$ vertices, and assign directions to just enough edges to give one vertex ($A$ in the picture below) a score (outdegree) of $0$ and another ($B$ in the picture) a score of $1$. At this point only the red edges have not been assigned orientations, and it's clear that there are exactly two ways to orient them to give vertices $C$, $D$, and $E$ scores of $3$: they must form a cycle, either $$C\to D\to E\to C$$ or $$C\to E\to D\to C\;.$$ Either works to give a tournament with the score sequence $\langle 0,1,3,3,3\rangle$.
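A quick check of the tournament described in the first answer (my addition, not from the thread):

```python
from itertools import combinations

# A loses to everyone, B beats only A, and C, D, E beat each other cyclically (C->D->E->C).
players = ["A", "B", "C", "D", "E"]
beats = {
    ("B", "A"), ("C", "A"), ("D", "A"), ("E", "A"),   # everyone beats A
    ("C", "B"), ("D", "B"), ("E", "B"),               # everyone but A beats B
    ("C", "D"), ("D", "E"), ("E", "C"),               # the 3-cycle
}

# Every pair plays exactly once, with no ties.
assert all(((u, v) in beats) != ((v, u) in beats) for u, v in combinations(players, 2))

scores = sorted(sum(1 for v in players if (u, v) in beats) for u in players)
print(scores)   # [0, 1, 3, 3, 3] -- the score sequence from the question
```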
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.916496753692627, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-statistics/73498-solved-mean-variance.html
# Thread:

1. ## [SOLVED] mean and variance

Let $X_1,X_2,X_3,X_4$ be four iid random variables having the same pdf $f(x)=2x$, $0<x<1$, zero elsewhere. Find the mean and variance of the sum $Y$ of these four random variables. ??

2. Originally Posted by ninano1205:
> Let $X_1,X_2,X_3,X_4$ be four iid random variables having the same pdf $f(x)=2x$, $0<x<1$, zero elsewhere. Find the mean and variance of the sum $Y$ of these four random variables. ??

I think you mean to say that $Y=X_1+X_2+X_3+X_4$? Thus, $E\left[Y\right]=E\left[X_1+X_2+X_3+X_4\right]=E\left[X_1\right]+E\left[X_2\right]+E\left[X_3\right]+E\left[X_4\right]$. Since they all have the same pdf, we can say that $E\left[Y\right]=4\int_0^1 \left[x\cdot\left(2x\right)\right]\,dx=8\int_0^1 x^2\,dx$. Can you take it from here?

For variances, $\text{Var}\left[Y\right]=\text{Var}\left[X_1+X_2+X_3+X_4\right]=\text{Var}\left[X_1\right]+\text{Var}\left[X_2\right]+\text{Var}\left[X_3\right]+\text{Var}\left[X_4\right]$. Since they all have the same pdf, we can say that $\text{Var}\left[Y\right]=4\int_0^1\left[x^2\cdot\left(2x\right)\right]\,dx=8\int_0^1 x^3\,dx$. Can you take it from here? Does this make sense?

3. ## Variance?

I got the mean $=8/3$, which is right, but the variance doesn't seem to be right. Following your way the variance comes out to be 2, but the answer should be 2/9. What's wrong with it?

4. Originally Posted by ninano1205:
> I got the mean $=8/3$, which is right, but the variance doesn't seem to be right. Following your way the variance comes out to be 2, but the answer should be 2/9. What's wrong with it?

Whoops... I forgot something... $Var\left[x\right]=E\left[x^2\right]-{\color{red}\left(E\left[x\right]\right)^2}$. So you would need to evaluate $4\left[\int_0^1 2x^3\,dx-\left(\int_0^1 2x^2\,dx\right)^2\right]$ to get the variance (I multiplied the value by 4 because each r.v. has the same variance). It should be $\frac{2}{9}$. Does this make sense?

5. That's right. You found the second moment and not the variance.
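A quick simulation (my addition, not from the thread) that confirms the values $E[Y]=8/3$ and $\text{Var}[Y]=2/9$ derived above, using the inverse-CDF sampler $X=\sqrt{U}$ for the density $f(x)=2x$:

```python
import numpy as np

rng = np.random.default_rng(0)

# pdf f(x) = 2x on (0,1) has cdf F(x) = x^2, so X = sqrt(U) with U uniform on (0,1).
samples = np.sqrt(rng.random((200_000, 4)))   # 200k draws of (X1, X2, X3, X4)
Y = samples.sum(axis=1)

print(Y.mean(), Y.var())   # close to 8/3 ~ 2.667 and 2/9 ~ 0.222
print(8 / 3, 2 / 9)        # the exact values from the thread
```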
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9207192659378052, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/75731?sort=votes
Oracle Separation Survey

Is there a survey (or a website) somewhere that lists all known separation results? I.e. it has a list of triples $$(C_1, C_2, A)$$ where

1. We do not know if $C_1 = C_2$
2. We know that $C_1^A = C_2^A$ or $C_1^A \neq C_2^A$

I.e. I'm looking for a big list, for classes like: L vs P, P vs NP, BPP vs NEXP, etc.

- I second JDH's suggestion of the Complexity Zoo, though, from what I can tell, it will require you to compile the answers yourself by going through all the 495 classes listed there. On the plus side, I would tend to trust the zoo to have even the latest results, and the zookeepers are very good with providing references for everything. The site was created specifically because few people can confidently remember such long lists of separation results. – Thierry Zell Sep 18 2011 at 14:43
- By the way, I'm sure there must be surveys too. They will be more reader-friendly, but less comprehensive, by definition. In the long run, you probably want to look at both. – Thierry Zell Sep 18 2011 at 14:46
- Is there a general definition, given a class $C$ of languages, of what the relativized class $C^A$ means for an oracle $A$? This notation seems confusing to me. – David Harris Sep 18 2011 at 16:36

2 Answers

The Complexity Zoo compiles a huge amount of the information you want, if not exactly in the form you request.

The Complexity Zoology gives you exactly what you are looking for.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9343022108078003, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/15403?sort=votes
## “$\kappa$ strongly inaccessible” = “every function $f:V_\kappa\to V_\kappa$ can be self-applied”?

Strongly inaccessible cardinals are usually introduced either as (a) cardinalities of models of ZFC or (b) cardinals which are not the power set of a smaller cardinal nor the supremum of a set with hereditarily lesser cardinality. These seem to represent the model-theoretic and set-theoretic perspectives on strong inaccessibility.

Recently I learned that if $\kappa$ is a strongly inaccessible cardinal, then $(V_\kappa)^2\subseteq V_\kappa$, so any function $f:V_\kappa\to V_\kappa$ is a set of pairs whose coordinates are members of $V_\kappa$, and so any such function can be "applied" to any other such function: $$f(g) = \{\ z\ |\ \langle \langle x,y\rangle, z \rangle\in f\ \&\ \langle x,y\rangle\in g\ \}$$ Therefore one can say that if $\kappa$ is strongly inaccessible, then $V_\kappa$ is closed under self-application (as defined above) of functions $f:V_\kappa\to V_\kappa$. This seems to be sort of a "recursion-theoretic" characterization of strong inaccessibility: it identifies a definable operation under which all strongly inaccessible cardinals are closed.

Question 0: does this make sense?

Question 1: is the converse true, making this a complete characterization? (If $V_\kappa$ is closed under self-application of functions, is $\kappa$ strongly inaccessible?)

Question 2: if so, I'm sure this has come up before in the literature. In what sorts of directions does this investigation lead?

This is one of the more-vague questions I've asked so far. I guess I'm sort of fishing for enlightenment here; it took me a long time to understand the point of inaccessibility, and I suspect that I might have caught on more quickly if this motivation (closure under self-application) had been introduced early on.

## 1 Answer

Unfortunately, your characterizations of the strongly inaccessible cardinals are not quite correct. The correct definition is that $\kappa$ is strongly inaccessible (also known as just plain inaccessible) if $\kappa$ is an uncountable regular strong limit cardinal. The cardinal $\kappa$ is regular if it is not the union of fewer than $\kappa$ many sets of size less than $\kappa$. And $\kappa$ is a strong limit cardinal if whenever $\beta < \kappa$, then the power set of $\beta$ also has size less than $\kappa$.

This is not equivalent to the assertion that $V_\kappa$ is a model of ZFC. (Although, to be sure, this false assertion has appeared surprisingly often in print, and I have even heard a famous proof theorist make this assertion to a very large audience of hundreds of logicians.) The reason is that if $\kappa$ is strongly inaccessible, then a Löwenheim–Skolem argument shows that there will be many $\gamma < \kappa$ for which $V_\gamma$ is elementary in $V_\kappa$, and so these also will be models of ZFC. It is an exercise to show that the least $\gamma$ for which $V_\gamma$ is a model of ZFC has cofinality $\omega$, and so is definitely not inaccessible. Also, since ZFC is a first-order theory in a countable language, if it has any models at all, then it has models in every infinite cardinality. So it is not correct to characterize inaccessible cardinals as the sizes of models of ZFC in that way either.

It is also not equivalent to asserting that $\kappa$ is regular and not the size of a power set of a smaller set. The reason is that if, say, CH failed, then $\omega_1$ would be regular and also not be the size of the power set of any smaller set (since $2^\omega$ would be already too large). But $\omega_1$ is not an inaccessible cardinal.

Your remark that $V_\kappa$ is closed under pairs when $\kappa$ is inaccessible actually doesn't need any amount of inaccessibility. If $x$ and $y$ are sets in any $V_\alpha$, then the pair $(x,y)$ appears just a few steps later (and actually, one can use flat pairing functions that do not increase rank at all, for infinite-rank sets), and so every $V_\lambda$ is closed under pairing for any limit ordinal $\lambda$. If one uses a flat pairing function (instead of the common Kuratowski pairing function), then every $V_\alpha$ for every infinite ordinal $\alpha$ will be closed under pairing.

Finally, yes, if $V_\lambda$ is closed under pairing, then you can apply such functions to themselves, and this idea is used quite often when we have elementary embeddings defined on models of ZFC. For example, if $j:V\to M$, then $j(j)$ is a function defined on $M$, into some structure $j(M)$, which will be the union of $j(V_\alpha^M)$. This operation is called application.

There is a famous result of Laver concerning the left distributive algebra of nontrivial elementary embeddings $j:V_\lambda\to V_\lambda$. The first results characterizing normal forms in the free algebra with one generator used such embeddings, with the accompanying very large large cardinal hypothesis. For example, Laver produced a decision procedure, which was only known to work under these enormous large cardinal assumptions. Later, the large cardinal hypotheses were removed and the algebra became studied apart from the large cardinals, but the basic properties were definitely inspired and discovered by knowledge of what the large cardinals were like. The basic operation in this algebra is known as application, and is exactly the operation that you mention.

- Hear hear. It is much too infrequently mentioned that there are many non-inaccessible $\alpha$ for which $V_\alpha$ is a model of ZFC. I think it is true, though, that $\kappa$ must be inaccessible as soon as $V_\kappa$ is a model for "second-order" ZFC, right? – Mike Shulman Feb 16 2010 at 3:11
- @Mike: There might be no such $V_\alpha$'s, though you are correct that there are plenty if there is also an inaccessible. What you say about ZFC2 is true; this is an old theorem of Shepherdson (from one of his trilogy of papers titled Inner models for set theory). – François G. Dorais♦ Feb 16 2010 at 3:18
- Well, if by second order you just mean GBC = Gödel–Bernays set theory (with choice), then that is not quite right either, since every ZFC model can be given a second-order part satisfying GBC. If you mean KM = Kelley–Morse, then you have to go a bit further, but you still don't need an inaccessible. But if by "second-order" you mean that you have $V_\kappa$ with the full $V_{\kappa+1}$ as the second-order part, then this does indeed imply that $\kappa$ is inaccessible. – Joel David Hamkins Feb 16 2010 at 3:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.956900954246521, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/70654?sort=votes
## Invariants for subspaces of product manifolds

The answer to the following question has probably been known for a long time, although it is unknown to me, since I am not a differential geometer.

Let $X$ and $Y$ be 2-dimensional, smooth manifolds and let $Z$ be an open piece of a hypersurface in $X\times Y$ near a point $(x_0, y_0)$ with the properties that both projections $\pi_X:\ Z \to X$ and $\pi_Y:\ Z \to Y$ have surjective differentials and the projections from the conormal `$N^*(Z)$` into `$T^*(X)$` and `$T^*(Y)$` are local diffeomorphisms.

What are the invariants of $Z$ with respect to separate diffeomorphisms in $X$ and $Y$? In particular, how can we decide, for instance in terms of a defining function `$F(x_1, x_2, y_1, y_2)$` for $Z$, whether there are coordinate systems in $X$ and $Y$ such that the fibers `$\pi_X(\pi_Y^{-1}(y))$` and `$\pi_Y(\pi_X^{-1}(x))$` are straight lines (which is the case if `$F(x, y) = x_1 y_1 + x_2 + y_2$`)?

## 1 Answer

There is a theory due to Tresse, which is unfortunately fairly complicated. It is explained, at least partly, in the book of Arnol'd, Geometric Methods in the Theory of Ordinary Differential Equations. Elie Cartan wrote a difficult paper on it, and this paper was explained more clearly in a paper of Bryant, Griffiths and Hsu, Toward a Geometry of Differential Equations. The possibility of finding coordinates in which both systems of curves are straight lines is precisely the vanishing of Tresse's invariants, which occurs just when your space $Z$ is given by a single quadratic equation in the space $X \times Y$, in some system of coordinates. Explicit formulae for Tresse's invariants are in the sources I mentioned. I wrote a paper on the relation of this problem to complex algebraic geometry: http://arxiv.org/pdf/math/0507087v5.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9511796236038208, "perplexity_flag": "head"}
http://mathhelpforum.com/discrete-math/34447-drawing-directed-graph.html
# Thread: Drawing a directed graph

1. ## Drawing a directed graph

Now, I understand how directed graphs work, but drawing one for this problem just baffles me. I have a 3-gallon and a 5-gallon jug. I can fill them from a tap, transfer water from one to the other, and empty them. I need to prove that I can get exactly 1 gallon in one jug. I then need to draw the process using a directed graph model.

I solved the first part: I eventually got a jug with exactly one gallon of water. I'm having trouble trying to figure out how to draw this, though. A hint says to use (a,b) to show each jug. I'm at a complete loss as to how to do this without having a vertex representing every step, but this would just create a line. Any ideas?

2. $(0,5) \to (3,2) \to (0,2) \to (2,0) \to (2,5) \to (3,4) \to (0,4) \to (3,1)$

3. Wow, so I guess I was right? It just seems... too simple. Why draw a graph in the first place? Ah well, I appreciate it.

4. You might also try plotting the points (0,5) etc. on an ordinary Cartesian x-y graph and then drawing arrows connecting them. This reveals a certain structure to the solution which may not otherwise be obvious.
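The directed-graph model the hint points at can also be generated by a short program: vertices are the states (a, b), directed edges are the legal moves (fill, empty, pour), and a breadth-first search finds a path to a state holding exactly 1 gallon. The Python sketch below is an illustration added here (not from the thread); it assumes the encoding (a, b) = (3-gallon jug, 5-gallon jug).

```python
from collections import deque

CAP = (3, 5)  # jug capacities: (a, b) = (3-gallon, 5-gallon)

def moves(state):
    """All states reachable from `state` in one step: fill, empty, or pour."""
    a, b = state
    nxt = {(CAP[0], b), (a, CAP[1]),      # fill either jug from the tap
           (0, b), (a, 0)}                # empty either jug onto the ground
    pour_ab = min(a, CAP[1] - b)          # pour the 3-gallon jug into the 5-gallon jug
    pour_ba = min(b, CAP[0] - a)          # pour the 5-gallon jug into the 3-gallon jug
    nxt.add((a - pour_ab, b + pour_ab))
    nxt.add((a + pour_ba, b - pour_ba))
    nxt.discard(state)
    return nxt

# Breadth-first search over the directed graph of states, starting from two empty jugs.
start = (0, 0)
parent = {start: None}
queue = deque([start])
goal = None
while queue:
    s = queue.popleft()
    if 1 in s:                            # some jug holds exactly 1 gallon
        goal = s
        break
    for t in moves(s):
        if t not in parent:
            parent[t] = s
            queue.append(t)

path = []
while goal is not None:
    path.append(goal)
    goal = parent[goal]
print(" -> ".join(map(str, reversed(path))))
```

Run as-is, this should print a shortest directed path such as (0, 0) -> (3, 0) -> (0, 3) -> (3, 3) -> (1, 5), which is exactly the kind of picture the exercise asks for; starting the search from (0, 5) instead reproduces the longer walk given in post 2.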
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9597386717796326, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/38605?sort=oldest
## Two versions of Hamiltonian reduction

Given a symplectic manifold $X$ with a nice $G$-action, an equivariant moment map $\mu$, and an invariant $\chi \in \mathfrak{g}^*$ which is a regular value of $\mu$, there are two ways to form the Hamiltonian reduction. What one usually does is take the level set $X_\chi=\mu^{-1}(\chi)$ and quotient out the group: $X_\chi/G$. However, one could also quotient out $G$ first and then define $(X/G)_\chi$ to be the set of all points represented by elements of $\mu^{-1}(\chi)$.

I have sometimes heard that these two procedures give the same result. Is this true, or more precisely, in what situations is it true? Is this written down somewhere? Are there counterexamples one should have in mind? I am also interested in settings where one replaces spaces by possibly noncommutative Poisson algebras.

1 If an orbit contains an element of that level set, then it is completely contained in it, by the equivariance of $\mu$ and the invariance of $\chi$. – Santiago Canez Sep 13 2010 at 19:43

## 2 Answers

It depends on whether you think of the symplectic form as giving you a map $TX \to T^*X$ or vice versa. When you restrict to $\mu^{-1}(\chi)$, you only have a "presymplectic form", which gets you a map like the above (and not an isomorphism). If you quotient down to $X/G$, you only have a "Poisson tensor", which gets you a map backwards (that isn't an isomorphism). From your desired generalization it seems like you would prefer the latter picture.

I seem to recall that, in the algebraic setting (so working with (affine, say) algebraic varieties and their algebraic functions, rather than manifolds and smooth functions), it is necessary to assume that $G$ is reductive for the two processes you propose to yield the same result. On the one hand, one has $(A/J)^\mathfrak{g}$, while on the other, one has $A^\mathfrak{g}/J^\mathfrak{g}$. If it happens that $J$ has a complementary submodule $J^c$ in $A$, then one can write $A=J \oplus J^c$ as a $G$-module, so that $A^\mathfrak{g}=J^\mathfrak{g} \oplus (J^c)^\mathfrak{g}$, and one can finally conclude that $(A/J)^\mathfrak{g} \cong A^\mathfrak{g}/J^\mathfrak{g}$. Otherwise, the definitions can disagree. I learned this argument in a course which didn't produce notes, so I cannot provide a reference. I didn't think about it, but I think the same sort of thing is going on for $C^\infty$ manifolds: you want a reductive group. I could be mistaken.

Thanks, I recall the same thing for reductive groups, so it must be true ;-) – Jan Weidner Sep 30 2010 at 9:39
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9517955780029297, "perplexity_flag": "head"}
http://www.chegg.com/homework-help/questions-and-answers/steve-roper-owes-phil-corrigan-the-following-750-due-in-3-years-without-interest-800-due-i-q3436436
## Please show all work. What should be the amounts of these payments if money is worth 4% compounded quarterly?

Steve Roper owes Phil Corrigan the following: $750 due in 3 years without interest, $800 due in 3.5 years with interest at 5% compounded quarterly, and $500 due in 5 years with interest at 3.5% compounded semiannually. They agree to settle the debt by having Steve pay Phil $600 in 1 year and the remainder in two equal payments - one in 3 years and the other in 8 years. What should be the amounts of these payments if the money is worth 4% compounded quarterly?

## Answers (1)

• Answered by Anonymous

I hope this might help you out.

EXAMPLE 1 — Computing the Future Value of an Ordinary Annuity. Suppose $1000 is invested in a savings plan at the end of each year and that 8% interest is paid, compounded annually. How much will be in the account after 4 years?

Solution: To find the value of the annuity after 4 years, we compute the value of each of the four payments after 4 years and then find the sum of these payments. First, $1000 deposited at the end of the first year will be earning interest for 3 years, so it will be worth 1000(1.08)^3 = $1,259.71. Similarly, $1000 deposited at the end of the second year will be earning interest for 2 years, so it will be worth 1000(1.08)^2 = $1,166.40. Continuing, we see that, after the fourth year, the $1000 invested at the end of the third year will be worth 1000(1.08) = $1,080. Finally, the $1000 invested at the end of the fourth year will not earn any interest and so will be worth $1000. Summing, we have 1259.71 + 1166.40 + 1080.00 + 1000.00 = $4,506.11. We illustrate this in Figure 1.

WARNING: In an ordinary annuity, payments are made at the end of each payment interval. This is a common practice as, for example, the first car or mortgage payment is usually due one month (or more) after the car or house purchase. However, in other cases, payments are made at the beginning of a period. This type of annuity is called an annuity due and is discussed on page 582. We stress that formulas (1), (2), (3), and (6), which we are about to develop, are valid only for ordinary annuities.

As Example 1 suggests, computing the future value of an annuity by calculating the future value of each payment separately can be a tedious affair. Imagine trying to compute the future value of an annuity consisting of 360 payments (as in a 30-year mortgage). Fortunately, there is a much easier way to do it.

Suppose that B dollars is deposited or received at the end of each payment period. An interest rate of i per period is paid at the same time. Let us compute the amount in the account, or future value (FV), after n payment periods:

- After 1 period, we have deposited B dollars, so FV = B.
- After 2 periods, the first B dollars is now worth B(1 + i) dollars, and we deposit another B dollars to obtain FV = B(1 + i) + B.
- After 3 periods, the B(1 + i) + B dollars is now worth [B(1 + i) + B](1 + i) = B(1 + i)^2 + B(1 + i), and we deposit another B dollars, so FV = B(1 + i)^2 + B(1 + i) + B.

Continuing, we find that FV = B[(1 + i)^(n-1) + ... + (1 + i) + 1]. The expression in brackets can be written more succinctly; as we show at the end of this section, (1 + i)^(n-1) + ... + (1 + i) + 1 = [(1 + i)^n - 1]/i. Thus we have the following.

FUTURE VALUE OF AN ANNUITY: COMPOUNDING ONCE EACH PAYMENT PERIOD. The future value, FV, of an ordinary annuity with interest compounded at the end of each payment interval is given by

FV = B[(1 + i)^n - 1]/i,    (1)

where B = amount deposited at the end of each payment period, i = interest rate per payment period, and n = number of payment periods.

EXAMPLE 1 (continued) — Using Formula (1). We can solve Example 1 much more quickly if we use equation (1). We have FV = 1000[(1.08)^4 - 1]/0.08 = $4,506.11.

If interest is compounded m times a year, a formula similar to (1) holds.

FUTURE VALUE OF AN ANNUITY: COMPOUNDING m TIMES EACH PAYMENT PERIOD. The future value FV of an annuity, with interest compounded m times each payment period, is given by an analogous formula, where B, i, and n are as before.
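A few lines of Python can check formula (1) and sketch one standard reading of the original settlement question. The equation-of-value setup in the second half is an assumption made here for illustration only (it treats the quoted $800 and $500 as principals accumulating at their stated rates to maturity, and values everything at 1% per quarter); it is not taken from the posted answer.

```python
def annuity_fv(B, i, n):
    """Future value of an ordinary annuity: n end-of-period payments of B at rate i per period."""
    return B * ((1 + i) ** n - 1) / i

# Example 1 from the answer: $1000 per year for 4 years at 8% compounded annually.
print(round(annuity_fv(1000, 0.08, 4), 2))      # 4506.11

# Sketch of the original question (equation of value; one standard reading, assumed here).
v  = 1 / 1.01                      # quarterly discount factor at 4% compounded quarterly
d1 = 750                           # due in 3 years (12 quarters), no interest
d2 = 800 * 1.0125 ** 14            # due in 3.5 years, 5% compounded quarterly on 800
d3 = 500 * 1.0175 ** 10            # due in 5 years, 3.5% compounded semiannually on 500
# $600 paid in 1 year plus equal payments X in 3 and 8 years must balance the debts:
#   600*v**4 + X*v**12 + X*v**32 = d1*v**12 + d2*v**14 + d3*v**20
X = (d1 * v**12 + d2 * v**14 + d3 * v**20 - 600 * v**4) / (v**12 + v**32)
print(round(X, 2))                 # size of each of the two equal payments
```

The first print reproduces the $4,506.11 of Example 1; the second prints the size of each equal payment under the stated assumptions.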
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9272411465644836, "perplexity_flag": "middle"}
http://mathforum.org/mathimages/index.php?title=The_Golden_Ratio&direction=next&oldid=34261
# The Golden Ratio

### From Math Images

Fields: Algebra and Geometry. Image created by: azavez1. Website: The Math Forum.

The golden number, often denoted by the lowercase Greek letter "phi", is ${\varphi}=\frac{1 + \sqrt{5}}{2} = 1.61803399...$. The term golden ratio refers to any ratio which has the value phi. The image to the right illustrates dividing and subdividing a rectangle into the golden ratio. This page explores how the Golden Ratio can be observed and found in the arts, mathematics, and nature.

# Basic Description

## The Golden Ratio as an Irrational Number

## The Golden Ratio in the Arts

### Parthenon

There is an abundance of artists who consciously used the Golden Ratio from as long ago as 400 BCE. One such example is the Greek sculptor Phidias, who built the Parthenon. The exterior dimensions of the Parthenon form the golden rectangle, and the golden rectangle can also be found in the space between the columns.

### Vitruvian Man

Another instance in which the Golden Ratio appears is in Leonardo da Vinci's drawing of the Vitruvian Man. Da Vinci's picture of man's body fits the approximation of the golden ratio very closely. This picture is considered to be a depiction of a perfectly proportioned human body. The Golden Ratio in this picture is the distance from the navel to the top of the head, divided by the distance from the soles of the feet to the top of the head.

## A Geometric Representation

### The Golden Ratio in a Line Segment

The golden number can be defined using a line segment divided into two sections of lengths $a$ and $b$. If $a$ and $b$ are appropriately chosen, the ratio of $a$ to $b$ is the same as the ratio of $a + b$ to $a$, and both ratios are equal to $\varphi$. The line segment above (left) exhibits the golden proportion. The line segments above (right) are also examples of the golden ratio. In each case,

$\frac{{\color{Red}\mathrm{red}}+\color{Blue}\mathrm{blue}}{{\color{Blue}\mathrm{blue}} }= \frac{{\color{Blue}\mathrm{blue}} }{{\color{Red}\mathrm{red}} }= \varphi .$

### The Golden Rectangle

A golden rectangle is any rectangle where the ratio between the sides is equal to phi. When the side lengths are proportioned in the golden ratio, the rectangle is said to possess the golden proportions. A golden rectangle has sides of length $\varphi \times r$ and $1 \times r$, where $r$ can be any constant. Remarkably, when a square with side length equal to the shorter side of the rectangle is cut off from one side of the golden rectangle, the remaining rectangle also exhibits the golden proportions. This continuing pattern is visible in the golden rectangle below.

### Triangles

The golden number, $\varphi$, is used to construct the golden triangle, an isosceles triangle that has legs of length $\varphi \times r$ and base length $1 \times r$, where $r$ can be any constant. It is above and to the left. Similarly, the golden gnomon has base $\varphi \times r$ and legs of length $1 \times r$. It is shown above and to the right. These triangles can be used to form regular pentagons (pictured above) and pentagrams. The pentagram below, generated by the golden triangle and the golden gnomon, has many side lengths proportioned in the golden ratio:

$\frac{{\color{SkyBlue}\mathrm{blue}} }{{\color{Red}\mathrm{red}} } = \frac{{\color{Red}\mathrm{red}} }{{\color{Green}\mathrm{green}} } = \frac{{\color{Green}\mathrm{green}} }{{\color{Magenta}\mathrm{pink}} } = \varphi .$

These triangles can be used to form fractals and are one of the only ways to tile a plane using pentagonal symmetry. Pentagonal symmetry is best explained through example. Below, we have two fractal examples of pentagonal symmetry. Images that exhibit pentagonal symmetry have five symmetry axes: we can draw five lines from the image's center, and all resulting divisions are identical.

# A More Mathematical Explanation

Note: understanding of this explanation requires: Algebra, Geometry

## The Golden Ratio in Nature

### Spirals & Phyllotaxis

# Teaching Materials

There are currently no teaching materials for this page.

# Future Directions for this Page

- animation?
- http://www.metaphorical.net/note/on/golden_ratio
- http://www.mathopenref.com/rectanglegolden.html
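The defining ratio and the self-similar subdivision of the golden rectangle described above can be checked numerically. The following short Python snippet is an illustration added here, not part of the original page:

```python
from math import sqrt, isclose

phi = (1 + sqrt(5)) / 2

# Defining property: (a + b) / a = a / b when the ratio a : b equals phi.
a, b = phi, 1.0
print(isclose((a + b) / a, a / b))     # True

# Equivalently, phi is the positive solution of x**2 = x + 1.
print(isclose(phi**2, phi + 1))        # True

# Golden rectangle: cutting a 1-by-1 square off a 1-by-phi rectangle leaves a
# rectangle with sides (phi - 1) and 1, whose aspect ratio is again phi.
print(isclose(1 / (phi - 1), phi))     # True
```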
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 13, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8934658169746399, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/47343/looking-for-a-complete-exposition-of-the-burali-forti-paradox/47686
## Looking for a complete exposition of the Burali-Forti paradox

In the context of ZFC, one normally uses von Neumann's definition of the ordinals. However, originally an ordinal was just the order-type of a well-ordered set (where "order-type of A" may for example be defined to be the equivalence class of all ordered sets that are order-isomorphic to A; this definition is of course no longer allowed in ZFC, but was common in pre-ZFC naive set theory). I am now looking for a complete exposition of the Burali-Forti paradox, with the original definition of ordinal.

One can find in a number of papers and books expositions similar to the following one, cited from Wikipedia: "The "order types" (ordinal numbers) themselves are well-ordered in a natural way, and this well-ordering must have an order type Ω. It is easily shown in naïve set theory (and remains true in ZFC but not in New Foundations) that the order type of all ordinal numbers less than a fixed α is α itself. So the order type of all ordinal numbers less than Ω is Ω itself. But this means that Ω, being the order type of a proper initial segment of the ordinals, is strictly less than the order type of all the ordinals, but the latter is Ω itself by definition. This is a contradiction."

To complete this exposition, we need proofs for the following two facts:

• The ordinals are well-ordered under their natural ordering.
• The order type of all ordinal numbers less than a fixed α is α itself.

The book "Grundbegriffe der Mengenlehre" by Gerhard Hessenberg (1906) (which can be read online at http://www.archive.org/stream/grundbegriffede00hessgoog#page/n79/mode/1up) presents proofs for these facts, which to me however seem invalid (I do not understand why he may conclude "und umgekehrt ist jeder Zahl ν<μ ein Abschnitt in M eindeutig zugeordnet" — "and conversely, to each number ν<μ a segment in M is uniquely assigned" — on page 550).

I have found a complicated proof of the first fact, which however is based on induction (over the natural numbers) and three applications of the Axiom of Choice. It seems to me that the second fact may be proven using transfinite induction (transfinite induction, it seems to me, may only be used once one has established the first fact). So in principle, I think I can complete the Burali-Forti paradox as stated above, but this completed derivation would be very lengthy and involved. So what I am actually looking for is a more concise or less involved complete derivation of the Burali-Forti paradox. Can anyone present such a derivation here, or point me to an existing one in the literature?

3 None of these results require any form of choice. The main things you need to show are the following: if a well-ordered set X has an order-preserving injection into a well-ordered set Y, then there is a unique such injection whose image is an initial segment of Y. If X, Y are well-ordered sets, there is either an order-preserving injection from X to Y or an order-preserving injection from Y to X. To do this you need a precise form of definition by recursion. Anyway, I would expect this material to be in many standard books. – Qiaochu Yuan Nov 25 2010 at 17:21

I do disagree with one part of Qiaochu's comment. I am pretty sure that, when I was thinking this all through a few years ago, I never did need to come up with a precise form of definition by recursion. I wrote up the initial, and hardest, steps below; I'm curious when I will need to use it, because it hasn't happened yet.
– David Speyer Nov 25 2010 at 18:13

## 2 Answers

So, an awkward admission: I've never actually read a basic intro to ZFC, nor taken a course on the subject. So, while I suspect this can all be found in any basic text, I don't know which one to refer you to. But it isn't hard to do all of this by hand, just tedious. I'll get you to the point of showing that ordinals, by your definition, are totally ordered. I think that's the part which is most different from the von Neumann ordinal case. The rest is not too much more difficult, and I assume someone will be recommending a textbook soon anyway.

We'll write $X \preceq Y$ if $X$ and $Y$ are well-ordered sets and there is an order-preserving bijection between $X$ and an initial segment of $Y$.

Lemma 1. If $X$ and $Y$ are well-ordered sets, there is at most one order-preserving injection $X \to Y$ whose image is an initial interval.

Proof: Suppose there were two, call them $\phi_1$ and $\phi_2$. Since they are not the same, there is some smallest $x \in X$ such that $\phi_1(x) \neq \phi_2(x)$ (using that $X$ is well ordered). Let `$Y' = \{ \phi_1(x'): x' < x \}$`. Since $\phi_1(x) \not \in Y'$, the set $Y \setminus Y'$ is not empty; let its least member be $y$. Then either $\phi_1(x)$ or $\phi_2(x)$ is not $y$; say WLOG $\phi_1(x) \neq y$. Since $\phi_1(x) \not \in Y'$, we deduce that $\phi_1(x) > y$. Now, consider any $x' \in X$. If $x' < x$, then $\phi_1(x') \in Y'$ and $\phi_1(x') \neq y$; if $x' \geq x$, then $\phi_1(x') \geq \phi_1(x) > y$. So $y$ is not in the image of $\phi_1$, but $\phi_1(x) > y$ is. This contradicts that the image of $\phi_1$ is an initial interval. QED

We'll write $X \preceq_{\phi} Y$ to mean that $\phi$ is an order-preserving map $X \to Y$ whose image is an initial segment. So $X \preceq Y$ if and only if $X \preceq_{\phi} Y$ for some $\phi$.

Corollary: If $X \preceq Y$ and $Y \preceq X$, then $X$ and $Y$ are isomorphic posets.

Proof: Let `$X \preceq_{\phi} Y$` and `$Y \preceq_{\psi} X$`. Then `$X \preceq_{\psi \circ \phi} X$`. But also `$X \preceq_{\mathrm{Id}} X$`. So $\psi \circ \phi = \mathrm{Id}$. Similarly, $\phi \circ \psi = \mathrm{Id}$. So $\phi$ and $\psi$ are mutually inverse order-preserving maps. QED

Thus, we see that the ordinals are partially ordered under $\preceq$. (Of course, expressing this concept takes us out of the language of ZFC, since it is a statement about classes.) We will next show that this partial order is total.

Prop 2. Let $X$ and $Y$ be well-ordered sets. Then either $X \preceq Y$ or $Y \preceq X$.

Proof: Consider `$X' := \{ x \in X : X_{\leq x} \preceq Y \}$`. Consider the following subset of $X' \times Y$:

`$$\Phi = \{ (x,y) : \ \exists x' \in X,\ x \leq x',\ \exists \phi:\ X_{\leq x'} \preceq_{\phi} Y \ \mbox{and} \ y=\phi(x) \}$$`

We claim that $\Phi$ is a function. In other words, we claim that, for each $x \in X'$, there is exactly one $y$ such that $(x, y) \in \Phi$. There is at least one such $y$ because we can take $x'=x$ and, by the definition of $X'$, there will be a map $\phi$ such that $X_{\leq x} \preceq_{\phi} Y$; take $y= \phi(x)$. To see that there is not more than one $y$ above $x$, suppose that there were $y_1$ and $y_2$. They would correspond to some `$(x'_1, \phi_1)$`, `$(x'_2, \phi_2)$`. (No axiom of choice here, I'm only making finitely many choices!) WLOG, say `$x'_1 \leq x'_2$`. Let `$\phi'_2$` be the restriction of $\phi_2$ to `$X_{\leq x'_1}$`. Then `$X_{\leq x'_1} \preceq_{\phi_1} Y$` and `$X_{\leq x'_1} \preceq_{\phi'_2} Y$`. So, by Lemma 1, `$\phi_1=\phi'_2$`.
Then `$\phi_1(x) = \phi'_2(x)$`, which is to say, $y_1=y_2$. So $\Phi$ is a function.

It is now easy to check (details left to you) that `$X' \preceq_{\Phi} Y$`. If $X=X'$, we are done. If not, let $Y' = \Phi(X')$. If $Y=Y'$, then $\Phi$ is injective and surjective, so its inverse is a function and we have $Y \preceq_{\Phi^{-1}} X$. If $X \neq X'$ and $Y \neq Y'$, then let $x$ and $y$ be the minimal elements of $X \setminus X'$ and $Y \setminus Y'$ (these exist since $X$ and $Y$ are well ordered). Define $\phi'$ on `$X_{\leq x}$` to be $\Phi$ on `$X_{<x} = X'$` and set $\phi'(x)=y$. Then $\phi'$ is easily checked to be order preserving and to have image an initial interval, so `$X_{\leq x} \preceq Y$`. This contradicts that we took $x$ not to be in $X'$, and we are done. QED

At this point, we see that equivalence classes of well-ordered sets form a total order under $\preceq$.

2 Your proof of Proposition 2 essentially repeats the formal proof of definition by recursion that I saw when this material was first presented to me. I think the proof is slightly cleaner if this principle is abstracted out and proven separately, but it doesn't make too much of a difference. – Qiaochu Yuan Nov 25 2010 at 18:59

@David: I am not sure I understand what you mean by "Of course, expressing this concept takes us out of the language of ZFC, since it is a statement about classes." It seems obvious to me how to express this in ZFC. Perhaps I'm overlooking some subtlety? – Andres Caicedo Nov 26 2010 at 16:48

What I was thinking was that I couldn't say "For $\mathcal{X}$ and $\mathcal{Y}$ any two equivalence classes of well ordered sets, either $X \preceq Y$ for all $X \in \mathcal{X}$ and $Y \in \mathcal{Y}$, or vice versa." Maybe I introduced more confusion than I removed, though. – David Speyer Nov 26 2010 at 17:32

I have now found a textbook that provides a complete proof of the Burali-Forti paradox without making use of von Neumann's definition of ordinals: "Basic Set Theory" by Azriel Levy. Before providing von Neumann's definition, he works just on the assumption that some "order types" can be defined such that the order types of two well-ordered sets are identical iff the sets are order-isomorphic. Based only on this assumption, and, importantly for my concern, without using the Axiom of Foundation, he shows that the class of ordinals cannot be a set.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 95, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9542278051376343, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?t=17098&page=3
Physics Forums

## speed of light

Mentor:
DrMatrix, you are harping on the fact that the meter is defined by the speed of light, and it's been explained to you several times now that that is irrelevant to the question of whether the speed of light is constant. Exploring this point by Integral a little more may be useful:

Quote by Integral: c was constant when the meter was defined as a fraction of the earth's diameter, it was constant when lengths were measured in cubits, it was constant when man had no concept of measurement. The constancy of the speed of light is not an artifact of man's ability to measure; it is a property of the universe.

The speed of light has been known (or at least expected) to be constant for hundreds of years. The meter was defined in terms of the speed of light only 20 years ago. The meter is tied to the speed of light because the speed of light is a universal constant, not the other way around. In fact, the reason that the meter is no longer defined as the distance between two scratch marks on a bar of metal in France is that, since about 20 years ago, our ability to measure the speed of light accurately has exceeded our ability to measure the distance between those two scratch marks accurately. As well, due to environmental factors, the distance between those two scratch marks wasn't even constant. The meter had to be re-defined in terms of something more constant/precise or it would hinder scientific research. The speed of light quite literally makes a better meter-stick than the meter-stick does. If you really want to argue arbitrary units, how about the definition of a second...?

Quote by russ_watters: If you really want to argue arbitrary units, how about the definition of a second...?
Why do that? Why not define a ruler in the general sense, so that it can be used to measure intervals that are space-like or time-like?

Mentor Blog Entries: 9
Quote by Jack Martinelli: This is also circular reasoning. You can say that c is constant because $\mu_0$ and $\epsilon_0$ are constant. But then, to make it non-circular, you are left with explaining the constancy of $\mu_0$ and $\epsilon_0$. (And what do you have to do to get rid of that annoying underscore?) (And how come I haven't been attacked yet?)
I do not see the circularity here. Yes, the question has changed, but unless a measurement of c is involved in the experiment to find $\epsilon_0$, it is not circular.

Recognitions: Homework Help
quoting Integral: I do not care ... what defines what. The speed of light is a constant. The only thing that the definition of the meter in terms of the wavelength of light does is make the number used to represent the speed of light in meters a rational number. This is merely a matter of convenience and does not affect the speed of light in any way.
It is not a matter of what, but how.

quoting Integral: While dinosaurs were walking the earth the speed of light was constant, it is the same constant now, ...
How do you know this? Compared to what?

quoting Integral: The speed of light does not, and cannot double.
How do you know this? Compared to what?

quoting Creator: ...Someone get me off the floor!
Would that I could.

quoting Integral: ... the use of rods in Einstein's paper had no meaning other than as an illustration. The rods play no part in the theory.
Say WHAT?! How do you define a coordinate system? I admit there are other ways to define one, but there must be one. Or, in the words of DrMatrix: ... it is necessary to define a coordinate system.
quoting Integral: It does not matter that the meter has been redefined in terms of wavelengths of light.
Of course it matters. It matters for the same reason that most of you seem to be saying it doesn't. If the speed of light is taken to be the standard, then it is a constant by definition, which answers the original post, unless, by "speed of light," the original poster meant something other than "c."

quoting jdavel: If you make two measurements of the speed of light and calibrate the lengths of your apparatus before each measurement using the light speed method, ... ... ... you'd never do that.
What do you mean, "you'd never do that"? That is exactly what would be done. How else would you calibrate length?

quoting russ_watters: ... due to environmental factors, the distance between those two scratch marks wasn't even constant. The meter had to be re-defined in terms of something more constant/precise ...
That's the point. Except, I would replace "constant/precise" with "universal": the length of a rod is almost as constant and just as precise as the constant in the wave equation before we impose a scale on it. There isn't a rod for calibrating the meter locked in a vault and held at strict environmental conditions on Mars, but it is believed that Maxwell's equations can be found on Mars. Thus, they give us the standard, but it is still just a standard. I'm not saying that the equations are merely a standard (they are physical law), but the numerical value that we infer from the wave equation is defined as a standard universal constant. It is a constant because we have defined it that way. We have defined it that way because it is believed to be universal, and therefore more convenient than using a bar.

Mentor Blog Entries: 9
Quote by DrMatrix: The length of the meter would change compared to what? The standard of length is the meter. Rigid rods might change length, but the length of the meter would remain one meter.
Since the definition of the meter is tied to the speed of light, if the speed of light were to change, the length of the standard would change. This means that if I were to use the standard meter to measure a known distance, I would come up with a different distance after the change in c. It would now be 60 km to London where it was 50 km before.

Quote: Those are some right pretty equations y'all got there.
I find this comment very interesting. What "fancy" equations are we talking about? A square root??? If you consider the square root to be a "fancy equation", then the relativity you understand so well is full of "fancy equations". I am afraid that anyone who considers the square root to be a "fancy" equation simply cannot comprehend even the simple math used in the development of Special Relativity. Let's not say anything of the very difficult math required to understand General Relativity. By the way, since you are so enamored with the definition of a specific coordinate system, you should realize that one of the important features of GR is the ability to express the physical relationships in a manner that is independent of the coordinate system. The fundamental expressions look the same in ALL coordinate systems.
Quote: It would now be 60 km to London where it was 50 km before.
And that would show what? What's the big deal? Do you believe that lengths should be invariant in themselves? In SR, using the speed of light as a standard causes lengths to change. Is that wrong, too?

Mentor Blog Entries: 9
Quote by turin: And that would show what? What's the big deal? Do you believe that lengths should be invariant in themselves? In SR, using the speed of light as a standard causes lengths to change. Is that wrong, too?
The whole point of the standard coordinate system you carried on about in your previous post is so that the standard length does not change. Now you seem to be arguing the other side of the coin. Make up your mind. Perhaps I do not understand your point. Lengths in SR change with relative velocity; if there is no relative velocity, there is no length change. What has SR to do with the current discussion? I have asked this question over and over and never get an answer.

Suppose I were to define a standard volume of water as the amount of water in a certain bucket. Then, using that bucket, I measure the volume of several other buckets to be 5 standard buckets. One day I see a new shiny bucket that I would rather use as my standard. When I use my new standard to measure the amount of water in one of the previously measured buckets, I find that it now holds 6 standard buckets. Are you telling me I should ignore this discrepancy, or should I assume that all buckets have somehow changed? Clearly, if it is indeed used, a change in a standard will be noticed. If the meter were never used to measure anything but the speed of light, we would indeed never notice the change. But as soon as we apply the standard to something else, we will certainly notice if the standard changes. Suddenly all previous measurements will be wrong according to our standard.

What is a "known distance"? A distance is known only when compared to the standard of length. The meter is the standard. If the distance to London was 50 km and it is now 60 km, then the distance to London changed. The meter is still one meter. Yes, I am well aware that GR allows arbitrary coordinate systems. I didn't want to get into that. You can't talk about a constant speed of light in GR.

Quote by Integral: I find this comment very interesting. What "fancy" equations are we talking about? A square root??? If you consider the square root to be a "fancy equation", then the relativity you understand so well is full of "fancy equations".
I was being sarcastic; I thought you'd get that. Sorry.

Mentor Blog Entries: 9
Quote by turin, quoting Integral: It is not a matter of what, but how.
Please explain further what you mean; I don't get it.

quoting Integral: How do you know this? Compared to what?
It is believed that the oil we are burning today was living plant life in the era of the dinosaurs. It has an atomic structure that is compatible with our current chemistry. Therefore atomic structure has not changed since the era of the dinosaurs, and therefore the speed of light has not changed (significantly).

quoting Integral: How do you know this? Compared to what?
Simply citing the current state of physics. It may be that c has changed over the lifetime of the universe, but that is not the current understanding.

quoting Integral: Say WHAT?! How do you define a coordinate system? I admit there are other ways to define one, but there must be one. Or, in the words of DrMatrix: ...
Ok, we need to establish a coordinate system. Now what is the significance of that coordinate system?
What part does the coordinate system play in the final work of relativity? Precisely, none. General Relativity allows expression of physical laws independent of the coordinate system.

quoting Integral: Of course it matters. It matters for the same reason that most of you seem to be saying it doesn't. If the speed of light is taken to be the standard, then it is a constant by definition, which answers the original post, unless, by "speed of light," the original poster meant something other than "c."
How does the fact that the meter has been tied to the speed of light affect the speed of light? The key point to this entire discussion is the difference between "the speed of light is constant because the meter is defined in terms of the wavelength of light" and "the meter is expressed in terms of the wavelengths of light because the speed of light is constant".

quoting russ_watters: That's the point. Except, I would replace "constant/precise" with "universal": the length of a rod is almost as constant and just as precise as the constant in the wave equation before we impose a scale on it. There isn't a rod for calibrating the meter locked in a vault and held at strict environmental conditions on Mars, but it is believed that Maxwell's equations can be found on Mars. Thus, they give us the standard, but it is still just a standard. I'm not saying that the equations are merely a standard (they are physical law), but the numerical value that we infer from the wave equation is defined as a standard universal constant. It is a constant because we have defined it that way. We have defined it that way because it is believed to be universal, and therefore more convenient than using a bar.
I think you are stepping beyond the physics into philosophy and semantics here. The speed of light is a universal constant. Please take debates about this issue to Theory Development.

Recognitions: Homework Help
Quote by Integral: The whole point of the standard coordinate system you carried on about in your previous post ...
What standard coordinate system? I simply supported DrMatrix's position that there must be a coordinate system. I even alluded to the possibility of other ways to define a coordinate system, indicating that I do not think a lattice of rigid rods is a standard.

Quote by Integral: What has SR to do with the current discussion?
It provides an example to show that there is no problem with the spatial distance between any two points in space, such as the distance between two cities, changing.

Quote by Integral: Suppose I were to define a standard volume of water as ...
But this is just like changing from yards to meters. There is no problem here; it is just a matter of a conversion factor. What would make your example more interesting is to say: What if we measured the amount of some water in terms of the number of standard shiny buckets full, and found it to be six one day, and then, the next day, the amount was only five? Well, then we'd have to discard our conservation-of-water law, right? No, because, just in the nick of time, a genius comes along with an incredible breakthrough. He (or she; there have been female geniuses, too) declares that the amount of water is not determined by the volume, but by the weight. Now, whenever the water is measured, on whatever day, the weight is always found to be 8.3 standard shiny buckets. Hooray for the innovative minds on the standardization committee!
Mentor Blog Entries: 9
Quote: It provides an example to show that there is no problem with the spatial distance between any two points in space, such as the distance between two cities, changing.
There certainly is a problem with distances changing if it is due to a change in the standard. Why do you think that the US mileage signs are mileage signs and not km signs? A change in a standard requires a lot of recalibration; this costs money. Sorry to bring real-world issues into the argument. I feel that there is an issue with the distance changes between cities; if standards mean anything, then when I measure a distance in meters it had better read the same today as it did yesterday or next year. That is the whole point of a standard. When London is moving at .5c wrt Paris, then I would expect the distance to change. As long as they are stationary, the distance must remain the same. The distance is a physical quantity; it does not change with the units you measure it in.

Recognitions: Homework Help
Quote by Integral: Please explain further what you mean, I don't get it.
Consider defining distance against a standard rod. What is the prescription for measurement? Do we place the rod at rest alongside the distance to be measured? If this is the method, then all we can do is put the distance in three categories: shorter, longer, or the same length as the standard. Of course, this simply won't do, so we can translate the rod along the distance. This is, of course, inconvenient. For instance, it is impractical to measure microscopic distances and interstellar distances with the same standard. Also, translating the rod takes time. Things can change in time. Nothing fundamentally guarantees that the object being measured will not change significantly during the process of translation. By what, I mean the trivial issue of the standard itself. By how, I mean the more significant issue of the method used to compare to the standard.

Quote by Integral: ... atomic structure has not changed since the era of the dinosaurs, therefore the speed of light has not changed (significantly).
I don't understand how you jumped from atomic structure to speed of light.

Quote by Integral: Simply citing the current state of physics. It may be that c has changed over the lifetime of the universe, but that is not the current understanding.
But this still doesn't answer the question, "compared to what?"

Quote by Integral: ... what is the significance of that coordinate system? What part does the coordinate system play in the final work of relativity? Precisely, none. General Relativity allows expression of physical laws independent of the coordinate system.
Expression of physical laws is math. In order to do physics, there is no way around a coordinate system. Therefore, the coordinate system becomes significant, and in GR quite non-trivial.
I don't visit the theory development thread for two reasons: 1) I'm not interested in other people's attempts to disprove relativity, and 2) if I actually thought I had a sound theory to develop, I wouldn't post it on the internet to invite someone else to steal it from me and take all the credit.

Mentor Blog Entries: 9
Turin, I am failing to see the point of your posts or your arguments.

Quote by Integral: I do not see the circularity here. Yes, the question has changed, but unless a measurement of c is involved in the experiment to find $\epsilon_0$, it is not circular.
The circularity is: if $\mu_0$ and $\epsilon_0$ are constant, it is because c is constant, which is constant because $\mu_0$ and $\epsilon_0$ are constant, which are constant because...

Recognitions: Homework Help
Quote by Integral: Sorry to bring real-world issues into the argument.
Are you, really, or are you being sarcastic? It seems like you're being sarcastic. Well, shame on you.

Quote by Integral: I feel that there is an issue with the distance changes between cities, if standards mean anything, ...
So do I. It shows that, if the distance between New York and Miami changes by a significant percentage in the course of a day, I'd better run for the west coast. One of the last things I would personally do is blame a standard for changing. Oh ya, and, sarcastically, I'm sorry for bringing the good old U.S. of A. into the argument.

Quote by Integral: ... when I measure a distance in meters it had better read the same today as it did yesterday or next year. That is the whole point of a standard.
That's the whole point of defining that particular distance as a standard distance. Standards have different points depending on the application. But I don't agree that the point of a standard of one unit is to make sure that some arbitrary property of some arbitrary object remains constant.

Quote by Integral: When London is moving at .5c wrt Paris, then I would expect the distance to change. As long as they are stationary, the distance must remain the same.
But they are never stationary, even WRT each other. As far as I know, the distance between London and Paris is not a standard, nor is it even assumed to remain constant, in the physics that I've studied. It is a reasonable approximation, but not a constant standard value.

Quote by Integral: The distance is a physical quantity; it does not change with the units you measure it in.
It does if you first measure it in units of length, and then subsequently measure it in units of time. It completely changes in meaning. If you say that the space-like separation of London and Paris is x km, that implies that they are causally disconnected in that context. But, if you say that Paris is y hrs from London, you are talking about how much time you would experience during the flight or train ride from London to Paris. These two measurements/units have completely different physical meanings, but they both measure the distance between London and Paris.

Mentor Blog Entries: 9
Quote by Jack Martinelli: The circularity is: if $\mu_0$ and $\epsilon_0$ are constant, it is because c is constant, which is constant because $\mu_0$ and $\epsilon_0$ are constant, which are constant because...
The experimental methods used to measure a value for $\epsilon_0$ are significantly different from those used to measure c.
The fact is, with modern technology I believe it is easier to get a precise measurement of c than of $\epsilon_0$, so it may well be that it is now defined in terms of c rather than the other way round. That does not change the fact that $\epsilon_0$ is a basic property of spacetime which is a factor in the propagation of EM waves. Which is the "more" fundamental constant? I personally do not know.

Mentor Blog Entries: 9
Turin, you are now arguing semantics; can we get this thread back on topic?
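As a small aside (not from the thread), the numerical relation between c, $\mu_0$, and $\epsilon_0$ that the posters are debating can be checked directly. The constants below are standard SI values assumed here for illustration:

```python
from math import pi, sqrt

# Assumed SI values, not taken from the thread:
mu0  = 4 * pi * 1e-7        # vacuum permeability, N / A^2 (the classical defined value)
eps0 = 8.8541878128e-12     # vacuum permittivity, F / m (CODATA value)

# Maxwell's equations give c = 1 / sqrt(mu0 * eps0).
c = 1 / sqrt(mu0 * eps0)
print(c)                    # roughly 2.998e8 m/s
```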
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 17, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9625862836837769, "perplexity_flag": "head"}
http://mathoverflow.net/questions/98047/minimum-covering-in-cubic-graphs
## Minimum covering in cubic graphs

Does a cubic graph with $2n$ vertices admit a minimal cover with $n-1$ vertices?

Is it an edge or vertex cover? – hbm May 26 at 16:08
2 And this seems like a statement, not a question... – Igor Rivin May 26 at 16:34
The way it is stated, it's not a real question. Voting to close. Once you formulate it, don't forget to convince us it's not your homework from a course in combinatorics... – Vladimir Dotsenko May 26 at 17:10
2 Consider a nodal curve $C$ with $2n$ components where each component intersects the curve complementary to it in three points. We can interpret each component of the curve $C$ as a vertex and each node as an edge, thus obtaining a graph called the dual graph of the curve $C$. My question is whether I can choose $n-1$ components (vertices) so that each of the other components (vertices) intersects at least one of these $n-1$ components. Is this not a problem about minimal covering in graphs? – Flávio May 26 at 22:49
2 At least you should edit your motivation from the comment above into the main body of the question. – Gjergji Zaimi May 27 at 5:23
show 2 more comments

## 4 Answers

A subset of vertices $S$ in a graph $G$ is called a dominating set if every vertex in $G$ is in $S$ or is connected to a vertex in $S$. The size of the smallest dominating set in a graph is called the domination number of $G$. Even though your question asks about a minimum (vertex/edge) covering, what you seem to be interested in is the domination number of a cubic graph. Not only does the domination number satisfy the bound in your question, but it can be improved further. Bruce Reed showed in "Paths, stars, and the number three", Combin. Probab. Comput. 5 (1996) 277--295, that every cubic graph has its domination number bounded by $3|V|/8$, where $|V|$ is the number of vertices in your graph. The bound is achieved for some graphs on 8 vertices. This bound has been more recently improved on by Kostochka and Stodolsky. I believe the conjectured best bound is $5|V|/14$, but it is not known if infinitely many graphs achieve it.

Pick any vertex. It is adjacent to three other vertices. Now repeatedly pick an edge joining two vertices, call them $x$ and $y$, such that $y$ has not yet been accounted for, and choose $x$. Each such choice adds one more to the vertices accounted for. When you've exhausted the graph, you've picked $n-1$ vertices such that each of the other $n+1$ vertices is adjacent to at least one of the $n-1$.

2 @Gerry, this argument won't work. Think of $K_{3,3}$. – Gjergji Zaimi May 27 at 0:27
@Gjergji, let the vertices of $K_{3,3}$ be $a,b,c,1,2,3$, with letters adjacent to numbers. Pick $a$. It is adjacent to $1,2,3$. Now there's an edge joining $b$ and $1$, and $b$ has not yet been accounted for, so choose $1$. Now we've accounted for everything, so we have picked 2 vertices ($a$ and $1$) such that each of the other 4 vertices is adjacent to at least one of the 2 we picked. It's possible that my argument doesn't work, but I think it works for $K_{3,3}$. – Gerry Myerson May 27 at 5:32
As I understand Gerry's argument, it does work for $K_{3,3}$. I don't know about other cubic graphs.
Gerhard "Looks Like It Works Though" Paseman, 2012.05.26 – Gerhard Paseman May 27 at 5:36 Of course, his argument establishes an upper bound, not an exact number. To get exactly n-1 you have to be carefully clumsy, or perhaps I mean clumsily careful. Gerhard "Still Thinks It Could Work" Paseman, 2012.05.26 – Gerhard Paseman May 27 at 5:41 Also, there is the issue of minimality. Until we have a clear idea of what is wanted, I would say the answer is no, n-1 is not guaranteed for 2n vertices. Gerhard "Waiting For The Fine Print" Paseman, 2012.05.26 – Gerhard Paseman May 27 at 5:52 show 4 more comments Dear Gjergji Zaimi, thank you for your reply but I think there is something wrong with this value $3 | V | /$ 8 because the following cubic graph, http://www.dharwadker.org/independent_set/fig2a.gif, has 6 vertices and a dominating set with three vertices. Is not This? - 1 It also has a dominating set of size two. – Zack Wolske May 27 at 23:03 Looks wrong. The Heawood graph is 3-regular on 14 vertices and its vertex cover is 7. ````sage: g = graphs.HeawoodGraph() sage: g.degree() [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3] sage: g.order() 14 sage: g.vertex_cover() [0, 2, 4, 6, 8, 10, 12] ```` - Notice that for any regular graph, a vertex cover contains at least half of the vertices. – Gjergji Zaimi May 29 at 8:21 ahahahah right ! And necessarily more if it is not bipartite :-) – Nathann Cohen May 30 at 7:00
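The greedy idea in Gerry's answer and the domination bounds in Gjergji's are easy to experiment with. Below is a rough Python sketch using networkx (my choice of library — the thread itself uses Sage): it builds a dominating set greedily, always picking a vertex whose closed neighbourhood covers the most not-yet-dominated vertices, and reports the size against $n-1$ for a few cubic graphs on $2n$ vertices. It only illustrates the discussion; it proves nothing about general cubic graphs.

````
import networkx as nx

def greedy_dominating_set(G):
    undominated = set(G.nodes())
    chosen = []
    while undominated:
        # pick the vertex whose closed neighbourhood covers the most
        # still-undominated vertices (a crude version of Gerry's choice)
        best = max(G.nodes(), key=lambda v: len(({v} | set(G[v])) & undominated))
        chosen.append(best)
        undominated -= set(G[best]) | {best}
    return chosen

for name, G in [("K_4", nx.complete_graph(4)),
                ("K_{3,3}", nx.complete_bipartite_graph(3, 3)),
                ("Petersen", nx.petersen_graph()),
                ("Heawood", nx.heawood_graph())]:
    D = greedy_dominating_set(G)
    n = G.number_of_nodes() // 2   # each graph here has 2n vertices
    print(name, "2n =", G.number_of_nodes(),
          "| greedy dominating set:", len(D), "| n-1 =", n - 1)
````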
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 36, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9371095299720764, "perplexity_flag": "head"}
http://mathoverflow.net/questions/25323/picard-groups-of-moduli-problems
## Picard Groups of Moduli Problems ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) First, yes, I've seen Mumford's paper of this title. I'm actually interested in specific ones, and looking for really the most elementary/elegant proof possible. I'm told that for $g\geq 2$ it is known that the Picard groups of $\mathcal{M}_g$ and $\mathcal{A}_g$ (the moduli spaces of curves of genus $g$ and abelian varieties of dimension $g$) are both isomorphic to $\mathbb{Z}$ (at least, over $\mathbb{C}$). What's the most efficient way to compute this? In fact, for $\mathcal{M}_g$, it's even generated by the Hodge bundle, I'm told. Ideally I want to avoid using stacks (though if stacks give an elegant proof, I'm open to them) and also would like to be able to calculate the degrees of some natural bundles, though I get that that's going to be a bit harder, so I want to focus this question on the computation of the Picard group. - 4 @Charles: you say you'd like to "avoid stacks" if possible, but that also affects the answer (let alone the methods). Just think of $\mathcal{M}_1$: the coarse moduli space is the affine line (trivial Pic), whereas the stack has nontrivial Pic ($\mathbf{Z}/12 \mathbf{Z}$). So specifically what do you mean by $\mathcal{M}_g$ and $\mathcal{A}_g$? – BCnrd May 20 2010 at 5:36 I'm also specifically excluding $\mathcal{M}_1$. Assuming that the statements I've made are true, I'll be satisfied with coarse moduli. I'm thinking of $\mathcal{A}_g$ as the Siegel upper half plane modulo the integral symplectic group, and I'm more than willing to take $\mathcal{M}_g$ to be the GIT quotient of the Hilbert scheme of tricanonically embedded curves modulo the projective linear group. However, I'd been under the (possibly mistaken) impression that the stackiness doesn't matter as much for $g\geq 2$, at least for $\mathcal{M}_g$, as automorphisms are finite. – Charles Siegel May 20 2010 at 5:44 I added some words to the question. Hopefully I guessed your intention correctly. – Kevin Lin May 20 2010 at 12:50 ## 2 Answers I'll just talk about the calculation of $\text{Pic}(\mathcal{M}_g)$ as a group (showing that it is generated by the Hodge bundle is then a calculation). I think the most elementary way to view this problem is to think in terms of orbifolds rather than stacks. Recall that $\mathcal{M}_g$ is the quotient of Teichmüller space $\mathcal{T}_g$ by the mapping class group $\text{Mod}_g$ (this is the curves analogue of $\mathcal{A}_g$ being the quotient of the Siegel upper half plane by the symplectic group). This action is properly discontinuous but not free (that's why we have an orbifold/stack rather than an honest space). A line bundle on $\mathcal{M}_g$ is then a $\text{Mod}_g$-equivariant line bundle on $\mathcal{T}_g$. There is an equivariant first Chern class homomorphism $c_1 : \text{Pic}(\mathcal{M}_g) \rightarrow H^2(\text{Mod}_g;\mathbb{Z})$. Mumford showed that $H^1(\text{Mod}_g;\mathbb{Z})=0$, so $\text{Pic}(\mathcal{M}_g)$ cannot vary continuously. This implies that $c_1$ is injective. Later, Harer proved that $H^2(\text{Mod}_g;\mathbb{Z}) \cong \mathbb{Z}$ for $g$ large. Since the Hodge bundle is nontrivial, $c_1$ cannot be the zero map, so we conclude that $\text{Pic}(\mathcal{M}_g) \cong \mathbb{Z}$. Let me now recommend three places that contain more details about the above point of view.
First, Hain has a survey entitled "Moduli of Riemann Surfaces, Trancendental Aspects", a large portion of which is devoted to the calculation of the Picard group. He gives many more details of the above sketch. He also shows how to show that the Hodge bundle generates the Picard group. Second, in the first couple of sections of my paper "The Picard Group of the Moduli Space of Curves With Level Structures" I give some extra details about things like Chern classes of orbifold line bundles. Finally, Hain has another survey "Lectures on Moduli Spaces of Elliptic Curves" in which he works all the above out for the moduli space of elliptic curves, where things are a little more concrete. - Is it profitable to think of these line bundles in terms of central extensions of $\operatorname{Mod}_g$? – S. Carnahan♦ May 20 2010 at 14:22 That an interesting question. Central extensions by Z of Mod_g correspond to classes in H^2(Mod_g;Z). The calculations of H^2(Mod_g;Z) that I know of first use topology to give upper bounds on how large H^2(Mod_g;Z) is, and then prove that this bound is realized. One can either do this using line bundles on moduli space or by using the so-called "Meyer cocycle". Thus line bundles are more used to study central extensions than the other way around. – Andy Putman May 20 2010 at 14:56 Actually, here's an example of how to use central extensions to prove things about line bundles. Recently Funar proved that the universal central extension of Mod_g is residually finite (a thm of Deligne shows that the universal central extension of Sp_2g(Z) is not residually finite). This has a nice implication for the Picard group of moduli space : by passing to high enough finite covers of moduli space, you can make the Hodge bundle divisible by as much as you like. Deligne's thm implies by passing to finite covers of A_g, the best you can do is make the hodge bundle divisible by 2. – Andy Putman May 20 2010 at 15:07 @Andy, thanks for the detailed answer for the case of curves (and especially the references, going through Hain now). The case for abelian varieties is fairly similar, with the Symplectic group and $\mathbb{H}_g$ replacing Teichmuller space and and the mapping class group? – Charles Siegel May 27 2010 at 4:15 @Charles : That's right. The picture for A_g was known long before the work on M_g and was an important inspiration for that work. – Andy Putman May 27 2010 at 4:28 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. The fact that the Picard group of the moduli variety (not stack) A(g) is of rank 1, is sketched in a footnote of Mumford's paper on the Kodaira dimension of A(g), in LNM 997. This footnote is elaborated (over Z) in a paper of Smith-Varley: in LNM 1124. Another reference is Freitag's paper in Arch. Math. 40 (1983), pp.255-259. The whole point of Mumford's argument was that it follows from Borel's computation of the rank of the second cohomology group of the symplectic group (i.e. rank one). -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 35, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9421471953392029, "perplexity_flag": "head"}
http://mathhelpforum.com/differential-equations/108352-differential-equations-seperation-variables.html
# Thread: 1. ## Differential equations - separation of variables Solve the given differential equation by Separation of variables. (1+x^4)dy + x(1+4y^2)dx = 0, y(1)=0 (3x^2 + 9xy + 5y^2)dx – (6x^2 +4xy)dy = 0, y(2) = -6 (3y^2-x^2 / y^5)dy/dx + x/2y^4 = 0, y(1) = 1 dr/dQ + r sec(Q) = cos(Q) 2. Originally Posted by wyasser Solve the given differential equation by Separation of variables. (1+x^4)dy + x(1+4y^2)dx = 0, y(1)=0 (3x^2 + 9xy + 5y^2)dx – (6x^2 +4xy)dy = 0, y(2) = -6 (3y^2-x^2 / y^5)dy/dx + x/2y^4 = 0, y(1) = 1 dr/dQ + r sec(Q) = cos(Q) It will help people to answer your question if you write out your equations using LaTeX. For the first one: $(1+x^4)dy + x(1+4y^2)dx = 0$ Here divide both sides by $(1+x^4)(1+4y^2)$; this will leave you with: $\int\frac{x}{1+x^4}dx + \int\frac{1}{1+4y^2}dy = 0$ Hint: for the $x$ integral, the substitution $u = x^2$ reduces it to an arctangent (partial fractions also work, but are much messier). The second one is a little harder. $(3x^2 + 9xy + 5y^2)dx - (6x^2 +4xy)dy = 0$ For this one I would suggest using a substitution to simplify the equation a little before you attempt to separate the variables. Let $y = vx$ where $v = v(x)$; if we differentiate this equation with respect to $x$ we get $dy = xdv + vdx$. Substitute in these 2 values and see how you go. When you've done these, try and see what you can manage with the last two. Hope this helps
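For readers who want to see the first problem through to the end, here is a minimal SymPy check (SymPy is my addition; the thread works by hand). The two separated integrals are arctangents, and imposing $y(1)=0$ leads to an explicit solution that the script verifies against the original ODE.

````
import sympy as sp

x, y = sp.symbols('x y', real=True)

# the two separated integrals
print(sp.integrate(x / (1 + x**4), x))      # atan(x**2)/2
print(sp.integrate(1 / (1 + 4*y**2), y))    # atan(2*y)/2

# atan(x**2)/2 + atan(2*y)/2 = C with y(1) = 0 gives C = pi/8, which can be
# untangled to the explicit solution below; check it satisfies the ODE.
ysol = (1 - x**2) / (2 * (1 + x**2))
residual = (1 + x**4) * sp.diff(ysol, x) + x * (1 + 4 * ysol**2)
print(sp.simplify(residual))                # 0
print(ysol.subs(x, 1))                      # 0, i.e. y(1) = 0
````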
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8452770113945007, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/116112?sort=oldest
## Continuity of critical points with respect to a parameterisation. ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Hello. I have a research note coming out soon, and I'm stuck showing that a weird kind of function is continuous. I need to it show a new method of bounding exponential growth factors in combinatorial classes. The function in question is $f: [0,1] \rightarrow (0,\infty)$, which sends a parameter $l \in [0,1]$ to the $z$ value of the unique positive critical point ($P''>0$) of a function $P_{S,l}(z) = \sum_{(i,j) \in S} z^{j\cdot l + i\cdot(1-l)}$, where $S \subset \{ 0,1,-1 \}^2$. For several different sets $S$, I have numerical experiments supporting the claim that this function is continuous. I've considered trying the $L^2$ norm, but I don't get very far before I'm swamped with unmanageable amounts of output. I'm looking for a shortcut that will give continuity, I'm not really concerned with how refined the bounds are. Any help or references are greatly appreciated. Cheers, Sam - 1 I'm missing something. The $\delta_{ij}$ means you are only summing on those $(i,j)\in S$ with $i=j$, in which case the exponent of $z$ is $i$, not depending on $l$. – Pietro Majer Dec 11 at 20:09 Sorry Pietro, I mean that $\delta_{ij} = 1$ if and only if $(i,j) \in S$, on second look it's actually redundant since I'm only summing over the vectors in $S$. Thanks for helping me catch it :) Sam – Sam Dec 11 at 21:17 What is the domain of $P_{S,l}$? (Positive reals, I guess) – Pietro Majer Dec 12 at 19:11 Pietro: Yes, the domain of $P_{S,l}(z)$ is the positive reals. Cheers, Sam – Sam Dec 13 at 17:48 ## 1 Answer If you know that P''>0, then the implicit function theorem should be applicable to give you continuity. - Oooh, I'll try this. Thanks Michael. – Sam Dec 11 at 21:19 This would work if I had $P''>0$ on my domain. Unfortunately, I was hasty in asserting this above. I started with a function, for example, $P(z) = z^a + z^b + z^{-a} + z^{-b}$, where $a,b \in \mathbb{Z}$, which does have $P''>0$. But I wanted to be able to vary a parameter, and vary the exponents in the function above with it. This led to the parameterisation above, and I assumed that convexity would be carried with it, but I'm finding it hard to show. I've made plots for different sets $S$ and they all seem like convex functions. Back to the drawing board. – Sam Dec 17 at 22:49
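For what it's worth, numerical experiments of the kind Sam mentions are easy to reproduce. The sketch below is my own illustration, using NumPy/SciPy and one particular choice of $S$, namely $S=\{(1,0),(1,1),(-1,-1)\}$; for this $S$ the derivative $P'_{S,l}$ has a single positive zero, which the script brackets for a grid of $l$ values and which moves smoothly from $1/\sqrt{2}$ at $l=0$ to $1$ at $l=1$.

````
import numpy as np
from scipy.optimize import brentq

S = [(1, 0), (1, 1), (-1, -1)]          # my sample choice of S

def dP(z, l):
    """d/dz of P_{S,l}(z) = sum over (i,j) in S of z**(j*l + i*(1-l))."""
    return sum((j*l + i*(1-l)) * z**(j*l + i*(1-l) - 1) for i, j in S)

for l in np.linspace(0.0, 1.0, 11):
    # dP < 0 near z = 0 (the z**(-1) term) and dP > 0 at z = 2, so brentq
    # always has a sign change to work with for this S
    zcrit = brentq(dP, 1e-6, 2.0, args=(l,))
    print(f"l = {l:.1f}   z(l) = {zcrit:.6f}")
````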
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9463106989860535, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/12444/feynman-diagrams-in-effective-theories/12446
# Feynman diagrams in effective theories I've been seeing many Feynman diagrams lately that I can't quite interpret yet. I've heard a basic Quantum Field Theory lecture and so to me, a Feynman diagram is simply a mnemonic picture to quickly write down (and remember) all of the possible terms in the perturbation series of the matrix element in scattering. But they seem to be much more than that. Consider for example this diagram of proton-neutron scattering via pion exchange. The picture seems to have an intuitive meaning, but how can it be a valid Feynman diagram? Wouldn't that imply that there is an underlying theory with a proper Lagrangian from which Feynman rules can be derived that would assign a number to this diagram? Question: When describing nucleon-nucleon interactions in this way, do people actually write down a Lagrangian and derive those rules? (I am confused because I've seen diagrams like that in several books and they usually write down the cross sections simply from analogy arguments to other theories.) Any clarification will be greatly appreciated, David - ## 3 Answers For nucleon-nucleon interaction please keep in mind that in this low-energy regime perturbative QCD breaks down and reactions are not really calculable. For the specific pion exchange you mention have a look at http://hyperphysics.phy-astr.gsu.edu/hbase/forces/funfor.html as to why this QCD process can be seen as an exchange of a pion. In general you can get Lagrangians for effective (i.e. low-energy) theories by integrating out the high momentum degrees of freedom. For example the W-boson propagator $\frac{-i(g_{\mu\nu}-\frac{q_\mu q_\nu}{M^2_W})}{q^2-M^2_W}$ in the limit of small $q^2$ becomes $\frac{ig_{\mu\nu}}{M^2_W}$ so instead of the full electro-weak symmetry there would be a new Lagrangian with a four-point interaction term $\frac{G_F}{\sqrt{2}}J_\mu J^\mu$ where $J_\mu$ is a left-handed Dirac current. In more general terms this integrating out of high momentum degrees of freedom is the point of view on renormalization taken by Wilson. In this view our current theories are effective theories that result from an unknown fundamental Lagrangian from which all the high momentum d.o.f. have been integrated out. (cf. renormalization group) - +1 Wilsonian point of view is precisely how one should approach effective theories; or all theories for that matter. [Also, rereading the question, I guess my answer addresses a topic quite orthogonal to what OP wanted...] – Marek Jul 19 '11 at 10:39 Marek, luksen, thank you both for your answers. Do you have any good references for learning about renormalization? (Can you recommend Peskin/Schröder Chapters 10/11/12, which would otherwise be my starting point?) – David M. R. Jul 19 '11 at 13:14 I think Peskin's introduction to renormalization is quite good. I'd definitely recommend it. – luksen Jul 19 '11 at 13:49
Of course, if you are familiar enough with the diagrammatic approach and comfortable with most of QFT and particle physics (so that you can write down Lagrangian of the most common types of interactions and also "see" what interaction vertices it corresponds to) then there is no problem working directly with Feynman diagrams and never mention any Lagrangian at all. Let me make (by now perhaps superfluous) analogy to the differential calculus. You can learn your ${{\rm d} x^n \over {\rm d} x} = n x^{n-1}$ formulas and product and chain rules and comfortably differentiate all kinds of functions. Yet, at some point you should learn that these are just mnemonics and differentiation is defined in terms of limits (which is what you will have to return to anytime you encounter a function for which you have no rule). Still, most of the time the standard rules are all you need. - 1 is there a tractable machinery to start with a system of (simple) feynman diagram terms and mathematically arrive back at the corresponding lagrangian? (for a system where you know a well-defined one exists) I mean, can you essentially say they are mathematically equally powerful descriptions (not considering the intuitive power etc). – Bjorn Wesen Jul 19 '11 at 9:43 1 @Bjorn: Feynman diagrams in particle physics are strictly weaker than the Lagrangian formulation. They represent just S-matrix expansion (i.e. interaction over infinite amount of time between asymptotic states), so the question would be whether S-matrix is all there is to field theory (once people thought so, nowadays this isn't pursued much I think). Also, the series actually does not converge. Also, one needs renormalization that modifies the original Lagrangian and diagrams. There are so many issues that I don't have a slightest clue how to even begin to answer your question :) – Marek Jul 19 '11 at 10:30 its ok :) still, it feels like at least some of those issues have to do with how you enumerate, select and use the individual feynman diagrams rather than discard or change the diagrams themselves.. – Bjorn Wesen Jul 20 '11 at 0:03 – orbifold May 5 '12 at 1:18 The effective Lagrangian upon which this Feynman diagram is based is the Yukawa effective theory. See, for example, the following exposition by: Christof Wetterich. The kinetic terms are given in equations 1.3 and 1.4 and the intercation term (Yukawa interaction) is given in equation 1.5 for a charged pion doublet and by 1.19 for a pion triplet including the neutral Pion. Please notice that the difference between the proton and neutron masses explicitely breaks the isospin invariance. This is an old theory that was used before the standard model was discovered. Although, this is a perturbatively renormalizable theory, there is no profit in performing computation beyond the tree level, because the proton, neutron and pions are not elementary. Also both the Yukawa coupling and the particle masses are external parameters. -
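The low-energy limit quoted in the first answer — the W propagator collapsing to a contact term — is just the leading order of an expansion in $q^2/M_W^2$. Here is a small SymPy sketch of the scalar part (SymPy and the variable names are my additions; the overall $-i$ and the $g_{\mu\nu}$ tensor structure are suppressed):

````
import sympy as sp

q2, MW = sp.symbols('q2 M_W', positive=True)

propagator = 1 / (q2 - MW**2)
print(sp.series(propagator, q2, 0, 3))
# -1/M_W**2 - q2/M_W**4 - q2**2/M_W**6 + O(q2**3)
#
# Keeping only the leading -1/M_W**2 term is what turns W exchange between
# two currents into the pointlike four-fermion (Fermi) vertex, with G_F
# proportional to g**2 / M_W**2.
````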
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9354684948921204, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2007/08/04/group-objects/
# The Unapologetic Mathematician ## Group objects Just like we have monoid objects, we can construct a category called $\mathrm{Th}(\mathbf{Grp})$, which encodes the notion of a “group object”. Groups are a lot like monoids, but we’ll need to be able to do a few more things to describe groups than we needed for monoids. So let’s start with all the same setup as for monoid objects, but let’s make the monoidal structure in our toy category be an actual cartesian structure. That is, we start with an object $G$ and we build up all the “powers”, but now we insist that they be built from categorical products rather than just some arbitrary monoidal structure. Then $G^{\times n}$ is the categorical product of $n$ copies of $G$. Not only does this work like a monoidal structure, but it come equipped with a bunch of extra arrows. For example, there are arrows to “project out” a copy of $G$ from a product, and every object $X$ has a unique arrow $t_X$ from $X$ to the terminal object — the product of zero copies of $G$. More importantly it has an arrow $\Delta:G\rightarrow G\times G$ called the “diagonal” that is defined as the unique arrow satisfying $\pi_1\circ\Delta=1_G=\pi_2\circ\Delta$. That is, it makes an “identical copy” of $G$. For instance, in the context of sets this is the function $\Delta:S\rightarrow S\times S$ defined by $\Delta(s)=(s,s)$. Now we do everything like we did for monoid objects. There’s a morphism $m:G\times G\rightarrow G$, and one $e:G^{\otimes0}\rightarrow G$, and these satisfy the identity and associativity relations. Now we also throw in an arrow $i:G\rightarrow G$ satisfying $m\circ(i\times1_G)\circ\Delta=e\circ t_G=m\circ(1_G\times i)\circ\Delta$. That is, we can start with an “element” of $G$ and split it into two copies. Then we can either copy with $i$ and leave the other alone. Then we can multiply together the copies. Either choice of which one to hit with $i$ will give us the exact same result as if we’d just “forgotten” the original element of $G$ by passing to the terminal object, and then created a copy of the identity element with $e$. Wow, that looks complicated. Well, let’s take a functor from this category to $\mathbf{Set}$ that preserves products. Then what does the equation say in terms of elements of sets? We read off $m(x,i(x))=e=m(i(x),x)$. That is, the product of $x$ and $i(x)$ on either side is just the single element in the image of the arrow described by $e$ — the identity element of the monoid. But this is the condition that $i(x)$ be the inverse of $x$. So we’re just saying that (when we read the condition in sets) every element of our monoid has an inverse, which makes it into a group! Now a group object in any other category $\mathcal{C}$ is a product-preserving functor from $\mathrm{Th}(\mathbf{Grp})$ to $\mathcal{C}$. We can do even better. Since the monoidal structure in $\mathrm{Th}(\mathbf{Grp})$ is cartesian, it comes with a symmetry. The twist $\tau_{A,B}:A\times B\rightarrow B\times A$ is defined as the unique arrow satisfying $\pi_1\circ\tau_{A,B}=\pi_2$ and $\pi_2\circ\tau_{A,B}=\pi_1$. In sets, this means $\tau_{A,B}(a,b)=(b,a)$. Now we can add the relation $m\circ\tau_{G,G}=m$ to our category. In sets this reads that $m(x,y)=m(y,x)$, which says that the multiplication is commutative. The resulting category is $\mathrm{Th}(\mathbf{Ab})$ — the “theory of abelian groups” — and an “abelian group object” in a category $\mathcal{C}$ is a product-preserving functor in $\mathcal{C}^{\mathrm{Th}(\mathbf{Ab})}$. 
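As a concrete companion to the element-wise reading above, here is a tiny Python check (my own, not part of the post) that the group-object identity $m\circ(i\times 1_G)\circ\Delta = e\circ t_G = m\circ(1_G\times i)\circ\Delta$ holds pointwise when the functor lands in $\mathbf{Set}$ and $G$ is, say, $\mathbb{Z}/5$ under addition:

````
N = 5                                   # the group Z/5 under addition mod 5
G = range(N)

m    = lambda x, y: (x + y) % N         # multiplication  m : G x G -> G
e    = lambda _: 0                      # unit            e : 1 -> G
inv  = lambda x: (-x) % N               # inversion       i : G -> G
diag = lambda x: (x, x)                 # diagonal        Delta : G -> G x G
t    = lambda x: "*"                    # the unique map  t : G -> 1

for x in G:
    a, b = diag(x)
    assert m(inv(a), b) == m(a, inv(b)) == e(t(x)) == 0

print("m o (i x 1) o Delta = e o t = m o (1 x i) o Delta holds on Z/5")
````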
### Like this: Posted by John Armstrong | Category theory ## 13 Comments » 1. So, if I ask you now what a group-object looks like in some of the more familiar categories, you’ll just shush me and tell me to wait till tomorrow – or possibly till Thursday. Right? Comment by | August 4, 2007 | Reply 2. No, but the examples I could reasonably give at this point are all but trivial. If you’re up for it, you can show for yourself that a group object in $k$-vector spaces is a Hopf algebra over $k$. It’ll make a nice little project for the trip on your return home. Comment by | August 4, 2007 | Reply 3. Ummmm, it is a vector space with a multiplication, a comultiplication, and the Hopf PROP diagram fulfilled. I’m not complete convinced I see where the inversion appears, but the rest is easy enough. Comment by | August 5, 2007 | Reply 4. Having a multiplication and a (compatible) comultiplication makes a bialgebra. Keep looking. Comment by | August 5, 2007 | Reply 5. Ok. I realized I was missing something. Opened the relevant Wikipedia page. And discovered that the inversion is called “antipode”, and that the formal definition of a Hopf algebra, according to Wikipedia, is that it is a group object in k-Mod. With the right definition, this statement is trivial to the point of almost being ridiculous. Comment by | August 5, 2007 | Reply 6. Be careful. A group object in that category is a Hopf algebra, but not all Hopf algebras are so. What hidden relation does the setup of a group object satisfy that might not be satisfied in a general Hopf algebra? Comment by | August 5, 2007 | Reply 7. [...] sets the stage. It defines a group as a group object in Set, but without the diagonal map. It produces a minimal definition – the left identity and inverse [...] Pingback by | August 5, 2007 | Reply 8. [...] at Weighted Graphs and the Minimum Spanning Tree (by Mark C. Chu-Carroll of Good Math, Bad Math), Group objects (at The Unapologetic Mathematician—one of an ongoing series on Category Theory), and Questions [...] Pingback by | August 8, 2007 | Reply 9. John Armstrong commented: “If you’re up for it, you can show for yourself that a group object in k-vector spaces is a Hopf algebra over k.” Careful! If you’re using cartesian products in k-Vect, then a monoid object (V, m, e) in k-Vect is really nothing more than V itself, i.e., where m: V x V –> V is addition and e: 1 –> V is the zero element. This is a manifestation of the Eckmann-Hilton lemma, which says that a monoid object (M, m, e) in the category of monoids is the same as an abelian monoid (with m the same as the multiplication of the underlying monoid M, and e the identity of the underlying monoid). Obviously, it doesn’t help just to use (k-Vect, \otimes) in place of (k-Vect, x), because you need to use cartesian products to define group objects, and \otimes isn’t the cartesian product! To get something in the ballpark of where you’re aiming, consider instead group objects in the category of *cocommutative coalgebras over k* (i.e., cocommutative comonoids in the symmetric monoidal category k-Vect, with tensor product as monoidal product.) Here, the cartesian product of two cocommutative coalgebras is given by the tensor product \otimes on the underlying vector spaces, and all is well again. (NB: I almost forgot to say “cocommutative”, but without that condition the cartesian product isn’t tensor; it’s something more complicated than that.) 
Modulo this fix, John was right: not all cocommutative Hopf algebras are group objects in the category of cocommutative coalgebras, since the antipode in the definition of Hopf algebra is not required to be a coalgebra map. Comment by Todd Trimble | August 19, 2007 | Reply 10. [...] what I did for group objects can be extended to cover relations as well as functions. [...] Pingback by | August 20, 2007 | Reply 11. Note: as I mention in the new post today, I was wrong in the earlier comments. Todd is right here. Comment by | November 7, 2008 | Reply 12. [...] thing that we should point out: this is not a group object in the category of vector spaces over . A group object needs the diagonal we get from the finite [...] Pingback by | November 7, 2008 | Reply 13. [...] and groups in a compatible way. The fancy way to say it is, of course, that a Lie group is a group object in the category of smooth [...] Pingback by | June 6, 2011 | Reply « Previous | Next » ## About this weblog This is mainly an expository blath, with occasional high-level excursions, humorous observations, rants, and musings. The main-line exposition should be accessible to the “Generally Interested Lay Audience”, as long as you trace the links back towards the basics. Check the sidebar for specific topics (under “Categories”). I’m in the process of tweaking some aspects of the site to make it easier to refer back to older topics, so try to make the best of it for now.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 46, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9295742511749268, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/38643/list
## Return to Answer 1 [made Community Wiki] One observation: People understandably hesitate telling a half-truth. When you teach a heuristic picture to someone, you also need to teach them about how fuzzy it is and when it starts to break down. A more calculational proof has the virtue of being self-contained and robustly transmissible. This is even more important when writing a textbook. Then there are other cases where I'm puzzled why certain heuristic means of understanding and organizing knowledge don't seem to be usually taught. Take the concept of normal subgroups. In one of his books, V.I. Arnold says that a subgroup is normal when it is relativistically invariant, and he doesn't develop that line of thought any deeper. That statement is a good example of a heuristic analogy that is specific in its detail but general in its spirit. However you phrase it, certainly you should give your students the idea that a normal subgroup is something whose structure is invariant with respect to the parent group's symmetries. As a litmus test, your students should be able to tell whether these subgroups are normal at a glance, without calculation: Let $E^2$ be the Euclidean group of the plane and let $O^2$ be the subgroup fixing some point. Subgroups of $E^2$: • Translations along some particular direction. • Translations along every direction. • Translations and glides along every direction. • Reflections in every line. • Rotations around some particular point. • Symmetries of a tessellation. Subgroups of $O^2$: • Symmetries of a regular polygon. • Reflections in a line. The way I think about the non-normal cases is that there is something something non-isotropic about them, some structure that the subgroup preserves that is not preserved by the parent group. For example: • Translations along some particular direction: Rotations don't preserve the direction. • Translations along every direction: No special directions, so it's normal. • Translations and glides along every direction: Ditto. • Reflections in every line: Ditto. (This combines the two previous cases.) • Rotations around some particular point: Translations don't preserve the point.
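A finite cousin of this litmus test can be checked by brute force. The sketch below (my own, using SymPy's permutation groups) looks at the symmetry group of a square: the rotation subgroup singles out no direction and is normal, while the subgroup generated by one diagonal reflection preserves a special diagonal and is not.

````
from sympy.combinatorics import Permutation, PermutationGroup
from sympy.combinatorics.named_groups import DihedralGroup

G = DihedralGroup(4)                     # symmetries of a square, order 8

def is_normal_in(H, G):
    """Check g H g^{-1} = H for every g in G, straight from the definition."""
    Hset = set(H.elements)
    return all({g*h*g**-1 for h in Hset} == Hset for g in G.elements)

rotations  = PermutationGroup(Permutation([1, 2, 3, 0]))   # the 4-cycle (0 1 2 3)
reflection = PermutationGroup(Permutation([2, 1, 0, 3]))   # the diagonal flip (0 2)

print(is_normal_in(rotations, G))    # True  -- no preferred direction
print(is_normal_in(reflection, G))   # False -- the fixed diagonal is special
````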
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9425755143165588, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/11740/list
## Return to Answer 2 talked about the corresponding mapping, as per andinos's comment Edit: andinos clarified to say he wants to know about the implicit mapping of the kernel function. Well I have bad news: It does not exist!! The proof works by showing there exist matrices $A,B$ such that the corresponding kernel matrix is not positive semi-definite. To finish, apply Mercer's theorem. In particular, set $A = \left(\begin{array}{cc}1 & 1 \\ -1 & 1\end{array}\right)$ and $B = A^T = \left(\begin{array}{cc}1 & -1 \\ 1 & 1\end{array}\right)$. Therefore $\textrm{tr}(AB) = \textrm{tr}(AA^T) = 4$, and $\textrm{tr}(BA)$ is identical. On the other hand, $\textrm{tr}(AA) = \textrm{tr}(BB) = 0$. therefore, the kernel matrix $K$ is $\left(\begin{array}{cc}0 & 4 \\ 4 & 0\end{array}\right)$. Set $x = \left(\begin{array}{c} 1 \\ -1\end{array}\right)$, and observe that $x^T K x = -8 < 0$, and therefore $K$ is not PSD, so the kernel $k(A,B) = \textrm{tr}(AB)$ is not PSD. On the other hand! If you had instead defined your kernel to be $k'(A,B) = \textrm{tr}(AB^T)$, notice that $k'(A,B) = \sum_{i,j}A_{ij}B_{ij} = \Phi(A)^T\Phi(B)$ where $\Phi$ simply takes its input matrix and outputs it as a column vector. 1 If $A,B$ are arbitrary $n\times n$ matrices, by definition of trace, $\textrm{tr}(AB) = \sum_{i,j} A_{ij}B_{ji}$. This is $O(n^2)$, but just reading the entries of $A$ is $\Omega(n^2)$. Without any special structure on $A,B$, you probably can't do better. If $A,B$ are (column) vectors, you probably mean the outer product $\textrm{tr}(AB^T) = \sum_i A_i B_i$.
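The counterexample is quick to confirm numerically. A short NumPy sketch (NumPy being my addition to the answer):

````
import numpy as np

A = np.array([[1.0, 1.0], [-1.0, 1.0]])
B = A.T

K = np.array([[np.trace(A @ A), np.trace(A @ B)],
              [np.trace(B @ A), np.trace(B @ B)]])
print(K)                        # [[0. 4.] [4. 0.]]
print(np.linalg.eigvalsh(K))    # [-4.  4.]  -> not positive semi-definite

x = np.array([1.0, -1.0])
print(x @ K @ x)                # -8.0, as in the argument above

# k'(A, B) = tr(A B^T), by contrast, is a genuine inner product: it equals the
# dot product of the flattened matrices, i.e. Phi just vectorises its input.
print(np.trace(A @ B.T), A.ravel() @ B.ravel())   # both 0.0 here
````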
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9329732060432434, "perplexity_flag": "head"}
http://mathoverflow.net/questions/117038?sort=votes
## Automatic continuity of the inverse map ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) All topological spaces considered here are Hausdorff. It is a well-known consequence of the minimality of a compact topology that an injective continuous map $f\colon X\to Y$ where $X$ is compact, must be automatically a homeomorphism onto its range. I am interested in possibly non-compact spaces which share this property. I would like to kindly ask whether there is a characterisation of this class of spaces. - There is an easy case: Let $f$ be proper, meaning the inverse image of every compact subset of $Y$ is compact in $X$. Then $f$ would be a homeomorphism onto its image. – Vahid Shirbisheh Dec 22 at 18:33 $Y$ also should be compact in the above! – Vahid Shirbisheh Dec 22 at 18:47 1 This is a similar question. mathoverflow.net/questions/36085/… – Joseph Van Name Dec 22 at 20:53 ## 1 Answer The spaces that you are looking for are precisely the minimal Hausdorff spaces. i.e. A Hausdorff space $X$ is a minimal Hausdorff if every injective continuous map from $X$ to a Hausdorff space is an embedding. See the Book General Topology by Stephen Willard problems 17M for more information about these spaces. The Minimal Hausdorff spaces are precisely the Hausdorff spaces where every open filter with a unique accumulation point converges. We say that a topological space is semiregular if the regular open sets form a basis for the topology. A Hausdorff space is said to be $H$-closed if it is closed in every Hausdorff extension. Every minimal Hausdorff space is $H$-closed, and a Hausdorff space is a minimal Hausdorff space if and only if it is semiregular and $H$-closed. While there are minimal Hausdorff spaces which are not compact, the notions of minimal Hausdorff and compactness coincide for regular spaces. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8873562812805176, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-statistics/160732-joint-probability-distribution.html
Thread: 1. Joint Probability Distribution Hey, i'm having a little trouble transitioning from single variable probability distributions to bivariate distributions, I'm just wondering if any of you guys have a minute to look over my solution to a question to make sure i'm not going astray on the foundation? Thanks for your help! For the given Joint Probability Mass Function $f(x,y)=\frac{1}{36}(x+y)$ over the nine points with $x=1,2,3$ and $y=1,2,3$, Determine $E(X)$ and $V(X)$ So for $E(X)$ we need to find $f_{X}(x)$. $f_{X}(x)=\int_Y f(x,y) dy = \int_{1}^{3} \frac{1}{36}(x+y) dy = \frac{1}{18}x+\frac{1}{9}$ From here, $E(X)=\Sigma_{x} xf_{X}(x)= (1)(\frac{1}{18}(1)+\frac{1}{9}) + (2)(\frac{1}{18}(2)+\frac{1}{9})+ (3)(\frac{1}{18}(3)+\frac{1}{9}) =\frac{3}{18}+\frac{8}{18}+\frac{15}{18}=1.444$ Now onto the variance; $V(X)=\Sigma_{X} x^2f_{X}(x) - E(x)^2$ $V(X)= (1)^2(\frac{1}{18}(1)+\frac{1}{9})+(2)^2(\frac{1}{18}(2)+\frac{1}{9})+(3)^2(\frac{1}{18}(3)+\frac{1}{9})-1.444^2$ $V(X)=[\frac{3}{18} + \frac{16}{18} + \frac{45}{18}]-2.085 = 3.5555-2.085$ $V(X)=1.471$ Have I gone horribly wrong somewhere? Or does this look like the appropriate approach? Thanks for helping me understand this! Kasper 2. Originally Posted by Kasper $f_{X}(x)=\int_Y f(x,y) dy = \int_{1}^{3} \frac{1}{36}(x+y) dy = \frac{1}{18}x+\frac{1}{9}$ You seem to be confused about discrete and continuous distributions. For a discrete distribution with joint pmf f(x,y), the marginal pmf is found by $f_{X}(x) = \sum_{y} f(x,y)$ and $f_{Y}(y) = \sum_{x} f(x,y)$ Integration is done in the case of continuous distributions. 3. Riiight. Can't just go integrating for discrete cases. So in my case then, $f_{X}(x)=\Sigma_Y f_{XY}(x,y)=\frac{1}{36}(x+1)+\frac{1}{36}(x+2)+\frac{1}{36}(x+3)=\frac{1}{12}x+\frac{1}{6}$? Thanks for the help, good to know I'm on the right track, short of integrating across integers. 4. That looks right. 5. Thanks man, much appreciated
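Following the correction in the thread (sum, don't integrate), the finished numbers are easy to generate exactly. A short Python check with the standard `fractions` module (my addition, not part of the thread):

````
from fractions import Fraction as F

f = {(x, y): F(x + y, 36) for x in (1, 2, 3) for y in (1, 2, 3)}
assert sum(f.values()) == 1                     # it really is a pmf

fX = {x: sum(f[(x, y)] for y in (1, 2, 3)) for x in (1, 2, 3)}
# marginal: fX[x] = (3x + 6)/36 = (x + 2)/12, i.e. 1/4, 1/3, 5/12

EX  = sum(x * fX[x] for x in fX)                # 13/6
EX2 = sum(x**2 * fX[x] for x in fX)             # 16/3
VX  = EX2 - EX**2                               # 23/36

print(EX, VX)   # 13/6 23/36   (roughly 2.167 and 0.639)
````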
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 17, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9137473106384277, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/bose-einstein-condensate+fermions
# Tagged Questions 2answers 353 views ### Can bosons that are composed of several fermions occupy the same state? It is generally assumed that there is no limit on how many bosons are allowed to occupy the same quantum mechanical state. However, almost every boson encountered in every-day physics is not a ... 0answers 86 views ### Fock picture of bosonification in condensates I want to understand how bosonification in a condensate must be interpreted in the Fock states picture Say i have uncoupled fermions in a set of states $E_1$, $E_2$ ... over the vacuum $E_0$. They ... 1answer 423 views ### BCS theory, Richardson model and Superconductivity I'm studying Richardson Model in second quantization. There are many initial points that I don't understand: We supposed that an attractive force between 2 electrons exists, due to electron-phonon ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9376829266548157, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-algebra/195051-isometries-r2-r3-specific-example.html
Thread: 1. Isometries of R2 and R3 - Specific example I am reading Papantonopoulou: Algebra Ch 14 Symmetries. I am seeking to fully understand Theorem 14.21 (see attached Papantonopoulou pp 462 -463) On page 462, Papantonopoulou defines translations, rotations and reflections for R2 and R3 (see attached). Note that the rotations are defined as about the origin and the reflections are about the X-axis or $e_1$ axis. Then on Page 463 he states Theorem 14.21 as follows: 14.21 Theorem An isometry S of $\mathbb{R}^2$ or $\mathbb{R}^3$ can be uniquely expressed as $S = t_b \circ \rho_{\theta} \circ r^i$ where i = 0 or 1 I would like to use Theorem 14.21 to specify S for the isometry of $\mathbb{R}^2$ that maps the line y = x to the line y = 1 - 2x? [ ie what is $r^i$ , $\rho_{\theta}$ , $t_b$ in this case?] Peter Attached Files • P pp 462.pdf (537.3 KB, 12 views) • P pp 463.pdf (732.4 KB, 8 views) 2. Re: Isometries of R2 and R3 - Specific example there is more than one isometry that does this. we get two seperate rotations that take y = x to y = -2x (one through some angle θ, and one through π+θ). on the other hand, if we let i = 1, this takes the line y = x, to the line y = -x (well, actually, the line -y = x, but so what?), and we have two different rotations that map y = -x to y = -2x. after each 4 of these elements of O(2), we can translate using (0,1) to get the desired affine isometry. explicitly, here is one such mapping: let i = 0. this is the "identity" reflection (not reflecting at all). note that: $r^i = \begin{bmatrix}1&0\\0&(-1)^i\end{bmatrix}$ the next step is to compute θ. let's rotate counter-clockwise. the angle we want is: $\theta = \frac{3\pi}{4} - \text{arctan}(2)$ then we have: $\rho_{\theta} = \begin{bmatrix}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{bmatrix}$ if we want to obtain a specific matrix for this rotation, we can rotate by the two angles (3π/4 and -arctan(2)) separately: $\rho_{3\pi/4} = \begin{bmatrix}\frac{-\sqrt{2}}{2}&\frac{-\sqrt{2}}{2}\\ \frac{\sqrt{2}}{2}&\frac{-\sqrt{2}}{2}\end{bmatrix}$ while: $\rho_{-\text{arctan}(2)} = \begin{bmatrix}\frac{1}{\sqrt{5}}&\frac{2}{\sqrt{5 }}\\ \frac{-2}{\sqrt{5}}&\frac{1}{\sqrt{5}}\end{bmatrix}$ so that: $\rho_{\theta} = \rho_{3\pi/4} \circ \rho_{-\text{arctan}(2)} = \begin{bmatrix}\frac{\sqrt{10}}{10}&\frac{-3\sqrt{10}}{10}\\ \frac{3\sqrt{10}}{10}&\frac{\sqrt{10}}{10} \end{bmatrix}$ (if i did my arithmetic and geometry right, i dislike calculation like this ) finally, we translate by (0,1), giving the transformation: $S:\begin{bmatrix}x\\y\end{bmatrix} \to \left(\begin{bmatrix}\frac{\sqrt{10}}{10}&\frac{-3\sqrt{10}}{10}\\ \frac{3\sqrt{10}}{10}&\frac{\sqrt{10}}{10} \end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix} + \begin{bmatrix}0\\1\end{bmatrix}\right)$ so let's see if we map points on y = x to points on y = 1 - 2x, and preserve the distance. for our 2 test points, i'll pick (0,0) and (4,4). clearly S takes (0,0) to (0,1) and 1 = 1 - 2(0), so that is on our target line. a brief (for you, maybe) calculation shows that S takes (4,4) to the point (-4√10/5, (8√10 + 5)/5), and again: (8√10 + 5)/5 = 1 - 2(-4√10/5), so that, too, is on our line. now the distance between (0,0) and (4,4) is 4√2. let's see what the distance between our 2 image points are: $\sqrt{\left(\frac{8\sqrt{10}+5}{5} - 1\right)^2 + \left(\frac{-4\sqrt{10}}{5}\right)^2}$ $= \sqrt{\frac{640}{25} + \frac{160}{25}} = \sqrt{\frac{800}{25}} = \sqrt{32} = 4\sqrt{2}$ i'm convinced, how about you? 3. 
Re: Isometries of R2 and R3 - Specific example Hi Deveno Yes, worked through all the example you provided CONVINCED!! Thank you for your guidance and help! It has enabled me to pursue symmetries and symmetry groups further! Peter
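Since Deveno admits to disliking this kind of hand computation, here is a short NumPy verification (NumPy and the test points are my additions) of the map $S = t_b \circ \rho_{\theta}$ with $i = 0$, built exactly as in post 2:

````
import numpy as np

theta = 3*np.pi/4 - np.arctan(2)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])     # rho_theta
b = np.array([0.0, 1.0])                            # translation t_b

S = lambda p: R @ p + b

# points on y = x should land on y = 1 - 2x
for p in [np.array([0.0, 0.0]), np.array([4.0, 4.0]), np.array([-1.0, -1.0])]:
    q = S(p)
    print(q, "on y = 1 - 2x:", np.isclose(q[1], 1 - 2*q[0]))

# and distances are preserved
p1, p2 = np.array([0.0, 0.0]), np.array([4.0, 4.0])
print(np.linalg.norm(p2 - p1), np.linalg.norm(S(p2) - S(p1)))   # both 4*sqrt(2)
````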
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 17, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8996009826660156, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/55016/list
## Return to Answer 4 edited body The Laplace operator is to Analysis and PDEs is (almost) like what a sum of squares is to Linear Algebra and Statistics. • The Laplacian is the simplest differential quadratic form corresponding via the Fourier transform to the square of the Euclidean distance. This may explain in part its fundamental role in the Harmonic Analysis on Euclidean spaces. • The Laplacian (or, more precisely, $\frac{1}{2}\Delta$) is the infinitesimal generator of a Brownian motion on $\mathbb R^n$, which is the simplest and most ubiquitous of the continuous-time stochastic processes. • The Laplace-Beltrami operator on a Riemannian manifold is conformal invariant. The connection between harmonic functions, complex analysis and probability is the most tight in dimension 2 (Levy's theorem, Schramm-Loewner evolution, ...). • The Laplace operator is the trace of the coefficient of the quadratic term in a local Taylor expansion of a function (the Hessian matrix). This implies that it will pop up (together with the determinant of the Hessian matrix) in many problems related to optimization. 3 added 8 characters in body The Laplace operator is to Analysis and PDEs is (almost) what like a sum of squares is to Linear Algebra and Statistics. • The Laplacian is the simplest differential quadratic form corresponding via the Fourier transform to the square of the Euclidean distance. This may explain in part its fundamental role in the Harmonic Analysis on Euclidean spaces. • The Laplacian (or, more precisely, $\frac{1}{2}\Delta$) is the infinitesimal generator of a Brownian motion on $\mathbb R^n$, which is the simplest and most ubiquitous of the continuous-time stochastic processes. • The Laplace-Beltrami operator on a Riemannian manifold is conformal invariant. The connection between harmonic functions, complex analysis and probability is the most tight in dimension 2 (Levy's theorem, Schramm-Loewner evolution, ...). • The Laplace operator is the trace of the coefficient of the quadratic term in a local Taylor expansion of a function (the Hessian matrix). This implies that it will pop up (together with the determinant of the Hessian matrix) in many problems related to optimization. 2 added 317 characters in body The Laplace operator to Analysis and PDEs is (almost) what sum of squares to Linear Algebra and Statistics. • The Laplacian is the simplest differential quadratic form corresponding via the Fourier transform to the square of the Euclidean distance. This may explain in part its fundamental role in the Harmonic Analysis on Euclidean spaces. • The Laplace-Beltrami operator on a connected Riemannian manifold is conformal invariant. • The Laplacian (or, more precisely, $\frac{1}{2}\Delta$) is the infinitesimal generator of a Brownian motion on $\mathbb R^n$, which is the simplest and most ubiquitous of the continuous-time stochastic processes. • The Laplace-Beltrami operator on a Riemannian manifold is conformal invariant. The connection between harmonic functions, complex analysis and probability is the most tight in dimension 2 (Levy's theorem, Schramm-Loewner evolution, ...). • The Laplace operator is the trace of the coefficient of the quadratic term in a local Taylor expansion of a function (the Hessian matrix). This implies that it will pop up (together with the determinant of the Hessian matrix) in many problems related to optimization. 1 • The Laplacian is the simplest differential quadratic form corresponding via the Fourier transform to the square of the Euclidean distance. 
This may explain in part its fundamental role in the Harmonic Analysis on Euclidean spaces. • The Laplace-Beltrami operator on a connected Riemannian manifold is conformal invariant. • The Laplacian (or, more precisely, $\frac{1}{2}\Delta$) is the infinitesimal generator of a Brownian motion on $\mathbb R^n$, which is the simplest and most ubiquitous of the continuous-time stochastic processes. • The Laplace operator is the trace of the coefficient of the quadratic term in a local Taylor expansion of a function (the Hessian matrix). This implies that it will pop up in many problems related to optimization.
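The trace-of-Hessian bullet, and the dimension-2 link to complex analysis mentioned in the other bullet, can both be illustrated with a couple of lines of SymPy (my addition; the sample functions are arbitrary):

````
import sympy as sp

x, y = sp.symbols('x y', real=True)

def laplacian(f, vars_):
    return sum(sp.diff(f, v, 2) for v in vars_)

# the Laplacian is the trace of the Hessian matrix
f = sp.exp(x*y) + x**3
print(sp.simplify(laplacian(f, (x, y)) - sp.hessian(f, (x, y)).trace()))   # 0

# real parts of holomorphic functions are harmonic: Re((x + i*y)**3) = x**3 - 3*x*y**2
print(sp.simplify(laplacian(x**3 - 3*x*y**2, (x, y))))                     # 0
````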
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8663983941078186, "perplexity_flag": "head"}
http://en.wikipedia.org/wiki/Peano_arithmetic
# Peano axioms (Redirected from Peano arithmetic) In mathematical logic, the Peano axioms, also known as the Dedekind–Peano axioms or the Peano postulates, are a set of axioms for the natural numbers presented by the 19th century Italian mathematician Giuseppe Peano. These axioms have been used nearly unchanged in a number of metamathematical investigations, including research into fundamental questions of consistency and completeness of number theory. The need for formalism in arithmetic was not well appreciated until the work of Hermann Grassmann, who showed in the 1860s that many facts in arithmetic could be derived from more basic facts about the successor operation and induction.[1] In 1881, Charles Sanders Peirce provided an axiomatization of natural-number arithmetic.[2] In 1888, Richard Dedekind proposed a collection of axioms about the numbers, and in 1889 Peano published a more precisely formulated version of them as a collection of axioms in his book, The principles of arithmetic presented by a new method (Latin: Arithmetices principia, nova methodo exposita). The Peano axioms contain three types of statements. The first axiom asserts the existence of at least one member of the set "number". The next four are general statements about equality; in modern treatments these are often considered axioms of the "underlying logic".[3] The next three axioms are first-order statements about natural numbers expressing the fundamental properties of the successor operation. The ninth, final axiom is a second order statement of the principle of mathematical induction over the natural numbers. A weaker first-order system called Peano arithmetic is obtained by explicitly adding the addition and multiplication operation symbols and replacing the second-order induction axiom with a first-order axiom schema. ## The axioms When Peano formulated his axioms, the language of mathematical logic was in its infancy. The system of logical notation he created to present the axioms did not prove to be popular, although it was the genesis of the modern notation for set membership (∈, which is from Peano's ε) and implication (⊃, which is from Peano's reversed 'C'.) Peano maintained a clear distinction between mathematical and logical symbols, which was not yet common in mathematics; such a separation had first been introduced in the Begriffsschrift by Gottlob Frege, published in 1879.[4] Peano was unaware of Frege's work and independently recreated his logical apparatus based on the work of Boole and Schröder.[5] The Peano axioms define the arithmetical properties of natural numbers, usually represented as a set N or $\mathbb{N}.$ The signature (a formal language's non-logical symbols) for the axioms includes a constant symbol 0 and a unary function symbol S. The constant 0 is assumed to be a natural number: 1. 0 is a natural number. The next four axioms describe the equality relation. 1. For every natural number x, x = x. That is, equality is reflexive. 2. For all natural numbers x and y, if x = y, then y = x. That is, equality is symmetric. 3. For all natural numbers x, y and z, if x = y and y = z, then x = z. That is, equality is transitive. 4. For all a and b, if a is a natural number and a = b, then b is also a natural number. That is, the natural numbers are closed under equality. The remaining axioms define the arithmetical properties of the natural numbers. The naturals are assumed to be closed under a single-valued "successor" function S. 1. For every natural number n, S(n) is a natural number. 
Peano's original formulation of the axioms used 1 instead of 0 as the "first" natural number. This choice is arbitrary, as axiom 1 does not endow the constant 0 with any additional properties. However, because 0 is the additive identity in arithmetic, most modern formulations of the Peano axioms start from 0. Axioms 1 and 6 define a unary representation of the natural numbers: the number 1 can be defined as S(0), 2 as S(S(0)) (which is also S(1)), and, in general, any natural number n as Sn(0). The next two axioms define the properties of this representation. 1. For every natural number n, S(n) = 0 is false. That is, there is no natural number whose successor is 0. 2. For all natural numbers m and n, if S(m) = S(n), then m = n. That is, S is an injection. Axioms 1, 6, 7 and 8 imply that the set of natural numbers contains the distinct elements 0, S(0), S(S(0)), and furthermore that {0, S(0), S(S(0)), …} ⊆ N. This shows that the set of natural numbers is infinite. However, to show that N = {0, S(0), S(S(0)), …}, it must be shown that N ⊆ {0, S(0), S(S(0)), …}; i.e., it must be shown that every natural number is included in {0, S(0), S(S(0)), …}. To do this however requires an additional axiom, which is sometimes called the axiom of induction. This axiom provides a method for reasoning about the set of all natural numbers. 1. If K is a set such that: • 0 is in K, and • for every natural number n, if n is in K, then S(n) is in K, then K contains every natural number. The induction axiom is sometimes stated in the following form: 1. If φ is a unary predicate such that: • φ(0) is true, and • for every natural number n, if φ(n) is true, then φ(S(n)) is true, then φ(n) is true for every natural number n. In Peano's original formulation, the induction axiom is a second-order axiom. It is now common to replace this second-order principle with a weaker first-order induction scheme. There are important differences between the second-order and first-order formulations, as discussed in the section Models below. ## Arithmetic The Peano axioms can be augmented with the operations of addition and multiplication and the usual total (linear) ordering on N. The respective functions and relations are constructed in second-order logic, and are shown to be unique using the Peano axioms. ### Addition Addition is the function + : N × N → N (written in the usual infix notation, mapping two elements of N to another element of N), defined recursively as: $\begin{align} a + 0 &= a ,\\ a + S (b) &= S (a + b). \end{align}$ For example, a + 1 = a + S(0) = S(a + 0) = S(a). The structure (N, +) is a commutative semigroup with identity element 0. (N, +) is also a cancellative magma, and thus embeddable in a group. The smallest group embedding N is the integers. ### Multiplication Given addition, multiplication is the function · : N × N → N defined recursively as: $\begin{align} a \cdot 0 &= 0, \\ a \cdot S (b) &= a + (a \cdot b). \end{align}$ It is easy to see that setting b equal to 0 yields the multiplicative identity: a · 1 = a · S(0) = a + (a · 0) = a + 0 = a Moreover, multiplication distributes over addition: a · (b + c) = (a · b) + (a · c). Thus, (N, +, 0, ·, 1) is a commutative semiring. ### Inequalities The usual total order relation ≤ : N × N can be defined as follows, assuming 0 is a natural number: For all a, b ∈ N, a ≤ b if and only if there exists some c ∈ N such that a + c = b. This relation is stable under addition and multiplication: for $a, b, c \in N$, if a ≤ b, then: • a + c ≤ b + c, and • a · c ≤ b · c. 
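The recursive equations for + and · above can be transcribed almost verbatim into executable form. The following Python sketch (the names `Z`, `S`, `add`, `mul` and `to_int` are invented here, not taken from any reference implementation) represents each natural number in unary as nested applications of a successor constructor and computes addition and multiplication exactly by the two recursion equations:

```python
# Unary (Peano) naturals: Z() represents 0, S(n) represents the successor of n.
class Z:
    pass

class S:
    def __init__(self, pred):
        self.pred = pred

def add(a, b):
    # a + 0 = a ;  a + S(b) = S(a + b)
    if isinstance(b, Z):
        return a
    return S(add(a, b.pred))

def mul(a, b):
    # a * 0 = 0 ;  a * S(b) = a + (a * b)
    if isinstance(b, Z):
        return Z()
    return add(a, mul(a, b.pred))

def to_int(n):
    # Interpret S^k(Z) as the ordinary integer k, for checking.
    count = 0
    while isinstance(n, S):
        count += 1
        n = n.pred
    return count

two = S(S(Z()))
three = S(two)
assert to_int(add(two, three)) == 5
assert to_int(mul(two, three)) == 6
```

The two assertions check, for small inputs, that the definitions agree with ordinary arithmetic.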
Together with the order just defined, the structure (N, +, ·, 1, 0, ≤) is thus an ordered semiring; because there is no natural number between 0 and 1, it is a discrete ordered semiring. The axiom of induction is sometimes stated in the following strong form, making use of the ≤ order: For any predicate φ, if • φ(0) is true, and • for every n, k ∈ N, if k ≤ n implies φ(k) is true, then φ(S(n)) is true, then for every n ∈ N, φ(n) is true. This form of the induction axiom is a simple consequence of the standard formulation, but is often better suited for reasoning about the ≤ order. For example, to show that the naturals are well-ordered—every nonempty subset of N has a least element—one can reason as follows. Let a nonempty X ⊆ N be given and assume X has no least element. • Because 0 is the least element of N, it must be that 0 ∉ X. • For any n ∈ N, suppose for every k ≤ n, k ∉ X. Then S(n) ∉ X, for otherwise it would be the least element of X. Thus, by the strong induction principle, for every n ∈ N, n ∉ X. Thus, X ∩ N = ∅, which contradicts X being a nonempty subset of N. Thus X has a least element. ## First-order theory of arithmetic First-order theories are often better suited than second-order theories for model-theoretic or proof-theoretic analysis. All of the Peano axioms except the ninth axiom (the induction axiom) are statements in first-order logic. The arithmetical operations of addition and multiplication and the order relation can also be defined using first-order axioms. The second-order axiom of induction can be transformed into a weaker first-order induction schema. First-order axiomatizations of Peano arithmetic have an important limitation, however. In second-order logic, it is possible to define the addition and multiplication operations from the successor operation, but this cannot be done in the more restrictive setting of first-order logic. Therefore, the addition and multiplication operations are directly included in the signature of Peano arithmetic, and axioms are included that relate the three operations to each other. The following list of axioms (along with the usual axioms of equality) is sufficient for this purpose:[6] • $0 \not = S(x_1)$ • $S(x_1) = S(x_2) \Rightarrow x_1 = x_2 \,$ • $x_1 + 0 = x_1 \,$ • $x_1 + S(x_2) = S(x_1 + x_2)\,$ • $x_1 \cdot 0 = 0$ • $x_1 \cdot S(x_2) = x_1\cdot x_2 + x_1$ In addition to this list of numerical axioms, Peano arithmetic contains the induction schema, which consists of a countably infinite set of axioms. For each formula φ(x,y1,...,yk) in the language of Peano arithmetic, the first-order induction axiom for φ is the sentence $\forall \bar{y} (\phi(0,\bar{y}) \land \forall x ( \phi(x,\bar{y})\Rightarrow\phi(S(x),\bar{y})) \Rightarrow \forall x \phi(x,\bar{y}))$ where $\bar{y}$ is an abbreviation for y1,...,yk. The first-order induction schema includes every instance of the first-order induction axiom, that is, it includes the induction axiom for every formula φ. This schema avoids quantification over sets of natural numbers, which is impossible in first-order logic. For instance, it is not possible in first-order logic to say that any set of natural numbers containing 0 and closed under successor is the entire set of natural numbers. What can be expressed is that any definable set of natural numbers has this property. Because it is not possible to quantify over definable subsets explicitly with a single axiom, the induction schema includes one instance of the induction axiom for every definition of a subset of the naturals.
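For instance, taking φ(x) to be the formula 0 + x = x gives the schema instance $\bigl(0 + 0 = 0 \;\land\; \forall x\,(0 + x = x \Rightarrow 0 + S(x) = S(x))\bigr) \Rightarrow \forall x\,(0 + x = x)$, whose two hypotheses follow from the axioms $x_1 + 0 = x_1$ and $x_1 + S(x_2) = S(x_1 + x_2)$ listed above; Peano arithmetic therefore proves $\forall x\,(0 + x = x)$, a statement that is not itself among the axioms.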
### Equivalent axiomatizations There are many different, but equivalent, axiomatizations of Peano arithmetic. While some axiomatizations, such as the one just described, use a signature that only has symbols for 0 and the successor, addition, and multiplication operations, other axiomatizations use the language of ordered semirings, including an additional order relation symbol. One such axiomatization begins with the following axioms that describe a discrete ordered semiring.[7] 1. $\forall x, y, z \in N$. $(x + y) + z = x + (y + z)$, i.e., addition is associative. 2. $\forall x, y \in N$. $x + y = y + x$, i.e., addition is commutative. 3. $\forall x, y, z \in N$. $(x \cdot y) \cdot z = x \cdot (y \cdot z)$, i.e., multiplication is associative. 4. $\forall x, y \in N$. $x \cdot y = y \cdot x$, i.e., multiplication is commutative. 5. $\forall x, y, z \in N$. $x \cdot (y + z) = (x \cdot y) + (x \cdot z)$, i.e., the distributive law. 6. $\forall x \in N$. $x + 0 = x \land x \cdot 0 = 0$, i.e., zero is the identity element for addition, and multiplication by zero yields zero. 7. $\forall x \in N$. $x \cdot 1 = x$, i.e., one is the identity element for multiplication. 8. $\forall x, y, z \in N$. $x < y \land y < z \supset x < z$, i.e., the '<' operator is transitive. 9. $\forall x \in N$. $\neg (x < x)$, i.e., the '<' operator is irreflexive. 10. $\forall x, y \in N$. $x < y \lor x = y \lor y < x$. 11. $\forall x, y, z \in N$. $x < y \supset x + z < y + z$. 12. $\forall x, y, z \in N$. $0 < z \land x < y \supset x \cdot z < y \cdot z$. 13. $\forall x, y \in N$. $x < y \supset \exists z \in N$. $x + z = y$. 14. $0 < 1 \land \forall x \in N$. $x > 0 \supset x \geq 1$. 15. $\forall x \in N$. $x \geq 0$. The theory defined by these axioms is known as PA–; PA is obtained by adding the first-order induction schema. An important property of PA– is that any structure M satisfying this theory has an initial segment (ordered by ≤) isomorphic to N. Elements of M \ N are known as nonstandard elements. ## Models A model of the Peano axioms is a triple (N, 0, S), where N is a (necessarily infinite) set, 0 ∈ N and S : N → N satisfies the axioms above. Dedekind proved in his 1888 book, What are numbers and what should they be (German: Was sind und was sollen die Zahlen), that any two models of the Peano axioms (including the second-order induction axiom) are isomorphic. In particular, given two models (NA, 0A, SA) and (NB, 0B, SB) of the Peano axioms, there is a unique homomorphism f : NA → NB satisfying $\begin{align} f(0_A) &= 0_B \\ f(S_A (n)) &= S_B (f (n)) \end{align}$ and it is a bijection. The second-order Peano axioms are thus categorical; this is not the case with any first-order reformulation of the Peano axioms, however. ### Nonstandard models Although the usual natural numbers satisfy the axioms of PA, there are other non-standard models as well; the compactness theorem implies that the existence of nonstandard elements cannot be excluded in first-order logic. The upward Löwenheim–Skolem theorem shows that there are nonstandard models of PA of all infinite cardinalities. This is not the case for the original (second-order) Peano axioms, which have only one model, up to isomorphism. This illustrates one way the first-order system PA is weaker than the second-order Peano axioms.
When interpreted as a proof within a first-order set theory, such as ZFC, Dedekind's categoricity proof for PA shows that each model of set theory has a unique model of the Peano axioms, up to isomorphism, that embeds as an initial segment of all other models of PA contained within that model of set theory. In the standard model of set theory, this smallest model of PA is the standard model of PA; however, in a nonstandard model of set theory, it may be a nonstandard model of PA. This situation cannot be avoided with any first-order formalization of set theory. It is natural to ask whether a countable nonstandard model can be explicitly constructed. Tennenbaum's theorem, proved in 1959, shows that there is no countable nonstandard model of PA in which either the addition or multiplication operation is computable.[8] This result shows it is difficult to be completely explicit in describing the addition and multiplication operations of a countable nonstandard model of PA. However, there is only one possible order type of a countable nonstandard model. Letting ω be the order type of the natural numbers, ζ be the order type of the integers, and η be the order type of the rationals, the order type of any countable nonstandard model of PA is ω + ζ·η, which can be visualized as a copy of the natural numbers followed by a dense linear ordering of copies of the integers. ### Set-theoretic models Main article: Set-theoretic definition of natural numbers The Peano axioms can be derived from set theoretic constructions of the natural numbers and axioms of set theory such as the ZF.[9] The standard construction of the naturals, due to John von Neumann, starts from a definition of 0 as the empty set, ∅, and an operator s on sets defined as: s(a) = a ∪ { a }. The set of natural numbers N is defined as the intersection of all sets closed under s that contain the empty set. Each natural number is equal (as a set) to the set of natural numbers less than it: $\begin{align} 0 &= \emptyset \\ 1 &= s(0) = s(\emptyset) = \emptyset \cup \{ \emptyset \} = \{ \emptyset \} = \{ 0 \} \\ 2 &= s(1) = s(\{ 0 \}) = \{ 0 \} \cup \{ \{ 0 \} \} = \{ 0 , \{ 0 \} \} = \{ 0, 1 \} \\ 3 &= ... = \{ 0, 1, 2 \} \end{align}$ and so on. The set N together with 0 and the successor function s : N → N satisfies the Peano axioms. Peano arithmetic is equiconsistent with several weak systems of set theory.[10] One such system is ZFC with the axiom of infinity replaced by its negation. Another such system consists of general set theory (extensionality, existence of the empty set, and the axiom of adjunction), augmented by an axiom schema stating that a property that holds for the empty set and holds of an adjunction whenever it holds of the adjunct must hold for all sets. ### Interpretation in category theory The Peano axioms can also be understood using category theory. Let C be a category with terminal object 1C, and define the category of pointed unary systems, US1(C) as follows: • The objects of US1(C) are triples (X, 0X, SX) where X is an object of C, and 0X : 1C → X and SX : X → X are C-morphisms. • A morphism φ : (X, 0X, SX) → (Y, 0Y, SY) is a C-morphism φ : X → Y with φ 0X = 0Y and φ SX = SY φ. Then C is said to satisfy the Dedekind–Peano axioms if US1(C) has an initial object; this initial object is known as a natural number object in C. If (N, 0, S) is this initial object, and (X, 0X, SX) is any other object, then the unique map u : (N, 0, S) → (X, 0X, SX) is such that $\begin{align} u 0 &= 0_X, \\ u (S x) &= S_X (u x). 
\end{align}$ This is precisely the recursive definition of 0X and SX. ## Consistency Main article: Hilbert's second problem When the Peano axioms were first proposed, Bertrand Russell and others agreed that these axioms implicitly defined what we mean by a "natural number". Henri Poincaré was more cautious, saying they only defined natural numbers if they were consistent; if there is a proof that starts from just these axioms and derives a contradiction such as 0 = 1, then the axioms are inconsistent, and don't define anything. In 1900, David Hilbert posed the problem of proving their consistency using only finitistic methods as the second of his twenty-three problems.[11] In 1931, Kurt Gödel proved his second incompleteness theorem, which shows that such a consistency proof cannot be formalized within Peano arithmetic itself.[12] Although it is widely claimed that Gödel's theorem rules out the possibility of a finitistic consistency proof for Peano arithmetic, this depends on exactly what one means by a finitistic proof. Gödel himself pointed out the possibility of giving a finitistic consistency proof of Peano arithmetic or stronger systems by using finitistic methods that are not formalizable in Peano arithmetic, and in 1958 Gödel published a method for proving the consistency of arithmetic using type theory.[13] In 1936, Gerhard Gentzen gave a proof of the consistency of Peano's axioms, using transfinite induction up to an ordinal called ε0.[14] Gentzen explained: "The aim of the present paper is to prove the consistency of elementary number theory or, rather, to reduce the question of consistency to certain fundamental principles". Gentzen's proof is arguably finitistic, since the transfinite ordinal ε0 can be encoded in terms of finite objects (for example, as a Turing machine describing a suitable order on the integers, or more abstractly as consisting of the finite trees, suitably linearly ordered). Whether or not Gentzen's proof meets the requirements Hilbert envisioned is unclear: there is no generally accepted definition of exactly what is meant by a finitistic proof, and Hilbert himself never gave a precise definition. The vast majority of contemporary mathematicians believe that Peano's axioms are consistent, relying either on intuition or the acceptance of a consistency proof such as Gentzen's proof. The small number of mathematicians who advocate ultrafinitism reject Peano's axioms because the axioms require an infinite set of natural numbers. ## Footnotes 1. Grassmann 1861 2. Peirce 1881; also see Shields 1997 3. van Heijenoort 1967:94 4. Van Heijenoort 1967, p. 2 5. Van Heijenoort 1967, p. 83 6. Mendelson 1997:155 7. Kaye 1991, pp. 16–18 8. Kaye 1991, sec. 11.3 9. Suppes 1960; Hatcher 1982 10. Tarski & Givant 1987, sec. 7.6 11. Hilbert 1900 12. Godel 1931 13. Godel 1958 14. Gentzen 1936 ## References • Martin Davis, 1974. Computability. Notes by Barry Jacobs. Courant Institute of Mathematical Sciences, New York University. • Richard Dedekind, 1888. Was sind und was sollen die Zahlen? (What are and what should the numbers be?). Braunschweig. Two English translations: • 1963 (1901). Essays on the Theory of Numbers. Beman, W. W., ed. and trans. Dover. • 1996. In From Kant to Hilbert: A Source Book in the Foundations of Mathematics, 2 vols, Ewald, William B., ed. Oxford University Press: 787–832. • Gentzen, G., 1936, Die Widerspruchsfreiheit der reinen Zahlentheorie. Mathematische Annalen 112: 132–213. Reprinted in English translation in his 1969 Collected works, M. E. 
Szabo, ed. Amsterdam: North-Holland. • K. Gödel,1931, Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme, I. Monatshefte für Mathematik und Physik 38: 173–98. See On Formally Undecidable Propositions of Principia Mathematica and Related Systems for details on English translations. • --------, 1958, "Über eine bisher noch nicht benüzte Erweiterung des finiten Standpunktes," Dialectica 12: 280–87. Reprinted in English translation in 1990. Gödel's Collected Works, Vol II. Solomon Feferman et al., eds. Oxford University Press. • Hermann Grassmann, 1861. Lehrbuch der Arithmetik (A tutorial in arithmetic). Berlin. • Hatcher, William S., 1982. The Logical Foundations of Mathematics. Pergamon. Derives the Peano axioms (called S) from several axiomatic set theories and from category theory. • David Hilbert,1901, "Mathematische Probleme". Archiv der Mathematik und Physik 3(1): 44–63, 213–37. English translation by Maby Winton, 1902, "Mathematical Problems," Bulletin of the American Mathematical Society 8: 437–79. • Kaye, Richard, 1991. Models of Peano arithmetic. Oxford University Press. ISBN 0-19-853213-X. • Peirce, C. S. (1881). "On the Logic of Number". American Journal of Mathematics 4 (1–4). pp. 85–95. doi:10.2307/2369151. JSTOR 2369151. MR 1507856.  Reprinted (CP 3.252-88), (W 4:299-309). • Paul Shields. (1997), "Peirce’s Axiomatization of Arithmetic", in Houser et al., eds., Studies in the Logic of Charles S. Peirce. • Patrick Suppes, 1972 (1960). Axiomatic Set Theory. Dover. ISBN 0-486-61630-4. Derives the Peano axioms from ZFC. • Alfred Tarski, and Givant, Steven, 1987. A Formalization of Set Theory without Variables. AMS Colloquium Publications, vol. 41. • Edmund Landau, 1965 Grundlagen Der Analysis. AMS Chelsea Publishing. Derives the basic number systems from the Peano axioms. English/German vocabulary included. ISBN 978-0-8284-0141-8 • Jean van Heijenoort, ed. (1967, 1976 3rd printing with corrections). From Frege to Godel: A Source Book in Mathematical Logic, 1879–1931 (3rd ed.). Cambridge, Mass: Harvard University Press. ISBN 0-674-32449-8 (pbk.).  Contains translations of the following two papers, with valuable commentary: • Richard Dedekind, 1890, "Letter to Keferstein." pp. 98–103. On p. 100, he restates and defends his axioms of 1888. • Giuseppe Peano, 1889. Arithmetices principia, nova methodo exposita (The principles of arithmetic, presented by a new method), pp. 83–97. An excerpt of the treatise where Peano first presented his axioms, and recursively defined arithmetical operations.
http://mathhelpforum.com/algebra/206597-remainder-theorem.html
# Thread: 1. ## remainder theorem When a polynomial f(x) is divided by x-1 and x-2, the remainders are 0 and -4 respectively. Find the remainder when f(x) is divided by (x-1)(x-2). The answer to this question is -4x+4. Can anyone show me how to work out the solution of this question? Thanks 2. ## Re: remainder theorem We know: $\frac{f(x)}{x-2}=Q_1(x)-\frac{4}{x-2}$ $\frac{f(x)}{x-1}=Q_2(x)$ Subtracting the latter from the former, we find: $f(x)\left(\frac{1}{x-2}-\frac{1}{x-1} \right)=Q_1(x)-Q_2(x)-\frac{4}{x-2}$ Let $Q_3(x)=Q_1(x)-Q_2(x)$ and we may write: $\frac{f(x)}{(x-1)(x-2)}=Q_3(x)+\frac{4(1-x)}{(x-1)(x-2)}$ Hence the remainder is $4(1-x)=4-4x$. 3. ## Re: remainder theorem Thanks for the quick reply. I don't understand why we need to subtract the latter from the former, and also why do we have to let Q3 = Q1-Q2? I hope to understand the logic so I can solve similar problems in my textbook. Thanks 4. ## Re: remainder theorem Originally Posted by flower Thanks for the quick reply. I don't understand why we need to subtract the latter from the former, and also why do we have to let Q3 = Q1-Q2? I hope to understand the logic so I can solve similar problems in my textbook. Thanks You don't have to let Q3 = Q1-Q2; it's just to make it easier to see that when we subtract the latter from the former we actually get a new quotient with a new remainder, which we want to write so that we have f(x)/((x-1)(x-2)) and r/((x-1)(x-2)). And in this case r = 4(1-x). 5. ## Re: remainder theorem Thank you very much. I understand this question now. However, I still can't solve the following two questions. 1. f(x) is a polynomial. When 3x-2 divides f(x), the remainder is K. When 2-3x divides f(x), the remainder is ______. (The answer is K.) I tried to set it up this way, following your method from the last question: 3x-2/f(x) = Q1(x) + K/f(x), so 2-3x/f(x) = -(3x-2)/f(x) + Q1(x) + (-K)/f(x), so the answer should be -K, which is different from the textbook answer. 2) Let f(x) be a polynomial. If f(x) is divisible by x-1, which of the following must be a factor of f(2x+1)? The answer is x. I have no idea how to do this question at all. Please help me. Thank you. 6. ## Re: remainder theorem 1.) The remainder theorem tells us that since the two divisors have the same root, the function will have the same remainder when divided by them. 2.) We may write: $f(x)=(x-1)Q(x)$ and so: $f(2x+1)=((2x+1)-1)Q(2x+1)=2xQ(2x+1)$ Thus, $x$ is a factor of $f(2x+1)$. 7. ## Re: remainder theorem Thank you very much. I understand them all now.
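For anyone who wants to double-check results like this with a computer algebra system, here is a small Python/SymPy sketch; the cubic factor in f below is an arbitrary choice, since any polynomial with f(1) = 0 and f(2) = -4 will do:

```python
# Check the claimed remainder -4x + 4 for one concrete f with f(1)=0 and f(2)=-4.
import sympy as sp

x = sp.symbols('x')
# any polynomial through the two points works; this one is an arbitrary choice
f = (x - 1)*(x - 2)*(x**3 + 7) + (-4*x + 4)
assert f.subs(x, 1) == 0 and f.subs(x, 2) == -4

q, r = sp.div(f, (x - 1)*(x - 2), x)   # polynomial division: quotient and remainder
print(sp.expand(r))
```

The printed remainder is -4*x + 4, matching the answer above.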
http://stats.stackexchange.com/questions/33227/are-sample-means-for-quantiles-of-sorted-data-unbiased-estimators-of-the-true-me
# Are sample means for quantiles of sorted data unbiased estimators of the true means? In studying income inequality, it is very common to look at sample means for deciles or quintiles of the sample, and to assume that the sample means are good estimators of the true means. In this setting, the "deciles" and "quintiles" normally refer, not to the break points, but to the sets of observations divided by the break points. Suppose that the income values are observed with error, and that the error, or more probably, the percentage error, is independent of the true value. • Is the mean of the sample quantile, e.g., the top decile, an unbiased estimator of the population mean? I know that with some leptokurtic distributions (e.g., the Pareto) the sample mean understates the population mean. My question refers, not to this, but to any bias that might be induced by the sorting process, because one sorts on the observed values including error, rather than on the true values. • My intuition is that the sample mean of the highest decile/quintile would be biased upwards, because positive errors would be sorted up, and conversely for the lowest. For instance, it seems to me that if incomes were non-negative but observed with a normal error, a large sample would contain some negative values, and a fine enough quantile would gather these negative values together into a lowest group with a negative mean, demonstrating bias since the true mean must be positive. Is this true? • If the means are biased, is there any good way of correcting for this bias? Am I correct in thinking that the error's independence of the true value does not carry over to independence of the error from the observed value, which includes the error? If so, is there an easy way to at least characterize, and ideally correct, that dependence? • A commonly used index of inequality is the ratio of the mean of the incomes in the highest quintile or decile to the mean of the incomes in the lowest. If these means are biased, and the bias is corrected, will the resulting ratios be unbiased estimators of the true ratios? - ## 1 Answer For some distributions there is a positive bias due to measurement errors. If you assume the noise has mean $0$, then if you sample people from the top decile, their average measured income will be the average income of the top decile. However, the top decile of your sample will include some people who have displaced people from the top decile. The difference between the measured incomes of the incorrectly included people and the displaced people is always nonnegative, and the average value of this indicates the bias from this source of error. For some distributions, there is a negative bias due to sampling. I think this is a rare situation which you may be able to ignore based on some assumptions about the income distribution and noise distribution. Here is an artificial distribution which exhibits such a negative bias: Suppose $11\%$ of the population has a job and an income of $1$ unit, while everyone else is unemployed with an income of $0$, and there is no noise. The average income of the top $10\%$ is $1$, but there is a chance that the employment rate in your sample is under $10\%$, so the expected income of the top decile of a sample is less than $1$, so the bias is negative. If you want to get a ballpark estimate for the size of the bias, you can do a Monte Carlo simulation based on a distribution you fit to your sample and a model for the noise. There might be more accurate techniques, but this should be fast.
- I've done some numerical experiments that persuade me that the bias in quantile means at the top and bottom is real but probably not very large. But it would certainly be reassuring to get an analytic description of the bias, so that I could know under what circumstances, if any, I should be worrying about it. I am pretty sure that the bias increases when the average absolute error is a larger percentage of the value of the variables. – andrewH Jul 30 '12 at 2:17 I'm afraid that I do not know how to estimate the error process in this situation. I do not have a model that I am fitting -- I'm just taking mean values for, say, deciles, as descriptive statistics. I could make assumptions about the underlying process, that the data is drawn from a lognormal or gamma or whatever - I think the Generalized Beta type II is thought to be best fitting on U.S. data, at least for the high end of the income spectrum. I would know how to estimate the parameters of the distribution from the sample, but not the parameters of the error process. – andrewH Jul 30 '12 at 2:31
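To make the Monte Carlo suggestion in the answer concrete, here is a minimal simulation sketch. The lognormal income model, the 20% multiplicative measurement error, and the sample sizes are illustrative assumptions only, not anything specified in the question:

```python
# Rough Monte Carlo estimate of the bias in the top-decile mean when incomes are
# observed with independent multiplicative error and sorted on the noisy values.
import numpy as np

rng = np.random.default_rng(0)
n, reps = 10_000, 200
sigma_noise = 0.2                      # assumed ~20% measurement error (log scale)

true_top, noisy_top = [], []
for _ in range(reps):
    income = rng.lognormal(mean=10.0, sigma=1.0, size=n)          # assumed income model
    noise = rng.lognormal(-sigma_noise**2 / 2, sigma_noise, n)    # multiplicative, mean 1
    observed = income * noise
    true_top.append(income[income >= np.quantile(income, 0.9)].mean())
    noisy_top.append(observed[observed >= np.quantile(observed, 0.9)].mean())

print("true top-decile mean:     %.0f" % np.mean(true_top))
print("observed top-decile mean: %.0f" % np.mean(noisy_top))
```

With these settings the observed top-decile mean comes out systematically above the true one, in line with the intuition in the question, and the gap grows with the noise level.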
http://mathoverflow.net/questions/106230/schubert-varieties-which-admit-small-resolutions-of-singularities/106293
## Schubert varieties which admit small resolutions of singularities I am looking for an (incomplete) list of partial flag varieties for which all Schubert cells admit small resolutions of singularities. This is interesting for many reasons. My motivation is that a description of a small resolution will give the corresponding IC sheaves very explicitly and hence explicit formulas for the KL-polynomials. For example, I know that Zelevinsky showed that this is the case for all type A Grassmannians. What about other $G/P$, say for $P$ Hermitian symmetric? I think in this case there are at least explicit formulas for KL polynomials. What about other partial flag varieties? - @Jan: This is outside my range of knowledge, but I wonder what searches you have tried? For example, beyond Zelevinsky's 1983 paper, MathSciNet quickly turns up five others which may be relevant to your question: B.F. Jones (2010), N. Perrin (2007), J. van Hamel (2003), P. Sankaran and P. Parameswaran (1994, 1995). At least some of these must be on arXiv. There has certainly been some related study though it might not fit your question exactly. – Jim Humphreys Sep 3 at 14:21 The book "Singular Loci of Schubert Varieties" by Billey and Lakshmibai has some results about small resolutions (cf. in particular section 9.1); you might want to take a look there. – Chuck Hague Sep 3 at 15:34 ## 1 Answer For the Hermitian symmetric $G/P$, Nicolas Perrin explicitly classified all the Schubert varieties admitting a small resolution. (More explicitly, he classifies all the minimal models and quotes a theorem that says a small resolution is a smooth minimal model.) If I remember correctly, except for some very small rank exceptions, type A, and the obvious cases (e.g. projective space), all $G/P$ have a Schubert variety without a small resolution. Fortunately, there are few enough minimal models for the Hermitian symmetric cases that even in the non-smooth case, their intersection cohomology has a reasonable description, leading to the (previously known) explicit formulas for KL-polynomials. (These formulas are summarized in a paper by Boe that proves the final cases and summarizes and refers to the cases done earlier.) In type A, there is a Schubert variety without a small resolution for any 2-step flag variety where the two steps are not adjacent. (The example mentioned in the Zelevinsky paper (which he notes as being due to MacPherson) generalizes to all such cases.) This also means there is a Schubert variety without a small resolution for any 3-or-more-step flag variety. This leaves as the only case (barring low rank exceptions) adjacent 2-step flag varieties. I have not seen this resolved in the literature. I suspect there is a Schubert variety in those cases without a small resolution provided the steps are at least two away from both ends. In fact I have an explicit expected counterexample, but I have never learned the tools necessary to prove it actually is one. - That's interesting! In case you edit your answer again, I think MacPherson has a capital "P". – tweetie-bird Sep 4 at 2:12 Oops -- thanks. – Alexander Woo Sep 5 at 19:51
http://www.science20.com/quantum_diaries_survivor/higgs_mass_120_gev_susy_says
The Higgs Mass ≈ 120 GeV, SUSY Says By Tommaso Dorigo | August 24th 2009 03:25 PM Today, although fully submerged by an anomalous wave of errands which had been patiently waiting for my return at work, I heroically managed to dig out of the ArXiv a paper worth a close look. The study, titled "Likelihood Functions for Supersymmetric Observables in Frequentist Analyses of the CMSSM and NUHM1" and authored by renowned supersymmetry experts like John Ellis and Sven Heinemeyer, and experimentalists like my CMS colleagues Albert De Roeck and Henning Flacher, had me thinking that Supersymmetry does have an answer for everything, apparently. That, at least, is what one gets from even a careless look at a few of the figures in the paper. But let me get back a few steps and explain what I am talking about, before I am left alone here. SUSY, a proper daughter of mother Nature Supersymmetry is an appealing extension of the Standard Model of particle physics, which is both beautiful and naughty. In order to fix a problem of the Standard Model -the unbearable lightness of the Higgs boson- Supersymmetry tries to sell us the existence of dozens of new particles yet unseen, fourscore new parameters and then a few, and one additional symmetry principle: for every fermion there is a boson, and for every boson there is a fermion. A beautiful, but broken, new symmetry of mother Nature (another well-known bitch). Broken, because the supersymmetric bodies are all much heavier than their standard counterparts: else we would have already seen them! To tell the truth, Susy (that is the nickname by which she goes among those who have entertained themselves with her at least once) is not just easy at claiming new bodies: it also openly displays a pair of additional nice features that make her appealing. On one side, it provides precise high-energy convergence to a common value for the coupling strength of the fundamental interactions. And on the other, it contains a perfect candidate for the unaccounted mass of the Universe among the score of new particles it hypothesizes: the neutralino. Make no mistake: Susy is unification-ready, and just what you need for a big Bang. The neutralino is the lightest supersymmetric particle. It cannot decay to anything lighter, and so is bound to live forever. It is electrically neutral, so light cannot see it; it is not made of coloured stuff, so ordinary matter hardly stops it; and it is just expected to have the right mass to make the matter balance of our Universe compatible with its evolution after the big bang. Two words on the study After the above introduction, it is time to get serious again and discuss the paper. I do not wish to summarize it for you here -it is 34 pages long, and I would be unable to do it in a reasonable amount of time; plus, there is no real reason to do it since the paper is quite readable and you should definitely download it from here. All I can do here is to just concentrate on a few of its many interesting results.
To verify the compatibility of Susy with the present experimental status of particle physics and astrophysics, the study considers all electroweak precision data (measurements at the Z peak by LEP and SLC, Tevatron results on the top and W boson masses, neutrino scattering experiments, and then a few), together with the precise measurement of the muon gyromagnetic ratio (a quantity which has high predictive power for new physics, since its value would be sizably affected by the existence of new particles circulating inside virtual loops in the scattering diagrams affecting the muon anomaly), B physics observables (which for the same reason have something to say about the existence of those fancy massive new particles), and cosmological constraints. In this sense, it is a really complete account of the inputs we may have at hand to determine whether Susy is just a dream or the girl next door. Because supersymmetry is not a single theory, but rather a framework of different theories which may display the widest variety of phenomenology depending on the value of a few critical parameters, the study only considers a subset of specific models, ones going by the name of CMSSM and NUHM1. These are particular varieties of the minimal supersymmetric extension of the Standard Model (MSSM), and to explain their details I would need to write more than I am willing to about them here. Suffice it to say here that these theories are well-defined enough that they can make definite predictions for some quantities we have a chance of measuring before our retirement. The authors offer us the results of a fit to all the measured quantities considered in the study, which has the benefit of taking a frequentist approach for the statistical analysis: this frees them from the need of assuming any knowledge of the a priori distribution of some of the parameters, and makes the results insensitive to such assumptions. The fit is a quite smart one, which investigates with care the full parameter space; all model parameters are varied simultaneously in the sampling of the multidimensional space. The paper explains several technical details that I cannot report here; these give the impression that the work has been done with care, and one has the feeling that the results and their uncertainties may be trusted to be a faithful representation of the current status of our indirect knowledge on Supersymmetry from the available experimental inputs. An interesting result of the fit is the branching fraction of the Bs meson (a particle made of a strange quark and an anti-bottom one) to muon pairs: within the best solution of the CMSSM this quantity is close to the Standard Model prediction -which is bad, since experimentally we have quite a lot of work ahead before we can measure it if it is so small; on the other hand, it may easily be much larger for the NUHM1, which exposes that theory to a direct proof (or bust) in a not-so-distant future. In the figure below you see the likelihood functions as a function of Bs branching fraction; the black line shows the Standard Model prediction. You can see that in the NUHM1 model (right) the likelihood has a much flatter minimum extending to several tens in 10^-8.
Maybe the most striking result of the study is the observation that within the NUHM1 a value of the lightest neutral Higgs boson mass arises naturally above the lower limit set by the LEP II experiments: in other words, while the Standard Model (and quite a few points of the SUSY parameter space) is hard-pressed to explain why the Higgs boson is heavier than 114.4 GeV, when electroweak data would instead point to smaller values (the latest fits in the LEP electroweak working group page point to $m_h = 87^{+35}_{-26}$ GeV, so about one standard deviation below the strict bound set at LEP), the NUHM1 actually favors a Higgs mass above that bound. As far as the lightest Higgs boson is concerned, within the CMSSM the best fit produces an estimate of $m_h = 114.2$ GeV, with a fit probability of 36%. This value is smaller than the 95% CL bound set by LEP II, but close enough to it that it is still reasonably possible. For the NUHM1, the minimum occurs for $m_h = 120.7$ GeV, with a fit probability similar to that of the CMSSM fit. You can see the results in the figure below, which shows the likelihood functions and their minima as a function of the Higgs mass for the CMSSM (left) and NUHM1 model (right). The curves which are most interesting to me are the dashed ones: they exclude from the fit the knowledge of the lower bound on MH set by the LEP II experiments, and thus show that the NUHM1 model does prefer a mass above the crucial 114.4 GeV divide. Also worth mentioning here is that for the mass of the lightest neutralino, the best fit value turns out to have a sharp minimum at about 120 GeV in both considered models: this is not a surprise, but it is again interesting from an experimental standpoint, since such mass values may be accessible in the near future. The paper contains several plots like the ones shown above, describing the likely values of many of the important parameters of the theories considered. If you are at all interested in knowing what is the most likely mass value of your favorite -ino particle, your curiosity will be satiated. As for me, I find it remarkable that Supersymmetry (or at least a few points of its hundred-dimensional parameter space) keeps standing head and shoulders above the surge of experimental results coming out of the Tevatron, which stubbornly insists finding everything in agreement with the Standard Model and no trace of sparticles around. In principle, in three months' time the start-up of LHC might allow us to discover Supersymmetry in a matter of a few weeks, or even days of running. Well, make it a few months. Reality, I am told, is usually different. Life is tough, Nature is a bitch, and Susy is presently hiding in the dark. Stay tuned. ## Comments Did you take Lee Smolin as your role model in turning the coat? Let me remind you of some reasons why you dislike and don't believe SUSY: http://dorigo.wordpress.com/2008/03/05/susy-more-unlikely-by-the-new-cdm... http://dorigo.wordpress.com/2006/06/07/292/ Will we also read a blog entry in early 2010 that you have never disliked SUSY and never predicted it wasn't there? These considerations by Ellis et al. are about some one-sigma preferences and not very strong. But I do bet that it is more likely than not that some of this new marvelous SUSY stuff will be found between 114 and 130 GeV.
Don't think it is natural that, according to your own words, that "[Supersymmetry] stubbornly insists finding everything in agreement with the Standard Model and no trace of sparticles around.", "Because supersymmetry is not a single theory, but rather a framework of different theories which may display the widest variety of phenomenology depending on the value of few critical parameters", so it is not surprising at all that one may fit Supersymmetric models, whatsoever parameters are found, to the Standard Model? Hi Daniel, the subject of the first sentence you quote is the Tevatron, not Supersymmetry. Anyway, the Higgs sector structure of SUSY is quite different from that of the SM, with five higgs bosons, the lightest of which starts off with a mass exactly equal to that of the Z boson. It is only through theoretical magic that one can dislodge it from there and make it heavier than the 114.4 GeV limit of LEP II; still, no SUSY theory can account for a Higgs heavier than 135 GeV or so. This, despite the many parameters and models. If we found a Higgs at 160 GeV we would have to kiss SUSY bye bye, regardless of other "evidence" and theoretical preferences. Cheers, T. Oh, I see. I really thought that "which" refered to Supersymmetry, not Tevatron. Well, now the intention of the whole text changed for me! I think that if you look long enough on arxiv.org, you I think will find a neutral higgs above 135GeV in susy . Hi Tommaso, to put it mildly, your sentence is very naive. Indeed, 135 GeV is (roughly) the upper bound in the MSSM with the stop quark masses around 1 TeV. Take a "split" scenario with heavier (multi-TeV) stops and you get easily above 135 TeV. Just add an extra singlet to the MSSM and you get the NMSSM, where a 160-GeV Higgs can be accommodated (and other interesting stuff happens). It will take more than that to kiss SUSY bye bye... Cheers Ptrslv72 Ptrslv72 (not verified) | 08/25/09 | 12:00 PM Hi Tommaso, let me add some straw to the fire (and be a smart ass, sorry)... If you found a Higgs at 160 GeV *with SM couplings, i.e., large branching ratios into WW*, then you can kiss the MSSM, as well as some people, bye bye. That is probably what you meant by "we". As far as the theoretical magic which pushes the mass above 114.4GeV, well... We don't know how to compute most theories exactly (not just in particle physics) and resort to expansions around some known solution. In particle physics we simplify the world and let our fields be free (some kind of heaven for particles before hell broke open or after it vanishes into the ashes). So what gives us this extra mass is all the stuff which interacts with the SUSY Higgs bosons, especially that with strong interactions like the top and its superpartners, the stops. These interactions change the original free fields, in particular their masses. Whats somehow magic for me is that these interactions can also change the Higgs field so that it develops a vacuum expectation value at the electroweak scale. Cheers, Federico Federico (not verified) | 08/26/09 | 23:22 PM Hi Federico, you must be new to this site... otherwise you'd know that I like to trivialize things, to make them digestible, even if this entails some oversimplification. Thank you for your explanation on how the Susy higgs moves away from Mz anyway, but you are far from putting straws to the fire :) Cheers, T. 
Dear Tommaso, Take for instance; http://xxx.lanl.gov/abs/0908.0780 and its references: http://xxx.lanl.gov/PS_cache/arxiv/pdf/0812/0812.1994v2.pdf http://xxx.lanl.gov/PS_cache/arxiv/pdf/0812/0812.1991v2.pdf http://xxx.lanl.gov/PS_cache/arxiv/pdf/0709/0709.4269v6.pdf Regards Anonymous (not verified) | 08/25/09 | 12:01 PM Sorry Tommaso, something got screwed up in the message above. It should have read: to put it mildly, your sentence "no SUSY theory can account for a Higgs heavier than 135 GeV or so" is very naive... Ptrslv72 (not verified) | 08/25/09 | 12:02 PM Sure Ptrslv, I know - but split SUSY is something I do not even want to start considering; it is a different thing for me. Anyway, the point has been made several times, that even if we do not see any SUSY at LHC we won't have killed it. However, am I being naive if I think that it will considerably decrease its popularity in that case ? For one thing, all these global fit addicts will have to change area of expertise... Cheers, T. Who would deny that SUSY's popularity will decrease if we don't find it at the LHC? My comment was about the 135-GeV bound on the lightest Higgs mass, that you seemed to take waaay more seriously than you should have. Contrary to what we like to think to simplify our calculations, the range of possible SUSY theories is quite broad and is not confined to the plain-vanilla MSSM with 1-TeV squarks (not to mention the even-more-constrained CMSSM). Ptrslv72 (not verified) | 08/25/09 | 12:39 PM Ok, point taken, but you certainly know that the focus of 99% of SUSY phenomenologists and other stringy enthusiasts today is on the low-mass SUSY models which can indeed be at reach. Cheers, T. What else did you expect? That they justified their grants with things completely out of reach? But well, they can. the NMSSM (Next-to-Minimal SSM) can be equally within reach and at the same time allow for a lightest Higgs heavier than the one of the MSSM. And this is just the first example that came to my mind... Ptrslv72 (not verified) | 08/25/09 | 13:18 PM There are also SUSY models with extended gauge sectors which can raise the Higgs mass, up to about 150 GeV in the Langacker et al U(1) versions and up to hundreds of GeV in the Batra et al SU(2) versions. And the CMSSM is by no means a complete covering of even the vanilla MSSM parameter space... though the prediction of a fairly light Higgs is pretty robust in the MSSM. Anonymous (not verified) | 08/25/09 | 23:12 PM Ok, so the situation is even worse than the one I pictured... We will never get rid of this fantastic, wrong theory! In any case, I stand by my way of telling the tale, which is slightly inaccurate or incomplete but captures the essence of what most SUSY addicts are putting their money on. Cheers, T. OK, I've changed my mind about the value in delaying LHC results (until theorists can actually make decent predictions). Can they please turn it on ASAP and put everybody out of their misery. If there is no Higgs with mass less than 130 GeV, when will the Tevatron be able to rule that out, at 95% CL? Before the LHC comes online? Thomas Larsson (not verified) | 08/26/09 | 12:26 PM Hardly so, Thomas. That is a tough region for the LHC, but it is not easy for the Tevatron either. It might take two or three more years to CDF and Dzero to do that. Cheers, T. How long for LHC to achieve 95% when working at 7TeV CMS? Daniel, that is the subject of my poster at KOBE... Post coming! 
In any case, no assessment has been made for 7 TeV cm., but the sensitivity roughly scales down such that we need twice more luminosity than at 10 TeV. Since CMS assessed that we need 1/fb at 14 TeV to get an exclusion of 140 to 230 GeV for the Higgs if it is not there (but we discover it at 6-sigma if it is in the right region with that much data), I would eyeball that we need 4/fb to do the same at 7 TeV. But this is just adding the H->WW and H->ZZ channels, while more analyses would help reducing the required luminosity. For the low mass region, things are harder to assess right now. Cheers, T. How long does it take to accumulate 7/fb at 7teV to get that exclusion at 130GeV? The question is ill-posed, because LHC will not get that much data at 7 TeV (unless it decides to stick to that energy for safety reasons, which would be a total failure!). The LHC will move to higher energy in 2011, after at most 1/fb at 7 TeV. The problem is that with such uncertainty on energy as well as luminosity, predictions are next to impossible. Cheers, T. Possible Higgs mass: 128.577 Gev 183.7119 Gev 181.0516 Gev 161.9738 Gev 159.6282 Gev 3zy (not verified) | 09/02/09 | 11:22 AM 3zy, it's GeV, not Gev. and anyway, yes, they are all possible, as all the others. Cheers, T.
http://zym8903.wordpress.com/
# Riding on gazelle God Ever Geometrize `Status: On` ## Mark April 22, 2013 Comments off Considering building up a package for my recent research. Will be back to this post later. — Considering building theory on the DtN map for a nonhomogeneous source term; it is thought to have the same properties as the homogeneous case. — Considering a compressive detecting method for the point source problem. — Considering a particle-based transport equation with an inverse source problem. Categories: Numerics ## Jam April 13, 2013 Comments off Categories: Accessary, Austin Pavilion ## Spams rampage April 10, 2013 Comments off Categories: Accessary ## A trip to San Antonio March 24, 2013 Comments off Categories: Accessary ## Existence of random zeros::solution March 3, 2013 Comments off [UPDATE] This is the last post for this problem. A couple of weeks ago, I came up with the solution by using a moving frame, which I learned in a Differential Equations class a long time ago. Briefly speaking, it is a good piece of work in constructing a special solution, and a nice try with a coordinate transformation. Inverse Problem: on point source. Now I simply list my trials in probing the path to the solution. • As I mentioned in the last post, I would like to continue using the same technique, with a CGO-like approach; however, after a long time of trials, I gave up on this method. This method gives a very impressive and concise representation of the problem: $u\Delta \phi + 2\nabla u \cdot \nabla \phi +k^2 u(n-1) \phi = 0$; after we multiply the equation by $\overline{u}$, it morphs into divergence form, $\nabla(|u|^2\nabla\phi) + k^2|u|^2\phi = 0$. To my knowledge of this degenerate elliptic equation, we need to apply weighted Sobolev space theory, but unfortunately the theory cannot rule out the existence of singularities, and I came up with a counterexample for that. However, I still believe this method can be promising in the 2D case. • For the series form, if we require the coefficients of the expansion to be analytic series, i.e. $\rho = \sum\rho_j \overline{z}^j$, where $\rho_j = \sum c_{kj} z^k$, this gives a recursion formula for the problem, but the analyticity forces the problem to be unsolvable. What a pity. ————— Categories: Complex Analysis, Doodle, Partial Differential Equations Tags: point source ## Existence of random zeros::Explore March 1, 2013 Comments off Last time I considered the problems related to 'existence of random zeros'. For Problem No. 5, we can find a solution. The governing equation: $\Delta u + k^2 u =0$, and $u(P(x_j)) = 0$. Here $P$ is a projection operator onto the xy plane. Our solution is $u = \prod_{j=1}^m (x+iy - a_j -ib_j) e^{-ikz} = \Phi e^{-ikz}$, where $P(x_j) = (a_j,b_j)$; since $\Phi$ is analytic, $\Delta \Phi = 0$. Thus I only have to look at the $\mathbb{R}^3$ case; for higher-dimensional spaces, we just project the points onto a lower-dimensional one. For No. 4, we just need to consider the solution to No. 3. I do not think the randomness can be achieved for No. 1 and No. 2, but the proof needs more work.
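A quick numerical sanity check of this construction (the wavenumber and the prescribed points below are arbitrary): $u$ should vanish exactly on the vertical lines through the $(a_j,b_j)$, and the finite-difference residual of $\Delta u + k^2 u$ should sit at the level of the discretization error.

```python
# Numerical check of u(x,y,z) = prod_j ((x+iy) - (a_j+ib_j)) * exp(-i k z),
# which should satisfy Delta u + k^2 u = 0 and vanish at the prescribed (a_j, b_j).
import numpy as np

k = 2.7
zeros_xy = [(0.3, -1.2), (1.5, 0.4), (-0.7, 2.1)]   # arbitrary prescribed zeros

def u(x, y, z):
    phi = np.prod([(x + 1j*y) - (a + 1j*b) for a, b in zeros_xy])
    return phi * np.exp(-1j * k * z)

# u vanishes on the vertical lines through the prescribed points
for a, b in zeros_xy:
    assert abs(u(a, b, 0.37)) < 1e-12

def helmholtz_residual(x, y, z, h=1e-3):
    # second-order central differences for the Laplacian at a test point
    lap = (u(x+h, y, z) + u(x-h, y, z) + u(x, y+h, z) + u(x, y-h, z)
           + u(x, y, z+h) + u(x, y, z-h) - 6*u(x, y, z)) / h**2
    return abs(lap + k**2 * u(x, y, z))

print(helmholtz_residual(0.9, -0.4, 1.3))   # small: only finite-difference error remains
```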
Categories: Complex Analysis, Doodle, Partial Differential Equations Tags: Helmholtz equation ## Existence of random zeros::Problems[FINISHED] February 24, 2013 [UPDATE: The problem has been solved completely. I posted the rough proof at minfun.info] Recently I was thinking about the zeros of the Helmholtz equation. • Problem 1: Suppose we have a bunch of points in $\mathbb{R}^3$, say $x_j$, $j = 1, \cdots, m$. Is there a solution of the Helmholtz equation $\Delta u + k^2 u = 0$ such that $u(x_j) = 0$? • Problem 2: What if in $\mathbb{R}^d$? • Problem 3: [3D case] What if the medium is inhomogeneous? The equation turns out to be $\Delta u + k^2 n(x) u = 0$, where $n(x)-1$ is supported on a compact domain. • Problem 4: [$\mathbb{R}^d$ case] The same question as above, in $\mathbb{R}^d$. • Problem 5: [Reduced case] If we cannot find the solution for random zeros, we define a projection operator $P:\mathbb{R}^3\rightarrow \mathbb{R}^2$, which maps points onto a plane. Then can we find a solution to the Helmholtz equation such that $u(P(x_j))=0$? —————————————————– For Problem 5, I have a solution, but it cannot be applied to the other ones. It will be recorded next time.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 24, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8531520962715149, "perplexity_flag": "middle"}
http://scicomp.stackexchange.com/questions/5228/sparse-linear-solver-for-many-right-hand-sides
# Sparse linear solver for many right-hand sides

I need to solve the same sparse linear system (300x300 to 1000x1000) with many right-hand sides (300 to 1000). In addition to this first problem, I would also like to solve different systems, but with the same non-zero elements (just different values), that is, many sparse systems with a constant sparsity pattern. My matrices are indefinite. Performance of the factorization and initialization is not important, but performance of the solve stage is. Currently I'm considering PaStiX or UMFPACK, and I will probably play around with PETSc (which supports both solvers). Are there libraries capable of taking advantage of my specific needs (vectorization, multi-threading) or should I rely on general solvers, and maybe modify them slightly for my needs? What if the sparse matrix is larger, up to $10^6 \times 10^6$?

-

## 3 Answers

There is typically a trade-off between the amount of work you put into constructing a good preconditioner for an iterative solver and the work you save by using a good preconditioner when actually solving the linear systems. In your case, the choice is pretty clear: put as much work as you can into constructing a good preconditioner, because you have to solve so many linear systems. In fact, I think it is appropriate to invest the time to get the perfect preconditioner: an LU decomposition (using UMFPACK, for example, or the Pardiso solver that comes as part of Intel's MKL). Then simply apply this decomposition as many times as necessary. If you have $O(N)$ linear systems to solve, nothing can be expected to beat an exact decomposition.

- 2 Your last statement is debatable. Consider an exact multifrontal factorization of a 3D FEM or FD discretization over a cube, which should require $O(N^2)$ work and $O(N^{4/3})$ memory usage. The exact solves therefore require $O(N^{4/3})$ flops per right-hand side, and so, for sufficiently large $N$, any iterative solver with a lower asymptotic complexity will be faster. – Jack Poulson Feb 8 at 15:46
2 Maybe. But as a matter of practical consideration, sparse direct solvers are still darn fast given that the constant in front of even an $O(N)$ solver is pretty large whereas the constant in front of the $O(N^{4/3})$ is not. – Wolfgang Bangerth Feb 8 at 23:25
1 The catch is that you run out of memory and patience for the factorization at about the crossover point. For the 7-point Laplacian, multigrid needs about 50 flops/dof, leading to a flops crossover (versus back-solve) at around $10^5$ dofs. The back-solve uses a lot more memory, but kernels for many right-hand sides are commonly available. Multigrid is not usually written for many right-hand sides, thus sacrificing vectorization potential. I'd wager that you can write an MG algorithm for which time-per-RHS is less than a CHOLMOD (or any other package) back-solve for the 3D Laplacian at some $n<300k$. – Jed Brown Feb 9 at 15:39

Without taking sides in the discussion about whether to use direct or iterative solvers, I just want to add two points: 1. There exist Krylov methods for systems with multiple right-hand sides (called block Krylov methods). As an added bonus, these often have faster convergence than standard Krylov methods since the Krylov space is built from a larger collection of vectors. See Dianne P. O'Leary, The Block Conjugate Gradient Algorithm and Related Methods. Linear Algebra and its Applications 29 (1980), pages 239-322, and Martin H.
Gutknecht, Block Krylov space methods for linear systems with multiple right-hand sides: An introduction (2007). 2. If you have different matrices with the same sparsity pattern, you can precompute a symbolic factorization for the first matrix, which can be reused in computing the numerical factorization for this and the subsequent matrices. (In UMFPACK, you can do this using `umfpack_di_symbolic` and passing the result to `umfpack_di_numeric`.)

-

You're not quite clear in your statement of the problem when you talk about "the same non-zero elements (just different values)". Are you saying that the matrix has a constant sparsity pattern but the actual values change? Or are you saying that the matrix is in fact constant? Assuming that the sparse matrix is constant and only the right-hand side is changing, then you should be looking at methods that use a direct factorization (of the form $PA=LU$) of the matrix, and then solve for each right-hand side by forward/backward substitution. Once the factorization is complete, each solution will be extremely fast ($O(n^2)$ time for completely dense factors, but for sparse factors this will be proportional to the number of nonzeros in the factors). For multiple right-hand sides and systems of equations of this size, iterative methods are typically not worthwhile. All of the packages you mentioned offer direct factorization methods (although PETSc is mostly known for its iterative solvers). However, your systems are so tiny that it is unlikely that you could get substantial parallel speedups, particularly in a distributed-memory environment. I'd suggest using UMFPACK for this job - PaStiX and PETSc are overkill.

- Thanks for your answer. In order to clarify: I asked first for a single matrix with many right-hand sides, and then, another problem is a collection of matrices with the same sparsity pattern but the values change; each of them must be solved for many RHS. Subsidiary question: what if the sparse matrix is now 10^5x10^5 to 10^6x10^6? – nat chouf Feb 8 at 16:43
2 My rule of thumb is that a sparse direct solver (taking as example the discretization of a 2d PDE) is faster than even good iterative solvers if the size of the matrix is less than $10^5$. That may be a rough guess, but it may give you an idea. – Wolfgang Bangerth Feb 8 at 23:27
Using an iterative method for your larger systems with only a single right-hand side might make sense, particularly if you don't need very accurate solutions and particularly if you can find an effective preconditioner or your systems are already well conditioned. However, if your systems are badly conditioned, you need accurate solutions, and you can't find a good preconditioner, then you'll likely still be better off with direct factorization. – Brian Borchers Feb 9 at 2:29
Another important consideration is the memory requirements. Once you get into very large sparse systems where $N$ is of order $10^6$, you can easily run out of memory to store the direct factorization. At that point you'll definitely be forced to switch to using an iterative method. – Brian Borchers Feb 9 at 2:31
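To make the factor-once / solve-many idea above concrete, here is a minimal sketch using SciPy's SuperLU wrapper; the matrix and right-hand sides are random stand-ins, not the asker's actual systems:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Stand-in problem: a random sparse matrix and many right-hand sides.
n, n_rhs = 1000, 500
A = (sp.random(n, n, density=5.0 / n, format='csc') + sp.eye(n)).tocsc()
B = np.random.rand(n, n_rhs)

# Factor once (this cost is amortized over all right-hand sides) ...
lu = splu(A)

# ... then back-solve for all right-hand sides at once.
X = lu.solve(B)
print(np.linalg.norm(A @ X - B) / np.linalg.norm(B))  # small residual

# For a sequence of matrices sharing one sparsity pattern, the ordering /
# symbolic analysis can be kept and only the numerical factorization redone;
# with UMFPACK this is the umfpack_di_symbolic / umfpack_di_numeric split
# mentioned above (exposed in Python via scikit-umfpack, if installed).
```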
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9346557855606079, "perplexity_flag": "middle"}
http://conservapedia.com/Pascal's_triangle
# Pascal's triangle

### From Conservapedia

Pascal's triangle up to row six

Pascal's triangle is a triangular arrangement of the coefficients of a binomial expansion. Although it was discovered centuries earlier,[1] the triangle's full significance was first understood by 17th century mathematician and religious scholar Blaise Pascal.[2]

Pascal's triangle is constructed by starting with a single 1, and then moving downward and out so that each new element is the sum of the two numbers diagonally above it. (Elements outside the triangle are considered to be zero.) The top row of the triangle (containing just a single 1) is the 0th row, with rows below it being 1st, 2nd, and so on. The left-most element in each row (which will always be a 1) is the 0th element of that row, with the following elements continuing as 1st, 2nd, and so forth.

## Applications

Pascal's triangle has numerous applications and properties, some of which are beyond the scope of this article.[3] Some are useful, others are merely interesting. Only a few of the more well-known applications of Pascal's triangle are described below.

### Binomial Expansion

For the binomial $(a + b)^n$, the coefficients of the expansion are found in the $n$th row of Pascal's triangle. (Remember the top row is row 0.) For example, the binomial coefficients for $(a + b)^4$ are the numbers in row 4 of the triangle.

$(a + b)^4 = 1a^4 + 4a^3b + 6a^2b^2 + 4ab^3 + 1b^4$

### Combinations

For a mathematical combination of the form ${}_nC_r$, the solution can be found at row $n$, element $r$ of Pascal's triangle. For the combination ${}_5C_3$, the solution is the number at the 3rd element of the 5th row.

${}_5C_3 = 10$

### Pascal's Rule

Pascal's rule states that:

${n \choose r} = {n-1 \choose r-1} + {n-1 \choose r}$

In terms of the triangle, this simply means that an entry is the sum of two diagonal entries above it. It is easy to prove:

${n-1 \choose r-1} + {n-1 \choose r}$
$= \frac{(n-1)!}{(r-1)!\,(n-1-(r-1))!}+ \frac{(n-1)!}{r!\,(n-1-r)!}$
$= \frac{(n-1)!\,r + (n-1)!\,(n-r)}{r!\,(n-r)!}$
$= \frac{(n-1)!\,(r + (n-r))}{r!\,(n-r)!}$
$= \frac{n!}{r!\,(n-r)!} = {n \choose r}.$

## References

1. ↑ "Napier ... discovered what eventually would be called "Pascal's Triangle" and placed it in common use long before Pascal was even born." [1]
2. ↑ http://pages.csam.montclair.edu/~kazimir/history.html
3. ↑ http://ptri1.tripod.com/
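A short sketch of the construction rule above (each entry is the sum of the two entries diagonally above it), which also reproduces the row-4 binomial coefficients and the ${}_5C_3$ example:

```python
def pascal_rows(n_rows):
    """Return rows 0..n_rows-1 of Pascal's triangle, built by the sum rule."""
    rows = [[1]]
    for _ in range(n_rows - 1):
        prev = rows[-1]
        # Pad with zeros on both sides so entries outside the triangle count as 0.
        rows.append([a + b for a, b in zip([0] + prev, prev + [0])])
    return rows

rows = pascal_rows(6)
for r in rows:
    print(r)

# Row 4 gives the coefficients of (a + b)^4: [1, 4, 6, 4, 1],
# and row 5, element 3 is 5 choose 3:
print(rows[5][3])   # 10
```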
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 6, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9388283491134644, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/17634/definition-of-chow-groups-over-spec-z/18073
## Definition of Chow groups over Spec Z

### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)

Usually (e.g., the introduction of M. Rost's 'Cycle modules with coefficients'), for a variety $X$ over a field one can define the Chow group of $p$-cycles, $CH_p (X)$, as $$CH_p (X) = \mathrm{coker}\; \left[\bigoplus_{x\in X_{p+1}} k(x)^\times \rightarrow \bigoplus_{x\in X_p} \mathbb{Z}\; \right].$$ What about for an arithmetic scheme, e.g. when $X$ is, say, normal, separated, of finite type and flat over $Spec \; \mathbb{Z}$? Does something go wrong with the above definition? Peter Arndt had posed part of this question already, but it seems without an answer.

-

## 5 Answers

One can define Chow groups over any Noetherian scheme $X$. Let $Z_iX$ be the free abelian group on the $i$-dimensional subvarieties (closed integral subschemes) of $X$. For any $(i+1)$-dimensional subvariety $W$ of $X$ and a rational function $f$ on $W$, we can define an element of $Z_iX$ as follows: $$[div(f,W)] = \sum_{V} ord_V(f)[V],$$ summing over all codimension one subvarieties $V$ of $W$. Then the $i$-th Chow group $CH_i(X)$ is defined as the quotient of $Z_iX$ by the subgroup generated by all elements of the form $[div(f,W)]$. The order function is defined as the length of the corresponding local ring, so it does not need any further assumptions on $X$; in fact, $X$ being locally Noetherian is enough. This definition also agrees with the one in the original question.

- 1 Note that if $X$ is not Noetherian, the sum over the codimension $1$ integral closed subschemes $V$ might not be a finite sum. – Qing Liu Mar 9 2010 at 21:05
Thank you! this is just what I was looking for – Ivan Mar 9 2010 at 22:46
@Ivan: If you are looking for a reference, this definition is given in Fulton's book on intersection theory. – name Jul 9 at 11:43

### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.

As explained by Hailong, one can define the Chow group CH($X$) (without grading by the dimension of the cycles) of any Noetherian scheme $X$. But in general one has to be careful about the behavior of the rational equivalence.

• A principal divisor in an integral closed subscheme $W$ of dimension $i+1$ is not necessarily an $i$-cycle (this has to do with the notion of catenary schemes).
• If $X$ is not equidimensional, then the divisor associated to an invertible rational function can fail to be rationally equivalent to $0$ (even when $X$ is a reduced projective variety over a field).
• If $f : X\to Y$ is a proper morphism, then, contrary to the case of varieties over a field, the pushforward by $f$ does not induce a map ${\rm CH}(X)\to {\rm CH}(Y)$. One can construct an example with $X$ affine, regular of dimension $2$, $f$ finite birational, and the image of a principal divisor on $X$ non-zero in CH($Y$).

To remedy these pathologies, Thorup (Rational equivalence theory on arbitrary Noetherian schemes, Enumerative geometry (Sitges, 1987), 256--297, Lecture Notes in Math., 1436, 1990) defines a graded rational equivalence relation associated to a grading on $X$ (a map from $X$ to $\mathbb Z$ with some properties, e.g. $x\mapsto -\dim O_{X,x}$), and a graded principal divisor on an integral closed subscheme is the usual principal divisor where we discount components of bad gradings.
With the new rational equivalence everything works fine for the pushforward by proper morphisms $X\to Y$ (though the gradings on $X$ and $Y$ must be compatible in some sense) and pullback by flat morphisms. When $X$ and $Y$ are of finite type over a universally catenary base scheme $S$ [EDIT: which is equidimensional at every point], then any proper morphism $f : X\to Y$ induces a homomorphism ${\rm CH}(X)\to {\rm CH}(Y)$ (without grading); see S. Kleiman, Intersection theory and enumerative geometry: a decade in review, with the collaboration of Anders Thorup on § 3, Proc. Sympos. Pure Math., 46, Part 2, Algebraic geometry, Bowdoin, 1985, 321--370, Amer. Math. Soc., Providence, RI, 1987.

I learned most of these from O. Gabber, and the examples mentioned above are in a (not yet finished) preprint with him and D. Lorenzini.

[EDIT]. To summarize, if $X$ is noetherian, universally catenary (e.g. finite type over ${\mathbb Z}$ or any noetherian regular scheme) and equidimensional at every point (i.e. for every $x\in X$, the irreducible components of ${\rm Spec}(O_{X,x})$ all have the same dimension), then CH$(X)$ can be decomposed as the direct sum of the $CH_i(X)$'s as in Hailong's post. If $f : X\to Y$ is a proper morphism of universally catenary noetherian schemes, then $f_*$ induces a homomorphism ${\rm CH}(X)\to {\rm CH}(Y)$. In the last counterexample above, $X$ is regular (so universally catenary), $Y$ is catenary but not universally.

- Thank you for the illuminating examples and references Prof. Liu, I am looking forward to the Gabber/Lorenzini article. – Ivan Mar 9 2010 at 22:48
1 Hey, I am one of the authors :). You might have a look at a paper of Shuji Saito and K. Sato to appear in the Annals: *A finiteness theorem for zero-cycles over $p$-adic fields* (also on the arXiv). They use CH(X) over ${\mathbb Z}_p$. – Qing Liu Mar 10 2010 at 12:55

So with a lot of extra care about dimension/codimension it seems to be possible to define Chow groups over Spec Z, if I understand the above answers correctly. I may point out that in the book by Elman, Karpenko and Merkurjev, "The Algebraic and Geometric Theory of Quadratic Forms" (even though the title does not suggest so), they very carefully work out Chow groups, even some version of higher Chow groups. They begin by treating Chow groups over general excellent schemes (something you do not find written so explicitly in Fulton), so quite general, and only later impose additional assumptions, like equidimensionality, being over a field, and all that. So maybe it is worth having a look at that. On the other hand, they get a pullback along non-flat morphisms only with the typical more restrictive conditions. This however is crucial for turning Chow groups into the Chow ring. So I think the construction of the intersection product [which uses the pullback along the embedding of the diagonal X -> X x X] is another very critical matter over Z (but according to one of the other answers it can be done, which sounds very interesting).

Last but not least, just maybe another perspective: if one writes down the classical intersection multiplicity of two cycles, that can be done by first multiplying both cycles of complementary codimension [so for this we need a ring structure, but let's just assume somebody can give such a structure even over Z, just to find out where we would actually be going]; then the product lies in CH^n(X), n being the dimension of our scheme.
Now to turn that into the classical intersection multiplicity one could push this cycle forward along the structural map to the base field, $X \to \mathrm{Spec}(k)$ over a field(!), and $CH_0(\mathrm{Spec}\, k) = \mathbb{Z}$, and we get our intersection number. Voilà. But if we are proper over Spec Z, we could at best push forward along $X \to \mathrm{Spec}\, \mathbb{Z}$, but $CH_0(\mathrm{Spec}\, \mathbb{Z}) = 0$, so nothing very interesting seems to result here. [This argument however only makes sense if the dimension shifts in this Spec Z setup would be carried over analogously, which maybe is also stupid here for the reason that Spec Z is one-dimensional and Spec k zero-dimensional. I am just saying all this because best and supercool would of course be somebody with a Spec $\mathbb{F}_1$ having $CH_0(\mathrm{Spec}\, \mathbb{F}_1)$ = ? (...something, probably rather R than Z), and that could then be our Spec Z intersection number, by giving $\mathbb{F}_1$ the role of a "virtual base field"; and I guess some people say this should link to the Arakelovian one... but well, that's very speculative.] So I think some people's expectation goes in the direction that the "interesting" way of doing intersection theory over Spec Z needs such a final $\mathbb{F}_1$-twist.

Note maybe that the classical analogue would be P1 <-> Spec Z + (infinite place), but CH_0(P1) = Z, whereas CH_0(Spec Z) = 0, so we kind of miss something if we just use classical Chow groups over Spec Z. For other questions, classical methods work well even for Z without needing $\mathbb{F}_1$ or so; for example the étale fundamental groups of both P1 and Spec Z are trivial. But for Chow theory some additional tricks seem to be required. At least that is my impression. Of course this arithmetic aspect of intersection theory over Spec Z is a kind of different story, and it also makes perfect sense to talk about classical Chow groups over Spec Z, so there is certainly nothing wrong in having CH_0(Spec Z) = 0; just maybe for some sorts of questions of arithmetical content, this type of Chow theory may not be the right approach.

-

The last chapter of Fulton's book defines Chow groups on schemes $X$ of finite type over a fixed smooth scheme $S$. It is not automatic to have an intersection product for $X$ smooth, but there is one as long as $S$ is $1$-dimensional. So it should cover the case of $\mathbb{Z}$.

- thank you, and what does "smooth scheme S" mean? (smooth is always a relative notion), because if it means S is smooth over a field k, then it coincides with the above definition. I think the first chapter of 'Lectures on Arakelov geometry' is also a good reference; here they treat schemes over Dedekind rings (for example) – Ivan Mar 9 2010 at 18:41
Why always a relative notion? You can just ask for the local rings to be regular. – Andrea Ferretti Mar 9 2010 at 19:07
Presumably he means smooth over $Spec\,\mathbb{Z}$. – Harry Gindi Mar 9 2010 at 19:21
While smooth over (say) perfect fields implies regular, regularity does not imply smoothness (e.g. over inseparable extensions). I'm referencing the definition of smoothness (of a morphism) (EGA IV) for arbitrary schemes (not just varieties). Here is a typical example where regular does not imply smooth: let k be an imperfect field, take a finite inseparable extension L/k; then $\mathbb{P}^1 _L$ is regular (a variety over k) but not smooth over k. – Ivan Mar 9 2010 at 19:26
Ok, I understand what you mean. I do not have Fulton's book right now, but I think he actually means regular. – Andrea Ferretti Mar 9 2010 at 19:43

I think intersection of vertical divisors cannot be defined, see Liu 9.1.33.
This should be fixed by Arakelov theory.

- 3 You mean *horizontal* divisors? Following Fulton, one can define a $0$-cycle on ${\rm Spec}(\mathbb Z)$ as an intersection. But one cannot define its degree as an integer depending only on the class in $CH_0({\mathbb Z})$. – Qing Liu Mar 9 2010 at 21:09
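(An elaboration, not part of the original thread.) Several of the answers and comments above use the fact that $CH_0(\mathrm{Spec}\,\mathbb{Z}) = 0$ while $CH_0(\mathbb{P}^1_k) = \mathbb{Z}$; with the definition from the first answer this is a short computation. The group $Z_0(\mathrm{Spec}\,\mathbb{Z})$ is free on the closed points $[p]$, one for each prime $p$; the only integral closed subscheme of dimension one is $\mathrm{Spec}\,\mathbb{Z}$ itself, and a rational function on it is a nonzero $q \in \mathbb{Q}^\times$ with $$\mathrm{div}(q) = \sum_p \mathrm{ord}_p(q)\,[p].$$ Since any $0$-cycle $\sum_p n_p\,[p]$ equals $\mathrm{div}\big(\prod_p p^{n_p}\big)$, every $0$-cycle is rationally equivalent to zero, so $CH_0(\mathrm{Spec}\,\mathbb{Z}) = 0$. By contrast, on $\mathbb{P}^1_k$ the degree $\sum_i n_i\,[k(x_i):k]$ of a $0$-cycle is invariant under rational equivalence, which gives $CH_0(\mathbb{P}^1_k) \cong \mathbb{Z}$.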
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 89, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9132344126701355, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/48771/proofs-that-require-fundamentally-new-ways-of-thinking/94167
Proofs that require fundamentally new ways of thinking Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) I do not know exactly how to characterize the class of proofs that interests me, so let me give some examples and say why I would be interested in more. Perhaps what the examples have in common is that a powerful and unexpected technique is introduced that comes to seem very natural once you are used to it. Example 1. Euler's proof that there are infinitely many primes. If you haven't seen anything like it before, the idea that you could use analysis to prove that there are infinitely many primes is completely unexpected. Once you've seen how it works, that's a different matter, and you are ready to contemplate trying to do all sorts of other things by developing the method. Example 2. The use of complex analysis to establish the prime number theorem. Even when you've seen Euler's argument, it still takes a leap to look at the complex numbers. (I'm not saying it can't be made to seem natural: with the help of Fourier analysis it can. Nevertheless, it is a good example of the introduction of a whole new way of thinking about certain questions.) Example 3. Variational methods. You can pick your favourite problem here: one good one is determining the shape of a heavy chain in equilibrium. Example 4. Erdős's lower bound for Ramsey numbers. One of the very first results (Shannon's bound for the size of a separated subset of the discrete cube being another very early one) in probabilistic combinatorics. Example 5. Roth's proof that a dense set of integers contains an arithmetic progression of length 3. Historically this was by no means the first use of Fourier analysis in number theory. But it was the first application of Fourier analysis to number theory that I personally properly understood, and that completely changed my outlook on mathematics. So I count it as an example (because there exists a plausible fictional history of mathematics where it was the first use of Fourier analysis in number theory). Example 6. Use of homotopy/homology to prove fixed-point theorems. Once again, if you mount a direct attack on, say, the Brouwer fixed point theorem, you probably won't invent homology or homotopy (though you might do if you then spent a long time reflecting on your proof). The reason these proofs interest me is that they are the kinds of arguments where it is tempting to say that human intelligence was necessary for them to have been discovered. It would probably be possible in principle, if technically difficult, to teach a computer how to apply standard techniques, the familiar argument goes, but it takes a human to invent those techniques in the first place. Now I don't buy that argument. I think that it is possible in principle, though technically difficult, for a computer to come up with radically new techniques. Indeed, I think I can give reasonably good Just So Stories for some of the examples above. So I'm looking for more examples. The best examples would be ones where a technique just seems to spring from nowhere -- ones where you're tempted to say, "A computer could never have come up with that." Edit: I agree with the first two comments below, and was slightly worried about that when I posted the question. Let me have a go at it though. The difficulty with, say, proving Fermat's last theorem was of course partly that a new insight was needed. But that wasn't the only difficulty at all. 
Indeed, in that case a succession of new insights was needed, and not just that but a knowledge of all the different already existing ingredients that had to be put together. So I suppose what I'm after is problems where essentially the only difficulty is the need for the clever and unexpected idea. I.e., I'm looking for problems that are very good challenge problems for working out how a computer might do mathematics. In particular, I want the main difficulty to be fundamental (coming up with a new idea) and not technical (having to know a lot, having to do difficult but not radically new calculations, etc.). Also, it's not quite fair to say that the solution of an arbitrary hard problem fits the bill. For example, my impression (which could be wrong, but that doesn't affect the general point I'm making) is that the recent breakthrough by Nets Katz and Larry Guth in which they solved the Erdős distinct distances problem was a very clever realization that techniques that were already out there could be combined to solve the problem. One could imagine a computer finding the proof by being patient enough to look at lots of different combinations of techniques until it found one that worked. Now their realization itself was amazing and probably opens up new possibilities, but there is a sense in which their breakthrough was not a good example of what I am asking for. While I'm at it, here's another attempt to make the question more precise. Many many new proofs are variants of old proofs. These variants are often hard to come by, but at least one starts out with the feeling that there is something out there that's worth searching for. So that doesn't really constitute an entirely new way of thinking. (An example close to my heart: the Polymath proof of the density Hales-Jewett theorem was a bit like that. It was a new and surprising argument, but one could see exactly how it was found since it was modelled on a proof of a related theorem. So that is a counterexample to Kevin's assertion that any solution of a hard problem fits the bill.) I am looking for proofs that seem to come out of nowhere and seem not to be modelled on anything. Further edit. I'm not so keen on random massive breakthroughs. So perhaps I should narrow it down further -- to proofs that are easy to understand and remember once seen, but seemingly hard to come up with in the first place. - 2 Perhaps you could make the requirements a bit more precise. The most obvious examples that come to mind from number theory are proofs that are ingenious but also very involved, arising from a rather elaborate tradition, like Wiles' proof of Fermat's last theorem, Faltings' proof of the Mordell conjecture, or Ngo's proof of the fundamental lemma. But somehow, I'm guessing that such complicated replies are not what you have in mind. – Minhyong Kim Dec 9 2010 at 15:18 7 Of course, there was apparently a surprising and simple insight involved in the proof of FLT, namely Frey's idea that a solution triple would give rise to a rather exotic elliptic curve. It seems to have been this insight that brought a previously eccentric seeming problem at least potentially within the reach of the powerful and elaborate tradition referred to. So perhaps that was a new way of thinking at least about what ideas were involved in FLT. – roy smith Dec 9 2010 at 16:21 10 Never mind the application of Fourier analysis to number theory -- how about the invention of Fourier analysis itself, to study the heat equation! 
More recently, if you count the application of complex analysis to prove the prime number theorem, then you might also count the application of model theory to prove results in arithmetic geometry (e.g. Hrushovski's proof of Mordell-Lang for function fields). – D. Savitt Dec 9 2010 at 16:42
2 In response to edit: On the other hand, I think those big theorems are still reasonable instances of proofs that are difficult to imagine for a computer! Incidentally, regarding your example 2, it seems to me Dirichlet's theorem on primes in arithmetic progressions might be a better example in the same vein. – Minhyong Kim Dec 9 2010 at 17:34
6 I agree that they are difficult, but in a sense what I am looking for is problems that isolate as well as possible whatever it is that humans are supposedly better at than computers. Those big problems are too large and multifaceted to serve that purpose. You could say that I am looking for "first non-trivial examples" rather than just massively hard examples. – gowers Dec 9 2010 at 18:04
show 13 more comments

59 Answers

I find Shannon's use of random codes to understand channel capacity very striking. It seems to be very difficult to explicitly construct a code which achieves the channel capacity - but picking one at random works very well, provided one chooses the right underlying measure. Furthermore, this technique works very well for many related problems. I don't know the details of your Example 4 (Erdős and Ramsey numbers), but I expect this is probably closely related.

-

You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.

How about Bolzano's 1817 proof of the intermediate value theorem? In English here: Russ, S. B. "A Translation of Bolzano's Paper on the Intermediate Value Theorem." Hist. Math. 7, 156-185, 1980. Or in the original here: Bernard Bolzano (1817). Purely analytic proof of the theorem that between any two values which give results of opposite sign, there lies at least one real root of the equation. In Abhandlungen der königlichen böhmischen Gesellschaft der Wissenschaften Vol. V, pp. 225-48. Not fully rigorous, according to today's standards, but perhaps his method of proof could be considered a breakthrough nonetheless.

- 4 More specifically, Bolzano was first to recognize that a completeness property of the real numbers was needed, and he proposed the principle that any bounded set of real numbers has a least upper bound. – John Stillwell Aug 29 2011 at 16:51

I am always impressed by proofs that reach outside the obvious tool-kit. For example, the proof that the dimensions of the irreducible representations of a finite group divide the order of the group relies on the fact that the character values are algebraic integers. In particular, given a finite group $G$ and an irreducible character $\chi$ of dimension $n,$ $$\frac{1}{n} \sum_{s \in G} \chi(s^{-1})\chi(s) = \frac{|G|}{n}.$$ However, since $\frac{|G|}{n}$ is an algebraic integer (it is the image of an algebra homomorphism) lying in $\mathbb{Q},$ it in fact lies in $\mathbb{Z}.$

-

Novikov's proof of the topological invariance of rational Pontryagin classes, for which he was awarded the 1970 Fields Medal. Fundamentally new (complicating a fundamental group to simplify geometry), and also fundamentally important.
Here is what Sir Michael Atiyah had to say (as cited in the introduction to Ranicki's Higher Dimensional Knot Theory):

Undoubtedly the most important single result of Novikov, and one which combines in a remarkable degree both algebraic and geometric methods, is his famous proof of the topological invariance of (rational) Pontryagin classes of a differentiable manifold... As is well-known many topological problems are very much easier if one is dealing with simply-connected spaces. Topologists are very happy when they can get rid of the fundamental group and its algebraic complications. Not so Novikov! Although the theorem above involves only simply-connected spaces, a key step in its proof consists in perversely introducing a fundamental group, rather in the way that (on a much more elementary level) puncturing the plane makes it non-simply-connected. This bold move has the effect of simplifying the geometry at the expense of complicating the algebra, but the complication is just manageable and the trick works beautifully. It is a real master stroke and completely unprecedented.

-

Some more proofs that startled me (in a random order):

The Liouville theorem to prove that the Weierstrass $\wp$-function satisfies the differential equation you know.
Complex methods to establish the addition law on an elliptic curve.
Cauchy's formula (for $P'/P$) to prove that $\mathbb{C}$ is algebraically closed.
The pigeonhole principle to prove the existence of solutions to the Fermat-Pell equation.
Kronecker's solution to the same equation, using L-functions.
Minkowski's lemma (a compact, convex, symmetric set of volume $2^n$ contains a non-trivial integer point) and its use to prove Dirichlet's theorem on the structure of units in number fields.
The Fourier transform to prove (versions of) the central limit theorem.
Multiplicativity of Ramanujan's tau function via Hecke operators.
The Poisson formula and its use (for example, for the functional equation of Riemann's zeta function, or for computing the volume of SL_n(R)/SL_n(Z), or values of zeta at even positive integers).

-

How about Rabinowitsch's proof of the Nullstellensatz?

- 2 I was about to go post about the 3-5 switch in the proof of Fermat's Last Theorem when I read this answer. It made me realize both are tricks rather than "proofs which require a fundamentally new way of thinking." Indeed, I think a computer could easily come up with the idea of introducing a new variable to simplify what needs to be proven (Rabinowitsch) or of switching tactics to deal with one case separately (Wiles). I'd go so far as to say computers are much better than humans at this kind of equational reasoning. – David White May 19 2011 at 20:01

I work in automated theorem proving. I certainly agree, in principle, that there are no proofs that are inherently beyond the ability of a computer to solve, but I also think that there are fundamental methodological problems in addressing the problem as posed. The problem is to come up with a solution that would not be regarded as 'cheating', i.e., somehow building the solution into the automated prover to start with. New proof methods can be captured by what are called 'tactics', i.e., programs that guide a prover through a proof. Clearly, it would not be satisfactory to analyse the original proof, extract a tactic from it (even a generic one) that captures the novel proof structure and then demonstrate that the enhanced prover could now 'discover' the novel proof.
Rather, we want the prover to invent the new tactic itself, perhaps by some analysis of the conjecture to be proved, and then apply it. So we need an automated prover that learns. But anticipating what kind of tactic we want to be learnt may well influence the design of the learning mechanism. We've now kicked the 'cheating' problem up a level. Methodologically, what we want is a large class of problems of this form. Some we can use for development of the learning mechanism, and some we can use to test it. Success on a previously unseen test set would demonstrate the generality of the learning mechanism and, hence, the absence of cheating. Unfortunately, these challenges are usually posed as 'can a prover prove this theorem' rather than 'can it solve this wide range of theorems, each requiring a different form of novelty'. Clearly, this latter form of the question is hugely challenging and we're unlikely to see it solved in the foreseeable future.

-

The Lebesgue integral seems to have been a fundamentally new way of thinking about the integral. It's hard to prove the convergence theorems if you have the Riemann integral in mind. I suppose there are probably many instances where you can give a computer a very ineffective definition of something and ask that it prove theorems. Ask it to prove anything about the primes where you start with the converse of Wilson's theorem as the definition of a prime. Can the computer figure out that its definition is terrible? Can it figure out what a prime really "is"?

-

"unexpected technique" Sometimes the result itself is unexpected. Cantor's diagonal proof (and other counterexamples), Gödel's incompleteness, Banach-Tarski and nonmeasurable sets, independence results generally. I think you want cases where the result is anticipated, but the technique seems unrelated?

- 7 I would count Cantor's proof of the existence of transcendental numbers. – gowers Dec 9 2010 at 16:39
3 However, I think I have come up with a reasonably plausible account of how somebody could have thought of it: gowers.wordpress.com/2010/12/09/… – gowers Dec 9 2010 at 23:00

Would the "quantum method" fit the bill here? "Quantum Proofs for Classical Theorems" Andrew Drucker, Ronald de Wolf; "Erdös and the Quantum Method" Richard Lipton

-

Lovász's proof of cancellation in certain classes of finite structures still bewilders me; I can only imagine that he found the proof first and then came up with the theorem afterwards. The basic idea is to look at homomorphisms between a given structure and a sequence of other structures. A comparison of two such sequences involving structures of the form AxC and BxC can be taken to a comparison between A and B. The condition that there exists a one-element substructure is used to show a certain nontriviality of the comparison, and a few more details result in showing A is isomorphic to B if(f) AxC is isomorphic to BxC. I should have asked Lovász how he came up with the proof; I am confident that most people would not be able to come close to the method independently if they were only given the theorem statement. (Not to mention the analogous statement of unique nth roots in the same class.)

-

Hochster and Huneke's tight closure theory to prove various theorems in commutative algebra (Cohen-Macaulayness of rings of invariants, existence of big Cohen-Macaulay algebras)?

-

Proving that subgroups of free groups are free requires the knowledge of topology, a completely different field which a priori does not have anything to do with groups.
- 5 Though the topological proofs are beautiful and slick, you don't need topology to prove this fact (and the original proofs were completely algebraic; see Lyndon and Schupp's book on combinatorial group theory for nice accounts of Nielsen's original proof as well as the later Reidemeister-Schreier approach). – Andy Putman Dec 18 2010 at 6:54

The Ax-Kochen theorem about zeros of forms over the $p$-adics, which was proved using model theory.

-

There are two ways to prove the compactness theorem for propositional logic - either using the completeness theorem and going from semantic entailment to syntactic proof, or by a topological argument in Stone spaces. The latter, I feel, is an unexpected way of doing it - but I don't know the history of the subject so I'm probably not qualified to comment on whether it was fundamentally new or not. Certainly in light of Stone's representation theorem, it seems unsurprising that there could be a topological proof of a theorem in logic, and as I understand it this connection is further investigated in topos theory?

- 3 The topology on Stone spaces is precisely the Zariski topology on the spectrum of a Boolean ring. It is one of three historical sources of the idea that commutative rings can be thought of as topological spaces, along with various work in algebraic geometry and Gelfand's work on C*-algebras. I have never been very clear on the historical relationship between the three. – Qiaochu Yuan Jan 18 2011 at 16:27

Here are two more candidates for new ways of thinking in proofs, but I am not sure about the historical picture. One is Brun's sieve, which led to new results in number theory. The other is Kummer's method, which has led to proofs of many cases of FLT. (Frey's new way of thinking regarding FLT was already mentioned in Roy Smith's comment.)

-

Turing's solution of Hilbert's Entscheidungsproblem. The new idea was to invent the Turing machine and "virtualization" (the universal Turing machine).

-

I am surprised that no one mentioned Hilbert's proof of Hilbert's Basis Theorem yet. It says that every ideal in $\mathbb{C}[x_1,\ldots,x_n]$ is finitely generated - the proof is nonconstructive in the sense that it does not give an explicit set of generators of an ideal. When P. Gordan (a leading algebraist at that time) first saw Hilbert's proof, he said, "This is not Mathematics, but theology!" However, in 1899, Gordan published a simplified proof of Hilbert's theorem and commented with "I have convinced myself that theology also has its advantages."

-

Use of the Hardy–Littlewood circle method towards Waring's problem.

-

Malliavin's proof of Hörmander's theorem is very interesting in the sense that one of the basic ingredients in the language of the proof is a derivative operator with respect to a Gaussian process acting on a Hilbert space. The adjoint of the derivative operator is known as the divergence operator, and with these two definitions one can establish the so-called "Malliavin Calculus", which has been used to recover classical probabilistic results as well as give new insight into current research in stochastic processes, such as developing a stochastic calculus with respect to fractional Brownian motion. What makes his proof more interesting is that Malliavin was trained in geometry and only used the language of probability in a somewhat marginal sense at times - a lot of his ideas are very geometric in nature, which can be seen for example in his very dense book: P. Malliavin: Stochastic Analysis.
Grundlehren der Mathematischen Wissenschaften, 313. Springer-Verlag, Berlin, 1997.

-

How about Goodwillie Calculus? I'm not an expert in this field, but it seems to capture a lot of very deep ideas in stable homotopy theory and in category theory more generally. Here is a stub which includes some of the traditional concepts you can get back from Goodwillie Calculus: http://ncatlab.org/nlab/show/Goodwillie+calculus Here are some lecture notes which go over the Goodwillie calculus and use it to derive the James splitting $\Sigma^\infty \Omega \Sigma X$ and the Snaith splittings of $\Sigma^\infty \Omega^n \Sigma^n X$ in a new way (this is an example of the "proof" the question is asking for): http://noether.uoregon.edu/~sadofsky/gctt/goodwillie.pdf Finally, I recently saw an amazing talk given by Mark Behrens which used the Goodwillie Calculus to lift differentials in a particular spectral sequence to differentials in the EHP Spectral Sequence, meaning this abstract machinery could also lead to powerful new computational tools. This is discussed in a recent paper: http://www-math.mit.edu/~mbehrens/papers/GoodEHPmem.pdf

-

Barwise compactness and $\alpha$-recursion theory. The idea is that many properties of the following are captured by thinking of how to define analogs in $V_\omega$: (1) Finite sets are elements of $V_{\omega}$. (2) Computable sets are $\Delta_1$ definable over $V_{\omega}$. (3) Computably enumerable sets are $\Sigma_1$ definable over $V_{\omega}$. (4) First order logic is $L_{\infty, \omega} \cap V_\omega$. Then, if we replace $V_\omega$ by a different countable admissible set $A$, many of the results relating these classes have analogs, e.g. Barwise compactness, completeness, the existence of an $A$-Turing jump, ...

-

I think Fürstenberg's proof of the infinitude of primes, taken for itself, could be considered in the spirit of the original question, even though its mathematical value is questionable.

- 3 A plea in advance - can we not have this discussion again? – HW Dec 9 2010 at 19:16
18 And others think it's the usual proof in disguise :-) – Robin Chapman Dec 9 2010 at 19:44
1 Well, ok, I was not aware of all these discussions (I have just found some more, lol). I agree though that its mathematical value is doubtful, as it seems to be just a reformulation of Euclid's proof, yet I find it amusing that such a reformulation is possible... No more discussion about it. Period. :-) – ex falso quodlibet Dec 9 2010 at 21:18
1 @ Max Muller: In the spirit of accuracy (but without wanting to move towards opening up any cans of worms) I believe it is the case that Furstenburg's is not a repackaging of analytic ideas, but rather Euclid's original argument. – Nick Salter Dec 9 2010 at 21:43
1 @all: Please consider Henry Wilton's plea! Thanks! :-) – ex falso quodlibet Dec 9 2010 at 22:17
show 3 more comments

Hilbert's proof of the Hilbert–Waring theorem.

-
For instance, we can consider a case in Random Matrix Theory: the statistical interplay between the distribution the zeros of the Riemann Zeta function and the eigenvalues of a random Hermitian matrix which has provided a basis for the Hilbert–Pólya conjecture. . - 3 I think it's fair to say that this interplay has led to a lot of conjectures but no proofs; whether random matrix theory can actually be used to prove theorems about L-functions remains to be seen... – David Hansen Dec 12 2010 at 9:28 show 2 more comments The first formal proofs using limits. (the oldest ones I know are in Newton's Principia) - Lovasz proof of Shannon Capacity of the Pentagon (the only proof known). Introduces Semidefinite optimization. Geometrizes and introduces analytic techniques to Graph Theory. Descartes introduced coordinate space approach to geometric problems. In the same spirit, Lovasz's proof coordinate space approach to graph theory problems. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9540711641311646, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/53169/should-one-think-of-a-network-as-a-connected-graph-or-what-is-the-right-way
# Should one think of a network as a connected graph? (Or: What is the right way to think of a network?)

In the definition of a network, are we only considering connected graphs? Because I keep encountering definitions that don't assume explicitly that we deal with connected graphs, but which would be very counter-intuitive if they also applied to disconnected graphs (almost everywhere one is advised to think of a network - the mathematical object - as a network of pipes transporting some fluid; if disconnected graphs come into play this metaphor of pipes transporting something fails!).

- 1 "Network" means different things to different people. What context is this in? – Qiaochu Yuan Jul 22 '11 at 19:57
If you mean computer networks, they do not have to be connected. You might want the graph to be weighted or directed, depending on the situation (network characteristics and problem at hand). – Emre Jul 22 '11 at 20:02
@Qiaochu Well, there isn't much context; I was just confronted with networks during a course in discrete mathematics, where also some graph-theoretical topics were treated. Basically we just got the definition, some lemmas and theorems followed that illustrated some properties of the definition, and then we moved on to other topics. We defined a network to be a directed graph (@ Emre, sorry, should have mentioned that earlier) that contains two special nodes, the source and the sink, and a function (the capacity) $f:E\rightarrow \mathbb{R}$. – resu Jul 23 '11 at 9:44
Also (@ Qiaochu) what would be all the other meanings of network? – resu Jul 23 '11 at 9:45
@resu: I don't know. All I'm saying is that a lot of people use the word "network" (computer scientists, biologists, social scientists, etc.) and I would be rather surprised if their usages matched exactly. It doesn't, to my knowledge, have a precise, agreed-upon mathematical definition, and before you gave that definition I thought you were referring to a graph. – Qiaochu Yuan Jul 23 '11 at 14:17
show 1 more comment

## 3 Answers

I don't see how the metaphor fails if the graph is disconnected. Your pipe system is just broken (or perhaps under construction), and no flow is possible. A broken system of pipes is still a system of pipes.

-

It is possible that the author had a sensor network in mind. These usually have sensors (sources) and a sink that fuses the sensor readings. The capacity can also be defined. These networks may or may not be connected. Obviously it helps if there is always a path from all sources to the sink. If the path momentarily disappears, the sources may store their readings until a route is re-established, etc.

-

Outside of flow networks (which are precisely defined)... What is the right way to think of a network? There isn't one right way. What a "network" is is context dependent. Authors often do not give a precise definition of a network, instead opting for a more encompassing intuitive definition of "network". Should one think of a network as a connected graph?

• In real-world contexts, a network might not even be considered a graph. For example, in a biological neural network neuronal connections are made via an axon to other neurons. A single axon may connect to many other neurons (and have many connections to each of them). This is an advantage for e.g. coordinating movement. It's not obvious how to interpret the network as a graph (and whether or not a graph is the best data structure to use). E.g.
if we represent a single axon with multiple edges, then the graph does not capture the property that if we cut that single axon, then we cut a whole bunch of edges. • In real-world contexts, for a network that is considered a graph (which may or may not be directed), it may or may not be important that the graph is connected (either weakly or strongly). E.g. in my area of research, network motifs, it can be largely ignored (it's not relevant to the problems we're trying to solve). In cases where connectivity is important (e.g. computer networks), the distinction between connected and disconnected networks is an important consideration. In other studies, a largest component (or giant component) may be instead studied. -
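As a concrete illustration of the flow-network definition quoted in the comments (a sketch only; the node names and capacities below are made up), one can store capacities on directed edges and use a plain BFS to check whether the sink is even reachable from the source - the property that fails when the "pipe system is broken":

```python
from collections import deque

# A toy flow network: directed edges with capacities, plus a source and a sink.
capacity = {
    ('s', 'a'): 4.0,
    ('s', 'b'): 2.0,
    ('a', 't'): 3.0,
    ('b', 'a'): 1.0,
}
nodes = {'s', 'a', 'b', 't', 'c'}   # 'c' is deliberately disconnected
source, sink = 's', 't'

def reachable(capacity, start):
    """Nodes reachable from `start` along directed edges (plain BFS)."""
    adj = {}
    for (u, v) in capacity:
        adj.setdefault(u, []).append(v)
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

print(sink in reachable(capacity, source))   # True: some flow is possible
print(nodes - reachable(capacity, source))   # {'c'}: the disconnected part
```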
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9626926183700562, "perplexity_flag": "middle"}
http://crypto.stackexchange.com/questions/3220/efficient-zero-knowledge-proof-of-least-significant-bits-either-rsa-or-rabin?answertab=oldest
# Efficient zero knowledge proof of least significant bits of either RSA or Rabin?

I need to reveal the log(n) least significant bits of $x^2 \bmod N$ or $x^3 \bmod N$ without revealing $x$. So far the best I have involves a Boudot range proof, and it is not a very nice construction.

- 1 What exactly would the ZKP actually prove? That you know a value $x$ with $(x^2 \bmod N) \bmod 2^k = value$, for some modest $k$ which you list as $\log(n)$, for some $n$? Or, in other words, how would the verifier know that the value of $x$ involved is the secret $x$ you're thinking of? – poncho Jul 13 '12 at 15:31
The full proof would have other predicates about $x$ in it. So you might prove, for example, that you know the factorization of $x$ and then show the least significant bits of the RSA encryption of $x$. – imichaelmiers Jul 13 '12 at 15:58
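To pin down the public relation poncho's comment describes (toy numbers only; a real instance would use a large RSA or Rabin modulus):

```python
# Toy parameters for illustration, not secure values.
N = 3233          # e.g. 61 * 53
k = 5             # number of least-significant bits to reveal (log(n) in the question)
x = 1234          # the prover's secret

v = (x * x % N) % (1 << k)   # the value the prover reveals
print(v)

# The statement to be proven in zero knowledge is then:
# "I know x such that (x^2 mod N) mod 2^k = v" -- revealing v but nothing else about x.
```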
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9460448026657104, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/7320/heuristic-explanation-of-why-we-lose-projectives-in-sheaves/7480
## Heuristic explanation of why we lose projectives in sheaves.

### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points)

We know that presheaves of any category have enough projectives and that sheaves do not. Why is this, and how does it affect our thinking? This question was asked (and I found it very helpful), but I was hoping to get a better understanding of why.

I was thinking about the following construction (given during a course): given an affine cover, we normally study the quasi-coherent sheaves, but in fact we could study the presheaves in the following sense. Given an affine cover of X, $Ker_2\left(\pi\right)\rightrightarrows^{p_1}_{p_2} U\rightarrow X$, then we can define $X_1:=Cok\left(p_1,p_2\right)$, a presheaf, to obtain refinements in presheaves where we have enough projectives and the quasi-coherent sheaves coincide. Specifically, if $X_1\xrightarrow{\varphi}X$ for a scheme $X$, s.t. $\mathcal{S}\left(\varphi\right)\in Isom$, where $\mathcal{S}(-)$ is the sheafification functor, then for all affine covers $U_i\xrightarrow{u_i}X$ there exists a refinement $V_{ij}\xrightarrow{u_{ij}}U_i$ which factors through $\varphi$. This hinges on the fact that $V_{ij}$ is representable and thus projective, a result of the fact that we are working with presheaves. In sheaves, we would lose these refinements. Additionally, these presheaves do not depend on the specific topology (at the cost of gluing). In this setting, we lose projectives because we are applying the localization functor, which is not exact (only right exact). However, I don't really understand this reason, and would like a more general answer.

A related appearance of this loss is in homological algebra. Sheaves do not have enough projectives, so we cannot always get projective resolutions. They do have injective resolutions, and this is related to the use of cohomology of sheaves rather than homology of sheaves. In particular, in Rotman's Homological Algebra, p. 314, he gives a footnote: In The Theory of Sheaves, Swan writes "...if the base space X is not discrete, I know of no examples of projective sheaves except the zero sheaf." In Bredon, Sheaf Theory: on locally connected Hausdorff spaces without isolated points, the only projective sheaf is 0, addressing this situation.

In essence, my question is for a heuristic or geometric explanation of why we lose projectives when we pass from presheaves to sheaves. Thanks in advance!

- You made two mistakes in your statements. In fact, in the category of sheaves we still have this refinement, because the sheafification functor, as a localization, will not kill the morphism from $V_{ij}\to U_i$. This is the reason that we have this refinement in the category of schemes. The correct statement is that we might not be able to prove this statement in the category of sheaves, since we have lost projectives, but we can do it within presheaves. – Shizhuo Zhang Jan 2 2010 at 5:10
You said the sheafification functor, as a localization functor, is not exact. This is wrong. The sheafification functor is exact because the localization is at a Serre subcategory of the presheaf category. – Shizhuo Zhang Jan 2 2010 at 6:11
In fact, there are some other incorrect statements in your post, such as "because of the loss of projective resolutions, we use cohomology not homology..."; this is not the reason.
– Shizhuo Zhang Jan 2 2010 at 6:14 ## 4 Answers This is pretty much Dinakar's answer from a different view point: He says that it is too easy for a sheaf morphism to be an epi, so, since there are so many epis, it is now a stronger requirement that for every epi we find a lift - so strong that is not satisfied most of the times. I just want to call attention to the fact that this problem has nothing to do with module sheaves but is about sheaves of sets - and as such has the following nice interpretation: The condition of being a projective module sheaf can be split in two conditions: That of existence of the lifting map as a morphism of sheaves of sets and that of it being a morphism of module sheaves. In the category of sets the first condition is always satisfied; we have the axiom of choice which says that every epimorphism has a section and composing the morphism from our would-be projective with this section produces a lift - set-theoretically. Then one has to establish that one such lift is a module homomorphism. But in a sheaf category step one can fail. Sheaves (of sets) are objects in the category of sheaves. This category is a topos and can be seen as an intuitionistic set-theoretic universe (in a precise sense: there is a sound and complete topos semantics for intuitionistic logic, see e.g. this book). Now in an intuitionistic universe of sets, the axiom of choice is not valid in general; there might not be a "set-theoretic" section of the epimorphism! - Thank you for your answer. I REALLY liked your added information. I am surprised that the issue arises from the existence of the morphism rather than the satisfaction of the module condition. I have two more questions(if you feel your answers need another question rather than just comments, let me know). First, we have two places that in sheaf categories this existence of the lift can fail, as you said steps one and two. How often is it the first or second resp? Second, I am a novice in topos theory, could you explain more about the failure of the axiom of choice in these situations? Thanks! – B. Bischof Dec 3 2009 at 20:47 I have no answer to "how often" one or two fail. A module sheaf can fail to be projective because there is a map into it which is stalkwise surjective, but not open-set-wise, as Dinakar said. I don't have a criterion for this but the feeling that this is VERY common. The failure of step two you can recognise as in sets e.g. because there is torsion in the module etc. – Peter Arndt Dec 4 2009 at 1:38 The axiom of choice (AC) fails for sheaves over every non-discrete T_1-space: AC implies Booleanness (Moerdijk/MacLane, chap.VI, exercise 16), i.e. that the lattice of open sets is a Boolean algebra; for a T_1-space this means that it is discrete (MM,ch.VI ex.3) – Peter Arndt Dec 4 2009 at 1:41 An example for Dinakar's situation: Let V be a fixed open set of some space X and consider sheaves on X. Let F,G be the sheaves with F(U):={continuous real-valued functions on U} and G(U):={cont. real-valued functions on (U intersected with V)}. The morphism F->G given by restricting functions to V is not surjective on every open set U, since there could exist a function on (U \cap V) which tends to infinity towards the border and thus is not extendable to U, i.e. is not a restriction of something defined on all of U. But surjectivity at stalks is clear by looking at small enough open sets... – Peter Arndt Dec 4 2009 at 2:11 I don't think the map is surjective on stalks on the boundary of V. 
Do you have another example in mind? – Steven Gubkin Jan 8 2010 at 13:15 show 3 more comments ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. One reason is that surjectivity of a map of sheaves is a weaker condition than surjectivity of a map of presheaves. For a map of sheaves to be surjective, it need only be surjective on stalks. Recall the definition of a projective sheaf $\mathcal{P}$: Suppose $\mathcal{N} \rightarrow \mathcal{M}$ is a surjective map of sheaves and $\mathcal{P} \rightarrow \mathcal{M}$ is a sheaf map. Then we require that there exists a lifting $\mathcal{P} \rightarrow \mathcal{N}$ making the obvious diagram commute. Because of the definition of surjectivity for sheaves, there's probably an open set $U$ for which the map $\mathcal{N}(U) \mapsto \mathcal{M}(U)$ isn't surjective. So if $\mathcal{P}(U)$ doesn't map into the image, then there is no hope for a lifting. In all but the trivial cases (like discrete spaces), it will be easy to cook up a map $\mathcal{N} \rightarrow \mathcal{M}$ to do this. For presheaves, surjectivity means surjectivity on each open set, so this problem doesn't happen. But presheaves as an abelian category aren't very interesting. For example, the strictness of surjectivity means there is no cohomology. - This was exactly what I was looking for, Thank you! – B. Bischof Dec 1 2009 at 19:06 Your answer was excellent, but I have switched the accepted answer to Peter's due to it being a bit more specific and providing a bigger and more complete picture. Thank you very much for this explanation as it has helped very much. – B. Bischof Dec 3 2009 at 20:49 Here is an answer to a slightly different question, namely, what can one do once it is established that projective sheaves very often do not exist. For a nonaffine scheme, there is no known analogue of projective quasi-coherent sheaves on an affine scheme, but there is an analogue of the unbounded homotopy category of complexes of projectives. Namely, Amnon Neeman has proven that the homotopy category of complexes of projective modules over an (arbitrary, noncommutative) ring is equivalent to the quotient category of the homotopy category of complexes of flat modules by the triangulated subcategory of acyclic complexes of flat modules with flat modules of cocycles. Building upon this result, Daniel Murfet in his Ph.D. thesis studies the mock homotopy category of projectives on a separated Noetherian scheme, defined as the quotient category of the homotopy category of unbounded complexes of flat quasi-coherent sheaves by the triangulated subcategory of pure acyclic complexes. - Thank you for this, as you said, not what I was looking for, but this helped in another way. – B. Bischof Dec 1 2009 at 19:07 Sorry if this is silly; but might it have something to do with needing in the sheaf category to consider the sheafified presheaf cokernel in order to talk about projections? that is, I (think I) can imagine a nontrivial preasheaf with only trivial stalks, so that its sheafification is trivial; on the other hand, the sheaf morphisms are just the same as presheaf morphisms between sheaves. Hence there are probably too many sheaf morphisms with trivial cokernel. - Consider the sheaves $C^1:U\mapsto\mathcal{F}(U)$ and $C^2:U\mapsto\mathcal{F}(U^2)$, the free abelian groups on the points and pairs of points in $U$ respectively. 
There is a sheaf morphism $C^1\rightarrow C^2$ induced by the diagonal. The presheaf cokernel at $U$ is isomorphic to the free abelian group generated by pairs $(u,v)$ with $u\neq v$; this clearly has trivial stalks, so the sheaf cokernel is trivial. But I'm afraid I don't know how this picture generalizes when points aren't the right thing to talk about. – some guy on the street Dec 1 2009 at 17:29 Wait: is $C^2$ a sheaf? dear me... I'm afraid maybe not. In fact the stalks of $C^2$ look isomorphic to those of $C^1$ ... – some guy on the street Dec 2 2009 at 0:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 28, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9452678561210632, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-statistics/209439-find-mean-standard-deviation-sum-difference-independent-random-v.html
# Thread: 1. ## Find the mean and standard deviation of the sum or difference of independent random v what should i do for A?? and for B, is it asking me to convert all the temperatures to Fahrenheit? Attached Thumbnails 2. ## Re: Find the mean and standard deviation of the sum or difference of independent rand They are asking for $E[X-550]$, and $\textrm{Var}(X-550)$. You should have some formulas that should make the preceding quite simple. 3. ## Re: Find the mean and standard deviation of the sum or difference of independent rand yes i figured out how to get the mean... i just simply take the temperatures that are off target and minus it to 550... i got that part but what i don't know how to get is the standard deviation.... would you happen to know how to calculate that based on my problem??
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.924153208732605, "perplexity_flag": "head"}
http://mathoverflow.net/questions/74102/rational-function-identity/74160
## rational function identity ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) I just had to make use of an elementary rational function identity (below). The proof is a straightforward exercise, but that isn't the point. First, "my" identity is almost surely not original, but I don't have a reference for it. Perhaps someone knows it (like a lost cat without a collar) or, more likely, could spot this as a special case of a more general identity. Second, the obvious proof is not much of an explanation: a combinatorial identity often arises for a conceptual reason, and I'd be happy to hear if anyone sees mathematics behind this one. Let $f(x_1,\ldots,x_n)=\prod_{p=1}^n\big(\sum_{i=p}^n x_i\big)^{-1}$. Then $$f(x_1,\ldots,x_n)+f(x_2,x_1,x_3,\ldots,x_n)+\cdots+f(x_2,\ldots,x_n,x_1)=\big(\sum_{i=1}^n x_i\big)/x_1\cdot f(x_1,\ldots,x_n),$$ where $x_1$ appears as the $i$th argument to $f$ in the $i$th summand on the left side, for $1\leq i\leq n$. But why? - 7 +1 for the lost cat. See, mathematics is not only about finding black cats in dark rooms - sometimes it is about finding the owner. – darij grinberg Aug 31 2011 at 13:55 ## 4 Answers I have seen a cat of a similar breed in the representation theory of symmetric groups. Out of habit, let me quote a lemma attributed to Littlewood in Donald Knutson, $\lambda$-rings and the Representation Theory of the Symmetric Group, Springer 1973 (LNM #308), Chapter III, section 2, p. 149: $\sum\limits_{\sigma\in S_n} f\left(x_{\sigma\left(1\right)},x_{\sigma\left(2\right)},...,x_{\sigma\left(n\right)}\right) = \frac{1}{x_1x_2...x_n}$. At the moment, neither does this cat imply yours, nor the other way round. But can we cross them? Let me try. The left paw side of your cat is $\sum\limits_{\sigma\in \mathrm{Sh}\left(1,n-1\right)} f\left(x_{\sigma^{-1}\left(1\right)},x_{\sigma^{-1}\left(2\right)},...,x_{\sigma^{-1}\left(n\right)}\right)$, where $\mathrm{Sh}\left(a,b\right)$ is defined as the subgroup $\left\lbrace \sigma \in S_{a+b} \mid \sigma\left(1\right) < \sigma\left(2\right) < ... < \sigma\left(a\right) \text{ and } \sigma\left(a+1\right) < \sigma\left(a+2\right) < ... < \sigma\left(a+b\right) \right\rbrace$ of the symmetric group $S_{a+b}$. (The elements of this subgroup $\mathrm{Sh}\left(a,b\right)$ are known as $\left(a,b\right)$-shuffles.) Now I suspect tat $\sum\limits_{\sigma\in \mathrm{Sh}\left(a,b\right)} f\left(x_{\sigma^{-1}\left(1\right)},x_{\sigma^{-1}\left(2\right)},...,x_{\sigma^{-1}\left(a+b\right)}\right) = f\left(x_1,x_2,...,x_a\right) f\left(x_{a+1},x_{a+2},...,x_{a+b}\right)$ for any $a$ and $b$ and any $x_i$. This generalizes your cat. Does it generalize Littlewood's? Yes, at least if we generalize it even further, to the so-called *$\left(a_1,a_2,...,a_k\right)$-multishuffles* (which are permutations $\sigma\in S_{a_1+a_2+...+a_k}$ increasing on each of the intervals $\left[a_i+1,a_{i+1}\right]$, where $a_0=0$ and $a_{k+1}=n$). This is not much of a generalization, since it follows from the $\left(a,b\right)$-shuffle version by induction over $k$, but applying it to $\left(1,1,...,1\right)$-multishuffles (which are simply all the elements of $S_n$) yields Littlewood's cat. Now I see that Littlewood's cat even follows from yours, if we notice that every permutation $\sigma\in S_n$ can be written uniquely as a product $t_1t_2...t_{n-1}$, where each of the $t_k$ moves the $k$ some places to the right. (This is one of the stupid sorting algorithms.) 
Oh, and I don't have a proof of my cat, but it can catch mice, so it's a good cat, isn't it? - I enjoy the cat comparisons. They will get a fifth vote from my voting account. (If the moderators let me, I would add 6 more to it.) Gerhard "Yes, I'm A Cat Person" Paseman, 2011.08.31 – Gerhard Paseman Aug 31 2011 at 16:20 Yes, that's an excellent cat!! So Darij conjectures that, more generally, $f$ satisfies this "shuffle-coproduct" identity for any $(a,b)$. (Which, Frédéric points out, makes $f$ into a "symmetral mould", if I understand correctly, but I'm kind of fuzzy about operads.) Is it perhaps possible to prove this using, say, some clever trick like Tom proposed for the $(1,n-1)$ case? – Graham Denham Aug 31 2011 at 19:17 1 Actually this cat is no longer a conjecture, because the proof is completely straightforward: Every $\sigma \in \mathrm{Sh}\left(a,b\right)$ satisfies either $\sigma^{-1}\left(1\right)=1$ or $\sigma^{-1}\left(1\right)=a+1$. Thus, the sum splits into two parts, each of which can be handled by induction (once for $\left(a-1,b\right)$ instead of $\left(a,b\right)$, and once for $\left(a,b-1\right)$ instead of $\left(a,b\right)$). – darij grinberg Aug 31 2011 at 22:32 2 Like every proof related to shuffles, this proof is very simple but nigh impossible to formalize. We really need a reasonable shuffle calculus. Maybe dendriform dialgebras can be of use here. – darij grinberg Aug 31 2011 at 22:34 1 For me, they come from the shuffle Hopf algebra. ;) – darij grinberg Sep 1 2011 at 7:32 show 1 more comment ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. This property ( or rather the generalized version by Darij using (a,b)-shuffles ) means that f is what is called a "symmetral mould" in the context of Ecalle's theory of moulds. There is a related notion of "alternal mould" where the right hand side is 0 rather than a product of two f. Here is just one reference among many : page 591 of Jean Ecalle; Bruno Vallet The arborification-coarborification transform: analytic, combinatorial, and algebraic aspects This may not be transparent when looking at this article. Maybe page 2 of my article The anticyclic operad of moulds would be more clear, but it only defines "alternal moulds". ADDED • The symmetral property is really a property of sequence of functions $f_n$, with $f_n$ a function of $n$ variables $x_1,\dots,x_n$. • The notions of alternal and symmetral moulds, when considered under some specific point of view, turn into the notion of primitive and group-like element in a Hopf algebra. - A simple proof of the Sh(a,b) cat, using iterated integrals, is as follows. Note that $$f(x_1,\ldots,x_n)=\int_{1>t_1>\cdots>t_n>0} dt_1\cdots dt_n \ t_1^{x_1-1}\cdots t_n^{x_n-1}\ .$$ Littlewood's identity follows from changing variables using the permutation so as to keep the integrand fixed. Then one has a sum of simplices (corresponding to all possible relative orderings of the variables) which recombines into a cube of integration $[0,1]^n$. The proof of the Sh(a,b) identity follows the same idea. Here the total volume of integration is a product of simplices which is broken into a union of simplices. This is probably well known to people working with moulds, operads, etc. An additional remark: Littlewood's identity follows from Lemma II.2 in my article "Trees forests and jungles: a botanical garden for cluster expansions" with V. Rivasseau. 
To see this, extract the coefficient of the highest degree monomial in the v variables (notations of that article), then specialize the u variables to the case where $u_{i, i+1}=x_i$ and all other pair variables are zero (killing all edges of the complete graph which are not in a `spanning chain'). The Lemma in our article is related to many other topics in mathematical physics such as the Wilson-Polchinski renormalization group equation, see e.g. these slides. - So Graham was right, it was a stray cat from the land of topology. (Incidentally, Damien Calaque answered another question of mine about shuffles using your integral: mathoverflow.net/questions/63923/… .) – darij grinberg Sep 1 2011 at 19:35 I think the cat is too feral to belong to one particular land... – Abdelmalek Abdesselam Sep 1 2011 at 19:40 A geometric argument! That's very nice too. – Graham Denham Sep 3 2011 at 14:03 I'm not sure whether my answer is conceptual in your sense, but here is a relatively short proof. First of all, your definition of $f$ suggests the notation $$s_p := \sum_{i=p}^n x_i.$$ Now consider the following telescopic sum: \begin{equation}\label{eq} (1 - z_2) + z_2(1 - z_3) + z_2 z_3 (1 - z_4) + \dotsm + z_2 \dotsm z_{n-1} (1 - z_n) + z_2 \dotsm z_n = 1. \quad (*) \end{equation} For each $i \in {2,\dots,n}$, take $$z_i = \frac{s_i}{x_1 + s_i},$$ hence $$1 - z_i = \frac{x_1}{x_1 + s_i},$$ and plug this into the telescopic sum $(*)$. Divide both sides of the equation by $x_1 \cdot s_2 s_3 \dotsm s_n$ to get the desired expression. - 2 That's nicer than my argument (namely: just rewrite in terms of $s_i$'s, clear denominators, and collapse). It's possible, I guess, that my identity can only be seen as an obscured form of (*), in which case I should not expect too much from it. But I'll remain optimistic for a while. – Graham Denham Aug 31 2011 at 14:26 You say the definition of f suggests the notation s_i = ..., but s_i makes no sense: the i on the right side is the index of summation. You meant s_p, not s_i. – KConrad Sep 2 2011 at 12:50 @KConrad: Sorry for the obvious typo; I will correct it. Thanks for noticing :) – Tom De Medts Sep 3 2011 at 17:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 53, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9107353687286377, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/54692/why-is-the-free-energy-minimized-by-the-boltzmann-distribution
# Why is the free energy minimized by the Boltzmann distribution? Can someone show me, without glossing over anything, why $F = E - TS$ is minimized when $p_i = e^{-U_i/k_bT}/\sum_ie^{-U_i/k_bT}$? I understand it conceptually, but am having difficulty showing it formally. - 1 Where are you having trouble? – Michael Brown Feb 22 at 1:15 I know that $E-TS=\sum_iU_ip_i+k_BT*\sum_ip_iln(p_i)$ I take the derivative of F with respect to $p_i$, yielding $\sum_iU_i+k_BT*\sum_i(1/p_i)$, and substitute the Boltzmann distribution for $p_i$ expecting to get 0, but I'm having trouble with the algebra. Is this a reasonable approach? – user21243 Feb 22 at 1:19 1 @user21243 - It would be a good idea to post what you've tried along with your question. – Kitchi Feb 22 at 5:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9511785507202148, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/27122/list
## Return to Answer 2 typos Here's an argument that the diagonal Lagrangian correspondence $\Delta$ in $\mathbb{C}P^n \times \mathbb{C}P^n$ is formal. That is, its Floer cochains $CF^\ast(\Delta,\Delta)$, as an $A_\infty$-algebra over the rational Novikov field $\Lambda=\Lambda_\mathbb{Q}$ (say), are quasi-isomorphic to the underlying cohomology algebra $HF^\ast(\Delta, \Delta)\cong QH^\ast(\mathbb{C}P^n; \Lambda)$ with trivial $A_\infty$ operations $\mu^d$ except for the product $\mu^2$. Be critical; I might have slipped up! Write $A$ for $QH^\ast(\mathbb{C}P^n; \Lambda)=\Lambda[t]/(t^{n+1}=q)$. Here $q$ is the Novikov parameter. I claim that $A$ is intrinsically formal, meaning that every $A_\infty$-structure on $A$ A$, with$\mu^1=0$and$\mu^2$the products product on$A$A$, can be modified by a change of variable so that $\mu^d=0$ for $d\neq 2$. Suppose inductively that we can kill the $d$-fold products $\mu^d$ for $3\leq d\leq m$. Then $\mu^{m+1}$ is a cycle for the Hochschild (cyclic bar) complex $C^{m+1}(A,A)$. The obstruction to killing it by a change of variable (leaving the lower order terms untouched) is its class in $HH^{m+1}(A,A)$. But $A$ is a finite extension field of $\Lambda$ (and, to be safe, we're in char zero). So, as proved in Weibel's homological algebra book, $HH^\ast(A,A)=0$ in positive degrees, and therefore the induction works. Taking a little care over what "change of variable" actually means in terms of powers of $q$, once one concludes intrinsic formality. You made a much more geometric suggestion - to invoke GW invariants. If you want to handle $\Delta_M\subset M\times M$ more generally, I think this is a good idea, though I can't immediately think of a suitable reference. One can show using "open-closed TQFT " arguments that $HF(\Delta_M,\Delta_M)$ is isomorphic to Hamiltonian Floer cohomology $HF(M)$. One could do this at cochain level and thereby show that the $A_\infty$ product $\mu^d$ of $HF(\Delta_M,\Delta_M)$ corresponds to the operation in the closed-string TCFT of Hamiltonian Floer cochains arising from a genus zero surface with $d$ incoming punctures and one outgoing puncture (and varying conformal structure). Via a "PSS" isomorphism with $QH(M)$, these operations should then be computable as genus-zero GW invariants (or at any rate, the cohomology-level Massey products derived from the $A_\infty$-structure should be GW invariants). 1 Here's an argument that the diagonal Lagrangian correspondence $\Delta$ in $\mathbb{C}P^n \times \mathbb{C}P^n$ is formal. That is, its Floer cochains $CF^\ast(\Delta,\Delta)$, as an $A_\infty$-algebra over the rational Novikov field $\Lambda=\Lambda_\mathbb{Q}$ (say), are quasi-isomorphic to the underlying cohomology algebra $HF^\ast(\Delta, \Delta)\cong QH^\ast(\mathbb{C}P^n; \Lambda)$ with trivial $A_\infty$ operations $\mu^d$ except for the product $\mu^2$. Be critical; I might have slipped up! Write $A$ for $QH^\ast(\mathbb{C}P^n; \Lambda)=\Lambda[t]/(t^{n+1}=q)$. Here $q$ is the Novikov parameter. I claim that $A$ is intrinsically formal, meaning that every $A_\infty$-structure on $A$ with $\mu^1=0$ and $\mu^2$ the products on $A$ can be modified by a change of variable so that $\mu^d=0$ for $d\neq 2$. Suppose inductively that we can kill the $d$-fold products $\mu^d$ for $3\leq d\leq m$. Then $\mu^{m+1}$ is a cycle for the Hochschild (cyclic bar) complex $C^{m+1}(A,A)$. The obstruction to killing it by a change of variable (leaving the lower order terms untouched) is its class in $HH^{m+1}(A,A)$. 
But $A$ is a finite extension field of $\Lambda$ (and, to be safe, we're in char zero). So, as proved in Weibel's homological algebra book, $HH^\ast(A,A)=0$ in positive degrees, and therefore the induction works. Taking a little care over what "change of variable" actually means in terms of powers of $q$, once concludes intrinsic formality. You made a much more geometric suggestion - to invoke GW invariants. If you want to handle $\Delta_M\subset M\times M$ more generally, I think this is a good idea, though I can't immediately think of a suitable reference. One can show using "open-closed TQFT" arguments that $HF(\Delta_M,\Delta_M)$ is isomorphic to Hamiltonian Floer cohomology $HF(M)$. One could do this at cochain level and thereby show that the $A_\infty$ product $\mu^d$ of $HF(\Delta_M,\Delta_M)$ corresponds to the operation in the closed-string TCFT of Hamiltonian Floer cochains arising from a genus zero surface with $d$ incoming punctures and one outgoing puncture (and varying conformal structure). Via a "PSS" isomorphism with $QH(M)$, these operations should then be computable as genus-zero GW invariants (or at any rate, the cohomology-level Massey products derived from the $A_\infty$-structure should be GW invariants).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 79, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9258822798728943, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/27811/will-a-precessing-spinning-whell-fall-down-if-there-is-no-friction-at-all
Will a precessing spinning whell fall down if there is no friction at all? If there where no friction at all, would a spinning wheel held up by one end of the axis spin precess forever without falling down? Direction of torque precession of a spinning wheel Since it seems to be a good practice on stackexchange not to ask several questions in one post, I splitted them up into two questions. However if I am wrong, feel free to merge this questions. - If there were no friction at all and the wheel did fall down, where would the energy in the spinning wheel have gone? – Peter Shor May 5 '12 at 14:05 @PeterShor The wheel just continues to spin. – martin May 5 '12 at 14:50 So the spinning would speed up to compensate for the loss of gravitational potential energy in the wheel? I suppose that wouldn't violate conservation of energy, and angular momentum isn't locally conserved here anyway, so maybe you do need to use some actual physics to get the right answer. – Peter Shor May 5 '12 at 15:24 Actually this is a very good case for having two separate questions. – David Zaslavsky♦ May 5 '12 at 15:27 – Qmechanic♦ May 5 '12 at 18:53 1 Answer It is spinning forever. As you see, change of angular momentum $$\frac{\text{d}\vec{L}}{\text{d}t} = \vec{\tau}$$ is always perpendicular to angular momentum itself, which means that angular momentum's direction is changed, while its magnitude is constant. Note the mathematical analogy with velocity and acceleration in case of circular rotation with constant velocity: $$\frac{\text{d}\vec{v}}{\text{d}t} = \vec{a}_\text{cp}$$ - Hm, this should be also the case for a very very small $\omega$, seems to be very unintuitive for me – martin May 5 '12 at 19:19 First step is understanding without thinking about $\omega$. If you want $\omega$ in the picture you should use Euler's equations, because $\vec{L} = I \vec{\omega}$ is valid only for fixed axis rotations (and of course along principal axis of moment of intertia too!). It can become extremely complicated to understand things when Euler's equations are considered. – Pygmalion May 5 '12 at 19:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9435681104660034, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/104103/how-do-i-show-for-nonzero-constants-a-and-b-that-operatornamecorr-x-y
# How do I show for nonzero constants $a$ and $b$ that $\operatorname{Corr} (x,y) = -1$ or $1$? Let $X$ be a random variable with a mean of $\mu$ and a variance of $\sigma^2$ and let $Y = aX +b$. Show for non-zero constants $a$ and $b$ that $\operatorname{Corr}(X; Y ) = +1$ or $-1$. - ## 1 Answer Use the fact that $$\text{Corr}(X,Y) = \frac{\text{Cov}(X,Y)}{\sigma_{X} \sigma_{Y}}$$ - is there a way to break apart the equation y= ax+b in order to incorporate a and b into the formula for correlation? – kay Jan 31 '12 at 15:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8217882513999939, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/67885/hausdorff-measure-with-non-power-test-function?answertab=active
Hausdorff measure with non-power test function At my analysis course some time ago we were told that there is definition of Hausdorff measure through the test functions which are continuous and non-decreasing $h:(0,\infty)\to(0,\infty)$ and defined for a subset $E$ of a metric space as $$\mathcal H^h(E) = \lim\limits_{\delta\to 0}\left(\inf\limits_{\Xi(\delta)}\sum\limits_{k}h(r_k)\right)$$ where $\inf$ is taken w.r.t. to all at most countable covers of $E$ with closed balls of the radius $r_k\leq\delta$. If one put $h(r) = r^d$ he has a Hausdorff measure which helps to find the Hausdorff dimension. We were also told that there are examples when set has non-trivial measure with $h$ different from the power function, e.g. logarithmic Hausdorff measure with $h(r)=\min\left(1,\frac1{-\log r}\right)$. But we weren't told about the examples of sets which admit non-trivial ($\neq0,\neq\infty$) such measure. Do you know any? Not necessary for the logarithmic $h$. - 1 – Brian M. Scott Sep 28 '11 at 7:55 @BrianM.Scott: thanks, that's an interesting approach to measure the paths of BM. – Ilya Sep 28 '11 at 8:09 @BrianM.Scott: I will be grateful to you if you will put your comment as an answer – Ilya Oct 8 '11 at 18:36 3 Answers I don’t know enough about Hausdorff measure to write up a real answer, but the last two paragraphs of this look as if they might be useful. - Felix Hausdorff, in his original 1914 paper, constructs a Cantor set with positive,finite measure for a large class of such functions. [English translation in Classics on Fractals ] - By consider easy such functions $h$, we can come up with some easy examples. Take as example the test function $h(r) = cr^d$ for some constant $c>0$. Then any set with nonzero finite Hausdorff measure of dimension $d$ would have nonzero finite measure also with this testfunction. If we let $h$ depend on not only the radius of the balls in the cover but also on their centers, we can get any measure which is absolute continuous to a Hausdorff measure. In this case we can also get a less trivial example by considering $h(x,r)=r^{d(x)}$ where $x$ is the center of the ball. By imitating the Cantor construction but removing intervals at each step of size dependent on their midpoint, it is possible to construct sets of "nonconstant" Hausdorff dimension. If we let $d(x)$ be the local dimension at each point in the resulting set; $h(x,r)=r^{d(x)}$ should give a nonzero and finite measure to the constructed set when measured with this test function. Wikipedia gives a less intuitive example here; the test function $h(r) = r^2 \log \frac{1}{t} \log \log \log \frac{1}{t}$ gives almost surely $\sigma$-finite measure to the Brownian path in $\mathbb{R}^n$ for $n>2$. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9455110430717468, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/296156/normal-distribution-stats
# Normal Distribution Stats Let $$X \sim N(65,20)$$ Find correct to $3$ Decimal Place the value of $x$ such that $Pr(X>x) = 0.43$ - What have you tried? Which tools are you allowed to use? – Stefan Hansen Feb 6 at 9:34 2 Down-voted because the OP didn't indicate any interaction with their question. – Rustyn Yazdanpour Feb 6 at 9:47 @StefanHansen i've gotten to $\frac {x-65}{2(5)^{1/2}} =0.1764$ and hence $x = 67.789$ ?? – Sam Feb 6 at 10:05 – Stefan Hansen Feb 6 at 10:09 im not 100% sure where i've gone wrong with my working out? – Sam Feb 6 at 10:15 show 3 more comments ## 1 Answer Making my comment as an answer. The equation $$\frac{x-65}{\sqrt{20}}=0.1764$$ you proposed is correct. However, the solution to this is $$0.1764\cdot \sqrt{20}+65\approx 65.789$$ and not $67.789$. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9280880689620972, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/262863/proof-for-fracxnxn-xn-n/262868
# Proof for $\frac{x^n}{x^n}=x^{n-n}$ Does the equality $\forall n,x\not=0: \frac{x^n}{x^n}=x^{n-n}$ come straight from the definition of exponents, or is a more elaborate proof needed? Please note that I'm not asking about $x^0=1$, but only about the $\frac{x^n}{x^n}=x^{n-n}$ part of the equation. - 4 Depends on your definition of powers. – Qiaochu Yuan Dec 20 '12 at 20:12 $x^0=1 \ \forall x$ is a convention but it has combinatorical justification i.e. the number of combinations drawing 0 elements with $x$ colors is $1$. – Amihai Zivan Dec 20 '12 at 20:12 1 You could prove it inductively, otherwise it just follows from exponent rules – Math_Illiterate Dec 20 '12 at 20:20 1 It also follows from the fact that $\frac{y}{y} = 1$ whenever $y \ne 0$. – Joel Cohen Dec 20 '12 at 20:33 Well, write $n$ of them at both numerator and the denominator and simplify. – ashley Dec 20 '12 at 20:40 show 1 more comment ## 3 Answers $$\large \dfrac{x^n}{x^n}= x^n \cdot \frac{1}{x^n} = x^n \cdot x^{-n} = x^{(n\ + \ - n)} = x^{(n\ - \ n)} = (x^0 = 1).$$ Note: just as $\dfrac{1}{a}$ is the multiplicative inverse of $a$, (and can be represented by $a^{-1}$). The multiplicative inverse of $a$ is the number you need to multiply by to arrive at $1$: That is, we need $a^{-1}$ to be such that $a\cdot a^{-1} = a^{-1} \cdot a = 1$ (since $1$ is the multiplicative identity for the real numbers: any number multiplied by $1$ remains unchanged). This works for all non-zero $a$ provided $a^{-1} = \dfrac1a.$ Then we have that $a \cdot \dfrac1a = 1$ $$\large \dfrac{1}{x^n} = \underbrace{\frac1x\cdot \frac1x \cdots \frac1x}_{n-times} = \underbrace{x^{-1}\cdot x^{-1} \cdot \cdots x^{-1}}_{n-times} = x^{\overbrace{-1 + -1 + \cdots + -1}^{n - times}} = x^{-n}$$ So I'd say you can prove this simply by using the exponent rules. You might also want to do so using induction, but might be incorporating the use of rules of exponentiation, implicitly. It's sort of a "which came first, the chicken or the egg" question as to which definition is most primitive. - 2 I think that for this question it is also good to say why $1/x^n=x^{-n}$ (+1) – Belgi Dec 20 '12 at 20:39 @Belgi: I edited about an hour ago to expand on your suggestion. Is this what you had in mind? – amWhy Dec 20 '12 at 22:57 Thanks for the edit, but now I feel you need to explain what I wrote in my first comment for $n=1$. What I had in mind that you show that one is the inverse of the other by simple showing that when you multiply them you get $1$, which is also shorter to write :) – Belgi Dec 20 '12 at 23:28 1 @Belgi Hopefully, done! :-) – amWhy Dec 20 '12 at 23:48 Looks good to me :) – Belgi Dec 20 '12 at 23:49 Assuming $x \neq 0$, $$\frac{x^n}{x^n} = 1 = x^0 = x^{n-n}.$$ The only exponent rule used in the above is $x^0 = 1$. Note, this is a special case of the exponent rule $$\frac{x^a}{x^b} = x^{a-b}.$$ So, if you already knew that rule, then it follows immediately by letting $a = b = n$. - Base Case: Let n = 0 $\frac{x^{0}}{ x^{0}} = \frac{1}{1} = 1$ $x^{n-n} = x^{0 – 0} = x^{0} = 1$ So the base case checks out. Inductive Hypothesis: Want to show that $\frac{x^{n+1}}{x^{n+1}} = x^{(n+1) – (n+1)}$ $\frac{x^{n+1}}{x^{n+1}} = \frac{x^{n} \cdot x}{x^{n} \cdot x} = x^{n-n} \cdot x^{1-1} = x^{(n+1) – (n+1) }$ -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 32, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9455913305282593, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2011/05/30/the-maximal-flow-of-a-vector-field/?like=1&source=post_flair&_wpnonce=c1c9c39881
# The Unapologetic Mathematician ## The Maximal Flow of a Vector Field Given a smooth vector field $X\in\mathfrak{X}M$ we know what it means for a curve $c$ to be an integral curve of $X$. We even know how to find them by starting at a point $p$ and solving differential equations as far out as we can. For every $p\in M$, let $I_p$ be the maximal open interval containing $0$ on which we can define the integral curve $\Phi_p$ with $\Phi_p(0)=p$. Now, I say that there is a unique open set $W\subseteq\mathbb{R}\times M$ and a unique smooth map $\Phi:W\to M$ such that $W\cap(\mathbb{R}\times\{p\})=I_p\times\{p\}$ — the set $W$ cuts out the interval $I_p$ from the copy of $\mathbb{R}$ at $p$ — and further $\Phi(t,p)=\phi_p(t)$ for all $(t,p)\in W$. This is called the “maximal flow” of $X$. Since there is some integral curve through each point $p\in M$, we can see that $\{0\}\times M\subseteq W$. Further, it should be immediately apparent that $\Phi$ is also a local flow. What needs to be proven is that $W$ is open, and that $\Phi$ is smooth. Given a $p\in M$, let $I\subseteq I_p$ be the collection of $t$ for which there is a neighborhood of $(t,p)$ contained in $W$ on which $\Phi$ is differentiable. We will show that $I$ is nonempty, open, and closed in $I_p$, meaning that it must be the whole interval. Nonemptiness is obvious, since it just means that $p$ is contained in some local flow, which we showed last time. Openness also follows directly from the definition of $I$. As for closedness, let $t_0$ be any point in $\bar{I}$, the closure of $I$. We know there exists some local flow $\Phi':I'\times V'\to M$ with $0\in I'$ and $\Phi_p(t_0)\in V'$. Now pick an $t_1\in I$ close enough to $t_0$ so that $t_0-t_1\in I'$ and $\Phi_p(t_1)\in V'$ — this is possible since $t_0$ is in the closure of $I$ and $\Phi_p$ is continuous. Then choose an interval $I_0$ around $t_0$ so that $t-t_1\in I'$ for each $t\in I_0$. And finally the continuity of $\Phi$ at $(t_1,p)$ tells us that there is a neighborhood $V$ of $p$ so that $\Phi(t_1\times V)\subseteq V'$. Now, $\Phi$ is defined and differentiable on $I_0\times V$, showing that $t_0\in I$. Indeed, if $t\in I_0$ and $q\in V$, then $t-t_1\in I'$ and $\Phi(t_1,q)\in V'$, so $\Phi'(t-t_1,\Phi(t_1,q))$ is defined. The curve $s\mapsto\Phi'(s-t_1,\Phi(t_1,q))$ is an integral curve of $X$, and it equals $\Phi(t_1,q)$ at $t_1$. Uniqueness tells us that $\Phi(t,q)=\Phi'(t-t_1,\Phi(t_1,q))$ is defined, and $\Phi$ is thus differentiable at $(t,q)$. ### Like this: Posted by John Armstrong | Differential Topology, Topology ## 6 Comments » 1. [...] we define a vector field by then is a flow for this vector field. Indeed, it’s a maximal flow, since it’s defined for all time at each [...] Pingback by | May 31, 2011 | Reply 2. [...] gives us a “derivative” of smooth functions . If is a smooth vector field it has a maximal flow which gives a one-parameter family of diffeomorphisms, which we can think of as “moving [...] Pingback by | June 15, 2011 | Reply 3. [...] we want a characterization of -invariance in terms of the flow of . I say that is -invariant if and only if for all . That is, the flow should commute with [...] Pingback by | June 17, 2011 | Reply 4. [...] mean? It turns out that the bracket of two vector fields measures the extent to which their flows fail to [...] Pingback by | June 18, 2011 | Reply 5. [...] let be the flow of , and let be a small enough neighborhood of that we can define [...] Pingback by | June 22, 2011 | Reply 6. [...] 
defined the Lie derivative of one vector field by another, . This worked by using the flow of to compare nearby points, and used the derivative of the flow to translate [...] Pingback by | July 13, 2011 | Reply « Previous | Next » ## About this weblog This is mainly an expository blath, with occasional high-level excursions, humorous observations, rants, and musings. The main-line exposition should be accessible to the “Generally Interested Lay Audience”, as long as you trace the links back towards the basics. Check the sidebar for specific topics (under “Categories”). I’m in the process of tweaking some aspects of the site to make it easier to refer back to older topics, so try to make the best of it for now.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 71, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9608810544013977, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/292775/proving-the-sum-of-the-first-n-natural-numbers-by-induction
# Proving the sum of the first n natural numbers by induction I have the Following Proof By Induction Question: $$(1)(2) + (2)(3) + (3)(4) + \cdots+ (n) (n+1) = \frac{(n)(n+1)(n+2)}{3}$$ Can Anybody Tell Me What I'm Missing. This is where I've Gone So Far. Show Truth for N = 1 LHS = (1) (2) = 2 RHS = $$\frac{(1)(1+1)(1+2)}{3}$$ Which is Equal to 2 Assume N = K $$(1)(2) + (2)(3) + (3)(4) + \cdots+ (k) (k+1) = \frac{(k)(k+1)(k+2)}{3}$$ Proof that the equation is true for N = K + 1 $$(1)(2) + (2)(3) + (3)(4) + \cdots+ (k) (k+1) + (k+1) (k + 2)$$ Which is Equal To: $$\frac{(k)(k+1)(k+2)}{3} + (k+1) (k + 2)$$ This is where I've went so far If I did the calculation right the Answer should be $$\frac{(k+1)(k+2)(k+3)}{3}$$ - 1 Factor $(k+1)(k+2)$ out from the expression after "Which is equal to". – David Mitra Feb 2 at 13:11 1 Note, if you wanted to subvert the problem stated, you could perform induction separately on $\sum n^2$ and $\sum n$. – half-integer fan Feb 2 at 13:20 I don't understand, @Andrew: you asked a very, very similar questions some hours ago and you had exactly the same algebraic problem as you have here (non-factoring when possible)...are you really making an efforto to learn?! – DonAntonio Feb 2 at 16:25 @Andrew : Please. Don't promiscuously interchage lower-case $n$ with capital $N$, nor lower-case $k$ with capital $K$, as if they were the same thing. Mathematical notation is case-sensitive. To anyone reading what you write who knows standard conventions it will look as if you can't spell. – Michael Hardy Feb 2 at 16:54 ## 1 Answer Your proof is fine, but you should show clearly how you got to the last expression. $\dfrac{k(k+1)(k+2)}{3}+(k+1)(k+2)$ $=\dfrac{k}{3}(k+1)(k+2)+(k+1)(k+2)$ $=(\dfrac{k}{3}+1)(k+1)(k+2)$ $=\dfrac{k+3}{3}(k+1)(k+2)$ $=\dfrac{(k+1)(k+2)(k+3)}{3}$. You should also word your proof clearly. For example, you can say "Let $P(n)$ be the statement ... $P(1)$ is true ... Assume $P(k)$ is true for some positive integer $k$ ... then $P(k+1)$ is true ... hence $P(n)$ is true for all positive integers $n$". - I know That the Final answer is $$\frac{(k+1)(k+2)(k+3)}{3}$$ by adding a k + 1 to every unknown from the Step 2 Which is _Assume N = K_$$\frac{(k)(k+1)(k+2)}{3}$$ – Andrew Feb 2 at 13:19 Thanks for the Quick Edit, I only have one more question, What I'm Not understanding is how $$\frac{k(k+1)(k+2)}{3}$$ was changed to $$\frac{k}{3}+1$$ – Andrew Feb 2 at 13:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 10, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9502466320991516, "perplexity_flag": "middle"}
http://bama.ua.edu/~seam28/abs.html
# Southeastern Analysis Meeting XXVIII ## Abstracts ### Austin Amaya Virginia Tech Title: Zero-pole interpolation, Beurling-Lax representations for shift-invariant subspaces, and transfer function realizations: half-plane/continuous time versions Abstract: Given a full-range simply-invariant shift-invariant subspace $\mathcal{M}$ of the vector-valued $L^{2}$ space $L^{2}_{\mathcal{U}}(\mathbb{T})$ over the unit circle, the classical Beurling-Lax-Halmos Theorem obtains a unitary operator-valued function $W$ on $\mathbb{T}$ so that $\mathcal{M} = WH^{2}_{\mathcal{U}}$; in this case necessarily $\mathcal{M}^\perp = W\left( H^{2}_{\mathcal{U}} \right)^\perp$. The Beurling-Lax-Halmos Theorem of Ball-Helton (1984) obtains such a representation for the case of a pair of shift-invariant subspaces $(\mathcal{M},\mathcal{M}^{cross})$---with $\mathcal{M}$ forward full-range simply-invariant and $\mathcal{M}^{cross}$ backward full-range simply-invariant---forming a direct-sum decomposition of $L^{2}_{\mathcal{U}}(\mathbb{T})$ with a new almost everywhere invertible $W$ on $\mathbb{T}$. For the case where $(\mathcal{M},\mathcal{M}^{cross})$ is a finite-dimensional perturbation of the model pair $(H^{2}_{\mathcal{U}}(\mathbb{T}),H^{2}_{\mathcal{U}}(\mathbb{T})^\perp)$, Ball-Gohberg-Rodman (1990) obtained a transfer function realization formule for the representer $W$, parameterized from zero-pole data computed from $\mathcal{M}$ and $\mathcal{M}^{cross}$. Later work by Ball-Raney (2007) extended this analysis to the nonrational case where the zero-pole data is taken in an appropriate infinite-dimensional operator-theoretic sense. Our current work obtains the analogue of these results for the case of a pair of subspaces $(\mathcal{M},\mathcal{M}^{cross})$ of $L^{2}_{\mathcal{U}}(\mathbb{R})$ invariant under the forward and backard translation groups. These results rely on recent advances in the understanding of continuous-time infinite-dimensional input-state-output linear systems now codified in the book of Staffans (2005). In this talk, we present a more definitive version of our results presented at SEAM 27. ### Valentin Andreev Lamar University Title: Estimating the Error in the Koebe Construction Abstract: In 1912, Paul Koebe proposed an iterative method, the Koebe construction, to construct a conformal mapping of a non-degenerate, finitely connected domain D onto a circular domain C. In 1959, Gaier provided a convergence proof of the construction which depends on prior knowledge of the circular domain. We demonstrate that it is possible to compute the convergence rate solely from information about D. ### Joseph Ball Virginia Tech Title: The spectral set question and extreme points for the normalized Herglotz class over planar domains Abstract: Given a domain $\Omega$ in the complex plane, the spectral set question asks whether a Hilbert-space operator $T \in {\mathcal L}({\mathcal H})$ with spectrum contained in $\Omega$ with a contractive $A(\Omega)$-functional calculus ($A(\Omega)$ equal to continuous functions on the closure of $\Omega$ which are holomorphic on $\Omega$) has a $\partial \Omega$-normal dilation. An equivalent reformulation due to Arveson asks whether any contractive representation of $A(\Omega)$ is in fact completely contractive. 
The question is known to have a positive answer if $\Omega$ is the unit disk (by the Sz.-Nagy dilation theorem) or if $\Omega$ an an annulus (Agler), and is now known to have a negative answer for $\Omega$ equal to certain triply-connected planar domains (Dritschel-McCullough). We discuss implications of this negative result for the extreme point structure of the convex normalized Herglotz class over $\Omega$. ### Snehalathe Ballamoole Mississippi State University Title: Spectral properties of Cesàro-like operators on weighted Bergman spaces Abstract: We consider operators \begin{equation*} C_{\nu}f(z):= \frac{1}{z^\nu}\int_{0}^{z}\frac{f(\omega)\omega^{\nu-1}}{1-\omega}d\omega , \hspace{2cm} (f\in \mathcal{H}(\mathbb{D}), z\in \mathbb{D}). \end{equation*} We obtain spectral properties of $C_{\nu}$ on the Bergman spaces $L_{a}^{p,\alpha}$, $\alpha>-1$, $p\ge1$, by computing resolvent estimates. This work is closely related to recent work of Dahlner, Aleman and Persson. This is joint work with Len Miller and Vivien Miller. ### Kelly Bickel Washington University in St. Louis Title: Fundamental Agler Decompositions Abstract: It is well-known that every holomorphic function bounded by one on the bidsk possesses an Agler decomposition. In general, such decompositions are difficult to write down explicitly. In this talk, we present a constructive, elementary proof of the existence of Agler decompositions using shift-invariant subspaces of the Hardy space on the bidisk. We then use these constructed decompositions to analyze properties about general Agler decompositions. ### Miriam Castillo Gil University of Florida Title: Functions of Positive Real Part on the Polydisk. Abstract: We study some classes of holomorphic functions of positive real part on certain kinds of domains $\Omega \subset \mathbb C^n$, and characterize these classes through operator-valued Herglotz formulas and through von Neumann-type inequalities. Inspired on the work of J.E. McCarthy and M Putinar we extend results over the unit ball, due to M.T. Jury, to the polydisk and other domains by defining a family of Fantappi? pairings on $\Omega$ to establish duality relations between certain pairs of classes. ### Raphael Clouatre Indiana University Title: Similarity results for operators of class $C_0$ Abstract: By virtue of the classification theorem, it is known that any multiplicity-free operator of class $C_0$ is quasisimilar to a Jordan block. In case the minimal function of the operator is a Blaschke product with roots forming a Carleson sequence, we will discuss a condition under which the relation above can be strengthened to similarity. We will also explain how this gives a new interpretation of Carleson's classical interpolation theorem in the setting of $C_0$ operators. ### David Cruz-Uribe Trinity College Title: $A_p$ bump conditions for two-weight norm inequalities for classical operators Abstract: $A_p$ bump conditions are generalizations of the Muckenhoupt A_p condition. There is a longstanding conjecture that these conditions are sufficient for Calderon-Zygmund singular integrals to map $L^p(v)$ into $L^p(u)$. In this talk we will review recent work on this conjecture, including its connections with two conjectures of Muckenhoupt and Wheeden that were recently disproved. This work is joint with Martell and Perez and also with Volberg and Reznikov. 
### Raul Curto
University of Iowa

Title: Hyponormality and subnormality of block Toeplitz operators

Abstract: I will discuss hyponormality and subnormality of block Toeplitz operators acting on the vector-valued Hardy space $H^2_{C^n}$ of the unit circle. In joint work with I.S. Hwang and W.Y. Lee, we first establish a tractable and explicit criterion to determine the hyponormality of block Toeplitz operators having bounded type symbols; we do this via the triangularization theorem for compressions of the shift operator. Secondly, we consider the gap between hyponormality and subnormality for block Toeplitz operators. This is closely related to Halmos's Problem 5: Is every subnormal Toeplitz operator either normal or analytic? We show that if $\Phi$ is a matrix-valued rational function whose co-analytic part has a coprime decomposition, then every hyponormal Toeplitz operator $T_{\Phi}$ whose square is also hyponormal must be either normal or analytic. Next, we apply our results to solve the following Toeplitz completion problem: find the unspecified Toeplitz entries of the partial block Toeplitz matrix $A:=\begin{pmatrix} U^* & ? \\ ? & U^* \end{pmatrix}$ so that $A$ becomes subnormal, where $U$ is the unilateral shift on $H^2$.

### Francesco Di Plinio
Indiana University

Title: $L^p$ bounds for singular integrals along N directions in $R^2$

Abstract: Let $K$ be a Calderon-Zygmund convolution kernel on $R$. We are concerned with the $L^p$-boundedness of the maximal directional singular integral
$$T_{V} f (x)= \sup_{v \in V} \Big| \int_R f(x+t v) K(t) \, dt \Big|$$
where $V$ is a finite set of $N$ directions. This is a discrete version of Stein's conjecture on the boundedness of the Hilbert transform along smooth vector fields in the plane. For this problem, we are able to prove sharp (in terms of $\log N$) $L^p$ and weak $L^2$ bounds for lacunary and Vargas sets of directions. The latter include the case of uniformly distributed directions and the finite truncations of the Cantor set. We make use of both classical harmonic analysis methods and product-BMO based time-frequency analysis techniques. In addition to presenting our results, we plan on discussing possible applications of these techniques to the solution of the full conjecture of Stein. This is joint work with Ciprian Demeter.

### Son Duong
University of California at San Diego

Title: Transversality in CR Geometry

Abstract: We consider the transversality of holomorphic mappings between CR submanifolds of complex spaces. In the equidimensional case, we show that a holomorphic mapping sending one generic submanifold into another of the same dimension is CR transversal to the target submanifold provided that the source manifold is of finite type and the map is of generic full rank. This result and its corollaries completely resolve two questions posed by Ebenfelt and Rothschild six years ago. In different dimensions, the situation is more delicate, as examples show. We will show that under certain restrictions on the dimensions and the rank of the Levi forms, mappings whose set of points of degenerate rank has codimension at least 2 are transversal to the target. In addition, we show that under more restrictive conditions on the manifolds, finite holomorphic mappings are transversal. This is joint work with Peter Ebenfelt.
### Matthew Gamel
Nicholls State University

Title: High Order Derivatives of Blaschke Products in $H^p$ Spaces

Abstract: In this paper, we consider higher order derivatives of Blaschke products in $H^p$ spaces given conditions on the zeros. Specifically, we prove that if $B(z)$ is a Blaschke product with zeros $(a_n)$ and $m \in \mathbb{N}$, then:

• if $$\sum_{j=1}^\infty (1-|a_j|^2)^\alpha< \infty$$ for some $\alpha$ with $0 < \alpha < \tfrac{1}{m+1}$, then $B^{(m)} \in H^{\frac{1-\alpha}{m}}$.

• if $$\sum_{j=1}^\infty (1-|a_j|^2)^{\frac{1}{m+1}} \log \left(\frac{1}{1-|a_j|^2} \right) < \infty$$ then $B^{(m)} \in H^{\frac{1}{m+1}}$.

These results generalize some of the work done by D. Protas in 1973 for $m=1$. This is joint work with Manfred Stoll.

### Jarod Hart
University of Kansas

Title: Bilinear Vector Valued Calderon-Zygmund, Square Functions and Littlewood-Paley estimates

Abstract: In this work, an extension of the bilinear Calderon-Zygmund theory to the vector valued setting is developed and used together with interpolation arguments to derive new bilinear square function and Littlewood-Paley estimates.

### Mike Jury
University of Florida

Title: "Noncommutative" Aleksandrov-Clark measures

Abstract: We consider de Branges-Rovnyak type subspaces of the Drury-Arveson space; these are reproducing kernel spaces $\mathcal{H}(b)$ in the unit ball with kernel of the form $(1-b(z)b(w)^*)(1-\langle z,w\rangle)^{-1}$. When this kernel is positive, the Cayley transform $(1+b)(1-b)^{-1}$ is represented as a "noncommutative" Herglotz integral of a state on the Cuntz-Toeplitz operator system. We prove that many of the known connections between $\mathcal{H}(b)$ spaces and Aleksandrov-Clark measures have analogs in this setting (e.g. expressing $\mathcal{H}(b)$ functions as Cauchy transforms, Clark's theorem on rank-one perturbations of the backward shift, etc.).

### Ilya Krishtal
Northern Illinois University

Title: Gabor frames in amalgam spaces

Abstract: We discuss several results on convergence of multiwindow Gabor frames in Wiener amalgam spaces. The talk is based on joint work with R. Balan, J. Christensen, K. Okoudjou, and J.-L. Romero.

### Hyun Kwon
Seoul National University

Title: Similarity of Operators in the Bergman Space Setting

Abstract: We give a necessary and sufficient condition for an n-hypercontraction to be similar to the adjoint of the operator of multiplication by the independent variable in a weighted Bergman space. The description is a generalization of the one given in the Hardy space setting, where the geometry of the eigenvector bundles of the operators involved is considered. This talk is based on joint work with Ronald G. Douglas and Sergei Treil.

### Michael Lacey
Georgia Tech

Title: On the two weight inequality for the Hilbert transform.

Abstract: The two weight inequality for the Hilbert transform arises in the settings of analytic function spaces, operator theory, and spectral theory, and what would be most useful is a characterization in the simplest real-variable terms. We show that the $L^2$ to $L^2$ inequality holds if and only if two $L^2$ to weak-$L^2$ inequalities hold. This is a corollary to a characterization in terms of a two-weight Poisson inequality, and a pair of testing inequalities on bounded functions. Joint work with Eric Sawyer, Chun-Yun Shen, and Ignacio Uriate-Tuero.
### Constanze Liaw
Texas A&M

Title: Dilations and finite rank perturbations

Abstract: For a fixed natural number n, we consider a family of rank n perturbations of a completely non-unitary (cnu) contraction T. We allow the corresponding characteristic operator function of T to be non-inner. We relate the unitary dilation of T to its rank n unitary perturbations. Based on this construction, we prove that the spectra of the perturbed operators are purely singular if and only if the operator-valued characteristic function corresponding to the unperturbed operator is inner. In the case where n=1 the latter statement reduces to a well-known result in the theory of rank one perturbations. However, our method of proof via the theory of dilations extends to the case of arbitrary n. We also consider the analogous family of cnu contractions that arise as rank n perturbations of T.

### Issam Louhichi
KFUM

Title: Quasihomogeneous Toeplitz operators on the harmonic Bergman space

Abstract: This talk is about the product of Toeplitz operators on the harmonic Bergman space of the unit disk of the complex plane C. Mainly, we discuss when the product of two quasihomogeneous Toeplitz operators is also a Toeplitz operator, and when such operators commute.

### Neil Lyall
University of Georgia

Title: Polynomial Patterns in Subsets of the Integers

Abstract: It is a striking and elegant fact (proved independently by Furstenberg and Sarkozy) that any subset of the integers of positive upper density necessarily contains two distinct elements whose difference is given by a perfect square. We will discuss recent quantitative extensions and generalizations of this result.

### Svitlana Mayboroda
University of Minnesota

Title: Singular integrals, perturbation problems, boundary regularity, and harmonic measure for elliptic PDEs in rough media

Abstract: Elliptic boundary value problems are well-understood in the case when the boundary, the data, and the coefficients exhibit smoothness. However, perfectly uniform smooth systems do not exist in nature, and every real object inadvertently possesses irregularities (a sharp edge of the boundary, an abrupt change of the medium, a defect of the construction). The analysis of general non-smooth elliptic PDEs gives rise to decisively new challenges: possible failure of the maximum principle and positivity, breakdown of boundary regularity, lack of well-posedness in $L^2$, to mention just a few. Further progress builds on a powerful blend of harmonic analysis, potential theory and geometric measure theory techniques.
In this talk we are going to discuss some highlights of the history, conjectures, paradoxes, and recent discoveries such as the higher-order Wiener criterion and maximum principle for higher order PDEs, solvability of rough elliptic boundary problems and perturbation in $L^p$, development of the new theory of Hardy spaces and analysis of singular integrals beyond the realm of the Calderon-Zygmund theory, as well as an intriguing phenomenon of localization of eigenfunctions.

### Andrew Morris
University of Missouri

Title: Finite Propagation Speed for First Order Systems and Huygens' Principle for Hyperbolic Equations

Abstract: We prove that strongly continuous groups generated by first order systems on Riemannian manifolds have finite propagation speed. Our procedure provides a new direct proof for self-adjoint systems, and allows an extension to operators on metric measure spaces. As an application, we present a new approach to the weak Huygens' principle for second order hyperbolic equations.

### Katharine Ott
University of Kentucky

Title: The mixed problem for the Lamé system of elastostatics

Abstract: In this talk I will discuss recent progress on the study of the mixed boundary value problem for the Lamé system of elastostatics in Lipschitz domains in two dimensions. This is joint work with Russell Brown.

### Jonathan Poelhuis
Indiana University

Title: Local Fractional Maximal Operators.

Abstract: Fractional maximal operators arise in the study of Riesz potentials and are discussed in the work of Hedberg (1972) and others. These operators have interesting properties that are distinct from those of the Hardy-Littlewood maximal operator, which they resemble. In another direction, many results have followed Stromberg's (1979) paper discussing local maximal operators. Local operators in this sense have the advantage that they are defined for measurable functions, not just integrable ones. We define local analogues for the fractional maximal operators and discuss several of their properties, including a John-Nirenberg type inequality.

### Alexei Poltoratski
Texas A&M

Title: The Gap and Type Problems.

Abstract: One of the basic problems of Harmonic Analysis is to determine if a given collection of functions spans a given Hilbert space. A classical theorem by Beurling and Malliavin solved such a problem in the case when the space is $L^2$ on an interval and the collection consists of complex exponentials. In my talk I will discuss two classical problems closely related to the Beurling-Malliavin theorem, the so-called Gap and Type Problems, that remained open until recently.

### Alex Rice
University of Georgia

Title: Sarkozy's Theorem for P-intersective Polynomials

Abstract: We will discuss improvements and generalizations of two theorems of Sarkozy, the qualitative versions of which state that any subset of the natural numbers of positive upper density necessarily contains two distinct elements which differ by a perfect square, as well as two elements which differ by one less than a prime number, confirming conjectures of Lovasz and Erdos, respectively. Specifically, we will present a new generalized hybrid of these two results, giving a strong quantitative bound on the size of the largest subset of {1,2,...,N} which contains no nonzero differences of the form $h(p)$ for any prime $p$, where $h$ lies in the largest possible class of polynomials.
### Sonmez Sahutoglu
University of Toledo

Title: Localization of compactness of Hankel operators on pseudoconvex domains in $C^n$

Abstract: We prove the following localization for compactness of Hankel operators on Bergman spaces. Assume that $D$ is a bounded pseudoconvex domain in $C^n$, $p$ is a boundary point of $D$, and $B(p,r)$ is a ball centered at $p$ with radius $r$ so that $U=D\cap B(p,r)$ is a domain. We show that if the Hankel operator $H_f$ with symbol $f\in C^1(\overline{D})$ is compact on $A^2(D)$, then $H_f$ is compact on $A^2(U)$, where $A^2(D)$ and $A^2(U)$ denote the Bergman spaces on $D$ and $U$, respectively.

### Prabath Silva
Indiana University Bloomington

Title: Bilinear Hilbert transform tensor product with paraproduct

Abstract: C. Muscalu, J. Pipher, T. Tao and C. Thiele showed that the bi-parameter paraproduct maps $L^p \times L^q$ into $L^r$ when $\frac{1}{p}+\frac{1}{q}=\frac{1}{r}$ and $p,q> 1$, while the much more singular double bilinear Hilbert transform does not satisfy any $L^p$ bounds. They raised the question of $L^p$ bounds for the bilinear Hilbert transform tensor product with a paraproduct, an operator which is more singular than the bi-parameter paraproduct and less singular than the double bilinear Hilbert transform. We give a positive answer by showing that the bilinear Hilbert transform tensor product with a paraproduct maps $L^p \times L^q$ into $L^r$ when $\frac{1}{p}+\frac{1}{q}=\frac{1}{r}$ and $p,q, r> 1.$

### Esma Yildiz Ozkan
Gazi University

Title: A bivariate generalization of Meyer-König and Zeller type operators and its approximation properties

Abstract: A bivariate generalization of a general sequence of Meyer-König and Zeller operators based on q-integers is constructed. Approximation properties of these operators are obtained by using a Volkov-type convergence theorem for bivariate functions. Furthermore, rates of convergence by means of the modulus of continuity and elements of Lipschitz class functionals are also established. An r-th order generalization of these operators is also defined and its approximation properties are observed.

### Sawano Yoshihiro
Kyoto University

Title: Hardy spaces with variable exponents and generalized Campanato spaces

Abstract: Hardy spaces play an important role not only in harmonic analysis but also in partial differential equations because singular integral operators are bounded on Hardy spaces. The Hardy space $H^1$, which substitutes for $L^1$, and the Hardy spaces $H^p$ with $p \in (0,1)$ are different in that the latter contain non-regular distributions. For $1< p <\infty$ we can also define the Hardy space $H^p$, although it turns out to be an equivalent expression of $L^p$. To have a unified understanding of these situations, we consider and define Hardy spaces with variable exponents on ${\mathbb R}^n$. We will connect harmonic analysis with function spaces with variable exponents. We then obtain the atomic decomposition and the molecular decomposition.

### Abdelrahman Yousef
University of Jordan

Title: Commutants of a Toeplitz operator with a certain harmonic symbol.

Abstract: In this talk, we show that if an operator $T$ in the norm closed subalgebra generated by Toeplitz operators with bounded symbols of the form $\displaystyle f(re^{i\theta})=\sum_{k=-\infty}^{N} e^{ik\theta}f_k(r)$ commutes with $T_{z+\bar{z}}$, then $T$ must be a polynomial in $T_{z+\bar{z}}$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 126, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8703901767730713, "perplexity_flag": "head"}
http://mathforum.org/mathimages/index.php?title=Euclidean_Algorithm&diff=32964&oldid=25218
# Euclidean Algorithm ### From Math Images (Difference between revisions) | | | | | |----------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | | | Current revision (10:27, 28 June 2012) (edit) (undo) (Moved desciption of Eclid into intro, minor grammar changes) | | | (33 intermediate revisions not shown.) | | | | | Line 1: | | Line 1: | | | - | {{Image Description | + | {{Image Description Ready | | - | |ImageName=Euclid's Method to find the gcd | + | |ImageName=Euclidean Algorithm | | | |Image=EA1.jpg | | |Image=EA1.jpg | | | |ImageIntro= | | |ImageIntro= | | | | | | | - | This image shows Euclid's method to find the greatest common divisor (gcd) of two integers. The greatest common divisor of two numbers a and b is the largest integer that divides the numbers without a remainder. | + | About 2000 years ago, Euclid, one of the greatest mathematician of Greece, devised a fairly simple and efficient algorithm to determine the greatest common divisor of two integers, which is now considered as one of the most efficient and well-known early algorithms in the world. The Euclidean algorithm hasn't changed in 2000 years and has always been the the basis of Euclid's number theory. | | | | | | | - | :Here I use 52 and 36 as an example to show you how Euclid found the gcd, so you have a sense of the Euclidean algorithm in advance. As you have probably noticed already, Euclid uses lines, defined as multiples of a common unit length, to represent numbers. First, use the smaller integer of the two, 36, to divide the bigger one, 52. Use the remainder of this division, 16, to divide 36 and you get the remainder 4. Now divide the last divisor, 16, by 4 and you find that they divide exactly. Therefore, 4 is the greatest common divisor. For every two integers, you will get the gcd by repeating the same process until there is no remainder. | + | This image shows Euclid's method to find the greatest common divisor of two integers. 
The '''greatest common divisor''' of two numbers a and b is the largest integer that divides the numbers without a remainder. | | - | | + | | | - | :You may have many questions so far: "What is going on here?" "Are you sure that 4 is the gcd of 52 and 36?" Don't worry. We will talk about them precisely later. This brief explanation is just to preheat your enthusiasm for Euclidean Algorithm! It is amazing to see that he explains and proves his algorithm relying on visual graphs, which is different from how we treat number theory now. | + | | | | | | | | | | | | | | |ImageDescElem= | | |ImageDescElem= | | | | | | | - | We all know that 1 divides every number and that no positive integer is smaller than 1, so 1 is the smallest common divisor for any two integers a and b. Then what about the greatest common divisor? Finding the gcd is not as easy as finding the smallest common divisor. | + | When asked to find the gcd of two integers, a possible way is to prime factor each integer and see which factors are common between the two, or we could simply try different numbers and see which number works. However, both approaches could be very complicated and time consuming as the two integers become relatively large. | | | | | | | - | When asked to find the gcd of two integers, a possible way we can think of is to prime factorize each integer and see which factors are common between the two integers, or we could simply try different numbers and see which number works. However, both approaches could be very sophisticated and time consuming as the two integers become relatively large. | | | | - | | | | | - | About 2000 years ago, Euclid, one of the greatest mathematician of Greece, devised a fairly simple and efficient algorithm to determine the gcd of two integers, which is now considered as one of the most efficient and well-known early algorithms in the world. The Euclidean algorithm hasn't changed in 2000 years and has always been the the basis of Euclid's number theory. | | | | | | | | | | '''Euclidean <balloon title="An algorithm is a process or set of rules to be followed in calculations or other problem-solving operations.">algorithm</balloon> (also known as Euclid’s algorithm) ''' describes a procedure for finding the greatest common divisor of two positive integers. This method is recorded in Euclid’s '' Elements '' Book VII. This book contains the foundation of number theory for which Euclid is famous. | | '''Euclidean <balloon title="An algorithm is a process or set of rules to be followed in calculations or other problem-solving operations.">algorithm</balloon> (also known as Euclid’s algorithm) ''' describes a procedure for finding the greatest common divisor of two positive integers. This method is recorded in Euclid’s '' Elements '' Book VII. This book contains the foundation of number theory for which Euclid is famous. | | | | | | | - | The Euclidean algorithm comes in handy with computers because large numbers are hard to factor but relatively easy to divide. It is used in many other places and we’ll talk about its applications later. | + | An example of the method is shown in the image. First, use the smaller integer of the two, 36, to divide the bigger one, 52. Use the remainder of this division, 16, to divide 36 and you get the remainder 4. Now divide the last divisor, 16, by 4 and you find that they divide exactly. Therefore, 4 is the greatest common divisor. 
For every two integers, you will get the gcd by repeating the same process until there is no remainder | | | | + | | | | | + | The Euclidean algorithm comes in handy with computers because large numbers are hard to factor but relatively easy to divide. | | | | | | | | | | | | Line 33: | | Line 30: | | | | :''Example:'' 3 <math>\mid</math> 6 ; 4 <math>\mid</math> 16 | | :''Example:'' 3 <math>\mid</math> 6 ; 4 <math>\mid</math> 16 | | | * '''gcd''' means the greatest common divisor, also called the greatest common factor (gcf), the highest common factor (hcf), and the greatest common measure (gcm). | | * '''gcd''' means the greatest common divisor, also called the greatest common factor (gcf), the highest common factor (hcf), and the greatest common measure (gcm). | | - | * '''gcd(a, b)''' means the gcd of two positive integers a and b. | + | * '''gcd(a, b)''' means the gcd of two positive integers a and b; (a, b) is another notation for gcd(a, b). | | | | + | | | | Keep those abbreviations in mind; you will see them a lot later. | | Keep those abbreviations in mind; you will see them a lot later. | | | | | | | | ===Precondition === | | ===Precondition === | | | The Euclidean Algorithm is based on the following theorem: | | The Euclidean Algorithm is based on the following theorem: | | - | :'''Theorem:''' <math> gcd(a, b) = gcd(b,~ a~mod~b) </math> where <math>a > b</math> and <math>a~ mod~ b ~\ne 0 </math> | + | :'''Theorem:''' <math> gcd(a, b) = gcd(b,~ a~mod~b) </math> where <math>a > b</math> and <math>a~ mod~ b ~\ne 0 </math>. | | - | :'''Proof:''' Since <math>a > b</math>, <math>a</math> could be denoted as <math>a = kb + r</math> with <math>0 \leqslant r < b </math>. Then <math>r = a~mod~b</math>. Assume <math>d</math> is a common divisor of <math>a</math> and <math>b</math>, thus <math>d \mid a , d\mid b</math>, or we could write them as <math>a= q_1 d, b = q_2 d. </math> Because of <math>r = a - kb</math>, <math> r = q_1 d - k q_2 d = (q_1 - k q_2) d </math> and we will get<math>d \mid r </math>. Therefore <math>d</math> is also a common divisor of <math>(b, r) = (b, a~mod~b)</math>. Hence, the common divisors of <math>(a, b)</math> and <math>(b, a~mod~b)</math> are the same. In other words, <math>(a, b)</math> and <math>(b, a~mod~b)</math> have the same common divisors, and so they have the same greatest common divisor. | + | :'''Proof:''' | | | | + | | | | | + | ::Since <math>a > b</math>, <math>a</math> could be denoted as <math>a = kb + r</math> with <math>0 \leqslant r < b </math>. | | | | + | ::Then the remainder <math>r = a~mod~b</math>. | | | | + | ::Assume <math>d</math> is a common divisor of <math>a</math> and <math>b</math>, thus <math>d \mid a , d\mid b</math>, or we could write them as <math>a= q_1 d, b = q_2 d. </math> | | | | + | ::Because <math>r = a - kb</math>, | | | | + | ::<math> r = q_1 d - k q_2 d = (q_1 - k q_2) d </math>, so we know <math>d \mid r </math>. | | | | + | ::Therefore <math>d</math> is also a common divisor of <math>(b, r) = (b, a~mod~b)</math>. | | | | + | ::Hence, the common divisors of <math>(a, b)</math> and <math>(b, a~mod~b)</math> are the same. | | | | + | ::In other words, <math>(a, b)</math> and <math>(b, a~mod~b)</math> have the same common divisors, and so they have the same greatest common divisor. | | | | | | | | ===Description=== | | ===Description=== | | Line 73: | | Line 80: | | | | An example will make the Euclidean algorithm clearer. Let's say we want to know the gcd of 168 and 64. 
| | An example will make the Euclidean algorithm clearer. Let's say we want to know the gcd of 168 and 64. | | | | | | | - | 168 = 2 <math>\times</math> 64 + 40 | + | In this case, a = 168, b = 64. Start writing the first equation: | | | | | | | - | 64 = 1 <math>\times</math> 40 + 24 | + | 168 = 2 <math>\times</math> 64 + 40 | | | | + | :'' (Try to find the greatest possible coefficient (integer) for quotient 64. Couldn't be 1 because the remainder has to be smaller than then quotient 64. Couldn't be 3 otherwise it is greater than 168. So it turns out to be 2 and the remainder is 40.) '' | | | | + | | | | | + | 64 = 1 <math>\times</math> 40 + 24 | | | | + | :'' (Get the remainder 40 from the last equation. <math>r_1 = 40</math>. Use it as the quotient for this second equation. By analog, find the coefficient for 40 and the remainder.) '' | | | | | | | | 40 = 1 <math>\times</math> 24 + 16 | | 40 = 1 <math>\times</math> 24 + 16 | | Line 94: | | Line 105: | | | | | | | | | ===Modern Proof=== | | ===Modern Proof=== | | - | In order to prove that Euclidean algorithm works, the first thing is to show that the number we get from this algorithm is a common divisor of a and b. Then we will show that it is the greatest. Recall that | | | | | | | | | - | :<math>a = k_0b + r_1, \quad a\,\bmod\,b = r_1, \quad 0 < r_1 < b</math> | + | *'''Proving That It Is A Common Divisor''' | | - | :<math>b = k_1r_1 + r_2 , \quad b\,\bmod\,r_1 = r_2, \quad 0 < r_2 < r_1</math> | + | | | - | :<math>r_1 = k_2r_2 + r_3, \quad r_1\,\bmod\,r_2 = r_3, \quad 0 < r_3 < r_2</math> | + | | | - | :<math>r_2 = k_3r_3 + r_4, \quad r_2\,\bmod\,r_3 = r_4, \quad 0 < r_4 < r_3</math> | + | | | - | :... ... | + | | | - | :<math>r_{n -2} = k_{n - 1}r_{n -1} + r_n,\quad r_{n-2}\,\bmod\,r_{n- 1} = r_n, \quad 0 < r_{n -1} < r_n</math> | + | | | - | :<math>r_{n -1} = k_nr_n, \quad\quad\quad\quad r_{n- 1}\,\bmod\, \quad r_n = 0</math> | + | | | | | | | | - | Based on the last two equations, we substitute <math>r_{n -1} </math> with <math>k_nr_n</math> in the last to second equation such that <math>r_{n -2} = k_{n -1}r_{n -1} + r_n = k_{n -1}k_nr_n + r_n = (k_{n -1}k_n + 1) r_n </math>. | + | In order to prove that Euclidean algorithm works, the first thing is to show that the number we get from this algorithm is a common divisor of a and b. Recall that | | | | + | | | | | + | {{EquationRef2|Eq. 1}} <math>a = k_0b + r_1, \quad \quad \quad \quad \quad0 < r_1 < b</math> | | | | + | {{EquationRef2|Eq. 2}} <math>b = k_1r_1 + r_2 , \quad \quad \quad \quad \quad 0 < r_2 < r_1</math> | | | | + | {{EquationRef2|Eq. 3}} <math>r_1 = k_2r_2 + r_3, \quad \quad \quad \quad \quad 0 < r_3 < r_2</math> | | | | + | :::... ... | | | | + | {{EquationRef2|Eq. n-1}}<math>r_{n-3} = k_{n-2}r_{n-2} + r_{n -1}, \quad \quad 0 < r_{n-2} < r_{n -1} </math> | | | | + | {{EquationRef2|Eq. n}} <math>r_{n -2} = k_{n - 1}r_{n -1} + r_n,\quad \quad \quad 0 < r_{n -1} < r_n</math> | | | | + | {{EquationRef2|Eq. n+1}}<math>r_{n -1} = k_nr_n, \quad\quad\quad\quad \quad \quad r_n = 0</math> | | | | + | | | | | + | Based on the last equation {{EquationNote|Eq. n+1}}, we substitute <math>r_{n -1} </math> with <math>k_nr_n</math> in {{EquationNote|Eq. n}} such that | | | | + | | | | | + | <math>r_{n -2} = k_{n -1}r_{n -1} + r_n = k_{n -1}k_nr_n + r_n </math>. | | | | + | | | | | + | <math>r_{n-2} = (k_{n -1}k_n + 1) r_n </math> | | | | | | | | Thus we have <math>r_n \mid r_{n -2} </math>. | | Thus we have <math>r_n \mid r_{n -2} </math>. 
| | | | | | | - | From the equation before those two, we repeat the steps we did just now: <math>r_{n -3} = k_{n -2}r_{n -2} + r_{n -1} = k_{n -2} \Big( (k_{n -1} k_n + 1)r_n \Big) + k_nr_n = (k_{n -2} k_{n -1} k_n + k_{n - 2} + k_n) r_n </math>. | + | From the equation before those two {{EquationNote|Eq. n-1}}, we repeat the steps we did just now: <math>r_{n -3} = k_{n -2}r_{n -2} + r_{n -1} = k_{n -2} \Big( (k_{n -1} k_n + 1)r_n \Big) + k_nr_n = (k_{n -2} k_{n -1} k_n + k_{n - 2} + k_n) r_n </math>. | | | | | | | | Now we know <math>r_n \mid r_{n -3}</math>. | | Now we know <math>r_n \mid r_{n -3}</math>. | | Line 114: | | Line 132: | | | | Continue this process and we will find that <math>r_n \mid a, r_n \mid b</math>, so <math>r_n </math>, the number we get from Euclidean algorithm, is indeed a common divisor of a and b. | | Continue this process and we will find that <math>r_n \mid a, r_n \mid b</math>, so <math>r_n </math>, the number we get from Euclidean algorithm, is indeed a common divisor of a and b. | | | | | | | - | Second, we need to show that <math>r_n</math> is the greatest among all the common divisors of a and b. To show that <math>r_n</math> is the greatest, let's assume that there is another common divisor of a and b, d, where d is a positive integer. Then we could rewrite a and b as a = dm , b = dn, where m and n are also positive integers. This second part of the proof is going to be similar to the first part because they both repeat the same steps and eventually get the result, but this time we start from the first equation of the Euclidean algorithm: | + | *'''Proving That It Is The Greatest''' | | | | + | Second, we need to show that <math>r_n</math> is the greatest among all the common divisors of a and b. To show that <math>r_n</math> is the greatest, let's assume that there is another common divisor of a and b, d, where d is a positive integer. Then we could rewrite a and b as a = dm , b = dn, where m and n are also positive integers. This second part of the proof is going to be similar to the first part because they both repeat the same steps and eventually get the result, but this time we start from the first equation of the Euclidean algorithm {{EquationNote|Eq. 1}}: | | | | + | | | | | + | We know that <math>a = k_0b + r_1 </math>. Thus, | | | | + | | | | | + | <math>r_1 = a - k_0b = dm - k_0 dn</math>, and | | | | | | | - | Because <math>a = k_0b + r_1 </math> | + | <math>r_1 = (m - k_0 n) d</math> (substitute dm for a and dn for b). | | - | Therefore <math>r_1 = a - k_0b = dm - k_0 dn = (m - k_0 n) d</math> (substitute dm for a and dn for b) | + | | | | | | | | | Therefore, <math>d \mid r_1 </math>. Let <math> r_1 = d_1 d</math>. | | Therefore, <math>d \mid r_1 </math>. Let <math> r_1 = d_1 d</math>. | | | | | | | - | Consider the second equation. Solve for <math>r_2</math> in the same way. | + | Consider the second equation {{EquationNote|Eq. 2}}. Solve for <math>r_2</math> in the same way. | | - | Because <math>b = k_1r_1 + r_2 </math> | + | | | - | Therefore <math> r_2 = b - k_1r_1= dn - k_1d_1 d = (n - k_1d_1) d</math> | + | | | - | Thus, <math>d \mid r_2</math> | + | | | | | | | | - | Continuing the process until we reach the last equation, we will get <math>d \mid r_n </math>. Since we pick d to represent any possible common divisor of a and b except <math>r_n, d \mid r_n</math> means that <math>r_n</math> divides any other common divisor of a and b, meaning that <math>r_n</math> must be greater than all the other common divisors. 
Therefore, the number we get from the Euclidean Algorithm, <math>r_n</math>, is indeed the greatest common divisor of a and b. | + | We know that <math>b = k_1r_1 + r_2 </math>. Thus, | | | | + | | | | | + | <math> r_2 = b - k_1r_1= dn - k_1d_1 d </math>, and | | | | + | | | | | + | <math>r_2 = (n - k_1d_1) d</math>. | | | | + | | | | | + | Therefore, <math>d \mid r_2</math>. | | | | + | | | | | + | Continuing the process until we reach the last equation {{EquationNote|Eq. n}}, we will get <math>d \mid r_n </math>. Since we pick d to represent any possible common divisor of a and b except <math>r_n, d \mid r_n</math> means that <math>r_n</math> divides any other common divisor of a and b, meaning that <math>r_n</math> must be greater than all the other common divisors. Therefore, the number we get from the Euclidean Algorithm, <math>r_n</math>, is indeed the greatest common divisor of a and b. | | | | | | | | | | | | Line 152: | | Line 179: | | | | :15. A number is said to '''multiply''' a number when that which is multiplied is added to itself as many times as there are units in the other, and thus some number is produced.<ref name=BookVII> Health T.L.. (1956). The Thirteen Books of Euclid's Elements. New York: Dover Publication.</ref> | | :15. A number is said to '''multiply''' a number when that which is multiplied is added to itself as many times as there are units in the other, and thus some number is produced.<ref name=BookVII> Health T.L.. (1956). The Thirteen Books of Euclid's Elements. New York: Dover Publication.</ref> | | | | | | | - | '' NOTE: '' | + | ''Editor's Note: '' | | | | | | | | : In a nutshell, Euclid's one unit is the number 1 in algebra. He uses lines to represent numbers; the longer the line the greater the number. | | : In a nutshell, Euclid's one unit is the number 1 in algebra. He uses lines to represent numbers; the longer the line the greater the number. | | Line 183: | | Line 210: | | | | :Therefore no number will measure the numbers AB, CD; therefore AB, CD are prime to one another. <ref name=BookVII> Health T.L.. (1956). The Thirteen Books of Euclid's Elements. New York: Dover Publication.</ref> [VII.Def.12] Q.E.D. | | :Therefore no number will measure the numbers AB, CD; therefore AB, CD are prime to one another. <ref name=BookVII> Health T.L.. (1956). The Thirteen Books of Euclid's Elements. New York: Dover Publication.</ref> [VII.Def.12] Q.E.D. | | | | | | | - | ''NOTE : '' | + | ''Editor's Note : '' | | | | | | | - | Here is a translation of Proposition 1. | + | Here is my translation of Proposition 1. | | | | | | | | Euclid wants to show that a and b must be prime to each other if we get 1 left instead of 0. Why? | | Euclid wants to show that a and b must be prime to each other if we get 1 left instead of 0. Why? | | Line 235: | | Line 262: | | | | PORISM. From this it is manifest that, if a number measure two numbers, it will also measure their greatest common measure. <ref name=BookVII> Health T.L.. (1956). The Thirteen Books of Euclid's Elements. New York: Dover Publication.</ref> Q.E.D | | PORISM. From this it is manifest that, if a number measure two numbers, it will also measure their greatest common measure. <ref name=BookVII> Health T.L.. (1956). The Thirteen Books of Euclid's Elements. New York: Dover Publication.</ref> Q.E.D | | | | | | | - | '' NOTE : '' | + | ‘’Editor's Note: '' | | | | | | | | Prop.2 is pretty self-explanatory, proved in a similar way as Prop.1. | | Prop.2 is pretty self-explanatory, proved in a similar way as Prop.1. 
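The iterative procedure described and proved above is short enough to state as code. The following is a minimal Python sketch (Python and the function name `gcd` are our own illustration, not part of Euclid's text or of this article): it implements the replace-$(a, b)$-by-$(b, a \bmod b)$ loop from the Description section and checks it against the two worked examples, gcd(52, 36) = 4 and gcd(168, 64) = 8.

```python
def gcd(a, b):
    """Greatest common divisor of two positive integers via the Euclidean algorithm.

    Repeatedly replace (a, b) by (b, a mod b); when the remainder reaches 0,
    the last nonzero remainder is gcd(a, b).
    """
    if a <= 0 or b <= 0:
        raise ValueError("both arguments must be positive integers")
    while b != 0:
        a, b = b, a % b
    return a


# The worked examples from the text.
assert gcd(52, 36) == 4
assert gcd(168, 64) == 8
```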
| | Line 245: | | Line 272: | | | | =Extended Euclidean Algorithm= | | =Extended Euclidean Algorithm= | | | | | | | - | Expand the Euclidean algorithm and you will be able to solve <balloon title="In number theory, Bézout's identity for two integers a,b is an expression ax + by = d, where x and y are integers (called Bézout's coefficients for (a, b)) and d is a common divisor of a and b." >'''Bézout's identity'''</balloon> for x and y where d = gcd(a, b): | + | Expand the Euclidean algorithm and you will be able to solve '''<balloon title="In number theory, Bézout's identity for two integers a,b is an expression ax + by = d, where x and y are integers (called Bézout's coefficients for (a, b)) and d is a common divisor of a and b." >Bézout's identity</balloon>''' for x and y where d = gcd(a, b): | | | | | | | | <math>ax +by = gcd(a, b). </math> | | <math>ax +by = gcd(a, b). </math> | | | | | | | | Note: Usually either x or y will be negative since a, b and gcd(a, b) are positive and both a and b are usually greater than gcd(a, b). | | Note: Usually either x or y will be negative since a, b and gcd(a, b) are positive and both a and b are usually greater than gcd(a, b). | | - | | | | | - | ==Lead In== | | | | - | Recall that | | | | - | :<math>a = k_0b + r_1, \quad 0 < r_1 < b</math> | | | | - | :<math>b = k_1r_1 + r_2 , \quad 0 < r_2 < r_1</math> | | | | - | :<math>r_1 = k_2r_2 + r_3, \quad 0 < r_3 < r_2</math> | | | | - | :<math>r_2 = k_3r_3 + r_4, \quad 0 < r_4 < r_3</math> | | | | - | :... ... | | | | - | :<math>r_{n -2} = k_{n - 1}r_{n -1} + r_n,\quad r_{n-2}\,\bmod\,r_{n- 1} = r_n, \quad 0 < r_{n -1} < r_n</math> | | | | - | :<math>r_{n -1} = k_nr_n, \quad\quad\quad\quad r_{n- 1}\,\bmod\, \quad r_n = 0</math> | | | | - | | | | | - | Solve for <math>r_n</math> using the second to last equation and we get: | | | | - | :<math>r_n = r_{n-2} - k_{n-1}r_{n-1}</math> | | | | - | Because <math>r_n = gcd(a, b)</math> by Euclidean algorithm, | | | | - | :{{EquationRef2|Eq. 1}}<math>gcd(a, b) = r_{n -2} - k_{n -1}r_{n-1}</math> | | | | - | | | | | - | Now let's solve for <math>r_{n-1} </math> in the same way: | | | | - | :{{EquationRef2|Eq. 2}}<math>r_{n -1} = r_{n -3} - k_{n-2}r_{n-2}</math> | | | | - | | | | | - | Substitute {{EquationNote|Eq. 2}} into {{EquationNote|Eq. 1}}: | | | | - | | | | | - | :<math> gcd(a, b) = r_{n -2} - k_{n -1}r_{n-1} </math> | | | | - | :<math> gcd(a, b) = r_{n -2} - k_{n -1}(r_{n -3} - k_{n-2}r_{n-2}) </math> | | | | - | :<math> gcd(a, b) = r_{n -2} - k_{n -1}r_{n -3} + k_{n-1}k_{n-2}r_{n-2} </math> | | | | - | :<math> {\color{Blue}gcd(a, b)} = (1 + k_{n-1}k_{n-2}){\color{Blue}r_{n-2}} - k_{n-1} {\color{Blue}r_{n - 3}}</math> | | | | - | | | | | - | Now you can see gcd(a, b) is expressed by a linear combination of <math>r_{n-2}</math> and <math>r_{n-3}</math>. If we continue this process by using the previous equations from the list above, we could get a linear combination of <math>r_{n-3}</math> and <math>r_{n-4}</math> with <math>r_{n-3}</math> representing <math>r_{n-2}</math> and <math>r_{n-4}</math> representing <math>r_{n-3}</math>. If we keep going like this till we hit the first equation, we can express gcd(a, b) as a linear combination of a and b, which is what we intend to do. 
| | | | | | | | | | ==Description== | | ==Description== | | Line 288: | | Line 288: | | | | '''Computation:''' | | '''Computation:''' | | | | | | | - | *If <math>b = 0,</math> set <math>d = a, x = 1, y = 0,</ math> and return <math>(d, x, y).</math> | + | *If <math>b = 0,</math> set <math>d = a, x = 1, y = 0,</math> and return <math>(d, x, y).</math> | | | *If not, set<math> x_2 = 1, x_1 = 0, y_2 = 0, y_1 = 1</math> | | *If not, set<math> x_2 = 1, x_1 = 0, y_2 = 0, y_1 = 1</math> | | | *While <math>b > 0</math>, do | | *While <math>b > 0</math>, do | | Line 317: | | Line 317: | | | | *Use the extended Euclidean algorithm to get x and y: | | *Use the extended Euclidean algorithm to get x and y: | | | | | | | - | :From the fourth equation we get {{EquationRef2|Eq. 3}} <math>8 = 24 - 1 \times 16. </math> | + | :From the fourth equation we get {{EquationRef2|Eq. 6}} <math>8 = 24 - 1 \times 16. </math> | | - | :From the third equation we get {{EquationRef2|Eq. 4}} <math>16 = 40 - 1 \times 24 </math>. | + | :From the third equation we get {{EquationRef2|Eq. 7}} <math>16 = 40 - 1 \times 24 </math>. | | | | | | | - | *Substitute {{EquationNote|Eq. 4}} into {{EquationNote|Eq. 3}}: | + | *Substitute {{EquationNote|Eq. 7}} into {{EquationNote|Eq. 6}}: | | | | | | | | :<math>8 = 24 - 1 \times (40 - 1 \times 24) </math> | | :<math>8 = 24 - 1 \times (40 - 1 \times 24) </math> | | Line 337: | | Line 337: | | | | | | | | | :<math>\therefore x = -3, y = 8 </math> | | :<math>\therefore x = -3, y = 8 </math> | | | | + | | | | | + | ==Proof== | | | | + | Recall that | | | | + | :<math>a = k_0b + r_1, \quad 0 < r_1 < b</math> | | | | + | :<math>b = k_1r_1 + r_2 , \quad 0 < r_2 < r_1</math> | | | | + | :<math>r_1 = k_2r_2 + r_3, \quad 0 < r_3 < r_2</math> | | | | + | :<math>r_2 = k_3r_3 + r_4, \quad 0 < r_4 < r_3</math> | | | | + | :... ... | | | | + | :<math>r_{n -2} = k_{n - 1}r_{n -1} + r_n,\quad \quad 0 < r_{n -1} < r_n</math> | | | | + | :<math>r_{n -1} = k_nr_n, \quad\quad\quad\quad \quad r_n = 0</math> | | | | + | | | | | + | Solve for <math>r_n</math> using the second to last equation and we get: | | | | + | :<math>r_n = r_{n-2} - k_{n-1}r_{n-1}</math> | | | | + | Because <math>r_n = gcd(a, b)</math> by Euclidean algorithm, | | | | + | :{{EquationRef2|Eq. 4}}<math>gcd(a, b) = r_{n -2} - k_{n -1}r_{n-1}</math> | | | | + | | | | | + | Now let's solve for <math>r_{n-1} </math> in the same way: | | | | + | :{{EquationRef2|Eq. 5}}<math>r_{n -1} = r_{n -3} - k_{n-2}r_{n-2}</math> | | | | + | | | | | + | Substitute {{EquationNote|Eq. 5}} into {{EquationNote|Eq. 4}}: | | | | + | | | | | + | :<math> gcd(a, b) = r_{n -2} - k_{n -1}r_{n-1} </math> | | | | + | :<math> gcd(a, b) = r_{n -2} - k_{n -1}(r_{n -3} - k_{n-2}r_{n-2}) </math> | | | | + | :<math> gcd(a, b) = r_{n -2} - k_{n -1}r_{n -3} + k_{n-1}k_{n-2}r_{n-2} </math> | | | | + | :<math> {\color{Blue}gcd(a, b)} = (1 + k_{n-1}k_{n-2}){\color{Blue}r_{n-2}} - k_{n-1} {\color{Blue}r_{n - 3}}</math> | | | | + | | | | | + | Now you can see gcd(a, b) is expressed by a linear combination of <math>r_{n-2}</math> and <math>r_{n-3}</math>. If we continue this process by using the previous equations from the list above, we could get a linear combination of <math>r_{n-3}</math> and <math>r_{n-4}</math> with <math>r_{n-3}</math> representing <math>r_{n-2}</math> and <math>r_{n-4}</math> representing <math>r_{n-3}</math>. If we keep going like this till we hit the first equation, we can express gcd(a, b) as a linear combination of a and b, which is what we intend to do. 
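The computation steps and the worked example above translate directly into code. Here is a minimal Python sketch of the extended Euclidean algorithm (the name `extended_gcd` is ours, chosen for illustration): it carries the coefficient pairs $(x_2, x_1)$ and $(y_2, y_1)$ through the ordinary division loop and returns Bézout's coefficients, reproducing $168 \times (-3) + 64 \times 8 = 8$ from the example.

```python
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and a*x + b*y = g (Bezout's identity).

    Mirrors the iterative scheme above: update the coefficient pairs
    (x2, x1) and (y2, y1) alongside the ordinary Euclidean division loop.
    """
    x2, x1 = 1, 0   # coefficients of a
    y2, y1 = 0, 1   # coefficients of b
    while b > 0:
        q, r = divmod(a, b)
        x2, x1 = x1, x2 - q * x1
        y2, y1 = y1, y2 - q * y1
        a, b = b, r
    return a, x2, y2


# The example worked out above: 168*(-3) + 64*8 = 8 = gcd(168, 64).
g, x, y = extended_gcd(168, 64)
assert (g, x, y) == (8, -3, 8)
assert 168 * x + 64 * y == g
```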
| | | | | | | | | | | | | '''Euclidean algorithm and extended Euclidean algorithm makes it elegantly easy to compute the two Bézout's coefficients. ''' | | '''Euclidean algorithm and extended Euclidean algorithm makes it elegantly easy to compute the two Bézout's coefficients. ''' | | | | + | | | | | | | | | =Efficiency= | | =Efficiency= | | Line 349: | | Line 377: | | | | Gabriel Lamé is the first person who shows the number of steps required by the Euclidean algorithm. '''Lamé's theorem''' states that the number of steps in Euclidean algorithm for gcd(a,b) is at most five times the number of digits of the smaller number b. Thus, the Euclidean algorithm is linear-time in the number of digits in b. | | Gabriel Lamé is the first person who shows the number of steps required by the Euclidean algorithm. '''Lamé's theorem''' states that the number of steps in Euclidean algorithm for gcd(a,b) is at most five times the number of digits of the smaller number b. Thus, the Euclidean algorithm is linear-time in the number of digits in b. | | | | | | | - | '''Proof:''' | + | <big>'''Proof'''</big> | | | | | | | | Recall the division equations from the Euclidean algorithm, | | Recall the division equations from the Euclidean algorithm, | | Line 369: | | Line 397: | | | | #All the numbers in the division equations, <math>a, b, r_n, k_n</math>, are positive integers. | | #All the numbers in the division equations, <math>a, b, r_n, k_n</math>, are positive integers. | | | | | | | - | Then we will have three conclusions: | + | Analyze the division equations and we will have three conclusions: | | | | | | | | *<math>r_n</math> couldn't be 0, as otherwise all the remainders would be 0. Hence, <math>r_n \geqslant 1 = F_2</math>. | | *<math>r_n</math> couldn't be 0, as otherwise all the remainders would be 0. Hence, <math>r_n \geqslant 1 = F_2</math>. | | Line 377: | | Line 405: | | | | *<math>k_{n -1}</math> is an integer, so <math>k_{n -1} \geqslant 1 </math>. Thus, <math>r_{n -2} \geqslant r_n + r_{n -1} </math>. Since <math>r_{n -1} \geqslant F_3 </math> and <math>r_{n} \geqslant F_2, </math> we have <math> r_{n-2} \geqslant F_2 + F_3 = F_4 </math>. | | *<math>k_{n -1}</math> is an integer, so <math>k_{n -1} \geqslant 1 </math>. Thus, <math>r_{n -2} \geqslant r_n + r_{n -1} </math>. Since <math>r_{n -1} \geqslant F_3 </math> and <math>r_{n} \geqslant F_2, </math> we have <math> r_{n-2} \geqslant F_2 + F_3 = F_4 </math>. | | | | | | | - | To simplify the three conclusions: | + | Simplify the three conclusions: | | | | | | | | :<math>r_n \geqslant 1 = F_2</math> | | :<math>r_n \geqslant 1 = F_2</math> | | Line 391: | | Line 419: | | | | Therefore, <math> b \geqslant F_{n + 2}</math> | | Therefore, <math> b \geqslant F_{n + 2}</math> | | | | | | | - | A theorem about the lower bound of Fibonacci numbers states that for all integers <math>n \geqslant 3, it is true that F_n > \alpha ^{n -2}, </math> where <math>\alpha = \frac{1+\sqrt{5}}{2} \approx 1.61803</math> (the sum of the [[Golden Ratio]] and 1). | + | A theorem about the lower bound of Fibonacci numbers states that for all integers <math>n \geqslant 3</math>, it is true that <math> F_n > \alpha ^{n -2} </math> where <math>\alpha = \frac{1+\sqrt{5}}{2} \approx 1.61803</math> ( the sum of the [[Golden Ratio]] and 1). 
| | | | | | | | Therefore, <math>b \geqslant F_{n+2} > \alpha ^n</math> | | Therefore, <math>b \geqslant F_{n+2} > \alpha ^n</math> | | | | | | | - | Thus, <math>\log_{10} b \geqslant \log_{10} \alpha ^n = n \log_{10} \alpha \approx n 0.208 \approx \frac{n}{5}</math> | + | Thus, <math>\log_{10} b > \log_{10} \alpha ^n = n \log_{10} \alpha \approx n~0.208 \approx \frac{n}{5}</math> | | - | | + | | | - | Assume b has k digits, so <math>k > \log_{10} b</math>. Then | + | | | | | | | | | | + | Because b has k digits, <math>k > \log_{10} b</math>. Then | | | | + | :<math> k > \log_{10} b > \frac{n}{5} </math> | | | :<math> k > \frac{n}{5} </math> | | :<math> k > \frac{n}{5} </math> | | | :<math> 5k > n</math> | | :<math> 5k > n</math> | | | :<math>5k +1 > n + 1 </math> | | :<math>5k +1 > n + 1 </math> | | | :<math>5k \geqslant n + 1 </math> | | :<math>5k \geqslant n + 1 </math> | | | | + | :<math>n + 1 \leqslant 5k </math>. | | | | | | | - | Therefore, the number of steps <math>n + 1 \leqslant 5k </math>. The number of steps required by Euclidean algorithm for gcd(a,b) is no more than five times the number of digits of b. | + | Therefore, the number of steps (<math>n + 1</math>) required by Euclidean algorithm for gcd(a,b) is no more than five times the number of digits of b (<math>5k</math>). | | | | | | | | ==Shortcomings of the Euclidean Algorithm== | | ==Shortcomings of the Euclidean Algorithm== | | | | | | | - | The Euclidean algorithm is an ancient but good and simple algorithm to find the gcd of two nonnegative integers; it is well designed both theoretically and practically. Due to its simplicity, it is widely applied in many industries today. However, when dealing with really big integers (prime numbers over 64 digits in particular), finding the right quotients using the Euclidean algorithm adds to the time of computation for modern computers. | + | The Euclidean algorithm is an ancient but good and simple algorithm to find the gcd of two nonnegative integers; it is well designed both theoretically and practically. Due to its simplicity, it is widely applied in many industries today. However, when dealing with really big integers (prime numbers over 64 digits in particular), finding the right quotients using the Euclidean algorithm adds to the time of computation for modern computers. | | | | | | | - | '''Stein's algorithm (also known as the binary GCD algorithm)''' is also an algorithm to compute the gcd of two nonnegative integers brought forward by J. Stein in 1967. This alternative is made to enhance the efficiency of the Euclidean algorithm, because it replaces complicated division and multiplication with addition, subtraction and shifts, which make it easier for the CPU to compute large integers. | + | '''Stein's algorithm (also known as the binary GCD algorithm)''' is also an algorithm to compute the gcd of two nonnegative integers brought forward by J. Stein in 1967. This alternative is made to enhance the efficiency of the Euclidean algorithm, because it replaces complicated division and multiplication in Euclidean algorithm with addition, subtraction and shifts, which make it easier for the CPU to compute large integers. | | | | | | | | | | | | - | The algorithm has the following conclusions: | + | Stein's algorithm has the following conclusions: | | - | *gcd(m, 0) = m, gcd(0, m) = m. It is because every number except 0 divides 0 and m is the biggest number that can divide itself. | + | *<math>gcd(m, 0) = m, gcd(0, m) = m. 
</math>It is because every number except 0 divides 0 and m is the biggest number that can divide itself. | | - | *If e and f are both even integers, then gcd(e, f) = 2 gcd(<math>\frac{e}{2}, \frac{f}{2}</math>), because 2 is definitely a common divisor of two even integers. | + | *If e and f are both even integers, then <math>gcd(e, f) = 2 \cdot gcd\left ( \frac{e}{2}, \frac{f}{2} \right )</math>, because 2 is definitely a common divisor of two even integers. | | - | *If e is even and v is odd, then gcd(e, f) = gcd(<math>\frac{e}{2}</math>, f), because 2 is definitely not a common divisor of an even integer and an odd integer. | + | | | - | *Otherwise both are odd and gcd(e, f) = gcd(<math>\frac{|e-f|}{2}</math>, the smaller one of e and f). According to Euclidean algorithm, the difference of e and f could also divide the gcd of e and f. And Euclidean algorithm with a division by 2 results in an integer because the difference of two odd integers is even. | + | | | | | | | | | | + | *If e is even and f is odd, then <math>gcd(e, f) = gcd \left ( \frac{e}{2}, f \right ) </math>, because 2 is definitely not a common divisor of an even integer and an odd integer. | | | | | | | - | The description of Stein's algorithm: | + | *Otherwise both are odd and <math> gcd(e, f) = gcd\left ( \frac{|e-f|}{2}, \mbox{the smaller one of e and f} \right ) </math>. According to Euclidean algorithm, the difference of e and f, which is <math>|e-f|</math>, could also divide <math>gcd(e, f)</math>. And <math>\frac{|e-f|}{2}</math> is an integer because the difference of two odd integers is even. Thus, the gcd of <math>\frac{|e-f|}{2}</math> and the smaller one of e is the gcd of e and f. | | | | | | | - | '''Input:'''<math> u, v ( 0 < u \leqslant v ) </math>; | | | | | | | | | - | '''Output: ''' g = gcd(u, v) | + | Based on the three conclusions, Stein's algorithm is described as the following. Note that the inner computation below is actually the same as the three conclusions. We just restate the three conclusions in an "algorithm form." | | | | + | | | | | + | '''Input:''' any two distinctive positive integers<math> u, v</math> with <math> 0 < u \leqslant v </math>; | | | | + | | | | | + | '''Output: ''' <math>g = gcd(u, v) </math> | | | | | | | | '''Inner Computation: ''' | | '''Inner Computation: ''' | | | | | | | | #g = 1. | | #g = 1. | | - | # While both u and v are even integers, do <math> u = \frac{u}{2}, v = \frac{v}{2}, g = 2g </math>. | + | # While both u and v are even integers, do <math> u = \frac{u}{2}, v = \frac{v}{2}, g = 2g </math>; ('' "while" means both "if" and "iteration until the condition is no longer satisfied" '') | | | # While <math>u > 0 </math>, do: | | # While <math>u > 0 </math>, do: | | - | ## While u is even, do: <math> u =\frac{u}{2} </math>. | + | ## While u is even, do: <math> u =\frac{u}{2} </math>; | | - | ## While v is even, do: <math> v = \frac{v}{2}</math>. | + | ## While v is even, do: <math> v = \frac{v}{2}</math>; | | - | ## <math> t = \frac{\left\vert u - v \right\vert }{2}</math>. | + | ## <math> t = \frac{\left\vert u - v \right\vert }{2}</math>; | | - | ## If <math> u \leqslant v </math>, u = t; else, v = t. | + | ## If <math> u \leqslant v , u = t </math>; else, <math>v = t;</math> | | - | # Return (<math>g \cdot v </math>) | + | # Return <math>g \cdot v </math>. 
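For comparison, here is a Python sketch of Stein's algorithm following the three conclusions and the inner computation described above (again an illustration, not part of the original article). It uses only halving, subtraction and comparison; note that, as in the worked example that follows, the larger of the two odd values is the one replaced by $t$. It agrees with the Euclidean algorithm on the running example, gcd(168, 64) = 8.

```python
def binary_gcd(u, v):
    """gcd of two nonnegative integers via Stein's (binary GCD) algorithm."""
    if u == 0:
        return v
    if v == 0:
        return u
    g = 1
    while u % 2 == 0 and v % 2 == 0:   # factor out powers of 2 common to both
        u //= 2
        v //= 2
        g *= 2
    while u > 0:
        while u % 2 == 0:              # 2 divides u but not v, so discard it
            u //= 2
        while v % 2 == 0:              # 2 divides v but not u, so discard it
            v //= 2
        t = abs(u - v) // 2            # both are odd here, so the difference is even
        if u >= v:                     # replace the larger value by t
            u = t
        else:
            v = t
    return g * v


# The running example: matches the Euclidean algorithm result.
assert binary_gcd(168, 64) == 8
```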
| | | | | | | - | '''Example:''' ''<math>u=168, v=64 </math>'' | + | '''Example:''' | | - | #<math>u = \frac{168}{2} = 84, v = \frac{64}{2} = 32, g = 2 </math>; <math> u = \frac{84}{2} = 42, v = \frac{32}{2} = 16, g = 4 </math>; <math> u = \frac{42}{2} = 21, v = \frac{16}{2} = 8, g =8 </math>; | + | | | | | + | ''Steiner's algorithm is designed for large numbers, but we only provide an example with small numbers for convenience.'' | | | | + | | | | | + | :<math>u=168, v=64 </math> | | | | + | | | | | + | # <math> g = 1</math>; | | | | + | # Both u and v are even integers. | | | | + | ##<math>u = \frac{168}{2} = 84, v = \frac{64}{2} = 32, g = 2 </math>; | | | | + | ##<math> u = \frac{84}{2} = 42, v = \frac{32}{2} = 16, g = 4 </math>; | | | | + | ##<math> u = \frac{42}{2} = 21, v = \frac{16}{2} = 8, g =8 </math>; (''u and v are not both even now. Move on to the next step.'') | | | # <math> u = 21 > 0; </math> | | # <math> u = 21 > 0; </math> | | - | ## <math> v = \frac{8}{2} = 4; v = \frac{4}{2} = 2; v = \frac{2}{2} = 1; </math> | + | ## v is even. | | | | + | ###<math> v = \frac{8}{2} = 4; </math> | | | | + | ###<math> v = \frac{4}{2} = 2; </math> | | | | + | ###<math> v = \frac{2}{2} = 1; </math> | | | ## <math> t =\frac{\left\vert 21- 1 \right\vert}{2} = \frac{20}{2} = 10; t = 10 </math>, | | ## <math> t =\frac{\left\vert 21- 1 \right\vert}{2} = \frac{20}{2} = 10; t = 10 </math>, | | - | ## <math>\because u = 21 > 1 = v, \therefore u = t =10; </math> | + | ## <math>u = 21 > 1 = v, \therefore u = t =10; </math> | | | # <math> u = 10 > 0, v =1; </math> | | # <math> u = 10 > 0, v =1; </math> | | - | ## <math>u = \frac{10}{2} = 5; </math> | + | ## u is even. | | | | + | ###<math>u = \frac{10}{2} = 5; </math> | | | ## <math> t = \frac{\left\vert 5- 1 \right\vert}{2} = \frac{4}{2} = 2; t = 2, </math> | | ## <math> t = \frac{\left\vert 5- 1 \right\vert}{2} = \frac{4}{2} = 2; t = 2, </math> | | - | ## <math> \because u = 5 > 1 = v, \therefore u = t = 2;</math> | + | ## <math> u = 5 > 1 = v, \therefore u = t = 2;</math> | | | # <math> u = 2 > 0, v =1; </math> | | # <math> u = 2 > 0, v =1; </math> | | - | ## <math>u = \frac{2}{2} = 1; </math> | + | ## u is even. | | | | + | ###<math>u = \frac{2}{2} = 1; </math> | | | ## <math> t = \frac{\left\vert 1 - 1 \right\vert}{2} = \frac{0}{2} = 0; t = 0, </math> | | ## <math> t = \frac{\left\vert 1 - 1 \right\vert}{2} = \frac{0}{2} = 0; t = 0, </math> | | - | ## <math> \because u = 2 > 1 = v, \therefore u = t = 0;</math> | + | ## <math>u = 2 > 1 = v, \therefore u = t = 0;</math> ('' Because u = 0, condition u > 0 is no longer satisfied. Move on to the next step'') | | - | # <math> g = g \cdot v = 8 \times 1 = 8. </math> | + | # Return <math> g = g \cdot v = 8 \times 1 = 8. </math> | | | | + | | | | | | | | - | Now you may have a better understanding of the efficiency of Stein's algorithm, which substitutes divisions with faster operations by exploiting the binary representation that real computers use nowadays. | + | Now you may have a better understanding of the efficiency of Stein's algorithm, which substitutes divisions with faster operations by exploiting the binary representation that real computers use nowadays. | | | | | | | | | | | | | |WhyInteresting= | | |WhyInteresting= | | | | | | | - | The Euclidean algorithm is a fundamental algorithm for other mathematical theories and various subjects in different areas. Please see [[The Application of Euclidean Algorithm]] to learn more about the Euclidean algorithm. 
## Current revision

Euclidean Algorithm

Fields: Number Theory and Algebra

Image Created By: Phoebe Jiang

About 2000 years ago, Euclid, one of the greatest mathematicians of Greece, devised a fairly simple and efficient algorithm to determine the greatest common divisor of two integers; it is now considered one of the most efficient and well-known early algorithms in the world. The Euclidean algorithm hasn't changed in 2000 years and has always been the basis of Euclid's number theory. This image shows Euclid's method to find the greatest common divisor of two integers. The greatest common divisor of two numbers a and b is the largest integer that divides both numbers without a remainder.

# Basic Description

When asked to find the gcd of two integers, a possible way is to prime factor each integer and see which factors are common between the two, or we could simply try different numbers and see which number works. However, both approaches can become very complicated and time consuming as the two integers get relatively large.

The Euclidean algorithm (also known as Euclid's algorithm) describes a procedure for finding the greatest common divisor of two positive integers. This method is recorded in Euclid's Elements Book VII, the book that contains the foundation of number theory for which Euclid is famous.

An example of the method is shown in the image. First, use the smaller integer of the two, 36, to divide the bigger one, 52. Use the remainder of this division, 16, to divide 36 and you get the remainder 4. Now divide the last divisor, 16, by 4 and you find that they divide exactly. Therefore, 4 is the greatest common divisor. For every two integers, you will get the gcd by repeating the same process until there is no remainder.

The Euclidean algorithm comes in handy with computers because large numbers are hard to factor but relatively easy to divide.

# A More Mathematical Explanation

Note: understanding of this explanation requires: Number Theory, Algebra

## The Description of Euclidean Algorithm

### Mathematical definitions and their abbreviations

• a mod b is the remainder when a is divided by b (where mod = modulo). Example: 7 mod 4 = 3; 4 mod 2 = 0; 5 mod 9 = 5
• a $\mid$ b means a divides b exactly, or b is divided by a without any remainder. Example: 3 $\mid$ 6; 4 $\mid$ 16
• gcd means the greatest common divisor, also called the greatest common factor (gcf), the highest common factor (hcf), and the greatest common measure (gcm).
• gcd(a, b) means the gcd of two positive integers a and b; (a, b) is another notation for gcd(a, b).

Keep those abbreviations in mind; you will see them a lot later.

### Precondition

The Euclidean Algorithm is based on the following theorem:

Theorem: $gcd(a, b) = gcd(b,~ a~mod~b)$ where $a > b$ and $a~mod~b \ne 0$.

Proof: Since $a > b$, $a$ can be written as $a = kb + r$ with $0 \leqslant r < b$.
Then the remainder $r = a~mod~b$. Assume $d$ is a common divisor of $a$ and $b$, so $d \mid a$ and $d \mid b$; we can write $a = q_1 d, b = q_2 d$. Because $r = a - kb$, we get $r = q_1 d - k q_2 d = (q_1 - k q_2) d$, so $d \mid r$. Therefore $d$ is also a common divisor of $(b, r) = (b, a~mod~b)$. Hence $(a, b)$ and $(b, a~mod~b)$ have exactly the same common divisors, and so they have the same greatest common divisor.

### Description

The description of the Euclidean algorithm is as follows:

• Input: two positive integers a, b (a > b)
• Output: g, the gcd of a and b
• Internal Computation
1. Divide a by b and get the remainder r.
2. If r = 0, report b as the gcd of a and b. If r $\ne$ 0, replace a by b and replace b by r. Go back to the previous step.

The algorithm process is like this:

$a = k_0b + r_1, \quad 0 < r_1 < b$

$b = k_1r_1 + r_2 , \quad 0 < r_2 < r_1$

$r_1 = k_2r_2 + r_3, \quad 0 < r_3 < r_2$

$r_2 = k_3r_3 + r_4, \quad 0 < r_4 < r_3$

... ...

$r_{n -2} = k_{n - 1}r_{n -1} + r_n,\quad 0 < r_n < r_{n -1}$

$r_{n -1} = k_nr_n, \quad r_{n+1} = 0$

To sum up,

$(a, b) = (b, r_1) = (r_1, r_2) = (r_2, r_3) = (r_3, r_4) = \dots = (r_{n- 2}, r_{n -1}) = (r_{n- 1}, r_n)$

so $r_n$ is the gcd of a and b.

Note: The Euclidean algorithm is iterative, meaning that the next step is repeated using the result from the last step until it reaches the end.

### Example

An example will make the Euclidean algorithm clearer. Let's say we want to know the gcd of 168 and 64. In this case, a = 168, b = 64. Start writing the first equation:

168 = 2 $\times$ 64 + 40 (Find the greatest possible integer quotient for the divisor 64. It couldn't be 1, because the remainder has to be smaller than the divisor 64. It couldn't be 3, because 3 $\times$ 64 is greater than 168. So the quotient is 2 and the remainder is 40.)

64 = 1 $\times$ 40 + 24 (Take the remainder 40 from the last equation, $r_1 = 40$, and use it as the divisor in this second equation. By analogy, find the quotient and the remainder.)

40 = 1 $\times$ 24 + 16

24 = 1 $\times$ 16 + 8

16 = 2 $\times$ 8

(168, 64) = (64, 40) = (40, 24) = (24, 16) = (16, 8)

Therefore, 8 is the gcd of 168 and 64. (A short, runnable code sketch of this procedure, and of the two variants discussed later, is given near the end of this page.)

• Here's an applet for you to play around with finding the gcd by using the Euclidean algorithm.

## Proof of the Euclidean Algorithm

### Modern Proof

• Proving That It Is A Common Divisor

In order to prove that the Euclidean algorithm works, the first thing is to show that the number we get from this algorithm is a common divisor of a and b. Recall that

$a = k_0b + r_1, \quad 0 < r_1 < b$

$b = k_1r_1 + r_2 , \quad 0 < r_2 < r_1$

$r_1 = k_2r_2 + r_3, \quad 0 < r_3 < r_2$

... ...

$r_{n-3} = k_{n-2}r_{n-2} + r_{n -1}, \quad 0 < r_{n-1} < r_{n-2}$

$r_{n -2} = k_{n - 1}r_{n -1} + r_n,\quad 0 < r_n < r_{n -1}$

$r_{n -1} = k_nr_n, \quad r_{n+1} = 0$

Based on the last equation Eq. n+1, we substitute $r_{n -1}$ with $k_nr_n$ in Eq. n, so that $r_{n -2} = k_{n -1}r_{n -1} + r_n = k_{n -1}k_nr_n + r_n$, i.e.

$r_{n-2} = (k_{n -1}k_n + 1) r_n$

Thus we have $r_n \mid r_{n -2}$. From the equation before those two, Eq. n-1, we repeat the steps we did just now:

$r_{n -3} = k_{n -2}r_{n -2} + r_{n -1} = k_{n -2} \Big( (k_{n -1} k_n + 1)r_n \Big) + k_nr_n = (k_{n -2} k_{n -1} k_n + k_{n - 2} + k_n) r_n$.

Now we know $r_n \mid r_{n -3}$.
Continue this process and we will find that $r_n \mid a$ and $r_n \mid b$, so $r_n$, the number we get from the Euclidean algorithm, is indeed a common divisor of a and b.

• Proving That It Is The Greatest

Second, we need to show that $r_n$ is the greatest among all the common divisors of a and b. To show this, take any common divisor d of a and b, where d is a positive integer. Then we can rewrite a and b as a = dm, b = dn, where m and n are also positive integers. This second part of the proof is similar to the first part, because it repeats the same kind of substitution, but this time we start from the first equation of the Euclidean algorithm, Eq. 1:

We know that $a = k_0b + r_1$. Thus, $r_1 = a - k_0b = dm - k_0 dn$, and $r_1 = (m - k_0 n) d$ (substitute dm for a and dn for b). Therefore, $d \mid r_1$. Let $r_1 = d_1 d$.

Consider the second equation, Eq. 2, and solve for $r_2$ in the same way. We know that $b = k_1r_1 + r_2$. Thus, $r_2 = b - k_1r_1 = dn - k_1d_1 d$, and $r_2 = (n - k_1d_1) d$. Therefore, $d \mid r_2$.

Continuing the process until we reach the last equation, Eq. n, we will get $d \mid r_n$. Since d stands for any common divisor of a and b, $d \mid r_n$ means that $r_n$ is divisible by every common divisor of a and b, so $r_n$ must be at least as large as any of them. Therefore, the number we get from the Euclidean Algorithm, $r_n$, is indeed the greatest common divisor of a and b.

### Euclid's Proof

Now let's look at Euclid's proof. Since Euclid's method of finding the gcd is based on several definitions, I quote the first 15 definitions in Book VII of his Elements for you.

Definitions

1. A unit is that by virtue of which each of the things that exist is called one.
2. A number is a multitude composed of units.
3. A number is a part of a number, the less of the greater, when it measures the greater.
4. but parts when it does not measure it.
5. The greater number is a multiple of the less when it is measured by the less.
6. An even number is that which is divisible into two equal parts.
7. An odd number is that which is not divisible into two equal parts, or that which differs by an unit from an even number.
8. An even-times even number is that which is measured by an even number according to an even number.
9. An even-times odd number is that which is measured by an even number according to an odd number.
10. An odd-times odd number is that which is measured by an odd number according to an odd number.
11. A prime number is that which is measured by an unit alone.
12. Numbers prime to one another are those which are measured by an unit alone as a common measure.
13. A composite number is that which is measured by some number.
14. Numbers composite to one another are those which are measured by some number as a common measure.
15. A number is said to multiply a number when that which is multiplied is added to itself as many times as there are units in the other, and thus some number is produced.[1]

Editor's Note: In a nutshell, Euclid's one unit is the number 1 in algebra. He uses lines to represent numbers; the longer the line, the greater the number. In Def. 3, "measure" means "divide."

Proposition 1. (See Image 1) Two unequal numbers being set out, and the less being continually subtracted in turn from the greater, if the number which is left never measures the one before it until an unit is left, the original numbers will be prime to one another.
For, the less of two unequal numbers AB, CD being continually subtracted from the greater, let the number which is left never measure the one before it until an unit is left;

(image 1)

I say that AB, CD are prime to one another, that is, that an unit alone measures AB, CD.

For, if AB, CD are not prime to one another, some number will measure them. Let a number measure them, and let it be E; let CD, measuring BF, leave FA less than itself, let AF, measuring DG, leave GC less than itself, and let GC, measuring FH, leave an unit HA.

Since, then, E measures CD, and CD measures BF, therefore E also measures BF. But it also measures the whole BA; therefore it will also measure the remainder AF. But AF measures DG; therefore E also measures DG. But it also measures the whole DC; therefore it will also measure the remainder CG. But CG measures FH; therefore E also measures FH. But it also measures the whole FA; therefore it will also measure the remainder, the unit AH, though it is a number: which is impossible.

Therefore no number will measure the numbers AB, CD; therefore AB, CD are prime to one another. [1] [VII.Def.12] Q.E.D.

Editor's Note: Here is my translation of Proposition 1. Euclid wants to show that a and b must be prime to each other if we get 1 left instead of 0. Why? Recall a > b. Write Euclid's proof in equations and we will get:

$a = kb + r, \quad 0 < r < b$

$b = qr + t, \quad 0 < t < r$

$r = lt + {\color{Red}1}, \quad 1 < t$

Assume a and b have a common measure e with e greater than 1. Then e divides r by the first equation, and e divides t by the second equation. Hence, by the third equation, e must also divide 1. But e cannot divide 1: 1 cannot be divided by e without any remainder, because e is greater than 1. Therefore, a and b are prime to each other.

Proposition 2. (See Image 2) Given two numbers not prime to one another, to find their greatest common measure.

Let AB, CD be the two given numbers not prime to one another. Thus it is required to find the greatest common measure of AB, CD.

If now CD measures AB - and it also measures itself - CD is a common measure of CD, AB. And it is manifest that it is also the greatest; for no greater number than CD will measure CD.

But, if CD does not measure AB, then, the less of the numbers AB, CD being continually subtracted from the greater, some number will be left which will measure the one before it. For an unit will not be left; otherwise AB, CD will be prime to one another [VII, I], which is contrary to the hypothesis.

(image 2)

Therefore, some number will be left which will measure the one before it. Now let CD, measuring BE, leave EA less than itself, let EA, measuring DF, leave FC less than itself, and let CF measure AE. Since then, CF measures AE, and AE measures DF, therefore CF will also measure DF. But it also measures itself; therefore it will also measure the whole CD. But CD measures BE; therefore CF also measures BE. But it also measures EA; therefore it will also measure the whole BA. But it also measures CD; therefore CF measures AB, CD. Therefore CF is a common measure of AB, CD.

I say next that it is also the greatest. For, if CF is not the greatest common measure of AB, CD, some number which is greater than CF will measure the numbers AB, CD. Let such a number measure them, and let it be G. Now, since G measures CD, while CD measures BE, G also measures BE. But it also measures the whole BA; therefore it will also measure the remainder AE. But AE measures DF; therefore G will also measure DF.
But it also measures the whole DC; therefore it will also measure the remainder CF, that is, the greater will measure the less: which is impossible. Therefore no number which is greater than CF will measure the numbers AB, CD; therefore CF is the greatest common measure of AB, CD.

PORISM. From this it is manifest that, if a number measure two numbers, it will also measure their greatest common measure. [1] Q.E.D.

Editor's Note: Prop. 2 is pretty self-explanatory, proved in a similar way as Prop. 1.

Comparing the modern proof with Euclid's proof, it is not hard to notice that the modern proof is more about algebra, while Euclid did his proof of his algorithm using geometry because, at that time, algebra had not been invented yet. However, the main idea is pretty much the same. They both prove that the result is a common divisor first and then show that it is the greatest among all the common divisors.

# Extended Euclidean Algorithm

Expand the Euclidean algorithm and you will be able to solve Bézout's identity for x and y where d = gcd(a, b):

$ax + by = gcd(a, b).$

Note: Usually either x or y will be negative, since a, b and gcd(a, b) are positive and both a and b are usually greater than gcd(a, b).

## Description

The description of the extended Euclidean algorithm is:

Input: Two non-negative integers a and b ($a \geqslant b$).

Output: d = gcd(a, b) and integers x and y satisfying ax + by = d.

Computation:

• If $b = 0,$ set $d = a, x = 1, y = 0,$ and return $(d, x, y).$
• If not, set $x_2 = 1, x_1 = 0, y_2 = 0, y_1 = 1$.
• While $b > 0$, do
$q = \left\lfloor \tfrac{a}{b} \right\rfloor, \; r = a - qb, \; x = x_2 - qx_1, \; y = y_2 - q y_1,$
$a = b, \; b = r, \; x_2 = x_1, \; x_1 = x, \; y_2 = y_1, \; y_1 = y.$
• Set $d = a, x = x_2, y = y_2,$ and return $(d, x, y).$

## Example

This linear equation is going to be very complicated with all these notations, so it is much easier to understand with an example: Solve for integers x and y such that 168x + 64y = 8.

• Apply the Euclidean algorithm to compute gcd(168, 64), and we have a list of the following equations:

$168 = 2 \times 64 + 40$

$64 = 1 \times 40 + 24$

$40 = 1 \times 24 + 16$

$24 = 1 \times 16 + 8$

$16 = 2 \times 8 + 0$

Thus gcd(168, 64) = 8. So we know that we can solve for x and y by the extended Euclidean algorithm.

• Use the extended Euclidean algorithm to get x and y:

From the fourth equation we get $8 = 24 - 1 \times 16.$ From the third equation we get $16 = 40 - 1 \times 24$.

• Substitute Eq. 7 into Eq. 6:

$8 = 24 - 1 \times (40 - 1 \times 24)$

$8 = 24 - 1 \times 40 + 1 \times 24$

$8 = 2 \times 24 - 1 \times 40$

• Do the same steps for the second equation in the list:

$24 = 64 - 1 \times 40$

$8 = 2\times (64 - 1 \times 40) - 1\times 40$

$8 = 2\times 64 - 3\times 40$

For the first equation in the list, we get

$40 = 168 - 2 \times 64$

$8 = 2\times 64 - 3 \times (168 - 2 \times 64)$

$8 = -3 \times 168 + 8 \times 64$

$\therefore x = -3, y = 8$

## Proof

Recall that

$a = k_0b + r_1, \quad 0 < r_1 < b$

$b = k_1r_1 + r_2 , \quad 0 < r_2 < r_1$

$r_1 = k_2r_2 + r_3, \quad 0 < r_3 < r_2$

$r_2 = k_3r_3 + r_4, \quad 0 < r_4 < r_3$

... ...

$r_{n -2} = k_{n - 1}r_{n -1} + r_n,\quad 0 < r_n < r_{n -1}$

$r_{n -1} = k_nr_n, \quad r_{n+1} = 0$

Solve for $r_n$ using the second to last equation and we get:

$r_n = r_{n-2} - k_{n-1}r_{n-1}$

Because $r_n = gcd(a, b)$ by the Euclidean algorithm,

$gcd(a, b) = r_{n -2} - k_{n -1}r_{n-1}$

Now let's solve for $r_{n-1}$ in the same way:

$r_{n -1} = r_{n -3} - k_{n-2}r_{n-2}$

Substitute Eq. 5 into Eq. 4:
$gcd(a, b) = r_{n -2} - k_{n -1}r_{n-1}$

$gcd(a, b) = r_{n -2} - k_{n -1}(r_{n -3} - k_{n-2}r_{n-2})$

$gcd(a, b) = r_{n -2} - k_{n -1}r_{n -3} + k_{n-1}k_{n-2}r_{n-2}$

${\color{Blue}gcd(a, b)} = (1 + k_{n-1}k_{n-2}){\color{Blue}r_{n-2}} - k_{n-1} {\color{Blue}r_{n - 3}}$

Now you can see gcd(a, b) is expressed as a linear combination of $r_{n-2}$ and $r_{n-3}$. If we continue this process using the previous equations from the list above, we get a linear combination of $r_{n-3}$ and $r_{n-4}$, with $r_{n-3}$ replacing $r_{n-2}$ and $r_{n-4}$ replacing $r_{n-3}$. If we keep going like this till we hit the first equation, we can express gcd(a, b) as a linear combination of a and b, which is what we intend to do. The Euclidean algorithm and the extended Euclidean algorithm make it elegantly easy to compute the two Bézout coefficients.

# Efficiency

How efficient could the Euclidean algorithm be? Is it always perfect? Does the Euclidean algorithm have shortcomings?

## Number of Steps - Lamé's Theorem

Gabriel Lamé was the first person to show the number of steps required by the Euclidean algorithm. Lamé's theorem states that the number of steps in the Euclidean algorithm for gcd(a, b) is at most five times the number of digits of the smaller number b. Thus, the Euclidean algorithm is linear-time in the number of digits of b.

Proof

Recall the division equations from the Euclidean algorithm,

$a = k_0b + r_1, \quad 0 < r_1 < b$

$b = k_1r_1 + r_2 , \quad 0 < r_2 < r_1$

$r_1 = k_2r_2 + r_3, \quad 0 < r_3 < r_2$

$r_2 = k_3r_3 + r_4, \quad 0 < r_4 < r_3$

... ...

$r_{n -2} = k_{n - 1}r_{n -1} + r_n,\quad 0 < r_n < r_{n -1}$

$r_{n -1} = k_nr_n, \quad r_{n+1} = 0$

We can tell from the equations that the number of steps is $n+1$, with n being the same n as in the division equations. So we want to prove that $n + 1 \leqslant 5k$ where k is the number of digits of b.

Notations:

1. a and b are integers and we assume a is bigger than b, so $a > b \geqslant 1$.
2. The Fibonacci numbers are 1, 1, 2, 3, 5, 8, 13, ..., where every later number is the sum of the two previous numbers.
3. Denote by $F_n$ the nth Fibonacci number (e.g. $F_2 = 1, F_3 = 2, F_4 = 3, F_5 = 5$).
4. All the numbers in the division equations, $a, b, r_n, k_n$, are positive integers.

Analyze the division equations and we will have three conclusions:

• $r_n$ cannot be 0, since it is the last nonzero remainder. Hence, $r_n \geqslant 1 = F_2$.
• We know that $r_{n- 1} > r_n$. Thus, according to the last equation $r_{n -1} = k_nr_n$, $k_n$ must be greater than 1: $k_n \geqslant 2$. Therefore, $r_{n -1} = k_nr_n \geqslant 2 r_n \geqslant 2F_2 = 2 = F_3$.
• $k_{n -1}$ is an integer, so $k_{n -1} \geqslant 1$. Thus, $r_{n -2} \geqslant r_n + r_{n -1}$. Since $r_{n -1} \geqslant F_3$ and $r_{n} \geqslant F_2,$ we have $r_{n-2} \geqslant F_2 + F_3 = F_4$.

Simplify the three conclusions:

$r_n \geqslant 1 = F_2$

$r_{n -1} = k_nr_n \geqslant 2 r_n \geqslant 2F_2 = F_3$

$r_{n-2} \geqslant r_{n} + r_{n -1} \geqslant F_2 + F_3 = F_4$

By induction,

$r_{n -3} \geqslant r_{n -1} + r_{n -2} \geqslant F_3 + F_4 = F_5$

...

$r_1 \geqslant r_3 + r_2 \geqslant F_{n- 1} + F_n = F_{n+1}$

$b \geqslant r_2 + r_1 \geqslant F_n + F_{n+1} = F_{n+2}$

Therefore, $b \geqslant F_{n + 2}$

A theorem about the lower bound of Fibonacci numbers states that for all integers $n \geqslant 3$, it is true that $F_n > \alpha ^{n -2}$ where $\alpha = \frac{1+\sqrt{5}}{2} \approx 1.61803$ (the Golden Ratio).
Therefore, $b \geqslant F_{n+2} > \alpha ^n$

Thus, $\log_{10} b > \log_{10} \alpha ^n = n \log_{10} \alpha$, and $\log_{10} \alpha \approx 0.208 > \frac{1}{5}$, so $\log_{10} b > \frac{n}{5}$.

Because b has k digits, $k > \log_{10} b$. Then

$k > \log_{10} b > \frac{n}{5}$

$5k > n$

$5k + 1 > n + 1$

$5k \geqslant n + 1$

$n + 1 \leqslant 5k$.

Therefore, the number of steps ($n + 1$) required by the Euclidean algorithm for gcd(a, b) is no more than five times the number of digits of b ($5k$).

## Shortcomings of the Euclidean Algorithm

The Euclidean algorithm is an ancient but good and simple algorithm to find the gcd of two nonnegative integers; it is well designed both theoretically and practically. Due to its simplicity, it is widely applied in many industries today. However, when dealing with really big integers (prime numbers over 64 digits in particular), finding the right quotients using the Euclidean algorithm adds to the time of computation for modern computers.

Stein's algorithm (also known as the binary GCD algorithm) is an alternative algorithm to compute the gcd of two nonnegative integers, introduced by J. Stein in 1967. It is designed to enhance the efficiency of the Euclidean algorithm, because it replaces the divisions and multiplications of the Euclidean algorithm with additions, subtractions and shifts, which are easier for the CPU to perform on large integers.

Stein's algorithm rests on the following observations:

• $gcd(m, 0) = m$ and $gcd(0, m) = m$, because every number divides 0, and m is the biggest number that divides m itself.
• If e and f are both even integers, then $gcd(e, f) = 2 \cdot gcd\left ( \frac{e}{2}, \frac{f}{2} \right )$, because 2 is definitely a common divisor of two even integers.
• If e is even and f is odd, then $gcd(e, f) = gcd \left ( \frac{e}{2}, f \right )$, because 2 is definitely not a common divisor of an even integer and an odd integer.
• Otherwise both are odd and $gcd(e, f) = gcd\left ( \frac{|e-f|}{2}, \mbox{the smaller one of e and f} \right )$. As in the Euclidean algorithm, $gcd(e, f)$ also divides the difference $|e-f|$; and $\frac{|e-f|}{2}$ is an integer because the difference of two odd integers is even. Since e and f are both odd, their gcd is odd, so dividing the even number $|e-f|$ by 2 does not change the gcd. Thus the gcd of $\frac{|e-f|}{2}$ and the smaller one of e and f is the gcd of e and f.

Based on these conclusions, Stein's algorithm is described as follows. Note that the inner computation below is just a restatement of the conclusions in "algorithm form."

Input: any two positive integers $u, v$;

Output: $g = gcd(u, v)$

Inner Computation:

1. g = 1.
2. While both u and v are even integers, do $u = \frac{u}{2}, v = \frac{v}{2}, g = 2g$; ("while" means both "if" and "iteration until the condition is no longer satisfied")
3. While $u > 0$, do:
   1. While u is even, do: $u =\frac{u}{2}$;
   2. While v is even, do: $v = \frac{v}{2}$;
   3. $t = \frac{\left\vert u - v \right\vert }{2}$;
   4. If $u \geqslant v$, $u = t$; else, $v = t$;
4. Return $g \cdot v$.

Example:

Stein's algorithm is designed for large numbers, but we only provide an example with small numbers for convenience.

$u=168, v=64$

1. $g = 1$;
2. Both u and v are even integers.
   1. $u = \frac{168}{2} = 84, v = \frac{64}{2} = 32, g = 2$;
   2. $u = \frac{84}{2} = 42, v = \frac{32}{2} = 16, g = 4$;
   3. $u = \frac{42}{2} = 21, v = \frac{16}{2} = 8, g = 8$; (u and v are not both even now. Move on to the next step.)
3. $u = 21 > 0;$
   1. v is even.
      1. $v = \frac{8}{2} = 4;$
      2. $v = \frac{4}{2} = 2;$
      3. $v = \frac{2}{2} = 1;$
   2. $t =\frac{\left\vert 21- 1 \right\vert}{2} = \frac{20}{2} = 10; t = 10$,
   3. $u = 21 \geqslant 1 = v, \therefore u = t = 10;$
4. $u = 10 > 0, v = 1;$
   1. u is even.
      1. $u = \frac{10}{2} = 5;$
   2. $t = \frac{\left\vert 5- 1 \right\vert}{2} = \frac{4}{2} = 2; t = 2,$
   3. $u = 5 \geqslant 1 = v, \therefore u = t = 2;$
5. $u = 2 > 0, v = 1;$
   1. u is even.
      1. $u = \frac{2}{2} = 1;$
   2. $t = \frac{\left\vert 1 - 1 \right\vert}{2} = \frac{0}{2} = 0; t = 0,$
   3. $u = 1 \geqslant 1 = v, \therefore u = t = 0;$ (Because u = 0, the condition u > 0 is no longer satisfied. Move on to the next step.)
6. Return $g = g \cdot v = 8 \times 1 = 8.$

Now you may have a better understanding of the efficiency of Stein's algorithm, which substitutes divisions with faster operations by exploiting the binary representation that real computers use nowadays.

# Why It's Interesting

The Euclidean algorithm is a fundamental algorithm for other mathematical theories and various subjects in different areas. Please see Application of the Euclidean Algorithm to learn more about the Euclidean algorithm.

# Teaching Materials

There are currently no teaching materials for this page.

# References

[3] Artmann, Benno. (1999) Euclid: The Creation of Mathematics. New York: Springer-Verlag.
[4] Weisstein, Eric W. "Euclidean Algorithm." From MathWorld - A Wolfram Web Resource. Retrieved from http://mathworld.wolfram.com/EuclideanAlgorithm.html.
[6] Heath, T.L. (1926) Euclid: The Thirteen Books of the Elements. Volume 2, Second Edition. London: Cambridge University Press.
[8] Ranjan, Desh. Euclid's Algorithm for the Greatest Common Divisor. Retrieved from http://www.cs.nmsu.edu/historical-projects/Projects/EuclidGCD.pdf.
[11] Gallian, Joseph A. (2010) Contemporary Abstract Algebra, Seventh Edition. Belmont: Brooks/Cole, Cengage Learning.
[14] Wikipedia (Binary GCD Algorithm). (n.d.). Binary GCD Algorithm. Retrieved from http://en.wikipedia.org/wiki/Binary_GCD_algorithm.

# Future Directions for this Page

1. More applets or animations of the Euclidean algorithm.
2. More pictures if possible.
3. Worst case of the Euclidean algorithm.
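Since this page walks through the classical Euclidean algorithm, the extended version, and Stein's binary GCD, here is a small self-contained code sketch (not part of the original page) that mirrors those three procedures in Python and checks them on the 168/64 example used above. The function names `gcd_steps`, `extended_gcd` and `binary_gcd` are our own illustrative choices, not standard library functions.

```python
# Illustrative sketch of the three procedures described on this page.

def gcd_steps(a, b):
    """Classical Euclidean algorithm; also counts the number of division steps
    (for comparison with Lame's bound of 5 * number of decimal digits of b)."""
    steps = 0
    while b != 0:
        a, b = b, a % b
        steps += 1
    return a, steps

def extended_gcd(a, b):
    """Extended Euclidean algorithm: returns (d, x, y) with a*x + b*y = d = gcd(a, b)."""
    x2, x1, y2, y1 = 1, 0, 0, 1
    while b > 0:
        q = a // b
        a, b = b, a - q * b
        x2, x1 = x1, x2 - q * x1
        y2, y1 = y1, y2 - q * y1
    return a, x2, y2

def binary_gcd(u, v):
    """Stein's algorithm: only halving, subtraction and comparison."""
    if u == 0:
        return v
    if v == 0:
        return u
    g = 1
    while u % 2 == 0 and v % 2 == 0:   # pull out the common factors of 2
        u, v, g = u // 2, v // 2, 2 * g
    while u > 0:
        while u % 2 == 0:
            u //= 2
        while v % 2 == 0:
            v //= 2
        t = abs(u - v) // 2
        if u >= v:
            u = t
        else:
            v = t
    return g * v

if __name__ == "__main__":
    d, steps = gcd_steps(168, 64)
    print(d, steps)                # 8 in 5 steps, well below 5 * 2 digits = 10
    print(extended_gcd(168, 64))   # (8, -3, 8): 168*(-3) + 64*8 = 8
    print(binary_gcd(168, 64))     # 8
```

Running the script reproduces the results worked out by hand above: gcd 8, Bezout coefficients x = -3 and y = 8, and a step count consistent with Lamé's bound.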
http://physics.stackexchange.com/questions/47344/representations-of-lorentz-group
# Representations of Lorentz Group I'd be grateful if someone could check that my exposition here is correct, and then venture an answer to the question at the end! $SO(3)$ has a fundamental representation (spin-1), and tensor product representations (spin-$n$ for $n\in\mathbb{Z})$. $SO(3)$ has universal covering group $SU(2)$. The fundamental representation of $SU(2)$ and its tensor product representations descend to projective representations of $SO(3)$. We call these representations spin representations of $SO(3)$ (spin-$n/2$ for $n\in \mathbb{Z}$). The complex vector space $\mathbb{C}^2$ has elements called spinors, which transform under a rotation $R$ according to the relevant representative $D(R)$. The natural generalisation of a spinor is called a pseudotensor, and lives in the tensor product space. We can repeat the analysis for the proper orthochronous Lorentz group $L_+^\uparrow$. We find that the universal covering group is $SL(2,\mathbb{C})$ and we get two inequivalent spin-$1/2$ projective representations of $L_+^\uparrow$, namely the fundamental and conjugate representations of $SL(2,\mathbb{C})$. Now when we pass to the full Lorentz group, somehow the projective representations disappear and become genuine representations. Why, morally and mathematically, is this? If it's possible to give an answer without resorting to the Lie algebra, and just working with representations of the group I'd be delighted! Many thanks in advance. - ## 1 Answer Now when we pass to the full Lorentz group, somehow the projective representations disappear and become genuine representations. I don't think this is true. Some but not all of the spinor representations of the proper orthochronous Lorentz group extend to representations of the full Lorentz group; you just add parity reversal and time reversal. But the new representations are still projective. - You are right: the Lorentz group does have projective representations. The full argument is given in Weinberg I ch. 2.7: if a group has no projective reps, then (1) its Lie algebra must have no central charge(s) and (2) the group must be simply connected. (1) is okay for the Lorentz group, but (2) not: the group is isomorphic to $SL(2,\mathbb{C})/\mathbb{Z}_2$, which is not simply connected. However, the 'phase' appearing under group multiplication can only be $\pm 1$ (if the state in question is a boson/fermion), and there can be no mixing between the two. – Vibert Dec 21 '12 at 22:47 @user1504 - Many thanks! So when people talk about half spin representations of the full Lorentz group they really mean projective representations then? Perhaps it's because I come from a mathematical background, but I don't like it when people talk about a "representation" where $D(I)\neq I$! – Edward Hughes Dec 22 '12 at 0:20 1 @EdwardHughes: Yes, that's correct. Projective reps of $G$ are reps of the universal cover of $G$. They aren't actually representations of $G$. But in quantum mechanics, since we don't care about any constants multiplying a state, symmetries can be realized as projective representations. – user1504 Dec 22 '12 at 1:25 @user1504: it's careless to say that we don't care about constants multiplying a state, since normally in QM phases matter. In this case we don't care, because by unitarity the phase cannot depend on the state in question, so you cannot measure it, even in superpositions. – Vibert Dec 22 '12 at 16:09 @Vibert So the Wikipedia article here is wrong, and should say projective representation everywhere it says representation then? 
– Edward Hughes Dec 23 '12 at 0:39
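To make the point about projective representations concrete, here is a standard one-line check (added for illustration, not part of the original exchange): in the spin-1/2 representation of $SU(2)$, a rotation by angle $\theta$ about the $z$-axis is represented by

$$D\big(R_z(\theta)\big) = \exp\!\left(-\tfrac{i\theta}{2}\sigma_z\right) = \begin{pmatrix} e^{-i\theta/2} & 0 \\ 0 & e^{i\theta/2}\end{pmatrix},$$

so the full rotation $\theta = 2\pi$, which is the identity element of $SO(3)$, is sent to $D = -I$ rather than $I$. This is exactly the sense in which the spin-1/2 "representation" of the rotation group is only projective: $D(I) = \pm I$, the sign ambiguity being the $\mathbb{Z}_2$ that distinguishes $SU(2)$ (or $SL(2,\mathbb{C})$) from its quotient.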
http://mathoverflow.net/questions/95490?sort=newest
trying to understand the support of the sheaf of relative differentials

So I'm trying to understand a proof of Belyi's theorem from http://eprints.soton.ac.uk/29785/1/b45h1koe.pdf specifically lemma 3.4. The setup is as follows: Let $X/\mathbb{C}$ be a curve, and let $t : X\rightarrow\mathbb{P}^1_\mathbb{C}$ be a meromorphic function on $X$ thought of as a covering map of degree $n$. Further, let $\text{Crit}(t)$ denote the critical points of the cover $t$ - ie, the points in $\mathbb{P}^1_\mathbb{C}$ which have fewer than $n$ pre-images under $t$. Then, he claims that $\text{Crit}(t) = t(\text{supp}(\Omega^1_{X/\mathbb{P}^1_\mathbb{C}}))$.

Now, it's my understanding that the critical points of $t$ should be the images of the ramification points of $t$ under $t$, so I've been trying to understand why it should be the case that the sheaf of relative differentials of $X/\mathbb{P}^1_\mathbb{C}$ should be nonzero only on the ramification points (or at least only above the critical points). To this end, I'm trying to understand the definition given in Hartshorne (III.8), namely: $\Omega^1_{X/\mathbb{P}^1_\mathbb{C}} = \Delta^*(\mathcal{I}/\mathcal{I}^2)$, where $\Delta : X\rightarrow X\times_{\mathbb{P}^1_\mathbb{C}} X$ is the diagonal map, and $\mathcal{I}$ is the sheaf of ideals of the image $\Delta(X)$ in some open subset $W\subset X\times_{\mathbb{P}^1_\mathbb{C}} X$. I kind of understand sheaves of ideals (they're essentially functions on the ambient space that vanish on the closed subscheme), but I'm still not very comfortable with the notion of $\Delta^*(\mathcal{I}/\mathcal{I}^2)$ (in this case defined to be $\Delta^{-1}\mathcal{I}/\mathcal{I}^2\otimes_{\Delta^{-1}\mathcal{O}_{X\times X}}\mathcal{O}_X$, where the fibred product is taken over $\mathbb{P}^1$). Any comments on how I should think of $\Delta^*(...)$ and why the sheaf of relative differentials only has nonzero stalks at ramification points would be awesome! thanks.

- Try looking at the local picture. Pick local coordinates on $X$ and $\mathbb P^1$, centered on a ramification point on $X$, such that the morphism is given by $z \mapsto z^k$. Write down the short exact sequence of tangent sheaves associated to the morphism, and note that the differential of $f$ is $k z^{k-1} dz$. For a fixed $z \not= 0$ this morphism $T_X \to f^*T_{\mathbb P^1}$ is surjective, because it is injective and both spaces are of dimension 1, so the sheaf of relative differentials is zero. For $z = 0$, the morphism is zero, whence the support of the relative sheaf. – Gunnar Magnusson Apr 29 2012 at 7:58
- In my understanding, $\Delta^*(\mathcal{I}/\mathcal{I}^2)$ is technically convenient because it is obviously well-defined, but not a good way to reason about differentials in practice. First, you need to understand $\Omega_{X/\Bbbk}$, where $X$ is a smooth variety over a field $\Bbbk$. (This is essentially the sheaf of differential forms.) Once you understand this, $\Omega_{X/Y}$ is obtained (for a morphism $X \to Y$) by taking $\Omega_{X/\Bbbk}$ and modding out by pullbacks of differential forms on $Y$. – Charles Staats Apr 29 2012 at 21:48
- Qualification: $\Delta^*(\mathcal{I}/\mathcal{I}^2)$ is often not the best way to reason about differentials in practice, particularly when you are learning. – Charles Staats Apr 29 2012 at 21:50

2 Answers

Ravi Vakil has a good explanation for the definition $\Delta^*(I/I^2)$ in his notes.
See his AG notes here or here (chapter 23). In particular, I guess thinking about this locally makes it a little clearer what's going on, in terms of derivations etc. Also, when $X$ is smooth, it is instructive to see that this really gives the cotangent bundle on $X$.

As for your question about ramification points: Let $f:X\to Y$ be a finite morphism of curves (I will assume that these are smooth in the following). It is useful to have in mind the exact sequence $$0\to f^*\Omega_{Y}\to \Omega_X \to \Omega_{X|Y}\to 0.$$ (This is exact on the left in the smooth case, but not in general.) Note that $\Omega_{X|Y}$ is a torsion sheaf since the two other sheaves are locally free of the same rank (they are line bundles on $X$). At a point $q\in Y$ and $p\in X$ in the preimage of $q$, let $dx$ denote a generator for $\Omega_{Y,q}$ as an $\mathcal{O}_{Y,q}$-module. Now, $(\Omega_{X|Y})_p=0$ if and only if $f^*dx$ is a generator of $\Omega_{X,p}$, which happens if and only if $f$ pulls back a local parameter to a local parameter, that is, $p$ is unramified. Moreover, the length of $\Omega_{X|Y}$ at $p$ measures the ramification there (in the tamely ramified case it equals $e_p - 1$). Finally, note that this sequence gives the Riemann-Hurwitz formula, relating the canonical divisors of $X$ and $Y$ and the ramification divisor of $f$.

-

Let $\pi:X\to Y$ be a finite morphism of curves. Then, for any point $x$ in $X$ lying over $y$ in $Y$, the coefficient $v_x(\pi)$ of $\Omega_{\pi}$ is the valuation of the different of the extension of dvr's $\mathcal{O}_{y}\subset \mathcal{O}_x$. If you are working in characteristic zero, then $$v_x(\pi) = e_x-1,$$ where $e_x$ is the ramification index. So you see that $\Omega_{\pi}$ is supported on the ramification points. Also, you have a short exact sequence (it's on page 2 of Chapter IV.2 in Hartshorne) which relates $\Omega_\pi$ with $\Omega_X$ and $\Omega_Y$. The above actually shows the important Riemann-Hurwitz formula: $$K_X = \pi^{\ast} K_Y + R.$$ Here $R$ is the ramification divisor. This equals $\Omega_{\pi}$ in this case.

-
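To make the local picture from the comments explicit, here is a short computation (added for illustration; the notation is ours): if, in suitable local coordinates at a point $p$, the map is $t: z \mapsto w = z^e$ with $e$ the ramification index, then over $\mathbb{C}$ (or in any tame situation) the pullback of the generator $dw$ is $t^*dw = d(z^e) = e z^{e-1}\,dz$, so

$$\Omega^1_{X/\mathbb{P}^1}\big|_{\text{near } p} \;\cong\; \big(\mathcal{O}\, dz\big)\,/\,\big(\mathcal{O}\cdot e z^{e-1}\, dz\big) \;\cong\; \mathcal{O}/(z^{e-1}),$$

a skyscraper of length $e-1$ supported at $z = 0$. In particular the stalk vanishes exactly at the unramified points, so $\operatorname{supp}(\Omega^1_{X/\mathbb{P}^1})$ is the set of ramification points and its image under $t$ is $\operatorname{Crit}(t)$, as claimed in the question.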
http://en.wikipedia.org/wiki/EPR_paradox
EPR paradox

The EPR paradox is an early and influential critique leveled against quantum mechanics. Albert Einstein and his colleagues Boris Podolsky and Nathan Rosen (known collectively as EPR) designed a thought experiment intended to reveal what they believed to be inadequacies of quantum mechanics. To that end they pointed to a consequence of quantum mechanics that its supporters had not noticed. According to quantum mechanics, under some conditions, a pair of quantum systems may be described by a single wave function, which encodes the probabilities of the outcomes of experiments that may be performed on the two systems, whether jointly or individually. At the time the EPR article was written, it was known from experiments that the outcome of an experiment sometimes cannot be uniquely predicted. An example of such indeterminacy can be seen when a beam of light is incident on a half-silvered mirror. One half of the beam will reflect, the other will pass. But what happens when we keep decreasing the intensity of the beam, so that only one photon is in transit at any time? Whether any one photon will reflect or transmit cannot be predicted quantum mechanically. The routine explanation of this effect was, at that time, provided by Heisenberg's uncertainty principle. Physical quantities come in pairs which are called conjugate quantities. Examples of such conjugate pairs are position and momentum of a particle and components of spin measured around different axes. When one quantity was measured, and became determined, the conjugate quantity became indeterminate. Heisenberg explained this as a disturbance caused by measurement. The EPR paper, written in 1935, was intended to illustrate that this explanation is inadequate. It considered two entangled particles, referred to as A and B, and pointed out that measuring a quantity of particle A will cause the conjugate quantity of particle B to become undetermined, even if there was no contact, no classical disturbance. Heisenberg's principle was an attempt to provide a classical explanation of a quantum effect sometimes called non-locality. According to EPR there were two possible explanations. Either there was some interaction between the particles, even though they were separated, or the information about the outcome of all possible measurements was already present in both particles. The EPR authors preferred the second explanation, according to which that information was encoded in some 'hidden parameters'. The first explanation, that an effect propagated instantly across a distance, is in conflict with the theory of relativity. They then concluded that quantum mechanics was incomplete since, in its formalism, there was no space for such hidden parameters. Bell's theorem is generally understood to have demonstrated that the EPR authors' preferred explanation was not viable.
Most physicists who have examined the matter concur that experiments, such as those of Alain Aspect and his group, have confirmed that physical probabilities, as predicted by quantum theory, do show the phenomena of Bell-inequality violations that are considered to invalidate EPR's preferred "local hidden-variables" type of explanation for the correlations to which EPR first drew attention.[1][2] History of EPR developments The article that first brought forth these matters, "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?" was published in 1935.[3] Einstein struggled to the end of his life for a theory that could better comply with his idea of causality, protesting against the view that there exists no objective physical reality other than that which is revealed through measurement interpreted in terms of quantum mechanical formalism. However, since Einstein's death, experiments analogous to the one described in the EPR paper have been carried out, starting in 1976 by French scientists Lamehi-Rachti and Mittig[4] at the Saclay Nuclear Research Centre. These experiments appear to show that the local realism idea is false.[5] Quantum mechanics and its interpretation Main article: Interpretations of quantum mechanics Since the early twentieth century, quantum theory has proved to be successful in describing accurately the physical reality of the mesoscopic and microscopic world, in multiple reproducible physics experiments. Quantum mechanics was developed with the aim of describing atoms and explaining the observed spectral lines in a measurement apparatus. Although disputed, it has yet to be seriously challenged. Philosophical interpretations of quantum phenomena, however, are another matter: the question of how to interpret the mathematical formulation of quantum mechanics has given rise to a variety of different answers from people of different philosophical persuasions (see Interpretations of quantum mechanics). Quantum theory and quantum mechanics do not provide single measurement outcomes in a deterministic way. According to the understanding of quantum mechanics known as the Copenhagen interpretation, measurement causes an instantaneous collapse of the wave function describing the quantum system into an eigenstate of the observable that was measured. Einstein characterized this imagined collapse in the 1927 Solvay Conference. He presented a thought experiment in which electrons are introduced through a small hole in a sphere whose inner surface serves as a detection screen. The electrons will contact the spherical detection screen in a widely dispersed manner. Those electrons, however, are all individually described by wave fronts that expand in all directions from the point of entry. A wave as it is understood in everyday life would paint a large area of the detection screen, but the electrons would be found to impact the screen at single points and would eventually form a pattern in keeping with the probabilities described by their identical wave functions. Einstein asks what makes each electron's wave front "collapse" at its respective location. Why do the electrons appear as single bright scintillations rather than as dim washes of energy across the surface? Why does any single electron appear at one point rather than some alternative point? 
The behavior of the electrons gives the impression of some signal having been sent to all possible points of contact that would have nullified all but one of them, or, in other words, would have preferentially selected a single point to the exclusion of all others.[6] Einstein's opposition Einstein was the most prominent opponent of the Copenhagen interpretation. In his view, quantum mechanics is incomplete. Commenting on this, other writers (such as John von Neumann[7] and David Bohm[8]) hypothesized that consequently there would have to be 'hidden' variables responsible for random measurement results, something which was not expressly claimed in the original paper. The 1935 EPR paper [1] condensed the philosophical discussion into a physical argument. The authors claim that given a specific experiment, in which the outcome of a measurement is known before the measurement takes place, there must exist something in the real world, an "element of reality", that determines the measurement outcome. They postulate that these elements of reality are local, in the sense that each belongs to a certain point in spacetime. Each element may only be influenced by events which are located in the backward light cone of its point in spacetime (i.e. the past). These claims are founded on assumptions about nature that constitute what is now known as local realism. Though the EPR paper has often been taken as an exact expression of Einstein's views, it was primarily authored by Podolsky, based on discussions at the Institute for Advanced Study with Einstein and Rosen. Einstein later expressed to Erwin Schrödinger that, "it did not come out as well as I had originally wanted; rather, the essential thing was, so to speak, smothered by the formalism."[9] In 1936 Einstein presented an individual account of his local realist ideas.[10] Description of the paradox The original EPR paradox challenges the prediction of quantum mechanics that it is impossible to know both the position and the momentum of a quantum particle. This challenge can be extended to other pairs of physical properties. EPR paper The original paper purports to describe what must happen to "two systems I and II, which we permit to interact ...", and, after some time, "we suppose that there is no longer any interaction between the two parts." In the words of Kumar (2009), the EPR description involves "two particles, A and B, [which] interact briefly and then move off in opposite directions."[11] According to Heisenberg's uncertainty principle, it is impossible to measure both the momentum and the position of particle B exactly. However, according to Kumar, it is possible to measure the exact position of particle A. By calculation, therefore, with the exact position of particle A known, the exact position of particle B can be known. Also, the exact momentum of particle B can be measured, so the exact momentum of particle A can be worked out. Kumar writes: "EPR argued that they had proved that ... [particle] B can have simultaneously exact values of position and momentum. ... Particle B has a position that is real and a momentum that is real." 
EPR appeared to have contrived a means to establish the exact values of either the momentum or the position of B due to measurements made on particle A, without the slightest possibility of particle B being physically disturbed.[11] EPR tried to set up a paradox to question the range of true application of Quantum Mechanics: Quantum theory predicts that both values cannot be known for a particle, and yet the EPR thought experiment purports to show that they must all have determinate values. The EPR paper says: "We are thus forced to conclude that the quantum-mechanical description of physical reality given by wave functions is not complete."[11] The EPR paper ends by saying: While we have thus shown that the wave function does not provide a complete description of the physical reality, we left open the question of whether or not such a description exists. We believe, however, that such a theory is possible. Measurements on an entangled state We have a source that emits electron–positron pairs, with the electron sent to destination A, where there is an observer named Alice, and the positron sent to destination B, where there is an observer named Bob. According to quantum mechanics, we can arrange our source so that each emitted pair occupies a quantum state called a spin singlet. The particles are thus said to be entangled. This can be viewed as a quantum superposition of two states, which we call state I and state II. In state I, the electron has spin pointing upward along the z-axis (+z) and the positron has spin pointing downward along the z-axis (−z). In state II, the electron has spin −z and the positron has spin +z. Therefore, it is impossible (without measuring) to know the definite state of spin of either particle in the spin singlet.[12]:421-422 (Figure: The EPR thought experiment, performed with electron–positron pairs. A source (center) sends particles toward two observers, electrons to Alice (left) and positrons to Bob (right), who can perform spin measurements.) Alice now measures the spin along the z-axis. She can obtain one of two possible outcomes: +z or −z. Suppose she gets +z. According to the Copenhagen interpretation of quantum mechanics, the quantum state of the system collapses into state I. The quantum state determines the probable outcomes of any measurement performed on the system. In this case, if Bob subsequently measures spin along the z-axis, there is 100% probability that he will obtain −z. Similarly, if Alice gets −z, Bob will get +z. There is, of course, nothing special about choosing the z-axis: according to quantum mechanics the spin singlet state may equally well be expressed as a superposition of spin states pointing in the x direction.[13]:318 Suppose that Alice and Bob had decided to measure spin along the x-axis. We'll call these states Ia and IIa. In state Ia, Alice's electron has spin +x and Bob's positron has spin −x. In state IIa, Alice's electron has spin −x and Bob's positron has spin +x. Therefore, if Alice measures +x, the system 'collapses' into state Ia, and Bob will get −x. If Alice measures −x, the system collapses into state IIa, and Bob will get +x. Whatever axis their spins are measured along, they are always found to be opposite. This can only be explained if the particles are linked in some way.
Either they were created with a definite (opposite) spin about every axis—a "hidden variable" argument—or they are linked so that one electron "feels" which axis the other is having its spin measured along, and becomes its opposite about that one axis—an "entanglement" argument. Moreover, if the two particles have their spins measured about different axes, once the electron's spin has been measured about the x-axis (and the positron's spin about the x-axis deduced), the positron's spin about the z-axis will no longer be certain, as if (a) it knows that the measurement has taken place, or (b) it has a definite spin already, about a second axis—a hidden variable. However, it turns out that the predictions of Quantum Mechanics, which have been confirmed by experiment, cannot be explained by any local hidden variable theory. This is demonstrated in Bell's theorem.[14] In quantum mechanics, the x-spin and z-spin are "incompatible observables", meaning the Heisenberg uncertainty principle applies to alternating measurements of them: a quantum state cannot possess a definite value for both of these variables. Suppose Alice measures the z-spin and obtains +z, so that the quantum state collapses into state I. Now, instead of measuring the z-spin as well, Bob measures the x-spin. According to quantum mechanics, when the system is in state I, Bob's x-spin measurement will have a 50% probability of producing +x and a 50% probability of -x. It is impossible to predict which outcome will appear until Bob actually performs the measurement. Here is the crux of the matter. You might imagine that, when Bob measures the x-spin of his positron, he would get an answer with absolute certainty, since prior to this he hasn't disturbed his particle at all. But Bob's positron has a 50% probability of producing +x and a 50% probability of −x—so the outcome is not certain. Bob's positron "knows" that Alice's electron has been measured, and its z-spin detected, and hence B's z-spin calculated, so its x-spin is uncertain. Put another way, how does Bob's positron know which way to point if Alice decides (based on information unavailable to Bob) to measure x (i.e. to be the opposite of Alice's electron's spin about the x-axis) and also how to point if Alice measures z, since it is only supposed to know one thing at a time? The Copenhagen interpretation says that the wave function "collapses" at the time of measurement, so there must be action at a distance (entanglement) or the positron must know more than it's supposed to (hidden variables). Here is the paradox summed up: It is one thing to say that physical measurement of the first particle's momentum affects uncertainty in its own position, but to say that measuring the first particle's momentum affects the uncertainty in the position of the other is another thing altogether. Einstein, Podolsky and Rosen asked how the second particle can "know" to have precisely defined momentum but uncertain position. Since this implies that one particle is communicating with the other instantaneously across space, i.e. faster than light, this is the "paradox". Incidentally, Bell used spin as his example, but many types of physical quantities—referred to as "observables" in quantum mechanics—can be used. The EPR paper used momentum for the observable. Experimental realisations of the EPR scenario often use photon polarization, because polarized photons are easy to prepare and measure.
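To make the quantitative content of these statements concrete, here is a small numerical check (not part of the original article) of the spin-singlet predictions described above: if Alice measures spin along the unit vector a and Bob along b, quantum mechanics predicts that the product of their ±1 outcomes has expectation value −a·b, so the results are perfectly anti-correlated when the axes agree and uncorrelated when they are perpendicular. The short script below builds the singlet state with numpy and verifies this.

```python
import numpy as np

# Pauli matrices and the two-qubit spin-singlet state (|01> - |10>)/sqrt(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
singlet = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def spin_along(n):
    """Spin observable (in units of hbar/2) along the unit vector n."""
    return n[0] * sx + n[1] * sy + n[2] * sz

def correlation(a, b):
    """Expectation value of the product of Alice's and Bob's +/-1 outcomes."""
    obs = np.kron(spin_along(a), spin_along(b))
    return np.real(singlet.conj() @ obs @ singlet)

z = np.array([0.0, 0.0, 1.0])
x = np.array([1.0, 0.0, 0.0])
tilted = np.array([np.sin(np.pi / 3), 0.0, np.cos(np.pi / 3)])  # 60 degrees from z

print(correlation(z, z))       # -1.0 : same axis, outcomes always opposite
print(correlation(z, x))       # ~0.0 : perpendicular axes, uncorrelated
print(correlation(z, tilted))  # -0.5 = -cos(60 deg), i.e. -a.b in general
```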
Locality in the EPR experiment The principle of locality states that physical processes occurring at one place should have no immediate effect on the elements of reality at another location. At first sight, this appears to be a reasonable assumption to make, as it seems to be a consequence of special relativity, which states that information can never be transmitted faster than the speed of light without violating causality. It is generally believed that any theory which violates causality would also be internally inconsistent, and thus useless.[12]:427-428[15] It turns out that the usual rules for combining quantum mechanical and classical descriptions violate the principle of locality without violating causality.[12]:427-428[15] Causality is preserved because there is no way for Alice to transmit messages (i.e. information) to Bob by manipulating her measurement axis. Whichever axis she uses, she has a 50% probability of obtaining "+" and 50% probability of obtaining "−", completely at random; according to quantum mechanics, it is fundamentally impossible for her to influence what result she gets. Furthermore, Bob is only able to perform his measurement once: there is a fundamental property of quantum mechanics, known as the "no cloning theorem", which makes it impossible for him to make a million copies of the positron he receives, perform a spin measurement on each, and look at the statistical distribution of the results. Therefore, in the one measurement he is allowed to make, there is a 50% probability of getting "+" and 50% of getting "−", regardless of whether or not his axis is aligned with Alice's. However, the principle of locality appeals powerfully to physical intuition, and Einstein, Podolsky and Rosen were unwilling to abandon it. Einstein derided the quantum mechanical predictions as "spooky action at a distance". The conclusion they drew was that quantum mechanics is not a complete theory.[16] In recent years, however, doubt has been cast on EPR's conclusion due to developments in understanding locality and especially quantum decoherence. The word locality has several different meanings in physics. For example, in quantum field theory "locality" means that quantum fields at different points of space do not interact with one another. However, quantum field theories that are "local" in this sense appear to violate the principle of locality as defined by EPR, but they nevertheless do not violate locality in a more general sense. Wavefunction collapse can be viewed as an epiphenomenon of quantum decoherence, which in turn is nothing more than an effect of the underlying local time evolution of the wavefunction of a system and all of its environment. Since the underlying behaviour doesn't violate local causality, it follows that neither does the additional effect of wavefunction collapse, whether real or apparent. Therefore, as outlined in the example above, neither the EPR experiment nor any quantum experiment demonstrates that faster-than-light signaling is possible. Resolving the paradox Hidden variables There are several ways to resolve the EPR paradox. The one suggested by EPR is that quantum mechanics, despite its success in a wide variety of experimental scenarios, is actually an incomplete theory. In other words, there is some yet undiscovered theory of nature to which quantum mechanics acts as a kind of statistical approximation (albeit an exceedingly successful one).
Unlike quantum mechanics, the more complete theory contains variables corresponding to all the "elements of reality". There must be some unknown mechanism acting on these variables to give rise to the observed effects of "non-commuting quantum observables", i.e. the Heisenberg uncertainty principle. Such a theory is called a hidden variable theory. To illustrate this idea, we can formulate a very simple hidden variable theory for the above thought experiment. One supposes that the quantum spin-singlet states emitted by the source are actually approximate descriptions for "true" physical states possessing definite values for the z-spin and x-spin. In these "true" states, the positron going to Bob always has spin values opposite to the electron going to Alice, but the values are otherwise completely random. For example, the first pair emitted by the source might be "(+z, −x) to Alice and (−z, +x) to Bob", the next pair "(−z, −x) to Alice and (+z, +x) to Bob", and so forth. Therefore, if Bob's measurement axis is aligned with Alice's, he will necessarily get the opposite of whatever Alice gets; otherwise, he will get "+" and "−" with equal probability. Assuming we restrict our measurements to the z- and x-axes, such a hidden variable theory is experimentally indistinguishable from quantum mechanics. In reality, there may be an infinite number of axes along which Alice and Bob can perform their measurements, so there would have to be an infinite number of independent hidden variables. However, this is not a serious problem; we have formulated a very simplistic hidden variable theory, and a more sophisticated theory might be able to patch it up. It turns out that there is a much more serious challenge to the idea of hidden variables. Bell's inequality Main article: Bell's theorem In 1964, John Bell showed that the predictions of quantum mechanics in the EPR thought experiment are significantly different from the predictions of a particular class of hidden variable theories (the local hidden variable theories). Roughly speaking, quantum mechanics has a much stronger statistical correlation with measurement results performed on different axes than do these hidden variable theories. These differences, expressed using inequality relations known as "Bell's inequalities", are in principle experimentally detectable. Later work by Eberhard showed that the key properties of local hidden variable theories which lead to Bell's inequalities are locality and counter-factual definiteness. Any theory in which these principles apply produces the inequalities. Arthur Fine subsequently showed that any theory satisfying the inequalities can be modeled by a local hidden variable theory. After the publication of Bell's paper, a variety of experiments were devised to test Bell's inequalities (experiments which generally rely on photon polarization measurement). All the experiments conducted to date have found behavior in line with the predictions of standard quantum mechanics theory. However, Bell's theorem does not apply to all possible philosophically realist theories. It is a common misconception that quantum mechanics is inconsistent with all notions of philosophical realism, but realist interpretations of quantum mechanics are possible, although, as discussed above, such interpretations must reject either locality or counter-factual definiteness. Mainstream physics prefers to keep locality, while striving also to maintain a notion of realism that nevertheless rejects counter-factual definiteness. 
Examples of such mainstream realist interpretations are the consistent histories interpretation and the transactional interpretation. Fine's work showed that, taking locality as a given, there exist scenarios in which two statistical variables are correlated in a manner inconsistent with counter-factual definiteness, and that such scenarios are no more mysterious than any other, despite the inconsistency with counter-factual definiteness seeming 'counter-intuitive'. Violation of locality is difficult to reconcile with special relativity, and is thought to be incompatible with the principle of causality. On the other hand the Bohm interpretation of quantum mechanics keeps counter-factual definiteness while introducing a conjectured non-local mechanism in form of the 'quantum potential', defined as one of the terms of the Schrödinger equation. Some workers in the field have also attempted to formulate hidden variable theories that exploit loopholes in actual experiments, such as the assumptions made in interpreting experimental data, although no theory has been proposed that can reproduce all the results of quantum mechanics. There are also individual EPR-like experiments that have no local hidden variables explanation. Examples have been suggested by David Bohm and by Lucien Hardy. Einstein's hope for a purely algebraic theory The Bohm interpretation of quantum mechanics hypothesizes that the state of the universe evolves smoothly through time with no collapsing of quantum wavefunctions. One problem for the Copenhagen interpretation is to precisely define wavefunction collapse. Einstein maintained that quantum mechanics is physically incomplete and logically unsatisfactory. In "The Meaning of Relativity," Einstein wrote, "One can give good reasons why reality cannot at all be represented by a continuous field. From the quantum phenomena it appears to follow with certainty that a finite system of finite energy can be completely described by a finite set of numbers (quantum numbers). This does not seem to be in accordance with a continuum theory and must lead to an attempt to find a purely algebraic theory for the representation of reality. But nobody knows how to find the basis for such a theory." If time, space, and energy are secondary features derived from a substrate below the Planck scale, then Einstein's hypothetical algebraic system might resolve the EPR paradox (although Bell's theorem would still be valid). Edward Fredkin in the Fredkin Finite Nature Hypothesis has suggested an informational basis for Einstein's hypothetical algebraic system. If physical reality is totally finite, then the Copenhagen interpretation might be an approximation to an information processing system below the Planck scale. "Acceptable theories" and the experiment According to the present view of the situation, quantum mechanics flatly contradicts Einstein's philosophical postulate that any acceptable physical theory must fulfill "local realism". In the EPR paper (1935) the authors realised that quantum mechanics was inconsistent with their assumptions, but Einstein nevertheless thought that quantum mechanics might simply be augmented by hidden variables (i.e. variables which were, at that point, still obscure to him), without any other change, to achieve an acceptable theory. He pursued these ideas for over twenty years until the end of his life, in 1955. 
In contrast, John Bell, in his 1964 paper, showed that quantum mechanics and the class of hidden variable theories Einstein favored[17] would lead to different experimental results: different by a factor of 3⁄2 for certain correlations. So the issue of "acceptability", up to that time mainly concerning theory, finally became experimentally decidable. There are many Bell test experiments, e.g. those of Alain Aspect and others. They support the predictions of quantum mechanics rather than the class of hidden variable theories supported by Einstein.[2] According to Karl Popper these experiments showed that the class of "hidden variables" Einstein believed in is erroneous.[citation needed] Implications for quantum mechanics Most physicists today believe that quantum mechanics is correct, and that the EPR paradox is a "paradox" only because classical intuitions do not correspond to physical reality. How EPR is interpreted regarding locality depends on the interpretation of quantum mechanics one uses. In the Copenhagen interpretation, it is usually understood that instantaneous wave function collapse does occur. However, the view that there is no causal instantaneous effect has also been proposed within the Copenhagen interpretation: in this alternate view, measurement affects our ability to define (and measure) quantities in the physical system, not the system itself. In the many-worlds interpretation locality is strictly preserved, since the effects of operations such as measurement affect only the state of the particle that is measured.[15] However, the results of the measurement are not unique—every possible result is obtained. The EPR paradox has deepened our understanding of quantum mechanics by exposing the fundamentally non-classical characteristics of the measurement process. Prior to the publication of the EPR paper, a measurement was often visualized as a physical disturbance inflicted directly upon the measured system. For instance, when measuring the position of an electron, one imagines shining a light on it, thus disturbing the electron and producing the quantum mechanical uncertainties in its position. Such explanations, which are still encountered in popular expositions of quantum mechanics, are debunked by the EPR paradox, which shows that a "measurement" can be performed on a particle without disturbing it directly, by performing a measurement on a distant entangled particle. In fact, Yakir Aharonov and his collaborators have developed a whole theory of so-called Weak measurement.[citation needed] Technologies relying on quantum entanglement are now being developed. In quantum cryptography, entangled particles are used to transmit signals that cannot be eavesdropped upon without leaving a trace. In quantum computation, entangled quantum states are used to perform computations in parallel, which may allow certain calculations to be performed much more quickly than they ever could be with classical computers. Mathematical formulation The above discussion can be expressed mathematically using the quantum mechanical formulation of spin. The spin degree of freedom for an electron is associated with a two-dimensional complex vector space V, with each quantum state corresponding to a vector in that space. 
The operators corresponding to the spin along the x, y, and z direction, denoted Sx, Sy, and Sz respectively, can be represented using the Pauli matrices:[18]:9 $S_x = \frac{\hbar}{2} \begin{bmatrix} 0&1\\1&0\end{bmatrix}, \quad S_y = \frac{\hbar}{2} \begin{bmatrix} 0&-i\\i&0\end{bmatrix}, \quad S_z = \frac{\hbar}{2} \begin{bmatrix} 1&0\\0&-1\end{bmatrix}$ where $\hbar$ stands for Planck's constant divided by 2π. The eigenstates of Sz are represented as $\left|+z\right\rang \leftrightarrow \begin{bmatrix}1\\0\end{bmatrix}, \quad \left|-z\right\rang \leftrightarrow \begin{bmatrix}0\\1\end{bmatrix}$ and the eigenstates of Sx are represented as $\left|+x\right\rang \leftrightarrow \frac{1}{\sqrt{2}} \begin{bmatrix}1\\1\end{bmatrix}, \quad \left|-x\right\rang \leftrightarrow \frac{1}{\sqrt{2}} \begin{bmatrix}1\\-1\end{bmatrix}$ The vector space of the electron-positron pair is $V \otimes V$, the tensor product of the electron's and positron's vector spaces. The spin singlet state is $\left|\psi\right\rang = \frac{1}{\sqrt{2}} \bigg (\left|+z\right\rang \otimes \left|-z\right\rang - \left|-z\right\rang \otimes \left|+z\right\rang \bigg)$ where the two terms on the right hand side are what we have referred to as state I and state II above. From the above equations, it can be shown that the spin singlet can also be written as $\left|\psi\right\rang = -\frac{1}{\sqrt{2}} \bigg (\left|+x\right\rang \otimes \left|-x\right\rang - \left|-x\right\rang \otimes \left|+x\right\rang \bigg)$ where the terms on the right hand side are what we have referred to as state Ia and state IIa. To illustrate how this leads to the violation of local realism, we need to show that after Alice's measurement of Sz (or Sx), Bob's value of Sz (or Sx) is uniquely determined, and therefore corresponds to an "element of physical reality". This follows from the principles of measurement in quantum mechanics. When Sz is measured, the system state ψ collapses into an eigenvector of Sz. If the measurement result is +z, this means that immediately after measurement the system state undergoes an orthogonal projection of ψ onto the space of states of the form $\left| +z \right\rangle \otimes \left| \phi\right\rangle \quad \phi \in V$ For the spin singlet, the new state is $\left| +z \right\rangle \otimes \left| -z \right\rangle$ Similarly, if Alice's measurement result is −z, the system undergoes an orthogonal projection onto $\left| -z \right\rangle \otimes \left| \phi\right\rangle \quad \phi \in V$ which means that the new state is $\left|-z\right\rangle \otimes \left|+z\right\rangle$ This implies that the measurement for Sz for Bob's positron is now determined. It will be −z in the first case or +z in the second case. It remains only to show that Sx and Sz cannot simultaneously possess definite values in quantum mechanics. One may show in a straightforward manner that no possible vector can be an eigenvector of both matrices. More generally, one may use the fact that the operators do not commute, $\left[S_x, S_z\right] = -i\hbar S_y \ne 0$ along with the Heisenberg uncertainty relation $\left\lang (\Delta S_x) ^2 \right\rang \left\lang (\Delta S_z) ^2 \right\rang \ge \frac{1}{4} \left|\left\lang \left[S_x, S_z\right] \right\rang\right|^2$ References Selected papers • P.H. Eberhard, Bell's theorem without hidden variables. Nuovo Cimento 38B1 75 (1977). • P.H. Eberhard, Bell's theorem and the different concepts of locality. Nuovo Cimento 46B 392 (1978). • A. Einstein, B. Podolsky, and N. 
Rosen, Can quantum-mechanical description of physical reality be considered complete? Phys. Rev. 47, 777 (1935). [2]
• A. Fine, Hidden Variables, Joint Probability, and the Bell Inequalities. Phys. Rev. Lett. 48, 291 (1982). [3]
• A. Fine, Do Correlations need to be explained?, in Philosophical Consequences of Quantum Theory: Reflections on Bell's Theorem, edited by Cushing & McMullin (University of Notre Dame Press, 1986).
• L. Hardy, Nonlocality for two particles without inequalities for almost all entangled states. Phys. Rev. Lett. 71, 1665 (1993). [4]
• M. Mizuki, A classical interpretation of Bell's inequality. Annales de la Fondation Louis de Broglie 26, 683 (2001).
• P. Pluch, Theory for Quantum Probability, PhD thesis, University of Klagenfurt (2006).
• M. A. Rowe, D. Kielpinski, V. Meyer, C. A. Sackett, W. M. Itano, C. Monroe and D. J. Wineland, Experimental violation of a Bell's inequality with efficient detection, Nature 409, 791–794 (15 February 2001). [5]
• M. Smerlak, C. Rovelli, Relational EPR. [6]

Notes

1. Bell, John. On the Einstein–Podolsky–Rosen paradox, Physics 1 (3), 195–200, Nov. 1964.
2. Aspect, A. (1999-03-18). "Bell's inequality test: more ideal than ever". Nature 398 (6724): 189–190. Bibcode:1999Natur.398..189A. doi:10.1038/18296. Retrieved 2010-09-08.
3. Einstein, A.; Podolsky, B.; Rosen, N. (1935-05-15). "Can Quantum-Mechanical Description of Physical Reality be Considered Complete?". Physical Review 47 (10): 777–780. Bibcode:1935PhRv...47..777E. doi:10.1103/PhysRev.47.777.
4. Gribbin, J. (1984). In Search of Schrödinger's Cat. Black Swan. ISBN 0-7045-3071-6.
5. von Neumann, J. (1932/1955). Mathematische Grundlagen der Quantenmechanik, Springer, Berlin; translated into English by Beyer, R. T., Princeton University Press, Princeton. Cited by Baggott, J. (2004), Beyond Measure: Modern Physics, Philosophy, and the Meaning of Quantum Theory, Oxford University Press, Oxford, ISBN 0-19-852927-9, pages 144–145.
6. Quoted in Kaiser, David. "Bringing the human actors back on stage: the personal context of the Einstein–Bohr debate," British Journal for the History of Science 27 (1994): 129–152, on page 147.
7. Einstein, Albert (1936). "Physik und Realität". Journal of the Franklin Institute (Elsevier) 221 (3): 313–347. doi:10.1016/S0016-0032(36)91045-1. Retrieved 9 December 2012. English translation by Jean Piccard, pp. 349–382 in the same issue, doi:10.1016/S0016-0032(36)91047-5.
8. Kumar, Manjit (2011). Quantum: Einstein, Bohr, and the Great Debate about the Nature of Reality (Reprint ed.). W. W. Norton & Company. pp. 305–306. ISBN 978-0393339888.
9. Griffiths, David J. (2004), Introduction to Quantum Mechanics (2nd ed.), Prentice Hall, ISBN 0-13-111892-7.
10. Laloe, Franck (2012), Do We Really Understand Quantum Mechanics, Cambridge University Press, ISBN 978-1-107-02501-1.
11. George Greenstein and Arthur G. Zajonc, The Quantum Challenge: "[Experiments in the early 1980s] have conclusively shown that quantum mechanics is indeed correct, and that the EPR argument had relied upon incorrect assumptions."
12. Blaylock, Guy (January 2010). "The EPR paradox, Bell's inequality, and the question of locality". American Journal of Physics 78 (1): 111–120.
13. Bell, John (1981). "Bertlmann's socks and the nature of reality". J. Physique colloques C22: pp. 41–62.
14.
15. Sakurai, J. J.; Napolitano, Jim (2010), Modern Quantum Mechanics (2nd ed.), Addison-Wesley, ISBN 978-0805382914.

Books

• John S. Bell (1987) Speakable and Unspeakable in Quantum Mechanics. Cambridge University Press. ISBN 0-521-36869-3.
• Arthur Fine (1996) The Shaky Game: Einstein, Realism and the Quantum Theory, 2nd ed. Univ. of Chicago Press.
• Sakurai, J. J. (1994) Modern Quantum Mechanics. Addison-Wesley: 174–187, 223–232. ISBN 0-201-53929-2.
• Selleri, F. (1988) Quantum Mechanics Versus Local Realism: The Einstein–Podolsky–Rosen Paradox. New York: Plenum Press. ISBN 0-306-42739-7.
• Lederman, L., Teresi, D. (1993). The God Particle: If the Universe is the Answer, What is the Question? Houghton Mifflin Company, pages 21, 187 to 189.
• John Gribbin (1984) In Search of Schrödinger's Cat. Black Swan. ISBN 978-0-552-12555-0.
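As a quick numerical sanity check of the spin algebra in the Mathematical formulation section above, here is a short numpy sketch (an illustration only; $\hbar$ is set to $1$). It verifies that the singlet state written in the $z$-basis coincides with the $x$-basis expression, and that $[S_x,S_z]=-i\hbar S_y$.

```python
import numpy as np

hbar = 1.0  # work in units with hbar = 1 for this check
Sx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = hbar / 2 * np.array([[0, -1j], [1j, 0]])
Sz = hbar / 2 * np.array([[1, 0], [0, -1]], dtype=complex)

plus_z = np.array([1, 0], dtype=complex)
minus_z = np.array([0, 1], dtype=complex)
plus_x = (plus_z + minus_z) / np.sqrt(2)
minus_x = (plus_z - minus_z) / np.sqrt(2)

# Spin singlet in the z-basis and in the x-basis (note the overall minus sign)
psi_z = (np.kron(plus_z, minus_z) - np.kron(minus_z, plus_z)) / np.sqrt(2)
psi_x = -(np.kron(plus_x, minus_x) - np.kron(minus_x, plus_x)) / np.sqrt(2)

print(np.allclose(psi_z, psi_x))                         # True: same state vector
print(np.allclose(Sx @ Sz - Sz @ Sx, -1j * hbar * Sy))   # True: [Sx, Sz] = -i*hbar*Sy
```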
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 13, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9227375984191895, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/106321/mochizukis-proof-and-siegel-zeros/106399
## Mochizuki's proof and Siegel zeros

Granville and Stark (Invent. Math. 139 (2000), 509-523) proved that a uniform version of the abc conjecture for number fields eliminates Siegel zeros for $L$-functions associated with quadratic characters of negative discriminant. Does the recently announced proof by Mochizuki of the abc conjecture cover this statement?

1 Thanks for the link! – Thomas Riepe Sep 4 at 9:47

## 1 Answer

I don't think so. Mochizuki claims to have proved a diophantine result for points of bounded degree, while you need a uniform form of the ABC conjecture for the application that you mention.

EDIT: About your question below, on the version of the ABC conjecture claimed in Mochizuki's work, it is clearly stated in Theorem A of the 4th paper. Anyways, for the benefit of the people that might read this question, I will state in very elementary terms a corollary of Theorem A in the following context: X is the projective line with the usual projective coordinates [x:y], and D is the divisor $[0:1] + [1:0] + [1:1]$ which makes the curve U=X\D hyperbolic (the degree of the canonical divisor $\omega$ of X in this case is -2 and the degree of D is 3, hence the degree of $\omega(D)$ is 1>0). Ok, here is the corollary (the notation is explained below):

Statement: Let $d$ be a positive integer and let $\epsilon>0$. There is a constant $C>0$ depending only on $d$ and $\epsilon$ such that the following is true: If $A,B$ are non-zero algebraic numbers with $A+B=1$, and if the degree over Q of the number field $K=Q(A)$ is at most d, then we have $H(A,B,1) < C(\Delta_K N_K(A,B,1))^{1+\epsilon}.$

Notation: Here I am using the same definition of $\Delta_K$, H(a,b,c) and $N_K(a,b,c)$ as in the paper on Siegel zeros of the question (this notation is explained in the first page of the paper). Well, if you check the reference you'll see that actually there is one difference: the paper uses N(a,b,c), not $N_K(a,b,c)$. However, in the above statement it is crucial that we must compute N(A,B,1) using the number field K=Q(A); that's why I added this subscript.

I hope that the readers can see the difference between this version and the uniform ABC conjecture of the paper on Siegel zeros: the fact that here the constant C also depends on d, not only $\epsilon$.

A last trivial remark. To get the classical ABC conjecture with coprime integers a+b=c you take A=a/c, B=b/c and hence K=Q, which makes $\Delta_K=1$ and N(A,B,1)=rad(abc).

2 Can you give more detail, please? For example, which version of the abc conjecture follows from Mochizuki's work? Also, is the constant depending on $X$ and $d$ in his Theorem A effective? – GH Sep 5 at 16:53

3 I don't know if the claimed result is effective or not, I started to read the papers just about a week ago and they are certainly hard. However, the first main Diophantine consequence claimed in the 4th paper is for a somewhat restricted class of curves which is nonetheless "sufficiently general". Then the author reduces the general case to this sufficiently general case (sec. 2), and it is remarkable that this reduction step is performed keeping track of explicit constants. Does this indicate that the ultimate goal is an effective result? no idea, I guess that we have to read, not speculate. – Pasten Sep 6 at 1:47

@Pasten: Thank you. I will leave this question open to collect more information. – GH Sep 6 at 20:59

1 @Pasten: Thanks for the update.
Let me argue with your statement "it is clearly stated in Theorem A of the 4th paper". Theorem A is not a statement about the equation $a+b=c$, but about the canonical height, the logarithmic different etc. You can perhaps make the connection immediately, but someone who is unfamiliar with these quantities must spend a fair amount of time (e.g. digesting Vojta's paper) to make this connection. I call this bad writing, especially if in contrast the paper painfully states and justifies trivialities like Proposition 1.6 or Proposition 2.1. – GH Sep 10 at 1:43

5 @GH: Perhaps the word "clearly" should be omitted in my post. Once I heard that the word "clearly" should be omitted in all the mathematical literature: if something is "so clear" then it is pointless to say that it is clear, while on the other hand if we use the word "clearly" to hide an argument that we don't want to write then we should perhaps be honest and at least give some hint. In this case, what I meant is that Theorem A alone (modulo notation) gives the main result without having to prove further propositions before using it. In any case, sorry about using the word "clearly". – Pasten Sep 10 at 2:58
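To make the answer's last remark concrete, here is a tiny Python illustration of the classical statement it reduces to. This only illustrates the radical and the abc "quality" for one well-known triple; it has nothing to do with the contents of Mochizuki's papers.

```python
from math import gcd, log

def radical(n):
    """Product of the distinct prime factors of n (simple trial-division sketch)."""
    rad, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            rad *= p
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        rad *= n
    return rad

# A classical abc triple: coprime a + b = c with rad(abc) < c.
a, b = 1, 8
c = a + b
assert gcd(a, b) == 1
r = radical(a * b * c)          # rad(1*8*9) = 2*3 = 6
quality = log(c) / log(r)       # > 1 signals an "interesting" triple
print(c, r, round(quality, 3))  # 9 6 1.226
```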
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9395579695701599, "perplexity_flag": "head"}
http://www.nag.com/numeric/CL/nagdoc_cl23/html/G02/g02brc.html
# NAG Library Function Document: nag_ken_spe_corr_coeff (g02brc)

## 1 Purpose

nag_ken_spe_corr_coeff (g02brc) calculates Kendall and Spearman rank correlation coefficients.

## 2 Specification

#include <nag.h>
#include <nagg02.h>

void nag_ken_spe_corr_coeff (Integer n, Integer m, const double x[], Integer tdx, const Integer svar[], const Integer sobs[], double corr[], Integer tdc, NagError *fail)

## 3 Description

nag_ken_spe_corr_coeff (g02brc) calculates both the Kendall rank correlation coefficients and the Spearman rank correlation coefficients. The data consist of $n$ observations for each of $m$ variables, where $x_{ij}$ is the $i$th observation on the $j$th variable. The function eliminates any variable $j$ for which the argument ${\mathbf{svar}}[j-1]=0$ and any observation $i$ for which the argument ${\mathbf{sobs}}[i-1]=0$.

The observations are first ranked as follows. For a given variable, $j$ say, each of the observations $x_{ij}$ for which ${\mathbf{sobs}}[i-1]>0$, for $i=1,2,\dots,n$, has associated with it an additional number, the rank of the observation, which indicates the magnitude of that observation relative to the magnitudes of the other observations on that same variable for which ${\mathbf{sobs}}[i-1]>0$. The smallest of these valid observations for variable $j$ is assigned the rank 1, the second smallest the rank 2, and so on, until the largest such observation is given the rank $n_s$, where $n_s$ is the number of observations for which ${\mathbf{sobs}}[i-1]>0$. If a number of cases all have the same value for a given variable $j$, then they are each given an 'average' rank: if, in attempting to assign the rank $h+1$, $k$ valid observations are found to have the same value, then instead of receiving the ranks $h+1,h+2,\dots,h+k$, all $k$ observations are assigned the rank $\frac{2h+k+1}{2}$, and the next value in ascending order is assigned the rank $h+k+1$. The process is repeated for each of the $m$ variables for which ${\mathbf{svar}}[j-1]>0$.

Let $y_{ij}$ be the rank assigned to the observation $x_{ij}$ when the $j$th variable is being ranked. For those observations $i$ for which ${\mathbf{sobs}}[i-1]=0$, $y_{ij}=0$, for $j=1,2,\dots,m$. For variables $j,k$ the following are computed:

(a) Kendall's tau correlation coefficients:
$$R_{jk} = \frac{\displaystyle\sum_{h=1}^{n}\sum_{i=1}^{n}\operatorname{sign}\left(y_{hj}-y_{ij}\right)\operatorname{sign}\left(y_{hk}-y_{ik}\right)}{\sqrt{\left[n_s(n_s-1)-T_j\right]\left[n_s(n_s-1)-T_k\right]}},\qquad j,k=1,2,\dots,m;$$
where $n_s$ is the number of observations for which ${\mathbf{sobs}}[i-1]>0$; $\operatorname{sign}(u)=1$ if $u>0$, $\operatorname{sign}(u)=0$ if $u=0$, $\operatorname{sign}(u)=-1$ if $u<0$; and $T_j=\sum t_j(t_j-1)$, where $t_j$ is the number of ties of a particular value of variable $j$ and the summation is over all tied values of variable $j$.
(b) Spearman's rank correlation coefficients:
$$R_{jk} = \frac{n_s\left(n_s^2-1\right) - 6\displaystyle\sum_{i=1}^{n}\left(y_{ij}-y_{ik}\right)^2 - \tfrac{1}{2}\left(T_j+T_k\right)}{\sqrt{\left[n_s\left(n_s^2-1\right)-T_j\right]\left[n_s\left(n_s^2-1\right)-T_k\right]}},\qquad j,k=1,2,\dots,m;$$
where $n_s$ is the number of observations for which ${\mathbf{sobs}}[i-1]>0$, and $T_j=\sum t_j(t_j^2-1)$, where $t_j$ is the number of ties of a particular value of variable $j$ and the summation is over all tied values of variable $j$.

## 4 References

Siegel S (1956) Non-parametric Statistics for the Behavioral Sciences McGraw–Hill

## 5 Arguments

1: n – Integer (Input)
On entry: the number of observations in the dataset.
Constraint: ${\mathbf{n}}\ge 2$.

2: m – Integer (Input)
On entry: the number of variables.
Constraint: ${\mathbf{m}}\ge 2$.

3: x[${\mathbf{n}}\times{\mathbf{tdx}}$] – const double (Input)
On entry: ${\mathbf{x}}[(i-1)\times{\mathbf{tdx}}+j-1]$ must contain the $i$th observation on the $j$th variable, for $i=1,2,\dots,n$ and $j=1,2,\dots,m$.

4: tdx – Integer (Input)
On entry: the stride separating matrix column elements in the array x.
Constraint: ${\mathbf{tdx}}\ge {\mathbf{m}}$.

5: svar[m] – const Integer (Input)
On entry: ${\mathbf{svar}}[j-1]$ indicates which variables are to be included; for the $j$th variable to be included, ${\mathbf{svar}}[j-1]>0$. If all variables are to be included then a NULL pointer (Integer *)0 may be supplied.
Constraint: ${\mathbf{svar}}[j-1]\ge 0$, with at least one positive element, for $j=1,2,\dots,m$.

6: sobs[n] – const Integer (Input)
On entry: ${\mathbf{sobs}}[i-1]$ indicates which observations are to be included; for the $i$th observation to be included, ${\mathbf{sobs}}[i-1]>0$. If all observations are to be included then a NULL pointer (Integer *)0 may be supplied.
Constraint: ${\mathbf{sobs}}[i-1]\ge 0$, with at least two positive elements, for $i=1,2,\dots,n$.

7: corr[${\mathbf{m}}\times{\mathbf{tdc}}$] – double (Output)
On exit: the upper $n_s$ by $n_s$ part of corr contains the correlation coefficients; the upper triangle contains the Spearman coefficients and the lower triangle the Kendall coefficients. That is, for the $j$th and $k$th variables, where $j$ is less than $k$, ${\mathbf{corr}}[(j-1)\times{\mathbf{tdc}}+k-1]$ contains the Spearman rank correlation coefficient, and ${\mathbf{corr}}[(k-1)\times{\mathbf{tdc}}+j-1]$ contains Kendall's tau, for $j,k=1,2,\dots,n_s$. The diagonal will be set to 1.

8: tdc – Integer (Input)
On entry: the stride separating matrix column elements in the array corr.
Constraint: ${\mathbf{tdc}}\ge {\mathbf{m}}$.

9: fail – NagError * (Input/Output)
The NAG error argument (see Section 3.6 in the Essential Introduction).

## 6 Error Indicators and Warnings

NE_2_INT_ARG_LT
On entry, ${\mathbf{tdc}}=\langle\mathit{value}\rangle$ while ${\mathbf{m}}=\langle\mathit{value}\rangle$. These arguments must satisfy ${\mathbf{tdc}}\ge {\mathbf{m}}$.
On entry, ${\mathbf{tdx}}=\langle\mathit{value}\rangle$ while ${\mathbf{m}}=\langle\mathit{value}\rangle$. These arguments must satisfy ${\mathbf{tdx}}\ge {\mathbf{m}}$.

NE_ALLOC_FAIL
Dynamic memory allocation failed.

NE_INT_ARG_LT
On entry, ${\mathbf{m}}=\langle\mathit{value}\rangle$. Constraint: ${\mathbf{m}}\ge 2$.
On entry, ${\mathbf{n}}=\langle\mathit{value}\rangle$. Constraint: ${\mathbf{n}}\ge 2$.
NE_INT_ARRAY_1
Value $\langle\mathit{value}\rangle$ given to ${\mathbf{sobs}}[\langle\mathit{value}\rangle]$ not valid. Correct range for elements of sobs is ${\mathbf{sobs}}[i]\ge 0$.
Value $\langle\mathit{value}\rangle$ given to ${\mathbf{svar}}[\langle\mathit{value}\rangle]$ not valid. Correct range for elements of svar is ${\mathbf{svar}}[i]\ge 0$.

NE_INTERNAL_ERROR
An internal error has occurred in this function. Check the function call and any array sizes.

NE_SOBS_LOW
On entry, sobs must contain at least 2 positive elements. Too few observations have been selected.

NE_SVAR_LOW
No variables have been selected. On entry, svar must contain at least 1 positive element.

## 7 Accuracy

The computations are believed to be stable.

## 8 Further Comments

None.

## 9 Example

A program to calculate the Kendall and Spearman rank correlation coefficients from a set of data.

### 9.1 Program Text
Program Text (g02brce.c)

### 9.2 Program Data
Program Data (g02brce.d)

### 9.3 Program Results
Program Results (g02brce.r)
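For readers without the NAG library, a rough functional analogue of this routine can be sketched in Python with scipy.stats, whose kendalltau computes the tie-corrected tau-b in the same spirit as the $T_j$ corrections above. The sketch below fills a matrix in the same layout as corr: Spearman coefficients in the upper triangle and Kendall coefficients in the lower triangle. The small data set is invented.

```python
import numpy as np
from scipy.stats import kendalltau, spearmanr

# n observations (rows) of m variables (columns); columns 1 and 3 contain tied values.
x = np.array([[1.7, 1.0, 0.5],
              [2.8, 2.0, 0.5],
              [0.6, 3.0, 1.5],
              [1.7, 4.0, 2.5],
              [3.1, 5.0, 3.5]])
n, m = x.shape

corr = np.eye(m)                  # diagonal set to 1, as in the NAG routine
for j in range(m):
    for k in range(j + 1, m):
        corr[j, k] = spearmanr(x[:, j], x[:, k])[0]    # upper triangle: Spearman
        corr[k, j] = kendalltau(x[:, j], x[:, k])[0]   # lower triangle: Kendall

print(np.round(corr, 3))
```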
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 105, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.707586407661438, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/17356/angle-for-pointing-at-a-certain-point-in-2d-space?answertab=votes
# Angle for pointing at a certain point in 2d space Recently, I have been programming a simple game. Very simple: There is a tank, and the cannon will aim at whatever position the mouse is at. Now lets talk about the cannon graphic. The cannon graphic points to the north, with 0 rotation. Here are the variables I have for my game, and that might be important factors for solving my problem: Tx = The tank's X position in the world. Ty = The tank's Y position in the world. Mx = The mouse's X position in the world. My = The mouse's Y position in the world. Also, in this programming language, the greater the Y coordinate, the lower you are. And the less the Y coordinate is, the higher you are. So, Y = 0 means the top. My problem is, how do calculate the rotation needed for my cannon graphic to "point" to the mouse's position? - Have you checked Wikipedia's Trigonometry page? – Yuval Filmus Jan 13 '11 at 6:25 +1 for giving enough information that you are asking a well-defined question. – Ross Millikan Jan 13 '11 at 13:52 @Ross Millikan: It was well-defined, until I got to the end of my answer and realized Omega doesn't say whether the angle gives clockwise or counterclockwise rotation of the tank's cannon! – hardmath Jan 13 '11 at 14:32 ## 2 Answers Suppose this is the situation: $\displaystyle D_x$ is the difference of the $\displaystyle x$-coordinates and $\displaystyle D_y$ is the difference of the $\displaystyle y$-coordinates. Then angle $\displaystyle w$ is given by $\displaystyle \tan w = \frac{D_y}{D_x}$ and thus $\displaystyle w = \arctan (\frac{D_y}{D_x})$. The angle you will need to rotate would then be anti-clockwise $\displaystyle \frac{\pi}{2} + w$, if the tank is point "up". Note: The above assumes that $\displaystyle w$ is acute. I will leave it to you to try to work out the other cases (for different tank and mouse positions) and come up with a general formula. I would suggest reading up on atan or (to avoid a trap involving division by $0$) atan2. Most likely the Math package of your programming language will have both. Hope that helps. - One thing: The result ranges from around -1.5 to -0.5. I didn't quite get what to do with these, I need degrees to work with my graphics.. – Omega May 6 '11 at 23:07 – Aryabhata May 6 '11 at 23:10 Okay, in your coordinates "north" presumably means up, i.e. the direction of decreasing $y$ coordinates. Let's assume for convenience that all $y$ coordinates are nonnegative, so $y=-1$ is off the display. From a location $(x_0,y_0)$ to target $(x_1,y_1)$ the angle you want has cosine $(y_0 - y_1)/d$ where $d = \sqrt{(x_1 - x_0)^2 + (y_1 - y_0)^2}$ is the distance location to target. Taking the arccosine of this gives the absolute value of the angle you want. For the sign you need to say whether the cannon rotates clockwise or counterclockwise as the angle increases. If increasing the angle (from zero = up/north) rotates clockwise, then choose a positive sign whenever $x_1 > x_0$ and a negative sign when $x_1 < x_0$. If $x_1 = x_0$, then the angle is either zero when $y_1 < y_0$ or $\pi$ radians (aka 180 degrees) if $y_1 > y_0$. -
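Putting the two answers together in code, here is a sketch. The helper name `cannon_angle_degrees` is made up, and whether positive angles rotate your sprite clockwise depends on your graphics library, so flip the sign if needed.

```python
import math

def cannon_angle_degrees(tx, ty, mx, my):
    """Angle to rotate a north-pointing sprite so it aims at the mouse.

    Assumes screen coordinates (y grows downward) and that positive
    angles rotate the sprite clockwise.
    """
    dx = mx - tx
    dy = my - ty
    # atan2 avoids the division-by-zero trap of atan(dy/dx) and picks
    # the correct quadrant automatically.
    return math.degrees(math.atan2(dx, -dy))

print(cannon_angle_degrees(0, 0, 10, 0))    # mouse to the right  -> 90.0
print(cannon_angle_degrees(0, 0, 0, -10))   # mouse straight up   -> 0.0
print(cannon_angle_degrees(0, 0, -10, 0))   # mouse to the left   -> -90.0
```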
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9114902019500732, "perplexity_flag": "middle"}
http://mathhelpforum.com/algebra/16463-minimum-function.html
# Thread: 1. ## The minimum of the function Hello The minimum of the function $f(x) = (log_{2}x)^2 + log_{4}x + 1$ is ? What's the process to solve this problem? Thanks. 2. Originally Posted by Patrick_John Hello The minimum of the function $f(x) = (log_{2}x)^2 + log_{4}x + 1$ is ? What's the process to solve this problem? Thanks. the process is the same as for all problems of this type, find it's derivative and set it equal to zero. if necessary, use the second derivative to verify which of the critical points is a minimum you will need the change of base formula to change all the logs to ln though Change of Base Formula for Logarithms: $\log_a b = \frac { \log_c b}{ \log_c a}$ or maybe noticing that $\log_4 x = \log_2 \sqrt {x}$ will simplify the problem a bit. but i'd go with my first suggestion 3. Originally Posted by Patrick_John Hello The minimum of the function $f(x) = (log_{2}x)^2 + log_{4}x + 1$ is ? What's the process to solve this problem? Thanks. Do what Jhevon did, $\log_2^2 x + \log_2 \sqrt{x} + 1$ Rewrite as, $\log_2^2 x + \frac{1}{2}\log_2 x+1$ Let $y=\log_2^2 x$ to get, $y^2 + \frac{1}{2}y+1$ This is a parabola, you can use the formula to obtain its minimum. 4. Originally Posted by ThePerfectHacker Do what Jhevon did, $\log_2^2 x + \log_2 \sqrt{x} + 1$ Rewrite as, $\log_2^2 x + \frac{1}{2}\log_2 x+1$ Let $y=\log_2^2 x$ to get, $y^2 + \frac{1}{2}y+1$ This is a parabola, you can use the formula to obtain its minimum. Thanks for that TPH, I forgot Patrick isn't in Calculus. I see "minimum" and i switched to calc mode. doing it using calc is fun though, the precalc method was so anticlimactic 5. Thanks for the help. But since I'm not very aware of what is a minimum, I don't know how to get it using the formula, could somebody explain in more detail how to do it? Also, why did you rewrite $\log_2 \sqrt{x}$ as $\frac{1}{2}\log_2 x$ ? 6. Hello, Patrick_John! I will assume that we are not allowed to use Calculus . . . Find the minimum of the function: . $f(x) \:= \:(\log_{2}x)^2 + \log_{4}x + 1$ The logs have two different bases; we'll make them the same. Let $\log_4x = P$ . . Then: . $4^P = x\quad\Rightarrow\quad (2^2)^P = x\quad\Rightarrow\quad 2^{2P} = x$ . . Take logs (base 2): . $\log_2\left(2^{2P}\right) = \log_2x\quad\Rightarrow\quad2P\!\cdot\!\log_22 = \log_2x$ . . Then: . $2P = \log_2x\quad\Rightarrow\quad P = \frac{1}{2}\log_2x$ . . Hence: . $\log_4x = \frac{1}{2}\log_2x$ Substitute into the original equation: . $f(x)\:=\:\left(\log_2x\right)^2 + \frac{1}{2}\log_2x + 1$ Let $z = \log_2x$ Then we have: . $f(z) \:=\:z^2 + \frac{1}{2}z + 1$ This is an up-opening parabola; its minimum is at its vertex. The vertex formula is: . $\frac{\text{-}b}{2a}$ We have: . $a = 1,\:b = \frac{1}{2}$ Hence, the vertex is at: . $z \:=\:\frac{\text{-}\frac{1}{2}}{2(1)} \:=\:-\frac{1}{4}$ Therefore, the minimum is: . $f\left(\text{-}\frac{1}{4}\right)\;=\;\left(\text{-}\frac{1}{4}\right)^2 + \frac{1}{2}\left(\text{-}\frac{1}{4}\right) + 1 \;=\;\boxed{\frac{15}{16}}$
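For anyone who wants a quick numerical confirmation of the boxed value, here is a small Python sketch. The substitution $z=\log_2 x$ predicts the minimum $\tfrac{15}{16}$ at $x=2^{-1/4}$; the brute-force grid bounds are arbitrary.

```python
import math

def f(x):
    return math.log2(x) ** 2 + math.log(x, 4) + 1

x_min = 2 ** (-1 / 4)      # where z = log2(x) equals -1/4
print(f(x_min))            # 0.9375, i.e. 15/16

# crude brute force over x = 2**z for z in [-2, 2]
print(min(f(2 ** (i / 1000 - 2)) for i in range(4001)))   # also ~0.9375
```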
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 28, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9189961552619934, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/73646/love-math-can-you-read-this-formula
# LOVE + MATH = can you read this formula?

I don't remember exactly where, but I found this image on the internet. I tried to replicate the formula with Python:

````
b = 0.9
y = 2*b + sqrt(x*x) + sqrt((x+b)*(3*b-x))
y1 = 2*b + sqrt(x*x) - sqrt((x+b)*(3*b-x))
plot(x, y, x, y1)
````

where sqrt is the square root! But my curve is not very similar to the picture; maybe I'm not able to read it because it is handwritten. Some help?

2 I'm fairly certain that it's not $\sqrt{x^2}$... the root and the exponent are probably different so as to give the desired "cusp". – The Chaz 2.0 Oct 18 '11 at 14:11

2 The first radical appears to not be a square root, but an $n$th root, which is pretty hard to make out. It'd be silly to use $\sqrt{x^2}$, which is $|x|$, since there's probably a nice function for that in Python. Try experimenting with some other roots, maybe $x^{2/3}$. – platinumtucan Oct 18 '11 at 14:12

1 @Jeroen "[your text](the http address)" with the [ ] ( ) included – belisarius Oct 18 '11 at 21:20

## 2 Answers

I did it in Maple... Vary b to change the picture.

ok thanks it is perfect! and can i ask you also some little comment about the kind and analysis of this formula? – nkint Oct 31 '11 at 9:25

$\sqrt{x^2}$ is the same thing as $|x|$, the absolute value of $x$, whose graph has a sharp corner. When I plot exactly the first equation you wrote above, what I get is quite similar to the part of the graph above the two left and right vertical tangents. But it doesn't have a vertical tangent at the cusp in the middle, although it does have a sharp corner there.
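Following the suggestion in the comments to replace $\sqrt{x^2}$ by $|x|^{2/3}$, here is a matplotlib sketch of that guess at the handwritten formula. This is only an experiment: the exponent, and the asymmetric domain $[-b,3b]$ coming from the second radical, may or may not match the picture.

```python
import numpy as np
import matplotlib.pyplot as plt

b = 0.9
x = np.linspace(-b, 3 * b, 400)      # (x + b)(3b - x) >= 0 on this interval
bump = np.abs(x) ** (2.0 / 3.0)      # the comments suggest a cube-root-style cusp term
root = np.sqrt((x + b) * (3 * b - x))

plt.plot(x, 2 * b + bump + root, 'r')   # upper half of the curve
plt.plot(x, 2 * b + bump - root, 'r')   # lower half of the curve
plt.gca().set_aspect('equal')
plt.show()
```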
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9399075508117676, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/20784/connected-components-of-a-fiber-product-of-schemes
# Connected components of a fiber product of schemes The underlying set of the product $X \times Y$ of two schemes is by no means the set-theoretic product of the underlying sets of $X$ and $Y$. Although I am happy with the abstract definition of fiber products of schemes, I'm not confident with some very basic questions one might ask. One I am specifically thinking about has to do with the connected components of a fiber product. Say $X$ and $Y$ are schemes over $S$ and let's suppose that $Y$ is connected and that $X = \coprod_{i \in I} X_i$ is the decomposition of $X$ into connected components. Is the connected component decomposition of $X \times_S Y$ simply $\coprod_{i \in I} X_i \times_S Y$? If so, how can we see this? What about if we replace "connected" with "irreducible". "No" in both cases: let $f\in K[X]$ be a separable irreducible polynomial of degree $d>1$ over some field $K$. Let $L$ be the splitting field of $f$ over $K$. Let $X:=\mathrm{Spec} (K[X]/fK[X])$, $Y:=\mathrm{Spec}(L)$ and $S:=\mathrm{Spec}(K)$. Then $X$ is irreducible and $X\times_K Y=\mathrm{Spec}(L[X]/fL[X])$ consists of $d$ points, which are the irreducible components of $X\times_K Y$. Moreover these points are also the connected components of $X\times_K Y$.
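To make the counterexample completely concrete, here is its smallest instance (with $K=\mathbb{Q}$, $f=X^2+1$ and $L=\mathbb{Q}(i)$); this is only an illustration of the construction in the answer above:
$$\operatorname{Spec}\bigl(\mathbb{Q}[X]/(X^2+1)\bigr)\times_{\operatorname{Spec}\mathbb{Q}}\operatorname{Spec}\mathbb{Q}(i)\;=\;\operatorname{Spec}\bigl(\mathbb{Q}(i)[X]/(X^2+1)\bigr)\;\cong\;\operatorname{Spec}\bigl(\mathbb{Q}(i)\times\mathbb{Q}(i)\bigr),$$
since $X^2+1=(X-i)(X+i)$ splits over $\mathbb{Q}(i)$ and the Chinese Remainder Theorem applies. So the fiber product of two irreducible (indeed one-point) schemes here is a disjoint union of two points.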
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9537182450294495, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/196028/using-ldct-to-show-a-function-is-continuous-and-differentiable
# Using LDCT to show a function is continuous and differentiable

We have the following test prep question, for a measure theory course: $\forall s\geq 0$, define $$F(s)=\int_0^\infty \frac{\sin(x)}{x}e^{-sx}\ dx.$$

a) Show that, for $s>0$, $F$ is differentiable and find explicitly its derivative.

b) Keeping in mind that $$F(s)=\int_0^\pi \frac{\sin(x)}{x}e^{-sx}\ \ dx\ +\int_\pi^\infty \frac{\sin(x)}{x}e^{-sx}\ dx,$$ and conveniently doing integration by parts on the second integral on the right hand side of the previous equation, show that $F(s)$ is continuous at $s=0$. Calculate $F(s)\ (s\geq 0)$.

Since it's a measure theory course, I'm thinking there are methods involving the things you typically learn in these courses, and I think Lebesgue's Dominated Convergence Theorem will play a role, because I was looking at books by Bartle and Apostol, and they both have similar exercises or theorems, and both use LDCT. Also, I suppose these proofs regarding continuity or differentiability could be done with standard calculus stuff (like $\epsilon$'s and $\delta$'s or the actual definition of a derivative), but I want to avoid these methods and focus on what I should be learning from the class.

I think I have part (a), or at least a good idea, based on the Bartle book. If I let $f(x,s)=\frac{\sin(x)}{x}e^{-sx}$, I just need to find an integrable function $g$ such that $\big|\frac{\partial f}{\partial s}\big|\leq g(x)$ (after showing that partial does exist, of course :) ). And then, $$\frac d{ds}F(s)=\int _{\mathbb{R}^+}\frac{\partial f}{\partial s}\ dx.$$ Please correct me if I'm mistaken, or missing something.

Now, for part (b) I'm a little stumped. In the Apostol book, the case $s>0$ is done explicitly, but I read through it and it didn't help me. Looking at the Bartle book, I get the idea of defining $f_n=f(x,s_n)$, where $s_n=\frac1{n+1}$ or some such sequence that goes to zero. Then, somehow, maybe, LDCT kicks in (but I guess I'd have to find a function that would dominate these $f_n$). I also don't really see the point in dividing the integral into the two parts up there, so I must be missing something.

## 2 Answers

For part a), your idea is good: take $g(x):=e^{-x}\chi_{(1,+\infty)}+\chi_{[0,1]}$, which is integrable. For part b), the first term converges as $s\to 0$ to $\int_0^\pi\frac{\sin t}tdt$, by LDCT. For the second one, we write \begin{align} \int_{\pi}^{+\infty}\frac{\sin t}te^{-st}dt&=\int_\pi^{+\infty}\sin t\,\frac{e^{-st}}tdt\\ &=\left[-\cos t\,\frac{e^{-st}}t\right]_{t=\pi}^{t=+\infty}+\int_\pi^{+\infty}\cos t\left(-s\frac{e^{-st}}t-\frac{e^{-st}}{t^2}\right)dt\\ &=-\frac{e^{-s\pi}}\pi-\int_{\pi}^{+\infty}\frac{\cos t}{t^2}e^{-st}dt-s\int_\pi^{+\infty}\cos t\,\frac{e^{-st}}tdt. \end{align} By LDCT, the first two terms converge to $-\frac 1{\pi}-\int_{\pi}^{+\infty}\frac{\cos t}{t^2}dt$, and integrating by parts we notice this is equal to $\int_\pi^{+\infty}\frac{\sin t}tdt$. So we have to show that $\lim_{s\to 0}s\int_\pi^{+\infty}\cos t\,\frac{e^{-st}}tdt=0$. To see that, we integrate by parts.
For part B, you want to use LDCT on both of the integrals on the right hand side to take a limit as s goes to 0, but you run into the problem that LDCT doesn't actually apply to the second integral! The integral is split up so that the product term arising from the integration by parts will cancel out, and you should be able to apply LDCT to the new integral you get. You can then use the continuity of the integrand to finish the problem. -
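If it helps to see where part (b) is heading, the closed form one eventually obtains is $F(s)=\tfrac{\pi}{2}-\arctan(s)$ for $s>0$, with $F(0)=\tfrac{\pi}{2}$ (the Dirichlet integral). Below is a quick numerical check with SciPy; the finite upper limit 200 is an arbitrary truncation, harmless here because the exponential factor makes the tail negligible for the values of $s$ used.

```python
import numpy as np
from scipy.integrate import quad

def F(s):
    # integrand extended continuously by 1 at x = 0
    integrand = lambda x: np.sin(x) / x * np.exp(-s * x) if x > 0 else 1.0
    value, _ = quad(integrand, 0.0, 200.0, limit=400)
    return value

for s in [0.25, 0.5, 1.0, 2.0]:
    print(s, F(s), np.pi / 2 - np.arctan(s))   # the last two columns should agree
```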
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.930756151676178, "perplexity_flag": "head"}
http://mathoverflow.net/questions/54815/are-there-any-notion-of-almost-primes-known-to-have-small-gaps
## Is there any notion of 'almost primes' known to have small gaps?

A notorious question with prime numbers is estimating the gaps between consecutive primes. That is, if $(p_n)_{n \geq 1}$ is the canonical enumeration of the primes, then set $g_n = p_{n+1} - p_n$. It is shown that $g_n > \frac{c \log(n) \log \log(n) \log \log \log \log(n)}{(\log \log \log(n))^2}$ infinitely often, but a precise estimate is not known. My question is: is there a 'natural' superset of the primes that is of interest (say, the set of numbers that are either primes or products of two primes) such that the gap between consecutive members is well known or well estimated?

Considering only odd primes, odd numbers have small gaps, all equal to $2$. – Luis H Gallardo Feb 8 2011 at 21:54

It is not quite clear to me what you are looking for (and even if it were, chances are I could not answer). Still, a small remark in the hope it is relevant: if you restrict the number of prime factors, say by $k$, you will get about $(x/\log x) (\log \log x)^{k-1}$ elements below $x$. So, the gaps on average cannot be too small; roughly, I guess, also some $\log x$ times some quotient of iterated $\log$ factors. On the other hand there will be small gaps too. Thus, the gaps will remain quite non-uniform in size. – quid Feb 9 2011 at 0:35

## 1 Answer

Let $q_n$ denote the $n^{\text{th}}$ number that is a product of exactly two distinct primes. It is known that $$\liminf_{n\to \infty} \ (q_{n+1}-q_n) \le 6.$$ This is a result of Goldston, Graham, Pintz, and Yildirim. http://arxiv.org/abs/math/0609615

Are there any known non-trivial upper bounds? – Stanley Yao Xiao Feb 8 2011 at 22:28

2 There's also Chen's theorem saying infinitely often that $p+2$ is a product of at most two primes, for $p$ prime. – Matt Young Feb 8 2011 at 22:31

1 $\limsup_{n\to \infty} \ (q_{n+1}-q_n) = \infty$, and that remains true even if you look at numbers with no more than 20 distinct factors (pretty much like the fact that there are large prime gaps; just use the Chinese Remainder Theorem). You might hope for results on the average, although it would have to cover long intervals (pretty much like with primes). – Aaron Meyerowitz Feb 9 2011 at 3:15
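To get a feel for how these numbers behave near the beginning, here is a small Python sketch (plain trial division, nothing clever) listing products of exactly two distinct primes below 200 and their gaps; note the run 33, 34, 35, so gaps of size 1 already occur.

```python
def is_e2(n):
    """True if n is a product of exactly two distinct primes."""
    prime_factors, multiplicity, p, m = 0, 0, 2, n
    while p * p <= m:
        if m % p == 0:
            prime_factors += 1
            while m % p == 0:
                m //= p
                multiplicity += 1
        p += 1
    if m > 1:
        prime_factors += 1
        multiplicity += 1
    return prime_factors == 2 and multiplicity == 2

e2 = [n for n in range(2, 200) if is_e2(n)]
gaps = [b - a for a, b in zip(e2, e2[1:])]
print(e2[:12])    # [6, 10, 14, 15, 21, 22, 26, 33, 34, 35, 38, 39]
print(min(gaps))  # 1  (e.g. 33 = 3*11, 34 = 2*17, 35 = 5*7)
```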
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.951188862323761, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-statistics/145708-multiparameter-exponential-family.html
# Thread: 1. ## Multiparameter exponential family Let X be distributed as $N(\mu, \sigma^2)$ with n=2 and $\theta = (\mu, \sigma) \in R \times R^{+}$, where mu and sigma are treated as parameters.. How should I show that this belongs to a two parameter exponential family? 2. Originally Posted by serious331 Let X be distributed as $N(\mu, \sigma^2)$ with n=2 and $\theta = (\mu, \sigma) \in R \times R^{+}$, where mu and sigma are treated as parameters.. How should I show that this belongs to a two parameter exponential family? Hint: $f(x; \theta)$ belongs to a two parameter exponential family if you can express $f(x; \theta) = a(\theta).g(x). \mbox{exp} ( \sum_{i=1}^{2} {b_{i}}(\theta). {R_{i}(x)} )$ Now try to express the pdf of your normal dist. in the above form
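For reference, here is one way to carry out the hint, writing the $N(\mu,\sigma^2)$ density in the stated form (the labels under the braces use the same $a(\theta)$, $g(x)$, $b_i(\theta)$, $R_i(x)$ notation as in the hint):
$$f(x;\mu,\sigma)=\frac{1}{\sigma\sqrt{2\pi}}\exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right) =\underbrace{\frac{e^{-\mu^2/(2\sigma^2)}}{\sigma\sqrt{2\pi}}}_{a(\theta)}\cdot\underbrace{1}_{g(x)}\cdot \exp\!\left(\underbrace{\frac{\mu}{\sigma^2}}_{b_1(\theta)}\,\underbrace{x}_{R_1(x)}+\underbrace{\left(-\frac{1}{2\sigma^2}\right)}_{b_2(\theta)}\,\underbrace{x^2}_{R_2(x)}\right),$$
so the natural statistics are $R_1(x)=x$ and $R_2(x)=x^2$, and the family is a two-parameter exponential family.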
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8945612907409668, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/133440/pdes-with-non-local-terms
# PDEs with non-local terms Not sure if I've used the correct terminology here (`non-local'). I think the lack of knowing the correct terminology is why I haven't been able to find any information about my query thus far. I'm interested in particular systems of semi-linear first-order (functional?) PDEs. One example where I seek solutions defined on $\mathbb{R}_+ \times \mathbb{R}_+$ is: $\frac{\partial F_1(z,t)}{\partial t} + \gamma \frac{\partial F_1(z,t)}{\partial z} = \lambda F_2(z,t) - \beta F_1(z,t) F_1(0,t)$ $\frac{\partial F_2(z,t)}{\partial t} = \beta F_1(z,t) F_1(0,t) - \lambda F_2(z,t)$ when $z > 0$; and: $\frac{\partial F_1(z,t)}{\partial t} = \lambda F_2(z,t) - \beta F_1(z,t) F_1(0,t)$ $\frac{\partial F_2(z,t)}{\partial t} = \beta F_1(z,t) F_1(0,t) - \lambda F_2(z,t)$ when $z = 0$. Boundary conditions are $F_1(z,0) = \sigma(z)$ and $F_2(z,0) = 0$ for all $z$. The thing which appears to make these special is the presence of the `non-local' terms $F_1(0,t)$. I guess I could try and solve the system using finite difference methods, but I was wondering, is there anything better that can be done here, e.g. can the method of characteristics still be used? I'm not particularly familiar with PDEs so any help would be gratefully received --- do systems like this even have a name, are there any references I should see? Thanks! -
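In case a quick numerical experiment helps: below is a minimal explicit sketch in Python (forward Euler in time, first-order upwind in $z$) for this system, with made-up values of $\gamma,\lambda,\beta$, a made-up initial profile $\sigma(z)$, and the half-line truncated at a finite $z=L$. It is only meant to show how the non-local term $F_1(0,t)$ enters each time step; it says nothing about well-posedness or accuracy of the scheme.

```python
import numpy as np

# Illustrative parameter values and initial profile (not from the question)
gamma, lam, beta = 1.0, 0.5, 0.3
sigma = lambda z: np.exp(-z)          # F1(z, 0)

L, nz = 20.0, 400                     # truncate the z half-line at z = L
dz = L / nz
z = np.linspace(0.0, L, nz + 1)
dt = 0.4 * dz / gamma                 # CFL-type restriction for the upwind step
nt = 2000

F1 = sigma(z)
F2 = np.zeros_like(z)

for _ in range(nt):
    coupling = beta * F1 * F1[0]      # the non-local term F1(0, t)
    dF1 = lam * F2 - coupling
    dF2 = coupling - lam * F2
    adv = np.zeros_like(z)
    adv[1:] = gamma * (F1[1:] - F1[:-1]) / dz   # backward (upwind) difference, gamma > 0
    F1 = F1 + dt * (dF1 - adv)        # at z = 0 there is no advection term
    F2 = F2 + dt * dF2

print(F1[0], F2[0])                   # state at the boundary after nt steps
```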
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9729303121566772, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/88522/bound-on-bessel-function-of-the-first-order
Bound on Bessel function of the first order

Let $I_1(z)$ be the Bessel function of order one with purely imaginary argument (i.e. the modified Bessel function of the first kind). Can we explicitly bound $I_1$ on $[0,x]$, where $x>0$ is a real number, in terms of $x$?
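One elementary (and admittedly crude) bound comes from comparing the series for $I_1$ term by term; since every term of the series is nondecreasing in $t \ge 0$, the supremum over $[0,x]$ is attained at $t = x$:

$$I_1(t)=\sum_{k=0}^{\infty}\frac{1}{k!\,(k+1)!}\Big(\frac{t}{2}\Big)^{2k+1}\le \frac{t}{2}\sum_{k=0}^{\infty}\frac{(t^2/4)^k}{k!}=\frac{t}{2}\,e^{t^2/4},\qquad\text{so}\qquad \sup_{0\le t\le x} I_1(t)\le \frac{x}{2}\,e^{x^2/4}.$$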
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7165608406066895, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/43497/understanding-work-and-the-conservation-of-energy?answertab=votes
# Understanding Work and the conservation of energy

We have a car with a mass of $780\ \mathrm{kg}$ which travels at a speed of $50\ \mathrm{km/h}$. The car brakes and after $4.2\ \mathrm{m}$ it stops completely. Warmth is created. Calculate the friction force.

I solved this easily, by simply filling in the data like a headless chicken (the solution I'm showing is from the correction model, I got the same answers but I did it without writing anything down, so):

$E_{\mathrm{total},1} = E_{\mathrm{total},2}$

$0.5mv_1^2 = 0.5mv_2^2 + Q = 0 + F_w \cdot s$

$0.5 \cdot 780 \cdot (50/3.6)^2 = F_w \cdot 4.2$

$F_w = 1.8 \times 10^4\ \mathrm{N}$

I didn't have any trouble with this, I got the same answer, but then I started thinking about it, and I am increasingly finding the solution illogical. The LHS is completely logical to me, but not the right-hand side. Normally, the formula for work is the resultant force times the distance $d$. But after you've travelled the 4.2 meters, your $F_w$ is $0$. So how can you say that $F_w \times d$ equals the LHS, when by the time the total $d$ (4.2 m) is reached, the resistance has already turned into 0? What is the logic behind this? I know this is high school level so it is simplified, but even then, knowing it's simplified a lot, I don't understand the logic. Can someone explain?

## 1 Answer

Centered dots for multiplication in LaTeX are put in with \cdot

The force of kinetic friction does not gradually draw down to zero. It has a specific value for any nonzero velocity, and only at zero velocity does the friction force exhibit a discontinuity as we abruptly enter the regime of static friction. Discontinuities in general are hard to fathom, but this is the standard way of teaching friction. Think of it this way: the force on the car while it is moving is constant, so the work is easy to calculate. Once the car stops moving, the force may be different...but the car no longer moves, and so there is no work to account for.

- But am I right then? Because you reach 4.2 when you've stopped moving, aka the friction is 0, in other words, this is incorrect? – user14445 Nov 5 '12 at 17:29
- 4.2 meters is the distance traveled while under the influence of the constant kinetic friction force. The work that this constant force does is correctly calculated. What happens at 4.2 meters is of no direct concern. You could calculate the work done by the time the car travels 3.5 meters or 2.1 meters, and the math would be valid for those distances traveled. What happens when the car reaches a given distance is not relevant, as long as the force was constant the whole time before it arrived there. – Muphrid Nov 5 '12 at 17:32
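As a quick sanity check of the arithmetic in the correction model (not part of the original thread, just a throwaway computation with the numbers from the problem statement):

```python
m = 780.0            # mass in kg
v = 50.0 / 3.6       # 50 km/h converted to m/s
s = 4.2              # stopping distance in m

F_w = 0.5 * m * v**2 / s   # work-energy theorem: kinetic energy divided by distance
print(F_w)                 # about 1.8e4 N
```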
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9606214761734009, "perplexity_flag": "head"}
http://en.m.wikibooks.org/wiki/Solutions_to_Hartshorne's_Algebraic_Geometry/Separated_and_Proper_Morphisms
# Solutions to Hartshorne's Algebraic Geometry/Separated and Proper Morphisms

The reference for this section is EGA II.5, EGA II.6, EGA II.7. For the discrete valuation ring questions at the end see Zariski and Samuel's Commutative Algebra II.

## Exercise II.4.1

Let $f: X \to Y$ be a finite morphism. Finite implies finite type so we only need to show that $f$ is universally closed and separated.

$f$ is separated. We want to show that $X \to X \times_Y X$ is a closed immersion. To check that a morphism is a closed immersion it is enough to check for each element of an open cover of the target. Let $\{ Spec\ B_i\}$ be an open affine cover of $Y$. The pull-back of $X \to X \times_Y X$ along each $Spec\ B_i \to Y$ is $Spec\ A_i \to Spec\ A_i \otimes_{B_i} A_i$ where $Spec\ A_i = f^{-1}Spec\ B_i$. The ring homomorphism corresponding to each of these morphisms of affine schemes is surjective, and so they are all closed immersions according to Exercise II.2.18(c).

$f$ is universally closed. The proof of Exercise II.3.13(d) goes through to show that finite morphisms are stable under base change (in fact, the proof becomes easier). Secondly, we know that finite morphisms are closed (Exercise II.3.5) and therefore finite morphisms are universally closed.

## Exercise II.4.2

Let $U$ be the dense open subset of $X$ on which $f$ and $g$ agree. Consider the pullback square(s):

```
\xymatrix{ U \ar@{=}[r] \ar[d] & U \ar[d] \\ Z \ar[r]^{\Delta'} \ar[d] & X \ar[d]^{f,g} \\ Y \ar[r]^\Delta & Y \times_S Y }
```

Since $Y$ is separated, the lower horizontal morphism is a closed immersion. Closed immersions are stable under base extension (Exercise II.3.11) and so $Z \to X$ is also a closed immersion. Now since $f$ and $g$ agree on $U$, the image of $U$ in $Y \times_S Y$ is contained in the diagonal and so the pullback is, again, $U$ (at least topologically). But this means that $U \to X$ factors through $Z$, whose image is a closed subset of $X$. Since $U$ is dense, this means that $sp\ Z = sp\ X$. Since $Z \to X$ is a closed immersion, the morphism of sheaves $\mathcal{O}_X \to \mathcal{O}_Z$ is surjective. Consider an open affine $V = Spec\ A$ of $X$. Restricted to $V$, the morphism $Z \cap V \to V$ continues to be a closed immersion and so $Z \cap V$ is an affine scheme, homeomorphic to $V$, determined by an ideal $I \subseteq A$. Since $Spec\ A / I \to Spec\ A$ is a homeomorphism, $I$ is contained in the nilradical. But $A$ is reduced and so $I = 0$. Hence $Z \cap V = V$ and therefore $Z = X$.

1. Consider the case where $X = Y = Spec\ k[x,y] / (x^2, xy)$, the affine line with nilpotents at the origin, and consider the two morphisms $f,g: X \to Y$, one the identity and the other defined by $x \mapsto 0$, i.e. killing the nilpotents at the origin. These agree on the complement of the origin, which is a dense open subset, but the sheaf morphisms disagree at the origin.
2. Consider the affine line with two origins, and let $f$ and $g$ be the two open inclusions of the regular affine line. They agree on the complement of the origin but send the origin to two different places.

## Exercise II.4.3

Consider the pullback square

```
\xymatrix{ U \cap V \ar[r] \ar[d] & U \times_S V \ar[d] \\ X \ar[r]^{\Delta} & X \times_S X }
```

Since $X$ is separated over $S$ the diagonal is a closed immersion. Closed immersions are stable under change of base (Exercise II.3.11(a)) and so $U \cap V \to U \times_S V$ is a closed immersion. But $U \times_S V$ is affine since all of $U, V, S$ are.
So $U \cap V \to U \times_S V$ is a closed immersion into an affine scheme and so $U \cap V$ itself is affine (Exercise II.3.11(b)).

For an example when $X$ is not separated consider the affine plane with two origins $X$ and the two copies $U, V$ of the usual affine plane inside it as open affines. The intersection of $U$ and $V$ is $\mathbb{A}^2 - \{0\}$ which is not affine.

## Exercise II.4.4

Since $Z \to S$ is proper and $Y \to S$ separated it follows from Corollary II.4.8e that $Z \to Y$ is proper. Proper morphisms are closed and so $f(Z)$ is closed.

$f(Z) \to S$ is finite type. This follows from it being a closed subscheme of a scheme $Y$ of finite type over $S$ (Exercise II.3.13(a) and (c)).

$f(Z) \to S$ is separated. This follows from the change of base square and the fact that closed immersions are preserved under change of base.

```
\xymatrix{ f(Z) \ar[d]^\Delta \ar[r] & Y \ar[d]^\Delta \\ f(Z) \times_S f(Z) \ar[r] & Y \times_S Y }
```

$f(Z) \to S$ is universally closed. Let $T \to S$ be some other morphism and consider the following diagram

```
\xymatrix{ T \times_S Z \ar[r] \ar[d]^{f'} & Z \ar[d]^f \\ T \times_S f(Z) \ar[r] \ar[d]^{s'} & f(Z) \ar[d]^s \\ T \ar[r] & S }
```

Our first task will be to show that $T \times_S Z \to T \times_S f(Z)$ is surjective. Suppose $x \in T \times_S f(Z)$ is a point with residue field $k(x)$. Following it horizontally we obtain a point $x' \in f(Z)$ with residue field $k(x') \subset k(x)$ and this lifts to a point $x'' \in Z$ with residue field $k(x'') \supset k(x')$. Let $k$ be a field containing both $k(x)$ and $k(x'')$. The inclusions $k(x''), k(x) \subset k$ give morphisms $Spec\ k \to T \times_S f(Z)$ and $Spec\ k \to Z$ which agree on $f(Z)$ and therefore lift to a morphism $Spec\ k \to T \times_S Z$ giving a point in the preimage of $x$. So $T \times_S Z \to T \times_S f(Z)$ is surjective.

Now suppose that $W \subseteq T \times_S f(Z)$ is a closed subset of $T \times_S f(Z)$. Its vertical preimage $(f')^{-1}W$ is a closed subset of $T \times_S Z$ and since $Z \to S$ is universally closed the image $s' \circ f'((f')^{-1}(W))$ in $T$ is closed. As $f'$ is surjective, $f'((f')^{-1}(W)) = W$ and so $s' \circ f'((f')^{-1}(W)) = s'(W)$. Hence $s'(W)$ is closed in $T$, and so $T \times_S f(Z) \to T$ is closed.

## Exercise II.4.5

1. Let $R$ be the valuation ring of a valuation on $K$. Having center on some point $x \in X$ is equivalent to an inclusion $\mathcal{O}_{x,X} \subseteq R \subseteq K$ (such that $\mathfrak{m}_R \cap \mathcal{O}_{x,X} = \mathfrak{m}_x$) which is equivalent to a diagonal morphism in the diagram

```
\xymatrix{ Spec\ K \ar[r] \ar[d] & X \ar[d] \\ Spec\ R \ar[r] \ar[ur] & Spec\ k }
```

But by the valuative criterion of separatedness this diagonal morphism (if it exists) is unique. Therefore the center, if it exists, is unique.

2. Same argument as the previous part.

3. The argument for the two cases is the same so we will prove: suppose that every valuation ring $R$ of $K$ has a unique center in $X$; then $X$ is proper. This is clearly true for integral $k$-schemes of finite type of dimension zero. Suppose that it is true for integral $k$-schemes of dimension less than $n$ and that $X$ is an integral $k$-scheme of dimension $n$. We will use the valuative criterion. Suppose that we have a diagram

```
\xymatrix{ Spec\ L \ar[r] \ar[d] & X \ar[d] \\ Spec\ S \ar[r] & Spec\ k }
```

with $S$ a valuation ring of function field $L$.
If the image of the unique point of $Spec\ L$ is not the generic point of $X$ then let $Z$ be the closure of its image with the reduced structure. We have a diagram

```
\xymatrix{ Spec\ L \ar[r] \ar[d] & Z \ar[r] & X \ar[d] \\ Spec\ S \ar[r] & Spec\ k \ar@{=}[r] & Spec\ k }
```

The scheme $Z$ is an integral $k$-scheme of dimension less than $n$ and so the square on the left admits a lifting, which gives a lifting for the outside rectangle. Moreover, as closed immersions are proper, any lifting of the outside rectangle factors uniquely through $Z$ by the valuative criterion and so the lifting is unique.

Now suppose that the image of the point of $Spec\ L$ is the generic point of $X$. Then we have a tower of field extensions $L / K / k$ and the valuation on $L$ induces a valuation on $K$. We then have the following diagram.

```
\xymatrix{ Spec\ L \ar[r] \ar[d] & Spec\ K \ar[r] & X \ar[d] \\ Spec\ S \ar[r] & Spec\ R \ar[r] & Spec\ k }
```

By assumption the valuation ring $R$ has a unique center $x$ on $X$ and so there is a unique extension of the diagram above

```
\xymatrix{ Spec\ L \ar[r] \ar[d] & Spec\ K \ar[r] & Spec\ \mathcal{O}_{X,x} \ar[r] & X \ar[d] \\ Spec\ S \ar[r] & Spec\ R \ar[rr] \ar[ur] && Spec\ k }
```

Hence there is a unique lifting of our original square. By the valuative criterion, the scheme $X$ is then proper.

4. Suppose that there is some $a \in \Gamma(X, \mathcal{O}_X)$ such that $a \not\in k$. Consider the image $a \in K$. Since $k$ is algebraically closed, $a$ is transcendental over $k$ and so $k[a^{-1}]$ is a polynomial ring. Consider the localization $k[a^{-1}]_{(a^{-1})}$. This is a local ring contained in $K$ and therefore there is a valuation ring $R \subset K$ that dominates it. Since $\mathfrak{m}_R \cap k[a^{-1}]_{(a^{-1})} = (a^{-1})$ we see that $a^{-1} \in \mathfrak{m}_R$. Now since $X$ is proper, there exists a unique dashed morphism in the diagram on the left.

```
\xymatrix{ Spec\ K \ar[r] \ar[d] & X \ar[d] && K & \Gamma(X, \mathcal{O}_X) \ar[l] \ar@{-->}[dl] \\ Spec\ R \ar[r] \ar@{-->}[ur] & Spec\ k && R \ar[u] & k \ar[l] \ar[u] }
```

Taking global sections gives the diagram on the right, which implies that $a \in R$ and so $v_R(a) \geq 0$. But $a^{-1} \in \mathfrak{m}_R$ and so $v_R(a^{-1}) > 0$. This gives a contradiction since $0 = v_R(1) = v_R(\frac{a}{a}) = v_R(a) + v_R(\frac{1}{a}) > 0$.

## Exercise II.4.6

Since $X$ and $Y$ are affine varieties, by definition they are integral and so $f$ comes from a ring homomorphism $B \to A$ where $A$ and $B$ are integral. Let $K = k(A)$. Then for any valuation ring $R$ of $K$ that contains $\phi(B)$ we have a commutative diagram

```
\xymatrix{ Spec\ K \ar[r] \ar[d] & X \ar[d] \\ Spec\ R \ar[r] \ar@{-->}[ur]^{\exists !} & Y }
```

Since $f$ is proper, the dashed arrow exists (uniquely, but we don't need this). From Theorem II.4.11A the integral closure of $\phi(B)$ in $K$ is the intersection of all valuation rings of $K$ which contain $\phi(B)$. As the dashed morphism exists for any valuation ring of $K$ containing $\phi(B)$, it follows that $A$ is contained in the integral closure of $\phi(B)$ in $K$. Hence every element of $A$ is integral over $B$, and this together with the hypothesis that $f$ is of finite type implies that $f$ is finite.

## Exercise II.4.7

## Exercise II.4.8

• Let $X \stackrel{f}{\to} Y$ and $X' \stackrel{f'}{\to} Y'$ be the morphisms.
The morphism $f \times f'$ is a composition of base changes of $f$ and $f'$ as follows. (To do: should really check that all the claims made about pullbacks in here are true.)

```
\xymatrix@R=6pt{ & X \ar[dd] \\ X \times X' \ar[ur] \ar[dd] \\ & Y \\ Y \times X' \ar[ur] \ar[dd] \ar[dr] \\ & X' \ar[dd] \\ Y \times Y' \ar[dr] \\ & Y' }
```

Therefore $f \times f'$ has property $P$.

• Same argument as above, but we should also note that since $g$ is separated the diagonal morphism $Y \to Y \times_Z Y$ is a closed embedding and therefore satisfies $P$.

```
\xymatrix@R=6pt{ & Y \ar[dd] \\ X \ar[ur] \ar[dd] \\ & Y\times_Z Y \\ X \times_Z Y \ar[ur] \ar[dd] \ar[dr] \\ & X \ar[dd] \\ Y \ar[dr] \\ & Z }
```

• Consider the factorization

```
\xymatrix{ X_{red} \ar@/^/[drr]^{id} \ar@/_/[ddr]_{f_{red}} \ar[dr]^{\Gamma_{f_{red}}} \\ & Y_{red} \times_Y X_{red} \ar[r] \ar[d] & X_{red} \ar[d] \\ & Y_{red} \ar[r] & Y }
```

The morphism $X_{red} \to X \to Y$ is a composition of a closed immersion and a morphism with property $P$ and therefore it has property $P$. Therefore the vertical morphism out of the fibre product is a base change of a morphism with property $P$ and therefore itself has property $P$. To see that $f_{red}$ has property $P$ it therefore remains only to see that the graph $\Gamma_{f_{red}}$ has property $P$, for then $f_{red}$ will be a composition of morphisms with property $P$. To see this, recall that the graph is the following base change

```
\xymatrix{ X_{red} \ar[r] \ar[d]^\Gamma & Y_{red} \ar[d]^\Delta \\ X_{red} \times_Y Y_{red} \ar[r] & Y_{red} \times_Y Y_{red} }
```

But $Y_{red} \times_Y Y_{red} = Y_{red}$ and $\Delta = id_{Y_{red}}$ and so $\Delta$ is a closed immersion. Hence $\Gamma$ is a base change of a morphism with property $P$.

## Exercise II.4.9

Let $X \stackrel{f}{\to} Y \stackrel{g}{\to} Z$ be two projective morphisms. This gives rise to a commutative diagram

```
\xymatrix{ X \ar[r]^{f'} \ar[dr]_f & \mathbb{P}^r \times Y \ar[d] \ar[r]^{id \times g'} & \mathbb{P}^r \times \mathbb{P}^s \times Z \ar[d] \\ & Y \ar[r]^{g'} \ar[dr]_g & \mathbb{P}^s \times Z \ar[d] \\ & & Z }
```

where $f'$ and $g'$ (and therefore $id \times g'$) are closed immersions. Now using the Segre embedding the projection $\mathbb{P}^r \times \mathbb{P}^s \times Z \to Z$ factors as $\mathbb{P}^r \times \mathbb{P}^s \times Z \to \mathbb{P}^{rs + r + s} \times Z \to Z$. So since the Segre embedding is a closed immersion, we are done: we have found a closed immersion $X \to \mathbb{P}^{rs + r + s}_Z$ which factors $g \circ f$.

## Exercise II.4.10

Chow's Lemma is in EGA II.5.6.

## Exercise II.4.11

See Zariski and Samuel's Commutative Algebra II.

Suppose that $L = K(t)$. Then define: $\mathfrak{m}_R = \{ a_0 + a_1 t + \dots + a_n t^n \in \mathcal{O}[t] : a_0 \in \mathfrak{m} \}$. The ring $\mathcal{O}[t]$ is a discrete noetherian local domain with maximal ideal $\mathfrak{m}_R$ and quotient field $L$. By induction then, we can reduce to the case when $L$ is a finite field extension of $K$.

Now consider a set of generators $\{x_1, \dots, x_n\}$ of $\mathfrak{m}$ such that $x_1 \not\in \sqrt{(x_2, \dots, x_n)}$ (does such a set always exist?); if $\mathfrak{m}$ is principal, wait for the next step. We claim that the ideal $(x_1)$ is not the unit ideal in $\mathcal{O}' = \mathcal{O}[\frac{x_2}{x_1}, \dots, \frac{x_n}{x_1}]$.
If it were, then there would be some polynomial $f$ of degree, say, $d$ in the $\frac{x_i}{x_1}$ such that $1 = x_1 f$. Let $f_0$ be the degree 0 part of $f$ and $f_1$ be the higher degree part. Since $x_1 \in \mathfrak{m}$ the element $1 - x_1 f_0$ has an inverse, say $a$. Now with this in mind, our equality $1 = x_1 f_0 + x_1 f_1$ implies that $1 = a x_1 f_1$, which then implies that $x_1^d = a x_1^{d + 1} f_1$. Since $f_1$ is made up of terms of degree higher than zero, the element $a x_1^{d + 1} f_1 \in (x_2, \dots, x_n)$, which implies that $x_1 \in \sqrt{(x_2, \dots, x_n)}$, contradicting our assumption. So $(x_1)$ is not the unit ideal in $\mathcal{O}'$. Now let $\mathfrak{p}$ be a minimal prime ideal of $(x_1)$, and consider the localization $(\mathcal{O}')_\mathfrak{p}$.

## Exercise II.4.12

See Zariski and Samuel's Commutative Algebra II.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 243, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9395745992660522, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/109004/functions-that-are-omegan?answertab=oldest
Functions that are $\omega(n)$

Claim: let $f(n) = \omega(n)$. Then for all constants $c > 0$ there exists a constant $n_0$ such that $f(n + 1) - f(n) > c$ for all $n > n_0$.

The concept of little-omega is that the function must be increasing asymptotically at a rate faster than the bounding function. So if the bounding function is linear, then the difference between the two points of $f(n)$ has to be greater than some constant. I understand conceptually why this is true, but is there a way to prove this mathematically? (Note: this is not homework.)

- What definition of $\omega$ are you using? – Aryabhata Feb 13 '12 at 20:00
- I sort of doubt this is true. For example, take f(x) = 2^2^2^x if x isn't a power of Graham's number, and f(x) = f(x-1) otherwise. – Lopsy Feb 13 '12 at 20:07

1 Answer

As Lopsy says, this isn't true. Consider the function (with natural number domain) such that $f(n) = n^2$ whenever $n$ is not of the form $p+1$ for prime $p$, and $f(p+1) = f(p)$ for every prime $p$. We have that $f(n) \ge (n-1)^2$ and so $f(n) = \omega(n)$. Whatever $n_0$ you pick, there will be a prime $p \gt n_0$ such that $f(p+1) - f(p) = 0 \lt c$.

- Primes are not special. Any infinite set of naturals will do. – Aryabhata Feb 13 '12 at 20:28
- Indeed: $f(2n)=f(2n+1)=n^2$. – Did Feb 13 '12 at 20:33
- @Aryabhata: Can you explain a bit more how we know that this example of $f(n)$ falls within $\omega(n)$? – pepsi Feb 13 '12 at 20:34
- @pepsi: Try proving that $(n-1)^2$ is $\omega(n)$. – Aryabhata Feb 13 '12 at 20:38
- @Aryabhata: Thank you. I'm justifying this conceptually by looking at it as: f(n) still has to grow asymptotically faster than cn, but just not necessarily at every step. – pepsi Feb 13 '12 at 20:57
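For the check that Aryabhata leaves to the asker in the comments, the verification is a single limit computation:

$$\lim_{n\to\infty}\frac{(n-1)^2}{n}=\lim_{n\to\infty}\left(n-2+\frac{1}{n}\right)=\infty,$$

so $(n-1)^2 = \omega(n)$, and hence any $f$ with $f(n)\ge (n-1)^2$ for all sufficiently large $n$ is also $\omega(n)$.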
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9265352487564087, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/37746/math-or-physics-degree?answertab=active
# Math or Physics degree?

I am hoping to become a physicist focusing mainly on the theoretical side in the future. I am trying to decide whether to go for a physics or math undergrad course. Assuming that I am capable of doing either, what are the pros and cons of either route? I know that mathematics is essential to doing physics, and in most math courses there are applied math modules that are very much related to physics. Also, many research physicists have math degrees. But surely there is a reason why people choose the physics course over the math course and vice versa? Thank you.

- Go for a Mathematical Physics/Theoretical Physics degree. Most such courses cover both core maths and physics modules. Note, if the institute you choose offers such a course it is likely to be hard and a lot of work - but great!! :] Good luck... – Killercam Sep 19 '12 at 11:11
- Thank you, @Killercam. My institution (if I may call it that) offers 2 courses: Physics (Experimental and Theoretical with supplementary Math modules) and Mathematics (within which I can opt for applied modules). The thing is, the Math route is mainly concerned with solving differential equations (also some QM, electromagnetism, etc.) and the Physics course does not have as intense a Math regime but offers a wider range of topics. Which do you think is best? – Rinaldo Sep 19 '12 at 11:23
- If you want to go into theoretical physics, maths is all important. For generic mathematical/theoretical physics (in my opinion) mathematics forms the foundations. So I would opt for the maths option. You can read about the experimental stuff and learn the physics you want to from the available options/modules, books and doing some physics modules. However, even doing the maths course, you will eventually be able to tailor what you study - here you can opt for applied rather than pure. I think you should speak to someone face-to-face about this, as it is a big decision. Phone the university... – Killercam Sep 19 '12 at 11:32
- Someone should take time to talk you through your concerns and offer advice. If they don't, I would question whether it is the right institution. Having said mathematics; there are some core aspects of physics that are crucial to becoming a theoretician: Quantum Physics/Theory, Relativity (General and Special - maths courses should provide this), Thermodynamics, Mechanics, etc. Good luck... – Killercam Sep 19 '12 at 11:35
- I will offer the opinion that it is better to start with physics if one wants to study physics, because of course mathematics is absolutely essential, even for experimental physicists, but there is a danger with starting with mathematics and then going on to physics of getting caught in a special groove of physics and never having an overall strong foundation in it. So it depends on your ambition: somebody ambitious to leave his/her mark in physics should really know as much of physics data and open problems as possible, imo. – anna v Sep 19 '12 at 12:05

## 5 Answers

After reading this dialogue I can't help but feel like a burden on society....you're all so frickin smart. Anyway, here are two cents from Napoleon's corporal. There is no wrong answer here. Either course/path will enlighten and strengthen your capability to perform to your greatest potential. However, I would ask you to acknowledge your passion...physics...and race ahead with gusto. Also, this is not an either/or. Do both! However, I would say physics first and then go back and police up any math skills later... Hope this helped????
Unless you have some sort of terminal illness, you have about 60 years to get toward your goal.

- Some time ago I had the same doubt, but I finally chose Physics for the following reasons: Pure math is very abstract; you may go along very complex structures that will bring you nowhere, in the sense of their practical usability. Modern math is very far from its old philosophy of "intuitionism", and mathematicians always try to prove things that are very obvious; this decreases productivity very much and drags you back from understanding the "big picture". That is of course not a wrong thing, but they are really exaggerating in that, in my opinion. Besides other reasons I will not mention, to keep it short. My advice to you: try to read a math book written for mathematicians and another one about the same subject written specially for physicists, and you will understand immediately what fits you, even from the first 50 pages. That is how I understood what I want, and an example of such a subject is "differential geometry".

- Thank you, TMS. This is very true! – Rinaldo Sep 19 '12 at 17:46

I don't know if there is a right answer to math vs. physics at an introductory level. However, what is important is content, concepts and context. The problem I have seen in many math classes is that they frequently have the content you will eventually need to understand, but they are taught generally independent of the physical concepts and context. Since I don't have the benefit of current course descriptions, I would take a day to map the key material being taught in physics courses to that taught in math courses. A simple table would suffice, I think. This may be difficult since it is hard to know a priori what the underlying math in a physics course might be, but some quick searching for online references might help. In any case, one is looking for gaps in the mapping, and then trying to understand why the gaps are there and how long it would take to fill them. One thing that is frequently an issue with physics is that physics courses often do not keep pace with connecting back to what is being taught concurrently in mathematics, and the mathematics will outpace the physics courses in the introduction of new content. This might not be true in all universities, but there is often little coordination that would benefit the student.

- Hal, the removal-from-context point is very true! – Rinaldo Sep 19 '12 at 12:23

It depends a lot on your interests. If you are more drawn towards abstract thinking, it may be better to start [like me] with math and learn physics along the way, while if you are more drawn towards understanding physical phenomena, it may be better to start [like most physicists] with physics and learn math along the way. In the end you'll need both anyway for a thorough understanding. You may also want to choose based on what you'd prefer to end up with in case you'll not have the stamina to complete both studies. In any case, studying one subject properly (according to the syllabus of your university of choice) should not deter you from learning as much as you can about both sides of the coin. People with different educational backgrounds and/or preferences will often develop different preferred approaches, though they learn of course the traditional ones, too. This diversity is an advantage, as different points of view complement each other. If you plan your life actively rather than have it determined by circumstances, what you specialize in will mainly depend on what you want.
Cultivating strong and well-defined interests is a definitive advantage, as it simplifies everything - choices, understanding, motivation, recognizing possibilities and open doors, etc.

- Thank you, Arnold. Very true, I hope to learn as much about both. In your experience, have you noticed different approaches to problems between people who took the different paths? Also, is there a particular area of specialization each path leads to? For example, would physics people be disadvantaged in doing, say, string theory, which may be quite abstract? – Rinaldo Sep 19 '12 at 10:55
- @Rinaldo: I added a response in my answer. – Arnold Neumaier Sep 19 '12 at 11:40
- Thanks again, Arnold. – Rinaldo Sep 19 '12 at 12:21

You can get a physics degree and then do a Master's (sorry, I do not know how it is said in English) in mathematical physics, or perhaps you can switch to mathematics and write a thesis on mathematical physics. For me it is easier to learn math from physics than to become a mathematician and then try to learn physics.

- Thank you, Jose. Would you mind explaining why it could be harder for a mathematician to learn physics than vice versa? – Rinaldo Sep 19 '12 at 10:25
- One reason is physicists don't use terms correctly (e.g., Lie groups and Lie algebras are synonyms for physicists). Mathematicians go crazy over this stuff. Another reason is the lack of rigor, and generality of algebra is used often in physics... – Alex Nelson Sep 19 '12 at 15:42
- Yep, we physicists care only about the result :D (experiment) and sometimes we do not care about rigour, but I do not think that is so bad. Another thing mathematicians do not usually like is the notation for the multiple Fourier integral, $\exp(\mathbf{k}\cdot\mathbf{r})$ where $\mathbf{k}$ and $\mathbf{r}$ are vectors. – Jose Javier Garcia Sep 19 '12 at 16:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9588513970375061, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-algebra/130487-prove-g-has-subgroup-order-p-n.html
# Thread: Prove that G has a subgroup of order p^n

1. ## Prove that G has a subgroup of order p^n

If $G$ is a finite abelian group and $p$ is a prime such that $p^n$ divides $|G|$, then prove that $G$ has a subgroup of order $p^n$.

Note: This proof is very similar to the First Sylow Theorem, but in this one, $G$ is abelian.

Attempt at the proof: If $p^n \mid |G|$, then $|G| = p^nm$, $m \in \mathbb{Z}$, $(p,m) = 1$. Consider a group $H$. Claim: $H \subset G$ is a group such that $|H| = p^n$. I know that I have to use Lagrange's Theorem to show that $[G:H] = m$. I just don't know how to do this. Can anyone help?

2. Originally Posted by crushingyen
If $G$ is a finite abelian group and $p$ is a prime such that $p^n$ divides $|G|$, then prove that $G$ has a subgroup of order $p^n$. Note: This proof is very similar to the First Sylow Theorem, but in this one, $G$ is abelian. Attempt at the proof: If $p^n \mid |G|$, then $|G| = p^nm$, $m \in \mathbb{Z}$, $(p,m) = 1$.

This may well be false: you're only given $p^n \mid |G|$, not that $n$ is the maximal power of $p$ dividing the order of $G$. I'd rather go: let $p^m \mid |G|$ s.t. $p^{m+1} \nmid |G|$; then by the Sylow theorems there exists $H \leq G$ s.t. $|H| = p^m$. Now, it's easy to prove that any $p$-group of order $p^m$ has a normal subgroup of order $p^k$ for any $0 \leq k \leq m-1$ (by induction, say). In the present case normality is for free since $G$ is abelian, and still we're done.

Tonio

3. Aw man, that would've been perfect. I forgot to say that you're not allowed to use the Sylow Theorem to answer this.
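Since the thread ends without a Sylow-free argument, here is a sketch of one standard route, assuming Cauchy's theorem for finite abelian groups (which can be proved without Sylow) is available; whether this matches the toolkit the course intends is of course a guess. Induct on $n$: for $n=0$ take the trivial subgroup. If $n \ge 1$ and $p^n \mid |G|$, Cauchy's theorem gives an element $x \in G$ of order $p$. The quotient $G/\langle x\rangle$ is abelian of order $|G|/p$, which is divisible by $p^{n-1}$, so by induction it contains a subgroup $\bar H$ of order $p^{n-1}$. Its preimage $H = \pi^{-1}(\bar H)$ under the projection $\pi \colon G \to G/\langle x\rangle$ is then a subgroup of $G$ with

$$|H| = |\langle x\rangle|\cdot|\bar H| = p \cdot p^{n-1} = p^n.$$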
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 30, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9721720814704895, "perplexity_flag": "head"}
http://mathoverflow.net/questions/68413?sort=newest
## Nonabelian cohomology via crossed modules

Let $G$, $H$ be topological groups and let $t \colon G \to H$ be a homomorphism, such that $t \colon G \to H$ is a topological crossed module. For a topological space $X$ we can define the nonabelian cohomology set $\check{H}^1(X, G \to H)$. There is a map of crossed modules from $1 \to H$ to $G \to H$ and this induces a map $$\check{H}^1(X, H) \to \check{H}^1(X,G \to H)$$ What are the conditions for this map to be injective? If I express this problem in terms of classifying spaces, then I think I am asking for the fiber of the map $$BH \to B(G \to H)\ .$$ In particular, my vague hope was that if $G$ is contractible, then the above map is injective. Is this true?

- (i) Would perhaps the homotopy fibre of the map be more informative? (ii) There are results in Larry Breen's work on Bitorsors that give some interpretation of what the elements of the non-Abelian cohomology with coefficients in a crossed module 'look like'. That source may give you the answer. (I have a summary of some of it in one of the versions of the Menagerie, but it is better to look at the original.) – Tim Porter Jun 21 2011 at 18:17
- @Tim: I meant the homotopy fiber, sorry. – Ulrich Pennig Jun 21 2011 at 20:06

## 2 Answers

For results on the classifying space of (discrete) crossed modules, or more generally, crossed complexes, and in relation to homotopy classification and fibrations, see R. Brown, Exact sequences of fibrations of crossed complexes, homotopy classification of maps, and nonabelian extensions of groups, J. Homotopy and Related Structures 3 (2008) 331-343. However the topological case has not been worked up in this format, as far as I know.

- The recent paper of Murray, Roberts and Stevenson (arxiv.org/PS_cache/arxiv/pdf/1102/1102.4388v1.pdf) may be relevant to this. In any case it deserves a mention. :-) – Tim Porter Jun 22 2011 at 11:56

The fiber of that map is well known to be $BG$, so what you expect is true.

- Any reference for that other than that it is well known :-). – Ulrich Pennig Jun 21 2011 at 19:07
- The reference depends on the definition of classifying space you have in mind. Probably the paper where you've got your definition from would contain that result. Otherwise just tell me which classifying space you mean and I'll try to find a reference for that model. – Fernando Muro Jun 21 2011 at 23:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.920342206954956, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/53853/combined-am-gm-qm-inequality/53893
# Combined AM GM QM inequality I came across this interesting inequality, and was looking for interesting proofs. $x,y,z \geq 0$ $$2\sqrt{\frac{x^{2}+y^{2}+z^{2}}{3}}+3\sqrt [3]{xyz}\leq 5\left(\frac{x+y+z}{3}\right)$$ Addendum. In general, when is $$a\sqrt{\frac{x^{2}+y^{2}+z^{2}}{3}}+b\sqrt [3]{xyz}\leq (a+b)\left(\frac{x+y+z}{3}\right)$$ true? - 2 I'm not sure what is interesting here, nor how to measure the interest found in a proof. – Asaf Karagila Jul 26 '11 at 15:41 I deleted my answer because it was thoroughly invalid. – anon Jul 26 '11 at 15:59 2 @Asaf, These inequalities are usually abbreviated as $2QM+3GM \leq 5AM$ The interesting bit is that it is true that $GM \leq AM \leq QM$. – picakhu Jul 26 '11 at 16:00 1 @picakhu: Can you change your title both to something that better reflects the question and doesn't use 'interesting?' It seems obvious to me that you find the question interesting, or you wouldn't ask it. – mixedmath♦ Jul 26 '11 at 17:19 2 I see. I guess you are not familiar with mixing variables/EV then. I'll post a comment on this shortly. – Soarer Jul 26 '11 at 17:34 show 15 more comments ## 1 Answer This is not a direct answer to the question, but it's probably too long for a comment, so I'm leaving it as an answer. (In the comments it seems that OP was not familiar with the technique of mixing variables/smoothing, which was used by Honey_S in the link provided to solve the problem; or the (n-1)-EV theorem, so this answer would be a quick exposition of what they are.) Mixing variables/smoothing: In inequalities such as $f(a,b,c) \ge 0$, we seek to prove an inequality of the type $f(a,b,c) \ge f(t,t,c)$. We expect to iterate this inequality so that we can conclude the minimum would be attained when many variables are equal. Example 1: (AM-GM inequality) We want to show that if $a,b,c > 0$, then $a+b+c \ge 3(abc)^{1/3}$. Proof. Consider $f(a,b,c) = a + b + c - 3(abc)^{1/3}$ Now by 2-variable AM-GM, we see that $f(a,b,c) \ge f(\sqrt{ab}, \sqrt{ab}, c)$. You can then imagine that if we keep on doing such smoothing - i.e. next time replace $(\sqrt{ab}, c)$ with 2-tuple of their geometric mean for example, then in infinite time we reach the case where all three variables are equal, and that $f(a,b,c)$ attains its minimum when $a=b=c$, which is 0. In general there are many choices of $t$. If there's an initial condition on $a+b+c$, you may want to change $(a,b)$ to $\left(\frac{a+b}{2}, \frac{a+b}{2} \right)$ or sometimes $(0,a+b)$ if you guess that equality case of the inequality involves 0. If there is an initial condition on $a^2+b^2+c^2$, you may change $(a,b)$ to $\left(\sqrt{\frac{a^2+b^2}{2}}, \sqrt{\frac{a^2+b^2}{2}}\right)$ etc. In the AM-GM example, life is nice because $f(a,b,c) \ge f(t,t,c)$ holds unconditionally. Very often, this is not the case. For example, in Honey_S's solution in the link, after assuming $abc=1$ by homogenity he proved that $f(a,b,c) \ge f(\sqrt{ab},\sqrt{ab},c)$ for $f\left({a,b,c}\right) = 5\left({a+b+c}\right)-2\sqrt{3\left({a^{2}+b^{2}+c^{2}}\right)}$ only when $c = \max (a,b,c)$. However in this case, we are left to show that $f(t,t,c) \ge 0$ under the condition $t^2c = 1$. This is a one-variable inequality easily handled by calculus. If you want to see more examples of smoothing in action, check this thread and the four links in that post, this and this for example. (n-1)-EV theorem: This is a theorem that kills many olympiad inequalities. You can see Vasile Cirtoaje's original article here. 
Basically what it does is that after some tedious calculus checking, many inequalities actually attain its extremum when (n-1) of the n variables involved in the inequality are equal.(And we can use calculus to check the remaining case) Check out theorem 3 and its corollaries in the link. If you want to see its applications, see the "Applications" section of Vasc's paper, and if you want some more, see here or maybe a recent post on this forum. I hope this would be enough for you now :) - 1 Thanks for the two methods. This will be a fun read for me. :) Maybe I can teach some to my friends who are still young enough to tackle olympiads! – picakhu Jul 26 '11 at 19:07
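Not from the thread, but a quick numerical sanity check of the $2\,QM + 3\,GM \le 5\,AM$ inequality, and a way to probe the addendum's question for other weights $(a,b)$, by random sampling (purely exploratory, of course, and not a proof):

```python
import random

def holds(a, b, trials=100000):
    """Check a*QM + b*GM <= (a+b)*AM on random nonnegative triples."""
    for _ in range(trials):
        x, y, z = (random.uniform(0, 10) for _ in range(3))
        qm = ((x * x + y * y + z * z) / 3) ** 0.5
        gm = (x * y * z) ** (1.0 / 3)
        am = (x + y + z) / 3
        if a * qm + b * gm > (a + b) * am + 1e-12:
            return False, (x, y, z)   # report a violating triple
    return True, None

print(holds(2, 3))   # the weighting from the question
print(holds(3, 2))   # try a different weighting from the addendum
```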
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 27, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9632174372673035, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/25641/can-i-get-a-better-bound-on-this-function?answertab=active
# Can I get a better bound on this function?

Question: if $f$ is analytic and $|f(z)| < M$ for $|z| \le R$, find an upper bound for $|f^{(n)}(z)|$ in $|z| \le \rho < R$ (where $f^{(n)}$ means the $n$th derivative of $f$).

So I cited Cauchy's Inequality to find the bound $\frac{n!\,M}{R^n}$, but since we have the additional limit of $\rho$ I think we should be able to find a better bound. The geometric interpretation is that the circle of radius $\rho$ limits the radius of $|z|$. Is there further geometric intuition that I should see to solve this problem?

## 2 Answers

No, there is no better bound. Consider the function $f(z)=M \frac{z^n}{R^n}$ which fulfills the requirement (ok, so that $|f(z)|$ is really smaller than $M$ one should replace $M$ by $M-\epsilon$ and in the end let $\epsilon\to0$). It is easy to check that $$f^{(n)}(z) = \frac{n! M}{R^n},$$ i.e., it exhausts the Cauchy bound. Note however that the $n$-th derivative is independent of $z$. So there is no way you get a better bound by restricting $|z|<\rho$.

This is a question from Lars V. Ahlfors, Complex Analysis, Third Edition, McGraw-Hill International Editions, 1979, at the end of the subsection "Higher Derivatives" of the section "Cauchy's Integral Formula". In this context, the answer should take this into account. My suggestion is to apply the formula $f^{\left(n\right)}\left(z\right)=\frac{n!}{2\pi i}\int_{C}\frac{f\left(\zeta\right)d\zeta}{\left(\zeta-z\right)^{n+1}}$. Thus we have $\begin{split}\left|f^{\left(n\right)}\left(z\right)\right| & =\left|\frac{n!}{2\pi i}\int_{C}\frac{f\left(\zeta\right)d\zeta}{\left(\zeta-z\right)^{n+1}}\right|\\ & =\frac{n!}{2\pi}\left|\int_{C}\frac{f\left(\zeta\right)d\zeta}{\left(\zeta-z\right)^{n+1}}\right|\\ & \leq\frac{n!}{2\pi}\int_{C}\frac{\left|f\left(\zeta\right)\right|\left|d\zeta\right|}{\left|\zeta-z\right|^{n+1}}\\ & \leq\frac{n!}{2\pi}M\int_{C}\frac{\left|d\zeta\right|}{R^{n+1}}\\ & =\frac{n!M}{2\pi R^{n+1}}\int_{C}\left|d\zeta\right|\\ & \leq\frac{n!M}{2\pi R^{n+1}}2\pi R\\ & =\frac{n!M}{R^{n}}.\end{split}$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9358760714530945, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/30402/why-do-we-think-of-light-as-a-wave
# Why do we think of light as a wave? I've read that light travels in a straight line and has a wavelength of 400nm to 700nm. But I don't understand why does it have a wavelength and what creates its wavelength? I agree with the concept of sound which also has wavelength, thus called sound waves which are created by the vibrational movement in air. I'm not aware of calling light a wave. Does the light vibrate too? If so, then how? - How familiar are you with electric charges and electric fields? If you know some about them, I can give an answer that is a bit more in-depth and accurate without having to explain electromagnetism along the way. – Colin Fredericks Jun 20 '12 at 3:39 @ColinFredericks: I know there are two types of electric charges in an atom. Like charges repel and opposite charges attract each other and this force between them is called electrostatic force. I understand Electric field as simply a field around a charge which attracts or repels another charge. – user143241 Jun 20 '12 at 10:02 I would recommend Feynman's 'QED: The Strange Theory of Light and Matter". Also, light doesn't really travel in a straight line, thats an oversimplification that is sometimes useful, but not strictly correct. Feynman talks about this in his book. – DJBunk Jun 20 '12 at 14:49 Light is a wave in electric and magnetic fields. This is probably a duplicate question. The light doesn't vibrate, the electric and magnetic fields vary in different positions in space and time. – Ron Maimon Jun 20 '12 at 19:36 ## 10 Answers Light is a wave - an electromagnetic wave. Radio waves and microwaves are also electromagnetic waves, they just have different wave lengths. Wikipedia has a nice picture showing the electromagnetic spectrum why does it have a wavelength... It has a wavelength because there is physical space between the peaks of the waves - it is a real, physical, wave. Just like water waves and sound waves, you can do "wave things" to light waves, such as send them through diffraction gratings and see the interference. and what creates its wavelength Whatever creates the light gives it energy, and the wavelength is proportional to that amount of energy: $$\lambda = \frac{hc}{E}$$ Does the light vibrate too? If so, then how? Sound waves are energy waves that compress matter - you can't have a sound wave in a vacuum. Light waves are energy waves too, but they don't need matter to go forward. That's why we can see sunlight, but we can't hear the sun. (And that's why in space, no one can hear you scream.) This subject can get really complicated really fast. Because although light is a wave, it is also a particle. A lot of really smart people have been scratching their really smart heads over that, and will be for a long time. - To begin with you just bother about classical limit of physics (forget quantum mechanics). When you solve the Maxwell's equations in presence of no charge, you will end up with an equation which is similar to a wave equation. If you see history of partial differential equations, mathematicians have found a bunch of equations like "Heat Equation", "Wave Equation", "Poisson Equation" etc. And the maxwell's equation in no charge and electric current matches exactly with the wave equation. This led to believe that light is made up of a wave of magnetic field $\vec{B}$ oscillating mutually perpendicular to wave of electric field $\vec{E}$. - You end up with an example of the wave equation. There is no "similar" involved. It is one. 
– dmckee♦ Jul 24 '12 at 23:32

Light is an electromagnetic wave. We talk about the dual nature of light since in some instances it displays properties of a wave and at other times of particles, each in a different state of quanta. When a metal is heated it will emit photons as the electrons shift their energy levels. Since electrons are emitted from a metal at a certain frequency of light, it indicates that light consists of a particle called a photon. Increasing the intensity of light increases the rate of electron emission. This is not a property of a wave. However, when light hits a transparent object at an angle it will be refracted (except at 90 degrees). This is a property of a wave. Therefore light is seen as electric waves and magnetic waves oscillating at right angles to each other. Light is always either a wave or a particle but never both at the same time. Welcome to the dual nature of light.

"Why" is very difficult to answer... The speed of a wave is called celerity because the medium in which the wave takes form does not move, but nature found a way to propagate and dissipate energy without moving the whole of the matter: the wave. For light there is, for example, an electric perturbation (let's say a spark) in space; then Maxwell's equations tell us how the electric perturbation generates a magnetic perturbation, which generates an electric perturbation, and so on. The perturbation is then of electromagnetic nature and the speed of propagation is c. I hope I answered your question; if you need me to go further let me know, because questions beginning with the word why are the most interesting but the most difficult to satisfy.

It's not that light is a wave, it's that it can be modelled as a wave and doing so explains lots of physical phenomena. The question could also be "Why is light made of particles (photons)?" and the answer would be the same: because for explaining some of the physical phenomena the particle nature of light is more helpful.

- -1: What is the difference between something being "modelled" as something and something "being" something? In this case, the model is complete: there is no property left out. There is nothing you can do to light that you can't describe by a quantum field. – Ron Maimon Jun 20 '12 at 19:35

Currently light is thought of both as a wave and as being made up of particles (photons), because as Robert mentioned in his answer, certain phenomena require modelling light as a wave to explain (interference, diffraction etc.), and others require photons (such as the photoelectric effect). Why do we think of light as a wave? Because modelling it as a wave correctly predicts a large range of physical phenomena. What is the 'wave'? Maxwell's equations tell us that the wave is perturbations in the electromagnetic field. It is this field that is 'vibrating', or oscillating.

In classical physics, light (visible and invisible) is mathematically modeled as an electromagnetic wave, i.e., waves in the electric and magnetic fields. Electromagnetic waves are not limited to visible light. For example, radio waves are simply very long wavelength "light". X-rays and gamma rays are very short wavelength "light". Light doesn't vibrate. Rather, the electromagnetic field supports propagating "disturbances". For example, if an electron were to suddenly accelerate, the disturbance in the electric and magnetic field associated with the electron propagates outward from the location of the electron with a speed of c, the speed of light.
This is somewhat analogous to a disturbance in the air due to, e.g., a loudspeaker that propagates outward at the speed of sound. Of course, imagining "vibrations" in the electromagnetic field is not easy like imagining "vibrations" in air. For years, physicists thought there had to be a substance, called aether, that filled all of space and was the medium for the propagation of light waves. It was the failure to detect the aether that led to the development of the Special Theory of Relativity. - I want to give you an example of how light can created. This has been helpful for my students when it comes to visualizing light as a wave. This is not the only way that light can be created, but it's a way that is a little easier to visualize and may help you understand why it can be seen as a wave. Imagine an electric charge. This charge creates an electric field around itself, which is what allows it to put a force on other charges. If we move that charge, the electric field associated with it will also move. However, and this is really key: the electric field does not adjust instantaneously. It takes time for the field to "catch up" with the new position of the charge. There's a great simulation of that at this link. It's a little exaggerated, but it'll show you the basic idea. You can see that wiggling the charge creates a disturbance in the electric field that looks very wave-like. There is also a magnetic part to this, which is more complex and is not shown on that animation. Moving charges are how we create radio waves, which are a form of light. (They're not the visible light that you were referring to, but there are a lot of different kinds of light.) Radio waves are made by moving electrical currents up and down the broadcast tower via a circuit that's designed to do so. It took a while for people to realize that radio waves, visible light, x-rays, and many other things are all really different forms of light. Once we realized that, it was clear that we could represent visible light with a wave - an electromagnetic wave, with a wavelength, frequency, amplitude, speed, and all the things that waves normally have. The wave is not "light vibrating," the light itself is vibrations in the electric (and magnetic) field. I should also mention that @Robert is correct. Light can be modeled as a wave because it behaves like one under certain circumstances. It can also be modeled as a particle (the photon) or as a collection of waves called a "wave packet." I hope that helps! - I think you mean if you accelerate a charge. No disturbance propagates if a charge is in motion at a constant speed--the field just moves at that speed too. – acjohnson55 Jul 12 '12 at 1:51 You are correct - that's what I get for simplifying my language too far. – Colin Fredericks Jul 12 '12 at 14:27 If visible light is part of LIGHT, which has a wide spectrum, including gamma, x-rays, sound, then why is it said that Light travels at a constant speed, when clearly it means visible light, since sound does not travel at the same speed as visible light, correct? - 1 Sound is emphatically not part of the electromagnetic spectrum. Sound is a compression wave in a medium. Electromagnetic waves--including light--are transverse traveling waves of electric and magnetic fields that require no medium. – dmckee♦ Jul 24 '12 at 23:30 Light is not a wave. 
If you (for instance) shoot particles of light called photons from a laser (photon-gun) at a wall with two holes in it and for some odd reason you decide to fire one photon every 1000 years you will find out that at the other end the detector (some fancy sort of polaroid) has an alternating series of vertical dark stripes and vertical light stripes. Thus you have what does look like an interference pattern on the screen. That is why it's more correct to say that light is a particle (the photon) which can "behave" like a wave. However as previous commenters explained if you don't want a deep insight into the nature of light you can also think of it in a classical (non quantum mechanical) way as a vibration of the electromagnetic field but in my opinion that response is not only less deep but also a little more abstract! The abstractness of quantum mechanics comes into play in the "why" of why there's an interference pattern but if you understand quantum mechanics already you don't need to also understand all of electromagnetism to understand why light behaves like a wave because EVERYTHING has a wavelength associated to it. Electrons also behave the way I described when you fire an electron-gun at a wall with two holes. - Was anything I said incorrect? My apologies if my post sounded harsh but I wanted to clarify the fact that light really is made of particles. Upon reading the question again however I see that the questioner was asking about the wavelength of light and not talking about the wave function so I could have accidentally implied that these two "waves" are the same. – tachyonicbrane Jun 21 '12 at 14:42 I didn't vote it down, but there are three things that struck me about your answer. First, it's a little incoherent and could be better organized. Second, you attribute interference patterns to particle behavior, when they're actually wave behavior. Third, you answered with material that seems to be way above the level of the original post, and would probably just be confusing. Those are probably why people voted your answer down. – Colin Fredericks Jun 21 '12 at 15:01
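To make the "individual photon detections build up a wave-like pattern" picture above concrete, here is a small illustrative simulation (my own sketch, not part of any answer in the thread): detection positions are sampled one at a time from the standard two-slit intensity profile, and the accumulated histogram reproduces the interference fringes. The wavelength, slit separation, and screen distance are arbitrary illustrative values, and the single-slit envelope is ignored.

```python
import numpy as np

# Illustrative two-slit parameters (arbitrary values, SI units)
wavelength = 500e-9   # 500 nm
slit_sep   = 50e-6    # distance between the two slits
screen_d   = 1.0      # slit-to-screen distance

# Far-field two-slit intensity (single-slit envelope ignored):
# I(x) proportional to cos^2(pi * slit_sep * x / (wavelength * screen_d))
x = np.linspace(-2e-2, 2e-2, 2000)
intensity = np.cos(np.pi * slit_sep * x / (wavelength * screen_d)) ** 2

# Treat the normalized intensity as the probability density for a single
# photon detection, and accumulate detections one at a time.
prob = intensity / intensity.sum()
rng = np.random.default_rng(0)
for n_photons in (10, 1_000, 100_000):
    hits = rng.choice(x, size=n_photons, p=prob)
    counts, edges = np.histogram(hits, bins=50)
    print(n_photons, "photons -> fringe contrast:",
          (counts.max() - counts.min()) / counts.max())
```

With only a handful of photons the histogram looks random; with many photons the dark and bright fringes emerge, which is the behaviour described in the answer above.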
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9612023234367371, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/226539/determinant-of-a-n-symmetric-square-matrix-with-diagonal-1/226554
# Determinant of an $n \times n$ symmetric matrix with diagonal 1 What is the determinant of a symmetric $n \times n$ matrix with all diagonal entries equal to $1$ and all off-diagonal entries equal to $\rho$ (yes, a correlation matrix)? Can anyone tell me a method to work it out elegantly? Thanks! - ## 2 Answers Let $A$ be an $n\times n$ matrix of your desired form. Let us form the matrix of all ones, call it $J$. Then we can represent $A$ as $$A=\rho J - (\rho - 1)I$$ The determinant of the above matrix is $$\det(A) = \rho^n\det\left(J - \frac{\rho - 1}{\rho}I\right)$$ Letting $\lambda = \frac{\rho - 1}{\rho}$, the latter determinant is precisely the characteristic polynomial of $J$ evaluated at $\lambda$, which is easily seen to be $$p(\lambda)=(-1)^{n}\lambda^{n-1}(\lambda - n)$$ Some simplifying then gives the determinant as $$\det(A)=\left(1-\rho \right)^{n-1}\left(1 + \rho n-\rho\right)$$ - Let $M_n$ be the matrix with matrix elements $$M_{ij}= \cases{1 & $i=j$\cr \rho & $i\not=j$}$$ Also let $N_n$ be the $n \times n$ matrix obtained from $M_n$ by replacing the first element of the first row with $\rho$. Applying Laplace expansion along the first row of $M_n$ and $N_n$: $$\begin{eqnarray} \det M_n &=& \det M_{n-1} - (n-1) \rho \cdot \det N_{n-1} \\ \det N_n &=& \rho \cdot \det M_{n-1} - (n-1) \rho \cdot \det N_{n-1} \end{eqnarray}$$ with $\det M_2 = 1 - \rho^2$ and $\det N_2 = \rho(1-\rho)$. This gives $$\det(M_n) = (1-\rho)^{n-1}(1+(n-1) \rho)\qquad \det N_n = \rho (1-\rho)^{n-1}$$ -
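A quick numerical sanity check of the closed form above (a sketch of my own, not part of either answer; NumPy assumed):

```python
import numpy as np

def corr_matrix(n, rho):
    """n x n matrix with 1 on the diagonal and rho elsewhere."""
    return (1 - rho) * np.eye(n) + rho * np.ones((n, n))

def det_formula(n, rho):
    """Closed form derived above: (1 - rho)^(n-1) * (1 + (n-1)*rho)."""
    return (1 - rho) ** (n - 1) * (1 + (n - 1) * rho)

for n in (2, 3, 5, 8):
    for rho in (-0.1, 0.0, 0.3, 0.9):
        assert np.isclose(np.linalg.det(corr_matrix(n, rho)),
                          det_formula(n, rho))
print("closed form matches numpy.linalg.det")
```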
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 7, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9226181507110596, "perplexity_flag": "head"}
http://mathoverflow.net/questions/76242/definite-integral-0-dx-expx2a-expb-x2
## Definite Integral ∫_{0}^{∞} dx exp(−x^2−a exp(b x^2)) ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) I've been trying without success to do $$\int_0^\infty dx\; \exp(-x^2) \exp(-a\exp(bx^2)).$$ It's not in my integral tables. Wolfram online integrator won't do it. It doesn't seem to be amenable to a contour integral method, and the method of integrating $e^{-x^2}$ alone doesn't work either. I don't know if this is the kind of question asked here, but any help would be appreciated. Thanks, Eric - 4 There is little reason to expect a closed form expression, even given pleasant endpoints. What is this for? – Will Jagy Sep 23 2011 at 22:47 1 Could you provide a bit of context? Is this a numerical integration? Symbolic integration? And I hope you are aware that not all integrals evaluate to elementary expressions. And that MO is for research-level mathematics, so that your question might fall outside what this site covers. If this question is closed, please see the FAQ mathoverflow.net/faq for a list of other Q&A sites you could try. – David Roberts Sep 23 2011 at 22:48 It can be evaluated numerically quite readily as a function of $a$ and $b$: i583.photobucket.com/albums/ss275/jaspercrowne/… but I guess that's not what you're after? – jc Sep 23 2011 at 22:53 2 This might be a fine question for MO, but to make it so, please do provide some background. Many integrals show up in many areas of mathematics, and I for one am always interested in hearing about more of them. But it's also easy to write down integrals that do not evaluate to closed expressions. If I understand why this particular integral is important, I'm much more likely to believe that it has a nice evaluation, and I'm much more likely to invest time into answering your question. – Theo Johnson-Freyd Sep 24 2011 at 0:15 1 To that end, I have voted that this question be closed temporarily as "too localized". At best, you will revise it, and my complaints will be moot, and you will get a useful answer, and the question will not be closed. Next best, if it is closed before you have time to make revisions, then after revising it you should "flag for moderator attention" and ask that the question be reopened. The point of closing it temporarily is to put some pressure on you to improve the question (I don't think we know each other, so it's hard for me to exert social pressure). – Theo Johnson-Freyd Sep 24 2011 at 0:17 show 6 more comments ## 2 Answers If you expand the $\exp(-a \exp(b x^2))$ as a power series in the variable $a \exp(b x^2)$ you will get a rather nice series, each term of which is a gaussian integral, and so is easy to integrate. I don't have mathematica in front of me as I type, but this should give you about as nice a form as you might hope for (and the sum might be doable in closed form). EDIT: The other point is that if you make the substitution $u = \exp(b x^2),$ then your integral becomes the integral from $1$ to $\infty$ of a power of $\log u$ times a power of $u,$ which should be amenable to contour integration... - It has $e^{-au}$ too. Maple gives me some mess with the WhittakerM function in it. – Brendan McKay Sep 25 2011 at 10:41 Yes, that's what I meant (I initially was thinking of Laplace transform of a power times power of a log). A mess with a Whittaker function is better than nothing (which is what Mathematica gives me).
– Igor Rivin Sep 25 2011 at 14:47 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. The manipulations below work assuming that $a>0$ and $b<0$. Using Igor Rivin's idea, one does get a gaussian, but it requires $b<0$. The resulting sum is $$\frac{\sqrt{\pi}}{2} \sum_{k=0}^{\infty} \frac{\left(-a\right)^k}{k!\sqrt{1-kb}}$$ which does not seem to have a closed form. Sure, one can rewrite $\frac{1}{\sqrt{1-kb}}$ a series in $k$, but that doesn't help because the resulting term (in $k$) is not hypergeometric, so swapping the order of summation still leads to a dead end. The above sum might be the best that can be done. - I've tried this :-). The sum isn't really better than the integral for my purposes. And, a>0, b>0 (which I didn't mention in the original post). – Eric Ulm Sep 24 2011 at 12:31 2 What ARE your purposes? It would help if you told us... – Igor Rivin Sep 24 2011 at 14:52 I'm trying to solve a PDE that showed up in a life insurance financial math context. The PDE can be solved numerically straightforwardly, but I've been able to get closed form solutions for some simple but unrealistic mortality laws. I've been working on a more realistic mortality law and have made substantial progress. I could just call the integral above a "special function" in which case I'd be done, but it would be much nicer to do the integral above. I'm sure I could publish it without a solution to that integral, but it would be much nicer with it. – Eric Ulm Sep 24 2011 at 16:24 Well, the question is: what does the answer need to be good for: getting large $a$ or $b$ asymptotics? Evaluating it for a particular value of $a$ and/or $b$? Closed form is not always the best for those purposes... – Igor Rivin Sep 24 2011 at 20:21 I guess my quick answer is that I like a closed form for its own sake, i.e. it's satisfying somehow. Overall, getting a solution for a particular value of a and b is most important. I'd be happy if I knew the functional form w.r.t "a" and an integral over "b" (i.e. get the "a" outside the integral). "a" (which depends on the individual's age) would change much more often than "b" (which depends on the parameters of the mortality law, stock market process and interest rates). – Eric Ulm Sep 24 2011 at 20:45 show 1 more comment
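For what it's worth, the term-by-term expansion in the answer above and a direct numerical evaluation of the integral are easy to compare for $b<0$, where the expansion is valid. This is only an illustration of my own (SciPy assumed; the values of $a$ and $b$ are arbitrary samples), not anything from the thread.

```python
import numpy as np
from math import factorial, sqrt, pi
from scipy.integrate import quad

a, b = 1.0, -0.5   # sample values with b < 0, where the series is valid

# Direct numerical evaluation of  int_0^inf exp(-x^2 - a*exp(b*x^2)) dx
direct, _ = quad(lambda x: np.exp(-x**2 - a * np.exp(b * x**2)), 0, np.inf)

# Partial sum of (sqrt(pi)/2) * sum_k (-a)^k / (k! * sqrt(1 - k*b))
series = 0.0
for k in range(40):
    series += (-a) ** k / (factorial(k) * sqrt(1 - k * b))
series *= sqrt(pi) / 2

print(direct, series)   # the two values should agree to many digits
```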
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9581308960914612, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/944/noticing-that-newtonian-gravity-and-electrostatics-are-equivalent-is-there-also
# Noticing that Newtonian gravity and electrostatics are equivalent, is there also a relationship between the general relativity and electrodynamics? In classical mechanics, we had Newton's law of gravity $F \propto \frac{Mm}{r^2}$. Because of this, all laws of classical electrostatics applied to classical gravity if we assumed that all charges attracted each other due to Coulomb's law being analogous. We can "tweak" classical electrostatics to fit gravity. In modern physics, does the reverse work? Can we "tweak" General Relativity to accurately describe electrostatics or even electromagnetism? - Do you mean "cast in terms of" instead of "tweak?" – John at CashCommons Nov 16 '10 at 18:06 I don't see how the first part can be true, since in electrodynamics, there are magnetic fields. What is the gravitational counterpart? I know there exists a vector-field theory of gravity, but I don't think the facts bear it out, since GR is based on a rank-2 tensor, the metric. – Raskolnikov Nov 16 '10 at 18:18 3 – David Zaslavsky♦ Nov 16 '10 at 20:28 I agree, this looks like a duplicate. – Noldorin Nov 16 '10 at 21:08 This is definitely a duplicate now that I look at it. Please close – Justin L. Nov 16 '10 at 21:41 show 3 more comments ## 2 Answers The parallel between Gravity and E&M is that both forces are mediated by massless particles, the graviton and the photon, respectively. This, in the end of the day, is the reason why both classical theories look similar. But, when you really study what's going on behind the scene, you learn that Gravity is more appropriately described by General Relativity (GR) and ElectroMagnetism is better described by Quantum ElectroDynamics (QED). The resemblance of these two theories, one may say, rests in the fact that both are described by the same mathematical framework: a principle bundle. In GR's (ie, gravity) case this bundle is a Tangent Bundle (or an $SO(3,1)$-bundle) and in QED's (ie, E&M) case it's a $U(1)$-bundle. The geometric structure is the same, what changes is the "gauge group", the object that describes the symmetries of each theory. Under this new sense, then, your question could be posed this way: "Is there a way to modify geometry in order to incorporate both of the symmetries of these two theories?" Now, this question was attacked by Hermann Weyl in his book Space, Time and Matter, giving birth to what we now call Gauge Theory. As it turns out, Weyl's observations amounts to a slight change on what symmetries we use to describe Gravity: rather than only using $SO(3,1)$, Weyl used a different group of symmetries, called Conformal. As Einstein later showed, it turns out that if you try and describe Gravity and E&M using this generalized group of symmetries (under this new geometrical framework of principle bundles) you do not get the appropriate radiation rates for atoms, ie, atoms which we know to be stable (they don't spontaneously decay radioactively) would not be so under Weyl's proposal. After this blow, this notion of unifying Gravity and E&M via a generalization of the geometry (principal bundles) that describes both of them, was put aside: it's virtually impossible to get stable atoms (stability of matter) this way. But, people tried a slightly different construction: they posited that spacetime was 5-dimensional (rather than 4-dimensional, as we see everyday) and constructed something called a Kaluza-Klein theory. 
So, rather than encode the E&M symmetries by changing the geometry via the use of the Conformal Group, they changed it by increasing its dimension. Now, this proposal has its own drawbacks, for instance, the sore thumb that is supradimensionality, ie, the fact that spacetime is assumed to be 5-dimensional (rather than 4-dim) — there are other technicalities, but let's leave those for later. The bottom-line is that it's proven very hard to describe gravity together with the other forces of Nature. In fact, we can describe the Strong Force, the Wear Force and ElectroMagnetism all together: this is called the "Standard Model of Particle Physics". But we cannot incorporate gravity in this description, despite decades of trying. - It's a pity I can only upvote this amazing answer once. – Marek Nov 16 '10 at 20:42 @Marek: I'm not easy to blush... so, thanks for your kindness. :*) – Daniel Nov 16 '10 at 20:45 First a note of caution: what you are talking about is electro-statics, not dynamics. The reason electrostatics and Newtonian gravitation are similar is that this is really the only possible law that has certain nice properties in 3 dimensions (e.g. Gauss' law). This is of course no definitive proof (the actual proof being based on that these theories are similar approximations of some better theories) but it might give you at least some intuition into the matter. Now, if you appreciate the point that there might be currents and electromagnetic waves, you must instead consider electrodynamics and then you surely can't "tweak" it to fit Newtonian gravity anymore. Going further, even if you consider gravitational field (as studied in general theory of relativity (GTR)) and electromagnetic field, these concepts are pretty different. So in order to give any answers I have to take your question quite liberally and just tell you what are the similarities between the two field theories and whether they can be modeled together. And for this some answers can indeed be given: 1. Gravitomagnetism is a linear approximation of the GTR that is equivalent to electrodynamics. This is probably the closest answer to your question there is because in your case you approximate electrodynamics with electrostatics and this is then equivalent with Newtonian gravitation. So here you go the other way although on a much higher level of field theories. 2. You can consider electrodynamics on the curved background having an effective model for both of them at once. They are completely compatible and in some regards similar (e.g. by having waves propagating at the speed of light). 3. There is something called Kaluza-Klein theory where you study 5-dimensional space-time in which one dimension models electrodynamics. And, in fact, this turns out to be equivalent to the second point but with an extra field called radion. This is is a baby version of the modern stuff treated in string theory and similar research areas. 4. Both theories are an example of a gauge theory although general relativity is quite special in this regard. - 3 Well done for pointing out that the questioner is only referring to electrostatics. This is an important distinction, and really makes the naive proposal fall apart. – Noldorin Nov 16 '10 at 21:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9435536861419678, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/62524/is-it-possible-to-use-aks-test-in-integer-factorization
## Is it possible to use AKS-test in integer factorization ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Agrawal-Kayal-Saxena use the identity $$(X+a)^n=X^n+a \pmod{n, X^r-1}$$ for some small $a$'s to determine primes. Is it possible to improve this method and use it for integer factorization? Are there any research in this way? Thanks. More specifically, Is it possible to find a constructive proof (rather than the original existence proof) of AKS theorem, which will reveal some information for composite numbers. - 9 Probably it's not possible to turn it into an efficient factoring method. Primality testing is much easier than factoring: primes have many special properties, and if you fail to detect these properties you know you are dealing with a composite number, but this generally does not produce actual factors. I don't see anything about the AKS test that seems likely to lead to progress on factoring, and although I guess it's hard to rule it out in principle, it doesn't sound like a fruitful research direction to me. – Henry Cohn Apr 21 2011 at 12:17 4 I am voting to close. My apologies, but questions of the form "can (something) be used to approach (really hard problem)?" are a little too open-ended for my taste. – David Hansen Apr 21 2011 at 12:32 1 @David Hansen, if one takes the question very litterally then yes (because to say with certainty something is impossible is mostly impossible). But, I believe one can say something meaningful why this is unlikely (my first attempt is below, but I know there are various people on the site that could say something better/more definite on this). – quid Apr 21 2011 at 13:36 What precisely do you mean by 'AKS theorem'? – quid Apr 23 2011 at 3:22 ## 2 Answers I am a bit hesitant to write this, as I am not really familiar with parts of what I discuss, but if I did not get something wrong I believe one can say something more or less precise, on why, as said by Henry Cohn, AKS should not be relvant for factoring. The AKS-primality test/prove falls into the paradigm of "derandomization," more specifically derandomization of polynomial identity testing, see for example "On Derandomizing Tests for Certain Polynomial Identities" by Agrawal, where the AKS-test is mentioned right at the start. To put it slightly differently AKS did not use much a new or deep insight on primes or number theory, but an insight on efficient and deterministic testing of identities. [ADDED clarification: what I want to say by this is that if one now in retrospect reads the (main) proof in 'Primes is in P' then one needs neither explicitly nor implictly, in the sense of fully understanding results that are invoked, any speciliazed number theoretic knowledge to follow it in all details; e.g., no number fields, no exponential sums, no elliptic curves, no analytic number theory. So it is in a technical sense somehow an elementary argument. However, it is my understanding that (yet this is the part were, as said, I do not really know what I am talking about) it was only findable by following and carrying out the above mentioned derandomization paradigm, coming from Theoretical Computer Science. Opposed to an imaginary situation where a proof would combine in a new way all kinds of and/or improve number theoretic results used before in this context to obtain the conclusion.] Or even more bluntly, the progress is more on the computer-science-side than on the number-theory-side. 
While to optimize the exponent in the algorithm various deep number-theoretic results and conjectures are relevant (as discussed in the original AKS paper, and there are also subsequent developments on this), just to get some polynomial time algorithm does essentially need no (advanced) number theory at all, but the progress is achieved by taking an approach quite distinct from earlier ones. To me this makes the result of AKS all the more remarkable. Now, this derandomization technique also has other applications, but factoring does not seem to be one (see e.g. the above mentioned paper). Finally, there are no 'fast' (for the right notion of fast) probabilistic factoring algorithms either. - ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. AKS did use new and deep insights from number theory. They first created a new randomized algorithm for primality and then they were able to derandomize it. A polynomial algorithm for primality was known by Miller under GRH and AKS used a certain averaged deep result by Bombieri instead. They also used deep general techniques of derandomization. Any such new great result may give hope for even greater new results in the same general area. But as Henry explained factoring is considerably harder than primality and there is no specific reason to believe that the AKS algorithm will be relevant for factoring. There are some complexity-theoretic indications that derandomization is possible. Practically, randomized algorithm can be carried out rather safely by using computers methods for generating random bits. (Probably the easyness of practical derandomization and computational-theoretic indications that derandomization is possible are related. But this is an interesting question on its own.) Nevertheless there are notorious problems that derandomization is not known (like testing polynomial identities). So while AKS theorem goes in the expected direction of CC it was still a big surprise. There are some complexity theoretic indications that factoring is hard. (Among them the fact that factoring is hard in practice.) Anyway, the fact that primality is easy is sort of a miracle for which I dont have a good understanding. - I am the unknown giving the other answer. Not sure if I was misunderstood, but it was in no way my intention to somehow belittle the progress that AKS was (as I hoped to convey with a later sentence to that extent). On rereading the phrasing 'AKS did not use...number theory' in my asnwer I see that it is misleading and I will change this. But, what I beieve to be true and wanted to say is that the main proof in 'Primes is in P' does not use these number theoretic res. and succeeds in showing that Primes is in P essentially from 'nothing' (which to me makes the result even better); (cont.) – quid Apr 22 2011 at 0:06 (cont.) the paper also contains a discussion of improvements of the bounds for the exponent (which I mentioned but of which I unfortunately failed to mention that some of them were already in the AKS-paper) that do involve these results. But they are not necessary for what is I believe the main point of the paper, namely that Primes is in P. – quid Apr 22 2011 at 0:15 Sorry, to write a third comment. I now edited my answer to (I hope) express more clearly what I wanted to say. Finally, I would like to appologize if my original crude formulation should have been disrespectful towards AKS. 
This was certainly not my intention. – quid Apr 22 2011 at 2:03
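For concreteness, the congruence in the question, $(X+a)^n \equiv X^n + a \pmod{n, X^r-1}$, can be checked directly by repeated squaring of polynomials, reducing coefficients mod $n$ and degrees mod $X^r-1$. The sketch below is my own naive check of that single identity; it deliberately omits the perfect-power test, the prescribed choice of $r$, and the range of values of $a$ that the actual AKS algorithm requires, so it is not a primality test.

```python
def polymul_mod(p, q, n, r):
    """Multiply polynomials p, q (coefficient lists, low degree first)
    modulo n and modulo X^r - 1."""
    out = [0] * r
    for i, pi in enumerate(p):
        if pi:
            for j, qj in enumerate(q):
                out[(i + j) % r] = (out[(i + j) % r] + pi * qj) % n
    return out

def power_mod(base, e, n, r):
    """Compute base^e modulo (n, X^r - 1) by repeated squaring."""
    result = [1] + [0] * (r - 1)
    while e:
        if e & 1:
            result = polymul_mod(result, base, n, r)
        base = polymul_mod(base, base, n, r)
        e >>= 1
    return result

def aks_congruence_holds(n, a, r):
    """Check (X + a)^n == X^n + a  (mod n, X^r - 1)."""
    lhs = power_mod([a % n, 1] + [0] * (r - 2), n, n, r)
    rhs = [0] * r
    rhs[n % r] = 1
    rhs[0] = (rhs[0] + a) % n
    return lhs == rhs

print(aks_congruence_holds(31, 2, 5))   # True: 31 is prime
print(aks_congruence_holds(33, 2, 5))   # False: 33 = 3 * 11
```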
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9522743821144104, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/16504/big-picture-what-is-the-connection-of-malliavin-calculus-with-differential-geome/16625
## Big Picture: What is the connection of Malliavin calculus with differential geometry? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) I know that Paul Malliavin was heavily influenced by ideas from differential geometry while developing his calculus on Wiener space. But what are the concrete analogies between both areas of mathematics? What has this to do with Hörmander's theorem (if so)? - Should this also have the pr.probability tag? I sometimes filter by that tag, and would expect to see questions such as this. – George Lowther Feb 27 2010 at 19:51 I guess at least it should have the "big-picture" tag. – philip314 Feb 27 2010 at 19:58 ## 1 Answer I can't speak for Paul Malliavin's influences, but I do know a bit about Hörmander's theorem (by no means an expert), and it is naturally suited to differentiable manifolds, involving largely the idea of pullbacks of vectors. Malliavin calculus was apparently initiated to give a probabilistic proof of Hörmander's theorem. Following Rogers and Williams, Hörmander's theorem concerns stochastic differential equations of the form $$\partial X = \sum_q U_q(X) \partial B^q + W(X)\partial t.$$ Here, $X$ is a stochastic process taking values in $\mathbb{R}^n$, the $U_q$ and $W$ are smooth vector fields, the $B^q$ are Brownian motions and $\partial$ represents the Stratonovich integral. As Stratonovich integration satisfies the standard change of variables formula, this SDE makes sense on an arbitrary differentiable manifold. Next, according to the statement of Hörmander's theorem, let $[\cdot,\cdot]$ be the usual Lie bracket for vector fields, and let $A_0, A_1, \dots$ be the sequence of Lie algebras defined as follows. $$\begin{align} A_0 &= {\rm Lie}(U_1,U_2,\dots),\\ A_k &= {\rm Lie}([U,W]\colon U\in A_{k-1}) \end{align}$$ Then Hörmander's theorem states that if $\bigcup_n A_n$ spans the tangent space at each point of $\mathbb{R}^n$, then $X$ has smooth transition densities. Hörmander's theorem is naturally a statement concerning diffusions on differentiable manifolds, as everything I said above makes perfect sense, and is true, if $\mathbb{R}^n$ is replaced by any differentiable manifold. The idea behind the Malliavin proof is to consider differentiating with respect to perturbations of the Brownian motions $B^q$. The point is that if $B^q$ has a small bump applied at time $t$, then this creates a small bump in the solution $X$ proportional to $U_q$ at this time, which will then be propagated along the solution. In fact, solutions to the original SDE started from all the different points give rise to stochastically moving frames, and bumps in the solution $X$ are transported along with these frames in a similar way as vector fields give rise to transport of vectors along these fields. The solution for $X$ is smooth with respect to smooth bumps in the Brownian motions, which can be shown by converting bumps in the Brownian motion into changes in the probability measure, using Girsanov transforms. So, according to Malliavin calculus, you can always differentiate the solution with respect to the Brownian motions. The idea behind the proof of Hörmander's theorem is to invert this process of turning variations of the Brownian motions into bumps in the final position of $X$. To do this, it is necessary to invert the process of transporting along the moving frames. That is, it must be a 1-1 map on the tangent spaces. Then, by a stochastic "pull-back" on the manifold (or $\mathbb{R}^n$), you can interpret differentiating the solution with respect to its position at any time in terms of differentiating with respect to the Brownian motions.
So, this method of proving Hörmander's theorem requires you to be able to differentiate with respect to the Brownian motions (i.e., Malliavin calculus), and the rest is all differential geometry (i.e., pullbacks of tangent vectors). -
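The bracket condition in the statement above is easy to experiment with symbolically. Below is a small SymPy sketch of my own (not from the answer) for the classic Kolmogorov-type example $\partial X = \partial B$, $\partial Y = X\,\partial t$ on $\mathbb{R}^2$: the diffusion field $U = \partial_x$ alone does not span the tangent space, but adding the bracket $[U, W]$ with the drift $W = x\,\partial_y$ does, so the theorem gives smooth transition densities for this degenerate diffusion.

```python
import sympy as sp

x, y = sp.symbols('x y')
coords = (x, y)

def bracket(U, V):
    """Lie bracket of vector fields given as coefficient tuples:
    [U, V]^i = sum_j (U^j dV^i/dx_j - V^j dU^i/dx_j)."""
    return tuple(
        sum(U[j] * sp.diff(V[i], coords[j]) - V[j] * sp.diff(U[i], coords[j])
            for j in range(len(coords)))
        for i in range(len(coords)))

U = (sp.Integer(1), sp.Integer(0))   # diffusion field  d/dx
W = (sp.Integer(0), x)               # drift field      x d/dy

UW = bracket(U, W)                   # equals (0, 1), i.e. d/dy
span = sp.Matrix([U, UW])
print("[U, W] =", UW)
print("rank of {U, [U, W]}:", span.rank())   # 2 -> spans the tangent space
```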
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9334372878074646, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/208949/function-transformation-order-of-operations
# Function transformation order of operations I am reviewing for a midterm for Pre-Calculus and I am trying to understand the concept of function transformation: Let's say I am given a function $f$ with the domain in the interval of $[1,5]$ and $g(x)=6-2f(x)$. Now my question is does it matter where you start your transformation? Can I move the graph up $6$ units then stretch it by a factor of $-2$? The textbook states to stretch it by a factor of $-2$ then move it up $6$ units. I tried both ways and ended up with different domains, $[-22,-14]$ and $[-4,4]$ respectively. So is there a certain order of operations to follow when transforming functions? ie: PEMDAS? - ## 1 Answer You are trying to grab at a rule where you should be trying to understand a concept. How do you get to $6-2f(x)$, starting from $f(x)$? Do you first multiply by $-2$, and then add 6? or do you first add 6 and then multiply by $-2$? What would happen if you took $f(x)$, and first you added 6, and then you multiplied by $-2$? What would you get? - I think I understand how this works, taking $f(x)$ multiplying it by $-2$ then adding $6$ is what I want. But taking $f(x)$ and adding $6$ then multiplying by $-2$ would give me $-2f(x)-12$ is that correct? – Steven N Oct 7 '12 at 22:49 Yes, that's correct. In the same way that you can't arbitrarily interchange the order of addition and multiplication, you can't arbitrarily interchange the order of translation (movement) and homothety (stretching). When it comes to formulas, those geometric transformations are simply addition and multiplication again :) – Yoni Rozenshein Oct 7 '12 at 23:19
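A quick numeric way to see the difference discussed above (an illustration of my own; it takes $[1,5]$ to be the set of values of $f$, as in the question): apply the two orders of operations to those values and compare the resulting intervals.

```python
import numpy as np

f_values = np.linspace(1, 5, 101)   # values of f on the interval [1, 5]

# Correct reading of g = 6 - 2 f: multiply by -2 first, then add 6
g_correct = 6 - 2 * f_values
# Wrong order: add 6 first, then multiply by -2 (this is -2 f - 12)
g_wrong = -2 * (f_values + 6)

print(g_correct.min(), g_correct.max())   # -4.0 4.0
print(g_wrong.min(),   g_wrong.max())     # -22.0 -14.0
```

The two outputs are exactly the two intervals mentioned in the question, which is why the order of the transformations matters.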
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9498472213745117, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2008/02/11/metric-spaces-are-categories/?like=1&_wpnonce=9d779cb1c0
The Unapologetic Mathematician Metric Spaces are Categories! A guest post by Tom Leinster over at The n-Category Café reminded me of an interesting fact I haven’t mentioned yet: a metric space is actually an example of an enriched category! First we’ll need to pick out our base category $\mathcal{V}$, in which we’ll find our hom-objects. Consider the set of nonnegative real numbers with their real-number order, and add in a point called $\infty$ that’s above all the other points. This is a totally ordered set, and orders are categories. Let’s take the opposite of this category. That is, the objects of our category $V$ are the points in the “interval” $\left[0,\infty\right]$, and we have an arrow $x\rightarrow y$ exactly when $x\geq y$. This turns out to be a monoidal category, and the monoidal structure is just addition. Clearly this gives a monoid on the set of objects, but we need to check it on morphisms to see it’s functorial. But if $x_1\geq y_1$ and $x_2\geq y_2$ then $x_1+x_2\geq y_1+y_2$, and so we can see addition as a functor. So we’ve got a monoidal category, and we can now use it to form enriched categories. Let’s keep out lives simple by considering a small $\mathcal{V}$-category $\mathcal{C}$. Here’s how the definition looks. We have a set of objects $\mathrm{Ob}(\mathcal{C})$ that we’ll call “points” in a set $X$. Between any two points $p_1$ and $p_2$ we need a hom-object $\hom_\mathcal{C}(p_1,p_2)\in\mathrm{Ob}(\mathcal{V})$. That is, we have a function $d:X\times X\rightarrow\left[0,\infty\right]$. For a triple $(p_1,p_2,p_3)$ of objects we need an arrow $\hom_\mathcal{C}(p_2,p_3)\otimes\hom_\mathcal{C}(p_1,p_2)\rightarrow\hom_\mathcal{C}(p_1,p_3)$. In more quotidian terms, this means that $d(p_2,p_3)+d(p_1,p_2)\geq d(p_1,p_3)$. Also, for each point $p$ there is an arrow from the identity object of $\mathcal{V}$ to the hom-object $\hom_\mathcal{C}(p,p)$. That is, $0\geq d(p,p)$, so $d(p,p)=0$. These conditions are the first, fourth, and half of the second conditions in the definition of a metric space! In fact, there’s a weaker notion of a “pseudometric” space, wherein the second condition is simply that $d(p,p)=0$, and so we’re almost exactly giving the definition of a pseudometric space. The only thing we’re missing is the requirement that $d(p_1,p_2)=d(p_2,p_1)$. The case can be made (and has been, by Lawvere) that this requirement is actually extraneous, and that it’s in some sense more natural to work with “asymmetric” (pseudo)metric spaces that are exactly those given by this enriched categorical framework. 7 Comments » 1. Is there a standard term for something that satisfies the axioms of a pseudometric space except for symmetry? The Wikipedia page on pseudometric spaces doesn’t list any more general concept. Comment by Jeremy Henty | February 12, 2008 | Reply 2. I really don’t know whether there is or not. Tom certainly didn’t mention one in the post that reminded me about this fact. Comment by | February 12, 2008 | Reply 3. Aha! It’s a quasimetric space: http://en.wikipedia.org/wiki/Quasimetric_space http://planetmath.org/?op=getobj&from=objects&name=QuasimetricSpace Comment by Jeremy Henty | February 14, 2008 | Reply 4. Sorry, Jeremy, the links triggered the spam demon. Fixed now. Anyhow, that’s brilliant. Thanks for turning up that term. Comment by | February 14, 2008 | Reply 5. Incidentally, on a closer look it seems that quasimetric spaces keep the condition that $d(x,y)=0$ if and only if $x=y$. 
That is, maybe what we're looking at are pseudoquasimetric spaces. Or vice versa. Comment by | February 14, 2008 | Reply 6. Or almost pseudoquasimetric, since the Lawvere definition you're using includes $\infty$ as a possible distance. Comment by Todd Trimble | February 15, 2008 | Reply 7. You might want to check out Dictionary of Distances by Deza & Deza (Elsevier, 2006) for names for various almost-metrics. They aren't quite consistent with those listed in Wikipedia (which I liked, because they had a name for every subset of the metric axioms, including hemimetric and prametric, for example). It also seems that the names pseudo-, quasi- and (especially) semi-metric aren't used 100% consistently in the literature. Comment by | February 29, 2008 | Reply
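For a finite set of points, the enriched-category axioms described above (a "Lawvere metric": $d(p,p)=0$ and $d(p_2,p_3)+d(p_1,p_2)\geq d(p_1,p_3)$, with no symmetry required) can be checked mechanically. A small sketch of my own, purely illustrative:

```python
def is_lawvere_metric(d):
    """d: dict mapping (p, q) -> distance in [0, infinity].
    Checks only the identity and composition (triangle) axioms;
    symmetry is deliberately NOT required."""
    points = {p for p, _ in d} | {q for _, q in d}
    if any(d[(p, p)] != 0 for p in points):
        return False
    return all(d[(p2, p3)] + d[(p1, p2)] >= d[(p1, p3)]
               for p1 in points for p2 in points for p3 in points)

# An asymmetric example: "uphill" distances are larger than "downhill" ones.
d = {("a", "a"): 0, ("b", "b"): 0, ("c", "c"): 0,
     ("a", "b"): 1, ("b", "a"): 3,
     ("b", "c"): 1, ("c", "b"): 3,
     ("a", "c"): 2, ("c", "a"): 6}
print(is_lawvere_metric(d))            # True, even though d is not symmetric
print(d[("a", "b")] == d[("b", "a")])  # False: symmetry fails
```

This is exactly the kind of asymmetric distance that the enriched-categorical definition allows and that the comments above are trying to name.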
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 30, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9046666622161865, "perplexity_flag": "middle"}
http://agtb.wordpress.com/2009/09/28/communication-complexity-of-mixed-nash-equilibria/
# Turing's Invisible Hand Feeds: Posts Comments ## Communication Complexity of Mixed-Nash Equilibria September 28, 2009 by algorithmicgametheory In a previous post I discussed the communication complexity of reaching a pure Nash equilibrium.  The communication complexity model aims to capture the basic information transfer bottleneck between the different players in a games, abstracting away the question of incentives, and focusing on the need of communicating the different preferences (utilities) of the players that are assumed to initially be privately known.  The previous post introduced the  model of communication complexity and applied it to the question of finding a pure Nash equilibrium (if it exists).  The bottom line was that, in the general case, essentially all information about the utilities must be transferred and no “shortcuts” are possible.  In a multi-player game this amount of information is exponential in the number of players, implying that convergence to equilibrium, in general, is impractical, and special properties of the game must be used in order to reach equilibrium in reasonable time (one such property was demonstrated: dominance-solvable games). This post discusses the issue of convergence to a mixed Nash equilibrium, as studied by Sergiu Hart and Yishay Mansour in How Long to Equilibrium? The Communication Complexity of Uncoupled Equilibrium Procedures.   The setting, as in my previous post, has each one of $n$ players holding his utility function $u_i : S_1 \times ... \times S_n \rightarrow \Re$, where each $S_j$ is the set of strategies of player $j$, and for ease of notation lets have $|S_j|=m$ for all $j$.  We will assume that all utilities are finitely represented, i.e. are rational numbers, say with $k$-bit numerator and denominator.  Can a mixed Nash equilibrium be found by communicating significantly less than $m^n \cdot k$ bits — the size of each utility function?  Maybe it can be done by communicating only a polynomial (in $n$, $m$, and $k$) bits?  This would be a necessary condition for efficient convergence of any  (uncoupled) dynamics between the players. Before we look deeper into at this question, let us look at the similar problem of finding a correlated  equilibrium.  In this case it turns out that a polynomial amount of communication suffices and thus only a tiny fraction of the private information needs to be transferred. You can see a previous post of mine for background on correlated equilibrium. ### The Communication Complexity of Correlated Equilibrium Let us recall that a correlated equilibrium is a probability distribution on the strategy profiles $p : S_1 \times ... \times S_n \rightarrow \Re$ that satisfies linear inequalities of the following form:  for every player $i$, and every two strategies of $i$, $s_i, s'_i$ : $\sum_{s_{-i}} p(s_i,s_{-i}) u_i(s_i,s_{-i}) \ge \sum_{s_{-i}} p(s_i,s_{-i}) u_i(s'_i,s_{-i})$, where the sum ranges over all strategy profiles $s_{-i}$ of the other players.  The interpretation is that $s_i$ is indeed a best-reply to the conditional distribution $p(s_i,\cdot)$ on $s_{-i}$. [Edited on Oct 2nd: Albert Xin Jiang was kind enough to point out that the next paragraph is, well, wrong, so I'm striking it out.  In order to show that finding a correlated equilibrium can be done with low communication complexity, one must run the Ellipsoid algorithm on the dual LP (which has polynomial many variables) rather than on the primal (which has exponentially many variables).  
How to do so effectively is shown in a paper by Papadimitriou and Roughgarden, and Hart and Mansour show that the algorithm can be implemented with low communication.] As being a correlated equilibrium is defined by a set of linear inequalities, we can find a correlated equilibrium using linear programming.  Let us see how the players can jointly run the Ellipsoid LP algorithm while keeping the amount of communication in check.  The Ellipsoid algorithm runs in time polynomial in $m$, $n$, and $k$ as long as it has access to a separation oracle.  Such an oracle must be able to answer queries of the following form: given an unfeasible point, in our case a candidate distribution $p$, it must find a constraint, in our case a player $i$ and strategies $s_i, s'_i$, where the corresponding inequality is violated by $p$.  The main point is that each of the inequalities can be checked by a single player since it depends only on a single utility function.  Thus to implement the separation oracle, each player must either report a violated constraint or a single bit specifying that all his constraints are satisfied — all together taking $O(n + \log m)$ bits of communication.  When the algorithm terminates — after a polynomial number of steps (polynomial in $n$, $m$, and $k$) — all players know the answer, which is a correlated equilibrium.   (Note that even though a correlated equilibrium is an exponential-sized object, it will turn out to have a polynomial-sized support.) While this low-communication algorithm can not be viewed as natural dynamics that efficiently converge to a correlated equilibrium, it does point out that some dynamics that converge efficiently do exist.   The question of finding natural dynamics then gets more pressing and indeed it turns out that natural dynamics do exist as well: dynamics based on regret minimization (see my previous post.) ### Mixed-Nash Equilibria Now, once we have what to aim for — a similarly low-communication way of reaching a mixed-Nash equilibrium — let us look at the problem carefully again.  First, we must recall that Nash equilibria may be irrational even if all utilities are rational, so a Nash equilibrium can not be “printed” as binary numbers in any finite time.  Still, in our communication complexity formulation this shouldn’t be a problem since any representation of the equilibrium is in principle acceptable as long as it is uniquely determined by the communication.  Closely related to this is the fact that the representation of a Nash equilibrium in an $n$-player game may require exponentially many bits of precision, even if the utilities themselves have only short descriptions.  As before, while the required precision itself is not a lower bound on the communication complexity, Hart and Mansour do show how to use this precision to obtain a lower bound.  Below I give a different, somewhat stronger proof. Let us start with the following 2-player bi-strategy game, where $0<r<1$ is an arbitrary parameter. r, 0                0, r 0, 1-r            1-r, 0 It is easy to see that the only Nash equilibrium of this game has each player choosing his second strategy with probability exactly $r$ and his first with probability $1-r$.  This family of games suffices for giving a tight lower bound for the special case of two-player ($n=2$) bi-strategy ($m=2$) games. Lemma: The communication complexity of finding a Nash equilibrium in two-player bi-strategy games, where all utilities are given by $k$-bit integers, is $\Theta(k)$. 
The upper bound is trivial and the lower bound is implied by games of the previous form where $r$ ranges over all fractions of the form $r = x/2^k$, for integer $0 <x < 2^k$ since each of these games has a different (unique) answer giving $2^k$ different such answers, which thus requires at least $k$ bits of communication just to get all possibilities. The dependence of the communication complexity on $m$, the number of strategies of each player, is still unclear.  However, Hart and Mansour show that the dependence on the number of players, $n$, is exponential.  The basic idea is that with $n$ players we can get doubly-exponential  many different answers.  A clean way to show this is to “simulate” utilities of representation length $k=2^{\Theta(n)}$, and then invoke the previous bound. Win-Lose Simulation Lemma: For every $n$-player game with utilities that are $k$-bit integers, there exists an $(n + 3\log k)$-player game, with the following properties: • The new game is a win-lose game, i.e. all utilities are 0 or 1. • Each of the first $n$ players has the same set of strategies as in the original game.  The utility of each of these players is fully determined (in an easy manner) by his utility in the original game. • Each of the final $3 \log k$ players has 2 strategies.  The utilities of each of these are constants not depending on the original game at all. • The Nash equilibria of the new game are in 1-1 correspondence with those of the original game: the mixed strategies of each of the first $n$ players are identical, while the strategies of the final $3\log k$ players are fixed constants independent of the game. From this we immediately get our theorem. Theorem: The communication complexity of finding a Nash equilibrium in $n$-player bi-strategy win-lose games is $2^{\Theta(n)}$. The upper bound is trivial and the lower bound is implied by the lower bound on 2-player games with $k$-bit utilities via this reduction using $n=2+3\log k$ players. ### Proof of Win-Lose Simulation lemma It remains to prove the simulation lemma.  We will do so in three steps: first construct a fixed game whose unique equilibrium has exponentially low probabilities, then use this game to simulate games with utilities that are all powers of two, and finally use these to simulate general games with integer utilities. The first step of the reduction is the following construction: Construction: for each $t$ there exist $(2t)$-player win-lose bi-strategy games such that the unique Nash equilibrium has, for each $1 \le i \le t$, players $2i-1$ and $2i$ choosing their first strategy with probability exactly $2^{-2^{i-1}}$. Proof: for $t=1$, this is exactly “matching pennies”, i.e. the $2 \times 2$ game described above for $r=1/2$ (after scaling by a factor of 2).  Now for the induction step we start with $t-1$ pairs of players as promised by the induction hypothesis, and we keep these players’ utilities completely independent of the soon to be introduced 2 new players, so we know that in every Nash equilibrium of the currently constructed $2t$-player game these will still mix exactly as stated by the fact above for $t-1$.  In particular the last pair of players in the $(2t-2)$-player game choose their first strategy with probability $2^{-2^{t-2}}$.  The point is that with these players in place, we can easily simulate the $2 \times 2$ game above for $r=2^{-2^{t-1}}$.  
To simulate a “$r$ entry”, we define the utility as 1 when the last pair of players in the $(2t-2)$-player game play their first strategy (which happens with probability $2^{-2^{t-2}} \times 2^{-2^{t-2}} = 2^{-2^{t-1}} = r$) and 0 otherwise, while to define a $1-r$ entry we define the opposite. Our next step is to simulate (in the sense of the lemma) games where all utilities are powers of two in the range $1, 2, 4 , ..., 2^k$, equivalently, after scaling, in the range $1, 1/2, 1/4, ..., 2^{-k}$.  This is done by adding to the original players $t=2\log k$ players as defined in the construction above.  We can now replace each utility of the original players of the form $2^{-x}$ where $x$‘s binary representation is $x = \sum_{i=1}^{t} x_i 2^{i-1}$ with a utility of 1 whenever, for every $i$ with $x_i=1$, new player $2i$ plays his first strategy,(and 0 otherwise).  This happens with probability $\Pi_{i|x_i=1} 2^{-2^{i-1}} = 2^{-\sum_{i |x_i=1} 2^{i-1}} = 2^{-x}$, as needed. Our final step is simulating games with utilities that are general $k$-bit integers by games that have all their utilities powers of two, as in the previous step.  This is done by adding another $\log k$ bi-strategy players, paired into $\log k/2$ independent “matching pennies” games, and thus each of them always evenly mixes between his two strategies.  To simulate a utility of $y=\sum_{i=1}^k y_i 2^{i-1}$ in the original game, we give, in the new game, a utility of $y_i 2^{i-1}$ whenever the $\log k$ new players played a sequence of strategies which is the binary representation of $i$ (and 0 otherwise). The expected value is exactly $y/k$, completing the simulation (as the scaling by a factor of $k$ does not matter). ### What remains? The previous lower bound leaves much to be desired: while it does give an exponential lower bound in the input parameters, this is done at the cost of having an exponentially long output.  If we also take the output length into account (in any representation) then the bound is no more than linear — trivial.  Indeed, in the formal description of the computational Nash equilibrium problem, one formally has an input parameter $\epsilon$ that specifies how close to an equilibrium do we demand the output to be, and the algorithm is allowed to run in time that is polynomial in $\log \epsilon$.  (See a previous post on the subtleties of this approximation parameter $\epsilon$.)  Taking this approach, the lower bound is trivial too as it is bounded from above by $\log\epsilon$. Thus, the main open problem that remains is that of determining the communication complexity of finding a mixed-Nash equilibrium when the precision required at the output is only polynomial in the other parameters. Posted in Uncategorized | Tagged complexity of equilibria, technical | 4 Comments ### 4 Responses 1. on September 30, 2009 at 4:25 pm | Reply Anonymous Nice article. You might wish to correct “loose” to “lose” in several places. • thanks. done. 2. Dear Prof Nisan, Thanks for the fascinating post. A comment regarding communication complexity of correlated equilibria: although every game has a correlated equilibrium with polynomial-sized support, in general there are also correlated equilibria with exponential-sized supports. In particular, if you run the ellipsoid method directly on the LP, the generated candidate solutions will have exponential-sized supports. Then communicating these vectors to the individual players becomes problematic. 
Perhaps what you had in mind was Papadimitriou's (STOC 2005) algorithm, which applies the ellipsoid method on the dual LP. Although the dual problem has an exponential number of constraints, a separation oracle can be constructed that only requires each player to compute certain expected utilities and submit a short vector. (As an aside, this separation oracle has interesting connections to swap-regret-minimizing learning algorithms.) • oops, you are right. I'll fix it. thanks.
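The LP characterization of correlated equilibria described in the post is small enough to try directly on a toy game. The sketch below is my own illustration (SciPy's linprog and a game of Chicken, neither of which appears in the post): it finds a correlated equilibrium maximizing total payoff by writing out the incentive constraints explicitly. For large multi-player games this explicit LP has exponentially many variables, which is exactly why the post works with the separation-oracle / dual approach instead.

```python
import numpy as np
from scipy.optimize import linprog

# A 2x2 game of "Chicken": strategies are (Dare, Chicken).
U1 = np.array([[0, 7], [2, 6]])   # row player's payoffs
U2 = np.array([[0, 2], [7, 6]])   # column player's payoffs
m = 2

# Variables: p[s, t] for each strategy profile, flattened row-major.
A_ub, b_ub = [], []
for s in range(m):                # row player's incentive constraints
    for s2 in range(m):
        if s2 == s:
            continue
        row = np.zeros((m, m))
        row[s, :] = U1[s2, :] - U1[s, :]    # must be <= 0 in expectation
        A_ub.append(row.ravel()); b_ub.append(0.0)
for t in range(m):                # column player's incentive constraints
    for t2 in range(m):
        if t2 == t:
            continue
        row = np.zeros((m, m))
        row[:, t] = U2[:, t2] - U2[:, t]
        A_ub.append(row.ravel()); b_ub.append(0.0)

c = -(U1 + U2).ravel()            # maximize total payoff
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              A_eq=[np.ones(m * m)], b_eq=[1.0], bounds=(0, 1))
print(res.x.reshape(m, m))        # a correlated equilibrium distribution
```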
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 102, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9299240112304688, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/90619/prove-int-a-inftyfx-fx-sinfxdx-converges
# Prove $\int_{a}^{\infty}f'(x)/f(x)\sin(f(x))dx$ converges This is a homework exercise: $f(x)$ is monotonically increasing and positive on $[a,\infty)$, $\lim_{x\rightarrow \infty}f(x)=\infty$, and $f'(x)$ is continuous on $[a,\infty)$. Prove $\int_{a}^{\infty}(f'(x)/f(x))\sin(f(x))dx$ converges. I have solved this exercise, but I believe I have a mistake since I did not use the fact that $f'(x)$ is continuous. I would like to ask whether my solution is correct: Solution (in brief) By substituting $u=f(x)$ we get $\int_{a}^{\infty}(f'(x)/f(x))\sin(f(x))dx \rightarrow \int_{f(a)>0}^{f(\infty)=\infty}(\sin(u)/u)du$. The integral $\int_{f(a)}^{N>\max(f(a),1)}(\sin(u)/u)du$ converges (due to continuity), and the integral $\int_{N}^{\infty}(\sin(u)/u)du$ converges by the Dirichlet test ($f(x)=1/x$ and $g(x)=\sin(x)$). As you can see I didn't use the fact that $f'(x)$ is continuous anywhere. Is this solution correct? - ## 1 Answer You've used $f'(x)$ continuous implicitly in using u-substitution. Per the Wikipedia page for Integration by Substitution, the function you're substituting has to be continuously differentiable. Continuously differentiable means that the derivative is continuous, as well as the function itself. As $f$ is supposed to meet those conditions, your proof is correct. - Aah, damn, I knew I was being too optimistic with substitution. – roel44 Dec 12 '11 at 0:09
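Purely as a numerical illustration of the statement (not a substitute for the proof), take $f(x) = x^2$ and $a = 1$. The substitution $u = f(x)$ used above turns $\int_1^R (f'(x)/f(x))\sin(f(x))\,dx$ into $\int_1^{R^2}(\sin u/u)\,du = \mathrm{Si}(R^2)-\mathrm{Si}(1)$, which settles down as $R$ grows. SciPy's sine-integral function is assumed.

```python
import numpy as np
from scipy.special import sici

# sici(t) returns (Si(t), Ci(t)); we only need the sine integral Si.
Si = lambda t: sici(t)[0]

# Partial integrals for f(x) = x^2, a = 1:
#   int_1^R (f'/f) sin(f) dx = Si(R^2) - Si(1)
for R in (2, 5, 10, 100, 1000):
    partial = Si(R**2) - Si(1)
    print(f"R = {R:5d}:  partial integral = {partial:.6f}")

print("limit:", np.pi / 2 - Si(1))   # the improper integral converges here
```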
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9711161255836487, "perplexity_flag": "head"}
http://mathoverflow.net/questions/51988/representability-on-the-big-etale-site-and-base-change
## Representability on the big étale site and base change

I am reading M. Artin's treatment of the proper base change theorem for étale cohomology in his "Théorèmes de représentabilité pour les espaces algébriques", and I have trouble understanding the following remark on page 222: If $f:X\rightarrow S$ and $g:S'\rightarrow S$ are morphisms of algebraic spaces (or schemes, if you prefer), and if $f':X'\rightarrow S'$, $g':X'\rightarrow X$ denote the base changes of $f$ and $g$, then one can construct for any abelian sheaf $F$ on the big étale site of $X$ the base change morphism $g^*R^qf_*F\rightarrow R^q f'_*(g'^*F)$ (the higher direct images also computed on the big sites). If I understand correctly, Artin claims that if $F$, $R^q f_*F$ and $R^qf'_*(g'^*F)$ are representable on the big étale site of $X$, resp. $S$, resp. $S'$ (i.e. locally constructible), then the base change morphism is an isomorphism. Why is that? Is that an easy fact? -

## 1 Answer

With help from Milne's book on étale cohomology, I figured out how to answer the question, although I am not sure that this argument is what Artin had in mind, and I still think that there's an easier argument. There are morphisms of topoi $\pi_X: X_{ET}\rightarrow X_{et}$ from the topos associated to the big étale site of $X$ to the topos of the small étale site. Similarly for $S$, and $f:X\rightarrow S$ induces $f^s:X_{et}\rightarrow S_{et}$ and $f^b:X_{ET}\rightarrow S_{ET}$, and the obvious diagram commutes, i.e. $f^s\pi_X=\pi_Sf^b$. Given a sheaf $F$ in $X_{et}$ we get a base change morphism $$\pi_S^*R^qf^s_*F\rightarrow R^qf_*^b\pi_X^*F$$ Milne calls this the "universal base change morphism", for good reasons: Given any morphism $g:S'\rightarrow S$, you also get morphisms of topoi $g^b:S'_{ET}\rightarrow S_{ET}$ and ${g'}^b:X'_{ET}\rightarrow X_{ET}$. Using this to restrict the universal base change morphism to $S'_{ET}$, we get the usual base change morphism for $g$ and $F$. (For this one has to check the commutativity of a few diagrams. All the ingredients can be found, e.g., in great detail in the Stacks Project.) Now, if $F$ is locally constructible, i.e. if the adjunction map $F\rightarrow \pi_X^*\pi_{X,*} F$ is an isomorphism, then it is not hard to check that the universal base change morphism is an isomorphism, and thus every base change morphism is an isomorphism. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 27, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8668578863143921, "perplexity_flag": "head"}
http://mathoverflow.net/questions/94137/how-do-we-use-an-ehresmann-connection-to-define-a-semispray
## How do we use an Ehresmann connection to define a semispray?

Let $M$ be a differentiable manifold, let $TM$ be its tangent bundle, and consider $TTM$, the double tangent bundle. Let $V \subseteq TTM$ denote the vertical subbundle, which is determined in a canonical fashion. An Ehresmann connection is a choice of horizontal subbundle $H \subseteq TTM$ which is complementary to $V$, in that the double tangent bundle admits the horizontal decomposition $TTM = V \oplus H$. One may define an Ehresmann connection by way of a connection form $v$.1 This is a bundle homomorphism $v : TTM \to TTM$ which satisfies $v^2 = v$ and $\operatorname{im}(v) = V$, and this generates the horizontal subbundle $H = \operatorname{ker}(v)$. One should think of $v$ as projecting onto the vertical subspace along $H$. Suppose we are given an Ehresmann connection $H$ and connection form $v$. I would like to use these to generate a semispray. A semispray is a vector field on $TM$ (i.e., a section of $TTM$) which satisfies a certain compatibility condition with the tangent structure, and should somehow be compatible with the connection. I can see from Wikipedia how a semispray generates a torsion-free Ehresmann connection, but it is not clear to me how to use an Ehresmann connection (possibly with torsion) to generate a semispray.

1. The space $\mathcal C$ of connection forms is the subspace of $TTM$-valued $1$-forms $\Omega^1(TM, TTM)$ which satisfy $v^2 = v$ and $\operatorname{im}(v) = V$. Is there a concise, common name for the space $\mathcal C$? Does it have nice algebraic or topological structure?

- Isn't an Ehresmann connection on a tangent bundle just the same as an affine connection? – Deane Yang Apr 15 2012 at 21:36 1 @Deane: affine connections satisfy an additional linearity property that Ehresmann connections do not need to satisfy. Holonomy for an Ehresmann connection can be a non-linear diffeomorphism of the tangent space. – Ryan Budney Apr 15 2012 at 21:41 1 In other words, an Ehresmann connection "forgets" the linear structure of the fiber of the tangent bundle. Right? – Deane Yang Apr 15 2012 at 22:13 @Tom, can you clarify whether you're talking about Ehresmann connections on the total space of the tangent bundle or on the total space of the frame bundle? In your first line you've called $T\mathcal{M}$ the frame bundle which I'm guessing is a typo, but in your footnote you use $F\mathcal{M}$... – Paul Reynolds Apr 16 2012 at 18:34 1 Tom, that was for affine connections. An Ehresmann connection is oblivious to the linear structure of the tangent bundle or the group structure of the frame bundle, so there is not necessarily any way to use an Ehresmann connection on one to induce an Ehresmann connection on the other. – Deane Yang Apr 17 2012 at 13:32

## 3 Answers

Isn't the spray associated to the connection just the geodesic differential equation? Let $\pi_M : TM \to M$ be bundle projection, $\pi_{TM} : TTM \to TM$ bundle projection. A double tangent vector $w \in TTM$ represents parallel transport of $\pi_{TM}(w)$ if $v(w) = 0$, i.e. if $w$ is horizontal. You can identify the horizontal spaces with the tangent spaces of $M$ by taking the derivative of $\pi_M : TM \to M$. ($V$ is the kernel of these derivatives.)
So take as your vector field on $TM$ the function $f : TM \to TTM$ where $\pi_{TM}(f(x))=x$ for all $x \in TM$ and $f(x)$ is the unique horizontal vector in $TTM$ such that $D\pi_M(f(x)) = x$. - Thanks for the answer, Ryan. I'm looking for a more direct approach using $H$ and/or $v$. My context is of random connections, which really means I'm looking at a probability measure $\mathbb P$ over the space $\mathcal C$ of all connection forms. I am hoping for a more structural answer like, "the semispray is the image of $v$ under the map $\Phi : \mathcal C \to \Gamma(TTM)$ ..." or "is the unique $f$ which satisfies the equation $v = ...$". – Tom LaGatta Apr 15 2012 at 19:27 I don't really understand your request. My description of $f$ is as the unique map satisfying an equation. In which way is this not the kind of structural answer you're looking for? – Ryan Budney Apr 15 2012 at 20:10 In particular, the so-called space of connection forms is defined only if you first fix a choice of connection. Then the difference between any other connection and the fixed connection is a connection form. So if you know the spray for the fixed connection, it is straightforward to find a formula for the spray of any other connection relative to the fixed spray using the connection form. – Deane Yang Apr 15 2012 at 21:33 1 @Deane, in this more general situation isn't the space of connection forms $\mathcal{C}$ (as defined in the tiny footnote) a torsor for the vector space of $\mathcal{V}$-valued $1$-forms on the total space whose kernels contain $\mathcal{V}$? I've got a feeling you were talking about covariant derivative operators on a vector bundle... This says something about its topological structure but I doubt it has much nice geometrical structure with this level of generality. – Paul Reynolds Apr 16 2012 at 12:22 Paul, thanks for clarifying this. – Deane Yang Apr 17 2012 at 13:09

The paper where this is developed as thoroughly as one may desire is Grifone's "Structure presque-tangente et connexions I". See Proposition I.38 on page 306. -

It seems that the "reconstruction" of the semispray from its induced non-linear connection on the Wikipedia page is in fact a construction of a compatible semispray from any given non-linear connection. Of course the torsion of the connection is lost in the process $v \to H \to v_H$. -
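For concreteness, here is the standard local-coordinate form of this construction (my own addition, following the usual conventions of the spray/Finsler literature rather than anything stated in the thread). If, in induced coordinates $(x^i, y^i)$ on $TM$, the horizontal subbundle is spanned by $\frac{\delta}{\delta x^j} = \frac{\partial}{\partial x^j} - N^i_j(x,y)\frac{\partial}{\partial y^i}$ for connection coefficients $N^i_j$, then the vector field $f$ described above is the horizontal lift of the tautological vector,
$$S\big|_{(x,y)} = y^j \frac{\partial}{\partial x^j} - y^j N^i_j(x,y)\, \frac{\partial}{\partial y^i},$$
which satisfies the usual semispray condition $JS = C$ (with $J$ the tangent structure and $C$ the Liouville field). When $N^i_j$ is linear in $y$, i.e. the connection is affine, this reduces to the familiar geodesic spray.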
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 55, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.906856119632721, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?p=4235610
Physics Forums

Capacitors: How do they store energy?

A capacitor stores energy when it is charging up, but what is the intuition behind such a process? I, in fact, think that as electrons are being stored on one of the plates, positive charge is being built up on the other plate, an electric field is set up as there is a separation of charges, and this separation of charges would bring about a change in potential energy of the system (the charges on the plates). According to Coulomb's law, this energy change must be negative. We can infer that the potential energy would really decrease, i.e. become more negative. If that is true, how can a charged capacitor (with negative energy) do positive work while it discharges to power a bulb? (It's absurd.)

Recognitions: Gold Member Separating charges would not cause them to develop a negative potential energy. It takes energy to separate them, and they give up energy whenever they return.

Quote by Drakkith Separating charges would not cause them to develop a negative potential energy. It takes energy to separate them, and they give up energy whenever they return. Would you like to do the math to prove your point? Or maybe refer to some website which has done the math?

Recognitions: Gold Member Quote by hms.tech Would you like to do the math to prove your point? Or maybe refer to some website which has done the math? I'd be glad to if I had any idea how to do the math or where to find an example. Have you tried google? Also, could you explain how Coulomb's law leads to negative potential energy in this case? Perhaps we can work this out step by step.

Recognitions: Science Advisor The math can help a lot to understand what's going on here (math is always good for understanding). Let's first consider the most simple case of a two-plate capacitor with vacuum between its plates. Then you can think of charging it as transporting charges (say electrons) from one plate to the other. Due to charge conservation both plates carry the same but opposite charges. The charge transport needs energy because the more you charge the plates with opposite charges, the larger the electric field between the plates becomes, leading to a force against further charge transport. Due to energy conservation, valid for the motion of charges in static electric fields, it doesn't matter how you move the charge from one plate to the other; you always need the same energy to reach a certain charge state of the capacitor. Suppose one plate (sitting at $x=0$) is already charged by an amount $+Q$. Then necessarily the other plate (sitting at $x=d$) parallel to the first plate has to carry the charge $-Q$. In the stationary state both charges are located on the surface of the plates inside the capacitor. Applying Gauß's law to a box parallel to the plates, with one side within the conducting capacitor plate and one side inside the vacuum of the capacitor, together with the symmetry assumption (neglecting the edge effects of the finite plates, assuming that the distance $d$ between the plates is much smaller than their size), leads to an electric field $$\vec{E}=\frac{Q}{\epsilon_0 A} \vec{e}_x,$$ where $A$ is the area of the plates. If you want to transport another infinitesimal amount of (negative!)
charge $-\mathrm{d} Q$ from the left plate to the right plate you have to do work against the force $\vec{F}=-\mathrm{d} Q \vec{E}$. Since the path along which you carry the charge doesn't matter, you can take a straight line along $\vec{E}$ to get the work needed to do that: $$\mathrm{d} W=\mathrm{d} Q\, d\, E_x =\mathrm{d} Q\, d\, \frac{Q}{\epsilon_0 A}.$$ Since you start from an uncharged capacitor, to reach a total charge $Q$ you have to integrate this expression from $0$ to $Q$ with respect to $Q$: $$W=\int_0^Q \mathrm{d} Q' \frac{Q'd}{\epsilon_0 A} = \frac{Q^2 d}{2 \epsilon_0 A}.$$ This we can easily rewrite in terms of the finally reached electric field within the capacitor: $$E_x=\frac{Q}{\epsilon_0 A}.$$ Plugging this into our formula for the total work done when carrying the charges from one plate to the other, eliminating $Q$ in favor of $E_x$, we find $$W=\frac{\epsilon_0}{2} E_x^2 A d.$$ Now $A d$ is the volume between the plates and $\frac{\epsilon_0}{2} \vec{E}^2$ is the energy density of the electric field! Thus the total work done to carry the charges from one plate to the other is now stored as field energy in the electric field between the plates. If you put a dielectric between the plates, then for not too high fields the medium responds by polarizing: the bound charges are displaced slightly from their equilibrium positions, which requires further work against the binding forces of these charges; this work is then stored in the polarization of the medium. This leads to an additional factor $\epsilon_r$ in the formula for the work: $$W=\frac{\epsilon_r \epsilon_0}{2} E_x^2 A d.$$

Recognitions: Gold Member Science Advisor Nice job Van !!! Maybe it'll help also to think about what goes on in a good dielectric vs in free space: Dielectric materials contain polar molecules, ie they have a + and a - end. Water is a good example. Pure water has a dielectric constant around 80, meaning that a capacitor with pure water between its plates would have 80X the capacitance of one with nothing but free space between them. (Water Molecules image courtesy of these guys: http://users.humboldt.edu/rpaselk/C1...C109_lec10.htm and it's an interesting page.) In the presence of an increasing electric field those polar molecules will begin to align with it, abandoning their preferred random orientations, and that takes mechanical work. Discharging the capacitor removes the field so the dielectric relaxes. That's why oil is used for severe duty AC capacitors - its slippery molecules don't heat up so much as they oscillate with the field. Plastic capacitors will melt in some applications where oil thrives, like commutating or snubbing SCR's. It's analogous to a mechanical spring. You doubtless noticed the similarity - Van's W = K E^2, for a spring it's K X^2. Doubtless this is oversimplified but it helped me in my early days. Now - if someone can explain why it is that empty space has a dielectric constant - I'd be much obliged. thanks, old jim

Quote by vanhees71 Let's first consider the most simple case of a two-plate capacitor with vacuum between its plates. Then you can think of charging it as transporting charges (say electrons) from one plate to the other. My notes say the exact same thing. I specifically don't understand how one can even imagine this phenomenon. It is simply false, because there is an insulator (air/dielectric) between the two plates and no charge flows across this space! Not during charging, not during discharging, never!
Can you justify your claim? Here is a portion of my notes: [attached image of notes]

Recognitions: Gold Member Science Advisor You obviously have to connect the two plates first, but once the capacitor is charged you can disconnect it and it will (in an ideal world) maintain its charge indefinitely.

Recognitions: Gold Member The plates are 'connected' [prior post] via the electric circuit.... Think of a battery included: the battery provides chemical energy to do the work of moving the charges, and this energy is then present in the field energy of the charged capacitor, less any losses due to battery heating, resistance losses in the circuit, etc.

Quote by f95toli You obviously have to connect the two plates first, but once the capacitor is charged you can disconnect it and it will (in an ideal world) maintain its charge indefinitely. Do we? From what I recall the plates are connected to the oppositely charged terminals of a battery and that is it. We don't have to complete the circuit ...

Recognitions: Gold Member That is what the two posts prior to yours are saying.

Recognitions: Gold Member Science Advisor Quote by hms.tech Do we? From what I recall the plates are connected to the oppositely charged terminals of a battery and that is it. We don't have to complete the circuit ... The battery is part of the circuit, so if a capacitor is connected in parallel to a battery, electrons can redistribute so that you get "extra" electrons on one plate, and a "shortage" of electrons on the other. The amount of energy stored this way will depend on the voltage (squared) of the battery and the capacitance.

Quote by f95toli The battery is part of the circuit, so if a capacitor is connected in parallel to a battery electrons can redistribute so that you get "extra" electrons on one plate, and a "shortage" of electrons on the other. The amount of energy stored this way will depend on the voltage (squared) of the battery and the capacitance. How does that prove the point that electrons can travel through air towards the other plate?

Recognitions: Gold Member How does that prove the point that electrons can travel through air towards the other plate? They do not ... unless there is a spark due to dielectric breakdown.

Then why is this "false concept" used to derive the equation for the energy on a capacitor? See the attachments, where it is clearly said that the electrons move one by one ...

Recognitions: Gold Member Science Advisor It's not. Where does it say that the electron moves through the air? All it says is that you can calculate the stored energy by considering the amount of energy required to move electrons from one plate to the other. HOW they get from one plate to the other is irrelevant.

Recognitions: Science Advisor Quote by hms.tech My notes say the exact same thing. I specifically don't understand how one can even imagine this phenomenon. It is simply false, because there is an insulator (air/dielectric) between the two plates and no charge flows across this space! Not during charging, not during discharging, never! Can you justify your claim? Here is a portion of my notes: That's the magic of the fact that the electrostatic field has a potential. Of course, in reality there are no charges transported through the non-conducting vacuum or dielectric between the plates, but that doesn't matter, as I've emphasized in my posting!
No matter how you imagine transporting the charges so that you finally have a charged capacitor, the work to be done for this is the same. This work is stored as energy in the electric field between the plates and in the polarization of the dielectric (if there is one between the plates). So I can just use this simple picture of moving the electrons through the space between the plates to get the capacitor into this charged state. Of course, in practice nobody charges a capacitor this way, but thanks to the curl-free nature of the electrostatic field the energy stored in the charged capacitor (in precisely this field) is the same, no matter how the charges were moved. A description of the real processes happening when connecting a battery to the capacitor is way more complicated and fortunately not needed to answer the question.

To answer Jim Hardy's question: of course vacuum has no "dielectric" constant. The $\epsilon_0$ appears in the equations because of the choice of units in the Système International (SI). From a physical point of view it's a very unnatural choice of units and sometimes leads to misunderstandings, because the constants $\epsilon_0$ and $\mu_0$ seem pretty mysterious to the beginner; they are quite artificial, chosen to make the numbers easier to handle in everyday electrical engineering. The only fundamental constant in classical electromagnetism is the speed of light, indicating that Maxwell's electromagnetic theory is a relativistic theory.
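To make the charging integral above concrete, here is a small numerical check (my own illustration; the plate area, separation, and final charge are arbitrary example values): integrating the work $\int_0^Q V(q)\,dq$ with $V(q) = q\,d/(\epsilon_0 A)$ reproduces $W = Q^2 d/(2\epsilon_0 A)$.

```python
import numpy as np

eps0 = 8.854e-12          # F/m
A, d = 1.0e-2, 1.0e-3     # plate area (m^2) and separation (m) -- example values
Q = 1.0e-8                # final charge in coulombs -- example value

q = np.linspace(0.0, Q, 100_001)
V = q * d / (eps0 * A)            # voltage across the plates when the charge is q
W_numeric = np.trapz(V, q)        # work done moving the charge bit by bit
W_formula = Q**2 * d / (2 * eps0 * A)
print(W_numeric, W_formula)       # the two agree
```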
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.950207531452179, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/133079/splitting-field-of-x6x31-over-mathbbq?answertab=votes
# Splitting field of $x^6+x^3+1$ over $\mathbb{Q}$

I am trying to find the splitting field of $x^6+x^3+1$ over $\mathbb{Q}$. Finding the roots of the polynomial is easy (substituting $x^3=t$, finding the two roots of the polynomial in $t$ and then taking a 3-rd root from each one). The roots can be seen here [if there is a more elegant way of finding the roots it will be nice to hear]. Is it true that the splitting field is $\mathbb{Q}((-1)^\frac{1}{9})$? I think so from the way the roots look, but I am unsure. Also, I am having trouble finding the minimal polynomial of $(-1)^\frac{1}{9}$; it seems that it would be a polynomial of degree 9, but of course the degree can't be more than 6... can someone please help with this? - 2 It makes no sense to talk about "minimal polynomial" of a polynomial. "Minimal polynomials" are associated to algebraic elements, not to polynomials. Do you mean, "splitting field", as in the title? – Arturo Magidin Apr 17 '12 at 19:14 @ArturoMagidin yes, I will edit. thanks for pointing out the typo – Belgi Apr 17 '12 at 19:15 2 Note that your polynomial is the ninth cyclotomic polynomial. – Eric Gregor Apr 17 '12 at 19:17

## 2 Answers

You've got something wrong: the roots of $t^2+t+1$ are the complex cubic roots of one, not of $-1$ (indeed $t^3-1 = (t-1)(t^2+t+1)$, so every root of $t^2+t+1$ satisfies $\alpha^3=1$). That means that you actually want the cubic roots of some of the cubic roots of $1$; that is, you want some ninth roots of $1$ (not of $-1$). Note that $$(x^6+x^3+1)(x-1)(x^2+x+1) = x^9-1.$$ So the roots of $x^6+x^3+1$ are all ninth roots of $1$. Moreover, those ninth roots should not be equal to $1$, nor be cubic roots of $1$ (the roots of $x^2+x+1$ are the nonreal cubic roots of $1$): since $x^9-1$ is relatively prime to $(x^9-1)' = 9x^8$, the polynomial $x^9-1$ has no repeated roots. So any root of $x^9-1$ is either a root of $x^6+x^3+1$, or a root of $x^2+x+1$, or a root of $x-1$, but it cannot be a root of two of them. If $\zeta$ is a primitive ninth root of $1$ (e.g., $\zeta = e^{i2\pi/9}$), then $\zeta^k$ is also a ninth root of $1$ for all $k$; it is a cubic root of $1$ if and only if $3|k$, and it is equal to $1$ if and only if $9|k$. So the roots of $x^6+x^3+1$ are precisely $\zeta$, $\zeta^2$, $\zeta^4$, $\zeta^5$, $\zeta^7$, and $\zeta^8$. They are all contained in $\mathbb{Q}(\zeta)$, which is necessarily contained in the splitting field. Thus, the splitting field is $\mathbb{Q}(\zeta)$, where $\zeta$ is any primitive ninth root of $1$. - I still fail to understand how to see that if $(k,9)\not=1$ then it is not a root. – Belgi Apr 17 '12 at 19:32 1 @Belgi: The polynomial $x^9-1$ has exactly nine complex roots. Since it is relatively prime with its derivative, $9x^8$, none of the roots are multiple roots. So a power of $\zeta$ has to be a root of $x^9-1$, hence either a root of $x^6+x^3+1$, or of $x^3-1$. If $(k,9)\neq 1$, then it is a root of $x^3-1$; since $x^9-1$ has no repeated roots, it cannot be a root of both $x^3-1$ and $x^6+x^3+1$, so it cannot be a root of $x^6+x^3+1$. – Arturo Magidin Apr 17 '12 at 19:34 OK, can you also help me understand what is the minimal polynomial of the primitive ninth root of unity? (I'm guessing that it's the one in the question, but why?) ,+1 – Belgi Apr 17 '12 at 19:47 @Belgi: The minimal polynomial is $x^6+x^3+1$.
The ninth roots of unity are the roots of $$x^9-1 = (x^3-1)(x^6+x^3+1)= (x-1)(x^2+x+1)(x^6+x^3+1).$$ By definition, a complex number $\zeta$ is a primitive $n$th root of unity if and only if $\zeta^n=1$, and $\zeta^k\neq 1$ for all $k$, $1\leq k\lt n$. So a primitive ninth root of unity must satisfy $\zeta^9=1$, but $\zeta^k\neq 1$ for $1\leq k\lt 9$. If $\zeta^9=1$, then either $\zeta$ is a root of $x^3-1$, or of $x^6+x^3+1$, but not both. So it is a root of $x^6+x^3+1$ if and only if $\zeta^3\neq 1$. (cont) – Arturo Magidin Apr 17 '12 at 19:50 @Belgi: (cont) So the primitive $9$th roots of unity are precisely the roots of $x^6+x^3+1$, and you just need to verify that $x^6+x^3+1$ is irreducible over $\mathbb{Q}$. You can check that $(t+1)^6+(t+1)^3+1$ is Eisenstein at $3$, so it is irreducible, hence $x^6+x^3+1$ is also irreducible (it's the same shift that is used to prove that $x^{p-1}+\cdots+x+1$ is irreducible when $p$ is a prime, so the idea should suggest itself as well). – Arturo Magidin Apr 17 '12 at 19:53 This polynomial is $\Phi_9(x)$, the ninth cyclotomic polynomial, whose roots are precisely the primitive ninth roots of unity, namely $\zeta, \zeta^2, \zeta^4, \zeta^5, \zeta^7, \zeta^8$ for a primitive ninth root of unity $\zeta$. So you have your splitting field. - How do you know what powers to take ? – Belgi Apr 17 '12 at 19:20 – Eric Gregor Apr 17 '12 at 19:22 @Belgi: You take those powers that are relatively prime to 9, i.e. units mod 9. – Brett Frankel Apr 17 '12 at 20:07
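A quick computational check (my own addition, using SymPy, assuming a version that exports these helpers at the top level) confirms the factorization of $x^9-1$ and that $x^6+x^3+1$ is the minimal polynomial of a primitive ninth root of unity:

```python
from sympy import symbols, factor, cyclotomic_poly, minimal_polynomial, exp, I, pi

x = symbols('x')
print(factor(x**9 - 1))              # (x - 1)*(x**2 + x + 1)*(x**6 + x**3 + 1)
print(cyclotomic_poly(9, x))         # x**6 + x**3 + 1
zeta = exp(2*pi*I/9)                 # a primitive ninth root of unity
print(minimal_polynomial(zeta, x))   # x**6 + x**3 + 1
```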
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 91, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9395100474357605, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/70054/solving-pde-using-method-of-characteristics
# Solving PDE using Method of Characteristics

I need to solve the following by using the method of characteristics $$u\frac{\partial u}{\partial x}+\frac{\partial u}{\partial y}=1~,~u|_{x=y}=\frac{x}{2}$$ I have the following characteristic equations: $$\frac{dx}{ds}=u~;\frac{dy}{ds}=1~;\frac{du}{ds}=1$$ from the above I get $$x=us+x_{0}$$ $$y=s +y_{0}$$ $$u=s+u_{0}$$ I am now thinking I should go with the standard conditions $$y_0=0$$ and $$u(x,0)=f(x_0)$$ this now gives me: $$x=uy+x_0$$ $$y=s$$ $$u=y+f(x_0)$$ I'm confused because of the $$u$$ term in my equation for $$x$$ Thanks a mil - 1 You will be more likely to get a good answer if you were to accept those offered on earlier questions of yours. It is easy, you just click the tick mark at the top-left of the answer post you found most helpful. – Sasha Oct 5 '11 at 12:50 Thanks Sasha, I didn't know about the whole 'accept' thing. blush Hope that helps encourage helpers. ;) – sarah jamal Oct 5 '11 at 13:19 1 – pedja Oct 5 '11 at 14:23

## 1 Answer

Your solution to the characteristic equations is incorrect, which you can easily check by plugging your current solution back in. The source of the problem is that $u(s)$ is not constant, thus $x^\prime(s) = u(s)$ is not solved by $x(s) = u(s) s + x_0$. It may help to note that $x^{\prime\prime}(s) = u^\prime(s) = 1$. Now that you are back to an ODE with constant coefficients, finding solutions should be easy. - Thanks Sasha. Do I solve $$\frac{d^2x}{ds^2}=1$$ as: $$\frac{dx}{ds}=s+k$$ and so $$x=\frac{1}{2}s^2+ks+x_0$$ ? – sarah jamal Oct 5 '11 at 14:17 1 – Sasha Oct 5 '11 at 14:21 Ok, so the other characteristic equations were right? So I have $$y=s$$ and $$u=s+f(x_0)$$ and then using what I have above $$x_0=x-\frac{1}{2}y^2-ky$$ I am still confused about what to do about the condition $$u|_{x=y}=\frac{x}{2}$$ – sarah jamal Oct 5 '11 at 14:29 1 We have $u(s) = s + u_0$, $x(s) = \frac{s^2}{2} + k s + x_0$, and $y(s) = s + y_0$. The boundary condition $\left. 2 u \right|_{x=y} = x$ means that for $s$ such that $x(s) =y(s)$, $2 u(s) = x(s)$. This leads to a system of equations for the undetermined coefficients. – Sasha Oct 5 '11 at 14:58 1 Solve for $s$ and substitute the solution into the other equation. – Sasha Oct 5 '11 at 15:42
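For reference, here is one way to carry these characteristics through to an explicit answer (my own completion of the hints above, not taken from the thread). Parametrize the initial curve $\{y=x\}$ by $x_0=y_0=r$ and $u_0=r/2$. Then
$$u=s+\tfrac{r}{2},\qquad y=s+r,\qquad x=\tfrac{s^2}{2}+\tfrac{r}{2}s+r.$$
Eliminating $s=y-r$ gives $x=\tfrac{y^2}{2}+r\left(1-\tfrac{y}{2}\right)$, so $r=\dfrac{2x-y^2}{2-y}$ and
$$u(x,y)=y-\frac{r}{2}=\frac{4y-y^2-2x}{2(2-y)},$$
which one can check satisfies $u u_x + u_y = 1$ and $u=x/2$ on the line $y=x$.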
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 19, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9420889616012573, "perplexity_flag": "head"}
http://mathoverflow.net/questions/5901?sort=oldest
Do the signs in Puppe sequences matter?

A basic construction in homotopy theory is the Puppe sequence. Given a map $A \stackrel{f}{\to} X$, its homotopy cofiber is the map $X\to X/A=X \cup_f CA$ from $X$ to the mapping cone of $f$. If we then take the cofiber again, something remarkable happens: $(X/A)/X$ is naturally homotopy equivalent to the suspension $\Sigma A$ of $A$. This isn't hard to see geometrically; a nice picture and discussion can be found on pages 397-8 of Hatcher. If we iterate this, we end up with a sequence $$A \to X \to X/A \to \Sigma A \to \Sigma X \to \Sigma(X/A) \to \Sigma^2 A \to \cdots$$ in which each map is the homotopy cofiber of the previous map. If we then apply a functor which sends cofiber sequences to exact sequences, we get a long exact sequence. This can be understood as the origin of long exact sequences of cofibrations in (co)homology, using the fact that $H^n(X)=H^{n+1}(\Sigma X)$. One subtlety of this construction is that under the natural identifications of $(X/A)/X$ and $((X/A)/X)/(X/A)$ with $\Sigma A$ and $\Sigma X$, the map $\Sigma A\to\Sigma X$ is not the suspension of the original map $f$, but rather its negative (i.e., $-1\wedge f: S^1\wedge A=\Sigma A \to \Sigma X=S^1\wedge X$, where -1 is a map of degree -1). The geometric explanation for this can neatly be seen in Hatcher's picture, where the cones are successively added on opposite sides, so the suspension dimensions are going in opposite directions. However, you usually don't need to worry about this sign issue. First, since a map of degree -1 is a self-homotopy equivalence (even homeomorphism) of $\Sigma A$, we could just change our identification of $(X/A)/X$ with $\Sigma A$ by such a map and then we would just have $\Sigma f:\Sigma A \to \Sigma X$ (note though that then we are not changing how we identify the next space in the sequence with $\Sigma X$, which breaks some of the symmetry of the picture). Alternatively, if we only care about the Puppe sequence because of the long exact sequences it gives us, we could note that an exact sequence remains exact if you change the sign of one of its maps. My question is: is there any situation where these signs really do matter and have interesting consequences? Might they be somehow connected to the signs that show up in graded commutative objects in topology? - 4 If I can add a question to this one: are there any famous examples of errors which result from mishandling the signs in the Puppe sequence? – Charles Rezk Nov 18 2009 at 3:56

5 Answers

This is not exactly an answer but shows that these signs do matter in general: Example 4.21, page 32, of B. Iversen's 'Cohomology of sheaves' (Universitext, Springer-Verlag, Berlin, 1986) shows that you cannot change the sign of one map in a distinguished triangle in a triangulated category without breaking its distinguishedness—the example is in the homotopy category of complexes of abelian groups. You can swap two signs, though. - This is basically what I was going to post - for more detail you can look at my answer in the context of the homotopy and derived categories of abelian groups here - mathoverflow.net/questions/4653/… – Greg Stevenson Nov 18 2009 at 3:56
If you take a commuting square of maps and use homotopy cofibers to extend outward to a big grid, you eventually run into some anticommuting squares, which are really supercommuting squares. I think this answers your last question in the affirmative. Whether this answers your other questions really depends on your standards, I suppose. The anticommuting square is mentioned in Faisceaux Pervers and possibly also the triangulated category chapter in Weibel's Homological Algebra. Edit: I think there is a way to write a Mayer-Vietoris sequence as chains on the totalization of a big grid like this. Then you need the signs to make it work. Here is an attempt at a diagram:

$$\begin{array}{ccccccc}
A & \to & B & \to & B/A & \to & \Sigma A \\
\downarrow & & \downarrow & & \downarrow & & \downarrow \\
C & \to & D & \to & D/C & \to & \Sigma C \\
\downarrow & & \downarrow & & \downarrow & & \downarrow \\
C/A & \to & D/B & \to & X & \to & \Sigma(C/A) \\
\downarrow & & \downarrow & & \downarrow & & \downarrow \\
\Sigma A & \to & \Sigma B & \to & \Sigma(B/A) & \to & \Sigma^2 A
\end{array}$$

The bottom right square anticommutes, but the rest commute. The maps on the bottom and right edges have minus signs. - Ah, so maybe the point is that we should think of the suspension as an "odd" operation, that somehow got (super)commuted past something? (co)homology theories are basically super, no? – Theo Johnson-Freyd Nov 18 2009 at 6:30 Suspension shifts the dimension of singular chains up by one. One then runs into orientation considerations when gluing (as in Eric's question), and it changes their parity when looking at products. – S. Carnahan♦ Nov 18 2009 at 15:31 1 Yes, I think Theo is exactly right. In homotopy categories that come from a monoidal model category, commuting S^m (the m-fold suspension of the unit) past S^n always introduces a sign of (-1)^{mn}. This was a conjecture in my model categories book that was resolved by Denis-Charles Cisinski. Essentially one should think of monoidal model categories as algebras over simplicial sets, so whatever happens in simplicial sets to the unit will also happen in any monoidal model category. – Mark Hovey Nov 18 2009 at 16:03 Basically my interpretation of Mariano's answer is as follows: one needs to be consistent with the signs in the Puppe sequence in order for the good properties one wants to hold (namely the mapping axiom when using cofiber sequences as triangles, I am not sure off the top of my head if this is a problem in the enhanced versions), but at least in my opinion in some sense which consistent choice of signs one makes is just convention. This requirement leads one to the sign changes in the axiom concerning rotation of triangles in a triangulated category. It is this which in some sense is the source of the graded-commutative behaviour one observes (so really it comes from the signs in the Puppe sequence as one source of motivation). For instance one can formalize this behaviour by considering the "central ring" $Z^*(\mathbf{T})$ of a triangulated category $\mathbf{T}$ (it might not form a set hence the "") whose graded pieces are $$Z^i(\mathbf{T}) = \{\eta\colon Id \to \Sigma^i \; \vert \; \eta\Sigma = (-1)^i\Sigma\eta \}$$ i.e., natural transformations from the identity functor to powers of the suspension functor which commute or anticommute with suspension depending on the degree.
Composition makes this into a graded-commutative "ring" which has a natural action on the graded hom-sets of $\mathbf{T}$, where we need the graded-commutativity to ensure that the sign switch on suspension is taken care of and so everything plays nicely with triangles. So if you have interesting natural transformations in higher degrees you have a natural graded-commutative action. - There was a case where a paper proved a very strong result and the referee found an unfixable hole in the proof caused by forgetting the sign in the Puppe sequence! As others have said, it is analogous to the sign in the tensor product of chain complexes: $d(x \otimes y) = dx \otimes y + (-1)^k x \otimes dy$ if $x$ has degree $k$. It matters. - 1 Is it possible to give an idea about what said result was without violating any confidentiality? – Tyler Lawson Nov 18 2009 at 13:35 2 There are some detailed computations of the behavior of beta elements and products thereof in the Adams-Novikov spectral sequence at odd primes. It was one of those. This was quite some time ago, maybe the early 1990's or even late 1980's. – Mark Hovey Nov 18 2009 at 15:57 There is a specific sort of situation I know about where that sign matters. Suppose you have $f:X \rightarrow Y$ and $g:Z \rightarrow W$ cofibrations (if the maps are not cofibrations, all the same things work - you just replace the quotient spaces by mapping cones). You extend both maps to their Dold-Puppe sequences, so you get the sequences $X \rightarrow Y \rightarrow Y/X \rightarrow \Sigma X \rightarrow \Sigma Y \ldots$ and $Z \rightarrow W \rightarrow W/Z \rightarrow \Sigma Z \rightarrow \Sigma W \ldots$ Now suppose you have maps $a: X \rightarrow W$ and $b: Y \rightarrow W/Z$ making the obvious square commute up to homotopy. You can then extend these to make a commutative ladder from the first Dold-Puppe sequence to the second. (Notice that the sequences are deliberately offset from each other by one spot.) Using the usual parameters and the obvious choices of homotopies you will get a square involving $Y/X, \Sigma X, \Sigma Z, \Sigma W$. This square will commute if it includes the map $-\Sigma g: \Sigma Z \rightarrow \Sigma W$, but not generally with the map $\Sigma g$. (To check all this, I recommend doing the Dold-Puppe sequences with mapping cones rather than quotient spaces but keeping the homotopy equivalences with the quotient spaces in mind, which is the only way I know to calculate what the right maps should be.) At this point, if you were feeling stubborn, you could replace the map in your ladder $\Sigma X \rightarrow \Sigma Z$ with $-1$ times that map, and that would allow you to have used $\Sigma g$ in the square I mention in the above paragraph, but that creates other issues; if you choose not to simply use suspensions of your original maps to go from one Dold-Puppe sequence to the other then you run into problems when you are mapping between Dold-Puppe sequences without the shift of this example. I hope this helps unravel Greg's answer (which is correct - you need the sign to get good mapping properties). Of course one sees exactly the same phenomenon in the category of chain complexes of abelian groups (where homotopy is chain homotopy) and other such categories. I agree with Theo and Mark that one thinks about the suspension as "odd" (in the sense of parity not the sense of peculiar). The published paper that Mark refers to that has an error of exactly this sort (which is unfortunately fundamental to the paper) is by Lin Jinkun in Topology v.
29, no. 4, pp. 389-407. I read this paper in preprint form in 1988 and missed this error, but discovered it in 1992 when reading another paper by the same author with the same error. In the Topology paper the error is made in diagram 4.4 on the right hand square (proof of Lemma 4.3). - I should add that I learned of the example that I mentioned from Hal, so I'm glad he gave this definitive answer. – Mark Hovey Dec 1 2009 at 23:37
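To spell out the chain-complex analogy mentioned above (a standard check, added here for convenience rather than taken from the thread): the Koszul sign in $d(x\otimes y)=dx\otimes y+(-1)^{|x|}x\otimes dy$ is exactly what makes $d^2=0$ on a tensor product, since
$$d^2(x\otimes y)=d^2x\otimes y+(-1)^{|x|-1}\,dx\otimes dy+(-1)^{|x|}\,dx\otimes dy+x\otimes d^2y=0,$$
where the two cross terms cancel only because of the sign; without it, $d^2$ would not vanish in general.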
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 43, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9392995238304138, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/212196/how-to-denote-a-function-that-depends-on-a-1-ldots-a-n-without-the-dots
# How to denote a function that depends on $a_1, \ldots, a_n$ without the dots?

I know that I can write something like $$a_1 + \cdots + a_n$$ without the dots as $$\sum_{i=1}^n a_i$$ which seems clearer to me. As a programmer, I'd rather have a rule set with variables than something with dots where I have to extract the pattern from. Is there some notation to do this for the parameters of a function? Say a Lagrangian like so: $$L\left(q_1, \ldots, q_n, \dot q_1, \ldots, \dot q_n, t\right)$$ The thought in the back of my head is the following. In Python, I could have a function like so:

```
f(x, y, z)
```

When I call that function, I could either do `f(1, 2, 3)` or I could do the following:

```
parameters = [1, 2, 3]
f(*parameters)
```

Where I basically "dump" that list of parameters into the parentheses of the function. Is there some math notation for the same thing? - I think it's worth noting that even with $\sum_{i=1}^n$, there is a standard interpretation that is not really any more explicit than $a_1+\cdots+a_n$. – alex.jordan Oct 13 '12 at 17:57

## 3 Answers

I don't know if that's what you're looking for, but we do that last thing in $\mathbb R^n$ usually; the vectors in $\mathbb R^n$ are defined as vectors of the form $$(x_1, \dots, x_n), \qquad x_i \in \mathbb R$$ but if we write $x = (x_1, \dots, x_n)$, when defining a function $f : \mathbb R^n \to \mathbb R$ for instance, we can just write $f(x)$ instead of $f(x_1, \dots,x_n)$. Is that what you were looking for? For your Lagrangian for instance, you could define $q = (q_1, \dots, q_n)$, $\dot q = (\dot q_1, \dots, \dot q_n)$, and write $$L(q,\dot q, t)$$ instead of $$L(q_1, \dots, q_n, \dot q_1, \dots, \dot q_n, t).$$ Hope that helps, - Well, $L\left(\vec{q}, \vec{\dot q}, t\right)$ seems clean and to the point. Defining vectors is probably the thing I am looking for! – queueoverflow Oct 13 '12 at 17:27 @queueoverflow : I added something. Oh and you don't need to put arrows over the vectors. These are for children. =) – Patrick Da Silva Oct 13 '12 at 17:27 I see that the vector arrows are not used so much at university. They even write $\int \mathrm dx$ when I would write $\iiint \mathrm dV$ :-) – queueoverflow Oct 13 '12 at 17:30 @queueoverflow : When one does measure theory he also often writes $\int_X f$. So yeah, arrows for children. Did I answer your question? – Patrick Da Silva Oct 13 '12 at 17:31 You answered my question, but I need to wait a little to accept it. – queueoverflow Oct 13 '12 at 17:32

According to one professor of mine, $\underline{x} = x_1, \dots , x_n$. - One professor of mine used the underscore for matrices. – queueoverflow Oct 14 '12 at 18:53

You can write $\{x_i\}_{i\leq n}$ instead of $\{x_1, \dots, x_n\}$. - True. But how can I make a set the parameters of a function? – queueoverflow Oct 14 '12 at 14:29 queueoverflow: What's wrong with $f(\{x_i\}_{i\leq n})$? – Fernando Martin Oct 14 '12 at 18:38 Maybe I am too much of a programmer here, but then $f$ is a function whose domain is a set, not a bunch of variables. – queueoverflow Oct 14 '12 at 18:47 Then redefine your function's domain to $n$-tuples. IMO you shouldn't put so much emphasis on this; after all it's just notation. Humans are not computers; as long as your notation isn't ambiguous there shouldn't be any problem. – Fernando Martin Oct 14 '12 at 19:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.938428521156311, "perplexity_flag": "middle"}
http://mathhelpforum.com/discrete-math/56810-chinese-remainder-theorem-print.html
# Chinese Remainder Theorem

• October 31st 2008, 02:39 PM aaronrj Chinese Remainder Theorem I'm trying to find all solutions to this system of congruences (sorry, I'm using = instead of the congruence sign):
x = 1 (mod 2)
x = 2 (mod 3)
x = 3 (mod 5)
x = 4 (mod 11)
2 * 3 * 5 * 11 = 330
M1 = m / 2 = 165 (3 * 5 * 11)
M2 = m / 3 = 110 (2 * 5 * 11)
M3 = m / 5 = 66 (2 * 3 * 11)
M4 = m / 11 = 30 (2 * 3 * 5)
(I think this is where my problem is)
165 = 1 (mod 2)
110 = 1 (mod 3)
66 = 1 (mod 5)
30 = 8 (mod 11)
x = 1 * 165 * 1 + 2 * 110 * 2 + 3 * 66 * 1 + 4 * 30 * 8 = 1763
1763 = 113 (mod 330)
x = 113 holds up for the first three congruences, but fails on the 4th. Could someone please point out where I'm going wrong? I have no problem solving the system with the first three congruences, but every time I add the fourth and try to solve it, I run into problems.

• November 1st 2008, 01:27 AM Opalg I have noticed that people who use these mechanical algorithms for problems of this sort often end up with the wrong answer. I'm sure that the formulas work if they are used correctly, but in practice they seem to invite arithmetic mistakes. So I prefer to avoid the formulas and use more common-sense methods. In this case, if you can deal with the first three congruences then you know that $x\equiv23\!\!\!\pmod{30}$, and you want to combine this with $x\equiv4\!\!\!\pmod{11}$. The first of those congruences looks simpler if you add 7 to both sides, to get $x+7\equiv0\!\!\!\pmod{30}$. By good fortune, the second congruence also looks simpler if you add 7 to both sides, to get $x+7\equiv0\!\!\!\pmod{11}$. So x+7 is a multiple of both 11 and 30, and therefore it is a multiple of 330. Therefore $x\equiv-7\equiv323\!\!\!\pmod{330}$.
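For what it's worth, the mechanical CRT recipe does give Opalg's answer once the inverse step is carried out: after noting $30 \equiv 8 \pmod{11}$ one still has to multiply by the inverse of $8$ modulo $11$ (which is $7$, since $8\cdot7 = 56 \equiv 1$), not by $8$ itself. A short script (my own addition, not from the thread; `pow(x, -1, m)` needs Python 3.8+) checks this:

```python
from functools import reduce

def crt(residues, moduli):
    """Chinese Remainder Theorem via the standard formula x = sum a_i * M_i * y_i."""
    M = reduce(lambda a, b: a * b, moduli)      # 330 for this problem
    x = 0
    for a_i, m_i in zip(residues, moduli):
        M_i = M // m_i
        y_i = pow(M_i, -1, m_i)                 # inverse of M_i modulo m_i
        x += a_i * M_i * y_i
    return x % M

print(crt([1, 2, 3, 4], [2, 3, 5, 11]))         # 323, matching Opalg's answer
```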
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9320366382598877, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/54451/how-does-earth-carry-moon-with-it-if-it-can-not-force-moon-to-touch-it-by-gravi/54460
# How does earth carry moon with it, if it can not force moon to touch it by gravitational force?

Earth's gravitational force acts on its moon in such a way that it keeps the moon moving in its orbit (gravity providing the centripetal force), and it also carries the moon along as the earth revolves around the sun. I don't understand why, in this situation, the moon doesn't fall onto the earth while the earth is carrying it along by gravitational force as it revolves around the sun. - 4 – ghoppe Feb 19 at 22:33 I'm a bit confused by the question. As I understand it, @ghoppe 's related link answers the question. Is there another question in your question that isn't addressed there? – joshphysics Feb 19 at 23:01 @joshphysics I don't understand how one body can be moving somewhere and another body 384000 km away goes with it because of the attraction of the moving body. If the earth has so much power that it carries the moon everywhere it goes, then why can the earth not attract the moon the way it attracts a falling apple? I mean, why doesn't the moon come to the earth and collide with it, when the earth has so much power that the moon goes with it wherever the earth goes? – kashif Feb 19 at 23:42 1 @kashif The linked related question answers that question. The moon is always "falling" towards the earth like your apple, it's just "falling" at the precise distance where it misses the ground. – ghoppe Feb 20 at 0:00

## 2 Answers

I think I might understand another facet of your question besides what is addressed in the comments. Let me demonstrate a result in classical mechanics which I think might alleviate your concern. The result is that Given a system of particles, the center of mass of the system moves as though it were a point mass acted on by the net external force on the system. So if you think of the Earth-Moon system as being acted on by a net external force which is simply the gravitational attraction to the Sun (to good approximation), then what's happening is that this entire system is orbiting (essentially freely falling) around the sun. The details of what's happening in the Earth-Moon system itself are described by the first link in the original comments, but for purposes of what's happening to the entire system consisting of the Earth+Moon when it orbits the Sun, the details of the internal interactions don't really matter. Here is a proof of the statement above: Consider a system of particles with masses $m_i$ and positions $\mathbf x_i$ as viewed in an inertial frame. Newton's second law tells us that the net force $\mathbf F_i$ on each particle is equal to its mass times its acceleration; $$\mathbf F_i = m_i \mathbf a_i, \qquad \mathbf a_i = \ddot{\mathbf x}_i$$ Let $\mathbf f_{ij}$ denote the force of particle $j$ on particle $i$, and let us break up the force $\mathbf F_i$ on each particle into the sum of the force $\mathbf F^e_i$ due to interactions external to the system and the net force $\sum_j \mathbf f_{ij}$ due to interactions with all other particles in the system; $$\mathbf F_i = \mathbf F_i^e + \sum_j \mathbf f_{ij}$$ Combining these two facts, we find that $$\sum_i m_i\mathbf a_i = \sum_i \mathbf F_i^e + \sum_{ij} \mathbf f_{ij}$$ The last term vanishes by Newton's third law $\mathbf f_{ij} = -\mathbf f_{ji}$. The term on the left of the equality is just $M\ddot {\mathbf R}$ where $M$ is the total mass and $\mathbf R$ is the position of the center of mass of the system.
Combining these facts gives $$M\ddot{\mathbf R} = \sum_i \mathbf F_i^e$$ - Gravity is applied to objects even when they don't touch the earth. For example, think about a stone. Toss it in the air, and it comes back down, even though it was in the air, not touching the earth. The earth's gravity is strong, strong enough to pull on the moon and not let it get away. The moon is about 238,900 miles (384,400 km) from the earth. The moon's gravity, even so far away, is still strong enough to pull on the earth, and cause the tides to rise and fall in the oceans. Like you said, gravity supplies a centripetal pull that tries to bring the moon down toward the earth. At the same time the moon's inertia (often loosely described as a centrifugal effect) tends to carry it off in a straight line into space. The two balance, so the moon is caught, orbiting the earth. The sun is much larger; its gravitational pull has all the planets caught and orbiting it. -
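A quick numerical check of the "always falling, but missing" picture (my own addition; the numbers are standard textbook values): the gravitational acceleration the Earth produces at the Moon's distance is essentially the centripetal acceleration required for the Moon's observed orbit, so gravity is exactly "used up" keeping the Moon turning rather than pulling it in.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24     # kg
r = 3.844e8            # mean Earth-Moon distance, m
T = 27.32 * 86400      # sidereal month, s

a_gravity = G * M_earth / r**2            # acceleration of the Moon due to Earth's gravity
a_centripetal = (2 * math.pi / T)**2 * r  # centripetal acceleration for the observed orbit

print(a_gravity, a_centripetal)           # both about 2.7e-3 m/s^2
```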
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9611942768096924, "perplexity_flag": "head"}
http://mathematica.stackexchange.com/questions/tagged/plotting?page=4&sort=newest&pagesize=15
# Tagged Questions

Questions on creating visualizations from functions or data using high-level constructors such as Plot, ListPlot, Histogram, etc.

### Triangle mapped on a sphere in $\mathbb R^3$?
2 answers, 194 views. How can I map a triangle onto a sphere? I want to visualize (plot or animate) it for my students in my non-Euclidean geometry course. I have no restrictions on the triangle's kind or on the sphere ...

### StreamPlot in Polar Coordinates
3 answers, 245 views. I want to use StreamPlot to map out the field lines of an electric field $\mathbf{E}$ given by $\mathbf{E} = \frac{3D}{4r^{4}}(3\cos(\theta)^{2}-1)\mathbf{\hat{r}}$ ...

### Restricting a plotted expression to reals in a Manipulate
1 answer, 53 views. I have an expression from which I'm trying to create a plot that I can manipulate. The expression contains Abs, and I only want to have it evaluated for reals. What ...

### Defining a new Function using RegionPlot
3 answers, 100 views. I'm trying to make images like this: to illustrate 2D integrals to a calculus class. I used the code ...

### Sequence of data in 3D, joining the points
2 answers, 124 views. Let me first say that I am not actually trying to find a function for this set of data; all I am doing is joining the points to make a line. Basically, let's say I have some data, which I cleverly ...

### Left-aligned PlotLabel?
4 answers, 139 views. Is it possible to have the text generated by PlotLabel (or any other function) aligned to the left side of the plot instead of in the center?

### Speed up plot of $\sum_{j\ge1} 2^{-j}(1-2^{-j})^{n-1}$
2 answers, 119 views. I'm a beginner at Mathematica. I would like to plot the following function: $${n\over2} \sum_{j\ge1} 2^{-j}(1-2^{-j})^{n-1}$$ However, the following code is just too slow: ...

### ContourPlot with parameter
1 answer, 130 views. I have an equation F[x,y]==0 whose first argument x is real and whose second, y, ...

### Plotting data that lands on the axis
1 answer, 57 views. I have a data point that lands right on the y axis, which corresponds to the index of refraction of some sample. The x axis is the mole fraction of chloroform of my samples. How do I get the first ...

### Better method for creating a tuple out of two lists
2 answers, 109 views. I just combined two lists using this method ...

### How to set PlotLegend Number Format
1 answer, 59 views. Is there any way to make Mathematica show the legend labels as real numbers rather than in fraction form? I read the BarLegend options, however they are just ...

### Padding plot ticks with zeros on the right
1 answer, 65 views. I would like to know how I can make a real-valued plot tick pad with a zero to the right of the decimal point on integer values. This is what I have: ...

### Custom functions by delegating options in a specific way and using core functions
3 answers, 147 views. I'd like to create a custom function that does essentially the same as a core function of Mathematica but uses different default settings. Example: I want a Plot function that uses Mathematica's core ...

### Use Results from Manipulate Plot NDSolve to create another plot versus variable used in DE
1 answer, 55 views. I created my plot using this input: ...

### Is it possible to mask a vector image?
0 answers, 76 views. I am trying to plot a contour plot within a non-rectangular region. I have seen in other posts techniques that involve calculating an interpolation function and using the RegionFunction command within ...
### BarChart without axes neglects the last zero of a list
3 answers, 72 views. Creating a BarChart with {1, 2, 3, 0} as BarChart[{1, 2, 3, 0}, Axes -> False, Frame -> True] gives me only the first three bars, a chart identical to just ...

### How can I change the position of my plot legends?
1 answer, 94 views. As the title indicates, I want to change the current position of the legends in a combined plot. The Mathematica code I used is ...

### Why don't I see all the grid lines in my combined contour plots?
1 answer, 77 views. I am trying to display four continuous functions, but when I try to combine them with a single Show, I can't see the second vertical line. I suppose the problem is ...

### Label points in plot with a text
3 answers, 141 views. I'm a newbie at Mathematica and I couldn't find out how to label the maximum and the zero of a simple function in Plot with their names: ...

### Plot an undefined function
1 answer, 106 views. When I plot an undefined function, Mathematica spends considerable time processing it, even though it necessarily returns a blank figure. What confuses me is where the ...

### Stop use of scientific notation when displaying FrameTicks
2 answers, 98 views. I have a dynamic graph which changes depending on the current selection from a drop-down box. All selections from the drop-down box give fine results except one. For that one selection it shows the ...

### Plotting multiple parametric functions with different intervals
3 answers, 90 views. I want to make a parametric plot like this: ParametricPlot[{{2 t, -10 t^2}, {t, 2 t}}, {t, 0, 2}] I have tried the following but it doesn't work: ...

### How to plot a 3D surface with a simple black and white style?
3 answers, 320 views. Mathematica has great plotting capabilities. However, sometimes what is needed is a very basic black and white plot without textures, lighting, glow and other complex features. So, here is my ...

### Plot 2D Vector function in 3D
2 answers, 205 views. I want to plot a 2D vector function such as $F(x,y) = (a(x,y),\,b(x,y))$ in a 3D graph so that the vectors are embedded in the xy plane. I tried to do the following: first I defined a piecewise ...

### How do I remove this awkward plane? [duplicate]
1 answer, 68 views. Using Plot3D, I tried to plot $x^3 + y^3 + 3xy = z$ ...

### Plotting data points: Optimizing size and visuals
4 answers, 387 views. The following problem is one that may have shown up for you while plotting a large data collection. Suppose you have a data set of, say, 1 million points. ...

### How to make a plot on top of another plot?
3 answers, 185 views. I want to plot two functions, one appearing on top of the other. The x axis has the same values for both functions, but should appear twice (and I want the second, upper one to be dashed). The y axis should ...

### Plotting expression as function of another expression [closed]
1 answer, 67 views. How can I plot $\frac{m}{M+m}$ as a function of $\frac{m}{M}$? I've tried the regular Plot[] but it didn't work. This is a function for the loss of kinetic energy ...

### ErrorListPlot Legend with Markers
1 answer, 121 views. I've been trying for some days now to include markers in my legend. I have an ErrorListPlot with markers and a legend, but I am having difficulty getting the markers into the legend as well. Thanks a lot for your ...

### Plotting the solutions to a transcendental equation
5 answers, 182 views. I am trying, to no avail, to use Mathematica to produce a plot in (x, y)-space of the solutions to the equation Cos[Sqrt[y]] + Sin[Sqrt[y]]/Sqrt[y] == Cos[x] ...
### List version of SphericalPlot3D
2 answers, 120 views. Could you give me some input on how to create a list version of SphericalPlot3D[r[theta,phi],{theta,min,max},{phi,min,max}]? When r[] is an explicit analytical ...

### Tooltip and table in plot [duplicate]
0 answers, 58 views. Plot[Tooltip@{1. k^2, 1.2 k^2, 1.4 k^2, 1.6 k^2, 1.8 k^2, 2. k^2}, {k, -2, 2}] gives (plot omitted). But why does ...

### How to control the speed in Manipulate
3 answers, 100 views. As the title says, I want to control the speed when I use Manipulate. I have the following piece of code ...

### Is there a relatively easy way to add the "Cone of Influence" to a WaveletScalogram?
0 answers, 56 views. I am analyzing time series data using the ContinuousWaveletTransform function and plotting the results with WaveletScalogram. I would like to plot the cone of influence on top of the WaveletScalogram ...

### Importing multiple files using a for-loop
3 answers, 119 views. Once again I am stuck on a seemingly easy problem :( I would like to make a list of tables by importing data from several txt files. Importing the data into a table works so far and I am able to ...

### How can I superimpose a set of points on a ContourPlot?
2 answers, 155 views. I am familiar with Mathematica to a certain extent, but its subtleties still elude me. Currently I am trying to solve the following problem: the function $f(x,y)$ is continuous. I know how to create ...

### How can I fill a curve in ParametricPlot3D?
4 answers, 162 views. I'd like to fill a curve in a ParametricPlot3D in the same way as I might with ListPointPlot3D; i.e., ...

### Fitting data without an equation
3 answers, 138 views. Let's say I have some data generated by anything. For practical purposes, let's say I was able to generate a sequence of numbers through a recursion and listed them out in a table of numbers. ...

### How to plot a parametric region representing a coordinate transformation
2 answers, 154 views. The Wikipedia page on Rindler coordinates shows a nice example of how a coordinate transformation can be represented in a plot. They start with two coordinates $T,X$ with $0 < X < \dots$

### Normal Plot (visual test for normality)
1 answer, 142 views. I wish to construct a normal plot to visually test for normality of data. My data is: {188,199,171,200,219,172,235,194,234,206} and needs to be plotted to look something like this: whereby ...

### Is it possible to modify the tick marks in a BarChart plot?
1 answer, 88 views. I need to specify the length of the tick marks on the x axis in a BarChart plot: ...

### Click on a curve to start Manipulating another function
2 answers, 120 views. I'm able to plot two lists with a tooltip on the curves to display the name of the curve: ...

### Can anyone identify these plots? [duplicate]
1 answer, 97 views. I need to produce some plots that look like these, but I'm not sure what they are called in Mathematica; can anyone identify them? Update: I want to plot triples of the form (xvalue, yvalue, ...

### Why are these flow lines cut short?
2 answers, 151 views. I want to plot some flow lines for a certain vector field, but Mathematica doesn't seem to plot the entire flow line, only a small piece of it. How can I fix this? ...

### How to remove unwanted regions in a three-dimensional surface
3 answers, 232 views. As the title indicates, I want to delete some unwanted regions of a three-dimensional surface created using ContourPlot3D. Here is the corresponding code ...
### Combining two plots which are in two regions
2 answers, 181 views. I want to combine two plots which are in two regions. For example, I want to plot the following two figures in the same plot. ...

### Phase space vector field [duplicate]
3 answers, 180 views. I have a system of nonlinear equations, and from NDSolve I get the solution. I plot the phase space with ...

### ListPointPlot3D seen from above
4 answers, 157 views. I have a couple of {x,y,z} points a = {{0, 1, 0.}, {50, 1, 0.018931}, {100, 1, 0.02}, {0, 2, 0.}, {50, 2, 0.131}, {100, 2, 0.2}}; and I'm visualizing them with ...

### Plotting an equation with double summations
1 answer, 196 views. Thank you so much for the helpful comments. I am now able to plot all the functions. ...

### How can I draw the transition diagram of a Markov chain?
1 answer, 229 views. I apologize for my clumsiness, as I am a Mathematica novice. Is there a kind soul who can tell me how to draw diagrams like those shown on pages 158, 160 and 161 of this document: Chaîne de Markov. I ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 1, "mathjax_asciimath": 3, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9002374410629272, "perplexity_flag": "middle"}