http://mathoverflow.net/questions/87602/topological-vector-spaces-that-are-isomorphic-to-their-duals/87603
## Topological vector spaces that are isomorphic to their duals

After reviewing the (locally convex) topological vector spaces that I know, the only examples I could find where there is an isomorphism from the space to its (anti)dual are Hilbert spaces. So my question is: are there topological vector spaces $V$ such that the topology does not come from a Hilbert structure, and such that there exists an isomorphism $\chi : V \to V'$, where $V'$ denotes the antidual of $V$ (continuous antilinear forms on $V$)?

## 3 Answers

An interesting family of examples comes from number theory (or algebraic geometry, depending on who you ask): if you have a field $k$, the Laurent power series field $k((t))$ has an ultrametric topology in which $\{ t^n k[[t]] \}_{n \in \mathbb{Z}}$ form a neighborhood basis of zero. This space is isomorphic to its topological dual: by adjoining $(dt)^{1/2}$, one obtains the perfect residue pairing $$\langle f(t) (dt)^{1/2}, g(t) (dt)^{1/2} \rangle = \operatorname{Res} f(t)g(t)\,dt.$$ You can do a similar trick with finite-dimensional vector spaces over $k((t))$. If you want nontrivial antilinearity, you may choose $k$ to be a separable quadratic extension of some underfield $F$, and change the residue pairing to be sesquilinear: $$\langle f(t) (dt)^{1/2}, g(t) (dt)^{1/2} \rangle = \operatorname{Res} f(t) \bar{g}(t)\,dt.$$

If $X$ is any reflexive space, then $X \oplus X^{*}$ is isomorphic to its dual $X^{*} \oplus X^{**}$.

Take a reflexive TVS $V$, and consider $V \times V^\ast$.

Well, technically the question wanted the antidual, so you need, I guess, an involution on $V$ (an antilinear continuous bijection from $V$ to itself). But, yeah, examples abound... – Matthew Daws Feb 5 2012 at 19:50
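A quick sanity check that the residue pairing above is perfect (a minimal worked computation, using only that $\operatorname{Res}$ extracts the coefficient of $t^{-1}\,dt$): on monomials,
$$\bigl\langle t^m (dt)^{1/2},\, t^k (dt)^{1/2} \bigr\rangle = \operatorname{Res}\, t^{m+k}\, dt = \begin{cases} 1 & \text{if } k = -1-m,\\ 0 & \text{otherwise},\end{cases}$$
so any nonzero $f(t)(dt)^{1/2}$, say with lowest-order term $t^m$, pairs nontrivially with $t^{-1-m}(dt)^{1/2}$; this nondegeneracy is what underlies the claimed self-duality.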
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9139239192008972, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?t=516212
## Confused about the concepts of dual spaces, dual bases, reflexivity and annihilators

My background in linear algebra is pretty basic: high school math and a first year course about matrix math. Now I'm reading a book about finite-dimensional vector spaces and there are a few concepts that are just absolutely bewildering to me: dual spaces, dual bases, reflexivity and annihilators. The book I'm reading explains everything in extremely general terms and doesn't provide any numerical examples, so I can't wrap my head around any of this. I'd really appreciate it if my loose understanding of these concepts could be critiqued/corrected and, if possible, some simple numerical examples could be provided. I really can't make heads or tails of some of this.

Note: I've never seen this bracket notation before, so I'll briefly introduce it in case it isn't something standard: $[x,y] \equiv y(x)$.

First, dual spaces. My understanding of a linear functional is that it's a black box where vectors go in and scalars come out (e.g. dot product). The dual space V' of a vector space V is the set of all linear functionals that can be applied to that vector space. So, why is this called a "space"? How can things like integration and dot products (i.e. operations) form a space? The author also refers to the elements of V' as "vectors" -- how can an operation be a vector? My understanding of a vector is that it is a value with both magnitude and direction. Obviously, operations produce values, but V' is the set of operations, not values.

Second, dual bases. I just don't understand this at all, so I'll just provide the definition in this book: If V is an n-dimensional vector space and if X={x1,...,xn} is a basis in V, then there is a uniquely determined basis X' in V', X'={y1,...,yn}, with the property that [xi,yj]=∂ij. Consequently the dual space of an n-dimensional space is n-dimensional. The basis X' is called the dual basis of X. I think ∂ij is the Kronecker delta, but I'm not 100% sure. So, what I think this means is that there is one operation in V' for each element of the basis of V, for which yj(xi)=1 for j=i and 0 for all j≠i. But does V' necessarily have dimension n?

Third, reflexivity. I just don't understand this at all. Here's the definition in the book: If V is a finite-dimensional vector space, then corresponding to every linear functional z0 on V' there is a vector x0 in V such that z0(y)=[x0,y]=y(x0) for every y in V'; the correspondence z0 $\leftrightarrow$ x0 between V'' and V is an isomorphism. The correspondence described in this statement is called the natural correspondence between V'' and V. It is important to observe that the theorem shows not only that V and V'' are isomorphic -- this much is trivial from the fact that they have the same dimension -- but that the natural correspondence is an isomorphism. This property of vector spaces is called reflexivity; every finite-dimensional vector space is reflexive.

Fourth, annihilators. I think I understand this concept somewhat, but the proofs presented don't make sense to me. My understanding of an annihilator is that it is the subset of V' whose elements evaluate to 0 for all x in the given subset M of V. The thing I'm confused about is the annihilator of an annihilator. If M is a subspace in a finite-dimensional vector space V, then M00 (=(M0)0) = M. Now, I'm willing to accept this proposition, but the proof is relatively short yet I cannot make sense of it. The proof is: By definition, M00 is the set of all vectors x such that [x,y]=0 for all y in M0.
Since, by the definition of M0, [x,y] = 0 for all x in M and all y in M0, it follows that M $\subset$ M00. The desired conclusion now follows from a dimension argument. Let M be m-dimensional; then the dimension of M0 is n-m, and that of M00 is n-(n-m)=m. Hence M = M00, as was to be proved. The problem I have here is "By definition, M00 is the set of all vectors x such that [x,y]=0 for all y in M0". Shouldn't it be the set of all vectors z (or whatever letter you like) in V''?

Quote by Philmac: First, dual spaces. My understanding of a linear functional is that it's a black box where vectors go in and scalars come out (e.g. dot product). [...] Obviously, operations produce values, but V' is the set of operations, not values.

Let's begin by clearing this up. Once you get this, we'll get to the next thing: You're having a major, major misconception. You think of a vector as some kind of "arrow" with both a magnitude and direction. While this is certainly true in basic math, this is absolutely false when you get to more advanced spaces. In fact, an $\mathbb{R}$-vector space is any set equipped with an addition and a scalar multiplication (which satisfy some elementary axioms). All a vector is, is an element of a vector space. The easiest example of a vector space is of course $\mathbb{R}^n$, whose elements can indeed be seen as "arrows" with a magnitude and a direction. However, this is far from the only example of a vector space. For example: $$\{f:[0,1]\rightarrow \mathbb{R}~\vert~\text{f continuous}\}$$ is also a vector space! All this means is that the sum and scalar multiplication of continuous functions gives us a continuous function. And it would be very awkward to see this set as a collection of "arrows". The vectors of this set are now continuous functions! Other vector spaces are the polynomials, the differentiable functions, etc... The thing I want you to realize is that a vector space is a very broad concept. There's a lot that can be a vector space, not just "arrows with magnitude and direction".
When given a vector space V (which can be anything), we can form the dual space $$V^\prime=\{f:V\rightarrow \mathbb{R}~\vert~f~\text{linear}\}$$ This is a vector space. All this means is that the sum and scalar product of linear functions is linear. There's nothing more to it. The vectors here are simply linear functions!! So remember: a vector space can be anything!!!! Once you understand this, we can move on to your next questions. But I feel that you must grasp this first.

Thank you very much! There is a list of axioms defining both fields and vector spaces at the beginning of the book I'm reading. I do agree that a dual space satisfies these axioms. However, I errantly believed that "vectors" had to have some geometric interpretation. I thought that perhaps vectors were more abstract than I had previously believed, and you've confirmed that for me. So a vector (space) is simply anything that satisfies the axioms, nothing more than that. I believe I'm ready to hear explanations for the rest of these concepts :)

OK, dual bases then.

Quote by Philmac: Second, dual bases. I just don't understand this at all, so I'll just provide the definition in this book: I think ∂ij is the Kronecker delta, but I'm not 100% sure. So, what I think this means is that there is one operation in V' for each element of the basis of V, for which yj(xi)=1 for j=i and 0 for all j≠i. But does V' necessarily have dimension n?

The definition is maybe a bit more abstract than necessary. But that's not necessarily a bad thing. Let's first look at the vector space $V=\mathbb{R}^n$, which is the nice vector space of arrows. Elements in the dual space are now just linear functions $T:\mathbb{R}^n\rightarrow \mathbb{R}$. The first important observation is that T is completely determined by how it acts on a basis. That is, if you would know that $$T(1,0,...,0)=y_1,T(0,1,0,...,0)=y_2,..., T(0,0,0,...,1)=y_n$$ then you know completely what T does on every element: $$T(a_1,...,a_n)=a_1y_1+...+a_ny_n$$ So it just suffices to say what T does on a basis. Now, if V is an arbitrary vector space, then the same holds: we can define a linear function by saying what it does on the basis. Now, let's define such a function. Take a basis $\{e_1,...,e_n\}$ and define $$T_1(e_1)=1,~T_1(e_k)=0~\text{for}~k>1$$ In general, we have $$T_i(e_i)=1,~T_i(e_k)=0~\text{if}~k\neq i$$ Even more abstractly put: $T_i(e_k)=\delta_{ik}$, where we indeed have the Kronecker delta. What is our T in our nice space $\mathbb{R}^n$? Well, you can easily see that $$T_i(a_1,...,a_n)=a_i$$ so Ti is simply the i'th projection!! Now, when V is finite-dimensional, I claim that the Ti form a basis for V' (the dual space). This means nothing more than:

• The Ti are linearly independent: $$\lambda_1 T_1+...+\lambda_n T_n=0~\Rightarrow~\lambda_1=...=\lambda_n=0$$ Indeed, if $\lambda_1 T_1+...+\lambda_n T_n=0$, then $$\lambda_1 T_1(x)+...+\lambda_n T_n(x)=0$$ for every vector x. So try the vectors ei as our x's.

• The Ti span the space. This means only that every functional T can be written as $$\lambda_1 T_1+...+\lambda_n T_n=T$$ So, we must find $\lambda_i$ such that the above is true. But, it suffices to take $\lambda_i=T(e_i)$ here. Then you can easily check that for any x, it holds that $$\lambda_1T_1(x)+...+\lambda_nT_n(x)=T(x)$$

I hope that clarifies this. By the way, which book are you reading?
Thanks again! The book I'm reading is "Finite-Dimensional Vector Spaces" by Paul R. Halmos. I've read your post many times but I'm having a great deal of trouble understanding it. Math notation has always been extremely confusing for me (I really need concrete examples with numbers), so I apologize if my questions are extremely basic. Here's what I'm having trouble with:

Quote by micromass: Let's first look at the vector space $V=\mathbb{R}^n$ [...] So it just suffices to say what T does on a basis.

What exactly is T? I understand that it is a map from Rn to R1, but are you saying that each element of V'=Tn or are you saying that V'=T? What are the arguments of T? x? The elements y1...yn are operations, so how can anything equate to them? Perhaps this goes back to my original problem with the notion of a dual space.

Quote by micromass: Now, if V is an arbitrary vector space, then the same holds [...] Even more abstractly put: $T_i(e_k)=\delta_{ik}$, where we indeed have the Kronecker delta. What is our T in our nice space $\mathbb{R}^n$? Well, you can easily see that $$T_i(a_1,...,a_n)=a_i$$

I sort of understand this... For example, in R3 a basis is (1,0,0),(0,1,0),(0,0,1), so I understand the Kronecker delta here, but a dual basis is made up of operations -- what guarantees that each operation in V' will match up with the right (or any) element of the basis of V such that it evaluates to 1 (and 0 for all the other elements of the basis)?

Quote by micromass: so Ti is simply the i'th projection!!

If I were in the familiar territory of R2 or R3 I would understand this completely, but what does it mean when you project an operation? Say, onto the "integration axis". What would this mean? Say Tk = integration, and ai = 3. Would this mean that the operation being performed consists (in part) of integrating the argument and multiplying the result by 3?

Quote by micromass: Now, when V is finite-dimensional, I claim that the Ti form a basis for V' (the dual space). [...] Then you can easily check that for any x, it holds that $$\lambda_1T_1(x)+...+\lambda_nT_n(x)=T(x)$$

I understand your discussion of linear independence, but how does this work with a dual space? How is it even possible for the elements of a dual space to be linearly dependent? How does one combine, say, multiplication and the dot product to produce integration?
Quote by Philmac: Thanks again! The book I'm reading is "Finite-Dimensional Vector Spaces" by Paul R. Halmos.

Hmmm, maybe not the best book for beginners...

Quote by Philmac: I've read your post many times but I'm having a great deal of trouble understanding it. [...] What exactly is T? I understand that it is a map from Rn to R1,

Indeed, T is nothing more and nothing less than a linear map $T:R^n\rightarrow R$. That's all it is.

Quote by Philmac: but are you saying that each element of V'=Tn or are you saying that V'=T?

No, Tn and T are certainly elements of V'. Why? Because T is a linear map from V to R, and V' is simply the set of such linear maps. I'm not claiming that V'=T or something. This would make little sense, since V' is a vector space and T is an operator.

Quote by Philmac: What are the arguments of T? x?

The arguments of T are elements of Rn. So, we can do T(2,3,2) if n=3, for example. In general notation, I write (x1,...,xn) for an n-tuple in Rn.

Quote by Philmac: The elements y1...yn are operations, so how can anything equate to them?

Sorry about this. I didn't mean y1,..., yn as operations here. When I wrote them, I just meant them to be real numbers!! For example, we can have T(1,0,...,0)=2, and then y1=2. I do not mean yi to be an operator here.

Quote by Philmac: I sort of understand this... For example, in R3 a basis is (1,0,0),(0,1,0),(0,0,1), so I understand the Kronecker delta here, but a dual basis is made up of operations -- what guarantees that each operation in V' will match up with the right (or any) element of the basis of V such that it evaluates to 1 (and 0 for all the other elements of the basis)?

Could you explain this more? What do you mean by "matching up with the right element such that it evaluates to 1"?

Quote by Philmac: If I were in the familiar territory of R2 or R3 I would understand this completely, but what does it mean when you project an operation?

I only meant this explanation to be in R2 or R3. There are notions of "projections" in arbitrary vector spaces, but I don't think that now is a good time to discuss this.

Quote by Philmac: I understand your discussion of linear independence, but how does this work with a dual space? How is it even possible for the elements of a dual space to be linearly dependent? How does one combine, say, multiplication and the dot product to produce integration?

Elements of a dual space V' are linearly independent by definition if for all x in V it holds that $$a_1T_1(x)+...+a_nT_n(x)=0~\Rightarrow~a_1=...=a_n=0$$ For example, let me look at the dual space of $\mathbb{R}^2$. Take two operators: $$T:\mathbb{R}^2\rightarrow \mathbb{R}:(x,y)\rightarrow 2x+y$$ and $$S:\mathbb{R}^2\rightarrow \mathbb{R}:(x,y)\rightarrow x+2y$$ These are elements of (R2)' because the maps are linear. I claim that they are linearly independent. This means that if for all (x,y) it holds that aT(x,y)+bS(x,y)=0, then a=b=0. So, let's assume that aT(x,y)+bS(x,y)=0 for all x and y. Translating this gives us $$0=aT(x,y)+bS(x,y)=a(2x+y)+b(x+2y)=(2a+b)x+(a+2b)y$$ This must hold for all x and y. So in particular for x=1 and y=0. So, if we fill that in, we get $$2a+b=0$$ But it must also hold for x=0 and y=1. So, filling that in, we get $$a+2b=0$$ So, if aT(x,y)+bS(x,y)=0 for all x and y, then certainly it must hold true that $$2a+b=0~\text{and}~a+2b=0$$ but this only holds for a=0 and b=0.
So S and T are independent.

Quote by micromass: Could you explain this more? What do you mean by "matching up with the right element such that it evaluates to 1"?

My understanding of the Kronecker delta ∂ij is that it evaluates to 1 when i=j and to 0 when i≠j. So, doesn't this mean that yj(xi) must evaluate to 1 for all i=j and 0 for all i≠j for the Kronecker delta to make sense here? What guarantee is there that this will happen? In R≤3 this makes sense because any basis can be reduced to something like (1,0,0), etc. but with operations there is no reducing, there is simply the operation (e.g. integration). I think I'm just not understanding this part at all, now that I think about it some more.

Quote by micromass: Elements of a dual space V' are linearly independent by definition if for all x in V it holds that [...] but this only holds for a=0 and b=0. So S and T are independent.

Oh, I see. That makes perfect sense.

Quote by Philmac: My understanding of the Kronecker delta ∂ij is that it evaluates to 1 when i=j and to 0 when i≠j. [...] I think I'm just not understanding this part at all, now that I think about it some more.

What guarantee?? You define things that way. You define yi such that $$y_i(e_i)=1~\text{and}~y_i(e_j)=0~\text{if}~i\neq j$$ This always happens by definition!

Quote by micromass: What guarantee?? You define things that way. You define yi such that $$y_i(e_i)=1~\text{and}~y_i(e_j)=0~\text{if}~i\neq j$$ This always happens by definition!

Is it always possible to do this?

Yes, given a vector space and given a basis $\{e_1,...,e_n\}$, I can define a linear function just by specifying what it will do on the basis. For example, for fixed $i$ the following function is linear: $T_i(\lambda_1 e_1+...+\lambda_n e_n)=\lambda_i$, and it will be the function such that $T_i(e_j)=\delta_{ij}$. Can I find a linear function that will send every $e_i$ to 2? Yes! $T(\lambda_1e_1+...+\lambda_n e_n)=2(\lambda_1+...+\lambda_n)$ will be such a function. I can let the basis go to anything!!!! That's exactly what bases are good for.
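A concrete numerical check of this "define it on the basis" construction (an editorial sketch in Mathematica; the basis b and the functional t below are arbitrary choices, not from the thread): for a basis of $\mathbb{R}^3$ given by the rows of a matrix, the dual-basis functionals act by dotting with the rows of the inverse transpose, and any functional is recovered from its values on the basis.

````
(* Dual basis of an arbitrarily chosen basis of R^3: the basis vectors e_i
   are the rows of b; the dual functionals y_i act by dotting with the rows
   of Transpose[Inverse[b]]. *)
b = {{1, 1, 0}, {0, 1, 1}, {1, 0, 1}};
w = Transpose[Inverse[b]];

(* y_i(e_j) is exactly the Kronecker delta from the definition above: *)
Table[w[[i]].b[[j]], {i, 3}, {j, 3}] === IdentityMatrix[3]   (* True *)

(* Spanning: a functional T(x) = t.x equals Sum_i T(e_i) y_i. *)
t = {2, -1, 3};
lambda = Table[t.b[[i]], {i, 3}];   (* lambda_i = T(e_i) *)
lambda.w === t                      (* True *)
````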
Quote by micromass: Yes, given a vector space and given a basis $\{e_1,...,e_n\}$, I can define a linear function just by specifying what it will do on the basis. [...] I can let the basis go to anything!!!! That's exactly what bases are good for.

But how do you know that the appropriate operations will be available? And how do you know that V' is always n-dimensional? Take R1 for example, I can think of at least two linear operations: multiplication and integration. So wouldn't V' have a dimension of at least 2 (even though n=1)?

Multiplication and integration are not linear operations on R. All the linear functionals on R have the form $$f:R\rightarrow R:x\rightarrow \lambda x$$ for a certain $\lambda$.

Quote by micromass: Multiplication and integration are not linear operations on R. All the linear functionals on R have the form $$f:R\rightarrow R:x\rightarrow \lambda x$$ for a certain $\lambda$.

Hm, alright. Thank you. I think I'm ready for reflexivity.

Quote by Philmac: Hm, alright. Thank you. I think I'm ready for reflexivity.

You're looking for an isomorphism from V into V''. This is very easy, because it turns out that the first function we can think of from V into V'' (except for constant functions of course) is an isomorphism. We want to define a function f:V→V'', so we must specify a member of V'' for each x. This member of V'' will of course be denoted by f(x). A member of V'' is defined by specifying what we get when it acts on an arbitrary member of V'. So we must specify f(x)(ω) for each ω in V'. f(x)(ω) is supposed to be a real number, and ω(x) is a real number. So... For each x in V, we define f(x) in V'' by f(x)(ω)=ω(x) for all ω in V'. This defines a function f:V→V''. Now you just need to verify that this function is linear and bijective onto V''. By the way, I think V* is a more common notation than V'.

Quote by Fredrik: You're looking for an isomorphism from V into V''. [...] By the way, I think V* is a more common notation than V'.

Thank you! I still don't quite understand yet, but I think I'm a bit closer. If I'm not mistaken, it seems that we define the elements of V'' such that regardless of the value of y (or omega) in V', there is a bijective map (an isomorphism, in this context, I believe) between V and V'', in other words, each value of V'' corresponds to one, and only one, value in x.
And since y (or omega) doesn't matter, there is only one value of [x0, y] which corresponds to any element z0 of V''. I hope that made sense.

Also, what you said seems very similar to something in my textbook. I couldn't make sense of it, but after reading what you said I think it makes slightly more sense. Perhaps you could clear some things up about these concepts. Here it is: If we consider the symbol [x, y] for some fixed y = y0, we obtain nothing new: [x, y0] is merely another way of writing the value y0(x) of the function y0 at the vector x. If, however, we consider the symbol [x, y] for some fixed x = x0, then we observe that the function of the vectors in V', whose value at y is [x0, y], is a scalar-valued function that happens to be linear; in other words, [x0, y] defines a linear functional on V', and, consequently, an element of V''. I understand the part about [x, y0], but I really don't follow the reasoning about [x0, y]. [x, y] is a scalar, so how does keeping the value of x constant suddenly make (what is seemingly) the exact same thing an element of V''?

Also, backing up a bit to dual bases, I'd just like to verify that I understand. I'd appreciate it if someone could let me know if this example makes sense.

V=R3
X={(1,0,0),(0,1,0),(0,0,1)} is a basis in V
V'={ax1+bx2+cx3 | a,b,c $\in$ R}
X'={x1,x2,x3}

Quote by Philmac: Thank you! I still don't quite understand yet, but I think I'm a bit closer. [...] And since y (or omega) doesn't matter, there is only one value of [x0, y] which corresponds to any element z0 of V''. I hope that made sense.

Not entirely. The first thing that sounds weird to me is "regardless of the value of y (or omega)". The claim that V is isomorphic to V'' has nothing to do with any specific member of V, V' or V''. I guess that could be your point, but to say it this way is like saying that regardless of the value of q, we have 1+1=2. It's true, but it's weird to mention q when we could have just said that 1+1=2. If you would like to use the [,] notation, we could say that for each x in V, there's exactly one z in V'' such that [y,z]=[x,y] for all y in V'. This z is denoted by f(x), and this defines the function f. (If I understand the [,] notation correctly, [y,z]=[x,y] means z(y)=y(x).)

Quote by Philmac: Also, what you said seems very similar to something in my textbook. I couldn't make sense of it, but after reading what you said I think it makes slightly more sense.

It will make more sense after you have verified that the f I defined is an isomorphism (i.e. that it's linear and bijective).

Quote by Philmac: I understand the part about [x, y0], but I really don't follow the reasoning about [x0, y]. [x, y] is a scalar, so how does keeping the value of x constant suddenly make (what is seemingly) the exact same thing an element of V''?

I don't like the [,] notation, and I'm not crazy about this author's way of explaining it either. You already understand that for each x in V and each y in V', [x,y]=y(x) is a real number. The author is saying that for each y in V', the function that takes x to [x,y] is a function from V into ℝ that we already had a notation for (this function is denoted by y).
Then he's saying that for each x in V, the function that takes y to [x,y] is a function from V' to ℝ. Let's denote this function by g. We have $$g(ay+bz) = [x,ay+bz] = (ay+bz)(x) = (ay)(x)+(bz)(x) = a(y(x))+b(z(x)) = a[x,y]+b[x,z]=ag(y)+bg(z),$$ so g is linear. That means that it's a member of V''.

Quote by Philmac: Also, backing up a bit to dual bases, I'd just like to verify that I understand. [...]

You're right that if X is a basis for V=ℝ3 and X' is its dual basis, then the members of V' can be uniquely expressed as linear combinations of members of X'. However, if X' is any basis for V', then the members of V' can still be uniquely expressed as linear combinations of members of X'. What you mentioned is a property of any basis, not just the dual basis.

Quote by Fredrik: Not entirely. The first thing that sounds weird to me is "regardless of the value of y (or omega)". [...] This z is denoted by f(x), and this defines the function f.

I like your q, 1+1=2 analogy. That, in combination with what you said below, triggered a bit of an epiphany. V' is the set of functionals that take x as their argument and V'' is the set of functionals that take y as their argument. I find this notation to be slightly misleading (I'm probably just misunderstanding something) -- is there a V'''?

Quote by Fredrik: (If I understand the [,] notation correctly, [y,z]=[x,y] means z(y)=y(x)).

Exactly.

Quote by Fredrik: I don't like the [,] notation, and I'm not crazy about this author's way of explaining it either. [...] so g is linear. That means that it's a member of V''.

This makes much more sense now, thank you. However, now that I think about it again, how can there only be one zi in V'' for each xi in V? Changing the value of yj will change the value of [xi,yj], so shouldn't V'' have dimension dim V × dim V'?

Quote by Fredrik: You're right that if X is a basis for V=ℝ3 and X' is its dual basis, then the members of V' can be uniquely expressed as linear combinations of members of X'. [...] What you mentioned is a property of any basis, not just the dual basis.
Great, I guess that means I have at least some grasp on the idea :)
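One loose end from the opening post, the annihilator identity $M^{00}=M$, can also be checked numerically (an editorial sketch in Mathematica; the subspace below is an arbitrary choice): in coordinates, the annihilator of a subspace is the null space of the matrix whose rows span it, so applying `NullSpace` twice should return the original span.

````
m = {{1, 0, 1}};                   (* rows spanning a subspace M of R^3 *)
m0 = NullSpace[m];                 (* M^0: covectors vanishing on M *)
m00 = NullSpace[m0];               (* (M^0)^0, viewed back inside V *)
RowReduce[m00] === RowReduce[m]    (* same span, i.e. M^00 = M: True *)
Length /@ {m, m0, m00}             (* dimensions {1, 2, 1}, matching {m, n-m, m} *)
````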
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 37, "mathjax_display_tex": 40, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9549452066421509, "perplexity_flag": "head"}
http://terrytao.wordpress.com/2008/05/22/random-matrices-a-general-approach-for-the-least-singular-value-problem/
What’s new
Updates on my research and expository papers, discussion of open problems, and other maths-related topics. By Terence Tao

# Random matrices: A general approach for the least singular value problem

22 May, 2008 in math.PR, paper | Tags: least singular value, random matrices, universality

Van Vu and I have just uploaded to the arXiv our paper “Random matrices: A general approach for the least singular value problem”, submitted to Israel J. Math. This paper continues a recent series of papers by ourselves and also by Rudelson and by Rudelson-Vershynin on understanding the least singular value $\sigma_n(A) := \inf_{\|v\|=1} \|Av\|$ of a large $n \times n$ random complex matrix $A$. There are many random matrix models that one can consider, but here we consider models of the form $A = M_n + N_n$, where $M_n = {\Bbb E}(A)$ is a deterministic matrix depending on $n$, and $N_n$ is a random matrix whose entries are iid with some complex distribution $x$ of mean zero and unit variance. (In particular, this model is useful for studying the normalised resolvents $(\frac{1}{\sqrt{n}} N_n - zI)^{-1}$ of random iid matrices $N_n$, which are of importance in the spectral theory of these matrices; understanding the least singular value of random perturbations of deterministic matrices is also important in numerical analysis, and particularly in smoothed analysis of algorithms such as the simplex method.)

In the model mean-zero case $M_n = 0$, the normalised singular values $\frac{1}{\sqrt{n}} \sigma_1(A) \geq \ldots \geq \frac{1}{\sqrt{n}} \sigma_n(A) \geq 0$ of $A = N_n$ are known to be asymptotically distributed according to the Marchenko-Pastur distribution $\frac{1}{\pi} (4-x^2)_+^{1/2}\,dx$, which in particular implies that most of the singular values are continuously distributed (via a semicircular distribution) in the interval ${}[0, 2\sqrt{n}]$. (Assuming only second moment hypotheses on the underlying distribution $x$, this result is due to Yin; there are many earlier results assuming stronger hypotheses on $x$.) This strongly suggests, but does not formally prove, that the least singular value $\sigma_n(A)$ should be of size $\sim 1/\sqrt{n}$ on the average. (To get such a sharp bound on the least singular value via the Marchenko-Pastur law would require an incredibly strong bound on the convergence rate to this law, which seems out of reach at present, especially when one does not assume strong moment conditions on $x$; current results such as those of Götze-Tikhomirov or Chatterjee-Bose give some upper bound on $\sigma_n(A)$ which improves upon the trivial bound of $O(n^{1/2})$ by a polynomial factor assuming certain moment conditions on $x$, but as far as I am aware these bounds do not get close to the optimal value of $O(n^{-1/2})$, except perhaps in the special case when $x$ is Gaussian.) The statement that $\sigma_n(A) \sim 1/\sqrt{n}$ with high probability has been conjectured (in various forms) in a number of places, for instance by von Neumann, by Smale, and by Spielman-Teng.

There have been several papers establishing lower tail bounds on the least singular value consistent with the above conjecture of varying degrees of strength, under various hypotheses on the matrix $M_n$ and distribution $x$. For instance:

1. In 1988, Edelman showed that ${\Bbb P}( \sigma_n(A) \leq \varepsilon/\sqrt{n}) = O( \varepsilon )$ for any $\varepsilon > 0$ when $M_n=0$ and $x$ is Gaussian (there is also a matching lower bound when $\varepsilon = O(1)$).
2. Some non-trivial bounds on ${\Bbb P}( \sigma_n(A) \leq \varepsilon/\sqrt{n})$ for very small $\varepsilon$ (polynomially small in size) are established implicitly in the case when $M_n$ is a multiple of the identity in the papers of Girko and of Bai, assuming some moment and continuity assumptions on $x$, although these bounds are not specified explicitly.

3. In 2005, Rudelson showed that ${\Bbb P}( \sigma_n(A) \leq \varepsilon/\sqrt{n}) = O( n \varepsilon + n^{-1/2})$ when $M_n=0$ and $x$ is subgaussian.

4. In 2005, Van Vu and I showed that for every $C_1 > 0$ there existed $C_2 > 0$ such that ${\Bbb P}( \sigma_n(A) \leq n^{-C_2}) = O( n^{-C_1} )$, when $M_n=0$ and $x$ is the Bernoulli distribution (equal to +1 or -1 with equal probability of each).

5. In 2006, Sankar, Spielman, and Teng extended Edelman’s result to the case when $M_n$ is nonzero (but $x$ is still Gaussian).

6. In 2007, Rudelson-Vershynin showed that ${\Bbb P}( \sigma_n(A) \leq \varepsilon/\sqrt{n})$ is bounded by $O( \varepsilon^c )$ for $\varepsilon$ independent of $n$ when $M_n=0$ and $x$ has bounded fourth moment, and is bounded by $O( \varepsilon + c^n)$ for all $\varepsilon > 0$ if $x$ is subgaussian.

7. In 2007, Van Vu and I showed that our earlier result extends to the case when $M_n$ is non-zero, as long as it has polynomial size in $n$ and is also integer-valued; we also allow $x$ to be more general than the Bernoulli distribution, but still integer-valued. (Our restriction to the integer case was due to a certain rounding trick that converted finite precision arithmetic to exact arithmetic, but only when the expressions involved ultimately come from integers.)

8. In 2007, Van Vu and I removed the integrality assumptions on our previous results, thus establishing the bound ${\Bbb P}( \sigma_n(A) \leq n^{-C_2}) = O( n^{-C_1} )$ whenever $M$ has polynomial size and with no assumptions on $x$ other than zero mean and unit variance. (See also my earlier blog post on this paper.)

9. In 2008, Rudelson and Vershynin obtained generalisations of their previous results to rectangular matrices (thus also generalising some previous work of Litvak, Pajor, Rudelson, and Tomczak-Jaegermann), but still for the case $M_n=0$ and with subgaussian hypotheses on $x$.

These recent results are largely based on entropy (or “epsilon-net”) arguments, combined with conditioning arguments, with the entropy bounds in turn originating from inverse Littlewood-Offord theorems; see my Lewis lectures for further discussion. The current paper is a partial unification of the results of Rudelson and Vershynin (which give very sharp tail estimates, but under strong hypotheses on $M$ and $x$) and ourselves (which have very general assumptions on $M$ and $x$, but a weak tail estimate). A little more precisely, we obtain an estimate of the form

${\Bbb P}( \sigma_n(A) \leq \varepsilon/\sqrt{n}) \leq O( n^{o(1)} \varepsilon + n^{-C} + {\Bbb P}( \|N_n\| \geq C \sqrt{n} ) )$

for any fixed $C > 0$, assuming that $\|M\| = O(\sqrt{n})$. This result almost recovers those of Rudelson and Vershynin under subgaussian hypotheses on $x$ (which are known to imply exponentially good bounds on ${\Bbb P}( \|N_n\| \geq C \sqrt{n} )$) in the case when $\varepsilon$ has polynomial size, except that we lose a factor of $n^{o(1)}$.
One has analogous results here in which $\|M\|$ and $\|N_n\|$ are bounded by some weaker power of $n$ than $n^{1/2}$; roughly speaking, if $\|M\|$ and $\|N_n\|$ have size $O(n^\gamma)$, then we can show that $\sigma_n(A)$ is at least $n^{-(2C+1) \gamma}$ with probability $1 - O(n^{-C+o(1)})$ for any given $C$. We also give a simple example that shows that the deterioration of these bounds when $\|M\|$ gets large is necessary; in particular, we show that the universality of the bounds of Edelman type can break down once $\|M\|$ exceeds $n^2$, because one can design $M$ to force the existence of an unusually small value of $Av = (M +N_n)v$ for a certain type of unit vector $v$. Our methods here are based on those of earlier papers, particularly our circular law paper; the key innovation is to run a certain scale pigeonholing argument as efficiently as possible, so that one only loses factors of $n^{o(1)}$ in the final bound.

[Update, May 22: some typos corrected.]

## 5 comments

I know this is quite a different question but maybe you know the answer: what happens over a finite field $F$ with $N$ elements, say? If I pick the matrix elements independently with a flat distribution, what is the probability that the matrix $A$ is not invertible? My guess would be $1/N$, since one could imagine all elements but the last ($a_{11}$, say) have been drawn already, and $\det(A)= c_1 + c_2 a_{11}$ with some constants $c_1$ and $c_2$. If $c_2$ does not vanish, the determinant takes all $N$ different values as $a_{11}$ is varied, but in the case $c_2=0$ ($c_2$ is a minor) it depends on the distribution of the $c_1$'s…

This problem came up in an encoding problem: there (imagine for example a RAID system) you want to spread information between a number of different people. For example, the data are many (say $L$) vectors in $F^k$ and everybody in a room has access to those vectors. However, they cannot remember $kL$ numbers. But if everybody picks one random vector, projects all the $L$ vectors on that random vector, and just remembers the random vector and the $L$ scalar products, then the full information can likely be reconstructed by any choice of $k$ people (namely, if their random vectors are linearly independent). So I am asking: what is the probability that this fails?

wow, you can also prove theorems in the future! (e.g., item 9)

For the question in Comment 1 (if I understand it right), for matrices of size $n$, the probability you want seems to be simply one minus the order of the group of invertible matrices divided by $N^{n^2}$. The order of the group of invertible matrices is well-known, and one finds this probability is $1-\prod_{k=1}^n{(1-N^{-k})}$, which has a limit (for fixed $N$) as $n$ goes to infinity (one minus the inverse of the generating series of the partition function, evaluated at $z=1/N$). This is indeed asymptotic to $1/N$ by Taylor expansion, when $N$ is large.
Because the answer is explicit, one can also understand the behavior in many mixed situations when both $N$ and the size $n$ of the matrices are fixed.

Dear Enric: Oops, thanks for the correction!

Dear Robert: Emmanuel is correct. One way to see the fact that the probability of being invertible is $\prod_{k=1}^n (1 - N^{-k})$ is to expose one row at a time. After $k-1$ rows have been exposed, and conditioning on the event that they are already linearly independent, the probability that the $k$th row is going to be linearly independent of all previous rows is $1 - N^{k-1-n}$ (the span of the first $k-1$ rows contains $N^{k-1}$ of the $N^n$ possible rows). Multiplying all these conditional probabilities together, and reindexing the product, gives the claim. There is also a paper of Kahn and Komlos that investigates this problem for slightly different matrix models (e.g. random +1/-1 matrices over finite fields): http://www.ams.org/mathscinet-getitem?mr=1833067

Thanks for the help! I guess the question I really wanted to ask is: what is the best strategy to come up with many vectors in $F^k$ such that the probability that a random selection of $k$ of those is linearly independent is as large as possible? Again my guess would be that you cannot do better than picking (non-zero) random vectors, and then the probability is what you write.
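A quick numerical check of the product formula in these comments (an editorial sketch in Mathematica; the field size, matrix size, and trial count below are arbitrary choices):

````
q = 5; n = 4; trials = 10^5;

(* Exact probability that a uniformly random n x n matrix over F_q is singular: *)
exact = N[1 - Product[1 - q^-k, {k, 1, n}]]          (* ≈ 0.2394 *)

(* Monte Carlo estimate, computing determinants mod q: *)
Count[Table[Det[RandomInteger[{0, q - 1}, {n, n}], Modulus -> q],
    {trials}], 0]/N[trials]                          (* agrees up to sampling error *)
````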
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 74, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9322906732559204, "perplexity_flag": "head"}
http://mathoverflow.net/questions/34087?sort=newest
## Selecting basic sequences

Suppose $(x_\alpha)_\alpha$ is an uncountable, linearly independent family of norm one vectors in a Banach space. Can one always select a basic sequence (or at least a minimal system) from this family? I suspect the answer is no but I cannot come up with an example. Thank you!

## 2 Answers

Not a basic sequence. Consider $e_0 \oplus e_\gamma$ in $R\oplus H$ for $H$ a non separable Hilbert space, or, if you want a separable example, make $e_\gamma$ a Hamel basis for a separable Hilbert space.

EDIT: Aug 2. Every separated sequence of unit vectors contains a minimal subsequence with bounded biorthogonal functionals. There remains the case where your linearly independent set has compact closure. I have an example of such a set where any minimal sequence in the set has only unbounded biorthogonal functionals, but I do not know whether such a set must contain a minimal sequence. Too bad you did not ask this a day earlier when we could have discussed it face to face. I'll write something down when I get a chance.

EDIT: Aug 2. It was not so bad to write--I was able to do it on the plane. If $x_n$ is a separated subset of the unit sphere of $X$, then $x_n$ has a minimal subsequence whose biorthogonal functionals are uniformly bounded. Indeed, if $x_n$ does not have weakly compact closure, then it has a basic subsequence (see e.g. the book of Albiac-Kalton), so we can assume that $x_n\to x$ weakly. If $x=0$, then $x_n$ has a basic subsequence. If not, let $Q$ be the quotient map from $X$ onto $X/[x]$, where $[x]$ is the linear span of $x$. $Qx_n\to 0$ weakly and is bounded away from zero by the separation assumption, hence has a basic subsequence $Qx_{n(k)}$, whence $x_{n(k)}$ is minimal with uniformly bounded biorthogonal functionals.

I don't know what can happen when $A$ is a linearly independent subset of the unit sphere of $X$ that is totally bounded, but there exist such sets so that every minimal sequence in the set has biorthogonal functionals that are not uniformly bounded. Consider the Cantor set as the branches of the infinite binary tree and let the nodes index the unit vector basis of $c_0$. Given a branch $t=(t(n))$ (where $t_1<t_2<\dots$ in the tree ordering) of the tree, let $x_t=\sum 2^{-n} e_{t(n)}$. By compactness (which is really just pigeonholing), any sequence $y_k$ of $x_t$-s has a subsequence that converges to some $x_s$, which means that for any $n$, if $k\ge k(n)$ then $y_k(j)=x_s(j)$ for $1\le j \le n$. From this it is easy to see that $y_k$ cannot be uniformly minimal.

In the above argument, try replacing the unit vector basis of $c_0$ with an appropriate normalized countably linearly independent sequence that has no minimal subsequence. I think it is known that such sequences exist. Probably an example is in Kadec's book. Maybe this will give an example that has no infinite minimal subset.

Indeed, such sequences exist. Sequences that have the same closed span as any of their subsequences appear in the literature under the name overcomplete or overfilling (see for example Biorthogonal Systems in Banach Spaces, by Hajek, Montesinos, Vanderwerff and Zizler, Exercise 1.1, page 42).
Every infinite dimensional separable Banach space contains such a sequence whose span is also dense. If $(x_n)_n$ is a set on the unit sphere whose span is dense in $X$, take, for example, $y_n=\sum_{k=0}^{\infty}\frac{x_k}{n^{k}k!}$. Then $(y_n)_n$ converges to $x_0$ (indeed $\|y_n-x_0\|\le \sum_{k\ge 1}\frac{1}{n^{k}k!}=e^{1/n}-1\to 0$). Setting $z_n=n(y_n-x_0)$, we see that $(z_n)_n$ converges to $x_1$, and so on... Hence, any subsequence of $(y_n)_n$ has dense span. Taking any normalized, $\omega$-linearly independent, overcomplete sequence and doing the previous construction on the binary tree suggested by Bill indeed gives a linearly independent uncountable set that contains no infinite minimal subset.

Bill's answer was very helpful and completely answers my question. Thanks again.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 44, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9499127268791199, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/123331/question-on-lipschitz-condition
# Question on Lipschitz condition.

I want help in showing that $f$ being Lipschitz on $[0,1]$ $\implies$ that $f$ can be written in the form $$f(x) = f(0) + \int_0^x h(t)~dt$$ for some bounded Lebesgue measurable function $h$ on $[0,1]$.

$f$ being Lipschitz on $[0,1]$ implies there is some constant $K$ such that $|f(x)-f(y)|\leq K |x-y|$ for every $x,y \in [0,1]$.

Can I argue as follows: I know that if $f$ is Lipschitz on $[0,1]$, then $f$ is absolutely continuous on $[0,1]$, and so $f$ is the definite integral of its derivative, i.e. $$f(x) = f(0) +\int_0^x f'(t)~dt,$$ so I can take $h = f'$. Can I do this, or is there a better way of approaching it?

- Can't you use that if $f$ is absolutely continuous then it's differentiable almost everywhere and then apply the fundamental theorem of calculus? – Matt N. Mar 22 '12 at 17:10
- I thought that's what I did... no? – Jack Mar 22 '12 at 17:12
- – t.b. Mar 22 '12 at 17:12
- Thanks for the link. – Jack Mar 22 '12 at 17:21
- Note, you should argue why $f'$ is bounded. – David Mitra Mar 22 '12 at 17:23
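On David Mitra's point, the boundedness of $h = f'$ comes straight from the Lipschitz condition: at any point $x$ where the derivative exists (which is almost every point, by absolute continuity),
$$|f'(x)| = \lim_{\delta \to 0} \frac{|f(x+\delta)-f(x)|}{|\delta|} \leq K,$$
so $f'$ is a bounded (and Lebesgue measurable, being an a.e. limit of measurable difference quotients) choice for $h$.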
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9360526204109192, "perplexity_flag": "head"}
http://mathematica.stackexchange.com/questions/18209/collapse-a-sum-factor-an-element-out-of-a-sum
# Collapse a Sum / factor an element out of a sum

I am trying to get something of the form $\sum_i (\dots)\,\alpha_i = (\dots)$ from the output of the code below. The thing is that I cannot figure out how to tell Mathematica to "collapse" down to a single sum and factor out the $\alpha_i$.

Thank you, John

````
$Assumptions = i \[Element] Integers && j \[Element] Integers &&
   n \[Element] Integers && i \[Element] Constant &&
   j \[Element] Constant && n \[Element] Constant;

r[t_] = (D[#, {t, 2}] - t*D[#, t] + 1/6*# - (-3)) &;
inner[x_] = (Integrate[
     Simplify[ComplexExpand[#1 Conjugate[#2], TargetFunctions -> {Re, Im}]],
     {x, 0, 10}]) &;

uB = 5 - \[Pi]/5*x;
\[Phi][i_] = x^i*(10 - x);
uApp = uB + Sum[Subscript[\[Alpha], i]*\[Phi][i], {i, 1, n}];

(r[x][uApp] /. x -> (10 - 0)/(n + 1)*j) == 0
````

- It's not clear to me what you mean by "collapsing down" - the $\alpha_i$ can't be factored out of the sum over $i$ because they depend on the summation index, don't they? – Jens Jan 22 at 3:56
- I am not sure what you are trying to accomplish here. Are you trying to simplify the summand, or are you trying to factor a constant factor out of the sum entirely? Also, do you have a simpler example of your code, as the sum is getting lost in the details? (Yes, the pun was intended.) – rcollyer Jan 22 at 3:57
- An unrelated issue is that you cannot declare variables as constants by saying `j \[Element] Constant && n \[Element] Constant`, because `Constant` isn't a domain of which something can be an element. To distinguish constants in a differentiation you don't have to do anything special. It's the opposite case that requires a little work - see here. – Jens Jan 22 at 4:11
- Thank you for the responses. Currently, there are several summations, each linear in $\alpha_i$. I wanted to collapse these down to a single sum. – user1543042 Jan 22 at 15:26
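For the situation in the last comment (several sums, each linear in $\alpha_i$), one pattern-based approach is sketched below on a toy expression (an editorial sketch: `a`, `f`, and `g` are hypothetical stand-ins, not the poster's actual output):

````
(* Toy stand-in for an expression with several sums, each linear in a[i]: *)
expr = Sum[a[i] f[i], {i, 1, n}] + Sum[a[i] g[i], {i, 1, n}];

(* Merge sums over an identical range into a single sum... *)
merged = expr /. Sum[x_, r_] + Sum[y_, r_] :> Sum[x + y, r];

(* ...then factor the summand, pulling a[i] out termwise: *)
merged /. Sum[body_, r_] :> Sum[Factor[body], r]
(* -> Sum[a[i] (f[i] + g[i]), {i, 1, n}] *)
````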
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9038941860198975, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/169835/for-cfg-g-and-regular-expressions-r-s-to-which-class-does-langle-g-r-s?answertab=votes
For CFG $G$ and regular expressions $R,S$: To which class does $\{ \langle G,R,S \rangle: L(G) \cap L(R) = L(S) \}$ belong? I'd really like your help with the following question: For $G$ a context free grammar, and $R$, $S$ regular expressions, to which class does $\{ \langle G,R,S \rangle : L(G) \cap L(R) = L(S) \}$ belong? Is it in $R$, $RE/R$, co-$RE/R$? I know that the intersection of a context-free language and a regular language is context-free, and that comparing two regular expressions is in $R$, but I'm not sure what the answer is. Thank you - 1 Answer Given a context-free grammar $G$ and a regular expression $R$ the language $L(G) \cap L(R)$ is context-free, and can be found by an algorithm. This can be proved by taking a product of a pushdown automaton and a finite automaton. Therefore, you can reduce the problem to checking if $L(G) = L(S)$. This problem is in $\mathsf{coRE}$ since you can search for a counterexample $w \in A^{\ast}$ which is in exactly one language. However, given a CFG $G$, it is undecidable whether $L(G)=A^{\ast}$. This is proven using computation histories. Therefore the problem is undecidable, and the answer is $\mathsf{coRE} - \mathsf{R}$. You might wonder if there might be restrictions on $S$ which make the problem decidable. I asked this question at cstheory some time ago, and it turns out there is a rather nice criterion. -
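To make the $\mathsf{coRE}$ upper bound concrete: the search for a counterexample is just an enumeration of $A^{\ast}$ in length order, testing each word against the two (decidable) membership problems. Below is a minimal Python sketch of that semi-decision procedure; the membership tests are abstracted into oracle functions, and the toy oracles I plug in are my own placeholders, not real CFG/regex machinery:

````
# Semi-decision sketch for the coRE bound: search A* in length-lexicographic
# order for a word in exactly one of L(G) ∩ L(R) and L(S). Halts iff the two
# languages differ. Membership oracles are passed in as plain functions.
from itertools import count, product

def find_counterexample(alphabet, in_G_and_R, in_S, max_len=None):
    """Return the first word w with [w in L(G)∩L(R)] != [w in L(S)], or None
    if max_len is reached. With max_len=None this may run forever (coRE)."""
    lengths = count(0) if max_len is None else range(max_len + 1)
    for n in lengths:
        for letters in product(alphabet, repeat=n):
            w = ''.join(letters)
            if in_G_and_R(w) != in_S(w):
                return w
    return None

# Toy instance: L(G) ∩ L(R) = {a^n b^n} versus L(S) = a*b*.
demo = find_counterexample(
    'ab',
    lambda w: len(w) % 2 == 0 and w == 'a' * (len(w) // 2) + 'b' * (len(w) // 2),
    lambda w: w == 'a' * w.count('a') + 'b' * w.count('b'),
    max_len=4,
)
print(demo)  # -> 'a' (it is in a*b* but not of the form a^n b^n)
````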
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9428209662437439, "perplexity_flag": "head"}
http://mathoverflow.net/questions/107795/examples-of-non-kahler-compact-symplectic-manifolds/107816
## Examples of non-Kahler compact symplectic manifolds. I am trying to gather a list of all known symplectic manifolds which don't have Kahler structure. If you know any please add to the list and give references for it. Please avoid giving repetitive examples. Thanks. - 2 Though one can write down such examples, I think the idea of such a list rather misses the point. There are qualitative differences that can be hard to apply to specific examples. We know, thanks to Gompf, that arbitrary finitely presented groups appear as $\pi_1$ of symplectic 4-manifolds of symplectic Kodaira dimension 1 and also of symplectic Kodaira dimension 2. – Tim Perutz Sep 21 at 19:06 We also know that such statements are wildly false for Kaehler surfaces of Kodaira dimension 1 or 2, since e.g. we have only finitely many deformation classes of general type surfaces of fixed $c_1^2$ and $c_2$. Yet it might be hard to decide whether some particular symplectic manifold has a Kaehler structure. – Tim Perutz Sep 21 at 19:06 I see; do we know anything about closed simply connected examples? – Mohammad F.Tehrani Sep 21 at 19:23 1 This should probably be community wiki. – Agol Sep 21 at 22:04 Mohammad: once you restrict to the simply connected case I can no longer make any such sweeping comments (and I won't attempt to answer in a comment box). I wonder what happens, though, if you embed Gompf's manifolds symplectically into some high-dimensional $\mathbb{C}P^n$ using Gromov-Tischler and then blow up this submanifold. – Tim Perutz Sep 21 at 22:37 ## 5 Answers Honestly, this is like asking for a list of all animals that are not dolphins. Instructions for making your own: Step 1 - use the symplectic mapping torus for a symplectomorphism of your favorite Kaehler manifold (acting suitably on homology) to produce something whose Betti numbers violate being Kaehler (one odd degree Betti number is odd) Step 2 - take the manifold from Step 1 and embed it symplectically in projective space, then blow up. That gets rid of the fundamental group, but raises the dimension a lot. Step 3 - take Donaldson hypersurfaces to get the dimension down again. - could you add some details or references - how are you blowing up? why are the Donaldson hypersurfaces not Kahler? – Agol Sep 22 at 1:24 Agol: in step 1, the symplectic mapping torus $X$ is $S^1$ times the usual one. A general theorem of Gromov-Tischler embeds the resulting (integral) symplectic $2N$-manifold symplectically into $\mathbb{C}P^{2N+1}$. You can blow up along such a submanifold much as you would in Kaehler geometry (see e.g. Voisin's book). If $b_1(X)$ is odd, we get a symplectic $2N$-manifold with odd $b_3$. Donaldson hypersurfaces obey the Lefschetz hyperplane theorem, so you can then cut down to 8 dimensions preserving the odd $b_3$. – Tim Perutz Sep 22 at 15:45 It seems interesting to ask what happens if you take e.g. some Noether-violating simply connected symplectic 4-manifold and build a symplectic 8-manifold by this procedure; can it ever be Kaehler? – Tim Perutz Sep 22 at 15:49 There are lots of simply-connected four-dimensional examples in Gompf's symplectic sum paper, which are shown to be non-Kahler by virtue of violating the Noether inequality (see Theorem 6.2).
There are also higher-dimensional examples in the last section of Gompf's paper, including infinitely many simply-connected non-Kahler ones in any even dimension at least 8. Some of these are obtained directly by symplectic sum, and others by taking four-dimensional examples and embedding them in CP^n and blowing up, as suggested by Tim and eigenbunny--in these cases Gompf uses the Hard Lefschetz theorem to prove that the result isn't Kahler. The standard symplectic surgery operations in four dimensions (symplectic sum, Luttinger surgery, rational blowdown...) should generally be expected to produce non-Kahler manifolds more often than not, though it's not always feasible to see that the result isn't Kahler--the standard ways of doing so are by the parity of $b_1$ or by the Noether inequality. In particular this paper of Fintushel-Park-Stern gives another large collection of simply connected examples violating the Noether inequality. One can also show that a symplectic manifold is not Kahler by showing that it is not formal (in the sense of rational homotopy theory). Often nonformal examples also don't satisfy hard Lefschetz (so could instead just be shown to be non-Kahler by that criterion), but there is a nonformal example of Cavalcanti which does satisfy hard Lefschetz. - I would recommend the Tralle-Oprea book, Symplectic manifolds with no Kähler structure. - I did not know there is a book on it, thanks for sharing. – Mohammad F.Tehrani Sep 23 at 3:19 The original example of a torus bundle over a torus which is symplectic but admits no Kahler structure is due to Thurston. - Edited: Corrected after Eugene Lerman's comment of Sep 22, 2012, which also implies this is not an answer to the question (edits are in italic). In [Invent. Math. 131 (1998), no. 2, 311–319] Chris Woodward gives examples of multiplicity-free compact Hamiltonian manifolds that are not compatibly Kähler. This proved that Delzant's result that all compact multiplicity-free torus actions are compatibly Kähler [Bull. Soc. Math. France 116, 315–339 (1988)] does not extend to the nonabelian case. Woodward uses a result from Susan Tolman's paper "Examples of non Kähler Hamiltonian torus actions" [Invent. Math. 131 (1998), no. 2, 299–310]. So, back-to-back papers with examples of symplectic manifolds with a lot of symmetry that are not compatibly Kähler. - Wait, Tolman only proves that there is no *invariant* Kaehler structure on her examples, not that the examples are not Kaehler. I don't remember Woodward's paper well, but I think it is the same kind of result. The example may admit a Kaehler structure, just not invariant. – Eugene Lerman Sep 22 at 12:40 @Eugene Lerman: you are right. Thanks for pointing this out. I have edited my "answer" accordingly. – Bart Van Steirteghem Sep 22 at 18:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9151583313941956, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/45497/acceleration-and-relativity
# Acceleration and relativity Usually in physics books the equation $a=\frac{dv}{dt}$ is used to calculate the relativistic acceleration. It is true that the speed $v=\beta c$ doesn't carry a relativistic (Lorentz) factor, but time does: $t=\tau \gamma$. So what about acceleration? - – Qmechanic♦ Nov 30 '12 at 15:06 ## 2 Answers Assuming I've understood your comment to twistor59's answer correctly: in SR (this changes in GR) acceleration by itself doesn't cause any relativistic effects. If you compare two frames that are instantaneously at rest relative to each other there will be no relativistic effects no matter how fast one of the frames is accelerating. Accelerating objects do experience relativistic effects, but only because over any finite timespan their velocity differs from that of your reference frame. Chapter 6 of Gravitation by Misner, Thorne and Wheeler explains how to calculate the effects of acceleration, or John Baez's article on the Relativistic Rocket gives the headlines. - Just restricting to motion in one dimension: If the $x'$ frame is moving with respect to the $x$ frame with a speed $v$ in the positive x direction then a speed $u'$ measured in the $x'$ frame transforms as $$u=\frac{u'+v}{1+\frac{vu'}{c^2}},$$ so $$du = \frac{(1-\frac{v^2}{c^2})du'}{(1+\frac{vu'}{c^2})^2} =\frac{du'}{\gamma^2(1+\frac{vu'}{c^2})^2}.$$ Similarly $$t=\gamma(t'+\frac{vx'}{c^2}),$$ so $$dt=\gamma(1+\frac{vu'}{c^2})dt'.$$ So the acceleration transformation is $$a=\frac{du}{dt}=\frac{1}{\gamma^3(1+\frac{vu'}{c^2})^3}\frac{du'}{dt'}=\frac{1}{\gamma^3(1+\frac{vu'}{c^2})^3}a'$$ - This is the acceleration transformation from one frame to another; I am asking about a relativistic factor for acceleration by itself. – Neo Nov 30 '12 at 9:37 Well the relation $t=\tau \gamma$ relates the time in a frame for which a particle is at rest (i.e proper time) and the time in another frame which is moving relative to it. So I guess the analogous result for acceleration would be that, for an instant in which the particle was at rest in the primed frame, then the accelerations are related just by $\frac{1}{\gamma^3}$ – twistor59 Nov 30 '12 at 10:00
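As a quick numerical sanity check of twistor59's formulas (a throwaway Python sketch of my own; the numbers are arbitrary), one can verify the transformation by finite differences and watch the $1/\gamma^3$ factor appear in the case $u'=0$ mentioned in the last comment:

````
# Numerical check of the 1D acceleration transformation
#   a = a' / (gamma^3 * (1 + v*u'/c^2)^3)
# against a finite-difference computation of du/dt. Units: c = 1.
c = 1.0
v = 0.6                      # relative speed of the primed frame
gamma = 1.0 / (1.0 - v * v / c**2) ** 0.5

def compose(u_prime):
    """Relativistic velocity addition u = (u' + v)/(1 + v u'/c^2)."""
    return (u_prime + v) / (1.0 + v * u_prime / c**2)

u_prime, a_prime = 0.0, 2.0  # particle momentarily at rest in the primed frame
dt_prime = 1e-7

# Finite differences: du from velocity addition, dt from the time transform
# dt = gamma * (1 + v u'/c^2) dt' along the particle's worldline.
du = compose(u_prime + a_prime * dt_prime) - compose(u_prime)
dt = gamma * (1.0 + v * u_prime / c**2) * dt_prime

formula = a_prime / (gamma**3 * (1.0 + v * u_prime / c**2) ** 3)
print(du / dt, formula)      # both ~ a'/gamma^3 = 2 * (1 - 0.36)**1.5 = 1.024
````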
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9144964814186096, "perplexity_flag": "middle"}
http://mathematica.stackexchange.com/questions/19419/most-efficient-way-to-output-a-cycle-starting-from-an-unordered-list-of-graph-ve
# Most efficient way to output a cycle starting from an unordered list of graph vertices I have some graph $G$ and a list of vertices $(v_1, ..., v_N) \in V$. Using graph structures in Mathematica version 9.0, what is the most efficient way to determine whether or not $(v_1, ..., v_N)$ represents a cycle, and then, if so, to output a permutation cycle starting from some desired $v_i$? Let me provide a specific example: Say I have a ring of eight vertices where: $v_1 \to v_2$ $v_2 \to v_3$ $v_3 \to v_4$ $v_4 \to v_5$ $v_5 \to v_6$ $v_6 \to v_7$ $v_7 \to v_8$ $v_8 \to v_1$ And also: $v_2 \to v_4$ $v_4 \to v_6$ $v_6 \to v_8$ $v_8 \to v_2$ Say the above list is scrambled (i.e. we randomly assign the vertex labels). Without scrambling things here, and specifying that I want a permutation to start from $v_2$, how would I output a permutation: $(v_2,v_3,v_4,v_5,v_6,v_7,v_8,v_1)$ Scrambling, we could maybe map the labels: $(v_1, v_2, v_3, v_4, v_5, v_6, v_7, v_8)$ to something like: $(v_{1111111111}, v_{11011}, v_{111110011}, v_{100101}, v_{11}, v_{1010111}, v_{111}, v_{101010101})$ where the labels convey no ordering information. - you can check the link about Ordering vertices in GraphPlot mathematica.stackexchange.com/questions/10755/… – s.s.o Feb 11 at 15:27 Is your graph directed? – Szabolcs Feb 11 at 19:43 ## 2 Answers If I am reading your question correctly, you are looking for ````HamiltonianGraphQ@Subgraph[g, {v1, v2, ...}] ```` You can use `FindHamiltonianCycle` to actually find a cycle (an ordering of vertices). - Please let me know if I misunderstood. – Szabolcs Feb 11 at 19:47 Right, calling "FindHamiltonianCycle" solves my problem, though I have a special very easy case of what would otherwise be an NP-complete problem. I suppose I could also cut the connections between a pair of vertices $v_i$ and $v_j$ and then run "FindShortestPath". – Roger Harris Feb 11 at 22:04 1 Note that `FindHamiltonianCycle` is not necessarily slow ("bad" complexity). It is slow for some input graphs, but it isn't for a cycle graph. – Szabolcs Feb 11 at 22:08 I think I should be able to make my set of vertices a cycle... Do we know the time complexity for "FindHamiltonianCycle" on a cycle graph? – Roger Harris Feb 11 at 22:53 @Roger I don't know how it is implemented, but it makes sense to assume that it is linear complexity (in the number of nodes). There aren't that many ways to traverse a cycle graph. If you want to make sure, you can measure directly. – Szabolcs Feb 11 at 23:12 If you just want to verify a given list of vertices is a cycle, you could define the following function to check whether it's a cycle: ````cycleQ[g_, vs_] := Block[{edge}, edge = If[DirectedGraphQ[g], DirectedEdge, UndirectedEdge]; VectorQ[Partition[vs, 2, 1, 1], EdgeQ[g, edge @@ #] &] ] ```` For example, ````g = CycleGraph[10]; In[96]:= cycleQ[g, Range[5]] Out[96]= False In[97]:= cycleQ[g, Range[10]] Out[97]= True ```` To permute the list, you could use RotateLeft or RotateRight with Position: ````In[91]:= RotateLeft[{a, b, c, d, e}, Position[{a, b, c, d, e}, d][[1, 1]] - 1] Out[91]= {d, e, a, b, c} ```` - Thanks, the cycleQ[...] function addresses the first part of my question! – Roger Harris Feb 11 at 14:23 However, let's say the vertices are unordered and not necessarily just connected to their nearest-neighbors in the cycle (i.e. vertex $i$ could be connected to vertex $j$ and $j+1$, etc.). We are only guaranteed that there is a unique solution for the cycle that only uses each vertex once.
How would we output a cycle permutation? – Roger Harris Feb 11 at 14:24
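For readers outside Mathematica, the same two steps (check cyclically consecutive pairs against the edge set, then rotate) port to a few lines of plain Python — a stand-in sketch of my own, not a translation of any built-in:

````
# Plain-Python analogue of cycleQ plus the RotateLeft step: check that the
# cyclically consecutive pairs of vs are all edges, then rotate the list
# to start at a chosen vertex.
def cycle_q(edges, vs, directed=False):
    edge_set = set(edges) if directed else {frozenset(e) for e in edges}
    key = tuple if directed else frozenset
    pairs = [(vs[i], vs[(i + 1) % len(vs)]) for i in range(len(vs))]
    return all(key(p) in edge_set for p in pairs)

def rotate_to(vs, start):
    i = vs.index(start)
    return vs[i:] + vs[:i]

ring = [(i, i % 8 + 1) for i in range(1, 9)]        # edges 1-2-...-8-1
print(cycle_q(ring, [1, 2, 3, 4, 5]))               # False (5-1 not an edge)
print(cycle_q(ring, [1, 2, 3, 4, 5, 6, 7, 8]))      # True
print(rotate_to([1, 2, 3, 4, 5, 6, 7, 8], 2))       # [2, 3, 4, 5, 6, 7, 8, 1]
````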
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8717896938323975, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/99571?sort=oldest
## Terminology: Banach spaces equipped with continuous associative product? This is admittedly a low-interest question mathematically, and is arguably a question I could resolve if I had time over the next few days to go and look through a large number of the Banach algebra/functional analysis books on my shelves and in the library. However, it strikes me that this is easily crowd-sourceable and that people may know of texts I am less familiar with. My reason for asking on MO rather than MSE is that I think it will get better answers here. So: the usual definition of a Banach algebra is that it is a (complex) algebra equipped with a complete vector-space norm, such that $\Vert ab\Vert\leq \Vert a\Vert \Vert b\Vert$ for all elements $a,b$. Now suppose we have a (complex) algebra $A$ equipped with a complete vector-space norm $\Vert\cdot\Vert$ and a constant $K>0$ such that $\Vert ab\Vert\leq K\Vert a\Vert \Vert b\Vert$ for all $a,b$. These are much rarer in the literature, most likely for the following reason: a standard exercise doled out to students is to show that there is an equivalent norm on $A$ for which multiplication is contractive, i.e. rendering $A$ (in this new norm) a Banach algebra in the usual sense. In this sense "one has nothing new". However, in some joint work I am writing up, I am toying with the idea of working in this greater generality, in order to let certain technical functorial constructions have more natural formulations. (In a bit more detail, it is to do with certain homologically flavoured constructions for Banach algebras and Banach bimodules more naturally living in a world where multiplication need not be contractive.) So my question is this: do these kinds of algebra have a standard name, and where are the established sources for such terminology? I have a dim recollection that they are given a name of their own in Zelazko's old book on Banach algebras, but I don't recall what the name was, and I can't find anything in Bonsall & Duncan. Note: I am not after arguments as to what terminology should or should not be, or observations about one definition being a "semigroup object in Ban$_1$" while the other is a "semigroup object in Ban". Rather, I need some idea of whether one choice of terminology is standard, and hence least likely to cause confusion/irritation to the intended audience, should I decide to pursue this course. - ## 2 Answers To my knowledge, something in the same vein was first (?) considered in: C. Le Page (1967), Sur quelques conditions impliquant la commutativité dans les algèbres de Banach, C.R.A.S. Paris, Ser. A-B, 265, A235-A237. In any event, if really necessary, I would refer to a norm (of ring-like structures) with that property as a quasi-norm wrt multiplication. Indeed, the kind of algebras that you're considering are somehow related to quasi-normed algebras (cf. Wiki). But this is not the real answer that I would have liked to give. - 1 It sounded to me like Yemon's constant only appears in the product inequality, not in the triangle inequality, so "quasi-normed algebra" isn't exactly what he wants ... – Nik Weaver Jun 14 at 15:20 Sorry, I was a little bit sloppy. I'd better say that the kind of algebras considered by Yemon are reminiscent of quasi-normed algebras. Let me fix it. – Salvo Tringali Jun 14 at 15:44
Yemon, I have used the term "weak Banach algebra" for such things. I don't think there is a standard term, though. I vaguely recall seeing people simply call them Banach algebras (probably in some older papers when the terminology in the subject hadn't really stabilized). (I ran into this issue when dealing with the Lipschitz algebra $Lip_0(X)$ for $X$ a complete finite diameter metric space. You really want to use Lipschitz number as the norm, even though this only makes it a weak Banach algebra. There's no real penalty for doing this, and the advantage is that it allows you to identify $X$ isometrically with the normal spectrum of $Lip_0(X)$.) Edit: I've just realized that this is what Gelfand meant by "normed ring". E.g., on the first page of his book Commutative Normed Rings (1960) he writes: "A normed ring is a complex Banach space in which an associative multiplication is defined that is permutable with the multiplication by complex numbers, distributive with respect to addition, and continuous in each factor." and there is a footnote which says "In another terminology, a Banach algebra." A few pages in he proves that you can always achieve $\|xy\| \leq \|x\|\|y\|$ by renorming. - Thanks for the update, Nik! Interesting how "working definitions" become fossilized as new axioms – Yemon Choi Jan 11 at 19:00
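For completeness, the renorming that both the question ("a standard exercise doled out to students") and the Gelfand quote allude to is a one-liner; here is a sketch in LaTeX form, assuming only the constant $K$ from the question:

````
% The standard renorming exercise: if \|ab\| \le K \|a\| \|b\| with K > 0,
% just rescale the norm by K. The new norm is equivalent (same topology,
% same completeness) and submultiplicative:
\[
  \|a\|' := K\|a\|
  \quad\Longrightarrow\quad
  \|ab\|' = K\|ab\| \le K^2\|a\|\,\|b\| = \|a\|'\,\|b\|',
\]
% so (A, \|\cdot\|') is a Banach algebra in the usual contractive sense.
````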
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9502196907997131, "perplexity_flag": "middle"}
http://physics.aps.org/articles/large_image/f1/10.1103/Physics.2.19
Illustration: Alan Stonebraker, adapted from a NASA image

Figure 1: A small satellite moving under the influence of a sun-earth system will rotate in fixed relation to the sun-earth system if it is located near one of five Lagrange ($L$) points. The figure shows the equipotential lines of the effective potential in the rotating frame. In the atomic analog the effect of the earth is replaced by a circularly polarized microwave field so that the electronic wave packet moves without spreading around the nucleus following the direction of the microwave field. (Figure not to scale.)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8951603770256042, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/10952/list
## Return to Answer

This is a great question. Someone will come along with a better answer I'm sure, but here's a bit off the top of my head:

1) The Hilbert class field of a number field $K$ is the maximal everywhere unramified abelian extension of $K$. (Here when we say "$K$" we really mean "$\mathbb{Z}_K$", the ring of integers. That's important in the language of etale maps, because any finite separable field extension is etale.) In the case of a curve over $\mathbb{C}$, the "problem" is that there are infinitely many unramified abelian extensions. Indeed, the Galois group of such is the abelianization of the fundamental group, which is free abelian of rank $2g$ ($g$ = genus of the curve). Let me call this group G. This implies that the covering space of C corresponding to G has infinite degree, so is a non-algebraic Riemann surface. (In fact, I have never really thought about what it looks like. Its fundamental group is the commutator subgroup of the fundamental group of C, which I believe is a free group of infinite rank.) I don't think the field of meromorphic functions on this guy is what you want.

2) On the other hand, the Hilbert class group $G$ of $K$ can be viewed as the Picard group of $\mathbb{Z}_K$, which classifies line bundles on $\mathbb{Z}_K$. This generalizes nicely: the Picard group of $C$ is an extension of $\mathbb{Z}$ by a $g$-dimensional complex torus $J(C)$, which has exactly the same abelian fundamental group as $C$ does: indeed their first homology groups are canonically isomorphic. $J(C)$ is called the Jacobian of $C$.

3) It is known that every finite unramified abelian covering of $C$ arises by pulling back an isogeny from $J(C)$. So there are reasonable claims for calling either $G \cong \mathbb{Z}^{2g}$ or $J(C)$ the Hilbert class group of $C$. These two groups are -- canonically, though I didn't explain why -- Pontrjagin dual to each other, whereas a finite abelian group is (non-canonically) self-Pontrjagin dual. [This suggests I may have done something slightly wrong above.] As to what the Hilbert class field should be, the analogy doesn't seem so precise. Proceeding most literally you might take the direct limit of the function fields of all of the unramified abelian extensions of $C$, but that doesn't look like such a nice field.

Finally, let me note that things work out much more closely if you replace $\mathbb{C}$ with a finite field $\mathbb{F}_q$. Then the Hilbert class field of the function field of that curve is a finite abelian extension field whose Galois group is isomorphic to $J(C)(\mathbb{F}_q)$, the (finite!) group of $\mathbb{F}_q$-rational points on the Jacobian.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 102, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9444113969802856, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/22843/list
## Return to Question

# Real primitive of a complex form on a CR manifold

I am looking for a characterization of (0,1)-forms on a CR manifold M that admit a real primitive, i.e. those that can be written as $\omega=\overline\partial_M f$ for a real function f. If M is a complex manifold, by expanding $ddf=0$ one obtains the following characterization: $\overline\partial\omega=0$, $\partial\overline\omega=0$, $\partial\omega+\overline\partial\overline\omega=0$. In the CR case however, there is not a good substitute for $\partial$, and also the symmetry between (1,0)-forms and (0,1)-forms fails.

Edit: One easy condition is $\overline\partial_M\omega=0$. In general it is a difficult problem even to say if $\omega=\overline\partial_M g$ for some complex function g. My question should be rephrased as follows: Assuming that there exists a complex solution g to $\omega=\overline\partial_M g$, when is it possible to choose g real?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9241420030593872, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/225521/separated-scheme
Separated scheme How can one show that the affine line with a doubled point is not a separated scheme? Hartshorne writes something about this point in the product, but it is not the product in the category of topological spaces! Please give a rigorous proof. - 1 Should "separable" be "separated?" – Matt Oct 30 '12 at 23:20 2 You can compute the fibre products locally to see that there must be four origins in $X\times X.$ To see that the diagonal is not closed, consider the intersection of the diagonal with the canonical open charts of $X\times X.$ – Andrew Oct 30 '12 at 23:22 What are the origins, and why should there be 4? – user46336 Oct 31 '12 at 4:07 1 Answer Let $X$ be the affine line with the origin doubled. More precisely, if we let $Z = \mathbb A^1$ and $U = \mathbb A^1 \setminus \{0\},$ then $X$ is the union of two copies of $Z$ in which the two copies of $U$ are identified in the obvious way. There are two obvious maps $Z \to X$ (corresponding to the two copies of $Z$ of which $X$ is the union), and they are distinct, but they coincide when restricted to $U$. These two maps induce a map $Z \to X \times X$, and the above discussion shows that the preimage of the diagonal is exactly equal to $U$. Since $U$ is not closed in $Z$, we conclude that the diagonal is not closed in $X\times X$. Thus $X$ is not separated. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.952854573726654, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/26692/can-we-model-gravitation-as-a-repulsive-force/26693
# Can we model gravitation as a repulsive force? This question is actually related to my earlier question ("what is motion"). The fact that objects move a lot in the universe and that the universe is expanding can suggest that gravity is a repulsive force that increases with distance, so the farthest objects repel us more. This can still explain several existing observations, e.g., why does the apple fall? Motion is the result of such repulsion. Two objects unlucky enough not to be moving relative to each other get squished due to the repulsion of the rest of the universe around them. The earth repels the apple less than the stars do, so it is pushed towards the earth. Furthermore, it can explain the expanding universe without the need for dark energy. This could be demonstrated in a thought experiment: if we take a lot of same-charge particles (with small mass), such as electrons, and lock them in a large box at a low enough temperature, the mutual repulsion of the particles may cause motion similar to that due to gravitational attraction. Another experiment would be to measure the slight changes in our weight during day and night when the sun and earth align (if their masses are large enough to detect the feeble change in repulsion). [EDIT: the question in the original form may not have been clear. It is "can we model...", with a yes/no answer and why (not). If downvoting, please justify.] - 1 If the stars push the apple to the Earth, wouldn't the stars on the other side of the Earth push it back up with an equal force? Or maybe you're a flat-Earthist? – Pete Jackson Aug 5 '11 at 21:52 ## 5 Answers Show me a distribution of remote mass that would provide the behavior we see for both • Jupiter in orbit around the sun • the many moons in orbit around Jupiter which both appear to be $1/r^2$ forces. Now try to generalize to support all the moons and planets in the solar system. You can't do it because the system is highly over-constrained. - – Glen Little Jan 11 at 18:18 Interestingly, this theory has been proposed earlier (Le Sage theory). According to Wikipedia, mainstream scientists discount this theory for lack of experimental evidence. So this answers the question. - Suggestion to the answer (v1): If you mention the words Le Sage theory explicitly in the answer, then it would become a searchable keyword for future visitors to find this post. – Qmechanic♦ Jan 11 at 18:31 I would assume the expansion of the universe is perhaps due to the fact that systems with relatively equal gravitational densities repel each other rather than attract. The theory that gravity causes dents in space can model the idea of repulsion. So perhaps there can be a gravitational model for repulsion; however, I don't think it could be modeled in your sense. The universe is expanding ultimately due to the infinite increase of entropy, which can be seen as dark matter of similar gravitational density repulsion. Mind you, I have no real evidence for the stuff I wrote; this is all speculation. Good luck in finding your answer. - There is definitely a strong history of "repulsive" gravity... often called "pushing gravity". You can read about it in the book: Pushing Gravity. For a recent proponent of the concept, the late Tom Van Flandern collected lots of pieces of the theory and made a number of predictions based on the ideas. You can read more about his work here: Gravity - Yes, the force that causes the expansion of the Universe is repulsive. No, this force is not gravity. It is the fifth force. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9504384994506836, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-algebra/176048-finding-line-trapezoid-intersection-through-system-equations.html
# Thread: 1. ## Finding a line/ trapezoid intersection through system of equations Hello, I have a problem that I so far haven't been able to solve. I know it involves geometry, but I need to find a way to solve it by looking at it as functions, a system of linear equations or such that might be solved in a matrix. I want to find out if a line (finite) is either inside or intersecting a trapezoid. I know that the line can never intersect the trapezoid twice and the line is always parallel to the parallel sides of the trapezoid. I have tried to find an approach using a parametrized equation of the plane and line intersection but I don't think it covers all cases (something like this, from Wikipedia: $l_a+(l_b-l_a)t=p_0+(p_1-p_0)u+(p_2-p_0)v$). thanks 2. Here's an idea. 1. Your test line is parallel to the parallel lines of the trapezoid. Hence, there's no need to test for intersection with the parallel lines. 2. Parametrize the non-parallel lines of the trapezoid as follows: $y_{1}=m_{1}t+c_{1},\quad t\in[a_{1},b_{1}],$ and $y_{2}=m_{2}t+c_{2},\quad t\in[a_{2},b_{2}].$ Parametrize your test line in the same way: $z=m_{z}t+c_{z},\quad t\in[a_{z},b_{z}].$ 3. Attempt to solve, one after the other, the equations $z=y_{1}$ and $z=y_{2}$ for $t.$ In both cases, there should be exactly one solution, since the lines are obviously not parallel. Test the value of $t$ and see if it is inside both allowable ranges. For example, if you're solving $z=y_{1},$ then make sure both that $t\in[a_{1},b_{1}]$ and $t\in[a_{z},b_{z}].$ If so, then you have an intersection. Otherwise, you don't. Make sense? 3. Thanks! a question though, what is "a" in the interval? 4. $a_{1}$ is the left-hand boundary for the interval $[a_{1},b_{1}].$ For example, if $[a_{1},b_{1}]=[2,5],$ then $a_{1}=2$ and $b_{1}=5.$ Does that make sense? 5. Just browsing by to say thanks a lot. The solution is great; working with wind turbine wakes here. 6. Great! You're very welcome for any help I could give. Have a good one!
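Since post 2 is essentially an algorithm, here is a minimal Python sketch of it (my own translation, with made-up leg endpoints and test segment; $c$ denotes the intercepts as in the parametrization above):

````
# Sketch of the thread's test: intersect the test line z(t) = m_z t + c_z,
# t in [a_z, b_z], with each non-parallel leg y_i(t) = m_i t + c_i,
# t in [a_i, b_i]. The test line is not parallel to the legs, so m_z != m_i.
def leg_intersection(m1, c1, a1, b1, mz, cz, az, bz, eps=1e-12):
    """Return the parameter t where the leg meets the test line, or None
    if the crossing lies outside either parameter range."""
    if abs(m1 - mz) < eps:
        return None                      # parallel: shouldn't happen here
    t = (cz - c1) / (m1 - mz)            # solve m1*t + c1 = mz*t + cz
    if a1 <= t <= b1 and az <= t <= bz:
        return t
    return None

# Made-up trapezoid legs and a horizontal test segment (slope 0):
left_leg  = ( 2.0, 0.0, 0.0, 1.0)        # y = 2t,      t in [0, 1]
right_leg = (-2.0, 8.0, 3.0, 4.0)        # y = -2t + 8, t in [3, 4]
test_seg  = ( 0.0, 1.0, 0.0, 2.0)        # z = 1,       t in [0, 2]

print(leg_intersection(*left_leg,  *test_seg))   # 0.5  -> crosses left leg
print(leg_intersection(*right_leg, *test_seg))   # None -> t=3.5 outside [0,2]
````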
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.933253824710846, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/192835/fastest-and-intuitive-ways-to-look-at-matrix-multiplication/192845
# Fast(est) and intuitive ways to look at matrix multiplication? Most of the time I see matrix multiplication presented and defined as a seemingly arbitrary sequence of operations. For example, the textbook I'm currently reading for a linear algebra course defines the $(i, j)$ entry of the product $AB$ as the sole entry of the $1 \times 1$ matrix that is the product of the $i$th row of $A$ and the $j$th column of $B$. Properties of matrix multiplication are subsequently proven based upon this definition. The definition is clear, but why the matrix product is useful is not clear to me as a student. A different textbook I'm referencing defines the product $AB$ in terms of linear combinations. The problem I have is doing matrix multiplication quickly by hand, particularly when $A$ is $p \times 1$ and $B$ is $1 \times q$. I would like to know how to look at or define matrix multiplication, in a manner which makes it easy (for the average student) to compute by hand, while being intuitive and consistent for use in later proofs. Wikipedia has a great article. - 1 – Ben Millwood Sep 8 '12 at 18:52 $A= \left( \begin{matrix} a_{11}&a_{12}&a_{13} \\a_{21}&a_{22}&a_{23} \\a_{31}&a_{32}&a_{33} \end{matrix} \right)$ – T. Webster Apr 27 at 4:53 ## 5 Answers In general, let $A$ be $m\times n$ and let $B$ be $n\times l$. Let $A_i=(a_{i1},a_{i2},\dots,a_{in})$ be the $i$th row of $A$, $1\leq i\leq m$ and let $B_j=(b_{1j},b_{2j},\dots,b_{nj})$ be the $j$th column of $B$, $1\leq j \leq l$. Given a row vector $\overline x=(x_1,x_2,\dots,x_n)$ and a column vector $\overline y =(y_1;y_2;\dots;y_n)$, define $$\overline x\cdot \overline y=\sum_{i=1}^n x_iy_i$$ Then we can define $$A\cdot B$$ as the matrix whose entries are $$(AB)_{ij}=A_iB_j=\sum_{k=1}^n a_{ik}b_{kj}$$ where we interpret each row and column as a row vector and column vector respectively. You can write it as $$A\cdot B=\left( \begin{matrix} A_1B_1&A_1B_2&\cdots&A_1B_l\\ A_2B_1&A_2B_2&\cdots&A_2B_l \\\vdots&{}&{}&\vdots \\A_mB_1&A_mB_2&\dots&A_mB_l\end{matrix} \right)$$ - Perhaps the best way to look at matrix multiplication when you want to compute a product by hand is as follows: When you're computing by hand, just bump the second matrix up to make room for the product in the lower-right corner. Note that the upper-left corner must be a square, which re-confirms the requirement that "columns of $A$" = "rows of $B$"; moreover, you can see that $A\times B$ inherits its row dimension from $A$ and its column dimension from $B$. - Matrix multiplication is defined as it is so that it reflects the composition of linear maps. No more, no less. - This is the reason why matrix multiplication is defined this way. – Fernando Martin Sep 14 '12 at 4:06 It's important to understand how to multiply $AB$ by recognizing that each column of $AB$ is a linear combination of columns of $A$ with the corresponding column of $B$ telling you which particular linear combination; similarly each row of $AB$ is a linear combination of the rows of $B$ with the corresponding row of $A$ telling you which particular linear combination. This is covered in any reasonable text on linear algebra. This perspective is both helpful for doing concrete calculations by hand as well as for understanding matrices theoretically. In particular, this interpretation of matrix multiplication is very handy for understanding Gaussian elimination and for studying the rank of a matrix. - 1 That's my favorite method. 
The great thing is that when the numbers are reasonable, I am able to compute a whole row at "once" instead of advancing by single numbers. – Honza Brabec Sep 8 '12 at 20:18 Just remember that each ij-th entry in AB (where i signifies the row of AB and j signifies the column in AB) is equal to the dot product of the ith row of A and the jth column of B. Another trick is to visualize turning the jth column of B and aligning it with the ith row of A, multiplying each entry, and then adding it up. -
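To make the linear-combination reading concrete, here is a small pure-Python sketch (no NumPy; toy $2\times 2$ numbers of my own choosing) that builds each column of $AB$ as a combination of the columns of $A$:

````
# Column view of matrix multiplication: column j of A*B is the linear
# combination of the columns of A with weights taken from column j of B.
def matmul_by_columns(A, B):
    m, n, l = len(A), len(B), len(B[0])
    C = [[0.0] * l for _ in range(m)]
    for j in range(l):                     # build column j of the product
        for k in range(n):                 # add B[k][j] * (column k of A)
            for i in range(m):
                C[i][j] += B[k][j] * A[i][k]
    return C

A = [[1, 2],
     [3, 4]]
B = [[5, 6],
     [7, 8]]
print(matmul_by_columns(A, B))   # [[19.0, 22.0], [43.0, 50.0]]
````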
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 37, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9417718648910522, "perplexity_flag": "head"}
http://mathoverflow.net/questions/118494?sort=votes
## Largest permutation group without 2-cycles or 3-cycles The largest permutation group without 2-cycles is $A_n$, which has size $n!/2$. I think the largest permutation group without 2-cycles or 3-cycles is much smaller, but I can't figure out if it should be polynomially smaller (e.g. of size $n!/n^3$), or more dramatically smaller (e.g. of size $(.5n)!$). The largest group I could come up with is $\{\phi(x_1,\dots,x_{n/2}) \circ \phi(x_{n/2+1},\dots,x_n) \mid \phi \in S_{n/2}\}$, which has size $(.5n)!$. EDIT: Posting this here since the answers below pointed me in the right direction, but ended up conjecturing something that was not quite correct. The group $\{(g_1, g_2, \dots, g_{d-1}\!,\ g_1g_2\cdots g_{d-1}\!)\ |\ g_i \in S_{n/d}\} \le S_{n/d}^d \le S_n$ has no 2-cycles or 3-cycles, and has $(n/d)!^{d-1}$ elements. When $d = \log n$, this is $n!/n^{\Theta(n\log\log(n)/\log(n))}$, which is smaller than $n!/poly(n)$ but larger than $(cn)!$ for any $c<1$. You can do a little bit better by using a wreath product instead of a direct product, and by tweaking $d$, but I think this is more or less optimal. - You can get a rough upper bound with number theory by removing the factors of 2 and 3 from n factorial (as any desired subgroup has order dividing that factor). I am guessing the result will be dramatically smaller. Already for n=7 the answer is at most 35, and in fact is actually 7. Gerhard "Is Calculating From The Hip" Paseman, 2013.01.09 – Gerhard Paseman Jan 10 at 3:04 Upon reflection, I see that I am looking at subgroups without elements of orders 2 or 3, which may be different from what the poster desires. Gerhard "Is Reading From Hip Too" Paseman, 2013.01.09 – Gerhard Paseman Jan 10 at 3:08 Gerhard, no, it's not about subgroups without elements of orders 2 and 3; it's about the cycle structures of the elements of orders 2 and 3. – Dima Pasechnik Jan 10 at 8:27 Thanks everyone for your help and insight! The answers are exactly what I was looking for. I would "accept" several answers if I could, but it looks like MathOverflow only allows one accepted answer per question. – rishig Jan 11 at 5:45 ## 4 Answers I can do a bit better! For even $n=2m$ there is a subgroup of order $2^{m-1}m!$ with no 2-cycles or 3-cycles. Let $W$ be the wreath product of a cyclic group of order 2 with $S_m$. In other words, $W$ is the subgroup of $S_n$ that preserves the partition $\{\{1,2\}, \{3,4\},\ldots,\{2m-1,2m\}\}$ of $\{1,2,\ldots,n\}$. So $|W| = 2^mm!$. Then $W$ has no 3-cycles, but it does have the 2-cycles $(1,2),(3,4),$ etc. Now $W$ is a semidirect product $B \rtimes S_m$, where $B$ is the base group of the wreath product, which is the subgroup of order $2^m$ that fixes each of the pairs $\{2i-1,2i\}$ in the partition. The subgroup $C$ of $B$ consisting of the even permutations in $B$ has order $2^{m-1}$ and has no 2-cycles, and it is normal in $W$, so $G := C \rtimes S_m$ has no 3-cycles and no 2-cycles. I would guess that this is the best you can do for large $n$, and I am sure that this could be proved using the methods suggested by Dima Pasechnik. The intransitive and imprimitive maximal subgroups of $S_n$ are respectively direct and wreath products of symmetric groups, so their structure is well understood. The primitive maximal subgroups are comparatively small. 
There is an old result (of Praeger and Saxl I think) that says they have order at most $4^n$, and many more recent, more accurate results, but $4^n$ is already asymptotically smaller than $|G|$. -

oops, you're right. I somehow didn't think straight enough when I realized that the example in the question can be improved. :-) – Dima Pasechnik Jan 10 at 12:02

For small $n$ your construction is not optimal (e.g. for $n=6$ there is a group of size 120, isomorphic to $S_5$; $M_{12}$ is an example for $n=12$, the biggest exceptional $n$, it seems). But for sufficiently large $n$, your construction is almost optimal; you can still add an extra 2 to the order of your group. Namely, add the permutation $(1,n/2+1)(2,n/2+2)\dots (n/2,n)$. A way to prove that this becomes optimal (for sufficiently large $n$) might go as follows:

• prove that this is the best possible with intransitive groups
• same for imprimitive groups
• for primitive groups, invoke the O'Nan-Scott theorem (eventually, the classification of finite simple groups).

Perhaps there is a better way to deal with the last step, I don't know. -

1 It is known that the only primitive group on $n$ points containing a 2-cycle is $S_n$, and any primitive group containing a 3-cycle contains $A_n$. These are classical results going back to Jordan. Thus any primitive group other than $A_n$ or $S_n$ has no 2-cycles or 3-cycles. There are very good upper bounds on the order of primitive groups. The best is by Maroti, which says that for such primitive groups one of the following holds: $G$ is a Mathieu group $M_n$ for $n=11,12,23,24$; $G$ is a subgroup of $S_m \wr S_k$ with $n=m^k$, $m\geq 5$ and $k\geq 2$; or $|G|\leq n^{1+\lfloor \log_2(n)\rfloor}$. – Michael Giudici Jan 10 at 9:42

This is really a comment to @Dima's answer, but it's a bit long... There is a classical result of Jordan in permutation group theory that says the following: If a primitive group $G$ [on a set of order $n$] contains a $p$-cycle, where $p < n - 3$ is prime, then $G$ is the alternating or symmetric group of degree $n$. (See Wielandt's Finite permutation groups for a proof.) So any primitive group [other than $A_n$ and $S_n$] will satisfy the requirements of the OP. There are LOTS of papers written about the maximal orders of primitive groups, so you should investigate these. The starting point is classical work of W.A. Manning, but once the Classification of Finite Simple Groups was completed, much stronger results were possible. Here is a relevant quote: "It is now known that the largest [primitive] groups [on a set of order $n$] occur for $n$ of the form $c(c − 1)/2$ and are $S_c$ and $A_c$ acting on the unordered pairs from a set of size $c$." The quote comes from Permutation Groups and Normal Subgroups by Cheryl Praeger. So you can work out from this the upper bound for the order of a primitive group containing neither 2-cycles nor 3-cycles. Which leaves the intransitive and imprimitive ones... -

But the conclusion will be that, at least for large enough $n$, primitive groups will not be the largest groups with the required property. – Derek Holt Jan 10 at 11:48

Derek, of course you are right, as your citation of the Praeger-Saxl result makes clear.
– Nick Gill Jan 10 at 13:15

Two sequences motivated by this question, made with a brute-force GAP program: https://oeis.org/A208232 and https://oeis.org/A208235 Someone may like to extend these sequences. -
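The smallest case of the construction in Derek Holt's answer can be checked mechanically. Below is a minimal sketch in Python using sympy's permutation groups (sympy in place of GAP, the 0-based point labels, and the choice $m = 3$ are all assumptions of this illustration):

```python
# Sanity-check the group G = C semidirect S_m from Derek Holt's answer
# for m = 3, i.e. n = 6 with points 0..5 and pairs {0,1}, {2,3}, {4,5}.
from math import factorial
from sympy.combinatorics import Permutation, PermutationGroup

m = 3
# C: the even permutations of the base group, generated by double pair-flips
c_gens = [Permutation([[0, 1], [2, 3]], size=6),
          Permutation([[2, 3], [4, 5]], size=6)]
# A copy of S_m permuting the three pairs as blocks
s_gens = [Permutation([[0, 2], [1, 3]], size=6),
          Permutation([[0, 2, 4], [1, 3, 5]], size=6)]
G = PermutationGroup(c_gens + s_gens)

def is_k_cycle(p, k):
    cf = p.cyclic_form          # nontrivial cycles only
    return len(cf) == 1 and len(cf[0]) == k

assert G.order() == 2 ** (m - 1) * factorial(m)   # 2^{m-1} m! = 24
assert not any(is_k_cycle(p, 2) or is_k_cycle(p, 3) for p in G.elements)
print("order", G.order(), "- no 2-cycles or 3-cycles")
```

This confirms a subgroup of $S_6$ of order $2^{m-1}m! = 24$ with no 2-cycles or 3-cycles; the exhaustive search over all subgroups that the OEIS entries above tabulate is a much heavier computation.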
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 72, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9427140951156616, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?t=122979
## Hubble Recession vs. Moon Recession

[latex]70 \frac{km/s}{Mpc}*\left(384400\ km\right)=8.72028137*10^{-10}\frac{m}{s}[/latex]

[latex]8.72028137*10^{-10}\frac{m}{s}*\frac{3.1556926*10^{7}\ s}{yr}=2.75185274\ \frac{cm}{yr}[/latex]

Do we consider Hubble recession as a part of the directly calculated Moon recession? Why or why not?

Recognitions: Science Advisor Staff Emeritus

Nope. Ned Wright addresses this issue in his cosmology FAQ. http://www.astro.ucla.edu/~wright/cosmology_faq.html#SS with a reference to the literature http://xxx.lanl.gov/abs/astro-ph/9803097 (the Cooperstock reference below)

Why doesn't the Solar System expand if the whole Universe is expanding? This question is best answered in the coordinate system where the galaxies change their positions. The galaxies are receding from us because they started out receding from us, and the force of gravity just causes an acceleration that causes them to slow down, or speed up in the case of an accelerating expansion. Planets are going around the Sun in fixed size orbits because they are bound to the Sun. Everything is just moving under the influence of Newton's laws (with very slight modifications due to relativity). [Illustration] For the technically minded, Cooperstock et al. computes that the influence of the cosmological expansion on the Earth's orbit around the Sun amounts to a growth by only one part in a septillion over the age of the Solar System. This effect is caused by the cosmological background density within the Solar System going down as the Universe expands, which may or may not happen depending on the nature of the dark matter. The mass loss of the Sun due to its luminosity and the Solar wind leads to a much larger [but still tiny] growth of the Earth's orbit which has nothing to do with the expansion of the Universe. Even on the much larger (million light year) scale of clusters of galaxies, the effect of the expansion of the Universe is 10 million times smaller than the gravitational binding of the cluster.

[add] I'm thinking about analyzing this problem in a different way, but I don't want to hijack this post, and I'm not confident of my approach yet - so I may post some more on this in another thread.

Note that the increase in the Moon's orbit is well understood - it's due to the spin-orbit coupling between the Moon and the Earth. This is due to the tides on the Earth caused by the Moon. This effect has nothing to do with GR; it is correctly predicted by Newtonian gravity. See for example http://curious.astro.cornell.edu/que...php?number=124
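For readers who want to reproduce the opening post's arithmetic, here is a minimal sketch in plain Python ($H_0 = 70$ km/s/Mpc is the value assumed above; as the reply just given explains, this naive application of Hubble's law to a bound system is exactly what is in question):

```python
# Naive Hubble-law recession applied to the Earth-Moon distance.
# This only reproduces the OP's arithmetic; Hubble flow does not
# actually operate inside a gravitationally bound system.
H0 = 70.0                    # km/s per Mpc, the OP's assumed value
KM_PER_MPC = 3.0857e19
D_MOON_KM = 384400.0
SEC_PER_YEAR = 3.1556926e7

v_m_per_s = H0 / KM_PER_MPC * D_MOON_KM * 1e3
print(v_m_per_s)                          # ~8.72e-10 m/s
print(v_m_per_s * SEC_PER_YEAR * 100.0)   # ~2.75 cm/yr
```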
see for example http://curious.astro.cornell.edu/que...php?number=124 But according to my calcuation above, either the Hubble constant does not apply over short distances < 1AU, or the moon is affected by the Hubble constant, however ever small of a fraction its contribution. Sure, gravity overwhelms it, that quite obvious and not new. But when I plug in the numbers, assuming a Hubble Constant of 70 km/s/Mpc (see first post), I get something in the range of the centimeters. Could the Hubble Expansion be a co-cause of the Moon Recession? Of course, we know that this would result in the usual conservation of angular momentum. And why does the literature you cite have a answer that is orders magnitude different than when I approximated by mutiplying initial Hubble Velocity (H*D) with time? If that source is right, then the (H*D) cannot be applied at the scale of the solarsystem... which is weird since the literature you cite assumes that expansion occurs at all scales. Is it that the real function of expansion depends on the energy density, and thus makes (H*D) too simple, or just plain wrong, for cosmologists? Recognitions: Science Advisor Staff Emeritus ## Hubble Recession vs. Moon Recession The motion of bodies near the solar system can be computed from the actual metric in the solar system. This metric is not the metric used for cosmology. The equations of cosmology such as Hubble's law do not apply to the case of planetary motion. Cosmological equations use an "averaged" metric that is not actually present in the solar system. If you don't like thinking in terms of metric, think in terms of forces. (The solar system is not particularly relativistic, so thinking in forces does not introudce any serious conceptual errrors. The forces on the planets in the solar system are just those due to nearby bodies, and are reasonably well-known and well-studied. There are not any "mysterious" forces due to distant bodies (though there are some tiny forces usually neglected, like gravitational effects from alpha-centauri). There are not any "mysterious forces" due to the universe as a whole, or its expansion. Well, there are probably not any such mysterious forces. If you look at http://www.physicsforums.com/showthread.php?t=123017 for more discussion (I didn't want to hijack this thread with my questions in that discussion), you'll see that there are some technical issues with proving this point rigorously. From what I've read, Birkhoff's theorem seems to probably be able to be generalized to include a cosmological constant, but I could be incorrect in this. I should also add that "dark matter" concentrations could, in principle, cause "mysterious effects", because we can't see "dark matter" (by defintion). Any such dark-matter effects must be tiny, there are not a lot of confirmed anomilies in the motions of planets in the solar system. (The pioneer anamoly is a potentially important counter-example, however people are still looking at the issue). Note that the papers I cited previously in this thread take a somewhat different approach, but come to very similar conclusion. The biggest effect that anyone suggests using any of the various approaches is "parts in a septillion" Blog Entries: 1 Recognitions: Gold Member I'll start with the bad news but before I do, I will state an assumption. If you disagree with the assumption, then there is no need for you to read the rest of my post. Assumption: Hubble flow occurs at all scales. 
Under this assumption, after all other effects are accounted for, the Moon should be receding from the Earth at a speed of 2.75 cm/yr as calculated by the OP. However, I don't think you can measure this recession, for the following reason. Consider an experiment carried out in a remote region of space where forces are nil. Place two neutrons at rest with respect to each other and at a distance from each other equal to the radius of the Moon's orbit. Come back a year later and see where they've gotten to. In my opinion, they will not have moved. My reason is that in order to make it seem that they are at rest wrt each other at the outset, you must actually give them a velocity toward each other of 2.75 cm/yr in order to cancel out the Hubble flow. When you let them go, they will continue to have a velocity that will exactly cancel out the Hubble flow, and so the experiment will fail.

Now for the good news. I got this idea from the Cooperstock paper, although I did not actually read much of it. The idea is to measure not velocity, but acceleration. Cooperstock essentially starts with a somewhat stationary pair of objects and waits billions of years until Hubble flow takes them far enough apart from each other to measure the displacement effects of the acceleration. I propose to start with objects that are initially moving very fast wrt each other so that they reach that distance much sooner than billions of years. In fact 80 minutes should suffice.

Here is what to do. Place a particle accelerator in a remote region of space, and a particle detector 10 AU away from it (80 light minutes, roughly the distance from the Sun to Saturn). Keep them at rest wrt each other so that their relative velocity exactly offsets Hubble flow. Send an electron at nearly the speed of light from the accelerator to the detector. Measure the speed of the particle at the detector. If the assumption is correct and if the Hubble constant is 70 (km/s)/Mpc, then there should be an acceleration equal to 1 part in 10^14:

$$70\ \mathrm{(km/s)/Mpc} \times 10\ \mathrm{AU} \times 5 \times 10^{-12}\ \mathrm{Mpc/AU} = 3.5 \times 10^{-9}\ \mathrm{km/s}$$

$$c = 3 \times 10^5\ \mathrm{km/s}$$

This experiment is not practical at the present time. However, it does not require such fantastic resources as the galaxy-sized accelerators needed for string theory. It is reasonable to think that such an experiment could possibly be carried out within the 21st century. My thanks to Dr. Yen-Ting Lin of Princeton University and the Pontifical Catholic University of Chile for help with the equations. I should point out that some of the parameters are variable. For instance, by using a particle of speed 10^-4 * c, you only need find a difference in velocity of 1 part in 10^10. But then the experiment would take 8 * 10^5 minutes, or 1.5 years, to complete.

Recognitions: Science Advisor Staff Emeritus

My $0.02, again: I think you are insisting on taking the hard approach to the problem, after I've attempted to point out that there is an easier way. But as long as you get the same answers in the end, it doesn't necessarily matter.

Quote: Under this assumption, after all other effects are accounted for, the Moon should be receding from the Earth at a speed of 2.75 cm/yr as calculated by the OP. However, I don't think you can measure this recession, for the following reason. Consider an experiment carried out in a remote region of space where forces are nil. Place two neutrons at rest with respect to each other and at a distance from each other equal to the radius of the Moon's orbit.
Come back a year later and see where they've gotten to. In my opinion, they will not have moved.

You are basically close to being on the right track here, except that there is something that you have not considered: a(t) is not a truly linear function of time. If a(t) were linear, you'd be correct. The next level of approximation is to see what happens when a(t) is modelled with a linear term and a square law term. If assuming that a(t) is linear gives no effect, you need to carry out the analysis to the next order to find out what the actual effect will be.

Exercise: calculate what happens between the neutrons if a(t) is not linear. You should find that they accelerate towards each other if the second derivative of the scale factor d^2a/dt^2 < 0, and away from each other if the second derivative is positive.

Exercise: a(t) satisfies the Friedmann equation. A convenient form of this is, using currently accepted values for the various constants http://www.astro.ucla.edu/~wright/Distances_details.gif http://en.wikipedia.org/wiki/Friedmann_equations

[tex] da(t)/dt = H0 \sqrt{\Omega_m/a + \Omega_{vac}a^2} [/tex]

also written as

[tex] [da(t)/dt] / a(t) = H0 \sqrt{\Omega_m/a^3 + \Omega_{vac}} [/tex]

You can see that these are equivalent. Here a(t) is the changing scale factor of the universe and H0 is the current value of Hubble's constant. H(t) is, by definition, (da/dt) / a; H0 is the value of H(t) now. This gives you numbers that you can actually plug in to find the second derivative of a(t).

The simplest case to analyze is the one that Cooperstock also analyzes. This is the case with no cosmological constant. You can then assume $\Omega_{vac}=0$ and $\Omega_m=1$. Compare with http://xxx.lanl.gov/PS_cache/astro-p...03/9803097.pdf equation 3.1. You will note that in this model, with no cosmological constant, the neutrons accelerate towards each other as previously noted.

Exercise: compare this acceleration with the gravitational acceleration due to the matter density rho in the sphere between the two neutrons using Newton's law. (Matter in the hollow sphere outside the two neutrons won't contribute.) You'll need to find the matter density rho via the equation $\rho = 3H^2 / 8\pi G$ (this is the assumed "critical" value of matter density, equivalent to $\Omega_m=1$, the one that makes space flat, i.e. k=0). As a result of this calculation, you should then see that this provides exactly the same answer as the previous methods did.

If you wish to go beyond this "no-cosmological-constant" model, the values for the consensus Lambda-CDM model would be $\Omega_{m}$, the contribution due to matter, = .27 and $\Omega_{vac}$, the contribution of the vacuum, = .73. I've decided in retrospect to snip discussion of this - better to get one point through than to lose it in a sea of information.

Blog Entries: 1 Recognitions: Gold Member

Quote by pervect: there is something that you have not considered. a(t) is not a truly linear function of time. If a(t) were linear, you'd be correct.

Good point. Actually, it seems to me that I have only used the constant term, not even the linear one. And it seems that I have shown, at least to my own satisfaction, that the zero order effect cannot be measured even in principle. I wonder if the first and higher order effects are measurable in practice. That is, by staring at the Moon for a really long time to see if anything interesting happens. My uneducated guess is 'no'. Don't forget, my not so hidden agenda is to measure Hubble flow at small scales and in a practical way.
Your discussion raises, not lowers, my hopes. I also wonder whether, over the time scale of my proposed experiment (80 minutes), first and higher order effects are also ignorable. My experiment would look for an acceleration (due to the constant term in Hubble flow) on the order of 1 part in 10^14. If I take higher order effects of Hubble flow into account, would this alter the acceleration by an order of magnitude? If so, it could only enlarge it and make it easier to measure.

Recognitions: Science Advisor Staff Emeritus

The free-space neutron-neutron acceleration value has been calculated - it's unfortunately too tiny to measure. See 3.1 of the Cooperstock paper, which puts it at 3*10^-47 m/s^2 for two negligible mass test particles 1 AU apart. Unfortunately, I think one needs a fair knowledge of relativity to perform the calculations I suggested - one at least needs to know how to calculate the conserved quantities for geodesic motion in the FRW metric. Because the metric is space translation invariant, there will be conserved momenta. The formula itself is not horribly complex. The FRW metric

ds^2 = -dt^2 + a(t)^2 (dx^2 + dy^2 + dz^2)

leads to the following conserved quantity:

a^2 dx/dtau = a^2 vx / sqrt(1 - a^2(vx^2+vy^2+vz^2))

where vx = dx/dt, vy = dy/dt, vz = dz/dt. This conserved quantity is the conserved x component of the momentum. However, it takes considerable knowledge to justify it. (The form of the y and z components should be reasonably obvious from the above.)
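pervect's Newtonian exercise can also be checked numerically. Here is a minimal sketch (assumed values: $H_0 = 70$ km/s/Mpc, separation $d = 1$ AU, and the matter-only model $\Omega_m = 1$, $\Omega_{vac} = 0$):

```python
# Newtonian check of the FRW deceleration in the matter-only model.
from math import pi

G = 6.674e-11                   # m^3 kg^-1 s^-2
H0 = 70.0 * 1e3 / 3.0857e22     # 70 km/s/Mpc converted to s^-1
d = 1.496e11                    # 1 AU in metres

rho = 3 * H0 ** 2 / (8 * pi * G)   # critical density (Omega_m = 1)

# Newton: pull of the matter in the sphere between the two particles,
# a = G * (4/3 pi d^3 rho) / d^2 = (4 pi G / 3) rho d
a_newton = (4 * pi * G / 3) * rho * d

# FRW: for Omega_m = 1 the acceleration equation gives
# (d^2a/dt^2)/a = -H0^2/2 today, so the relative acceleration is H0^2 d / 2.
a_frw = H0 ** 2 * d / 2

print(a_newton, a_frw)   # the two numbers coincide
```

Substituting $\rho = 3H^2/8\pi G$ into the Newtonian expression gives $H^2 d/2$ identically, which is exactly the agreement the exercise asks for.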
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9352361559867859, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/16199/is-there-a-newtons-third-law-for-the-em-field?answertab=oldest
# Is there a Newton's third law for the em field?

There is a momentum associated with the em field that ensures the conservation of total momentum for a system of interacting charges. Can the same be done in an analogous way to ensure Newton's third law is also true? -

– Qmechanic♦ Jan 21 at 9:57

## 1 Answer

Well, Newton's third law is just conservation of momentum... $$F_1 = -F_2 \Rightarrow \frac{dp_1}{dt} + \frac{dp_2}{dt} = 0$$ -

1 The point is that Newton's third law fails for point charges in motion with a magnetic interaction. – Ron Maimon Oct 25 '11 at 23:21

But conservation of momentum doesn't fail in general (I can't think of an example of failure now), does it? – fénix Oct 25 '11 at 23:32

2 Conservation of momentum works, Newton's third law fails, because there is field momentum. This can lead to paradoxical forces even when the field is not radiating. – Ron Maimon Oct 25 '11 at 23:38
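To make Ron Maimon's point concrete, here is a minimal numeric sketch (the charges, speeds, and geometry are invented for illustration; it uses only the low-velocity field expression $\mathbf{B} = \frac{\mu_0}{4\pi}\, q\, \mathbf{v}\times\hat{\mathbf{r}}/r^2$): two point charges in instantaneous perpendicular motion exert magnetic forces on each other that are not equal and opposite, so momentum balance requires field momentum.

```python
# Magnetic forces between two slowly moving point charges:
# F_on_2 != -F_on_1, so Newton's third law fails for the particles alone.
import numpy as np

MU0_OVER_4PI = 1e-7                  # SI
q = 1e-6                             # both charges, coulombs (illustrative)
v1 = np.array([1e3, 0.0, 0.0])       # charge 1 at the origin, moving along x
v2 = np.array([0.0, 1e3, 0.0])       # charge 2 one metre away, moving along y
r12 = np.array([1.0, 0.0, 0.0])      # displacement from charge 1 to charge 2

def B_field(q_src, v_src, r):
    """Low-velocity magnetic field of a moving point charge at displacement r."""
    rn = np.linalg.norm(r)
    return MU0_OVER_4PI * q_src * np.cross(v_src, r) / rn ** 3

F_on_2 = q * np.cross(v2, B_field(q, v1, r12))    # zero: v1 is parallel to r12
F_on_1 = q * np.cross(v1, B_field(q, v2, -r12))   # nonzero, along -y
print(F_on_2, F_on_1)
```

(To lowest order the electric forces here are still equal and opposite; it is the magnetic pair that is unbalanced, and the missing momentum sits in the electromagnetic field.)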
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9313371777534485, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/14406/do-your-friends-on-average-have-more-friends-than-you-do
# Do your friends on average have more friends than you do?

I was watching this TED talk, which suggested that on average your friends tend to individually have more friends than you do. To define this more formally, we are comparing the average number of friends with:

````average over each person p of: friend popularity, defined as: average over each friend f of p: number of friends f has
````

Intuitively, this seems to make sense. After all, if someone has a high number of friends, they will tend to increase friend popularity and affect a high number of people, while those people who decrease friend popularity only affect a low number of people. Does this result hold for all graphs? Given a person `p`, let `t` stand for:

````sum over each friend f of p: number of friends f has
````

It is pretty clear that `sum(t)=sum(f^2)`, as a person with `f` friends contributes `f` to the `t`-value of each of their `f` friends. We are then trying to determine whether: `sum(t/f)>sum(f)` holds for all graphs. -

2 As I recall, there was an informal discussion of something like this in one of Gladwell's books... that being said, this is probably more sociological than mathematical. – J. M. Dec 15 '10 at 11:46

4 @J.M. I have provided an exact graph theory question that should be answerable mathematically. Does `sum(t/f)>sum(f)` hold for all graphs or is there a counter example? – Casebash Dec 15 '10 at 11:49

## 2 Answers

The answer is yes, this holds for any graph (with weak inequality, as Jon points out). Let's set up some notation. The graph of friendships is $G$. The set of vertices of $G$ (the people) is $V$; the set of edges (the friendships) is $E$. For a person $v$, the number of friends that person has is $\deg v$. The total number of people is $n$. We want to show that $$\frac{1}{n} \sum_{v \in V} \deg v \leq \frac{1}{n} \sum_{v \in V} \frac{1}{\deg v} \sum_{(u,v) \in E} \deg(u).$$ Cancel the $1/n$'s from both sides. After a little rewriting, we want to show that $$\sum_{v \in V} \sum_{(u,v) \in E} 1 \leq \sum_{v \in V} \sum_{(u,v) \in E} \frac{\deg u}{\deg v}. \quad (*)$$ Let's consider what a given edge $(u,v)$ contributes to each side of $(*)$. On the left, it contributes $1+1=2$. On the right, it contributes $(\deg u)/(\deg v) + (\deg v)/(\deg u)$. For any two positive numbers $x$ and $y$, we have $2 \leq x/y+y/x$. So every edge contributes at least as much to the right hand side of $(*)$ as to the left, and we have the claimed result. -

3 By the way, a standard heuristic for this sort of problem: Try to get all your sums to be over the same index set, even at the expense of doing something like replacing $\deg v$ by $\sum_{(u,v) \in E} 1$. – David Speyer Dec 15 '10 at 14:17

1 That's a very nice proof. So the core observation is that each edge contributes 2 to the total number of friends, but `deg u/deg v+deg v/deg u` to the number obtained by adding each person's friend popularity. Also, it's worthwhile noting that `2≤x/y+y/x` is easily proved using either calculus or the AM-GM inequality, just in case anyone doesn't know that – Casebash Dec 15 '10 at 22:07

To clarify my previous comment: `deg u` and `deg v` are the degrees of the vertices after the graph has been completed.
That is probably why I didn't spot this solution - you consider the effect that each edge being added separately has at the end, rather than the effect right now. – Casebash Jul 25 '12 at 14:57 Trivial comment, but I'm new so I have to post it as an answer: The strict inequality definitely doesn't hold in general, since the sums are equal for regular graphs. -
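The inequality $(*)$ is also easy to test empirically. Here is a minimal sketch in plain Python (the random graph and its parameters are arbitrary assumptions of this illustration):

```python
# Empirical check: average friend-popularity >= average number of friends.
import random

random.seed(1)
n, p = 200, 0.03
nbrs = [[] for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        if random.random() < p:
            nbrs[i].append(j)
            nbrs[j].append(i)

# Skip isolated vertices: their "average friends of friends" is undefined.
people = [v for v in range(n) if nbrs[v]]
avg_friends = sum(len(nbrs[v]) for v in people) / len(people)
avg_popularity = sum(
    sum(len(nbrs[u]) for u in nbrs[v]) / len(nbrs[v]) for v in people
) / len(people)

print(avg_friends, avg_popularity)  # second >= first (equal on regular graphs)
```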
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9595929980278015, "perplexity_flag": "head"}
http://mathoverflow.net/questions/32320/what-is-the-relationship-between-translation-and-time-complexity/32328
## What is the relationship between “translation” and time complexity?

Consider the problem of deciding a language $L$; for concreteness, say that this is the graph isomorphism problem. That is, $L$ consists of pairs of graphs $(G, H)$ such that $G\simeq H$. Now the time complexity of deciding this problem as stated depends on how the graphs are encoded. For example, if one were to have a "canonical" encoding of graphs (such that encoding strings are in bijective correspondence with isomorphism classes of graphs) the problem would be $O(n)$, as we could decide whether $G\simeq H$ simply by comparing the string representing $G$ to the string representing $H$. On the other hand, if we represent a graph via its adjacency matrix, the best known algorithm (according to Wikipedia) gives only a subfactorial bound.

Now consider the time complexity of converting from one language to another. Let $T_1, T_2$ be the time complexity of deciding languages $L_1$ and $L_2$ respectively, and let $T_{ij}$ be the time it takes a Turing machine to take a string $S$ and output another string $S'$ which is in language $j$ if and only if $S$ is in language $i$. We have $$T_1\leq T_2+T_{12}$$ $$T_2\leq T_1+ T_{21}$$ as given a string that we want to test for its belonging to $L_i$, we may run it through the translation $L_i \to L_j$ and then decide language $j$. Indeed, this is a special case of a trivial "triangle inequality" for translation; the time it takes to translate from $L_1$ to $L_2$ plus the time it takes to translate from $L_2$ to $L_3$ is greater than or equal to the time it takes to translate from $L_1$ to $L_3$. (I say it is a special case because a decision problem is the same as converting a language $L$ to the language $\{ 1 \}$.)

What I want to know is: Can we better quantify the relationship between the time complexity of a decision problem and the nature of the encoding? So that this question is not prohibitively vague, let us say that I am looking for (1) related references, and (2) a measure of the complexity of an encoding which more tightly relates to the time complexity of the "underlying" decision problem.

Added (7/19/2010): The answers below, particularly Ryan Williams' excellent survey of the dependence of the time complexity of various problems on their encoding, get at the motivation to my question but not at my question itself. In particular, it's clear that every problem may be re-encoded to allow (say) $O(\log n)$ time complexity, by padding. My question is whether there's a reasonable way to measure this dependence. For example, say the decision problem for $L_1$ is reducible to the decision problem for $L_2$, and vice versa, so that $L_1$ and $L_2$ in some sense represent the same problem. Is there a way to formalize this last statement (about "representing the same problem")? I am imagining, for example, a measure $C_i$ of the complexity of a language so that if $T_i$ is the time complexity of the language, and $L_1$ and $L_2$ are, say, easily reducible to one another, then $T_1/C_1\sim T_2/C_2$. (Of course $C_i=T_i$ works, but ideally $C_i$ would be somehow a property of the language, rather than the decision problem.) This is unfortunately becoming quite speculative, so again, related references would be a great answer. -

Usually, I think people say L1 and L2 represent the same problem precisely when there are sufficiently efficient reductions back and forth between them.
(Whatever "sufficiently efficient" means depends on what you want to do with the problems.) It sounds like your idea (of taking the ratio of time over some other complexity measure) could be a definition for some other complexity measure. I haven't seen such a concept. I'm not sure why you distinguish between "decision problem" and "language"; for all intents and purposes these are the same. (I certainly have been using them interchangably.) – Ryan Williams Jul 19 2010 at 21:59 @Ryan Williams: I'm asking if such a complexity measure exists. And as far as I can tell I am using the terms "decision problem" and "language" interchangeably; I think I'm (unfortunately) using the word "problem" to refer to a class of languages which "represent the same thing" (again, I'm being vague because the question is asking if such a concept exists). – Daniel Litt Jul 20 2010 at 0:50 @Daniel: I was referring to your phrase: "ideally C_i would be somehow a property of the language, rather than the decision problem." I've updated my answer to include a reference that you may find interesting. – Ryan Williams Jul 20 2010 at 3:48 @Ryan: Oh, good call. I am being imprecise; what I mean is that ideally the word "Turing machine" should not appear in the definition of $C_i$. – Daniel Litt Jul 20 2010 at 13:06 4 Answers When it comes to the time complexity of problems, the encoding of the problem can be totally crucial. In general, the encoding of the problem cannot be separated from the complexity of the problem itself. The first canonical example of this (as mentioned before in answering another question) can be seen with the following two problems: (1) Given a deterministic Turing machine $M$, string $x$, and integer $k$ written in binary, does $M$ accept $x$ within $k$ steps? Problem (1) is $EXPTIME$-complete. However the following problem is $P$-complete: (2) Given a deterministic Turing machine $M$, string $x$, and integer $k$ written as a string of $k$ ones, does $M$ accept $x$ within $k$ steps? So already, the way in which $k$ is represented in an instance completely determines the complexity of the problem. (Note if I wrote $k$ in ternary, $4$-ary, etc., problem (1) remains $EXPTIME$-complete.) Another interesting example comes from circuit complexity. Consider the following two problems: (3) Given a truth table of $2^n$ bits for a function $f:$ {$0,1$}$^n \rightarrow${$0,1$}, return a circuit with AND/OR/NOT gates that computes $f$ and contains a minimum number of gates. (4) Given a function $f:$ {$0,1$}$^n \rightarrow${$0,1$} represented as a circuit with AND/OR/NOT gates, return a circuit that also computes $f$ and contains a minimum number of gates. Problem (3) can be easily seen to be in $NP$, since the minimum circuit for $f$ needs at most $O(2^n/n)$ gates, and checking that a given circuit works for $f$ takes $2^{O(n)}$ steps. However (3) is not known to be in $P$, nor is it clear that it's $NP$-complete. The curious status of (3) is discussed in Valentine Kabanets, Jin-yi Cai: Circuit minimization problem. STOC 2000: 73-79 What about problem (4)? It is not known to be in $NP$! It is known to be in $\Sigma_2 P$ of the polynomial time hierarchy, but not known to be complete for that class. However the version where you use the representation of formulas instead of circuits is known to be $\Sigma_2 P$-complete under Turing reductions: David Buchfuhrer, Christopher Umans: The Complexity of Boolean Formula Minimization. 
ICALP (1) 2008: 24-35

Examples of this sort are everywhere in complexity theory, simply because the encoding can really matter if the relative sizes of encodings (or the complexities of encodings) are different enough. Luckily, most "natural" encodings (for which there are polynomial time mappings from one encoding to another) do not seem to affect the overall complexity of a problem (e.g. whether or not a problem is in $NP$). This is another reason why the notion of polynomial time is one of the main focuses in complexity. It is a "robust" notion that isn't affected by whether you use e.g. adjacency lists versus adjacency matrices to represent a graph in your graph problem. Related to this, there is a recent and thought-provoking reference that outlines a complexity theory for succinctly represented graphs (graphs whose adjacency matrices are the truth tables of small size circuits): Sanjeev Arora, David Steurer, Avi Wigderson: Towards a Study of Low-Complexity Graphs. ICALP (1) 2009: 119-131

Finally, concerning your proposed "isomorphism-respecting" encoding of graphs: while it would be very neat to have, it would not be considered natural, since we don't know how to efficiently obtain such an encoding from any of the other encodings that have already been deemed natural.

UPDATE TO ADDRESS YOUR REVISED QUESTION: I think it is a neat idea to try to study "problems" as classes of languages that "represent the same thing" in some strong sense. I'm not aware of significant prior work on this (other than the cheap reply that "all NP-complete problems represent the same thing", which I don't think is what you are driving at). The closest reference I can think of is a related attempt to define "algorithm" in a similar way. See Blass, Dershowitz, and Gurevich's cool paper: http://research.microsoft.com/en-us/um/people/gurevich/Opera/192.pdf -

Probably the question becomes less interesting this way, but I’d go as far as to say that the encoding is itself part of the statement of the problem (particularly when we describe decision problems as languages). – Antonio E. Porreca Jul 19 2010 at 16:06

@Antonio: That is one way of defining the question away, but it seems to run counter to the philosophy of "reducing" one problem to another. – Daniel Litt Jul 19 2010 at 16:10

1 Indeed, while the encoding of a problem matters, we'd rather have that it doesn't matter, as much as that is possible. Sometimes this is unavoidable (e.g. in algorithms for planar graphs, using an adjacency matrix kills your linear time algorithm). It would be very annoying if the "true" complexity of Boolean satisfiability (say) depended on the precise manner in which you encoded formulas. The reason that it doesn't (up to "natural" encodings) is due to the robustness of the definition of NP. One encoding may put it in NTIME[n] or another in NTIME[n^2], but it's still NP. – Ryan Williams Jul 19 2010 at 17:13

Interesting paper! I think a similar philosophical argument can be made against the existence of a useful equivalence relation on languages "representing the same problem." This doesn't of course rule out a more quantitative complexity measure. – Daniel Litt Jul 20 2010 at 13:18

The general abstract setting for the issue driving your question is the notion of reduction of equivalence relations.
The idea of this is that one equivalence relation $E$ reduces to another $F$ with respect to some complexity concept if there is a function $f$ in this class such that

• $x\, E\, y$ if and only if $f(x)\, F\, f(y)$

You can imagine that $E$ is the equivalence relation arising from one way of representing mathematical objects (graphs, algebraic structures, whatever) and $F$ is the relation corresponding to an alternative method. The reduction is saying that equivalence with respect to the $E$ way of representing the objects is no more difficult than equivalence with respect to the $F$ way of representing them. I claim that understanding this reducibility relation amounts to understanding exactly what your question is aimed at, the question of how one manner of representing the same objects can be simpler than another. More generally, this reducibility relation provides a very precise way to understand what it means to say that one classification problem is strictly harder than another, even when the objects in the two cases seem totally unrelated at first.

In the case you seem most interested in, you could regard $E$ and $F$ as NP equivalence relations and insist that $f$ is polynomial time computable. This is a case that has been recently investigated by Sy Friedman, and this MO question arose out of a talk he gave on this topic here in New York, and discusses as motivation some of the relevant general theory. This appears to be a completely new research area, ripe for progress. I would encourage anyone to enter into it.

Much of that theory is inspired by the enormous successes of the much more developed instance of this concept, occurring when $E$ and $F$ are Borel relations on the reals and $f$ is a Borel function. This case is the emerging-but-possibly-now-mature field of Borel equivalence relation theory (see Greg Hjorth's survey article and Simon Thomas' notes). Borel equivalence relation theory has to deal explicitly with the Borel analogues of the precise issues you mention in your question, and has made huge illuminating progress in understanding the structure of Borel equivalence relations under Borel reducibility. I mention some of the basic results in this MO answer. In general, for each notion of complexity, the goal is to study the whole hierarchy of equivalence relations, to discover its features and general structural results. The Borel case is quite well developed by now, exhibiting many fascinating features, but the NP case is much less well developed. I do know personally, however, that other researchers are working on several other natural contexts of this idea. -

Excellent! Do you know any preprints/papers on the NP case? – Daniel Litt Jul 22 2010 at 3:55

You can look at Sy Friedman's web page at logic.univie.ac.at/~sdf and also in particular at this preprint: logic.univie.ac.at/~sdf/papers/… – Joel David Hamkins Jul 22 2010 at 4:10

A legit encoding would give two different codings for two different objects. The problem of, for instance, graph isomorphism only makes sense if one considers two isomorphic graphs to be different. There are tons of problems for which changing the encoding leads to a classification in a smaller complexity class. Take for example (Garey & Johnson, p. 159) LINEAR DIVISIBILITY, which is, given integers $a$ and $c$, the problem of answering $(\exists x)[ax+1 \mid c]$. This problem is $\gamma$-complete, but trivially in P if the inputs are given in unary.
G&J add "[The] supposed intractability [of LINEAR DIVISIBILITY] depends heavily on the convention that numbers be represented by strings having length logarithmic in their magnitudes." You should in particular check the notion of "pseudo-polynomial time" and section 4.2 of Garey & Johnson. Hope this helps. - Thanks! This is sort of a restatement of the motivation for the question; but I will take a look at the text you mention. – Daniel Litt Jul 17 2010 at 23:21 Your question reminded me of matroid problems. With these it is of great importance to specify how the input is given, as translating between input forms can increase the size of the input exponentially. There is a survey of this issue here: http://arxiv.org/abs/math/0702567 -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 101, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9421207308769226, "perplexity_flag": "head"}
http://psychology.wikia.com/wiki/Likelihood
Likelihood function

In statistics, a likelihood function (often simply the likelihood) is a function of the parameters of a statistical model, defined as follows: the likelihood of a set of parameter values given some observed outcomes is equal to the probability of those observed outcomes given those parameter values. Likelihood functions play a key role in statistical inference, especially methods of estimating a parameter from a set of statistics.

In non-technical parlance, "likelihood" is usually a synonym for "probability" but in statistical usage, a clear technical distinction is made. One may ask "If I were to flip a fair coin 100 times, what is the probability of it landing heads-up every time?" or "Given that I have flipped a coin 100 times and it has landed heads-up 100 times, what is the likelihood that the coin is fair?" but it would be improper to switch "likelihood" and "probability" in the two sentences.

If a probability distribution depends on a parameter, one may on one hand consider—for a given value of the parameter—the probability (density) of the different outcomes, and on the other hand consider—for a given outcome—the probability (density) this outcome has occurred for different values of the parameter. The first approach interprets the probability distribution as a function of the outcome, given a fixed parameter value, while the second interprets it as a function of the parameter, given a fixed outcome. In the latter case the function is called the "likelihood function" of the parameter, and indicates how likely a parameter value is in light of the observed outcome.

Definition

For the definition of the likelihood function, one has to distinguish between discrete and continuous probability distributions.

Discrete probability distribution

Let X be a random variable with a discrete probability distribution p depending on a parameter θ. Then the function $\mathcal{L}(\theta |x) = p_\theta (x) = P_\theta (X=x), \,$ considered as a function of θ, is called the likelihood function (of θ, given the outcome x of X). Sometimes the probability of the value x of X for the parameter value θ is written as $P(X=x|\theta)$, but it should not be considered as a conditional probability.

Continuous probability distribution

Let X be a random variable with a continuous probability distribution with density function f depending on a parameter θ. Then the function $\mathcal{L}(\theta |x) = f_{\theta} (x), \,$ considered as a function of θ, is called the likelihood function (of θ, given the outcome x of X). Sometimes the density function for the value x of X for the parameter value θ is written as $f(x|\theta)$, but it should not be considered as a conditional probability density.

The actual value of a likelihood function bears no meaning. Its use lies in comparing one value with another. E.g., one value of the parameter may be more likely than another, given the outcome of the sample. Or a specific value will be most likely: the maximum likelihood estimate. Comparison may also be performed in considering the quotient of two likelihood values.
That's why generally, $\mathcal{L}(\theta |x)$ is permitted to be any positive multiple of the above defined function $\mathcal{L}$. More precisely, then, a likelihood function is any representative from an equivalence class of functions, $\mathcal{L} \in \left\lbrace \alpha \; P_\theta: \alpha > 0 \right\rbrace, \,$ where the constant of proportionality α > 0 is not permitted to depend upon θ, and is required to be the same for all likelihood functions used in any one comparison. In particular, the numerical value $\mathcal{L}$(θ | x) alone is immaterial; all that matters are maximum values of $\mathcal{L}$, or likelihood ratios, such as those of the form $\frac{\mathcal{L}(\theta_2 | x)}{\mathcal{L}(\theta_1 | x)} = \frac{\alpha P(X=x|\theta_2)}{\alpha P(X=x|\theta_1)} = \frac{P(X=x|\theta_2)}{P(X=x|\theta_1)},$ that are invariant with respect to the constant of proportionality α. A. W. F. Edwards defined support to be the natural logarithm of the likelihood ratio, and the support function as the natural logarithm of the likelihood function (the same as the log-likelihood; see below).[1] However, there is potential for confusion with the mathematical meaning of 'support', and this terminology is not widely used outside Edwards' main applied field of phylogenetics. For more about making inferences via likelihood functions, see also the method of maximum likelihood, and likelihood-ratio testing. Log-likelihood For many applications involving likelihood functions, it is more convenient to work in terms of the natural logarithm of the likelihood function, called the log-likelihood, than in terms of the likelihood function itself. Because the logarithm is a monotonically increasing function, the logarithm of a function achieves its maximum value at the same points as the function itself, and hence the log-likelihood can be used in place of the likelihood in maximum likelihood estimation and related techniques. Finding the maximum of a function often involves taking the derivative of a function and solving for the parameter being maximized, and this is often easier when the function being maximized is a log-likelihood rather than the original likelihood function. For example, some likelihood functions are for the parameters that explain a collection of statistically independent observations. In such a situation, the likelihood function factors into a product of individual likelihood functions. The logarithm of this product is a sum of individual logarithms, and the derivative of a sum of terms is often easier to compute than the derivative of a product. In addition, several common distributions have likelihood functions that contain products of factors involving exponentiation. The logarithm of such a function is a sum of products, again easier to differentiate than the original function. As an example, consider the gamma distribution, whose likelihood function is $\mathcal{L} (\alpha, \beta|x) = \frac{\beta^\alpha}{\Gamma(\alpha)} x^{\alpha-1} e^{-\beta x}$ and suppose we wish to find the maximum likelihood estimate of β for a single observed value x. This function looks rather daunting. Its logarithm, however, is much simpler to work with: $\log \mathcal{L}(\alpha,\beta|x) = \alpha \log \beta - \log \Gamma(\alpha) + (\alpha-1) \log x - \beta x. 
\,$ The partial derivative with respect to β is simply $\frac{\partial \log \mathcal{L}(\alpha,\beta|x)}{\partial \beta} = \frac{\alpha}{\beta} - x$ If there are a number of independent random samples x1,…,xn, then the joint log-likelihood will be the sum of individual log-likelihoods, and the derivative of this sum will be the sum of individual derivatives: $\frac{n \alpha}{\beta} - \sum_{i=1}^n x_i$ Setting that equal to zero and solving for β yields $\hat\beta = \frac{\alpha}{\bar{x}}$ where $\hat\beta$ denotes the maximum-likelihood estimate and $\bar{x} = \frac{1}{n} \sum_{i=1}^n x_i$ is the sample mean of the observations. Likelihood function of a parameterized model Among many applications, we consider here one of broad theoretical and practical importance. Given a parameterized family of probability density functions (or probability mass functions in the case of discrete distributions) $x\mapsto f(x\mid\theta), \!$ where θ is the parameter, the likelihood function is $\theta\mapsto f(x\mid\theta), \!$ written $\mathcal{L}(\theta \mid x)=f(x\mid\theta), \!$ where x is the observed outcome of an experiment. In other words, when f(x | θ) is viewed as a function of x with θ fixed, it is a probability density function, and when viewed as a function of θ with x fixed, it is a likelihood function. Note: This is not the same as the probability that those parameters are the right ones, given the observed sample. Attempting to interpret the likelihood of a hypothesis given observed evidence as the probability of the hypothesis is a common error, with potentially disastrous real-world consequences in medicine, engineering or jurisprudence. See prosecutor's fallacy for an example of this. From a geometric standpoint, if we consider f (x, θ) as a function of two variables then the family of probability distributions can be viewed as level curves parallel to the x-axis, while the family of likelihood functions are the orthogonal level curves parallel to the θ-axis. Likelihoods for continuous distributions The use of the probability density instead of a probability in specifying the likelihood function above may be justified in a simple way. Suppose that, instead of an exact observation, x, the observation is the value in a short interval (xj−1, xj), with length Δj, where the subscripts refer to a predefined set of intervals. Then the probability of getting this observation (of being in interval j) is approximately $\mathcal{L}_\text{approx}(\theta \mid x \text{ in interval } j) = f(x_{*}\mid\theta) \Delta_j, \!$ where x* can be any point in interval j. Then, recalling that the likelihood function is defined up to a multiplicative constant, it is just as valid to say that the likelihood function is approximately $\mathcal{L}_\text{approx}(\theta \mid x \text{ in interval } j)= f(x_{*}\mid\theta), \!$ and then, on considering the lengths of the intervals to decrease to zero, $\mathcal{L}(\theta \mid x )= f(x\mid\theta). \!$ Likelihoods for mixed continuous–discrete distributions The above can be extended in a simple way to allow consideration of distributions which contain both discrete and continuous components. Suppose that the distribution consists of a number of discrete probability masses pk(θ) and a density f(x | θ), where the sum of all the p's added to the integral of f is always one. 
Assuming that it is possible to distinguish an observation corresponding to one of the discrete probability masses from one which corresponds to the density component, the likelihood function for an observation from the continuous component can be dealt with as above by setting the interval length short enough to exclude any of the discrete masses. For an observation from the discrete component, the probability can either be written down directly or treated within the above context by saying that the probability of getting an observation in an interval that does contain a discrete component (of being in interval j which contains discrete component k) is approximately $\mathcal{L}_\text{approx}(\theta \mid x \text{ in interval } j \text{ containing discrete mass } k)=p_k(\theta) + f(x_{*}\mid\theta) \Delta_j, \!$ where $x_{*}\$ can be any point in interval j. Then, on considering the lengths of the intervals to decrease to zero, the likelihood function for a observation from the discrete component is $\mathcal{L}(\theta \mid x )= p_k(\theta), \!$ where k is the index of the discrete probability mass corresponding to observation x. The fact that the likelihood function can be defined in a way that includes contributions that are not commensurate (the density and the probability mass) arises from the way in which the likelihood function is defined up to a constant of proportionality, where this "constant" can change with the observation x, but not with the parameter θ. Example 1 Let $p_\text{H}$ be the probability that a certain coin lands heads up (H) when tossed. So, the probability of getting two heads in two tosses (HH) is $p_\text{H}^2$. If $p_\text{H} = 0.5$, then the probability of seeing two heads is 0.25. In symbols, we can say the above as: $P(\text{HH} | p_\text{H}=0.5) = 0.25.$ Another way of saying this is to reverse it and say that "the likelihood that $p_\text{H} = 0.5$, given the observation HH, is 0.25"; that is: $\mathcal{L}(p_\text{H}=0.5 | \text{HH}) = P(\text{HH} | p_\text{H}=0.5) = 0.25.$ But this is not the same as saying that the probability that $p_\text{H} = 0.5$, given the observation HH, is 0.25. Notice that the likelihood that $p_\text{H} = 1$, given the observation HH, is 1. But it is clearly not true that the probability that $p_\text{H} = 1$, given the observation HH, is 1. Two heads in a row hardly proves that the coin always comes up heads. In fact, two heads in a row is possible for any $p_\text{H} > 0$. The likelihood function is not a probability density function. Notice that the integral of a likelihood function is not in general 1. In this example, the integral of the likelihood over the interval [0, 1] in $p_\text{H}$ is 1/3, demonstrating that the likelihood function cannot be interpreted as a probability density function for $p_\text{H}$. Example 2 Main article: German tank problem Consider a jar containing N lottery tickets numbered from 1 through N. If you pick a ticket randomly then you get positive integer n, with probability 1/N if n ≤ N and with probability zero if n > N. This can be written $P(n|N)= \frac{[n \le N]}{N}$ where the Iverson bracket [n ≤ N] is 1 when n ≤ N and 0 otherwise. When considered a function of n for fixed N this is the probability distribution, but when considered a function of N for fixed n this is a likelihood function. The maximum likelihood estimate for N is N0 = n (by contrast, the unbiased estimate is 2n − 1). 
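As a quick numeric check of this maximum, here is a minimal sketch in Python (the observed ticket number $n = 60$ and the search bound are arbitrary assumptions of this illustration):

```python
# German tank likelihood L(N | n) = [n <= N] / N, maximized at N = n.
n = 60

def likelihood(N, n):
    return 1.0 / N if n <= N else 0.0

N_hat = max(range(1, 10000), key=lambda N: likelihood(N, n))
print(N_hat)   # 60, i.e. the maximum likelihood estimate N0 = n
```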
This likelihood function is not a probability distribution, because the total

$\sum_{N=1}^\infty P(n|N) = \sum_{N} \frac{[N \ge n]}{N} = \sum_{N=n}^\infty \frac{1}{N}$

is a divergent series.

Suppose, however, that you pick two tickets rather than one. The probability of the outcome {n1, n2}, where n1 < n2, is

$P(\{n_1,n_2\}|N)= \frac{[n_2 \le N]}{\binom N 2} .$

When considered as a function of N for fixed n2, this is a likelihood function. The maximum likelihood estimate for N is N0 = n2. This time the total

$\sum_{N=1}^\infty P(\{n_1,n_2\}|N) = \sum_{N} \frac{[N\ge n_2]}{\binom N 2} =\frac 2 {n_2-1}$

is a convergent series, and so this likelihood function can be normalized into a probability distribution. If you pick 3 or more tickets, the likelihood function has a well-defined mean value, which is larger than the maximum likelihood estimate. If you pick 4 or more tickets, the likelihood function has a well-defined standard deviation too.

Relative likelihood

Suppose that the maximum likelihood estimate for θ is $\hat \theta$. Relative plausibilities of other θ values may be found by comparing the likelihoods of those other values with the likelihood of $\hat \theta$. The relative likelihood of θ is defined[2] as $\mathcal{L}(\theta | x)/\mathcal{L}(\hat \theta | x)$. A 10% likelihood region for θ is

$\{\theta : \mathcal{L}(\theta | x)/\mathcal{L}(\hat \theta | x) \ge 0.10\},$

and more generally, a p% likelihood region for θ is defined[2] to be

$\{\theta : \mathcal{L}(\theta | x)/\mathcal{L}(\hat \theta | x) \ge p/100 \}.$

If θ is a single real parameter, a p% likelihood region will typically comprise an interval of real values. In that case, the region is called a likelihood interval.[2][3]

Likelihood intervals can be compared to confidence intervals. If θ is a single real parameter, then under certain conditions, a 14.7% likelihood interval for θ will be the same as a 95% confidence interval.[2] In a slightly different formulation suited to the use of log-likelihoods, the e−2 likelihood interval is the same as the 0.954 confidence interval (under certain conditions).[3]

The idea of basing an interval estimate on the relative likelihood goes back to Fisher in 1956 and has been used by many authors since then.[3] If a likelihood interval is specifically to be interpreted as a confidence interval, then this idea is immediately related to the likelihood ratio test, which can be used to define appropriate intervals for multivariate parameters. This approach can be used to define the critical points for the likelihood ratio statistic to achieve the required coverage level for a confidence interval. However, a likelihood interval can be used as such, having been determined in a well-defined way, without claiming any particular coverage probability.

Likelihoods that eliminate nuisance parameters

In many cases, the likelihood is a function of more than one parameter but interest focuses on the estimation of only one, or at most a few of them, with the others being considered as nuisance parameters. Several alternative approaches have been developed to eliminate such nuisance parameters so that a likelihood can be written as a function of only the parameter (or parameters) of interest; the main approaches are marginal, conditional and profile likelihoods.[4][5]

These approaches are useful because standard likelihood methods can become unreliable or fail entirely when there are many nuisance parameters or when the nuisance parameters are high-dimensional.
This is particularly true when the nuisance parameters can be considered to be "missing data": they represent a non-negligible fraction of the number of observations, and this fraction does not decrease when the sample size increases. Often these approaches can be used to derive closed-form formulae for statistical tests when direct use of maximum likelihood requires iterative numerical methods. These approaches find application in some specialized topics such as sequential analysis.

Conditional likelihood

Sometimes it is possible to find a sufficient statistic for the nuisance parameters, and conditioning on this statistic results in a likelihood which does not depend on the nuisance parameters. One example occurs in 2×2 tables, where conditioning on all four marginal totals leads to a conditional likelihood based on the non-central hypergeometric distribution. This form of conditioning is also the basis for Fisher's exact test.

Marginal likelihood

Main article: Marginal likelihood

Sometimes we can remove the nuisance parameters by considering a likelihood based on only part of the information in the data, for example by using the set of ranks rather than the numerical values. Another example occurs in linear mixed models, where considering a likelihood for the residuals only after fitting the fixed effects leads to residual maximum likelihood estimation of the variance components.

Profile likelihood

It is often possible to write some parameters as functions of other parameters, thereby reducing the number of independent parameters. (The function used is the value of the eliminated parameter that maximizes the likelihood given the values of the remaining parameters.) This procedure is called concentration of the parameters and results in the concentrated likelihood function, also occasionally known as the maximized likelihood function, but most often called the profile likelihood function.

For example, consider a regression analysis model with normally distributed errors. The most likely value of the error variance is the variance of the residuals. The residuals depend on all other parameters. Hence the variance parameter can be written as a function of the other parameters.

Unlike conditional and marginal likelihoods, profile likelihood methods can always be used, even when the profile likelihood cannot be written down explicitly. However, the profile likelihood is not a true likelihood, as it is not based directly on a probability distribution, and this leads to some less satisfactory properties. Attempts have been made to improve this, resulting in modified profile likelihood.

The idea of profile likelihood can also be used to compute confidence intervals that often have better small-sample properties than those based on asymptotic standard errors calculated from the full likelihood. In the case of parameter estimation in partially observed systems, the profile likelihood can also be used for identifiability analysis.[6] An implementation is available in the MATLAB toolbox PottersWheel.

Partial likelihood

A partial likelihood is a factor component of the likelihood function that isolates the parameters of interest.[7] It is a key component of the proportional hazards model.

Historical remarks

In English, "likelihood" has been distinguished as being related to but weaker than "probability" since its earliest uses.
The comparison of hypotheses by evaluating likelihoods has been used for centuries, for example by John Milton in Areopagitica: "when greatest likelihoods are brought that such things are truly and really in those persons to whom they are ascribed".

In Danish, "likelihood" was used by Thorvald N. Thiele in 1889.[8][9][10]

In English, "likelihood" appears in many writings by Charles Sanders Peirce, where model-based inference (usually abduction but sometimes including induction) is distinguished from statistical procedures based on objective randomization. Peirce's preference for randomization-based inference is discussed in "Illustrations of the Logic of Science" (1877–1878) and "A Theory of Probable Inference" (1883).

"probabilities that are strictly objective and at the same time very great, although they can never be absolutely conclusive, ought nevertheless to influence our preference for one hypothesis over another; but slight probabilities, even if objective, are not worth consideration; and merely subjective likelihoods should be disregarded altogether. For they are merely expressions of our preconceived notions" (7.227 in his Collected Papers).

"But experience must be our chart in economical navigation; and experience shows that likelihoods are treacherous guides. Nothing has caused so much waste of time and means, in all sorts of researches, as inquirers' becoming so wedded to certain likelihoods as to forget all the other factors of the economy of research; so that, unless it be very solidly grounded, likelihood is far better disregarded, or nearly so; and even when it seems solidly grounded, it should be proceeded upon with a cautious tread, with an eye to other considerations, and recollection of the disasters caused." (Essential Peirce, volume 2, pages 108–109)

Like Thiele, Peirce considers the likelihood for a binomial distribution. Peirce uses the logarithm of the odds-ratio throughout his career. Peirce's propensity for using the log odds is discussed by Stephen Stigler.[citation needed]

In Great Britain, "likelihood" was popularized in mathematical statistics by R.A. Fisher in 1922[11]: "On the mathematical foundations of theoretical statistics". In that paper, Fisher also uses the term "method of maximum likelihood". Fisher argues against inverse probability as a basis for statistical inferences, and instead proposes inferences based on likelihood functions. Fisher's use of "likelihood" fixed the terminology that is used by statisticians throughout the world.

Notes

1. ↑ Edwards, A.W.F. 1972. Likelihood. Cambridge University Press, Cambridge (expanded edition, 1992, Johns Hopkins University Press, Baltimore). ISBN 0-8018-4443-6
2. ↑ Kalbfleisch, J.G. (1985) Probability and Statistical Inference, Springer (§9.3).
3. ↑ (1971). Interval Estimation from the Likelihood Function. 33 (2): 256–262.
4. ↑ Pawitan, Yudi (2001). In All Likelihood: Statistical Modelling and Inference Using Likelihood, Oxford University Press.
5. ↑ Wen Hsiang Wei. Generalized linear model course notes. Tung Hai University, Taichung, Taiwan. URL accessed on 2007-01-23.
6. ↑ Raue, A. (2009). Structural and practical identifiability analysis of partially observed dynamical models by exploiting the profile likelihood. Bioinformatics 25 (15): 1923–9.
7. ↑ Cox, D. R. (1975). Partial likelihood. Biometrika 62 (2): 269–276.
8. ↑ Anders Hald (1998). A History of Mathematical Statistics from 1750 to 1930, New York: Wiley.
9. ↑ Steffen L. Lauritzen, Aspects of T. N.
Thiele's Contributions to Statistics, 58, 27–30, 1999.
10. ↑ Steffen L. Lauritzen (2002). Thiele: Pioneer in Statistics, 288, Oxford University Press.
11. ↑ Fisher, R.A. (1922). On the mathematical foundations of theoretical statistics. Philosophical Transactions of the Royal Society of London, Series A 222 (594–604): 309–368.

References

• John W. Pratt (May 1976). F. Y. Edgeworth and R. A. Fisher on the Efficiency of Maximum Likelihood Estimation. The Annals of Statistics 4 (3): 501–514. JSTOR 2958222.
• Stephen M. Stigler (1978). Francis Ysidro Edgeworth, Statistician. Journal of the Royal Statistical Society, Series A 141 (3): 287–322. JSTOR 2344804.
• Stephen M. Stigler. The History of Statistics: The Measurement of Uncertainty before 1900, Harvard University Press.
• Stephen M. Stigler. Statistics on the Table: The History of Statistical Concepts and Methods, Harvard University Press.
• Anders Hald (1999). On the History of Maximum Likelihood in Relation to Inverse Probability and Least Squares. Statistical Science 14 (2): 214–222. JSTOR 2676741.
• Hald, A. (1998). A History of Mathematical Statistics from 1750 to 1930, New York: Wiley.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 47, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8830546140670776, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/3605-graph-sketching-print.html
# Graph Sketching

• June 24th 2006, 05:51 AM macca101

Graph Sketching

OK Here we go....

To sketch the graph of $f(x)=\frac{4x-15}{x^2-9}$:

1. Find the domain. The denominator is 0 when $x= \pm3$, so the domain of $f$ is all of $R$ except for $\pm 3$.

2. Is the function odd or even? $f(2) = \frac{7}{5}$ and $f(-2)= \frac{-23}{-5} = \frac{23}{5}$. As $f(-2)$ is not equal to $\pm f(2)$, $f$ is neither odd nor even.

3. The intercepts. The x-intercept occurs when $4x-15=0$, i.e. at $x=\frac{15}{4}$. The y-intercept occurs at $f(0) = \frac{5}{3}$.

4. Construct a sign table. $f(x)=\frac{4x-15}{x^2-9}=\frac{4x-15}{(x-3)(x+3)}$. Having constructed a sign table (don't know how to create it in LaTeX), it gives me the following information to help in the sketching of the graph: $f$ is positive on the intervals $(-3,0)\,(0,3)\,(3,\infty)$ and negative on $(-\infty,-3)$.

5. The derivative of $f(x)$: $f'(x)= \frac{(x^2-9)4-(4x-15)2x}{(x^2-9)^2}$, so $f'(x)= \frac{(-4x+6)(x-6)}{(x^2-9)^2}$. $f(x)$ has two stationary points, at $x=\frac{3}{2}$ and $x=6$. As $f(\frac{3}{2})=\frac{4}{3}$ and $f(6)=\frac{1}{3}$, the stationary points are located at $(\frac{3}{2},\frac{4}{3})\,(6,\frac{1}{3})$. After applying the first derivative test, $(\frac{3}{2},\frac{4}{3})$ is found to be a local minimum and $(6,\frac{1}{3})$ is found to be a local maximum. Also, after constructing the sign table of $f'(x)$, the function is found to be decreasing on $(-\infty,-3)\,(-3,\frac{3}{2})\,(6,\infty)$ and increasing on $(\frac{3}{2},3)\,(3,6)$.

6. Asymptotes. As the denominator is 0 when $x= \pm3$, the lines $x=3$ and $x=-3$ are vertical asymptotes.

Now here are the questions:

1. Is all the above correct?
2. Does the graph have a horizontal asymptote?
3. Is there enough information to sketch the graph? (I've had a go but I am unable to reproduce my result here.)

Thanks

• June 24th 2006, 06:23 AM Jameson

Just taking a quick glance at your work it all seems good. To find the horizontal asymptotes, take $\lim_{x \to \infty}f(x)$ and $\lim_{x \to -\infty}f(x)$. You do have plenty of information to sketch the graph. You could take the second derivative though to test for concavity.

• June 24th 2006, 09:36 AM Soroban

Hello, macca101! You did a good job . . . one error.

Quote:

$f(x)\:=\:\frac{4x-15}{x^2-9}$

Domain: all $x \neq \pm 3$

Intercepts: $\left(\frac{15}{4},0\right),\;\;\left(0,\frac{5}{3}\right)$

Sign table: $f$ is positive on the intervals $(-3,0),\;(0,3),\;(3,\infty)$ . . . no
and negative on $(-\infty,-3)$

$f$ is negative on $\left(3,\frac{15}{4}\right)$ . . . then positive on $\left(\frac{15}{4},\infty\right)$

Quote:

Derivative of $f(x)$: $f(x)$ has stationary points at $\left(\frac{3}{2}, \frac{4}{3}\right),\;\left(6, \frac{1}{3}\right)$

First derivative test: $\left(\frac{3}{2},\frac{4}{3}\right)$ is found to be a local minimum and $\left(6,\frac{1}{3}\right)$ is found to be a local maximum.

Sign table of $f'(x)$: the function is found to be decreasing on $(-\infty,-3),\;\left(-3,\frac{3}{2}\right),\;(6,\infty)$ and increasing on $\left(\frac{3}{2},3\right),\;(3,6)$

Asymptotes: vertical asymptotes are $x = -3$ and $x = 3.$

For horizontal asymptotes, examine:
$\lim_{x\to\infty}f(x)$ and $\lim_{x\to-\infty}f(x)$

We have:

$\lim_{x\to\infty}\frac{4x - 15}{x^2 - 9} \;=\; \lim_{x\to\infty}\frac{\frac{1}{x^2}(4x - 15)}{\frac{1}{x^2}(x^2 - 9)} \;=\; \lim_{x\to\infty}\frac{\frac{4}{x} - \frac{15}{x^2}}{1 - \frac{9}{x^2}} \;=\; \frac{0 - 0}{1 - 0} \;=\; 0$

Hence, the horizontal asymptote is $y = 0$, the x-axis.

There is plenty of information (which you found) for sketching the graph.

Code:

```
                    :*    |    *:
                    :     |     :
                    : *   |   * :
                    :  *  |     :
                    :    *|  *  :          *
                    :     |*    :      *      *
                    :     |     :    *              *
      --------------:-----+-----:---*----------------------
        *           :     |     :  *
              *     :     |     :
                *   :     |     : *
                  * :     |     :
                    :     |     :
                   *:     |     :*
```

I'll let you label the intercepts, asymptotes and critical points.

• June 24th 2006, 10:12 AM macca101

Thanks for spotting my deliberate ;) mistake, Soroban. My graph looks very similar to the one you have produced. Amazing. Thanks again.
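As a cross-check of the calculus in this thread (a sketch added for illustration, not part of the original posts; SymPy is assumed), the derivative, stationary points, x-intercept and horizontal asymptote can all be verified symbolically:

```python
import sympy as sp

x = sp.symbols('x')
f = (4*x - 15) / (x**2 - 9)

fp = sp.together(sp.diff(f, x))          # the derivative as a single fraction
num, den = sp.fraction(fp)
print(sp.factor(num))                    # -2*(x - 6)*(2*x - 3), i.e. (-4x + 6)(x - 6)
print(sp.solve(sp.Eq(fp, 0), x))         # [3/2, 6] -- the stationary points
print(sp.solve(sp.Eq(f, 0), x))          # [15/4]   -- the x-intercept
print(sp.limit(f, x, sp.oo))             # 0        -- the horizontal asymptote y = 0
```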
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 59, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8940128684043884, "perplexity_flag": "middle"}
http://quant.stackexchange.com/questions/7293/american-option-price-formula-assuming-a-loglaplace-distribution?answertab=active
# American Option price formula assuming a logLaplace distribution?

My earlier question "What are $d_1$ and $d_2$ for Laplace?" may have been running before walking. When I tried to use the equations provided there, the pricing became extremely lopsided, with the calls routinely being double the puts. This is extremely unrealistic.

My guess was correct that a distribution closer to the ideal (whatever it is) would remove the volatility smile, proven here with European options assuming logLaplace (formula 1 page up): http://books.google.com/books?id=cb8B07hwULUC&pg=PA297&lpg=PA296#v=onepage&q&f=false

However, it looks like the fundamental assumptions, even going back to risk-neutrality, have to be changed, because all Black–Scholes pricing depends upon lognormality at some point, whatever the derivation. I've searched and searched, but I can't find anything that's worked out American option prices assuming a logLaplace distribution and not lognormality.

What is the American option price formula for a call assuming a logLaplace distribution and not lognormality?

-

## 1 Answer

Have you looked at using Laplace in a Monte Carlo simulation? Here is how you price American-style options within a MC framework: http://www2.math.uu.se/research/pub/Jia1.pdf and the Longstaff-Schwartz paper: http://escholarship.org/uc/item/43n1k4jb#page-1

Regarding the discretization of a process that draws its random variables from a Laplace distribution, I can only suggest ideas, as I have not myself worked with those with regard to a MC discretization: you can draw RVs from a uniform distribution and generate a Laplace-distributed RV `X = μ - b * sgn(U) * ln(1-2 * |U|)`. This only concerns the RV generation itself. You need to make other transformations as well, but I am not aware that this distribution has been much applied to pricing financial derivatives. The following may help, though they use a mixture of Normal-Laplace distributions: http://asianfa2012.mcu.edu.tw/fullpaper/10305.pdf

-
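A minimal sketch of the inverse-CDF recipe quoted in the answer (an illustrative addition, not from the original posts; μ and b are the Laplace location and scale, and the values here are arbitrary). Draws like these could then feed a Longstaff-Schwartz style Monte Carlo as referenced above:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, b = 0.0, 1.0

# U uniform on (-1/2, 1/2); then X = mu - b * sgn(U) * ln(1 - 2|U|) is Laplace(mu, b).
U = rng.uniform(-0.5, 0.5, size=100_000)
X = mu - b * np.sign(U) * np.log(1.0 - 2.0 * np.abs(U))

print(X.mean(), X.var())   # should be close to mu = 0 and 2*b^2 = 2
```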
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9175135493278503, "perplexity_flag": "middle"}
http://mathematica.stackexchange.com/questions/tagged/complex
# Tagged Questions Questions about using complex numbers in Mathematica. This includes basic arithmetic, functions of complex numbers, plotting complex functions, and dealing with branch cuts. 1answer 49 views ### Overloading conjugate operator for a particular function I trying to modify the behaviour of the built-in Conjugate[] operator on a particular function I have defined, to take into account that some of its variables are real. ... 3answers 86 views ### finding an argument of a complex number What is the simplest way to find the argument of the following function? ((1 - E^((I π (1 - α))/(β - α)) z)/(1 - E^(-((I π (1 - α))/(β - α))) z)) as I tried the ... 0answers 77 views ### How to make the imaginary part of a +0. I zero globally? Values like a +0. I are really annoying. Answers from How to reduce expressions with complex coefficients in the form of a+0.*I? Is there a way to globally set when to treat a very small number as ... 1answer 92 views ### why there is a small imaginary part [closed] I encountered a problem. I have a eigenvector eigvsI[1] ... 0answers 76 views ### Solve equations real and imaginary part separately For my system of equations, the procedure described in Solving complex equations of using Reduce works no more. How can I separate the real and imaginary part of ... 2answers 50 views ### Conjugate and simplify I want to get a cosine from taking the real part of a complex exponential: $cos(x) = Re(exp(i x))$. What I do in Mathematica is ... 3answers 125 views ### How to extract phase angle from sinusoid I'm doing some electric circuit calcualtions and I'm trying to get the phasor representation of some arbitrary function of Sin or Cos. Could be complex like: ... 0answers 70 views ### ComplexExpand absolute squared ComplexExpand[Abs[a + b I]] Gives $\sqrt{a^2 + b^2 }$ ComplexExpand[Abs[a + b I]^2] On the other hand gives Abs[a ... 2answers 74 views ### How to take conjugate of a function? Naïvely this is what happens and it obviously is not helpful! ... 1answer 97 views ### Plotting a complex function [duplicate] What does it mean if this message appears: {Im[(1-E^Times[<<3>>] f)/(1-Power[<<2>>] f)]-0,Im[(1-E^Times[<<3>>] f)/(1-Power[<<2>>] f)]-0} must be a list of equalities or ... 1answer 130 views ### ContourPlot with parameter I have an equation for function F[x,y]==0 which first argument x is real and another, y, ... 1answer 85 views ### Simplifying Complex Numbers that contain physical units I am trying to evaluate the following: Simplify[Meter Nano Re[(a + I b)/(Meter Nano)], Assumptions -> Element[{a, b}, Reals]] However, Mathematica returns: ... 2answers 139 views ### Convergence and value of a complex power series I've done a little math and I got the following power series expansion of $\log z$ about $z_0=-2+i$. $$\log z=\log(-2+i)+\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n(-2+i)^n}[z-(-2+i)]^n$$ I've shown that ... 5answers 409 views ### Why is this Mandelbrot set's implementation infeasible: takes a massive amount of time to do? [closed] The Mandelbrot set is defined by complex numbers such as $z=z^2+c$ where $z_0=0$ for the initial point and $c\in\mathbb C$. The numbers grow very fast in the iteration. ... 0answers 66 views ### Choosing the appropriate solution for the square root [closed] My problem is the following: given the function myfunc[x_, a_]:= (a^3 - (a^2)^(3/2))/(x) the limit as x goes to 0 should be well defined and = 0. However, ... 1answer 255 views ### How do I put an image on the complex plane? 
I watched this video and became interested in transforming an image. But I have no good idea on how to embed an image in the complex plane using Mathematica. I have a method that seems to work, but ... 2answers 211 views ### Inconsistent results from equivalent integrals Why is Mathematica returning different values for these two integrals: I am just being introduced to complex integration, so it's possible that I have a misunderstanding of how this works, but in ... 2answers 157 views ### What is the value Re[Sqrt[1+I*2*x]]? When I try to evaluate Re[Sqrt[z]], for some values of Mathematica fails to evaluate it. For example, Re[Sqrt[2 + I*x]]` ... 6answers 339 views ### Image of first quadrant under $f(z)=(z+i)/(z-i)$ I'm able to plot the region where Im[z] > 0 and Re[z] > 0: ... 3answers 117 views ### Wrapper for inexact numeric complex numbers that maintains polar form Related question: How can I convert a complex number a+b I to the exponent form A Exp(I phi)? Mathematica insists on displaying complex number in form a+I b when ... 2answers 159 views ### What can I do to eliminate the error FindFit::nrjnum:? I am testing the "Power Law with finite-time singularity" hypothesis for world population growth for a project. The data I'm using (same behaviour should also be exhibited by the stock market, thats ... 1answer 130 views ### Stop Mathematica from giving imaginary solutions I have the following equation: $$D=\frac{1}{64} \pi A ^3 B \sin \left(C\right)-\frac{1}{2} \pi A B \sin \left(C\right)$$ which I want to solve for $A$. The equation is cubic in $A$ so this should ... 1answer 123 views ### How can I get the solution of complicated implicit function? The question is what is the method to solve the implicit function has real and imaginary number. For example, The function is F(x,y)=(-I*x + 2*y^2)^2 + x^2 - 4*y^4*Sqrt[1 - I*x/(y^2)]. Although I ... 1answer 116 views ### Compiling Error functions of complex values According to List of compilable functions Erf and Erfc are compilable functions. However, I want to make a compiled version of ... 1answer 418 views ### Complex line integral Can someone recommend an online article or introductory tutorial that will show me how to do real and complex line integrals using Mathematica? 2answers 285 views ### Contour plot doesn't look right I have an implicit function expression ... 0answers 41 views ### Contour integration [duplicate] Possible Duplicate: Symbolic integration in the complex plane Does mathematica do Contour integration ? like this one : $\displaystyle \oint_{\gamma} f(z)\ \mathrm{d}z$ if yes, how ... 2answers 169 views ### Adding multiple Complex Numbers in Euler form Say I have a series of $n$ complex numbers of the form $A_k e^{(I \ \theta_k x)}$ where $A_k$ is a real number and so is $\theta_k$ and $k$ runs from $1$ to $n$. $x$ is an algebraic symbol. ... 3answers 436 views ### Solving cubic equation for real roots I'm looking to solve the following cubic equation for x: $\beta\, x^3 - \gamma \,x = c$. I have plugged in some sample values ($\beta = 2$, $\gamma = 5$ and $c = 2$). When I try to solve this ... 1answer 327 views ### Forcing FindRoot to return only real solutions FindRoot documentation reports that if the Equation and the initial point are reals, the solutions are searched in the real domain. However, in the following case I ... 2answers 166 views ### How to eliminate the zero real part of a purely imaginary number? In Mathematica 9, a purely imaginary number, e.g. 0.9 I, will display as 0. 
+ 0.9i in the output form. How can I eliminate the ... 1answer 101 views ### Manipulate[]'ing complex roots of an equation using a 2D slider [closed] I want to make a demonstration of how the complex roots of a polynomial change when I alter the coefficients. Here is my attempt: ... 3answers 169 views ### Show does not combine the plots i've the following problem: because it's not possible to plot complex numbers (or is it?) i created my own "function": ... 1answer 107 views ### Why am I not getting Indeterminate for f[1] when f[x_] = (x - 1)/(x - 1)? [closed] In Mathematica 9 If I write: f[x_] = (x - 1)/(x - 1) I get 1 And if then I write: ... 3answers 329 views ### How can I convert a complex number a+b I to the exponent form A Exp(I phi)? When I have an expression such as: (1/4 + I/4) ((1 - 2 I) x + Sqrt[3] y) it is hard to get an intuition of the number. So I want to convert it to the complex ... 2answers 438 views ### Linear equation with complex numbers I have to solve an equation of the type $$a z+b \overline{z}=c$$ with $a,b,c\in\mathbb{C}$. My approach is to set $F(z)=a z+b \overline{z}-c$ transform $z$ to $x+i y$ and then get a real linear ... 1answer 304 views ### Bifurcation diagrams for multiple equation systems I am interested in constructing a bifurcation diagram for some of my parameters (especially for β and γ) in the dynamical system given in the code below. I want to see how parameter changes affect the ... 4answers 1k views ### Plotting an Argand Diagram I have the function: $F(\omega) = \frac{5\; - \;i\;\omega}{5^2\; +\; \omega^2}$ When $\omega$ has the values : $\{ -7, -2,\; 0,\; 2,\; 7\}$ How would I plot the Argand diagram in Mathematica? Or ... 1answer 104 views ### Exporting/Importing a Table of complex numbers I'm generating a long table of list of the form: PN={{1,2,1+i},{3.5,2.6,2}...},{...},... Using: Export["PN.dat", PN, "Table"] ... 0answers 100 views ### Why does Mathematica choose branches as it does in this situation? Consider these integrals: ... 2answers 224 views ### Moving the location of the branch cut in Mathematica According to the documentation, Mathematica chooses the branch cut for $\log(z)$ to lie along the negative real axis. It it possible to change this so that it lies along the positive axis or elsewhere ... 0answers 151 views ### Function approaching incorrect limit in mathematica [closed] I am interested in the complex function $f(z)=\frac{1}{a}\log\left[2\sinh(a\sqrt{z})\right]$. where $a > 0$ is a real parameter. Clearly for large $a$ this approaches $f(z) \to \sqrt{z}$. But ... 2answers 303 views ### Eigensystem, Eigenvalue doesn't output nonreal eigenvalues Basically I have a matrix and when I used either Eigenvalue or Eigensystem, it doesn't output nonreal eigenvalues, instead it ... 1answer 148 views ### Testing for primality in quadratic rings? Testing for primality in $\mathbb{Z}[\sqrt{-1}]$ in Mathematica is easy: PrimeQ[n, GaussianIntegers -> True] But how can I test for primality in, say, ... 1answer 233 views ### Symbolic Integration along contour: branch cut problem? Context Following this question on path integrals in the complex plane, having defined again a numerical and symbolic integrator along a path as ... 0answers 277 views ### Dual complex integral over implicit path using contour plot Context I am interested in doing double contour integral over paths which are defined implicitely. For the sake of debugging, let's assume its $$\oint_{\cal C}\oint_{\cal C} \frac{1}{u\, x} d u d x$$ ... 
1answer 326 views ### Symbolic integration in the complex plane Context While answering this question, I defined (symbolic and numerical) path integrations as follows ... 1answer 253 views ### Finding residues of multi-dimensional complex functions Say I have a function $f$ of $n$ complex variables, $\{ z_i \}_{i=1}^{i=Nc}$. And then I want to contour integrate the expression such that for each $z_i$ its an integration on an unit circle about ... 1answer 309 views ### Is Abs[z]^2 a bad way to calculate the square modulus of z? For a numerical quantity z, Abs[z] returns the square root of the sum of the squares of the real and imaginary parts of ... 2answers 308 views ### Is there a way to solve the Apollonius Circle problem in Mathematica? Assuming x, a, and b are complex numbers, is there a way to reduce the equation Abs[x - a] == k Abs[x - b] to something like ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 40, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8718945980072021, "perplexity_flag": "middle"}
http://nrich.maths.org/5589
### Roots and Coefficients

If xyz = 1 and x+y+z = 1/x + 1/y + 1/z, show that at least one of these numbers must be 1. Now for the complexity! When are the other numbers real and when are they complex?

### Target Six

Show that x = 1 is a solution of the equation x^(3/2) - 8x^(-3/2) = 7 and find all other solutions.

### 8 Methods for Three by One

This problem in geometry has been solved in no less than EIGHT ways by a pair of students. How would you solve it? How many of their solutions can you follow? How are they the same or different? Which do you like best?

# Sextet

##### Stage: 5 Challenge Level:

If $$x + {1\over x} = 1$$ investigate $$x^n+ {1\over x^n}$$.
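A small computational sketch for the Sextet problem above (an illustrative addition, not part of the original page; it gives away the pattern, so experiment on paper first):

```python
# If x + 1/x = 1, then x solves x^2 - x + 1 = 0, so x is complex.
import cmath

x = (1 + cmath.sqrt(-3)) / 2           # one root; the other is its conjugate
for n in range(1, 13):
    v = x**n + x**(-n)
    print(n, round(v.real, 6))         # imaginary parts cancel to ~0
# The values cycle with period 6: 1, -1, -2, -1, 1, 2, 1, -1, ...
```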
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9492472410202026, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/30625/weight-on-planet-earth?answertab=oldest
# Weight on planet earth?

I was wondering: does the weight of planet Earth stay the same over the years? Meaning: all the people, ground, water, gas. Does the total weight stay the same over the years?

-

## 2 Answers

This answer is very similar to Adam's, though I come to the opposite conclusion, i.e. that Earth loses mass over time. According to the Scientific American article, the Earth loses about 3 kg of hydrogen per second, and I make that about $10^8$ kg per year. According to this article, the Earth gains about $3 \times 10^7$ kg per year from meteors (mostly extremely small ones). So unless there are other sources of weight loss or gain that I haven't thought of, I reckon the Earth gets lighter by about $7 \times 10^7$ kg per year.

-

Loses hydrogen to where? – Royi Namir Jun 22 '12 at 19:00

You say getting lighter... Adam says heavier... – Royi Namir Jun 22 '12 at 19:01

John, interesting article, I hadn't seen that one before. I understand the mechanisms they discuss, but it surprises me that they think there is enough momentum behind the molecules and atoms to eject them out of Earth's gravitational well. I suppose they just need enough to get caught in the solar winds... huh, interesting. – AdamRedwine Jun 22 '12 at 19:08

Royi, I once had a homework problem to determine the solar energy incident on the Earth. Every person in the class got a different answer and the teacher's answer was different still. Welcome to physics. – AdamRedwine Jun 22 '12 at 19:09

1 @RoyiNamir: the weight loss/gain is basically insignificant compared to the weight of the Earth, and any figures Adam and I come up with are going to be very approximate. I think Adam would probably agree with me the weight change is zero within experimental error. Re the hydrogen loss, hydrogen is a very light gas and it can diffuse to the very top of the atmosphere where it gets removed by (as Adam says) the solar wind. – John Rennie Jun 22 '12 at 19:32

Like many questions involving large complex systems, there is a fractal nature to this question. The question is similar to: how long is the coastline of a country? Do you count the land exposed by the tide? What about the waves? In this case, there are things to consider such as: do you count the mass of satellites? What about the equipment left on the Moon? How about the atmosphere? How you answer these will depend on what you want out of the calculation.

Some of these things seem pretty straightforward. The mass of meteorites entering the atmosphere is actually quite substantial; these represent an obvious increase in mass. Satellites, on the other hand, represent a pretty obvious loss of mass. Given that there are "only" about 10,000 satellites in orbit, they represent a loss of less than about a single year's worth of meteorites.

Slightly less obvious, I would include the mass of the atmosphere and also of the airplanes and other objects contained within it. The atmosphere may seem ethereal, but it is quite massive. It does not, however, change with respect to contributing to the mass of the planet.

On the whole, it appears that the mass of the Earth increases over time, with the increase primarily due to meteorites.

-

So if all satellites and birds etc. are landing... the weight is the same except for meteorites... right? – Royi Namir Jun 22 '12 at 18:53

Again, depends on what level of detail you like. Do you want a number that is accurate to the giga-ton? to the ton? to the kilo? to the gram?
If you go to that level of detail, the mass is changing constantly for a huge variety of reasons. The less resolution you take in your answer, the more stable it is. – AdamRedwine Jun 22 '12 at 19:03
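A back-of-envelope check of the figures quoted in this thread (an illustrative aside, not part of the original posts; both rates are the rough estimates cited in the answers, not precise measurements):

```python
SECONDS_PER_YEAR = 3.156e7

hydrogen_loss_per_year = 3 * SECONDS_PER_YEAR   # ~9.5e7 kg/yr lost, from "3 kg per second"
meteor_gain_per_year = 3e7                      # kg/yr gained from meteors

print(hydrogen_loss_per_year)                         # ~1e8 kg, as John Rennie says
print(meteor_gain_per_year - hydrogen_loss_per_year)  # ~ -6.5e7 kg: a net loss of roughly 7e7 kg/yr
```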
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9510908722877502, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/214313/how-to-prove-the-definition-of-arctangent-by-g-h-hardy-through-integral
# How to prove the definition of arctangent by G. H. Hardy through integral?

From Introduction to Analysis, by Arthur P. Mattuck, Problem 20-1. I am stuck on sub-problem (d), especially the magic number 2.5. Please help, thanks.

Problem 20-1. One way of rigorously defining the trigonometric functions is to start with the definition of the arctangent function. (This is the route used for example in the classic text Pure Mathematics by G. H. Hardy.) So, assume amnesia has wiped out the trigonometric functions (but the rest of your knowledge of analysis is intact). Define

$$T(x)=\int_{0}^{x}\frac{dt}{1+t^{2}}$$

(a) Prove T(x) is defined for all x and is odd.

(b) Prove T(x) is continuous and differentiable, and find T'(x).

(c) Prove T(x) is strictly increasing for all x; find where it is convex, where concave, and its points of inflection.

(d) Show T(x) is bounded for all x, and |T(x)| < 2.5, using comparison of integrals. Can you get a better bound?

-

Don't you want the integrand to be $1\over 1+t^2$ instead of $1\over 1+x^t$? And perhaps you want to find $T'(x)$ instead of $T(x)$ in (b)? – Per Manne Oct 15 '12 at 16:41

Consider the integral $\int_1^x dt/t^2$. – GEdgar Oct 15 '12 at 16:42

The magic number $2.5$ is quite a bit too large. One can do better, and it is not clear to me how to obtain such a poor bound in a natural way. – André Nicolas Oct 15 '12 at 17:05

It is difficult to get the loose upper bound 2.5, even through $$\int_{1}^{x}\frac{dt}{t^{2}}$$. Any more ideas? – inix Oct 16 '12 at 14:51
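A numerical aside illustrating GEdgar's hint (a sketch added for illustration, not part of the original thread; SciPy is assumed): since $1/(1+t^2)\le 1$ on $[0,1]$ and $1/(1+t^2)\le 1/t^2$ on $[1,x]$, comparison gives $T(x)\le 1+\int_1^{\infty}dt/t^2 = 2 < 2.5$ for all $x\ge 0$, and oddness handles $x<0$:

```python
from scipy.integrate import quad
import math

T = lambda x: quad(lambda t: 1.0 / (1.0 + t**2), 0, x)[0]

for x in [1, 10, 1000]:
    print(x, T(x))        # increases toward pi/2 ~ 1.5708, comfortably below 2
print(math.pi / 2)
```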
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8773729801177979, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/96777?sort=votes
## non-isomorphic stably isomorphic fields

Q1: What is the simplest example of two non-isomorphic fields $L$ and $K$ of characteristic $0$ such that $L(x)\simeq K(x)$ (here $x$ is an indeterminate)?

Q2: Do we have a sufficient criterion for a general field $K$ of characteristic $0$ which guarantees that if $K(x_1,\ldots,x_n)\simeq L(x_1,\ldots, x_n)$ (here $L$ is a field and the $x_i$'s are indeterminates) then $K\simeq L$?

-

Just a comment: The question $R[x] \cong S[x] \Rightarrow R \cong S$ has been studied in the literature since the 70s (google for "isomorphic polynomial rings" or see math.stackexchange.com/questions/13504); Hochster has constructed counterexamples. On the other hand, this is true for fields (consider units). So in Q1, we cannot expect $L[x] \cong K[x]$ to hold. By the way: 1+, since I don't know an example for Q1 at all. – Martin Brandenburg May 12 2012 at 15:15

1 Concerning Q2: A sufficient condition is that $K,L$ are algebraic extensions of the prime field. – Ralph May 12 2012 at 16:26

1 Here is one possible way of constructing such examples: Let $\iota_1:G\rightarrow S_{n}$ and $\iota_2:G\rightarrow S_{m}$ be two embeddings of a finite group $G$, where $S_k$ denotes the symmetric group of degree $k$. Let $K_n$ be the field of rational functions over $\mathbf{Q}$ in $n$ variables; then it is easy to see that $K_n^{\iota_1(G)}$ and $K_{m}^{\iota_2(G)}$ are stably isomorphic, but in general I don't see any reason why they should be isomorphic. Of course one needs to choose the group $G$ carefully since for "many" $G$'s $K_n^{\iota_1(G)}$ will always be purely transcendental. – Hugo Chapdelaine May 12 2012 at 17:50

Thanks Ralph for the answer. Yes indeed, an isomorphism takes algebraic elements over the prime field to algebraic elements over the prime field. – Hugo Chapdelaine May 12 2012 at 17:57

1 I don't know if it's of practical help, but $K(x) \cong L(x)$ implies $K \cong L$ iff there is an isomorphism $K(x) \xrightarrow[]{\sim} L(x)$ that maps a transcendence base of $K|F$ onto a transcendence base of $L|F$ (where $F$ denotes the prime field). – Ralph May 12 2012 at 20:43

## 2 Answers

I don't think that there are any really easy examples. In the famous paper of Beauville, Colliot-Thélène, Sansuc and Swinnerton-Dyer "Variétés stablement rationnelles non rationnelles" they construct surfaces $S$ over $\mathbb Q$ that are not rational, but such that the products $S \times \mathbb P^3$ are rational. You get an example by taking $K$ to be a purely transcendental extension of the function field of $S$ of transcendence degree $d$, and $L$ a purely transcendental extension of $\mathbb Q$ of transcendence degree $d+2$, for some $d$ between $0$ and $3$ (I don't know the correct value of $d$).

-

Is this really the simplest example? – Martin Brandenburg May 12 2012 at 19:48

2 It's the simplest example I know, which does not mean much. – Angelo May 12 2012 at 20:24

Silly question: what is wrong with the example $K=\mathbb{Q}$ and $L=\mathbb{Q}(x)$? – Mahdi Majidi-Zolbanin May 14 2012 at 4:20

@Mahdi: Well, $K(t) \not\cong L(t)$. – Martin Brandenburg May 14 2012 at 14:53

@Martin: I see. What I had in mind was to say $K(x)\cong L(x)$ (same $x$ as in $L=\mathbb{Q}(x)$), but I see now, $x$ is not an indeterminate over $\mathbb{Q}(x)$. Thanks!
– Mahdi Majidi-Zolbanin May 14 2012 at 15:38

An answer to Q2, generalizing Ralph's comment: "$K$ is algebraically closed" is a sufficient condition. Indeed, you can characterize $K$ inside $K(x_1,\dots,x_n)$ as the set of elements having $m$-th roots for infinitely many integers $m$. More generally, it is enough to assume that for some $m>1$, the $m$-th power map on $K$ is onto. Examples: $K$ perfect of positive characteristic, or $K=\mathbb{R}$.

-
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 63, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.914821207523346, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/45004/help-an-aspiring-physicists-what-to-self-study
# Help an aspiring physicist: what to self-study [closed]

This is probably not the kind of question you'll often encounter on this forum, but I think a bit of background is needed for this question to make sense and not seem like a duplicate: 2012 has been an annus horribilis in my life. I have lost a lot of close relatives in a sudden surge of cardiovascular diseases in my family. I also discovered I inherited genetic diseases and that I'll probably undergo the same fate sooner or later. One of the people I've lost is my father. We used to talk about physics all the time, and ever since I was 5 I kept telling him my dream was contributing to the field. When I lost him a couple of months ago, I used studying as an emotional outlet. He always emphasized the importance of academic excellence, and for this reason I got obsessed with studying and getting high grades even more than I ever did. I am now 1/80 in a top high school, but I am enormously frustrated. I find that I waste my time at high school, especially since I probably won't have as much time here as many other people do. I find the mathematics and physics boring and easy, and I feel like I'm wasting my time with certain classes which don't interest me at all (for example Latin).

So I decided to study physics and mathematics outside of school. My school has been somewhat supportive, granting me a day per week off to do whatever I want, basically. I of course have a considerable amount of free time in addition to that day, since I ace almost every test without too much studying and without doing my homework (not because I don't want to, but because I don't need to). I decided to self-study because I decided that life is too short (and mine will be even shorter; if I reach 50 I'd be lucky) to waste time. So my plan is to do at least the first 2 years of undergraduate physics in the 2 years I've got left at my high school. My main objective is to gain a mathematical and physical understanding of quantum mechanics, as advanced as I possibly can.

I am currently studying Linear Algebra and Statistics, but I have a problem: I don't know what to study and, especially, in what order to study it. I have literally read dozens of questions and answers as to what should be the mathematical/physical background for Quantum Mechanics (my future field of interest). But I find these to be too general, and I am often overwhelmed by them, in the same way you can get overwhelmed when you need to clean your house, but it's so dirty that you don't know where to start. So I would like your help.

My current mathematical background:

• Basic differential calculus and no integral calculus; we will get that later this year. However, I think it's best for me to study it myself before we get it at school, since it is crucial in physics. To show my level of differential calculus, this is about the toughest homework question we had to solve algebraically: Given are the functions $f_p(x) = \dfrac{9\sqrt{x^2+p}}{x^2+2}$. The line $k$ with slope $2.5$ touches the graph of $f_p$ at point $A$ with $x_A=-1$. Find the equation of $k$ algebraically.

• Trigonometry and trigonometric functions. Again, as above, one of the toughest questions we had to solve: Given are the functions $f(x)=-3+2\cos(x)$ and $g(x)=\cos(x-0.25\pi)-2$. Write the functions $s(x)=f(x)+g(x)$ and $v(x) = f(x)-g(x)$ in the form $y(x)=a+b\cos(c(x-d))$.

• Analytic Geometry (conic sections, tangency, bisections, you know the drill).

• And of course everything below this level.
I probably forgot some things, but you can ask me in the comments whether I know certain fields. We will get a lot more mathematics in the coming years, but I want you to disregard that fact when answering the question. I want to self-study as much as I can, and my mathematics teacher is very fond of me, so if I know a topic before we get it in class, he will let me do other mathematics that I want (he even said this). So I won't lose time by self-studying subjects we'll get eventually, so don't worry about that.

My current physics background (names of the chapters we discussed):

• Newton's laws, Mechanical energy/forces
• Pressure and Heat
• Signal processing
• Electric currents (Ohm's law, series and parallel circuits, etc.)
• Again, everything below this level too (again, I'm probably forgetting stuff).

Here exactly the same thing applies as with mathematics: we will get a lot more physics in the coming year, but again, disregard that. My physics teacher adores me, even more so than my mathematics teacher, so again, he won't mind if I do something else if I know the material he's discussing already. This is of a higher level than American AP classes and British A-levels, keep that in mind.

Now my question is: what mathematics and physics do I need to study, and more importantly, in which order do I need to study it, in order to have a basic understanding of quantum mechanics in 2 years? I know "basic" is a very general term, but I think you people, as people who studied it themselves, know what is realistic and achievable. I know this might seem like a duplicate of hundreds of previous questions, but it isn't. All the other people asking this question have gotten answers that I don't find suitable for me. Mostly the answers are from people who assume that you have to "have a basic understanding of this, a basic understanding of that", etc. But how do I know what "basic" means? Also, now that you guys know exactly what I know and what I don't, you can more finely tune the answers to my personal situation. As I said, currently I am doing Linear Algebra and Statistics, so you can omit those 2 from your answers, and start from the point I finished those 2 (which will be around January).

• p.s. If you want to recommend certain books, be my guest. If it's a good book, then money is no issue; I've saved up enough money throughout the years.

-

5 – Manishearth♦ Nov 24 '12 at 13:49

1 – Qmechanic♦ Nov 24 '12 at 14:18

5 @Manishearth and people being critical, it is ok to be a bit human now and then instead of pedantic. The forum will not suffer and some moral support to a young person is good. – anna v Nov 24 '12 at 15:25

3 @PersonalVendetta do not angst over such hereditary problems as you describe; possibly by the time they can hit you, medicine will have progressed to the point of eliminating the problems. Enjoy studying and exploring physics, it is a great adventure. – anna v Nov 24 '12 at 15:29

2 One piece of advice that someone should give: Learning mechanics and E&M in high school is quite do-able. But don't go rushing through the basics! Take the time to learn them deeply. Do as many problems as you can. Spend time studying and thinking about the solutions of the equations in as many situations as you can. Don't just be content with the Coulomb potential! – user1504 Nov 24 '12 at 17:14

## closed as off topic by Manishearth♦, Qmechanic♦, David Zaslavsky♦ Nov 24 '12 at 16:53

Questions on Physics Stack Exchange are expected to relate to physics within the scope defined in the FAQ.
Consider editing the question or leaving comments for improvement if you believe the question can be reworded to fit within the scope.

## 4 Answers

As I think you are serious, you should start studying mechanics at a higher level, and classical electricity and magnetism. As I am of an older generation, the books I found good as a preparation for later understanding quantum mechanics and quantum field theory are Classical Mechanics by Goldstein, and Classical Electricity and Magnetism by Panofsky and Phillips. Their formalism in the later chapters is very useful. Maybe somebody younger has a better recommendation.

-

Set your goals on something concrete that you really would like to know but do not yet understand at all, such as "Why does water freeze?" or "What is an elementary particle?". (And if that is settled, go for something more advanced but again concrete, etc.) Given the goal, search for the answer, starting with Wikipedia (and later Google Scholar), and backtracking on not or only partially understood concepts and topics until you feel firm ground. This is the only way to see what is important and why. It may take you years to fully understand the answers (some questions of this kind are still poorly understood even on the research level), but it will unlock all your intellectual capabilities, and make you an independent thinker. The attitude is more important than the order in which you tackle things.

Look at my theoretical physics FAQ at http://www.mat.univie.ac.at/~neum/physfaq/physics-faq.html, especially at the first few sections of Chapter C4: How to learn theoretical physics:

• How to become a good theoretical physicist
• Learning quantum mechanics at age 14
• Physics for self-study while still in school
• Research at age 16
• Do I always need to have good marks?
• Learning scientific concepts

You may also try my online book http://lanl.arxiv.org/abs/0810.1019, for which I have several times gotten feedback from 16-year-olds who liked it. If you lack some background, send me an email indicating what you don't understand, and I'll tell you how to learn that.

-

Your second paragraph is excellent advice! (+1) – Eduardo Guerras Valera Nov 24 '12 at 16:54

2 Whereas, I must say, it is not always possible. Put yourself in the position of a newbie, then look for 'Chiral gauge' in Wikipedia and try to follow the concept... – Eduardo Guerras Valera Nov 24 '12 at 17:08

@Eduardo: From en.wikipedia.org/wiki/Chiral_gauge_theory, you discover that you need to know, e.g., the concept of a chiral (i.e. Weyl) fermion. This is one step of backtracking. Then en.wikipedia.org/wiki/Weyl_spinor tells you that you need to understand the concept of an orthogonal group, another backtrack. Then en.wikipedia.org/wiki/Orthogonal_group reduces it to linear algebra, which the OP started to study already. Thus my recipe works perfectly in this (and every other) case. – Arnold Neumaier Nov 24 '12 at 17:09

The backtracking depth will usually be below a dozen, but one may need lots of such steps. I indicated this by saying that it may take years. But these years are very well spent, probably much better than following all details of a standard textbook.
– Arnold Neumaier Nov 24 '12 at 17:14 I will make an unusual suggestion, just because no-one else will suggest proceeding in this way: You should make it your goal to understand the Higgs mechanism, both as a way to give masses to the W and Z bosons, and as a way to give masses to the elementary fermions (the details are different for the two cases). The Higgs mechanism is a sufficiently advanced concept that to understand it will require you to learn quantum mechanics, special relativity, quantum field theory... so as a goal it is advanced enough to force you to learn all the things that you want to learn anyway. And it is newly validated as an aspect of the real world, so this is the right time to be learning how it works. - Your story is deeply moving, so... I apologize for breaking the rules for once, answering here. First, bear in mind that any life expectancy is a statistical quantity, but every single individual is a statistical fluctuation. If you have, for instance, an expectancy of 20 years in front of you, that only means that the average time left in a set of thousands of men in your situation, or if you could rewind and live your life thousands of times (that equivalence is called ergodicity), will be 20 years. But for a single individual, nobody can state anything. (After all, this answer turns out to be not fully "unscientific".) Second, please send me a mail. I have very good suggestions on self-learning physics. My address is in one of the comments (I will erase that comment soon). -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9600909948348999, "perplexity_flag": "middle"}
http://www.impan.pl/cgi-bin/dict?enumeration
## enumeration Let $r_1, r_2,\ldots$ be an enumeration of the rationals in $[0,1]$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.5101047158241272, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/21577/why-is-oxygen-in-a-triplet-state-and-what-are-the-consequences?answertab=active
# Why is oxygen in a triplet state and what are the consequences? From Wikipedia here and here: ''Almost all molecules encountered in daily life exist in a singlet state, but molecular oxygen is an exception.'' ''The unusual electron configuration prevents molecular oxygen from reacting directly with many other molecules, which are often in the singlet state. Triplet oxygen will, however, readily react with molecules in a doublet state, such as radicals, to form a new radical.'' This wiki page is also relevant. Here is a picture (which I can't read). How is this triplet-state property quantitatively computed, and why is it such an exceptional feature? How does the triplet state come about in oxygen? Don't more electrons in the electron shell mean more complicated factoring into representations, and therefore even more complicated states? How does that impact the thermodynamical properties of the element, and where is the thermodynamical difference (besides the different energy) from the singlet state? How can the reaction features be understood? Where does the energy difference between the two states come from? Taking a look at the periodic table of elements, do $\text S$ or $\text {Se}$ have similar properties? Btw. I don't mind any math, but I'd probably need explanations for expressions like $\text O_2(b^1\Sigma_g^{+})$. - From FAQ: > Your questions should be reasonably scoped. If you can imagine an entire book that answers your question, you’re asking too much. – gigacyan Feb 28 '12 at 13:17 ## 1 Answer There are lots of questions asked here, but I'll attempt to answer some of these... Oxygen is found in the triplet state because the triplet state is most stable. This is a complex function of the properties of the atoms (e.g. charge and separation between atoms) and the electrons (e.g. number of electrons present, possible combinations of orbitals). The molecular orbitals given on the wiki page show three different states: $^{3}\Sigma^{-}_{g}$: The ground triplet state $^{1}\Delta_{g}$: The ground singlet state $^{1}\Sigma^{+}_{g}$: An excited singlet state These symbols are explained well at Wikipedia. In short, a triplet state has two electrons with parallel spins, for example the two red arrows pointing 'up' in the $^{3}\Sigma^{-}_{g}$ MO diagram. If we investigate the energy of an oxygen molecule as a function of the distance between the oxygen atoms, we can uncover which of these states are most stable. An example of this is given here. It is evident that the $^{3}\Sigma^{-}_{g}$ state is lower than all other states, thus we expect triplet oxygen to be the most stable state. More complicated electronic configurations do exist; an example of these is given by the $^{1}\Sigma^{+}_{g}$ state. We can see here that it is even less stable than ground singlet ($^{1}\Delta_{g}$) oxygen. These variations in energy states arise because 'putting' electrons into different orbitals with different spins gives varied 'goodness of overlap'. Thermodynamic properties would be expected to be subtly different between electronic states. This is because the molecule would have different vibrational states which would 'translate' into different thermodynamics through statistical mechanics. I cannot find a source which directly compares such properties. The reaction features are, of course, very different. For example, triplet oxygen will happily dissolve in water, but singlet will react with it. Singlet oxygen is often used when one wants to 'attack' double bonds. 
An explanation of why this is the case is very complex - but in short, it is due to the quantum state of oxygen compared with the states of most molecules it attempts to react with. - Your [3] and [4] seem to be the same link. Also, the wiki picture shows only one triplet and two singlet states. – Nick Kidman Mar 1 '12 at 8:44
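Since the question explicitly asked for help reading expressions like $\text O_2(b^1\Sigma_g^{+})$, here is a short decoding of the standard diatomic term-symbol conventions, added for reference (this is textbook spectroscopy notation, not something taken from the answer above):

```latex
% Anatomy of a diatomic term symbol:  ^{2S+1}\Lambda^{(\pm)}_{g/u}
%   2S+1    : spin multiplicity (1 = singlet, 2 = doublet, 3 = triplet)
%   \Lambda : |projection of orbital angular momentum on the axis|,
%             written \Sigma, \Pi, \Delta for \Lambda = 0, 1, 2
%   g / u   : gerade/ungerade, parity under inversion through the center
%   \pm     : for \Sigma states only, symmetry under reflection in a
%             plane containing the internuclear axis
% The leading letter labels the state: X is the ground state, and
% lowercase letters (a, b, ...) mark excited states whose multiplicity
% differs from that of the ground state.
X\,{}^{3}\Sigma_{g}^{-} \ (\text{ground}), \qquad
a\,{}^{1}\Delta_{g}, \qquad
b\,{}^{1}\Sigma_{g}^{+}
```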
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9365308880805969, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/tagged/order-theory
## Tagged Questions 2answers 151 views ### Is it possible to reconstruct an order type from its initial segments? Suppose $T$ is a totally ordered set without a maximal element, $\tau$ is the order type of $T$, $S$ is the set of order types of all proper initial segments (downward closed sub … 1answer 171 views ### Order dimension and weak poset partitions The order dimension of a poset $(P,\leq)$ is the least number of linear extensions of $(P,\leq)$ such that the intersection of these extensions is $(P,\leq)$. The wikipedia entry p … 2answers 81 views ### Order-isomorphic down-set lattices Let $X$ be an ordered set. A down-set (also called a lower set or an order ideal) of $X$ is a subset $D$ of $X$ such that for every $x, y \in D$, if $x \in D$ and $y \leq_X x$, the … 0answers 82 views ### Set of upper bounds is finite for any finite subset Is there a term to describe a preordered set $P$ in which any finite subset $S \subset P$ has at most finitely many minimal upper bounds? The preordered sets I'm studying generally … 3answers 304 views ### Why do we choose the standard total order on the integers? I understand why the set of natural numbers $\mathbb N = \{ 0, 1, 2, \cdots \}$ is equipped with a total order. Indeed, every monoid has a pre-order, where n' \succeq n \quad \ma … 1answer 60 views ### Counting linear extensions of unlabeled series parallel structures I am interested in the problem of counting the number of linear extensions of series-parallel structures. The wikipedia article at http://en.wikipedia.org/wiki/Series-parallel_part … 0answers 37 views ### Rotation-invariant strict-inclusion-preserving preorderings on subsets of the circle Say that a preordering $\le$ on a set of subsets of some space preserves strict inclusion provided that $A\lt B$ whenever $A\subset B$ (where $A\lt B$ iff $A\le B$ and \$B \not\le A … 1answer 83 views ### Distributive lattice embedding into a finite lattice. Suppose one has an inclusion $\iota : D \hookrightarrow S$ where $D$ is a finite distributive lattice and $S$ is a finite join-semilattice. If $\iota$ preserves all meets and join … 1answer 210 views ### Is there a natural measurable structure on the $\sigma$-algebra of a measurable space? Let $(X, \Sigma)$ denote a measurable space. Is there a non-trivial $\sigma$-algebra $\Sigma^1$ of subsets of $\Sigma$ so that $(\Sigma, \Sigma^1)$ is also a measurable space? … 2answers 181 views ### Banach lattice subspace of $C([0,1])$ not a sublattice This is probably easy, but I did not see it in standard texts. Describe a closed subspace $V$ of $C([0,1])$ such that $V$ is a Banach lattice (in the pointwise ordering), but $V$ … 2answers 145 views ### Reference Book for supremum and infimum theorems For my work I need many of the very easy and basic properties of suprema and infima. While they are all pretty easy to prove, I would prefer to refer to a standard text book. Howev … 1answer 78 views ### Covering of a partial order by upwards convex sets First off: I'm not an expert in order theory, so some of my terms might be off; correct them if you wish. Let me call a subset $A$ of a lattice $(S,\le)$ upwards convex (not sure … 2answers 133 views ### Cardinality of Equivalence Relation of Eventually Sublinear Functions Let $\Bbb{R}^{+}_{0}$ be the set of non-negative real numbers and $\Bbb{R}^{+}$be the set of positive reals. Let us say that a function \$f \colon \Bbb{R}^{+}_{0} \to \Bbb{R}^{+}_{0 … 0answers 27 views ### Looking for a uniform explanation of algebras with canonical generators. 
Let $\mathcal{V}$ be a finitary variety i.e. the algebras for a signature whose operations have finite arity and for some arbitrary set of equations. Then any algebra \$A \in \mathc … 1answer 172 views ### Is it possible to decide in polynomial time if a poset is a subposet of another which is given? I am reading some theory on partial orders and I wonder something which perhaps has a simple answer: Given two partial orders $G_1,G_2$ (by their Hasse diagrams), is it possible t …
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 44, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8999494314193726, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/167176/visualization-of-cantor-set-by-mathematica-or-maple/167235
# Visualization of Cantor set by Mathematica or Maple! We all know what the Cantor set is. The Cantor set is created by repeatedly deleting the open middle thirds of a set of line segments. One starts by deleting the open middle third $(\frac{1}{3}, \frac{2}{3})$ from the interval $[0, 1]$, leaving two line segments: $[0, \frac{1}{3}] ∪ [\frac{2}{3}, 1]$. Next, the open middle third of each of these remaining segments is deleted, leaving four line segments: $[0, \frac{1}{9}] ∪ [\frac{2}{9}, \frac{1}{3}] ∪ [\frac{2}{3}, \frac{7}{9}] ∪ [\frac{8}{9}, 1]$. This process is continued ad infinitum; after the $n$th step one is left with $2^n$ closed intervals, each of length $3^{-n}$. The Cantor ternary set contains all points in the interval $[0, 1]$ that are not deleted at any step in this infinite process. An explicit formula for the Cantor set is $$C=[0,1]\setminus\bigcup_{m=1}^{\infty}\bigcup_{k=0}^{3^{m-1}-1}\left(\frac{3k+1}{3^m},\frac{3k+2}{3^m}\right)$$ I know Maple just for doing some calculus and some basic modeling. I am asking if someone can show me a program with which we can visualize the Cantor set. As I don't know if this job can be done in Maple, I added Mathematica in the title. Maybe its environment is more powerful than Maple for this question. Thanks. - – sydeulissie Jul 8 '12 at 8:52 ## 2 Answers The Cantor $\frac13$ set is kind of hard to visualize, since it has infinitely many points but zero Lebesgue measure. The best visualization I could think of was to plot an approximation of its indicator function: ````> f := a -> [seq(x/3, x in a), seq((2+x)/3, x in a)]:
> cantor := n -> (f@@n)([0,1]):
> delta2plot := points -> [seq(seq([x,y], y in [0,1,0]), x in points)]:
> plots:-listplot(delta2plot(cantor(5)), color=gray, view=[0..1,0..1]);
```` - For Mathematica see - Thanks. I didn't know it was done before! – Babak S. Jul 5 '12 at 18:52
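For readers without Maple, a rough Python equivalent of the same level-by-level picture might look as follows. This is a sketch assuming only matplotlib is available; the drawing choices are mine, not from the answer above:

```
import matplotlib.pyplot as plt

def cantor_intervals(n):
    """Return the closed intervals surviving after n removal steps."""
    intervals = [(0.0, 1.0)]
    for _ in range(n):
        next_level = []
        for a, b in intervals:
            third = (b - a) / 3.0
            next_level.append((a, a + third))   # keep the left third
            next_level.append((b - third, b))   # keep the right third
        intervals = next_level
    return intervals

# One horizontal bar per level, mimicking the usual Cantor-set picture.
fig, ax = plt.subplots()
for level in range(6):
    for a, b in cantor_intervals(level):
        ax.plot([a, b], [-level, -level], color="black", linewidth=4)
ax.set_yticks([])
ax.set_xlim(0, 1)
plt.show()
```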
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9355871677398682, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/199198/linear-de-where-did-i-go-wrong
# linear DE - Where did I go wrong? I'm trying to find the general solution for $(x+2)y' = 3-\frac{2y}{x}$ This is what I've done so far: $y'+\frac{2y}{x(x+2)}=\frac{3}{x+2}$ $(\frac{x}{x+2}y)'=\frac{3x}{(x+2)^2}$ $\frac{x}{x+2}y=3\int \frac{x}{(x+2)^2}dx$ $\frac{x}{x+2}y= 3\int \frac{1}{x+2}dx - 6\int\frac{1}{(x+2)^2}dx$ $\frac{x}{x+2}y= 3\ln|x+2| + \frac{6}{x+2}+c$ I tested this solution for when c=0, but it failed. Can anyone spot my mistake? - I believe the mistake is that you did not find the integrating factor correctly. In your solution, I don't think your 2nd line is equivalent to the first one – gt6989b Sep 19 '12 at 16:19 How did it fail? I see no mistake, and your answer agrees with Wolfram Alpha. [You remembered to divide out by $x/(x+2)$ from the LHS of the final line to check that $y$ is a solution, right?] Also, your general solution needs to include $+D\,v(x)$, where $v$ is a solution to $(x+2)v'+2v/x=0$ (remember homogeneous and inhomogeneous parts!) and $D$ an arbitrary constant. – anon Sep 19 '12 at 16:20 Okay I quadruple-checked my answer and it looks like it's correct. The textbook I'm working with had the answer in a crazy form, and I thought WolframAlpha had something different. Btw, I've no idea what you mean by (in)homogeneous parts yet, I've just started working with D.E. :) I'll figure it out! – Korgan Rivera Sep 19 '12 at 16:28 Looks fine. Did you finish solving for y? – Mike Sep 19 '12 at 18:27
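As an independent check of the algebra in the thread, one can hand the same equation to a computer algebra system. A minimal sketch, assuming SymPy is installed:

```
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')

# (x+2) y' = 3 - 2y/x, the equation from the thread
ode = sp.Eq((x + 2) * y(x).diff(x), 3 - 2 * y(x) / x)
sol = sp.dsolve(ode, y(x))
print(sp.simplify(sol.rhs))
# Up to algebra, this matches ((x+2)/x) * (3*log(x+2) + 6/(x+2) + C),
# i.e. the thread's answer after dividing out the integrating factor.
```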
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9660314917564392, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/108963/prime-ideals-in-polynomials-rings/108974
# Prime ideals in polynomial rings Let $A$ be a commutative ring, $\mathbb{q}\subset A$ an ideal of $A$, and $\mathbb{q}A[x]$ the ideal of $A[x]$ generated by $\mathbb{q}$ (consisting of the polynomials with coefficients in $\mathbb{q}$). Show $\mathbb{q}$ is prime in $A \Leftrightarrow \mathbb{q}A[x]$ is prime in $A[x]$. I've attempted to construct an isomorphism from $(A/\mathbb{q})[x]\to A[x]/\mathbb{q}A[x]$, but I'm not really sure how to go about doing so other than some appeal to the first isomorphism theorem for rings. - ## 4 Answers Just consider the natural map $A[x]\rightarrow (A/q)[x]$ and note that its kernel is exactly $qA[x]$. - It has been a long night; I can't believe I didn't realize that the kernel is $\mathfrak{q}A[x]$... – chris Feb 13 '12 at 18:47 To prove that $qA[x]$ is again prime you have to prove that for $P\cdot Q \in qA[x]$ either $P$ or $Q$ is in $qA[x]$. Induction on the sum of the degrees $n=\deg(P)$ and $m=\deg(Q)$. If $n=0$ it follows from the fact that $q$ is prime. If $n>0$ $$PQ = a_{n}b_{m}X^{n+m}+\ldots$$ so $a_{n}b_{m}\in q$ and wlog $a_{n}\in q$. Now $P=P' +a_{n}X^{n}$ and $$PQ = P'Q+a_{n}X^nQ\in qA[x]\Rightarrow P'Q\in qA[x]$$ Now use the induction hypothesis! - We can assume $\mathfrak q=0$ (replace $A$ by $A/\mathfrak q$). Then the statement reduces to: $A$ is a domain if and only if $A[x]$ is a domain, which is clear. - +1 This is actually a very nice way to look at the problem. – Adrián Barquero Feb 13 '12 at 18:54 Hint: $q$ prime $\:\Rightarrow\:qA[x]$ prime follows from examining leading coefficients mod $q$, i.e. ${\rm mod}\ q:\: f,g \not\equiv 0\:\Rightarrow f \equiv a\:x^n+\cdots,\ \ g \equiv b\:x^m+\:\cdots,\ a,b\not\in q,\:$ where $\cdots$ means lower degree terms. Hence we deduce $\:fg \equiv ab\:x^{n+m}+\cdots,\ ab\not\in q\:\Rightarrow\: fg \not\equiv 0$. Note: for $q = 0$ the proof becomes: domain $A\:\Rightarrow\:$ domain $A[x]$. The proof works by exploiting the multiplicativity of the leading coefficient map combined with the nonexistence of zero-divisors in a domain, to show that the product of nonzero polynomials remains nonzero because the same holds true for their leading coefficients. This is a prototypical example of using a multiplicative map to transfer multiplicative properties from one monoid to another (here the nonexistence of zero-divisors). For some further examples, see this post on using norms to transfer multiplicative properties between $\mathbb Z$ and rings of algebraic integers, and also this post where one deduces the existence of inverses of domain elements $\ne 0$ that are algebraic over a subfield (generalizing rationalizing denominators). Note: one can reduce to the case $A$ is a domain by factoring out by the prime $q$. However, when learning these techniques, it is instructive to compare the structural vs. element-wise proofs. Note also that this may be viewed as a generalization of one form of Gauss's Lemma. -
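To spell out the step the question was missing (this only makes the first answer explicit): reduction of coefficients $\sum_i a_i x^i \mapsto \sum_i (a_i+\mathfrak{q})\,x^i$ is a surjective ring homomorphism $A[x]\rightarrow (A/\mathfrak{q})[x]$ with kernel $\mathfrak{q}A[x]$, so the first isomorphism theorem gives $$A[x]/\mathfrak{q}A[x] \;\cong\; (A/\mathfrak{q})[x].$$ Hence $\mathfrak{q}A[x]$ is prime iff $(A/\mathfrak{q})[x]$ is a domain, iff $A/\mathfrak{q}$ is a domain, iff $\mathfrak{q}$ is prime.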
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 45, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9367977380752563, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/tagged/astronomy?sort=faq&pagesize=30
# Tagged Questions The science dealing with objects and phenomena located beyond Earth. In particular, this applies to observations and data. At its core, astronomy is the physically informed cataloging and classifying of the contents of the universe in order to better understand what is out there. 6answers 808 views ### Is rotational motion relative to space? Let's assume that there is nothing in the universe except Earth. If the Earth rotates on its axis as it does, then would we experience the effects of rotational motion like centrifugal force and ... 1answer 184 views ### How vacuous is intergalactic space? You often hear intergalactic space is an example of a very good vacuum. But how vacuous is space between galaxy clusters and inside a huge void structure? Are there papers quoting a ... 2answers 508 views ### How do you measure distance to stars within the galaxy? I know that for close by stars (<50 LY) we can use the parallax effect. And for distant galaxies we use red-shift (& hubble's constant). So how do we measure how far is a star lets say 50,000 ... 2answers 2k views ### Why does Venus rotate the opposite direction as other planets? Given: Law of Conservation of Angular Momentum. Reverse spinning with dense atmosphere (92 times > Earth & CO2 dominant sulphur based). Surface same degree of aging all over. Theoretical large ... 2answers 202 views ### Why don't we see solar and lunar eclipses often? Since we see the new moon at least once in a month when the Moon gets in between the Sun and the Earth at night, and as far as I know if this happens during the day, you'll get to see a solar ... 4answers 2k views ### Why is a new moon not the same as a solar eclipse? Forgive the elementary nature of this question: Because a new moon occurs when the moon is positioned between the earth and sun, doesn't this also mean that somewhere on the Earth, a solar eclipse ... 9answers 4k views ### Can Jupiter be ignited? Our solar system itself contains two candidate "Earths" One is Jupiter's moon Europa and another is Saturn's moon Titan. Both of them have the problem of having at low temperature as Sun's heat ... 6answers 976 views ### What methods can astronomers use to find a black hole? How can astronomers say, we know there are black holes at the centre of each galaxy? What methods of indirect detection are there to know where and how big a black hole is? 1answer 364 views ### About binary stars and calculating velocity, period and radius of their orbit I saw somewhere about being able to measure the velocity, period and radius of a binary star orbit by looking at red shift and blue shift. I understand it but can someone give me an example of ... 8answers 5k views ### Would it matter if the Earth rotated clockwise? In the Futurama episode "That Darn Katz!" they save the world by rotating the Earth backwards saying it shouldn't matter (which direction Earth rotates). If Earth rotated clockwise and remained in ... 4answers 3k views ### How do astronomers measure the distance to a star or other celestial object? How do scientists measure the distance between objects in space? For example, Alpha Centauri is 4.3 light years away. 2answers 350 views ### How did Halley calculate the distance to the Sun by measuring the transit of Venus? What numbers did Halley, Cook, et al. have? What was the strategy by which they calculated the AU? 4answers 545 views ### Optical explanation of images of stars? 
Very often when viewing pictures of the cosmos taken by telescopes, one can observe that larger/brighter stars do not appear precisely as points/circles on the image. Indeed, the brighter the light ... 2answers 771 views ### What are good books for graduates/undergraduates in Astrophysics? There are no book recommendations for Astrophysics here. I will write my own answer, but I am also interested in what are others' views on the question (I will NOT mark my own answer as the best one). ... 2answers 229 views ### Is dark matter really present around the sun? Recently I read an article that there is dark matter around the sun, but if it is so, then why can we see it clearly? If it is called matter then it shall show some hindrance in radiation we receive ... 1answer 134 views ### What time scale is used by the JPL HORIZONS system? I'm confused by the use of the term "UT" in the description of time scales used by the JPL HORIZONS system. Their manual states that UT is Universal Time This can mean one of two non-uniform ... 4answers 489 views ### How is it possible for astronomers to see something 13B light years away? In an NPR News story from a few years back: "A gamma-ray burst from about 13 billion light years away has become the most distant object in the known universe." I'm a layman when it comes ... 4answers 260 views ### Are galactic stars spiraling inwards? Are the stars in our galaxy spiraling inwards towards the center, or are they in a permanent orbit? And if we are heading towards the center then what is the rate of this process? I started ... 2answers 109 views ### What is a good introductory text to astronomy What is a good and easy to read introductory text for an adult with limited basic scientific knowledge to astronomy for someone without a telescope and lives in a big city and why do you think that ... 2answers 123 views ### What's the evidence supporting 1 singular Big Bang? [duplicate] Possible Duplicate: What has been proved about the big bang, and what has not? I love to dabble with science, I'm by no means a scholar in this field. One thing that I haven't seen proven yet ... 1answer 382 views ### What percentage of the sky is occluded by stars? If you drew rays from the center of the earth out to infinity at every angle, what percentage of them would intersect a star? Extra details: Assume the rays are mathematical rays, or that they ... 3answers 427 views ### Direct observations of a black hole? I'm not very knowledgeable about physics generally, but know that nothing can escape a black hole's gravitational pull, not even light (making them nearly invisible?). My question is: What has been ... 3answers 295 views ### Is there a limit to the resolving power of a mirror telescope? Like, if you flattened out Ceres to a 1 mm iron foil telescope mirror with 20x the surface area of the Sun, could you resolve details on the surface of an exoplanet? Could you make it arbitrarily ... 0answers 209 views ### Why is my approach to the equation of time off by a constant? I'm trying to better understand the causes for the equation of time by deriving an approximation from first principles. My naive approach, $EOT_{NAIVE}$, is to take the difference between the right ... 2answers 1k views ### How is the speed of light calculated? How is the speed of light calculated? My knowledge of physics is limited to how much I studied till high school. One way that comes to my mind is: if we throw light from one point to another (of known 
9answers 168 views ### In astronomy what phenomena has theory predicted before observations? As far as I know, astronomy is generally an observational science. We see something and then try to explain why it is happening. The one exception that I know of is black holes: first it was thought ... 5answers 676 views ### Anti-Matter Black Holes Assuming for a second that there were a pocket of anti matter somewhere sufficiently large to form all the types of objects we can see forming from normal matter - then one of these objects would be a ... 4answers 736 views ### How can I stabilize an unstable telescope? I have an 80 mm refractor telescope on a tripod, but it shakes on every touch. It's very hard to see via a 6 mm (x120) ocular. Even a little wind causes the image to become too unsteady. How ... 2answers 4k views ### How is distance between sun and earth calculated? How has the distance between sun and earth been calculated by scientists? And the size of the sun? Thanks, 1answer 57 views ### Is there an established standard for naming exoplanets? I understand that exoplanets are named by adding a lowercase letter to the designation of the planet's parent star or stellar system, beginning with 'b' (the star itself is 'a') in order of ... 1answer 43 views ### Is there an Algorithm to find the time when the sun is X degrees above the horizon for a given latitude B at date C Is there an accurate algorithm / method to determine the precise time of day/night when the sun is X degrees above (or below) the horizon for a given latitude Y at date Z? Is this the same question ... 2answers 258 views ### When is the right ascension of the mean sun 0? I understand that the right ascension of the mean sun changes (at least over a specified period) by a constant rate, but where is it zero? I had naively assumed that it would be zero at the most ... 1answer 232 views ### When and how were relative distances to the planets first measured? I understand that the absolute distance to a planet can be measured using earth-baseline (e.g., diurnal) parallax, and that the first reasonably accurate such measurement was made for Mars by Cassini ... 1answer 124 views ### How do physicists and astronomers handle leap seconds? I'm confused by the many contradictory descriptions I see about how UTC leap seconds are accounted for. I understand that there are various ways to handle them in common practice, and I've seen a ... 2answers 293 views ### Does the length of the sidereal day vary systematically? I'm confused about some properties of the sidereal day, in particular whether its duration varies systematically over the course of the year.1 It seems to me that that must be the case, but the ... 1answer 225 views ### How did Copernicus establish the relative distance to the superior planets? I understand that the relative distances to the planets had been calculated using various methods since ancient times, and, in particular, that the assumptions of the Copernican model of the Solar ... 3answers 4k views ### How does the moon reflect light? We can see the moon in the night because it reflects sunlight. But the light is incident on the opposite side of moon with respect to the observer in the night. In this case, how does the moon ... 5answers 256 views ### Does the rotational speed of a planet consistently become faster and faster given that there are no conflicting events? [closed] Does the rotational speed of a planet consistently become faster and faster given that there are no conflicting events? 
1answer 186 views ### How to explain the Moon halo phenomenon? Today, here in Brazil, I have observed (and am still observing) an interesting phenomenon. The Moon is near to a big star in the sky, but this is normal. The interesting part is what's around them. ... 3answers 254 views ### Why is this radio telescope's reflector spherical and not parabolic? This is the Arecibo Observatory in Arecibo, Puerto Rico. Its reflector is spherical, measuring 1,001 ft. in diameter. It is considered the most sensitive radio telescope on Earth, but the fact that ... 2answers 176 views ### How would one navigate interstellar space? Headed out from Earth within the Solar System, Sol and Earth both may be used as reference. When traveling in interstellar space with stellar systems themselves traveling at varying velocities even ... 6answers 436 views ### Can any telescope be used for solar observing? Can any telescope, such as an 8" reflector, that is normally used at night to look at planets be used or adapted for solar observing? What kind of adapters or filters are required or is it better to ... 2answers 439 views ### What does the sky look like to human eyes from orbit? There are numerous pictures, obviously, of the blackness of space from the shuttle, the space station, and even the moon. But they all suffer from being from the perspective of a camera, which is not ... 1answer 435 views ### How do we know the masses of single stars? I have recently read that we can only know the masses of stars in binary systems, because we use Kepler's third law to indirectly measure the mass. However, it is not hard to find measurements for the ... 2answers 101 views ### What's the best way to start out with astrophotography on a tight budget? I'm new to astronomy and I don't have a great deal of cash to spend. I currently have a 3" reflector telescope which I've had great fun with. I also have a Pentax DSLR which I've been using to take ... 2answers 215 views ### Are there planetary systems where the planes of orbits vary greatly? Inspired by this question, are there any known planetary systems with largely varying planes of orbit? For example a system where two planets have perpendicular planes? 3answers 493 views ### Is a water world possible, and for how long could it be stable? I have several questions regarding this topic. First, could a water world be stable for thousands of years with most of its surface remaining covered in water. What would it take for this to be ... 1answer 1k views ### How much sky do we see at any one moment? When we look at any particular point in the sky, what percentage of the celestial sphere do we see? This question arises from the notion that on average there passes one meteor per hour overhead. So ... 0answers 151 views ### Mirrors and light beam divergence technology limits There are many applications for orbital space mirrors in astronomy (better telescopes) and space propulsion (solar power for deep space probes), but this is limited by the minimum beam divergence ... 2answers 220 views ### How much thrust would be needed to turn a hobbyist satellite into a deep space probe? I was reading the article Weather Balloon Space Probes that says you can put your own satellite at 65,000 ft temporarily. Is it even remotely possible to raise the probe high enough using ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9419195055961609, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/6637?sort=votes
## The De Rham Cohomology of $\mathbb{R}^n - \mathbb{S}^k$ I'm reading Madsen and Tornehave's "From Calculus to Cohomology" and tried to solve this interesting problem regarding knots. Let $\Sigma\subset \mathbb{R}^n$ be homeomorphic to $\mathbb{S}^k$; show that $H^p(\mathbb{R}^n - \Sigma)$ equals $\mathbb{R}$ for $p=0,n-k-1, n-1$ and 0 for all other $p$. Here $1\leq k \leq n-2$. Now the case $p=0$ is obvious from connectedness, and the two other cases are easily solved by applying the facts that $H^{p+1}(\mathbb{R}^{n+1} - A) \simeq H^p(\mathbb{R}^n - A),~~~~p\geq 1$ and $H^1(\mathbb{R}^n - A) \simeq H^0(\mathbb{R}^n - A)/\mathbb{R}\cdot 1$ So what is my problem, really? Now instead let's look at this directly from Mayer-Vietoris. Let $\hat{D}^k$ be the open unit disk and $\bar{D}^k$ the closed one. Then $\mathbb{R}^n - \mathbb{S}^k = (\mathbb{R}^n - \bar{D}^k)\cup (\hat{D}^k)$ and $(\mathbb{R}^n - \bar{D}^k)\cap (\hat{D}^k) = \emptyset$ Now $H^p(\mathbb{R}^n - \bar{D}^k) \simeq H^p(\mathbb{R}^n - \{ 0 \})$ since $\bar{D}^k$ is contractible. And $H^p(\mathbb{R}^n - \{ 0 \})$ is $\mathbb{R}$ if $p=0,n-1$ and 0 else. Since $\hat{D}^k$ is open and star-shaped, we find its cohomology to be $\mathbb{R}$ for $p=0$ and 0 for all other $p$. This yields an exact sequence $\cdots\rightarrow 0\overset{I^{*}}\rightarrow H^{n-1}(\mathbb{R}^n - \mathbb{S}^k) \overset{J^{*}}\rightarrow \mathbb{R} \rightarrow 0\cdots$ So due to exactness I find that $\ker(J^*) = \text{Im}(I^*) = 0$ and that $J^*$ is surjective, hence $H^{n-1}(\mathbb{R}^n - \mathbb{S}^k) \simeq \mathbb{R}$. But ... If I apply the exact same approach to $p = n-k-1$ my answer would be $0$ for $H^{n-k-1}(\mathbb{R}^n - \mathbb{S}^k)$. Where does this last approach fail? (Also - my TeX doesn't seem to render properly when published, help?) - ## 2 Answers To apply the Mayer-Vietoris sequence, you need subspaces whose interiors cover your space (see e.g. Wikipedia, or Hatcher, p. 149). This is not true in your example, because a k-disk in Rn has empty interior for k<n. You might also enjoy deriving this result from Alexander duality. Edit: Carsten makes an excellent point, which is that the hard part is to show that the homology of the complement is independent of the embedding. You did say that this is proved in your book, but I wanted to point out that this is quite difficult and arguably surprising. 1) One embedding that satisfies the conditions of the theorem is the Alexander horned sphere, a "wild" embedding of S2 into S3 (this animation is quite nice too). While it's true that the outer component of the complement has the homology of a point, it is very far from being simply connected -- in fact its fundamental group is not finitely generated. (You can find an explicit description of its fundamental group in Hatcher, p. 170-172.) 2) Every knot is an embedding of S1 into S3. The fundamental group of the complement is a strong knot invariant, and is usually much more complicated than just Z. Since H1 of the knot complement is the abelianization of the knot group, the result you are using implies that all knot groups have infinite cyclic abelianization. This is true (it can be seen nicely from the Wirtinger presentation), but it's not obvious. 3) It is important that the ambient space is a sphere (or equivalently Rn). For a simple example where the theorem breaks down, consider embeddings of S1 into a surface Σg of genus g≥2. 
Taking g=2 for simplicity, we see that there are three topologically inequivalent ways of embedding a circle into Σ2: A) a tiny loop enclosing a disk; B) a loop encircling the waist of the surface and separating it into two components, each of genus 1; and C) a loop going through one of the handles, which does not separate the surface at all. The homology groups of Σ2 are H0=Z, H1=Z4, and H2=Z. For both A) and B), the complement of S1 has homology groups H0=Z2 because the curve separates, and H1=Z4. However, for C) we have H0=Z because the complement is connected, and H1=Z3 because we have "interrupted" one of the elements [you can see where it went by looking at the Mayer-Vietoris sequence]. Thus we see that the homology of the complement depends essentially on the embedding into the surface Σg, in contrast with the classical case of embedding a circle into the sphere S2. - Your question has been answered by Tom. But I am also not sure if you are aware that the point of the problem was to show that the cohomology of a sphere embedded in euclidean space is independent of the embedding. You seem to think only of the standard embedding. As Tom mentioned, duality is one way to prove this independence. (This might have been better as a comment, but I am not yet allowed to comment.) - From Tom's comment I figured that I couldn't use those two sets as a covering, as both of them aren't open! Now I'm not really sure what you are trying to say, and my book doesn't say anything about duality (at least not explicitly, anyways). Now the reason I only consider $\mathbb{S}^k$ is that it is closed and homeomorphic to $\Sigma$, hence $H^p(\mathbb{R}^n - \mathbb{S}^k)$ = $H^p(\mathbb{R}^n - \Sigma)$. Is there anything I seem not to be aware of? – Magnus Botnan Nov 24 2009 at 22:56 Ok, I do not know the book. Generally, the hard part is to deduce from $\Sigma\approx\mathbb S^k$ that $H^p(\mathbb R^n-\Sigma)\cong H^p(\mathbb R^n-\mathbb S^k)$. It is hard, because the complements are not homotopy equivalent in general. But maybe this was proven somewhere in the book. – Carsten Schultz Nov 25 2009 at 22:24 Yep, it is proved! (I tried to find the proof in the book through google books but unfortunately those two pages are removed from the preview.) – Magnus Botnan Nov 26 2009 at 1:05
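For reference, here is the precise hypothesis that fails in the question's computation (the standard statement, spelled out rather than quoted from the answers): Mayer-Vietoris applies to subsets whose interiors cover the union; for de Rham cohomology one takes $U$ and $V$ open, and then there is a long exact sequence $$\cdots\rightarrow H^p(U\cup V)\rightarrow H^p(U)\oplus H^p(V)\rightarrow H^p(U\cap V)\rightarrow H^{p+1}(U\cup V)\rightarrow\cdots$$ In the decomposition attempted in the question, the disk $\hat{D}^k$ has empty interior in $\mathbb{R}^n$ for $k<n$, so the hypothesis is violated and the sequence is simply not available.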
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 36, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9554020762443542, "perplexity_flag": "head"}
http://mathhelpforum.com/algebra/52463-angular-speed.html
# Thread: 1. ## Angular Speed..... Find the angular speed of a point on the earth's surface. That is all that is given; assume that the radius is 6400 km. How do I set this up? 2. angular velocity is the same for all points on the earth except the poles. $\omega = \frac{1\ \text{revolution}}{\text{day}}$ 3. Hello, Brndo4u! There is either too much information or insufficient information. Find the angular speed of a point on the Earth's surface. Assume that the radius of the Earth is 6400 km. Every point on the Earth, except the two poles, turns through 360° every 24 hours. . . Hence, the point moves: . $\frac{360}{24} \:=\:15\text{ degrees per hour.}$ And the radius of the Earth is irrelevant. If the speed is to be in kilometers per hour, . . we should have been given the latitude of the point. Assuming that point is on the Equator, . . the circumference is: . $2\pi\cdot6400 \:\approx\:40,\!212.4$ km. Then the point moves at: . $\frac{40,\!212.4}{24} \:\approx\:1675.5$ km/hr.
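A quick numeric check of both answers (a sketch in Python; it uses the 24-hour solar day, ignoring the sidereal/solar distinction, and the thread's 6400 km radius):

```
import math

R = 6400.0            # km, radius used in the thread
T = 24 * 3600.0       # s, one rotation

omega = 2 * math.pi / T              # angular speed in rad/s, ~7.27e-5
print(omega)
print(math.degrees(omega) * 3600)    # 15 degrees per hour
print(omega * R * 3600)              # ~1675.5 km/h at the equator
```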
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8005157709121704, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/57565/list
## Return to Answer

2 Reformulated the answer to clarify that the cited random-walk based algorithm is not actually space efficient; added 63 characters in body

Brent's paper Some integer factorization algorithms using elliptic curves describes a "birthday paradox" ECM extension based on a random walk that only uses $O(\sqrt{r})$ group operations on the elliptic curve (see Section 6), however it is not space efficient. Cycle detection techniques do not apply because the iteration function used is not a deterministic operation on the elliptic curve modulo any of the (unknown) prime factors of $n$, and it is not clear how one might construct such a function. One can apply the usual Pollard-$\rho$ approach to computations on the elliptic curve performed mod $n$, say using an iteration function where $Q_{i+1}$ is $2Q_i$ or $2Q_i+Q$, depending on the parity of the $x$-coordinate of $Q_i$ when viewed as an integer in $[0,n-1]$. This will eventually lead to a cycle, which can be recognized using standard techniques (e.g. Floyd's algorithm) with a space complexity of $O(\log n)$ bits. But the expected length of this cycle (assuming this iteration function actually approximates a random walk) is $O(\sqrt{m})$, where $m$ is the order of $P$ on $E(\mathbb{Z}/n\mathbb{Z})$, not $O(\sqrt{r})$.

1 The answer to your question is, essentially, yes. The use of birthday paradox algorithms in the second stage of ECM is standard, and this can be done space efficiently using a random walk approach. The usual Pollard rho algorithm requires some minor modifications (using a slightly different choice of random walk, as you suggested), which are described in detail in section 6 of Brent's 1986 paper Some integer factorization algorithms using elliptic curves.
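As an illustration of the "standard techniques (e.g. Floyd's algorithm)" mentioned in revision 2, here is a generic cycle-detection sketch; `step` is a hypothetical placeholder for the iteration $Q_{i+1} = 2Q_i$ or $2Q_i+Q$ described above, and the elliptic-curve arithmetic mod $n$ itself is not shown:

```
def floyd_cycle_point(step, q0):
    """Floyd's tortoise-and-hare cycle detection.

    Stores only two iterates at any time, which is where the O(log n)-bit
    space complexity in the text comes from. `step` must be a deterministic
    function of its argument (e.g. Q -> 2Q or 2Q + q0, chosen by the
    parity of x(Q) viewed as an integer).
    """
    tortoise = step(q0)
    hare = step(step(q0))
    while tortoise != hare:
        tortoise = step(tortoise)   # advances 1 step per round
        hare = step(step(hare))     # advances 2 steps per round
    return tortoise                 # some point on the cycle
```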
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8984909057617188, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2008/06/02/the-category-of-matrices-i/?like=1&source=post_flair&_wpnonce=5d5a17dad5
# The Unapologetic Mathematician ## The Category of Matrices I What we’ve been building up to is actually the definition of a category. Given a field $\mathbb{F}$ we define the category $\mathbf{Mat}(\mathbb{F})$ of matrices over $\mathbb{F}$. Most of our other categories have been named after their objects — groups are the objects of $\mathbf{Grp}$, commutative monoids are the objects of $\mathbf{CMon}$, and so on — but not here. In this case, matrices will be the morphisms, and the category of matrices illustrates in a clearer way than any we’ve seen yet how similar categories are to other algebraic structures that are usually seen as simpler and more concrete. Down to business: the objects of $\mathbf{Mat}(\mathbb{F})$ will be the natural numbers $\mathbb{N}$, and the morphisms in $\hom(m,n)$ are the $n\times m$ matrices. That is, a morphism is a collection of field elements $\left(t_i^j\right)$ where $i$ runs from ${1}$ to $m$ and $j$ runs from ${1}$ to $n$. We compose two morphisms by the process of matrix multiplication. If $\left(s_i^j\right)$ is an $n\times m$ matrix in $\hom(m,n)$ and $\left(t_j^k\right)$ is a $p\times n$ matrix in $\hom(n,p)$, then their product $\left(s_i^jt_j^k\right)$ is a $p\times m$ matrix in $\hom(m,p)$ (remember the summation convention). The category of matrices is actually enriched over the category of vector spaces over $\mathbb{F}$. This means that each set of morphisms is actually a vector space over $\mathbb{F}$. Specifically, we add matrices of the same dimensions and multiply matrices by scalars component-by-component. We have yet to speak very clearly about identities. The axioms of an enriched category state that for each object (natural number) $n$ there must be a linear function $I_n:\mathbb{F}\rightarrow\hom(n,n)$. Because of linearity, this function is completely determined by its value at $1\in\mathbb{F}$: $I_n(x)=xI_n(1)$. We must pick this matrix $I_n(1)$ so that it acts as an identity for matrix multiplication, and we choose the Kronecker delta for this purpose: $I_n(1)=\left(\delta_i^j\right)$. That is, we use an $n\times n$ matrix whose entries are ${1}$ if the indices are equal and ${0}$ otherwise. It’s straightforward to check that this is indeed an identity. Other properties I’ve skipped over, but which aren’t hard to check, are that matrix multiplication is bilinear and associative. Both of these are straightforward once written out in terms of the summation convention; sometimes deceptively so. For example, the associativity condition reads $(r_i^js_j^k)t_k^l=r_i^j(s_j^kt_k^l)$. But remember that there are hidden summation signs in here, so it should really read: $\displaystyle\sum\limits_k\left(\sum\limits_jr_i^js_j^k\right)t_k^l=\sum\limits_jr_i^j\left(\sum\limits_ks_j^kt_k^l\right)$ so there’s an implicit change in the order of summation here. Since we’re just doing finite sums, this is no problem, but it’s still worth keeping an eye on. ## 9 Comments » 1. [...] Category of Matrices II As we consider the category of matrices over the field , we find a monoidal [...] Pingback by | June 3, 2008 | Reply 2. “Down to business: the objects of \mathbf{Mat}(\mathbb{F}) will be the natural numbers \mathbb{N}…” Wait, why? Why wouldn’t it be a vector? I can see how the morphisms are matrices, but don’t the morphisms map one object to another? And wouldn’t matrices – being linear operators – map vector spaces to vector spaces, and vectors to vectors? So…why are natural numbers the objects? Comment by Wait one moment... 
| June 4, 2008 | Reply 3. (Sorry to keep posting ) But a thought, it's the natural numbers because the natural number indicates the number of dimensions of the vector space, right? Which is why it's an n by m matrix that maps the object 'm' to the object 'n', right? Comment by Wait one moment... | June 4, 2008 | Reply 4. Yes, that's it. This is not a category of algebraic structures and mappings between them. The morphisms are the most important part of this category, and the objects are largely bookkeeping to tell which morphisms can be composed. Comment by | June 4, 2008 | Reply 5. 'Wait one moment' wrote: So…why are natural numbers the objects? The real reason is that it doesn't matter what you call the objects – you can call them $n$ or $F^n$. The latter seems more reasonable when you're first getting started, but once this realization fully sinks in – it doesn't matter what you call the objects – people often decide the shorter name $n$ is fine. This is especially true because there's a large class of important categories where there's one object per natural number. Some of these categories are called "PROPs" – and there's a whole big theory of PROPs, which is very interesting. Comment by | June 5, 2008 | Reply 6. Grr… I typed in the html to get F^n with a superscript for the n, but all I got was Fn. Comment by | June 5, 2008 | Reply 7. fixed it. And yes, that's a good point. It's pretty much what I meant by "the objects are largely bookkeeping", but your explanation is appreciated. Comment by | June 5, 2008 | Reply 8. Alright, thanks for the clarification Dr Baez and Dr Armstrong; if I may employ a terrible pun, may I give *PROPS* to you both. Ahaha…ahem, what a terrible pun. Comment by Wait one moment... | June 7, 2008 | Reply 9. [...] of Matrices III At long last, let's get back to linear algebra. We'd laid out the category of matrices , and we showed that it's a monoidal category with [...] Pingback by | June 23, 2008 | Reply
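As a concrete illustration of the composition rule and the Kronecker-delta identity described in the post (my own sketch, not code from the blog; plain Python lists stand in for matrices over $\mathbb{F}$):

```
def compose(t, s):
    """Composite of s in hom(m, n) (an n-by-m array, rows indexed by j)
    with t in hom(n, p): entry [k][i] is sum_j t[k][j] * s[j][i],
    giving a p-by-m matrix in hom(m, p)."""
    p, n, m = len(t), len(s), len(s[0])
    return [[sum(t[k][j] * s[j][i] for j in range(n))  # the hidden sum over j
             for i in range(m)]
            for k in range(p)]

def identity(n):
    """The Kronecker delta: the identity morphism at the object n."""
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

s = [[1, 2], [3, 4], [5, 6]]           # a morphism in hom(2, 3)
assert compose(identity(3), s) == s    # left identity
assert compose(s, identity(2)) == s    # right identity
```

The two assertions check the identity axiom for a sample morphism, exactly as the post verifies that the delta matrix "acts as an identity for matrix multiplication".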
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 41, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9306620955467224, "perplexity_flag": "head"}
http://mathoverflow.net/questions/108494?sort=votes
## How can I picture antisymmetry of the Lie derivative? It's obvious that the Lie derivative defined in terms of Lie brackets is anti-symmetric. But what is an intuitive way to visualize the anti-symmetry in the 'differentiating along a flow' definition? Thanks! - ## 1 Answer Whichever definition you use that involves differentiating along a flow, it ought to have the property that $L_X X = 0$; in other words, for reasons that should be clear, the Lie derivative is alternating. In the presence of bilinearity, this is equivalent to antisymmetry. -
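To make the last step of the answer explicit (a standard polarization argument): since the operation is bilinear and $L_Z Z = 0$ for every $Z$, expanding gives $$0 = L_{X+Y}(X+Y) = L_X X + L_X Y + L_Y X + L_Y Y = L_X Y + L_Y X,$$ so $L_X Y = -L_Y X$.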
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9307227730751038, "perplexity_flag": "middle"}
http://mathhelpforum.com/algebra/156190-derive-equation-rectangle.html
# Thread: 1. ## Derive equation of rectangle Hi, I have a triangle whose perimeter is 34cm. The diagonal is 13cm, and the width is $x$cm. Derive the equation $x^2-17x+60=0$. I attempted this, and this is what I did: using Pythagoras' theorem I knew that $13 = x^2 + h^2$ where $h$ is the height of the rectangle. Thus, the unknown must be $h^2 = 13-x^2$, which becomes $h = \sqrt{13-x^2}$; therefore the following should be true: $2x+2(\sqrt{13-x^2})=34$. However, as you can see, it doesn't match the derivation given at the start. How wrong was my answer? Where do I go from here? I would like to point out that this isn't homework. Thanks for any help. 2. Since seeing it on screen I have managed to progress: $2(\sqrt{13-x^2})=34-2x$ $\sqrt{13-x^2}=17-x$ $13-x^2=(17-x)^2$ $13-x^2=289-34x+x^2$ $276-34x+2x^2=0$ $138-17x+x^2=0$ However, this still doesn't match the provided derivation... 3. Originally Posted by webguy Since seeing it on screen I have managed to progress: $2(\sqrt{13-x^2})=34-2x$ $\sqrt{13-x^2}=17-x$ $13-x^2=(17-x)^2$ $13-x^2=289-34x+x^2$ $276-34x+2x^2=0$ $138-17x+x^2=0$ However this still doesn't match the provided derivation... You have two major typos!! According to your calculations, the shape ought to be a rectangle, so that the perimeter is 2x+2h. Then you are incorrectly applying Pythagoras' theorem on top of that: $x^2+h^2=13^2$. The square on the hypotenuse is the sum of the squares of the perpendicular sides. Hence, replace your 13 in the calculations with 169 and see how you get on. EDIT: my apologies....you do have "rectangle" in your title! 4. Hello, webguy I have a rectangle whose perimeter is 34 cm. The diagonal is 13 cm, and the width is $x$ cm. Derive the equation: $x^2-17x+60\:=\:0$ Code: ```
* - - - - - - - - - - - *
|                     * |
|                13 *   |
|                 *     | x
|              *        |
|           *           |
* - - - - - - - - - - - *
            L
``` The length of the rectangle is $L$ and the width of the rectangle is $x$. The perimeter is 34: $2L + 2x \:=\:34 \quad\Rightarrow\quad L \:=\:17 - x$ [1]. Pythagoras says: $x^2 + L^2 \:=\:13^2$. Substitute [1]: $x^2 + (17-x)^2 \:=\:169$, so $x^2 + 289 - 34x + x^2 \:=\:169$, hence $2x^2 - 34x + 120 \;=\;0$, that is, $x^2 - 17x + 60 \:=\:0$. 5. Originally Posted by webguy I have a triangle whose perimeter is 34cm. Oops! Sorry, that should have said rectangle. I don't understand why it's $13^2=x^2+h^2$. Pythagoras' theorem states that the hypotenuse is $c^2=b^2-a^2$, and I know the hypotenuse; it's 13. So shouldn't it be $13=b^2-a^2$? 6. Hi, sorry to be so thick, but where does the 34x come from in the second line after "Substitute"? Thanks in advance. 7. From expanding $(17-x)^2$ you get $289-17x-17x+x^2$. ---------- Originally Posted by webguy I don't understand why it's $13^2=x^2+h^2$. Pythagoras' theorem states that the hypotenuse is $c^2=b^2-a^2$, and I know the hypotenuse; it's 13. So shouldn't it be $13=b^2-a^2$? 8. Pythagoras' theorem states: $a^2 + b^2 = c^2$, where a and b are the lengths of the perpendicular sides of the triangle and c is the hypotenuse. You have Pythagoras' theorem wrong: the equation you wrote is for finding another side length when you already know the hypotenuse and one side. Wikipedia: Pythagoras' theorem 9. Oh wow, I am completely stupid for making such a mistake! I will return to my cave... 10. Actually I still don't understand why it's $13^2=x^2+h^2$ and not $13=x^2+h^2$. 11. Originally Posted by webguy Actually I still don't understand why it's $13^2=x^2+h^2$ and not $13=x^2+h^2$.
I could draw some diagrams to show you, though Educated's link contains most of them. But if you use a ruler and draw two perpendicular sides, 3 centimetres and 4 centimetres long, then draw the hypotenuse and measure it, it should be 5 centimetres long if the other two sides are perpendicular. Notice then that $4^2+3^2=16+9=25=5^2$. It works the same way for all other dimensions if the sides are perpendicular. 12. I understand Pythagoras' proof, but I don't understand why it is $13^2$ and not $13$. I know the hypotenuse is 13cm, so why do I square it instead of replacing $c^2$? 13. Originally Posted by webguy Actually I still don't understand why it's $13^2=x^2+h^2$ and not $13=x^2+h^2$. The equation is $\,a^2+b^2=c^2$. 13 takes the place of c, if that helps. 14. Originally Posted by webguy I understand Pythagoras' proof, but I don't understand why it is $13^2$ and not $13$. I know the hypotenuse is 13cm, so why do I square it instead of replacing $c^2$? I think I see what you mean.... The side lengths of the triangle are x, (17-x) and 13. These are your $a,\; b,\; c.$ You have to square all of those to use Pythagoras' theorem. The lengths of the perpendicular sides themselves do not sum to give the hypotenuse length, but the squares of the perpendicular sides do sum to give the square of the hypotenuse. $a^2+b^2=c^2\Rightarrow\ x^2+(17-x)^2=13^2$ $x^2+289-34x+x^2=169$ $2x^2-34x+289-169=0$ $2x^2-34x+120=0$ 15. Thank you for all the help!
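To close the loop (this final step is not in the thread): $$x^2-17x+60=0 \iff (x-5)(x-12)=0,$$ so the sides are $5$ and $12$; as a check, the perimeter is $2(5+12)=34$ and the diagonal is $\sqrt{5^2+12^2}=\sqrt{169}=13$, exactly as given.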
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 55, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9367595911026001, "perplexity_flag": "middle"}
http://cms.math.ca/cjm/browse/vn/60/2
Canadian Mathematical Society, Canadian Journal of Mathematics (CJM): Volume 60 Number 2 (Apr 2008) Page Contents 241 Alexandrova, Ivana Here we define and prove some properties of the semi-classical wavefront set. We also define and study semi-classical Fourier integral operators and prove a generalization of Egorov's theorem to manifolds of different dimensions. 264 Baake, Michael; Baake, Ellen 266 Bergeron, Nantel; Reutenauer, Christophe; Rosas, Mercedes; Zabrocki, Mike We introduce a natural Hopf algebra structure on the space of noncommutative symmetric functions. The bases for this algebra are indexed by set partitions. We show that there exists a natural inclusion of the Hopf algebra of noncommutative symmetric functions in this larger space. We also consider this algebra as a subspace of noncommutative polynomials and use it to understand the structure of the spaces of harmonics and coinvariants with respect to this collection of noncommutative polynomials and conclude two analogues of Chevalley's theorem in the noncommutative setting. 297 Bini, G.; Goulden, I. P.; Jackson, D. M. The classical Hurwitz enumeration problem has a presentation in terms of transitive factorizations in the symmetric group. This presentation suggests a generalization from type $A$ to other finite reflection groups and, in particular, to type $B$. We study this generalization both from a combinatorial and a geometric point of view, with the prospect of providing a means of understanding more of the structure of the moduli spaces of maps with an $S_2$-symmetry. The type $A$ case has been well studied and connects Hurwitz numbers to the moduli space of curves. We conjecture an analogous setting for the type $B$ case that is studied here. 313 Choi, Yong-Kab; Csörgő, Miklós This paper establishes general theorems which contain both moduli of continuity and large incremental results for $l^\infty$-valued Gaussian random fields indexed by a multidimensional parameter under explicit conditions. 334 Curry, Eva We show that a characterization of scaling functions for multiresolution analyses given by Hernández and Weiss and a characterization of low-pass filters given by Gundy both hold for multivariable multiresolution analyses. 348 Guillén Santos, F.; Navarro, V.; Pascual, P.; Roig, Agustí We prove that for a topological operad $P$ the operad of oriented cubical singular chains, $C^{\mathrm{ord}}_\ast(P)$, and the operad of simplicial singular chains, $S_\ast(P)$, are weakly equivalent. As a consequence, $C^{\mathrm{ord}}_\ast(P;\mathbb{Q})$ is formal if and only if $S_\ast(P;\mathbb{Q})$ is formal, thus linking together some formality results which are spread out in the literature. The proof is based on an acyclic models theorem for monoidal functors. We give different variants of the acyclic models theorem and apply the contravariant case to study the cohomology theories for simplicial sets defined by $R$-simplicial differential graded algebras. 379 Jørgensen, Peter A commutative local Cohen–Macaulay ring $R$ of finite Cohen–Macaulay type is known to be an isolated singularity; that is, $\mathrm{Spec}(R) \setminus \{ \mathfrak{m} \}$ is smooth. This paper proves a non-commutative analogue.
Namely, if $A$ is a (non-commutative) graded Artin–Schelter Cohen–Macaulay algebra which is fully bounded Noetherian and has finite Cohen–Macaulay type, then the non-commutative projective scheme determined by $A$ is smooth. 391 Migliore, Juan C. In a recent paper, F. Zanello showed that level Artinian algebras in 3 variables can fail to have the Weak Lefschetz Property (WLP), and can even fail to have unimodal Hilbert function. We show that the same is true for the Artinian reduction of reduced, level sets of points in projective 3-space. Our main goal is to begin an understanding of how the geometry of a set of points can prevent its Artinian reduction from having WLP, which in itself is a very algebraic notion. More precisely, we produce level sets of points whose Artinian reductions have socle types 3 and 4 and arbitrary socle degree $\geq 12$ (in the worst case), but fail to have WLP. We also produce a level set of points whose Artinian reduction fails to have unimodal Hilbert function; our example is based on Zanello's example. Finally, we show that a level set of points can have Artinian reduction that has WLP but fails to have the Strong Lefschetz Property. While our constructions are all based on basic double G-linkage, the implementations use very different methods. 412 Nguyen-Chu, G.-V. We compute the restrictions to the spherical Hecke algebra of the compact twisted traces of a set of explicitly constructed representations of the group $\mathrm{GL}(N, F)$, where $F$ is a $p$-adic field. In particular, these computations resolve a question posed in a previous article by the same author. 443 Shen, Z.; Yildirim, G. Civi In this paper, we find equations that characterize locally projectively flat Finsler metrics in the form $F = (\alpha + \beta)^2/\alpha$, where $\alpha=\sqrt{a_{ij}y^iy^j}$ is a Riemannian metric and $\beta= b_i y^i$ is a $1$-form. Then we completely determine the local structure of those with constant flag curvature. 457 Teplyaev, Alexander We define sets with finitely ramified cell structure, which are generalizations of post-critically finite self-similar sets introduced by Kigami and of fractafolds introduced by Strichartz. In general, we do not assume even local self-similarity, and allow countably many cells connected at each junction point. In particular, we consider post-critically infinite fractals. We prove that if Kigami's resistance form satisfies certain assumptions, then there exists a weak Riemannian metric such that the energy can be expressed as the integral of the norm squared of a weak gradient with respect to an energy measure. Furthermore, we prove that if such a set can be homeomorphically represented in harmonic coordinates, then for smooth functions the weak gradient can be replaced by the usual gradient. We also prove a simple formula for the energy measure Laplacian in harmonic coordinates.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.864999532699585, "perplexity_flag": "head"}
http://mathoverflow.net/questions/26392?sort=oldest
## What is the cardinality of the set of true, undecidable, minimal sentences in a formal theory of arithmetic? Let T be a true theory of arithmetic to which the incompleteness theorems apply. Consider two sentences in the language of T to be equivalent if they are provably equivalent over T. How many equivalence classes are there, under this relation, that contain a true-but-unprovable sentence? - 2 How can there be uncountably many sentences? – Mariano Suárez-Alvarez May 29 2010 at 22:42 It seems you can easily construct (by nested recursion) sets of sentences that are false and grow like an infinite complete binary tree. I'll have to rethink that; for now I'll remove the comment, as it's not essential to the question. – Halfdan Faber May 29 2010 at 22:59 I thought Gödel gives you a sentence for each choice of code, and there are countably infinitely many of those (assuming your language is a reasonable size). – S. Carnahan♦ May 29 2010 at 23:48 1 It would be reasonable to try cooking up nontraditional Goedel numberings that would not be provably isomorphic in the theory; I would try the same sort of construction as for non-acceptable Goedel numberings. But there are simpler solutions to the original question, as Francois Dorais and I have indicated. – Carl Mummert May 30 2010 at 0:36 1 Halfdan, the reason your construction won't give you uncountably many false sentences is that each sentence is finite. Usually the language will only have finitely many symbols, so sentences will be finite strings of finitely many symbols. Clearly then you can have only countably many such. – Kiochi May 30 2010 at 20:34 ## 2 Answers Note: The top part answers an old version of the question, which is now irrelevant. Given an axiomatizable theory T of arithmetic, the set of all statements independent of T is the complement of a computably enumerable set. When nonempty (e.g. when T extends PA) this set is countably infinite. (Trivially, if φ is such then so is φ∧∃x(x=x), etc.) However, there is no general algorithm to produce the shortest element of such a set, never mind counting them. (The requirement that the sentence be true is negligible, since negation only adds a constant number of symbols depending on syntactic conventions.) That said, some variants of your question have been actively studied. Hilbert's Tenth Problem says that there are Diophantine equations that have no integer solutions, but this fact is not provable from T. The minimum number of variables and minimum degree such Diophantine equations can have have been studied. Over Z, Skolem showed that degree 4 is sufficient and Zhi-Wei Sun showed that 11 variables is sufficient. It is unknown whether these results are optimal. Now that I reread your question, I think you wanted to have infinitely many logically inequivalent statements, each of which is independent of T. This is true when T has no axiomatizable complete extension, which is guaranteed by Gödel's Theorem when T is a consistent axiomatizable theory that extends PA. Indeed, suppose there were only finitely many statements φ1,...,φk independent of T, up to T-provable equivalence. Then we could get an axiomatizable extension of T by adding to T each such φi or its negation ¬φi while maintaining consistency. (For example, when the standard model satisfies T, we can pick whichever is true in the standard model.)
Since we're only adding finitely many new axioms, the result would be an axiomatizable complete theory even if our finitely many decisions were very complex; this would contradict Gödel's Theorem. - Francois, thanks. Yes, I wanted to exclude exactly such trivial extensions as you mention, where you take a non-decidable proposition φ and admit countably infinitely many decidable extensions such as φ∧∃x(x=x). The idea is to remove all sentence additions that are decidable and do not change the truth interpretation. – Halfdan Faber May 30 2010 at 0:17 Francois, thanks, I think you got it... Yes, when you add the undecidable statement to the axiom scheme, you find a new undecidable statement by diagonalization. This can continue ad infinitum. However, are you sure that all these sentences are undecidable in all extended axiom schemes? – Halfdan Faber May 30 2010 at 0:21 By the way, if someone would give me one more down vote it seems I can pick up the "peer pressure" badge... – Halfdan Faber May 30 2010 at 0:24 I rewrote parts of my answer while you were writing your comment. Does the new write-up answer your question? – François G. Dorais♦ May 30 2010 at 0:28 Francois, thanks, I like your answer. I'm still thinking. However, did you understand my previous comment? If we assume there is one and only one sentence φ and find a new undecidable proposition φ2 by diagonalization in the axiom scheme extended with φ, will φ2 then be undecidable in the original axiom scheme? In general probably not, since it was explicitly constructed to state that it couldn't be proved in the extended axiom scheme. – Halfdan Faber May 30 2010 at 0:47 The original question can be read sensibly as follows: Let T be a true theory of arithmetic to which the incompleteness theorems apply. Consider two sentences in the language of T to be equivalent if they are provably equivalent over T. How many equivalence classes are there, under this relation, that contain a true-but-unprovable sentence? This avoids the issue of sentences like φ∧∃x(x=x), which I think is what the question means by "with decidable tautologies or decidable sentences disregarded". The answer is trivial, though, assuming T is a true theory: there are still countably many such equivalence classes, which is as many as there possibly could be. "True theory" means "satisfied by the standard model". First, T + Con(T) is strictly stronger than T. Also, T + Con(T) is a true theory, and the incompleteness theorems apply to it, so it is strictly weaker than (T + Con(T)) + Con(T + Con(T)). Continuing this way gives an ω-chain of stronger and stronger true theories extending T, each of which adds only a finite number of (true) axioms to T. There is a more non-trivial fact: regardless of whether T is a true theory, if T is essentially incomplete then the Lindenbaum algebra of sentences modulo provability over T is the countable atomless Boolean algebra, so it has all sorts of structure. This is because any coatom [φ] in this algebra would correspond to a complete, consistent, effective theory T + φ, which cannot exist. - +1, Carl. Of course you know this, but some readers may be interested in knowing that your construction can be iterated into the transfinite up to $\omega_1^{CK}$.
This is discussed in this old answer of mine - mathoverflow.net/questions/12865/… – François G. Dorais♦ May 30 2010 at 0:36 Carl, thanks for formulating my question with exactly the interpretation I had in mind. I will take the liberty of adding the reformulation to the question itself, if you don't mind. – Halfdan Faber May 30 2010 at 1:07 Well, I took several down votes for objections to my claim that the set of sentences without semantic equivalence classes is uncountable. This is, however, actually true. For any statement P we add the infinite set of infinite statements of the form P & 0=0 & 1=1 & .... As this is isomorphic to the infinite set of infinite binary strings, it is trivially uncountable. With the equivalence classes everything becomes countable. A minimal sentence is simply the shortest sentence in the equivalence class. – Halfdan Faber May 30 2010 at 2:06 1 There are only countably many formulas in the language of arithmetic. Each formula is finite. – Carl Mummert May 30 2010 at 2:10 2 If you mean to use infinitary logics, you have to say so explicitly, because the default is ordinary first-order logic. But that would break other things in your question. In the context of infinitary logic, it is not obvious that the incompleteness theorem would hold. Also, because the usual axioms of PA are finite formulas, and the usual inference rules cannot generate infinitary formulas, you'd have to change something in order for any infinitary formula at all to be provable. – Carl Mummert May 30 2010 at 11:17
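Written out explicitly (an editorial formalization of the chain in the accepted answer, assuming $T$ verifies that consistency of a theory implies consistency of its subtheories): set $$T_0 = T,\qquad T_{n+1} = T_n + \mathrm{Con}(T_n).$$ Each $T_n$ is true and axiomatizable, so $\mathrm{Con}(T_n)$ is true but unprovable in $T_n$. If $\mathrm{Con}(T_m)$ and $\mathrm{Con}(T_n)$ were $T$-provably equivalent for some $m<n$, then $T_n$, which proves $\mathrm{Con}(T_{n-1})$ and hence $\mathrm{Con}(T_m)$, would prove $\mathrm{Con}(T_n)$, contradicting the second incompleteness theorem. So the sentences $\mathrm{Con}(T_n)$ give infinitely many pairwise $T$-inequivalent true-but-unprovable sentences.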
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9421321749687195, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/23989/isomorphic-of-n/23991
# Isomorphic to (N,+)? Does a binary operation * exist such that (N,+) and (Z,*) are isomorphic? N is the set of natural numbers, Z is the set of integers, and + is the addition operation. If yes, please give me an isomorphism function. - ## 1 Answer Whenever you have two sets that are in bijection you can "transport" an operation, or any other algebraic structure, from one to the other. Namely, if $A$ is a set endowed with an operation $+$ and $f:A\rightarrow B$ is a bijection of sets, you can define an operation $\ast$ on $B$ simply as $$b\ast b^\prime=f(a+a^\prime)$$ where $a$ and $a^\prime$ are the unique elements in $A$ such that $f(a)=b$ and $f(a^\prime)=b^\prime$. Then $(B,\ast)$ is isomorphic to $(A,+)$ by construction. This can be applied to the question's case since there are surely bijections ${\Bbb N}\rightarrow{\Bbb Z}$. The problem is that the operations on $\Bbb Z$ that you construct in this way are very artificial and quite uninteresting. -
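To make the transport concrete, here is a small sketch in code (not from the original answer; the zig-zag bijection below is just one standard choice, taking $\mathbb{N}=\{0,1,2,\dots\}$):

```python
def f(n: int) -> int:
    """A standard bijection N -> Z: 0, 1, 2, 3, 4, ... -> 0, -1, 1, -2, 2, ..."""
    return n // 2 if n % 2 == 0 else -(n + 1) // 2

def f_inv(z: int) -> int:
    """Inverse bijection Z -> N."""
    return 2 * z if z >= 0 else -2 * z - 1

def star(a: int, b: int) -> int:
    """The transported operation on Z: a * b = f(f^{-1}(a) + f^{-1}(b))."""
    return f(f_inv(a) + f_inv(b))

# f is an isomorphism (N, +) -> (Z, star) by construction:
for m in range(20):
    for n in range(20):
        assert f(m + n) == star(f(m), f(n))

print(star(-2, 1))   # = f(3 + 2) = f(5) = -3; 'artificial', as the answer says
```

The printed value illustrates the answer's last remark: the transported operation has no visible arithmetic meaning on $\mathbb{Z}$.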
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9616856575012207, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2011/02/11/intertwinors-from-semistandard-tableaux-span-part-1/?like=1&source=post_flair&_wpnonce=eb8d9d939c
The Unapologetic Mathematician Intertwinors from Semistandard Tableaux Span, part 1 Now that we've shown the intertwinors that come from semistandard tableaux are independent, we want to show that they span the space $\hom(S^\lambda,M^\mu)$. This is a bit fidgety, but should somewhat resemble the way we showed that standard polytabloids span Specht modules. So, let $\theta\in\hom(S^\lambda,M^\mu)$ be any intertwinor, and write out the image $\displaystyle\theta(e_t)=\sum\limits_Tc_TT$ Here we're implicitly using the fact that $\mathbb{C}[T_{\lambda\mu}]\cong M^\mu$. First of all, I say that if $\pi\in C_t$ and $T_1=\pi T_2$, then the coefficients of $T_1$ and $T_2$ differ by a factor of $\mathrm{sgn}(\pi)$. Indeed, we calculate $\displaystyle\pi\left(\theta(e_t)\right)=\theta\left(\pi\kappa_t\{t\}\right)=\theta(\mathrm{sgn}(\pi)\kappa_t\{t\})=\mathrm{sgn}(\pi)\theta(e_t)$ This tells us that $\displaystyle\pi\sum\limits_Tc_TT=\mathrm{sgn}(\pi)\sum\limits_Tc_TT$ Comparing coefficients on the left and right gives us our assertion. As an immediate corollary to this lemma, we conclude that if $T$ has a repetition in some column, then $c_T=0$. Indeed, we can let $\pi$ be the permutation that swaps the places of these two identical entries. Then $T=\pi T$, while the previous result tells us that $c_T=\mathrm{sgn}(\pi)c_T=-c_T$, and so $c_T=0$. 2 Comments » 1. [...] continue our proof that the intertwinors that come from semistandard tableaux span the space of all such [...] Pingback by | February 12, 2011 | Reply 2. [...] we are ready to finish our proof that the intertwinors coming from semistandard generalized tableaux span the space of all [...] Pingback by | February 14, 2011 | Reply
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 17, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8792271614074707, "perplexity_flag": "middle"}
http://www.reference.com/browse/Antisymmetric+relation
# Antisymmetric relation In mathematics, a binary relation R on a set X is antisymmetric if, for all a and b in X, if a is R-related to b and b is R-related to a, then a = b. In mathematical notation, this is: $$\forall a, b \in X,\quad (a\,R\,b \wedge b\,R\,a) \Rightarrow a = b$$ or, equivalently, $$\forall a, b \in X,\quad (a\,R\,b \wedge a \ne b) \Rightarrow \lnot(b\,R\,a).$$ Inequalities are antisymmetric, since for numbers a and b, a ≤ b and b ≤ a if and only if a = b. The same holds for subsets. Note that 'antisymmetric' is not the logical negative of 'symmetric' (whereby aRb implies bRa). (N.B.: Both are properties of relations expressed as universal statements about their members; their logical negations must be existential statements.) Thus, there are relations which are both symmetric and antisymmetric (e.g., the equality relation) and there are relations which are neither symmetric nor antisymmetric (e.g., the preys-on relation on biological species). Antisymmetry is different from asymmetry. According to one definition of asymmetric, anything that fails to be symmetric is asymmetric. Another definition of asymmetric makes asymmetry equivalent to antisymmetry plus irreflexivity. ## Examples • The equality relation = on any given domain. • The usual order relation ≤ on the real numbers. • The subset order ⊆ on the subsets of any given set. • The relation "x is even, y is odd" between a pair (x, y) of integers, which is antisymmetric because x R y and y R x can never hold simultaneously. ## Properties containing antisymmetry • Partial order - an antisymmetric relation that is also transitive and reflexive. • Total order - an antisymmetric relation that is also transitive and total.
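For finite relations the definition can be checked mechanically. A minimal sketch (an illustration added here, not from the original page; relations are represented as sets of ordered pairs):

```python
def is_antisymmetric(relation):
    """True iff: whenever both (a, b) and (b, a) are in the relation, a == b."""
    pairs = set(relation)
    return all(a == b for (a, b) in pairs if (b, a) in pairs)

# <= on {1, 2, 3} is antisymmetric:
leq = {(a, b) for a in range(1, 4) for b in range(1, 4) if a <= b}
print(is_antisymmetric(leq))                # True

# "x is even, y is odd" is (vacuously) antisymmetric:
r = {(x, y) for x in range(6) for y in range(6) if x % 2 == 0 and y % 2 == 1}
print(is_antisymmetric(r))                  # True: x R y and y R x never co-occur

print(is_antisymmetric({(1, 2), (2, 1)}))   # False: 1 R 2 and 2 R 1 but 1 != 2
```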
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9137865900993347, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/18774/triangles-squares-and-discontinuous-complex-functions/18783
Triangles, squares, and discontinuous complex functions Is there some onto function $f:$ $\mathbb{C}$ $\rightarrow$ $\mathbb{C}$ such that for each triangle $T$ (with its interior), $f(T)$ is a square (with interior, too)? I would have the same question for triangles and squares without interior, respectively. - Is it easy to come up with such a function without the onto restriction? – Tony Huynh Mar 19 2010 at 17:32 2 Yes. Just pick a single square and then make sure that the image of every open set is equal to that square -- which can be done in many ways. – gowers Mar 19 2010 at 18:32 2 Answers With interior: yes. Fix a sequence of squares $Q_1\subset Q_2\subset\dots$ whose union is the entire plane. Then arrange a map $g:\mathbb R\to\mathbb R^2$ such that, for every nontrivial segment $[a,b]\subset\mathbb R$, its image is one of the squares $Q_i$. To do that, construct countably many disjoint Cantor sets so that every nontrivial interval contains at least one of them. Then send every Cantor set $K$ bijectively onto $Q_n$, where $n$ is the minimum number such that $K\cap [-n,n]\ne\emptyset$. Send the complements of these Cantor sets to a fixed point inside $Q_1$. Then define $f(x,y)=g(y)$. (This is a detailed version of gowers' answer.) UPDATE Without interior: no. Take any triangle $T$ and consider its image $Q$ with vertices $ABCD$. There is a side $I$ of $T$ whose image has infinitely many points on (at least) two sides of $Q$. If these are opposite sides, say $AB$ and $CD$, the image of any triangle containing $I$ must stay within the strip bounded by the lines $AB$ and $CD$. And if these are two adjacent sides of $Q$, say $AB$ and $AD$, the image of any triangle containing $I$ stays within the quarter of the plane bounded by the rays $AB$ and $AD$. In both cases, the images of the triangles containing $I$ do not cover the plane, hence the map is not onto. - I think this works but haven't checked. I'm pretty sure that for any open set it's easy to find a map that takes any open subset of that set to all of $\mathbb{C}$, or to all of a square, or to whatever single set you feel like. So now for each n choose a map that takes every open subset of the annulus {$z: n < |z| \leq n+1$} to the square that consists of all points with real and imaginary parts less than or equal to n. Now, given any triangle, there will be a maximum n such that it belongs to the nth annulus, and it will intersect that annulus in an open set and therefore map to a square. - Incidentally, it's a nice exercise to find a map from the reals to the reals that takes every value in every open interval. Using that it isn't hard to find a map of the kind I'm claiming exists. – gowers Mar 19 2010 at 18:41 Such a "locally surjective" map $f$ may be obtained as follows. Let $\{1\} \cup \{ b_{t} : t\in\mathbb{R} \}$ be a Hamel basis of $\mathbb{R}$ over $\mathbb{Q}$, and let us define the $\mathbb{Q}$-linear map $f:$ $\mathbb{R}$ $\rightarrow$ $\mathbb{R}$ by $f(1):=0$ and $f\left(b_{t}\right)=t$ $\left(t\in\mathbb{R}\right)$. – Ady Mar 19 2010 at 19:48 I don't want to spoil anyone else's fun, but it's not giving too much away to say that it can also be done without the axiom of choice.
In fact, when I've set this question I've tended to get about as many constructions as people who seriously attempted the question. – gowers Mar 19 2010 at 20:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 50, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9538512825965881, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/109597/solution-of-the-complex-ginzburg-landau-equation?answertab=active
# Solution of the complex Ginzburg–Landau equation Can someone show that it's possible to find a solution of the kind: $$\Phi(x,t)=R(x,t)\exp[i\Psi(x,t)]$$ of the complex Ginzburg–Landau equation: $$\frac{\partial{\Phi}}{\partial{t}}=(1+ia)\frac{\partial^2{\Phi}}{\partial{x}^2}+\Phi-(ib-1)|\Phi|^2\Phi$$ assuming that $R(x,t)$ and $\Psi(x,t)$ are defined as real-valued functions? Thanks - ## 1 Answer Any complex number $z$ can be written as $z = r e^{i\theta}$ where $r = |z| \ge 0$ and $\theta$ is real, so any solution $\Phi(x,t)$ can certainly be written that way. Some particular solutions in this case are, for constant $R>0$, $\Phi \left( x,t \right) =R\,{{\rm e}^{i \left( \sqrt {1+{R}^{2}}\,x- \left( a \left( 1+{R}^{2} \right) +b{R}^{2} \right) t \right) }}$ -
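The dispersion relation behind that plane-wave solution can be checked symbolically. This sketch (added here, not part of the original answer; it uses Python with SymPy) substitutes the constant-amplitude ansatz $\Phi = R\,e^{i(kx+\omega t)}$ into the equation and solves the real and imaginary parts for $k$ and $\omega$:

```python
import sympy as sp

x, t, R, a, b, k, w = sp.symbols('x t R a b k omega', real=True)

Phi = R * sp.exp(sp.I * (k * x + w * t))   # plane-wave ansatz, constant R

# residual of  Phi_t = (1 + i*a) Phi_xx + Phi - (i*b - 1) |Phi|^2 Phi
residual = (sp.diff(Phi, t)
            - (1 + sp.I * a) * sp.diff(Phi, x, 2)
            - Phi
            + (sp.I * b - 1) * R**2 * Phi)

expr = sp.expand(residual / Phi)           # the common exponential factor cancels
re_part, im_part = expr.as_real_imag()
print(sp.solve([re_part, im_part], [k, w]))
# [(-sqrt(R**2 + 1), -a*R**2 - a - b*R**2), (sqrt(R**2 + 1), -a*R**2 - a - b*R**2)]
```

That is, $k=\pm\sqrt{1+R^2}$ and $\omega=-\left(a(1+R^2)+bR^2\right)$, matching the exponent above.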
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9160796403884888, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/265872/topology-cocountable-space-is-not-hausdorff
# Topology: the co-countable space is not Hausdorff Let $T=\{ U\subseteq X:X\setminus U\textrm{ is countable}\}\cup \{\emptyset\}$. Then this is known as the co-countable topology. Clearly, the real line with the co-countable topology is not Hausdorff. For: if $T$ were Hausdorff, any pair of distinct points could be separated by two disjoint nonempty open sets, say $A$ and $B$. Then $X\setminus (A \cap B)=X\setminus\emptyset=X$. But $X\setminus(A\cap B)=(X\setminus A)\cup(X\setminus B)$ is countable, being a union of two countable sets, while $X$ is uncountable, which is a contradiction. Can I use the same argument to prove the result if $X$ is the complex plane or a Euclidean space? - You need to add the empty set to $T$, otherwise it's not a topology on $\Bbb R$. This also makes your argument not work even on $\Bbb R$. – nonpop Dec 27 '12 at 11:49 1 I've added $\{\emptyset\}$ to your $T$. As nonpop mentioned, it is essential, but this also makes it necessary to mention that $A$ and $B$ are both nonempty (which is obvious, but should be mentioned in such a basic exercise anyway). Also, to write tex formulas, encase them in dollar signs ($), like usual. – tomasz Dec 27 '12 at 12:05 ## 1 Answer All that matters for topological properties of the co-countable topology is the cardinality of the underlying set. (It is very easy to show that if $X$ and $Y$ are co-countable topological spaces and $|X| = |Y|$, then $X \cong Y$.) So, to answer your question: yes, you can use the same argument for these sets, since they all have the same cardinality as $\mathbb{R}$. But, even more, if $X$ is any uncountable set, then your argument shows that the co-countable topology on $X$ is not Hausdorff (you never used anything particular about the cardinality of $\mathbb{R}$ in your argument, except that it is uncountable). - @Arthur, thank you very much! – ccc Dec 27 '12 at 12:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9275789260864258, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/239566/subset-of-a-finite-set-is-finite
# Subset of a finite set is finite We define $A$ to be a finite set if there is a bijection between $A$ and a set of the form $\{0,\ldots,n-1\}$ for some $n\in\mathbb N$. How can we prove that a subset of a finite set is finite? It is of course sufficient to show that for a subset of $\{0,\ldots,n-1\}$. But how do I do that? - 3 I wrote this answer to a now-deleted question. I realized that I don't recall many proofs of the pigeonhole principle (anywhere) which are longer than "obviously true.", so I decided to post the question with an answer. – Asaf Karagila Nov 18 '12 at 1:40 2 I welcome other people to write answers as well. – Asaf Karagila Nov 18 '12 at 1:58 ## 2 Answers The proof is essentially the pigeonhole principle, and it is proved by induction. Let us denote $[n]=\{0,\ldots,n-1\}$. What we want to prove is in two parts: 1. If $A\subseteq[n]$ then either $A=[n]$ or there is a bijection between $A$ and $[m]$ for some $m<n$. 2. If $m<n$ then there is no bijection from $[n]$ into $[m]$. Equivalently, we want to prove that if $f\colon[n]\to[n]$ is injective then it is surjective. Also, we may want to prove that if $m<n$ and $f\colon[n]\to[m]$ then $f$ is not injective, and if $f\colon[m]\to[n]$ then $f$ is not surjective. The first part is not very difficult: we define by induction $f(0)=\min A$; and $f(k+1)=\min A\setminus\{f(0),\ldots,f(k)\}$. Either $f$ is a bijection between $A$ and $[n]$, or the induction had to stop before and $f$ is a bijection between $[m]$ and $A$ for some $m<n$. Note that this is not begging the question because $A$ is a set of natural numbers, and it is a subset of $[n]$ so it cannot contain more elements than $[n]$ (more in the actual sense, not in the sense of cardinality). The second part is tricky because it seems so obvious. This is where the pigeonhole principle is actually proved. We prove this part by induction. For $n=0$ this is obvious because $[0]=\varnothing$. Assume that for $[n]$ it is true that if $f\colon[n]\to[n]$ is injective then it is surjective. We want to show this is also true for $[n+1]$. Let $f\colon[n+1]\to[n+1]$ be an injective function. There are two cases: 1. If $f(k)\neq n$ for all $k<n$ then restricting $f$ to $[n]$ is an injective function from $[n]$ into $[n]$. By the induction hypothesis we have to have that the restriction of $f$ to $[n]$ is surjective, therefore $f(n)\notin[n]$ (otherwise $f$ is not injective) and therefore $f(n)=n$ and so $f$ is surjective as well. 2. If $f(k)=n$ for some $k<n$, by injectivity this $k$ is unique. We define $g$ as follows: $$g(x)=\begin{cases} f(n) & x=k\\ n & x=n\\ f(x) & x\neq k,n\end{cases}$$ It follows that $g$ is also injective, and for all $k< n$ we have $g(k)\neq n$ (because $g(n)=n$ and $g$ is injective). Therefore by the previous case we know that $g$ is surjective. It follows that $f$ is also surjective, as wanted. A word on the axiom of choice and finiteness. The above proof shows that finite sets are Dedekind-finite. There are other ways of defining finiteness, all of which are true for finite sets, but may also be true for infinite sets. For example "every surjection is a bijection" might fail for infinite Dedekind-finite sets; or "every linearly ordered partition is finite"; etc. Assuming the axiom of choice, or actually the axiom of countable choice, is enough to conclude, however, that all Dedekind-finite sets are finite. Therefore the above forms of finiteness are equivalent once more.
(The reverse implication is false, by the way: it is consistent that the axiom of countable choice fails, but every Dedekind-finite set is finite.) - 1 There is another proof in my mind. Am I allowed to write it here Sir? – Babak S. Nov 18 '12 at 2:07 @Babak: See my second comment on the question. :-) – Asaf Karagila Nov 18 '12 at 2:09 Technically, the question seems to ask only for 1. – Michael Greinecker Nov 18 '12 at 2:17 @Michael: True, but it is simpler to prove this using the second point. At least if you want a full proof. :-) – Asaf Karagila Nov 18 '12 at 2:19 Please have a look at it, I don't want it to be a duplicate. I use the fact that if $A$ is finite and $B\subset A$ is a proper subset, then $B$ is finite as well. We can do this by induction on $|A|$. If $|A|=0$ then clearly we have $A=\emptyset$, and since the empty set has no proper subset our claim is true. Now, assume that the claim is true for all finite sets with $|A|=n$, and we want to show it for $|A|=n+1, \; B\subset A$. We know there is an element $\alpha\in A$ such that $\alpha\notin B$. – Babak S. Nov 18 '12 at 2:48 Here is a slightly different proof of the pigeonhole principle: Suppose $f:[n]\to[n]$ is an injection that is not surjective, for $n>0$. We can assume without loss of generality that $n-1$ is not in the range of $f$. For if it is, we map the element that is mapped to $n-1$ to some element not in the range instead, and get a new injection that is not surjective with $n-1$ not in the range. We now show that $f|[n-1]$ is injective, but not surjective. Trivially, the restriction of an injection is an injection. But $f(n-1)\in[n-1]$ is not in the range of $f|[n-1]$. - Note that this argument also uses induction. – Asaf Karagila Nov 18 '12 at 2:20 @Asaf Of course, the natural numbers are essentially defined by an inductive property. – Michael Greinecker Nov 18 '12 at 2:22 Yes, I just made this remark because the induction is not very apparent in your proof, and one could think that the proof is not inductive. :-) – Asaf Karagila Nov 19 '12 at 0:11
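As a brute-force sanity check of the pigeonhole step for small $n$ (an illustration added here, of course not a substitute for the induction), one can enumerate all functions $[n]\to[n]$ and confirm that every injection is a surjection:

```python
from itertools import product

for n in range(1, 6):
    for f in product(range(n), repeat=n):      # all functions [n] -> [n]
        if len(set(f)) == n:                   # f is injective
            assert set(f) == set(range(n))     # ... hence surjective
print("checked: every injection [n] -> [n] is onto, for n = 1..5")
```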
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 79, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9591405987739563, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/266037/pullback-of-an-empty-family?answertab=active
# Pullback of an empty family As I understand it, in the category of sets, there is no morphism $\{1\}\rightarrow\emptyset$. On the other hand, is one permitted to say sentences like the following? "Consider the empty family $(\phi_\alpha)_\alpha$ of morphisms, where $\phi_\alpha:\{1\}\rightarrow\emptyset$." For example, can one say, "Find the pullback of the empty family $(\phi_\alpha)_\alpha$ of morphisms, where $\phi_\alpha:\{1\}\rightarrow\emptyset$." - ## 1 Answer Sure. There's still a perfectly well-defined universal property (it's just vacuous) and a perfectly well-defined universal object satisfying it (exercise). -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9017367959022522, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/116045/basic-question-about-affine-group-schemes
## Basic question about affine group schemes I've been reading Waterhouse's book "Introduction to affine group schemes", in part to help prepare myself for an (oral) advanced topic exam in algebraic geometry. There is one exercise in chapter 1 that has been giving me trouble. Let $G$ be an affine group scheme with associated Hopf algebra $A$. The exercise says that I should prove the following Hopf-algebraic fact about $A$ by translating it to a basic fact about the group theory of $G$ : "The map $A \otimes A \rightarrow A \otimes A$ sending $a \otimes b$ to $(a \otimes 1)(\Delta(b))$ is an algebra isomorphism". The other parts of the exercise give Hopf-algebraic facts corresponding to really basic group theory facts, like $(x^{-1})^{-1} = x$ and $(xy)^{-1} = y^{-1} x^{-1}$ and $1^{-1} = 1$. However, I can't figure out which group-theoretic fact the above corresponds to. It almost seems like it is saying that there is some automorphism of the group corresponding to the above Hopf-algebra isomorphism; however, the only group automorphisms I know that exist in general are the inner ones, and those don't seem to do the job. - ## 2 Answers You are confusing algebra isomorphisms with Hopf algebra isomorphisms. The map $A\otimes A \to A\otimes A$ given by $a\otimes b\mapsto \left(a\otimes 1\right)\left(\Delta\left(b\right)\right)$ is an algebra isomorphism but not a Hopf algebra isomorphism in general. So it corresponds not to a group automorphism of $G\times G$, but to an automorphism of the affine scheme $G\times G$. This automorphism is the one that sends $\left(x,y\right)$ to $\left(x,xy\right)$ in terms of points. - Consider the map of groups $G\times G \to G\times G$, $(g_1,g_2) \mapsto (g_1,g_1\cdot g_2)$. Clearly, it is an isomorphism, hence induces an isomorphism of the corresponding Hopf algebras, which is given by precisely the map you are asking about. - Thank you very much! – Catherine Dec 11 at 5:45 It's not a map of groups, though, unless you assume $G$ to be abelian. – darij grinberg Dec 11 at 5:46 @darij grinberg : Yes, I figured that out an instant ago, so I switched the answer to yours. – Catherine Dec 11 at 5:47 Of course it is not an isomorphism of groups, but still it gives an isomorphism of algebras. – Sasha Dec 11 at 5:56
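A concrete instance may help (a worked example added here, not part of either answer): for the additive group $G=\mathbb{G}_a$ over a field $k$, the Hopf algebra is $A=k[t]$ with $\Delta(t)=t\otimes 1+1\otimes t$, and the map in question sends $$t\otimes 1 \mapsto (t\otimes 1)\Delta(1)=t\otimes 1, \qquad 1\otimes t \mapsto (1\otimes 1)\Delta(t) = t\otimes 1 + 1\otimes t.$$ Identifying $A\otimes A\cong k[u,v]$ via $u=t\otimes 1$, $v=1\otimes t$, this is the algebra automorphism $u\mapsto u$, $v\mapsto u+v$: dual to the scheme automorphism $(x,y)\mapsto(x,x+y)$ of $G\times G$, which is exactly the map $(x,y)\mapsto(x,xy)$ described above, written additively.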
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9267320036888123, "perplexity_flag": "head"}
http://mathoverflow.net/questions/27519/other-examples-of-composition-of-functions
## other examples of composition of functions Hello MathOverflow, my first question; I apologize for the LaTeX, it works in preview but not in Safari once posted. There are many methods out there that generate a list of unique unit fractions that sum to some rational number. One of the simplest is called the "Splitting Algorithm", which uses the identity $\frac{1}{a}=\frac{1}{a+1}+\frac{1}{a(a+1)}$. Any fraction $\frac{a}{b}$ can be represented as $\sum^a \frac{1}{b}$. The method looks at all the fractions that are duplicates, keeps one of them, and applies the identity again. For example $\frac{2}{3}=\frac{1}{3}+\frac{1}{3}=\frac{1}{3}+\frac{1}{4}+\frac{1}{12}$ Each of the denominators can be looked at as a composition of the functions $f(a)=a+1$ and $g(a)=a(a+1)$. The method works when the compositions $f,g,f\circ g,g\circ f,\cdots$ that arise are never equal; this can fail for fractions like $\frac{5}{2}$. When the method is applied to fractions like this, duplicates will appear, although it has been proved they can be removed by subsequent applications of the method so that no remaining compositions are equal. ## The Question Can you think of any other areas where compositions of functions are used like this? I know that on its own that is too general; I mean uses in lists of sums, products, useful identities, number theory? The best case scenario would be that there are other problems like this and they can be searched for under a common name. Edit: the method produces unique unit fractions; clarified 'equal': when I wrote that I was thinking of cases like $\frac{b-1}{b}$ that may not lead to duplicates. - References would have helped. Beeckmans, L. "The Splitting Algorithm for Egyptian Fractions." J. Number Th. 43, 173-185, 1993. and Eppstein, D. Egypt.ma Mathematica notebook. ics.uci.edu/~eppstein/numth/egypt/egypt.ma – Will Jagy Jun 9 2010 at 21:06 ## 3 Answers Computing a continued fraction representation for a real number x can be seen as a repeated application of two functions. Starting with some real number x in [0,1[, apply $x\mapsto 1/x$, then $x\mapsto x-1$ the correct number of times to come back into [0,1[, then $x\mapsto 1/x$ again, and so on. The theory of Fuchsian groups makes use of these codings to connect geometric properties of hyperbolic spaces to arithmetic properties of real numbers. The continued fraction representation, for example, is related to the geodesics on the modular surface $SL_2(\mathbb{R})/SL_2(\mathbb{Z})$. - Another example: there are 3x3 matrices which, when applied to a vector representing a Pythagorean triple, produce other Pythagorean triples. I think it is even a way to produce all primitive Pythagorean triples from (3,4,5). In general, you are looking for a finite number of operations which produce through composition a number of objects, and it seems that you want no overlap, i.e. if h and k are two composition series for which h(obj)=k(obj) for some initial object obj, then h=k as composition series, or in other words they are built up the same way. Universal algebra has some means for the study of the generation of objects through operation composition.
In particular, the operations form a clone (a semigroup if you look at just the operations which take one argument) which, given the uniqueness requirement above, is relatively free on the set of generators, as there will be no nontrivial equations satisfied by the compositions. There are other areas and examples as well, but until the question becomes more specific, I will stop here. - The second paragraph is essentially what I am looking for: objects generated from compositions and no overlap. I gave the unit fraction example because I had no other way to get across what I meant. I would be interested in the other examples and areas, preferably with a number theory or combinatorics flavor, but anything is welcome. – mmm Jun 9 2010 at 21:18 To clarify my example, there are 3 matrices U, M and D, combinations of which applied to (3,4,5) generate all Pythagorean triples, without overlap. I think Roberts' calligraphic number theory book has the details. Gerhard "Ask Me About System Design" Paseman, 2010.06.10 – Gerhard Paseman Jun 10 2010 at 18:25 I'm not sure what you mean by "$f,g,f\circ g,g\circ f,\cdots$ are never equal" - if you use this method to decompose 11/3, you'll see that $g(3)=f^9(3)$, for example. (I think you also need to mention that the purpose of the method is to "generate a list of distinct unit fractions".) The simplest related example that comes to mind is the study of Collatz-like functions, where in a sense the question of interest is precisely when two different compositions are equal. Hugo - 1 I think he just means that no two of that list of functions are equal (as functions). In other words, that the monoid map $F[x, y] \to \hom(\mathbb{N}, \mathbb{N})$ from the free monoid on two generators to the monoid of endofunctions on $\mathbb{N}$, sending $x$ to $f$ and $y$ to $g$, is an injection. – Todd Trimble Jun 9 2010 at 11:33
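Since the question's splitting algorithm is completely explicit, here is a minimal runnable sketch of it (added for illustration; the function name and the round-by-round duplicate handling are my own choices, mirroring the description in the question):

```python
from collections import Counter
from fractions import Fraction

def split(a, b, max_rounds=50):
    """Splitting algorithm sketch: start from a copies of 1/b and apply
    1/d = 1/(d+1) + 1/(d*(d+1)) to all but one copy of each duplicate."""
    denoms = [b] * a
    for _ in range(max_rounds):
        counts = Counter(denoms)
        if all(c == 1 for c in counts.values()):
            return sorted(denoms)            # all unit fractions now distinct
        denoms = []
        for d, c in counts.items():
            denoms.append(d)                 # keep one copy of 1/d
            for _ in range(c - 1):           # split the other copies
                denoms.extend([d + 1, d * (d + 1)])
    raise RuntimeError("did not settle within max_rounds")

ds = split(2, 3)
print(ds)                                    # [3, 4, 12]
assert sum(Fraction(1, d) for d in ds) == Fraction(2, 3)
```

For $\frac{2}{3}$ this reproduces the example in the question: $\frac{1}{3}+\frac{1}{4}+\frac{1}{12}$.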
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9419069886207581, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/248257/squares-in-a-triangle
# Squares in a triangle? I've got some trouble... IJKL is a square and B, I, J, C are aligned (alternatively, the segment |IJ| is confounded with |BC|, i.e., IJ lies along BC). h is the height of acute $\triangle$ABC from A to side BC. C1 is the red square, C2 is the green square, and C3 is the blue square (in the linked picture). a = |BC|, b = |CA|, c = |AB|. 1. Express the side length of IJKL using a and h. 2. Suppose a < b < c. Order C1, C2, C3 from the largest. 3. Find the triangles ABC (of area S) for which the square IJKL has the largest area. Thank you in advance. - 3 What is this?? – DonAntonio Nov 30 '12 at 20:23 You may want to upload the picture somewhere else, because I have to register to Math Help Forum to be able to see the picture to which you are linking. – anonymous Nov 30 '12 at 20:23 Can you see the picture? – Dan Nov 30 '12 at 20:38 @Dan, I separated the statements as you did in your post (use two "enters" to separate lines, or when you want to begin another line, use <br>). I assumed that, e.g., [BC] means the length |BC|? I'm not clear what you mean by [IJ] is confounded with [BC]. Or do you mean that |IJ| is proportional to |BC|, and varies as |BC| varies? – amWhy Nov 30 '12 at 20:49 1 In fact, I want to express the length of the sides of IJKL using a and h. Did you understand? – Dan Nov 30 '12 at 21:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 36, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9453961253166199, "perplexity_flag": "head"}
http://nrich.maths.org/5772/solution
### Doodles A 'doodle' is a closed intersecting curve drawn without taking pencil from paper. Only two lines cross at each intersection or vertex (never 3); that is, the vertex points must be 'double points', not 'triple points'. Number the vertex points in any order. Starting at any point on the doodle, trace it until you get back to where you started. Write down the numbers of the vertices as you pass through them. So you have a [not necessarily unique] list of numbers for each doodle. Prove that 1) each vertex number in a list occurs twice [easy!]; 2) between each pair of vertex numbers in a list there are an even number of other numbers [hard!]. ### Russian Cubes How many different cubes can be painted with three blue faces and three red faces? A boy (using blue) and a girl (using red) paint the faces of a cube in turn so that the six faces are painted in order 'blue then red then blue then red then blue then red'. Having finished one cube, they begin to paint the next one. Prove that the girl can choose the faces she paints so as to make the second cube the same as the first. ### N000ughty Thoughts Factorial one hundred (written 100!) has 24 noughts when written in full, and 1000! has 249 noughts. Convince yourself that the above is true. Perhaps your methodology will help you find the number of noughts in 10 000! and 100 000! or even 1 000 000! # Weekly Problem 27 - 2008 ##### Stage: 3 and 4 Challenge Level: Firstly, there are $12$ unit squares which contain an even number. Every $2\times2$ square in the diagram has entries which consist of two odd numbers and two even numbers and hence has an even total. There are $16$ of these. Each $3\times3$ square in the diagram, however, has entries which consist of five odd numbers and four even numbers (giving an odd total), or four odd numbers and five even numbers (giving an even total). There are $4$ of the latter: those with $8$, $12$, $14$ or $18$ in the centre. Every $4\times4$ square in the diagram has entries which consist of eight odd numbers and eight even numbers and hence has an even total. There are $4$ of these. Finally, the full $5\times5$ square contains $13$ odd numbers and $12$ even numbers, giving an odd total. So the required number is $12+16+4+4$, that is $36$. This problem is taken from the UKMT Mathematical Challenges.
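The grid itself did not survive extraction, but the solution's remark that the even-total $3\times3$ squares are centred at $8$, $12$, $14$ and $18$ is consistent with the numbers $1$ to $25$ written in reading order, so the count can be checked by brute force under that assumption:

```python
# Assumes the diagram is 1..25 in reading order (the grid is missing from
# the extracted page; this arrangement matches the stated 3x3 centres).
grid = [[5 * r + c + 1 for c in range(5)] for r in range(5)]

count = 0
for k in range(1, 6):                          # square sizes 1x1 up to 5x5
    for r in range(6 - k):
        for c in range(6 - k):
            total = sum(grid[r + i][c + j] for i in range(k) for j in range(k))
            if total % 2 == 0:
                count += 1
print(count)                                   # 36 = 12 + 16 + 4 + 4
```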
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9388008713722229, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/285248/whats-the-difference-between-rationals-and-irrationals-topologically/285317
# What's the difference between rationals and irrationals - topologically? I know that the sets of rational and irrational numbers are quite different. In measure, almost no real number is rational, and of course, $\mathrm{card}(\mathbb Q) < \mathrm{card}(\mathbb R \setminus \mathbb Q)$ tells us that there are indeed many "more" irrationals than rational numbers. On the other side, we can observe the following: • between every two different rationals, there are infinitely many irrationals • between every two different irrationals, there are infinitely many rationals • both sets are dense in $\mathbb R$, i.e. every real number can be written as a limit of a sequence of both rationals or irrationals • both are disconnected, neither open nor closed • ... So I wonder, is there any way to distinguish $\mathbb Q$ and $\mathbb R \setminus \mathbb Q$ from a topological perspective as subspaces of $\mathbb R$? Is there any way to explain why the one set is so much bigger by looking at the topology? In what regard are they different? - ## 2 Answers • The irrational numbers are a Baire space (and also the Baire space), and they are completely metrizable. This means that there is a complete metric space which is homeomorphic to the irrational numbers with the subspace topology. The rational numbers, on the other hand, are not a Baire space and they are not completely metrizable, which means they are not homeomorphic to any complete metric space. • From a "local" (Borel) perspective the irrationals are a $G_\delta$ set which is not $F_\sigma$, and the rationals (consequently) are an $F_\sigma$ set which is not $G_\delta$. This means that the irrationals are the intersection of countably many open sets, but not the union of countably many closed sets. I should add that being $G_\delta$ is sometimes denoted as $\bf\Pi^0_2$, and being $F_\sigma$ can be denoted as $\bf\Sigma^0_2$. It can be shown that $G_\delta$ subsets of a complete metric space (like $\mathbb R$) are exactly those subsets which are completely metrizable. On the other hand, having no isolated points and being completely metrizable means that you're a Baire space (and therefore the rationals are not such a space). - 1 – kahen Jan 23 at 19:18 That's an interesting difference... – rschwieb Jan 23 at 19:19 @kahen: You are correct. I tried to improve the answer so it won't sound as if all Baire spaces are completely metrizable. – Asaf Karagila Jan 23 at 20:09 1 Thanks very much, this was the kind of characterization I was hoping for. I especially like the completely-metrizable property, because it doesn't mention countability/uncountability in the first place. – Dario Jan 23 at 22:07 Both spaces have classical characterizations: the rationals $\mathbb{Q}$ are the (up to homeomorphism) unique countable metric space without isolated points. The irrationals are the (up to homeomorphism) unique separable metric space that is completely metrizable, zero-dimensional (it has a base of clopen sets) and nowhere locally compact (no point has a compact neighbourhood). All of these properties $\mathbb{Q}$ also has, except being completely metrizable (because it is not a Baire space). -
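To make the $F_\sigma$/$G_\delta$ distinction concrete, here is the easy half of the computation (standard, and implicit in the answers above): $$\mathbb Q=\bigcup_{q\in\mathbb Q}\{q\}$$ is a countable union of closed sets, hence $F_\sigma$, while $$\mathbb R\setminus\mathbb Q=\bigcap_{q\in\mathbb Q}\big(\mathbb R\setminus\{q\}\big)$$ is a countable intersection of open sets, hence $G_\delta$. The nontrivial half, that neither set also has the other property, is exactly where the Baire category theorem enters.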
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9645330309867859, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-statistics/77394-please-please-help-statistic-problem-print.html
# PLEASE PLEASE help with this statistics problem! • March 7th 2009, 01:52 PM Kalyn Well to calculate, for example, the odds of 9 people catching it, you multiply $\binom{17}{9}(0.4)^9 (0.6)^{8}$. Do you recognise this at all? It would take a while to calculate this for 11, 12, 13, 14, 15, 16 and 17 people catching. There are cumulative binomial probability tables with the data already calculated.
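The full problem statement is missing from this extract, but the reply implies a binomial model with $n=17$ trials and success probability $p=0.4$. Under that reading, a few lines of Python replace the cumulative tables (the tail cutoff of 11 is taken from the reply's list):

```python
from math import comb

n, p = 17, 0.4                    # inferred from the reply above

def pmf(k):
    """P(exactly k of the 17 catch it) = C(17, k) * p^k * (1-p)^(17-k)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

print(pmf(9))                                  # C(17,9) * 0.4^9 * 0.6^8
print(sum(pmf(k) for k in range(11, n + 1)))   # P(11 or more catch it)
```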
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9081032276153564, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/21887/i-think-there-is-a-mistake-in-solutions-to-this-problem-plus-and-minus-sign
# I think there is a mistake in the solutions to this problem: plus and minus sign! I pretty much had what the solutions had, but we disagreed on one thing: a sensitive minus sign. $\sum W = \Delta K$ $\int_0^{-s} - ks^2 ds = \frac{ks^3}{3} = \frac{1}{2}m(v^2-4^2)$ Rearranging, I get $v^2 = 4^2 - \frac{2k(-0.2)^3}{3m} = 19.2$ So v = 4.38 m/s. But the key did $v^2 = 4^2 - \frac{2k(0.2)^3}{3m} = 12.8$ where they let $\int_0^{s} - ks^2 ds = \frac{ks^3}{3}$ EDIT Apparently I am stupid and I made a question that didn't even need answering... I had this on paper and I typed up the wrong thing, that's why I was confused. $\sum W = -\int_{-s}^{0} ks^2 ds = \frac{1}{2}m(v^2-4^2)$ Sorry everyone - You've already taken into account that the force is against the direction of displacement in the minus sign inside the integral. Your $ds$ itself is (explicitly) negative, so there is no need to integrate to $-s$. – Manishearth♦ Mar 5 '12 at 3:50 If the spring were linear, F = -kx, I still would've needed to take care of the additional minus sign. – jak Mar 5 '12 at 3:51 1 Usually in such problems I find it better to qualitatively analyse whether the velocity increases or decreases first and then explicitly tack the sign onto $\Delta W$ – Manishearth♦ Mar 5 '12 at 3:52 Oh I see what you mean. I guess it would make sense for the velocity to drop since it had more kinetic energy in the beginning. But this conflicts with what I thought I knew before. – jak Mar 5 '12 at 3:56 It's a common confusion. I have to go now, but I'll try to post an answer later explaining the multitude of negatives that can crop up here. – Manishearth♦ Mar 5 '12 at 4:07 ## 1 Answer There's a multitude of negatives here: Firstly, the force would be more correctly written as $-ks^2\hat{s}$, which allows us to analyse it like a normal spring. $F=ks^2$ on its own becomes unidirectional, which can lead to confusion. I think this is your main issue here. Now, $W=\int \vec{F}\cdot d\vec{s}$; try not to get confused between work done by internal and external forces. Here, let's look at the work done by the external (spring) force. Now, since work is a path integral, our $d\vec{s}$ and thus our $\vec{s}$ must be in the direction of movement. If not, we get an extra negative. For convenience, we'll try to avoid that extra negative and take $\vec{s}$ as positive-right. Now, since the final displacement is also right, our upper limit becomes positive. So, the only negative involved is the one in $-ks^2\hat{s}$, which bubbles out of the dot product. - Doesn't make sense still. $W = \int \vec{F} \cdot d\vec{s} = \int -ks^2 \hat{s} \cdot d\vec{s}$. Our final displacement is to the left, not to the right. So I don't understand why you would still say the upper limit is positive. – jak Mar 6 '12 at 2:49 @jak I said that work is a line integral. We integrate it in the direction that we are moving. As such, we move to the left, so $\hat{s},\vec{s},\mathrm{d}s,s$ are all positive-left. – Manishearth♦ Mar 6 '12 at 2:53 @jak or, you can define $ds'=-ds$ and use it in the integral, where s is now positive-right. – Manishearth♦ Mar 6 '12 at 3:41 Oh okay, I see what you mean about the line integral now. That argument also makes sense (and is consistent with a linear spring too). Not sure what your ds' meant. But how exactly are you supposed to know that when you tackle the problem the first time without the solutions, if you were to use the method I chose before? – jak Mar 6 '12 at 6:20 Oh you know what?
I actually screwed my LaTeX in the beginning of the problem, that's why I was confused. I actually did this right the 1st time lol... I'll vote you up. I edited my post. This was so stupid of me... – jak Mar 6 '12 at 6:27
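The sign bookkeeping in this thread is easy to check symbolically. The sketch below is only illustrative: the problem's $k$ and $m$ are not given in the extract, so it assumes $k/m = 600$, the ratio implied by the key's answer $v^2 = 12.8$. Integrating the spring force along the direction of motion then reproduces that value.

```python
import sympy as sp

s = sp.symbols('s', positive=True)
k_over_m = 600                     # assumed: the ratio implied by v^2 = 12.8

# Work per unit mass done by the spring force -k*s**2 over the 0.2 m of
# travel, integrated along the direction of motion (where the sign lives).
W_per_m = sp.integrate(-k_over_m * s**2, (s, 0, sp.Rational(1, 5)))

v_squared = 4**2 + 2 * W_per_m     # work-energy theorem: W = m(v^2 - v0^2)/2
print(v_squared)                   # 64/5, i.e. 12.8
```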
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9642215967178345, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Magma_(algebra)
# Magma (algebra) In abstract algebra, a magma (or groupoid; not to be confused with groupoids in category theory) is a basic kind of algebraic structure. Specifically, a magma consists of a set $M$ equipped with a single binary operation $M \times M \rightarrow M$. A binary operation is closed by definition, but no other axioms are imposed on the operation. The term magma for this kind of structure was introduced by Nicolas Bourbaki. The term groupoid is an older, but still commonly used alternative which was introduced by Øystein Ore. ## Definition A magma is a set $M$ matched with an operation "$\cdot$" that sends any two elements $a,b \in M$ to another element $a \cdot b$. The symbol "$\cdot$" is a general placeholder for a properly defined operation. To qualify as a magma, the set and operation $(M,\cdot)$ must satisfy the following requirement (known as the magma axiom): For all $a$, $b$ in $M$, the result of the operation $a \cdot b$ is also in $M$. And in mathematical notation: $\forall a,b \in M : a \cdot b \in M$ ### Etymology In French, the word "magma" has multiple common meanings, one of them being "jumble". It is likely that the French Bourbaki group referred to sets with well-defined binary operations as magmas with the "jumble" definition in mind. ## Types of magmas Magmas are not often studied as such; instead there are several different kinds of magmas, depending on what axioms one might require of the operation. Commonly studied types of magmas include • quasigroups—nonempty magmas where division is always possible; • loops—quasigroups with identity elements; • semigroups—magmas where the operation is associative; • semilattices—semigroups where the operation is commutative and idempotent; • monoids—semigroups with identity elements; • groups—monoids with inverse elements, or equivalently, associative loops or associative quasigroups; • abelian groups—groups where the operation is commutative. Note that both divisibility and invertibility imply the cancellation property. ## Morphism of magmas A morphism of magmas is a function $f\colon M\to N$ mapping magma $M$ to magma $N$, that preserves the binary operation: $f(x \; *_M \;y) = f(x) \; *_N\; f(y)$ where $*_M$ and $*_N$ denote the binary operation on $M$ and $N$ respectively. ## Combinatorics and parentheses For the general, non-associative case, the magma operation may be repeatedly iterated. To denote pairings, parentheses are used. The resulting string consists of symbols denoting elements of the magma, and balanced sets of parentheses. The set of all possible strings of balanced parentheses is called the Dyck language. The total number of different ways of writing $n$ applications of the magma operator is given by the Catalan number $C_n$. Thus, for example, $C_2=2$, which is just the statement that $(ab)c$ and $a(bc)$ are the only two ways of pairing three elements of a magma with two operations. Less trivially, $C_3=5$: $((ab)c)d$, $(ab)(cd)$, $(a(bc))d$, $a((bc)d)$, and $a(b(cd))$. A shorthand is often used to reduce the number of parentheses. This is accomplished by using juxtaposition in place of the operation. For example, if the magma operation is $*$, then $xy * z$ abbreviates $(x * y) * z$. Further abbreviations are possible by inserting spaces, for example by writing $xy * z * wv$ in place of $((x * y) * z) * (w * v)$.
Of course, for more complex expressions the use of parentheses turns out to be inevitable. A way to avoid the use of parentheses completely is prefix notation. ## Free magma A free magma $M_X$ on a set $X$ is the "most general possible" magma generated by the set $X$ (that is, there are no relations or axioms imposed on the generators; see free object). It can be described, in terms familiar in computer science, as the magma of binary trees with leaves labeled by elements of $X$. The operation is that of joining trees at the root. It therefore has a foundational role in syntax. A free magma has the universal property such that, if $f\colon X\to N$ is a function from the set $X$ to any magma $N$, then there is a unique extension of $f$ to a morphism of magmas $f^\prime$ $f^\prime\colon M_X \to N.$ ## Classification by properties

| Group-like structure | Totality* | Associativity | Identity | Inverses | Commutativity |
|----------------------|-----------|---------------|----------|----------|---------------|
| Magma                | Yes       | No            | No       | No       | No            |
| Semigroup            | Yes       | Yes           | No       | No       | No            |
| Monoid               | Yes       | Yes           | Yes      | No       | No            |
| Group                | Yes       | Yes           | Yes      | Yes      | No            |
| Abelian group        | Yes       | Yes           | Yes      | Yes      | Yes           |
| Loop                 | Yes       | No            | Yes      | Yes      | No            |
| Quasigroup           | Yes       | No            | No       | Yes      | No            |
| Groupoid             | No        | Yes           | Yes      | Yes      | No            |
| Category             | No        | Yes           | Yes      | No       | No            |
| Semicategory         | No        | Yes           | No       | No       | No            |

*Closure, which is used in many sources to define group-like structures, is an equivalent axiom to totality, though defined differently. A magma (S, *) is called • unital if it has an identity element, • medial if it satisfies the identity xy * uz = xu * yz (i.e. (x * y) * (u * z) = (x * u) * (y * z) for all x, y, u, z in S), • left semimedial if it satisfies the identity xx * yz = xy * xz, • right semimedial if it satisfies the identity yz * xx = yx * zx, • semimedial if it is both left and right semimedial, • left distributive if it satisfies the identity x * yz = xy * xz, • right distributive if it satisfies the identity yz * x = yx * zx, • autodistributive if it is both left and right distributive, • commutative if it satisfies the identity xy = yx, • idempotent if it satisfies the identity xx = x, • unipotent if it satisfies the identity xx = yy, • zeropotent if it satisfies the identity xx * y = yy * x = xx, • alternative if it satisfies the identities xx * y = x * xy and x * yy = xy * y, • power-associative if the submagma generated by any element is associative, • left-cancellative if for all x, y, and z, xy = xz implies y = z, • right-cancellative if for all x, y, and z, yx = zx implies y = z, • cancellative if it is both right-cancellative and left-cancellative, • a semigroup if it satisfies the identity x * yz = xy * z (associativity), • a semigroup with left zeros if there are elements x for which the identity x = xy holds, • a semigroup with right zeros if there are elements x for which the identity x = yx holds, • a semigroup with zero multiplication or a null semigroup if it satisfies the identity xy = uv, for all x, y, u and v, • a left unar if it satisfies the identity xy = xz, • a right unar if it satisfies the identity yx = zx, • trimedial if any triple of its (not necessarily distinct) elements generates a medial submagma, • entropic if it is a homomorphic image of a medial cancellation magma. If $*$ is instead a partial operation, then S is called a partial magma. See n-ary group.
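As a concrete illustration of the free magma and the Catalan count above, here is a minimal sketch (the class and function names are mine, not from the article) that builds elements as binary trees and enumerates all parenthesizations of four generators:

```python
from dataclasses import dataclass

# An element of the free magma on X is a leaf (a generator) or a pair of
# elements; no relations are imposed, so we simply record the tree shape.
@dataclass(frozen=True)
class Leaf:
    name: str

@dataclass(frozen=True)
class Node:
    left: object
    right: object

def all_products(atoms):
    """Every way to parenthesize a fixed sequence of generators."""
    if len(atoms) == 1:
        return [atoms[0]]
    results = []
    for i in range(1, len(atoms)):          # choose the outermost split point
        for l in all_products(atoms[:i]):
            for r in all_products(atoms[i:]):
                results.append(Node(l, r))  # the free magma operation
    return results

trees = all_products([Leaf(c) for c in "abcd"])
print(len(trees))   # 5 = C_3: ((ab)c)d, (ab)(cd), (a(bc))d, a((bc)d), a(b(cd))
```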
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 49, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8678493499755859, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/6196/explaining-valence-with-quantum-mechanics?answertab=votes
# Explaining valence with quantum mechanics Can anyone give me a quantum mechanical explanation of the theory of valence? (i.e. why atoms bond just enough to have a complete orbital) EDIT: To clarify, I already have an idea of why atoms bond, and the molecular orbital picture makes sense to me. The problem is that valence bond theory provides a simple and fairly accurate description of when a molecule is stable. To make this more concrete: why is CH_4 more stable than CH_3 or CH_5? Looking up the former on Wikipedia, I see a better way of phrasing the question might be: why are free radicals reactive? (Should I change the title?) - It has to do with the Lie algebra of the angular momentum operator for the electrons circling the nuclei. My advice is to pick up any decent book on QM and read the section on angular momentum. – Matt Mar 2 '11 at 7:03 ## 2 Answers All these rules are approximate, and all of them follow from Schrödinger's equation, which offers a much more accurate, quantitative result. Bonds in molecules may arise not just from "valence bonds", which remain localized in atoms, but also from "molecular orbitals", which are delocalized across the whole molecule. The latter kind of bond is more general and the methods to study it are newer. Back to the valence bonds. Energetically, the bonds occur because they allow the pair of atoms to reduce its energy. Just like for individual atoms, pairs of atoms are more stable when a whole orbital is filled, because the other states that could be filled are separated by a gap and they have a higher energy. So whenever it's possible to divide the electrons among the pair of atoms so that both atoms have full shells, then all the electrons with low enough energy have been added, minimizing the energy per electron, and the addition of an extra electron would increase the energy by a much higher amount, which suggests that the previous full-shell solution is at least a local minimum of the energy per electron. Because the interactions between the electrons themselves are complicated, this superficial wording can't really replace a calculation - quantum mechanics for many electrons, which deals with a $3N$-dimensional wave function. In fact, in this picture, even in the Hartree-Fock approximation, many other emergent phenomena occur, such as "orbital hybridization" - the emergence of totally new kinds of orbitals that are appropriate to describe the molecule. - I will try to give a very qualitative explanation of chemical bonds, in particular the so-called covalent bonds. Let's take the example of the hydrogen molecule, where two hydrogen atoms share two electrons. Now, if the two nuclei of the two hydrogen atoms are separated by a distance, there is a potential barrier between them. Classically the electrons cannot cross that barrier, but quantum mechanics allows them to cross it due to the tunneling effect. The electrons therefore wander back and forth continuously through the barrier and each of them spends almost equal times in close proximity to the two nuclei. Now a quantum mechanical calculation will show that the potential energy of this system is much lower than that of the system if they were far apart. There is an optimum distance between the nuclei at which the potential energy of the system is minimum. Naturally the system at that distance will be most stable. This is how a covalent bond forms. In other molecules it may be the case that the two atoms are different.
In that case electrons spend a little more time in the proximity of the nucleus that attracts them more strongly in the most stable (i.e. least potential energy) configuration. There are other kinds of bonds as well. The mathematical treatments for them differ, but each follows from the rules of quantum mechanics, and in all types of bonds the common thing is that the potential energy of the system has to be minimum for the stable configuration. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9449911117553711, "perplexity_flag": "head"}
http://mathhelpforum.com/trigonometry/85710-need-help-trig-equation.html
# Thread: 1. ## Need help with a Trig equation $2\sin\theta \cos\theta-\cos\theta=0$ This is what I did. First I took $\cos\theta$ to the other side, resulting in $2\sin\theta \cos\theta=\cos\theta$ Then I divided by $\cos\theta$, resulting in $2\sin\theta =1$ Then I divided by 2: $\sin\theta = \frac{1}{2}$ 2. Hi Dividing by $\cos\theta$ makes you lose solutions. You must factor. $2\sin\theta \cos\theta-\cos\theta=0$ $\cos\theta (2\sin\theta-1)=0$ leading to $\cos\theta = 0$ or $\sin\theta = \frac{1}{2}$ 3. Originally Posted by daunder $2\sin\theta \cos\theta-\cos\theta=0$ This is what I did. First I took $\cos\theta$ to the other side, resulting in $2\sin\theta \cos\theta=\cos\theta$ Then I divided by $\cos\theta$, resulting in $2\sin\theta =1$ Then I divided by 2: $\sin\theta = \frac{1}{2}$ That would give you some solutions but not all of them. You can get them all by factorising: $\cos(\theta)(2\sin(\theta)-1) = 0$ Then you can solve $\cos(\theta) = 0$ to give $\theta = \frac{\pi}{2} \text { , } \frac{3\pi}{2}$ Then solve $2\sin(\theta)-1 = 0$ $\theta = \frac{\pi}{6} \text { , } \frac{5\pi}{6}$ note: I have worked out the solutions in $0 \leq \theta \leq 2\pi$ 4. Thanks for the quick reply 5. Originally Posted by daunder $2\sin\theta \cos\theta-\cos\theta=0$ This is what I did. First I took $\cos\theta$ to the other side, resulting in $2\sin\theta \cos\theta=\cos\theta$ Then I divided by $\cos\theta$, resulting in $2\sin\theta =1$ Then I divided by 2: $\sin\theta = \frac{1}{2}$ You can't divide by $\cos(\theta)$ as it can be equal to 0. Here is how to do it. Factorise by $\cos\theta$: $\cos \theta (2 \sin \theta-1)=0$ The product $ab=0$ if and only if $a=0$ or $b=0$ ----> $\cos \theta=0$ or $\sin \theta=\frac 12$ For the first one... there are infinitely many solutions ! We know that $\cos \frac \pi 2=0$, but since $\cos(-x)=\cos(x)$, we know that $\cos \left(-\frac \pi 2\right)=0$. Moreover, cos is a $2\pi$-periodic function. Hence $\boxed{\cos \left(\frac \pi 2 +2k\pi\right)=\cos \left(-\frac \pi 2 +2k\pi\right)=0}$ These give all the possible solutions of $\cos\theta=0$ (in general, if $a$ is such that $\cos(a)=b$, then the solutions to $\cos(x)=b$ are of the form $a+2k\pi$ or $-a+2k\pi$, for any integer $k$). For the second one, there are infinitely many solutions too. We know that $\sin\left(\frac \pi 6\right)=\frac 12$, and we know that $\sin(\pi-x)=\sin(x)$. Thus $\sin\left(-\frac \pi 6+\pi\right)=\frac 12$. Again, sin is a $2\pi$-periodic function. Hence $\sin\left(\frac \pi 6+2k\pi\right)=\sin\left(-\frac \pi 6+\pi+2k\pi\right)=\frac 12$ This can be rewritten: $\boxed{\sin\left(\frac \pi 6+2k\pi\right)=\sin\left(-\frac \pi 6+(2k+1)\pi\right)=\frac 12}$ Finally, the solutions are: $S=\left\{\tfrac{\pi}{2}+2k \pi \big| k \in \mathbb{Z}\right\} \cup \left\{-\tfrac{\pi}{2}+2k \pi \big| k \in \mathbb{Z}\right\} \cup \left\{\tfrac{\pi}{6}+2k \pi \big| k \in \mathbb{Z}\right\} \cup \left\{-\tfrac \pi 6+(2k+1)\pi \big| k \in \mathbb{Z}\right\}$ Does it look good to you? Edit : whoops, a bit late
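As a quick check, a computer algebra system finds the same two families; a minimal sketch:

```python
import sympy as sp

theta = sp.symbols('theta')
eq = 2 * sp.sin(theta) * sp.cos(theta) - sp.cos(theta)

# Solving the equation as given keeps the cos(theta) = 0 family that
# dividing through by cos(theta) would silently discard.
sols = sp.solveset(eq, theta, domain=sp.Interval(0, 2 * sp.pi))
print(sols)   # expected: {pi/6, pi/2, 5*pi/6, 3*pi/2}
```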
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 42, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9598522186279297, "perplexity_flag": "middle"}
http://mathhelpforum.com/algebra/129481-simultaneous-equation-help.html
# Thread: 1. ## Simultaneous equation help Okay, I'm not even sure if this is a simultaneous equation, but I don't know how to solve it anyway.. x+y=60 x^2 = 0.25 y^2 Find x and y. It's part of a massive A-level physics question and I've simplified it a lot, so no, I'm not a 12 year old asking for the answers to his maths homework :P 2. Changing the second around a bit, you get the much simpler 4x^2 = y^2 It is now your responsibility to remember that y = 0 is inappropriate. If we KNOW that x and y are greater than zero (0), we get: 2x = y Do we KNOW that x and y are greater than zero (0)? Simultaneous solution is rather simple after that. 3. Yes, x and y are greater than zero. Okay, thanks a lot, that did it and I got the correct answer. But out of interest, how did you go from 4x^2 = y^2 to 2x = y? Is that just a general rule when x and y are greater than 0? 4. Originally Posted by Magma828 Yes, x and y are greater than zero. Okay, thanks a lot, that did it and I got the correct answer. But out of interest, how did you go from 4x^2 = y^2 to 2x = y? Is that just a general rule when x and y are greater than 0? Hi Magma828, TK took the square root of each side of the equation. $4x^2=y^2$ $\sqrt{4x^2}=\sqrt{y^2}$ $2x=y$ 5. ... and that's why we needed to know things were positive. If things can be negative, that square root is not quite as straightforward.
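The same answer drops out of a one-line symbolic solve; a small sketch (declaring the symbols positive encodes the "x and y are greater than zero" fact):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)    # we are told x, y > 0
sol = sp.solve([x + y - 60, x**2 - sp.Rational(1, 4) * y**2], [x, y], dict=True)
print(sol)   # [{x: 20, y: 40}]
```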
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.97181236743927, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/32153/alternative-definition-for-topological-spaces/169206
Alternative definition for topological spaces? I have just started reading topology, so I am a total beginner, but why are topological spaces defined in terms of open sets? I find it hard and unnatural to think about them intuitively. Perhaps the reason is that I can't see them visually. Groups, for example, are related directly to physical rotations and numbers, thus allowing me to see them at work. Is there a similar analogy or definition that could allow me to understand topological spaces more intuitively? - 3 I know this question has been asked here before and also on mathoverflow and I hope somebody can find the relevant posts, but I'd like to mention that metric spaces are much more intuitive and that the axioms of a topological space are abstractions of what happens for metric spaces. – Grumpy Parsnip Apr 10 '11 at 20:56 to add to the general confusion: the most intuitive definition I know is in non-standard analysis, where topology (or rather uniformity) is replaced by an equivalence relation of "being infinitesimally near" – user8268 Apr 10 '11 at 20:58 A good approach is to first study metric spaces, which are special cases of topological spaces where the notion of "continuity" is very similar to the notion you'd have learned in calculus. However, it turns out you don't need a notion of distance to get a "topology" on a space, and many metrics can lead to the same "topology" on a set. – Thomas Andrews Apr 10 '11 at 21:11 In addition, the sheer beauty of the "open set" definition is that it makes the definition of continuity of functions between topological spaces nearly trivial. It also turns out that the definition of continuity using open sets has very interesting and useful meanings when the topology does not come from a metric, for example, when dealing with partial orders. – Thomas Andrews Apr 10 '11 at 21:46 3 – Nate Eldredge Jul 21 '11 at 22:06 4 Answers There is a MathOverflow question about this very issue; this answer is a nice intuitive explanation, though you will probably also find some of the other answers useful. - 1 – joriki Apr 10 '11 at 20:59 @joriki, yes, if anything that is a better resource for the OP given their current level. This question should probably be closed as a duplicate then. – Zev Chonoles♦ Apr 10 '11 at 21:01 It's not an exact duplicate, I think, and your link to MO seems useful, too, so I wouldn't necessarily close this. – joriki Apr 10 '11 at 21:04 Think of the half-open interval $(0,1]$ with the usual open sets (e.g. $(1−ε,1]$ is an open neighborhood of $1$). Then modify the collection of sets considered "open" so that every open neighborhood of $1$ contains some set of the form $(1−ε,1]∪(0,ε)$, i.e. it covers small parts of BOTH ends of the interval. Can you understand that this modification of which sets are considered open also modifies the way in which the space is connected together? - Sir, you have said "it covers small parts of BOTH ends"; can you elaborate a bit more so that I can understand better? I am also suffering from my bad understanding of Topological Vector Spaces :( . Thank you. – Theorem Jun 18 '12 at 13:35 The set $(1-\varepsilon,1]\cup(0,\varepsilon)$ includes a small interval $(1-\varepsilon,1]$ at the right end of the half-open interval $(0,1]$, and also a small interval $(0,\varepsilon)$ at the left end of the half-open interval $(0,1]$.
– Michael Hardy Jun 18 '12 at 14:33 From Wikipedia: In topology and related branches of mathematics, the Kuratowski closure axioms are a set of axioms which can be used to define a topological structure on a set. They are equivalent to the more commonly used open set definition. They were first introduced by Kazimierz Kuratowski, in a slightly different form that applied only to Hausdorff spaces. - In "Quantales and continuity spaces" Flagg develops the notion of a metric space where the distance function takes values in a value quantale. A value quantale is an abstraction of the properties of the poset $[0,\infty]$ needed for 'doing analysis'. It is then shown that every topological space $X$ is metrizable in the sense that there exists a value quantale $V$ (depending on the topology on $X$) such that the topological space $X$ is given by the open balls determined by a metric structure on $X$ with values in $V$. At this level of abstraction it is thus seen that the open sets axiomatization of topology is nothing but the good old notion of a metric space, only taking values in value quantales other than $[0,\infty]$. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9397667646408081, "perplexity_flag": "head"}
http://www.scholarpedia.org/article/Crises
# Crises From Scholarpedia Edward Ott (2006), Scholarpedia, 1(10):1700. Considering a dynamical system with a chaotic attractor, qualitative changes (bifurcations) of such attractors can occur as a system parameter is varied. Very commonly, these changes occur due to the collision of the chaotic attractor with an unstable invariant set, typically an unstable periodic orbit (equivalently, a collision with the stable manifold of the unstable periodic orbit). Such events are called crises (Grebogi et al. 1983; Ott 2002). Here by a collision, we mean that for a system parameter $$p$$ below (or above) a critical crisis value, $$p_c\ ,$$ the attractor does not contain the unstable periodic orbit with which it collides, but that, at $$p=p_c$$ the attractor does contain it (here we define an attractor as the closure of an orbit originating from a typical initial condition in its basin of attraction). Crisis bifurcations of chaotic attractors are extremely common and have been observed in a host of experimental settings (an early example is Ditto et al. 1989). In general, crises result in discontinuous changes in the chaotic attractor, and different types of crises may be distinguished on the basis of the different types of changes that they induce. Three types of crises (Grebogi et al. 1983, 1987; Ott 2002) will be discussed below: • boundary crises, in which a chaotic attractor is suddenly created or destroyed, • interior crises, in which a chaotic attractor experiences a sudden change in size and shape, and • symmetry restoring (or breaking) crises, in which a number of symmetrically disposed chaotic attractors merge (or, inversely, split). Other crisis-type transitions include the transition to phase synchronization of chaos (Rosa et al. 1998) and the merging of chaotic bands following period doubling cascades (Grebogi et al. 1987). ## Boundary Crises In a boundary crisis, as a system parameter $$p$$ varies, the minimum distance between a chaotic attractor and its basin boundary decreases, approaching zero as $$p\rightarrow p_c\ .$$ Since the accessible inner "edge" of the attractor's basin typically coincides with the stable manifold of an unstable invariant set, this is a "collision". Slightly past the crisis value ($$|p-p_c|$$ small), the attractor is replaced by a "chaotic transient". Basically, the attractor becomes "leaky" (hence no longer an attractor). What happens is this. If an initial condition is placed in the region corresponding to the attractor's basin before the crisis, the orbit generated from that initial condition typically approaches the vicinity of the pre-crisis attractor. It then proceeds to "bounce around" on what appears to be the old attractor. In this phase of its evolution, the orbit's motion is very similar to that of a chaotic orbit on the pre-crisis chaotic attractor. However, after some time $$\tau\ ,$$ which depends extremely sensitively on the placement of the original initial condition, the orbit then abruptly starts to move away from the region of the original attractor, proceeding to approach some other previously co-existing attractor (which may be chaotic or not; e.g., it may be an attracting periodic orbit). We call the state of the orbit while it mimics orbits on the pre-crisis attractor a "chaotic transient".
For initial conditions randomly chosen in the region of the pre-crisis attractor's basin, the probability distribution function of the durations $$\tau$$ of chaotic transients is exponential, $P(\tau )\sim \exp (-\tau /\langle\tau\rangle)\ ,$ for large $$\tau\ ,$$ where $$\langle\tau\rangle$$ denotes the characteristic transient lifetime. In the above, we have discussed the boundary crisis for the case where the parameter value is varied in the direction in which the attractor is destroyed. Clearly, one can also view such a crisis for variation of the parameter in the other direction, in which case a chaotic attractor is created from a chaotic transient. Thus boundary crises are a route to chaos, i.e., a mechanism by which chaotic attractors can be created. ## Interior Crises and Crisis Induced Intermittency Figure 1: An illustration of an interior crisis for the HJM map, $$z_{n+1}=a+bz_n \exp(i[\kappa -p(1+|z_n|^2)^{-1}])\ ,$$ where $$z$$ is complex, $$z=x+iy\ ,$$ $$a=0.85\ ,$$ $$b=0.9\ ,$$ $$\kappa =0.4\ ,$$ and the critical value of $$p$$ is $$p_c=7.2688\ldots \ .$$ In the upper panel (a) $$p_c>p =7.26\ ,$$ while in the lower panel (b) $$p$$ slightly exceeds $$p_c\ .$$ Before the crisis (upper panel), the orbit remains confined to a band in $$y$$ (roughly $$y > -0.25$$). Slightly past the crisis (lower panel), we see occasional bursts out of this band (e.g., at times $$n\cong 210, 300, 750$$). If we had continued the plot shown in the lower panel to very long times, we would see that these bursts continue to occur intermittently in time. Moreover, the lengths of the times between successive bursts appear to be random, and a statistical analysis shows that these interburst times are exponentially distributed as in the equation above with a characteristic interburst time $$\langle\tau\rangle\ .$$ Figure 2: Attractor of the HJM map just past its crisis. Many successive orbit points $$(x_n, y_n)$$ are plotted for a single orbit. Figure 2 shows an orbit for a situation just past the crisis in the HJM map (as in the lower panel of Figure 1). In this figure, the system state $$(x_n, y_n)$$ is plotted for many different times $$n$$ for a single orbit on the chaotic attractor. Regions where the density of orbit points is densest are color-coded white, while regions where the orbit density is low are colored light blue. We see that the attractor has a dense core surrounded by a sparse-density halo. The dense core corresponds closely to the pre-crisis attractor, while the halo is the region explored by the orbit during its intermittent bursts. As $$p$$ increases further past its crisis value $$p_c\ ,$$ the orbit spends a progressively larger fraction of its time in the halo ($$\langle\tau\rangle$$ decreases).
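The HJM map is simple enough to iterate directly. The sketch below is illustrative only: the initial condition, transient cutoff and burst threshold are ad-hoc choices of mine, not values from the article. It runs the map slightly past $$p_c$$ and counts iterates that leave the region occupied by the pre-crisis core.

```python
import cmath

def hjm_orbit(p, n_steps, z0=0.1 + 0.1j, a=0.85, b=0.9, kappa=0.4):
    """Iterate the HJM map z -> a + b*z*exp(i*(kappa - p/(1 + |z|^2)))."""
    z, orbit = z0, []
    for _ in range(n_steps):
        z = a + b * z * cmath.exp(1j * (kappa - p / (1 + abs(z) ** 2)))
        orbit.append(z)
    return orbit

p_c = 7.2688
orbit = hjm_orbit(p=p_c + 0.01, n_steps=5000)[100:]   # drop a short transient

# Crude burst detector: flag iterates far below the pre-crisis band
# (threshold chosen by eye, not taken from the text).
bursts = sum(1 for z in orbit if z.imag < -0.5)
print(f"{bursts} of {len(orbit)} iterates lie in the burst region")
```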
## Symmetry Restoring (Breaking) Crises Another type of crisis that leads to crisis induced intermittency is a symmetry restoring (or breaking) crisis [also called a symmetry increasing (or decreasing) bifurcation; Chossat and Golubitsky 1988]. In this type of crisis, at appropriate parameter values, due to a system symmetry, there are several distinct chaotic attractors that transform, one to the other, under a suitable symmetry transformation. No single one of these chaotic attractors, by itself, has the symmetry of the full system. As the crisis is approached, each of the symmetrically disposed attractors moves toward the basin boundary separating its basin from the basins of its symmetric neighbors. Figure 3: An illustration of a symmetry restoring crisis. For $$p<p_c$$ there are two mirror-image chaotic attractors, one centered in the left $$(x<0)$$ potential well, and one in the right potential well $$(x>0)\ .$$ For $$p>p_c$$ the two attractors merge to form one attractor with mirror $$(x\rightarrow -x)$$ symmetry. The figure shows $$x(t)$$ for (a) $$p<p_c$$ for the $$x>0$$ attractor, and (b) $$p>p_c\ ,$$ illustrating intermittent switching between chaotic oscillations in the $$x>0$$ well and the $$x<0$$ well (Ishii et al. 1986). At the crisis, the attractors all simultaneously collide with an unstable periodic orbit on their respective basin boundaries. Past the crisis, this leads to a merging of the pre-crisis, distinct, symmetrically disposed attractors, thus forming a single large attractor which, by itself, is invariant to the symmetry transformation of the global system. Just past the crisis, an orbit on the large, merged attractor spends long epochs on what appears to be one of the pre-crisis attractors, but then abruptly jumps to the state-space region of one of its neighboring pre-crisis attractors, spending another long epoch there, jumping, and so on, ad infinitum. Moreover, the durations of these epochs appear random and are exponentially distributed. Thus this offers another example of crisis induced intermittency, where, in this case, there is an intermittent switching between different asymmetric chaotic epochs, but the infinite time orbit density conforms to the specific symmetry of the particular system. An example is shown in Figure 3 for the case of a sinusoidally forced particle of mass $$m\ ,$$ with friction coefficient $$\nu\ ,$$ moving in a potential $$V(x)\ ,$$ where $$x$$ is the particle location, $$md^2x/dt^2=-\nu dx/dt-dV(x)/dx+p\sin(\omega t)\ ,$$ and $$V(x)$$ is a double-well potential with even symmetry, $$V(x)=V(-x)=\alpha x^4/4-\beta x^2/2$$ [Ishii et al. 1986, Grebogi et al. 1987]. ## The Characteristic Crisis Time Scale In all the cases discussed above, there was a time scale $$\langle \tau \rangle\ .$$ • In the case of a boundary crisis, this time scale characterized the length of a chaotic transient. • In the case of an interior crisis, it characterized the time between bursts. • In the case of a symmetry breaking crisis, it characterized the time spent on a pre-crisis component of the merged attractor. In all these cases $$\langle \tau \rangle \rightarrow \infty$$ as the crisis value is approached $$|p-p_c|\rightarrow 0\ .$$ E.g., in the case of the boundary crisis, the approach of $$\langle \tau \rangle$$ to infinity signifies that the transient chaos becomes permanent; i.e., a chaotic attractor is created. It is of great interest to consider how $$\langle \tau \rangle$$ approaches infinity. For a large class of systems, this approach is as a power law, $\langle \tau \rangle \sim |p-p_c|^{-\gamma}\ ,$ where $$\gamma$$ is the critical crisis exponent. A theory exists for determining $$\gamma$$ in terms of the characteristics of the unstable periodic orbit mediating the crisis (Grebogi et al. 1987; Ott 2002). It should also be noted that there are cases in which the divergence of $$\langle \tau \rangle$$ as $$|p-p_c|\rightarrow 0$$ is not a power law, but is a much stronger divergence, $\langle \tau \rangle \sim \exp [\kappa /|p-p_c|^{1/2}]\ .$ Such behavior has been referred to as a super-persistent chaotic transient (Grebogi et al., 1985). ## References • Chossat P., and Golubitsky, M. (1988) Symmetry Increasing Bifurcation of Chaotic Attractors, Physica D 32, 423. • Ditto W. L. et al.
(1989) Experimental Observation of Crisis-Induced Intermittency and its Critical Exponent, Phys. Rev. Lett. 63, 923. • Grebogi C., Ott E. and Yorke J. A. (1983) Crises: Sudden Changes in Chaotic Attractors and Chaotic Transients, Physica D 7, 181. • Grebogi C., Ott E. and Yorke J. A. (1985) Super-Persistent Chaotic Transients, Ergodic Theor. and Dyn. Sys. 5, 341. • Grebogi C., Ott E., Romeiras F. and Yorke J. A. (1987) Critical Exponents for Crisis Induced Intermittency, Phys. Rev. A 36, 5365. • Hammel S., Jones C.K.R.T. and Moloney J. (1985) Global Dynamical Behavior of the Optical Field in a Ring Cavity, J. Opt. Soc. Am. B 2, 552. • Ishii H., Fujisaka H. and Inoue M. (1986) Breakdown of Chaos Symmetry and Intermittency in the Double-Well Potential System, Phys. Lett. A 116, 257. • Ott E. (2002) Chaos in Dynamical Systems, Section 8.3 (Cambridge University Press, second edition). • Rosa E., Ott E. and Hess M.H. (1998) Transition to Phase Synchronization of Chaos, Phys. Rev. Lett. 80, 1642.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 54, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8867594599723816, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/81798/are-advanced-number-theoretic-techniques-related-to-undecidability
## Are advanced number-theoretic techniques related to undecidability? Is there any evidence for or against the idea that some of the important statements of number theory that have only been proved using infinite sets are in fact undecidable in Peano arithmetic? Most modern number theory is based, not on considering problems directly, but on applying some sort of complicated machinery. In particular, one works with infinite sets with various sorts of structure, like algebraic number fields, modular forms, and the Riemann zeta function. I have sometimes wondered whether, with sufficient work and cleverness, one could reconstruct these proofs entirely in the language of elementary number theory. Theoretically, this should be possible, except for the fact that set theory is a stronger theory than Peano arithmetic or something else that represents elementary number theory. An example of this is the proof of the Ramsey theorem by way of the infinite Ramsey theorem. This proof cannot really be translated into arithmetic since it can also be used to prove the strengthened finite Ramsey theorem, which is undecidable in PA. It is natural to conjecture that other results, like results about the distribution of primes which depend on zeta functions and related functions, may also be undecidable. Is there any hard evidence that this is or is not the case? - 5 The sets needed in most number theory are pretty benign. For instance, the Riemann zeta function is computable, so there is no particular difficulty in defining it in first-order arithmetic. One can work with all the objects you mention (algebraic number fields, modular forms) in $\mathrm{ACA}_0$, which is a conservative extension of PA. A couple of weeks ago, Angus Macintyre had a tutorial at a meeting in Oberwolfach where he explained how one can formalize the basic concepts involved in the proof of Fermat's last theorem in PA. – Emil Jeřábek Nov 24 2011 at 13:00 In the unlikely case that you are asking if there are statements in the language of PA that are decidable in ZFC but not in PA, then the answer is positive, e.g. the consistency of PA (or, if you prefer, the solvability of the Diophantine equation corresponding to that). But I guess you have a more refined definition of what you mean by number theory. – Kaveh Nov 24 2011 at 18:38 See also this question: mathoverflow.net/questions/39452/… – Kaveh Nov 24 2011 at 18:38 My question is whether relatively natural (e.g. short statement, and not contrived to be undecidable) statements, which mathematicians have, in practice, found very difficult to prove, are undecidable in PA. – Will Sawin Nov 28 2011 at 2:02 This can happen with combinatorial statements (Ramsey-like principles, Kruskal's theorem, these kinds of results). It usually does not happen in number theory, where proofs are often complex and ingenious, but relatively low level with respect to proof-theoretic strength. – Emil Jeřábek Nov 28 2011 at 12:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.927139163017273, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/13332/what-local-system-really-is
# What local system really is I know a local system is a locally constant sheaf. But why does a local system on the topological space $X$ correspond to $\tilde{X}\times_G V$, where $G$ is the fundamental group of $X$, $\tilde{X}$ is the universal covering space of $X$, and $V$ is a $G$-module? How do you recover the locally free sheaf from $\tilde{X} \times_G V$? - ## 1 Answer The group $G$ acts properly discontinuously on $\tilde{X}$, and so if $x$ is any point of $\tilde{X}$, it admits a neighbourhood $U$ such that $U g$ is disjoint from $U$ if $g \in G$ is non-trivial. Thus the natural map from $U$ to $\tilde{X}/G = X$ is an embedding. Thus the natural map from $U \times V$ to $\tilde{X}\times_G V$ is also an embedding, and so $\tilde{X}\times_G V$ is locally constant (i.e. locally a product). More detailed remarks: • We should equip $V$ with its discrete topology. • The object $\tilde{X}\times_G V$ is not itself actually a sheaf, but is rather the espace étalé of a sheaf. To get the actual sheaf we consider the natural projection $\tilde{X}\times_G V \to \tilde{X}/G = X$, and form the associated sheaf of sections. Over the open set $U \hookrightarrow X,$ this restricts to the sheaf of sections of the projection $U\times V \to U$, which is precisely the constant sheaf on $U$ attached to the vector space $V$. (Here is where we see that it is important to equip $V$ with the discrete topology.) Thus our original sheaf of sections is locally constant, as claimed. - So do all local systems arise in this way? Is this way more natural than the original one (the usual definition as a sheaf)? – abc Dec 7 '10 at 7:35 @abc: Yes, all local systems arise in this way, and this is a very natural way to think about them. Very often people will speak of the local system corresponding to a $G$-module $V$, and this is what they mean. – Matt E Dec 7 '10 at 8:58
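A standard concrete instance of this construction (not spelled out in the thread, but a direct application of the answer above) is the rank-one local system on the circle: take $X=S^1$, $\tilde X=\mathbb R$, $G=\pi_1(S^1)\cong\mathbb Z$ acting on $\tilde X$ by $x\mapsto x+n$, and $V=\mathbb C$ with $n\cdot v=\lambda^n v$ for some fixed $\lambda\in\mathbb C^\times$. Then $$\tilde X\times_G V=(\mathbb R\times\mathbb C)\big/\big((x,v)\sim(x+n,\lambda^n v)\big),$$ and the sheaf of sections of the projection to $S^1$ is the local system with monodromy $\lambda$; taking $\lambda=1$ recovers the constant sheaf $\underline{\mathbb C}$.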
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 32, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9220991730690002, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/92424/deriving-sdes-and-expectation-from-given-pde/92722
# Deriving SDE(s) and Expectation from Given PDE We want to solve the PDE $u_t + \left( \frac{x^2 + y^2}{2}\right)u_{xx} + (x-y^2)u_y + ryu = 0$ where $r$ is some constant and $u(x,y,T) = V(x,y)$ is given. Write an SDE and express $u(x,y,0)$ as the expectation of some function of the path $X_t, Y_t$. Attempt: I tried to use the multivariate backward equation (2-dimensional) to recover the original SDEs and ended up with $dX_t= \sqrt{x^2 + y^2} dW_t$ and $dY_t = (x-y^2)dt + \sqrt{x^2 + y^2} dW_t$. The problem I have is recovering the expectation. I'm not too familiar with multidimensional Feynman-Kac, but judging by the $ryu$ term and extrapolating from the one-dimensional case, the desired expectation should involve a factor of the form $\exp\left(\int Y_s\,ds\right)$ (a Riemann integral of $Y_t$) inside the expectation. Can anyone shed some light on this? Thank you. EDIT: Oops, wrote the forward equation incorrectly and made a typo; the SDEs have changed - Is it really missing a $u_{yy}$ term? – Jon Dec 18 '11 at 10:18 @Bob : I think you should start from the fact that you would like to express $V(x,y)=E[F(X_T,Y_T)|X_t=x,Y_t=y]$ as a martingale for a regular function $F$ where $X_t$ and $Y_t$ are following unknown SDEs (so the coefficients are the unknowns) and use Itô on $F(X_T,Y_T)$ to retrieve your PDE by setting the drift on $dF$ equal to 0. I don't feel like doing the calculations but I think this should do the trick. Regards – TheBridge Dec 18 '11 at 12:22 @Bob : By the way the final condition giving $F$ is missing in your question. – TheBridge Dec 18 '11 at 13:14 – David Dec 19 '11 at 0:38 @Bob, thanks I'll think about it. Also $u(x,y,T) = V(x,y)$, where $V$ is some "payoff" function, is the final condition I think. – David Dec 19 '11 at 0:41 show 1 more comment ## 1 Answer What do you think about the system of SDEs: $$dX_t=\sqrt{X_t^2+Y_t^2}dW_t$$ $$dY_t=(X_t-Y_t^2)dt$$ And finally: $$u(X_t,Y_t,t)=\mathbb{E}\left[V(X_T,Y_T)\,e^{\int_t^T rY_s\,ds}\,\Big|\,X_t,Y_t\right]$$ You can check that $u$ satisfies your PDE, but as always check my calculations as I am used to making errors. The way I found this is the following: I set $r=0$, then looked for $u$ as an expectation of $V(X_T,Y_T)$, derived its SDE via Itô's lemma, and looked for a null drift; identifying terms of the original PDE with those coming from the drift of $dV$, with $dX_t=a_1(X,Y,t)dt+b_1(X,Y,t)dW_t$ and $dY_t=a_2(X,Y,t)dt+b_2(X,Y,t)dB_t$, gives the solution for $a_1,a_2,b_1,b_2$ when $r=0$ ($B$ and $W$ are independent Brownian motions, which comes from the intuitive fact that there are no $u_{xy}$ terms in the PDE). Then two minutes of reflection gives that $F(X_T,Y_T,T)=V(X_T,Y_T)\,e^{\int_T^T rY_s\,ds}$ respects the final condition and acts on the drift part of $dF$ by only multiplying the PDE's terms with $e^{\int_t^T rY_s\,ds}$ and adding the $rYV$ term which was missing in the solution with $r=0$. Best regards - Hmm, you're right, $Y$ should not have a diffusion term. This was a mistake on my part in working from the backward equations. I now know the approach is to use Itô or, equivalently, use the tower property and make a Taylor expansion on $f$. There are quite a few typos in my OP, but thanks for verifying my intuition. – David Dec 19 '11 at 21:17 – David Dec 19 '11 at 21:25 @Bob : Sasha gave you an excellent answer on your newly posted question. By the way, did I answer your question properly? If so you might consider accepting it. Regards. – TheBridge Dec 20 '11 at 9:34
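For completeness, here is a sketch of the Feynman-Kac verification behind this answer (my notation; note that with the $+ryu$ sign in the PDE as stated, the exponent comes out positive). Set $$M_s := e^{\int_t^s rY_u\,du}\,u(X_s,Y_s,s), \qquad t \le s \le T.$$ By Itô's lemma, with $dX_s=\sqrt{X_s^2+Y_s^2}\,dW_s$ and $dY_s=(X_s-Y_s^2)\,ds$, $$dM_s = e^{\int_t^s rY_u\,du}\left[\Big(u_t + \tfrac{X_s^2+Y_s^2}{2}\,u_{xx} + (X_s-Y_s^2)\,u_y + rY_s\,u\Big)\,ds + \sqrt{X_s^2+Y_s^2}\;u_x\,dW_s\right],$$ and the drift vanishes precisely because $u$ solves the PDE. So $M$ is a local martingale, and (modulo the integrability conditions needed to make it a true martingale) taking expectations between $s=t$ and $s=T$ gives $$u(x,y,t)=\mathbb{E}\left[V(X_T,Y_T)\,e^{\int_t^T rY_s\,ds}\,\Big|\,X_t=x,\;Y_t=y\right].$$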
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 33, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9515434503555298, "perplexity_flag": "head"}
http://cs.stackexchange.com/questions/9602/cfg-with-regular-expression-terminals-on-rhs
CFG with regular expression terminals on RHS Suppose that we expand our idea of context free grammar rules to allow regular expressions of terminals on the right hand side. For example, consider $G_1$: $\begin{align*} S & \rightarrow (a \mid b) S (c \mid d) \\ S & \rightarrow (a \mid b) A (c \mid d) \\ A & \rightarrow (f \mid g)^* \end{align*}$ Then the language of $G_1$ is the following: $$L(G_1) = \{(a \mid b)^n (f \mid g)^* (c \mid d)^n \mid n > 0\}$$ Give a standard CFG $G_1'$ that has the same language as $G_1$. Is your grammar weakly equivalent to $G_1$, strongly equivalent to $G_1$, or both? Why? Secondly, how can I transform any CFG with regular expressions of terminals on the right hand side into a normal context free grammar? - 1 These questions have been answered dozens of times in SO. If you're going to post homework, you should at least post your best attempt at a solution. – Apalala Feb 7 at 14:39 2 Answers The general answer is pretty straightforward: if you have a grammar rule of the form $S \rightarrow {\alpha}r{\beta}$, where $r$ is a regular expression over the set of terminals, change this production to $S \rightarrow {\alpha}S'{\beta}$, find a right-regular grammar with start symbol $S'$ generating $L(r)$ (there is an algorithm for this); and then your grammar will include all those productions as well. Repeat for every production containing a regular expression on the right-hand side. - A grammar exactly equivalent to $G_1$ is the following (say $G_2$):

````
S → X S Y
S → X A Y
X → a | b
Y → c | d
A → fA | gA | ^
````

where `^` is the null symbol (epsilon). "Exactly equivalent" means $L(G_1) = L(G_2)$; that is, the languages of $G_1$ and $G_2$ are the same (every string in $L(G_1)$ is also in $L(G_2)$ and vice versa). -
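For the second part of the question, the textbook construction behind the first answer proceeds by structural induction on the regular expression (a sketch; the nonterminal names $N_r$ are mine, one fresh nonterminal per subexpression $r$):

````
N_a        → a                     (single terminal a)
N_(r1|r2)  → N_r1 | N_r2           (union)
N_(r1 r2)  → N_r1 N_r2             (concatenation)
N_(r*)     → N_r N_(r*) | ^        (Kleene star, ^ = epsilon)
````

Applying this to $A \rightarrow (f \mid g)^*$ and eliminating the resulting unit productions reproduces exactly the rules `A → fA | gA | ^` of the second answer. Presumably the intended answer to the equivalence question is that the converted grammar is weakly equivalent to $G_1$ (it generates the same language) but not strongly equivalent, since the derivation trees differ.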
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9248154759407043, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/42700-tetrahedron.html
# Thread: 1. ## tetrahedron Calculate the volume of the tetrahedron bounded by the plane $n_1: 3x+2y-4z-12=0$ and the coordinate planes. Answer: 12 2. Originally Posted by Apprentice123 Calculate the volume of the tetrahedron bounded by the plane $n_1: 3x+2y-4z-12=0$ and the coordinate planes. Answer: 12 I think you are asking for the volume of the tetrahedron bounded by $n_1$ and the coordinate planes? If so, one vertex will be at the origin, and the other three will be located at the points where the plane cuts the $x, y,\text{ and }z$ axes, namely at $(4,0,0),\;(0,6,0),\text{ and }(0,0,-3)$. Thus, the volume is $V = \pm\frac16\left\lvert\begin{matrix} 0 & 0 & 0 & 1\\ 4 & 0 & 0 & 1\\ 0 & 6 & 0 & 1\\ 0 & 0 & -3 & 1 \end{matrix}\right\rvert = \frac16\cdot72 = 12$ 3. Hello, Apprentice123! The volume of a pyramid with base area $B$ and height $h$ is: $V = \frac{1}{3}Bh$ Calculate the volume of the tetrahedron bounded by the plane $n_1:\;3x+2y-4z-12=0$ and the coordinate planes. The vertices are: $(0,0,0),\;(4,0,0),\;(0,6,0),\;(0,0,-3)$ The base is a right triangle with legs 4 and 6. Its area is: $B = \frac{1}{2}(4)(6) = 12\text{ units}^2$ The height is: $h = 3\text{ units}$ Therefore: $V = \frac{1}{3}(12)(3) = 12\text{ units}^3$ 4. Thank you for the help of all
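As a quick cross-check (a standard reduction, not from the thread): when one vertex is at the origin, the $4\times 4$ determinant above collapses to the $3\times 3$ determinant of the three edge vectors, $$V = \frac16\left|\det\begin{pmatrix}4 & 0 & 0\\ 0 & 6 & 0\\ 0 & 0 & -3\end{pmatrix}\right| = \frac16\,\lvert 4\cdot 6\cdot(-3)\rvert = \frac{72}{6} = 12,$$ in agreement with both answers.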
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8734946846961975, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/127748-second-fundamental-theorem-calculue-can-used-every-situation.html
# Thread: 1. ## Can the second fundamental theorem of calculus be used in every situation? Can the second fundamental theorem of calculus be used in every situation? I mean the Newton-Leibniz formula. 2. I have no idea what you mean by this! The fundamental theorem of calculus can be used in every situation to which it applies, certainly, but it isn't of much use, for example, in adding 2 + 2! And why the cryptic words "Newton Leibniz formula"? Are you referring to $\frac{d}{dx}\int_{\alpha(x)}^{\beta(x)} f(x,t)\, dt = f(x,\beta(x))\frac{d\beta}{dx} - f(x,\alpha(x))\frac{d\alpha}{dx} + \int_{\alpha(x)}^{\beta(x)}\frac{\partial f}{\partial x}\,dt$?
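A worked instance may help (my example; easy to verify by integrating first and differentiating afterwards): with $\alpha(x)=0$, $\beta(x)=x^2$ and $f(x,t)=\sin(xt)$, the formula gives $$\frac{d}{dx}\int_0^{x^2}\sin(xt)\,dt = \sin(x^3)\cdot 2x + \int_0^{x^2} t\cos(xt)\,dt = 3x\sin(x^3) - \frac{1-\cos(x^3)}{x^2},$$ which matches differentiating the closed form $\int_0^{x^2}\sin(xt)\,dt = \frac{1-\cos(x^3)}{x}$ directly.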
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.891104519367218, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/73261/hopf-algebras-and-quantum-groups/73263
## Hopf Algebras and Quantum Groups I have studied graduate abstract algebra and would like to learn about Hopf algebras and quantum groups. What book or books would you recommend? Are there other subjects that I should learn first before I begin studying Hopf algebras and quantum groups? - 1 Why the "suppose"? Have you, or have you not? (Also, since educational systems can vary, what have your abstract algebra courses covered?) – Yemon Choi Aug 20 2011 at 4:22 I did take abstract algebra. It covered group theory, rings, field theory, Galois theory, classical algebraic geometry, modules, vector spaces. It was a year-long course. – Ahmed Roman Aug 20 2011 at 7:06 2 I fear that a year-long course covering the above-mentioned fields is, at best, an introduction. What you definitely will need in Hopf algebra theory is a good grip on tensor products, at least over fields (at the very least, the exactness properties and their consequences). Also, knowledge of representation theory a la arxiv.org/abs/0901.0827 is of much use; many Hopf-algebraic theorems generalize known facts of representation theory and seem utterly devoid of motivation if you don't know the latter. – darij grinberg Aug 20 2011 at 12:03 Related question: mathoverflow.net/questions/115231/… "expository papers related to quantum groups" – Alexander Chervov Dec 5 at 10:17 ## 3 Answers I don't think that you really need to learn much more algebra before you start on Hopf algebras. As long as you know about groups, rings, etc, you should be fine. An abstract perspective on these things is useful; e.g. think about multiplication in an algebra $A$ as being a linear map $m : A \otimes A \to A$, and then associativity of multiplication as being a certain commutative diagram involving some $m$'s. This naturally leads to dualization, i.e. coalgebras, comultiplication, coassociativity, etc, and then Hopf algebras come right out of there by putting the algebra and coalgebra structures together and asking for some compatibility (and an antipode). For the Drinfeld-Jimbo type quantum groups, it is helpful to know some Lie theory, especially the theory of finite-dimensional semisimple Lie algebras over the complex numbers. If you don't know that stuff, the definitions will probably not be that enlightening for you. There are a lot of books on quantum groups by now. They have a lot of overlap, but each one has some stuff that the others don't. Here are some that I have looked at: • Quantum Groups and Their Representations, by Anatoli Klimyk and Konrad Schmüdgen. They have a penchant for doing things in excruciating, unenlightening formulas, but this book is the first one that I learned quantum groups from, so it remains the most familiar to me. This one has a lot more about Hopf $*$-algebras than any of the others. • A Guide to Quantum Groups, by Vyjayanthi Chari and Andrew Pressley. Has an approach based more on Poisson geometry and deformation quantization. • Foundations of Quantum Group Theory, by Shahn Majid. Goes into more detail on braided monoidal categories, braided Hopf algebras, reconstruction theorems (i.e. reconstructing a Hopf algebra from its category of representations) than most other books, although some of this is covered in Chari-Pressley. • Quantum Groups, by Christian Kassel. Focuses mainly on $U_q(\mathfrak{sl}_2)$ and $\mathcal{O}_q(SL_2)$, and does a lot of stuff with knot invariants coming from quantum groups. 
• Hopf Algebras and Their Actions on Rings, by Susan Montgomery. This one is more about the theory of Hopf algebras than about Drinfeld-Jimbo quantum groups. There are some other ones which I know are out there, but I haven't read. These include Lectures on Algebraic Quantum Groups, by Ken Brown and Ken Goodearl, Lectures on Quantum Groups, by Jens Jantzen, Introduction to Quantum Groups, by George Lusztig, and Quantum Groups and Their Primitive Ideals, by Anthony Joseph. Having glanced a little bit at the last two in this list, I found both of them more difficult to read than the ones in my bulleted list above. So, as you can see, there is a lot of choice available. I would advise you to check a few of them out of the library and just see which one you like the best. - Etingof also has a book on quantum groups. I haven't spent much time with it but it seems good. – Peter Samuelson Aug 20 2011 at 20:46 I agree with MTS's comment that you do not need much to start with. Whether you need more or not will depend on your focus: which part of the theory are you interested in? Personally I would recommend Serre's Complex Semisimple Lie Algebras as a must, since you most likely will end up fighting with Serre's relations. Also, classic books on Hopf algebras are not to be forgotten; here the choice is between Abe and Sweedler (if I remember correctly the title is "Hopf Algebras" in both cases). As for the three books about which MTS does not add comments, let me say something: Joseph's book requires a solid background on Lie algebras and their representations; otherwise it's almost impossible to understand its directions. With this background it's a very intriguing, though demanding, read. Brown-Goodearl is a great book to start with. It's built from lecture notes of a course and therefore "learning oriented". It's much oriented towards the more algebraic part of the theory. I would recommend keeping the McConnell-Robson book on Noncommutative Noetherian Rings at hand. Lusztig's book requires feeling at ease with various categorical issues: definitely not a first reading. It is also quite narrowly focused on some specific aspects. I would not put it in any list: if that is your direction, at some point you will be forced into it. I do not know about Jantzen's book. - "Algebras of functions on quantum groups, Part 1" by Leonid I. Korogodski, Yan S. Soibelman – Charlie Frohman Aug 20 2011 at 12:23 Yes, but that is the least algebraically oriented of all... I am still waiting for Part II – Nicola Ciccoli Aug 20 2011 at 12:55 I think it is also worth mentioning the book "S. Dascalescu: Hopf Algebras. An Introduction" as a suitable textbook on the algebraic theory of Hopf algebras. However, since it is not dealing with quantum groups, it would make sense to use it together with some of the books mentioned by MTS on this subject. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9477066397666931, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/115746/list
The document I linked to above is sufficiently striking as to warrant an answer of its own. I hope it complements the community wiki above. As mentioned above, the relevant conjecture in this area is due to Wall: Conjecture The number of maximal subgroups of a finite group $G$ is less than the order of $G$. This has been the subject of much study, with the landmark work (until recently) being the above-cited work of Liebeck, Pyber and Shalev. In addition to the result mentioned above they show that the conjecture is true if the group $G$ is simple, up to a finite number of exceptions. Now a quote from the linked document is relevant: This largely directed attention to composite groups, where Wall in his original paper had at least shown the conjecture to be true for finite solvable groups. The key remaining cases were known to be semidirect products of a vector space V with a nearly simple finite group G acting faithfully and irreducibly on it. It turns out that in this case Wall's conjecture implies some bounds on the cohomology groups $H^1(G,V)$. And, as the document relates, examples have now been found which violate these bounds. In particular, Wall's conjecture does not hold. In light of this development, the bound $C|G|^{3/2}$ mentioned above, also due to Liebeck, Pyber and Shalev, assumes greater importance. Although, as the linked document mentions, it is likely that the value $3/2$ can be reduced a great deal. One final interesting quote: A conjecture of Aschbacher and Guralnick, not made at the conference... would now rise to be the main conjecture in maximal subgroup theory. (The conjecture states that it is the number of conjugacy classes of maximal subgroups that is bounded, less than the number of conjugacy classes of elements in the group.) Anyone interested should definitely read the document. Not only is it interesting mathematically, it's a very engaging account of how this recent breakthrough was achieved.
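A tiny example illustrates both the content of the conjecture and why it is sharp (my illustration, not from the linked document): for $G = (\mathbb{Z}/2)^n$, the maximal subgroups are exactly the hyperplanes of $G$ viewed as a vector space over $\mathbb{F}_2$, and there are $2^n - 1 = |G| - 1$ of them. So Wall's bound holds here, but only just: elementary abelian $2$-groups show that the conjectured bound $|G|$ could not have been improved in general.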
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9554631114006042, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/43406/what-constitutes-an-observation-measurement-in-qm/52539
# What constitutes an observation/measurement in QM? Fundamental notions of QM have to do with observation, a major example being The Uncertainty Principle. 1. What is the technical definition of an observation/measurement? 2. If I look at a QM system, it will collapse. But how is that any different from a bunch of matter "looking" at the same system? 3. Can the system tell the difference between a person's eyes and the bunch of matter? 4. If not, how can the system remain QM? 5. Am I on the right track? - This is a very broad question, with overlap with other questions. You should look at the Heisenberg-Peierls analysis of tracks in a bubble chamber to understand how entanglement produces the apparent collapse of a wavefunction, and then the philosophical problem of turning apparent collapse (decoherence) into collapse, and whether this is philosophy or not. There is no simple answer, and it is hard to not refer you to other questions on the site (although precisely which ones, I can't really be sure without more detail on what you are asking, like a thought experiment) – Ron Maimon Nov 4 '12 at 4:54 – Qmechanic♦ Nov 4 '12 at 16:51 2 – Hal Swyers Nov 5 '12 at 14:24 ## 6 Answers An observation is an act by which one finds some information – the value of a physical observable (quantity). Observables are associated with linear Hermitian operators. The previous sentences tautologically imply that an observation is what "collapses" the wave function. The "collapse" of the wave function isn't a material process in any classical sense much like the wave function itself is neither a quantum observable nor a classical wave; the wave function is the quantum generalization of a probabilistic distribution and its "collapse" is a change of our knowledge – probabilistic distribution for various options – and the first sentence exactly says that the observation is what makes our knowledge more complete or sharper. (That's also why the collapse may proceed faster than light without violating any rules of relativity; what's collapsing is a gedanken object, a probabilistic distribution, living in someone's mind, not a material object, so it may change instantaneously.) Now, you may want to ask how one determines whether a physical process found some information about the value of an observable. My treatment suggests that whether the observation has occurred is a "subjective" question. It suggests it because this is exactly how Nature works. There are conditions for conceivable "consistent histories" which constrain what questions about "observations" one may be asking but they don't "force" the observer, whoever or whatever it is, to ask such questions. That's why one isn't "forced" to "collapse" the wave function at any point. For example, a cat in the box may think that it observes something else. But an external observer hasn't observed the cat yet, so he may continue to describe it as a linear superposition of macroscopically distinct states. In fact, he is recommended to do so as long as possible because the macroscopically distinct states still have a chance to "recohere" and "interfere" and change the predictions. A premature "collapse" is always a source of mistakes. According to the cat, some observation has already taken place but according to the more careful external observer, it has not. It's an example of a situation showing that the "collapse" is a subjective process – it depends on the subject. 
Because of the consistency condition, one may effectively observe only quantities that have "decohered" and imprinted the information about themselves into many degrees of freedom of the environment. But one is never "forced" to admit that there has been a collapse. If you are trying to find a mechanism or exact rule about the moments when a collapse occurs, you won't find anything because there isn't any objective rule or any objective collapse, for that matter. Whether a collapse occurred is always a subjective matter because what's collapsing is subjective, too: it's the wave function that encodes the observer's knowledge about the physical system. The wave function is a quantum, complex-number-powered generalization of probabilistic distributions in classical physics – and both of them encode the probabilistic knowledge of an observer. There are no gears and wheels inside the wave function; the probabilistic subjective knowledge is the fundamental information that the laws of Nature – quantum mechanical laws – deal with. In a few days, I will write a blog entry about the fundamentally subjective nature of the observation in QM: http://motls.blogspot.com/2012/11/why-subjective-quantum-mechanics-allows.html?m=1 - ''My treatment suggests that whether the observation has occurred is a "subjective" question.'' - If this were really true, one would still have to explain why we get objective science out of our subjective measurements. Therefore, there may not be more subjectivity than is in the error bars. – Arnold Neumaier Nov 4 '12 at 15:37 No, Arnold, your question is very correct but the way you reply to it is completely unscientific. You haven't tried to solve the exercise at all; instead, you decided to humiliate it. Indeed, one may also show why objective science emerges from quantum mechanics and the proof goes through - it requires some QM-type maths. But the proof does not assume that some information about the state of the world at a given moment is objective because this is, according to QM, not true! – Luboš Motl Nov 5 '12 at 7:15 -1: The state of an ideal quantum gas in equilibrium is objectively determined, to an accuracy of the square root of the inverse volume, by the measurable numbers P, T, and V. If it were otherwise, objective physics would be impossible. – Arnold Neumaier Nov 5 '12 at 9:44 1 I am fairly certain that agreement of outcomes of joint observations is part of the point that is being made above. Entanglement will ensure agreement of joint observables. However each system will have information that can never be observed jointly. There is no inconsistency in saying that those states continue to evolve within their respective systems as long as the probability of joint measurement is effectively zero (or in fact effectively negative). This is captured in the use of complex amplitudes which can track the evolution of unphysical states. – Hal Swyers Nov 6 '12 at 14:18 1 Another way of thinking about this is that if you dream about observing a particle track, there is nothing wrong with someone saying the track did something different from what you dreamed, since there is no possible way for them to make an observation of what you dreamed. – Hal Swyers Nov 6 '12 at 14:21 Let me take a slightly more "pop science" approach to this than Luboš, though I'm basically saying the same thing. 
Suppose you have some system in a superposition of states: a spin in a mix of up/down states is probably the simplest example. If we "measure" the spin by allowing some other particle to interact with it, we end up with our original spin and the measuring particle in an entangled state, and we still have a superposition of states. So this isn't an observation and hasn't collapsed the wavefunction. Now suppose we "measure" the spin by allowing a graduate student to interact with it. In principle we end up with our original spin and the graduate student in an entangled state, and we still have a superposition of states. However, experience tells us that macroscopic objects like graduate students and Schrodinger's cat don't exist in superposed states, so the system collapses to a single state and this does constitute an observation. The difference is the size of the "measuring device", or more specifically its number of degrees of freedom. Somewhere between a particle and a graduate student the measuring device gets big enough that we see a collapse. This process is described by a theory called decoherence (warning: that Wikipedia article is pretty hard going!). The general idea is that any system inevitably interacts with its environment, i.e. the rest of the universe, and the bigger the system the faster the interaction. In principle when our grad student measures the spin they do form an entangled system in a superposition of states, but the interaction with the rest of the universe is so fast that the system collapses into a single state effectively instantaneously. So observation isn't some spooky phenomenon that requires intelligence. It is simply related to the complexity of the system interacting with our target wavefunction. - 1 Dear John, right, I agree we're saying pretty much the same thing. Still, I would probably stress that decoherence is just an approximate emergent description of the quantum evolution of systems interacting with the environment. Even if the density matrix for the observed system gets almost diagonal, it doesn't mean that one is "forced to imagine" that the system has already "objectively chosen" one of the states on the diagonal. Instead, one is only allowed to say such a thing because it no longer leads to contradictions. – Luboš Motl Nov 4 '12 at 9:08 So, for a wavefunction to collapse it need only be able to interact with the rest of the universe? If so, I'm slightly confused. How can the wavefunction know when it has interacted with the "rest of the Universe"? When it is observed by the grad student, can't the student and the room in which observation has taken place be taken as the rest of the Universe? – ThisIsNotAnId Nov 7 '12 at 2:42 The phrase "the rest of the universe" just means everything that isn't part of the system being studied, so the grad student does count as "the rest of the universe". Have a read of the Wikipedia article I linked and see if that helps. – John Rennie Nov 7 '12 at 6:58 1. A measurement is a special kind of quantum process involving a system and a measurement apparatus that satisfies the von Neumann & Lüders projection postulate. This is one of the basic postulates of orthodox QM and says that immediately after measurement the system is in a quantum state (eigenstate) corresponding to the measured value (eigenvalue) of the observable. 2. 
Measurement does not change by considering the pair system+apparatus or by considering the triple system+apparatus+observer, because the fundamental interaction happens between system and measurement apparatus, and the observer can be considered part of the environment that surrounds both. This is the reason why measuring apparatus give the same value when you are in the lab during the measurement as when you are in the cafeteria during the measurement. 3. See 2. 4. The system is always QM. - ''No elementary quantum phenomenon is a phenomenon until it is a registered ('observed', 'indelibly recorded') phenomenon, 'brought to a close' by 'an irreversible act of amplification'.'' (W.A. Miller and J.A. Wheeler, 1983, http://www.worldscientific.com/doi/abs/10.1142/9789812819895_0008 ) 1. A measurement is an influence of a system on a measurement device that leaves there an irreversible record whose measured value is strongly correlated with the quantity measured. Irreversibility must be valid not forever but at least long enough that (at least in principle) the value can be recorded. 2. There is no difference. 3. The system doesn't care. It interacts with the measurement device, while you are just reading that device. 4. Quantum interactions continue both before, during and after the measurement. Only the reading from the device must be treated in a macroscopic approximation, through statistical mechanics. See, e.g., Balian's paper http://arxiv.org/abs/quant-ph/0702135 5. Which track are you on? - Well, except that irreversibility is always a subjective matter. Many subjects may agree it's irreversible for them but in principle, the situation is always reversible and an agent tracing the "irreversible" phenomena exponentially accurately could do it. – Luboš Motl Nov 5 '12 at 7:18 @LubošMotl: The results of statistical mechanics resulting in equilibrium and nonequilibrium thermodynamics are extremely well established, and show that there is nothing subjective at all in irreversibility. We observe it every moment when we look at fluid flow of water or air. - If the basic laws are in principle reversible, this has no bearing on the real universe as it is impossible in principle that an observer inside the universe can reverse the universe. The real universe as_observed_by_objects_inside is irreversible, and measurements are permanent records for these observers. – Arnold Neumaier Nov 5 '12 at 9:42 The only problem with your assertion is that in the quantum framework, measurements and other "records" are subjective as well. Many people may agree about them, and they usually do, but in principle, others may disagree. The gedanken experiment known as Wigner's friend illustrates this clearly. A friend closed in a box may "know" that some record of a measurement is already there and became a fact, but the physicist outside the box may choose a superior treatment and describe the physicist inside by linear superpositions of macro-different states. – Luboš Motl Nov 5 '12 at 10:11 Irreversibility in Nature is never perfect, it's always a matter of approximations, and there's no objective threshold at which one could say that "now it's really irreversible". With a good enough knowledge of the velocities and positions, one may reverse some evolution and prepare a state whose entropy will decrease for a while. It's exponentially difficult but not impossible in principle. The same thing with decoherence. 
If one traces environmental degrees of freedom, and in principle he can, he may reverse certain amounts of decoherence, too. Decoherence is very fast but never perfect. – Luboš Motl Nov 5 '12 at 10:13 @LubošMotl: ''Irreversibility in Nature is never perfect'' - only according to an idealized theoretical model that assumes (against better knowledge) that one can change something without having to observe the required information and without having to set up the corresponding forces that accomplish the change. This can be done in principle only for very small or very weakly coupled systems. – Arnold Neumaier Nov 5 '12 at 12:25 What is the technical definition of an observation/measurement? A QM measurement is essentially a filter. Observables are represented by operators $\smash {\hat O}$, states or wave functions by (linear superpositions of) eigenstates of these operators, $|\,\psi_1\rangle, |\,\psi_2\rangle, \ldots$. In a measurement, you apply a projection operator $P_n$ to your state, and check if there is a non-zero component left. You thereby ascertain that the system is now in the eigenstate $n$. Experimentally, you often send particles through a "filter", and check if something is left. Think of the Stern-Gerlach experiment. Particles that come out in the upper ray have spin $S_z = +\hbar/2$. We say we have measured their spin, but we have actually *prepared* their spin. Their state now fulfils $\smash{\hat S_z} \,|\,\psi\rangle = +\hbar/2 \,|\,\psi\rangle$, so it is the spin-up eigenstate of $\smash{\hat S_z}$. This is physical and works even if no one is around. If I look at a QM system, it will collapse. But how is that any different from a bunch of matter "looking" at the same system? Can the system tell the difference between a person's eyes and the bunch of matter? There are two different things going on, knowledge update (subjective), and decoherence (objective). First the objective part: If you have a quantum system by itself, its wave function will evolve unitarily, like a spherical wave for example. If you put it in a physical environment, it will have many interactions with the environment, and its behavior will approach the classical limit. Think of the Mott experiment for a very simple example: Your particle may start as a spherical wave, but once it hits a particle, it will be localized, and have a definite momentum (within $\Delta p \,\Delta x \geq \hbar/2$). That's part of the definition of "hits a particle". The evolution will then continue from there, and it is very improbable that the particle has the next collision in the other half of the chamber. Rather, it will follow its classical track. Now the subjective part: If you look at a system, and recognize that it has certain properties (e.g. is in a certain eigenstate), you update your knowledge and use a new expression for the system. This is simple, and not magical at all. There is no change in the physical system in this part; a different observer could have different knowledge and thus a different expression. This subjective uncertainty is described by density matrices. Sidenote on density matrices: A density matrix says you think the system is with probability $p_1$ in the pure state $|\,\psi_1\rangle$, with probability $p_2$ in the pure state $|\,\psi_2\rangle$, and so on. (A pure state is one of the states defined above and can be a superposition of eigenstates, whereas a mixed state is one given by a density matrix.) 
Pure states are objective: if I have a bunch of spin-up particles from my Stern-Gerlach experiment, my colleague will have to agree that they are spin-up, no matter what. They all go in his experiment to the top, too. If I have a bunch of undetermined-spin particles, $$|\,\psi\rangle_\mathrm{undet.} = \frac{1}{\sqrt{2}} (|\,\psi_\uparrow\rangle + |\,\psi_\downarrow\rangle)\,,$$ they will turn out 50/50, for both of us. Mixed states are different. My particles could be all spin-up, but I don't know that. Someone else does, and he uses a different state to describe them (e.g. see this question). If I see them fly through a magnetic field, I can recognize their behavior, and use a new state, too. And note that a mixed state of 50% $|\,\psi_\uparrow\rangle$ and 50% $|\,\psi_\downarrow\rangle$ is not the same as the pure state $|\,\psi\rangle_\mathrm{undet.}$ defined above. If not, how can the system remain QM? Technically, it remains QM all the time (because classical behavior is a limit of QM, and physics always has to obey QM uncertainties). Of course, that's not what you mean. If a system is to stay in a nice, clean quantum state for a prolonged time, it helps that it is isolated. If you have some interaction with the environment, it will not necessarily completely decohere and become classical, but a perfect QM description will become impractically complicated, as you would have to take the environment and the apparatus into account quantum mechanically. - First, wow, really nice reply. Thanks! If I read correctly, are you saying that a QM system can decohere differently for different observers? If so, what is the limit to this subjectivity? For example, can two observers view a particle going in opposite directions at the same time? – ThisIsNotAnId Feb 3 at 0:18 As far as I understand, decoherence is objective, so no, two observers can't disagree. They can disagree over whether a system is in a pure or a mixed state. Maybe my use of 'observer' is confusing here. I don't mean something deep like different frames of reference, just that different people (experimenters) have different incomplete knowledge, and that is expressed through their density operators / mixed states. It's like statistical mechanics, but QM. – jdm Feb 3 at 12:32 That clears things up nicely, thanks! – ThisIsNotAnId Feb 6 at 23:35 Nothing exists until it is measured and observed. the Copenhagen consensus Everything in this universe universally obeys the Schrodinger equation. There's no special measurement objective collapse. So, there are no measurements. There are no observers either. Ergo, nothing exists. The false assumption nearly everyone makes is that something exists. Can you prove something exists? You can't! -
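Two of the claims in jdm's filter/density-matrix answer above can be made fully explicit (standard spin-1/2 computations, written out here as a sketch). First, the projection postulate for the Stern-Gerlach filter: for $$|\,\psi\rangle = \alpha\,|\,\psi_\uparrow\rangle + \beta\,|\,\psi_\downarrow\rangle, \qquad |\alpha|^2+|\beta|^2=1,$$ measuring $S_z$ applies the projector $P_\uparrow = |\,\psi_\uparrow\rangle\langle\psi_\uparrow\,|$; the outcome $+\hbar/2$ occurs with probability $\langle\psi\,|P_\uparrow|\,\psi\rangle = |\alpha|^2$, and the post-measurement ("prepared") state is $P_\uparrow|\,\psi\rangle\,/\,\lVert P_\uparrow|\,\psi\rangle\rVert = |\,\psi_\uparrow\rangle$. Second, the mixed/pure distinction: in the $\{\uparrow,\downarrow\}$ basis $$\rho_\mathrm{mixed} = \tfrac12\,|\,\psi_\uparrow\rangle\langle\psi_\uparrow\,| + \tfrac12\,|\,\psi_\downarrow\rangle\langle\psi_\downarrow\,| = \tfrac12\begin{pmatrix}1&0\\ 0&1\end{pmatrix}, \qquad \rho_\mathrm{pure} = \tfrac12\begin{pmatrix}1&1\\ 1&1\end{pmatrix},$$ where $\rho_\mathrm{pure}$ corresponds to $|\,\psi\rangle_\mathrm{undet.}$ above. The off-diagonal coherences distinguish them operationally: a measurement of $S_x$ gives 50/50 for the mixed state but $+\hbar/2$ with certainty for the pure state.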
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9438427090644836, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/225407/for-what-functions-instead-of-leq
# For what functions, "$=$" instead of "$\leq$" Theorem: Suppose $f,g\in C(U,\mathbb R^n)$ and let $f$ be locally Lipschitz-continuous in the second argument, uniformly with respect to the first. If $x(t)$ and $y(t)$ are respective solutions of the initial value problems $x'=f(t,x), x(t_0)=x_0$ and $y'=g(t,y), y(t_0)=y_0$, then $$|x(t)-y(t)| \leq |x_0-y_0|\cdot e^{L\cdot |t-t_0|}+\frac{M}{L}\left(e^{L\cdot |t-t_0|}-1\right),$$ where $L=\sup \frac{|f(t,x)-f(t,y)|}{|x-y|}$ and $M=\sup|f(t,x)-g(t,x)|$. This is a theorem in one of my books about ODEs. Now I was wondering: for what functions $f(t,x)=f(x)$ and $g(t,x)=g(x)$ does the inequality become an equality? The first supremum is over $(t,x)\neq (t,y)\in V$, the second supremum is over $(t,x)\in V$. - Just curious, what book is this from? – Christopher A. Wong Oct 30 '12 at 20:05 – Montaigne Oct 30 '12 at 20:12
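One family where the bound is attained (my example, not from the book; it can be checked by direct substitution): take $f(t,x) = Lx$ and $g(t,y) = Ly - M$ with constants $L>0$, $M\geq 0$, and suppose $x_0 \geq y_0$ and $t \geq t_0$. Then $$x(t) = x_0\,e^{L(t-t_0)}, \qquad y(t) = y_0\,e^{L(t-t_0)} - \frac{M}{L}\left(e^{L(t-t_0)}-1\right),$$ so $$x(t)-y(t) = (x_0-y_0)\,e^{L(t-t_0)} + \frac{M}{L}\left(e^{L(t-t_0)}-1\right) \geq 0,$$ which is exactly the right-hand side of the estimate. Moreover $\sup\frac{|f(t,x)-f(t,y)|}{|x-y|} = L$ and $\sup|f(t,x)-g(t,x)| = M$, so the constants match and the inequality is an equality for this affine pair.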
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9106460809707642, "perplexity_flag": "head"}
http://tamino.wordpress.com/2011/05/27/markov-2/
Science, Politics, Life, the Universe, and Everything # Markov 2 Posted on May 27, 2011 In the last post we showed how Harold Brooks has applied a 1st-order Markov Chain model to the phenomenon of a significant tornado day (“STD”), in particular to explain the frequency of occurrence of long runs of consecutive STDs. An STD is defined as any day with at least one (possibly many more) tornados of strength F2 or greater (on the Fujita scale). The 1st-order Markov model did a good job, whereas a bare-probability model doesn’t. In the bare-probability model, the probability that any given day is an STD can depend on the time of year (May is the peak time of year for tornado probability), but does not depend on whether previous days had significant tornados. However, there are too many long runs of consecutive STDs (as many as 9 in a row in the data used by Brooks, as many as 12 in a row in the NOAA-NWS data) for the bare-probability model to be correct. In the 1st-order Markov model, the probability that today is an STD can depend, not only on the time of year, but also on whether or not yesterday was an STD. If it was, then the probability that today will be an STD is enhanced. So this model has two probabilities (both of which depend on the time of year): $p_{01}$ is the probability of an STD if yesterday was not, while $p_{11}$ is the probability of an STD if yesterday was. There are two probabilities (called “transition probabilities”) because there are two possible states for yesterday: either 0 (non-STD) or 1 (STD). The model is a 1st-order model because it depends on only 1 previous state (yesterday, but not days before that). While the 1st-order does much better than the bare-probability model, the observed number of very long runs is still a bit more than the model indicates. Therefore I decided to explore another possibility — you may already have guessed that I looked at the 2nd-order Markov Chain model. In the 2nd-order model, the probability today will be an STD depends not only on the time of year, but on whether or not the previous two days were STDs. There are four possible states for the previous two days: “00” (neither was an STD), “01” (two days ago was not but yesterday was), “10” (two days ago was but yesterday was not), and “11” (both yesterday and the day before that were STDs). This means there are four (time-of-year dependent) transition probabilities: $p_{001},~p_{011},~p_{101}$, and $p_{111}$, giving the probability today is an STD for each possible state of the preceding two days. These probabilities certainly exist, whether the process follows a 2nd-order Markov Chain model or not! It’s worth taking note of the fact that if the process follows a 1st-order Markov Chain model, then the probability today is an STD doesn’t depend on the state two days ago. This would mean that the probabilities $p_{001}$ and $p_{101}$ must be the same, equal to $p_{01}$ of the 1st-order model, and also that the probabilities $p_{011}$ and $p_{111}$ are the same, equal to the probability $p_{11}$ of the 1st-order model. If we can show that these equivalences do not hold, then we have managed to disprove the 1st-order Markov Chain model — although that will not undermine its usefulness, nor will it prove the 2nd-order Markov (or any other) model. I took the NOAA-NWS data and used it to estimate all four transition probabilities. 
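For readers who want to reproduce this kind of estimate, here is a minimal sketch in Python (my code, not part of the original analysis; the 0/1 series `std` is a simulated placeholder standing in for the preprocessed NOAA-NWS record, and the sketch ignores the time-of-year dependence that the actual analysis keeps track of):

```python
import numpy as np

# std: one 0/1 flag per day (1 = significant tornado day).
# A simulated placeholder stands in for the preprocessed NOAA-NWS record.
rng = np.random.default_rng(0)
std = (rng.random(365 * 58) < 0.1).astype(int)

# Count transitions (state two days ago, state yesterday) -> (state today).
counts = np.zeros((2, 2, 2))
for t in range(2, len(std)):
    counts[std[t - 2], std[t - 1], std[t]] += 1

# Conditional probabilities p_ab1 = P(today = 1 | two days ago = a, yesterday = b).
totals = counts.sum(axis=2)
p = counts[:, :, 1] / np.where(totals > 0, totals, 1)
for a in (0, 1):
    for b in (0, 1):
        print(f"p_{a}{b}1 = {p[a, b]:.3f}")
```

For a 1st-order chain, the printed values would satisfy p_001 = p_101 and p_011 = p_111 up to sampling error, which is exactly the equivalence being tested here.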
Here’s the result: Not only are there differences between $p_{001}$ and $p_{101}$, not only are there differences between $p_{011}$ and $p_{111}$, those differences are statistically significant. This effectively disproves the 1st-order Markov model (but as I said, doesn’t undermine its usefulness nor does it prove the 2nd-order model correct). It’s quite interesting (and counterintuitive) that early in the tornado season (during March), $p_{011}$ is greater than $p_{111}$. This means that if yesterday was an STD, then today is more likely to be an STD if two days ago was not than if it was. During the heart of tornado season, $p_{111}$ is greater than $p_{011}$, so today is more likely to be an STD if both yesterday and the day before were, than if only yesterday was. Also, during most of the year (and almost all of the 2nd half of the year), the difference between $p_{011}$ and $p_{111}$ is not significant, which is what we would expect from the 1st-order Markov model. Throughout the entire year, $p_{101}$ is greater than $p_{001}$. This means that even if yesterday was not an STD, if two days ago was it still enhances the chance of an STD today. Therefore the conditions which bring about STDs have a persistence longer than a single day. When we used the 1st-order model, we got the following comparison between observed and expected numbers of long runs of consecutive STDs (expected in black, observed in red): Using the 2nd-order Markov model, we get a result which is only slightly different, but does have more chance of very long runs: The discrepancy between observed and modeled numbers is less. In particular, with this model the probability of 3 “runs of 12” in the 58-year NOAA-NWS record is about 1 out of 40, which is implausible but not unbelievable, so it’s significant evidence against the model but not very strong evidence. And, there’s another factor which should also be considered. As Harold Brooks said in a comment, There are a number of papers in both the formal and informal literature that changes in damage assessment over the years have led to a decrease in the reported intensity of the strongest tornadoes over the years. Therefore it’s possible that the trio of runs-of-12 is in part due to the greater likelihood of earlier-in-the-record tornados being classified as F2 or stronger. After all, all three runs-of-12 are from 1967 or before. If tornados were ranked in those earlier records as they are today, we may not have seen so many long runs. This entry was posted in climate change, Global Warming, mathematics and tagged Global Warming, mathematics. ### 7 Responses to Markov 2 1. M Could you do a markov chain analysis of the first half of the dataset, and of the second half – if there is a change in reporting, might it show up in such an analysis? (probably not enough data to do this well, but a boy can hope) 2. Harold Brooks I had wanted to do a 2nd-order, but messed up the transition probabilities somehow. p011>p111 in March is not counterintuitive to me. It’s a reflection of large-scale weather systems moving across the country. It typically takes a couple of days for systems to go from just east of the Rockies to the East Coast. Typically (go back to 14-16 April, 25-27 April, noting that systems start to move a little more slowly as we move out of winter), you can see reports move east. I think the p111 problem is that many times the large-scale system will have moved off the coast by the time you get to the third day. 
If you want to do the different periods, the first big break in practices is ~1975 (it took a few years to implement and the presence of 3 April 1974 makes it hard to see when it starts). The second is ~2001. 1975-1999 is a relatively homogeneous rating process. It’s also possible that using F1 and greater may be a better choice for longer consistency. In a crude sense, if the early tornadoes were detected they were almost certainly at least F1, but they were overrated by ~half an F-scale. In the last decade, the community has been much more conservative about rating tornadoes. That’s a big part of the absence of F5 tornadoes and dearth of F4 tornadoes from 2000-2007. 3. John N-G I was going to make the same comment as Harold regarding the intuitiveness of higher p011 in early spring, but then I noticed that p111 was higher than p011 in December and January and realized I don’t understand after all. [Response: I left out the error bars because it made the graph cluttered -- but they're larger for p011 and p111 in Dec/Jan simply because there are fewer tornados, so fewer samples on which to base the estimates. The differences aren't significant for those months.] 4. jyyh Thank you for the link to Brooks et al 2003, an outstanding work I believe the few storm chasers here have already read. Do I read it correctly enough to say (informally) that the potential for tornados in the US midwest is at least 3 times that of anywhere in Europe, and for most of Europe more than 10 times that? 5. Rattus Norvegicus I found the comment which John N-G commented on at his blog rather disconcerting. (No offense to John or Harold here.) When I read Harold’s comment, I found it interesting but, not being familiar with him, I googled. It was quite clear that he is an incredibly good source. No need to know which side of the political fence he sits on. Thanks for the comment Harold. Is there any work being done to try and homogenize the record? It seems like an interesting problem. 6. Harold Brooks Homogenizing the record is really hard. Grazulis’s work on significant tornadoes (F2 and greater) is one effort that’s pretty good. It’s clear that there’s a break in it about 1921 in terms of occurrence. One of the main points of the environments work is to use the relative consistency of the environmental observations over the years as a proxy for occurrence. Build relationships between significant (5 cm hail, 65 kt thunderstorm winds, F2 tornadoes) events and environments when the event observations are good and then use the environments as estimates. There’s some work on satellite estimates of hail that qualitatively looks like the environment obs estimates. I’m optimistic we can use the so-called 20th Century Reanalysis, which uses surface pressure data and sea surface temperatures with an ensemble Kalman filter to estimate the 4D structure of the atmosphere, to get qualitative descriptions of storm days. There are some quantitative issues (moisture may be a little low), but most of the old events I’ve glanced at (1884 Enigma Outbreak, 1890 Louisville, 1896 St. Louis, 1905 Snyder OK, 1925 Tri-state, and even overseas events, e.g., 1934 Finland, 1875 Austria) look good pattern-wise. If we can figure out a good way to quantify the patterns, we can go with a lot from that. I’ll be advertising for a post-doc in a few weeks to look at that and other issues. 7. Pete Dunkelberg PBS features an interview with Brooks. h/t Capital Climate.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 22, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9546165466308594, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2012/09/07/
# The Unapologetic Mathematician

## Back to the Example

Let's go back to our explicit example of $L=\mathfrak{sl}(2,\mathbb{F})$ and look at its Killing form. We first recall our usual basis:

$\displaystyle\begin{aligned}x&=\begin{pmatrix}0&1\\ 0&0\end{pmatrix}\\y&=\begin{pmatrix}0&0\\1&0\end{pmatrix}\\h&=\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}\end{aligned}$

which lets us write out matrices for the adjoint action:

$\displaystyle\begin{aligned}\mathrm{ad}(x)&=\begin{pmatrix}0&0&-2\\ 0&0&0\\ 0&1&0\end{pmatrix}\\\mathrm{ad}(y)&=\begin{pmatrix}0&0&0\\ 0&0&2\\-1&0&0\end{pmatrix}\\\mathrm{ad}(h)&=\begin{pmatrix}2&0&0\\ 0&-2&0\\ 0&0&0\end{pmatrix}\end{aligned}$

and from here it's easy to calculate the Killing form. For example:

$\displaystyle\begin{aligned}\kappa(x,y)&=\mathrm{Tr}\left(\mathrm{ad}(x)\mathrm{ad}(y)\right)\\&=\mathrm{Tr}\left(\begin{pmatrix}0&0&-2\\ 0&0&0\\ 0&1&0\end{pmatrix}\begin{pmatrix}0&0&0\\ 0&0&2\\-1&0&0\end{pmatrix}\right)\\&=\mathrm{Tr}\left(\begin{pmatrix}2&0&0\\ 0&0&0\\ 0&0&2\end{pmatrix}\right)\\&=4\end{aligned}$

We can similarly calculate all the other values of the Killing form on basis elements:

$\displaystyle\begin{aligned}\kappa(x,x)&=0\\\kappa(x,y)=\kappa(y,x)&=4\\\kappa(x,h)=\kappa(h,x)&=0\\\kappa(y,y)&=0\\\kappa(y,h)=\kappa(h,y)&=0\\\kappa(h,h)&=8\end{aligned}$

So we can write down the matrix of $\kappa$:

$\displaystyle\begin{pmatrix}0&4&0\\4&0&0\\ 0&0&8\end{pmatrix}$

And we can test this for degeneracy by taking its determinant, which comes to $-128$. Since this is nonzero, we conclude that $\kappa$ is nondegenerate, which we know means that $\mathfrak{sl}(2,\mathbb{F})$ is semisimple — at least in fields where $1+1\neq0$.
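As a quick numerical cross-check of the computation above, here is a short numpy sketch (my own illustration, not part of the original post) that rebuilds the adjoint matrices from commutators and recovers the Gram matrix of $\kappa$ and its determinant:

```python
import numpy as np

# The usual basis of sl(2), in the order (x, y, h).
x = np.array([[0, 1], [0, 0]])
y = np.array([[0, 0], [1, 0]])
h = np.array([[1, 0], [0, -1]])
basis = [x, y, h]

def ad_matrix(a):
    """Matrix of ad(a) = [a, -] in the basis (x, y, h).
    A traceless 2x2 matrix [[g, al], [be, -g]] equals al*x + be*y + g*h,
    so the coordinates of a commutator can be read off its entries."""
    cols = []
    for b in basis:
        c = a @ b - b @ a
        cols.append([c[0, 1], c[1, 0], c[0, 0]])  # (alpha, beta, gamma)
    return np.array(cols).T                       # columns indexed by b

ads = [ad_matrix(a) for a in basis]
kappa = np.array([[np.trace(A @ B) for B in ads] for A in ads])
print(kappa)                  # [[0 4 0] [4 0 0] [0 0 8]], as computed above
print(np.linalg.det(kappa))   # -128.0, nonzero, so kappa is nondegenerate
```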
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 11, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8965360522270203, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/157771/sum-of-gaussian-processes
# Sum of Gaussian processes

I would like to prove that the sum of Gaussian processes is also Gaussian; to be precise, that $M_t=W_t+W_{t^2}$ is Gaussian, where $W_t$ is a standard Wiener process. That is kind of obvious, but I am looking for a more rigorous, as-short-as-possible proof, other than just saying that it is the sum of two Gaussian processes. I have some thoughts, but something seems to be missing there.

$W_t$ and $W_{t^2}$ are dependent, but $M_t=2W_t+(W_{t^2}-W_t)$; if $t^2\geq t$, these two terms are independent, and then the standard proof for sums of independent Gaussians would be what I am looking for. But as far as I know, something different has to be done when dealing with processes, not random variables.

A random process $X_t$ is Gaussian $\Leftrightarrow \forall n\geq1,t_1,\dots,t_n,\lambda_1,\dots,\lambda_n$: $\sum^n_{k=1}\lambda_k X_{t_k}$ is Gaussian. Then $\sum^n_{k=1}\lambda_k M_{t_k}=\left[\sum^n_{k=1}\lambda_k W_{t_k}\right]+\left[\sum^n_{k=1}\lambda_k W_{t_k^2}\right]$; by the same proposition both terms are Gaussian, since $W_t$ and $W_{t^2}$ are Gaussian. Also $\left[\sum^n_{k=1}\lambda_k W_{t_k}\right]+\left[\sum^n_{k=1}\lambda_k W_{t_k^2}\right]=\left[\sum^n_{k=1}2\lambda_k W_{t_k}\right]+\left[\sum^n_{k=1}\lambda_k \left(W_{t_k^2}-W_{t_k}\right)\right]$, so maybe now I could conclude by using that proof for random variables?

-

@DilipSarwate, you are right! I was trying to construct such a vector, but some other proof confused me and I wanted these $2n$ random variables to be independent.. totally forgot that I can assume that the whole vector is Gaussian. I would like to accept this as an answer – Julius Jun 13 '12 at 13:40

## 1 Answer

As requested by the OP, my comment has been converted into an answer.

Perhaps you are making this harder than it is. Isn't $(W_{t_1},W_{t_1^2},W_{t_2},W_{t_2^2},\ldots,W_{t_n},W_{t_n^2})$ a Gaussian vector (meaning the $2n$ random variables are jointly Gaussian), and so any linear transformation applied to this vector results in a Gaussian vector? There is no requirement that the $2n$ random variables in question have to be independent for this linear transformation property to hold. It is joint Gaussianity that is required, and joint Gaussianity is guaranteed since the random variables are from a Gaussian process.

-
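To complement the argument, here is a small Monte Carlo sketch (my own construction; the times and coefficients are arbitrary choices) that samples $M_t = W_t + W_{t^2}$ on a common grid and checks that an arbitrary linear combination $\sum_k \lambda_k M_{t_k}$ has exactly the variance predicted by joint Gaussianity, namely $\sum_{i,j} w_i w_j \min(s_i, s_j)$ over the combined time points:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.array([0.3, 0.7, 1.5])              # times where both t^2 < t and t^2 > t occur
grid = np.unique(np.concatenate([t, t**2]))

def sample_M(n):
    """Draw n samples of the vector (M_{t_1}, ..., M_{t_k})."""
    dt = np.diff(np.concatenate([[0.0], grid]))
    W = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n, len(grid))), axis=1)
    Wt  = W[:, np.searchsorted(grid, t)]
    Wt2 = W[:, np.searchsorted(grid, t**2)]
    return Wt + Wt2

lam = np.array([1.0, -2.0, 0.5])           # arbitrary linear combination
Z = sample_M(200_000) @ lam
print(Z.mean(), Z.std())                   # centered, with the predicted spread

# Predicted standard deviation from Cov(W_s, W_u) = min(s, u):
s = np.concatenate([t, t**2]); w = np.concatenate([lam, lam])
var = sum(wi * wj * min(si, sj) for si, wi in zip(s, w) for sj, wj in zip(s, w))
print(np.sqrt(var))                        # agrees up to Monte Carlo error
```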
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9756500720977783, "perplexity_flag": "head"}
http://mathoverflow.net/questions/81982/what-is-the-intuition-of-connections-for-cubical-sets
## What is the intuition of connections for cubical sets?

I am beginning to do some work with cubical sets and thought that I should have an understanding of the various extra structures that one may put on cubical sets (for purposes of this question, connections). I know that cubical sets behave more nicely when one has an extra set of degeneracies called connections. The question is: Why these particular relations? Why do they show up? Precise references would be greatly appreciated.

-

8 Cubical sets, presumably, unless you're forming connections between graduate students. – Noam D. Elkies Nov 27 2011 at 3:14

Thank you. I think that I have fixed all of the offenders. – Spice the Bird Nov 27 2011 at 3:22

1 Just to add a comment to the points below: in a simplicial set, a degenerate simplex has some adjacent faces the same. In a cubical set, a degenerate cube has opposite faces the same. The extra structure of connections brings cubical sets nearer to simplicial sets, while keeping other advantages, such as easily understood definitions of compositions. For another application, see Higgins, P.J., "Thin elements and commutative shells in cubical $\omega$-categories", Theory Appl. Categ. 14 (2005), No. 4, 60-74. – Ronnie Brown Jan 31 2012 at 22:43

## 4 Answers

A list of precise references for connections on cubical sets has to start with:

R. Brown, P. J. Higgins and R. Sivera, 2011, Nonabelian Algebraic Topology: Filtered spaces, crossed complexes, cubical homotopy groupoids, volume 15 of EMS Monographs in Mathematics, European Mathematical Society.

as in there Brown, Higgins and Sivera have written out and explored the theory in detail. There are several introductory sections on connections, both in double categories and in cubical sets.

The intuitions come back to the structure of the singular cubical complex of a space, in which there are cubes that are degenerate in an intuitive sense but are not of the 'constant in direction $i$' type. The typical example is a square with two adjacent sides constant and the other two copies of the same path. (I cannot draw it here!)

Ronnie Brown has numerous introductory articles on his website, and I will give you a link to the handout for a talk on higher dimensional group theory in which there is some discussion of the connections from a group-theoretic viewpoint (http://pages.bangor.ac.uk/~mas010/pdffiles/liverpool-beamer-handout.pdf). The discussion is fairly near the end, so have a look for diagrams with cubes and hieroglyphic pictures! The point made there is that if you want to say that the top face of a cube is the composite of its other faces, then on expanding the cube as a cross-shaped collection of five squares, there will be holes to fill in the corners, but connection squares are just the right form to fill them. (It is worth roaming around on Ronnie Brown's site, including http://pages.bangor.ac.uk/~mas010/brownpr.html, as there are several other chatty papers and Beamer presentations that may help.)

You can go back to the original Brown-Higgins papers, but as they have been used as the base for the new book, they may not give you anything extra.

-

That URL doesn't seem to work, Tim -- do you mean this one: pages.bangor.ac.uk/~mas010/publicfull.htm ? – Finn Lawler Nov 27 2011 at 19:04

@Finn, Strange. The link does work for me. There is another: pages.bangor.ac.uk/~mas010/index.html, and the original one was some way down that page.
– Tim Porter Nov 28 2011 at 6:22

I got a 404 error when I clicked on it yesterday, but now it works. Odd. Never mind; lots of nice stuff there to get distracted by! – Finn Lawler Nov 28 2011 at 12:21

The point I want to make is that the notion of connection on a cubical set was forced on us in the following way. Go back to

[21]. (with C.B. Spencer), "Double groupoids and crossed modules", Cah. Top. Géom. Diff. 17 (1976) 343-362.

The problem we started with in 1971 was: since double groupoids were putative codomains for a 2-d van Kampen type theorem, were there interesting examples of double groupoids? We easily found functors

(1) (double groupoids) $\to$ (crossed modules)

We eventually found a functor

(2) (crossed modules) $\to$ (double groupoids)

which nicely tied in double groupoids with classical ideas, but which double groupoids arose in this way? A concurrent question was: what is a commutative cube in a double groupoid? (An answer was needed for the conjectured proof of the 2-d vKT.) It was great that both questions were resolved with the notion of connection! (Our first, perhaps rambling, exposition was turned down by JPAA as a result of negative referee reports, and because the 2-d van Kampen theorem, an explicit aim, was not yet achieved.)

As explained in [21], the transport law was borrowed from a paper of Virsik on path connections, hence the name "connection"; see also [21] for a general definition. It was not too hard to formulate the higher dimensional laws on connections, since they involved the monoid structure max on the unit interval, but the verification of the equivalence corresponding to (2) was carried out by Philip Higgins (phew!), stated in

[22]. (with P.J. Higgins), "Sur les complexes croisés, $\omega$-groupoïdes et T-complexes", C.R. Acad. Sci. Paris Sér. A. 285 (1977) 997-999.

and published in

[31]. (with P.J. Higgins), "On the algebra of cubes", J. Pure Appl. Algebra 21 (1981) 233-260.

I hope the early pages of "Nonabelian algebraic topology" (pdf with hyperref downloadable from my web site, with permission of EMS) will help to explain the background. Look particularly at the notion of algebraic inverse to subdivision, which necessitated the cubical approach. Another relevant paper is Higgins, P.J., "Thin elements and commutative shells in cubical $\omega$-categories", Theory Appl. Categ. 14 (2005) No. 4, 60-74.

-

Also relevant are: Grandis, Marco, "Cubical monads and their symmetries", Proceedings of the Eleventh International Conference of Topology (Trieste, 1993), Rend. Istit. Mat. Univ. Trieste 25 (1993), no. 1-2, 223-262 (1994); Grandis, Marco; Mauri, Luca, "Cubical sets and their site", Theory Appl. Categ. 11 (2003), No. 8, 185-211. – Ronnie Brown Nov 28 2011 at 15:00

A very concrete instance where you can see the meaning and usefulness of connections is an article by Brown and Mosa: they show that double categories (which do have an underlying (truncated) cubical set) with connections are the same as (globular) 2-categories.
The reason is that the connection allows one to fold the four different edges of a 2-cell in the cubical double category structure into just two edges, leaving degenerate edges at the other sides, and this can just as well be captured in the data of a globular 2-category where 2-cells have just one source and one target 1-cell -- see the definition of the folding map right before Proposition 5.1 in the above article.

-

As far as higher cubical categories are concerned, a connection will allow you to literally rotate a face, i.e. turn a face of one type into a face of another type, in an invertible way. In short, it materializes an equivalence between the different types of faces into special degenerate cubes. The 2d case, for example, is fairly simple, as one can either turn horizontal arrows into vertical arrows or vice versa.

One advantage of a connection is therefore that it allows one to speak of commutative n-cubes in an n-tuple category with connection. To do so, you can take an n-cube and apply connections until you only have nontrivial faces of one type. Then check whether the obtained cube is an identity or not. It turns out that the result does not depend on the way you choose to apply the connections: if your cube gives an identity cube with one face rearrangement, it will with another. This is, to my understanding, the essence of Brown and Al-Agl's equivalence between cubical categories with connections and globular categories. So for cubical categories it is very restrictive, which is also why they are so friendly. But I am not sure about the impact on cubical sets. You surely will find good material in Tim and Ronnie's references.

-

With regard to the impact on cubical sets, see Tonks, A., "Cubical groups which are Kan", J. Pure Appl. Algebra 81 (1992) 83-87; and Maltsiniotis, G., "La catégorie cubique avec connexions est une catégorie test stricte", Homology, Homotopy Appl. 11 (2009) 309-326. The first paper shows that cubical groups with connections are Kan complexes. The second shows that the geometric realisation of the cartesian product of cubical sets with connection is homotopy equivalent to the cartesian product of the realisations. This is good, though not quite as convenient as the simplicial case. – Ronnie Brown Nov 30 2011 at 11:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 3, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9138014316558838, "perplexity_flag": "middle"}
http://crypto.stackexchange.com/questions/tagged/mceliece?sort=active&pagesize=15
# Tagged Questions

The mceliece tag has no wiki summary.

3 answers · 217 views
### McEliece Cryptosystem Implementations
Are there any current implementations (language irrelevant) of the McEliece Cryptosystem? I have been hunting around all day, and yet, have only found a few mathematical equations pertaining to the ...

0 answers · 51 views
### PKC McEliece + $S$ + $P$
I am trying to implement the McEliece cryptosystem in SAGE. My question is: how will I be able to choose the appropriate matrices $S$ and $P$? I ask this because when I try to obtain the vector $\hat{m}=mS$ ...

0 answers · 37 views
### Efficient decoding of irreducible binary Goppa codes and the role of matrix P in McEliece cryptosystem
If we assume that the support for an irreducible binary Goppa code $\gamma_1, ..., \gamma_n$ is publicly known, when is it possible to efficiently decode the code? I know it's possible if one knows ...

1 answer · 92 views
### McEliece for streaming data
Under the assumption that there exists a real-world implementation of the McEliece scheme, could it be applied to streaming data as is? By that I mean in 'block cipher mode'? I've read that McEliece ...

3 answers · 292 views
### Is key size the only barrier to the adoption of the McEliece cryptosystem, or is it considered broken/potentially vulnerable?
A recent paper showed that the McEliece cryptosystem is not, unlike RSA and other cryptosystems, weakened as drastically by quantum computing because strong Fourier sampling cannot solve the hidden ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9073989391326904, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/30537/is-the-schrodinger-equation-derived-or-postulated
# Is the Schrödinger equation derived or postulated?

I'm an undergraduate mathematics student trying to understand some quantum mechanics, but I'm having a hard time understanding what the status of the Schrödinger equation is. In some places I've read that it's just a postulate. At least, that's how I interpret e.g. the following quote:

Where did we get that (equation) from? Nowhere. It is not possible to derive it from anything you know. It came out of the mind of Schrödinger. -- Richard Feynman (from the Wikipedia entry on the Schrödinger equation)

However, some places seem to derive the Schrödinger equation: just search for "derivation of Schrödinger equation" in Google. This motivates the question in the title: Is the Schrödinger equation derived or postulated? If it is derived, then just how is it derived, and from what principles? If it is postulated, then it surely came out of somewhere. Something like "in these special cases it can be derived, and then we postulate it works in general". Or maybe not?

Thanks in advance, and please bear with my physical ignorance.

-

## 2 Answers

The issue is that the assumptions are fluid, so there aren't axioms that are agreed upon. Of course Schrödinger didn't just wake up with the Schrödinger equation in his head; he had a reasoning, but the assumptions in that reasoning were the old quantum theory and the de Broglie relation, along with Hamilton's idea that mechanics is the limit of wave motion.

These ideas are now best thought of as derived from postulating quantum mechanics underneath, and taking the classical limit with leading semi-classical corrections. So while it is historically correct that the semi-classical knowledge essentially uniquely determined the Schrödinger equation, it is not strictly logically correct, since the thing that is derived is more fundamental than the things used to derive it.

This is a common thing in physics --- you use approximate laws to arrive at new laws that are more fundamental. It is also the reason that one must have a sketch of the historical development in mind to arrive at the most fundamental theory; otherwise you will have no clue how the fundamental theory was arrived at or why it is true.

-

In a nutshell, the Schrödinger equation is an educated guess. The "derivation" is just the process of guessing. – C.R. Jun 22 '12 at 4:12

2 @KarsusRen: Absolutely not! This is completely wrong. There was no guesswork involved in the thing. Have you read Schrödinger's paper? Einstein got the same equation independently when he heard of the result. It follows from de Broglie's relation and the semiclassical limit, and this is enough to uniquely specify the equation. It is only that later it is seen to be more fundamental than the thing it is derived from, so the derivation ends up logically going the other way. But the historical derivation is correct, and is a mathematically justified argument, like any other in physics. – Ron Maimon Jun 22 '12 at 7:02

– altertoby Jun 22 '12 at 21:51

@altertoby: This is totally false. Einstein indeed showed that the Hamilton-Jacobi equation is the semiclassical wave equation, but this is not a sensible wave equation, since it only gives the phase of the wave. The HJ equation uniquely determines the Schrödinger equation, since you need the magnitude part to work together with the phase part. This is what Schrödinger shows in his paper. This is a rigorous derivation, based on semiclassical ideas. It is not necessary to read the paper you linked; it is useless for this discussion.
I know how to quantize. Please read Schrödinger instead. – Ron Maimon Jun 23 '12 at 2:16

The other path to quantum mechanics was through the semiclassical operator construction of Kramers and Heisenberg. This path is also nearly rigorous (although not as rigorous as Schrödinger's, due to the fact that the analysis is fundamentally perturbative, which is why Heisenberg didn't discover tunneling). In this path, Heisenberg calculates the semiclassical commutator of p and q and shows that it is $i\hbar$, and then postulates that this is true for all n. This postulate is justified from Einstein's A and B coefficients, which give the harmonic oscillator matrix elements. – Ron Maimon Jun 23 '12 at 2:21

The Schrödinger equation is postulated. Any source that claims to "derive" it is actually motivating it.

The best discussion of this that I'm aware of is in Shankar, Chapter 4 ("The Postulates -- a General Discussion"). Shankar presents a table of four postulates of Quantum Mechanics, each given as a parallel to a classical postulate from Hamiltonian dynamics.

Postulate II says that the dynamical variables x and p of Hamiltonian dynamics are replaced by Hermitian operators $\hat X$ and $\hat P$. In the X-basis, these have the action $\hat X\psi = x\psi(x)$ and $\hat P\psi = -i\hbar\frac{d\psi}{dx}$. Any composite variable in Hamiltonian dynamics can be built out of x and p as $\omega(x,p)$. This is replaced by a Hermitian operator $\hat \Omega(\hat X,\hat P)$ with the exact same functional form.

Postulate IV says that Hamilton's equations are replaced by the Schrödinger equation. The classical Hamiltonian retains its functional form, with x replaced by $\hat X$ and p replaced by $\hat P$.

NB: Shankar doesn't discuss this, but Dirac does. The particular form of $\hat X$ and $\hat P$ can be derived from their commutation relation. In classical dynamics, x and p have the Poisson bracket {x,p} = 1. In Quantum Mechanics, you can replace this with the commutation relation $[\hat X, \hat P] = i\hbar$. What Shankar calls Postulate II can be derived from this. So you could use that as your fundamental postulate if you prefer.

Summary: the Schrödinger equation didn't just come from nowhere historically. It's a relatively obvious thing to try. Mathematically, there isn't anything more fundamental in the theory that you could use to derive it.

-

I had an electrical engineering course in university which consisted entirely of teaching us how to "derive" the Schrödinger equation. I don't remember much, but it was using the rules of calculus to manipulate some other assumed laws of physics. But I guess our teacher might have been wrong in stating that we were "deriving" the equation. – Kmeixner Jan 29 at 23:11
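To collect the "motivation" both answers refer to in one place, here is the standard plane-wave heuristic (a sketch of the textbook argument only, not a derivation from deeper axioms):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
For a free plane wave $\psi(x,t)=e^{i(px-Et)/\hbar}$
(de Broglie: $p=\hbar k$, Planck: $E=\hbar\omega$), differentiation gives
\begin{align*}
  i\hbar\,\partial_t\psi &= E\,\psi, &
  -\frac{\hbar^2}{2m}\,\partial_x^2\psi &= \frac{p^2}{2m}\,\psi.
\end{align*}
Imposing the nonrelativistic energy relation $E=\tfrac{p^2}{2m}+V(x)$ suggests
\[
  i\hbar\,\partial_t\psi
    = -\frac{\hbar^2}{2m}\,\partial_x^2\psi + V(x)\,\psi,
\]
which is then \emph{postulated} to hold for arbitrary states,
not just plane waves.
\end{document}
```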
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9567610621452332, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/17233/is-quantum-mechanics-intrinsically-dualistic/22801
# Is quantum mechanics intrinsically dualistic?

In just about every interpretation of quantum mechanics, there appears to be some form of dualism. Is this inevitable or not?

In the orthodox Copenhagen interpretation by Bohr and Heisenberg, the world is split into a quantum and a classical part. Yes, that is actually what they wrote, and not a straw man. The Heisenberg cut is somewhat adjustable, though, somewhat mysteriously. This adjustability can also be seen in other interpretations, and almost kind of suggests the cut is unphysical, yet it has to appear somewhere. There is a duality between the observer and the observables.

Von Neumann postulated a two-step evolution: one Schrödinger and unitary, the other a collapse upon measurement, whatever a measurement is. Another form of the duality. What a measurement is and when exactly it happens is also adjustable.

In decoherence, there is a split into the system and the environment. A split has to be made for decoherence to come out, but again, the position of the split is adjustable with almost no physical consequence.

In the many-worlds interpretation, there is the wavefunction on the one hand, and a splitting into a preferred basis on the other, followed by a selection of one branch over the others. This picking out of one branch is also dualistic, and is an addendum over and above the wavefunction itself.

In the decoherent histories approach, there is the wavefunction on the one hand, and on the other there is an arbitrary choice of history operators followed by a collapse to one particular history. The choice of history operators depends upon the questions asked, and these questions are in dual opposition to the bare wavefunction itself, which is oblivious to the questions.

In Bohmian mechanics, there is the wavefunction, and dual to it is a particle trajectory.

Why is there a duality? Can there be a nondual interpretation of quantum mechanics?

-

## 5 Answers

Some people ascribe the duality to the duality between the classical apparatus and the quantum microscopic system, but I think this is a little old-fashioned. The quantum description also works for a bad apparatus and a big apparatus--- like my eye looking at a mesoscopic metal ball with light shining on it. This situation does not measure the position of the ball, nor the momentum, nor anything precise at all. In fact, it is hard to determine exactly what operator my eye is measuring by looking at some photons.

A modern approach to quantum mechanics treats the whole system as quantum mechanical, including my eye, and myself. But then the source of the dualism is made apparent. If I simulate my own wavefunction on a computer, and that of the ball, and the light (the simulation would be enormously large, but ignore that for now), where is my perception of the ball contained in the simulation? It is not clear, because the evolution would produce an enormously large set of wavefunction values in extremely high dimension, most of which are vanishingly small, but a few of which are smeared over configurations describing one of many plausible possible outcomes. The linear time evolution would produce a multiplying collection of weighted configurations, but it will never contain a data bit corresponding to my experience. But I can introspect and find out my own experience, so this data bit is definitely accessible to me. So I can see a data bit using my mind which is not clearly extractable from this computer simulation of my mind.
The basic problem is that the knowledge in our heads is classical information; it might as well be data on a computer. But the quantum system is not made up of classical information, but of wavefunction data, and wavefunction data is not classical information, nor is it a probability distribution on classical information, so it does not have an obvious interpretation as ignorance of classical information.

The reason probability is unique is that only the probability calculus has the Monte Carlo property: if you sample the distribution and average over the time-evolution of the samples, it's the same as averaging over the time-evolution of the distribution. In quantum mechanics, samples can interfere with other samples, making the restriction to a collection of independent classical samples inconsistent. So I can't say the simulation is simulating one of many samples; at best I can say it is approximately simulating one of many clumps-of-samples corresponding to nearly completely decohered histories.

But when I entangle myself with a quantum system using a device which entangles itself with a quantum system, I find, *by doing it*, that the result is probabilistic on the classical information in my mind. The classical information is determined after the entanglement event; the result is random with probabilities given by the Born rule, so the result is definitely a probability. But the result is only at best asymptotic to a probability in quantum mechanics.

### Why Duality?

The duality in quantum descriptions is always between the linear evolution of the quantum mechanical wavefunction and the production of classical data according to a probability distribution. Wavefunctions are not probabilities, but when they produce classical data, they can only be probabilities, so they turn into probabilities. How exactly do they turn into probabilities? This is the mismatch between the probabilistic calculus for knowledge and information, and the quantum mechanical formalism for states.

In order to produce probabilities from pure quantum mechanics, you have to find the proper reason why wavefunctions are linked to probabilities. Each interpretation has a bit of a different flavor for explaining the link, but of these, Copenhagen, many-worlds, consciousness-causes-collapse (CCC), many-minds, and decoherence/consistent-histories all place the reason in the transition to a macroscopic observer-realm. The details are slightly different--- Copenhagen has a ritualized system/apparatus/observer divide, a classical-quantum divide which looks artificial. Many-worlds has an observer's path of memories, which selects which world is observed. Many-minds too; I can't distinguish between many-minds and many-worlds, not even philosophically. I think many-minds was invented by someone who misunderstood many-worlds as being something other than many-minds. Consciousness-causes-collapse is the same as well, except it rejects the alternate counterfactual mental histories as "nonexisting" (whatever that means exactly; I can't differentiate this one from many-worlds either). Decoherence/consistent-histories insists that the path is a decoherence-consistent selection, which is simply a good direction in which the wavefunction has become incoherent and the density matrix is diagonal, but it is specified outside the theory.

It's always the same dualism--- the classical data is not in the simulation, and we can see it in our heads; and the reduction to a diagonal density matrix is only asymptotically true, while it needs to be exactly true to work.
The variables that describe our experience of the macroscopic world are discrete packets of information with a definite value, or probability distributions on such, which model our ignorance before we get the value. There's nothing else out there which can describe our experience. The quantum simulation just doesn't contain these classical bits, nor does it contain anything which is exactly and precisely a classical probability distribution.

Simulate, quantum mechanically, a particle in a superposition interacting with a miniature model brain, where light from the particle triggers a molecule in the brain to store the information about the particle's position: the quantum formalism will produce a superposition of at least two different configurations of the molecule and of the brain, but at no point will it contain the actual value of the observed bit, nor a probability distribution for this value. If this quantum wavefunction simulation is a proper simulation of the brain, then this internal brain has access to more information than the complete simulation contains viewed from the outside.

As far as I see, there are exactly two possible explanations for this.

### Many Worlds

The idea starts with the observation that you can't know in advance what it's supposed to feel like to be in a superposition, because what a physical phenomenon "feels like" is not part of physics. There is always a dictionary between physics and "feels like" which tells you how to match physical descriptions to experience. For example, matching light of a certain wavelength to the experience of seeing red.

If you simulate a classical brain, and you copy the data in the classical brain simulation, then by querying the copies, you will see that they cannot differentiate between their pasts, and they will both think they are the same person. The quantum simulation contains all sorts of things inside, and it is not clear how it feels to the internal things, because that all depends on how you query them. If you query extremely unlikely components of the superposition, you can get any answer at all to any question you ask. You have to ask questions, because without a positive way to investigate the brain's feelings, there is no meaning you can assign to the assertion that it has feelings at all. When you ask the question, you must choose which branch of the simulated quantum system to query.

So there is no obvious way to embed classical experiences into the simulation, and the many-worlds interpretation takes the point of view that it is just a perceptual axiom, like seeing red, that the way our classical minds are embedded into a quantum universe is that they feel a unique path through a decohering net of spreading quantum events. A classical mind just doesn't "feel" superposed; it can't feel superposed, because feelings are classical things. The embedding into the model is just a little off because of this, and our minds have to select a path through the diverging possible histories. The path-selection by the mind produces new classical information through time, and the duality in quantum mechanics is identified with the philosophers' mind-body duality.

### Quantum mechanics is measurably wrong

I think this is the only other plausible possibility.
The existence of classical data in our experience makes it philosophically preferable to have a theory which can say something about this classical data, which can interpret it as a sharp value of a quantity in the theory, rather than as a history-specification which is outside the physics of the theory. This can be philosophically preferred for two reasons:

• It allows a physical identification of mental data with actual bits which can be extracted from the simulation, so that the definite bit values encoding our experiences are contained in a fundamental simulation directly, as they are in the classical model of the world.

• It means that simulations of the physical world could be fully comprehended--- they are classical computations on classical data, or probability distributions which represent ensembles of classical data.

I think the only real reason to prefer such a theory is if it could describe the world with a smaller model than quantum mechanics, one which would require fewer numbers to simulate. It seems like an awful waste to require exponentially growing resources to simulate N particles, especially when the result in real life is almost always classical behavior with a state variable linear in N. But the only way a theory can do this is if the theory fails to coincide with quantum mechanics at least when doing Shor's algorithm. So this position is that quantum mechanics is wrong for heavily entangled many-particle systems.

In this case, the dualism of quantum mechanics would be because it is an approximation to something else deeper down which is not dual, but the approximation makes wavefunctions out of probability distributions in some unknown limit, and this limit is imperfect. So the wavefunctions are approximations to probabilities, not the other way around, and we see the real deal -- the probabilities -- because on our scale, the wavefunction description is no good.

Nobody has such a theory. The closest thing is the Born version of quantum mechanics, which is computationally even bigger than quantum mechanics, and so even less philosophically satisfying. It might be good even to find a half-way house, just a method of simulating quantum systems which does not require exponential resources except in those cases where you set up a quantum computer to do exponential things. Nobody has such a method either.

-

I wasn't under the impression that my answer at all depended on having a relatively well-defined measurement. Can you demonstrate, for my benefit, where I'm slipping the assumption in? Your example of your eye perceiving photons (perhaps reflected from a plate which is intercepting electrons) is an example where degrees of freedom of the electrons are becoming coupled with the photoplate, and then in turn with your visual cortex, mediated through chemistry and light. – Niel de Beaudrap Nov 20 '11 at 14:04

And exactly which wavefunction values for the atoms of the visual cortex correspond to seeing the light? It's a blob in configuration space defined along certainly nearly orthogonal directions for different perceptions. The map is from classical knowledge to these blobs, and the dictionary is not in the time evolution. – Ron Maimon Nov 20 '11 at 21:20

Indeed, there should be a wide swath (even if you mod out by the idiosyncratic brain structure in the person seeing the light).
I don't pretend that there is a simple state corresponding to making an observation; one can't easily say "here: the observation is made just at this point where potassium reaches this threshold concentration". That would be bad neuroscience, to say nothing of bad QM! But this does not contradict the fact that observations, ill-defined as they are, are indeed made; and that they are the result of strong couplings with other systems. That is what I was stressing. – Niel de Beaudrap Nov 21 '11 at 1:20

@Niel: I had no problem with your answer; I think it is saying correct things. I just don't think it exhausts the question, because I think the main problem people have with quantum mechanics is that they can introspect and see firm stable classical data, like the contents of this message, and quantum mechanics, when simulated, produces superpositions of many values of such data, and these superpositions are only asymptotically interpretable as probability densities, and we are not asymptotic beings. – Ron Maimon Nov 21 '11 at 4:40

My reaction was based on your first paragraph, when my answer was one of the only two earlier answers. I'm sorry if I somehow misunderstood. – Niel de Beaudrap Nov 21 '11 at 13:36

The duality has something to do with the strength of interaction of a system with its environment, which may or may not consist largely of a piece of measurement apparatus of which we are consciously aware. In short, the duality arises from fixating on two extremes of behaviour: strongly coupling with the environment, or not. (Realizing this doesn't necessarily simplify our understanding of QM, but it is the theme underlying the dualities you have noted.)

What all of the interpretations agree on is this: a system which is isolated evolves according to the Schrödinger equation, and a system which interacts strongly enough with a macroscopic system — such that we can observe a difference in the behaviour of that large system — does not. These are two polar extremes of behaviour; so it is not in principle surprising that they exhibit somewhat different evolutions. This seems to me where the duality comes from: stressing these two opposite poles.

• In the Copenhagen interpretation, the "quantum" systems are the isolated ones; the "classical" systems are the large macroscopic ones whose conditions we can measure. Nothing is said about the regime in between.

• In von Neumann's description, the evolution of isolated systems is by the Schrödinger equation; ones strongly coupled to macroscopic systems get projected. Again, nothing is said about the regime in between.

"Decoherence" and "Many-Worlds" are not really distinguishable interpretations of quantum mechanics (indeed, in Many-Worlds, the preferred basis is thought to be selected by decoherence, though this must still be demonstrated as a technical point). While there is some debate about the precise ontological nature of the phenomenon, and important technical issues to resolve, pretty much everyone in the "decoherence" camp (with or without many worlds) agrees that the statistical nature of quantum mechanics — as opposed to the determinism of the unitary dynamics itself — arises from interaction with the environment.

The fuzziness of the boundary between the two situations of "isolated system" and "strong coupling to the environment", in fact, is a symptom of the fact that "not completely isolated" does not automatically take you all the way to the regime of "strongly coupled to the environment".
There is, presumably, a gradient. Furthermore, you get to choose what the boundaries of "the environment" — that part of the world which is just too big and messy for you to try to understand, or more to the point, experimentally control — are. So, if a physical system is only a little leaky, or is interfered with only slightly by the outside world, you can try to account for this outside meddling, and so describe the system as one which may be somewhat less leaky.

Some of the projects of interpretations of quantum mechanics are trying precisely to describe the two extremes, and so everything in between, using a monism of dynamics. Many-worlds, for instance, seems to shrug at the question of why we only perceive one world out of many, but wholeheartedly believes that all dynamics is in principle unitary, and is trying to prove it. And Bohmian Mechanics already has monism, albeit at the cost of faster-than-light signalling between particles by way of the quantum potential field — albeit signalling which manifests macroscopically only as correlations, for essentially thermodynamical reasons — which understandably puts most people off.

Note that there are also dualisms in science, historically and in modern times, outside of quantum mechanics:

• historically: terrestrial and celestial mechanics (subsumed by Newtonian mechanics)
• historically: organic versus inorganic matter (subsumed once the chemistry of carbon started to become well-understood)
• currently: gravity (treated geometrically) versus other elementary forces (treated by boson mediation)
• currently: "hard sciences" (theories of the world largely excluding human behaviour) versus soft "sciences" (theories of the world largely concerning human behaviour)

Any time you have two different models of the world which do not seem obviously compatible, but which do (at least somewhat successfully) describe systems well in some domain, there is a sort of duality between those two models. The dualities in our current understanding of quantum mechanics are somewhat unique in that they concern exactly the same systems, and in the fact that interactions in one of the regimes ("strong coupling with the environment") seem to be the only way for us to obtain information about what happens in the other ("weak coupling with the environment")!

-

1 Definitely Useful. I like it and I think it's quite accurate to modern pragmatic approaches to QM, although I think there are Physicists who take a PoV close to Ron's more metaphysical Answer. I have a quibble, however, that I think your qualification of your conflation of Decoherence and Many Worlds as interpretations is not full enough. Straight Decoherence interpretations have a much more conservative interpretation of probability and its relationship to statistics than Many Worlds. – Peter Morgan Nov 20 '11 at 14:44

@PeterMorgan: I mean that saying "decoherence versus Many-Worlds" is like saying "Christianity versus Mormonism". Modern Many-Worlds advocates believe that decoherence is the mechanism which generates 'worlds': it is a school of decoherence, though not the only one. Still: it isn't clear to me at all that "straight decoherence" interpretations have any more conservative an interpretation. Just as MWI doesn't explain conscious experience of only one world, the others lack an explanation for why entanglement gives rise to stochastic behaviour (the partial trace formula merely articulates that it does). – Niel de Beaudrap Nov 20 '11 at 15:06

@Downvoter: any critique you would like to make?
– Niel de Beaudrap Nov 20 '11 at 17:55

I didn't downvote, but I think it is not so useful to compare QM to other dualities, because it isn't clear that the duality is purely philosophical. – Ron Maimon Nov 20 '11 at 22:01

How does one tell when a "philosophical" duality ceases to be one? The OP notes that there is an apparent duality of processes or natures; historically, what should an alchemist have said about the response of muscle tissue to electrical shocks, or a modern particle theorist say about the non-renormalizability of theories with gravitons? In each case there are things which one must treat by different formalisms but no clear way to distinguish what happens at the boundary. Apparent duality arises out of the lack of a unifying theory, bolstered by opinions that there may/should be none. – Niel de Beaudrap Nov 21 '11 at 1:25

The duality is inherent in the way we do physics. We never consider the whole universe with all its details. In order to make sense of what we observe (which is always a small part of the universe only) we - the users of physics - must make a distinction between "the observed = the system" and "the remainder = the environment". The observed system is then described as closely as warranted, while the remaining environment is described in a simple, effective way - e.g., as an external classical field (in many applications), as a classical measurement (in the Copenhagen interpretation), as a bath of harmonic oscillators in equilibrium (in decoherence studies), or as ignored details (in thermodynamics and in cosmology). This is necessary in order that we can get rid of unwanted details without losing predictability of the system of interest.

Thus the duality you mentioned is imposed on the universe by inquisitive minds.

-

To take a different approach to the variety of ways in which you present QM (which all seem fine, but perhaps they miss the underlying structure): we compute expected values of an observable $O$ using the trace rule in QM, $E[O]=\mathsf{Tr}[\hat O\hat\rho]$, in which on one side there is an operator that represents a measurement and on the other side there is a density matrix that represents a state, essentially because of the Hilbert space structure of vectors and an inner product. Loosely, the inner product of the Hilbert space allows us to ask what components a prepared vector state has "in the same direction" as each of a (possibly infinite) set of reference states.

Hilbert spaces are the mathematical structure at the very bottom of all quantum mechanics, and the inner product (that every Hilbert space has as part of its construction) induces a linear duality between prepared states and reference states. That duality may play out in different interpretations in different ways, but it will always be there. In short, if we have a Hilbert space structure, we have a linear duality (a small numerical illustration of the trace rule appears at the end of this thread). If we don't have a Hilbert space structure, we're not doing quantum mechanics. Not that we can't use other mathematical structures, but it will not be QM unless it can be presented in terms of the mathematics of Hilbert spaces, effectively as a matter of definition.

And welcome to PhysicsSE.

EDIT: As a result of Niel's and Ron's Comments, I looked at what I've missed in the Question (not infrequently I find that my first response misses some "detail" or another, and sometimes it's the whole point).
My initial Answer addresses the cut into System and Observer, which I see as inevitable just because of the underlying mathematics I point out above, but it does not explicitly address the difference between unitary and collapse evolutions. I see these two evolutions so much as an obvious consequence of the mathematical duality that I didn't notice that I was conflating something that would not be obvious.

I find Niel's Answer somewhat more congenial to my own thinking, which I would say, still too concisely, as: the difference between unitary and collapse evolutions comes from placing the Heisenberg cut in such a way that there is an (effectively) infinite number of DoFs on the human Observer's side of the mathematical duality, while there is only a relatively small number of DoFs on the other side. That's a somewhat Decoherence-y way of looking at things, to which I do not fully subscribe, but I find it a useful approach nonetheless. I find both Niel's and Ron's Answers Useful, although as different sides of a coin, and I commend them both to you.

The duality between the wave function and Bohmian trajectories is rather different, and rather unbalanced, as Niel points out, and it looks as if Ron hasn't much addressed it. I find that I can't see how to address that duality in a unified way, partly because its attractions have never seemed compelling enough for me to work within the mathematics of the Bohmian POV.

-

3 I don't think the OP is talking about mathematical duality (a mapping from functions to functionals), but philosophical duality (that there exist two fundamentally different sorts of things in the world, rather than one single sort of thing). It is not obvious that there is a connection between the two. – Niel de Beaudrap Nov 20 '11 at 1:28

Peter Morgan is stating that there are reference states defined by the experimental apparatus which is measuring the system, and the wavefunction only determines the outcome in relation to the reference states selected by the apparatus. You can alter the apparatus to measure a different observable, and this moves the eigenstates of one observable to those of another, a rotation of the Hilbert space, and you can also rotate the Hilbert space by changing the wavefunction, the mathematical duality. This is a very Copenhagen, very operational, and very old-fashioned view of QM; it's not reality. – Ron Maimon Nov 20 '11 at 6:10

@Niel I take him to be asking whether interpretations of the mathematics of QM have to include duality in some form. Looking at the mathematics, there is a linear duality in the Hilbert space structure. Your Comment has prodded me to look at the Question more carefully, thanks, and I'll edit my Answer some, although your Answer and Ron's are both very Useful, +1, enough that it seems a little pointless to upgrade my squib. – Peter Morgan Nov 20 '11 at 13:50

@Peter: the terse previous comment is a result of the space limit--- I think that the pure operationalist would agree that the duality between reference states and prepared states is closely related to the duality between the classical measuring apparatus and the quantum microscopic system. But it is difficult to reconcile this with the idea that quantum mechanics should apply universally. Perhaps quantum mechanics should not apply universally; this was Bohr's position after all. – Ron Maimon Nov 21 '11 at 16:19
Although I can see numerous reasons why someone might down-vote this or any of the other Answers, and I'm pretty sure we're all winding down, would the down-voter care to add to the conversation? – Peter Morgan Nov 21 '11 at 23:20 show 2 more comments You correctly noticed that in some interpretations there is a "split" between "quantum" and "classical" and this split is somewhat arbitrary. You can move it closer to the observer without loosing consistency. If you make it to the extreme and move it as close to the observer as possible you will find that the whole universe follows certain laws such as unitary evolution, when separated from the observer, and only the observer, a single isolated person does not. This is what you should obtain and it is correct. What is bad with it? Only one problem. It makes the most fruitful instrument of research ever invented by humans, the scientific method, non-applicable. Scientific method requires independent confirmation of the observations and repeatability. If there is a special person in the universe then the scientific community would be unable to predict the observations by that person based on their own experiments or their predictions will be wrong however advanced instruments they use. That's why the quantum interpretations. All of them are designed to reconcile the scientific method with quantum mechanics to a degree which allows to obtain practical results. Still scientific method remains in conflict with quantum mechanics, but this conflict can be kept contained so that practical results in applied science are possible. - it isn't 100% clear that the split is solipsistic, since you can transfer the solipsism around between different people, and you end up consistent. This is one of the motivations Everett gives for many-worlds in 1957, each solipsist thinks the other guy is superposed until the measurement, so you just transfer the solipsism to an observer far away, and you leave yourself superposed, and this is many-worlds. – Ron Maimon Mar 26 '12 at 0:01 I never used the word "solipsism", so what's your objection? – Anixx Mar 26 '12 at 22:00 Yes, indeed the QM formalism predicts that there is a special person (this is not exactly solipsism). Some physicists do not want special persons so they invent Many-Worlds, Relational QM and other interpretations that postulate that every man is special in their own (unobservable to our science) universe. – Anixx Mar 26 '12 at 22:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9516829252243042, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-statistics/153265-joint-prob-density.html
# Thread: 1. ## joint prob density? In reference to an earlier thread, http://www.mathhelpforum.com/math-he...tml?pagenumber= , I have stumbled upon a question which poses an identical problem, but not in the form of an exam question, so I suppose you could sketch it, although given its simplicity I doubt this is really necessary. If someone could show how they obtained this answer, and then give a valid proof of a valid method for my previous thread, I'd appreciate the help. Q: Random variables X and Y have joint density function exp(-x-y), x, y > 0. Find P(X > Y). The answer in the back of the book is 1/2. 2. Originally Posted by bluesblues (quoted above) Same as there: $\displaystyle \int _{0} ^{\infty} \;dy \int _{0} ^{y} e^{-x-y} \;dx$ (as written this computes $P(X<Y)$, which equals $P(X>Y)$ by the symmetry of the density in $x$ and $y$). After solving: $\displaystyle \int _{0} ^{y} e^{-x-y} \;dx = -\sinh{y} + \sinh{2y} +\cosh{y} -\cosh{2y}$ and then: $\displaystyle \int _{0} ^{\infty} (\cosh{y}-\cosh{2y}-\sinh{y} + \sinh{2y})\;dy = \frac {1}{2}$ P.S. I hope you know how to work with hyperbolic functions. Just note that $\displaystyle \int e^{-x-y}\, dx =\int e^{-x-y}\, dy = -e^{-x-y} =\sinh{(x+y)} - \cosh{(x+y)}$ (equivalently, the inner integral is simply $e^{-y}-e^{-2y}$). 3. Yeah, thanks everyone
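(Not part of the original thread, but a quick Monte Carlo sanity check in Python confirms the book's answer. Since the joint density factors as $e^{-x} e^{-y}$, X and Y are independent Exponential(1) variables, so by symmetry P(X > Y) should be exactly 1/2; the sample size below is an arbitrary choice.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# f(x, y) = e^(-x-y) on x, y > 0 factors as e^(-x) * e^(-y),
# so X and Y are independent Exponential(1) random variables.
x = rng.exponential(1.0, n)
y = rng.exponential(1.0, n)

print((x > y).mean())  # ~0.5, matching the book's answer of 1/2
```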
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9169906377792358, "perplexity_flag": "middle"}
http://quant.stackexchange.com/questions/3845/how-does-one-measure-the-effect-of-latency-on-potential-returns/3847
# How does one measure the effect of latency on potential returns? I am looking to evaluate the hypothetical advantage one trading system has over another in terms of the possible returns given their latency. Irene Aldridge wrote a piece (How Profitable Are High-Frequency Trading Strategies?) which describes how to relate holding time to Sharpe ratio, although her approach seems somewhat arbitrary. As I am investigating the effect of latency on market making strategies, I have modified this approach to use the maximal spread in a time frame as the return and the spread's variance as the risk (as the spread proxies for the risk of the market maker). Are there any other metrics I can make use of? Does my approach thus far seem reasonable? - ## 1 Answer An interesting starting point is The Cost of Latency by Moallemi and Saglam. After setting up a simple order execution problem --- in which a trader must choose between a market order and a limit order and guarantee execution over a fixed interval $[0,T]$ --- they proceed to derive a (complex) closed-form solution for the optimal strategy and evaluate the impact of latency on trading costs. In particular, they derive a simple expression to approximate the cost of latency when the latency is small (i.e. in the limit $\Delta t \to 0$, where $\Delta t$ denotes some measure of the latency of the trading system). In terms of the price volatility $\sigma$ and the bid-ask spread $\delta$, the cost of latency is $$\frac{\sigma\sqrt{\Delta t}}{\delta}\sqrt{\log \frac{\delta^2}{2 \pi \sigma^2 \Delta t}}$$ The profile of the latency cost according to their model is shown in Fig. 7 of The Cost of Latency. They proceed to evaluate the historical cost of latency and the implied latency for a basket of NYSE stocks (Figs. 8 and 9 of the same paper). - 4 nice answer @Ryogi – Quant Guy Jul 23 '12 at 13:18 Thanks, @QuantGuy. – Ryogi Jul 23 '12 at 23:59
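(An illustration, not taken from the paper or the answer above: a small Python sketch of the small-latency approximation. The numerical inputs are hypothetical and chosen only to keep the units consistent, namely dollar volatility per square-root second, a dollar spread, and latency in seconds; the approximation is only meaningful while the argument of the logarithm exceeds 1.)

```python
import numpy as np

def latency_cost(sigma, delta, dt):
    # Small-latency approximation from Moallemi & Saglam:
    # (sigma*sqrt(dt)/delta) * sqrt(log(delta^2 / (2*pi*sigma^2*dt)))
    z = delta**2 / (2.0 * np.pi * sigma**2 * dt)
    return sigma * np.sqrt(dt) / delta * np.sqrt(np.log(z))

# Hypothetical inputs: roughly 30% annual vol on a $100 stock,
# converted to per-sqrt-second units, and a one-cent spread.
sigma = 30.0 / np.sqrt(252 * 6.5 * 3600)   # ~0.0123 $/sqrt(s)
delta = 0.01                               # $0.01 bid-ask spread

for dt in (1e-4, 1e-3, 1e-2):              # 0.1 ms, 1 ms, 10 ms latency
    print(f"dt = {dt:g} s  cost = {latency_cost(sigma, delta, dt):.4f}")
```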
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9204788208007812, "perplexity_flag": "middle"}
http://www.openwetware.org/index.php?title=20.309:Homeworks/Homework1&oldid=147917
# 20.309:Homeworks/Homework1 20.309 Fall Semester 2007 Homework Set 1 Due by 12:00 noon on Friday Sept. 21, 2007 1. Figure 1 shows a resistor network known as a Wheatstone bridge. Part of the apparatus which you'll build shortly for the DNA melting curves module involves half of a "bridge". This is a common circuit used to measure the resistance of an unknown value, Rx. For now, we will look at it analytically. Rx is a resistance you are trying to measure, and R3 is a variable resistor. Figure 1: A Wheatstone bridge circuit. (a) Assuming R3 is set such that the bridge is balanced (i.e. Vab=0), derive an analytical expression for Rx in terms of R1, R2 and R3. (b) Now let R3 also be a fixed value, and suppose that Rx varies in a way that makes Vab nonzero. Derive an expression for the dependence of Vab on Rx. 2. Referring again to the Wheatstone bridge in Figure 1, suppose that Rx varies with some physical parameter (strain, temperature, etc.) in the range of 1-10kΩ. You want to use the circuit to measure the physical variable by observing Vab and correlating it to the resistance changes. In what range should the values of R1, R2 and R3 be to make a sensitive measurement? Explain your reasoning. (Hint: using MATLAB to plot the output as a function of the varying resistances is a very useful way to think about this problem.) 3. Photodiode i-v characteristics: Using the data that you collected in the lab for the photodiode, generate 3-4 i-v curves for a photodiode at different light levels (including in darkness). Plot these on the same graph to see how incident light affects diode i-v characteristics. Give a brief (qualitative) explanation of why photodiodes are best used in reverse bias. 4. Transfer functions: For the black boxes that you measured in the lab, determine what kind of circuit/filter each one is (two of them will look similar, but have an important difference - what is it?). Determine a transfer function that can model the circuit, and fit the model to the data to see whether the model makes sense. Of the four boxes, "D" is required, and you should choose one of either "A" or "C". You can fit "B" for bonus credit. 5. Referring to the circuit shown in Figure 2, what value of RL (in terms of R1 and R2) will result in the maximum power being dissipated in the load? (Hint: this is much easier to do if you first remove the load, and calculate the equivalent Thevenin output resistance RT of the divider looking into the node labeled Vout. Then express RL for maximal power transfer in terms of RT.) Figure 2: A voltage divider formed by R1 and R2 driving a resistive load RL. 6. Consider the op-amp circuit shown in Fig. 3 which was first introduced in Lab Module 0. Figure 3: An inverting op-amp circuit. (a) Calculate the gain of this circuit, Vout/Vin. (b) At times, the signal you may be interested in measuring is in the form of a current, such as in the upcoming DNA melting curve lab. An example circuit for measuring a current is the transimpedance amplifier shown in Figure 4. Determine an expression for the output voltage of the circuit with respect to a DC current input. Express your answer as Vout/Iin. Figure 4: A high gain transimpedance amplifier. (c) Since this is such a high-gain circuit, it can be quite noisy if the input current Iin experiences high-frequency fluctuations. You can insert a capacitor to reduce the noise (i.e. 
make a low-pass filter to eliminate high-frequency content). Where would you insert it, and how would you choose its size? (d) Now write down the expression for this new circuit's output with respect to the current input for AC signals. (Hint: in the expression from part (b), substitute the parallel combination R$\parallel$C for the resistor across which you placed the capacitor in part (c).)
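(A sketch of the kind of plot these hints invite, assuming an ideal op-amp and hypothetical component values, neither of which is specified in the problem set: with the capacitor C placed in parallel with the feedback resistor R, the AC transimpedance is Vout/Iin = -R/(1 + jωRC), which rolls off above f = 1/(2πRC).)

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical component values (not given in the problem set):
R = 10e6    # 10 Mohm feedback resistor (high-gain transimpedance)
C = 10e-12  # 10 pF capacitor in parallel with R

f = np.logspace(1, 7, 400)              # 10 Hz to 10 MHz
Zf = R / (1 + 2j * np.pi * f * R * C)   # parallel R||C feedback impedance
gain = np.abs(-Zf)                      # |Vout/Iin| for an ideal op-amp

plt.loglog(f, gain)
plt.xlabel("frequency (Hz)")
plt.ylabel("|Vout/Iin| (V/A)")
plt.title("Transimpedance gain rolls off above f = 1/(2*pi*R*C)")
plt.show()
```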
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9121600985527039, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/206902/specma-affine-variety-leftrightarrow-a-is-finitely-generated-k-algebra?answertab=votes
# $Specm(A)$ affine variety $\Leftrightarrow$ A is finitely generated $k$-algebra I want to prove that "the spectrum of maximal ideals of a ring $A$ is a variety of $\mathbb{A}^n_k$ for some $n$ if and only if $A$ is a finitely generated $k$-algebra". I assume that $k$ is algebraically closed. Any hints on how to make a start for each direction? - 1 What is the definition of an affine variety over $k$? – Makoto Kato Oct 4 '12 at 1:33 1 What do you mean by "the spectrum is an affine variety"? A priori the spectrum is a set (maybe a topological space) and it doesn't make sense to ask whether a set (or a topological space) is a variety. – Qiaochu Yuan Oct 4 '12 at 1:36 @QiaochuYuan: Why not? A variety is both a set and a topological space. – Manos Oct 4 '12 at 1:42 @MakotoKato: I suppose an affine variety of $\mathbb{A}^n_k$ for some $n$. – Manos Oct 4 '12 at 1:44 2 You probably mean to prove something like: $A$ is a finitely generated $k$-algebra iff there exists a subvariety of $\Bbb A^n_k$ whose coordinate ring is isomorphic to $A$, and under this isomorphism one may identify points on the subvariety with maximal ideals of $A$. – Andrew Oct 4 '12 at 2:20 ## 1 Answer The formulation of the question given by Andrew in the comments is meaningful but false (take $A = k[x]/x^2$). The correct statement comes from replacing "finitely generated $k$-algebra" with "finitely generated integral domain over $k$" and follows from the Nullstellensatz. - Just to make sure i understand what "finitely generated ring over $k$" is: a ring $A$ that is a $k$-module and is finitely generated as a $k$-module? – Manos Oct 4 '12 at 15:37 @Manos: no, it means finitely generated as a $k$-algebra. People use "finite" to mean finitely generated as a $k$-module. – Qiaochu Yuan Oct 4 '12 at 16:10 So finitely generated integral domain over $k$ is just a finitely generated $k$-algebra with no zero divisors...Could you please also give me some insight into your counterexample? I can see that $k[x]/x^2$ is not an integral domain. How do you see that its spectrum of maximal ideals is not isomorphic to a variety of $\mathbb{A}^n_k$? – Manos Oct 4 '12 at 16:15 @Manos: again, that question doesn't make sense. What I'm actually claiming is that $k[x]/x^2$ is not isomorphic to the ring of functions on any variety, and this follows because the ring of functions on any variety is an integral domain. – Qiaochu Yuan Oct 4 '12 at 16:16 Got it. Thanks a lot. – Manos Oct 4 '12 at 16:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 31, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9157858490943909, "perplexity_flag": "head"}
http://terrytao.wordpress.com/tag/structure/page/2/
What's new: Updates on my research and expository papers, discussion of open problems, and other maths-related topics. By Terence Tao ## Milliman Lecture I: Additive combinatorics and the primes 4 December, 2007 in math.CO, math.NT, talk | Tags: additive combinatorics, Milliman lecture, prime numbers, randomness, structure | by Terence Tao | 9 comments This week I am visiting the University of Washington in Seattle, giving the Milliman Lecture Series for 2007-2008. My chosen theme here is "Recent developments in arithmetic combinatorics". In my first lecture, I will speak (once again) on how methods in additive combinatorics have allowed us to detect additive patterns in the prime numbers, in particular discussing my joint work with Ben Green. In the second lecture I will discuss how additive combinatorics has made it possible to study the invertibility and spectral behaviour of random discrete matrices, in particular discussing my joint work with Van Vu; and in the third lecture I will discuss how sum-product estimates have recently led to progress in the theory of expanders relating to Lie groups, as well as to sieving over orbits of such groups, in particular presenting work of Jean Bourgain and his coauthors. Read the rest of this entry » ## FOCS slides: structure and randomness in combinatorics 23 October, 2007 in expository, math.CO, talk, travel | Tags: Andrej Bogdanov, Emanuele Viola, FOCS, majority vote, randomness, structure | by Terence Tao | 12 comments I've just come back from the 48th Annual IEEE Symposium on the Foundations of Computer Science, better known as FOCS; this year it was held in Providence, near Brown University. (This conference is also being officially reported on by the blog posts of Nicole Immorlica, Luca Trevisan, and Scott Aaronson.) I was there to give a tutorial on some of the tools used these days in additive combinatorics and graph theory to distinguish structure and randomness. In a previous blog post, I had already mentioned that my lecture notes for this were available on the arXiv; now the slides for my tutorial are available too (it covers much the same ground as the lecture notes, and also incorporates some material from my ICM slides, but in a slightly different format). In the slides, I am tentatively announcing some very recent (and not yet fully written up) work of Ben Green and myself establishing the Gowers inverse conjecture in finite fields in the special case when the function f is a bounded degree polynomial (this is a case which already has some theoretical computer science applications). I hope to expand upon this in a future post. But I will describe here a neat trick I learned at the conference (from the FOCS submission of Bogdanov and Viola) which uses majority voting to enhance a large number of small independent correlations into a much stronger single correlation. This application of majority voting is widespread in computer science (and, of course, in real-world democracies), but I had not previously been aware of its utility to the type of structure/randomness problems I am interested in (in particular, it seems to significantly simplify some of the arguments in the proof of my result with Ben mentioned above); thanks to this conference, I now know to add majority voting to my "toolbox". 
Read the rest of this entry » ## The quantitative behaviour of polynomial orbits on nilmanifolds 25 September, 2007 in math.DS, math.GR, paper | Tags: equidistribution, nilmanifolds, polynomial sequences, Ratner's theorem, structure, symmetric spaces, Weyl's equidistribution theorem | by Terence Tao | 47 comments Ben Green and I have just uploaded our paper "The quantitative behaviour of polynomial orbits on nilmanifolds" to the arXiv (and shortly to be submitted to a journal, once a companion paper is finished). This paper grew out of our efforts to prove the Möbius and Nilsequences conjecture MN(s) from our earlier paper, which has applications to counting various linear patterns in primes (Dickson's conjecture). These efforts were successful – as the companion paper will reveal – but it turned out that in order to establish this number-theoretic conjecture, we had to first establish a purely dynamical quantitative result about polynomial sequences in nilmanifolds, very much in the spirit of the celebrated theorems of Marina Ratner on unipotent flows; I plan to discuss her theorems in more detail in a followup post to this one. In this post I will not discuss the number-theoretic applications or the connections with Ratner's theorem, and instead describe our result from a slightly different viewpoint, starting from some very simple examples and gradually moving to the general situation considered in our paper. To begin with, consider an infinite linear sequence $(n \alpha + \beta)_{n \in {\Bbb N}}$ in the unit circle ${\Bbb R}/{\Bbb Z}$, where $\alpha, \beta \in {\Bbb R}/{\Bbb Z}$. (One can think of this sequence as the orbit of $\beta$ under the action of the shift operator $T: x \mapsto x +\alpha$ on the unit circle.) This sequence can do one of two things: 1. If $\alpha$ is rational, then the sequence $(n \alpha + \beta)_{n \in {\Bbb N}}$ is periodic and thus only takes on finitely many values. 2. If $\alpha$ is irrational, then the sequence $(n \alpha + \beta)_{n \in {\Bbb N}}$ is dense in ${\Bbb R}/{\Bbb Z}$. In fact, it is not just dense, it is equidistributed, or equivalently that $\displaystyle\lim_{N \to \infty} \frac{1}{N} \sum_{n=1}^N F( n \alpha + \beta ) = \int_{{\Bbb R}/{\Bbb Z}} F$ for all continuous functions $F: {\Bbb R}/{\Bbb Z} \to {\Bbb C}$. This statement is known as the equidistribution theorem. We thus see that infinite linear sequences exhibit a sharp dichotomy in behaviour between periodicity and equidistribution; intermediate scenarios, such as concentration on a fractal set (such as a Cantor set), do not occur with linear sequences. This dichotomy between structure and randomness is in stark contrast to exponential sequences such as $( 2^n \alpha)_{n \in {\Bbb N}}$, which can exhibit an extremely wide spectrum of behaviours. For instance, the question of whether $(10^n \pi)_{n \in {\Bbb N}}$ is equidistributed mod 1 is an old unsolved problem, equivalent to asking whether $\pi$ is normal in base 10. Intermediate between linear sequences and exponential sequences are polynomial sequences $(P(n))_{n \in {\Bbb N}}$, where P is a polynomial with coefficients in ${\Bbb R}/{\Bbb Z}$. A famous theorem of Weyl asserts that infinite polynomial sequences enjoy the same dichotomy as their linear counterparts, namely that they are either periodic (which occurs when all non-constant coefficients are rational) or equidistributed (which occurs when at least one non-constant coefficient is irrational). 
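(An aside not in the original post: the dichotomy is easy to see numerically. Taking the test function F to be the indicator of the interval [0, 0.1), the sketch below, with arbitrary choices of α and N, shows the orbit averages approaching the interval's length 0.1 for an irrational α, but not for a rational one.)

```python
import numpy as np

def frac_in_interval(alpha, N, a=0.0, b=0.1):
    # Fraction of the first N terms of (n*alpha mod 1) landing in [a, b);
    # equidistribution predicts this tends to b - a = 0.1.
    x = np.mod(alpha * np.arange(1, N + 1), 1.0)
    return ((x >= a) & (x < b)).mean()

for alpha in (1.0 / 3.0, np.sqrt(2)):   # rational vs irrational
    print(alpha, [frac_in_interval(alpha, N) for N in (10**3, 10**6)])
# alpha = 1/3: the orbit only visits {1/3, 2/3, 0}, so the fraction
# stays near 1/3; alpha = sqrt(2): the fraction converges to 0.1.
```

The same check with $\alpha n^2$ in place of $\alpha n$ illustrates Weyl's polynomial extension, to which the post now turns.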
Thus for instance the fractional parts $\{ \sqrt{2}n^2\}$ of $\sqrt{2} n^2$ are equidistributed modulo 1. This theorem is proven by Fourier analysis combined with non-trivial bounds on Weyl sums. For our applications, we are interested in strengthening these results in two directions. Firstly, we wish to generalise from polynomial sequences in the circle ${\Bbb R}/{\Bbb Z}$ to polynomial sequences $(g(n)\Gamma)_{n \in {\Bbb N}}$ in other homogeneous spaces, in particular nilmanifolds. Secondly, we need quantitative equidistribution results for finite orbits $(g(n)\Gamma)_{1 \leq n \leq N}$ rather than qualitative equidistribution for infinite orbits $(g(n)\Gamma)_{n \in {\Bbb N}}$. Read the rest of this entry » ## Structure and randomness in combinatorics 31 July, 2007 in math.CO, paper | Tags: computer science, Hamming cube, randomness, structure | by Terence Tao | 21 comments I’ve just uploaded to the arXiv my lecture notes “Structure and randomness in combinatorics” for my tutorial at the upcoming FOCS 2007 conference in October. This tutorial covers similar ground as my ICM paper (or slides), or my first two Simons lectures, but focuses more on the “nuts-and-bolts” of how structure theorems actually work to separate objects into structured pieces and pseudorandom pieces, for various definitions of “structured” and “pseudorandom”. Given that the target audience consists of computer scientists, I have focused exclusively here on the combinatorial aspects of this dichotomy (applied for instance to functions on the Hamming cube) rather than, say, the ergodic theory aspects (which are covered in Bryna Kra‘s lecture notes from Montreal, or my notes from Montreal for that matter). While most of the known applications of these decompositions are number-theoretic (e.g. my theorem with Ben Green), the number theory aspects are not covered in detail in these notes. (For that, you can read Bernard Host’s Bourbaki article, Ben Green‘s Gauss-Dirichlet article or ICM article, or my Coates article.) ## A visit to the Royal Society 13 July, 2007 in math.NT, non-technical, opinion, talk, travel | Tags: powerpoint, randomness, Royal Society, structure | by Terence Tao | 17 comments This week I was in London, attending the New Fellows Seminar at the Royal Society. This was a fairly low-key event preceding the formal admissions ceremony; for instance, it is not publicised on their web site. The format was very interesting: they had each of the new Fellows of the Society give a brief (15 minute) presentation of their work in quick succession, in a manner which would be accessible to a diverse audience in the physical and life sciences. The result was a wonderful two-day seminar on the state of the art in many areas of physics, chemistry, engineering, biology, medicine, and mathematics. 
For instance, I learnt • How the solar neutrino problem was resolved by the discovery that the neutrino had mass, which did not commute with flavour and hence caused neutrino oscillations, which have since been detected experimentally; • Why modern aircraft (such as the Dreamliner and A380) are now assembled using (incredibly tough and waterproofed) adhesives instead of bolts or welds, and how adhesion has been enhanced by nanoparticles; • How the bacterium Helicobacter pylori was recently demonstrated (by two Aussies :-) ) to be a major cause of peptic ulcers (though the exact mechanism is not fully understood), but has also been proposed (somewhat paradoxically) to have a preventative effect against esophageal cancer (cf. the hygiene hypothesis); • How recent advances in machine learning and image segmentation (including graph cut methods!) now allow computers to identify and track many general classes of objects (e.g. people, cars, animals) simultaneously in real-world images and video, though not quite in real-time yet; • How large-scale structure maps of the universe (such as the 2dF Galaxy Redshift Survey) combine with measurements of the cosmic background radiation (e.g. from WMAP) to demonstrate the existence of both dark matter and dark energy (they have different impacts on the evolution of the curvature of the universe and on the current distribution of visible matter); • … and 42 other topics like this. (One strongly recurrent theme in the life science talks was just how much recent genomic technologies, such as the genome projects of various key species, have accelerated (by several orders of magnitude!) the ability to identify the genes, proteins, and mechanisms that underlie any given biological function or disease. To paraphrase one speaker, a modern genomics lab could now produce the equivalent of one 1970s PhD thesis in the subject every minute.) Read the rest of this entry » ## The Lebesgue differentiation theorem and the Szemeredi regularity lemma 18 June, 2007 in expository, math.CA, math.CO | Tags: graph theory, hard analysis, Lebesgue differentiation theorem, randomness, regularity lemma, soft analysis, structure | by Terence Tao | 11 comments This post is a sequel of sorts to my earlier post on hard and soft analysis, and the finite convergence principle. Here, I want to discuss a well-known theorem in infinitary soft analysis – the Lebesgue differentiation theorem – and whether there is any meaningful finitary version of this result. Along the way, it turns out that we will uncover a simple analogue of the Szemerédi regularity lemma, for subsets of the interval rather than for graphs. (Actually, regularity lemmas seem to appear in just about any context in which fine-scaled objects can be approximated by coarse-scaled ones.) The connection between regularity lemmas and results such as the Lebesgue differentiation theorem was recently highlighted by Elek and Szegedy, while the connection between the finite convergence principle and results such as the pointwise ergodic theorem (which is a close cousin of the Lebesgue differentiation theorem) was recently detailed by Avigad, Gerhardy, and Towsner. The Lebesgue differentiation theorem has many formulations, but we will avoid the strongest versions and just stick to the following model case for simplicity: Lebesgue differentiation theorem. If $f: [0,1] \to [0,1]$ is Lebesgue measurable, then for almost every $x \in [0,1]$ we have $f(x) = \lim_{r \to 0} \frac{1}{r} \int_x^{x+r} f(y)\ dy$. 
Equivalently, the fundamental theorem of calculus $f(x) = \frac{d}{dy} \int_0^y f(z) dz|_{y=x}$ is true for almost every x in [0,1]. Here we use the oriented definite integral, thus $\int_x^y = - \int_y^x$. Specialising to the case where $f = 1_A$ is an indicator function, we obtain the Lebesgue density theorem as a corollary: Lebesgue density theorem. Let $A \subset [0,1]$ be Lebesgue measurable. Then for almost every $x \in A$, we have $\frac{|A \cap [x-r,x+r]|}{2r} \to 1$ as $r \to 0^+$, where |A| denotes the Lebesgue measure of A. In other words, almost all the points x of A are points of density of A, which roughly speaking means that as one passes to finer and finer scales, the immediate vicinity of x becomes increasingly saturated with A. (Points of density are like robust versions of interior points, thus the Lebesgue density theorem is an assertion that measurable sets are almost like open sets. This is Littlewood’s first principle.) One can also deduce the Lebesgue differentiation theorem back from the Lebesgue density theorem by approximating f by a finite linear combination of indicator functions; we leave this as an exercise. Read the rest of this entry » ## Simons Lecture III: Structure and randomness in PDE 8 April, 2007 in math.AP, question, talk, travel | Tags: concentration compactness, nonlinear PDE, randomness, Ricci flow, Simons lecture, solitons, structure | by Terence Tao | 10 comments [This lecture is also doubling as this week's "open problem of the week", as it (eventually) discusses the soliton resolution conjecture.] In this third lecture, I will talk about how the dichotomy between structure and randomness pervades the study of two different types of partial differential equations (PDEs): • Parabolic PDE, such as the heat equation $u_t = \Delta u$, which turn out to play an important role in the modern study of geometric topology; and • Hamiltonian PDE, such as the Schrödinger equation $u_t = i \Delta u$, which are heuristically related (via Liouville’s theorem) to measure-preserving actions of the real line (or time axis) ${\Bbb R}$, somewhat in analogy to how combinatorial number theory and graph theory were related to measure-preserving actions of ${\Bbb Z}$ and $S_\infty$ respectively, as discussed in the previous lecture. (In physics, one would also insert some physical constants, such as Planck’s constant $\hbar$, but for the discussion here it is convenient to normalise away all of these constants.) 
Read the rest of this entry » ## Simons Lecture II: Structure and randomness in ergodic theory and graph theory 7 April, 2007 in math.CO, math.DS, talk, travel | Tags: correspondence principle, ergodic theory, graph theory, property testing, randomness, regularity lemma, Simons lecture, structure, Szemeredi's theorem | by Terence Tao | 14 comments In this second lecture, I wish to talk about the dichotomy between structure and randomness as it manifests itself in four closely related areas of mathematics: • Combinatorial number theory, which seeks to find patterns in unstructured dense sets (or colourings) of integers; • Ergodic theory (or more specifically, multiple recurrence theory), which seeks to find patterns in positive-measure sets under the action of a discrete dynamical system on probability spaces (or more specifically, measure-preserving actions of the integers ${\Bbb Z}$); • Graph theory, or more specifically the portion of this theory concerned with finding patterns in large unstructured dense graphs; and • Ergodic graph theory, which is a very new and undeveloped subject, which roughly speaking seems to be concerned with the patterns within a measure-preserving action of the infinite permutation group $S_\infty$, which is one of several models we have available to study infinite "limits" of graphs. The two "discrete" (or "finitary", or "quantitative") fields of combinatorial number theory and graph theory happen to be related to each other, basically by using the Cayley graph construction; I will give an example of this shortly. The two "continuous" (or "infinitary", or "qualitative") fields of ergodic theory and ergodic graph theory are at present only related on the level of analogy and informal intuition, but hopefully some more systematic connections between them will appear soon. On the other hand, we have some very rigorous connections between combinatorial number theory and ergodic theory, and also (more recently) between graph theory and ergodic graph theory, basically by the procedure of viewing the infinitary continuous setting as a limit of the finitary discrete setting. These two connections go by the names of the Furstenberg correspondence principle and the graph correspondence principle respectively. These principles allow one to tap the power of the infinitary world (for instance, the ability to take limits and perform completions or closures of objects) in order to establish results in the finitary world, or at least to take the intuition gained in the infinitary world and transfer it to a finitary setting. Conversely, the finitary world provides an excellent model setting to refine one's understanding of infinitary objects, for instance by establishing quantitative analogues of "soft" results obtained in an infinitary manner. I will remark here that this best-of-both-worlds approach, borrowing from both the finitary and infinitary traditions of mathematics, was absolutely necessary for Ben Green and me in order to establish our result on long arithmetic progressions in the primes. In particular, the infinitary setting is excellent for being able to rigorously define and study concepts (such as structure or randomness) which are much "fuzzier" and harder to pin down exactly in the finitary world. 
Read the rest of this entry » ## Simons Lecture I: Structure and randomness in Fourier analysis and number theory 5 April, 2007 in math.CA, math.NT, talk, travel | Tags: arithmetic progressions, Fourier analysis, randomness, sieve theory, Simons lecture, structure | by Terence Tao | 34 comments This week I am in Boston, giving this year’s Simons lectures at MIT together with David Donoho. (These lectures, incidentally, are endowed by Jim Simons, who was mentioned in some earlier discussion here.) While preparing these lectures, it occurred to me that I may as well post my lecture notes on this blog, since this medium is essentially just an asynchronous version of a traditional lecture series, and the hypertext capability is in some ways more convenient and informal than, say, $\LaTeX$ slides. I am giving three lectures, each expounding on some aspects of the theme “the dichotomy between structure and randomness”, which I also spoke about (and wrote about) for the ICM last August. This theme seems to pervade many of the areas of mathematics that I work in, and my lectures aim to explore how this theme manifests itself in several of these. In this, the first lecture, I describe the dichotomy as it appears in Fourier analysis and in number theory. (In the second, I discuss the dichotomy in ergodic theory and graph theory, while in the third, I discuss PDE.) Read the rest of this entry »
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 42, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9382054805755615, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/88787?sort=oldest
## path of almost complex structure in the definition of heegaard floer homology In order to define Heegaard Floer homology for a connected, closed, oriented 3-manifold, we fix a generic path of nearly symmetric almost complex structures $J_s$ over $Sym^g(\Sigma)$. By the transversality theorem we can say the moduli spaces $M_{J_s}$ are all smooth. In certain cases we can choose the path to be the constant path $Sym^g(j)$ and still make the moduli space smooth. Rather than a path, can we choose a single nearly symmetric almost complex structure $J$ over $Sym^g(\Sigma)$, other than $Sym^g(j)$, to define Heegaard Floer homology? If we cannot, is it because we cannot make the moduli spaces smooth, or because we cannot apply the Gromov compactness theorem? Thanks in advance. - ## 1 Answer According to Proposition 3.9 of Ozsvath and Szabo's original paper there are indeed topological conditions one can put on the homotopy class of discs to ensure that one can take a single almost complex structure. Moreover, there's an explicit complex structure (coming from a symmetric product of a complex structure on the Heegaard surface) for which you can achieve transversality. Edit: On a second (more careful) reading of your question I realise you already knew this. Apologies. More pertinently to your question, if you have a regular almost complex structure then (provided you're only interested in finitely many homotopy classes at a time) you should be able to perturb $J$ slightly and it will remain regular (for each homotopy class regularity is an open condition). Certainly as long as it's tame you'll never run into problems with Gromov compactness. Disclaimer: I know very little about Heegaard-Floer so maybe someone more specialised can say something more helpful and direct. Instead let me say something more general about transversality for holomorphic discs. In general in Floer theory you need domain-dependent almost complex structures to achieve transversality for holomorphic discs/spheres. The problem is that when proving transversality you make perturbations to the almost complex structure and if the disc is multiply-covered in some region then you may end up having to make different perturbations at the same point in the ambient manifold (to which different points of the disc are mapped). Open Riemann surfaces are particularly bad in this way because different regions can have different covering multiplicities (just think of something like Milnor's doodle). By contrast, a closed holomorphic curve has an underlying simple curve for which one can prove transversality. Of course, for closed spheres in Calabi-Yau 3-folds, the branched covers of a simple, regular sphere are themselves not transverse (they necessarily occur in high-dimensional families by varying the branch points) and you need domain-dependent almost complex structures (or abstract perturbations) to cope with this (e.g. to prove the Aspinwall-Morrison formula). For discs there is also a theorem of Lazzarini (probably in one of these papers) which lets you decompose a disc into subdiscs which are multiple covers of simple discs, and this is how people using monotone Floer theory usually cope with the problem. Of course, if you have Maslov 0 discs you run into the same problems as for the Aspinwall-Morrison formula, hence the monotonicity requirement. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9179952144622803, "perplexity_flag": "head"}
http://mathhelpforum.com/algebra/126478-factoring-question-print.html
# Factoring question • January 31st 2010, 01:05 PM james121515 Factoring question Hi, What is the best method to go about factoring this: $x^6+6x^4+10x^2+8$? Can this be done by grouping? I tried to do rational root tests but it appears as though it does not have any rational roots. James • January 31st 2010, 01:13 PM pickslides Quote: Originally Posted by james121515 (quoted above) Might be helpful to say $a = x^2$, making the whole thing cubic instead. • January 31st 2010, 01:16 PM skeeter Quote: Originally Posted by james121515 (quoted above) $y = x^6+6x^4+10x^2+8 > 0$ for all $x$ ... no real roots. • January 31st 2010, 01:17 PM Henryt999 Well, that's an expression; you have no = sign anywhere. Also it doesn't have any x-intercepts, so it doesn't factor into real linear factors. • January 31st 2010, 01:25 PM james121515 It factors to $(x^2+2)^3$. I'm curious as to what method you use to do it. • January 31st 2010, 01:33 PM pickslides Quote: Originally Posted by james121515 It factors to $(x^2+2)^3$. I'm curious as to what method you use to do it. $x^6+6x^4+10x^2+8$ make $a = x^2$ now $a^3+6a^2+10a+8$ Then employ the factor theorem, checking positive and negative factors of 8. Spoiler: $a = -2$
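(Not part of the thread, but a quick check with a computer algebra system, here SymPy, assuming it is available, confirms both the factorization and the effect of the substitution $a = x^2$:)

```python
import sympy as sp

x, a = sp.symbols('x a')

# Verify the factorization claimed in the thread:
print(sp.factor(x**6 + 6*x**4 + 10*x**2 + 8))  # (x**2 + 2)**3

# The substitution a = x**2 reduces the problem to a cubic:
print(sp.factor(a**3 + 6*a**2 + 10*a + 8))     # (a + 2)**3
```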
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9533235430717468, "perplexity_flag": "middle"}
http://quant.stackexchange.com/questions/tagged/monte-carlo?sort=votes&pagesize=15
# Tagged Questions Monte Carlo simulation methods use repeated random experiments to determine results. 2answers 2k views ### How useful is Markov chain Monte Carlo for quantitative finance? Naively, it seems that Bayesian modeling, structural models particularly, would be quite useful in finance because of their ability to incorporate market idiosyncrasies and produce accurate ... 1answer 669 views ### Portfolio optimization with monte carlo sampling from predictive distribution Let's say we have a predictive distribution of expected returns for N assets. The distribution is not normal. We can interpret the dispersion in the distribution as a reflection of our uncertainty (or ... 5answers 684 views ### Monte carlo methods for vanilla european options and Ito's lemma. I understand that by applying Ito's lemma to the following SDE $$dX=\mu\,X\,dt+\sigma\,X\,dW$$ one obtains a solution to the above SDE which is as follows: {X}\left( t\right) =\mathrm{X}\left( ... 3answers 420 views ### Reference on Markov chain Monte Carlo method for option pricing? I have to implement option pricing in C++ using Markov chain Monte Carlo. Is there some paper which describes this in detail so that I can learn from it and implement it? 1answer 117 views ### Simulating property price index I am trying to write a Monte Carlo simulation to calculate risk associated with some property-based products. What is the most reasonable stochastic process to model a property price index? Do people ... 4answers 950 views ### Methods for pricing options I'm looking at doing some research drawing comparisons between various methods of approaching option pricing. I'm aware of the Monte Carlo simulation for option pricing, Black-Scholes, and that ... 2answers 445 views ### When to use Monte Carlo simulation over analytical methods for options pricing? I've been using Monte Carlo simulation (MC) for pricing vanilla options with non-lognormal underlying returns. I'm tempted to start using MC as my primary option-valuating technique as I can get ... 1answer 310 views ### Simulating the joint dynamics of a stock and an option I want to know the joint dynamics of a stock and its option for a finite number of moments between now and $T$, the expiration date of the option, for a number of possible paths. Let $r_{\mathrm{s}}$ ... 2answers 382 views ### Is there any research on applying state-space or dynamic linear models to forecasting equity risk premia? Is there any research on applying state-space or dynamic linear models to forecasting equity risk premia on a security-by-security basis with a medium-term horizon (say a 3- to 12-month horizon)? ... 1answer 397 views ### Monte carlo portfolio risk simulation My objective is to show the distribution of a portfolio's expected utilities via random sampling. The utility function has two random components. The first component is an expected return vector ... 5answers 339 views ### portfolio optimization from empirical return distributions I'd like to do a portfolio optimization of a set of ETF's but want to avoid traditional problems with normality assumptions in returns etc. Are there techniques that let me sample 'draws' from the ... 2answers 386 views ### Vanilla European options: Monte carlo vs BS formula I have implemented a monte carlo simulation for a plain vanilla European Option and I am trying to compare it to the analytical result obtained from the BS formula. Assuming my monte carlo pricer is ... 
2answers 351 views ### How to minimize the difference between a parametric VaR and a MC-VaR with lognormal assumption? Given that we want to find the Value at Risk for a portfolio of stocks only, there are two main methods to proceed. In the problem, we also assume that stocks follow a geometric Brownian motion. A ... 1answer 240 views ### How to reduce variance in a Cox-Ingersoll-Ross Monte Carlo simulation? I am working out a numerical integral for option pricing in which I'm simulating an interest rate process using a Cox-Ingersoll-Ross process. Each step in my Monte Carlo generated path is a ... 2answers 337 views ### How to transform process to risk-neutral measure for Monte Carlo option pricing? I am trying to price an option using the Monte Carlo method, and I have the price process simulations as inputs. The underlying is a forward contract, so at all times the mean of the simulations is ... 1answer 262 views ### Simulating conditional expectations There is a multidimensional process X defined via its SDE (we can assume that it's a diffusion-type process), and let's define another process by $g_t = E[G(X_T)|X_t]$ for $t\leq T$. I would like to ... 1answer 129 views ### Should we apply practical constraints on the distribution of monte carlo paths? to limit interest rate paths to a 'reasonable' range (if we could define reasonable). Now we calibrate log-normal skew and mean reversion monthly to a robust basket of ATM swaptions and in and out ... 0answers 117 views ### Consistency of economic scenarios in nested stochastics simulation I am interested in references on research regarding the consistency of economic scenarios in nested stochastics for risk measurement. Background: Pricing by Monte-Carlo: For pricing complex ... 4answers 705 views ### Stock Price Behavior and GARCH In my (limited) understanding, the behavior of a stock price can be modeled using Geometric Brownian Motion (GBM). According to the Hull book I'm currently reading, the discrete-time version of this ... 2answers 143 views ### Simulation of GBM I have a question regarding the simulation of a GBM. I have found similar questions here but nothing which relates to my specific problem: Given a GBM of the form \$dS(t) = \mu S(t) dt + ... 1answer 302 views ### Sanity check - How to price callables This question is meant as a sanity check whether I got the workflow right for pricing callable bonds. If anyone finds a mistake, or has a suggestion, please answer. The workflow is: For every call ... 4answers 1k views ### How to get greeks using Monte-Carlo for arbitrary option? Let's assume I have an arbitrary option that I can price using Monte-Carlo simulation. What is the general approach (i.e. without relying on a specific option type) to calculating the greeks in this ... 1answer 713 views ### Simple model for option premium (for covered call simulation)? Given a historical distribution of weekly prices and price changes for a stock, how can I estimate the option premium for a nearly at-the-money (ATM) option, say with an expiration date 3 months ... 2answers 146 views ### Is drift rate the same as interest rate in risk-neutral random walk when using Monte Carlo for option pricing? When using the following risk-neutral random walk $$\delta S = rS \delta t + \sigma S \sqrt{\delta t} \phi$$ where $\phi \sim N(0,1)$. Now when a text mentions drift = 5% does that mean that interest ... 1answer 336 views ### How to apply quasi-Monte Carlo to path-dependent options? 
Following up on my recent question on variance reduction in a Cox-Ingersoll-Ross Monte Carlo simulation, I would like to learn more about using a quasi-random sequence, such as Sobol or Niederreiter, ... 2answers 188 views ### What sort of order submission strategy would result in a random walk of trade prices? I have written a simulation that matches buy and sell orders, keeps track of an order book and simulates trades. My first pass at order submission was to generate random orders around the bid/ask ... 1answer 209 views ### Divergence issue with my monte carlo pricer… I am trying to implement a vanilla European option pricer with Monte Carlo and compare its result to the BS analytical formula's result. I noticed that as I increase (from 1 million to 10 million) ... 2answers 199 views ### Generate correlated random variables from Normal and Gamma distributions I want to generate a random vector $z$ of dimension $k+m$ with some given correlation matrix $\Sigma$, such that the first $k$ elements of the vector are distributed normally and the last $m$ elements ... 1answer 138 views ### Longstaff-Schwartz (Least Squares Monte Carlo) applied to American Options I'm working on an implementation in R of the Longstaff & Schwartz method from this 2001 article. I've managed to build code that replicates their prices in table 1 (p. 127), but only for the ones ... 1answer 1k views ### Mersenne twister random number generator in Java for Monte Carlo Sim. I am using the Mersenne twister random number generator in Java for a Monte Carlo Simulation. I need a uniform distribution of values between -1 and 1. My code is below (I am importing ... 4answers 157 views ### Other means of calibrating Heston models I understand that the simplest way of calibrating a Heston model for a volatility surface is to use Monte-Carlo to simulate the vol and stock price trajectories and then use the observed price to do a ... 2answers 1k views ### How do I estimate convergence in monte carlo methods? I am experimenting with Monte Carlo methods. I'd like to measure/estimate convergence with a graph/chart. How do I do that? Can anyone please direct me to relevant documentation/links or even give me ... 1answer 182 views ### Picking from two correlated distributions Can anyone provide a simple example of picking from two distributions, such that the two generated time series give a specified value of Pearson's correlation coefficient? I would like to do this in ... 1answer 104 views ### Value at Risk Monte-Carlo using Generalized Pareto Distribution (GPD) I have created a VBA program to calculate VaR by using Monte Carlo, I have simulated Brownian Motion. This method might be ok for a 100% equity portfolio, but let's say this portfolio may have fixed ... 1answer 71 views ### Greeks of Basket I am considering a product composed of 10 underlying assets. The maturity is 5 years. Each year, if the performance of the equi-weighted portfolio reaches a barrier, it pays a coupon. My question concerns ... 1answer 135 views ### BDT model implementation I am looking for a nice and readable description of how to implement the BDT model: $d log(r(t)) = [\theta(t)-\frac{\sigma'(t)}{\sigma(t)}log(r(t))]dt + \sigma(t) dW$. I assume I already have ... 2answers 1k views ### Black Scholes and Monte Carlo implementations in Java [duplicate] Possible Duplicate: Is there an all Java options-pricing library (preferably open source) besides jquantlib? Can anyone recommend a library with an implementation of Black Scholes and Monte ... 
0answers 36 views ### The observed negative interest rates should be modelled as the observed positive ones? The presently observed negative interest rates for the recently issued negative-interest bonds by France, etc. seem to increase in magnitude with the term. This might suggest that their modelling is ... 1answer 336 views ### What is Monte Carlo method? [closed] What is the Monte Carlo method, what are the alternatives, and when would you choose to use it over its alternatives?
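(A minimal sketch, not drawn from any single question above, of the theme that recurs throughout this tag: price a vanilla European call by simulating the risk-neutral geometric Brownian motion quoted above, and compare against the Black-Scholes closed form. All parameter values are hypothetical.)

```python
import numpy as np
from scipy.stats import norm

# Hypothetical parameters: spot, strike, risk-free rate, vol, maturity.
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0

# Monte Carlo: exact risk-neutral terminal prices,
# S_T = S0 * exp((r - sigma^2/2) T + sigma sqrt(T) Z),  Z ~ N(0, 1).
rng = np.random.default_rng(42)
z = rng.standard_normal(1_000_000)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
mc_price = np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()

# Black-Scholes closed form for comparison.
d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
bs_price = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

print(mc_price, bs_price)  # agree to roughly two decimal places
```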
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8923289179801941, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/equations-of-motion+energy-conservation
# Tagged Questions 2answers 584 views ### Can a force in an explicitly time dependent classical system be conservative? If I consider equations of motion derived from the principle of least action for an explicitly time-dependent Lagrangian $$\delta S[L[q(\text{t}),q'(\text{t}),{\bf t}]]=0,$$ under what ... 3answers 143 views ### Equation $H(q,p)=E$ is the equation of motion or energy-conservation law? I do not completely understand why we consider the Hamilton–Jacobi equation $H(q,p)=E$ an equation of motion, whereas it looks like an energy-conservation law? 4answers 514 views ### Does the stress-energy tensor contain the equations of motion? The vanishing divergence $\nabla_i T^{ik}=0$ of the stress-energy tensor of a physical system expresses conservation laws. Does the stress-energy tensor also contain the information about the equations of motion of ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8809970617294312, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-algebra/98044-linear-independence-nilpotent-matrices.html
# Thread: 1. ## Linear Independence, Nilpotent Matrices Hello! The course is Linear Algebra; the subject is vector spaces, bases, dimensions. The question is: $A \in F^{n \times n}$ is a nilpotent matrix. (Nilpotent matrix - Wikipedia, the free encyclopedia) A square matrix is nilpotent of order $k$ if $A, A^2, \ldots, A^{k-1}$ are all different from zero, but $A^k=0$. Prove: $A^n=0$. Now, there's a clue: let $k$ be the smallest number for which $A^k=0$. There is a $v$ for which $A^{k-1}v$ is not equal to zero. Prove that $\{v, Av, A^2v, \ldots, A^{k-1}v\}$ is linearly independent. Okay, now I didn't want to look for any other way to solve this (meaning without using the clue), because I also wanted to use what I've learned and practiced. So, what I did was: Let $A^k=0$ and $A^{k-1} \neq 0$; therefore, let there be $v \neq 0$ for which $A^{k-1}v \neq 0$ (such a $v$ exists since $A$ is a nilpotent matrix of order $k$). I have to prove that $\{v, Av, A^2v, \ldots, A^{k-1}v\}$ is linearly independent. So, let $\alpha_1, \alpha_2, \ldots, \alpha_k$ be parameters. Let's see for which alphas $\alpha_1 v+\alpha_2 Av+ \cdots + \alpha_k A^{k-1}v=0$. (In order to prove that it's linearly independent I need to show that the ONLY case in which it's equal to zero is when $\alpha_1=\alpha_2=\cdots=\alpha_k=0$.) So, I multiply it by the matrix $A$: $\alpha_1 Av+\alpha_2 A^2 v+ \cdots + \alpha_{k-1}A^{k-1}v+\alpha_k A^k v=0$. According to what we assumed, $A^k=0$, so this is equal to: $\alpha_1 Av+\alpha_2 A^2 v+ \cdots + \alpha_{k-1}A^{k-1}v=0$. We'll multiply again by $A$: $\alpha_1 A^2 v+\alpha_2 A^3 v+ \cdots + \alpha_{k-2}A^{k-1}v=0$, and so on. In total, we multiply it by $A$ a total of $(k-1)$ times, and in the end we get: $\alpha_1 A^{k-1}v=0$. In such a case, $\alpha_1$ can be equal to zero, while the other alphas can be anything, which means this is not necessarily linearly independent! Argh... Can anyone please help me with this? Thank you very much!!! 2. I am not quite clear what you mean. $A^n=A^k \cdot A^{n-k}=0$ — does it not suffice? 3. I'm sorry, I don't understand what you mean. Where should I use $A^n=A^k \cdot A^{n-k}=0$? I have no $n$, since I only use $\alpha_i$, $v$, $k$, and $A$. I need a mathematical proof for this, and more than that - I want one that is based on the clue (because that's connected to the subject I'm practicing). Thank you 4. Originally Posted by adam63 (quoted above) Oh, you just let $\alpha_1=0$ and continue doing the same thing for $\alpha_2, \alpha_3, \ldots$ 5. Thank you!!! I got it! I was one step away from this... I was supposed to do this: assume that $A^n$ is not $0$, and therefore there is $k>n$ for which $A^k=0$ (so $A^{k-1}$ isn't $0$). Then, I need to prove what I mentioned here (that these vectors are independent), and then - since I found $k$ independent vectors in the space $F^{n\times n}$, there must be $\#B \leq \dim(F^{n\times n})$; therefore, $k \leq n$, which is false, since we assumed in the beginning that there should be a $k>n$. I like this one!!!! 6. ## Re: Linear Independence, Nilpotent Matrices Bumping this. I have basically the same problem. Can anyone clarify or further elaborate on how to start this problem? I'm not really following adam's explanation or work. 7. ## Re: Linear Independence, Nilpotent Matrices there are different ways to do this. 
one way is to note that the minimal polynomial m(A) has degree ≤ n, since the characteristic polynomial p(x) = det(xI - A) has degree ≤ n, and p(A) = 0, so m(x) divides p(x). but q(x) = x^k is a polynomial for which q(A) = 0, so m(x) divides q(x) as well. hence m(x) = x^t, for some t ≤ k. but k is the smallest power of A equal to 0, so t = k ≤ n. i'm not entirely sure i agree with the above proofs, since they only show that k ≤ n^2 = dim(F^(nxn)). 8. ## Re: Linear Independence, Nilpotent Matrices I appreciate the response. I'm still not really following, any chance of an explanation in english where I can try and understand what is trying to be proven as a way to get started. 9. ## Re: Linear Independence, Nilpotent Matrices there are different ways to approach this, depending on what you have to work with. i don't know what you do or do not know. where is the first place you get lost? 10. ## Re: Linear Independence, Nilpotent Matrices Any other starting tips.
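For anyone who wants a quick sanity check of the clue, here is a small numerical illustration (a hypothetical example, not from the thread) using the $4\times 4$ shift matrix, which is nilpotent of order $4$; here $k = n = 4$, consistent with the bound $k \leq n$ discussed above.

```python
import numpy as np

# The 4x4 shift matrix: ones on the superdiagonal, zeros elsewhere.
# It is nilpotent of order 4: A^3 != 0 but A^4 == 0.
A = np.diag([1.0, 1.0, 1.0], k=1)

assert np.any(np.linalg.matrix_power(A, 3)), "A^3 should be nonzero"
assert not np.any(np.linalg.matrix_power(A, 4)), "A^4 should be zero"

# Pick a v with A^(k-1) v != 0, then check that {v, Av, A^2 v, A^3 v}
# is linearly independent by stacking the vectors and computing the rank.
v = np.array([0.0, 0.0, 0.0, 1.0])
vectors = np.column_stack([np.linalg.matrix_power(A, i) @ v for i in range(4)])
print(np.linalg.matrix_rank(vectors))  # prints 4: the vectors are independent
```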
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9307132363319397, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/16312/how-helpful-is-non-standard-analysis
## How helpful is non-standard analysis?

So, I can understand how non-standard analysis is better than standard analysis in that some proofs become simplified, and infinitesimals are somehow more intuitive to grasp than epsilon-delta arguments (both these points are debatable). However, although many theorems have been proven by non-standard analysis and transferred via the transfer principle, as far as I know all of these results were already known to be true. So, my question is: Is there an example of a result that was first proved using non-standard analysis? To wit, is non-standard analysis actually useful for proving new theorems?

Edit: Due to overwhelming support of François' comment, I've changed the title of the question accordingly.

-

9 Note that your question is really about how helpful non-standard analysis is. If you wanted to know how unhelpful it is, you would ask for theorems that cannot be proved using non-standard methods. – François G. Dorais♦ Feb 24 2010 at 22:57

7 I believe that Ben Green, Terry Tao, and Tamar Ziegler are writing their forthcoming paper on the Inverse Conjecture for the Gowers norm (which, combined with earlier work of Green and Tao, will resolve many cases of the Hardy-Littlewood conjectures on linear equations in primes, including precise asymptotics for primes in arithmetic progressions) in the language of non-standard analysis. That seems pretty helpful! (By the way, I strongly recommend Terry Tao's blog for several discussions of the applicability of non-standard analysis to "everyday" mathematics.) – Emerton Feb 25 2010 at 2:53

2 I added the nonstandard-analysis tag. – Joel David Hamkins Feb 25 2010 at 14:23

## 16 Answers

From the Wikipedia article: the list of new applications in mathematics is still very small. One of these results is the theorem proven by Abraham Robinson and Allen Bernstein that every polynomially compact linear operator on a Hilbert space has an invariant subspace. Upon reading a preprint of the Bernstein-Robinson paper, Paul Halmos reinterpreted their proof using standard techniques. Both papers appeared back-to-back in the same issue of the Pacific Journal of Mathematics. Some of the ideas used in Halmos' proof reappeared many years later in Halmos' own work on quasi-triangular operators.

-

Great, this is what I was looking for. Thanks for the links. This certainly does answer my question, as Halmos was actually one of the original posers of that problem. It is quite interesting that the papers appear back-to-back in the same journal. – Tony Huynh Feb 24 2010 at 22:53

4 As Greg Lawler put it in his article in the recent volume dedicated to Nelson: "There are some theorems that were first published with nonstandard proofs but, at least in all the cases where I understand the result, they could have been done standardly." In a footnote he adds, "Of course, it is harder to answer the question: would the proofs have been found without nonstandard analysis? In fact, there are probably some proofs that have been done originally using nonstandard analysis but the author chose to write a standard proof instead." – Steve Huntsman Feb 24 2010 at 22:54

1 The reason for choosing standard proofs over nonstandard ones is obvious, and Lawler himself brings it up in the same article. Proving something with NSA hurts when trying to communicate results to a wide audience. In this sense NSA and experimentation are in the same boat: they can help, but generally behind the scenes. – Steve Huntsman Feb 24 2010 at 23:01

The story repeats itself many times. For example, Kamae's proof of the ergodic theorem using nonstandard analysis was rewritten by Weiss, and the two articles appeared back-to-back. I interpret this as antipathy to Robinson himself: "Look, your methods aren't so innovative as you tell everybody!" Perhaps if Robinson had been more likable, we could have a nicer analysis already. – Kevin O'Bryant Feb 26 2011 at 2:15

1 @katz: Wikipedia evolves. If you look at the version which Steve Huntsman quoted from (considering that this post has not been edited since it was first posted, that'd be the version from Feb 22, 2010), the quote is accurate. en.wikipedia.org/w/… – Willie Wong Apr 8 at 11:00

The other answers are excellent, but let me add a few points.

First, with a historical perspective, all the early fundamental theorems of calculus were first proved via methods using infinitesimals, rather than by methods using epsilon-delta arguments, since those methods did not appear until the nineteenth century. Calculus proceeded for centuries on the infinitesimal foundation, and the early arguments (whatever their level of rigor) are closer to their modern analogues in nonstandard analysis than to their modern analogues in epsilon-delta methods. In this sense, one could reasonably answer your question by pointing to any of these early fundamental theorems.

To be sure, the epsilon-delta methods arose in part because mathematicians became unsure of the foundational validity of infinitesimals. But since nonstandard analysis exactly provides the missing legitimacy, the original motivation for adopting epsilon-delta arguments appears to fall away.

Second, while it is true that almost any application of nonstandard analysis in analysis can be carried out using standard methods, the converse is also true. That is, epsilon-delta arguments can often also be translated into nonstandard analysis. Furthermore, someone raised with nonstandard analysis in their mathematical childhood would likely prefer things this way. In this sense, the preference between the two methods may be a cultural matter of upbringing. For example, H. Jerome Keisler wrote an introductory calculus textbook called Elementary Calculus: An Infinitesimal Approach, and this text was used for many years as the main calculus textbook at the University of Wisconsin, Madison. I encourage you to take a look at this interesting text, which looks at first like an ordinary calculus textbook, except that on the inside cover, next to the various formulas for derivatives and integrals, there are also listed the various rules for manipulating infinitesimals, which fill the text. Keisler writes:

This is a calculus textbook at the college Freshman level based on Abraham Robinson's infinitesimals, which date from 1960. Robinson's modern infinitesimal approach puts the intuitive ideas of the founders of the calculus on a mathematically sound footing, and is easier for beginners to understand than the more common approach via limits.

Finally, third, some may take your question to presume that a central purpose of nonstandard analysis is to provide applications in analysis. But this is not correct.
The concept of nonstandard models of arithmetic, of analysis and of set theory arose in mathematical logic and has grown into an entire field, with hundreds of articles and many books, with its own problems and questions and methods, quite divorced from any application of the methods in other parts of mathematics. For example, the subject of Models of Arithmetic is focused on understanding the nonstandard models of the first order Peano Axioms, and it makes little sense to analyze these models using only standard methods. To mention just a few fascinating classical theorems: every countable nonstandard model of arithmetic is isomorphic to a proper initial segment of itself (H. Friedman). Under the Continuum Hypothesis, every Scott set (a family of sets of natural numbers closed under Boolean operations and Turing reducibility, and satisfying König's lemma) is the collection of definable sets of natural numbers of some nonstandard model of arithmetic (D. Scott and others). There is no nonstandard model of arithmetic for which either addition or multiplication is computable (S. Tennenbaum). Nonstandard models of arithmetic were also used to prove several fascinating independence results over PA, such as the results on Goodstein sequences, as well as the Paris-Harrington theorem on the independence over PA of a strong Ramsey theorem. Another interesting result shows that various forms of the pigeonhole principle are not equivalent over weak base theories; for example, the weak pigeonhole principle that there is no bijection of $n$ to $2n$ is not provable over the base theory from the weaker principle that there is no bijection of $n$ with $n^2$. These proofs all make fundamental use of nonstandard methods, which it would seem difficult or impossible to omit or to translate to standard methods.

-

2 Regarding NSA vs epsilons and deltas, isn't 'not using the Axiom of Choice unnecessarily' a good reason to use the latter? – HW Feb 25 2010 at 21:01

2 Well, many ordinary uses of epsilon-delta also use choice. For example, to know that the epsilon-delta definition of continuity for a function on the reals is equivalent to the convergent sequence characterization relies on AC, since you need to pick the points inside those delta-balls. – Joel David Hamkins Feb 25 2010 at 21:19

One needs no choice at all to construct nonstandard models of arithmetic. For the reals, however, the existence of a nonstandard model of the reals with the transfer principle is equivalent to the existence of a nonprincipal ultrafilter on omega, which would be a weak choice principle. Nevertheless, one needs at least DC to have a decent theory of Lebesgue measure, so there seems to be choice all around here. – Joel David Hamkins Feb 25 2010 at 21:26

I should say that the reverse implication in that equivalence uses countable choice, because if you have an ultrafilter, you still need countable choice to verify that the ultrapower satisfies the Łoś theorem, which is what gives you the Transfer principle. But the transfer principle in any case gives you ultrafilters, which is an interesting little argument. – Joel David Hamkins Feb 25 2010 at 21:32

2 Joel, I enjoyed your answer and learned from it, but I wondered whether "nonstandard analysis exactly provides the missing legitimacy" [of early calculus] was overstating it. I'm more familiar with the "other" way of putting infinitesimals on a firm footing, that of Synthetic Diff Geom (as e.g. in Bell's text A Primer of Infinitesimal Analysis). As I understand it, a crucial difference is that the infinitesimals of SDG can, for instance, have square equal to 0, but the infinitesimals of NSA can't. I'd guess that to be an important part of providing that "missing legitimacy". Any thoughts? – Tom Leinster Feb 27 2010 at 2:21

Nonstandard hulls of spaces are used all the time in Banach space theory, so much so that books devote sections to the construction of ultraproducts of Banach spaces (e.g. Absolutely Summing Operators by Diestel, Jarchow, and Tonge). There are cases where NSA is used to prove the existence of an estimate, yet no one knows how to compute an estimate directly. For example, the unconditional constant of any basis for the span of the first $n$ unit basis vectors in James' space of sequences of bounded quadratic variation must go to infinity, but the only known proof involves NSA.

-

In 1986 C. Ward Henson and H. J. Keisler published "On the Strength of Nonstandard Analysis" (The Journal of Symbolic Logic, Vol. 51, No. 2 (Jun., 1986), pp. 377-386), which is a seminal contribution to the meta-mathematics of nonstandard analysis. Since their result bears directly on the issue in this thread, which has been reopened after laying dormant for some time now, and since no reference to their work appears in the original thread, I am taking the liberty of quoting the introduction to Henson and Keisler's important paper (which I believe is as current today as when it was published).

It is often asserted in the literature that any theorem which can be proved using nonstandard analysis can also be proved without it. The purpose of this paper is to show that this assertion is wrong, and in fact there are theorems which can be proved with nonstandard analysis but cannot be proved without it. There is currently a great deal of confusion among mathematicians because the above assertion can be interpreted in two different ways. First, there is the following correct statement: any theorem which can be proved using nonstandard analysis can be proved in Zermelo-Fraenkel set theory with choice, ZFC, and thus is acceptable by contemporary standards as a theorem in mathematics. Second, there is the erroneous conclusion drawn by skeptics: any theorem which can be proved using nonstandard analysis can be proved without it, and thus there is no need for nonstandard analysis.

The reason for this confusion is that the set of principles which are accepted by current mathematics, namely ZFC, is much stronger than the set of principles which are actually used in mathematical practice. It has been observed (see [F] and [S]) that almost all results in classical mathematics use methods available in second order arithmetic with appropriate comprehension and choice axiom schemes. This suggests that mathematical practice usually takes place in a conservative extension of some system of second order arithmetic, and that it is difficult to use the higher levels of sets. In this paper we shall consider systems of nonstandard analysis consisting of second order nonstandard arithmetic with saturation principles (which are frequently used in practice in nonstandard arguments). We shall prove that nonstandard analysis (i.e. second order nonstandard arithmetic) with the $\omega_{1}$-saturation axiom scheme has the same strength as third order arithmetic. This shows that in principle there are theorems which can be proved with nonstandard analysis but cannot be proved by the usual standard methods.
The problem of finding a specific and mathematically natural example of such a theorem remains open. However, there are several results, particularly in probability theory, whose only known proofs are nonstandard arguments which depend on saturation principles; see, for example, the monograph [Ke]. Experience suggests that it is easier to work with nonstandard objects at a lower level than with sets at a higher level. This underlies the success of nonstandard methods in discovering new results. To sum up, nonstandard analysis still takes place within ZFC, but in practice it uses a larger portion of full ZFC than is used in standard mathematical proofs.

[F] S. Feferman, Theories of finite type related to mathematical practice, in Handbook of Mathematical Logic (J. Barwise, editor), North-Holland, Amsterdam, 1977, pp. 913-971.

[Ke] H. J. Keisler, An infinitesimal approach to stochastic analysis, Memoirs of the American Mathematical Society, No. 297 (1984).

[S] S. Simpson, Which set existence axioms are needed to prove the Cauchy/Peano theorem for ordinary differential equations?, The Journal of Symbolic Logic, vol. 49 (1984), pp. 783-802.

It is perhaps worth adding that Keisler (making use of work of Avigad) subsequently published a sequel to his paper with Henson, in which he introduces what might be regarded as a system of Reverse Mathematics for nonstandard analysis, with the hope of being able to establish the strength of particular theorems proved using nonstandard analysis. (See "The Strength of Nonstandard Analysis" by H. J. Keisler in The Strength of Nonstandard Analysis, edited by Imme van den Berg and Vítor Neves, Springer, 2007.)

-

I first understood what the Thurston-type compactification of the space of properly strictly convex real projective structures on a closed surface was by using non-standard methods. What had been murky and confusing was suddenly clear. I have struggled with the question of whether or not to use NSA in the written proof. It is so much easier to use NSA that I think we will.

-

The asymptotic cone of a metric space (and hence of a finitely generated group endowed with the word metric) is constructed using non-standard analysis, and has been used to prove many nice theorems. To take just one example, asymptotic cones are an important tool in the proof that mapping class groups are quasi-isometrically rigid.

-

Freiman conjectured a classification of finite sets $A$ of integers that have $$|A+A| = 3|A|-3+b$$ for some $0\leq b \leq |A|/3-2$. Renling Jin recently resolved this using nonstandard analysis. He has quite a few other nice results that appeared first with nonstandard analysis.

-
Let $T_n$ be the set of all possible log canonical threshold of a pair $(X,Y)$ where $X/k$ is a smooth variety and $Y \subseteq X$ is a nonzero closed subschemes. The following two facts are first proved via non-standard methods: 1) $T_n$ is closed in $\mathbb R$ for all $n$. 2) The set of points of accumulations from above of $T_n$ is $T_{n-1}$. I think proofs that avoid non-standard analysis emerged later, but the first one used non-standard technique. - Ah, Sam and me gave the same answer within 55 seconds of each other (:. Sorry I did not see Sam's answer. – Hailong Dao Feb 24 2010 at 23:01 Gromov was writing in one of his books (among other things) about some new mathematics coming from nonstandard analysis. Another example is proving that some statistical field theories (and lattice QFTs) are well-defined by Sergio Albeverio et al. (look at their book about that kind of applications to physics). Kiesler has been emphasising that some functional spaces are much richer in nonstandard analysis and that this power is one of the main arguments for the theory. Analysts say that one should look for applications where one has several degrees of infinitesimals or asymptotics, to somewhat reduce fitting complicated estimates to satisfy all. There are some other approaches to infinitesimals which are not nonstandard analysis (no general transfer principle), but are similar in spirit, namely the synthetic differential geometry. - I am not aware of any comments by Gromov directly on nonstandard analysis. In his book "metric structures for riemannian and non-riemannian spaces" , page 97, he does comment on the construction of asymptotic cones using nonprincipal ultrafilters, and cites the paper by van den Dries and Wilkie from 1984. In a recent interview, he praised their work more explicitly, but still without mentioning nonstandard analysis. – katz Apr 8 at 12:28 I think the only known solution to the local version of the Hilbert's fifth problem heavily uses nonstandard analysis. To be more precise the result is: every locally euclidean local group is locally isomorphic to a Lie group. You can find details in Isaac Goldbring's paper. - In mathematical economics, one often faces the following problem: One wants to formalize the idea of a large, relatively anonymous group of people (an atomless measure space of agents) that all face some risk that is iid of these people. Since there are lots of people, this risk should cancel out in the aggregate by some law of large numbers. The expost empirical distribution should be the ex ante distribution of the risk. If one uses something like the unit interval endowed with Lebesgue measure, this does not work. Most sample realizations are not measurable in that case. Yeneng Sun has shown that there are exact laws of large numbers with a continuum of random variables for certain types of measure spaces. The only known examples were obtained using the Loeb measure construction that relies heavily on NSA. Later, Konrad Podczeck has shown how to construct appropriate measure spaces using conventional methods. - Here is one paper with some results I have only seen being done in non-standard analysis so far, perhaps it is helpful to you: A mathematical proof of the existence of trends in financial time series by Michel Fliess & C´edric Join From the abstract: "We are settling a longstanding quarrel in quantitative finance by proving the existence of trends in financial time series thanks to a theorem due to P. Cartier and Y. 
Perrin, which is expressed in the language of nonstandard analysis [...] Those trends, which might coexist with some altered random walk paradigm and efficient market hypothesis, seem nevertheless difficult to reconcile with the celebrated Black-Scholes model. They are estimated via recent techniques stemming from control and signal theory. Several quite convincing computer simulations on the forecast of various financial quantities are depicted. We conclude by discussing the role of probability theory." See also this question/answers on Mathoverflow - Steve Huntsman's claim attributed to wikipedia that "the list of new applications in mathematics is still very small" is patently false. In fact, I was unable to find such a claim there. To mention just the most famous results, there is the recent work by T. Tao et al, by I. Goldbring on the local version of Hilbert's 5, Albeverio (several applications in math physics), Arkeryd (see his piece in the American Mathematical Monthly at http://www.jstor.org/stable/10.2307/30037635) in hydrodynamics, the works on "canards" in perturbation theory, Jin's work in additive number theory, as well as numerous applications in statistics and economics. Robinson's work also occasioned a critical re-evaluation of whig history dominated by a reductive epsilontist agenda. - @katz: Wikipedia evolves. See the version from which Steve Huntsman quoted when he wrote his answer three years ago: en.wikipedia.org/w/… – Willie Wong Apr 8 at 11:01 (BTW, several of the results you mention are already discussed in the various other answers to this question below, and it would be great if you can add links to the ones which aren't [for example, a link or actual citation reference to the relevant papers of Arkeryd would be wonderful!]) – Willie Wong Apr 8 at 11:09 The version you cite dates from 2010. The claim was as false in 2010 as it is in 2013. It is not appropriate to hide behind anynomous claims posted in the public domain if such claims are incorrect. – katz Apr 8 at 11:12 1 I make no comment on the verity of the sentiments expressed by that quote. I take issue with your statement "In fact, I was unable to find such a claim there", which for better or for worse sounds like you are accusing Steve Huntsman of fabricating the quote out of thin air. – Willie Wong Apr 8 at 11:51 You are putting words in my mouth. Most people know that wiki is a work in progress. If this claim was deleted, there must have been good reasons for this. Since posting material on wiki involves little personal responsibility, it is inappropriate to rely on negative claims made there. My objection to Huntsman's presentation of his comment stands. – katz Apr 8 at 12:15 Edward Nelson was working on a book on NSA mentioned here: https://web.math.princeton.edu/~nelson/books.html His existing book "Radically elementary probability theory" (linked from that page) uses some NSA. I've been wanting to read it but don't understand much of it. - That's related (and interesting), but doesn't directly address the question, viz. what results have been proven first using NSA. – Robert Haraway Apr 8 at 21:02 I just came across a 2013 book by F. Herzberg entitled "Stochastic Calculus with Infinitesimals", see http://link.springer.com/book/10.1007/978-3-642-33149-7/page/1 where probability and stochastic analysis are done without having to develop the complexities of measure and integration theory first. Ever since E.Nelson, such an approach is called "radically elementary" and it really is. 
What this proves is the new result that stochastic calculus can be done without measure theory. To give a historical parallel, recall that Leibniz's mentor in mathematics was Huygens. When Huygens first learned of Leibniz's invention of infinitesimal calculus, Huygens was sceptical, and wrote to Leibniz that he is merely doing what Fermat and others have done before him in a different language. What Huygens failed to recognize immediately (but did recognize later) was the generality of the methods and the lucidity of the presentation of Leibniz's new approach. The Nelson-Herzberg approach to stochastic calculus is in a way more significant than merely a new "result", since it provides a new methodology. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9470669031143188, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/85052/intersection-of-geodesiques
## Intersection of geodesics

Let $(M,g)$ be a closed Riemannian surface. Let $\alpha$ be a simple closed geodesic. Does there exist a simple closed geodesic $\beta$ that intersects $\alpha$ in exactly one point $p$, such that $[\alpha]$ and $[\beta]$ do not commute in $\pi_1(M,p)$?

-

I'm assuming you mean a surface with a Riemannian metric? Otherwise for a Riemann surface, presumably you mean a unique constant curvature metric with curvature $1,0$ or $-1$? – Agol Jan 6 2012 at 16:11

1 You put about 10 spelling errors in 3 lines of text; I corrected them all. Please be more precise next time, we are mathematicians after all, and this is a professional forum. – GH Jan 7 2012 at 1:55

It doesn't seem to me that you corrected all the spelling errors! – YangMills Apr 14 2012 at 3:18

## 2 Answers

Yes. Consider the punctured torus: the $(1, 0)$ and $(0, 1)$ curves together generate the fundamental group (which is the free group on two generators), and so don't commute. Now, if you have a closed surface, one of its handles is a punctured torus, so the above construction goes through without change.

EDIT The above answer is for hyperbolic surfaces. Obviously, if the fundamental group is abelian (as for the torus, the sphere, or the projective plane), the answer is no.

-

I had hyperbolic surfaces in mind when I wrote the question. – student Jan 9 2012 at 9:46

No, this is false for any curve $\alpha$ on $M$ a torus, sphere, or projective plane (choosing any Riemannian metric on the surface). For a general surface with a Riemannian metric, you might have a simple closed $\mathbb{Z}/2$-homologically trivial geodesic ($\alpha$ bounds a subsurface), in which case there is no geodesic $\beta$ meeting it in a single point. If the curve $\alpha$ is non-separating, then this will be true (if $\chi(M)<0$): there exist transverse curves $\alpha$ and $\beta$ such that $|\alpha\cap \beta|=1$, with $[\alpha]$ and $[\beta]$ not commuting in $\pi_1(M)$ (with the natural base point). Then minimal length representatives of $\alpha$ and $\beta$ will intersect transversely in a single point (see e.g. a paper of Hass-Scott for an elementary proof).

-
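To spell out the free-group step in the punctured-torus answer (a standard fact, recorded here for completeness):

```latex
\[
\pi_1\bigl(T^2\setminus\{\mathrm{pt}\}\bigr)\;\cong\;F_2=\langle a,b\rangle,
\qquad
[a,b]\;=\;aba^{-1}b^{-1}\;\neq\;1 .
\]
% A nonempty reduced word in a free group is never the identity, so the
% generators a and b (the classes of the (1,0) and (0,1) curves) do not
% commute; their commutator [a,b] represents the boundary loop of the
% punctured torus, up to conjugacy.
```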
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9161964654922485, "perplexity_flag": "head"}
http://mathoverflow.net/questions/6477/applications-of-the-other-definition-of-sheaves/6478
## Applications of the "other" definition of Sheaves

In most literature, when you look for the definition of sheaves, you will see the usual definition of a presheaf as a functor from (the open sets of) a topological space (or from a Grothendieck topology) to some category; for sheaves, the category is then required to be complete, and one imposes an exactness/equalizer condition. But for some categories there is another, equivalent definition. One is given a "protosheaf" (there are various names for these creatures), a sheaf space, a base space, a local homeomorphism between the sheaf space and the base space; one is even given the stalks already. But this definition seems not to be very abstract from the category-theoretical point of view, as I only see this kind of definition for very specific categories (for instance, in the category of groups or rings, one wants the addition operation defined on the fiber product of the sheaf space over the base space to be continuous). What is the equivalent category-theoretical way of defining a sheaf using this method? In which cases does this definition give us a psychological advantage over the aforementioned one? I have personally found the former definition more advantageous in my practice, but there are some mathematical practices in which the latter definition might be more useful.

-

## 5 Answers

The "sheaf space = espace étalé" definition is better (opinions may vary) than the definition by specifying sections over open sets in at least the following cases:

1. Working with constant or nearly constant sheaves, such as constructible sheaves.

2. Defining the restriction $F|_S$ to an arbitrary, not necessarily open, subset $S\subset X$, and in particular understanding the set of sections $F(S)$ over a non-open subset.

3. Defining the pullback $f^{-1}F$.

4. Proving that $f_*$ and $f^{-1}$ are adjoint.

Since the two definitions are equivalent, all of these can be accomplished by using only open sets, but using the sheaf space gives a better geometric picture of the situation. These examples apply to sheaves over topological spaces, not necessarily to the other categories you mentioned.

-

1 Another nice case is when considering equivariant sheaves (of sets, say). When considering the etale space, an equivariant structure is just an action of the group $G$ on the etale space such that the projection is $G$-equivariant. – Geordie Williamson Jan 11 2012 at 12:46

Here is an elegant application of sheaves seen as étalé spaces. Consider a complex manifold $M$. It automatically comes with a holomorphic local isomorphism $\pi: \mathcal O_M \to M$, described as follows. As a set, $\mathcal O_M$ is the set of all germs of holomorphic functions at all points of $M$. The map $\pi$ sends a germ to the point at which it is considered. Then we endow $\mathcal O_M$ with the following topology. For an open connected set $U\subset M$ and a holomorphic function $f$ on $U$, denote by $[U,f]\subset\mathcal O_M$ the set of all germs $f_a$ with $a\in U$. These $[U,f]$ are decreed to be an open basis for the topology of $\mathcal O_M$. Then there exists a unique complex structure on $\mathcal O_M$ such that $\pi: \mathcal O_M \to M$ becomes a HOLOMORPHIC local isomorphism.
On $\mathcal O_M$ there lives a universal tautological holomorphic function $F:\mathcal O_M \to \mathbb C$, $f_a \mapsto f_a(a)$. (Note that $\mathcal O_M$ is huge and disconnected, but Hausdorff.) And now for the punchline: given a holomorphic function $f$ on $U\subset M$, take the connected component $Riem(U)$ of $[U,f]$ in $\mathcal O_M$. Together with the restriction $F|_{Riem(U)}$, this is the maximal holomorphic extension of $f$: a sophisticated concept admirably handled by sheaves as étalé spaces. (The manifold $Riem(U)$ is called the domain of existence of $f$.)

Even in dimension one and for $M=\mathbb C$ this is quite powerful: you get the Riemann surface $(Riem(U), F|_{Riem(U)})$ of any holomorphic function $f$ on an arbitrary domain $U\subset \mathbb C$ without the cutting, pasting, continuation along paths, ... of which classical books on complex analysis are so fond.

A reference for this might be Fritzsche-Grauert's book "From Holomorphic Functions to Complex Manifolds", Chapter II, 8, 9 (Springer, GTM 213). The book by Narasimhan and Nievergelt that Charles so pertinently and quickly evoked seems to handle the dimension one case (which actually suffices to convey the sheaf idea).

Finally, it is noteworthy that the EGA-style definition that Hartshorne gives for the structure sheaf $\mathcal O$ of the affine scheme $Spec(A)$ (page 70 of THE BOOK) is exactly analogous to the description above: the étalé space is the disjoint union of all the local rings $A_P$ for $P\in Spec(A)$, and $\mathcal O(U)$ is the set of continuous maps of $U$ into the étalé space; only, Hartshorne doesn't say what the topology is on the étalé space, and the continuity condition is replaced by an ad hoc description in terms of elements of the rings of fractions $A_f$.

-

He doesn't name the topology, but he describes it. I mean, it's just the weak topology w/r/t the maps from the sections. – Harry Gindi Nov 23 2009 at 17:44

very nice! "Then there exists a unique complex structure on $M$ such that $\pi : O_M \to M$ becomes a HOLOMORPHIC local isomorphism." I guess you want $O_M$ instead of $M$? – Geordie Williamson Jan 11 2012 at 12:44

Dear @Geordie, thanks a lot for your attention and for the nice words. You are right, of course, and I have edited the answer accordingly. – Georges Elencwajg Jan 11 2012 at 18:49

My understanding is that what you're talking about is the espace etale (not sure where accents go offhand) of the sheaf, and that this was the original definition. Proving that the sheaf of sections of this space recovers the original sheaf is an exercise in Hartshorne. The only place I've ever seen that definition used seriously is in this complex analysis book by Narasimhan and Nievergelt, where they use the espace etale definition exclusively, for pretty much the whole book, to handle germs of holomorphic functions.

-

But wouldn't holomorphic functions and complex analysis on the whole be more easily understood using the "usual" definition of sheaves? – Jose Capco Nov 22 2009 at 19:36

I've generally thought so, but I haven't gone through this book in its entirety, so there might be specific places where the espace etale is more useful. – Charles Siegel Nov 22 2009 at 19:52

11 @ Charles. Here is how the accents go: "espace étalé" but "morphisme étale". As an aside, a very erudite friend of mine assures me that the past participle "étalé" [from the verb "étaler" = to spread] and the adjective "étale" (referring to an aspect of the sea related to tides) are etymologically unrelated. This sounds so utterly absurd that it must true. – Georges Elencwajg Nov 22 2009 at 22:31

...that it must be true. – Georges Elencwajg Nov 22 2009 at 22:34

1 I knew they were different, and that they were supposedly linguistically distinct, but I'm just rather bad at spelling in my native language (English), much less any other. – Charles Siegel Nov 22 2009 at 22:55

There is a generalization of a presheaf called a "fibered category" or a "Grothendieck fibration". This is analogous to the etale space construction for presheaves on O(X). Every presheaf in the sense of a presheaf taking values in Sets (most other constructions come from enriching presheaves of sets) can be identified with a very simple type of fibered category. In general, fibered categories with a fixed cleavage (something like a skeleton of pullbacks) define a contravariant pseudofunctor taking values in the 2-category of categories. It is only a pseudofunctor because composition is not in general strictly associative, merely associative up to unique isomorphism. You should check out Vistoli's book on descent, fibered categories, and Grothendieck topologies here: http://homepage.sns.it/vistoli/descent.pdf . But to answer your question: to generalize the etale space for sheaves, you'll have to introduce the idea of descent for 1-stacks; then sheaves become degenerate stacks, i.e. 0-stacks.

If you're trying to deal with sheaves without resorting to too much category theory, you have to remember that presheaves of abelian groups, for example, are presheaves of abelian group objects in Sets. All of the categories that you've mentioned are monadic with respect to the forgetful functor adjunction with Sets, so we can always just take objects of that sort in the category of sets. It's the reason why a "presheaf of topological spaces" doesn't really make a lot of sense, since Top is not algebraic over Sets. So there is no good way to define sheaves taking values in an arbitrary category without substantially increasing the generality. If you aren't familiar with what I'm talking about specifically, Mac Lane's "Sheaves in Geometry and Logic" has a very detailed explanation of how the etale space works, and how we can produce sheaves that take values in algebraic categories, but not arbitrary categories. It also has a very in-depth construction of the etale space for presheaves of sets, and it proves the equivalence of categories between the full subcategory of sheaves of sets and the full subcategory of "bundles" (Mac Lane's terminology here, so don't get confused when he calls the etale space the etale bundle) called etale spaces.

-

There is an important application of the "other definition" in arithmetic geometry: it is used to give a canonical action of the Frobenius on a sheaf defined over a finite field. There is an equivalence of categories between constructible sheaves ("usual sheaves") on a variety $X$ and algebraic spaces etale over $X$ ("other sheaves"). Now one can use the Frobenius action on spaces and translate it back through the equivalence into an action on sheaves.

-
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 54, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9107093214988708, "perplexity_flag": "head"}
http://mathoverflow.net/questions/72106?sort=oldest
## Strong topology

Let $E$ and $F$ be locally convex topological vector spaces (LCS) and let $E^{\star}$ and $F^{\star}$ denote the strong duals of $E$ and $F$, respectively. The dual of $E^{\star}$, endowed with the $\beta(E^{\star\star}, E^{\star})$ topology, is usually denoted by $E^{\star\star}$ and is called the double dual of $E$.

Question 1 Is it true that the topology $\beta(E^{\star}, E^{\star\star})$ on $E^{\star}$ is always finer than $\beta(E^{\star}, E)$ (on $E^{\star}$), apart from the case when $E$ is reflexive and these topologies coincide?

Fact 1 It is well known that if $E$ is an (F)-space then the (initial) (F)-space topology coincides with the strong topology $\beta(E, E^{\star})$.

Question 2 Can we find a non-metrizable locally convex space for which the above sentence is true? (I.e., instead of an (F)-space we put a non-metrizable locally convex space.)

Thank you in advance for any help.

-

## 1 Answer

For the first question: the strong topology is the polar topology generated by all weakly bounded subsets. The weakly bounded subsets of $E$ are also weakly bounded in $E^{\ast\ast}$, since $E\subset E^{\ast\ast}$ and both pairings are with the same dual space $E^{\ast}$. Therefore $\beta(E^{\ast},E^{\ast\ast})$ is finer than $\beta(E^{\ast},E)$.

For the second question, did you try the space of test functions, i.e., infinitely differentiable functions with compact support? More generally, Hausdorff barrelled spaces have the property you want, so you should look for spaces in the class of barrelled spaces.

-

3 I have found a theorem in "Topological Vector Spaces" by Adasch, Ernst and Keim: an lcs space $(E, \tau)$ is barrelled if and only if $\tau=\beta(E, E^*)$. This answers Question 2 completely in the class of lcs spaces. – Tomek Kania Aug 17 2011 at 0:29

Thank you for the comment. In fact I don't know much about barrelled spaces except the notions and some simple properties :-) – Đức Anh Aug 17 2011 at 12:41
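Unwinding the first paragraph of the answer (a routine check of the definitions, spelled out for completeness): both topologies are given by sup-seminorms over bounded sets, and every generating seminorm of the first family occurs in the second.

```latex
\[
p_B(f)=\sup_{x\in B}\lvert f(x)\rvert ,\qquad
\beta(E^{\star},E):\ B\subset E\ \ \sigma(E,E^{\star})\text{-bounded},\qquad
\beta(E^{\star},E^{\star\star}):\ B\subset E^{\star\star}\ \ \sigma(E^{\star\star},E^{\star})\text{-bounded}.
\]
% A sigma(E,E*)-bounded set B, viewed inside E** via the canonical map
% E -> E**, is tested against exactly the same functionals f in E*, hence is
% sigma(E**,E*)-bounded; so every seminorm p_B defining beta(E*,E) also
% appears among those defining beta(E*,E**), which is precisely the claimed
% refinement.
```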
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9224053025245667, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2008/05/12/commutativity-in-series-iii/?like=1&source=post_flair&_wpnonce=5092e672ce
# The Unapologetic Mathematician

## Commutativity in Series III

Okay, here's the part I promised I'd finish last Friday. How do we deal with rearrangements that "go to infinity" more than once? That is, we chop up the infinite set of natural numbers into a bunch of other infinite sets, add each of these subseries up, and then add the results up. If the original series was absolutely convergent, we'll get the same answer.

First of all, if a series $\sum_{k=0}^\infty a_k$ converges absolutely, then so does any subseries $\sum_{j=0}^\infty a_{p(j)}$, where $p$ is an injective (but not necessarily bijective!) function from the natural numbers to themselves. For instance, we could let $p(j)=2j$ and add up all the even terms from the original series. To see this, notice that at any finite $n$ we have a maximum value $N=\max\limits_{0\leq j\leq n}p(j)$. Then we find

$\displaystyle\left|\sum\limits_{j=0}^na_{p(j)}\right|\leq\sum\limits_{j=0}^n\left|a_{p(j)}\right|\leq\sum\limits_{k=0}^N\left|a_k\right|\leq\sum\limits_{k=0}^\infty\left|a_k\right|$

So the sequence of partial sums of absolute values of the subseries is increasing and bounded above, and thus converges.

Now let's let $p_0$, $p_1$, $p_2$, and so on be a countable collection of functions defined on the natural numbers. We ask that

• Each $p_n$ is injective.
• The image of $p_n$ is a subset $P_n\subseteq\mathbb{N}$.
• The collection $\left\{P_0,P_1,P_2,...\right\}$ is a partition of $\mathbb{N}$. That is, these subsets are mutually disjoint, and their union is all of $\mathbb{N}$.

If $\sum_{k=0}^\infty a_k$ is an absolutely convergent series, we define $\left(b_n\right)_j=a_{p_n(j)}$: the subseries defined by $p_n$. Then from what we said above, each $\sum_{j=0}^\infty\left(b_n\right)_j$ is an absolutely convergent series whose sum we call $s_n$. We assert now that $\sum_{n=0}^\infty s_n$ is an absolutely convergent series whose sum is the same as that of $\sum_{k=0}^\infty a_k$.

Let's set $t_m=\sum_{n=0}^m\left|s_n\right|$. Since $\left|s_n\right|\leq\sum_{j=0}^\infty\left|\left(b_n\right)_j\right|$ for each $n$, we have

$\displaystyle t_m\leq\sum\limits_{j=0}^\infty\left|\left(b_0\right)_j\right|+...+\sum\limits_{j=0}^\infty\left|\left(b_m\right)_j\right|=\sum\limits_{j=0}^\infty\left(\left|\left(b_0\right)_j\right|+...+\left|\left(b_m\right)_j\right|\right)$

But this is just the sum of a bunch of absolute values from the original series, and so is bounded by $\sum_{k=0}^\infty\left|a_k\right|$. So the series of absolute values of $s_n$ has bounded partial sums, and so $\sum_{n=0}^\infty s_n$ converges absolutely. That it has the same sum as the original is another argument exactly analogous to (but more complicated than) the one for a simple rearrangement, and for associativity of absolutely convergent series.

This pretty much wraps up all I want to say about calculus for now. I'm going to take a little time to regroup before I dive into linear algebra in more detail than the abstract algebra I covered before. But if you want to get ahead, go back and look over what I said about rings and modules. A lot of that will be revisited and fleshed out in the next sections.

Posted by John Armstrong | Analysis, Calculus

## 3 Comments »

1. [...] series converges absolutely, we can adjust the order of summations freely. Indeed, we've seen examples of other rearrangements that all go through as soon as the convergence is [...] Pingback | September 16, 2008 | Reply

2. [...] Like when we translated power series, I'm going to sort of wave my hands here, motivating it by the fact that absolute convergence makes things nice. [...] Pingback | September 22, 2008 | Reply

3. [...] union is finite, but absolute convergence will give us all sorts of flexibility to reassociate and rearrange our [...] Pingback | June 23, 2010 | Reply
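As a quick numerical illustration of the theorem (a hypothetical check, not from the original post): take the absolutely convergent series $\sum_{k=0}^\infty (-1)^k/(k+1)^2 = \pi^2/12$, partition the indices into evens and odds via $p_0(j)=2j$ and $p_1(j)=2j+1$, and verify that summing the two subseries separately recovers the same total.

```python
import math

# Terms of an absolutely convergent series: a_k = (-1)^k / (k+1)^2.
def a(k):
    return (-1) ** k / (k + 1) ** 2

N = 10 ** 6  # number of terms kept in each truncated sum

direct = sum(a(k) for k in range(N))

# Partition the natural numbers into evens and odds:
# p_0(j) = 2j and p_1(j) = 2j + 1, then sum each subseries.
s0 = sum(a(2 * j) for j in range(N // 2))      # subseries over P_0
s1 = sum(a(2 * j + 1) for j in range(N // 2))  # subseries over P_1

print(direct, s0 + s1)     # the two totals agree, up to truncation error
print(math.pi ** 2 / 12)   # the exact value of the sum, for reference
```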
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 28, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.922601044178009, "perplexity_flag": "head"}