http://en.wikipedia.org/wiki/Square-cube_law
# Square-cube law

The square-cube law (or cube-square law) is a mathematical principle, applied in a variety of scientific fields, which describes the relationship between the volume and the surface area of a shape as its size increases or decreases. It was first described in 1638 by Galileo Galilei in his Two New Sciences. The principle states that, as a shape grows in size, its volume grows faster than its surface area. Applied to the real world, this principle has many implications which are important in fields ranging from mechanical engineering to biomechanics. It helps explain phenomena including why large mammals like elephants have a harder time cooling themselves than small ones like mice, and why there are fundamental limits to the size of the sand castles one can build.

## Description

The square-cube law can be stated as follows: When an object undergoes a proportional increase in size, its new volume is proportional to the cube of the multiplier and its new surface area is proportional to the square of the multiplier. Represented mathematically:

$$v_2=v_1\left(\frac{\ell_2}{\ell_1}\right)^3$$

where $v_1$ is the original volume, $v_2$ is the new volume, $\ell_1$ is the original length and $\ell_2$ is the new length. Which length is used does not matter.

$$A_2=A_1\left(\frac{\ell_2}{\ell_1}\right)^2$$

where $A_1$ is the original surface area and $A_2$ is the new surface area.

For example, a cube with a side length of 1 metre has a surface area of 6 m² and a volume of 1 m³. If the dimensions of the cube were doubled, its surface area would increase to 24 m² and its volume would increase to 8 m³. This principle applies to all solids.

## Applications

### Engineering

When a physical object maintains the same density and is scaled up, its mass increases by the cube of the multiplier while its surface area increases only by the square of that multiplier. This means that when the larger version of the object is accelerated at the same rate as the original, more pressure is exerted on the surface of the larger object.

Consider a simple example: a body of mass $M$ undergoing an acceleration $a$, where $A$ is the area of the surface upon which the accelerating force acts. The force due to the acceleration is $F = Ma$ and the thrust pressure is $T = \frac{F}{A} = M\frac{a}{A}$.

Now let the object be scaled up by a multiplier factor $x$, so that it has a new mass $M' = x^3 M$, and the surface upon which the force acts has a new area $A' = x^2 A$. The new force due to acceleration is $F' = x^3 Ma$, and the resulting thrust pressure is

$$\begin{align} T' &= \frac{F'}{A'}\\ &= \frac{x^3}{x^2} \times M\frac{a}{A}\\ &= x \times M \frac{a}{A}\\ &= x \times T \end{align}$$

Thus, just scaling up the size of an object, keeping the same material of construction (density) and the same acceleration, increases the thrust pressure by the same scaling factor. This indicates that the object would have less ability to resist stress and would be more prone to collapse while accelerating. This is why large vehicles perform poorly in crash tests and why there are limits to how high buildings can be built. Similarly, the larger an object is, the less other objects resist its motion, causing its deceleration.
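The $x$-scaling of the thrust pressure is easy to check numerically. The following is a minimal C sketch (the unit cube, unit density, and the particular acceleration value are arbitrary illustrative choices, not part of the article):

```c
#include <stdio.h>

int main(void) {
    const double a  = 9.8;   /* acceleration, m/s^2 (arbitrary value) */
    const double A1 = 6.0;   /* surface area of the unit cube, m^2    */
    const double M1 = 1.0;   /* mass of the unit cube at unit density */
    const double T1 = M1 * a / A1;   /* baseline thrust pressure      */

    for (double x = 1.0; x <= 8.0; x *= 2.0) {
        double A = A1 * x * x;       /* area grows with the square of x */
        double M = M1 * x * x * x;   /* mass grows with the cube of x   */
        double T = M * a / A;        /* thrust pressure T = F/A = Ma/A  */
        printf("x = %.0f  area = %7.1f m^2  mass = %7.1f  T/T1 = %.1f\n",
               x, A, M, T / T1);
    }
    return 0;
}
```

As expected, the T/T1 column grows linearly with $x$ even though mass grows as $x^3$, which is the content of the derivation above.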
#### Engineering examples

• Steam engine: James Watt, working as an instrument maker for the University of Glasgow, was given a scale-model Newcomen steam engine to put in working order. Watt recognized the problem as being related to the square-cube law, in that the surface-area-to-volume ratio of the model's cylinder was greater than that of the much larger commercial engines, leading to excessive heat loss.[1] Experiments with this model led to Watt's famous improvements to the steam engine.

• Airbus A380: the lift and control surfaces (wings, rudders and elevators) are relatively big compared to the fuselage of the airplane. In, for example, a Boeing 737 these relationships seem much more 'proportional', but designing an A380-sized aircraft by merely magnifying the design dimensions of a 737 would result in wings that are too small for the aircraft weight, because of the square-cube rule.

• A clipper needs relatively more sail surface than a sloop to reach the same speed, meaning there is a higher sail-surface-to-sail-surface ratio between these craft than there is a weight-to-weight ratio.

### Biomechanics

If an animal were scaled up by a considerable amount, its relative muscular strength would be severely reduced, since the cross section of its muscles would increase by the square of the scaling factor while its mass would increase by the cube of the scaling factor. As a result, cardiovascular and respiratory functions would be severely burdened. In the case of flying animals, the wing loading would be increased if they were scaled up, and they would therefore have to fly faster to gain the same amount of lift. Air resistance per unit mass is also higher for smaller animals, which is why a small animal like an ant cannot be seriously hurt by a fall from any height.

As was elucidated by J. B. S. Haldane, large animals do not look like small animals: an elephant cannot be mistaken for a mouse scaled up in size. The bones of an elephant are necessarily proportionately much larger than the bones of a mouse, because they must carry proportionately higher weight. To quote from Haldane's seminal essay On Being the Right Size, "...consider a man 60 feet high...Giant Pope and Giant Pagan in the illustrated Pilgrim's Progress.... These monsters...weighed 1000 times as much as Christian. Every square inch of a giant bone had to support 10 times the weight borne by a square inch of human bone. As the human thigh-bone breaks under about 10 times the human weight, Pope and Pagan would have broken their thighs every time they took a step." The giant monsters seen in horror movies (e.g., Godzilla or King Kong) are also unrealistic, as their sheer size would force them to collapse.

However, it is no coincidence that the largest animals in existence today are aquatic: the buoyancy of water negates to some extent the effects of gravity. Therefore, sea creatures can grow to very large sizes without the same musculoskeletal structures that would be required of similarly sized land creatures.

## See also

• Biomechanics
• Allometric law
• "On Being the Right Size," an essay by J. B. S. Haldane that considers the changes in shape of animals that would be required by a large change in size
• Surface-area-to-volume ratio

## References

1. Rosen, William (2012). The Most Powerful Idea in the World: A Story of Steam, Industry and Invention. University of Chicago Press. p. 98. ISBN 978-0226726342
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 14, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9593586921691895, "perplexity_flag": "middle"}
http://mathhelpforum.com/discrete-math/117068-sum-squares-even-numbers.html
# Thread:

1. ## Sum of squares of even numbers

What is the value of (sum of $j^2$ where $j$ is an element of $S$) where $S$ = {all even numbers greater than zero and less than 11}?

2. I assume the expression is $\sum_{j\in S} j^2$ where $S=\{n\in\mathbb{N}\mid 0< n<11,\ n\mbox{ is even}\}$. I don't really know what to say except that you need to know what the notations $\sum$ and $\{\dots\mid\dots\}$ mean. If you know this, you can calculate this quantity directly, without using any general formula.

3. Originally Posted by erinneedshelp
what is the value of (sum of $j^2$ where $j$ is an element of $s$) where $s$ = {all even numbers greater than zero and less than 11}

2^2 + 4^2 + 6^2 + 8^2 + 10^2 = ....
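For the record, the arithmetic the last reply leaves to the original poster is easy to check mechanically; a minimal C sketch (the loop bounds simply encode "even, greater than zero, less than 11"):

```c
#include <stdio.h>

int main(void) {
    int sum = 0;
    /* even numbers strictly between 0 and 11: 2, 4, 6, 8, 10 */
    for (int j = 2; j < 11; j += 2)
        sum += j * j;
    printf("%d\n", sum);   /* prints 220 */
    return 0;
}
```

It prints 220, i.e. 4 + 16 + 36 + 64 + 100.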
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9358394145965576, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/207609/the-measurability-of-convex-sets?answertab=active
# The measurability of convex sets

How can one prove the measurability of convex sets in $\mathbb{R}^n$? I have seen a proof, but it was too long and not very intuitive. If you have seen any, please post it here. -

What do you know about the topological properties of convex sets and the Lebesgue Measure on $\mathbb{R}^n$? – Tim Duff Oct 5 '12 at 6:51
There is the following way to prove it, which is probably not the most elementary. One can assume that the convex set has non-empty interior. Then the projection onto the closure can be shown to be Lipschitz and differentiable everywhere, except at the boundary. By Rademacher's theorem, it implies that the boundary is Lebesgue negligible, and the measurability follows easily. – Ahriman Oct 5 '12 at 8:08
I suspect that if the interior is empty, then you can show that the interior of the affine hull is empty, and from this show that the set is contained in a hyperplane, and hence has measure zero. – copper.hat Oct 5 '12 at 8:10
If you can show that the boundary of your set $\partial C$ has measure zero by squeezing it and using convexity, then $\partial C \cap C$ is measurable by completeness of the Lebesgue measure. You can also reduce the problem to the bounded case by cutting off your set with a countable collection of larger and larger balls. – Nick Alger Oct 5 '12 at 9:59
Note that the answer depends on what you mean by "measurable". A convex set need not be Borel measurable. (Take the open unit ball together with a non-Borel subset of the unit sphere.) – Nate Eldredge Oct 5 '12 at 12:48

## 2 Answers

Let $C$ be your convex set, and assume without loss of generality(1) that it contains zero as an interior point and is bounded. The question boils down to showing that $\partial C$ has measure zero(2), which can be shown by squeezing the boundary between the interior $C^\circ$ and a slightly expanded version of the interior, $\frac{1}{1-\epsilon}C^\circ$.

Let $p \in \partial C$. Since $0$ is an interior point, there is a small ball $B_r(0)\subset C$, and by convexity the point $q:=(1-\epsilon)p$ lies in the interior of the cone $K:=\{sp + (1-s)x: x \in B_r(0),\ 0\le s\le 1 \}$, and therefore $q \in C^\circ$. But then $p=\frac{1}{1-\epsilon}q \in \frac{1}{1-\epsilon}C^\circ$. Thus $$\partial C \subset \frac{1}{1-\epsilon}C^\circ.$$ Since for any set the boundary and the interior are disjoint, $$\partial C \subset \frac{1}{1-\epsilon}C^\circ \setminus C^\circ.$$ Since the interior of a convex set is convex(3) and $C^\circ$ contains zero, $C^\circ$ is contained in its dilation: $$C^\circ \subset \frac{1}{1-\epsilon}C^\circ.$$ Finally, since we have assumed $C^\circ$ is bounded, the measure of the boundary, $$\lambda(\partial C) \le \lambda\left(\frac{1}{1-\epsilon}C^\circ \setminus C^\circ\right) = \left(\frac{1}{1-\epsilon}\right)^n\lambda(C^\circ)-\lambda(C^\circ),$$ can be made as small as desired by taking $\epsilon \rightarrow 0$.

Tying up loose ends:

(1):
• If the set is not bounded, cut it off with a countable collection of successively larger balls. Since the countable union of measurable sets is measurable, this suffices.
• If the set $C$ contains some interior point, translate the set so that the interior point is at zero. Since the Lebesgue measure is translation invariant, this suffices.
• If the set $C$ contains no interior points, then all its points must lie within an $(n-1)$-dimensional plane, otherwise $C$ would contain an $n$-tetrahedron (simplex), and a simplex contains interior points. Thus $C$ would lie within a measure zero set and the result is trivial.
(2): • The boundary, closure, and interior of a set are always closed, closed, and open respectively, so they are always measurable. • If $\partial C$ has measure zero, then $\partial C \cap C$ is measurable and has measure zero by completeness of the Lebesgue measure. • Once you have measurability of $\partial C \cap C$, you have measurability of $C$ since, $$C=(\partial C \cap C) \cup C^\circ.$$ (3): • The proof that taking interiors preserves convexity is straightforward from the definitions but a little tedious. See lemma 4 here. Edit: To add, the approach in the answer here: Why does a convex set have the same interior points as its closure? is similar to the reasoning in my post, and shines some light onto what's going on. The technique there could be adapted easily to prove the result here as well, and you would get a similar proof. - So: proving that the boundary has measure zero shows not only Lebesgue measurability, but even that the Jordan content exists. – GEdgar Oct 5 '12 at 14:05 Yeah, for convex sets the interior coincides with the interior of the closure, so it makes sense that it would have Jordan measure as well. – Nick Alger Oct 6 '12 at 12:57 A relatively simple proof of a more general result (measurability with respect to every complete product measure of $\sigma$-finite Borel measures) can be found in Lang, Robert A note on the measurability of convex sets. Arch. Math. (Basel) 47 (1986), no. 1, 90--92. -
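As a concrete illustration of the squeezing bound (a worked example under the simplest possible assumption, namely that $C$ is the closed unit ball in $\mathbb{R}^n$, so $C^\circ$ is the open unit ball):

$$\lambda\left(\tfrac{1}{1-\epsilon}C^\circ \setminus C^\circ\right) = \left(\left(\tfrac{1}{1-\epsilon}\right)^n - 1\right)\lambda(C^\circ) \longrightarrow 0 \quad\text{as } \epsilon\to 0,$$

so $\lambda(\partial C)=0$: the unit sphere is Lebesgue-null, and $C$ is measurable as the union of its open interior and a null portion of its boundary.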
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 28, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9549649357795715, "perplexity_flag": "head"}
http://mathoverflow.net/questions/108167/three-half-circles-on-the-plane-may-not-meet-nicely/108179
## Three half circles on the plane may not meet nicely

Let $H$ denote the union of the northern hemisphere of the unit circle $S^{1}$ with the interval $[-1,1]$ on the $x$-axis. That is, $H=\{(x,\sqrt{1-x^{2}}):-1\le x\le 1\}\cup\{(x,0):-1\le x\le 1\}$.

Let us say that two copies of $H$ meet nicely if they intersect in exactly 6 points, e.g., as the two pictures in the original post show. Note that the picture on the right shows two half-circles meeting nicely but whose centers do not lie inside their partner's half disk.

Now, if we have three copies of $H$, it may not seem possible to arrange them so that they meet nicely, i.e., that both of the following two conditions hold: 1. Any two meet nicely, and 2. The intersection of the three is empty. Is this true? Or, if I am mistaken, I would appreciate it if someone would show me, or describe, the desired arrangement.

EDIT: I am considering ALWAYS half-circles, i.e., copies of $H$. No need to give answers related to half-disks.

EDIT: It is possible to arrange three copies of $H$ so that both 1. and 2. hold (see the selected answer below). -

Where is the picture? But it is clear what you mean... – domotorp Sep 26 at 15:03
So, really, does the condition 2. mean exactly what it says? Or do you look at the intersection of the half-disks instead of the copies of $H$? – Ilya Bogdanov Sep 26 at 17:04
It means exactly what it says. That is, that the intersection of the three half-circles is empty. – VCF Sep 26 at 17:18
Could you look at areas? If when half circles meet nicely the area in common was always greater than a quarter of the area of a full circle, then any three half circles that meet nicely in pairs would have a point in common. – Kristal Cantwell Sep 26 at 17:37
Dear VCF, I'm afraid there is nothing we can do to undo the transition to community wiki. – S. Carnahan♦ Sep 27 at 5:26

## 2 Answers

1. This is the answer under the assumption that the condition 2. means exactly what it says. Consider a regular triangle with side $2+\varepsilon$ and three diameters in the middles of its sides. If you construct the half-circles towards the triangle on these diameters, you obtain the desired example.

Lemma. Assume that the two copies of $H$ meet nicely. Consider their supporting half-planes determined by the diameters. Then their intersection is an acute angle, and the centers belong to its sides. (Possibly this angle is degenerate; in this case, it should be 0 but not $\pi$, which means that the intersection should be a strip but not a half-plane.)

Proof. If the two diameters do not intersect, then each of three pairs of the form (diameter, half-circle) and (half-circle, half-circle) meet at two points. Now, consider a point $A$ of intersection of the lines supporting the diameters. At least one diameter (say, $d_1$) does not contain $A$. Hence, if the angle in the Lemma statement is not acute, then the half-circle on $d_1$ cannot intersect $d_2$. Next, the center $C_1$ clearly lies on the side of this angle. Finally, the projection of $O_2$ onto $d_1$ should lie on the segment $d_1$, hence $O_2$ is also on the side of the angle (but not on its prolongation).

Finally, assume that the diameters intersect. Then each half-circle can intersect the other diameter in at most one more point, and the total number of the intersection points is less than 6. Lemma is proved.

Now we can prove that four copies of $H$ cannot pairwise meet nicely.
Let $c_{ij}$ be the angle from the lemma for $H_i$ and $H_j$. It is easy to see that $c_{12}$, $c_{13}$, $c_{23}$ should form an acute-angled triangle with the centers $C_1$, $C_2$, $C_3$ on its sides (just try to add the third diameter to $c_{12}$!). But then it is impossible to add the fourth half-plane --- these four half-planes should now form a quadrilateral with four acute angles!

2. Now let us assume that you speak of the half-disks. Then the answer is positive. From the previous paragraph, we see that the three diameters lie inside the three sides of some acute triangle $XYZ$, respectively. Now consider the three distances between the centers. If all three are less than $\sqrt3$, then by Jung's theorem they may be covered by the unit disk, and the center of this disk belongs to all three half-disks. Otherwise, assume that $C_1C_2\geq \sqrt3$, where $C_1$ and $C_2$ are the centers on the sides $XY$ and $XZ$, respectively. We have $d(C_1,XZ)\leq 1$, otherwise the respective half-circle and segment do not intersect. But then the projection of $C_1$ onto $XZ$ is at least $\sqrt2$ away from $C_2$, hence the first half-circle cannot intersect the second diameter twice. So in this case we also get a contradiction.

I may expand any part of the above sketch. -

About the paragraph right after the picture. Are you saying that it is impossible to arrange four copies of $H$ so that both 1. Any two meet nicely AND 2. Any three or more have empty intersection? – VCF Sep 26 at 17:49
Yes, I meant that the four copies are impossible. I've tried to add some words (in fact, I've corrected some arguments...). If it is not clear (or not completely valid) --- I may expand it. – Ilya Bogdanov Sep 26 at 18:11
But, in fact, it seems that I do not use condition 2 at all... – Ilya Bogdanov Sep 26 at 18:13
What do you mean by "supporting half-planes determined by the diameters"? My understanding is that you mean planes orthogonal to the diameters passing through the centers of the half-circles. – VCF Sep 26 at 18:49
The supporting half-plane is just a half-plane bounded by the line containing the diameter and containing the copy of $H$ to which this diameter belongs. Surely, the angle may degenerate to a strip between the parallel lines, but not to the half-plane. Then the next arguments remain valid. Or, if you wish, you may rotate a copy a bit, preserving all the properties. – Ilya Bogdanov Sep 26 at 19:52

This is off the top of my head; please take it skeptically and check all assumptions, visible and hidden. Focus on the line segment and its end points, which I call p and q, for a sample H. I have convinced myself that if p and q are on opposite sides of another line segment for a different half circle H', then there are at most 4 points of intersection, so H and H' do not meet nicely. Now consider segments of two of the semicircles that meet nicely. They each lie fully in a half plane defined by the other segment. The third segment when placed lies in some intersection of two of the half planes. But in order to meet the other two semicircles nicely, you have to place the circular arc in a way to intersect both line segments. If you are in the middle, this cannot be done. I realize this is far from rigorous, but you may be able to firm it up and use it.
Gerhard "Needs To Drink More Coffee" Paseman, 2012.09.26 - I just noticed Karl's suggestion, which seems cleaner. You might try both and see which can extend to semicircles of different sizes. Gerhard"Ask Me About System Design" Paseman, 2012.09.26 – Gerhard Paseman Sep 26 at 16:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 48, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9276682138442993, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/85314/find-a-couple-of-integers-such-that-the-third-power-of-a-given-natural-can-be-wr?answertab=oldest
# Find a couple of integers such that the third power of a given natural can be written as the difference of the squares of those integers

Given a natural number $n$, find integers $a, b$ such that $n^3=a^2-b^2$. I've tried, but I'm a bit rusty. Please help. -

Are you trying to characterize the numbers $n$ whose cubes are equal to a difference of squares? – Dimitrije Kostic Nov 24 '11 at 18:03
Nope, given the natural number I just want to find two integers that satisfy the equation above. Not interested in finding them all. – Chu Nov 24 '11 at 18:10
What natural number $n$ are you talking about? – Dimitrije Kostic Nov 24 '11 at 18:17
Hint: $a^2-b^2=(a-b)(a+b)$. How can you make this product exactly $n^3$? – N. S. Nov 24 '11 at 18:25
@Dimitrije Kostic It is possible to do it for ALL numbers. – N. S. Nov 24 '11 at 18:26

## 2 Answers

Note that $$n^3=\left(\frac{n^2+n}{2}\right)^2-\left(\frac{n^2-n}{2}\right)^2.$$

Comment: The magic identity that solved the problem in fact did not (for me) come by magic. Given a number $K$, we want to find numbers $a$ and $b$ such that $a^2-b^2=K$. So we want $(a+b)(a-b)=K$. This means that $a+b$ and $a-b$ are two integers whose product is $K$. Suppose that $x$ and $y$ are any two integers whose product is $K$. If we set $a+b=x$ and $a-b=y$, then we will have $(a+b)(a-b)=K$. But will $a$ and $b$ be integers? Solve the system $a+b=x$, $a-b=y$. Algebra gives $a=\frac{x+y}{2}$, $b=\frac{x-y}{2}$. In order to make sure that $a$ and $b$ are integers, $x+y$ (and therefore $x-y$) must be even. This means that $x$ and $y$ have to be of the same parity (both odd or both even). Can we express $n^3$ as a product of two numbers of the same parity? If $n$ is odd, we can use $x=n^3$, $y=1$. That won't work if $n$ is even. But $x=n^2$, $y=n$ always works, because $n^2$ and $n$ have the same parity. In general, the integer $K$ is a difference of two squares unless $K$ is even but not divisible by $4$. So $\pm 2$, $\pm 6$, $\pm 10$, $\pm 14$, and so on cannot be expressed as a difference of two squares, and everybody else can be. -

@Andres: I wanted to point something out: Recall that we have the surprising identity $$\sum_{k=1}^n k^3=\left(\sum_{k=1}^n k\right)^2 =\left(\frac{n^2+n}{2}\right)^2.$$ A short proof of this follows from your identity above, along with the fact that the series telescopes. – Eric♦ Nov 24 '11 at 19:38
Nice observation! It gives an attractive explanation of the sum of cubes formula. Alternately, we have the combinatorial version $\binom{n+1}{2}^2-\binom{n}{2}^2=n^3$. – André Nicolas Nov 24 '11 at 20:05

If $n$ is odd, then $a=(n^3+1)/2$ and $b=(n^3-1)/2$ will work. If $n$ is even, then $a=(n^3+4)/4$ and $b=(n^3-4)/4$ will work (note that $4$ divides $n^3$, and here $a-b=2$ and $a+b=n^3/2$, so $a^2-b^2=n^3$). -
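Both answers are easy to sanity-check numerically; a minimal C sketch (the ranges of $n$ are arbitrary):

```c
#include <stdio.h>

int main(void) {
    /* Universal identity: n^3 = ((n^2+n)/2)^2 - ((n^2-n)/2)^2 */
    for (long long n = 1; n <= 6; ++n) {
        long long a = (n * n + n) / 2;
        long long b = (n * n - n) / 2;
        printf("n=%lld: %lld^2 - %lld^2 = %lld, n^3 = %lld\n",
               n, a, b, a * a - b * b, n * n * n);
    }
    /* Even case of the second answer: a = (n^3+4)/4, b = (n^3-4)/4 */
    for (long long n = 2; n <= 8; n += 2) {
        long long a = (n * n * n + 4) / 4;
        long long b = (n * n * n - 4) / 4;
        printf("even n=%lld: %lld^2 - %lld^2 = %lld\n", n, a, b, a * a - b * b);
    }
    return 0;
}
```

For example, $n=2$ gives $3^2 - 1^2 = 8 = 2^3$ in both loops.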
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 56, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9327953457832336, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/189686/sum-of-infinite-series-with-each-element-containing-infinite-product
Sum of an infinite series with each element containing an infinite product

We assume $a_j=f(j)$, where $\frac{\alpha}{j}\leq f(j)\leq c$ ($\alpha>1$ and $0<c<1$) for $j$ large enough. Basically, I want to calculate the order of the infinite series \begin{eqnarray*} \pi_j=\sum_{k=j}^{\infty}\frac{A_{j,k}}{(1-A_{j,k})^2}, \end{eqnarray*} where $A_{j,k}=\prod_{j\leq l\leq k}(1-a_l)$. My question is how to get the general order of $\pi_j$ (represented in terms of $f(j)$) when $j$ is large enough.

I have a conjecture that the order of $\pi_j$ may be exactly $[f(j)]^{-2}$. The reason is that for the two extreme cases, I can calculate the order. When $f(j)=\frac{\alpha}{j}$, the order of $\pi_j$ is $j^2$. When $f(j)=c$, the order of $\pi_j$ is $1$, which is consistent with my conjecture. In fact, note that the order of the first term ($k=j$) is $\frac{1-a_j}{a_j^2}$, i.e. $[f(j)]^{-2}$. For the two special cases, the sum of the series has the same order as its first term. So for the general case, I guess the sum would still be of the same order as the first term, which is $[f(j)]^{-2}$. -
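The conjecture can at least be probed numerically. Below is a rough C sketch (the choices $c=0.3$ and $\alpha=2$ are arbitrary, and the truncation of the infinite sum is crude, so this is only an order-of-magnitude check, not evidence of the exact constant):

```c
#include <stdio.h>

/* Truncated evaluation of pi_j = sum_{k>=j} A/(1-A)^2,
   with A = A_{j,k} = prod_{l=j}^{k} (1 - f(l)). */
static double pi_j(double (*f)(int), int j) {
    double A = 1.0, sum = 0.0;
    for (int k = j; k < j + 5000000; ++k) {
        A *= 1.0 - f(k);
        double t = A / ((1.0 - A) * (1.0 - A));
        sum += t;
        if (t < 1e-15 * sum) break;   /* crude stopping rule */
    }
    return sum;
}

static double f_const(int j) { (void)j; return 0.3; }  /* f(j) = c = 0.3      */
static double f_decay(int j) { return 2.0 / j; }       /* f(j) = alpha/j, a=2 */

int main(void) {
    for (int j = 10; j <= 1000; j *= 10) {
        double c = 0.3, fj = 2.0 / j;
        /* If the conjecture holds, pi_j * f(j)^2 should stay bounded
           away from 0 and infinity as j grows. */
        printf("j=%4d  const: pi_j*f(j)^2 = %.4f   decay: pi_j*f(j)^2 = %.4f\n",
               j, pi_j(f_const, j) * c * c, pi_j(f_decay, j) * fj * fj);
    }
    return 0;
}
```

In both test cases the normalized quantity $\pi_j\,[f(j)]^2$ settles near a constant, consistent with the conjectured order.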
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9476418495178223, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/143533/solving-induction-prod-limits-i-1n-1-left1-frac1i-righti-fra/143539
# Solving Induction $\prod\limits_{i=1}^{n-1}\left(1+\frac{1}{i}\right)^{i} = \frac{n^{n}}{n!}$

I try to solve this by induction: $$\prod_{i=1}^{n-1}\left(1+\frac{1}{i} \right)^{i} = \frac{n^{n}}{n!}$$ This leads me to: $$\prod_{i=1}^{n+1-1}\left(1+\frac{1}{i}\right)^{i} = \frac{(n+1)^{n+1}}{(n+1)!} = \frac{(n+1)^{n}(n+1)}{n!(n+1)}$$ I tried to solve this that way: $$\prod_{i=1}^{n-1}\left(1+\frac{1}{i}\right)^{i}\left(1+\frac{1}{n}\right)^{n} = \frac{(n+1)^{n+1}}{(n+1)!}$$ Which is equivalent to: $$\frac{n^{n}}{n!}\left(1+\frac{1}{n}\right)^{n} = \frac{(n+1)^{n}(n+1)}{n!(n+1)}$$ I'm not sure if every step is right, but now I can't solve this further. Please help :) -

## 2 Answers

You are almost finished. Note that since $1+\frac{1}{n}=\frac{n+1}{n}$, we have $$\left(1+\frac{1}{n}\right)^n=\frac{(n+1)^n}{n^n}=\frac{(n+1)^{n+1}}{(n+1)n^n}.$$

Remark: The induction argument should be written up in a more formal style. Deal with the base case explicitly. Then do the induction step, showing that if the assertion holds for $n=k$, then it holds for $n=k+1$. Even though it is quite all right to do your "scratch" computations backwards, the writeup should be more direct. So for the induction step, we assume that for a given $k$, we have $$\prod_{i=1}^{k-1}\left(1+\frac{1}{i} \right)^{i} = \frac{k^{k}}{k!}\tag{$\ast$}.$$ We want to show that $$\prod_{i=1}^{k}\left(1+\frac{1}{i} \right)^{i} = \frac{(k+1)^{k+1}}{(k+1)!}.$$ Note that $$\prod_{i=1}^{k}\left(1+\frac{1}{i} \right)^{i} =\left( \prod_{i=1}^{k-1}\left(1+\frac{1}{i} \right)^{i} \right) \left(1+\frac{1}{k}\right)^k .$$ By the induction assumption $(\ast)$, the right-hand side is equal to $$\frac{k^{k}}{k!}\left(1+\frac{1}{k}\right)^k.$$ Continue. -

Thank you very much, nice trick ;) You helped a beginner very much :) – blang May 10 '12 at 15:41
Thank you for the structure tip too. Of course I used the induction step and wrote it down better than I did it here :) The main problem is I don't know the English words for induction beginning, step and presumption – blang May 10 '12 at 15:49

Hint: The LHS and RHS both satisfy $\rm\:f(n+1)/f(n) = (1+1/n)^n,\ f(1) = 1.\:$ But it's trivial to prove by induction the uniqueness of solutions of such first-order difference equations, which yields the sought equality: LHS = RHS. As I often emphasize, uniqueness theorems provide powerful tools for proving equalities. Note that the solution of such recurrences may be represented by (indefinite) products $$\rm f(n+1) = a_n\: f(n),\:\ f(1) = 1\iff f(n) = \prod_{k=1}^{n-1}\:\! a_k$$ Thus the uniqueness theorem yields that such products are well-defined. It is a gap in most courses that this fact is not proved (making many such inductive proofs circular). For more on "definitions by induction" see the award-winning Monthly exposition of Leon Henkin referenced here and here. This is a special case of telescopy.
For, as below, we can write the RHS as a product of its term ratios $$\rm\ g(n)\ =\ \frac{g(n)}{\color{red}{g(n-1)}}\ \frac{\color{red}{g(n-1)}}{\color{green}{g(n-2)}}\ \frac{\color{green}{g(n-2)}}{\cdots }\ \cdots\ \frac{\cdots}{\color{brown}{g(3)}}\ \frac{\color{brown}{g(3)}}{\color{blue}{g(2)}}\ \frac{\color{blue}{g(2)}}{1}$$ Then the proof amounts to saying that both expressions are equal because they are both products of the same expression (here $\rm\:(1+1/i)^i\:$). The usual ad-hoc induction proof of your problem amounts to essentially proving this uniqueness theorem in this specific case. But here, in fact, proving the general case is simpler than proving the special case, because it is much easier to see the telescopic cancellation without the obfuscating details of the special case. Moreover, one obtains a general proof that can be reused for all problems of this type. Who could ask for more? -

Sounds good :) I'm a total beginner, I have no idea what you mean ;) But thank you – blang May 10 '12 at 15:42
Wow, nice :) Sorry I accepted the answer above because it was the direct answer to my question, but this is amazing. I think I will use this stack more often – blang May 10 '12 at 16:00
1 @blang After you do a few more of these you might find it helpful to revisit this answer. It yields an algorithm for handling problems of this type - removing the need for any guesswork or intuitive leaps. It can be understood at high-school level if explained appropriately. You can find many more examples in my linked posts, both multiplicative and additive. – Gone May 10 '12 at 16:02
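The identity itself can be verified numerically for small $n$; a minimal C sketch (computing $n^n/n!$ as $\prod_{k=1}^{n} n/k$ to avoid overflow):

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    for (int n = 2; n <= 10; ++n) {
        double lhs = 1.0, rhs = 1.0;
        for (int i = 1; i <= n - 1; ++i)
            lhs *= pow(1.0 + 1.0 / i, i);   /* prod_{i=1}^{n-1} (1+1/i)^i */
        for (int k = 1; k <= n; ++k)
            rhs *= (double)n / k;           /* n^n / n! as prod n/k       */
        printf("n=%2d  LHS=%.6f  RHS=%.6f\n", n, lhs, rhs);
    }
    return 0;
}
```

The two columns agree to floating-point accuracy, as the telescoping argument predicts.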
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 11, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9492800235748291, "perplexity_flag": "head"}
http://mathhelpforum.com/geometry/186946-bird-seed.html
# Thread:

1. ## The bird and the seed

There is this bird sitting on the top of the bar with length "L2". The bird has to come down, pick a seed from the farm and go and sit on the top of the bar with length "L1". The distance between the two bars is "S". Propose an equation with which one can find the path of MINIMUM length for this operation. (Assume L1 is not equal to L2.)

2. ## Re: The bird and the seed

Originally Posted by Narek
There is this bird sitting on the top of the bar with length "L2". The bird has to come down, pick a seed from the farm and go and sit on the top of the bar with length "L1". The distance between the two bars is "S". Propose an equation with which one can find the path of MINIMUM length for this operation. (Assume L1 is not equal to L2.)

Assume that the point at which the bird picks up the seed is a distance d from the first post. Now write the expression for the length of the path that the bird will follow. Having done that, you need to determine what value of d minimises this path length. If you have problems with this stage, post your result for the first stage above and you will receive further assistance. CB

3. ## Re: The bird and the seed

To CaptainBlack: So based on what you said, we have an equation like this for the MINIMUM: (let's assume L1 = x and L2 = y)

(y)^2 + d^2 + (S-d)^2 + (x)^2 = S
(y)^2 + d^2 + S^2 - 2Sd + d^2 + x^2 = S
(y)^2 + 2d^2 + x^2 = S - S^2 + 2Sd

and then ... ?

4. ## Re: The bird and the seed

Originally Posted by Narek
To CaptainBlack: So based on what you said, we have an equation like this for the MINIMUM: (let's assume L1 = x and L2 = y) (y)^2 + d^2 + (S-d)^2 + (x)^2 = S

Why have you set this equal to $S$? If $x$ and/or $y>0$, the path must be longer than $S$. Also, the path consists of two straight legs, so its length is not the sum of those squares: letting $D$ denote the path length,

$D=\sqrt{y^2+d^2}+\sqrt{(S-d)^2+x^2}.$

This $D$ is what you need to find the minimum value of (together with the corresponding d that gives the minimum). You can do this by setting the derivative of $D$ with respect to d equal to zero. (There is an alternative method that requires you to reflect the second leg of the bird's path in the ground and observe that this new path should be a straight line for the minimum-length path.) CB

5. ## [SOLVED] Re: The bird and the seed

Well, I didn't solve this problem, and in the end my friend told me how to solve it. So here's how it is: You extend the line "L1" to the bottom (below the ground) and draw a line from the top of "L2" to that end (the blue line in the picture). The intersection with the ground is the MINIMUM point. Why? Because if it is any other point, then based on the triangle inequality (a + b > c) the path is not minimal.

6. ## Re: [SOLVED] Re: The bird and the seed

Originally Posted by Narek
Well, I didn't solve this problem, and in the end my friend told me how to solve it. So here's how it is: You extend the line "L1" to the bottom (below the ground) and draw a line from the top of "L2" to that end (the blue line in the picture). The intersection with the ground is the MINIMUM point. Why? Because if it is any other point, then based on the triangle inequality (a + b > c) the path is not minimal.

Which is the alternative method mentioned at the end of my earlier post. CB
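The reflection construction yields a closed form, $d^* = S\,L2/(L1+L2)$ with minimal length $\sqrt{S^2+(L1+L2)^2}$ (this formula is derived from the thread's reflection argument, not stated in the thread itself). A minimal C sketch comparing it against a brute-force search, with arbitrary illustrative values:

```c
#include <stdio.h>
#include <math.h>

/* Total flight path if the seed is picked up at distance d from the base
   of the starting bar of height L2 (bars are S apart; landing bar L1). */
static double path(double L1, double L2, double S, double d) {
    return sqrt(L2 * L2 + d * d) + sqrt(L1 * L1 + (S - d) * (S - d));
}

int main(void) {
    double L1 = 3.0, L2 = 5.0, S = 10.0;   /* illustrative values */

    /* Brute-force search over d */
    double best_d = 0.0, best = path(L1, L2, S, 0.0);
    for (double d = 0.0; d <= S; d += 1e-5) {
        double p = path(L1, L2, S, d);
        if (p < best) { best = p; best_d = d; }
    }

    /* Reflection method */
    double d_star = S * L2 / (L1 + L2);
    printf("brute force: d = %.4f  length = %.6f\n", best_d, best);
    printf("reflection : d = %.4f  length = %.6f\n",
           d_star, sqrt(S * S + (L1 + L2) * (L1 + L2)));
    return 0;
}
```

Both methods agree (here $d^* = 6.25$, length $\sqrt{164}\approx 12.8062$), confirming that the straightened, reflected path is the minimum.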
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9359639286994934, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/theory
# Tagged Questions

Pertains to any question concerning the ideas that are formulated, whether mathematically or not, to explain or describe physical phenomena.

5answers 276 views ### Theoretical physics and education: Does it really matter a great deal about what happens inside a black hole, or about Hawking radiation? [closed] I stumbled across this article http://blogs.scientificamerican.com/cross-check/2010/12/21/science-faction-is-theoretical-physics-becoming-softer-than-anthropology/ It got me thinking. Why do we ...
1answer 80 views ### 2nd order perturbation theory for harmonic oscillator I'm having some trouble calculating the 2nd order energy shift in a problem. I am given the perturbation: $\hat{H}'=\alpha \hat{p}$, where $\alpha$ is a constant, and $\hat{p}$ is given by: ...
1answer 114 views ### Can thought experiments qualify as actual research? I wondered whether thought experiments actually can be substituted for actual experimentation. I understand that in some cases it might be necessary, but can it be unnecessary over thinking sometimes? ...
1answer 85 views ### Can we project a 4D world using 3D video technology? Traditional movies, TV, etc, faithfully show our 3-dimensional world using 2 dimensions. So can we have a movie that shows a 4-dimensional world using 3D technology?
2answers 1k views ### Relationship between frequency and wavelength I am currently writing up a report for science class on the relationship between frequency and wavelength. And so I was wondering if anyone knew where I could find published results (literature value) ...
1answer 300 views ### Can a sound mathematical formula become a science theory? Can a sound mathematical formula become a science theory if it is constructed using a pattern creation process from sense-data, applied to observations by an inductive mapping, in contrast to ...
1answer 56 views ### Dense Spherical Black Hole Shell with a Region Inside I'm going to propose a thought experiment, based on two ideas. One: A uniform spherical shell, by the Shell Theorem, does not exert any gravitational force on objects existing in the interior of the ...
2answers 251 views ### Does the mathematics of physics require impure set theory? Suppose for the sake of this question that all mathematics is ultimately reducible to set theory in such a way that the only mathematical objects there really are, are sets. Now, there is a common ...
1answer 105 views ### Would it be possible to have an electron-less solid? We can create plasmas quite easily, indeed you can buy a plasma cutter and generate it all day long for less than \$500. Would it be possible to trap a plasma, say magnetically, and cool it so much ...
1answer 138 views ### Crucial Misconceptions about The Universe [closed] So I am piecing together a school project on the numerous misconceptions of the universe, which I plan to "provide proof against them" with information from various sources (one of the main ones will ...
1answer 136 views ### What is the origin of flavor? [duplicate] Possible Duplicate: Origin of lepton/quark generations? In the standard model (and in nature), Fermions appear in different generations, or flavors. Besides up and down quarks and ...
2answers 145 views ### Is the step of analytic continuation unavoidable or can you model around it? One sometimes considers the analytic continuation of certain quantities in physics and takes them seriously. More so than the direct or actual values actually. For example if you use the procedure for ...
0answers 235 views ### Relating the variance of the current operator to measurements (EDIT: Thanks to Nathaniel's comments, I have altered the question to reflect the bits that I am still confused about.) This is a general conceptual question, but for definiteness' sake, imagine a ...
5answers 418 views ### Is physics rigorous in the mathematical sense? I am a student studying Mathematics with no prior knowledge of Physics whatsoever except for very simple equations. I would like to ask, due to my experience with Mathematics: Is there a set of ...
4answers 1k views ### What are the challenges to achieving cold fusion? I am an absolute neophyte regarding physics. What are the challenges to achieving cold fusion? I'm not sure this is a duplicate of Why is cold fusion considered bogus?, because that question is ...
3answers 65 views ### Is physics very dependent on equipment? I always had the impression that physics depends a lot on particle accelerators and heavy machines for experimentation of new theories. I know there's the field called theoretical physics, but until ...
1answer 171 views ### Is omniscience impossible? I remember reading a brief note in Scientific American years ago about a mathematician/physicist who had published a paper that formally stated that no entity could both participate in a given system ...
2answers 108 views ### Has there been any serious work in how the world would look if basic physical laws were changed? Has there been any serious work in investigating how the world would look if certain basic physical laws were changed? Like if gravity or electromagnetism laws were changed to have different ...
0answers 88 views ### What is the origin of the many-body expansion? I'm looking for the original introduction of the many-body expansion (MBE) in the scientific literature. More specifically, I'm interested in a theoretical justification of the rapid convergence of ...
0answers 47 views ### four boson quantum system contact interaction I have to solve this problem. Four bosons moving in 1d harmonic potential (their spin is 0) and interacting through contact interaction defined via delta function. Now, methods that I have to use: a) ...
1answer 400 views ### Is anti-gravity possible in theoretical physics? Is anti-gravity http://en.wikipedia.org/wiki/Anti-gravity possible in string theory? I have read some articles about scientists making assumptions about the existence of anti-gravity, but is it ...
1answer 124 views ### Physical -> Chemical -> Nuclear -> [what comes next] If splitting atoms / fusing isotopes (fission bomb, fusion bomb) yields more energy than chemical changes (TNT, et al) yields more energy than physical change (hydrogen bonds forming during water ...
2answers 122 views ### Wave Function Statistical Interpretation vs Oscillation Interpretation Can the wave function solution to Schrodinger's Equation be interpreted as an oscillation between all possible measurements (obviously with some type of weighting that would describe the shape of the ...
3answers 481 views ### Can physics get rid of the continuum? Almost every physical equation I can think of (even though I don't actually feel comfortable beyond the scope of classical mechanics and macroscopic thermodynamics, as that's enough for dealing with ...
2answers 200 views ### Has anyone else thought about gravity in this way? Picture yourself standing on a ball that is expanding at such a rate that it makes you stick to the ball. Everything in the universe is expanding at this same rate. To escape the earth's gravitational ...
1answer 278 views ### The possibility of free electrical energy? Please excuse my lack of knowledge/understanding. Question: Why was Nikola Tesla's Free Energy concept never worked upon? Even today. Context: Now that we know Nikola Tesla was a genius and did ...
2answers 103 views ### What equations govern the formation of droplets on a surface? When some smooth surface (like that of a steel or glass plate) is brought in contact with steam (over e.g. boiling milk) then water is usually seen to condense on that surface not uniformly but as ...
6answers 446 views ### Why are the physical sciences described perfectly by mathematics? Why are the physical sciences described perfectly by mathematics?
5answers 673 views ### Can a scientific theory ever be absolutely proven? I personally cringe when people talk about scientific theories in the same way we talk about everyday theories. I was under the impression a scientific theory is similar to a mathematical proof; ...
2answers 233 views ### why is dark matter the best theory available to explain missing mass problems? Why is dark matter the best theory to explain the missing mass problem? Why is dark matter mathematically necessary to explain the missing mass problem? On a side note I believe dark matter is ...
2answers 208 views ### Number of bits needed to express physical laws? What is the minimum number of bits that would be needed to express a given physical law, like the law of universal gravitation? How many bits are needed to express each of the four fundamental forces? ...
1answer 124 views ### Straightforward questions about calculating SUSY F-terms So in the Lagrangian for a SUSY theory we have the F-terms, which I have seen written (e.g., in Stephen Martin's SUSY primer) as $F^*_i F^i$ where $F^i = \frac{\partial W}{\partial \phi^i}$. I ...
3answers 316 views ### shifting from mathematics to physics I am a postgraduate in mathematics. I studied physics during my B.Sc. studies. I want to go for further studies in physics, particularly in theoretical physics. I am in a job and can't afford regular ...
1answer 89 views ### Entropy, Mass and Brane gravity Does string theory state as vibrational entropy increases, mass increases? Related: What is a D-brane? Reference: Cambridge Relativity
1answer 197 views ### Naturalness and experiments Is there an example where model building that is motivated only by Naturalness has led to experimentally verified observations? If the question is unclear, or if the reader wants more elaboration, ...
1answer 270 views ### Are www.vacuum-mechanics.com and www.autodynamics.org reliable sites? First off, I am not a physicist, although I would have loved to become one. The simple fact is I lack the mathematical skills needed (and now I'm too old to acquire them to any sufficient level). ...
3answers 195 views ### Should any theory of physics respect the principle of conservation of angular momentum or linear momentum? Is it possible that a theory that can describe the universe at the Planck scale can violate things that we now consider fundamental in nature? For example can it violate rotational and translational ...
0answers 176 views ### Calculating the number of turns and thickness of an electromagnet I want to calculate the number of turns of an electromagnet and the thickness of the wire. But I have tried to search around in books, and can't really find anything. I know my wire is 0,114mm and ...
2answers 225 views ### What could we observe if we see a 4 dimensional object and how could it change our physics view about our universe? My question is a little bit philosophical. I would like to explain my ideas with a 2 dimensional universe model. If we had lived in a 2 dimensional universe like a plane, what could we observe when ...
1answer 89 views ### Are there microscopic theories, which work, but which wrongly predict macroscopic behaviour? Motivated by this question (and the P. W. Anderson article linked in that question, which I came across here somewhere today and just read) I wonder about something, which is somewhat bordering an ...
7answers 354 views ### Macroscopic laws which haven't been derived from microscopic laws Can you think of examples where a macroscopic law coexists with a fully known microscopic law, but the former hasn't been derived from the latter (yet)? Or maybe a rule of thumb, which works but ...
4answers 247 views ### Age of universe estimates I was recently involved in a discussion on a sister site regarding how tightly coupled Physics is with the age of the Universe (and Earth). I believe that the Earth and the Universe are both billions ...
1answer 80 views ### Does spin alone have any effect on the physical interactions of particles? In Hartree-Fock theory the time-independent electronic energy of a single (restricted) determinant electronic wavefunction consists of one electron terms, $h_{ii}$, Coulomb interaction energies, ...
1answer 154 views ### Are there any theories or suggestions for how the multiverse came into existence? I've just seen a documentary about the multiverse. This provides an explanation for where the big bang came from. But it leaves me wondering: how did the multiverse come into existence? Because this ...
3answers 178 views ### Does the second law of thermodynamics tell me how the entropy changes? In thermodynamics I can e.g. compute the properties of ideal gases with certain energies $U_1,U_2$ in boxes with certain volumes $V_1$ and $V_2$. Say I have two such boxes and they have some specific ...
1answer 67 views ### Is a measured object always part of the theory? Is there a notion of measurement which doesn't correspond to a yes/no question or to the idea of the comparison of two real-world objects, which produces a real number? And does at least one of ...
1answer 179 views ### advantage of string theory over other theory-of-everything candidates I am getting curious about why string theory, especially M-theory, is the most popular candidate for the theory of everything. It seems that all candidates for the theory of everything lack substantial ...
2answers 156 views ### $2+1$ dimensional physics theory of our universe? Is there any physics theory that depicts our universe as $2+1$ dimensional? I heard that black holes seem to suggest that the world might be $2+1$ dimensional, so I am curious whether such theory ...
8answers 562 views ### What are the frameworks of physics? Are there physical theories in use which don't fit into the frameworks of either Thermodynamics, Classical Mechanics (including General Relativity and the notion of classical fields) or Quantum ...
3answers 162 views ### New theories and publications [closed] When someone develops a new theory in physics, which is barely a sketch (so there are no measurements, nor simulations), with just a mathematical and conceptual description, in which scientific ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9282523393630981, "perplexity_flag": "middle"}
http://nrich.maths.org/2664
### Qqq..cubed

It is known that the area of the largest equilateral triangular section of a cube is 140 sq cm. What is the side length of the cube? The distance between the centres of two adjacent faces of another cube is 8 cm. What is the side length of this cube? Another cube has an edge length of 12 cm. At each vertex a tetrahedron with three mutually perpendicular edges of length 4 cm is sliced away. What is the surface area and volume of the remaining solid?

### Concrete Calculation

The builders have dug a hole in the ground to be filled with concrete for the foundations of our garage. How many cubic metres of ready-mix concrete should the builders order to fill this hole to make the concrete raft for the foundations?

### In a Spin

What is the volume of the solid formed by rotating this right-angled triangle about the hypotenuse?

# Efficient Cutting

##### Stage: 4 Challenge Level:

A cylindrical container, like the tin cans used to package some food, can be made by using two circles for the ends and a rectangle which wraps round to form the body. To make cylinders of varying sizes, the three pieces can be cut from a single rectangle of flat sheet in several ways (the original page illustrates some example layouts).

CHALLENGE: Your task is to cut out one rectangle and two circles from a single sheet of A4 paper to make a cylinder with the greatest possible volume. What are its dimensions? (We will assume that the dimensions of an A4 sheet of paper are $21$ cm and $29.6$ cm.)
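One way to explore the challenge is to brute-force the can radius under a specific cutting layout. The C sketch below tries just two simple layouts in both sheet orientations (these two arrangements are illustrative assumptions; a cleverer layout may beat both, so this only gives a lower bound on the best volume, not the answer to the problem):

```c
#include <stdio.h>

#define PI 3.14159265358979323846

/* Layout A: body rectangle (width 2*pi*r, full sheet height) with the two
             end circles stacked in a column of width 2r beside it.
   Layout B: body rectangle along the top of the sheet, with the two end
             circles side by side below it.                              */
static void best(double W, double H) {
    double bestV = 0, bestR = 0, bestH = 0;
    char bestL = '-';
    for (double r = 0.001; r < W / 2; r += 0.0001) {
        if (2 * PI * r + 2 * r <= W && 4 * r <= H) {        /* Layout A */
            double V = PI * r * r * H;
            if (V > bestV) { bestV = V; bestR = r; bestH = H; bestL = 'A'; }
        }
        if (2 * PI * r <= W && 4 * r <= W && 2 * r < H) {    /* Layout B */
            double V = PI * r * r * (H - 2 * r);
            if (V > bestV) { bestV = V; bestR = r; bestH = H - 2 * r; bestL = 'B'; }
        }
    }
    printf("sheet %.1f x %.1f cm: layout %c, r = %.3f cm, h = %.3f cm, V = %.1f cm^3\n",
           W, H, bestL, bestR, bestH, bestV);
}

int main(void) {
    best(29.6, 21.0);  /* A4 landscape */
    best(21.0, 29.6);  /* A4 portrait  */
    return 0;
}
```

Comparing the printed volumes across layouts and orientations already shows how sensitive the answer is to the cutting arrangement, which is the heart of the challenge.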
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9295392036437988, "perplexity_flag": "middle"}
http://www.nag.com/numeric/CL/nagdoc_cl23/html/F08/f08jgc.html
# NAG Library Function Document: nag_dpteqr (f08jgc)

## 1  Purpose

nag_dpteqr (f08jgc) computes all the eigenvalues and, optionally, all the eigenvectors of a real symmetric positive definite tridiagonal matrix, or of a real symmetric positive definite matrix which has been reduced to tridiagonal form.

## 2  Specification

#include <nag.h>
#include <nagf08.h>

void nag_dpteqr (Nag_OrderType order, Nag_ComputeZType compz, Integer n, double d[], double e[], double z[], Integer pdz, NagError *fail)

## 3  Description

nag_dpteqr (f08jgc) computes all the eigenvalues and, optionally, all the eigenvectors of a real symmetric positive definite tridiagonal matrix $T$. In other words, it can compute the spectral factorization of $T$ as

$$T=Z\Lambda Z^{\mathrm{T}},$$

where $\Lambda$ is a diagonal matrix whose diagonal elements are the eigenvalues $\lambda_i$, and $Z$ is the orthogonal matrix whose columns are the eigenvectors $z_i$. Thus

$$Tz_i=\lambda_i z_i, \quad i=1,2,\dots,n.$$

The function may also be used to compute all the eigenvalues and eigenvectors of a real symmetric positive definite matrix $A$ which has been reduced to tridiagonal form $T$:

$$A = QTQ^{\mathrm{T}}, \text{ where } Q \text{ is orthogonal, so that } A = (QZ)\Lambda(QZ)^{\mathrm{T}}.$$

In this case, the matrix $Q$ must be formed explicitly and passed to nag_dpteqr (f08jgc), which must be called with compz = Nag_UpdateZ. The functions which must be called to perform the reduction to tridiagonal form and form $Q$ are:

• full matrix: nag_dsytrd (f08fec) and nag_dorgtr (f08ffc)
• full matrix, packed storage: nag_dsptrd (f08gec) and nag_dopgtr (f08gfc)
• band matrix: nag_dsbtrd (f08hec) with vect = Nag_FormQ.

nag_dpteqr (f08jgc) first factorizes $T$ as $LDL^{\mathrm{T}}$ where $L$ is unit lower bidiagonal and $D$ is diagonal. It forms the bidiagonal matrix $B=LD^{\frac{1}{2}}$, and then calls nag_dbdsqr (f08mec) to compute the singular values of $B$, which are the same as the eigenvalues of $T$. The method used by the function allows high relative accuracy to be achieved in the small eigenvalues of $T$. The eigenvectors are normalized so that $\|z_i\|_2=1$, but are determined only to within a factor $\pm 1$.

## 4  References

Barlow J and Demmel J W (1990) Computing accurate eigensystems of scaled diagonally dominant matrices SIAM J. Numer. Anal. 27 762–791

## 5  Arguments

1: order – Nag_OrderType, Input

On entry: the order argument specifies the two-dimensional storage scheme being used, i.e., row-major ordering or column-major ordering. C language defined storage is specified by order = Nag_RowMajor. See Section 3.2.1.3 in the Essential Introduction for a more detailed explanation of the use of this argument.

Constraint: order = Nag_RowMajor or Nag_ColMajor.

2: compz – Nag_ComputeZType, Input

On entry: indicates whether the eigenvectors are to be computed.

• compz = Nag_NotZ: only the eigenvalues are computed (and the array z is not referenced).
• compz = Nag_InitZ: the eigenvalues and eigenvectors of $T$ are computed (and the array z is initialized by the function).
• compz = Nag_UpdateZ: the eigenvalues and eigenvectors of $A$ are computed (and the array z must contain the matrix $Q$ on entry).

Constraint: compz = Nag_NotZ, Nag_UpdateZ or Nag_InitZ.

3: n – Integer, Input

On entry: $n$, the order of the matrix $T$.

Constraint: n ≥ 0.
4: d[dim] – double (Input/Output)
Note: the dimension, dim, of the array d must be at least $\max(1, n)$.
On entry: the diagonal elements of the tridiagonal matrix $T$.
On exit: the $n$ eigenvalues in descending order, unless NE_CONVERGENCE or NE_POS_DEF is raised, in which case d is overwritten.

5: e[dim] – double (Input/Output)
Note: the dimension, dim, of the array e must be at least $\max(1, n-1)$.
On entry: the off-diagonal elements of the tridiagonal matrix $T$.
On exit: e is overwritten.

6: z[dim] – double (Input/Output)
Note: the dimension, dim, of the array z must be at least
• $\max(1, pdz \times n)$ when compz = Nag_UpdateZ or Nag_InitZ and order = Nag_ColMajor;
• $\max(1, n \times pdz)$ when compz = Nag_UpdateZ or Nag_InitZ and order = Nag_RowMajor;
• $1$ when compz = Nag_NotZ.
The $(i,j)$th element of the matrix $Z$ is stored in
• z[(j-1) × pdz + i - 1] when order = Nag_ColMajor;
• z[(i-1) × pdz + j - 1] when order = Nag_RowMajor.
On entry: if compz = Nag_UpdateZ, z must contain the orthogonal matrix $Q$ from the reduction to tridiagonal form. If compz = Nag_InitZ, z need not be set.
On exit: if compz = Nag_InitZ or Nag_UpdateZ, the $n$ required orthonormal eigenvectors stored as columns of $Z$; the $i$th column corresponds to the $i$th eigenvalue, where $i = 1, 2, \dots, n$, unless NE_CONVERGENCE or NE_POS_DEF is raised. If compz = Nag_NotZ, z is not referenced.

7: pdz – Integer (Input)
On entry: the stride separating row or column elements (depending on the value of order) in the array z.
Constraints (the same for order = Nag_ColMajor and order = Nag_RowMajor):
• if compz = Nag_InitZ or Nag_UpdateZ, pdz ≥ max(1, n);
• if compz = Nag_NotZ, pdz ≥ 1.

8: fail – NagError * (Input/Output)
The NAG error argument (see Section 3.6 in the Essential Introduction).

## 6  Error Indicators and Warnings

NE_ALLOC_FAIL
Dynamic memory allocation failed.

NE_BAD_PARAM
On entry, argument ⟨value⟩ had an illegal value.

NE_CONVERGENCE
The algorithm to compute the singular values of the Cholesky factor $B$ failed to converge; ⟨value⟩ off-diagonal elements did not converge to zero.

NE_ENUM_INT_2
On entry, compz = ⟨value⟩, pdz = ⟨value⟩ and n = ⟨value⟩.
Constraint: if compz = Nag_InitZ or Nag_UpdateZ, pdz ≥ max(1, n); if compz = Nag_NotZ, pdz ≥ 1.
On entry, compz = ⟨value⟩, pdz = ⟨value⟩, n = ⟨value⟩.
Constraint: if compz = Nag_UpdateZ or Nag_InitZ, pdz ≥ max(1, n); if compz = Nag_NotZ, pdz ≥ 1.

NE_INT
On entry, n = ⟨value⟩.
Constraint: n ≥ 0.
On entry, pdz = ⟨value⟩.
Constraint: pdz > 0.

NE_INTERNAL_ERROR
An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance.

NE_POS_DEF
The leading minor of order ⟨value⟩ is not positive definite and the Cholesky factorization of $T$ could not be completed. Hence $T$ itself is not positive definite.

## 7  Accuracy

The eigenvalues and eigenvectors of $T$ are computed to high relative accuracy, which means that if they vary widely in magnitude, then any small eigenvalues (and corresponding eigenvectors) will be computed more accurately than, for example, with the standard $QR$ method. However, the reduction to tridiagonal form (prior to calling the function) may exclude the possibility of obtaining high relative accuracy in the small eigenvalues of the original matrix if its eigenvalues vary widely in magnitude.

To be more precise, let $H$ be the tridiagonal matrix defined by $H = DTD$, where $D$ is diagonal with $d_{ii} = t_{ii}^{-1/2}$, so that $h_{ii} = 1$ for all $i$. If $\lambda_i$ is an exact eigenvalue of $T$ and $\tilde{\lambda}_i$ is the corresponding computed value, then
$$|\tilde{\lambda}_i - \lambda_i| \le c(n)\,\epsilon\,\kappa_2(H)\,\lambda_i,$$
where $c(n)$ is a modestly increasing function of $n$, $\epsilon$ is the machine precision, and $\kappa_2(H)$ is the condition number of $H$ with respect to inversion, defined by $\kappa_2(H) = \|H\| \cdot \|H^{-1}\|$.

If $z_i$ is the corresponding exact eigenvector of $T$, and $\tilde{z}_i$ is the corresponding computed eigenvector, then the angle $\theta(\tilde{z}_i, z_i)$ between them is bounded as follows:
$$\theta(\tilde{z}_i, z_i) \le \frac{c(n)\,\epsilon\,\kappa_2(H)}{\mathit{relgap}_i},$$
where $\mathit{relgap}_i$ is the relative gap between $\lambda_i$ and the other eigenvalues, defined by
$$\mathit{relgap}_i = \min_{j \ne i} \frac{|\lambda_i - \lambda_j|}{\lambda_i + \lambda_j}.$$

## 8  Further Comments

The total number of floating point operations is typically about $30n^2$ if compz = Nag_NotZ and about $6n^3$ if compz = Nag_UpdateZ or Nag_InitZ, but depends on how rapidly the algorithm converges. When compz = Nag_NotZ, the operations are all performed in scalar mode; the additional operations to compute the eigenvectors when compz = Nag_UpdateZ or Nag_InitZ can be vectorized and on some machines may be performed much faster.

The complex analogue of this function is nag_zpteqr (f08juc).

## 9  Example

This example computes all the eigenvalues and eigenvectors of the symmetric positive definite tridiagonal matrix $T$, where
$$T = \begin{pmatrix} 4.16 & 3.17 & 0.00 & 0.00 \\ 3.17 & 5.25 & -0.97 & 0.00 \\ 0.00 & -0.97 & 1.09 & 0.55 \\ 0.00 & 0.00 & 0.55 & 0.62 \end{pmatrix}.$$

### 9.1  Program Text

Program Text (f08jgce.c)

### 9.2  Program Data

Program Data (f08jgce.d)

### 9.3  Program Results

Program Results (f08jgce.r)
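As a rough illustration of how the function is called, here is a minimal C sketch using the matrix from Section 9. It follows the specification above; the error-handling idioms (INIT_FAIL, fail.code, NE_NOERROR) are the usual NAG C Library conventions, and this is a sketch rather than the library's own example program (f08jgce.c).

```c
/* Minimal sketch of a call to nag_dpteqr (f08jgc) on the 4x4 matrix
 * from Section 9. Assumes the NAG C Library is installed and linked;
 * error handling is kept to a bare minimum. */
#include <stdio.h>
#include <nag.h>
#include <nagf08.h>

int main(void)
{
    Integer n = 4, pdz = 4;
    /* Diagonal and off-diagonal elements of the tridiagonal matrix T. */
    double d[] = {4.16, 5.25, 1.09, 0.62};
    double e[] = {3.17, -0.97, 0.55};
    double z[16];                  /* will hold the eigenvectors */
    NagError fail;
    INIT_FAIL(fail);

    /* Nag_InitZ: compute eigenvalues and eigenvectors of T itself. */
    nag_dpteqr(Nag_ColMajor, Nag_InitZ, n, d, e, z, pdz, &fail);
    if (fail.code != NE_NOERROR) {
        printf("Error from nag_dpteqr: %s\n", fail.message);
        return 1;
    }
    /* On exit, d holds the eigenvalues in descending order. */
    for (Integer i = 0; i < n; i++)
        printf("lambda[%ld] = %f\n", (long) i, d[i]);
    return 0;
}
```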
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 148, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.6298450827598572, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/48138/cochains-on-eilenberg-maclane-spaces
## Cochains on Eilenberg-MacLane Spaces

Let $p$ be a prime number, let $k$ be a commutative ring in which $p=0$, and let $X = K( {\mathbb Z}/p {\mathbb Z}, n)$ be an Eilenberg-MacLane space. Let $F$ be the free $E_{\infty}$-algebra over $k$ generated by a class $\eta$ in (homological) degree $-n$. A result of Mandell asserts that there is a cofiber sequence of $E_{\infty}$-algebras over $k$
$$F \stackrel{A}{\rightarrow} F \rightarrow C^{\ast}(X;k)$$
where $A$ is the "Artin-Schreier" map which carries $\eta$ to $\eta - P^0(\eta)$. In other words, as an $E_{\infty}$-algebra, the cochain complex $C^{\ast}(X;k)$ can be described by one generator (a class in degree $-n$) and one relation (the class should be fixed by $P^0$).

Is it possible to prove this result without explicitly computing the homotopy groups of the cofiber of $A$? Let's denote this cofiber by $R$. I can reduce to the problem of showing that $\pi_i R \simeq 0$ for $i > 0$ and that $\pi_0 R \simeq k$ (both of which are obvious consequences of Mandell's theorem). Is there some way to show this directly, without computing the other homotopy groups?

## 1 Answer

Bertrand Toen doesn't seem to do much calculation in Champs affines.

7 He also doesn't prove this theorem. Unless I misunderstand, he works in the setting of cosimplicial algebras (where the analogous statement is easy) and uses it to prove variants of Mandell's results. – Jacob Lurie Dec 3 2010 at 17:11
oh............. – Ben Wieland Dec 3 2010 at 18:18
1 Toen's technique seems to involve an inductive approach, using the result for $X=K(Z/pZ,n)$ to prove it for $BX=K(Z/pZ,n+1)$. Could that be used here to reduce to the case of $n=1$ or $n=0$? – Charles Rezk Dec 3 2010 at 19:51
3 That's the strategy I had in mind. When n=0 you can prove it using deformation theory, so let's try induction on n. Let R(n) be the cofiber and let R'(n) be the cochains on K(Z/pZ,n). Then doing a bar construction on R(n) produces R(n-1), and similarly for R'(n). So the I.H. tells you that the map R(n) -> R'(n) is an equivalence after applying the bar construction. If you knew that R(n) had no positive homotopy and that pi_0 R(n) = k (statements which are obvious for R'(n)), then the bar construction doesn't lose any information and you are done. But a priori R(n) is a big mess. – Jacob Lurie Dec 3 2010 at 20:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 27, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9026265740394592, "perplexity_flag": "head"}
http://hbfs.wordpress.com/2009/07/21/checksums-part-i/
Harder, Better, Faster, Stronger

Explorations in better, faster, stronger code.

Checksums (part I)

I once worked in a company specializing in embedded electronics for industrial applications. In one particular project, the device communicated through an RS-422 cable to the computer and seemed to return weird data once in a while, causing unwanted behavior in the control computer, whose programming did not provide for this unexpected data. So I took it upon myself to test the communication channel, as it seemed that the on-board software was operating properly and did not contain serious bugs. I added a check-sum to the data packet, and it turned out that some packets did indeed come in corrupted, despite the supposedly superior electrical characteristics of the RS-422 link.

After a few days' work, I implemented a communication protocol that could detect and repair certain errors while reverting to a request to retransmit if the data was too damaged. I then started gathering statistics on error rate, number of retransmits, etc., and the spurious behavior on the controller's side went away. My (metaphorically) pointy-haired boss opposed the modification because "we didn't have any of these damn transmission errors until you put your fancy code in there". Of course, this was an epic facepalm moment. I tried to explain that the errors had always been there, except that now they were caught and repaired. Needless to say, it ended badly.

Notwithstanding this absurd episode, I kept using check-sums to validate data whenever no other layer of the protocol took care of transmission errors. So, this week, let us discuss check-sums and other error detection algorithms.

The simplest type of check-sum simply adds all the numbers (or bytes, or bits) and reports that sum as the check-sum. This, unfortunately, doesn't give you much power for error detection. Indeed, the sum of a series of numbers doesn't tell you much about the series: there are many ways of getting a given sum in the presence of multiple errors. Even a simple transposition (swapping two values) goes undetected, since addition is commutative.

A bit more protection is given by Luhn's algorithm. Luhn's algorithm on a series of digits (let us consider only digits, as it generalizes to any radix anyway) starts with an initial sum $s_0$, which may be 0 or salted. Every odd-position digit is added once to $s_0$; every even-position digit is added twice. For example, the digit series 6,4,2,5,1 would be summed up as $s = s_0 + 6 + (2\times 4) + 2 + (2\times 5) + 1$. Then $s \bmod 10$ is returned as the check-sum. We could reformulate the operation as:

$$s = s_0 + \sum_{i=1}^n 2^{(i+1) \bmod 2}\, x_i$$

(The standard form of Luhn's algorithm goes one step further and sums the digits of each doubled value, i.e., subtracts 9 whenever a doubled digit exceeds 9; the simplified version above keeps the same flavor.)

An example of this simple type of check-sum is the Canadian Social Insurance Number, whose numbers are chosen so that the check-sum is $\equiv 0 \pmod{10}$. Needless to say, the validity of the number as a social security number cannot be established by the check-sum alone; it has to be checked against the national registry—it's ridiculously easy to generate a SIN that passes the check-sum test.

The idea can be extended to using other multipliers than 1 and 2 in the sum. The ISBN, or International Standard Book Number, uses 10- or 13-digit codes to uniquely identify a book. With the correct ISBN alone, you can order a book from your favorite bookshop without ambiguity. The ISBN-10 algorithm computes the sum:

$$x_{10} = \left(\sum_{i=1}^9 i\,x_i\right) \bmod 11,$$

which yields the last digit of the code (with X encoding 10).
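To make the digit-doubling scheme concrete, here is a small C sketch of the check computation as described above (the simplified variant that does not re-sum the digits of doubled values); the function name and interface are mine, not from any standard library.

```c
#include <stdio.h>
#include <string.h>

/* Simplified Luhn-style check-sum, as described in the post: odd
 * positions (1-based, from the left) are added once, even positions
 * twice, and the result is reduced mod 10. The standard Luhn algorithm
 * would additionally sum the digits of each doubled value. */
static int luhn_like_checksum(const char *digits, int salt)
{
    int s = salt;
    size_t n = strlen(digits);
    for (size_t i = 0; i < n; i++) {
        int x = digits[i] - '0';
        s += (i % 2 == 0) ? x : 2 * x;  /* position i+1: odd adds x, even adds 2x */
    }
    return s % 10;
}

int main(void)
{
    /* The example from the post: 6,4,2,5,1 with s0 = 0 gives
     * 6 + 8 + 2 + 10 + 1 = 27, so the check-sum is 7. */
    printf("%d\n", luhn_like_checksum("64251", 0));
    return 0;
}
```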
The ISBN-13, however, is a Luhn check-sum using 1 and 3 rather than 1 and 2. Luhn's method generalizes to any number of digits (and to other bases as well), but is not very strong.

Much stronger than Luhn's method, the CRC, or cyclic redundancy check, computes the "check-sum" as the remainder of the division of the message, considered as a big integer, by a divisor polynomial. There are number-theoretic reasons to call the divisor a "divisor polynomial", but let us not go there right now (if you're really interested in the mathematics, see here). For all practical purposes, the divisor is a small integer, chosen in order to give a remainder on 16, 32 or 64 bits. While the long division of a very big number (the message) by a small integer (the divisor) seems cumbersome, efficient algorithms exist that compute the CRC very quickly, without doing any arbitrary-precision arithmetic.

However, while the CRC is much better than the simple check-sum and Luhn's algorithm, it is not infallible. As it encodes the message's structure on a very small number of bits compared to the message itself, it cannot possibly trap every error. But it turns out that for a good divisor polynomial (there are a few standard ones), it detects a great many errors. It detects all errors of exactly one bit. It detects all two-bit errors as long as the two flipped bits are not too far apart relative to the divisor polynomial (for the standard polynomials, the allowable separation far exceeds any practical message length). It will also detect all error bursts of $n$ bits or less, for a polynomial of order $n$. However, it doesn't do well with long series of leading zero bits, as deletion and insertion of leading zeroes will go undetected.

The CRC is the preferred method of computing check-sums in a number of applications (from Zip files to Ethernet packets), but there are a number of ad hoc check-sum algorithms out there, not all equally fantastic. For example, the Adler32 function is meant to be computed efficiently, but yields incomplete coverage for messages that are too short. I also presented an ad hoc hash function in a previous post that is reasonably good while amenable to efficient implementations¹.

* * *

Which brings me to the topic of check-sums and hashes. In UEID, I presented a few hash function families. I discussed secure hash functions like MD5 and SHA in the context of providing a unique descriptor to a piece of data. That's also the goal of a check-sum. A hash function wants to yield a different value whenever the message changes, ever so slightly, which is also the goal of a check-sum function. It should be clear by now that a (good) check-sum and a (good) hash function are the exact same thing: they map a series of bits (the message) onto a smaller series of bits (the check-sum/hash) in a way that captures as many changes as possible in the messages.

* * *

It turns out that inventing a good check-sum/hash algorithm is extremely difficult—even Don Knuth got his ass bitten on this one². Unless you want to invest a considerable amount of time developing and thoroughly testing your own check-sum/hash function, I would advise using one that already exists, that has been examined by experts, and that has Open Source implementations available.

¹ I tested its behavior in the context of hash table look-ups, and it exhibited a most cromulent behavior. See here (in French).

² Donald Knuth, The Art of Computer Programming, Volume 3: Sorting and Searching, Addison-Wesley, 1973.
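To illustrate the "efficient algorithms" mentioned in the CRC discussion above, here is a minimal bit-at-a-time CRC-32 sketch in C, using the reflected polynomial 0xEDB88320 found in Zip and zlib. Production implementations typically use a 256-entry lookup table to process a byte per step, but the logic below is the same long division.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bit-at-a-time CRC-32 (reflected), polynomial 0xEDB88320, as used by
 * Zip and zlib. Each inner iteration shifts the remainder one bit and
 * conditionally subtracts (XORs) the divisor, i.e., one step of the
 * long division described in the post. The all-ones preset also fixes
 * the leading-zeroes weakness of a zero-initialized CRC. */
static uint32_t crc32_bitwise(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;          /* preset to all ones */
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)(-(int32_t)(crc & 1)));
    }
    return ~crc;                         /* final complement */
}

int main(void)
{
    const char *msg = "123456789";
    /* The standard check value for CRC-32 over "123456789" is CBF43926. */
    printf("%08X\n", (unsigned) crc32_bitwise((const uint8_t *) msg, strlen(msg)));
    return 0;
}
```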
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 11, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.92524254322052, "perplexity_flag": "middle"}
http://gilkalai.wordpress.com/2009/09/06/answer-to-test-your-intuition-9/?like=1&source=post_flair&_wpnonce=18b5244a97
Gil Kalai's blog

## Answer to Test Your Intuition (9)

Posted on September 6, 2009

Two experimental results of 10/100 and 15/100 are not equivalent to one experiment with outcome 3/200. (Here is a link to the original post.)

One way to see it is to think about 100 experiments. The outcomes under the null hypothesis will be 100 numbers (more or less) uniformly distributed in [0,1], so the product is extremely tiny. What we have to compute is the probability that the product of two random numbers uniformly distributed in [0,1] is smaller than or equal to 0.015. This probability is much larger than 0.015. Here is a useful approximation (I thank Brendan McKay for reminding me): if we have $n$ independent values in $U(0,1)$, then the probability that their product is $< X$ is

$$X \sum_{i=0}^{n-1} \frac{(-1)^i (\log X)^i}{i!}.$$

In this case, $0.015 \times (1 - \log(0.015)) \approx 0.078$.

So the outcomes of the two experiments do not show significant support for the theory.

The theory of hypothesis testing in statistics is quite fascinating, and of course, it became a principal tool in science and led to major scientific revolutions. One interesting aspect is the similarity between the notion of statistical proof, which is important all over science, and the notion of interactive proof in computer science. Unlike mathematical proofs, statistical proofs are based on following certain protocols; standing alone, if you cannot guarantee that the protocol was followed, the proof has little value.

### 7 Responses to Answer to Test Your Intuition (9)

1. RD says: Interesting. But I don't see any simple way to derive the formula you used. Any suggestions?

2. Anon says: It's probably worth mentioning that this is basically Fisher's method for combining p-values. [Thanks! I did not know that this method goes back to Fisher. But now I found a Wikipedia article about it. G.]

3. Gil Kalai says: I just recovered a very interesting comment by Nick to the original post.

4. Gil Kalai says: I received the following email from a friend, and I couldn't resist bragging about it.

Hi Gil, I spend way too much time aimlessly surfing the Internet. Yet, every now and then, it leads to something useful. This just happened to me: Last week, in trying to help an applied researcher with a statistics problem, I began to reinvent Fisher's method for combining tests of the same null hypothesis. Of course I understood that this must have been well known and that I should do a literature search, but I procrastinated. Then, just this morning, while doing my usual aimless tour of the Internet, something led me to ask myself what you have been up to lately, so I looked up your blog and found Fisher's test in your very latest blog entry. Thanks!

5. Jan Vondrak says: Hi, I don't have much background in statistics but I feel as if there is still something left unsaid here. The original question was "why can't you multiply the probabilities". But the outcomes in the two groups are assumed to be independent, so why not? The probability that a random person in the first group ranks within 10% & a random person in the second group ranks within 15% is, in fact, 3/200. To me, the problem with this argument is that you frame your event of choice after you see the outcome.
After 100 experiments, whatever happened, you can pick a suitable event of probability 1/2^100 which just happened to occur. But this can happen even in one experiment. To make it more glaring, suppose I compare the heights of 1000 people and I see that the person who was born 7/7 ranks #128. I can say "wow, the rank of this person is the 7th power of an integer, how unlikely is that!! there must be something going on here". It seems to me Gil's explanation relies on some hidden assumptions, such as "down-monotone events being considered reasonable", which seems related to p-values and such, but I'm missing the background here.

6. Gil Kalai says: Dear Jan, I think you make a very correct point. The probability of the event that a random person in the first group is ranked among the top 15% AND a random person in the second group is ranked among the top 10% is indeed 3/200. But this is different from the question we were asking. It is also correct that this problem is closely related to the issue of testing statistically some a priori conjectures, compared to exploring surprising facts in given data.

7. vish says: @RD: set y = log x, then calculate the sum of the y's.
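As a sanity check on the formula quoted in the post, here is a short C program (mine, purely illustrative) that evaluates the closed form for the probability that a product of $n$ independent $U(0,1)$ values falls below $X$, and compares it with a crude Monte Carlo estimate.

```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* P(product of n iid U(0,1) variables < X), via the closed form from
 * the post: X * sum_{i=0}^{n-1} (-log X)^i / i!. Note that
 * (-1)^i (log X)^i = (-log X)^i, and -log X > 0 for X in (0,1). */
static double prob_product_below(double X, int n)
{
    double sum = 0.0, term = 1.0;        /* term = (-log X)^i / i! */
    double L = -log(X);
    for (int i = 0; i < n; i++) {
        sum += term;
        term *= L / (i + 1);
    }
    return X * sum;
}

int main(void)
{
    double X = 0.10 * 0.15;              /* the two p-values multiplied */
    printf("closed form (n=2): %.4f\n", prob_product_below(X, 2));

    /* Crude Monte Carlo check: should print roughly 0.078. */
    long hits = 0, trials = 1000000;
    srand(42);
    for (long t = 0; t < trials; t++) {
        double u = (rand() + 1.0) / (RAND_MAX + 2.0);
        double v = (rand() + 1.0) / (RAND_MAX + 2.0);
        if (u * v < X) hits++;
    }
    printf("Monte Carlo:       %.4f\n", (double) hits / trials);
    return 0;
}
```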
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 4, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9369601011276245, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/288362/calculating-an-unknown-point-on-a-graph/288411
# Calculating an unknown point on a graph

I have a load of data that consists of websites and the "power" of their backlinks. The lowest power is 1. The highest is 949876. The average is 6056. I want to be able to assign a site a ranking from 1-100 that denotes how powerful they are. I thought I could do this using a nice graph like this one (it's supposed to be a smooth curve ;-)): http://i.stack.imgur.com/0fIRa.png

So, on the $x$-axis I have values from $0$ to $100$. I know the following points:
$$x = 1, y = 1$$
$$x = 50, y = 6056$$
$$x = 100, y = 949876$$

How can I find $y$ for another value of $x$? Or am I approaching this in totally the wrong way? Thanks!

Is your function a linear function? – Sigur Jan 27 at 21:31
There are several functions for which this could be the case; can we assume it would be of the form ax^2+bx+c=y? – kaine Jan 27 at 21:32
@Sigur - Sorry, it's non-linear. – rastaboym Jan 27 at 21:34
2 Without knowing anything about your function, there is no way to answer the question. Knowing the function value at a few scattered points tells you absolutely nothing about the function value at other points. – mrf Jan 27 at 21:36
2 Having only three points, you can't do anything reasonable. Is the function quadratic (as suggested by kaine), exponential or something completely different? If you have more points or some explanation where your data comes from, it may be possible to pose more intelligent guesses as to what a good model would be. – mrf Jan 27 at 21:50

## 3 Answers

I'm doing this as a second answer because it is completely different. I am assuming the form $ae^{bx}+c=y$ is suitable.
$$a=39.150079742937$$
$$b=0.100967288104848$$
$$c=-42.309401986998$$
Will that do? I can come up with others. Obviously you don't need that many decimal places.

– rastaboym Jan 27 at 22:49

As others have pointed out, it is very difficult with the amount of information provided. We could come up with an interpolating polynomial, such as:
$$f(x) = \frac{15314513}{80850}x^2 - \frac{257011521}{26950}x + \frac{15116018}{1617}$$
This meets your specified criteria; however, it could behave in ways you did not expect! Here is a WA Plot. Notice that it meets your three points exactly, but does not look like your graph. We could play around and define more points to make this look closer to your plot.

Update: I used Mathematica's InterpolatingPolynomial, that is, using WA Int Poly. If you could add more points, you would get better results (and maybe even a different curve type, using FindFit). This is part of Numerical Analysis. Regards

Thanks for taking the time to answer. The Wolfram Alpha plot is very useful. How did you come up with the polynomial? I could get some graph paper out and define a few more points. Then what? – rastaboym Jan 27 at 22:18
Needs an $\uparrow^+$ – amWhy May 5 at 2:19

If we assume that the equation is of the form $ax^2+bx+c=y$, which is likely the easiest form, substitute x and y for each known value and solve for the values of a, b, and c. As an engineer, this is the form I would use to fit data like this unless I had more information, knowing that it is a very rough estimate. The equations would be:
$$a+b+c=1$$
$$a \cdot 50^2+50b+c=6056$$
$$a \cdot 100^2+100b+c=949876$$
The values would be:
$$a=218783/1155$$
$$b=-3671736/385$$
$$c=2159516/231$$
There are, however, an infinite number of other equations that would fit these 3 points.
For instance:
$$kx^3+jx^2+lx+m = y$$
would have those values on it for any value of k and some corresponding values of j, l, and m. If you could give some context, I can help you out further, but I need more information.

For simple questions like these, if I can't remember how to do it, I turn to Wolfram Alpha first. (Later I would use Matlab or Scilab for most of my calculations.)

Addendum: if you have a lot of them and you are not used to using other programs, put them in Excel and do curve fitting. http://www.csupomona.edu/~seskandari/documents/Curve_Fitting_William_Lee.pdf

Note that your fitted function is negative for most of the interval between $1$ and $50$, which is likely not what is wanted. This is another effect of the problem being severely underspecified. – Rahul Narain Jan 27 at 21:58
– rastaboym Jan 27 at 22:02
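To see where the quadratic answer's coefficients come from, here is a small C sketch (mine, illustrative only) that solves the 3x3 system by naive Gaussian elimination and evaluates the resulting parabola at an unknown point.

```c
#include <stdio.h>

/* Solve a*x_k^2 + b*x_k + c = y_k for the three known points
 * (1,1), (50,6056), (100,949876) by Gaussian elimination, then
 * evaluate the fitted parabola. Illustrative only: no pivoting. */
int main(void)
{
    double xs[3] = {1, 50, 100}, ys[3] = {1, 6056, 949876};
    double m[3][4];

    /* Build the augmented matrix [x^2  x  1 | y]. */
    for (int r = 0; r < 3; r++) {
        m[r][0] = xs[r] * xs[r];
        m[r][1] = xs[r];
        m[r][2] = 1.0;
        m[r][3] = ys[r];
    }
    /* Forward elimination. */
    for (int p = 0; p < 3; p++)
        for (int r = p + 1; r < 3; r++) {
            double f = m[r][p] / m[p][p];
            for (int c = p; c < 4; c++)
                m[r][c] -= f * m[p][c];
        }
    /* Back substitution. */
    double coef[3];
    for (int r = 2; r >= 0; r--) {
        coef[r] = m[r][3];
        for (int c = r + 1; c < 3; c++)
            coef[r] -= m[r][c] * coef[c];
        coef[r] /= m[r][r];
    }
    /* Expect a ~ 189.42, b ~ -9537.0, c ~ 9348.5, matching the
     * fractions 218783/1155, -3671736/385, 2159516/231 above. */
    printf("a=%.4f b=%.4f c=%.4f\n", coef[0], coef[1], coef[2]);
    printf("f(75)=%.1f\n", coef[0]*75*75 + coef[1]*75 + coef[2]);
    return 0;
}
```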
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 14, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9531922936439514, "perplexity_flag": "head"}
http://mathhelpforum.com/statistics/212982-probability-tail-followed-2-heads-biased-coin-print.html
# Probability of a tail followed by 2 heads on a biased coin

• February 11th 2013, 10:09 PM
battery88
Probability of a tail followed by 2 heads on a biased coin

Suppose we have a biased coin for which the probability of heads is 3/4 while the probability of tails is 1/4. What is the probability of a tail followed by 2 heads on three flips of the coin?

Here is what I have so far: There are 2^3 possible outcomes {HHH, HHT, HTH, THH, HTT, THT, TTH, TTT}, and only one way to get THH. I've hit a wall and any pointers would be appreciated.

• February 12th 2013, 03:41 AM
Plato
Re: Probability of a tail followed by 2 heads on a biased coin

Quote: Originally Posted by battery88
Suppose we have a biased coin for which the probability of heads is 3/4 while the probability of tails is 1/4. What is the probability of a tail followed by 2 heads on three flips of the coin?

If you were to look up the answer in the 'back-of-the-book' it would be: $\frac{3^2}{2^6}$. Now if you can explain to yourself WHY? or HOW? then you will understand. HINT: $THH$ is one out of eight, which is a power of two.

• February 12th 2013, 12:01 PM
HallsofIvy
Re: Probability of a tail followed by 2 heads on a biased coin

Quote: Originally Posted by battery88
[...]

The probability that the first coin is tails is 1/4. The probability that the second coin is heads is 3/4. The probability that the third coin is heads is 3/4. The probability of three independent results, ABC, happening is P(A)P(B)P(C).
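Combining the three factors in HallsofIvy's reply gives the number behind Plato's hint:

$$P(\text{THH}) = \frac{1}{4} \cdot \frac{3}{4} \cdot \frac{3}{4} = \frac{3^2}{2^6} = \frac{9}{64} \approx 0.14.$$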
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9612252116203308, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/115611/list
## Return to Answer

3: edited body

Here's an example I learned from John Franks. This is a nice example because it's used to produce an example of Smale's sphere eversion problem. It also generalizes to include (for example) Will Sawin's comment above.

Consider the Boy's surface. This is an immersion of $\mathbb{R}P^2$ into $\mathbb{R}^3$. If you look at its normal bundle, there's no sense of +1 or -1 (because it's non-orientable) but you can look at its associated unit sphere bundle. This is the orientation double cover--a.k.a. the sphere--immersed into $\mathbb{R}^3$. By scaling the fibers of the normal bundle from 1 to 0, you see the "covering homotopy" of $S^2$ onto the Boy's surface ($\mathbb{R}P^2$), as you ask for.

So that's the thing you're looking for, but let's go further--instead of just scaling from 1 to 0, scale from 1 to -1. This is a homotopy, through immersions, of $S^2$ to itself, and it leads to one way in which you can evert the sphere (i.e., turn it inside out). I think this strategy originally came from Shapiro, though any historical corrections are welcome!

More generally, if you immerse any 2-dimensional object (i.e., an un/orientable surface with or without boundary) you can perform the same trick, examining the unit-length elements of its normal bundle. In Will Sawin's example, take an embedded Mobius band and examine its unit normal bundle--this gives the non-standard embedding of the cylinder you're asking for, and scaling the normal bundle to the zero section gives you the "covering homotopy" you seek.

2: Indicated that the Boy's surface example generalizes; added 93 characters in body

Here's an example I learned from John Franks. This is a nice example because it's used to produce an example of Smale's sphere eversion problem. It also generalizes to include (for example) Will Sawin's comment above.

Consider the Boy's surface. This is an immersion of $\mathbb{R}P^2$ into $\mathbb{R}^3$. If you look at its normal bundle, there's no sense of +1 or -1 (because it's non-orientable) but you can look at its associated unit sphere bundle. This is the orientation double cover, immersed into $\mathbb{R}^3$. By scaling the fibers of the normal bundle from 1 to 0, you see the "covering homotopy" of $S^2$ onto the Boy's surface ($\mathbb{R}P^2$), as you ask for.

So that's the thing you're looking for, but let's go further--instead of just scaling from 1 to 0, scale from 1 to -1. This is a homotopy, through immersions, of $S^2$ to itself, and it leads to one way in which you can evert the sphere (i.e., turn it inside out). I think this strategy originally came from Shapiro and Phillips, though any historical corrections are welcome!

More generally, if you immerse any 2-dimensional object (i.e., an un/orientable surface with or without boundary) you can perform the same trick, examining the unit-length elements of its normal bundle. In Will Sawin's example, take an embedded Mobius band and examine its unit normal bundle--this gives the non-standard embedding of the cylinder you're asking for, and scaling the normal bundle to the zero section gives you the "covering homotopy" you seek.

1: (original)

Here's an example I learned from John Franks. This is a nice example because it's used to produce an example of Smale's sphere eversion problem.

Consider the Boy's surface. This is an immersion of $\mathbb{R}P^2$ into $\mathbb{R}^3$. If you look at its normal bundle, there's no sense of +1 or -1 (because it's non-orientable) but you can look at its associated unit sphere bundle.
This is the orientation double cover, immersed into $\mathbb{R}^3$. By scaling the fibers of the normal bundle from 1 to 0, you see the "covering homotopy" of $S^2$ onto the Boy's surface ($\mathbb{R} P^2$), as you ask for. So that's the thing you're looking for, but let's go further--instead of just scaling from 1 to 0, scale from 1 to -1. This is a homotopy, through immersions, of $S^2$ to itself, and it leads to one way in which you can evert the sphere (i.e., turn it inside out). I think this strategy originally came from Shapiro and Phillips, though any historical corrections are welcome!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9378611445426941, "perplexity_flag": "head"}
http://mathoverflow.net/questions/28695/what-should-we-teach-to-liberal-arts-students-who-will-take-only-one-math-course
## What should we teach to liberal arts students who will take only one math course?

Even professors in academic departments other than mathematics---never mind other educated people---do not know that mathematics exists as a living research field. Once a professor of medicine asked me whether it is necessary to write a thesis to get a Ph.D. in math, and then added, "After all, isn't it all already known?". Literate people generally know that physics and biology are fields in which new discoveries are constantly being made. Why should it be any more difficult to let people know that about mathematics than about physics? After all, it's not as if most people who know that about physics have any idea what those new discoveries are.

Liberal arts students are often required to take one math course. Often that course consists of a bunch of useless clerical skills. How to do partial fractions decompositions and the like is what students are told "mathematical thinking" is about. In some cases professors feel the one math course that the philosophy major takes is not worth attention, because students who didn't learn that material in high school the way they were supposed to aren't any good. Even when a university has a course intended to acquaint those who take only one math course with the fact that mathematics is an intellectual field, there are nonetheless numerous students who take only the algebra course, whose content is taught only because it is prerequisite material for other subjects that the student will never take.

So what should we teach to liberal arts students who will take only one math course?

14 Community wiki? A positive outcome for such a course would surely be an awareness of what mathematics is and, dare I say it, of its impact in society. So probably teaching them any real mathematical techniques is out of the question. There are several good books popularising mathematics which could serve as inspiration for designing such a course, for example. – José Figueroa-O'Farrill Jun 19 2010 at 0:11
5 Great question. I disagree, however, that the point of the one math course should be to show there is more math to discover. It shouldn't even be to show what "mathematical thinking" is or why math is an "intellectual field", although this seems closer. The point should be to enrich students' lives, so I think there should be two goals: the lesser one of giving them useful skills they'll need in life, and the greater one of making them better thinkers and more interested in thinking. (The first goal is lesser only because people outside science rarely need anything beyond high-school math). – Ilya Grigoriev Jun 19 2010 at 0:21
2 Isn't that wonderful?... It really seems like there are more mathematicians knowing about (the) other fields than scientists from the other fields knowing about mathematics... – ex falso quodlibet Jun 19 2010 at 1:24
32 I disagree with accepting an answer to such a wide ranging, general interest question after such a ridiculously short time. That makes it appear as if you didn't really care about what many people had to say, somewhat disrespectful, in fact. – Victor Protsak Jun 19 2010 at 2:04
2 I've hit this question with the Wiki-hammer. – S. Carnahan♦ Jun 22 2010 at 20:18

## 20 Answers

We had a discussion about this at the sbparty. The conclusion I came to is that I would cover the following three topics.

1. Basic numeracy.
The main goal of this portion of the class is to convince people that 1 million dollars is a small amount of money, but 1 billion dollars is a large amount of money. (For example, if you won a million dollars tomorrow you should not drop out of school, but if you won a billion dollars you should do whatever you want to do.) Related topics include Fermi problems, understanding the scales of things, etc. If there's enough time, this unit would finish by explaining how exponential growth is much faster than linear growth.

2. Basic statistics.

I actually don't know that much statistics, so I'm not totally sure what this should cover, but the goal is for people to be able to understand polling, sampling, and common statistical fallacies. People should leave this unit understanding what the margin of error means in a poll, some rough idea of standard deviations, and why sampling would improve the accuracy of the census.

3. Why is math fun?

The goal of this section is to show people some cool things that illustrate what mathematics is as practiced by mathematicians. The students would not be expected to really learn anything here, but would hopefully be persuaded that mathematicians do some interesting things. In particular, it would be nice if a person in the class who would enjoy advanced math classes (but doesn't know that yet) could see that math is something they would like. If I were teaching this class I'd probably do Farey fractions, since that's my go-to topic, but there are lots of good options (Platonic solids, Cantor set theory, RSA, etc.).

The third section would be shorter than the first two and less heavily covered in the exams.

11 @Harry: But our students don't understand the basics. This needs to be corrected, and if we are too cool to correct it, then who will? My fear is that some other dept will get fed up and start offering such a course. At some universities, depts that use math (e.g., physics and various types of engineering) were unsatisfied with how calculus was being taught by the mathematicians and decided to offer calculus classes themselves. This causes math dept enrollments (and math dept budgets) to drop. It's in our discipline's best interest to teach what students need rather than what we enjoy... – Andy Putman Jun 19 2010 at 17:05
9 Harry, I think your claims rest on a few shaky assumptions, most importantly that applied math, statistics, and heuristic reasoning aren't part of mathematics as mathematicians do it. The fact that these topics are uninteresting to a vocal subset of mathematicians doesn't imply that teaching them is a waste of time. Regarding the use of rigor, I think an analogy is most appropriate: My own experiences with required survey classes in non-math departments is that they can be quite enlightening when organized and executed well, even if they don't show me their day-to-day practice. – S. Carnahan♦ Jun 19 2010 at 18:25
9 Of course, if numeracy and basic statistics were covered in high school (where they should be taught!) then we wouldn't need to teach them in an intro college class. In a perfect world this would be high-school material, and then an intro college class could be more geared towards interesting mathematics. – Noah Snyder Jun 19 2010 at 18:47
10 I don't really grok what scholarship in English means, so I'll avoid comment on that. But in other subjects, say History, Classics, Linguistics, Economics, Anthropology, Psychology, Sociology, etc.,
the intro level classes have almost nothing to do with how the subject is practiced in the field. Intro level classics classes are learning Greek, not doing research; intro level linguistics is stuff like learning what a phoneme is, not doing research; intro level history is learning about the basics of some historical period, not digging around in old books in some obscure library in France. – Noah Snyder Jun 20 2010 at 16:47
10 To extend Noah's last comment: we probably think that intro level courses in other subjects bear some resemblance to how those subjects are practiced in the wild because our own experience with those subjects is limited to the intro level courses. – Michael Lugo Jun 21 2010 at 12:28

Teach them something surprising, something memorable. Looking through the contents of "The Heart of Mathematics: An Invitation to Effective Thinking" by Edward B. Burger and Michael Starbird, among other things they talk about number theory (up to RSA encryption), irrational numbers, different sizes of infinity, the fourth dimension, knot theory, fractals, and counterintuitive probability. These are all the kinds of things that excite mathematicians, and we should try to give our students some sense of that kind of excitement. Burger and Starbird's book is designed for a "liberal arts" type course, and I think they demonstrate that it's possible to give an understanding of what's going on in a way that is at least somewhat palatable to non-mathematicians.

3 It's a great book, I've taught a course based on it, but I went more in depth into certain topics and made it very interactive and exploratory, so I ended up "covering" less than a half. The response from the students was phenomenal. – Victor Protsak Jun 19 2010 at 4:19
I've taught a "math for liberal arts" course from Burger and Starbird, and highly recommend it. – Michael Lugo Jun 19 2010 at 13:23

In surveying the other responses to date, it seems that many people have assumed that, without calculus, the most we can hope to teach undergraduate students is probability, statistics, fractions/percentages, and brain teasers and puzzles. Aren't we shooting too low? For an extreme example of how far we might actually push such a course, consider a quote of Arnol'd's in On teaching mathematics:

By the way, in the 1960s I taught group theory to Moscow schoolchildren. Avoiding all the axiomatics and staying as close as possible to physics, in half a year I got to the Abel theorem on the unsolvability of a general equation of degree five in radicals (having on the way taught the pupils complex numbers, Riemann surfaces, fundamental groups and monodromy groups of algebraic functions). This course was later published by one of the audience, V. Alekseev, as the book The Abel theorem in problems.

Set aside the hairy issue that this high school, which is already far more specialized than US high schools, was one of the premier math/physics high schools in Russia. Rather, pay attention to the fact that the first 220 pages of Alekseev's book are self-contained and require no calculus. Also, consider the idea of a year-long course following Penrose's Road to Reality. Showing people mathematics' role in unraveling the secrets of the universe has always seemed far cooler to me than other tactics for inspiration.
Let me be perfectly clear that I think actually requiring a course such as Arnol'd's across the board is overly optimistic. However, I think that selecting the topics of such a course to reflect what mathematicians generally value could go a long way towards providing both a cultural appreciation of modern mathematics (as Lockhart's Lament would like) and an opportunity to think rigorously about initially simple objects (groups) and then more complex but visual objects (Riemann surfaces). If people are pessimistic that "liberal arts majors" don't have the goods to think about group theory, then I would much rather have a history of mathematics course that follows something like Stillwell's Mathematics and Its History, leaving students with the impression that mathematics has a rich philosophical undercurrent, than bore them with a meaningless pursuit of graph theory, probability and a smorgasbord of seemingly unrelated topics.

The prevalent idea among many people seems to be that we have to make sure that basic numeracy is in place and that this is the math department's job. I don't think that innumeracy is a failure of the school curriculum as it stands. People just forget their middle school and high school math, because to them mathematics is an uninspired, dead subject that is just plain boring. Let's change that.

Somewhat related: I wonder whether it is possible to teach something like what you are proposing based on Conway, Burgiel, and Goodman-Strauss's Symmetries of Things. – Willie Wong Jun 20 2010 at 15:05
2 I agree completely. – Harry Gindi Jun 21 2010 at 9:06
1 About history: I remember reading about a study of the impact of a history course on math-education majors. The conclusion was that the most valuable aspect of the course was that the students who had had it became teachers who accepted more readily the idea that there is more than one way to solve a problem, compared to the control group. Well, I don't know if it would have the same effect on students who don't take any other classes, but nevertheless this seems like a valuable idea to try to impart. – Thierry Zell Apr 25 2011 at 14:28
5 "People just forget their middle school and high school math because to them mathematics is an uninspired dead subject that is just plain boring." I could not disagree more. People forget (or never really learn) high school math because it is in fact very difficult, even if it doesn't seem so from our position. I approve of cultural appreciation of modern mathematics and opportunities to think rigorously. I disapprove of the notion, widespread among mathematicians, that the mass of students would get excited about math if they saw it the way we do. – JSE Apr 25 2011 at 14:57
2 @JSE: I think you have a point. Mathematicians are not representative of the general population, and I think that the best way to see this is to realize how many major users of mathematics, like scientists, would not even consider becoming mathematicians themselves, even though many have the talent for it. – Thierry Zell Apr 25 2011 at 19:12

The aim I set for myself is to get the students to the point that they can understand how somebody else can enjoy math.

1 So true. And if moreover you can actually make them see how they can enjoy math themselves (even for mere 5 minutes while thinking of a specific problem) then you've accomplished a great thing. – danseetea Jun 19 2010 at 1:48
@Kevin: Total agreement, Kevin; see my answer below.
– Andrew L Jun 19 2010 at 2:24

I think this question can actually be interpreted in two different ways, since there are essentially two possible courses to give:

1. One whose purpose is to enrich the students' lives (as Ilya Grigoriev put it in his comment) - such a course would probably do best to adapt ideas mainly from Noah Snyder's answer - scales, statistics (probabilistic thinking), and I'd add a few other topics (maybe small logical puzzles to make students see how "thinking mathematically" may have a positive effect on their analysis of everyday situations).

2. A course whose main purpose is to convince students that "math is great". Make them drop the false impressions they've been fed their entire lives, about mathematics being a "dead science" ("haven't all math problems already been solved?") - this can be achieved by looking at mathematics from a historical perspective, especially putting emphasis on problems solved recently (say, in the last 50 years), open problems, and newly emerging fields in mathematics. Making them see that math can be fun is perhaps the ultimate goal, as Kevin O'Bryant mentioned.

Now the question arises: which of the two courses should we teach? Morally, if we think of the benefit of our students, we'd have to pick the first. But if we are mainly interested in "advertising" (which I don't think is a bad idea!), we should pick the second. Perhaps if such an "advertising course" is sufficiently good, it would convince them to take a second course, more along the lines of #1 above? A compromise could be to divide the course into two parts - after we've convinced the students that math can be cool, we can go on and teach them more traditional material that will benefit them.

4 Perhaps there is a third possible course. Great works in the humanities are valued in part because they expose students to different states of human experience/ways of thinking about the world. Mathematics is one of only a few disciplines that can expose students to thinking about the world in a mathematical way (other disciplines might include physics, computer science, linguistics). We would like to give students a sense of this other way of thinking. To do this, we should use appealing, self contained subjects (e.g. knot theory) that they can and will want to think about. – Henry Segerman Jun 19 2010 at 16:23
1 From personal experience, I recommend not trying to combine both courses together. Both need to build up momentum, and it's different if I am trying to encourage the practice of numeracy as opposed to the sense of wonder and excitement about mathematical discovery. – Anna Varvak Dec 20 2010 at 17:20

An entirely practical point: I taught such a course one semester, with the COMAP book, which is called "For All Practical Purposes." They supplied a set of videotapes, which I ignored. The course I gave was unpleasant for most concerned, certainly for me. In another semester I substituted a single day for a colleague who was using the videotapes. It was wonderful. I showed the video, they got something out of it, I explained a tiny bit. The real surprise to me was just how well the video was done. By now they must have DVDs, online material, etc. As that was in the 1990s, I assume that there are more recent incarnations of such course materials.

I think there is nothing that is both as elementary, useful, and fun as elementary probability. Probabilistic thinking is relevant to decision making and extremely underdeveloped.
Even Paul Erdös got the Monty Hall problem wrong when he was first confronted with it, so probability is certainly not trivial. The amount of formalism needed is very small, so students afraid of complicated expressions will not be scared off. One can cover a wide range of conceptual and practical problems, from brain teasers and probabilistic paradoxes to how one should interpret medical tests. I think there are some rather simple concepts, not widely understood, that should be really hammered into people's heads, such as (elementary) conditional probability, the difference between causation and correlation, selection bias, etc.

7 I think this would actually reinforce the idea that "math is solved" rather than expose people to genuinely new mathematical concepts that open the door to the amazingly huge sea of math concepts and problems that have not been figured out. – Matt Jun 20 2010 at 5:22
Precisely: probability is not trivial. I've known many otherwise fine mathematicians who had a blind spot as far as probability was concerned. And though the formalism necessary is very small, a good command of set notation is ultimately desirable if not indispensable to reason efficiently. So I'm not too sure that this would work. – Thierry Zell Apr 25 2011 at 14:49
"I've known many otherwise fine mathematicians who had a blind spot as far as probability was concerned." Yes, and recently I think I have seen what makes for their problems: they get hung up on the measure theory! Some people have only seen the measure-theory definition of conditional probability, and get problems when they see an expression like $\text{E}(X | Y)$! – Kjetil B Halvorsen Jul 8 at 21:13

Look at the contents of this course, by Satyan Devadoss at Williams College: The Shape of Nature, a.k.a. the Geometry and Topology of Nature. It was just released by The Teaching Company. [Disclosure: Satyan is a coauthor.] To quote Herbert Wilf from the Notices of the AMS, "A mind is a fire to be kindled, not a bucket to be filled." The job of the teacher is to light that fire... I think material such as that in this course is suitable kindling.

Addendum: In that course, Satyan manages to touch upon the Poincaré conjecture, Voronoi diagrams, the Jones polynomial, the Seifert algorithm, and Dehn surgery.

In the May issue of the Notices of the American Mathematical Society, Underwood Dudley posits that the purpose of a mathematics education is to teach people how to reason. This would suggest that the purpose of a liberal arts mathematics course should be, more or less, to have students perform calisthenics in reasoning. We could teach mathematical push-ups and sit-ups, or we could package the calisthenics into activities that have the greatest chance of maintaining student interest. To maintain student interest in physics, David Goodstein of Caltech created a course that intertwined history and experimental observation. Maybe liberal arts mathematics courses should follow his heuristic.

1 Yes, I agree. I think that it's necessary to introduce students to proofs and the axiomatic method if this is the goal. The concept of a mathematical argument is much more elementary than any sort of argument that relies on empirical evidence. Learning to argue mathematically allows one to argue more efficiently and convincingly in other fields, because it teaches one to argue from the definitions and axioms. What are premises if not just axioms in disguise?
– Harry Gindi Jun 20 2010 at 13:41 10 Harry, you are under a mistaken impression that mathematical $\textit{reasoning}$ reduces to proofs and the axiomatic method. There is no doubt that mathematical reasoning skills are widely applicable; on the other hand, while "arguing from definitions and axioms" is certainly useful in law, philosophy and logic, and theology, it plays only a minor role in the methodology of most sciences, such as biology, chemistry, or physics, which are $\textit{based}$ upon empirical evidence that you seem to disparage. Look for David Mumford's excellent thoughts on this. – Victor Protsak Jun 21 2010 at 3:58 This is a really difficult question to answer because mathematics by its very nature is the proverbial snake that swallows its own tail: you can't really explain any substantial part of it to "virgins" without some background in mathematics to begin with. Even high school algebra and geometry aren't really enough of a bare minimum to make substantial mathematics intelligible to most newbies. It MIGHT be good enough to motivate calculus and its role in physics -- and perhaps some group theory and linear algebra through geometry -- but anything else is going to be tough. And today's students (in the US at least) aren't even guaranteed to have a good plane geometry background anymore, which was once automatic in anyone who completed high school. In theory, you'd like a course that a) makes these students aware of what mathematics is and why it is important and b) perhaps makes enough of an impression on beginning students to whet their appetite for more. If I were forced to teach such a course knowing in all likelihood it would be the only required course they would take, I would probably teach a history of mathematics course and try to make it as geometric and story-driven as possible. Tell them about Archimedes, the great Greek traditions, and what great advancements were already made, such as proving the world is round and computing its circumference. Debunk the myths, like Newton and the apple. Tell them about the little-known and fascinating figures, like late-bloomer Weierstrass and child prodigies like Gauss, and tragic figures like Galois, Abel, and Turing. And lastly, tell them about the Millennium Problems, so that they can appreciate the fact that math indeed has real-world value — a million dollars! — to some people. But above all, tell a great story they'll always remember you for. That's what I'd do. - 5 Done! – KConrad Jun 19 2010 at 0:36 6 Yes, I agree we can do this ourselves... but wouldn't it be nice if the posters themselves checked their own spelling? – José Figueroa-O'Farrill Jun 19 2010 at 1:33 5 @Andrew L: Please stop making nasty remarks about "American values" and otherwise talking trash. It's uncalled for, and if any other country were the target, it wouldn't be acceptable. I'm not the most patriotic person, and I would say that I disagree with nationalism in general, but you're attacking the values of 300 million people with widely varying opinions. I say this only because it's the third time in the past few days I've seen you do it. – Harry Gindi Jun 19 2010 at 9:44 7 @Andrew L: Having the "right" to free speech doesn't mean that you can say whatever you want and no one has the right to be offended, complain to you, disregard everything else you have to say, etc. – Andy Putman Jun 19 2010 at 17:09 4 @Victor: No, then you misread.
He said about the Millennium Prize: "By American moral standards they can see math indeed is seen to have real-world value (a million dollars)". Suppose we replace the word "American" in there by the name of an ethnicity or some other nationality... It amounts to saying "People X are greedy" or "People X care about money over all else". – Harry Gindi Jun 21 2010 at 9:13 Among the basic numeracy issues that I have smuggled into the classroom (I say "smuggled" because there is a list of topics that I'm supposed to cover) is Euclid's algorithm for GCDs and how to use the results to reduce fractions. No student has complained about this even though I've given them no written material on it besides assigned problems (and sometimes students required to take a course they'd rather not take are inclined to find things to complain about). See #2 at http://www.math.umn.edu/~hardy/1031/hw/2nd.pdf. Another addresses the habit almost everyone has of "rounding" 400 to 399.99823764, etc. One of the simplest examples is when you want to evaluate something like $(8/3) \times 57$. Students use their calculators to find that $8/3 \approx 2.667$, then multiply that by $57$, getting $152.019$, although in fact 57 is divisible by 3. Sometimes they even do that when the question is "How many....?" (See #5 at http://www.math.umn.edu/~hardy/1031/hw/1st.pdf.) #4 at http://www.math.umn.edu/~hardy/1031/hw/1st.pdf is also a nice "basic numeracy" problem. Why is multiplication of finite cardinal numbers commutative, despite the seeming asymmetry in its definition? That's really basic numeracy, but "theoretical" and at the same time very concrete. Mentioning past geniuses also seems worthwhile. I tell them Carl Gauss was the most famous person to live on earth in the 19th century (except among people who did not work in the physical or mathematical sciences) and give them a copy of Wikipedia's "list of topics named after Carl Gauss" (the one on Euler is much longer; there are also such pages on Riemann and various others). Basic probability seems worth presenting to a broad audience since there are so many different subjects that rely on statistics. The combinatorial material that some basic probability problems rely on affords an opportunity to do "theoretical but concrete" mathematics, as in #2 or #6 at http://www.math.umn.edu/~hardy/1031/hw/1st.pdf (#6 was discussed in class before it was assigned). ("Concrete" is necessary at this level; there is no hope that these students will learn to understand such material at a less concrete level before the semester is over.) I more frequently use exercises to call students' attention to something than to challenge their cleverness. Oh: as long as I've mentioned "numeracy", how about #1 at http://www.math.umn.edu/~hardy/1031/hw/7th.pdf? It actually seems as if some instructors are not aware of this problem. Why are they unaware of such a thing? Today I've mentioned elsewhere on MathOverflow that I was amazed at how much could be done in the book by Freedman, Pisani, Purves, and Adhikari with so little knowledge of math on the part of the students. That things like that can be done encourages me to hope that there is some way to present the concept of isomorphism to non-mathematical freshmen. It's what math is all about. Math is about "abstract structures" in the sense that it doesn't matter whether the chess pieces are made of wood or are images on the computer monitor, nor does $2 + 3 = 5$ depend on whether you're counting oranges or Supreme Court justices.
Two things are the same abstract structure iff they're isomorphic. And isomorphism makes "bypass operations" possible; there must be some of those that can be presented to freshmen. - These are some good ideas; the scope seems to be huge, though. Is the course you're teaching one semester or two semesters? And how many weekly hours is it? – danseetea Jun 19 2010 at 2:29 I have some experience of teaching a course in mathematics for liberal arts and social science students. Last year it was called "the beauty of mathematics". Here is a blog post about the course, and the course page which contains all the presentations (in Hebrew). The main topics of the course were: Numbers: irrational numbers, imaginary numbers, different representations of numbers, prime numbers and their properties. Shapes: geometry from two dimensions to many dimensions. Infinity: the concept of infinity; the paradox of motion; how to add up infinitely many numbers. Riddles: mathematical puzzles and riddles. Models: mathematical models as the gate to science. Probability: the mathematics of luck. Games: mathematical games and the theory of games; mathematics in the social sciences. Of course, there is much to be chosen from, and it is quite important not to squeeze too much into a single course. - How did it go? The blog stops at lecture 1... – Victor Protsak Jun 22 2010 at 12:02 Teach them to make computer graphics that represent mathematical concepts. - There's certainly a huge amount of unrealized potential in that idea. Needham's Visual Complex Analysis is one contribution. I did this page for a very elementary class: math.umn.edu/~hardy/1031/handouts/March.3.pdf – Michael Hardy Jul 20 2010 at 22:41 In my second year at university I was approached by some staff members of our university's English department to develop a theatrical production that TAUGHT mathematics. The project was to be a part of a larger movement to try to increase interdisciplinary learning through the medium of the arts. The deal was that I could do this production instead of my second-year essay, which is normally a compulsory module, and I would be marked on the theatre piece instead. In the end it was for exactly these reasons that it fell through, since I knew that the theatre piece would take a far more significant amount of time than the 7-page essay that I would otherwise be writing. However, before ducking out, I did put some thought towards what would be most suitable to teach. In the end I decided that set theory would be the way forward, and I propose this as a sensible answer to your question. I think that there is a great motivation to teach students about set theory. At its most basic level this would be drawing Venn diagrams, and asking them to write certain unions and intersections in disjoint forms; or one could follow a book like Halmos. But alongside this the teacher can introduce the philosophical aspects, which should go some way toward arousing their interests. Furthermore, the historical aspect of the topic is fascinating. And (as if it couldn't get better), the number of paradoxes present in the topic which are (often) easily explainable to the uninitiated means that there will be a clear sense of how deep mathematics is, and how alive it still is today. - 6 Basic set theory is one of the driest areas of mathematics. It's all very formal, and I don't really see how you can inspire people with it.
There is a lot of very deep set theory as well, but you wouldn't get into it, and you'd lose all interest far before you got to anything resembling something interesting. – Harry Gindi Jun 19 2010 at 9:50 2 When I was in 12th grade a classmate saw me reading a thin book (maybe 80 pages) about set theory and asked me how anyone could possibly write a book that long about set theory. Everyone had been taught in 7th grade that set theory consists of understanding what unions and intersections are, and that it takes about two minutes to learn. – Michael Hardy Jun 19 2010 at 12:52 2 Michael's classmate story somehow reminds me of the following student comment on a Linear Algebra course I taught: "More time should have been spent on solving systems of linear equations, because obviously, that is the only useful part" (I think we only spent 3 weeks on them). Back to set theory: N.Ya.Vilenkin's "Stories about sets" (in Russian) is an exciting book that is perfect for high school students. – Victor Protsak Jun 21 2010 at 4:09 1 Let's not knock naive set theory. I think that if you could get liberal arts students to really understand intersections, unions, distributivity and de Morgan's laws, it could be time well spent. Being able to systematically reason about this can really be enlightening, and a valuable skill for, say, a future lawyer. – Thierry Zell Apr 25 2011 at 14:54 This year I have been teaching a maths course to liberal arts and linguistics students. My practice is to tell them what maths appears in our everyday life -- tell them that the *.mp3 and *.jpg files on a computer are actually using the idea of function approximation; tell them about the various applications of maths in the contemporary information technology field, such as error-correcting-code systems, Code-Division Multiple Access (CDMA) cellphone technology, etc. When the students know that maths is all around them, not just lying in the textbook, their interest comes out automatically. Even if they cannot fully understand the logic and the principles, they at least get the impression that maths is very useful in modern technologies. - A well-thought-out example that may serve as a good model for a course on mathematics for humanities students is Gerald Holton and Stephen G. Brush's Physics, the Human Adventure: From Copernicus to Einstein and Beyond (Rutgers University Press, 2001). It's the third edition of Introduction to Concepts and Theories in Physical Science (Addison-Wesley, 1952). Holton and Brush is not intended to be an "easy" book. The authors write in the preface, "The book is intended for a year course (two semesters or three quarters) in a general education or core program, taken primarily by nonscience majors who have an adequate background in mathematics (up to but not including calculus)" (xiv). The goal of their book is to present "a comprehensible account -- a continuous story line, as it were -- of how science evolves through the interactions of theories, experiments, and actual scientists. We hope the reader will thereby get to understand the scientific worldview. And equally important, by following the steps in key arguments and in the derivation of fundamental equations, the readers will learn how scientists think" (xiv; emphasis in original). One of the features that makes Holton and Brush unique is that the book makes use of both the history and the philosophy of science to create the story line.
A course on mathematics for humanities students ought to make use of the history and philosophy of mathematics for similar reasons. Doing so creates a context for students so that they can learn how mathematicians think. - How to calculate a 15%, 17.5%, and 20% tip without using a specially designed application. In all seriousness, a mix of practical math and practical summaries of problems which are yet to be solved, etc. - 9 I have to say, "practical math" doesn't sound like a very fun course to teach -- or take, for that matter. I think that an emphasis on this sort of thing may be partly responsible for the mistaken impression that math is "done" or "dead". That said, certainly "practical math" in the sense of basic numeracy (as others have suggested) is a crucially important skill for a person to have. But I suggest that a willingness to teach this at the university level sends the wrong message to primary and secondary school educators, who, in my opinion, should ultimately be responsible for teaching it. – Sam Lichtenstein Jun 19 2010 at 4:16 6 "Practical math" seems like the worst class ever. If you want to kill any remaining interest your students have in mathematics, this is a great plan. – Harry Gindi Jun 19 2010 at 9:46 There are nice options if your university's students all took calculus in high school. In that case, you might try some lightweight mixture of elementary differential equations, recurrence relations, generating functions, and game theory. You start out by explaining how differential equations arise in various branches of science. You next introduce recurrence relations, explaining the distinction between discrete and continuous mathematics and indicating how they arise in science and game theory. You then remind them about Taylor series and introduce the method of generating functions, showing that differential equations are used in solving discrete problems too. In this way, you could provide a cohesive course that builds upon itself, as mathematics is wont to do, requires computational homework, seriously discusses the notion of infinity, touches upon numerous applied topics, and shows how mathematics can be simultaneously convergent, surprising, and useful by introducing generating functions. If they're very quick, there is considerable flexibility for discussing algorithm running times and P != NP, or Dirichlet series generating functions and the Riemann zeta function, or whatever. You'd want to verify that elementary differential equations and Taylor series are still part of the AP Calculus AB syllabus, as well as the percentage of incoming students who've had that course. You should, however, suppress anything that requires multivariable calculus that only falls under the AP Calculus BC syllabus, which presumably few students took. - But I had in mind more-or-less typical liberal arts students in a state university who will take only one math course. It is rare for students of that kind who have done well in a calculus course in high school to have any idea what a theorem is, what a proof is, or what a definition is, or to suspect that mathematics doesn't just consist of dogmas to be memorized. To assume that students who got grades of "A" in calculus in high school understood any of it would be naive; the system actively discourages all understanding. – Michael Hardy Apr 25 2011 at 15:26
Since my interests vary, I took sailing, theater arts, philosophy, etc., and, outside of school, fencing-related martial arts. Three segments stand out in my memory: • 10 years ago, when our Calculus teacher was writing out the value of pi, he just kept going to the 9th decimal place and we were all awed by the demonstration. • Our Linear Algebra professor used to quip: Parlez-vous mathématique? ("Do you speak mathematics?") and he even instilled in me the idea that "Mathematics is the study of forms", which will remain with me forever even though I could not complete the course (2x). • The last is from our martial arts class related to stick fighting: it's not just about clicking sticks; you must showcase yourself. Now, there have been some remarkable math personalities — reading from biographies, Tarski for one — who would bring incredible energy to the classroom. I felt the lack of it during class. Keeping these in mind, along with Howard Gardner's theory of multiple intelligences, whereby we each acquire learning in our own way, I humbly point to a question on Math.SE which I opened under a former account; admittedly the accepted bounty does not do justice to the other answer, which may have been edited later. As the thread shows, I, as a student whose interests lean more to the humanities side, would — if I were the teacher — take a still life of a daily example and break it down mathematically. How many grains of sand are there in the universe? Why should 2+2 = 4? (and keep carrying on the conversation with the student). Anyone interested in knitting and crochet: how would you describe the concept of a knot in mathematics? etc. But in the end, it's about showcasing your art. Sometimes if a professor memorizes a mass of information and gives a dramatic performance in class without looking at notes, it will be etched forever in students' minds. - I think a little bit of mathematical logic should be included. Not-too-technical descriptions of what formalized proofs are, what models are, Tarski's definition of truth, Gödel's incompleteness theorem, the halting problem for Turing machines. Also Cantor's diagonal proof that the reals are uncountable, and maybe some historical info about how the religious community reacted to that theorem (apparently they reacted badly; see the Wikipedia biography of Cantor). Chaos and the butterfly effect deserve a mention. A few examples from computational complexity theory could be cool. Scott Aaronson has some nice ones here. I once explained public-key cryptography to a music major (showing how RSA worked) and I think he understood it and appreciated it. -
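For a sense of the kind of RSA walkthrough that works at this level, here is a toy example with artificially small numbers (my own, not the one from that conversation): take primes $p=3$ and $q=11$, so $n=pq=33$ and $\varphi(n)=(p-1)(q-1)=20$. Choose public exponent $e=3$ and private exponent $d=7$, since $ed=21\equiv 1 \pmod{20}$. To encrypt the message $m=4$, compute $c=m^e \bmod n = 64 \bmod 33 = 31$; to decrypt, compute $c^d \bmod n = 31^7 \bmod 33$, and since $31 \equiv -2 \pmod{33}$ and $(-2)^7 = -128 \equiv 4 \pmod{33}$, we recover $m=4$.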
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9650983810424805, "perplexity_flag": "middle"}
http://mathhelpforum.com/differential-equations/93761-2nd-order-ode-euler-equation.html
# Thread:

1. ## 2nd order ODE - Euler Equation

Given: $2x^2y'' + 3xy' + (2x^2 - 1)y = 0$ Question: Find the indicial equation and determine the two singular roots. I know how to solve it when the equation is of the form: $x^2y'' + \alpha xy' + \beta y = 0$ However, here $\beta = 2x^2 - 1$ and is not a constant. How am I supposed to find the indicial/characteristic equation here? In the solutions, they just ignore the $2x^2$ factor and proceed as if the equation were $2x^2y'' + 3xy' - y = 0$. I just don't understand how the $2x^2$ factor can be ignored. Any suggestions?

2. I assume you're looking for a Frobenius solution. Here's a method for finding the indicial equation when the coefficients are polynomials. Divide each term of the DE by the leading polynomial coefficient, and write the DE in the form: $y^{\prime\prime}+\frac{p_0+p_1x+p_2x^2+\ldots}{x}\,y^{\prime}+\frac{q_0+q_1x+q_2x^2+\ldots}{x^2}\,y=0.$ Now your indicial equation is just $r(r-1)+p_0r+q_0=0.$ So in your example, we can rewrite the DE as $y^{\prime\prime}+\frac{\frac{3}{2}}{x}y^\prime + \frac{-\frac{1}{2}+x^2}{x^2}y=0,$ so that $p_0=3/2$ and $q_0=-1/2.$ Now the indicial equation is $r(r-1)+\frac{3}{2}r-\frac{1}{2}=0,$ or $(r-\frac{1}{2})(r+1)=0.$ Now you're on your way.
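A quick sanity check on this result (my own verification, not part of the original thread): substituting the trial solution $y = x^r$ into $2x^2y'' + 3xy' - y$ — the truncation in which the $2x^2$ part of $\beta$ is ignored, since that term only shifts higher powers of $x$ in the Frobenius series and cannot affect the lowest-order coefficient — gives $[2r(r-1) + 3r - 1]\,x^r = (2r-1)(r+1)\,x^r$, so the indicial roots are $r = 1/2$ and $r = -1$, in agreement with $(r-\frac{1}{2})(r+1)=0$ above.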
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 13, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9307551980018616, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/225922/an-example-for-a-calculation-where-imaginary-numbers-are-used-but-dont-occur-in/231641
# An example for a calculation where imaginary numbers are used but don't occur in the question or the solution. In a presentation I will have to give an account of Hilbert's concept of real and ideal mathematics. Hilbert wrote in his treatise "Über das Unendliche" (page 14, second paragraph. Here is an English version - look for the paragraph starting with "Let us remember that we are mathematicians") that this concept can be compared with (some of) the use(s) of imaginary numbers. He was probably thinking of a calculation where the setting and the final solution have nothing to do with imaginary numbers, but where there is an easy proof using imaginary numbers. I remember once seeing such an example but cannot find one, so: Does anyone know about a good and easily explicable example of this phenomenon? ("Easily" means that engineers and biologists can also understand it well.) - 1 some real integrals are easier to solve using complex analysis – wim Nov 1 '12 at 3:22 I wouldn't call it simple, but the derivation of the spectral theorem uses imaginary numbers even though it's a theorem about real symmetric matrices. – Adam Wuerl Nov 3 '12 at 4:47 ## 16 Answers One nice example is the sum $$\cos x + \cos 2x + \cos 3x + \cdots + \cos nx$$ This can be worked out using trigonometric identities, but it turns out to be surprisingly simple with this neat trick: $$\sum_{k=1}^n \cos(kx) = \sum_{k=1}^n\mathscr{Re}\{e^{ikx} \} = \mathscr{Re} \sum_{k=1}^n e^{ikx} = \mathscr{Re}\left\{ \frac{e^{i(n+1)x} - e^{ix}}{e^{ix} - 1} \right\}$$ because the sum turns into a geometric series. (computing the real part to get an answer in terms of trigonometric functions is not difficult, but is a little tedious) - – Gone Oct 31 '12 at 17:00 The canonical example seems to be Cardano's solution of the cubic equation, which requires non-real numbers in some cases even when all the roots are real. The mathematics is not as hard as you might think; and as an added benefit, there is a juicy tale to go with it – as the solution was really due to Scipione del Ferro and Tartaglia. Here is a writeup, based on some notes I made a year and a half ago: First, the general cubic equation $x^3+ax^2+bx+c=0$ can be transformed into the form $$x^3-3px+2q=0$$ by a simple substitution of $x-a/3$ for $x$. We may as well assume $pq\ne0$, since otherwise the equation is trivial to solve. So we substitute in $$x=u+v$$ and get the equation into the form $$u^3+v^3+3(uv-p)(u+v)+q=0.$$ Now we add the extra equation $$uv=p$$ so that $u^3+v^3+q=0$. Substituting $v=p/u$ in this equation, then multiplying by $u^3$, we arrive at $$u^6+2qu^3+p^3=0,$$ which is a quadratic equation in $u^3$. Noticing that interchanging the two roots of this equation corresponds to interchanging $u$ and $v$, which does not change $x$, we pick one of the two solutions, and get: $$u^3=-q+\sqrt{q^2-p^3},$$ with the resulting solution $$x=u+p/u.$$ The three different cube roots $u$ will of course yield the three solutions $x$ of the original equation. Real coefficients In the case when $u^3$ is not real, that is when $q^2<p^3$, we could write instead $$u^3=-q+i\sqrt{p^3-q^2},$$ and we note that in this case $\lvert u\rvert=\sqrt{p}$, so that in fact $x=u+\bar u=2\operatorname{Re} u$. In other words, all the roots are real. In fact the two extrema of $x^3-3px+2q$ are at $x=\pm\sqrt{p}$, and the values of the polynomial at these two points are $2(q\mp p^{3/2})$.
The product of these two values is $4(q^2-p^3)<0$, which is another way to see that there are indeed three real zeros. - After you substitute $x= u+v$ into the cubic equation, where did $3u^2v +3uv^2$ go? More generally, I think I'm just confused about the middle term $3(uv-p)$. – Jason DeVito Oct 31 '12 at 15:47 @JasonDeVito: $3u^2v+3uv^2=3uv(u+v)$ joined up with $-3p(u+v)$ to yield $3(uv-p)(u+v)=0$, since we posited the extra equation $uv=p$. The point being, by setting $x=u+v$ we have gained an extra degree of freedom, which is then used up in the extra equation $uv=p$, so that all these terms cancel. – Harald Hanche-Olsen Oct 31 '12 at 17:31 So, should there be a $(u+v)$ with the $3(uv-p)$ term in the line right after "and get the equation into the form"? (Not that it matters, because after setting $uv=p$ that term disappears anyway.) I'm just trying to check my understanding. (I'm quite embarrassed to admit I've known about the cubic and quartic formulas (and nonexistence of higher degree formulas) for quite some time, but never actually went through the cubic or quartic derivations in any kind of detail. I had chosen your post to finally do so, so I was just making sure I understood every step.) – Jason DeVito Oct 31 '12 at 17:42 @JasonDeVito: You're right. Now fixed. Thanks. – Harald Hanche-Olsen Oct 31 '12 at 18:12 Yes, this is the example you learn about in history of math where people realized that they should actually pay attention to these complex numbers. – Graphth Nov 6 '12 at 21:46 Well, you can consider this sequence of integers: $$u_0 = 2; \quad u_1 = 1; \quad u_{n+2} = u_{n+1} - u_n$$ This recursive definition is closely linked to the equation: $$x^2 = x - 1 \Leftrightarrow x^2 - x + 1 = 0$$ This equation has no real solutions; in $\mathbb{C}$, the solutions are $\frac{1 \pm i\sqrt3}{2}$, and you can easily prove by induction that: $$u_n = \left(\frac{1 + i\sqrt3}{2}\right)^n + \left(\frac{1 - i\sqrt3}{2}\right)^n$$ which is an... integer! So yes, you can have complex numbers that ease calculations of totally non-complex problems. - The sequence $u_n$ begins 1,1,0,1,-1,2,-3,5,-8,13,-21,34 and is the Fibonacci sequence multiplied by (-1)^n. The quadratic equation and the formula with n-th powers of 6th roots of 1 is for the recurrence, $u_{n+2} = u_{n+1} - u_n$, which is periodic of order 6, and begins 1,1,0,-1,-1,0,1,1,0,-1,-1,0,1,1,0. – zyx Nov 7 '12 at 23:39 Primes congruent to one mod 4 are sums of two squares. You won't be able to explain the whole argument to an audience of non-mathematicians in a few minutes, but you would be able to show how imaginary numbers give a way of attacking this problem: e.g. you say that 5 is prime so doesn't have a non-trivial factorization in the integers, but you can show them that $(2-i)(2+i) = 5$, so that there is an interesting factorization in the Gaussian integers, and this leads to 5 being the sum of two squares. - There is a large class of real definite integrals where the evaluation in closed form can be easily (or not so easily) done using the complex method of residues, but for which there is no known "real" method of evaluation. - One example which will be easily understood (and perhaps appreciated) by many people is the fact that complex numbers allow the complete factorization of a real polynomial. Suppose we have $p(x)=x^4 + 4$.
It may not be immediately obvious how to factor this equation, but in the complex plane, it is easy to see that the equation $x^4 = -4$ has the solutions $$x_1 = 1+i,\ \ x_2 =-1+i,\ \ x_3 =-1-i,\ \ x_4 =1-i$$ This allows the factorization $$p(x) = \left[x - (1+i)\right]\left[x - (1-i)\right]\left[x - (-1+i)\right]\left[x - (-1-i)\right]$$ We can now recombine the conjugate root pairs into quadratic polynomials $$\left[x - (1+i)\right]\left[x - (1-i)\right] = x^2 - 2x + 2$$ $$\left[x - (-1+i)\right]\left[x - (-1-i)\right] = x^2 + 2x + 2$$ for the factorization $$p(x) = (x^2 - 2x + 2)(x^2 + 2x + 2)$$ Techniques like this, for example, prove that every polynomial with real coefficients can be reduced into linear and quadratic factors, or that every real polynomial has a partial fraction decomposition in its familiar form. - I think one of the most useful and simple examples is obtaining the formulas for $\sin(n\theta)$ and $\cos (n\theta)$ using the binomial theorem on de Moivre's formula: $$(\cos\theta +i\sin\theta)^n=\cos(n\theta)+i\sin(n\theta)$$ Therefore, expanding and identifying real and imaginary parts, one obtains: $$\sin (n\theta)=\sum_{k=0}^n {n \choose k}\cos^k\theta\sin^{n-k}\theta\sin\left(\frac{n-k}{2}\pi\right)\\ \cos (n\theta)=\sum_{k=0}^n {n \choose k}\cos^k\theta\sin^{n-k}\theta\cos\left(\frac{n-k}{2}\pi\right)$$ - This example is perhaps not elementary enough, but it is interesting: the usual proof of the central limit theorem uses complex analysis, that is, imaginary numbers, via the characteristic function of a probability distribution. But complex numbers do not appear in the problem formulation. - The radius of convergence for the power series of $f(x) = \frac{1}{1+e^x}$ about $x=0$ is $\pi$. - And how do complex numbers play a role in this case? – Peter Tamaroff Oct 31 '12 at 22:10 @PeterTamaroff: I think it is the same example as Itay Weiss's: the function has a pole at $x=i\pi$, and all the poles are of the form $x=(2k+1)i\pi$ – tomasz Oct 31 '12 at 22:12 @tomasz Oh! =) – Peter Tamaroff Oct 31 '12 at 22:12 The fact that an odd-degree polynomial $P$ with real coefficients must have at least one real root can be proved very quickly using complex numbers as follows. By the Fundamental Theorem of Algebra $P$ has, counting multiplicities, $n$ complex roots. It is easily verified algebraically that since the coefficients of $P$ are all real, if $z$ is a root then so is $\bar{z}$. Thus we conclude that any non-real root brings a friend - its conjugate, which is also a root. So the non-real roots come in distinct pairs. The total number of roots is odd, which means there must be at least one root with no friend. This root must thus be a real number, hence we found a real root. Edit: There is also a more analytic proof, via the IVT, of the result (see comments), and there are also proofs of the fundamental theorem of algebra that use the result above. However, there is a very elementary proof of the fundamental theorem of algebra that certainly uses about as little as is required for the IVT. It is basically the complex-analytic proof via the minimum modulus argument, except that for a polynomial the proof of the minimum modulus argument can be given directly without using any sophisticated complex analysis. (see http://en.wikipedia.org/wiki/Fundamental_theorem_of_algebra#Complex-analytic_proofs) Another example is to show that the radius of convergence of the Maclaurin series for $1/(1+x^2)$ does not exceed $1$.
By considering the given function as a complex function it is seen to have a pole at $z=i$, and since the convergence set cannot contain a pole, the argument is complete. - The failure of Maclaurin (or Taylor) series due to non-real singularities was quite an intriguing thing to me back then... – Tobias Kienzler Oct 31 '12 at 13:18 2 Your first example is not convincing: For me, every proof of the Fundamental Theorem of Algebra uses the fact that a real odd-degree polynomial has a real root. Your second example is a very good one. – GEdgar Oct 31 '12 at 14:03 1 @GEdgar: every proof? That seems rather strong. But the standard proof for odd-degree real polynomials is way simpler than any proof of FTA, on that I agree. ($P(x)$ diverges to opposite infinities as $x\to\pm\infty$; now use the intermediate value theorem.) – Harald Hanche-Olsen Oct 31 '12 at 14:23 there is a very elementary proof of FTA that uses nothing but very straightforward properties of complex numbers. That proof is about as complicated as the more analytic proof using IVT. – Ittay Weiss Oct 31 '12 at 20:35 An easy enough example for engineers would be expressing a vector rotation as multiplication of two complex numbers. $$\begin{bmatrix}x'\\y'\end{bmatrix} = \begin{bmatrix}x \cos a - y\sin a\\ y\cos a + x\sin a\end{bmatrix}$$ - Er, is this using complex numbers? Where is $i$? Or is this just using the same matrix math that you would have used for complex multiplication? – Larry Gritz Nov 5 '12 at 20:05 $Re^{i\varphi}\cdot e^{i\alpha}$ means rotating the complex number $x+iy$ (with $|x+iy|=R$) by $\cos\alpha +i\sin\alpha$. – Aki Suihkonen Nov 5 '12 at 20:09 Contour integration. Integrals of real-valued functions on the real line can sometimes be computed easily by transforming them to integrals over a closed path in the complex plane, using the residue theorem, and letting some part of the path go to some limit. - When you solve differential equations with the characteristic polynomial and you get complex roots, you combine solutions to get real-valued functions. - You can use complex numbers to solve in $\mathbb{Z}^4$ the following system of equations: $\left\{ \begin{array}{l} ac-bd=1 \\ bc+ad=2 \end{array} \right.$ Let $z_1=a+ib$ and $z_2=c+id$. Thus, the system is equivalent to $z_1z_2=1+2i$. So $(a^2+b^2)(c^2+d^2)=|z_1z_2|^2=5$, but $5$ is prime, so either $\left\{ \begin{array}{l} a^2+b^2=1 \\ c^2+d^2 =5 \end{array} \right.$ or $\left\{ \begin{array}{l} a^2+b^2=5 \\ c^2+d^2 =1 \end{array} \right.$. In the first case $z_1\in\{1,-1,i,-i\}$ and $z_2=(1+2i)/z_1$, which gives the solutions $(a,b,c,d)=(1,0,1,2)$, $(-1,0,-1,-2)$, $(0,1,2,-1)$ and $(0,-1,-2,1)$; the second case gives the same solutions with $(a,b)$ and $(c,d)$ interchanged. - Suppose you have a mass vibrating about equilibrium, and it's subject to a frictional force that's proportional to the velocity. The equation of motion is of the form $d^2x/dt^2+b\,dx/dt+cx=0$, where $b$ and $c$ are real-valued constants. By far the easiest way to solve this is to take trial solutions of the form $x=e^{rt}$. Typically (if $b$ is not too big) we end up with two possibilities $r_1$ and $r_2$, which are complex conjugates of one another. However, you can match the initial conditions with a linear combination of the corresponding solutions $x_1$ and $x_2$, and the resulting motion is purely real. -
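A concrete instance of that last answer (the numbers here are my own, chosen for illustration): take $x'' + 2x' + 5x = 0$. The trial solution $x=e^{rt}$ gives $r^2+2r+5=0$, so $r=-1\pm 2i$, and the two complex exponential solutions $e^{(-1\pm 2i)t}$ combine into the real general solution $x(t)=e^{-t}\left(A\cos 2t+B\sin 2t\right)$. Matching real initial conditions forces $A$ and $B$ to be real, so the imaginary numbers do all the work in the middle and vanish from the final answer.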
$$\log(1+ix)=\log\left(\sqrt{1+x^2}\,e^{i\arctan x}\right)=\tfrac{1}{2}\log(1+x^2)+i\arctan x,$$ so $$2\log(1+ix)=\log(1+x^2)+2i\arctan x.$$ On the other hand, $$2\log(1+ix)=\log\left((1+ix)^2\right)=\log\left((1-x^2)+2ix\right)=\log(1+x^2)+i\arctan\frac{2x}{1-x^2}$$ (for $|x|<1$), so $$2\arctan x = \arctan\frac{2x}{1-x^2}.$$ This procedure can be extended to $3\arctan x$, ..., $n\arctan x$. - You can use LaTeX-commands in the answers. It improves the readability. – AndreasS Nov 10 '12 at 16:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 86, "mathjax_display_tex": 21, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9416480660438538, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/176361-find-area-region-double-integration.html
# Thread:

1. ## Find the area of the region by double integration

The question states: Find the area of the region by double integration. The region inside the circle $r=4\cos\theta$ but outside the circle $r=2$. I set up the problem with $\int_{0}^{2\pi} \int_{2}^{4\cos\theta} r\,dr\,d\theta$ However, the answer I got in the end was $4\pi$; I'm assuming it's because of the way I set it up rather than a calculation error, since I did it several times. The correct answer according to the textbook is $4\pi/3 +2\sqrt{3}$. Thanks, any help would be appreciated.

2. Your $\theta$ limits are incorrect.

3. Thank you very much, I figured out what the actual $\theta$ limits are.
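For completeness, here is the corrected computation (my own working, not from the thread): the two circles intersect where $4\cos\theta = 2$, i.e. at $\theta=\pm\pi/3$, so $$A=\int_{-\pi/3}^{\pi/3}\int_{2}^{4\cos\theta} r\,dr\,d\theta=\int_{-\pi/3}^{\pi/3}\left(8\cos^2\theta-2\right)d\theta=\int_{-\pi/3}^{\pi/3}\left(2+4\cos 2\theta\right)d\theta=\frac{4\pi}{3}+2\sqrt{3},$$ which matches the textbook answer.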
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.941757321357727, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?t=321144
Physics Forums

## Why the sequence is not convergent?

In a book I am reading, they mention the following as an example of a Cauchy sequence which is not convergent: Consider the set of all bounded continuous real functions defined on the closed unit interval, and let the metric of the set be $$d(f,g)=\int_0^1 \! |f(x)-g(x)| \, dx.$$ Let $(f_n)$ be a sequence in this space defined as: $f_n(x)=1$ if $0 \leq x \leq 1/2$; $f_n(x)=(-2)^n(x-1/2)+1$ if $1/2 \leq x \leq 1/2+(1/2)^n$; $f_n(x)=0$ if $1/2+(1/2)^n\leq x \leq 1$. I can see that this is a Cauchy sequence, but I can't see how this sequence does not converge. I would say that it converges to: $f_n(x)=1$ if $0 \leq x \leq 1/2$; $f_n(x)=0$ if $1/2 < x \leq 1$. I would appreciate it if anybody can help me to see why this sequence does not converge.

Quote by symbol0: "I can see that this is a Cauchy sequence, but I can't see how this sequence does not converge. I would say that it converges to: ..."

I'm going to assume you meant to say something like the following: it converges to the function $f$ defined by $f(x)=1$ if $0 \leq x \leq 1/2$, $f(x)=0$ if $1/2 < x \leq 1$.

Quote by symbol0: "I would appreciate it if anybody can help me to see why this sequence does not converge."

A good way to see these things is to try and prove that it does, and see where things fail. You need to: (1) Show that $f$ is well-defined. (2) Show that $\lim_{n \rightarrow +\infty} d(f_n, f) = 0$. Try it out, and let us know how it goes!

Thanks for the reply Hurkyl. First of all, I think this sequence is not really in the mentioned space, because when $n$ is even the functions are not continuous. But even if we take only the functions with odd $n$, I think the sequence converges to the function I wrote, but this function is not continuous, so it is not in the space. Thus the sequence does not converge in the space. Am I right?

Quote by symbol0: "I think the sequence converges to the function I wrote, but this function is not continuous, so it is not in the space."

That last part is the key thing here -- $f$ is not in the space. In fact, technically speaking, $d$ was only defined for pairs of elements in the space, so it would be wrong to say that the sequence converges to $f$! You only get that convergence when looking at the larger space.

Thanks Hurkyl.

Sorry to bump an old thread, but I'm having a problem seeing how this sequence is Cauchy. I used the metric on arbitrary $f_n(x)$ and $f_m(x)$, but I ended up with constants and other terms that I couldn't make arbitrarily small by taking $m$ and $n$ large enough. Could someone give me a brief overview of how to prove it?

$f_n$ and $f_m$, with $m > n$, differ only on the interval from $1/2$ to $1/2+ 1/2^n$, an interval of length $2^{-n}$, which goes to 0 as $m$ and $n$ go to infinity. The integrand doesn't have to go to 0, it only has to be bounded. As the length of the interval goes to 0, the integral will go to 0.
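To make Hurkyl's step (2) concrete (my own computation, not from the thread): $f_n$ and the candidate limit $f$ agree outside the interval $(1/2,\,1/2+2^{-n})$, on which $0\le f_n\le 2$, so $$d(f_n,f)=\int_{1/2}^{1/2+2^{-n}}|f_n(x)|\,dx\le 2\cdot 2^{-n}\to 0.$$ So the sequence does converge to the discontinuous $f$ once we pass to the larger space of integrable functions; since no continuous function can serve as its limit instead, the sequence has no limit in the original space, which is therefore not complete.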
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 21, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9454053640365601, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/169900/what-is-the-3rd-side-length-of-isosceles-triangle
# What is the 3rd side length of an isosceles triangle?

I have an isosceles triangle whose side lengths are $10\;\mathrm{cm}$, $10\;\mathrm{cm}$ and $x$. If the angle between the two equal sides is to be $120^\circ$, then what should $x$ be? - ## 2 Answers Angles opposite to equal sides are equal, so one angle is $120^\circ$ while the others are $30^\circ$ each (the angles opposite to the equal sides can't be $120^\circ$, otherwise the sum of the angles of the triangle would be greater than $180^\circ$). Draw a perpendicular from the vertex (the intersection of the two equal sides) to the opposite side; it divides the opposite side into two equal halves. Let one half of that be $x$; then $$\frac{x}{10}=\cos30^\circ\implies x=5\sqrt 3$$ Thus the side length $=2x=10\sqrt 3$. - But also first rule out the possibility that the two equal angles are $120^\circ$. – GEdgar Jul 12 '12 at 19:24 Then the third angle is minus 60 degrees (180 - 120 - 120). – marty cohen Jul 13 '12 at 4:24 I edited my answer and mentioned the possibility. Check it. – Avatar Jul 13 '12 at 4:56 Hint: Use the law of cosines. -
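Carrying the second hint through (my own working): with the two sides of length $10$ enclosing the $120^\circ$ angle, the law of cosines gives $$x^2=10^2+10^2-2\cdot10\cdot10\cos 120^\circ=200-200\left(-\tfrac{1}{2}\right)=300,$$ so $x=10\sqrt{3}$, in agreement with the first answer.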
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8888353705406189, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/313062/statistics-interval-estimation-proof
# Statistics - Interval Estimation - Proof

Please tell me how to start on this proof or give me some kind of hint. Show that if $X_1,X_2,\dots,X_n$ denote an iid sample from $N(\mu,\sigma^2)$ and $\sigma^2$ is known, then the best $(1-\alpha)100\%$ confidence interval is $\Big(\bar{x} \mp z_\tfrac{\alpha}{2}\frac{\sigma}{\sqrt{n}}\Big).$ Thank you! - ## 1 Answer Hint: $$\frac{\bar{X}-\mu}{\sigma/\sqrt{n}}\sim\mathcal{N}(0,1),$$ where $\bar{X}=\frac{1}{n}\sum_{i=1}^n X_i$. -
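Carrying the hint a step further (a sketch, not the full optimality proof): since $\frac{\bar X-\mu}{\sigma/\sqrt n}\sim\mathcal N(0,1)$, $$1-\alpha = P\!\left(-z_{\alpha/2}\le\frac{\bar X-\mu}{\sigma/\sqrt n}\le z_{\alpha/2}\right) = P\!\left(\bar X-z_{\alpha/2}\frac{\sigma}{\sqrt n}\le\mu\le\bar X+z_{\alpha/2}\frac{\sigma}{\sqrt n}\right),$$ which is exactly the stated interval. The "best" (shortest) part then follows from the symmetry and unimodality of the standard normal density: among all intervals $[\bar x - a,\ \bar x + b]$ with coverage $1-\alpha$, the symmetric choice $a=b=z_{\alpha/2}\,\sigma/\sqrt n$ minimizes the length.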
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8335846662521362, "perplexity_flag": "middle"}
http://quant.stackexchange.com/questions/652/enhancing-monte-carlo-convergence-crude-method/666
# Enhancing Monte-Carlo convergence (crude method)

I am currently doing a project involving the Monte-Carlo method. I wonder if there are papers dealing with a "learning" refinement method to enhance the MC convergence; for example: Objective: estimate $E(X)\approx \frac{1}{10\,000}\sum _{i=1}^{10\,000}X_i$. -> Step 1 (500 simulations): $\mathrm{approx}_1=\frac{1}{500}\sum _{i=1}^{500}X_i$. (i) Define an 'acceptance interval' $A_1 = [\mathrm{approx}_1-\epsilon_1,\mathrm{approx}_1+\epsilon_1]$, where $\epsilon_1$ could be a function of the empirical variance and other statistics. -> Step 2 (500 further simulations): throwing out all simulations that fall outside the interval $A_1$, $\mathrm{approx}_2=\frac{1}{500}\sum _{i=1}^{500}X_i^{(2)}$. New 'acceptance interval' $A_2 = [\mathrm{approx}_2-\epsilon_2,\mathrm{approx}_2+\epsilon_2]$, where $\epsilon_2 < \epsilon_1$, ..., with $\epsilon_k \to 0$. - 1 I am not sure that you want to discard observation $X_i$ because it produces an estimate that falls outside your target interval. You raise the number of iterations $n$ to lower the standard error of your estimate. I don't think there's a way around this. (Although if you wanted a better idea of what the distribution of estimates looks like in a certain neighborhood, then you could provide $X_i$ from a specific portion of the distribution to provide estimates in the correct neighborhood.) – richardh♦ Mar 8 '11 at 1:17 ## 1 Answer If your variable of integration is truly one-dimensional, as you seem to be saying, then you should be using quadrature to evaluate the expectation integral. The computational efficiency of quadrature is much higher than Monte Carlo in one dimension (even accounting for modified sampling). If your problem is actually multidimensional, your best bet is to use the first few iterations (you suggest 500 above) to help choose a scheme for importance sampling. Your windowing scheme is a different trick sometimes labeled stratified sampling, and tends to get tricky from a coding perspective. To perform importance sampling, you will modify the distribution of your random samples with what is known as an equivalent measure so that most of them fall in the "interesting" region. The easiest technique is to ensure you are sampling from the multivariate normal distribution, and then shift the mean and variance of your samples such that, say, 90% of them fall within your "interesting" region. Having shifted your samples, you need to then track their likelihood ratio (or Radon-Nikodym derivative) versus the original distribution, because your samples now need to be weighted by that ratio in your Monte Carlo sum. In the case of a shift in normal distributions, this is fairly easy to compute for each sample $\vec{x}$, as $$\frac{1}{\sqrt{2\pi\ \text{det}A^{-1}}}\exp{\left[-\frac12 (x-b)A(x-b)^t\right]}$$ where $A^{-1}$ and $b$ are the covariance matrix and mean of your change to the original multivariate normal. -
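To make the mean-shift idea tangible, here is a minimal one-dimensional sketch (my own illustration, not code from the answer; the target $P(X>3)$ for $X\sim N(0,1)$, the shift $\mu=3$, and the Box-Muller sampler are all choices made for this example). The likelihood ratio for a mean shift is $\phi(x)/\phi(x-\mu)=e^{\mu^2/2-\mu x}$:

```
import System.Random

-- Box-Muller transform: two uniform draws -> one standard normal draw
boxMuller :: Double -> Double -> Double
boxMuller u1 u2 = sqrt (-2 * log u1) * cos (2 * pi * u2)

-- An infinite stream of N(0,1) samples from a generator
normals :: StdGen -> [Double]
normals g = go (randomRs (1e-12, 1.0) g)
  where
    go (u1:u2:us) = boxMuller u1 u2 : go us
    go _          = []

-- Plain Monte Carlo estimate of P(X > 3) for X ~ N(0,1): very noisy,
-- because almost no samples land in the rare-event region
plainMC :: Int -> StdGen -> Double
plainMC n g = sum [1 | x <- take n (normals g), x > 3] / fromIntegral n

-- Importance sampling: draw from N(3,1) so most samples are "interesting",
-- and weight each one by the likelihood ratio phi(x) / phi(x - 3)
isMC :: Int -> StdGen -> Double
isMC n g = sum [w (z + mu) | z <- take n (normals g), z + mu > 3]
           / fromIntegral n
  where
    mu  = 3
    w x = exp (mu * mu / 2 - mu * x)  -- Radon-Nikodym derivative of the shift

main :: IO ()
main = do
  g1 <- newStdGen
  g2 <- newStdGen
  print (plainMC 100000 g1) -- erratic across runs
  print (isMC 100000 g2)    -- stable, near 1 - Phi(3) ~ 1.35e-3
```

The plain estimator rarely sees the event at all, while the shifted estimator lands most samples in the interesting region and re-weights them — exactly the variance reduction the answer describes.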
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9113051295280457, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/29908/how-is-calculated-the-potential-between-two-capacitors-in-series
# How is the potential between two capacitors in series calculated?

Suppose you have two capacitors in series: The voltage at the middle point will be: $$V_X = V_1 \frac{C_1}{C_1+C_2}$$ How can this be explained? It's been asked in electronics, and explained in terms of impedance and charge equality, but none of the explanations is satisfying to me, as I think it should involve charge conservation (Gauss's theorem?) and/or electric fields/potentials. Could you enlighten me? - The charge equalization version is a charge conservation argument. Note that there is an isolated conductor between the capacitors and that it starts off bulk neutral... – dmckee♦ Jun 11 '12 at 14:33 @dmckee: it is, but it's not quite well explained there. Note that I mentioned it in a comment, and it's been reprised in the answer, but without further explanation – clabacchio Jun 11 '12 at 14:35 ## 3 Answers Suppose you imagine the battery to be a variable voltage, and start with the voltage at zero. Obviously everything is uncharged. Now turn the battery up to 1V. As you do this positive charge leaves the positive terminal and an equal and opposite negative charge leaves the negative terminal. We know the charges leaving the positive and negative terminals must be the same because the battery is a conductor and can't develop a net charge like a capacitor. Let's call the charge that leaves the battery $Q$. The only place the charge that leaves the battery can go is onto the capacitors, so both capacitors now have a charge of $Q$ on them. We know that for a capacitor of capacitance $C$, the voltage across the capacitor is given by: $$V = \frac{Q}{C}$$ Call the voltage of the top (1$\mu$F) capacitor $V_1$, and the voltage of the bottom (2$\mu$F) capacitor $V_2$. Then: $$V_1 = \frac{Q}{C_1}$$ $$V_2 = \frac{Q}{C_2}$$ Dividing the first equation by the second plus a bit of quick rearrangement gives: $$V_1 = \frac{C_2}{C_1} V_2$$ The two voltages must add up to 1V because we have a 1V battery, therefore: $$V_1 + V_2 = 1$$ If you substitute for $V_1$ you get: $$\frac{C_2}{C_1} V_2 + V_2 = 1$$ and dividing through by $(1 + C_2/C_1)$ gives: $$V_2 = \frac{1}{1 + C_2/C_1}$$ Tidy this up by multiplying the top and bottom of the right hand side by $C_1$ and you get the equation you're trying to prove: $$V_2 = \frac{C_1}{C_1 + C_2}$$ Just to check, feed in $C_1 = 1$ and $C_2 = 2$ and $V_2$ does indeed come out as 1/3V. - Detailed, but it doesn't explain the assumption that both capacitors have charge Q – clabacchio Jun 11 '12 at 18:10 If charge +Q leaves the battery anode then charge -Q must leave the cathode because the battery can't have a net charge. That means the top plate of the top capacitor has a +Q charge and the bottom plate of the bottom capacitor has a -Q charge. But these charges are now attracting/repelling the electrons in the wire between the two capacitors. The +Q charge on the top capacitor will attract electrons from the wire until its bottom plate builds up a charge of -Q. Likewise the -Q charge on the bottom capacitor will repel electrons until its top plate has a +Q charge. So the charge ...
– John Rennie Jun 11 '12 at 18:15 This is really just a restatement of John Rennie's answer, but it might be a bit easier to follow... Assume both capacitors are initially uncharged (important, since otherwise the voltage of their common node is undefined) and the voltage source is 0. Now ramp the voltage source up to 1V. During this ramp up, the same current i flows through both capacitors (since they're connected in series), so $$i= C_1\frac{dv_{c1}}{dt} = C_2\frac{dv_{c2}}{dt}$$ So the rates of change of the capacitor voltages are inversely proportional to their capacitances, and so will the final capacitor voltages after integrating (using the fact that the capacitor voltages were initially zero). From that relation, it's straightforward to get to the expression in your question. - I think the quickest way to get an intuitive feel for this situation is to recognise the voltage at that part of the circuit is the potential difference between the two capacitors. And is in reference to the $0V$ terminal of the battery. Kirchoff's voltage law tells us that we have to sum to $0V$ in a loop. As C2 = $2 *$C1 we have to lose twice as much voltage over C2. As we are dealing with $1V$ that means we lose $.66V$ over C2 and $.33V$ over C1. Now tracing around the circuit in either direction quickly leads to the answer that we have a voltage of $.33V$ between the capacitors... -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 10, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9456902146339417, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/41210/proving-the-existence-of-the-magnetic-potential
Proving the existence of the magnetic potential

Suppose $\vec{B}$ is a differentiable vector field defined everywhere such that $\nabla\cdot \vec{B}=0$. Define $\vec{A}$ by the integral

$$A_1=\int_0^1 \lambda(xB_2(\lambda x,\lambda y,\lambda z)- yB_3(\lambda x,\lambda y,\lambda z)) d\lambda$$

together with its two cyclic permutations for $A_2,A_3$.

I'm trying to work out two things here:

$1.$ What is $\frac{d}{d\lambda}B_i(\lambda x,\lambda y,\lambda z)$?

$2.$ How can we use $1.$ to determine $\frac{\partial A_2}{\partial x}-\frac{\partial A_1}{\partial y}=B_3$?

From this we can deduce the existence of the magnetic potential by extending $2.$

This is what I have so far: Is $\frac{d}{d\lambda}B_i(\lambda x,\lambda y,\lambda z)=(x,y,z) \cdot \nabla B_i$? And can we bring the partial derivative on $A_i$ inside the integral? I have proceeded along these lines but have not found a way to substitute. Any help would be greatly appreciated!

- When you try to write the expression for the curl of A components, what are you left with? We need to see what you get in order to help you – lurscher Oct 19 '12 at 12:19

1 Answer

You are right in both your specific questions: your $\lambda$ derivative is right and the partial derivatives can go inside the integral. You have, however, one crucial mistake in your original formula, which should read

$$A_1=\int_0^1 \lambda(\quad z\quad B_2(\lambda x,\lambda y,\lambda z)- yB_3(\lambda x,\lambda y,\lambda z)) d\lambda$$

- i.e., replacing $x$ by $z$. This is needed to make "permutational sense": it now reads like "(1)=(3)(2)-(2)(3)", instead of "(1)=(1)(2)-(2)(3)", which is clearly wrong. Plugging this and the cyclically-permuted expression for $A_2$ into the curl then gives (after applying $\nabla\cdot\mathbf{B}=0$, your formula for the $\lambda$ derivative, and an integration by parts) the desired

$$\nabla\times\mathbf{A}=\mathbf{B}.$$

- Ah, sorry about that error. So by making the permutations we change the $z,y$ coefficients to $y,x$ etc? – Freeman Oct 19 '12 at 13:19

To make the permutations, change $x,y,z$ to $x_1,x_2,x_3$, and then change all 1s to 2s, 2s to 3s, and 3s to 1s. (In this example you should have something of the form "(2)=(1)(3)-(3)(1)".) If you do not also change the coordinates $y$ and $z$ above then you are fundamentally changing the geometrical situation. (Note, though, that the arguments $(\lambda x,\lambda y, \lambda z)$ to the magnetic field components are not altered!) – Emilio Pisanty Oct 19 '12 at 18:19
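For reference, question $1.$ is just the multivariate chain rule; written out (my own rendering, in the notation the answer confirms):

$$\frac{d}{d\lambda}B_i(\lambda x,\lambda y,\lambda z)=\left(x\,\frac{\partial B_i}{\partial x_1}+y\,\frac{\partial B_i}{\partial x_2}+z\,\frac{\partial B_i}{\partial x_3}\right)\Bigg|_{(\lambda x,\lambda y,\lambda z)}=(x,y,z)\cdot(\nabla B_i)(\lambda x,\lambda y,\lambda z).$$

This is what makes the integration by parts in the answer work: after $\nabla\cdot\mathbf{B}=0$ eliminates the extra terms, the curl computation reduces to $\int_0^1 \frac{d}{d\lambda}\left[\lambda^2 B_3(\lambda x,\lambda y,\lambda z)\right]d\lambda = B_3(x,y,z)$.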
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9255905747413635, "perplexity_flag": "head"}
http://stochastix.wordpress.com/tag/lti-systems/
# Rod Carvalho

## Posts Tagged ‘LTI Systems’

### Feedback systems in Haskell

January 30, 2012

We recently studied the first-order causal LTI system which is described by the difference equation $y (n) - \alpha y (n-1) = u (n)$, where $|\alpha| < 1$. The system can be represented by the block diagram

Observing the block diagram, we conclude that there are three basic operations: addition, multiplication by a constant coefficient, and delay. We arrive at the same conclusion if we rewrite the difference equation in the form $y (n) = u (n) + \alpha y (n-1)$. Do note that the output is obtained by adding the input to a scaled and delayed version of the output and, therefore, we have a feedback system.

The system’s input-output relationship can be written as follows

$y = u + \alpha\,\mathcal{D} (y)$

where $\mathcal{D}$ is the unit-delay operator. Note that we have signals rather than signal samples on both sides of the equation. To clarify, when I say “sample”, I mean the value of the signal at some (discrete) time instant. Let us introduce the linear operator $\mathcal{H}$ such that the output signal can be written as a function of the input, $y = \mathcal{H} (u)$, assuming zero initial condition. Hence, we obtain

$\mathcal{H} (u) = u + \alpha\,\mathcal{D} (\mathcal{H} (u))$.

It would be convenient to introduce also a gain operator $\mathcal{G}_{\alpha}$ to carry out the multiplication by a constant coefficient, i.e., $\mathcal{G}_{\alpha} (x) = \alpha \, x$. Composing the operators, we finally obtain the following equation

$\mathcal{H} (u) = u + (\mathcal{G}_{\alpha} \circ \mathcal{D} \circ \mathcal{H}) (u)$

which we will now implement in Haskell.

__________

Our first implementation of the LTI system under study relied on state-space models and the scanl trick to propagate the initial state forwards in time. Our second implementation was little more than a beautified version of the first one. This third implementation will be radically different from the previous two.

We start by introducing the type synonyms

```
type Signal a = [a]
type System a b = Signal a -> Signal b
```

which hopefully will make the code more readable. We build a function that takes two discrete-time signals (of the same type) and returns their elementwise addition:

```
(.+) :: Num a => Signal a -> Signal a -> Signal a
(.+) = zipWith (+)
```

Since we represent discrete-time signals as lists, the function above merely adds two lists elementwise. Let us test this function:

```
*Main> -- create test input signals
*Main> let us = repeat 1.0 :: Signal Float
*Main> let vs = [1..] :: Signal Float
*Main> -- add two signals elementwise
*Main> let ys = us .+ vs
*Main> take 10 ys
[2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0,11.0]
*Main> -- check types
*Main> :type (us,vs,ys)
(us,vs,ys) :: (Signal Float, Signal Float, Signal Float)
```

We now implement the unit-delay with zero initial condition:

```
delay :: Num a => System a a
delay us = 0 : us
```

which right-shifts the input list and introduces a zero at the output list’s head. If we want a unit-delay operator with non-zero initial condition, we would have the following code instead:

```
delay :: Num a => System a a
delay us = ini : us
```

where ini is the initial condition of the delay block. Lastly, we create the gain operator:

```
gain :: Num a => a -> System a a
gain alpha = map (alpha*)
```

which takes a number (the gain factor) and returns a system (that maps signals to signals). Note that we use partial function application. One can think of the gain operator as a function that takes a number and a signal and returns a signal.
If we fix the first argument (the gain factor), we obtain a function that maps signals to signals, i.e., a system. Here is a quick test of the delay and gain operators:

```
*Main> -- create signal
*Main> let xs = [1..] :: Signal Float
*Main> -- delay signal
*Main> let ys = delay xs
*Main> take 10 ys
[0.0,1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0]
*Main> -- amplify delayed signal
*Main> let zs = gain 2.0 ys
*Main> take 10 zs
[0.0,2.0,4.0,6.0,8.0,10.0,12.0,14.0,16.0,18.0]
```

Finally, we have the following Haskell script:

```
type Signal a = [a]
type System a b = Signal a -> Signal b

-- signal adder
(.+) :: Num a => Signal a -> Signal a -> Signal a
(.+) = zipWith (+)

-- delay operator
delay :: Num a => System a a
delay us = 0 : us

-- gain operator
gain :: Num a => a -> System a a
gain alpha = map (alpha*)

-- build feedback system
sys :: Floating a => System a a
sys us = us .+ (gain 0.5 . delay . sys) us
```

where we used $\alpha = 0.5$. Note that the last line is a direct translation of the equation $\mathcal{H} (u) = u + (\mathcal{G}_{\alpha} \circ \mathcal{D} \circ \mathcal{H}) (u)$ to Haskell! Beautiful!

To finalize, let us obtain the impulse response of the LTI system under study for $\alpha = 0.5$:

```
*Main> -- create unit impulse
*Main> let delta = 1.0 : repeat 0.0 :: Signal Float
*Main> -- compute impulse response
*Main> let hs = sys delta
*Main> take 8 hs
[1.0,0.5,0.25,0.125,6.25e-2,3.125e-2,1.5625e-2,7.8125e-3]
```

which is the expected impulse response $h (n) = \alpha^n$ for $n \geq 0$. Frankly, I am in awe. It is amazing that this implementation works!

Tags: Feedback Systems, Haskell, List Processing, LTI Systems, Signals & Systems, Systems Theory

Posted in Haskell, LTI Systems, Systems Theory | 3 Comments »

### Cascading systems in Haskell

January 25, 2012

Last weekend we learned how to build systems (LTI or otherwise) using Haskell. We now want to construct interconnections of systems. In this post we will study the series interconnection of systems, usually known as cascade interconnection. Parallel and feedback interconnections will be discussed in future posts.

__________

Cascading two LTI systems

Let us consider the series interconnection of two causal discrete-time LTI systems, $\mathcal{H}_1$ and $\mathcal{H}_2$, as depicted below

where $y = \mathcal{H}_1 (x)$ and $w =\mathcal{H}_2 (y)$ are the outputs of each LTI system in the cascade. Since the output of system $\mathcal{H}_1$ is the input of system $\mathcal{H}_2$, we have $w = (\mathcal{H}_2 \circ \mathcal{H}_1) (x)$.

What LTI systems should we consider? Let us choose the simplest ones: the accumulator and the differentiator. Thus, let $\mathcal{H}_1$ be an accumulator (also known as “discrete-time integrator”), whose input-output relationship is as follows

$y (n) = y (n-1) + x (n)$,

and let $\mathcal{H}_2$ be a first difference operator (also known as “discrete-time differentiator”), whose input-output relationship is

$w (n) = y (n) - y (n-1)$.

Note that the output of the cascade of these two LTI systems is thus

$w (n) = y (n) - y (n-1) = (y (n-1) + x (n)) - y (n-1) = x (n)$

and, hence, the cascade is input-output equivalent to the identity operator. Since $(\mathcal{H}_2 \circ \mathcal{H}_1) (x) = x$ for all signals $x$, we say that $\mathcal{H}_2$ is the left-inverse of system $\mathcal{H}_1$. Since both systems are LTI, the operators commute, i.e., $\mathcal{H}_2 \circ \mathcal{H}_1 = \mathcal{H}_1 \circ \mathcal{H}_2$ and, therefore, $\mathcal{H}_2$ is also the right-inverse of system $\mathcal{H}_1$.
Since $\mathcal{H}_2$ is the left- and right-inverse of $\mathcal{H}_1$, we say that $\mathcal{H}_2$ is the inverse of system $\mathcal{H}_1$ (and vice-versa). Do keep in mind, however, that not all systems are invertible. For details, take a look at Oppenheim & Willsky [1].

__________

We can easily find a state-space realization for the accumulator, but that will not be necessary. As in previous posts, let us view discrete-time signals as lists. Thus, the accumulator takes a list $[x_0, x_1, x_2, \dots]$, and returns the following list

$[y_0, y_1, y_2, \dots] = [x_0, x_0 + x_1, x_0 + x_1 + x_2, \dots]$

where we assume that the initial condition of the accumulator is zero (i.e., $y_{-1} = 0$). Instead of using scanl yet again (which would require us to drop the head of the list), let us now use scanl1 to implement the accumulator:

```
acc :: Num a => System a a
acc = scanl1 (+)
```

Please note that if the initial condition of the accumulator is not zero, we should use the following code instead:

```
acc' :: Num a => System a a
acc' us = tail $ scanl (+) acc_ini us
```

where acc_ini is the initial condition of the accumulator (analogous to the constant of integration in integral calculus).

The differentiator is not a proper system [2] and, therefore, it has no state-space realization. The differentiator takes a list $[y_0, y_1, y_2, \dots]$, and returns the following list

$[w_0, w_1, w_2, \dots] = [y_0, y_1 - y_0, y_2 - y_1, \dots]$

where we again assume that $y_{-1} = 0$. Note the following

$[w_0, w_1, w_2, \dots] = [y_0, y_1, y_2, \dots] - [0, y_0, y_1, \dots]$

i.e., list $w$ is obtained by elementwise subtraction of a right-shifted version of list $y$ from list $y$ itself. Subtracting two lists elementwise can be implemented using zipWith:

```
diff :: Num a => System a a
diff ys = zipWith (-) ys (0 : ys)
```

If the initial condition of the differentiator is not zero, we should instead use the following code:

```
diff' :: Num a => System a a
diff' ys = zipWith (-) ys (diff_ini : ys)
```

where diff_ini is the initial condition of the differentiator. Piecing it all together, we finally obtain the following Haskell script:

```
type Signal a = [a]
type System a b = Signal a -> Signal b

-- accumulator
acc :: Num a => System a a
acc = scanl1 (+)

-- differentiator
diff :: Num a => System a a
diff ys = zipWith (-) ys (0 : ys)

-- cascade of the acc. and diff.
sys :: Num a => System a a
sys = diff . acc
```

Take a look at the last line. It says that cascading systems is the same as composing systems! Hence, the Haskell implementation is conceptually very close to the mathematical formulation using operators. Functional analysis meets functional programming…

We run the script above on GHCi and then play with it:

```
*Main> -- build unit impulse
*Main> let delta = 1.0 : repeat 0.0 :: Signal Float
*Main> -- output of the accumulator
*Main> let ys = acc delta
*Main> take 20 ys
[1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,
 1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0]
*Main> -- output of the differentiator
*Main> let ws = diff ys
*Main> take 20 ws
[1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,
 0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0]
*Main> -- impulse response of the cascade
*Main> let hs = sys delta
*Main> take 20 hs
[1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,
 0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0]
```

The impulse response of the accumulator is the unit step. The impulse response of the cascade is the unit impulse, as we expected. The differentiator is the inverse of the accumulator, and vice-versa.
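As a quick sanity check of the commutation claim made above (my own addition, not in the original post), we can compose the two systems in the opposite order and confirm that `acc . diff` is also the identity:

```
-- reversed cascade: differentiate first, then accumulate;
-- by commutativity of LTI operators this should also be the identity
sys' :: Num a => System a a
sys' = acc . diff
```

Evaluating `take 5 (sys' ([1..] :: Signal Float))` in GHCi returns `[1.0,2.0,3.0,4.0,5.0]`, i.e., the input unchanged.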
__________

References

[1] Alan V. Oppenheim, Alan S. Willsky, S. Hamid Nawab, Signals & Systems, 2nd edition, Prentice-Hall, 1997.

[2] Panos Antsaklis, Anthony Michel, A Linear Systems Primer, Birkhäuser Boston, 2007.

Tags: Haskell, List Processing, LTI Systems, Signals & Systems, Systems Theory

### Discrete-time LTI systems in Haskell II

January 22, 2012

Yesterday I wrote a post on how to implement discrete-time LTI systems in Haskell. The code I presented did not quite please my delicate aesthetic sensitivity, as it bundled the declaration of the signals with the declaration of the LTI system under study. This is not practical. Ideally, I would like to define the LTI system in a Haskell script, run it on the GHCi interpreter, then create the signals using the GHCi command line. Why? Because I can then compute the response of the system to various test input signals without reloading the script.

In this post, we will again consider the first-order causal discrete-time LTI system described by the difference equation

$y (n) - \alpha y (n-1) = u (n)$

where $|\alpha| < 1$. In my previous post, I obtained a state-space representation of this LTI system. In this post, I will propose a better implementation of the given system in Haskell.

__________

Signal and System type synonyms

What is a discrete-time signal? We can think of a discrete-time signal as a sequence of numbers or symbols. In Haskell, we often use lists of floating-point numbers to represent real-valued discrete-time signals. More generally, we have the type synonym:

`type Signal a = [a]`

which says that a signal of type $a$ is a list of elements of type $a$. We can then have signals of various types: integer, fixed-point, floating-point, bit, binary word, etc. This should be particularly useful in case we want to simulate mixed-signal circuits.

What is a discrete-time system? It is that which maps input discrete-time signals to output discrete-time signals. We will use the type synonym:

`type System a b = (Signal a -> Signal b)`

which tells us that a system of type $a \,\, b$ is a function that maps an input signal of type $a$ to an output signal of type $b$. In other words, it takes a list whose elements are of type $a$, and it returns a list whose elements are of type $b$.

__________

Implementing the LTI system in Haskell

In my previous post, I created the state and output sequences using

```
xs = scanl f x0 us
ys = zipWith g xs us
```

Combining the two lines above into a single line, we obtain

`ys = zipWith g (scanl f x0 us) us`

which returns the output sequence when given the input sequence. Note that the state sequence is not created. Assuming zero initial condition, we can then create a system (i.e., a function that takes a list and returns a list) as follows

```
sys :: Floating a => System a a
sys us = zipWith g (scanl f 0.0 us) us
```

which takes signals of type $a$ and returns signals of the same type, where $a$ is a floating-point type (either Float or Double). Putting it all together in a script, we finally obtain:

```
type Signal a = [a]
type System a b = (Signal a -> Signal b)

-- state-space model
f,g :: Floating a => a -> a -> a
f x u = (0.5 * x) + u   -- state-transition
g x u = f x u           -- output

-- define LTI system
sys :: Floating a => System a a
sys us = zipWith g (scanl f 0.0 us) us
```

which defines the LTI system under study with $\alpha = 1/2$. For this choice of $\alpha$, the system is a lowpass filter.

__________

Example: lowpass-filtering a square wave

We run the script above on GHCi to create the desired LTI system.
Let us now create an input signal, a square wave of period equal to 16 and duty cycle equal to 50%, and lowpass-filter it using the LTI system we created:

```
*Main> -- create square wave
*Main> let ones = take 8 $ repeat 1.0 :: Signal Float
*Main> let zeros = take 8 $ repeat 0.0 :: Signal Float
*Main> let us = cycle (ones ++ zeros) :: Signal Float
*Main> -- create output sequence
*Main> let ys = sys us
*Main> -- check types
*Main> :type us
us :: Signal Float
*Main> :type ys
ys :: Signal Float
*Main> :type sys
sys :: Floating a => System a a
```

Finally, we compute the first 64 samples of the input and output signals using function take, and plot the signals using MATLAB:

In case you are wondering why the peak amplitude of the output signal is twice that of the input signal, compute the transfer function of the LTI system and note that the DC gain is equal to $\frac{1}{1 - \alpha} = 2$.

Tags: Haskell, List Processing, LTI Systems, Signals & Systems, Systems Theory

Posted in Haskell, LTI Systems, Systems Theory | 2 Comments »

### Discrete-time LTI systems in Haskell

January 21, 2012

Consider the first-order causal discrete-time LTI system described by the following difference equation

$y (n) - \alpha y (n-1) = u (n)$

where $|\alpha| < 1$. This system can be represented by the block diagram

where $D$ is a unit-delay block, the output of which is $y (n-1)$. We now introduce state variable $x (n)$ denoting the output of the unit-delay block, i.e., $x(n) = y(n-1)$. Thus, $x (n+1) = y(n)$ and, since $y (n) = \alpha y (n-1) + u (n)$, we obtain the following state-transition equation

$x (n+1) = \alpha x (n) + u (n)$.

The initial condition is $x(0) = y(-1)$, which we assume to be zero. Note that the output is $y (n) = x(n+1)$ and, thus, the output equation is

$y (n) = \alpha x (n) + u (n)$.

Lastly, we define $f (x,u) = g (x,u) := \alpha x + u$, which allows us to write the state-transition and output equations as follows

$\begin{array}{rl} x (n+1) &= f (x(n), u(n))\\ y (n) &= g (x(n), u(n))\end{array}$

We can now implement this state-space model in Haskell!

__________

Given an initial state $x_0$ and an input sequence $u = [u_0, u_1, \dots]$, we can compute the state sequence using the scanl trick I wrote about last weekend. Once we have obtained the state sequence $x = [x_0, x_1, \dots]$, the output sequence is

$y = [y_0, y_1, \dots] = [g (x_0, u_0), g (x_1, u_1), \dots]$.

Do you recognize the pattern? We are zipping lists $x$ and $u$ using function $g$!! We thus use (higher-order) function zipWith to compute the output sequence. For $\alpha = 7/8$ and a constant input sequence, the following Haskell code computes the state and output trajectories:

```
-- constant input sequence
us :: Floating a => [a]
us = repeat 1.0

-- parameter alpha
alpha :: Floating a => a
alpha = 7/8

-- state-transition function
f :: Floating a => a -> a -> a
f x u = (alpha * x) + u

-- output function
g :: Floating a => a -> a -> a
g x u = f x u

-- initial condition
x0 :: Floating a => a
x0 = 0.0

-- state sequence
xs :: Floating a => [a]
xs = scanl f x0 us

-- output sequence
ys :: Floating a => [a]
ys = zipWith g xs us
```

Not quite. I am still stuck in the imperative paradigm… and need to be corrected. The code above does not compute anything. It does declare what the input, state and output sequences are.
After running the script above in GHCi, we compute 30 samples of the input and output sequences using function take:

```
*Main> take 30 us
[1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,
 1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,
 1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0]
*Main> take 30 ys
[1.0,1.875,2.640625,3.310546875,3.896728515625,
 4.409637451171875,4.858432769775391,5.251128673553467,
 5.5947375893592834,5.895395390689373,6.158470966853201,
 6.388662095996551,6.590079333996982,6.7663194172473595,
 6.92052949009144,7.05546330383001,7.173530390851258,
 7.276839091994852,7.367234205495495,7.446329929808559,
 7.515538688582489,7.576096352509678,7.629084308445968,
 7.675448769890222,7.716017673653944,7.751515464447201,
 7.782576031391301,7.809754027467388,7.833534774033964,
 7.854342927279719]
```

We can plot these sequences in MATLAB:

We now know how to simulate discrete-time LTI systems using Haskell. Note that our framework is based on state-space models, not on transfer functions. Hence, it should be possible to simulate discrete-time nonlinear systems in the same manner: using scanl to compute the state sequence, and zipWith to compute the output sequence. All we need is a state-space representation of the system under study.

Tags: Haskell, List Processing, LTI Systems, Signals & Systems, State-Space Models, Systems Theory

### Cascading (almost) identical LTI systems

October 23, 2010

Consider a causal continuous-time LTI system $\mathcal{H}$ with transfer function

$H (s) = \displaystyle\frac{1}{(s+a) (s+b)}$

where $a, b \in \mathbb{R}$. Note that we have two poles (at $s = -a$ and $s = -b$), but no finite zeros. We can view $\mathcal{H}$, a 2nd order system, as the cascade connection of two causal 1st order LTI systems

If $b = a$, then the transfer function becomes $H (s) = 1 / (s+a)^2$ and we have a double pole at $s = -a$. Taking the inverse Laplace transform of $H (s)$ we obtain the impulse response of the overall system $\mathcal{H}$

$h (t) = t e^{- a t} u (t)$

where $u (t)$ is the Heaviside step function. If $a > 0$, then the impulse response will eventually decay to zero. However, note that the exponential is multiplied by $t$, which means that there is a transient “peak”. If $a$ is positive but “close” to zero, this transient could be quite “wild”! This phenomenon is somewhat similar to resonance, although we are dealing with decaying exponential excitations, not sinusoidal ones.

If we apply a Dirac delta impulse to the system depicted above, the output of the first system in the cascade will be the impulse response of $\mathcal{H}_1$

$h_1 (t) = e^{- a t} u (t)$,

which happens to be also the impulse response of $\mathcal{H}_2$, the second system in the cascade. The lesson is the following: cascading identical LTI systems leads to resonant-like behavior.

If $b \neq a$, then we have a simple pole at $s = -a$ and another simple pole at $s = -b$. Via partial fraction expansion of $H (s)$, we obtain

$H (s) = \displaystyle\frac{1}{b - a} \left(\displaystyle\frac{1}{s+a} - \displaystyle\frac{1}{s+b}\right)$

and, taking the inverse Laplace transform, we get the impulse response

$h (t) = \displaystyle\frac{1}{b - a} \left(e^{-a t} - e^{-b t}\right) u (t)$.

Note that these are valid only if $b \neq a$. If we make $b = a$, we get pesky indeterminate stuff: $H (s) = 0 / 0$ and $h (t) = 0 / 0$. Let us see what happens if $\mathcal{H}_1$ and $\mathcal{H}_2$ are “almost identical”.
To be precise, let us investigate the impulse response of the cascade when the (simple) poles at $s = -a$ and $s = -b$ are almost “on top of each other”. Let $b = a \pm \varepsilon$, where $\varepsilon > 0$ is “small”. Then, the impulse response of $\mathcal{H}$ can be written as follows

$h (t) = \pm \displaystyle\frac{1}{\varepsilon} \left(e^{-a t} - e^{-(a \pm \varepsilon) t}\right) u (t) = \pm \displaystyle\frac{1}{\varepsilon} \left(1 - e^{- (\pm \varepsilon) t} \right) e^{-a t} u (t)$.

For convenience, let us define

$\gamma_{\varepsilon} (t) := \pm \displaystyle\frac{1}{\varepsilon} \left(1 - e^{- (\pm \varepsilon) t} \right)$

so that $h (t) = \gamma_{\varepsilon} (t) e^{-a t} u (t)$. Taking the Taylor expansion of $\gamma_{\varepsilon} (t)$ about $t = 0$, we get

$\gamma_{\varepsilon} (t) = \pm \displaystyle\frac{1}{\varepsilon} \left(\pm \varepsilon t - \displaystyle\frac{\varepsilon^2}{2} t^2 \pm \displaystyle\frac{\varepsilon^3}{3!} t^3 - \dots\right)$

and, taking the limit as $\varepsilon$ approaches zero, we obtain

$\displaystyle\lim_{\varepsilon \to 0} \gamma_{\varepsilon} (t) = \displaystyle\lim_{\varepsilon \to 0} \pm \displaystyle\frac{1}{\varepsilon} \left(\pm \varepsilon t - \displaystyle\frac{\varepsilon^2}{2} t^2 \pm \displaystyle\frac{\varepsilon^3}{3!} t^3 - \dots\right) = t$

and, therefore, we finally get

$\displaystyle\lim_{\varepsilon \to 0} h (t) = \displaystyle\lim_{\varepsilon \to 0} \gamma_{\varepsilon} (t) e^{-a t} u (t) = t e^{-a t} u (t)$.

This shows, albeit in a sinfully non-rigorous manner, that the impulse response of $H (s) = 1 / ((s+a) (s+b))$ approaches the impulse response of $H (s) = 1 / (s+a)^2$ as the pole at $s = -b$ gets “closer” and “closer” to the pole at $s = -a$. I concede that this result is not terribly exciting…

Let us now perform a numerical experiment! Let $a = 1$, and let $b = a + \varepsilon$, so that $b$ approaches $a$ as $\varepsilon \to 0$. The plot below illustrates the impulse responses for various values of $\varepsilon$

where $\varepsilon = 0$ corresponds to the cascade of two identical systems (which creates a double pole at $s = -1$). Note that, as expected, as $\varepsilon$ approaches zero, the impulse response of the cascade approaches the resonant one, $h (t) = t e^{- a t} u (t)$. Last but not least, here’s the MATLAB script that generates the plot:

```
% build transfer function of H1
a = 1;
H1 = tf(1,[1 a]);

% create time grid
t = [0:0.01:5];

% compute impulse responses of the cascade
h = [];
for epsilon = [1 0.5 0.1 0];
    % build transfer function of H2
    b = a + epsilon;
    H2 = tf(1,[1 b]);
    % compute and store impulse response
    h = [h, impulse(H2 * H1, t)];
end

% plot impulse responses
figure;
plot(t, h(:,1), 'r-', t, h(:,2), 'm-', t, h(:,3), 'b-', t, h(:,4), 'k--');
legend('\epsilon = 1', '\epsilon = 0.5', '\epsilon = 0.1', '\epsilon = 0');
xlabel('t (seconds)');
title('Impulse responses of the cascade');
```

Tags: LTI Systems, MATLAB, Resonance, Systems Theory
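In the Haskell style of the earlier posts, here is a small numerical cross-check of this limit (my own sketch, not from the post; the names `h`, `hLimit`, and `maxErr` are assumptions):

```
-- impulse response of 1/((s+a)(s+b)) for b /= a, and its b -> a limit
h :: Double -> Double -> Double -> Double
h a b t = (exp (-a * t) - exp (-b * t)) / (b - a)

hLimit :: Double -> Double -> Double
hLimit a t = t * exp (-a * t)

-- largest deviation over t in [0,5] for a = 1, b = 1.001
maxErr :: Double
maxErr = maximum [abs (h 1 1.001 t - hLimit 1 t) | t <- [0, 0.01 .. 5]]
```

Evaluating `maxErr` in GHCi gives a value on the order of $10^{-4}$, consistent with the limit derived above.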
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 144, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8898999691009521, "perplexity_flag": "middle"}
http://mathforum.org/mathimages/index.php?title=Parametric_Equations&diff=21328&oldid=20877
# Parametric Equations

### From Math Images

## Revision as of 18:33, 21 June 2011

Butterfly Curve

Field: Algebra

Image Created By: Direct Imaging

Website: [1]

The Butterfly Curve is one of many beautiful images generated using parametric equations.

# Basic Description

Parametric equations can be used to define complicated functions and figures in simpler terms, using one or more additional independent variables, known as parameters. For the many useful shapes which are not "functions" in that they fail the vertical line test, parametric equations allow one to generate those shapes in a function format. In particular, parametric equations can be used to define and easily generate geometric figures, including (but not limited to) conic sections and spheres. The butterfly curve in this page's main image uses more complicated parametric equations, as shown below.

# A More Mathematical Explanation

Note: understanding of this explanation requires: *Linear Algebra

Parametric construction of the butterfly curve

Sometimes curves which would be very difficult or even impossible to graph in terms of elementary functions of x and y can be graphed using a parameter. One example is the butterfly curve, as shown in this page's main image.
This curve uses the following parametrization:

$\begin{bmatrix} x \\ y\\ \end{bmatrix}= \begin{bmatrix} \sin(t) \left(e^{\cos(t)} - 2\cos(4t) - \sin^5\left({t \over 12}\right)\right) \\ \cos(t) \left(e^{\cos(t)} - 2\cos(4t) - \sin^5\left({t \over 12}\right)\right)\\ \end{bmatrix}$

### Parametrized Curves

Many useful or interesting shapes otherwise inexpressible as xy-functions can be represented in coordinate space using a non-coordinate parameter; circles are one example. A circle cannot be expressed as a function where one variable is dependent on another. If a parameter $t$ is used to represent an angle in the coordinate plane, the parameter can be used to generate a unit circle, as shown below.

The parameter $t$ does, in the case of a unit circle, represent a physical quantity in space: the angle between the x-axis and a vector of magnitude 1 going to point $(x,y)$ on the coordinate plane. The components of the vector that goes to $(x,y)$ have magnitudes of $x$ (horizontally) and $y$ (vertically), and form a right triangle with hypotenuse 1. Using trigonometric ratios, the quantity $y$ can be represented in terms of $t$ as

$\sin (t) =\frac {\text{opposite}} {\text{hypotenuse}} = \frac{y}{1}$

so

$y = \sin (t)$.

Smaurer1: We have agreed to indent math displays. Also, names of more than one symbol are generally in roman. Finally, while something like $\sin(3t+4)$ needs parentheses, generally $\sin t$ is written without them.

Likewise, the quantity $x$ can be represented in terms of $t$ as

$\cos t = \frac{\text{adjacent}} {\text{hypotenuse}} = \frac{x}{1}$

so

$x = \cos t$

Thus, $t$ generates physical points $(x,y)$ on the coordinate plane, controlling both variables. Since the values of the ratios have a set domain and range, the same proportional distance is maintained around the origin, creating a set of points equidistant from a fixed point, otherwise known as a circle.

In other quadrants, sines and cosines are defined in terms of complements to angles in the first quadrant (between 0° and 90°). Thus, directed distances stay the same, creating an equidistant set of points around the origin identified as a circle. Thus, a parameter $t$ is used to generate a shape that is otherwise not a function, with simpler component functions.

### Parametrized Surfaces

The surface of a sphere can be graphed using two parameters. In the above cases only one independent variable was used, creating a parametrized curve. We can use more than one independent variable to create other graphs, including graphs of surfaces. For example, using parameters s and t, the surface of a sphere can be parametrized as follows:

$\begin{bmatrix} x \\ y\\ z\\ \end{bmatrix}= \begin{bmatrix} \sin(t)\cos(s) \\ \sin(t)\sin(s) \\\cos(t) \end{bmatrix}$

### Parametrized Manifolds

While two parameters are sufficient to parametrize a surface, objects of more than two dimensions, such as a three-dimensional solid, will require more than two parameters. These objects, generally called manifolds, may live in higher than three dimensions and can have more than two parameters, so cannot always be visualized. Nevertheless they can be analyzed using the methods of vector calculus and differential geometry.

### Parametric Equation Explorer

This applet is intended to help with understanding how changing an alpha value changes the plot of a parametric equation. See the in-applet help for instructions.
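To make the butterfly parametrization above concrete, here is a minimal sketch that samples points on the curve (written in Haskell to match the code style elsewhere in this collection; the function name `butterfly` is my own):

```
-- one point of the butterfly curve at parameter value t
butterfly :: Double -> (Double, Double)
butterfly t = (sin t * r, cos t * r)
  where r = exp (cos t) - 2 * cos (4 * t) - (sin (t / 12)) ** 5

-- sample the curve over 0 <= t <= 12*pi
points :: [(Double, Double)]
points = map butterfly [0, 0.01 .. 12 * pi]
```

Plotting `points` with any 2-D plotting tool reproduces the butterfly shape in the page's main image.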
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 20, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8182335495948792, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/65810/list
## Return to Question

2 Grammar, layout; edited title; edited body

# Are these two definitions of nef-ness equivalent for Moishezon manifolds?

Hi, everyone! Recently, I have been learning something about nef line bundles. I know that when $X$ is projective or Moishezon, a line bundle $L$ over $X$ is said to be nef iff $L.C=\int_{C}c_{1}(L)\ge 0$ for every curve $C$ in $X$. Demailly gave a definition of nefness that works on an arbitrary compact complex manifold, i.e., a line bundle $L$ over $X$ is said to be nef if for every $\varepsilon >0$ there exists a smooth hermitian metric $h_{\varepsilon}$ on $L$ such that its curvature satisfies $\Theta_{h_{\varepsilon}}(L)\ge -\varepsilon\omega$. For projective manifolds, Demailly's definition coincides with the one given by integration (this is an easy consequence of Seshadri's ampleness criterion).

Question: Is this equivalence also true for Moishezon manifolds? I don't know of any counterexamples. If it is not true, could someone give me a counterexample?

1 (the original revision, titled "Nef line bundle on Moishezon manifold", is the same question before the grammar and layout edits above)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 26, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8730915188789368, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/1312/gromov-witten-theory-and-compactifications-of-the-moduli-of-curves
## Gromov-Witten theory and compactifications of the moduli of curves

Why, from a string theory perspective, is it natural to consider the Deligne-Mumford (resp. Kontsevich) compactification of the moduli of curves (resp. maps [from curves to a target space X]) rather than some other compactification? In any case, what other compactifications of the moduli of curves have been studied? Similarly, what other compactifications of the moduli of maps have been studied? Do any of these other compactifications lead to an interesting "Gromov-Witten theory"?

-

## 6 Answers

I can give individual answers to a lot of your questions, but I can't answer any of them completely, nor can I fit all these answers together into a coherent whole.

For string theory, there does seem to be something special about the Deligne-Mumford compactification. Morally, what's going on is this: string theorists are allowing cylinder-shaped submanifolds of their Riemann surfaces to become infinitely long. The only finite energy fields on such infinitely long submanifolds are constant, so you can replace the long cylinder with a node. (Likewise, morally, if you allow vertex operators at two marked points to come together, you should take their operator product. This is what bubbling when marked points collide does for you.)

Somewhat more technically: The first step in (bosonic) string theory is to compute the partition function of the nonlinear sigma model as a function on the space of metrics on your worldsheet. This function on metrics descends to a section of some line bundle on the moduli stack of complex structures on the worldsheet. When you can compute it at all, you can show that this section has exactly the right pole structure it needs to be a section of the 13th power of the canonical bundle tensored with the 2nd power of the dual of the line bundle corresponding to the boundary divisor of the Deligne-Mumford compactification. (There's an old Physics Report by Phil Nelson that explains this pretty well, although not with anything you'd call a proof. Credit should also go to Belavin & Knizhnik, who did the initial calculations.)

There's a somewhat more modern perspective on this (Zwiebach, Sullivan, Costello, ...) that says that the generating function of string theory correlation functions for smooth Riemann surfaces satisfies a certain equation (a "quantum master equation"), which gives instructions for how to extend the theory to nodal Riemann surfaces. Different master equations give different recipes for extending to the boundary, if I understand your advisor correctly.

People have played around with other compactifications. There are a lot of different compactifications of the stack of smooth marked curves. People have already mentioned a few of them. David Smyth has some cool results which classify the "stable modular" compactifications of the stack of curves (http://arxiv.org/abs/0902.3690). For compactifications of the moduli of maps, the only one that comes immediately to mind is Losev, Nekrasov, and Shatashvili's "freckled instanton compactification", in which, IIRC, you allow zeros and poles to collide and cancel each other out.

- Thanks AJ. Regarding Sen-Zwiebach, Sullivan, Costello, etc.: So then is there a master equation which gives the right recipe for Gromov-Witten invariants? And do different master equations still yield CohFTs?
– Kevin Lin Nov 23 2009 at 22:37

And if the answer to my question about CohFTs is "yes", then I wonder how this stuff interacts with the Givental group action on CohFTs... – Kevin Lin Nov 23 2009 at 22:46

In low-dimensional topology, Thurston introduced a very interesting compactification (alas, I don't think this is at all connected to algebraic geometry, but it's beautiful and worth knowing!). If M(g) is the moduli space of genus g curves, then you can express M(g) as the quotient of T(g) by the mapping class group, where T(g) is Teichmuller space. It is classical that T(g) is homeomorphic to an open ball in R^{6g-6}. Thurston compactified T(g) by the space of "measured foliations" (or, equivalently, "measured laminations") of the surface. The Thurston compactification of T(g) is homeomorphic to a closed ball in R^{6g-6} and is compatible with the mapping class group action, so it descends to an interesting topological compactification of moduli space.

Thurston used his compactification to prove the "Nielsen-Thurston" classification of surface homeomorphisms, which can be viewed as something like a "Jordan normal form" for surface homeomorphisms. Some information about this can be found in the following wikipedia article: http://en.wikipedia.org/wiki/Nielsen-Thurston_classification

Another readable source of information about this is the manuscript "A Primer on Mapping Class Groups" by Farb and Margalit, which is available here: http://www.math.utah.edu/~margalit/primer/

-

Here is a random thought on the second part of Kevin's question. There are various compactifications of the space of maps that should be meaningful physically but haven't been explored by the physicists. One example is Drinfeld's compactification. This is in some sense the smallest modular compactification of the space of maps - the points on the boundary have geometric meaning (Drinfeld calls them quasi-maps). It has the flavor of a gauged linear sigma model, so it should be relevant to the physics somehow.

Here is how one can define Drinfeld's compactification. Suppose first that $G$ is a complex reductive group, and we want to study maps from $\mathbb{P}^{1}$ to the flag variety $G/P$ for some parabolic $P \subset G$. To compactify the space of such maps, Drinfeld's beautiful idea is to look at a "compactification" of $G/P$, i.e. an Artin stack which contains $G/P$ as a dense open substack. There is a natural stack like that. If $R^{u}P$ is the unipotent radical of $P$, then the quasi-affine variety $V := G/R^{u}P$ is a principal $R$-bundle on $G/P$, where $R = P/R^{u}P$ is the maximal reductive quotient of $P$. We can now consider the affinization $W$ of $V$, i.e. $W = Spec(\Gamma(V,\mathcal{O}))$. Note that $V \subset W$ is Zariski open, and the action of $R$ on $V$ automatically extends to $W$. The stack $[W/R]$ then contains $V/R = G/P$ as an open dense substack. The moduli of maps from $\mathbb{P}^{1}$ to $[W/R]$ such that the generic point of $\mathbb{P}^{1}$ maps to $G/P$ turns out to be compact. This moduli space is Drinfeld's compactification.

For instance, if $G = SL_{2}$, then $V = \mathbb{C}^{2}-\{0\}$, $W = \mathbb{C}^{2}$, and $[W/R] = [\mathbb{C}^{2}/\mathbb{C}^{\times}]$. To get Drinfeld's compactification for a general projective target $X$ we can embed $X$ in a projective space $\mathbb{P}^{N}$, and then close (i.e.
take the fiber product) the space of maps from $\mathbb{P}^{1}$ to $X$ in Drinfeld's compactification of the space of maps from $\mathbb{P}^{1}$ to $\mathbb{P}^{N}$.

For flag variety targets, Drinfeld's compactification of the space of maps is singular, but it has a natural resolution - Laumon's space of quasi-flags. It is known that this resolution is semismall (this is a result of Kuznetsov). It is also known that Kontsevich's space of stable maps has a morphism onto Laumon's compactification.

It seems to me that it would be very interesting to study whether Laumon's compactification of maps from $\mathbb{P}^{1}$ to a projective variety has a virtual fundamental class. Since it is gotten by a fiber product with a map from a smooth space, Laumon's compactification will have a natural derived structure. I wonder if the obstruction theory given by this derived structure happens to be perfect. This question is very much in the spirit of the paper of AJ with Frenkel and Teleman. Only here the target quotient stack is very special and maybe the question is easier to answer.

-

I would prefer to leave this as a comment to Tony Pantev's answer, but I don't have enough reputation to do so. Anyway, as Tony mentions, the Laumon compactification provides a semismall resolution of singularities of the Drinfeld compactification. However, to the best of my knowledge, the Laumon compactification is special to the case where G = SL_n, whereas the Drinfeld compactification can be defined for any reductive algebraic group. In Kuznetsov's paper, he says in the introduction that he would like to study the resolutions of the Drinfeld compactification for groups besides SL_n in a future paper. As far as I know, no such paper was ever written. Does anyone know if Kuznetsov ever wrote such a paper and, if not, if anyone else has ever worked on this?

One partial solution to this question seems like it might be buried in the paper by Braverman, Finkelberg, Gaitsgory, and Mirkovic where they compute the IC sheaves of the Drinfeld compactification. This is related because, assuming a semismall resolution of singularities existed, its cohomology would compute the intersection cohomology of the Drinfeld compactification. However, given their use of finite-dimensional Zastava spaces to model the singularities of the Drinfeld compactification (and hence to compute the IC sheaf), it does not seem to me that a semismall resolution can be found in their paper (although maybe this means that if a semismall resolution existed it should also provide a resolution of each of the Zastava spaces?). If appropriate, I would be happy to start this as its own topic.
- Please do start this as its own topic if you want. – Kevin Lin Jan 14 2010 at 6:23

My understanding is that, in large part, it's because the compactification by stable curves/maps can be fairly easily constructed with GIT, and has been studied before. Most of the results people actually want involve showing that the class of curves in something is actually contained in the locus of smooth curves (which really is the case for rational marked curves). The intro article by Fulton and Pandharipande discusses this a bit, in the case of homogeneous varieties.

Edit: In response to the answers mentioning Satake and Thurston: they don't have nice (that I know of) realizations as curves in the target space, which is somewhat important, to be able to really get your hands on what these extra points represent for the curves in a Calabi-Yau problem that enumerative geometry and string theory care about.

-
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 38, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.934031069278717, "perplexity_flag": "head"}
http://mathhelpforum.com/trigonometry/149173-question-cosine.html
# Thread:

1. ## A question on Cosine.

Hi all, I didn't know how to write a formula here. So:

2. Originally Posted by Mathelogician
Hi all, I didn't know how to write a formula here. So:

Hi,

(Edited) The full theorem is: $\cos x=\cos a \Rightarrow x=2k\pi\pm a$ for some integer k. In other words,

$\cos x=\cos a \Rightarrow \exists k\in\mathbb{Z}:(x=2k\pi+a)\lor (x=2k\pi-a)$.

This is much different from $\cos x=\cos a \Rightarrow \forall k \in \mathbb{Z}:x=2k\pi\pm a$. Among other things, this would allow us to easily show that $0=2\pi$.

Regarding writing math formulae, etc., you can see the LaTeX tutorial within the LaTeX Help Subforum.

3. Thanks. No, my assertion is right! We know that k is an element of the set of integers and that solution is the general solution of the equation, and it means that it is true for all values of k! So if we choose k to be 0, then x=+-x, and we choose the case x=-x, which is one of the solutions!
---------
Even if that is wrong, when we get x=2k*pi+-x there are 2 cases: 1) 2x=2k*pi => x=k*pi and 2) 2k*pi=0 => k=0, but we could have chosen another value of k besides 0; for example, if k=1 => 2*pi=0.

4. Originally Posted by Mathelogician
Thanks. No, my assertion is right! We know that k is an element of the set of integers and that solution is the general solution of the equation, and it means that it is true for all values of k! So if we choose k to be 0, then x=+-x, and we choose the case x=-x, which is one of the solutions!
---------
Even if that is wrong, when we get x=2k*pi+-x there are 2 cases: 1) 2x=2k*pi => x=k*pi and 2) 2k*pi=0 => k=0, but we could have chosen another value of k besides 0; for example, if k=1 => 2*pi=0.

By your argument, $\cos(0)=\cos(2\pi)\Rightarrow \forall k\in\mathbb{Z}:(0=2k\pi+2\pi)\lor(0=2k\pi-2\pi)$, and letting k=2, we have $(0=4\pi+2\pi=6\pi)\lor(0=4\pi-2\pi=2\pi)$. This is a contradiction.

Edit: Actually, I should have written that the full theorem is: $\forall x,a\in\mathbb{R}:\cos x=\cos a \Rightarrow x=2k\pi\pm a$ for some integer k. Furthermore, the converse is true. So, for the most information and least amount of characters: $\forall x,a\in\mathbb{R}:\cos x=\cos a \Leftrightarrow\exists k\in\mathbb{Z}: x=2k\pi\pm a$.

Edit 2: I suppose it's worth mentioning that for the correct theorem, we can't just choose an arbitrary k, like you did at the end of the quoted post. Maybe it would help you to have an analogy. $\forall a,b\in\mathbb{Z},|a|+|b|\ne0:a|b\Leftrightarrow \exists k\in\mathbb{Z}:ka=b$.

5. Originally Posted by undefined
By your argument, $\cos(0)=\cos(2\pi)\Rightarrow \forall k\in\mathbb{Z}:(0=2k\pi+2\pi)\lor(0=2k\pi-2\pi)$, and letting k=2, we have $(0=4\pi+2\pi=6\pi)\lor(0=4\pi-2\pi=2\pi)$. This is a contradiction.

Edit: Actually, I should have written that the full theorem is: $\forall x,a\in\mathbb{R}:\cos x=\cos a \Rightarrow x=2k\pi\pm a$ for some integer k.

1) I see this is a contradiction, and I want to know why it happens when we use allowable steps!!
2) And the theorem is: $\forall x,a\in\mathbb{R}, k\in\mathbb{Z}:\cos x=\cos a \Leftrightarrow x=2k\pi\pm a$. In fact we have infinitely many solutions, and for k=0 we have 2 solutions, just like for other values of k.

6.
Originally Posted by Mathelogician
1) I see this is a contradiction, and I want to know why it happens when we use allowable steps!!
2) And the theorem is: $\forall x,a\in\mathbb{R}, k\in\mathbb{Z}:\cos x=\cos a \Leftrightarrow x=2k\pi\pm a$. In fact we have infinitely many solutions, and for k=0 we have 2 solutions, just like for other values of k.

When you say $\cos x = \cos a$ has solutions $x = 2 \pi k \pm a$, what do you mean? You mean that there exists such a $k \in \mathbb{Z}$, for which $x = 2 \pi k + a$. It does not mean that $x = 2 \pi k \pm a$ for all $k \in \mathbb{Z}$!

Take, for example, a quadratic equation - say $x^2 - x - 2 = 0$. We know, then, its solutions are $x_{1, 2} = \frac{1 \pm \sqrt{1 + 8}}{2} \Rightarrow x_1 = 2, \ x_2 = -1$. What does this mean? It means that if you have a number, say $a$, for which $a^2 - a - 2 = 0$, then either $a = 2$ or $a = -1$. It does not mean that $a = 2 = -1$! The case for $\cos x = \cos a$ is exactly the same!

7. Originally Posted by Mathelogician
1) I see this is a contradiction, and I want to know why it happens when we use allowable steps!!
2) And the theorem is: $\forall x,a\in\mathbb{R}, k\in\mathbb{Z}:\cos x=\cos a \Leftrightarrow x=2k\pi\pm a$. In fact we have infinitely many solutions, and for k=0 we have 2 solutions, just like for other values of k.

1) You've heard of proof by contradiction, right? What I gave was a proof that your claim is false. You have not used "allowable steps."
2) This is false, as was already proven.

In general: You are using "proof by vehement assertion." You are not presenting an actual proof, you are just stating emphatically that your claim is true, over and over. See Defunkt's post, which goes along with everything I've been saying.

8. Indeed, the location of quantifiers in logic ("there exists", or, "for all") is extremely important, as demonstrated above. If a statement looks the same, but has those quantifiers in different locations, then it's not necessarily the same statement.

9. OK, why make things more complicated than they are? $x = 2k\pi \pm a$ for cos(a)=cos(x). cos(-x)=cos(x) because cosine is an even function. Now this works for every x in the reals.

10. Mathelogician: what are you trying to do? I'm just taking a step back here. What is your goal?

11. Hello, and thanks for the responses.
------------------------------
Dear Defunkt and other friends, I mean that it's true for all the integer numbers. For example:
cos(x)=cos(2*pi+x)=cos(4*pi+x)=cos(6*pi+x)=cos(8*pi+x)=cos(10*pi+x)=cos(12*pi+x)=cos(14*pi+x)=cos(16*pi+x)=cos(18*pi+x)=cos(20*pi+x)=cos(22*pi+x)=cos(24*pi+x)=cos(26*pi+x)=cos(28*pi+x)=cos(30*pi+x)=... {also for negative numbers}
If you use the unit circle you will understand my assertion. In fact the fundamental period of the cosine function is P=2*pi (like the sine function). If a function f is periodic with period P, then for all x in the domain of f and all integers n, f(x + nP) = f(x). See: Periodic function - Wikipedia, the free encyclopedia
And the quadratic equation, like any other polynomial, IS NOT periodic. So I think my assertion is reasonable and yours is not!

12. I would agree with your claim, mathelogician. It is true that the sin and cos functions are $2\pi$-periodic. So, $\cos(x)=\cos(x+2\pi k)\;\forall\,k\in\mathbb{Z}$, and $\forall\,x\in\mathbb{R}$. However, reasoning backwards to any sort of equality of the x's is incorrect. Example: $\frac{1}{2}=\cos\left(\frac{\pi}{3}\right)=\cos\left(-\frac{\pi}{3}\right)$.
But, obviously, $\frac{\pi}{3}-\left(-\frac{\pi}{3}\right)=\frac{2\pi}{3}\not=2\pi k$ for any integer $k$. The cosine and the sine functions are not 1-1; hence, reasoning from equality of the functions to equality of the arguments is not permissible. Reasoning from equality of arguments to equality of functions is permissible, since the cosine and sine functions are well-defined. So, Mathelogician, I would say that your original claim in the OP is incorrect. Your original claim was that IF $\cos(x)=\cos(a)$ THEN $x=2\pi k\pm a$ for every $k$. But I've just shown you a counterexample to that claim. The converse of that claim, that IF $x=2\pi k\pm a$ THEN $\cos(x)=\cos(a)$, is true. A statement is not, in general, equal to its converse! But, all of this could well be irrelevant. I'm still left wondering what it is you're trying to do.

13. Thanks. I think I got the mistake!
1) When we speak about an equation, then we must have an unknown number (called x) and we want to find all possible values for x. So my mistake was forgetting this important issue!!
2) Then you should note that my claim, which is the expression and its converse, is ALWAYS true for trig EQUATIONS. In fact there we have an unknown number x and the general solution is the set of all possible values for it! There are different proofs of this claim. For example a geometric proof exists (if you need it, I will write it here). And in almost every trigonometry book you can find this solving method (tell me if you need it).
--------
Why do you insist on asking about my goal in questioning??!!!

14. Your original claim: if $\cos(x)=\cos(a)$, then $x=2\pi k\pm a$ for all k. I would love to see a proof of this false claim. Here is my proof that it is false: Let $a=\pi/3$. Then $\cos(-\pi/3)=\cos(\pi/3)$, and yet it is not true that $-\pi/3=2\pi k\pm\pi/3$ holds for every integer $k$ (take $k=1$). Therefore, the implication in the claim is false.

15. Originally Posted by Mathelogician
1) When we speak about an equation, then we must have an unknown number (called x) and we want to find all possible values for x.

I don't know where you get your definitions. 5=5 is an equation. And clearly in the equation 5=5 there is no unknown.

Originally Posted by Mathelogician
2) Then you should note that my claim, which is the expression and its converse, is ALWAYS true for trig EQUATIONS.

Just to be perfectly clear that we are using the same language: for a statement p -> q the converse is q -> p. Ackbeet very clearly explained that using "for all integers k" only one direction is true, while the other is false. Please try to understand this. Using the symbol $\Leftrightarrow$ is to claim that both directions are true. I will not post on this thread anymore (except to correct any mistake I may have made) because I feel the discussion is becoming unproductive, just saying the same thing over and over. Hope all this makes sense to the OP. Cheers.
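A quick numerical reading of the quantifier point settled above (my own sketch in Python, not one of the posts; the value of $a$ and the range of $k$ are arbitrary illustrative choices):

```python
import numpy as np

a = np.pi / 3

# "There exists k": every x of the form 2*k*pi + a or 2*k*pi - a
# satisfies cos(x) = cos(a).
for k in range(-3, 4):
    for sign in (+1, -1):
        x = 2 * k * np.pi + sign * a
        assert np.isclose(np.cos(x), np.cos(a))

# "For all k" fails: x = -a solves cos(x) = cos(a) (witnessed by k = 0
# with the minus sign), but it equals neither 2*pi + a nor 2*pi - a (k = 1).
x = -a
assert not np.isclose(x, 2 * np.pi + a)
assert not np.isclose(x, 2 * np.pi - a)
print("checks passed")
```

The first loop witnesses the "there exists" direction for many $k$ at once; the final assertions show that a particular solution need not match any single fixed $k$.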
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 47, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9099410772323608, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-algebra/67763-topology-open-set-question.html
# Thread:

1. ## Topology and Open Set Question

Define a topology on $\mathbb{R}$ (by listing the open sets within it) that contains the open sets (0,2) and (1,3) and that contains as few open sets as possible.

2. Originally Posted by r2dee6
Define a topology on $\mathbb{R}$ (by listing the open sets within it) that contains the open sets (0,2) and (1,3) and that contains as few open sets as possible.

You basically need to go back to the definition of a topology. If $\mathcal{T}$ is a topology on the set $X$, then $X$ and $\emptyset$ are in $\mathcal{T}$. If $A$ and $B$ are open in $\mathcal{T}$, then $A \cap B$ is open in $\mathcal{T}$ also. And if $A, B, \ldots$ are open in $\mathcal{T}$, then $A \cup B \cup \cdots$ is open in $\mathcal{T}$ also. Can you take it from here?

3. I think the smallest such topology contains 8 sets - but there are several correct answers.

4. Originally Posted by HallsofIvy
I think the smallest such topology contains 8 sets - but there are several correct answers.

Interesting. I have only come up with 6.

5. It's also important to know: if each set $\mathcal{A}_1,\mathcal{A}_2,\cdots,\mathcal{A}_n$ is open, then $\mathcal{M}_1:=\bigcup_{k=1}^{n}\mathcal{A}_k$ and $\mathcal{M}_2:=\bigcap_{k=1}^{n}\mathcal{A}_k$ are open; if the sets are closed, then these are closed. There's also a simple proof.

6. Originally Posted by chabmgph
Interesting. I have only come up with 6.

You are completely right. (I did say "I think"!) For some reason I was thinking that we would need to add something like $(-\infty, 3)$ and $(0, \infty)$ to get all real numbers, but we don't - they are already covered by having $\mathbb{R}$ itself in the topology.
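As a sanity check of the answer "6" (my own sketch, not from the thread), one candidate is $\{\emptyset, \mathbb{R}, (0,2), (1,3), (1,2), (0,3)\}$, where $(1,2)=(0,2)\cap(1,3)$ and $(0,3)=(0,2)\cup(1,3)$. The script below verifies closure under pairwise unions and intersections; arbitrary unions reduce to pairwise ones here since the collection is finite.

```python
from itertools import combinations

# The six candidate open sets: empty set, R, the two generators,
# their intersection (1,2) and their union (0,3).
EMPTY = "empty"
R = (float("-inf"), float("inf"))
T = [EMPTY, R, (0, 2), (1, 3), (1, 2), (0, 3)]

def inter(a, b):
    if a == EMPTY or b == EMPTY:
        return EMPTY
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo < hi else EMPTY

def union(a, b):
    if a == EMPTY:
        return b
    if b == EMPTY:
        return a
    # every pair of intervals in T overlaps, so the union is an interval
    assert max(a[0], b[0]) < min(a[1], b[1])
    return (min(a[0], b[0]), max(a[1], b[1]))

for a, b in combinations(T, 2):
    assert inter(a, b) in T
    assert union(a, b) in T
print("all pairwise unions and intersections stay inside the", len(T), "sets")
```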
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 18, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9787010550498962, "perplexity_flag": "head"}
http://nrich.maths.org/2383
### F'arc'tion

At the corner of the cube, circular arcs are drawn and the area enclosed is shaded. What fraction of the surface area of the cube is shaded? Try working out the answer without recourse to pencil and paper.

### Plutarch's Boxes

According to Plutarch, the Greeks found all the rectangles with integer sides whose areas are equal to their perimeters. Can you find them? What rectangular boxes, with integer sides, have their surface areas equal to their volumes?

### Take Ten

Is it possible to remove ten unit cubes from a 3 by 3 by 3 cube made from 27 unit cubes so that the surface area of the remaining solid is the same as the surface area of the original 3 by 3 by 3 cube?

# Cuboids

##### Stage: 3

Find a cuboid (with edges of whole number lengths) that has a surface area of exactly $100$ square units. Is there more than one? Can you find them all? Can you provide a convincing argument that you have found them all?
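For the Cuboids problem, a brute-force search (my own sketch, not part of the NRICH page) makes the counting argument concrete: with sides $a\le b\le c$, the requirement $2(ab+bc+ca)=100$ becomes $ab+bc+ca=50$, and $a\le 4$ since $3a^2\le 50$.

```python
# Enumerate integer-sided cuboids a <= b <= c with surface area 100.
solutions = []
for a in range(1, 5):            # 3*a*a <= 50 forces a <= 4
    for b in range(a, 51):
        for c in range(b, 51):
            if a * b + b * c + c * a == 50:
                solutions.append((a, b, c))
print(solutions)                 # [(1, 2, 16), (2, 4, 7)]
```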
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9461132884025574, "perplexity_flag": "middle"}
http://regularize.wordpress.com/2011/05/11/what-young-measures-can-tell-about-weak-convergence-of-non-linearly-distorted-functions/
# regularize

Trying to keep track of what I stumble upon

May 11, 2011

## What Young measures can tell about weak-* convergence of non-linearly distorted functions

Posted by Dirk under Math | Tags: chi-function, functional analysis, weak* convergence, Young measure | 1 Comment

This entry is not precisely about something I stumbled upon but about something that I have wanted to learn for some time now, namely Young measures. Lately I had a several hour train ride and I had the book Kinetic Formulation of Conservation Laws with me. While the book is about hyperbolic PDEs and their formulation as kinetic equations, it also has some pointers to Young measures. Roughly, Young measures are a way to describe weak limits of functions and especially to describe how these weak limits behave under non-linear functions; hence, we start with this notion.

1. Weak convergence of functions

We are going to deal with sequences of functions ${(f_n)}$ in spaces ${L^p(\Omega)}$ for some open bounded domain ${\Omega}$ and some ${1\leq p\leq \infty}$. For ${1\leq p < \infty}$ the dual space of ${L^p(\Omega)}$ is ${L^q(\Omega)}$ with ${1/p + 1/q = 1}$ and the dual pairing is

$\displaystyle \langle f,g\rangle_{L^p(\Omega)\times L^q(\Omega)} = \int_\Omega f\, g.$

Hence, a sequence ${(f_n)}$ converges weakly in ${L^p(\Omega)}$ to ${f}$, if for all ${g\in L^q(\Omega)}$ it holds that

$\displaystyle \int f_n\, g \rightarrow \int f\, g.$

We denote weak convergence (if the space is clear) with ${f_n\rightharpoonup f}$. For the case ${p=\infty}$ one usually uses the so-called weak-* convergence: A sequence ${(f_n)}$ in ${L^\infty(\Omega)}$ converges weakly-* to ${f}$, if for all ${g\in L^1(\Omega)}$ it holds that

$\displaystyle \int f_n\, g \rightarrow \int f\, g.$

The reason for this is that the dual space of ${L^\infty(\Omega)}$ is not easily accessible, as it can not be described as a function space. (If I recall correctly, this is described in "Linear Operators" by Dunford and Schwartz.) Weak-* convergence will be denoted by ${f_n\rightharpoonup^* f}$. In some sense, it is enough to consider weak-* convergence in ${L^\infty(\Omega)}$ to understand what Young measures are about, and I will only stick to this kind of convergence here.

Example 1 We consider ${\Omega = [0,1]}$ and two values ${a,b\in{\mathbb R}}$. We define a sequence of functions which jumps between these two values with an increasing frequency:

$\displaystyle f_n(x) = \begin{cases} a & \text{for }\ \tfrac{2k}{n} \leq x < \tfrac{2k+1}{n},\ k\in{\mathbb Z}\\ b & \text{else.} \end{cases}$

To determine the weak limit, we test with very simple functions, let's say with ${g = \chi_{[x_0,x_1]}}$. Then we get

$\displaystyle \int f_n\, g = \int_{x_0}^{x_1} f_n \rightarrow (x_1-x_0)\tfrac{a+b}{2}.$

Hence, we see that the weak-* limit of the ${f_n}$ (which is, by the way, always unique) has no other choice than being

$\displaystyle f \equiv \frac{a+b}{2}.$

In words: the ${f_n}$ converge weakly-* to the arithmetic mean of the two values between which they oscillate.

2. Non-linear distortions

Now, the norm-limit behaves well under non-linear distortions of the functions. Let's consider a sequence ${f_n}$ which converges in norm to some ${f}$. That is, ${\|f_n -f\|_\infty \rightarrow 0}$.
Since this means that ${\sup| f_n(x) - f(x)| \rightarrow 0}$ we see that for any bounded continuous function ${\phi:{\mathbb R}\rightarrow {\mathbb R}}$ we also have ${\sup |\phi(f_n(x)) - \phi(f(x))|\rightarrow 0}$ and hence ${\phi\circ f_n \rightarrow \phi\circ f}$. The same is totally untrue for weak-* (and also weak) limits:

Example 2 Consider the same sequence ${(f_n)}$ as in Example 1, which has the weak-* limit ${f\equiv\frac{a+b}{2}}$. As a nonlinear distortion we take ${\phi(s) = s^2}$, which gives

$\displaystyle \phi\circ f_n(x) = \begin{cases} a^2 & \text{for }\ \tfrac{2k}{n} \leq x < \tfrac{2k+1}{n},\ k\in{\mathbb Z}\\ b^2 & \text{else.} \end{cases}$

Now we see

$\displaystyle \phi\circ f_n \rightharpoonup^* \frac{a^2 + b^2}{2} \neq \Bigl(\frac{a+b}{2}\Bigr)^2 = \phi\circ f.$

The example can be made a little more drastic by assuming ${b = -a}$, which gives ${f_n\rightharpoonup^* f\equiv 0}$. Then, for every ${\phi}$ with ${\phi(0) = 0}$ we have ${\phi\circ f\equiv 0}$. However, with such a ${\phi}$ we may construct any constant value ${c}$ for the weak-* limit of ${\phi\circ f_n}$ (take, e.g. ${\phi(b) = 0}$, ${\phi(a) = 2c}$). In fact, the relation ${\phi\circ f_n \rightharpoonup^* \phi\circ f}$ is only true for affine linear distortions ${\phi}$ (unfortunately I forgot a reference for this fact...). This raises the question of whether it is possible to describe the weak-* limits of distortions of functions, and in fact this is possible with the notion of Young measures.

3. Young measures

In my understanding, Young measures are a method to view a function a little bit more geometrically, giving more emphasis to the graph of the function rather than its mapping properties. We start with defining Young measures and illustrate how they can be used to describe weak(*) limits. In what follows we use ${\mathfrak{L}}$ for the Lebesgue measure on the (open and bounded) set ${\Omega}$. A more thorough description in the spirit of this section is Variational analysis in Sobolev and BV spaces by Attouch, Buttazzo and Michaille.

Definition 1 (Young measure) A positive measure ${\mu}$ on ${\Omega\times {\mathbb R}}$ is called a Young measure if for every Borel subset ${B}$ of ${\Omega}$ it holds that

$\displaystyle \mu(B\times{\mathbb R}) = \mathfrak{L}(B).$

Hence, a Young measure is a measure such that the measure of every box ${B\times{\mathbb R}}$ is determined by the projection of the box onto the set ${\Omega}$, which is, of course, ${B}$.

There are special Young measures, namely those which are associated to functions. Roughly spoken, a Young measure associated to a function ${u:\Omega\rightarrow {\mathbb R}}$ is a measure which is equidistributed on the graph of ${u}$.

Definition 2 (Young measure associated to ${u}$) For a Borel measurable function ${u:\Omega\rightarrow{\mathbb R}}$ we define the associated Young measure ${\mu^u}$ by defining for every continuous and bounded function ${\phi:\Omega\times{\mathbb R}\rightarrow{\mathbb R}}$

$\displaystyle \int_{\Omega\times{\mathbb R}}\phi(x,y){\mathrm d}{\mu^u(x,y)} = \int_\Omega \phi(x,u(x)){\mathrm d} \mathfrak{L}(x).$

It is clear that ${\mu^u}$ is a Young measure: Take ${B\subset \Omega}$ and approximate the characteristic function ${\chi_{B\times{\mathbb R}}}$ by smooth functions ${\phi_n}$.
Then

$\displaystyle \int_{\Omega\times{\mathbb R}}\phi_n(x,y){\mathrm d}{\mu^u(x,y)} = \int_\Omega \phi_n(x,u(x)){\mathrm d} \mathfrak{L}(x).$

The left hand side converges to ${\mu^u(B\times{\mathbb R})}$ while the right hand side converges to ${\int_B 1{\mathrm d}{\mathfrak{L}} = \mathfrak{L}(B)}$ as claimed. The intuition that a Young measure associated to a function is an equidistributed measure on the graph can be made more precise by "slicing" it:

Definition 3 (Slicing a measure) Let ${\mu}$ be a positive measure on ${\Omega\times{\mathbb R}}$ and let ${\sigma}$ be its projection onto ${\Omega}$ (i.e. ${\sigma(B) = \mu(B\times{\mathbb R})}$). Then ${\mu}$ can be sliced into measures ${(\mu_x)_{x\in\Omega}}$, i.e. it holds:

1. Each ${\mu_x}$ is a probability measure.
2. The mapping ${x\mapsto \int_{\mathbb R} \phi(x,y){\mathrm d}{\mu_x(y)}}$ is measurable for every continuous ${\phi}$ and it holds that

$\displaystyle \int_{\Omega\times{\mathbb R}} \phi(x,y){\mathrm d}{\mu(x,y)} = \int_\Omega\int_{\mathbb R} \phi(x,y){\mathrm d}{\mu_x(y)}{\mathrm d}{\sigma(x)}.$

The existence of the slices is proven, e.g., in Variational analysis in Sobolev and BV spaces, Theorem 4.2.4. For the Young measure ${\mu^u}$ associated to ${u}$, the measure ${\sigma}$ in Definition 3 is ${\mathfrak{L}}$ and hence:

$\displaystyle \int_{\Omega\times{\mathbb R}}\phi(x,y){\mathrm d}{\mu^u(x,y)} = \int_\Omega\int_{\mathbb R} \phi(x,y){\mathrm d}{\mu^u_x(y)}{\mathrm d}{\mathfrak{L}(x)}.$

On the other hand:

$\displaystyle \int_{\Omega\times{\mathbb R}}\phi(x,y){\mathrm d}{\mu^u(x,y)} = \int_\Omega\phi(x,u(x)){\mathrm d}{\mathfrak{L}} = \int_\Omega\int_{\mathbb R} \phi(x,y) {\mathrm d}{\delta_{u(x)}(y)}{\mathrm d}{\mathfrak{L}(x)}$

and we see that ${\mu^u}$ slices into

$\displaystyle \mu^u_x = \delta_{u(x)},$

i.e. the slices are point masses sitting on the graph of ${u}$.

4. Narrow convergence of Young measures and weak-* convergence in ${L^\infty(\Omega)}$

Now we ask ourselves: If a sequence ${(u_n)}$ converges weakly-* in ${L^\infty(\Omega)}$, what does the sequence of associated Young measures do? Obviously, we need a notion for the convergence of Young measures. The usual notion here is that of narrow convergence:

Definition 4 (Narrow convergence of Young measures) A sequence ${(\mu_n)}$ of Young measures on ${\Omega\times{\mathbb R}}$ converges narrowly to ${\mu}$, if for all bounded and continuous functions ${\phi}$ it holds that

$\displaystyle \int_{\Omega\times{\mathbb R}}\phi(x,y){\mathrm d}{\mu_n(x,y)} \rightarrow \int_{\Omega\times{\mathbb R}} \phi(x,y){\mathrm d}{\mu(x,y)}.$

Narrow convergence will also be denoted by ${\mu_n\rightharpoonup\mu}$. One may also use the non-continuous test functions of the form ${\phi(x,y) = \chi_B(x)\psi(y)}$ with a Borel set ${B\subset\Omega}$ and a continuous and bounded ${\psi}$, leading to the same notion. The set of Young measures is closed under narrow convergence, since we may test with the function ${\phi(x,y) = \chi_B(x)\chi_{\mathbb R}(y)}$ to obtain:

$\displaystyle \mathfrak{L}(B) = \lim_{n\rightarrow\infty} \int_{\Omega\times{\mathbb R}}\phi(x,y){\mathrm d}{\mu_n(x,y)} = \int_{\Omega\times{\mathbb R}}\phi(x,y){\mathrm d}{\mu(x,y)} = \mu(B\times {\mathbb R}).$

The next observation is the following:

Proposition 5 Let ${(u_n)}$ be a bounded sequence in ${L^\infty(\Omega)}$. Then the sequence ${(\mu^{u_n})}$ of associated Young measures has a subsequence which converges narrowly to a Young measure ${\mu}$.
The proof uses the notion of tightness of sets of measures and the Prokhorov compactness theorem for Young measures (Theorem 4.3.2 in Variational analysis in Sobolev and BV spaces).

Example 3 (Convergence of the Young measures associated to Example 1) Consider the functions ${f_n}$ from Example 1 and the associated Young measures ${\mu^{f_n}}$. To figure out the narrow limit of these Young measures we test with a function ${\phi(x,y) = \chi_B(x)\psi(y)}$ with a Borel set ${B}$ and a bounded and continuous function ${\psi}$. We calculate

$\displaystyle \begin{array}{rcl} \int_{[0,1]\times{\mathbb R}}\phi(x,y){\mathrm d}{\mu^{f_n}(x,y)} &= &\int_0^1\phi(x,f_n(x)){\mathrm d}{\mathfrak{L}(x)}\\ & = &\int_B\psi(f_n(x)){\mathrm d}{\mathfrak{L}(x)}\\ & \rightarrow &\mathfrak{L}(B)\frac{\psi(a)+\psi(b)}{2}\\ & = & \int_B\frac{\psi(a)+\psi(b)}{2}{\mathrm d}{\mathfrak{L}(x)}\\ & = & \int_{[0,1]}\int_{\mathbb R}\phi(x,y){\mathrm d}{\bigl(\tfrac{1}{2}(\delta_a+\delta_b)\bigr)(y)}{\mathrm d}{\mathfrak{L}(x)}. \end{array}$

We conclude:

$\displaystyle \mu^{f_n} \rightharpoonup \tfrac{1}{2}(\delta_a+\delta_b)\otimes\mathfrak{L},$

i.e. the narrow limit of the Young measures ${\mu^{f_n}}$ is not the constant function ${(a+b)/2}$ but the measure ${\mu = \tfrac{1}{2}(\delta_a+\delta_b)\otimes\mathfrak{L}}$. This expression may be easier to digest in sliced form:

$\displaystyle \mu_x = \tfrac{1}{2}(\delta_a+\delta_b),$

i.e. the narrow limit is something like the "probability distribution" of the values of the functions ${f_n}$.

Obviously, this notion of convergence goes well with nonlinear distortions:

$\displaystyle \mu^{\phi\circ f_n} \rightharpoonup \tfrac{1}{2}(\delta_{\phi(a)} + \delta_{\phi(b)})\otimes\mathfrak{L}.$

Recall from Example 2: The weak-* limit of ${\phi\circ f_n}$ was the constant function ${\tfrac{\phi(a)+\phi(b)}{2}}$, i.e.

$\displaystyle \phi\circ f_n \rightharpoonup^* \tfrac{\phi(a)+\phi(b)}{2}\chi_{[0,1]}.$

The observation from the previous example is true in a similar way for general weakly-* converging sequences ${f_n}$:

Theorem 6 Let ${f_n\rightharpoonup^* f}$ in ${L^\infty(\Omega)}$ with ${\mu^{f_n}\rightharpoonup\mu}$. Then it holds for almost all ${x}$ that

$\displaystyle f(x) = \int_{\mathbb R} y\,{\mathrm d}{\mu_x(y)}.$

In other words: ${f(x)}$ is the expectation of the probability measure ${\mu_x}$.

### One Response to "What Young measures can tell about weak-* convergence of non-linearly distorted functions"

1. May 22, 2011 at 10:36 pm [...] However, this type of convergence does not occur in bounded domains, and hence, can not be treated with Young measures as they have been introduced in my previous entry. [...]
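A numerical illustration of Examples 1-3 and Theorem 6 (my own sketch, not part of the original post; the values $a=1$, $b=3$, the frequency $n=200$ and the test set $[0.2,0.7]$ are arbitrary choices):

```python
import numpy as np

a, b, n = 1.0, 3.0, 200
x = np.linspace(0.0, 1.0, 200001)
f = np.where(np.floor(n * x) % 2 == 0, a, b)   # f_n from Example 1

g = (x >= 0.2) & (x <= 0.7)                    # test function chi_[0.2, 0.7]

# Example 1: int f_n g -> (x1 - x0)(a + b)/2
print(np.trapz(f * g, x), 0.5 * (0.7 - 0.2) * (a + b))

# Example 2: int (f_n)^2 g -> (x1 - x0)(a^2 + b^2)/2, not ((a + b)/2)^2
print(np.trapz(f**2 * g, x), 0.5 * (0.7 - 0.2) * (a**2 + b**2))

# Example 3 / Theorem 6: the values of f_n equidistribute over {a, b}, so
# testing with psi(y) = y^3 returns the expectation under (delta_a+delta_b)/2.
print(np.mean(f**3), 0.5 * (a**3 + b**3))
```

Each printed pair agrees to the resolution of the grid, matching the narrow limit $\tfrac{1}{2}(\delta_a+\delta_b)\otimes\mathfrak{L}$ computed above.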
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 143, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9390110373497009, "perplexity_flag": "head"}
http://mathoverflow.net/questions/70248/searching-for-an-unabridged-proof-of-the-basic-theorem-of-morse-theory/70322
## Searching for an unabridged proof of "The Basic Theorem of Morse Theory"

Steven Smale labels the following statement "The Basic Theorem of Morse Theory" in A Survey of some Recent Developments in Differential Topology: Let f be a $C^\infty$ function on a closed manifold with no critical points on $f^{-1}[-\epsilon,\epsilon]$ except k nondegenerate ones on $f^{-1}(0)$, all of index $s$. Then $f^{-1}[-\infty,\epsilon]$ is diffeomorphic to $X(f^{-1}[-\infty,-\epsilon];f_1,\ldots,f_k;s)$ (for suitable $f_i$). Here $X(M;f;s)$ for $f\colon\,(\partial D^s)\times D^{n-s}\to M$ is M with an s-handle attached by f.

Where can I find a complete proof of this theorem, with all the t's crossed and i's dotted? Textbooks (Milnor, Matsumoto) only seem to prove homology/homotopy versions of the above statement, usually with substantial steps to be filled in by the reader. I nosed around some old papers for a few hours (surely Smale himself proved it somewhere!), but to no avail. If I were to continue to search, no doubt I could eventually turn it up (there are a finite number of differential topology papers written 1958-1962, which is when I assume it was proven), but because I think that this question might be of wider interest, and to save me a lot of time, I'd like to ask: Where can I find a complete unabridged proof of "The Basic Theorem of Morse Theory"? (In fact I care only about low dimensions.) What is the original paper, and is there a textbook exposition of it anywhere?

- I'm leaving this as a comment since I don't have the text available to confirm. "From Calculus to Cohomology" by Madsen and Tornehave has a very detailed treatment of basic Morse theory in the chapter on the Poincare-Hopf theorem, and I seem to recall that the book has an appendix which is specifically dedicated to hammering out all the details of what you want. It might be worth a look, anyway. – Paul Siegel Jul 14 2011 at 13:07
- The original paper was: Generalized Poincaré conjecture in dimensions greater than 4, Annals of Mathematics, 74 (1961), pp. 391--406. The Kosinski book Johannes mentions and Milnor's h-cobordism theorem lectures are the best textbook references that I'm aware of. Smale's papers tend to have a lot of typos, and he also runs into several "smoothing the corner" problems that Kosinski avoids. – Ryan Budney Jul 14 2011 at 13:56
- @Ryan I wasn't able to figure out where to look in Smale... I mean, definitely the core ideas are there, but page 403 only gives a "proof sketch" of the result, without any details at all. – Daniel Moskovich Jul 14 2011 at 19:10
- @Ryan isn't Milnor's h-cobordism treatment a "homotopy proof" again? Anyway, Kosinski + Palais are beautiful and simple, so now I'm happy... but I wish I understood the history better! There's also Wallace "Modifications and cobounding manifolds" Canad. J. Math. 12 (1960) 503-528, who proves a related-looking statement in Section 4. – Daniel Moskovich Jul 15 2011 at 18:23
- @Paul Siegel: Madsen-Tornehave turns out to be the homotopy version also (but a nice and slightly different version). Goresky-MacPherson prove the statement in the stratified setting. The proof is a 100 page tour-de-force. The original proof appears to be Palais. – Daniel Moskovich Jun 1 at 5:24

## 4 Answers

R. Palais, Morse theory on Hilbert manifolds (main theorem of §12). As you will see, in the infinite dimensional setting the construction loses nothing in clearness.
Kosinski, ''Differential manifolds'', Chapter VII, section 2. He gives a detailed proof in the case of just one critical point.

- BANG!! This is exactly what I was looking for! Do you know the history of the argument? The construction of an explicit diffeomorphism can't be due to Kosinski; it must be from the late 50's or early 60's surely! – Daniel Moskovich Jul 13 2011 at 21:05
- I think the proof in my paper "Morse Theory on Hilbert Manifolds" (referenced and linked to in Pietro Majer's answer) was probably the first place that the smooth handle attaching theorem was proved---it was certainly the first place that it was proved for Hilbert manifolds. Until then people settled for a homotopy version. – Dick Palais Jul 14 2011 at 15:53
- I wish I could accept 2 answers - then I would also have accepted this one. Kosinski's treatment was really clear and helpful - thanks! – Daniel Moskovich Jul 14 2011 at 17:57
- Incidentally, Kosinski's book is available in electronic form: gen.lib.rus.ec/… – Dmitri Pavlov Jan 25 2012 at 15:43

My recollection is that Milnor's proof gives exactly what you are asking. In fact, see the remark on the bottom of page 17 of his book.

- This is what I was looking at, but it looks like a homotopy proof (although it can surely be upgraded). Theorem 3.2 on page 14 claims only homotopy equivalence, and Assertion 4 on page 18 pushes along horizontal lines... does this preserve smoothness? I don't follow the details of his argument (maybe I'm just not reading carefully). Also, Remark 3.3 on page 19 confuses me... what's the argument which "unmixes" the various handles? – Daniel Moskovich Jul 13 2011 at 20:10
- No it really is a diffeomorphism proof. Assume for simplicity that there is a single critical point $p$ with $f(p)=c$. The idea is that $M^a \cup H$ is a deformation retract of $M^b$ where $a<c<b$ are regular values and $c$ is the only critical value in the interval $[a,b]$ (where $M^b := f^{-1}((-\infty,b])$). Then theorem 3.1 implies that $M^a \cup H$ is diffeomorphic to $M^b$ (since the gradient field on $M^b$ has no zeros on the complement of $M^a \cup H$). – John Klein Jul 13 2011 at 21:04
- As far as "unmixing the handles" goes, you can change the function slightly near the critical points so that all the critical points have distinct critical values. That will do it. – John Klein Jul 13 2011 at 21:05
- I believe you, but I still don't get it... H is *defined* to be $F^{-1}(-\infty,a]-M^{a}$. Why is this diffeomorphic to an s-handle? Don't you need a smooth version of Assertion 4? (Which it turns out is in Kosinski, as Johannes Ebert points out, but I can't see how to upgrade the argument in Milnor, if indeed it needs upgrading...) I'm sure I'm just missing something very simple and obvious... – Daniel Moskovich Jul 13 2011 at 21:26

Dear Daniel Moskovich, I am answering just the last part of your question. As you said this could be useful to many others beyond the original poster, so I report my experience hoping it is useful. The only textbook on differentiable manifolds including a proof of the basic theorem in Morse Theory that I have met until now is "Differentiable manifolds, Second Edition" by Lawrence Conlon. His presentation of Morse Theory is distributed over sections 2.9.B, 3.10, and 4.2, and is closely inspired by Milnor's book.
In a certain way it requires the active cooperation of the reader in completing some minor details, but in the end this work is doubly rewarding: it reinforces your previous knowledge and ensures that you grasp the content of basic Morse theory.

Edit: I have found that Conlon leaves out just the recognition that a certain manifold is a $\lambda$-handle, and for this result he refers to S. Smale, "Generalized Poincaré's conjecture in dimensions greater than four".

- Do you know where in Smale? (I was unsuccessful finding it.) On page 403, there seems only to be a "proof sketch"... also, I didn't understand how corners were being treated. I mean, somehow the "idea" is in Smale, but where is the proof? – Daniel Moskovich Jul 14 2011 at 19:07
- Dear Daniel Moskovich, it seems to me that Smale says that the proof of theorem 5.1 is only sketched because its proof closely follows that of the Handlebody Theorem 1.2. Sections 2, 3, and 4 are devoted to proving theorem 1.2. About straightening the angle along the corners, on page 396, in the first paragraph of §1, Smale says that he refers for such a procedure to Milnor[10], "Differentiable manifolds which are homotopy spheres". – Giuseppe Jul 14 2011 at 19:31
- Thanks! But I'm still having difficulty understanding. The corners I am concerned about are when you attach the cell, especially if there are many handles (one is "easy"). So what I'm looking for is where he shows that "what you contract the saddle to" is diffeomorphic to a handle $D^s\times D^{n-s}$ (smoothed somehow), and that this diffeomorphism extends over the rest of the manifold. Could you give a page reference for this step? – Daniel Moskovich Jul 14 2011 at 19:56
- @Daniel Moskovich: For your first point of interest look at the sketched proof of theorem 6.2 (that is your original statement), so $f^{-1}([-\infty,\varepsilon])$ is the union of $f^{-1}([-\infty,-\varepsilon])$ and of $f^{-1}([-\varepsilon,\varepsilon])$ along their common boundary $f^{-1}(-\varepsilon)$, and $f^{-1}([-\varepsilon,\varepsilon])$ is diffeomorphic to $D^\lambda\times D^{n-\lambda}$. So $f^{-1}([-\infty,\varepsilon])$ is already a handlebody. For your second point of interest, starting §1, Smale says: the smooth structure obtained straightening the angles is unique up to diffeo. – Giuseppe Jul 14 2011 at 21:18
- Thank you for your answer. I'm probably just missing something, but I can't find the relevant details written down there at these critical steps. So (although I might be wrong) it looks to me like Smale's proof is a "sketch proof", which is all it claims to be. – Daniel Moskovich Jul 15 2011 at 18:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 30, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9501234889030457, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/5373/quantum-death-like-heat-death-possible
# Quantum death like heat death possible?

Quantum decoherence is an irreversible process which is the result of interaction of the system with its environment. It prevents interference due to lack of coherence. The environment acts just like a heat bath. Now my question is: are the different branches of the wave function of the universe becoming gradually more and more decoherent, so that in the far future no trace of interference can occur? In other words, I would like to know whether there will be any quantum death (like the heat death of thermodynamics) of the universe if one waits for a sufficiently long time.

- "Environment acts just like a heat bath" I doubt that. A heat bath maybe enhances decoherence, but it does a lot more. And why is decoherence of the "universe wave function" quantum death? – Georg Feb 17 '11 at 16:41
- Why would you think the universe will have a "wave function"? In my opinion the universe viewed quantum mechanically has a density matrix composed of all the zillion independent state functions of atoms, molecules, light, etc. It is thermodynamic really. – anna v Feb 17 '11 at 16:50
- @anna v: Of course there is a wave function of the universe as Hartle and Hawking showed us. Otherwise what do you mean by the subject called "quantum cosmology"? – user1355 Feb 17 '11 at 17:00
- Dear sb1, if you consider loss of coherence "quantum death", I assure you that 99.99999999999999999999999999999% of the quantum death is completed within a tiny fraction of a second - which may be much shorter than the Planck time for macroscopic objects. In principle, the coherence is always there if you could trace the environment, directly or indirectly, but it's totally inconsequential for physics as an empirical science. The decoherence occurs almost instantly. I still don't understand why you call it "death". It's just the appearance of the classical intuition from quantum mechanics. – Luboš Motl Feb 17 '11 at 19:01

## 1 Answer

This question is a bit strange, and I tend to agree with Anna that this is related to thermodynamics. The entropy involved here is an entanglement entropy. Suppose you have a system A and a system B which form an entanglement. The entanglement entropy of system A is

$$S_A = -\mathrm{Tr}[\rho_A \log\rho_A],$$

which equals the entanglement entropy of system B. If the two systems form a pure state then $S_{A+B} = 0$. The entanglement entropy comes from the fact that you have access to only one part of the density matrix $\rho = \rho_A\otimes\rho_B$.

This plays a role in cosmology at large. During inflation the vacuum energy density was huge. The cosmological constant $\Lambda \simeq (8\pi G/c^4)\rho$ is very large and drives a rapid exponential expansion of space. There is some theoretical controversy here, but while the energy density of the vacuum was very large, the entropy was not that large. The entropy is a measure of the number, N, of degrees of freedom in a system that are coarse grained into a macrostate, $S = k \log(N)$. The other oddball factor is that while the temperature was high, the entropy was low due to the negative heat capacity of event horizons in spacetime. During inflation the event horizon was smaller than a proton, and the entropy is proportional to the area of the horizon, $S = k A/4L_p^2$. The bang came about because the exponential expansion rapidly came to a halt, the cosmological constant dropped to a small value (the vacuum energy dropped enormously) and the cosmological horizon adjusted to a very large value.
It is now out about $10^{10}$ light years. This means a relatively small number of degrees of freedom enter into complicated entanglements which are not accessible in a local region. The entanglement entropy increases, and these states appear in a highly thermalized form. This is the bang and fire of the big bang. It is a form of latent heat of fusion in a phase transition. The large vacuum energy $\rho \simeq 10^{100}\,\mathrm{GeV}^4$ crashed down to about $10\,\mathrm{GeV}^4$, and the energy gap assumed the form of a thermalized gas of particles. This was the initial generation of a huge amount of entropy in the early universe. Subsequently entropy is in the form of black holes, radiation and so forth. It is interesting to think we can understand this all from the perspective of quantum mechanics.

Into the future the universe will end up as a de Sitter vacuum. In the question "de Sitter cosmologic limit" I indicated how the universe will, over a vast period of time, decay from the de Sitter vacuum configuration with a small vacuum energy to a Minkowski spacetime. The horizon will retreat off to "infinity," which means the entropy becomes infinite. It might be problematic to think of infinite entropy. There might be some sort of cut-off in this process. On the other hand this is just a measure of how the vacuum decays away to zero until there is no energy. The retreat of the cosmological horizon off to infinity is probably a measure of a continued quantum entanglement process, which proceeds almost indefinitely.

- Thanks for the answer. Of course it is related with thermodynamics and entropy generation but what I really wanted to know is whether it (decoherence) means a complete absence of any interference in the long run. – user1355 Feb 17 '11 at 17:24
- In a measurement of a superposed pair of states the superposition or overlap is replaced by an entanglement. So what you ask could be answered in the affirmative. – Lawrence B. Crowell Feb 17 '11 at 19:09
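To make the entanglement entropy formula at the start of the answer concrete, here is a small numerical sketch (my own addition, not part of the thread) computing $S_A=-\mathrm{Tr}[\rho_A\log\rho_A]$ for a maximally entangled pair of qubits:

```python
import numpy as np

psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)
rho = np.outer(psi, psi)                            # pure state: S_{A+B} = 0

# Partial trace over B: rho_A[i, j] = sum_k rho[(i, k), (j, k)]
rho_A = np.einsum("ikjk->ij", rho.reshape(2, 2, 2, 2))

p = np.linalg.eigvalsh(rho_A)
S_A = -sum(q * np.log(q) for q in p if q > 1e-12)
print(S_A, np.log(2))                               # both ~0.693: one "ebit"
```

Running the same computation on a product state such as $|00\rangle$ returns $S_A=0$, matching the statement that the entropy measures the correlations lost when only one subsystem is accessible.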
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9399909973144531, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/24010/the-equivalent-electric-field-of-a-magnetic-field?answertab=votes
# The equivalent electric field of a magnetic field

I know that the Lorentz force for a charge $q$, with velocity $\vec{v}$ in a magnetic field $\vec{B}$, is given by $$\vec{F} =q \vec{v} \times \vec{B}$$ but there will exist a frame of reference in which the observer moves at the same velocity as the charge $q$, so according to him $v=0$; hence he will see no magnetic force exerted on the charge $q$. I have worked on this problem for a while and found that special relativity predicts that an equivalent electric force acts upon the charge instead. I want to know the relationship between this equivalent electric force and the magnetic force. Thanks in advance.

- I'd say the net force has to be the same. So find a frame in which $v_{rel}=0$, and the equiv. E field will be $E=Bv$. Not sure. – Manishearth♦ Apr 19 '12 at 8:35
- Seems like a problem that has a fairly simple answer, but I have no idea which. Actually, in any frame of reference you'll see some movement. If you stand still relative to the magnetic field, you see circular movement; if you are moving relative to the magnetic field you see cycloids. So this additional force that arises with the transformation will be quite complicated (not a homogeneous one!). – Pygmalion Apr 19 '12 at 8:49
- I can share with you my paper on this if it is of any use, but I am not able to derive the relation between the two; I could only explain how the magnetic force is converted into an electric force in the special relativity mechanism. – someone_ smiley Apr 19 '12 at 8:50
- I have a feeling that a few guys that follow this forum will find this question jokingly easy and we'll all say "aha" when we see the answer. I suggest some patience. – Pygmalion Apr 19 '12 at 8:53

## 2 Answers

I haven't read them, but this, this, this and this thread (I thank a diligent Qmechanic) are related and clear up the "but why" questions you might have. The transformation of the quantities in electrodynamics w.r.t. boosts (here for a boost with speed $v$ along the $x$-axis) is

$$E'_x = E_x,\qquad E'_y = \gamma\,(E_y - v B_z),\qquad E'_z = \gamma\,(E_z + v B_y),$$

$$B'_x = B_x,\qquad B'_y = \gamma\left(B_y + \frac{v}{c^2} E_z\right),\qquad B'_z = \gamma\left(B_z - \frac{v}{c^2} E_y\right),$$

where $\gamma(v)=1/\sqrt{1-v^2/c^2}$, and the derivation of the transformation is presented on this wikipedia page and is most transparent in a space-time geometrical picture, see for example here. Namely, the electromagnetic field strength tensor $F_{\mu\nu}$ incorporates both the electric and magnetic fields $E,B$ and the transformation is the canonical one of a tensor and therefore not as all over the place as the six lines posted above. In the non-relativistic limit $v\ll c$, i.e. when physical boosts are not associated with Lorentz transformations, you have

$$\vec{E}' \approx \vec{E} + \vec{v}\times\vec{B},\qquad \vec{B}' \approx \vec{B} - \frac{1}{c^2}\,\vec{v}\times\vec{E}.$$

For the traditional force law, the first formula confirms the prediction that the new $E$ magnitude is $vB$. Also, beware and always write down the full Lorentz law when doing transformations. Lastly, I'm not sure if "special relativity predicts an equivalent electric force acts upon the charge instead" is the right formulation you should use, because while the relation is convincingly natural in a special relativistic formulation, the statement itself is more a consistency requirement for the theory of electrodynamics. I'd almost say the argument goes in the other direction: the terrible transformation law of $E$ and $B$ w.r.t. Galilean transformations was known before 1905, and upgrading the status of the Maxwell equations to be form invariant when translating between inertial frames suggests that the Lorentz transformation (and then special relativity as a whole) is physically sensible.

We can write the Lorentz transform of the fields in a very clean and easy to understand way.
To simplify the expression we use a shorthand notation for the various components of the fields parallel and orthogonal to the boost $\vec{\beta}$, further simplified by setting $c$ to 1.

## Lorentz transform of the electromagnetic field

$$\begin{array}{lclclcl} \mathsf{E}' & = & \mathsf{E}_\| & + & \mathsf{E}_\bot\ \gamma & + & \mathsf{B}_\otimes\ \beta\gamma \\ \mathsf{B}' & = & \mathsf{B}_\| & + & \mathsf{B}_\bot\ \gamma & - & \mathsf{E}_\otimes\ \beta\gamma \end{array}$$

The parallel and orthogonal components are defined, using the unit vector $\hat{\beta}$, as:

• $\mathsf{E}_\| = (\hat{\beta}\cdot\mathsf{E})\,\hat{\beta}$: the parallel component with regard to $\vec{\beta}$
• $\mathsf{E}_\bot = (\hat{\beta}\times \mathsf{E})\times\hat{\beta}$: the orthogonal component with regard to $\vec{\beta}$
• $\mathsf{E}_\otimes = \hat{\beta}\times \mathsf{E}$: the $90^\circ$ rotated orthogonal component

So in words:

• The fields parallel to the boost don't change
• The fields orthogonal to the boost are multiplied with $\gamma$
• The E and B fields orthogonal to the boost are converted into each other.

Hans. For more see this chapter from my book: http://physics-quest.org/Book_Chapter_EM_LorentzContr.pdf
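As a numerical cross-check of the two answers (my own sketch, not part of either answer; units with $c=1$ and an arbitrary example field), Hans' parallel/orthogonal form reduces for small $v$ to $E'\approx E+v\times B$, consistent with the $E=Bv$ guess in the comments:

```python
import numpy as np

def boost_fields(E, B, v):
    """Hans' formulas for a boost with velocity v along x (c = 1)."""
    beta = np.array([v, 0.0, 0.0])
    gamma = 1.0 / np.sqrt(1.0 - v * v)
    bhat = np.array([1.0, 0.0, 0.0])
    E_par = (bhat @ E) * bhat
    B_par = (bhat @ B) * bhat
    E_perp = np.cross(np.cross(bhat, E), bhat)
    B_perp = np.cross(np.cross(bhat, B), bhat)
    Ep = E_par + gamma * E_perp + gamma * np.cross(beta, B)
    Bp = B_par + gamma * B_perp - gamma * np.cross(beta, E)
    return Ep, Bp

E = np.zeros(3)                        # pure magnetic field in the lab frame
B = np.array([0.0, 0.0, 2.0])
v = 1e-3
Ep, _ = boost_fields(E, B, v)
print(Ep)                              # ~ (0, -v*B_z, 0): magnitude v*B
print(E + np.cross([v, 0.0, 0.0], B))  # the non-relativistic E + v x B
```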
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8894439339637756, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/hamiltonian-formalism+reference-request
# Tagged Questions

### Dirac equation as Hamiltonian system (3 answers, 315 views)

Let us consider the Dirac equation $$(i\gamma^\mu\partial_\mu -m)\psi =0$$ as a classical field equation. Is it possible to introduce a Poisson bracket on the space of spinors $\psi$ in such a way that ...

### Lagrangian to Hamiltonian in Quantum Field Theory (4 answers, 588 views)

While deriving the Hamiltonian from the Lagrangian density, we use the formula $$\mathcal{H} = \pi \dot{\phi} - \mathcal{L}.$$ But since we are considering space and time as parameters, why the formula ...

### Hamiltonian mechanics and special relativity? (2 answers, 257 views)

Is there a relativistic version of Hamiltonian mechanics? If so, how is it formulated (what are the main equations and the form of the Hamiltonian)? Is it a common framework, if not then why? It would be ...

### Hamiltonian and the space-time structure (4 answers, 347 views)

I'm reading Arnold's "Mathematical Methods of Classical Mechanics" but I failed to find a rigorous development for the allowed forms of the Hamiltonian. Space-time structure dictates the form of ...

### Analogue of Princeton Companion to Mathematics for Physics? (2 answers, 315 views)

I would like to know if there are compendiums much like the Princeton Companion to Mathematics for physics (especially classical physics: fluid mechanics, elasticity theory, Hamiltonian formalism of ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8833897113800049, "perplexity_flag": "middle"}
http://mathforum.org/mathimages/index.php?title=Divergence_Theorem&diff=6850&oldid=6807
# Divergence Theorem

**Fountain Flux** (Field: Calculus; created by Brendan John). The water flowing out of a fountain demonstrates an important theorem for vector fields, the Divergence Theorem.

# Basic Description

Consider the top layer of the fountain pictured. The rate at which water flows out of the fountain's spout is directly related to the amount of water that flows off the top layer.
Because something like water isn't easily compressed the way air is, if more water is pumped out of the spout, then more water will have to flow over the boundaries of the top layer. This is essentially what the Divergence Theorem states: the total fluid being introduced into a volume is equal to the total fluid flowing out of the boundary of the volume, provided the quantity of fluid in the volume is constant.

# A More Mathematical Explanation

Note: understanding of this explanation requires some multivariable calculus.

The Divergence Theorem in its pure form applies to Vector Fields. Flowing water can be considered a vector field because at each point the water has a velocity vector. Faster moving water is represented by a larger vector in our field. The divergence of a vector field is a measurement of the expansion or contraction of the field; if more water is being introduced then the divergence is positive. Analytically, the divergence of a field $F$ is expressed in partial derivatives:

$\nabla\cdot\mathbf{F} =\partial{F_x}/\partial{x} + \partial{F_y}/\partial{y} + \partial{F_z}/\partial{z},$

where $F_i$ is the component of $F$ in the $i$ direction. Intuitively, if F has a large positive rate of change in the x direction, the partial derivative with respect to x in this direction will be large, increasing total divergence. The divergence theorem requires that we sum divergence over an entire volume. If this sum is positive, then the field must indicate some movement out of the volume through its boundary, while if this sum is negative, the field must indicate some movement into the volume through its boundary. We use the notion of flux, the flow through a surface, to quantify this movement through the boundary, which itself is a surface. The divergence theorem is formally stated as:

$\iiint\limits_V\left(\nabla\cdot\mathbf{F}\right)dV=\iint\limits_{\partial V}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\;\;\;\subset\!\supset \mathbf F\;\cdot\mathbf n\,{d}S .$

The left side of this equation is the sum of the divergence over the entire volume, and the right side of this equation is the sum of the field perpendicular to the volume's boundary at the boundary, which is the total flux through the boundary.

Summing up divergence over the entire volume means we sum the flow into or out of each infinitesimal subregion, since a flow into one infinitesimal subregion means flow out of an adjacent subregion, which affects the next adjacent subregion, and so on until the boundary of the entire volume is reached. The total sum of divergence over the volume is thus equal to the flow at the boundary, as the theorem states.

A volume can be broken into infinitely small subregions, each of whose divergence affects the adjacent regions' divergence, up to the volume's boundary.

### Example of Divergence Theorem Verification

The following example verifies that given a volume and a vector field, the Divergence Theorem is valid. (Cutaway view of the cube used in the example: the purple lines are the vectors of the vector field F.)

Consider the vector field $F = \begin{bmatrix} x^2 \\ 0\\ 0\\ \end{bmatrix}$. For a volume, we will use a cube of edge length two, and vertices at (0,0,0), (2,0,0), (0,2,0), (0,0,2), (2,2,0), (2,0,2), (0,2,2), (2,2,2). This cube has a corner at the origin and all the points it contains are in positive regions.
• We begin by calculating the left side of the Divergence Theorem.

Step 1: Calculate the divergence of the field: $\nabla\cdot F = 2x$

Step 2: Integrate the divergence of the field over the entire volume.

$\iiint\nabla\cdot F\,dV =\int_0^2\int_0^2\int_0^2 2x \, dxdydz$

$=\int_0^2\int_0^2 4\, dydz$

$=16$

• We now turn to the right side of the equation, the integral of flux.

Step 3: We first parametrize the parts of the surface which have non-zero flux. Notice that the given vector field has vectors which only extend in the x-direction, since each vector has zero y and z components. Therefore, only two sides of our cube can have vectors normal to them, those sides which are perpendicular to the x-axis. Furthermore, the side of the cube perpendicular to the x-axis with all points satisfying x = 0 cannot have any flux, since all vectors on this surface are zero vectors. We are thus only concerned with one side of the cube since only one side has non-zero flux. This side is parametrized using

$X=\begin{bmatrix} x \\ y\\ z\\ \end{bmatrix} = \begin{bmatrix} 2 \\ u\\ v\\ \end{bmatrix}\, , u \in (0,2)\, ,v \in (0,2)$

Step 4: With this parametrization, we find a general normal vector to our surface. To find this normal vector, we find two vectors which are always tangent to (or contained in) the surface, and are not collinear. The cross product of two such vectors gives a vector normal to the surface. The first vector is the partial derivative of our surface with respect to u:

$\frac{\part{X}}{\part{u}} = \begin{bmatrix} 0\\ 1\\ 0\\ \end{bmatrix}$

The second vector is the partial derivative of our surface with respect to v:

$\frac{\part{X}}{\part{v}} = \begin{bmatrix} 0\\ 0\\ 1\\ \end{bmatrix}$

The normal vector is finally the cross product of these two vectors, which is simply

$N = \begin{bmatrix} 1\\ 0\\ 0\\ \end{bmatrix}.$

Step 5: Integrate the dot product of this normal vector with the given vector field. The amount of the field normal to our surface is the flux through it, and is exactly what this integral gives us. Since $x = 2$ everywhere on this face, $F\cdot N = x^2 = 4$:

$\iint\limits_{\partial V}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\;\;\;\subset\!\supset \mathbf F\;\cdot\mathbf n\,{d}S = \int_0^2 \int_0^2 F \cdot N \,dudv = \int_0^2 \int_0^2 \begin{bmatrix} x^2 \\ 0\\ 0\\ \end{bmatrix} \cdot \begin{bmatrix} 1 \\ 0\\ 0\\ \end{bmatrix} \,dudv = \int_0^2 \int_0^2 4 \,dudv = 16$

• Both sides of the equation give 16, so the Divergence Theorem is indeed valid here. ■
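The same verification can be scripted with symbolic integration (my own sketch, not part of the page); both sides evaluate to 16:

```python
import sympy as sp

x, y, z, u, v = sp.symbols("x y z u v")

# Left side: integrate div F = 2x over the cube [0, 2]^3.
lhs = sp.integrate(2 * x, (x, 0, 2), (y, 0, 2), (z, 0, 2))

# Right side: flux through the face x = 2, where F . N = x**2 = 4.
rhs = sp.integrate(4, (u, 0, 2), (v, 0, 2))

print(lhs, rhs)   # 16 16
```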
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 19, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.891960620880127, "perplexity_flag": "head"}
http://regularize.wordpress.com/2011/11/02/semi-continuous-sparse-reconstruction-and-compressed-sensing/
# regularize

Trying to keep track of what I stumble upon

November 2, 2011

## Semi-continuous sparse reconstruction and compressed sensing

Posted by Dirk under Math, Regularization, Signal and image processing, Sparsity | Tags: Basis pursuit, compressed sensing, sparsity | [7] Comments

How many samples are needed to reconstruct a sparse signal? Well, there are many, many results around some of which you probably know (at least if you are following this blog or this one). Today I write about a neat result which I found quite some time ago on reconstruction of nonnegative sparse signals from a semi-continuous perspective.

1. From discrete sparse reconstruction/compressed sensing to semi-continuous

The basic sparse reconstruction problem asks the following: Say we have a vector ${x\in{\mathbb R}^m}$ which only has ${s<m}$ non-zero entries and a fat matrix ${A\in{\mathbb R}^{n\times m}}$ (i.e. ${n<m}$), and consider that we are given measurements ${b=Ax}$. Of course, the system ${Ax=b}$ is underdetermined. However, we may add a little more prior knowledge on the solution and ask: Is it possible to reconstruct ${x}$ from ${b}$ if we know that the vector ${x}$ is sparse? If yes: How? Under what conditions on ${m}$, ${s}$, ${n}$ and ${A}$?

This question created the expanding universe of compressed sensing recently (and this universe is expanding so fast that for sure there has to be some dark energy in it). As a matter of fact, a powerful method to obtain sparse solutions to underdetermined systems is ${\ell^1}$-minimization a.k.a. Basis Pursuit, on which I blogged recently: Solve

$\displaystyle \min_x \|x\|_1\ \text{s.t.}\ Ax=b$

and the important ingredient here is the ${\ell^1}$-norm of the vector in the objective function.

In this post I'll formulate semi-continuous sparse reconstruction. We move from an ${m}$-vector ${x}$ to a finite signed measure ${\mu}$ on a closed interval (which we assume to be ${I=[-1,1]}$ for simplicity). We may embed the ${m}$-vectors into the space of finite signed measures by choosing ${m}$ points ${t_i}$, ${i=1,\dots, m}$ from the interval ${I}$ and building ${\mu = \sum_{i=1}^m x_i \delta_{t_i}}$ with the point-masses (or Dirac measures) ${\delta_{t_i}}$. To be a bit more precise, we speak about the space ${\mathfrak{M}}$ of Radon measures on ${I}$, which are defined on the Borel ${\sigma}$-algebra of ${I}$ and are finite. Radon measures are not very scary objects and an intuitive way to think of them is to use Riesz representation: Every Radon measure arises as a continuous linear functional on a space of continuous functions, namely the space ${C_0(I)}$ which is the closure of the continuous functions with compact support in ${{]{-1,1}[}}$ with respect to the supremum norm. Hence, Radon measures act on these functions as ${\int_I fd\mu}$. It is also natural to speak of the support ${\text{supp}(\mu)}$ of a Radon measure ${\mu}$, and it holds for any continuous function ${f}$ that

$\displaystyle \int_I f d\mu = \int_{\text{supp}(\mu)}f d\mu.$

An important tool for Radon measures is the Hahn-Jordan decomposition which decomposes ${\mu}$ into a positive part ${\mu^+}$ and a negative part ${\mu^-}$, i.e. ${\mu^+}$ and ${\mu^-}$ are non-negative and ${\mu = \mu^+-\mu^-}$. Finally the (total) variation of a measure, which is

$\displaystyle \|\mu\| = \mu^+(I) + \mu^-(I),$

provides a norm on the space of Radon measures.
Example 1: For the measure ${\mu = \sum_{i=1}^m x_i \delta_{t_i}}$ one readily calculates that

$\displaystyle \mu^+ = \sum_i \max(0,x_i)\delta_{t_i},\quad \mu^- = \sum_i \max(0,-x_i)\delta_{t_i}$

and hence

$\displaystyle \|\mu\| = \sum_i |x_i| = \|x\|_1.$

In this sense, the space of Radon measures provides a generalization of ${\ell^1}$.

We may sample a Radon measure ${\mu}$ with ${n+1}$ linear functionals, and these can be encoded by ${n+1}$ continuous functions ${u_0,\dots,u_n}$ as

$\displaystyle b_k = \int_I u_k d\mu.$

This sampling gives a bounded linear operator ${K:\mathfrak{M}\rightarrow {\mathbb R}^{n+1}}$. The generalization of Basis Pursuit is then given by

$\displaystyle \min_{\mu\in\mathfrak{M}} \|\mu\|\ \text{s.t.}\ K\mu = b.$

This was introduced and called "Support Pursuit" in the preprint Exact Reconstruction using Support Pursuit by Yohann de Castro and Fabrice Gamboa. More on the motivation and the use of Radon measures for sparsity can be found in Inverse problems in spaces of measures by Kristian Bredies and Hanna Pikkarainen.

2. Exact reconstruction of sparse nonnegative Radon measures

Before I talk about the results we may count the degrees of freedom a sparse Radon measure has: If ${\mu = \sum_{i=1}^s x_i \delta_{t_i}}$ for some ${s}$, then ${\mu}$ is determined by the ${s}$ weights ${x_i}$ and the ${s}$ positions ${t_i}$. Hence, we expect that at least ${2s}$ linear measurements should be necessary to reconstruct ${\mu}$. Surprisingly, this is almost enough if we know that the measure is nonnegative! We only need one more measurement, that is ${2s+1}$, and moreover, we can take fairly simple measurements, namely the monomials: ${u_i(t) = t^i}$, ${i=0,\dots,n}$ (with the convention that ${u_0(t)\equiv 1}$). This is shown in the following theorem by de Castro and Gamboa.

Theorem 1: Let ${\mu = \sum_{i=1}^s x_i\delta_{t_i}}$ with ${x_i\geq 0}$, ${n=2s}$ and let ${u_i}$, ${i=0,\dots,n}$ be the monomials as above. Define ${b_i = \int_I u_i(t)d\mu}$. Then ${\mu}$ is the unique solution of the support pursuit problem, that is of

$\displaystyle \min \|\nu\|\ \text{s.t.}\ K\nu = b.\qquad \textup{(SP)}$

Proof: The following polynomial will be of importance: For a constant ${c>0}$ define

$\displaystyle P(t) = 1 - c \prod_{i=1}^s (t-t_i)^2.$

The following properties of ${P}$ will be used:

1. ${P(t_i) = 1}$ for ${i=1,\dots,s}$
2. ${P}$ has degree ${n=2s}$ and hence is a linear combination of the ${u_i}$, ${i=0,\dots,n}$, i.e. ${P = \sum_{k=0}^n a_k u_k}$.
3. For ${c}$ small enough it holds for ${t\neq t_i}$ that ${|P(t)|<1}$.

Now let ${\sigma}$ be a solution of (SP). We have to show that ${\|\mu\|\leq \|\sigma\|}$. Due to property 2 we know that

$\displaystyle \int_I u_k d\sigma = (K\sigma)_k = b_k = \int_I u_k d\mu.$

Due to property 1 and non-negativity of ${\mu}$ we conclude that

$\displaystyle \begin{array}{rcl} \|\mu\| & = & \sum_{i=1}^s x_i = \int_I P d\mu\\ & = & \int_I \sum_{k=0}^n a_k u_k d\mu\\ & = & \sum_{k=0}^n a_k \int_I u_k d\mu\\ & = & \sum_{k=0}^n a_k \int_I u_k d\sigma\\ & = & \int_I P d\sigma. \end{array}$

Moreover, by Lebesgue's decomposition we can decompose ${\sigma}$ with respect to ${\mu}$ such that

$\displaystyle \sigma = \underbrace{\sum_{i=1}^s y_i\delta_{t_i}}_{=\sigma_1} + \sigma_2$

and ${\sigma_2}$ is singular with respect to ${\mu}$.
We get

$\displaystyle \begin{array}{rcl} \int_I P d\sigma = \sum_{i=1}^s y_i + \int P d\sigma_2 \leq \|\sigma_1\| + \|\sigma_2\|=\|\sigma\| \end{array}$

and we conclude that ${\|\sigma\| = \|\mu\|}$ and especially ${\int_I P d\sigma_2 = \|\sigma_2\|}$. This shows that ${\mu}$ is a solution of (SP).

It remains to show uniqueness. We show the following: If there is a ${\nu\in\mathfrak{M}}$ with support in ${I\setminus\{t_1,\dots,t_s\}}$ such that ${\int_I Pd\nu = \|\nu\|}$, then ${\nu=0}$. To see this, we build, for any ${r>0}$, the sets

$\displaystyle \Omega_r = [-1,1]\setminus \bigcup_{i=1}^s ]t_i-r,t_i+r[$

and assume that there exists ${r>0}$ such that ${\|\nu|_{\Omega_r}\|\neq 0}$ (${\nu|_{\Omega_r}}$ denoting the restriction of ${\nu}$ to ${\Omega_r}$). However, it holds by property 3 of ${P}$ that

$\displaystyle \int_{\Omega_r} P d\nu < \|\nu|_{\Omega_r}\|$

and consequently

$\displaystyle \begin{array}{rcl} \|\nu\| &=& \int Pd\nu = \int_{\Omega_r} Pd\nu + \int_{\Omega_r^C} P d\nu\\ &<& \|\nu|_{\Omega_r}\| + \|\nu|_{\Omega_r^C}\| = \|\nu\| \end{array}$

which is a contradiction. Hence, ${\nu|_{\Omega_r}=0}$ for all ${r}$ and this implies ${\nu=0}$.

Since ${\sigma_2}$ has its support in ${I\setminus\{t_1,\dots,t_s\}}$, we conclude that ${\sigma_2=0}$. Hence the support of ${\sigma}$ is contained in ${\{t_1,\dots,t_s\}}$. Since ${K\sigma = b = K\mu}$, we have ${K(\sigma-\mu) = 0}$. This can be written as a Vandermonde system

$\displaystyle \begin{pmatrix} u_0(t_1)& \dots &u_0(t_s)\\ \vdots & & \vdots\\ u_n(t_1)& \dots & u_n(t_s) \end{pmatrix} \begin{pmatrix} y_1 - x_1\\ \vdots\\ y_s - x_s \end{pmatrix} = 0$

which only has the zero solution, giving ${y_i=x_i}$. $\Box$

3. Generalization to other measurements

The measurement by monomials may sound a bit unusual. However, de Castro and Gamboa show more. What really matters here is that the monomials form a so-called Chebyshev system (or Tchebyscheff-system or T-system – by the way, have you ever tried to google for a T-system?). This is explained, for example, in the book "Tchebycheff Systems: With Applications in Analysis and Statistics" by Karlin and Studden. A T-system on ${I}$ is simply a set of ${n+1}$ functions ${\{u_0,\dots, u_n\}}$ such that any linear combination of these functions has at most ${n}$ zeros. These systems are named after Tchebyscheff since they obey many of the helpful properties of the Tchebyscheff polynomials. What is helpful in our context is the following theorem of Krein:

Theorem 2 (Krein): If ${\{u_0,\dots,u_n\}}$ is a T-system for ${I}$, ${k\leq n/2}$ and ${t_1,\dots,t_k}$ are in the interior of ${I}$, then there exists a linear combination ${\sum_{j=0}^n a_j u_j}$ which is non-negative and vanishes exactly at the points ${t_i}$.

Now consider that we replace the monomials in Theorem 1 by a T-system. You recognize that Krein's Theorem allows one to construct a "generalized polynomial" which fulfills the same requirements as the polynomial ${P}$ in the proof of Theorem 1, as soon as the constant function 1 lies in the span of the T-system; and indeed the result of Theorem 1 is also valid in that case.

4.
Exact reconstruction of ${s}$-sparse nonnegative vectors from ${2s+1}$ measurements

From the above one can deduce a reconstruction result for ${s}$-sparse vectors, and I quote Theorem 2.4 from Exact Reconstruction using Support Pursuit:

Theorem 3: Let ${n}$, ${m}$, ${s}$ be integers such that ${s\leq \min(n/2,m)}$ and let ${\{1,u_1,\dots,u_n\}}$ be a complete T-system on ${I}$ (that is, ${\{1,u_1,\dots,u_r\}}$ is a T-system on ${I}$ for all ${r<n}$). Then it holds: For any distinct reals ${t_1,\dots,t_m}$ and ${A}$ defined as

$\displaystyle A=\begin{pmatrix} 1 & \dots & 1\\ u_1(t_1)& \dots &u_1(t_m)\\ \vdots & & \vdots\\ u_n(t_1)& \dots & u_n(t_m) \end{pmatrix}$

Basis Pursuit recovers all nonnegative ${s}$-sparse vectors ${x\in{\mathbb R}^m}$.

5. Concluding remarks

Note that Theorem 3 gives a deterministic construction of a measurement matrix. Also note that nonnegativity is crucial in what we did here. This allowed us (in the monomial case) to work with squares and obtain the polynomial ${P}$ in the proof of Theorem 1 (which is also called a "dual certificate" in this context). This raises the question of how this method can be adapted to all sparse signals. One needs (in the monomial case) a polynomial which is bounded by 1 but matches the signs of the measure on its support. While this can be done (I think) for polynomials, it seems difficult to obtain a generalization of Krein's Theorem to this case…

### 7 Responses to "Semi-continuous sparse reconstruction and compressed sensing"

1. Martin Burger Says: November 3, 2011 at 5:35 pm
The pursuit appears like a fake. Actually every (!) nonnegative measure minimizes the norm subject to appropriate data. The analysis is only taking into account nonnegativity. If you would minimize any other functional subject to nonnegativity, you would get the same results.

2. Dirk Says: November 3, 2011 at 8:04 pm
Probably there is a misunderstanding: Nonnegativity is not a constraint in the minimization. "Support pursuit" minimizes over all signed Radon measures. However, the "exact reconstruction" from 2s+1 measurements only holds for measures which are nonnegative.

1. Martin Burger Says: November 3, 2011 at 8:54 pm
No. If nonnegativity were a constraint it would be completely trivial, since the scalar product with the function 1 is fixed by the data, i.e. the 1-norm for positive measures. However, since the constant function 1 is always part of the system, one trivially has the constant function 1 in the range of the adjoint operator. Thus, the source condition is satisfied for any nonnegative measure, so it is a norm-minimizing solution. It suffices to check that for s-sparse solutions there is no other. The same theory can be built for nonpositive solutions; the main thing is that this kind of system does not promote sign-changes.

1. Dirk Says: November 4, 2011 at 8:27 am
Right, one needs to know that the "strict source condition" (or existence of a dual certificate) implies uniqueness of the solution – and in the case of nonnegativity (or nonpositivity) the construction of the dual certificate is trivial. I would say: Simple proof for a cool fact…

3. November 9, 2011 at 9:28 pm
[...] and from there dives it a bit down the rabbit hole, and, returning to a more applied topic, Regularize gives a nice introduction on reconstructing [...]

4. April 2, 2012 at 3:30 pm
[...] seems to be pretty close in spirit to Exact Reconstruction using Support Pursuit on which I blogged earlier. They model the sparse signal as a Radon measure, especially as a sum of Diracs. However, different [...]

5.
September 20, 2012 at 11:18 am
[...] that something similar to "support-pursuit" does not work here: The minimization problem does not make much sense, since for all [...]
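Theorem 3 above is also easy to test numerically. The following sketch (my own illustration; the grid, sparsity level and solver are arbitrary choices, not from the post) builds the monomial measurement matrix of Theorem 3 and recovers a nonnegative 3-sparse vector by linear programming:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, s = 20, 3
n = 2 * s                                    # 2s + 1 = 7 monomial measurements

t = np.linspace(-1, 1, m)                    # candidate support points in I
A = np.vstack([t**i for i in range(n + 1)])  # rows are u_i(t_j) = t_j^i

x = np.zeros(m)
x[rng.choice(m, size=s, replace=False)] = rng.uniform(1, 2, size=s)
b = A @ x                                    # the 2s+1 measurements

# basis pursuit restricted to nonnegative vectors:
# min 1^T x  subject to  A x = b,  x >= 0
res = linprog(np.ones(m), A_eq=A, b_eq=b, bounds=(0, None), method="highs")
print(np.allclose(res.x, x, atol=1e-6))      # True: exact recovery (up to solver tolerance)
```

Note that, as Martin Burger's comment points out, the first row of A pins down the 1-norm of every nonnegative feasible vector, so the LP really just certifies that the feasible set is the single point x. Monomials are badly conditioned, so for larger n one would switch to a better-conditioned complete T-system such as the Chebyshev polynomials.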
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 162, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9406882524490356, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2010/07/12/the-radon-nikodym-chain-rule/?like=1&source=post_flair&_wpnonce=a082172354
# The Unapologetic Mathematician

## The Radon-Nikodym Chain Rule

Today we take the Radon-Nikodym derivative and prove that it satisfies an analogue of the chain rule. If $\lambda$, $\mu$, and $\nu$ are totally $\sigma$-finite signed measures so that $\nu\ll\mu$ and $\mu\ll\lambda$, then $\lambda$-a.e. we have

$\displaystyle\frac{d\nu}{d\lambda}=\frac{d\nu}{d\mu}\frac{d\mu}{d\lambda}$

By the linearity we showed last time, if this holds for the upper and lower variations of $\nu$ then it holds for $\nu$ itself, and so we may assume that $\nu$ is also a measure. We can further simplify by using Hahn decompositions with respect to both $\lambda$ and $\mu$, passing to subspaces on which each of our signed measures has a constant sign. We will from here on assume that $\lambda$ and $\mu$ are (positive) measures, and the case where one (or the other, or both) has a constant negative sign has a similar proof.

Let's also simplify things by writing

$\displaystyle\begin{aligned}f&=\frac{d\nu}{d\mu}\\g&=\frac{d\mu}{d\lambda}\end{aligned}$

Since $\mu$ and $\nu$ are both non-negative there is also no loss of generality in assuming that $f$ and $g$ are everywhere non-negative. So, let $\{f_n\}$ be an increasing sequence of non-negative simple functions converging pointwise to $f$. Then monotone convergence tells us that

$\displaystyle\begin{aligned}\lim\limits_{n\to\infty}\int\limits_Ef_n\,d\mu&=\int\limits_Ef\,d\mu\\\lim\limits_{n\to\infty}\int\limits_Ef_ng\,d\lambda&=\int\limits_Efg\,d\lambda\end{aligned}$

for every measurable $E$. For every measurable set $F$ we find that

$\displaystyle\int\limits_E\chi_F\,d\mu=\mu(E\cap F)=\int\limits_{E\cap F}\,d\mu=\int\limits_{E\cap F}g\,d\lambda=\int\limits_E\chi_Fg\,d\lambda$

and so for all the simple $f_n$ we conclude that

$\displaystyle\int\limits_Ef_n\,d\mu=\int\limits_Ef_ng\,d\lambda$

Passing to the limit, we find that

$\displaystyle\nu(E)=\int\limits_E\,d\nu=\int\limits_Ef\,d\mu=\int\limits_Efg\,d\lambda$

and so the product $fg$ serves as the Radon-Nikodym derivative of $\nu$ in terms of $\lambda$, and it's uniquely defined $\lambda$-almost everywhere.

Posted by John Armstrong | Analysis, Measure Theory

## 8 Comments »

1. [...] Corollaries of the Chain Rule Today we'll look at a couple corollaries of the Radon-Nikodym chain rule. [...] Pingback by | July 13, 2010 | Reply

2. Could you quickly clarify why there is no loss of generality in assuming that f and g are everywhere non-negative? Sorry for the frequent recent posts, and thank you for the great resource too! Comment by Bobby Brown | March 30, 2011 | Reply

3. Basically, if they're not, you can always decompose into positive and negative parts, use the result there, and put everything back together. Comment by | March 30, 2011 | Reply

4. Could anyone tell me where exactly we use the increasing sequences, and why? I don't really see it. Comment by Wiebs91 | November 16, 2012 | Reply

5. You mean where we use the fact that the sequence $\{f_n\}$ is increasing? That's a requirement of the monotone convergence theorem. Comment by | November 16, 2012 | Reply

6. I know that theorem. I just wanted to know why we need it here, where it is used in the last part $\nu(E)=\dots$. I'm sorry, I'm really not used to this notation at all and don't get the point of it by now. Comment by Wiebs91 | November 16, 2012 | Reply

7.
Okay, well the point is that any measurable function $f$ can be approximated as the limit of an increasing sequence of simple functions $f_n$, and the simple functions are basically constants times characteristic functions. So what we do is prove our result for the characteristic functions $\chi_F$. Then the fact that everything in sight is linear means that it holds for simple functions. And finally we can pass to the limit (using monotone convergence) and get the result for all measurable functions. Comment by | November 16, 2012 | Reply

8. Ah, now I see it. Thanks for your time and help. Comment by Wiebs91 | November 16, 2012 | Reply
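For intuition, here is a toy numerical sketch (my own addition, not from the post) on a five-point space, where measures are weight vectors and Radon-Nikodym derivatives reduce to pointwise ratios; the chain rule becomes literal multiplication of densities:

```python
import numpy as np

# discrete measures on {0,...,4}; all lambda-weights are positive here
lam = np.array([1.0, 2.0, 0.5, 1.5, 1.0])   # reference measure lambda
g   = np.array([0.3, 1.0, 2.0, 0.0, 1.2])   # density d(mu)/d(lambda)
f   = np.array([2.0, 0.5, 1.0, 7.0, 0.1])   # density d(nu)/d(mu)

mu = g * lam          # mu << lambda
nu = f * mu           # nu << mu, hence nu << lambda

# chain rule: d(nu)/d(lambda) = f * g, lambda-a.e.
print(np.allclose(nu, (f * g) * lam))       # True
```

Note that at the point where g vanishes, mu assigns zero mass, so the value of f there is irrelevant, which is exactly the "$\lambda$-almost everywhere" caveat in the theorem.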
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 37, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9320136904716492, "perplexity_flag": "head"}
http://mathhelpforum.com/pre-calculus/203283-few-exponential-problems.html
# Thread:

1. ## A Few exponential problems.

Just need to get these questions out of the way; help is greatly appreciated.

First question: The population of insects has been increasing exponentially by 15% per year. How long will it take for the population to double?

*I take it I will need to create a formula something like y = (1.15)^x, but how would I know what to set my y to, as I am not given any initial population?

Second question: A 50 mg sample of cobalt-60 decays to 40 mg after 1.6 minutes. How long will it take the sample to decay to 5% of its initial amount?

Thanks for taking time to look at my questions.

2. ## Re: A Few exponential problems.

Originally Posted by waleedrabbani
"The population of insects has been increasing exponentially by 15% per year. How long will it take for the population to double?"

Let P = initial population ...

2P = P(1.15)^t

"A 50 mg sample of cobalt-60 decays to 40 mg after 1.6 minutes. How long will it take the sample to decay to 5% of its initial amount?"

40 = 50 e^{1.6k}

Solve for the decay constant, k, then sub in for k in the equation below ... solve for t:

2.5 = 50 e^{kt} ...

3. ## Re: A Few exponential problems.

Did you mean to put e in as a variable, or does it have the numerical value 2.7... (I'm unsure of the digits)?

4. ## Re: A Few exponential problems.

Radioactive substances decay at a continuous exponential rate. You should already know that the equation for such decay (or growth) is $y = y_0 e^{kt}$ where $e \approx 2.718...$ ... use the "e" key on your calculator.

5. ## Re: A Few exponential problems.

No, e is not a variable; it is the base of the natural logarithms, about 2.7182.... You can do the second problem without using "e" (because all exponentials are equivalent). If a "50 mg sample of cobalt 60 decays to 40 mg after 1.6 minutes", then whatever the initial amount is gets multiplied by $\frac{40}{50}= \frac{4}{5}$ every 1.6 minutes. In t minutes, there are $\frac{t}{1.6}$ intervals of 1.6 minutes, so after t minutes the original amount will have been multiplied by $\frac{4}{5}$ a total of $\frac{t}{1.6}$ times: $\left(\frac{4}{5}\right)^\frac{t}{1.6}$. Set that equal to .05 and solve for t. Of course, you will need to use a logarithm to do that: $\log\left(\left(\frac{4}{5}\right)^\frac{t}{1.6}\right)= \frac{t}{1.6}\log\left(\frac{4}{5}\right)$.

"All exponentials are equivalent" because $e^x$ and $\ln(x)$ are inverse functions: $e^{\ln(x)}= x$. In particular, $\left(\frac{4}{5}\right)^{t/1.6}=e^{\ln\left(\left(\frac{4}{5}\right)^{t/1.6}\right)}= e^{(t/1.6)\ln(4/5)}$.
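For a quick numerical check (my own addition, not part of the thread), both answers can be computed directly:

```python
import math

# Q1: doubling time for 15% annual growth, from 2P = P * 1.15**t
t_double = math.log(2) / math.log(1.15)        # about 4.96 years

# Q2: decay constant from 40 = 50 * exp(1.6 * k)
k = math.log(40 / 50) / 1.6                    # about -0.1395 per minute
# time until 5% remains, from 2.5 = 50 * exp(k * t)
t_decay = math.log(0.05) / k                   # about 21.5 minutes

# the base-4/5 form in post 5 gives the same number
t_alt = 1.6 * math.log(0.05) / math.log(4 / 5)
print(round(t_double, 2), round(t_decay, 2), round(t_alt, 2))
```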
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9001258015632629, "perplexity_flag": "middle"}
http://nrich.maths.org/6629&part=
# Pumping the Power

##### Stage: 5 Challenge Level:

An electric power source produces an AC voltage $v = V_0\sin(\omega t)$, where $V_0$ is the peak voltage. A filament light bulb of resistance R is placed across the source.

Can you find a formula for the power (which changes over time) produced in the bulb?

What is the average power produced, in terms of $V_0$ and R?

If you would like to keep using the power formulae you learned for DC electricity, what would be a good representative voltage for this AC source, in terms of $V_0$?

Can you draw a graph of the light produced by the bulb over time, if the bulb has been on for a long time?
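As a hint at the first two parts (an added sketch, not part of the NRICH page), SymPy can average the instantaneous power $p(t)=v^2/R$ over one period:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
V0, R, w = sp.symbols('V_0 R omega', positive=True)

v = V0 * sp.sin(w * t)
p = v**2 / R                          # instantaneous power dissipated in R

T = 2 * sp.pi / w                     # one period of the source
p_avg = sp.integrate(p, (t, 0, T)) / T
print(sp.simplify(p_avg))             # V_0**2/(2*R)

# the DC ("representative") voltage giving the same average power
V = sp.symbols('V', positive=True)
print(sp.solve(sp.Eq(V**2 / R, V0**2 / (2 * R)), V))   # [sqrt(2)*V_0/2], i.e. V_0/sqrt(2)
```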
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9237210154533386, "perplexity_flag": "middle"}
http://quant.stackexchange.com/questions/4924/early-execise-of-american-call-on-non-dividend-paying-stock
# Early exercise of an American call on a non-dividend-paying stock

Let us consider an American call option with strike price K, and let the time to maturity be T. Assume that the underlying stock does not pay any dividend. Let the price of this call option be C$^a$ today (t = 0).

Now, suppose that at some intermediate time t ($<$T), I decide to exercise my call option. Hence the profit is:

P1 = S(t) - K - C$^a$

I could then earn interest on this profit and hence at maturity I will have:

P2 = P1*e$^{r(T-t)}$ = (S(t) - K - C$^a$)e$^{r(T-t)}$

Instead, I could have waited and exercised it at maturity. My profit would then be:

P3 = S(T) - K + Ke$^{rT}$ - C$^a$

I write this because I could have kept $K in the bank at t = 0 and earned risk-free interest on it until the maturity time T.

So here is my question: Merton (in 1973) said that an American call on a non-dividend-paying stock should not be exercised before expiration. I am just trying to figure out why that is true, because there might be a possibility that P2 > P3.

P.S.: I am not contesting what Merton said. I totally respect him and am sure what he is saying is correct. But I am not able to see it mathematically. Any help will be appreciated! Thank you.

-

## 1 Answer

You compare apples and oranges here. You can't possibly compare profits involving S(t) on one side and S(T) on the other side: at time t you do not know what the stock will be worth at time T.

Merton made the statement in the context of deciding whether

• to exercise the call option at any time before expiration, OR
• to simply sell the call option in the market,

and came to the conclusion that it is sub-optimal to exercise the option before expiration. He meant a comparison between exercising and selling the option, not between exercising and waiting until expiration.

-

Ah! Thanks a lot! Got it now. – Prakhar Mehrotra Jan 6 at 5:17
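A small numeric sketch of why selling dominates exercising (my own illustration with made-up numbers, not from the thread): for a call on a non-dividend stock, no-arbitrage gives the model-free bound $C(t) \geq S(t) - Ke^{-r(T-t)}$, which strictly exceeds the exercise payoff $S(t) - K$ whenever $r > 0$.

```python
import math

# hypothetical numbers: spot, strike, rate, maturity, current time
S, K, r, T, t = 105.0, 100.0, 0.05, 1.0, 0.25

exercise_value = S - K                              # what exercising pays now: 5.00
price_lower_bound = S - K * math.exp(-r * (T - t))  # selling earns at least this: ~8.68

print(exercise_value < price_lower_bound)           # True: selling beats exercising
```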
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9425601959228516, "perplexity_flag": "middle"}
http://ldtopology.wordpress.com/category/4-manifolds/
# Low Dimensional Topology

## May 16, 2013

### Organizing knot concordance

Filed under: 3-manifolds,4-manifolds,knot concordance — Ryan Budney @ 10:10 am
Tags: 3-manifolds, 4-manifolds, knot concordance

I have a rather naive question for the participants here. I'm at the Max Planck 4-manifolds semester, currently sitting through many talks about knot concordance and various filtrations of the knot concordance group.

Do any of you have a feeling for how knot concordance should be organized, say if one were looking for some global structure? In the purely 3-dimensional world there are many very "tidy" ways to organize knots and links. There's the associated 3-manifold, geometrization. There's double branched covers and equivariant geometrization, arborescent knots and tangle decompositions. I find these perspectives to be rather rich in insights, and frequently they're computable for reasonable-sized objects.

But knot concordance as a field feels much more like the Vassiliev invariant perspective on knots: graded vector spaces of invariants. Typically these vector spaces are very large and it's difficult to compute anything beyond the simplest objects.

My initial inclination is that if one is looking for elegant structure in knot concordance, perhaps it would be at the level of concordance categories. But what kind of structure would you be looking for on these objects? I don't think I've seen much in the way of natural operations on slice discs or concordances in general, beyond Morse-theoretic cutting and pasting. Have you?

## August 15, 2012

### Generalizations of open books

Filed under: 3-manifolds,4-manifolds,contact structures — Jesse Johnson @ 11:37 am

I'm going to take a break from data topology for this post and write about an interesting construction that I heard Jeremy Van Horn-Morris talk about at the Georgia Topology conference at the beginning of the summer. I should admit that it took me a while to appreciate this definition of a generalized open book decomposition, because these decompositions only occur in toroidal 3-manifolds with very specific JSJ decompositions. However, they come out of a very natural generalization of 4-dimensional Lefschetz fibrations in which the 3-manifold arises as the boundary of the 4-manifold. These were first developed by Jeremy, Sam Lisi, and Chris Wendl in a preprint that is still being written. Jeremy and Inanc Baykur [1] also use this construction to produce contact structures that disprove a number of former conjectures, so even though these 3-manifolds are not hyperbolic, they are interesting from the perspective of contact topology. (more…)

## July 12, 2012

### Symmetric decompositions of the 4-sphere

Filed under: 3-manifolds,4-manifolds,Knot theory — Ryan Budney @ 4:43 pm

Rob Kusner recently pointed out to me that the 4-sphere has a very natural differential-geometric decomposition as a double mapping cylinder S^3/Q_8 -> RP^2. Here Q_8 is the group {\pm 1, \pm i, \pm j, \pm k} in the unit quaternions and RP^2 is the real projective plane. Another way to say this: take the Veronese projective plane in S^4; a regular neighbourhood of it is a mapping cylinder S^3/Q_8 -> RP^2. Moreover, the *complement* of that regular neighbourhood is another such mapping cylinder.
(more…)

## January 15, 2012

### Beyond the trivial connection

Filed under: 3-manifolds,4-manifolds,Quantum topology — dmoskovich @ 10:21 pm

One of the foundational papers in Quantum Topology, and one of the main reasons that the subject is called Quantum Topology, is Edward Witten's landmark paper Quantum field theory and the Jones polynomial. One of the things Witten did in that paper was to define a $3$–manifold invariant as a partition function with action functional proportional to the Chern-Simons $3$–form. A partition function is a path integral, so Witten's invariant is a physical construction rather than a mathematical one. Quantum topology of $3$–manifolds is, to a large extent, the field whose goal is to mathematically reconstruct, and to understand, Witten's invariant. Meanwhile, for $4$–manifolds with a metric, Witten defined a $4$–manifold invariant as a partition function in another landmark paper Topological quantum field theory. I should warn you that I don't know any physics so some (all?) of what I say below might be rubbish. Still, pressing boldly ahead…

Up until recently, mathematicians only understood tiny corners of Witten's invariants, or, more broadly, of invariants (topological or otherwise) of manifolds (with or without extra structure) which come from quantum field theory partition functions. But I've recently glanced through two papers which seem to finally be going further, seeing more. The tiny corners we have seen already give rise to mathematical invariants of preternatural power (surely that's the best word to describe it!), such as Ohtsuki series of rational homology $3$–spheres ($\mathbb{Q}HS$), Donaldson invariants, and Seiberg–Witten invariants. (more…)

## May 11, 2011

### MO-problems: codimension zero embeddings

Filed under: 4-manifolds,Algebraic topology — Ryan Budney @ 7:38 pm
Tags: mathoverflow, PlanetMO

Greetings, Jesse recently recruited me as a special correspondent for the goings-on at Math Overflow. Perhaps he'll eventually let me blog about other things! To begin I'd like to point out a lovely and easy-to-state but not-so-little problem that appeared on MO.

Is the universal covering of an open subset of Euclidean space diffeomorphic to an open subset of the same Euclidean space?

The above problem is perhaps a representative problem in a family of problems that have received little attention by the geometric topology community, which is the issue of low co-dimension embeddings. They are not well understood. This is because these can be rather difficult problems. More than that, there isn't an edifice — there's no standard machine to play with. (more…)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 7, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9209410548210144, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/173093/homeomorphisms-of-x-form-a-topological-group
# Homeomorphisms of X form a topological group

So I'm just learning about the compact-open topology and am trying to show that for a compact Hausdorff space $X$, the group of homeomorphisms of $X$, $H(X)$, is a topological group with the compact-open topology. This topology has a subbasis of sets $\{f\in H(X):f(C)\subseteq V\}=S(C,V)$ for compact $C$, open $V$. This is my first attempt at getting to know this topology, so I'd appreciate some help with the part of a proof I have so far, possibly a better way to go about proving this, and any other help understanding this topology.

First, let $c:H(X)\times H(X)\rightarrow H(X)$ be composition. For $f,g\in H(X)$, let $S(C,V)$ be a subbasis set with $f\circ g\in S(C,V)$. So $f(g(C))\subseteq V$, which means $f\in S(g(C),V)$ and $g\in S(C,f^{-1}(V))$. The product of these open sets works since if $h_1(C)\subseteq f^{-1}(V)$ and $h_2(g(C))\subseteq V$, then $h_2\circ h_1(C)\subseteq V$.

Then let $i:H(X)\rightarrow H(X)$ be inversion. Take a subbasis set $O=\{g:g(C)\subseteq U\}$ and consider $i^{-1}(O)=\{g^{-1}:g(C)\subseteq U\}$. If $h^{-1}\in i^{-1}(O)$, then one thing we have is that $h(C)\subset U$, but I'm not totally sure what to do with this. This part seems like it should be easier, but I am just not seeing it. Thanks.

-

## 1 Answer

Here you don't need much topology; it boils down to pure manipulation of sets and bijective functions, and the two following facts: $(*)$ closed subsets of compact spaces are compact, and $(**)$ compact subsets of Hausdorff spaces are closed. Just take complements and use the fact that $h$ is by definition a bijection, so

$$h\in S(C,U)\Leftrightarrow h(C)\subset U\Leftrightarrow X\setminus U\subset X\setminus h(C)=h(X\setminus C)\Leftrightarrow h^{-1}(X\setminus U)\subset X\setminus C \\ \Leftrightarrow h^{-1}\in S(X\setminus U,X\setminus C)$$

(which is a subbasic open neighborhood, since $X\setminus U$ is compact by $(*)$ and $X\setminus C$ is open), i.e.

$$\mathrm{inv}^{-1}( S(C,U))=\mathrm{inv}(S(C,U))=S(X\setminus U,X\setminus C)$$

which proves the inversion map $\mathrm{inv}$ to be continuous.

- Thanks! I suspected it wouldn't be so bad, but I just couldn't see where to go. I figured those would be the facts about $X$ being compact Hausdorff we would have to use. – Francis Adams Jul 20 '12 at 12:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 26, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9627397060394287, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/140871/strongly-exposed-points-exposed-points
# Strongly exposed points/Exposed points

I was studying and I ran into the following doubt: Suppose that $(X,\|\cdot\|)$ is a Banach space and $C$ is a convex closed subset of X.

We say that $x\in C$ is an exposed point of $C$ if $\exists x^*\in X^*$ such that $x$ is the only point of $C$ with $x^*(x)=\displaystyle\sup_C x^*$.

We say that $x\in C$ is a strongly exposed point of $C$ if $\exists x^*\in X^*$ such that for every sequence $(x_n)\subset C$ that verifies $x^*(x_n)\to \displaystyle\sup_C x^*$ we have $\|x_n-x\|\to 0$.

It is true that every strongly exposed point is an exposed point. So I tried to check whether every exposed point is a strongly exposed point (which should be false!).

Suppose that $x\in C$ is an exposed point; then there is $x^*\in X^*$ such that $x^*(x)=\displaystyle\sup_C x^*$. Let $(x_n)\subset C$ be such that $x^*(x_n)\to \displaystyle\sup_C x^*$. $x_n$ must converge ($x^*$ is continuous), so we suppose that $x_n\to y\in C$ ($C$ is closed). Then, by continuity, $x^*(x_n)\to x^*(y)=\displaystyle\sup_C x^*$. So by uniqueness $x=y$, and then $\|x_n-x\|=\|x_n-y\|\to 0$. Then $x$ is a strongly exposed point.

The question is: what is wrong in my reasoning? There is a counterexample of Lindenstrauss, for example (7.73 in the book by Fabian: Functional Analysis and Infinite-Dimensional Geometry).

Many thanks in advance.

-

– Martin Sleziak May 4 '12 at 12:57

Ah, I refer to 7.73 of another book by Fabian: Banach Space (the basis for...). It's similar. There are counterexamples, so my reasoning is wrong, but I don't know where, and it has me baffled. – Shrek May 4 '12 at 13:44

Your problem is with the line "$x_n$ must converge". You have that $x^*(x_n) \to x^*(x)$ for one fixed $x^*$, not for all $x^*$. Even if you had $x^*(x_n) \to x^*(x)$ for all $x^* \in X^*$ (which you do not have), this would be weak convergence and not convergence in the norm topology. – Tom Cooney May 4 '12 at 15:02

Ah, thanks. That was the flaw. – Shrek May 4 '12 at 22:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 33, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.964228630065918, "perplexity_flag": "head"}
http://cms.math.ca/10.4153/CJM-2010-046-0
Reducibility of the Principal Series for $\widetilde{\mathrm{Sp}}_2(F)$ over a p-adic Field

Read article [PDF: 476KB] http://dx.doi.org/10.4153/CJM-2010-046-0

Canad. J. Math. 62 (2010), 914–960. Published: 2010-05-20. Printed: Aug 2010.

• Christian Zorn, Mathematics Department, The Ohio State University, Columbus, OH, USA

Abstract

Let $G_n=\mathrm{Sp}_n(F)$ be the rank $n$ symplectic group with entries in a nondyadic $p$-adic field $F$. We further let $\widetilde{G}_n$ be the metaplectic extension of $G_n$ by $\mathbb{C}^{1}=\{z\in\mathbb{C}^{\times} \mid |z|=1\}$ defined using the Leray cocycle. In this paper, we aim to demonstrate the complete list of reducibility points of the genuine principal series of ${\widetilde{G}_2}$. In most cases, we will use some techniques developed by Tadić that analyze the Jacquet modules with respect to all of the parabolics containing a fixed Borel. The exceptional cases involve representations induced from unitary characters $\chi$ with $\chi^2=1$. Because such representations $\pi$ are unitary, to show the irreducibility of $\pi$, it suffices to show that $\dim_{\mathbb{C}}\mathrm{Hom}_{{\widetilde{G}}}(\pi,\pi)=1$. We will accomplish this by examining the poles of certain intertwining operators associated to simple roots. Then some results of Shahidi and Ban decompose arbitrary intertwining operators into a composition of operators corresponding to the simple roots of ${\widetilde{G}_2}$. We will then be able to show that all such operators have poles at principal series representations induced from quadratic characters and therefore such operators do not extend to operators in $\mathrm{Hom}_{{\widetilde{G}_2}}(\pi,\pi)$ for the $\pi$ in question.

MSC Classifications:
22E50 - Representations of Lie and linear algebraic groups over local fields [See also 20G05]
11F70 - Representation-theoretic methods; automorphic representations over local and global fields

© Canadian Mathematical Society, 2013
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8309991359710693, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/22215/what-physical-processes-may-underly-the-collisional-term-in-the-boltzmann-equati?answertab=votes
# What physical processes may underlie the collisional term in the Boltzmann equation, and how do they increase entropy?

Consider particles interacting only by long-range (inverse square law) forces, either attractive or repulsive. I am comfortable with the idea that their behavior may be described by the collisionless Boltzmann equation, and that in that case the entropy, defined by the phase space integral $-\int f \log f \, d^3x \, d^3v$, will not increase with time. All the information about the initial configuration of the particles is retained as the system evolves with time, even though it becomes increasingly harder for an observer to make measurements to probe that information (Landau damping).

But after a long enough time most physical systems relax to a Maxwellian velocity distribution. The entropy of the system must increase for this relaxation to occur. Textbooks tend to explain this relaxation through a collisional term in the Boltzmann equation ('collisions increase the entropy'). A comment is made in passing that an assumption of 'molecular chaos' is being made, or sometimes 'one-sided molecular chaos.'

My question is: how do the collisions that underlie the added term in the Boltzmann equation differ from any collision under an inverse square law, and why do these collisions increase entropy when it is clear that interactions with an inverse square law force do not generally increase entropy (at least on the time scale of Landau damping)? And finally, how valid is the so-called molecular chaos assumption?

EDIT: I should clarify that, if entropy is to increase, then it is probably necessary to invoke additional short-range forces in addition to the long-range inverse square law forces. I suppose I could rephrase my question as 'what sort of short-range forces are necessary to explain the collisional term in the Boltzmann equation, and how do they increase entropy when inverse-square law collisions do not?' If the question is too abstract as written, then feel free to pick a concrete physical system such as a plasma or a galaxy and answer the question in terms of what happens there.

-

Hmm... this question is interesting, but it seems a bit too general. Possibly if you could define your system more precisely we could get a better understanding. I don't think the inverse square force law comes into play at all in the Maxwell-Boltzmann distributions; they arise purely from statistical considerations and momentum conservation. – Timtam Mar 11 '12 at 9:26

@Timtam I do want to keep the discussion rather general, in case the same answer can apply to different systems such as plasmas and galaxies. For the sake of concreteness, I suppose one could focus on either one of those two systems, and make as many additional assumptions about them as necessary to answer the question. Also, I've made an edit so as to allow the explanation to rely on other (short range) particle interactions, in case that's necessary to answer the question. – kleingordon Mar 11 '12 at 9:31

## 2 Answers

The statement that the entropy increases because of collisions is incorrect. The conservation of phase space volume is a theorem of Hamiltonian mechanics, and therefore applies to all known physical systems, regardless of whether they contain nonlinear forces, collisions or anything else.
What actually happens is that although the phase space volume doesn't change as you integrate the trajectories forward, it does get distorted and squished and folded in on itself until the system becomes experimentally indistinguishable from one with a bigger phase space volume. The information that was originally in the particles' velocity distribution ends up in subtle correlations between the particles' motions, and if you ignore those correlations, that's when you get the Maxwell distribution. The increase in entropy is not something that happens on the level of the system's microscopic dynamics; instead it occurs because some of the information we have about the system's initial conditions becomes irrelevant for making future predictions, so we choose to ignore it.

There is an excellent passage about this (in a slightly different context) in this paper by Edwin Jaynes, which gives a thorough criticism of the kind of textbook explanation that you mention. (See sections 4, 5 and 6.) It explains the issues involved in this much more eloquently than I can, so I highly recommend you give it a look.

-

Thanks, that helps. I'm still confused as to how the definition of entropy can exist without reference to a physical scale on which correlations are ignored. – kleingordon Mar 11 '12 at 11:31

– Nathaniel Mar 11 '12 at 13:49

But we have the unambiguous definition of entropy as $-\int f \, \log f \, d^3x \, d^3p$. In the context of classical physics, shouldn't this have a unique value regardless of who makes the measurement? Or are you saying that there is ambiguity in the way the integral is computed depending on how one handles the implicit limit that is used to define the integral? – kleingordon Mar 12 '12 at 5:50

No, I'm saying that there's ambiguity in how $f$ is defined in that equation. Traditionally, $f$ was defined as the fraction of the time that the system spends in a given state, in the limit of infinite time. But this only makes sense if you assume the system is already in equilibrium, because how can something that's defined in terms of an infinite time period change over time? In the more modern interpretation, $f$ represents an experimenter's knowledge of the system's microstate - it's a probability distribution because that knowledge is incomplete. Thus it depends on what you can measure. – Nathaniel Mar 12 '12 at 9:30

Hmm, okay, I'm starting to get it. At some point in the not-too-distant future I might want to chat to work out some of the remaining things that are niggling me, if you're willing. Thanks for the help. – kleingordon Mar 12 '12 at 9:39

The entropy increase comes from the assumption that you can close the system on the kinetic level, thereby (i) making the dynamics tractable and getting a transport equation, and (ii) disregarding extremely high frequency contributions and paying for this with an entropy increase. Any interaction leads to collision terms; the details only matter for the particular form of the collision integral, but not for its existence. There are different ways to obtain the Boltzmann equation, but all share the above features. The molecular chaos assumption works only for classical ideal gases.

For a modern derivation of kinetic equations and in particular the Boltzmann equation from fundamental principles (i.e., quantum field theory), see Yu. B. Ivanov, J. Knoll, and D. N. Voskresensky, Self-Consistent Approximations to Non-Equilibrium Many-Body Theory, Nucl. Phys. A 657 (1999), 413--445. hep-ph/9807351 and related papers.
See also Good reading on the Keldysh formalism

Edit: In an operator-based formalism, the kinetic approximation forces the density matrix to take the form $e^{-S/k_B}$, where $S$ is a 1-particle operator. This eliminates lots of (not all) high frequency contributions, as the exact dynamics destroys this form, so the approximation must project it back to this form instantaneously. For understanding how the projection works, see the book by Grabert on projection operator techniques.

Calzetta did some work on kinetic theory in curved spaces (search the arXiv: http://lanl.arxiv.org); maybe this is more directly related to your question.

-

Thanks for the response. I haven't yet consulted the references but I do have follow-up questions based on what you've posted. If the entropy increase comes from disregarding high frequency contributions to the distribution function, it would seem that some sort of cut-off scale would be required. But such a scale does not appear in the definition of the entropy. How can this be? Also, when you say that any interaction leads to collision terms, how does this work in a specific case like gravity? That is, how do some gravitational interactions increase entropy but others don't? – kleingordon Mar 11 '12 at 11:26

Any interaction produces collision terms. You need to work out the corresponding microscopic expression for the collision integral. The details are always messy, so I won't try to give a sample calculation. Look at work by Calzetta (scholar.google.com: author:Calzetta kinetic) for work in this direction. – Arnold Neumaier Mar 11 '12 at 11:38
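The first answer's picture (fine-grained entropy conserved, coarse-grained entropy growing as the distribution filaments) can be illustrated with a toy model. The sketch below is my own addition, not from either answer: free rotors on a torus undergo collisionless, area-preserving phase mixing, yet the entropy of a binned (coarse-grained) distribution grows.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000
theta = rng.uniform(0.0, 0.5, N)   # initial patch, small in angle...
p = rng.uniform(0.0, 0.5, N)       # ...and in momentum

def coarse_entropy(theta, p, bins=30):
    """Entropy of the distribution after binning phase space."""
    H, _, _ = np.histogram2d(theta % (2 * np.pi), p, bins=bins,
                             range=[[0, 2 * np.pi], [0, 2 * np.pi]])
    f = H[H > 0] / H.sum()
    return -np.sum(f * np.log(f))

# the free-rotor flow theta -> theta + p*t is area-preserving (Liouville),
# so the fine-grained entropy is exactly conserved; the binned entropy
# still grows as the patch shears into thin filaments
for time in (0.0, 5.0, 50.0):
    print(time, coarse_entropy(theta + p * time, p))
```

Nothing here involves collisions; the growth comes entirely from discarding sub-bin correlations, which is exactly the answer's point.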
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9299591183662415, "perplexity_flag": "head"}
http://mathhelpforum.com/math-topics/180125-another-pulley-problem-resolving.html
# Thread:

1. ## Another pulley problem/resolving

Hey, another question whereby I don't get how my method gets the wrong answer.... The solution is here: http://clip2net.com/s/VKcK, the question is here: part 1. The question I need help with: the tension in the string is 6g, where g = 9.8, and sin theta is 3/5.

I resolve the tension from the left so that I get R = T(0.6) + T, because one tension is sloped (and therefore needs to be resolved vertically) and one is already completely downward, right? I don't understand why the mark scheme suggests that the vertically resolved downward component, times 2, is the force acting down. Can someone please explain this to me?

2. Where did you get 0.6T? If you want to resolve the tension on the left, you have a vertical component of $T\cos\alpha$ and a horizontal component of $T\sin\alpha$.

Then, the net force downwards is $T\cos\alpha + T$ while the net left force is $T\sin\alpha$. The net force on the pulley is then the vector sum of those two forces.
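To make the reply's vector sum concrete, here is a small SymPy sketch (my own addition; the angle $\alpha$ is the one in the reply, not a value taken from the question) showing that the magnitude of the net force on the pulley simplifies to $T\sqrt{2+2\cos\alpha}$:

```python
import sympy as sp

T, a = sp.symbols('T alpha', positive=True)

down = T * sp.cos(a) + T      # vertical components of the two tensions
left = T * sp.sin(a)          # horizontal component of the sloped tension

mag_sq = sp.simplify(down**2 + left**2)
print(mag_sq)                 # equals T**2*(2 + 2*cos(alpha))
# so |net force| = T*sqrt(2 + 2*cos(alpha))
```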
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9432852864265442, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-algebra/159534-diagonalized-matrices.html
# Thread:

1. ## diagonalized matrices

Hi guys, I'm hoping you can help me. I'm sure I've done something wrong, though I can't figure out what. The question I have to answer is this:

Find invertible U and diagonal D such that $A = UDU^{-1}$. A is the matrix

$A = \begin{bmatrix} 2 & -4 & 2 \\ -4 & 2 & -2 \\ 2 & -2 & -1 \end{bmatrix}$

I have found the three eigenvalues -2, -2, 7. I understand that these values, along the diagonal of a matrix otherwise filled with zeros, make up the D matrix. I have also found the eigenvectors that go with the eigenvalues. Respectively they are [0,1,2], [1,0,-2], [1,-1,1/2]. I understand that these three vectors together make up U.

My problem is how to fit it all together. I don't understand what order I have to put the vectors in within U, or the values in within D, to prove that $A = UDU^{-1}$. I've tried to brute force it but I'm not getting anywhere and it's making me think I've done everything wrong. Can anyone explain my mistake, please?

2. ## Re: diagonalized matrices

Define

$D = \begin{bmatrix} \lambda_1 & 0 & 0 & \cdots & 0 \\ 0 & \lambda_2 & 0 & \cdots & \vdots \\ 0 & 0 & \lambda_3 & \ddots & \vdots \\ \vdots & \cdots & \ddots & \ddots & 0 \\ 0 & \cdots &\cdots &0& \lambda_n\end{bmatrix}$

and then $U = \begin{bmatrix} v_1 & v_2 & v_3 & \cdots & v_n\end{bmatrix}$ where $\lambda_i$ is the ith eigenvalue and $v_i$ is the corresponding eigenvector. This is the eigendecomposition of A (for a diagonalizable matrix, the Jordan decomposition is exactly this diagonal form).

3. ## Re: diagonalized matrices

Yes, but I don't understand which eigenvalue is lambda1, which is lambda2, etc. I'm sure I don't just pick randomly?

4. ## Re: diagonalized matrices

Well, as long as you are consistent it doesn't matter (the numbering of the eigenvalues is meaningless anyway). You just have to make sure that column 1 of matrix U corresponds to the eigenvector for the first eigenvalue in D, etc.

5. ## Re: diagonalized matrices

So look at it this way:

$AU = UD$

$A \begin{bmatrix}v_1 & v_2 & v_3 & \cdots & v_n\end{bmatrix} = \begin{bmatrix}v_1 & v_2 & v_3 & \cdots & v_n\end{bmatrix} \begin{bmatrix} \lambda_1 & 0 & 0 & \cdots & 0 \\ 0 & \lambda_2 & 0 & \cdots & \vdots \\ 0 & 0 & \lambda_3 & \ddots & \vdots \\ \vdots & \cdots & \ddots & \ddots & 0 \\ 0 & \cdots &\cdots &0& \lambda_n\end{bmatrix}$

Which leads to

$\begin{bmatrix}Av_1 & Av_2 & Av_3 & \cdots & Av_n\end{bmatrix} = \begin{bmatrix}\lambda_1 v_1 & \lambda_2 v_2 & \lambda_3 v_3 & \cdots & \lambda_n v_n\end{bmatrix},$

which holds column by column since $Av_i = \lambda_i v_i$. So if you rearrange the pairs (eigenvalue, eigenvector) together, you get the same result, just in a different order.

6. ## Re: diagonalized matrices

Ok, thanks. It's sounding like I did everything right then got cold feet at the end!
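A quick numerical check (my own addition, not from the thread) confirms that the poster's eigenvalues and eigenvectors do diagonalize A, with the columns of U in the same order as the diagonal entries of D:

```python
import numpy as np

A = np.array([[ 2., -4.,  2.],
              [-4.,  2., -2.],
              [ 2., -2., -1.]])

# eigenvector columns, ordered to match the eigenvalues in D
U = np.array([[0.,  1.,  1. ],
              [1.,  0., -1. ],
              [2., -2.,  0.5]])
D = np.diag([-2., -2., 7.])

print(np.allclose(U @ D @ np.linalg.inv(U), A))   # True
```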
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9621076583862305, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/29306/why-is-the-fibered-coproduct-of-affine-schemes-not-affine/29311
## Why is the fibered coproduct of affine schemes not affine?

I am confused about the following issue: Let $X=\operatorname{Spec} S$, $U_1=\operatorname{Spec} R_1$, $U_2=\operatorname{Spec} R_2$, and suppose we have maps $S \rightarrow R_1$, $S \rightarrow R_2$. Let $U_3=\operatorname{Spec} (R_1 \otimes_S R_2)$. We have scheme maps $U_1 \rightarrow X$, $U_2 \rightarrow X$, $U_3 \rightarrow U_1$, $U_3 \rightarrow U_2$. The particular situation I have in mind is when $U_1$ and $U_2$ are distinguished open subschemes of $X$ (corresponding to localizations of $S$ at some elements). The intersection of $U_1$ and $U_2$ is $U_3$, and the inclusion of $U_3$ in $X$ corresponds to the $S$-algebra structure on $R_1 \otimes_S R_2$.

The category of affine schemes (ASch) is the opposite category of commutative rings (CRing). In CRing, kernels (equalisers) of pairs of maps and products exist, so by a lemma from category theory limits should exist; in particular, fibered coproducts should exist, so the union of the two affine schemes $U_1$ and $U_2$ over $U_3$ should be an affine scheme $U_4$! But we know that in general it is not so! Maybe the problem is that abstractly it is an affine scheme, but then what is its inclusion map into $X$? Actually, there exists an obvious map on the ring side from $S$ to the kernel (equaliser) of the pair of maps $R_1 \rightarrow (R_1 \otimes_S R_2)$, $R_2 \rightarrow (R_1 \otimes_S R_2)$. Thank you!

- I think some of the maps in your opening paragraph are backwards. If you want to discuss coproducts of schemes, then the maps on rings should be $R_1 \to S$ and $R_2 \to S$. (As in Andreas's example.) There may be some later typos of this sort, I'm not sure. – David Speyer Jun 24 2010 at 2:26
- Just keep in mind in which categories you talk about coequalizers. In the category of affine schemes, well, the coequalizer is of course affine (and it exists). But probably you want to think about the coequalizer of schemes whose objects happen to be affine. See also mathoverflow.net/questions/9961/… and (sorry!) mathoverflow.net/questions/23478/… ;-) – Martin Brandenburg Jun 24 2010 at 9:01

## 2 Answers

The short answer is that the category of affine schemes does have pushouts, but these are not the same as pushouts of affine schemes calculated in the category of all schemes.

For a longer answer, consider an example that's small enough to compute: The projective line (over the complex numbers, say) is not affine, but it is obtained by gluing two copies of the affine line along the punctured affine line, so it is the pushout (in the category of schemes) of a diagram of affine schemes. Now what's the pushout of that same diagram in the category of affine schemes? Well, the rings involved are two copies of $C[x]$ and a copy of $C[x,x^{-1}]$. The two maps are the two injections of $C[x]$ into $C[x,x^{-1}]$, one sending $x$ to $x$, and the other sending $x$ to $x^{-1}$. The pullback of these, in the category of commutative rings, is just $C$, because the only way a polynomial in $x$ can equal a polynomial in $x^{-1}$ is for both of them to be constant. Therefore, the affine-scheme pushout is not the projective line but a point. Intuitively, if you glue together two copies of the line along the punctured line "gently," allowing the result to be non-affine, you get the projective line, but if you demand that the result be affine then your projective line is forced to collapse to a point.

- Thanks Andreas! My confusion is resolved.
So when we calculate global sections of the structure sheaf over some open set $U$ (not necessarily an affine scheme) which we can write as a union of open affines $U_f$, whose sections are known, we can calculate a limit (pullback) of the corresponding diagram of rings? – Mikhail Gudim Jun 24 2010 at 6:12
- This is essentially the definition of a sheaf. – Martin Brandenburg Jun 24 2010 at 9:02
- I never realized this simple fact, but it is quite surprising! – Andrea Ferretti Jun 25 2010 at 10:21

From the categorical point of view the situation is the following. We have the category of affine schemes $AffSch$, the category of all schemes $Sch$, and the inclusion functor $i:AffSch \to Sch$. The functor $i$ has a left adjoint functor $i^*:Sch \to AffSch$, $S \mapsto \operatorname{Spec} \Gamma(S,{\mathcal{O}}_S)$, which is sometimes called the affine envelope. Now, the coequalizer (the pushout) by definition is the object which corepresents a certain contravariant functor to $Sets$. Note that whenever we have a functor $i:C \to D$ having a left adjoint and a contravariant functor $F:D \to Sets$, if $F$ is corepresentable by an object $X$, then $F\circ i$ is corepresentable as well and the corepresenting object is $i^{\ast}(X)$. Indeed, if $F(Y) = Hom(X,Y)$ then $$F(i(Z)) = Hom(X,i(Z)) = Hom(i^{\ast}(X),Z).$$ Applying this to the situation of the first paragraph, we see that the restriction of a functor corepresentable by a scheme $S$ from $Sch$ to $AffSch$ is corepresentable by $i^{\ast}(S)$. In particular, the coequalizer in the category of affine schemes is the affine envelope of the coequalizer in the category of all schemes.

- Oh, I see. Thanks very much, you made it very clear for me! – Mikhail Gudim Jun 27 2010 at 21:46
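Spelling out the ring-level computation from the first answer as a display (an editorial paraphrase, not part of the original thread): the pullback of the two injections $C[x] \to C[x,x^{-1}]$, sending $x \mapsto x$ and $x \mapsto x^{-1}$ respectively, is $$C[x] \times_{C[x,x^{-1}]} C[x] = \left\{ (f,g) : f(x) = g(x^{-1}) \text{ in } C[x,x^{-1}] \right\} \cong C,$$ since a polynomial in $x$ can equal a polynomial in $x^{-1}$ only when both are the same constant; applying $\operatorname{Spec}$ then gives the single point described in the answer.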
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 56, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9281581044197083, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/295303/what-are-all-isometry-classes-of-the-2-sphere?answertab=votes
# What are all isometry classes of the 2-sphere?

In topology, one learns how to classify the compact surfaces up to homeomorphism. And in fact, since "homeomorphic" and "diffeomorphic" coincide in dimension 2, we can classify the compact (smooth) surfaces up to diffeomorphism. This makes me wonder about classifying compact Riemannian 2-manifolds up to isometry. In particular:

Is there a classification of all Riemannian 2-manifolds that are diffeomorphic to the 2-sphere?

I imagine this to be a very difficult question. As such, I have two follow-up questions:

• If this is a tractable question, how much progress has been made in this direction? What is known and what isn't?
• If the question is considered too difficult to have a real answer (is this the case?), then I imagine there to be simpler, related questions. What are some examples of these?

- If you replace isometric by pointwise-conformally equivalent, then the answer is given by the uniformization theorem of Riemann surfaces. – Sam Feb 5 at 9:51
- Oh, I see... is this why everyone always talks about metrics "within a conformal class"? Because we already have a classification up to pointwise-conformal equivalence? Mmm. – Jesse Madnick Feb 5 at 9:56
- The point about conformal equivalence is the following: any Riemannian metric $g$ induces a complex structure $J$ on $S^2$, by $\omega(\cdot,J\cdot) = g(\cdot,\cdot)$. Here $\omega$ is the volume form of $g$ and $J \in End(TS^2)$ is a tensor which induces maps $J_p : T_pS^2 \rightarrow T_pS^2$. It is easy to check that $J^2 = - id$. However, it is non-trivial to see that there exists a complex atlas (with holomorphic charts) under which $J$ corresponds to multiplication by $i$. Now, two metrics induce the same complex structure if and only if they are conformally equivalent. – Sam Feb 5 at 14:23

## 1 Answer

There are a couple of directions to go with your question, but the general feeling I get is that these kinds of questions are considered hopeless. I mean, pick any Riemannian metric on $\mathbb{R}^2$. There is an infinite-dimensional space of such metrics. Via a partition of unity, one can view this as the metric chosen on the northern hemisphere of $S^2$.

On the other hand, here are two results which may be to your liking. The first, due to Kazdan and Warner (see the linked lecture notes near the bottom of Kazdan's website), states the following: suppose $\kappa: S^2\rightarrow\mathbb{R}$ is any smooth function satisfying $\int_{S^2} \kappa \, dA = 4\pi$; then there is a Riemannian metric $g$ on $S^2$ with curvature given by $\kappa$. (By the Gauss-Bonnet theorem, the integral condition is necessary. Kazdan and Warner prove sufficiency.)

The second, due to Weinstein (see here for the original paper): every smooth manifold other than $S^2$ has a metric $g$ for which, in the tangent space at some point $p$, the cut locus and conjugate locus do not intersect. On the other hand, on $S^2$, for any metric and any point, the cut locus and conjugate locus must intersect.
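As a concrete check of the integral condition in the Kazdan–Warner result (an editorial addition, not part of the original thread): for the round sphere of radius $r$, the Gauss curvature is the constant $\kappa = 1/r^2$ and the area is $4\pi r^2$, so $$\int_{S^2} \kappa \, dA = \frac{1}{r^2}\cdot 4\pi r^2 = 4\pi,$$ independent of $r$, exactly as Gauss–Bonnet ($\int_{S^2} \kappa \, dA = 2\pi\chi(S^2) = 4\pi$) requires.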
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9366046786308289, "perplexity_flag": "head"}
http://dsp.stackexchange.com/questions/tagged/kalman-filters
# Tagged Questions

The Kalman filter is a mathematical method that uses noisy measurements observed over time to produce values that tend to be closer to the true values of the measurements and their associated calculated values. (A minimal predict/update sketch of the filter recursion follows the question list below.)

### (Unscented) Kalman Filter with variable state dimensions
I have to estimate a process which changes over time, not with respect to the system evolution or the measurement function, but regarding the number of objects that have to be estimated. So every ...

### Smoothing data by using Kalman filter
I would like to ask about smoothing data by using a Kalman filter. Due to quantization, I have data that is not smooth. How can I smooth this data by using a Kalman filter? For your information, the data ...

### More on: Kalman filter for position and velocity
Thanks to everyone who posted comments/answers to my query yesterday (Implementing a Kalman filter for position, velocity, acceleration). I've been looking at what was recommended, and in particular ...

### Implementing a Kalman filter for position, velocity, acceleration
I've used Kalman filters for various things in the past, but I'm now interested in using one to track position, speed and acceleration in the context of tracking position for smartphone apps. It ...

### Kalman Filter: continuous state space, discrete observations
This is just an idea. How can we model the Kalman filter to get the state representation in continuous space when the observations to the system are actually from the discrete space? The discrete ...

### Information filter instead of Kalman filter approach
I read many sources about the Kalman filter, yet none about the other approach to filtering, where the canonical parametrization instead of the moments parametrization is used. So I would like to learn from examples ...

### Sequential processing of uncorrelated measurements in Kalman Filter
I'm starting to brush up on the Kalman filtering I learned a couple decades ago. From what I remember, if you have a measurement vector $$z = Hx + v$$ and the $n$ components of the measurement ...

### Is Kalman smoother symmetric in time?
Suppose my model is reversible in time (e.g. GPS + accelerometers for a vehicle), so that I can run the Kalman filter forwards or backwards. The Kalman filter, of course, cannot be symmetric, because it is ...

### How to combine a perfect signal with a limited dynamic range with a poor one with high dynamic range?
I have two sensors that measure the speed $v(t)$ of a moving vehicle. The first sensor produces a signal $f(t)$ which is a very accurate estimation of speed. However, it only works for slow to moderate ...

### Can Kalman Filter be used to track Randomly Moving Target?
I want to track a randomly moving object with a camera using a Kalman filter. I have the following questions. A randomly moving target means $Correlation(t) = E[ x(T)x(T+t) ]$ is very low, where $x(T)$ ...

### Estimation of the position of the magnetic source
I have a flying robot with a magnetic coil as a sensor. An output from the coil is measured every second in a different position. I know the position of the coil and its angles. I need to estimate the ...

### Optimal inference for nonlinear state space models
When considering a linear-Gaussian state space model, it is often said that optimal inference is tractable, which is very rare among state space models.
When considering a nonlinear state space ...

### Doubt on Weighted Least Square Estimation
This is a page from the book Linear Algebra, Geodesy, and GPS by Gilbert Strang.... The page explains the justification of the inverse of the covariance matrix of the measurement vector $b$ ...

### Least Square Error Estimation doubt
When can we write $(A^TA)^{ -1} = A^{-1}(A^T)^{-1}$? I am new to linear algebra and have this simple question... in least square estimation... the best estimate for the equation $Ax = b$ is $x_{Estimated} = A (A^t A )^{-1} A^t b$... the projection of $b$ ...

### Good book or reference to learn Kalman Filter
I am totally new to the Kalman filter. I've had some basic courses on conditional probability and linear algebra. Can someone suggest a good book or any resource on the web which can help me ...

### Estimating velocity from known position and acceleration
I am stuck at modeling a system model, i.e. getting my state vector and input vector. My guess is that position and velocity are the state vector and acceleration is the input vector. My 2nd guess is that all ...

### Kalman filter in practice
I have read the description of the Kalman filter, but I am not clear on how it comes together in practice. It appears to be primarily targeted at mechanical or electrical systems since it wants linear ...

### When to use EKF and when Kalman Filter?
I've been learning about the Kalman filter for a week now. I just discovered that the EKF (extended Kalman filter) might be more appropriate for my case. Let's suppose I'm applying the KF/EKF for a variometer (the device that ...

### Estimating the input to a system from a system state using EKF
[Cross-posted from: http://math.stackexchange.com/questions/164169/estimating-the-input-to-a-system-from-a-system-state] I have a system for which I have obtained a non-linear time-varying ...

### Extended Kalman Filter - how do I get transition functions
I am measuring position and velocity; both have some noise in them. Velocity is defined as the derivative of position. The system is apparently non-linear so I need to use an EKF. Model: Questions: ...

### Role of Kalman filter in nonlinear dynamics
I am quite interested to know the significance of using the Kalman filter, unscented Kalman filter and extended Kalman filter in chaos synchronization when in fact the very basics of chaos ...

### Kalman filter - implementation and deciding parameters
First of all, this is the first time I try to make a Kalman filter. I earlier posted this thread on Stack Overflow which describes the background for this post. This is a typical sample of values I'm ...

### Filtering difference of correlated measurements
I have an application with two separated GPS receivers giving live positions and I'm deriving a heading/displacement from the vector between them. Each set of position measurements is noisy and has ...

### Statistical properties of the Kalman estimates under Gaussian noise
For a linear state-space model with independent Gaussian state and output noises and a perfect guess for the initial state, do the Kalman estimates have the following properties: $E(\hat{x}_{k|k} - x_k) = 0$ ...

### Optimal measurement model for Kalman in Augmented Reality
I am developing an augmented reality SDK that uses a Kalman filter for tracking a planar marker.
My state is composed of 3D position, a quaternion, velocity and angular velocity. ...

### What is the relationship between Kalman filter and polynomial regression?
What is the relationship, if any, between Kalman filtering and (repeated, if necessary) least squares polynomial regression?

### How to understand Kalman gain intuitively?
The Kalman filter algorithm works as follows: initialize $\hat{\textbf{x}}_{0|0}$ and $\textbf{P}_{0|0}$. At each iteration $k=1,\dots,n$: Predict: predicted (a priori) state estimate ...

### Different state-space representations for Auto-Regression and Kalman filter
I see that there are different ways to write an AR model in a state-space representation, so that we can apply a Kalman filter to estimate the signal. See Examples 1, 2 and 3 here. I wonder what ...

### Intuitive explanation of tracking with Kalman filters
I would much appreciate an intuitive explanation for (visual) tracking with Kalman filters. What I know: Prediction step: dynamic system state $X_t$: target ...

### Estimate process error for Kalman filter on financial data
I'd like to apply a Kalman filter, using Octave, to financial data, but due to the nature of the data it will be difficult to impossible to specify the process error in advance of applying the filter. ...

### How to derive the stationary Kalman filter predictor?
In its chapter on Kalman filters, my DSP book states, seemingly out of the blue, that the stationary Kalman filter for a system $$\begin{cases} x(t+1) &= Ax(t) + w(t) \\ y(t) &= Cx(t) + v(t) \end{cases}$$ ...

### Is a Kalman filter suitable to filter projected points positions, given Euler angles of the capturing device?
My system is the following. I use the camera of a mobile device to track an object. From this tracking, I get four 3D points that I project on the screen, to get four 2D points. These 8 values are ...

### Should the input of a Kalman filter always be a signal and its derivative?
I always see the Kalman filter used with such input data. For example, the inputs are commonly a position and the corresponding velocity: $$(x, \dfrac{dx}{dt})$$ In my case, I only have 2D ...
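Several of the questions above (e.g. "How to understand Kalman gain intuitively?") refer to the standard predict/update recursion. As an editorial illustration, here is a minimal one-dimensional sketch of that recursion in Python; the random-walk model and the noise variances are assumptions chosen for the demo, not values taken from any of the questions.

```python
import random

# Minimal 1-D Kalman filter: random-walk state, noisy direct measurement.
#   x_{k+1} = x_k + w_k,   z_k = x_k + v_k
q = 0.01   # process-noise variance (assumed)
r = 1.0    # measurement-noise variance (assumed)

x_hat, p = 0.0, 1.0   # initial state estimate and its variance
true_x = 0.0

for k in range(50):
    true_x += random.gauss(0.0, q ** 0.5)      # simulate the state
    z = true_x + random.gauss(0.0, r ** 0.5)   # simulate a measurement

    # Predict: the random-walk estimate carries over unchanged,
    # while its uncertainty grows by the process noise.
    p += q

    # Update: the Kalman gain blends prediction and measurement
    # in proportion to their uncertainties.
    k_gain = p / (p + r)
    x_hat += k_gain * (z - x_hat)
    p *= (1.0 - k_gain)

print(f"estimate {x_hat:+.3f}, true {true_x:+.3f}, variance {p:.3f}")
```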
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9070296287536621, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Noether's_theorem
# Noether's theorem

Noether's (first) theorem states that any differentiable symmetry of the action of a physical system has a corresponding conservation law. The theorem was proved by German mathematician Emmy Noether in 1915 and published in 1918.[1] The action of a physical system is the integral over time of a Lagrangian function (which may or may not be an integral over space of a Lagrangian density function), from which the system's behavior can be determined by the principle of least action.

Noether's theorem has become a fundamental tool of modern theoretical physics and the calculus of variations. A generalization of the seminal formulations on constants of motion in Lagrangian and Hamiltonian mechanics (developed in 1788 and 1833, respectively), it does not apply to systems that cannot be modeled with a Lagrangian alone (e.g. systems with a Rayleigh dissipation function). In particular, dissipative systems with continuous symmetries need not have a corresponding conservation law.

## Basic illustrations and background

As an illustration, if a physical system behaves the same regardless of how it is oriented in space, its Lagrangian is rotationally symmetric: from this symmetry, Noether's theorem dictates that the angular momentum of the system be conserved, as a consequence of its laws of motion. The physical system itself need not be symmetric; a jagged asteroid tumbling in space conserves angular momentum despite its asymmetry — it is the laws of its motion that are symmetric.

As another example, if a physical process exhibits the same outcomes regardless of place or time (having the same outcome, say, somewhere in Asia on a Tuesday or in America on a Friday), then its Lagrangian is symmetric under continuous translations in space and time: by Noether's theorem, these symmetries account for the conservation laws of linear momentum and energy within this system, respectively.

Noether's theorem is important, both because of the insight it gives into conservation laws, and also as a practical calculational tool. It allows investigators to determine the conserved quantities (invariants) from the observed symmetries of a physical system. Conversely, it allows researchers to consider whole classes of hypothetical Lagrangians with given invariants, to describe a physical system. As an illustration, suppose that a new field is discovered that conserves a quantity X. Using Noether's theorem, the types of Lagrangians that conserve X through a continuous symmetry may be determined, and their fitness judged by further criteria.

There are numerous versions of Noether's theorem, with varying degrees of generality. The original version only applied to ordinary differential equations (particles) and not partial differential equations (fields). The original versions also assume that the Lagrangian only depends upon the first derivative, while later versions generalize the theorem to Lagrangians depending on the nth derivative. There are natural quantum counterparts of this theorem, expressed in the Ward–Takahashi identities. Generalizations of Noether's theorem to superspaces are also available.
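As a concrete worked instance of the rotational case just described (an editorial example, standard in mechanics texts, not part of the original article): a particle of mass $m$ in a central potential, written in polar coordinates, has the Lagrangian $L = \frac{m}{2}\left(\dot{r}^2 + r^2\dot{\theta}^2\right) - V(r)~,$ which is unchanged by the rotation $\theta \rightarrow \theta + \delta\theta$. The conserved quantity that Noether's theorem attaches to this symmetry is $p_\theta = \frac{\partial L}{\partial \dot{\theta}} = m r^2 \dot{\theta}~,$ the angular momentum about the origin.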
## Informal statement of the theorem

All fine technical points aside, Noether's theorem can be stated informally:

If a system has a continuous symmetry property, then there are corresponding quantities whose values are conserved in time.[2]

A more sophisticated version of the theorem involving fields states that:

To every differentiable symmetry generated by local actions, there corresponds a conserved current.

The word "symmetry" in the above statement refers more precisely to the covariance of the form that a physical law takes with respect to a one-dimensional Lie group of transformations satisfying certain technical criteria. The conservation law of a physical quantity is usually expressed as a continuity equation.

The formal proof of the theorem utilizes the condition of invariance to derive an expression for a current associated with a conserved physical quantity. In modern (since ca. 1980[3]) terminology, the conserved quantity is called the Noether charge, while the flow carrying that charge is called the Noether current. The Noether current is defined up to a solenoidal (divergenceless) vector field.

In the context of gravitation, Felix Klein's statement of Noether's theorem for the action I stipulates for the invariants:[4]

If an integral I is invariant under a continuous group Gρ with ρ parameters, then ρ linearly independent combinations of the Lagrangian expressions are divergences.

## Historical context

Main articles: Constant of motion, conservation law, and conserved current

A conservation law states that some quantity X in the mathematical description of a system's evolution remains constant throughout its motion — it is an invariant. Mathematically, the rate of change of X (its derivative with respect to time) vanishes, $\frac{dX}{dt} = 0 ~.$ Such quantities are said to be conserved; they are often called constants of motion (although motion per se need not be involved, just evolution in time). For example, if the energy of a system is conserved, its energy is invariant at all times, which imposes a constraint on the system's motion and may help in solving for it. Aside from the insights that such constants of motion give into the nature of a system, they are a useful calculational tool; for example, an approximate solution can be corrected by finding the nearest state that satisfies the suitable conservation laws.

The earliest constants of motion discovered were momentum and energy, which were proposed in the 17th century by René Descartes and Gottfried Leibniz on the basis of collision experiments, and refined by subsequent researchers. Isaac Newton was the first to enunciate the conservation of momentum in its modern form, and showed that it was a consequence of Newton's third law.

According to general relativity, the conservation laws of linear momentum, energy and angular momentum are only exactly true globally when expressed in terms of the sum of the stress–energy tensor (non-gravitational stress–energy) and the Landau–Lifshitz stress–energy–momentum pseudotensor (gravitational stress–energy). The local conservation of non-gravitational linear momentum and energy in a free-falling reference frame is expressed by the vanishing of the covariant divergence of the stress–energy tensor. Another important conserved quantity, discovered in studies of the celestial mechanics of astronomical bodies, was the Laplace–Runge–Lenz vector.

In the late 18th and early 19th centuries, physicists developed more systematic methods for discovering invariants.
A major advance came in 1788 with the development of Lagrangian mechanics, which is related to the principle of least action. In this approach, the state of the system can be described by any type of generalized coordinates q; the laws of motion need not be expressed in a Cartesian coordinate system, as was customary in Newtonian mechanics. The action is defined as the time integral I of a function known as the Lagrangian L

$I = \int L(\mathbf{q}, \dot{\mathbf{q}}, t) \, dt ~,$

where the dot over q signifies the rate of change of the coordinates q,

$\dot{\mathbf{q}} = \frac{d\mathbf{q}}{dt} ~.$

Hamilton's principle states that the physical path q(t)—the one actually taken by the system—is a path for which infinitesimal variations in that path cause no change in I, at least up to first order. This principle results in the Euler–Lagrange equations,

$\frac{d}{dt} \left( \frac{\partial L}{\partial \dot{\mathbf{q}}} \right) = \frac{\partial L}{\partial \mathbf{q}} ~.$

Thus, if one of the coordinates, say qk, does not appear in the Lagrangian, the right-hand side of the equation is zero, and the left-hand side requires that

$\frac{d}{dt} \left( \frac{\partial L}{\partial \dot{q}_k} \right) = \frac{dp_k}{dt} = 0~,$

where the momentum pk, defined as the left-hand quantity in parentheses, is conserved during the motion (on the physical path). Thus, the absence of the ignorable coordinate qk from the Lagrangian implies that the Lagrangian is unaffected by changes or transformations of qk; the Lagrangian is invariant, and is said to exhibit a symmetry under such transformations. This is the seed idea generalized in Noether's theorem.

Several alternative methods for finding conserved quantities were developed in the 19th century, especially by William Rowan Hamilton. For example, he developed a theory of canonical transformations which allowed changing coordinates so that some coordinates disappeared from the Lagrangian, as above, resulting in conserved canonical momenta. Another approach, and perhaps the most efficient for finding conserved quantities, is the Hamilton–Jacobi equation.

## Mathematical expression

See also: Perturbation theory

### Simple form using perturbations

The essence of Noether's theorem is the generalization of the ignorable coordinates outlined above. Imagine that the action I defined above is invariant under small perturbations (warpings) of the time variable t and the generalized coordinates q; in a notation commonly used in physics,

$t \rightarrow t^{\prime} = t + \delta t$

$\mathbf{q} \rightarrow \mathbf{q}^{\prime} = \mathbf{q} + \delta \mathbf{q} ~,$

where the perturbations δt and δq are both small, but variable. For generality, assume there are (say) N such symmetry transformations of the action, i.e. transformations leaving the action unchanged, labelled by an index r = 1, 2, 3, …, N. Then the resultant perturbation can be written as a linear sum of the individual types of perturbations,

$\delta t = \sum_r \epsilon_r T_r \!$

$\delta \mathbf{q} = \sum_r \epsilon_r \mathbf{Q}_r ~,$

where εr are infinitesimal parameter coefficients corresponding to each:

• generator Tr of time evolution, and
• generator Qr of the generalized coordinates.

For translations, Qr is a constant with units of length; for rotations, it is an expression linear in the components of q, and the corresponding parameter is an angle.
Using these definitions, Noether showed that the N quantities

$\left(\frac{\partial L}{\partial \dot{\mathbf{q}}} \cdot \dot{\mathbf{q}} - L \right) T_r - \frac{\partial L}{\partial \dot{\mathbf{q}}} \cdot \mathbf{Q}_r$

(which have the dimensions of [energy]·[time] + [momentum]·[length] = [action]) are conserved (constants of motion).

#### Examples

##### Time invariance

For illustration, consider a Lagrangian that does not depend on time, i.e., that is invariant (symmetric) under changes t → t + δt, without any change in the coordinates q. In this case, N = 1, T = 1 and Q = 0; the corresponding conserved quantity is the total energy H[5]

$H = \frac{\partial L}{\partial \dot{\mathbf{q}}} \cdot \dot{\mathbf{q}} - L.$

##### Translational invariance

Consider a Lagrangian which does not depend on an ("ignorable", as above) coordinate qk; so it is invariant (symmetric) under changes qk → qk + δqk. In that case, N = 1, T = 0, and Qk = 1; the conserved quantity is the corresponding momentum pk[6]

$p_k = \frac{\partial L}{\partial \dot{q_k}}.$

In special and general relativity, these apparently separate conservation laws are aspects of a single conservation law, that of the stress–energy tensor,[7] that is derived in the next section.

##### Rotational invariance

The conservation of the angular momentum L = r × p is analogous to its linear momentum counterpart.[8] It is assumed that the symmetry of the Lagrangian is rotational, i.e., that the Lagrangian does not depend on the absolute orientation of the physical system in space. For concreteness, assume that the Lagrangian does not change under small rotations of an angle δθ about an axis n; such a rotation transforms the Cartesian coordinates by the equation

$\mathbf{r} \rightarrow \mathbf{r} + \delta\theta \mathbf{n} \times \mathbf{r}.$

Since time is not being transformed, T = 0. Taking δθ as the ε parameter and the Cartesian coordinates r as the generalized coordinates q, the corresponding Q variables are given by

$\mathbf{Q} = \mathbf{n} \times \mathbf{r}.$

Then Noether's theorem states that the following quantity is conserved,

$\frac{\partial L}{\partial \dot{\mathbf{q}}} \cdot \mathbf{Q}_{r} = \mathbf{p} \cdot \left( \mathbf{n} \times \mathbf{r} \right) = \mathbf{n} \cdot \left( \mathbf{r} \times \mathbf{p} \right) = \mathbf{n} \cdot \mathbf{L}.$

In other words, the component of the angular momentum L along the n axis is conserved. If n is arbitrary, i.e., if the system is insensitive to any rotation, then every component of L is conserved; in short, angular momentum is conserved.

### Field theory version

Although useful in its own right, the version of her theorem just given was a special case of the general version she derived in 1915. To give the flavor of the general theorem, a version of the Noether theorem for continuous fields in four-dimensional space–time is now given. Since field theory problems are more common in modern physics than mechanics problems, this field theory version is the most commonly used (and most often implemented) version of Noether's theorem.

Let there be a set of differentiable fields φ defined over all space and time; for example, the temperature T(x, t) would be representative of such a field, being a number defined at every place and time.
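Before developing the field-theory machinery, here is a minimal symbolic check of the particle-mechanics examples above, as an editorial sketch: the harmonic oscillator is an assumed test Lagrangian (not one used in the article), and the check confirms that the Noether quantity for time invariance, H, is constant once the Euler–Lagrange equation is imposed.

```python
import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
q = sp.Function('q')(t)

# Test Lagrangian (assumed for the demo): harmonic oscillator.
L = m * sp.diff(q, t)**2 / 2 - k * q**2 / 2

# Noether quantity for time translations (T = 1, Q = 0):
# H = (dL/dq') q' - L, the total energy.
H = sp.diff(L, sp.diff(q, t)) * sp.diff(q, t) - L

# dH/dt should vanish on the physical path, i.e. once the
# Euler-Lagrange equation m q'' = -k q is substituted in.
dH_dt = sp.diff(H, t)
on_shell = dH_dt.subs(sp.diff(q, t, 2), -k * q / m)
print(sp.simplify(on_shell))  # prints 0
```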
The principle of least action can be applied to such fields, but the action is now an integral over space and time

$I = \int L \left(\boldsymbol\phi, \partial_\mu{\boldsymbol\phi}, x^\mu \right) \, d^4 x$

(the theorem can actually be further generalized to the case where the Lagrangian depends on up to the nth derivative, using jet bundles).

Let the action be invariant under certain transformations of the space–time coordinates xμ and the fields φ

$x^{\mu} \rightarrow x^\mu + \delta x^\mu \!$

$\boldsymbol \phi \rightarrow \boldsymbol \phi + \delta \boldsymbol \phi$

where the transformations can be indexed by r = 1, 2, 3, …, N

$\delta x^\mu = \epsilon_r X^\mu_r \,$

$\delta \boldsymbol\phi = \epsilon_r \boldsymbol\Psi_r ~.$

For such systems, Noether's theorem states that there are N conserved current densities

$j^\nu_r = - \left( \frac{\partial L}{\partial \boldsymbol\phi_{,\nu}} \right) \cdot \boldsymbol\Psi_r + \sum_{\sigma} \left[ \left( \frac{\partial L}{\partial \boldsymbol\phi_{,\nu}} \right) \cdot \boldsymbol\phi_{,\sigma} - L \delta^{\nu}_{\sigma} \right] X_{r}^{\sigma}$

In such cases, the conservation law is expressed in a four-dimensional way

$\sum_\nu \frac{\partial j^\nu}{\partial x^\nu} = 0$

which expresses the idea that the amount of a conserved quantity within a sphere cannot change unless some of it flows out of the sphere. For example, electric charge is conserved; the amount of charge within a sphere cannot change unless some of the charge leaves the sphere.

For illustration, consider a physical system of fields that behaves the same under translations in time and space, as considered above; in other words, $L \left(\boldsymbol\phi, \partial_\mu{\boldsymbol\phi}, x^\mu \right)$ is constant in its third argument. In that case, N = 4, one for each dimension of space and time. Since only the positions in space–time are being warped, not the fields, the Ψ are all zero and the Xμν equal the Kronecker delta δμν, where we have used μ instead of r for the index. In that case, Noether's theorem corresponds to the conservation law for the stress–energy tensor Tμν[7]

$T_\mu{}^\nu = \sum_{\sigma} \left[ \left( \frac{\partial L}{\partial \boldsymbol\phi_{,\nu}} \right) \cdot \boldsymbol\phi_{,\sigma} - L\,\delta^\nu_\sigma \right] \delta_\mu^\sigma = \left( \frac{\partial L}{\partial \boldsymbol\phi_{,\nu}} \right) \cdot \boldsymbol\phi_{,\mu} - L\,\delta_\mu^\nu$

The conservation of electric charge, by contrast, can be derived by considering Xμν = 0 and Ψ linear in the fields φ themselves.[9] In quantum mechanics, the probability amplitude ψ(x) of finding a particle at a point x is a complex field φ, because it ascribes a complex number to every point in space and time. The probability amplitude itself is physically unmeasurable; only the probability p = |ψ|2 can be inferred from a set of measurements. Therefore, the system is invariant under transformations of the ψ field and its complex conjugate field ψ* that leave |ψ|2 unchanged, such as

$\psi \rightarrow e^{i\theta} \psi \ ,\ \psi^{*} \rightarrow e^{-i\theta} \psi^{*}~,$

a complex rotation. In the limit when the phase θ becomes infinitesimally small, δθ, it may be taken as the parameter ε, while the Ψ are equal to iψ and −iψ*, respectively.
A specific example is the Klein–Gordon equation, the relativistically correct version of the Schrödinger equation for spinless particles, which has the Lagrangian density

$L = \psi_{,\nu} \psi^{*}_{,\mu} \eta^{\nu \mu} + m^2 \psi \psi^{*}.$

In this case, Noether's theorem states that the conserved (∂⋅j = 0) current equals

$j^{\nu} = i \left( \frac{\partial \psi}{\partial x^{\mu}} \psi^{*} - \frac{\partial \psi^{*}}{\partial x^{\mu}} \psi \right) \eta^{\nu \mu}~,$

which, when multiplied by the charge on that species of particle, equals the electric current density due to that type of particle. This "gauge invariance" was first noted by Hermann Weyl, and is one of the prototype gauge symmetries of physics.

## Derivations

### One independent variable

Consider the simplest case, a system with one independent variable, time. Suppose the dependent variables q are such that the action integral

$I = \int_{t_1}^{t_2} L [\mathbf{q} [t], \dot{\mathbf{q}} [t], t] \, dt$

is invariant under brief infinitesimal variations in the dependent variables. In other words, they satisfy the Euler–Lagrange equations

$\frac{d}{dt} \frac{\partial L}{\partial \dot{\mathbf{q}}} [t] = \frac{\partial L}{\partial \mathbf{q}} [t].$

And suppose that the integral is invariant under a continuous symmetry. Mathematically such a symmetry is represented as a flow, φ, which acts on the variables as follows

$t \rightarrow t' = t + \epsilon T \!$

$\mathbf{q} [t] \rightarrow \mathbf{q}' [t'] = \phi [\mathbf{q} [t], \epsilon] = \phi [\mathbf{q} [t' - \epsilon T], \epsilon]$

where ε is a real variable indicating the amount of flow, and T is a real constant (which could be zero) indicating how much the flow shifts time.

$\dot{\mathbf{q}} [t] \rightarrow \dot{\mathbf{q}}' [t'] = \frac{d}{dt} \phi [\mathbf{q} [t], \epsilon] = \frac{\partial \phi}{\partial \mathbf{q}} [\mathbf{q} [t' - \epsilon T], \epsilon] \dot{\mathbf{q}} [t' - \epsilon T] .$

The action integral flows to

$\begin{align} I' [\epsilon] & = \int_{t_1 + \epsilon T}^{t_2 + \epsilon T} L [\mathbf{q}'[t'], \dot{\mathbf{q}}' [t'], t'] \, dt' \\[6pt] & = \int_{t_1 + \epsilon T}^{t_2 + \epsilon T} L [\phi [\mathbf{q} [t' - \epsilon T], \epsilon], \frac{\partial \phi}{\partial \mathbf{q}} [\mathbf{q} [t' - \epsilon T], \epsilon] \dot{\mathbf{q}} [t' - \epsilon T], t'] \, dt' \end{align}$

which may be regarded as a function of ε. Calculating the derivative at ε = 0 and using the symmetry, we get

$\begin{align} 0 & = \frac{d I'}{d \epsilon} [0] = L [\mathbf{q} [t_2], \dot{\mathbf{q}} [t_2], t_2] T - L [\mathbf{q} [t_1], \dot{\mathbf{q}} [t_1], t_1] T \\[6pt] & {} + \int_{t_1}^{t_2} \frac{\partial L}{\partial \mathbf{q}} \left( - \frac{\partial \phi}{\partial \mathbf{q}} \dot{\mathbf{q}} T + \frac{\partial \phi}{\partial \epsilon} \right) + \frac{\partial L}{\partial \dot{\mathbf{q}}} \left( - \frac{\partial^2 \phi}{(\partial \mathbf{q})^2} {\dot{\mathbf{q}}}^2 T + \frac{\partial^2 \phi}{\partial \epsilon \partial \mathbf{q}} \dot{\mathbf{q}} - \frac{\partial \phi}{\partial \mathbf{q}} \ddot{\mathbf{q}} T \right) \, dt.
\end{align}$ Notice that the Euler–Lagrange equations imply $\begin{align} \frac{d}{dt} \left( \frac{\partial L}{\partial \dot{\mathbf{q}}} \frac{\partial \phi}{\partial \mathbf{q}} \dot{\mathbf{q}} T \right) & = \left( \frac{d}{dt} \frac{\partial L}{\partial \dot{\mathbf{q}}} \right) \frac{\partial \phi}{\partial \mathbf{q}} \dot{\mathbf{q}} T + \frac{\partial L}{\partial \dot{\mathbf{q}}} \left( \frac{d}{dt} \frac{\partial \phi}{\partial \mathbf{q}} \right) \dot{\mathbf{q}} T + \frac{\partial L}{\partial \dot{\mathbf{q}}} \frac{\partial \phi}{\partial \mathbf{q}} \ddot{\mathbf{q}} \, T \\[6pt] & = \frac{\partial L}{\partial \mathbf{q}} \frac{\partial \phi}{\partial \mathbf{q}} \dot{\mathbf{q}} T + \frac{\partial L}{\partial \dot{\mathbf{q}}} \left( \frac{\partial^2 \phi}{(\partial \mathbf{q})^2} \dot{\mathbf{q}} \right) \dot{\mathbf{q}} T + \frac{\partial L}{\partial \dot{\mathbf{q}}} \frac{\partial \phi}{\partial \mathbf{q}} \ddot{\mathbf{q}} \, T. \end{align}$ Substituting this into the previous equation, one gets $\begin{align} 0 & = \frac{d I'}{d \epsilon} [0] = L [\mathbf{q} [t_2], \dot{\mathbf{q}} [t_2], t_2] T - L [\mathbf{q} [t_1], \dot{\mathbf{q}} [t_1], t_1] T - \frac{\partial L}{\partial \dot{\mathbf{q}}} \frac{\partial \phi}{\partial \mathbf{q}} \dot{\mathbf{q}} [t_2] T + \frac{\partial L}{\partial \dot{\mathbf{q}}} \frac{\partial \phi}{\partial \mathbf{q}} \dot{\mathbf{q}} [t_1] T \\[6pt] & {} + \int_{t_1}^{t_2} \frac{\partial L}{\partial \mathbf{q}} \frac{\partial \phi}{\partial \epsilon} + \frac{\partial L}{\partial \dot{\mathbf{q}}} \frac{\partial^2 \phi}{\partial \epsilon \partial \mathbf{q}} \dot{\mathbf{q}} \, dt. \end{align}$ Again using the Euler–Lagrange equations we get $\frac{d}{d t} \left( \frac{\partial L}{\partial \dot{\mathbf{q}}} \frac{\partial \phi}{\partial \epsilon} \right) = \left( \frac{d}{d t} \frac{\partial L}{\partial \dot{\mathbf{q}}} \right) \frac{\partial \phi}{\partial \epsilon} + \frac{\partial L}{\partial \dot{\mathbf{q}}} \frac{\partial^2 \phi}{\partial \epsilon \partial \mathbf{q}} \dot{\mathbf{q}} = \frac{\partial L}{\partial \mathbf{q}} \frac{\partial \phi}{\partial \epsilon} + \frac{\partial L}{\partial \dot{\mathbf{q}}} \frac{\partial^2 \phi}{\partial \epsilon \partial \mathbf{q}} \dot{\mathbf{q}}.$ Substituting this into the previous equation, one gets $\begin{align} 0 & = L [\mathbf{q} [t_2], \dot{\mathbf{q}} [t_2], t_2] T - L [\mathbf{q} [t_1], \dot{\mathbf{q}} [t_1], t_1] T - \frac{\partial L}{\partial \dot{\mathbf{q}}} \frac{\partial \phi}{\partial \mathbf{q}} \dot{\mathbf{q}} [t_2] T + \frac{\partial L}{\partial \dot{\mathbf{q}}} \frac{\partial \phi}{\partial \mathbf{q}} \dot{\mathbf{q}} [t_1] T \\[6pt] & {} + \frac{\partial L}{\partial \dot{\mathbf{q}}} \frac{\partial \phi}{\partial \epsilon} [t_2] - \frac{\partial L}{\partial \dot{\mathbf{q}}} \frac{\partial \phi}{\partial \epsilon} [t_1]. \end{align}$ From which one can see that $\left( \frac{\partial L}{\partial \dot{\mathbf{q}}} \frac{\partial \phi}{\partial \mathbf{q}} \dot{\mathbf{q}} - L \right) T - \frac{\partial L}{\partial \dot{\mathbf{q}}} \frac{\partial \phi}{\partial \epsilon}$ is a constant of the motion, i.e. a conserved quantity. 
Since φ[q, 0] = q, we get $\frac{\partial \phi}{\partial \mathbf{q}} = 1$ and so the conserved quantity simplifies to

$\left( \frac{\partial L}{\partial \dot{\mathbf{q}}} \dot{\mathbf{q}} - L \right) T - \frac{\partial L}{\partial \dot{\mathbf{q}}} \frac{\partial \phi}{\partial \epsilon}.$

To avoid excessive complication of the formulas, this derivation assumed that the flow does not change as time passes. The same result can be obtained in the more general case.

### Field-theoretic derivation

Noether's theorem may also be derived for tensor fields φA where the index A ranges over the various components of the various tensor fields. These field quantities are functions defined over a four-dimensional space whose points are labeled by coordinates xμ, where the index μ ranges over time (μ=0) and three spatial dimensions (μ=1,2,3). These four coordinates are the independent variables, and the values of the fields at each event are the dependent variables. Under an infinitesimal transformation, the variation in the coordinates is written

$x^{\mu} \rightarrow \xi^{\mu} = x^{\mu} + \delta x^{\mu} \!$

whereas the transformation of the field variables is expressed as

${\phi}^A \rightarrow \alpha^A (\xi^{\mu}) = \phi^A (x^{\mu}) + \delta \phi^A (x^{\mu})\,.$

By this definition, the field variations δφA result from two factors: intrinsic changes in the fields themselves and changes in coordinates, since the transformed field αA depends on the transformed coordinates ξμ. To isolate the intrinsic changes, the field variation at a single point xμ may be defined

$\alpha^A (x^{\mu}) = \phi^A (x^{\mu}) + \bar{\delta} \phi^A (x^{\mu})\,.$

If the coordinates are changed, the boundary of the region of space–time over which the Lagrangian is being integrated also changes; the original boundary and its transformed version are denoted as Ω and Ω’, respectively.

Noether's theorem begins with the assumption that a specific transformation of the coordinates and field variables does not change the action, which is defined as the integral of the Lagrangian density over the given region of spacetime. Expressed mathematically, this assumption may be written as

$\int_{\Omega^{\prime}} L \left( \alpha^A, {\alpha^A}_{,\nu}, \xi^{\mu} \right) d^{4}\xi - \int_{\Omega} L \left( \phi^A, {\phi^A}_{,\nu}, x^{\mu} \right) d^{4}x = 0$

where the comma subscript indicates a partial derivative with respect to the coordinate(s) that follows the comma, e.g.
${\phi^A}_{,\sigma} = \frac{\partial \phi^A}{\partial x^{\sigma}}\,.$ Since ξ is a dummy variable of integration, and since the change in the boundary Ω is infinitesimal by assumption, the two integrals may be combined using the four-dimensional version of the divergence theorem into the following form $\int_{\Omega} \left\{ \left[ L \left( \alpha^A, {\alpha^A}_{,\nu}, x^{\mu} \right) - L \left( \phi^A, {\phi^A}_{,\nu}, x^{\mu} \right) \right] + \frac{\partial}{\partial x^{\sigma}} \left[ L \left( \phi^A, {\phi^A}_{,\nu}, x^{\mu} \right) \delta x^{\sigma} \right] \right\} d^{4}x = 0 \,.$ The difference in Lagrangians can be written to first-order in the infinitesimal variations as $\left[ L \left( \alpha^A, {\alpha^A}_{,\nu}, x^{\mu} \right) - L \left( \phi^A, {\phi^A}_{,\nu}, x^{\mu} \right) \right] = \frac{\partial L}{\partial \phi^A} \bar{\delta} \phi^A + \frac{\partial L}{\partial {\phi^A}_{,\sigma}} \bar{\delta} {\phi^A}_{,\sigma} \,.$ However, because the variations are defined at the same point as described above, the variation and the derivative can be done in reverse order; they commute $\bar{\delta} {\phi^A}_{,\sigma} = \bar{\delta} \frac{\partial \phi^A}{\partial x^{\sigma}} = \frac{\partial}{\partial x^{\sigma}} \left( \bar{\delta} \phi^A \right) \,.$ Using the Euler–Lagrange field equations $\frac{\partial}{\partial x^{\sigma}} \left( \frac{\partial L}{\partial {\phi^A}_{,\sigma}} \right) = \frac{\partial L}{\partial \phi^A}$ the difference in Lagrangians can be written neatly as $\left[ L \left( \alpha^A, {\alpha^A}_{,\nu}, x^{\mu} \right) - L \left( \phi^A, {\phi^A}_{,\nu}, x^{\mu} \right) \right] = \frac{\partial}{\partial x^{\sigma}} \left( \frac{\partial L}{\partial {\phi^A}_{,\sigma}} \right) \bar{\delta} \phi^A + \frac{\partial L}{\partial {\phi^A}_{,\sigma}} \bar{\delta} {\phi^A}_{,\sigma} = \frac{\partial}{\partial x^{\sigma}} \left( \frac{\partial L}{\partial {\phi^A}_{,\sigma}} \bar{\delta} \phi^A \right) \,.$ Thus, the change in the action can be written as $\int_{\Omega} \frac{\partial}{\partial x^{\sigma}} \left\{ \frac{\partial L}{\partial {\phi^A}_{,\sigma}} \bar{\delta} \phi^A + L \left( \phi^A, {\phi^A}_{,\nu}, x^{\mu} \right) \delta x^{\sigma} \right\} d^{4}x = 0 \,.$ Since this holds for any region Ω, the integrand must be zero $\frac{\partial}{\partial x^{\sigma}} \left\{ \frac{\partial L}{\partial {\phi^A}_{,\sigma}} \bar{\delta} \phi^A + L \left( \phi^A, {\phi^A}_{,\nu}, x^{\mu} \right) \delta x^{\sigma} \right\} = 0 \,.$ For any combination of the various symmetry transformations, the perturbation can be written $\delta x^{\mu} = \epsilon X^{\mu}\!$ $\delta \phi^A = \epsilon \Psi^A = \bar{\delta} \phi^A + \epsilon \mathcal{L}_X \phi^A$ where $\mathcal{L}_X \phi^A$ is the Lie derivative of φA in the Xμ direction. 
When φA is a scalar or ${X^\mu}_{,\nu} = 0 \,$,

$\mathcal{L}_X \phi^A = \frac{\partial \phi^A}{\partial x^{\mu}} X^{\mu}\,.$

These equations imply that the field variation taken at one point equals

$\bar{\delta} \phi^A = \epsilon \Psi^A - \epsilon \mathcal{L}_X \phi^A\,.$

Differentiating the above divergence with respect to ε at ε=0 and changing the sign yields the conservation law

$\frac{\partial }{\partial x^{\sigma}} j^{\sigma} = 0$

where the conserved current equals

$j^{\sigma} = \left[\frac{\partial L}{\partial {\phi^A}_{,\sigma}} \mathcal{L}_X \phi^A - L \, X^{\sigma}\right] - \left(\frac{\partial L}{\partial {\phi^A}_{,\sigma}} \right) \Psi^A\,.$

### Manifold/fiber bundle derivation

Suppose we have an n-dimensional oriented Riemannian manifold M and a target manifold T. Let $\mathcal{C}$ be the configuration space of smooth functions from M to T. (More generally, we can have smooth sections of a fiber bundle over M.) Examples of this M in physics include:

• In classical mechanics, in the Hamiltonian formulation, M is the one-dimensional manifold R, representing time, and the target space is the cotangent bundle of the space of generalized positions.
• In field theory, M is the spacetime manifold and the target space is the set of values the fields can take at any given point. For example, if there are m real-valued scalar fields, $\phi_1,...,\phi_m$, then the target manifold is Rm. If the field is a real vector field, then the target manifold is isomorphic to R3.

Now suppose there is a functional $\mathcal{S}:\mathcal{C}\rightarrow \mathbf{R},$ called the action. (Note that it takes values into R, rather than C; this is for physical reasons, and doesn't really matter for this proof.) To get to the usual version of Noether's theorem, we need additional restrictions on the action. We assume $\mathcal{S}[\phi]$ is the integral over M of a function $\mathcal{L}(\phi,\partial_\mu\phi,x)$ called the Lagrangian density, depending on φ, its derivative and the position. In other words, for φ in $\mathcal{C}$

$\mathcal{S}[\phi]\,=\,\int_M \mathcal{L}[\phi(x),\partial_\mu\phi(x),x] \mathrm{d}^nx.$

Suppose we are given boundary conditions, i.e., a specification of the value of φ at the boundary if M is compact, or some limit on φ as x approaches ∞. Then the subspace of $\mathcal{C}$ consisting of functions φ such that all functional derivatives of $\mathcal{S}$ at φ are zero, that is:

$\frac{\delta \mathcal{S}[\phi]}{\delta \phi(x)}\approx 0$

and that φ satisfies the given boundary conditions, is the subspace of on shell solutions. (See principle of stationary action.)

Now, suppose we have an infinitesimal transformation on $\mathcal{C}$, generated by a functional derivation, Q, such that

$Q \left[ \int_N \mathcal{L} \, \mathrm{d}^n x \right] \approx \int_{\partial N} f^\mu [\phi(x),\partial\phi,\partial\partial\phi,\ldots] \mathrm{d}s_{\mu}$

for all compact submanifolds N, or in other words,

$Q[\mathcal{L}(x)]\approx\partial_\mu f^\mu(x)$

for all x, where we set $\mathcal{L}(x)=\mathcal{L}[\phi(x), \partial_\mu \phi(x),x].\$ If this holds on shell and off shell, we say Q generates an off-shell symmetry. If this only holds on shell, we say Q generates an on-shell symmetry. Then, we say Q is a generator of a one-parameter symmetry Lie group.
Now, for any N, because of the Euler–Lagrange theorem, on shell (and only on-shell), we have

$Q\left[\int_N \mathcal{L} \, \mathrm{d}^nx \right]$ $=\int_N \left[\frac{\partial\mathcal{L}}{\partial\phi}- \partial_\mu\frac{\partial\mathcal{L}}{\partial(\partial_\mu\phi)}\right]Q[\phi] \, \mathrm{d}^nx + \int_{\partial N} \frac{\partial\mathcal{L}}{\partial(\partial_\mu\phi)}Q[\phi] \, \mathrm{d}s_\mu$ $\approx\int_{\partial N} f^\mu \, \mathrm{d}s_\mu .$

Since this is true for any N, we have

$\partial_\mu\left[\frac{\partial\mathcal{L}}{\partial(\partial_\mu\phi)}Q[\phi]-f^\mu\right]\approx 0.$

But this is the continuity equation for the current $J^\mu\,\!$ defined by:[10]

$J^\mu\,=\,\frac{\partial\mathcal{L}}{\partial(\partial_\mu\phi)}Q[\phi]-f^\mu,$

which is called the Noether current associated with the symmetry. The continuity equation tells us that if we integrate this current over a space-like slice, we get a conserved quantity called the Noether charge (provided, of course, that if M is noncompact, the currents fall off sufficiently fast at infinity).

### Comments

Noether's theorem is an on shell theorem: it relies on use of the equations of motion—the classical path. It reflects the relation between the boundary conditions and the variational principle. Assuming no boundary terms in the action, Noether's theorem implies that

$\int_{\partial N} J^\mu \mathrm{d}s_\mu \approx 0~.$

The quantum analogs of Noether's theorem involving expectation values, e.g. ⟨∫d4x ∂·J⟩=0, which probe off shell quantities as well, are the Ward–Takahashi identities.

### Generalization to Lie algebras

Suppose we have two symmetry derivations Q1 and Q2. Then, [Q1, Q2] is also a symmetry derivation. Let's see this explicitly. Say

$Q_1[\mathcal{L}]\approx\partial_\mu f_1^\mu$

and

$Q_2[\mathcal{L}]\approx\partial_\mu f_2^\mu$

Then,

$[Q_1,Q_2][\mathcal{L}]=Q_1[Q_2[\mathcal{L}]]-Q_2[Q_1[\mathcal{L}]]\approx\partial_\mu f_{12}^\mu$

where $f_{12}^\mu = Q_1[f_2^\mu] - Q_2[f_1^\mu]$. So,

$j_{12}^\mu=\left(\frac{\partial}{\partial (\partial_\mu\phi)}\mathcal{L}\right)(Q_1[Q_2[\phi]]-Q_2[Q_1[\phi]])-f_{12}^\mu.$

This shows we can extend Noether's theorem to larger Lie algebras in a natural way.

### Generalization of the proof

This applies to any local symmetry derivation Q satisfying QS ≈ 0, and also to more general local functional differentiable actions, including ones where the Lagrangian depends on higher derivatives of the fields. Let ε be any arbitrary smooth function of the spacetime (or time) manifold such that the closure of its support is disjoint from the boundary. ε is a test function. Then, because of the variational principle (which does not apply to the boundary, by the way), the derivation distribution q generated by q[ε][Φ(x)] = ε(x)Q[Φ(x)] satisfies q[ε][S] ≈ 0 for any ε, or more compactly, q(x)[S] ≈ 0 for all x not on the boundary (but remember that q(x) is a shorthand for a derivation distribution, not a derivation parametrized by x in general). This is the generalization of Noether's theorem.

To see how the generalization relates to the version given above, assume that the action is the spacetime integral of a Lagrangian that only depends on φ and its first derivatives.
Also, assume

$Q[\mathcal{L}]\approx\partial_\mu f^\mu$

Then,

$\begin{align} q[\epsilon][\mathcal{S}] & = \int q[\epsilon][\mathcal{L}] \, \mathrm{d}^n x \\ & = \int \left\{ \left(\frac{\partial}{\partial \phi}\mathcal{L}\right) \epsilon Q[\phi]+ \left[\frac{\partial}{\partial (\partial_\mu \phi)}\mathcal{L}\right]\partial_\mu(\epsilon Q[\phi]) \right\} \, \mathrm{d}^n x \\ & = \int \left\{ \epsilon Q[\mathcal{L}] + \partial_{\mu}\epsilon \left[\frac{\partial}{\partial \left( \partial_{\mu} \phi\right)} \mathcal{L} \right] Q[\phi] \right\} \, \mathrm{d}^n x \\ & \approx \int \epsilon \partial_\mu \Bigg\{f^\mu-\left[\frac{\partial}{\partial (\partial_\mu\phi)}\mathcal{L}\right]Q[\phi]\Bigg\} \, \mathrm{d}^n x \end{align}$

for all ε. More generally, if the Lagrangian depends on higher derivatives, then

$\partial_\mu\left[f^\mu-\left[\frac{\partial}{\partial (\partial_\mu\phi)}\mathcal{L}\right]Q[\phi]-2\left[\frac{\partial}{\partial (\partial_\mu \partial_\nu \phi)}\mathcal{L}\right]\partial_\nu Q[\phi]+\partial_\nu\left[\left[\frac{\partial}{\partial (\partial_\mu \partial_\nu \phi)}\mathcal{L}\right] Q[\phi]\right]-\,\cdots\right]\approx 0.$

## Examples

### Example 1: Conservation of energy

Consider the specific case of a Newtonian particle of mass m with coordinate x, moving under the influence of a potential V and coordinatized by time t. The action, S, is:

$\begin{align} \mathcal{S}[x] & = \int L[x(t),\dot{x}(t)] \, dt \\ & = \int \left(\frac{m}{2}\sum_{i=1}^3\dot{x}_i^2-V(x(t))\right) \, dt. \end{align}$

The first term in the brackets is the kinetic energy of the particle, whilst the second is its potential energy. Consider the generator of time translations Q = ∂/∂t. In other words, $Q[x(t)]=\dot{x}(t)$. Note that x has an explicit dependence on time, whilst V does not; consequently:

$Q[L]=m \sum_i\dot{x}_i\ddot{x}_i-\sum_i\frac{\partial V(x)}{\partial x_i}\dot{x}_i = \frac{d}{dt}\left[\frac{m}{2}\sum_i\dot{x}_i^2-V(x)\right]$

so we can set

$f=\frac{m}{2} \sum_i\dot{x}_i^2-V(x).$

Then,

$\begin{align} j & = \sum_{i=1}^3\frac{\partial L}{\partial \dot{x}_i}Q[x_i]-f \\ & = m \sum_i\dot{x}_i^2 -\left[\frac{m}{2}\sum_i\dot{x}_i^2 -V(x)\right] \\ & = \frac{m}{2}\sum_i\dot{x}_i^2+V(x). \end{align}$

The right hand side is the energy, and Noether's theorem states that $\dot{j}=0$ (i.e. the principle of conservation of energy is a consequence of invariance under time translations). More generally, if the Lagrangian does not depend explicitly on time, the quantity

$\sum_{i=1}^3 \frac{\partial L}{\partial \dot{x}_i}\dot{x_i}-L$

(called the Hamiltonian) is conserved.

### Example 2: Conservation of center of momentum

Still considering 1-dimensional time, let

$\begin{align} \mathcal{S}[\vec{x}] & = \int \mathcal{L}[\vec{x}(t),\dot{\vec{x}}(t)] \, \mathrm{d}t \\ & = \int \left [\sum^N_{\alpha=1} \frac{m_\alpha}{2}(\dot{\vec{x}}_\alpha)^2 -\sum_{\alpha<\beta} V_{\alpha\beta}(\vec{x}_\beta-\vec{x}_\alpha)\right] \, \mathrm{d}t \end{align}$

i.e. N Newtonian particles where the potential only depends pairwise upon the relative displacement. For $\vec{Q}$, let's consider the generator of Galilean transformations (i.e. a change in the frame of reference). In other words,

$Q_i[x^j_\alpha(t)]=t \delta^j_i. \,$

Note that

$\begin{align} Q_i[\mathcal{L}] & = \sum_\alpha m_\alpha \dot{x}_\alpha^i-\sum_{\alpha<\beta}\partial_i V_{\alpha\beta}(\vec{x}_\beta-\vec{x}_\alpha)(t-t) \\ & = \sum_\alpha m_\alpha \dot{x}_\alpha^i.
\end{align}$ This has the form of $\frac{\mathrm{d}}{\mathrm{d}t}\sum_\alpha m_\alpha x^i_\alpha$ so we can set $\vec{f}=\sum_\alpha m_\alpha \vec{x}_\alpha.$ Then, $\vec{j}=\sum_\alpha \left(\frac{\partial}{\partial \dot{\vec{x}}_\alpha}\mathcal{L}\right)\cdot\vec{Q}[\vec{x}_\alpha]-\vec{f}$ $=\sum_\alpha (m_\alpha \dot{\vec{x}}_\alpha t-m_\alpha \vec{x}_\alpha)$ $=\vec{P}t-M\vec{x}_{CM}$ where $\vec{P}$ is the total momentum, M is the total mass and $\vec{x}_{CM}$ is the center of mass. Noether's theorem states: $\dot{\vec{j}} = 0 \Rightarrow {\vec{P}}-M \dot{\vec{x}}_{CM} = 0.$ ### Example 3: Conformal transformation Both examples 1 and 2 are over a 1-dimensional manifold (time). An example involving spacetime is a conformal transformation of a massless real scalar field with a quartic potential in (3 + 1)-Minkowski spacetime. $\mathcal{S}[\phi]\,$ $=\int \mathcal{L}[\phi (x),\partial_\mu \phi (x)] \, \mathrm{d}^4x$ $=\int \left( \frac{1}{2}\partial^\mu \phi \partial_\mu \phi -\lambda \phi^4\right ) \, \mathrm{d}^4x$ For Q, consider the generator of a spacetime rescaling. In other words, $Q[\phi(x)]=x^\mu\partial_\mu \phi(x)+\phi(x). \!$ The second term on the right hand side is due to the "conformal weight" of φ. Note that $Q[\mathcal{L}]=\partial^\mu\phi\left(\partial_\mu\phi+x^\nu\partial_\mu\partial_\nu\phi+\partial_\mu\phi\right)-4\lambda\phi^3\left(x^\mu\partial_\mu\phi+\phi\right).$ This has the form of $\partial_\mu\left[\frac{1}{2}x^\mu\partial^\nu\phi\partial_\nu\phi-\lambda x^\mu\phi^4\right]=\partial_\mu\left(x^\mu\mathcal{L}\right)$ (where we have performed a change of dummy indices) so set $f^\mu=x^\mu\mathcal{L}.\,$ Then, $j^\mu=\left[\frac{\partial}{\partial (\partial_\mu\phi)}\mathcal{L}\right]Q[\phi]-f^\mu$ $=\partial^\mu\phi\left(x^\nu\partial_\nu\phi+\phi\right)-x^\mu\left(\frac{1}{2}\partial^\nu\phi\partial_\nu\phi-\lambda\phi^4\right).$ Noether's theorem states that $\partial_\mu j^\mu = 0 \!$ (as one may explicitly check by substituting the Euler–Lagrange equations into the left hand side). (Aside: If one tries to find the Ward–Takahashi analog of this equation, one runs into a problem because of anomalies.) ## Applications Application of Noether's theorem allows physicists to gain powerful insights into any general theory in physics, by just analyzing the various transformations that would make the form of the laws involved invariant. For example: • the invariance of physical systems with respect to spatial translation (in other words, that the laws of physics do not vary with locations in space) gives the law of conservation of linear momentum; • invariance with respect to rotation gives the law of conservation of angular momentum; • invariance with respect to time translation gives the well-known law of conservation of energy In quantum field theory, the analog to Noether's theorem, the Ward–Takahashi identity, yields further conservation laws, such as the conservation of electric charge from the invariance with respect to a change in the phase factor of the complex field of the charged particle and the associated gauge of the electric potential and vector potential. The Noether charge is also used in calculating the entropy of stationary black holes.[11] ## Notes 1. Noether E (1918). "Invariante Variationsprobleme". Nachr. D. König. Gesellsch. D. Wiss. Zu Göttingen, Math-phys. Klasse 1918: 235–257. 2. Thompson, W.J. (1994). Angular Momentum: an illustrated guide to rotational symmetries for physical systems 1. Wiley. p. 5. ISBN 0-471-55264-X. 3. 
The term "Noether charge" occurs in Seligman, Group theory and its applications in physics, 1980: Latin American School of Physics, Mexico City, American Institute of Physics, 1981. It comes enters wider use during the 1980s, e.g. by G. Takeda in: Errol Gotsman, Gerald Tauber (eds.) From SU(3) to Gravity: Festschrift in Honor of Yuval Ne'eman, 1985, p. 196. 4. ^ a b 5. Michael E. Peskin, Daniel V. Schroeder (1995). An Introduction to Quantum Field Theory. Basic Books. p. 18. ISBN 0-201-50397-2. 6. Vivek Iyer; Wald (1995). "A comparison of Noether charge and Euclidean methods for Computing the Entropy of Stationary Black Holes". 52 (8): 4430–9. arXiv:gr-qc/9503052. Bibcode:1995PhRvD..52.4430I. doi:10.1103/PhysRevD.52.4430. ## References • Goldstein, H (1980). (2nd ed.). Reading MA: Addison-Wesley. pp. 588–596. ISBN 0-201-02918-9. • Kosmann-Schwarzbach, Yvette (2010). The Noether theorems: Invariance and conservation laws in the twentieth century. Sources and Studies in the History of Mathematics and Physical Sciences. Springer-Verlag. ISBN 978-0-387-87867-6 • Lanczos, C. (1970). The Variational Principles of Mechanics (4th ed.). New York: Dover Publications. pp. 401–5. ISBN 0-486-65067-7. • Olver, Peter (1993). Applications of Lie groups to differential equations. Graduate Texts in Mathematics 107 (2nd ed.). Springer-Verlag. ISBN 0-387-95000-1
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 118, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8973119258880615, "perplexity_flag": "head"}
http://stats.stackexchange.com/questions/41374/is-there-any-identification-assumption-for-iv
# Is there any identification assumption for IV?

I am asking this because I know there are identification assumptions for cross-sectional estimators and difference-in-differences (D-in-D) estimators, but am unsure if there are any for IV estimation. Does anyone know if there is or isn't, and if there is, what would it be?

## 1 Answer

The instrumental variable $Z$ must be:

1. Uncorrelated with the error term (IV exogeneity).
2. Correlated with the regressor $X$ for which it is to serve as instrument (IV relevance).

There is also a third assumption that has to do with what parameter you are trying to estimate. It only applies if there's heterogeneity in the effect of X on Y. Simply put, IV measures the average effect of X on Y caused by wiggling X through Z. If there are 2 types of people who respond differently when X changes, and our instrument only wiggles X for the first type, IV will give you the effect of X on Y for only those folks. OLS would give you a weighted average of the two (assuming you solved the endogeneity problem somehow). If everyone is the same (no heterogeneity), you don't need to worry about this.
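To make the two assumptions concrete, here is a minimal simulation sketch in Python (my own illustration, not from the original answer; all variable names and parameter values are assumptions). It generates data with an endogenous regressor and shows that the simple IV estimator recovers the true coefficient where OLS does not, precisely because $Z$ is exogenous and relevant:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Structural model: y = beta*x + u, with x endogenous (corr(x, u) != 0).
beta = 2.0
u = rng.normal(size=n)
z = rng.normal(size=n)                       # instrument: independent of u (exogeneity)
x = 0.8 * z + 0.5 * u + rng.normal(size=n)   # relevance: x responds to z
y = beta * x + u

# Naive OLS is biased because x is correlated with the error u.
ols = (x @ y) / (x @ x)

# Simple IV estimator: beta_hat = (z'y) / (z'x).
iv = (z @ y) / (z @ x)

print(f"OLS estimate: {ols:.3f} (biased upward)")
print(f"IV  estimate: {iv:.3f} (close to the true beta = {beta})")
```

If the relevance coefficient on `z` is pushed toward zero, the IV estimate becomes very noisy, which is the practical reason the relevance condition matters.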
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9567875266075134, "perplexity_flag": "middle"}
http://terrytao.wordpress.com/tag/orchard-planting-problem/
What’s new

Updates on my research and expository papers, discussion of open problems, and other maths-related topics. By Terence Tao

## On sets defining few ordinary lines

24 August, 2012, by Terence Tao

Ben Green and I have just uploaded to the arXiv our new paper “On sets defining few ordinary lines“, submitted to Discrete and Computational Geometry. This paper asymptotically solves two old questions concerning finite configurations of points ${P}$ in the plane ${{\mathbb R}^2}$. Given a set ${P}$ of ${n}$ points in the plane, define an ordinary line to be a line containing exactly two points of ${P}$. The classical Sylvester-Gallai theorem, first posed as a problem by Sylvester in 1893, asserts that as long as the points of ${P}$ are not all collinear, ${P}$ defines at least one ordinary line. It is then natural to pose the question of what is the minimal number of ordinary lines that a set of ${n}$ non-collinear points can generate. In 1940, Melchior gave an elegant proof of the Sylvester-Gallai theorem based on projective duality and Euler’s formula ${V-E+F=2}$, showing that at least three ordinary lines must be created; in 1951, Motzkin showed that there must be ${\gg n^{1/2}}$ ordinary lines. Prior to this paper, the best lower bound was by Csima and Sawyer, who in 1993 showed that there are at least ${6n/13}$ ordinary lines. In the converse direction, if ${n}$ is even, then by considering ${n/2}$ equally spaced points on a circle, and ${n/2}$ points on the line at infinity in equally spaced directions, one can find a configuration of ${n}$ points that defines just ${n/2}$ ordinary lines. As first observed by Böröczky, variants of this example also give few ordinary lines for odd ${n}$, though not quite as few as ${n/2}$; more precisely, when ${n=1 \hbox{ mod } 4}$ one can find a configuration with ${3(n-1)/4}$ ordinary lines, and when ${n = 3 \hbox{ mod } 4}$ one can find a configuration with ${3(n-3)/4}$ ordinary lines. Our first main result is that these configurations are best possible for sufficiently large ${n}$:

Theorem 1 (Dirac-Motzkin conjecture) If ${n}$ is sufficiently large, then any set of ${n}$ non-collinear points in the plane will define at least ${\lfloor n/2\rfloor}$ ordinary lines. Furthermore, if ${n}$ is odd, at least ${3\lfloor n/4\rfloor}$ ordinary lines must be created.

The Dirac-Motzkin conjecture asserts that the first part of this theorem in fact holds for all ${n}$, not just for sufficiently large ${n}$; in principle, our theorem reduces that conjecture to a finite verification, although our bound for “sufficiently large” is far too poor to actually make this feasible (it is of double exponential type). (There are two known configurations for which one has ${(n-1)/2}$ ordinary lines, one with ${n=7}$ (discovered by Kelly and Moser), and one with ${n=13}$ (discovered by Crowe and McKee).)

Our second main result concerns not the ordinary lines, but rather the ${3}$-rich lines of an ${n}$-point set – lines that meet exactly three points of that set. A simple double counting argument (counting pairs of distinct points in the set in two different ways) shows that there are at most

$\displaystyle \binom{n}{2} / \binom{3}{2} = \frac{1}{6} n^2 - \frac{1}{6} n$

${3}$-rich lines.
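As a concrete illustration of these line counts (this sketch is mine, not from the paper), here is a brute-force Python snippet that tallies, for a finite set of integer points, the number $N_k$ of lines through exactly $k$ of them; all names in it are my own:

```python
from itertools import combinations
from math import gcd

def line_counts(points):
    """Return {k: N_k}, where N_k is the number of lines meeting
    exactly k points of the given integer-coordinate point set."""
    lines = {}
    for (x1, y1), (x2, y2) in combinations(points, 2):
        # Line a*x + b*y + c = 0 through the pair, reduced to lowest terms.
        a, b = y2 - y1, x1 - x2
        c = -(a * x1 + b * y1)
        g = gcd(gcd(abs(a), abs(b)), abs(c))
        a, b, c = a // g, b // g, c // g
        if a < 0 or (a == 0 and b < 0):    # fix the overall sign
            a, b, c = -a, -b, -c
        lines.setdefault((a, b, c), set()).update([(x1, y1), (x2, y2)])
    counts = {}
    for pts in lines.values():
        counts[len(pts)] = counts.get(len(pts), 0) + 1
    return counts

# The 3x3 grid has 8 lines through 3 points and 12 ordinary lines,
# consistent with 8*C(3,2) + 12*C(2,2) = 36 = C(9,2) pairs of points.
print(line_counts([(x, y) for x in range(3) for y in range(3)]))
```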
On the other hand, on an elliptic curve, three distinct points P,Q,R on that curve are collinear precisely when they sum to zero with respect to the group law on that curve. Thus (as observed first by Sylvester in 1868), any finite subgroup of an elliptic curve (of which one can produce numerous examples, as elliptic curves in ${{\mathbb R}^2}$ have the group structure of either ${{\mathbb R}/{\mathbb Z}}$ or ${{\mathbb R}/{\mathbb Z} \times ({\mathbb Z}/2{\mathbb Z})}$) can provide examples of ${n}$-point sets with a large number of ${3}$-rich lines (${\lfloor \frac{1}{6} n^2 - \frac{1}{2} n + 1\rfloor}$, to be precise). One can also shift such a finite subgroup by a third root of unity and obtain a similar example with only one fewer ${3}$-rich line. Sylvester then formally posed the question of determining whether this was best possible. This problem was known as the Orchard planting problem, and was given a more poetic formulation by Jackson in 1821 (nearly fifty years prior to Sylvester!). Our second main result answers this problem affirmatively in the large ${n}$ case:

Theorem 2 (Orchard planting problem) If ${n}$ is sufficiently large, then any set of ${n}$ points in the plane will determine at most ${\lfloor \frac{1}{6} n^2 - \frac{1}{2} n + 1\rfloor}$ ${3}$-rich lines.

Again, our threshold for “sufficiently large” in this theorem is extremely large (though slightly less large than in the previous theorem), and so a full solution of the problem, while in principle reduced to a finitary computation, remains infeasible at present. Our results also classify the extremisers (and near extremisers) for both of these problems; basically, the known examples mentioned previously are (up to projective transformation) the only extremisers when ${n}$ is sufficiently large.

Our proof strategy follows the “inverse theorem method” from additive combinatorics. Namely, rather than try to prove direct theorems such as lower bounds on the number of ordinary lines, or upper bounds on the number of ${3}$-rich lines, we instead try to prove inverse theorems (also known as structure theorems), in which one attempts a structural classification of all configurations with very few ordinary lines (or very many ${3}$-rich lines). In principle, once one has a sufficiently explicit structural description of these sets, one simply has to compute the precise number of ordinary lines or ${3}$-rich lines in each configuration in the list provided by that structural description in order to obtain results such as the two theorems above.

Note from double counting that sets with many ${3}$-rich lines will necessarily have few ordinary lines. Indeed, if we let ${N_k}$ denote the number of lines that meet exactly ${k}$ points of an ${n}$-point configuration, so that ${N_3}$ is the number of ${3}$-rich lines and ${N_2}$ is the number of ordinary lines, then we have the double counting identity

$\displaystyle \sum_{k=2}^n \binom{k}{2} N_k = \binom{n}{2}$

which among other things implies that any counterexample to the orchard problem can have at most ${n+O(1)}$ ordinary lines. In particular, any structural theorem that lets us understand configurations with ${O(n)}$ ordinary lines will, in principle, allow us to obtain results such as the above two theorems. As it turns out, we do eventually obtain a structure theorem that is strong enough to achieve these aims, but it is difficult to prove this theorem directly.
Instead we proceed more iteratively, beginning with a “cheap” structure theorem that is relatively easy to prove but provides only a partial amount of control on the configurations with ${O(n)}$ ordinary lines. One then builds upon that theorem with additional arguments to obtain an “intermediate” structure theorem that gives better control, then a “weak” structure theorem that gives even more control, a “strong” structure theorem that gives yet more control, and then finally a “very strong” structure theorem that gives an almost complete description of the configurations (but only in the asymptotic regime when ${n}$ is very large). It turns out that the “weak” theorem is enough for the orchard planting problem, and the “strong” version is enough for the Dirac-Motzkin conjecture. (So the “very strong” structure theorem ends up being unnecessary for the two applications given, but may be of interest for other applications.) Note that the stronger theorems do not completely supersede the weaker ones, because the quantitative bounds in the theorems get progressively worse as the control gets stronger.

Before we state these structure theorems, note that all the examples mentioned previously of sets with few ordinary lines involved cubic curves: either irreducible examples such as elliptic curves, or reducible examples such as the union of a circle (or more generally, a conic section) and a line. (We also allow singular cubic curves, such as the union of a conic section and a tangent line, or a singular irreducible curve such as ${\{ (x,y): y^2 = x^3 \}}$.) This turns out to be no coincidence; cubic curves happen to be very good at providing many ${3}$-rich lines (and thus, few ordinary lines), and conversely it turns out that they are essentially the only way to produce such lines. This can already be evidenced by our cheap structure theorem:

Theorem 3 (Cheap structure theorem) Let ${P}$ be a configuration of ${n}$ points with at most ${Kn}$ ordinary lines for some ${K \geq 1}$. Then ${P}$ can be covered by at most ${500K}$ cubic curves.

This theorem is already a non-trivial amount of control on sets with few ordinary lines, but because the result does not specify the nature of these curves, and how they interact with each other, it does not seem to be directly useful for applications. The intermediate structure theorem given below gives a more precise amount of control on these curves (essentially guaranteeing that all but at most one of the curve components are lines):

Theorem 4 (Intermediate structure theorem) Let ${P}$ be a configuration of ${n}$ points with at most ${Kn}$ ordinary lines for some ${K \geq 1}$. Then one of the following is true:

1. ${P}$ lies on the union of an irreducible cubic curve and an additional ${O(K^{O(1)})}$ points.
2. ${P}$ lies on the union of an irreducible conic section and an additional ${O(K^{O(1)})}$ lines, with ${n/2 + O(K^{O(1)})}$ of the points of ${P}$ in either of the two components.
3. ${P}$ lies on the union of ${O(K)}$ lines and an additional ${O(K^{O(1)})}$ points.
By some additional arguments (including a very nice argument supplied to us by Luke Alexander Betts, an undergraduate at Cambridge, which replaces a much more complicated (and weaker) argument we originally had for this paper), one can cut down the number of lines in the above theorem to just one, giving a more useful structure theorem, at least when ${n}$ is large:

Theorem 5 (Weak structure theorem) Let ${P}$ be a configuration of ${n}$ points with at most ${Kn}$ ordinary lines for some ${K \geq 1}$. Assume that ${n \geq \exp(\exp(CK^C))}$ for some sufficiently large absolute constant ${C}$. Then one of the following is true:

1. ${P}$ lies on the union of an irreducible cubic curve and an additional ${O(K^{O(1)})}$ points.
2. ${P}$ lies on the union of an irreducible conic section, a line, and an additional ${O(K^{O(1)})}$ points, with ${n/2 + O(K^{O(1)})}$ of the points of ${P}$ in either of the first two components.
3. ${P}$ lies on the union of a single line and an additional ${O(K^{O(1)})}$ points.

As mentioned earlier, this theorem is already strong enough to resolve the orchard planting problem for large ${n}$. The presence of the double exponential here is extremely annoying, and is the main reason why the final thresholds for “sufficiently large” in our results are excessively large, but our methods seem to be unable to eliminate these exponentials from our bounds (though they can fortunately be confined to a lower bound for ${n}$, keeping the other bounds in the theorem polynomial in ${K}$). For the Dirac-Motzkin conjecture one needs more precise control on the portion of ${P}$ on the various low-degree curves indicated. This is given by the following result:

Theorem 6 (Strong structure theorem) Let ${P}$ be a configuration of ${n}$ points with at most ${Kn}$ ordinary lines for some ${K \geq 1}$. Assume that ${n \geq \exp(\exp(CK^C))}$ for some sufficiently large absolute constant ${C}$. Then, after adding or deleting ${O(K^{O(1)})}$ points from ${P}$ if necessary (modifying ${n}$ appropriately), and then applying a projective transformation, one of the following is true:

1. ${P}$ is a finite subgroup of an elliptic curve (EDIT: as pointed out in comments, one also needs to allow for finite subgroups of acnodal singular cubic curves), possibly shifted by a third root of unity.
2. ${P}$ is the Böröczky example mentioned previously (the union of ${n/2}$ equally spaced points on the circle, and ${n/2}$ points on the line at infinity).
3. ${P}$ lies on a single line.

By applying a final “cleanup” we can replace the ${O(K^{O(1)})}$ in the above theorem with the optimal ${O(K)}$, which is our “very strong” structure theorem. But the strong structure theorem is already sufficient to establish the Dirac-Motzkin conjecture for large ${n}$. There are many tools that go into proving these theorems, some of which are extremely classical (with at least one going back to the ancient Greeks), and others being more recent. I will discuss some (not all) of these tools below the fold, and more specifically:

1. Melchior’s argument, based on projective duality and Euler’s formula, initially used to prove the Sylvester-Gallai theorem;
2. Chasles’ version of the Cayley-Bacharach theorem, which can convert dual triangular grids (produced by Melchior’s argument) into cubic curves that meet many points of the original configuration ${P}$;
3.
Menelaus’s theorem, which is useful for producing ordinary lines when the point configuration lies on a few non-concurrent lines, particularly when combined with a sum-product estimate of Elekes, Nathanson, and Ruzsa; 4. Betts’ argument, that produces ordinary lines when the point configuration lies on a few concurrent lines; 5. A result of Poonen and Rubinstein that any point not at the origin or on the unit circle can lie on at most seven chords connecting roots of unity; this, together with a variant for elliptic curves, gives the very strong structure theorem, and is also (a strong version of) what is needed to finish off the Dirac-Motzkin and orchard planting problems from the structure theorems given above. There are also a number of more standard tools from arithmetic combinatorics (e.g. a version of the Balog-Szemeredi-Gowers lemma) which are needed to tie things together at various junctures, but I won’t say more about these methods here as they are (by now) relatively routine.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 124, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9399155974388123, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/tagged/interactions
# Tagged Questions The interactions tag has no wiki summary. 1answer 62 views ### How fair is it to say that all chemistry arises from failures of the ideal gas law? I was reading here about how the ideal gas law assumes point masses and non-interaction. Is it fair to say that all chemistry arises from failures of that? Of course, such a sweeping generalization ... 3answers 69 views ### What is the cause the light is affected by gravity? [duplicate] I know that photons have no mass and that a photons exist only moving at the speed of light. So what is the cause that a massive astronomical object can bend a ray of light? I have two thoughts, but I ... 1answer 66 views ### Interacting particles We are familiar with the grand partition function for the grand canonical ensemble. This makes me wonder: what kinds of modifications would be required if the particles interacted? Thanks. 1answer 70 views ### Gell-Mann Low Theorem and Vacuum Energy I know that the sum of vacuum bubbles can be related to the Vacuum energy, but I'm trying to understand how this follows from the Gell-Mann Low theorem/equation. My question will use equations from ... 1answer 260 views ### How to measure a solid-solid surface energy? Many techniques exist to measure the surface energy between a liquid and a liquid or a liquid and a gas (see e.g. the wiki page). Methods to measure the surface energy between a solid and a fluid are ... 1answer 54 views ### Strong interaction and the Lagrangian for electromagnetic interaction The Lagrangian for electromagnetic field has the following expression: $$L = -\frac{1}{c^{2}}A_{\alpha}j^{\alpha} - \frac{1}{8 \pi c}(\partial_{\alpha} A_{\beta})(\partial^{\alpha}A^{\beta})$$ (I ... 0answers 37 views ### Rotating Frame with degenerate levels I'm working with a angular momentum transition J=0 -> J=1 with no applied magnetic field; so, the upper level has degeneracy 3. This atom is coupled with an electric field propagatin in the ... 0answers 69 views ### Range of forces from mass of force carrier? Why is $\frac{\hbar}{mc}$ a good estimate of the range of the four forces, where $m$ is the mass of the carrier particle of the force? Inputting the pion mass gives $1.4\ \mathrm{fm}$ for the hadronic ... 1answer 91 views ### A strange particle, $X$, decays in the following way: $X → π^– + p$. State what interaction is involved in this decay A strange particle, $X$, decays in the following way: $X → π^– + p$. State what interaction is involved in this decay. I know the answer to be weak interaction, but why is it weak interaction? What ... 1answer 176 views ### How are forces related to decays? How are decays related to forces, what is meant by particle X decays through the, say, strong force? The way I understand forces is by how they change the acceleration of particles with the right ... 1answer 252 views ### Interpretation of derivative interaction term in QFT I am trying to understand what a term like $$\mathcal{L}_{int} = (\partial^{\mu}A )^2 B^2$$ with $A$ and $B$ being scalar fields for instance means. I understand how to draw an interaction term in ... 2answers 116 views ### Interpretation of an “interaction” term In QFT a polynomial (of degree >2) in the fields is said to be an interaction term, Ex.: $\lambda\phi^4$. Question Is it possible to give an interpretation to terms like $\frac{1}{\phi^n}$? (for ... 1answer 176 views ### How does the dressed Klein-Gordon propagator look in position space? 
The free Klein-Gordon propagator in momentum space $\sim (p^2-m^2+i\epsilon)^{-1}$ has just a single pole at $p^2=m^2$. The passage to Fourier space is difficult but possible. The result is very ... 3answers 243 views ### Long/short-range interaction A potential of the form $r^{-n}$ is often considered long-range, while one that decays exponentially is considered short-range. Is this characterization simply relative/conventional, or is there a ... 2answers 308 views ### Why $\lambda\phi^4$ theory, where $\lambda>0$, is not bounded from below? Why the following interaction, in QFT, $$\displaystyle{\cal L}_{\rm int} ~=~\frac{\lambda}{4!}\phi^4$$ where $\lambda$ is positive, represents a theory that is unstable (or unbounded from below as it ... 2answers 154 views ### Interaction speed between electric charges and magnetic materials Einstein said that the speed of a matter in universe cannot exceed the speed of light. Is it correct for electric force transmission speed from one electric charge to other one? What is ... 1answer 141 views ### Interacting system and relaxation times I got a question I'm not sure how to state precisely or is it even valid. Any help is most welcomed. I stripped the question of all details because I wanted to emphasize my problem, but should ... 7answers 354 views ### Macroscopic laws which haven't been derived from microscopic laws Can you think of examples where a macroscopic law coexists with a fully known microscopic law, but the former hasn't been derived from the latter (yet)? Or maybe a rule of thumb, which works but ... 1answer 93 views ### Intuitive picture for spin-fluctuations contribution to specific heat of He3 Usually when discussing Fermi liquid theory, it is stated that due to the quasiparticles effectively behaving like a free electron gas with effective mass, the specific heat is linear in $T$ at small ... 3answers 322 views ### How do magnets work? I've read a classbook on the field theory (including EM): it perfectly describes quantitive patterns in EM-theory, but I have no luck understanding how and why it works. I mean, magnetic substances ... 2answers 115 views ### Interacting classical strings? May classical strings be interacting? I would guess no, I can not see any way to break a classical closed string in two of them (the "pants" diagram); but maybe I'm missing something. 1answer 103 views ### What's the meaning of the coupling change after a renormalization (in the 1-dim Ising Model)? What does it mean that after the theory (1-dim Ising model here, but the question is general) is renormalized one time and $g_i\rightarrow g_i'$, that the couplings are weaker, even if the theory ... 0answers 42 views ### What sources can you recommend to understand the basics of the Coulomb interaction of particles [closed] Maybe it's not the best place for this issue yet. I am in graduate school. Field of knowledge of my supervisor - quantum physics - or rather, he examines the interaction of particles. Sources which ... 1answer 57 views ### Expectation values of interacting fields I was motivated to ask this question by the equality claimed in equation 10.3.3 of Weinberg's volume 1 of QFT books. My interpretation of that, If $O_s$ is a quantum field of spin $s$, $\psi_s$ is ... 2answers 200 views ### Lorentz transformation in light cone coordinates in string theory What is the explicit form of the Lorentz transformation changing the light cone coordinates in the light cone gauge in string theory? 
The extended nature of the strings complicate matters, especially ... 4answers 924 views ### How can I explain why the weak nuclear interaction between individual nucleons is 'weak'? By considering the energy-time uncertainty principle, estimate the range of the weak nuclear interaction at low energies. Compare this range to the size of a typical nucleon (for example, a proton) ... 2answers 749 views ### How do Leptons arise from Lambda decay? I have a question for an assignment: Use your understanding of the quark model of hadrons and the boson model of the weak nuclear interaction to explain how leptons can arise from lambda decay, ... 3answers 294 views ### Fermionic interaction potentials Are there any examples of fermionic particles or quasiparticles for which the interaction potential is a globally smooth function? i.e. no singularities or branch points. As an example, in Flügge's ... 3answers 681 views ### What is physical in the principle of local gauge invariance? [closed] Modern theories of interactions in particle physics are gauge ones. I know how the gauge fields are introduced in equations ($D = \partial + A$). I just do not see any physical motivation in it. I am ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9262052774429321, "perplexity_flag": "middle"}
http://mathhelpforum.com/trigonometry/87231-trigonometric-identity-print.html
# trigonometric identity

• May 3rd 2009, 05:29 PM
i_cow044

Hello, I was wondering if anyone could help me with this identity: cosθ + sinθtanθ = 1/cosθ. I've only started learning these and I'm already stuck on this one. I cannot figure it out; could someone show me the solution perhaps, or a nudge in the right direction? Thank you.

• May 3rd 2009, 05:35 PM
TheEmptySet

Quote: Originally Posted by i_cow044

Hint: write $\tan(\theta)=\frac{\sin(\theta)}{\cos(\theta)}$ and get a common denominator on the left hand side.
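Spelling out that hint (this completion is mine, not part of the original thread):

$\cos\theta + \sin\theta\tan\theta = \cos\theta + \frac{\sin^2\theta}{\cos\theta} = \frac{\cos^2\theta + \sin^2\theta}{\cos\theta} = \frac{1}{\cos\theta},$

where the last step uses the Pythagorean identity $\cos^2\theta + \sin^2\theta = 1$.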
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9485315084457397, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/211352/sum-of-the-alternating-series-sum-n-0-infty-1n-n1-n/211446
sum of the alternating series $\sum_{n=0}^\infty (-1)^n (n+1)/n!$

Is there a nice way to see that the series $$\sum_{n=0}^\infty (-1)^n (n+1)/n!$$ converges to $0$? I just did the computation numerically. Thanks.

Hint: compute $\sum_{n=1}^N (-1)^n (n+1)/n!$. To this end, write $\sum_{n=1}^N (-1)^n (n+1)/n!$ as $\sum_{n=1}^N (-1)^n \cdot n/n! + \sum_{n=1}^N (-1)^n \cdot 1/n!$ – Yury Oct 11 '12 at 23:28

2 Answers

$$\eqalign{ & \sum\limits_{n = 0}^\infty {{{( - 1)}^n}} \frac{{n + 1}}{{n!}} = \cr & = \sum\limits_{n = 0}^\infty {\frac{{{{( - 1)}^n}n}}{{n!}}} + \sum\limits_{n = 0}^\infty {\frac{{{{( - 1)}^n}}}{{n!}}} = \cr & = \sum\limits_{n = 1}^\infty {\frac{{{{( - 1)}^n}}}{{\left( {n - 1} \right)!}}} + \sum\limits_{n = 0}^\infty {\frac{{{{( - 1)}^n}}}{{n!}}} = \cr & = \sum\limits_{n = 0}^\infty {\frac{{{{( - 1)}^{n + 1}}}}{{n!}}} + \sum\limits_{n = 0}^\infty {\frac{{{{( - 1)}^n}}}{{n!}}} = - \sum\limits_{n = 0}^\infty {\frac{{{{( - 1)}^n}}}{{n!}}} + \sum\limits_{n = 0}^\infty {\frac{{{{( - 1)}^n}}}{{n!}}} = 0 \cr}$$

pretty cool actually) – Alex Oct 12 '12 at 1:02

First notice that $$e^{-x}=\sum_{n=0}^\infty \frac{(-1)^nx^n}{n!}.$$ If we multiply this by $x$, we get $$xe^{-x}=\sum_{n=0}^\infty \frac{(-1)^nx^{n+1}}{n!}.$$ Now if we differentiate, we get $$e^{-x}-xe^{-x}=\sum_{n=0}^\infty \frac{(-1)^n (n+1)x^{n}}{n!}.$$ We now set $x=1$ to get $$0=e^{-1}-e^{-1}=\sum_{n=0}^\infty \frac{(-1)^n (n+1)}{n!}.$$
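As a quick numerical sanity check (my addition, not part of the original thread), the partial sums do collapse to zero:

```python
from math import factorial

# Partial sum of the first 20 terms of the series.
partial = sum((-1) ** n * (n + 1) / factorial(n) for n in range(20))
print(partial)  # ~ 1e-16, i.e. 0 up to floating-point rounding
```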
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8949739933013916, "perplexity_flag": "head"}
http://nrich.maths.org/7498/index?nomenu=1
## 'Walking the Squares' printed from http://nrich.maths.org/

So here's a special square of tiles to walk on! The black square in the middle contains some very special prizes. You can get the prizes by collecting tokens which are on each tile. So you have to step on as many tiles as possible. BUT You cannot go onto any tile more than once. You are not allowed to step on more than two tiles of the same colour one after another. So this path would be OK. But this path is not allowed. Why? Because the path goes along $3$ blue tiles in a row, which is not allowed; the blue to green is OK, but then there are $4$ green tiles next to each other on this path, and that is also not allowed. You can start anywhere on the outside, like in the OK example above.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9518290162086487, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/27175/analog-hawking-radiation
# Analog Hawking radiation

I am confused by most discussions of analog Hawking radiation in fluids (see, for example, the recent experimental result of Weinfurtner et al. Phys. Rev. Lett. 106, 021302 (2011), arXiv:1008.1911). The starting point of these discussions is the observation that the equations of motion for fluctuations around stationary solutions of the Euler equation have the same mathematical structure as the wave equation in curved space (there is a fluid metric $g_{ij}$ determined by the background flow). This background metric can have sonic horizons. The sonic horizons can be characterized by an associated surface gravity $\kappa$, and analog Hawking temperature $T_H \sim \kappa\hbar/c_s$.

My main question is this: Why would $T_H$ be relevant? Corrections to the Euler flow are not determined by quantizing small oscillations around the classical flow. Instead, hydrodynamics is an effective theory, and corrections arise from higher order terms in the derivative expansion (the Navier-Stokes, Burnett, super-Burnett terms), and from thermal fluctuations. Thermal fluctuations are governed by a linearized hydro theory with Langevin forces, but the strength of the noise terms is governed by the physical temperature, not by Planck's constant.

A practical question is: In practice $T_H$ is very small (because it is proportional to $\hbar$). How can you claim to measure thermal radiation at a temperature $T_H \ll T$?

Great question! – user566 Jan 6 '12 at 22:08

@Thomas, I went ahead and merged both accounts, to avoid confusion, hope that is OK with you. – user566 Jan 6 '12 at 23:12

@Thomas: I used to be puzzled by this a bit when I was doing stuff on it. I think I eventually convinced myself that it's because the thermal state you get is actually "classical", in that e.g. as a Wigner distribution it's always positive. That turns out to be the weird thing about Unruh/Hawking radiation --- the form is actually not very quantum at all. – genneth Jan 10 '12 at 13:28

If you have radiation at a rate proportional to $\hbar$, how can it not be a quantum effect? – Thomas Jan 12 '12 at 4:09

@FrédéricGrosshans: and, of course, blackbody radiation was how quantum mechanics was originally discovered in the first place. – Jerry Schirmer May 23 '12 at 14:07

## 2 Answers

The conditions for the existence of the Hawking effect are described in classical terms, i.e. you need

1) A Lorentz signature metric
2) A horizon (given, for example, by space flowing into a BH faster than the speed of light, or fluid flowing downstream faster than the speed of sound)
3) Surface gravity at the horizon

Those conditions are then applied to a quantum field which satisfies a wave equation (e.g. Klein Gordon field on spacetime). The standard analysis proceeds by treating photon paths as null geodesics (eikonal approximation). The acoustic analogue of this is that sound waves (the quantum excitations of which are phonons) follow null geodesics in the Lorentzian acoustic metric. So my guess is that the reason for the presence of Planck's constant in the Hawking temperature expression in the acoustic case is that they're treating Hawking radiation in the phonon (quantum) field.
Indeed, Visser says an acoustic event horizon will emit Hawking radiation in the form of a thermal bath of phonons at a temperature $$kT_H=\frac{\hbar g_H}{2\pi c}$$ Here $g_H$ is the acoustic surface gravity and $c$ is the speed of sound.

I remember attending a seminar by Unruh a few months ago, and the same question arose. As far as I remember, he emphasized that in these hydrodynamic analogs of black holes the flow is not quantized (it is a classical fluid and everything is classical), and that the dumb hole behaves like a quantum amplifier emitting quantum noise from the horizon. Calculations indicate that the spectrum of these outgoing phonons must be "thermal" (as in the black hole case) and a small temperature can be associated with it. I would not call this temperature "quantum", even if it is proportional to $\hbar$, since it can be completely determined from classical attributes of the fluid (speed of flow, adiabatic properties of the fluid, etc.) which can be measured classically.

Looking at the experimental paper, I don't see anywhere a direct measurement of temperature. They don't stick a thermometer in the water stream (for obvious reasons). They just verify that the dumb hole seems to have a "thermal character" while acting on surface waves: they measure the amplitude of ingoing and outgoing waves and find that their squared ratios follow the Boltzmann distribution in the frequency, which agrees with Hawking radiation.
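To put a rough number on the practical question above: plugging laboratory-scale values into Visser's formula shows just how tiny the predicted temperature is. The parameter values below are my own order-of-magnitude assumptions, not figures taken from the experiment:

```python
from math import pi

hbar = 1.054571817e-34   # reduced Planck constant, J*s
kB = 1.380649e-23        # Boltzmann constant, J/K

# Assumed, order-of-magnitude parameters for a water-tank "dumb hole":
g_H = 1.0   # acoustic surface gravity, m/s^2 (assumption)
c = 0.2     # propagation speed of the surface waves, m/s (assumption)

T_H = hbar * g_H / (2 * pi * c * kB)
print(f"T_H ~ {T_H:.1e} K")   # ~ 6e-12 K, vastly below the ~300 K of the water
```

This is consistent with why the experiment infers the thermal character from amplitude ratios rather than from any direct temperature measurement.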
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9343391060829163, "perplexity_flag": "middle"}
http://cstheory.stackexchange.com/questions/tagged/quantum-information
# Tagged Questions Theoretical issues related to the quantum treatment of information 1answer 85 views ### Why spectral norms are used for computing the complexity of adiabatic Hamiltonian? In the context of adiabatic quantum computation the spectral norm was first used in the first adiabatic paper by Farhi et. al. when he demonstrated the relation of it to the conventional quantum ... 2answers 388 views ### Is adiabatic quantum computing as powerful as qubit computing? Much of quantum computing literature focuses on qubit-based computation. Adiabatic quantum computing is not based on qubits. I am looking for insight into any of the following. Is adiabatic quantum ... 1answer 142 views ### 1st & 2nd quantization from TCS Last year I attended Scott Aaronson's talk Hawking Quantum Wares at the Classical Complexity Bazaar. Being intrigued by his argument that "[e]ven if quantum mechanics hadn't existed, theoretical ... 1answer 88 views ### Finding all solutions by Grover search(not superposition) When there are multiple marked elements, grover search provides only superposition of them. If I want to find all the marked elements, not superposition, I could try this: 1) Do Grover search, get ... 1answer 162 views ### Polynomial speedups with algorithms based on semidefinite programming This is a followup of a recent question asked by A. Pal: Solving semidefinite programs in polynomial time. I am still puzzling over the actual running time of algorithms that compute the solution of ... 1answer 275 views ### Is quantum annealing faster than simulated annealing/genetic/other state-of-the-art optimization algorithms? Forgive me wise men for my simple words, for I am but a noob. There's the idea of quantum annealing being used to solve optimization problems in terms of a QUBO problem for D-Wave's quantum ... 1answer 298 views ### Using MATLAB's CVX Package for Semidefinite Programming in Quantum Information I'm attempting to formulate the semidefinite programs used in the paper "Hedging Bets with Correlated Quantum Strategies" (specifically those on page 7) into CVX so that I can play around with the ... 2answers 149 views ### Largest set allowing one-step unstructured quantum search What is the largest set admitting a deterministic quantum search algorithm, for a single marked element, that operates with only a single call to the oracle? The question is interesting since ... 0answers 99 views ### Threshold for non-zero quantum capacity of depolarizing channels In "Quantum-channel capacity of very noisy channels", DiVincenzo, Shor and Smolin showed that it is possible to perform quantum communication over depolarizing channels provided that the fidelity was ... 1answer 134 views ### Lower bounds on $Q_{\epsilon}(IP)$ I want to show that $Q_{\epsilon}(IP) \geq (1-O(\epsilon))n$, where $IP:\{0,1\}^n \times \{0,1\}^n \rightarrow \{0,1\}$ is the usual mod 2 inner product. I have Nayak's lower bound, but I am not sure ... 1answer 137 views ### Communicating a string of zeros and ones quantumly Alice wants to communicate an arbitrary $x \in \{0 ,1\}^n$ to Bob. Alice and Bob communicate in rounds, in each round Alice (or Bob) applies a unitary transformation on his/her part and transmits a ... 1answer 252 views ### Distinguishing between $N$ quantum states Given a quantum state $\rho_A$ chosen uniformly at random from a set of $N$ mixed states $\rho_1 ... \rho_N$, what is the maximum average probability of correctly identifying $A$? This problem can be ... 
1answer 122 views ### Known properties of a specific class of quantum states Recently, I have been studying a quantum protocol for the "Hidden Matching" problem that makes use of states that can be expressed as \$|\psi\rangle=\frac{1}{\sqrt{n}}\sum_{i=1}^n ... 1answer 145 views ### A promise problem to decide whether two given pure quantum states are close or far apart Consider this problem in quantum cryptography: We have two pure states $\phi_1,\phi_2$ as input and constants $0 \leq \alpha <\beta \leq 1$, where "Yes instances" are those for which ... 3answers 271 views ### Complexity of optimization over unitary group What is the computational complexity of optimizing various functions over the unitary group $\mathcal{U}(n)$? A typical task, arising often in quantum information theory, would be maximizing a ... 1answer 118 views ### Best method of Error Correction in Quantum Key Distribution As far as I can tell, almost all implementations of QKD use Brassard and Salvail's CASCADE algorithm for error correction. Is this really the best known method of correcting errors in a shared ... 1answer 89 views ### Quantum capacity for ensemble of Pauli channels In Preskill's quantum computing notes Chapter 7 approximate page 82, he shows that a Pauli channel has capacity $Q \geq 1-H(p_I,p_X,p_Y,p_Z)$ where $H$ is Shannon entropy and $p_I, p_X, p_Y, p_Z$ are ... 1answer 24 views ### Optimal measurement for MUBs Let $\mathcal{B} = \{B_1, \dots, B_k\}$ be a set of Mutually Unbiased Bases (MUB) in $\mathbb{C}^n$, i.e. each $B_i$ is an orthonormal basis and for $v \in B_i, w \in B_j, i \neq j$ we have \$|\langle ... 0answers 157 views ### Do the quantum communication complexity lower bounds hold when parties can send a “duplicated” qubits? This question continues from the previous question where I mistakenly asked a question that is too general. In quantum communication complexity, we always assume that Alice and Bob have unlimited ... 2answers 253 views ### Are Alice and Bob allowed to copy qubits in quantum communication complexity model? In quantum communication complexity, we always assume that Alice and Bob have unlimited computational power and are still prove lower bounds such as the $\Omega(n)$ lower bounds of parity. What ... 4answers 246 views ### Master Equations and Operator Sum Form I'm more of a quantum optics guy than a quantum info guy, and deal mainly in master equations. I'm interested in operator-sum form, and I'd like to derive the errors in this form for a small quantum ... 0answers 87 views ### Non-tomographical certification of projectors, using product states? I'm interested in operational ways of demonstrating (with high probability of confidence, in an error-free setting) that a POVM operator on n-qubit states is a projector. Specifically, I'm interested ... 1answer 228 views ### Proof that Entanglement Cannot Increase the Capacity of a Noiseless Classical Channel I am aware that quantum entanglement cannot increase the asymptotic capacity of a noiseless classical channel. However, can anyone provide some type of reference in the literature that contains a ... 2answers 289 views ### Nonlocal Games and Quantum Communication I'm currently on the look out for some good reference material relating non-local games with beneficial aspects in quantum communication. For instance, I am aware that non-local games are good at ... 
4answers 347 views ### Quantum Bell-Type Inequalities I'm curious if someone could recommend some supplementary material for gaining a deeper understanding of the paper : "Some Results and Problems on Quantum Bell-Type Inequalities - Tsirelson". ... 1answer 184 views ### Quantum Channel Decoding Let a quantum channel $\Phi(\cdot)$ between two Hilbert spaces $\mathcal{H}_{in}$ and $\mathcal{H}_{out}$. What is the quantum channel $\Phi_{inv}(\cdot)$ that best reverses $\Phi(\cdot)$ ? \$\forall ... 1answer 472 views ### Does cryptography have an inherent thermodynamic cost? Reversible computing is a computational model that only allows thermodynamically reversible operations. According to Landauer's principle, which states that erasing a bit of information releases \$kT ... 0answers 198 views ### Approximation of Quantum Channels Background: In quantum information theory, a wide class of processes acting on stochastic quantum states can be described using the formalism of Quantum Channels: A quantum channel is a linear, ... 2answers 204 views ### Polynomial algorithms for UPB (Unextendable Product Bases) Consider a Hilbert space $H = H_1 \otimes \dots \otimes H_n$. An Unextendable Product Basis (UPB) is a set of product vectors \$\vert v_i \rangle = \vert v_i^1 \rangle \otimes \dots \otimes \vert v_i^n ... 3answers 903 views ### Is there any connection between the diamond norm and the distance of the associated states? In quantum information theory, the distance between two quantum channels is often measured using the diamond norm. There are also a number of ways to measure distance between two quantum states, such ... 6answers 5k views ### Universities for Quantum Computing / Information? Which universities have a strong quantum computing curriculum, and offer some type of quantum computing/information courses/research? The aim here is to collect a useful list for someone considering ... 2answers 795 views ### Does the trace norm of the difference of two density matrices being one imply these two density matrices can be simultaneously diagonalizable? I believe the answer to this question is well-known; but, unfortunately, I don't know. In quantum computing, we know that mixed states are represented by density matrices. And the trace norm of the ... 7answers 917 views ### Quantum Computation - Postulates of QM I have just started (independent) learning about quantum computation in general from Nielsen-Chuang book. I wanted to ask if anyone could try finding time to help me with whats going on with the ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 26, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9049215912818909, "perplexity_flag": "middle"}
http://crypto.stackexchange.com/questions/2816/how-secure-is-my-otp-program?answertab=oldest
# How secure is my OTP program?

I'm writing a One-Time Pad encryption program, because I got really interested in the idea of "encryption which has been proven to be impossible to crack if used correctly". I'm writing the program just for fun, as programming and cryptography are my hobbies. Therefore, I don't expect the program to be used for highly vulnerable data. I just wonder how secure my program is, and what I can improve (assuming that the hacker doesn't have any access to the key file). The program works as follows:

1. Make a list of 50 numbers from the mouse movement coordinates at the start of the program.
2. The number list is then the seed to the ISAAC random number generator (more info here).
3. A key file is generated based on the size of the file to be encrypted. The key file is generated from random numbers from the ISAAC generator.
4. The file to be encrypted and the generated key file are XOR'ed and saved to a new file: the cipher file.
5. Deciphering happens by XOR'ing the key file and cipher file, and you get the plain text file.

You are implementing a "stream cipher" and not the One Time Pad. The OTP requires the full pad to be completely random, which is not the case here. – MartinSuecia Jun 6 '12 at 8:30

## 1 Answer

The perfect security of OTP hinges on the fact that keys must be chosen truly at random and uniformly from the domain of all possible keys, i.e. all bitstrings of a certain length. The problem with your approach is that you use a pseudorandom number generator to generate the key. It does not matter how good the generator is, because the entropy that can be used to generate the key is limited by the seed you use. Let's assume that the 50 numbers you use are really random and distributed uniformly, and that is at least debatable for mouse movement. If you use 50 numbers in some range, let's say between $0$ and $x-1$, then for files of any size, you only ever produce at most $x^{50}$ different keys. Obviously, for large enough files, this is much smaller than the total number of all possible keys and therefore your perfect security no longer holds. An attack would for example consist of deciding which of two messages $m_1,m_2$ is encrypted in a ciphertext (your basic indistinguishability game). Keep in mind that for perfect security the runtime of the adversary is unbounded. That means that $\mathcal{A}$ could enumerate all $x^{50}$ possible keys and check if any of those decrypts the ciphertext to one of the two messages. This works, basically, because the number of possible keys is much smaller than it should be, and the chance that the ciphertext could also be decrypted to the other message is very small (for large enough messages).

Thanks for the nice and detailed answer. – Janman Jun 6 '12 at 8:53

So if I understand this right, I have to make an infinitely large list of numbers to use as a seed? That is without question a problem, even on the most high-end systems nowadays, so I guess there's no way around it. Is there an efficient way to seed a RNG with enough numbers to actually make the algorithm secure? – Janman Jun 6 '12 at 9:00
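To illustrate the fix implied by the answer (this sketch is mine, not from the thread): for a one-time pad, the entire key must be fresh randomness as long as the message, never the output of a PRNG stretched from a small seed. In Python, drawing the pad from `os.urandom` at least removes the 50-number seed bottleneck, though strictly speaking the OS source is itself a cryptographically secure generator rather than provably true randomness:

```python
import os

def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    # The pad is as long as the message and must never be reused.
    key = os.urandom(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    # XOR with the same pad inverts the encryption.
    return bytes(c ^ k for c, k in zip(ciphertext, key))

ct, key = otp_encrypt(b"attack at dawn")
assert otp_decrypt(ct, key) == b"attack at dawn"
```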
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9285855293273926, "perplexity_flag": "middle"}
http://mathhelpforum.com/algebra/20954-exponents-square-roots.html
# Thread:

1. ## Exponents and square roots

For the question: the square root of the fifth root of $4a^4$, the book gives the answer $2^{\frac15}a^{\frac25}$, and I do not understand how to get it.

2. $\sqrt{\sqrt[5]{4a^4}} = \left(\left(4a^4\right)^{\frac{1}{5}}\right)^{\frac{1}{2}} = \left(\left(4a^4\right)^{\frac{1}{2}}\right)^{\frac{1}{5}} = \left(2a^2\right)^{\frac{1}{5}} = \left(2^{\frac{1}{5}}\right)\left(a^{\frac{2}{5}}\right)$
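For anyone who wants to convince themselves numerically, here is a quick spot-check of the final identity in Python (the sample value $a = 3.0$ is arbitrary):

```python
a = 3.0
lhs = ((4 * a**4) ** (1 / 5)) ** (1 / 2)  # square root of the fifth root of 4a^4
rhs = 2 ** (1 / 5) * a ** (2 / 5)         # the book's simplified form
assert abs(lhs - rhs) < 1e-12             # agree to floating-point precision
```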
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9553399682044983, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Coplanarity
Coplanarity

In geometry, a set of points in space is coplanar if all the points lie in the same geometric plane. For example, three distinct points are always coplanar; but a fourth or further point added in space can lie outside that plane, making the set non-coplanar. Two lines in three-dimensional space are coplanar if there is a plane that includes them both. This occurs if the lines are parallel, or if they intersect each other. Distance geometry provides a solution to the problem of determining if a set of points is coplanar, knowing only the distances between them.

Properties

If three vectors $\mathbf{a}, \mathbf{b}$ and $\mathbf{c}$ are coplanar, and $\mathbf{a}\cdot\mathbf{b} = 0$, then $(\mathbf{c}\cdot\mathbf{\hat a})\mathbf{\hat a} + (\mathbf{c}\cdot\mathbf{\hat b})\mathbf{\hat b} = \mathbf{c},$ where $\mathbf{\hat a}$ denotes the unit vector in the direction of $\mathbf{a}$. In other words, the vector resolutes of $\mathbf{c}$ on $\mathbf{a}$ and of $\mathbf{c}$ on $\mathbf{b}$ add to give back the original $\mathbf{c}$.

Plane Formula

Another technique involves computing the formula for the planes defined by each subset of three points. First, the normal vector for each plane is computed using some orthogonalization technique. If the planes are parallel, then the dot product of their unit normal vectors will be 1 or -1. More specifically, the angle between the normal vectors can be computed. This is called the dihedral angle, and represents the smallest possible angle between the two planes. The formula for a plane is: $ax+by+cz+d=0$, where $(a,b,c)$ is the normal vector of the plane. The value $d$ can be computed by plugging in one of the points and then solving. If $d$ is the same for all subsets of three points (with the normal vectors scaled consistently), then the planes are the same.

One advantage of this technique is that it can work in higher-dimensional space. For example, suppose you wanted to compute the dihedral angle between two m-dimensional hyperplanes defined by m points in n-dimensional space. If $n-m>1$, then there are infinitely many normal vectors for each hyperplane, so the angle between two of them is not necessarily the dihedral angle. However, if you apply the Gram-Schmidt process using the same initial vector in both cases, then the angle between the two normal vectors will be minimal, and therefore will be the dihedral angle between the hyperplanes.
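Computationally, both the distance-geometry and plane-formula ideas above reduce to a rank test: points are coplanar exactly when the difference vectors from one of them span at most a 2-dimensional space. A minimal sketch (not from the article; it uses NumPy, and the function name is my own):

```python
import numpy as np

def coplanar(points, tol=1e-9):
    """True if all points lie in a common plane (rank of differences <= 2)."""
    pts = np.asarray(points, dtype=float)
    diffs = pts[1:] - pts[0]  # vectors from the first point to the others
    return np.linalg.matrix_rank(diffs, tol=tol) <= 2

print(coplanar([(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]))  # True
print(coplanar([(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]))  # False
```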
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 16, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9142724275588989, "perplexity_flag": "head"}
http://en.wikipedia.org/wiki/Laplace_expansion
# Laplace expansion

This article is about the expansion of the determinant of a square matrix as a weighted sum of determinants of sub-matrices. For the expansion of a 1/r-potential using spherical harmonic functions, see Laplace expansion (potential).

In linear algebra, the Laplace expansion, named after Pierre-Simon Laplace, also called cofactor expansion, is an expression for the determinant $|B|$ of an $n \times n$ square matrix $B$ as a weighted sum of the determinants of $n$ sub-matrices of $B$, each of size $(n-1) \times (n-1)$. The Laplace expansion is of theoretical interest as one of several ways to view the determinant, as well as of practical use in determinant computation.

The $i, j$ cofactor of $B$ is the scalar $C_{ij}$ defined by

$C_{ij}\ = (-1)^{i+j} M_{ij}\,,$

where $M_{ij}$ is the $i, j$ minor of $B$, that is, the determinant of the $(n-1) \times (n-1)$ matrix that results from deleting the $i$-th row and the $j$-th column of $B$. Then the Laplace expansion is given by the following

Theorem. Suppose $B = (b_{ij})$ is an $n \times n$ matrix and fix any $i, j \in \{1, 2, \ldots, n\}$. Then its determinant $|B|$ is given by:

$\begin{align}|B| & {} = b_{i1} C_{i1} + b_{i2} C_{i2} + \cdots + b_{in} C_{in} \\ & {} = b_{1j} C_{1j} + b_{2j} C_{2j} + \cdots + b_{nj} C_{nj} \\ & {} = \sum_{j'=1}^{n} b_{ij'} C_{ij'} = \sum_{i'=1}^{n} b_{i'j} C_{i'j} . \end{align}$

## Examples

Consider the matrix

$B = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}.$

The determinant of this matrix can be computed by using the Laplace expansion along any one of its rows or columns. For instance, an expansion along the first row yields:

$|B| = 1 \cdot \begin{vmatrix} 5 & 6 \\ 8 & 9 \end{vmatrix} - 2 \cdot \begin{vmatrix} 4 & 6 \\ 7 & 9 \end{vmatrix} + 3 \cdot \begin{vmatrix} 4 & 5 \\ 7 & 8 \end{vmatrix}$

${} = 1 \cdot (-3) - 2 \cdot (-6) + 3 \cdot (-3) = 0.$

Laplace expansion along the second column yields the same result:

$|B| = -2 \cdot \begin{vmatrix} 4 & 6 \\ 7 & 9 \end{vmatrix} + 5 \cdot \begin{vmatrix} 1 & 3 \\ 7 & 9 \end{vmatrix} - 8 \cdot \begin{vmatrix} 1 & 3 \\ 4 & 6 \end{vmatrix}$

${} = -2 \cdot (-6) + 5 \cdot (-12) - 8 \cdot (-6) = 0.$

It is easy to verify that the result is correct: the matrix is singular because the sum of its first and third columns is twice the second column, and hence its determinant is zero.

## Proof

Suppose $B$ is an $n \times n$ matrix and $i,j\in\{1,2,\dots,n\}.$ For clarity we also label the entries of $B$ that compose its $i,j$ minor matrix $M_{ij}$ as $(a_{st})$ for $1 \le s,t \le n-1.$

Consider the terms in the expansion of $|B|$ that have $b_{ij}$ as a factor. Each has the form

$\sgn \tau\,b_{1,\tau(1)} \cdots b_{i,j} \cdots b_{n,\tau(n)} = \sgn \tau\,b_{ij} a_{1,\sigma(1)} \cdots a_{n-1,\sigma(n-1)}$

for some permutation $\tau \in S_n$ with $\tau(i)=j$, and a unique and evidently related permutation $\sigma\in S_{n-1}$ which selects the same minor entries as $\tau.$ Similarly each choice of $\sigma$ determines a corresponding $\tau,$ i.e. the correspondence $\sigma\leftrightarrow\tau$ is a bijection between $S_{n-1}$ and $\{\tau\in S_n\colon\tau(i)=j\}.$ The permutation $\tau$ can be derived from $\sigma$ as follows. Define $\sigma'\in S_n$ by $\sigma'(k) = \sigma(k)$ for $1 \le k \le n-1$ and $\sigma'(n) = n$.
Then $\sgn\sigma'=\sgn\sigma$ and

$\tau\,=(n,n-1,\ldots,i)\sigma'(j,j+1,\ldots,n)$

Since the two cycles can be written respectively as $n-i$ and $n-j$ transpositions,

$\sgn\tau\,= (-1)^{2n-(i+j)} \sgn\sigma'\,= (-1)^{i+j} \sgn\sigma.$

And since the map $\sigma\leftrightarrow\tau$ is bijective,

$\sum_{\tau \in S_n\colon\tau(i)=j} \sgn \tau\,b_{1,\tau(1)} \cdots b_{n,\tau(n)} = \sum_{\sigma \in S_{n-1}} (-1)^{i+j}\sgn\sigma\, b_{ij} a_{1,\sigma(1)} \cdots a_{n-1,\sigma(n-1)} = b_{ij} (-1)^{i+j} |M_{ij}|,$

from which the result follows.

## Laplace expansion of a determinant by complementary minors

Laplace's cofactor expansion can be generalised as follows.

### Example

Consider the matrix

$A = \begin{bmatrix} 1 & 2 & 3 & 4 \\ 5 & 6 & 7 & 8 \\ 9 & 10 & 11 & 12 \\ 13 & 14 & 15 & 16 \end{bmatrix}.$

The determinant of this matrix can be computed by using Laplace's cofactor expansion along the first two rows as follows. First note that there are 6 sets of two distinct numbers in $\left\{1,2,3,4\right\}$; let $S=\left\{\{1,2\},\{1,3\},\{1,4\},\{2,3\},\{2,4\},\{3,4\}\right\}$ be the set of these pairs. Define the complementary cofactors to be

$b_{\{j,k\}}=\begin{vmatrix} a_{1j} & a_{1k} \\ a_{2j} & a_{2k} \end{vmatrix}$, $c_{\{j,k\}}=\begin{vmatrix} a_{3j} & a_{3k} \\ a_{4j} & a_{4k} \end{vmatrix}$,

and the sign of their permutation to be

$\varepsilon^{\{i,j\},\{p,q\}}=\mbox{sgn}\begin{bmatrix} 1 & 2 & 3 & 4 \\ i & j & p & q \end{bmatrix}$.

The determinant of $A$ can be written out as

$|A| = \sum_{H \in S} \varepsilon^{H,H^\prime}b_{H}c_{H^\prime},$

where $H^{\prime}$ is the complementary set to $H$.

In our explicit example this gives us

$|A| = b_{\{1,2\}}c_{\{3,4\}} -b_{\{1,3\}}c_{\{2,4\}} +b_{\{1,4\}}c_{\{2,3\}} +b_{\{2,3\}}c_{\{1,4\}} -b_{\{2,4\}}c_{\{1,3\}} +b_{\{3,4\}}c_{\{1,2\}}$

${} = \begin{vmatrix} 1 & 2 \\ 5 & 6 \end{vmatrix} \cdot \begin{vmatrix} 11 & 12 \\ 15 & 16 \end{vmatrix} - \begin{vmatrix} 1 & 3 \\ 5 & 7 \end{vmatrix} \cdot \begin{vmatrix} 10 & 12 \\ 14 & 16 \end{vmatrix} + \begin{vmatrix} 1 & 4 \\ 5 & 8 \end{vmatrix} \cdot \begin{vmatrix} 10 & 11 \\ 14 & 15 \end{vmatrix} + \begin{vmatrix} 2 & 3 \\ 6 & 7 \end{vmatrix} \cdot \begin{vmatrix} 9 & 12 \\ 13 & 16 \end{vmatrix} - \begin{vmatrix} 2 & 4 \\ 6 & 8 \end{vmatrix} \cdot \begin{vmatrix} 9 & 11 \\ 13 & 15 \end{vmatrix} + \begin{vmatrix} 3 & 4 \\ 7 & 8 \end{vmatrix} \cdot \begin{vmatrix} 9 & 10 \\ 13 & 14 \end{vmatrix}$

${} = -4 \cdot (-4) -(-8) \cdot (-8) +(-12) \cdot (-4) +(-4) \cdot (-12) -(-8) \cdot (-8) +(-4) \cdot (-4)$

${} = 16 - 64 + 48 + 48 - 64 + 16 = 0.$

As above, it is easy to verify that the result is correct: the matrix is singular because the sum of its first and third columns is twice the second column, and hence its determinant is zero.

## No good for high dimension

For $N \times N$ matrices, the computational effort grows with $N!$. Therefore, the Laplace expansion is not suitable for large $N$. Using a decomposition into triangular matrices, one can determine determinants with effort $N^3/3$.[1]

## References

1. Stoer, Bulirsch: Introduction to Numerical Mathematics
• David Poole: Linear Algebra. A Modern Introduction. Cengage Learning 2005, ISBN 0-534-99845-3, p. 265-267 (restricted online copy at Google Books)
• Harvey E. Rose: Linear Algebra. A Pure Mathematical Approach. Springer 2002, ISBN 3-7643-6905-1, p. 57-60 (restricted online copy at Google Books)
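As a computational footnote to the complexity remark: a minimal Python sketch of cofactor expansion along the first row (my own illustration, not taken from the cited references). It reproduces the $3 \times 3$ worked example, but the recursion visits on the order of $N!$ products, which is exactly why LU-style methods are preferred for large $N$:

```python
def det_laplace(m):
    """Determinant by Laplace expansion along the first row (O(n!) work)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det_laplace(minor)
    return total

B = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(det_laplace(B))  # 0, matching the worked example above
```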
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 54, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8991579413414001, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2010/08/09/completeness-of-the-metric-space-of-a-measure-space/?like=1&source=post_flair&_wpnonce=420e8d64d7
# The Unapologetic Mathematician

## Completeness of the Metric Space of a Measure Space

Our first result today is that the metric space associated to the measure ring of a measure space $(X,\mathcal{S},\mu)$ is complete. To see this, let $\{E_n\}$ be a Cauchy sequence in the metric space $\mathfrak{S}$. That is, for every $\epsilon>0$ there is some $N$ so that $\rho(E_m,E_n)<\epsilon$ for all $m,n>N$. Unpacking our definitions, each $E_n$ must be an element of the measure ring $(\mathcal{S},\mu)$ with $\mu(E_n)<\infty$, and thus must be (represented by) a measurable subset $E_n\subseteq X$ of finite measure. On the side of the distance function, we must have $\mu(E_m\Delta E_n)<\epsilon$ for sufficiently large $m$ and $n$.

Let's recast this in terms of the characteristic functions $\chi_{E_n}$ of the sets in our sequence. Indeed, we find that $\chi_{E_m\Delta E_n}=\lvert\chi_{E_m}-\chi_{E_n}\rvert$, and so

$\displaystyle\mu(E_m\Delta E_n)=\int\chi_{E_m\Delta E_n}\,d\mu=\int\lvert\chi_{E_m}-\chi_{E_n}\rvert\,d\mu$

that is, a sequence $\{E_n\}$ of sets is Cauchy in $\mathfrak{S}(\mu)$ if and only if its sequence of characteristic functions $\left\{\chi_{E_n}\right\}$ is mean Cauchy.

Since mean convergence is complete, the sequence of characteristic functions must converge in mean to some function $f$. Mean convergence implies convergence in measure, and a sequence converging in measure has a subsequence converging almost everywhere; since we are working with sets of finite measure, this is all we need. Thus the limiting function $f$ must, like the characteristic functions in the sequence, take the value $0$ or $1$ almost everywhere. Thus it is (equivalent to) the characteristic function of some set. Since $f$ must be measurable, as the limit of a sequence of measurable functions, it's the characteristic function of a measurable set, which must have finite measure since its measure is the limit of the Cauchy sequence $\{\mu(E_n)\}$.

That is, $f=\chi_E$, where $E\in\mathfrak{S}(\mu)$, and $E$ is the limit of $\{E_n\}$ under the metric of $\mathfrak{S}(\mu)$. Thus $\mathfrak{S}(\mu)$ is complete as a metric space.

Posted by John Armstrong | Analysis, Measure Theory
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 32, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9331245422363281, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/16849/list
The first reference attempts to solve this problem but only gives a partial answer. The second reference shows that every 3-manifold can be obtained by surgery on a link (but does not discuss Kirby calculus).

MR1075370 (91k:57019) Rêgo, Eduardo; de Sá, Eugénia César. Special Heegaard diagrams and the Kirby calculus. Topology Appl. 37 (1990), no. 1, 11–24.

MR0809959 (87f:57016) Rourke, Colin. A new proof that $\Omega_3$ is zero. J. London Math. Soc. (2) 31 (1985), no. 2, 373–376.

I apologise if you are already aware of these references.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7556508183479309, "perplexity_flag": "middle"}
http://crypto.stackexchange.com/questions/2856/md5-implementation-doubt
# MD5 implementation doubt

In MD5, there are four rounds. After every round, why do we need to add the computed Q values to the initial values and then take this value as input to the next round? For example, after the first round, we compute Qnew1 = Q60 + Q-4, Qnew2 = Q61 + Q-3, Qnew3 = Q62 + Q-2, Qnew4 = Q63 + Q-1. Why can't we directly take the computed values as the input to the next round without adding them to the initial values?

Yes, I am referring to the steps h0 := h0 + a; h1 := h1 + b; h2 := h2 + c; h3 := h3 + d. I want to know why we need to add the values a, b, c, d to the initial values. Will there be any problem if I just take the values a, b, c, d as input to the next round? – cryptofreak Jun 11 '12 at 10:40

Yes, it is required and you must do it (but it is only done at the very end of the compression function, not after each round). Are you asking why it needs to be done or just if it is really necessary to do so? – Thomas Jun 11 '12 at 10:55

I want to know why it needs to be done. – cryptofreak Jun 11 '12 at 11:23

## 2 Answers

The reason we add the compression function's input back in at the very end is that otherwise the compression function would be invertible, and that would be bad.

Without that final step, the compression function would be invertible in this sense: given a desired compression function output and a message block, we would be able to find the compression function input. We would be able to do this because the MD5 compression function is made of 64 steps, and each step is itself easily invertible. Adding in the compression input at the very end breaks up this invertibility, because the analyst is unable to invert that final step (because he doesn't know the compression function input yet). He could guess the input at that stage, and then run the rest of the compression function backwards; this turns out to be no more efficient than just guessing the compression function input, and running it forwards.

This invertibility would be bad, because it allows an attacker to do tricks we'd prefer him not to. For example, we'd want a preimage attack (that is, given a hash value, give me a message that hashes to that) to take (with a 128-bit hash like MD5) roughly $2^{128}$ steps; that is, there is no method that is drastically more efficient than trying random messages and hashing them until you stumble across one with the correct hash. However, if the compression function is invertible, here is what the attacker can do, given a $Target$ hash value:

• Generate $2^{64}$ message prefixes, and compute the partial hashes up to that point.
• Generate $2^{64}$ message suffixes. For these, he would start with the $Target$ hash value, and compute the hash backwards until he gets the inverse partial hash (having first applied the MD5 final padding).
• Given those two lists of 128-bit items, he then looks for a match (and, by the birthday paradox, there's a good chance there is one).

If the partial hash of $P_i$ is the same as the inverse partial hash of $S_j$, then he knows that $Hash( P_i || S_j ) = Target$. This is because, when we evaluate the hash, we first run the compression function on the blocks from $P_i$; at the end of this evaluation, he would come up with the common partial hash value. Then, he would run the compression function on the blocks from $S_j$ after padding; he knows that, starting at this common value, that results in $Target$.
So, because of this, an attack we had hoped would take $2^{128}$ steps could be done with roughly $2^{64}$ steps. Making the compression function noninvertible (which the real MD5 does) prevents this line of attack.

This answer is incorrect and resulted from a misinterpretation of the relevant RFC.

In the relevant RFC the authors state under "Differences Between MD4 and MD5":

1. Each step now adds in the result of the previous step. This promotes a faster "avalanche effect".

So this seems to be their rationale for that step. Whether it is reasonable, I really can't say. But irrespective of that, if you want to implement MD5, then yes, that step is necessary, as otherwise your implementation would be incompatible with other implementations.

Thanks for the answer, Maeher. But I still don't get how it can promote a faster avalanche effect. If you have any idea, please let me know. – cryptofreak Jun 11 '12 at 11:10

MD4 has exactly the same step at the end of its compression function, hence this is not the answer. – poncho Jun 11 '12 at 14:21

You are indeed correct. I must have misread that. Apparently the step they are referring to is each of the 64 round operations. – Maeher Jun 11 '12 at 14:27
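The feed-forward argument is easy to see on a toy scale. The sketch below is my own illustration; `rounds` is a made-up invertible mixing function on 32-bit words, nothing like MD5's real step functions. It shows that without the final addition anyone can run the "compression" backwards, while the feed-forward blocks that step unless the chaining input is already known:

```python
MASK = 0xFFFFFFFF  # work on 32-bit words

def rounds(h, m):
    # Toy stand-in for the 64 steps: invertible for any fixed message m.
    return ((h + m) * 0x9E3779B1) & MASK

def rounds_inverse(out, m):
    inv = pow(0x9E3779B1, -1, 1 << 32)  # the constant is odd, so invertible
    return ((out * inv) - m) & MASK

def compress(h, m):
    # Davies-Meyer-style feed-forward: add the chaining input back in.
    return (rounds(h, m) + h) & MASK

h, m = 0x12345678, 0xCAFEBABE
assert rounds_inverse(rounds(h, m), m) == h  # no feed-forward: invertible
# compress(h, m) cannot be unwound the same way without already knowing h.
```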
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9230188727378845, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/159562/rectangles-diagonal-calculation?answertab=oldest
# Rectangles Diagonal Calculation

I was having a problem with the following question, and could use some help:

If a rectangle with a perimeter of 48 inches is equal in area to a right triangle with legs of 12 inches and 24 inches, what is the rectangle's diagonal?

The answer to the above question is $12\sqrt{2}$. Frankly, I am a bit confused by the part of the question stating

equal in area to a right triangle with legs of 12 inches and 24 inches

Is the area of the rectangle equal to the area of the triangle? The phrase by itself doesn't say much. And 12 and 24 are the lengths of which sides of the triangle? Am I missing something here, or are my concerns valid?

Edit: After reading the suggestions posted here, here is what I did, and I am getting a square instead of a rectangle:

$$2x + 2y = 48 \qquad (A)$$
$$xy = 144 \qquad (B)$$

so $x=144/y$; substituting in $(A)$ I get $x=12$. So is this actually a square?

The phrase "with a perimeter of 48 inches" describes the rectangle; the phrase "with legs of 12 inches and 24 inches" describes the right triangle. If you hide these phrases, then you can read the question like this: "If this rectangle is equal in area to that right triangle, then what is the rectangle's diagonal?" Does that help? – Blue Jun 17 '12 at 17:42

The legs part is throwing me off too... are they the perpendicular, the base, or the hypotenuse? – MistyD Jun 17 '12 at 17:46

## 1 Answer

There is a specific rectangle that the question is referring to, but we don't know all of its properties. The question tells us that this rectangle has a perimeter of 48 inches. When the question says that this rectangle is equal in area to a right triangle with legs of 12 inches and 24 inches, it just means that the area of this rectangle is equal to the area of such a triangle.

The area of a shape doesn't depend on how you orient it. If you take a right triangle with legs of lengths $A$ and $B$ and draw it with leg $A$ along the bottom, then the "height" is $B$, the "base" is $A$, and the area is

$$\frac{1}{2}\times \text{base}\times\text{height}=\frac{1}{2} AB.$$

If you take the same right triangle and draw it with leg $B$ along the bottom, then the "height" is $A$, the "base" is $B$, and the area is

$$\frac{1}{2}\times \text{base}\times\text{height}=\frac{1}{2} BA=\frac{1}{2}AB.$$

So the area of a right triangle that has legs of length $A$ and $B$ is always $\frac{1}{2}AB$.

Your calculations are correct; the rectangle is a square. Note that a square is a quadrilateral with four right angles, and therefore is a rectangle (all squares are rectangles, though of course not all rectangles are squares). Now, it should be fairly straightforward to show that, for a square with side lengths of $12$ inches, the length of the diagonal is $12\sqrt{2}$ inches (hint: use the Pythagorean theorem).

Okay, what about the legs of the triangle? How do we know whether they are the base or the height, etc.? – MistyD Jun 17 '12 at 17:42

Draw a right triangle: do you see why it doesn't matter which leg you take to be the height, and which to be the base? The area of a right triangle with legs of length $a$ and $b$ is $$\frac{1}{2}ab$$ so the area of the triangle mentioned in the question is $$\frac{1}{2}\times (12\text{ inches})\times(24\text{ inches})=144\text{ sq. inches}.$$ – Zev Chonoles♦ Jun 17 '12 at 17:45

After solving this problem I get the answer, but it turns out it's a square with sides 12 and not a rectangle. Am I wrong? – MistyD Jun 17 '12 at 17:56

1 Regarding base vs.
height: There's an old joke about a guy standing out in a field, trying and trying to push a tape measure up a pole, but failing with each attempt. After watching the guy struggle for a while, his buddy asks, "Why don't you just knock the pole over so that you can easily measure it along the ground?" The guy says, "Because I want to know how tall it is, not how long it is!" – Blue Jun 17 '12 at 17:56

@MistyD: Nope, you did it exactly right! I've added some more to my answer in response. – Zev Chonoles♦ Jun 17 '12 at 18:12
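For completeness, the asker's two equations can be handed to a computer algebra system; a short sketch using sympy (the variable names are mine):

```python
from sympy import symbols, Eq, solve, sqrt

x, y = symbols("x y", positive=True)
sols = solve([Eq(2*x + 2*y, 48), Eq(x*y, 144)], [x, y])
print(sols)                     # [(12, 12)]: the rectangle is a square
side = sols[0][0]
print(sqrt(side**2 + side**2))  # 12*sqrt(2), the diagonal
```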
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9393208622932434, "perplexity_flag": "head"}
http://www.reference.com/browse/Word+metric
# Word metric

In group theory, a word metric on a group $G$ is a way to measure distance between any two elements of $G$. As the name suggests, the word metric is a metric on $G$, assigning to any two elements $g$, $h$ of $G$ a distance $d(g,h)$ that measures how efficiently their difference $g^{-1} h$ can be expressed as a word whose letters come from a generating set for the group. The word metric on G is very closely related to the Cayley graph of G: the word metric measures the length of the shortest path in the Cayley graph between two elements of G.

A generating set for $G$ must first be chosen before a word metric on $G$ is specified. Different choices of a generating set will typically yield different word metrics. While this seems at first to be a weakness in the concept of the word metric, it can be exploited to prove theorems about geometric properties of groups, as is done in geometric group theory.

## Examples

### The group of integers Z

The group of integers Z is generated by the set {-1,+1}. The integer -3 can be expressed as -1-1-1+1-1, a word of length 5 in these generators. But the word which expresses -3 most efficiently is -1-1-1, a word of length 3. The distance between 0 and -3 in the word metric is therefore equal to 3. More generally, the distance between two integers m and n in the word metric is equal to |m-n|, because the shortest word representing the difference m-n has length equal to |m-n|.

### The group $\mathbb{Z}\oplus\mathbb{Z}$

For a more illustrative example, the elements of the group $\mathbb{Z}\oplus\mathbb{Z}$ can be thought of as vectors in the Cartesian plane with integer coefficients. The group $\mathbb{Z}\oplus\mathbb{Z}$ is generated by the standard unit vectors $e_1 = \langle 1,0\rangle$, $e_2 = \langle 0,1\rangle$ and their inverses $-e_1 = \langle -1,0\rangle$, $-e_2 = \langle 0,-1\rangle$. The Cayley graph of $\mathbb{Z}\oplus\mathbb{Z}$ is the so-called taxicab geometry. It can be pictured in the plane as an infinite square grid of city streets, where each horizontal and vertical line with integer coordinates is a street, and each point of $\mathbb{Z}\oplus\mathbb{Z}$ lies at the intersection of a horizontal and a vertical street. Each horizontal segment between two vertices represents the generating vector $e_1$ or $-e_1$, depending on whether the segment is travelled in the forward or backward direction, and each vertical segment represents $e_2$ or $-e_2$.

A car starting from $\langle 1,2\rangle$ and travelling along the streets to $\langle -2,4\rangle$ can make the trip by many different routes. But no matter what route is taken, the car must travel at least |1 - (-2)| = 3 horizontal blocks and at least |2 - 4| = 2 vertical blocks, for a total trip distance of at least 3 + 2 = 5. If the car goes out of its way the trip may be longer, but the minimal distance travelled by the car, equal in value to the word metric between $\langle 1,2\rangle$ and $\langle -2,4\rangle$, is therefore equal to 5. In general, given two elements $v = \langle i,j\rangle$ and $w = \langle k,l\rangle$ of $\mathbb{Z}\oplus\mathbb{Z}$, the distance between $v$ and $w$ in the word metric is equal to $|i-k| + |j-l|$.

## Definition

Let G be a group, let S be a generating set for G, and suppose that S is closed under the inverse operation on G. A word over the set S is just a finite sequence $w = s_1 \ldots s_L$ whose entries $s_1, \ldots, s_L$ are elements of S. The integer L is called the length of the word $w$. Using the group operation in G, the entries of a word $w = s_1 \ldots s_L$ can be multiplied in order, remembering that the entries are elements of G. The result of this multiplication is an element $\bar w$ in the group G which is called the evaluation of the word w. As a special case, the empty word $w = \emptyset$ has length zero, and its evaluation is the identity element of G.

Given an element g of G, its word norm |g| with respect to the generating set S is defined to be the shortest length of a word $w$ over S whose evaluation $\bar w$ is equal to g. Given two elements g,h in G, the distance d(g,h) in the word metric with respect to S is defined to be $|g^{-1} h|$. Equivalently, d(g,h) is the shortest length of a word w over S such that $g \bar w = h$. The word metric on G satisfies the axioms for a metric, and it is not hard to prove this. The proof of the symmetry axiom d(g,h) = d(h,g) for a metric uses the assumption that the generating set S is closed under inverse.

### Variations

The word metric has an equivalent definition formulated in more geometric terms using the Cayley graph of G with respect to the generating set S. When each edge of the Cayley graph is assigned a metric of length 1, the distance between two group elements g,h in G is equal to the shortest length of a path in the Cayley graph from the vertex g to the vertex h.

The word metric on G can also be defined without assuming that the generating set S is closed under inverse. To do this, first symmetrize S, replacing it by a larger generating set consisting of each $s$ in S as well as its inverse $s^{-1}$. Then define the word metric with respect to S to be the word metric with respect to the symmetrization of S.

## Example in a free group

Suppose that F is the free group on the two element set $\{a,b\}$. A word w in the symmetric generating set $\{a,b,a^{-1},b^{-1}\}$ is said to be reduced if the letters $a,a^{-1}$ do not occur next to each other in w, nor do the letters $b,b^{-1}$. Every element $g \in F$ is represented by a unique reduced word, and this reduced word is the shortest word representing g. For example, since the word $w = b^{-1} a$ is reduced and has length 2, the word norm of $\bar w$ equals 2, so the distance in the word norm between b and $a = b(b^{-1}a)$ equals 2. This can be visualized in terms of the Cayley graph, where the shortest path between b and a has length 2.

## Theorems

### Isometry of the left action

The group G acts on itself by left multiplication: the action of each $k \in G$ takes each $g \in G$ to kg. This action is an isometry of the word metric. The proof is simple: the distance between $kg$ and $kh$ equals $|(kg)^{-1}(kh)| = |g^{-1}h|$, which equals the distance between $g$ and $h$.

### Bilipschitz invariants of a group

The word metric on a group G is not unique, because different symmetric generating sets give different word metrics. However, finitely generated word metrics are unique up to bilipschitz equivalence: if $S$, $T$ are two symmetric, finite generating sets for G with corresponding word metrics $d_S$, $d_T$, then there is a constant $K \ge 1$ such that for any $g,h \in G$,

$\frac{1}{K}\, d_T(g,h) \le d_S(g,h) \le K\, d_T(g,h)$.
This constant K is just the maximum of the $d_S$ word norms of elements of $T$ and the $d_T$ word norms of elements of $S$. This proof is also easy: any word over S can be converted by substitution into a word over T, expanding the length of the word by a factor of at most K, and similarly for converting words over T into words over S.

The bilipschitz equivalence of word metrics implies in turn that the growth rate of a finitely generated group is a well-defined isomorphism invariant of the group, independent of the choice of a finite generating set. It follows that various properties of growth, such as polynomial growth, the degree of polynomial growth, and exponential growth, are isomorphism invariants of groups. This topic is discussed further in the article on the growth rate of a group.

### Quasi-isometry invariants of a group

In geometric group theory, groups are studied by their actions on metric spaces. A principle which generalizes the bilipschitz invariance of word metrics says that any finitely generated word metric on G is quasi-isometric to any proper, geodesic metric space on which G acts properly discontinuously and cocompactly. Metric spaces on which G acts in this manner are called model spaces for G.

It follows in turn that any quasi-isometrically invariant property satisfied by the word metric of G or by any model space of G is an isomorphism invariant of G. Modern geometric group theory is in large part the study of quasi-isometry invariants.
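The free-group example is easy to compute with. A minimal sketch (my own, not from the article): writing inverses as capital letters (A for $a^{-1}$, B for $b^{-1}$) is a convention chosen for this sketch. Free reduction with a stack gives the word norm, and $d(g,h) = |g^{-1}h|$ gives the metric:

```python
def reduce_word(w):
    """Freely reduce a word; capitals denote inverse generators."""
    out = []
    for c in w:
        if out and out[-1] == c.swapcase():  # cancel adjacent s, s^-1 pairs
            out.pop()
        else:
            out.append(c)
    return "".join(out)

def word_norm(w):
    return len(reduce_word(w))

def distance(g, h):
    g_inv = g[::-1].swapcase()  # inverse of a word: reverse and invert letters
    return word_norm(g_inv + h)

print(distance("b", "a"))  # 2: the reduced word b^-1 a has length 2
```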
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 67, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9257852435112, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/13230/theorems-on-instability-of-classical-systems-of-charged-particles/13613
# Theorems on instability of classical systems of charged particles?

Classically, a hydrogen atom should not be stable, since it should radiate away all its energy. I remember hearing from my favorite freshman physics prof ca. 1983 about a general theorem to the effect that all classical systems of charged particles were unstable, and I went so far as to contact him in 2009 and ask him if he remembered anything about it, but he didn't. There is something called Earnshaw's theorem (there's a Wikipedia article) that says that static equilibrium is impossible. It certainly seems plausible that something similar holds for dynamical equilibrium, in some sense that I don't know how to define properly, much less prove rigorously.

Intuitively, I would expect that any classical system of $m$ charged particles should end up having no "interesting structure," in the sense that the final state will consist of radiation plus $n \le m$ zero-size clusters whose charges $q_1,\ldots q_n$ all have the same sign (some possibly being zero); asymptotically, they all end up diverging radially from some central point.

Does anyone know of any formal proof along these lines? Are there counterexamples, although possibly ones with initial conditions corresponding to zero volume in phase space? How would one go about stating the initial conditions on the radiation fields?

Thanks, Raskolnikov, but the paper is paywalled for me. Can you tell me the journal reference? – Ben Crowell Aug 6 '11 at 16:45

@mmc: Thanks! It's completely quantum-mechanical, though. – Ben Crowell Aug 7 '11 at 3:10

## 1 Answer

I don't think there's an easy answer to your question, but here are some possible leads and a setup of the problem:

I'd try to write down the Hamiltonian for your configuration of particles (e.g., chapter two of this fella's thesis: section 2.3, eq. 2.28 shows the Hamiltonian for two charged classical particles using center-of-mass coordinates. You have n charged particles, but the generalization to n is pretty straightforward and contained in most undergrad books on mechanics). Then from there you want to proceed in your analysis as you would with the classical N-body problem as outlined in something like Meyer and Hall, Chapter 1. Specifically, look at things like Ch. 1, Sec. 4 on "Equilibria for the Restricted 3-body problem" and techniques for finding critical points of modified potentials. I'm willing to bet that for certain initial conditions (perhaps, as an initial condition, all particles are contained in the same plane) you'll be able to make a statement about the final configuration in a limiting case.

For the more general setup -- that's just GOTTA be an open problem. I'd be really surprised if that was actually known at this point in time.

Regarding the question "How would one go about stating the initial conditions on the radiation fields?"... I'm not sure how to interpret a radiation field in a classical setting unless you push off into derivations that you find in plasma physics. For a Hamiltonian setup, though, this paper looks promising.

Truth is, the more I think on it, the more I think you might find an answer to your question in L&L's Physical Kinetics. Chapter IV on instabilities might have something to say on this. Anyways... good luck.

- Nice ideas, thanks! I don't think Adams' thesis helps, however, because he explicitly assumes no radiation (p. 10) -- which in my case is what I want to prove.
Amazon doesn't let me view very much of Meyer and Hall's ch. 1, but from what I can see, it looks like they're talking about the Newtonian gravitational n-body problem, for which radiation is not an issue. – Ben Crowell Aug 16 '11 at 3:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9600207805633545, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/tagged/algebraic-number-theory
## Tagged Questions 1answer 206 views ### Simplifying an algebraic integer expression I have an expression where the variables are algebraic integers: $p4 = \frac{p12 - p41 \cdot p21}{p22}$ p12 is degree 48 and p22 is most likely degree 48 too. p41 is degree 32 and … 0answers 366 views ### Orders in number fields Let $K$ be a degree $n$ extension of ${\mathbb Q}$ with ring of integers $R$. An order in $K$ is a subring with identity of $R$ which is a ${\mathbb Z}$-module of rank $n$. Quest … 0answers 74 views ### Bounding number of solutions to an equation: I have an equation that I think should not have too many solutions, but I don't see a way to argue this. Given $a, b, c, N \in \mathbb{N}$, how many positive integer solutions \$x, … 1answer 89 views ### ramification of discrete valuation field Let $K$ be a discrete valuation field with valuation $v:K\rightarrow \mathbb Z\cup {\infty}$ which is normalized by $v(\pi)=1$ for a prime element $\pi$. Let \$v:\overline K\rightar … 1answer 347 views ### how to visualize the class number of an imaginary quadratic field? Let me detail the title of the question. I'm trying to give students an intuition of what the class number is. Let $K=\mathbb{Q}(\sqrt{-d})$, with $d>0$ a square-free integer, be … 1answer 323 views ### Numbers integrally represented by a ternary cubic form Given integers $a,b,c,$ and cubic form $$f(a,b,c) = a^3 + b^3 + c^3 + a^2 b - a b^2 + 3 a^2 c - a c^2 + b^2 c - b c^2 - 4 a b c$$ f(a,b,c) = \det \left( \begin{array}{ccc … 0answers 60 views ### Decompositions of representations of pro-p groups Let $P$ be a pro-p group. Assume that there is a filtration of $P$ by normal subgroups $P_i$ such that $P_0=P$ and $P_{i+1} < P_i(i\in\mathbb N)$. Let $V$ be an $l$-adic represe … 0answers 58 views ### Artin L- Function properties Hi, I'm trying to understand the proof of one of the properties of the Artin L-function. I have the following doubts; Why take on $f_i =|G_{P_i}: H_{P_i}I_{G,P_i}|$, \$H_{P_i}I_{G … 0answers 61 views ### points in $V(\bar K \otimes_{\bar Q} \bar L)$ rational over tensor product of fields Let V be a variety over a number field, and let K and L be two algebraically closed What is known about the points of $V(\bar K \otimes_{\bar Q} \bar L )$ ? Are there results cla … 1answer 201 views ### Inertia subgroup in the ordinary reduction case when $p=2$ Dear MO, Let $K/\mathbb{Q}_2$ be a finite extension, and let $E/K$ be an elliptic curve with good ordinary reduction, and such that $\mathbb{Q}_2(j(E))=K$. Let \$\rho:\operatorname … 3answers 234 views ### Computing certain class numbers modulo 4 Let $p \equiv 5 \pmod{8}, q \equiv 7 \pmod{8}$ be primes and $N = pq$. I want to show that the class number $n$ of $\mathbb{Q}(\sqrt{-N})$ satisfies $n \equiv 2 \pmod{4}$ if \$\lef … 0answers 98 views ### P-adic Weierstrass Lemma for several variables The p-adic Weiestrass lemma asserts that a power series $f(z)$ with coefficients in the ring of integers of a local field can be factored as $π^n·u(z)·p(z)$ where u(z) is a unit in … 1answer 525 views ### Principal maximal ideals in Z[x]/(F) Is there some irreducible $F \in \mathbb{Z}[x]$ such that $\mathbb{Z}[x]/(F)$ has no principal maximal ideal? Equivalently, is it possible that the $1$-dimensional integral domain … 1answer 190 views ### local field and number field Let $K$ be a local field (locally compact topological field) of characteristic zero. Is it true that $K$ is isomorphic to the completion of a number field under some valuations? 
I … 2answers 517 views ### Exercise in Milne's CFT notes On page 156 of Milne's Class field theory notes available online here, he claims that the Hilbert class field of $K = \mathbb Q(\sqrt{-6})$ is the splitting field of $x^2+3$ but I …
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 46, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8749463558197021, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/99221/does-n-log-n-or-log-nn-grow-faster/99223
# Does $n^{\log n}$ or $(\log n)^n$ grow faster? Which grows faster? $n^{\log n}$ or $(\log n)^n$ and how can we prove this? This was presented as a "challenge question" for students to try ahead of the next class meeting. Any help would be appreciated! - ## 2 Answers Hint: take logarithms of both of these. - 6 Does that mean comparing the growth of log f vs log g is equivalent to comparing the growth of f vs g? – David Lee Jan 15 '12 at 8:59 5 @DavidLee: $\log$ is a monotonic function... – Fabian Jan 15 '12 at 9:04 2 @David Lee:As Fabian says, $\log$ is a monotonic function. That means that if $f\geq g$, then $\log f \geq \log g$, and conversely. However, monotonicity alone isn't enough to preserve differences with order of growth. For example, $e^x$ and $\log$ are monotonic, and $x$ and $x^n$ have different orders of growth, but $\log x$ and $\log x^n=n\log x$ have the same order of growth, and conversely, $f$ and $nf$ have the same order of growth for any $f$, but $e^f$ and $e^{nf}=(e^f)^n$ generally do not. However, if $\log f \gg \log g$, we do have $f \gg g$. – Aaron Jan 15 '12 at 12:15 Hint substitute $e^t$ in place of $n$. -
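Spelling the hint out (a sketch; the base of the logarithm does not matter):

$$\log\left(n^{\log n}\right) = (\log n)^2, \qquad \log\left((\log n)^n\right) = n \log\log n.$$

Since $n \log\log n$ eventually dominates $(\log n)^2$, and $\log$ is monotonic, $(\log n)^n$ grows faster than $n^{\log n}$.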
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9558181166648865, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/196109/minimal-polynomial-of-a-matrix-over-a-field/196113
# Minimal polynomial of a matrix over a field

I remember from my linear algebra courses that if I have an $n\times n$ matrix with coefficients in a field (denoted $A$) and I have a polynomial $P$ over the field such that $P(A)=0$ and a decomposition $P=f(x)g(x)$ over the field, then $f(A)=0$ or $g(A)=0$. This was used to calculate the minimal polynomial of $A$.

My question is: Is the statement above, that $f(A)=0$ or $g(A)=0$, correct, or am I misremembering? The reason I am asking is that there are nonzero matrices $B,C$ such that $BC=0$, so I don't see how the conclusion was made.

1 I think it's wrong; consider the $2\times 2$ matrix whose top-left entry equals $1$ and which is zero otherwise. We have $A^2-A=0$ and $P(x)=x(x-1)$, so $f(x)=x$ and $g(x)=x-1$, but clearly neither $f(A)$ nor $g(A)$ is zero. – Ajat Adriansyah Sep 15 '12 at 10:22

## 2 Answers

This isn't correct. For example, the matrix $A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$ satisfies $p(A)=0$ where $p(t)=t^2$; however, $p(t)=f(t)g(t)$ where $f(t)=g(t)=t$ and $f(A) \ne 0 \ne g(A)$.

There are some similar-ish results that you might be thinking of. I'll list a few. Let $A$ be a matrix with characteristic polynomial $p$ and minimal polynomial $m$ over a given (well-behaved) field.

• $A$ satisfies its minimal polynomial; that is, $m(A) = 0$.
• If $f$ is a polynomial then $f(A)=0$ if and only if $m\, |\, f$; in particular, $p(A)=0$.
• If $m(t)=f(t)g(t)$ and $f(A)=0$ then $g$ is constant.

Thanks, I was thinking of the second point. – Belgi Sep 15 '12 at 10:28

It is not true. The minimal polynomial doesn't have to be irreducible. For instance, if you consider the $2 \times 2$ diagonal matrix with $1$ and $2$ on the main diagonal, then its minimal polynomial would be $X^2-3X+2$, but $X-1$ and $X-2$ won't annihilate the matrix.
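The nilpotent counterexample from the first answer takes a few lines to check numerically (NumPy used only for the matrix product):

```python
import numpy as np

A = np.array([[0, 1],
              [0, 0]])  # f(t) = g(t) = t, so p(t) = t^2
print(A @ A)            # the zero matrix: p(A) = 0
print(A)                # nonzero: f(A) = g(A) = A != 0
```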
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 44, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9383607506752014, "perplexity_flag": "head"}
http://conservapedia.com/Real_number
# Real number

This article/section deals with mathematical concepts appropriate for a student in mid to late high school.

The real numbers are a set of numbers with extremely important theoretical and practical properties. They can be considered to be the numbers used for ordinary measurement of physical things like length, area, weight, charge, etc. Mathematicians denote the set of real numbers with an ornate capital letter: $\mathbb{R}$. They are the fourth item in this hierarchy of types of numbers:

• The "natural numbers", 1, 2, 3, ... (There is controversy about whether zero should be included. It doesn't matter.)
• The "integers": positive, negative, and zero
• The "rational numbers", or fractions, like 355/113
• The "real numbers", including irrational numbers
• The "complex numbers", which give solutions to polynomial equations

Real numbers are typically represented by a decimal (or any other base) representation, as in 3.1416. It can be shown that any decimal representation that either terminates or gets into an endless repeating pattern is rational. The other numbers are real numbers that are irrational. Examples are $\sqrt{10} = 3.162277660168...\,$ and $\pi = 3.1415926535...\,$. These decimal representations neither repeat nor terminate.

## Formal definition

Formally, real numbers are defined as the unique field which is ordered, metrically complete, and Archimedean. The reals can be constructed from the rationals by means of Dedekind cuts or Cauchy sequences, as outlined below.

## Real line

The real numbers can be thought of as a line, called the real line. Each real number represents a point on the real line. The real line is useful as a coordinate system for graphing functions. Thus, the x-axis and y-axis are both instances of the real line. The real line is the basis for geometric measurements, and more generally for ideas in metric topology.

## What is the problem? Aren't rational numbers good enough?

Any real-world measurement that anyone could possibly make, one can make as accurately as one wants with rational numbers. For example, one can calculate the ratio of the circumference of a circle to its diameter to within one part in a trillion using the number 3.1415926535898 ($\pi\,$ itself is irrational). Put another way, you never have to worry about the difference between the rationals and the reals in a lumber yard or a laboratory. The technical term that topologists use for this state of affairs is that the rationals are dense.

The shortcoming of the rationals, that is overcome by defining the reals, is a somewhat subtle theoretical point. The most direct example is that, if one lived in a world with only rational numbers, 2 would have no square root, even though it obviously should have one. One can easily prove that there is no rational number $m/n$ such that $(m/n)^2 = 2$: if there were, we would have $m^2 = 2n^2$. The prime factors of $m^2$ all come in pairs, as do the prime factors of $n^2$, so the factors of $m^2$ would have to be the same as the factors of $n^2$ except for a single extra factor of 2, which is impossible.

The theoretical property that the rational numbers lack is called the least upper bound property.

Definition: A number B is an upper bound for a set of numbers if no element of the set is greater than B. (There is also the notion of a lower bound.)

For example, 10 is an upper bound for the open interval $(3, 6)\,$. 7 is also an upper bound, as is 6. 5 is not. 2 is a lower bound. Some sets do not have upper bounds. For example, the set of all rational or real numbers, or the set of all odd integers.
Definition: A number L is a least upper bound (often abbreviated "lub") if it is an upper bound and no other upper bound is smaller. (There is also the notion of a greatest lower bound, abbreviated "glb".)

6 is the lub of the open interval $(3, 6)\,$. 3 is its glb. 6 and 3 are also the lub and glb of the closed interval $[3, 6]\,$; the inclusion of the endpoints makes no difference. The least upper bound is also sometimes called the "supremum", abbreviated "sup". The greatest lower bound is also sometimes called the "infimum", abbreviated "inf".

A set has the least upper bound property if every nonempty subset that has an upper bound has a least upper bound. There is also a greatest lower bound property, and any reasonable set having one property has the other.

The least upper bound property is extremely important in calculus and analysis. It is essential for many theorems, notably the mean value theorem and the intermediate value theorem.

The rational numbers do not satisfy the least upper bound property. For example, if we can only use rational numbers, the set of numbers that have squares less than 2 has no rational least upper bound. 1.4142136 is an upper bound, but 1.41421357 is a smaller one. The exact square root of 2 is the least upper bound that we need, but it isn't rational.

## Two ways to define the reals formally

There are two ways of formally constructing the reals from the rationals. The simpler way is as Dedekind cuts, which see. A Dedekind cut could be thought of as a formal least upper bound. That is, the real number $\sqrt{2}$ is, in effect, defined as "the least upper bound of the set of numbers whose squares are less than 2". (This is a common motif in theoretical mathematics: you define something as the abstract set of things that have the properties that you want, and then show that they obey all the familiar properties of the original set.) The set thus created is "Dedekind complete", which is the same as having the least upper bound and greatest lower bound properties.

The second way is as Cauchy sequences, which see. The rationals are not "metrically complete" or "Cauchy complete", in that Cauchy sequences of rationals do not necessarily converge. The reals can be, in effect, defined as "the things that Cauchy sequences would converge to".

The reals are both Dedekind complete and metrically complete. The rationals are neither. (In general, the two properties are not the same: the complex numbers are metrically complete but not Dedekind complete.)

## Infinity

The real numbers do not include infinity. Every real number is finite, though the set of reals is an infinite set. Infinity is not a number. However, there are non-standard models of real numbers which include $\infty$ or include both $\infty$ and $-\infty$.

There is no largest real number, because you can always make a real number larger by adding 1 (or 137.035 or $6.023\times 10^{23}$) to it, and, similarly, no smallest real number.

## Topological properties

This article/section deals with mathematical concepts appropriate for a student in late university or graduate level.

In the field of topology, an important difference between the rational and real numbers is that the rational numbers are "totally disconnected", whereas the real numbers are connected. To see this, observe that the set of numbers strictly less than $\sqrt{2}$ is open in both sets, but it is also closed in the rationals but not in the reals. In each case, it has a limit point, namely $\sqrt{2}$, which it clearly does not contain.
In the reals, the fact that $\sqrt{2}$ is not contained makes the set not closed, whereas in the rationals it doesn't matter, because that point doesn't exist. In the rationals, the fact that the set is both open and closed makes it disconnected from its complement. Any nonempty open set in the rationals contains situations like this, which makes the rationals totally disconnected. The fact that the reals are topologically connected makes them the standard starting point for many topological topics, such as homotopy, homology, and manifolds.

## History

The ancient Greek mathematicians (Archimedes, Euclid, Pappus, Pythagoras and Zeno) were perhaps the first people to have created the abstract notion of a "number" (a real number, not just an integer) to represent a geometrical measurement. They developed the correspondence between numbers and measurements such as distances, areas, and angles. To honor Archimedes' contribution, real analysts have named a property of the real numbers the Archimedean property.

Real analysis remained in geometry's shadow until the development of the subfield of calculus. This subject subsumed all geometry known at the time, creating the field of analytic geometry.
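As an editorial illustration of the Dedekind-cut construction mentioned under "Two ways to define the reals formally" above (this worked example is an addition, not part of the original article), here is the cut that produces $\sqrt{2}$:

$$A = \{\, q \in \mathbb{Q} : q \le 0 \ \text{or}\ q^2 < 2 \,\}, \qquad B = \mathbb{Q} \setminus A.$$

$A$ has no greatest element: if $q \in A$ with $q > 0$, put $q' = \frac{2q+2}{q+2}$; then $q' - q = \frac{2 - q^2}{q+2} > 0$ and $(q')^2 - 2 = \frac{2(q^2 - 2)}{(q+2)^2} < 0$, so $q'$ is a larger element of $A$. The pair $(A, B)$ therefore marks the "hole" where $\sqrt{2}$ ought to be, and in the construction this cut *is* the real number $\sqrt{2}$: a formal least upper bound for $A$.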
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 15, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9499455094337463, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/20875/does-the-existence-of-dualities-imply-a-more-fundamental-structure
# Does the existence of dualities imply a more fundamental structure?

I was wondering if the existence of some kind of duality in physics always implies the existence of some underlying, more fundamental structure/concept. Let me give a few examples from history:

1. Wave-particle duality $~\Rightarrow~$ existence of the quantum particle.
2. Heisenberg's matrix mechanics $\&$ Schrodinger's wave formulation of QM $~\Rightarrow~$ existence of the Dirac formulation of QM.
3. Magnetic field $\&$ electric field $~\Rightarrow~$ existence of an electromagnetic theory.

Similarly, can one conclude that,

1. for example, from the AdS/CFT correspondence,
2. or more generally, because there is a holographic equivalence between quantum gravity in $D+1$ dimensions and QFT in $D$ dimensions,

there must be a more fundamental underlying structure that incorporates both sides of the correspondence?

Comments:

- Can you please clarify your abbreviations to me? – Bernhard Feb 12 '12 at 13:32
- QM = quantum mechanics. QFT = quantum field theory. CFT = conformal field theory. AdS/CFT = anti-de Sitter/CFT. – Manishearth♦ Feb 12 '12 at 13:38
- If we have two different descriptions of the same phenomenon, then yeah, we always hope that there is a "unified" picture that tells us why. Logically there's no reason that this must be the case though. Maybe we have a philosopher here who can argue that a thing's thingness depends on it having a unique description. – wsc Feb 12 '12 at 22:46
- @Qmechanic why is this closed? – Dilaton Dec 29 '12 at 17:13
- Hi @Qmechanic, maybe it is somewhat broad, but I thought it nevertheless contains an interesting, ordered line of thought which could be addressed by someone (else) knowledgeable enough. I would be interested in seeing such an answer too. Maybe Lumo (or Ron :-( ...) could do it. – Dilaton Dec 29 '12 at 17:49

## 2 Answers

I think AdS/CFT went the other way. People knew about the unifying concept (string theory) first and "derived" AdS/CFT from worldsheet duality in string theory. But I guess it could've gone the other way in an alternate history.

---

I asked Lumo if he has an answer to this question. He did not like the question too much... ;-) Nevertheless he gave some nice clarifying comments and explained what is wrong with it and how he thinks about the issues mentioned. I think his comments make a very decent answer here anyway (and hope he does not mind that I post them here). So here we go:

> Dualities are obviously important and unify several seemingly different descriptions. This is by definition of dualities. In this most general sense, they are analogous to the wave-particle dualism and the unification of pictures in quantum mechanics, and perhaps other things (the unification of electricity and magnetism is substantially different).
>
> The quantum particle is the same thing as the object displaying both wave and particle properties, so the "two" concepts related by the arrow on that line are really the same concept, and the whole relationship claim is vacuous or tautological. In the same way, the matrix and wave mechanics may be unified, but the unification is nothing else than the Dirac formalism for quantum mechanics, so the two parts of the relationship are - assuming that the relationship between the pictures is found - a priori equivalent, too.
>
> We already have this description for dualities in string theory, sort of, too. One may discuss physics in a description-invariant way.
>
> The problem is that we don't have a universal definition of the "Hamiltonian" or "action", but we may still write the general equations with a Hamiltonian or an action that is duality-invariant. This situation differs from the simplest models of quantum mechanics, where the Hamiltonian could have been written down "exactly". In string theory, the expressions for the "Hamiltonian" or whatever defines the dynamics depend on the description and are often incomplete, so the dualities can't be formulated as a sharp mathematical claim at this moment. They're still perfectly true according to all the evidence and tests we may do. Assuming it is indeed the case, and it seems to be the case beyond any reasonable doubt, the equivalence is the same equivalence as the equivalence between pictures (Heisenberg/Schrodinger) in quantum mechanics or representations (position/momentum) in quantum mechanics.
>
> Electromagnetism is a bit different because the electromagnetic field contains both the electric vector and the magnetic vector as independent degrees of freedom, so electromagnetism isn't about 2 views on the same 1 thing. It is about 2 things that naturally collaborate and are linked by symmetries and transform into each other under the Lorentz transformations. It's a different relationship than the equivalence in dualities.

Comments:

- I made this answer CW because it contains what Lumo said; so everybody who agrees with his comments can freely upvote and I don't need to have a bad conscience for any upvotes that may occur :-) – Dilaton Mar 20 at 21:54
- Hi @Dilaton: Do you have an Internet link to the above quote by Lubos Motl? – Qmechanic♦ Mar 20 at 22:19
- @Qmechanic, not directly; it is in the comments below a TRF article, and at these comments I can not link directly ... I can at most link to the corresponding TRF post. – Dilaton Mar 20 at 22:22
- Ok. Link good enough for now. – Qmechanic♦ Mar 20 at 22:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9391041994094849, "perplexity_flag": "middle"}
http://mathhelpforum.com/algebra/98925-factoring-polynomials-factoring-techniques.html
# Thread:

1. ## Factoring Polynomials and factoring techniques

Hello. I have come across 4 questions that I simply can't solve with ease. I don't know if there is a universal factoring technique that I am unfamiliar with, but my process for solving these questions seems very long, and I would like to seek assistance in order to determine whether what I am doing is right or wrong. Thank you for your assistance and cooperation.

[The attached thumbnails showing the four questions are not preserved in this extraction.]

2. Start by factoring out the common factor ...

$3(5-2x)\textcolor{red}{(7x-8)^2} - 2\textcolor{red}{(7x-8)^3}$

$\textcolor{red}{(7x-8)^2}[3(5-2x) - 2(7x-8)]$

Distribute and combine like terms in the [brackets] ...

$\textcolor{red}{(7x-8)^2}[15-6x-14x+16]$

$(7x-8)^2(31 - 20x)$

Follow the same procedure with the other expressions.

3. Thank you very much, that was much simpler than I expected. I was expanding the whole thing, which made it take very long.
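As an editorial check (not part of the original thread), the factorization can be verified mechanically with Python's sympy library:

```python
import sympy as sp

x = sp.symbols('x')

# The expression from the thread:
expr = 3*(5 - 2*x)*(7*x - 8)**2 - 2*(7*x - 8)**3

# sympy factors out the common (7x-8)^2, just as the worked solution does:
factored = sp.factor(expr)
print(factored)   # e.g. -(7*x - 8)**2*(20*x - 31), the same as (7x-8)^2 (31-20x)

# Confirm it matches the thread's answer exactly:
assert sp.simplify(factored - (7*x - 8)**2*(31 - 20*x)) == 0
```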
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9737157225608826, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?t=328121
Physics Forums

## Understanding angular momentum

I'm trying to wrap my head around angular momentum using the following setup: there's a ball on a frictionless table that's connected to a string. The other end of the string is connected to a hole in the table. In the initial position the ball spins around the hole. Now, we start pulling the string through the hole, thus shortening the amount that's connected to the ball. By conservation of angular momentum the ball is now spinning faster because its radius is smaller.

Now, angular momentum is conserved because the force of the string only pulls the ball towards the center. But in that case, what causes the angular velocity to increase? I think that the answer lies in the fact that even though the force is radial, it still causes acceleration in the direction of the spin via the Coriolis acceleration. Assuming that the speed with which the ball moves towards the center is a constant $$v_{c}$$, the angular acceleration caused by the Coriolis acceleration is

$$a(r) = -2 \omega v_{c} = -\frac{2 v(r) v_{c}}{r}$$

This is a differential equation for $$v(r)$$ whose solution is

$$v(r) = \frac{v_{0} r_{0}}{r} \implies v(r)\,r = v_{0}r_{0}$$

where $$v_{0}$$ and $$r_{0}$$ are the initial velocity in the direction of the spinning and the initial radius. This equation is exactly the conservation of angular momentum. So apparently, the conservation of angular momentum is just another way of saying that forces at right angles to the velocity still change the velocity because of the Coriolis acceleration. Is my conclusion correct? If so, how can I solve the differential equation when $$v_{c}$$ changes with $$r$$? Thanks

---

To understand this, first, instead of thinking about the angular velocity, think of how the tangential velocity changes. The speed of the ball |v| is constant even when you pull the string in closer; however, because the ball is now bound to a shorter circumference, it must change direction more rapidly. Take your finger and trace out a large circle, then at the same speed trace out a smaller circle, and you will find that you complete the smaller circle much faster. The reason the angular velocity increases is that it is measured in radians/degrees, which is independent of the actual distance the ball has to travel. So it just sees that the ball completes the circle faster, even though it is moving at the same speed, just over a shorter distance (less time and less distance = same speed). The trade-off for making the ball faster is that now you have to hold onto the string tighter, because the force required to keep the ball along the new path has increased.

---

> Quote by daniel_i_l: even though the force is radial

The force is radial, but not perpendicular to the spiral path of the ball that occurs when the string is pulled in or released out. There's a non-zero component of force in the direction of the ball's motion, which speeds up the ball (or slows it down if the tension is reduced). Here's a diagram, showing a short line segment perpendicular to the path of the ball and the radial string. [The diagram is not preserved in this extraction.]

For the math part, angular momentum is $m\,r\,s$ (with $s$ the speed), so if $r$ is halved, then to conserve momentum it becomes $m\,(r/2)\,(2s)$.
Based on this assumption, the relationship between tension and $r$ is:

$$t(r)\ =\ -m\ (s_0)^2\ (r_0)^2\ /\ r^3$$

So if the string is pulled from $r_0$ to $r_0/2$, the work done is:

$$\int _{r_0} ^{r_0/2} (-m\ (s_0)^2\ (r_0)^2\ /\ r^3)\ dr$$

$$\left[ \frac{1}{2} m\ (s_0)^2\ (r_0)^2\ /\ r^2 \right]_{r_0}^{r_0/2}$$

$$\frac{1}{2} m\ (s_0)^2\ (r_0)^2\ /\ (\tfrac{r_0}{2})^2 - \frac{1}{2} m\ (s_0)^2\ (r_0)^2\ /\ (r_0)^2$$

$$\frac{4}{2} m\ (s_0)^2 - \frac{1}{2}\ m\ (s_0)^2$$

$$\frac{3}{2} m\ (s_0)^2$$

Original KE:

$${KE}_0 = \frac{1}{2} m (s_0)^2$$

KE after work done:

$${KE}_1= \frac{1}{2} m(s_0)^2 + \frac{3}{2} m\ (s_0)^2 = \frac{4}{2} m\ (s_0)^2 = \frac{1}{2} m (2\ s_0)^2$$

So if the radius is decreased by 1/2, the speed is doubled and the tension is increased by a factor of 8.

---

If the goal was to keep the speed of the ball constant, this can be accomplished by having the string wind or unwind around a pole. Here it appears that angular momentum isn't being conserved, unless you note that the tension of the string on the pole exerts a torque on whatever the pole is attached to (usually the earth); only by considering what the pole is attached to will angular momentum be conserved. In this case the force is not radial, but it is perpendicular to the path of the ball. The math demonstrating that the line perpendicular to the path of the ball is the same as the tangent line to the pole is shown here: http://www.physicsforums.com/showpos...2&postcount=32

---

> Quote by Jeff Reid: For the math part, angular momentum is $m\,r\,s$, so if $r$ is halved, then to conserve momentum it becomes $m\,(r/2)\,(2s)$. Based on this assumption, the relationship between tension and $r$ is: $t(r) = -m\,(s_0)^2 (r_0)^2 / r^3$

I don't understand how you got this equation for the tension. Are you taking into account the acceleration in both the tangent and radial directions? Thanks

Can anyone explain that equation please? Thanks

---

> Quote by daniel_i_l: Are you taking into account the acceleration in both the tangent and radial directions?

Only the radial direction, since that's the direction in which the string is being moved, so that it performs work. Movement perpendicular to the string doesn't involve any work performed by the string, so I'm only considering changes in the radial direction.

$$t(r_0) = -m\,(s_0)^2 / r_0$$

Let $r = c\, r_0$; then $s = (1/c)\, s_0$, and

$$t(r) = -m\,\big((1/c)\, s_0\big)^2 / (c\, r_0) = -m\, s_0^2 / (c^3 r_0) = -m\, s_0^2 r_0^2 / (c^3 r_0^3) = -m\, s_0^2 r_0^2 / (c\, r_0)^3 = -m\, s_0^2 r_0^2 / r^3$$

---

Thank you.
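An editorial numerical check (not part of the original thread) of the results above: integrate the tangential equation of motion from the first post while the string is pulled in at constant radial speed, and confirm that $v\,r$ stays constant, the speed doubles, and the kinetic energy quadruples when the radius is halved. The parameter values are illustrative.

```python
# Illustrative parameters (arbitrary units)
m, r0, s0, vc = 1.0, 1.0, 1.0, 0.1   # mass, initial radius, initial speed, pull-in rate
dt = 1e-5

r, v = r0, s0
while r > r0 / 2:
    v += (v * vc / r) * dt           # tangential acceleration: dv/dt = v*vc/r
    r -= vc * dt                     # string pulled in at constant radial speed vc

print("v * r     :", v * r)                          # stays ~ s0*r0 = 1.0
print("v at r0/2 :", v)                              # ~ 2*s0: speed has doubled
print("KE ratio  :", (0.5*m*v**2) / (0.5*m*s0**2))   # ~ 4, matching the (3/2) m s0^2 of work
```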
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 19, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9160826206207275, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2008/03/09/functions-of-bounded-variation-iii/?like=1&source=post_flair&_wpnonce=18b5244a97
The Unapologetic Mathematician

# Functions of Bounded Variation III

I've been busy the last couple of days, so this post got delayed a bit.

We continue our study of functions of bounded variation by showing that total variation is "additive" over its interval. That is, if $f$ is of bounded variation on $\left[a,b\right]$ and $c\in\left[a,b\right]$, then $f$ is of bounded variation on $\left[a,c\right]$ and on $\left[c,b\right]$. Further, we have $V_f(a,b)=V_f(a,c)+V_f(c,b)$.

First, let's say we've got a partition $(x_0,...,x_m)$ of $\left[a,c\right]$ and a partition $(x_m,...,x_n)$ of $\left[c,b\right]$. Then together they form a partition of $\left[a,b\right]$. The sum for both partitions together must be bounded by $V_f(a,b)$, and so the sum of each partition is also bounded by this total variation. Thus $f$ is of bounded variation on each subinterval. This also establishes the inequality $V_f(a,c)+V_f(c,b)\leq V_f(a,b)$.

On the other hand, given any partition at all of $\left[a,b\right]$ we can add the point $c$ to it. This may split one of the parts of the partition, and thus increase the sum for that partition. Then we can break this new partition into a partition for $\left[a,c\right]$ and a partition for $\left[c,b\right]$. The first will have a sum bounded by $V_f(a,c)$, and the second a sum bounded by $V_f(c,b)$. Thus we find that $V_f(a,b)\leq V_f(a,c)+V_f(c,b)$. So, with both of these inequalities, we have established the equality we wanted.

Now we can define the "variation function" $V$ on the interval $\left[a,b\right]$. Just set $V(x)=V_f(a,x)$ (and $V(a)=0$). It turns out that both $V$ and $D=V-f$ are increasing functions on $\left[a,b\right]$. Indeed, given points $x<y$ in $\left[a,b\right]$ we can see that $V_f(a,y)=V_f(a,x)+V_f(x,y)$, and so $V(x)\leq V(y)$. On the other hand, $D(y)-D(x)=V(y)-V(x)-(f(y)-f(x))=V_f(x,y)-(f(y)-f(x))$. But by definition we must have $f(y)-f(x)\leq V_f(x,y)$! And so $D(x)\leq D(y)$.

Given a function $f$ of bounded variation, we have constructed two increasing functions $V$ and $D$. It is easily seen that $f=V-D$, so any function of bounded variation is the difference between two increasing functions. On the other hand, we know that increasing functions are of bounded variation. And we also know that the difference of two functions of bounded variation is also of bounded variation. And so the difference between two increasing functions is a function of bounded variation. Thus this condition is both necessary and sufficient! Even better, since many situations behave nicely with respect to differences of functions, it's often enough to understand how increasing functions behave. Then we can understand the behavior of functions of bounded variation just by taking differences.

For example, we started talking about functions of bounded variation to discuss integrators $\alpha$ in Riemann-Stieltjes integrals. If we study these integrals when $\alpha$ is increasing, then we can use the linearity of the integral with respect to the integrator to understand what happens when $\alpha$ is of bounded variation!

Posted by John Armstrong | Analysis
It’s no loss of generality, then, to assume that is increasing. We also remember that the [...] Pingback by | January 12, 2010 | Reply 3. [...] is sort of like how we found that functions of bounded variation can be written as the difference between two strictly increasing func…. In fact, if we’re loose about what we mean by “function”, and [...] Pingback by | May 7, 2010 | Reply 4. hi , i am aya from egypt .it is good but i want to know how to construct a partition to prove that a function is of bounded variation or not and i have some problems that i cannot solve or at least have some idea about the solution via f(x)=(sin x)^2 on the interval [0,3.14]. thanks a lot Comment by aya mohamed hussein | October 24, 2010 | Reply • That function is pretty clearly continuous on the given interval. That should at least tell you the direction to try to prove. Comment by | October 24, 2010 | Reply « Previous | Next » About this weblog This is mainly an expository blath, with occasional high-level excursions, humorous observations, rants, and musings. The main-line exposition should be accessible to the “Generally Interested Lay Audience”, as long as you trace the links back towards the basics. Check the sidebar for specific topics (under “Categories”). I’m in the process of tweaking some aspects of the site to make it easier to refer back to older topics, so try to make the best of it for now.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 43, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9406548738479614, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/7784/how-does-the-curvature-of-spacetime-induce-gravitational-attraction
# How does the curvature of spacetime induce gravitational attraction?

I don't know how to ask this more clearly than in the title.

Comments:

- Consider posting where you heard about this, what your current understanding is, and what, specifically, you find confusing about it. Right now we have no idea if we need to write answers assuming you understand basic spacetime concepts, or whether this is some phrase you read somewhere and you're starting from elementary physics. – Adam Davis Mar 30 '11 at 5:24
- I have a minor in physics and have taken 4 semesters of Physics at the University level (all As and Bs). – JoeHobbit Mar 30 '11 at 13:39

## 5 Answers

I'm a bit worried about getting a reputation for citing myself too much, but I'll go for it anyway. (In my defense, I always admit it when I'm doing it!) John Baez's and my pedagogical paper The Meaning of Einstein's Equation aims to address exactly this question. We describe the meaning of spacetime curvature and the way that Einstein's equation connects spacetime curvature to the matter content of a region of space. As one example, we use this description to heuristically "derive" Newtonian gravity.

I think the most important point is that, in "ordinary" situations involving particles moving at speeds much less than $c$, the "time" part of spacetime curvature is by far the most important part. Intuition about curved space (as opposed to spacetime) only gets you so far.

Comments:

- Hey man, if it is possible to cite yourself, then by all means do so. Not many people can claim such a privilege. – user346 Mar 30 '11 at 15:05
- This is actually a very nice paper. – MBN Mar 30 '11 at 15:27
- Thanks! (I only mentioned the self-citing issue because I noticed there was a discussion about it on Meta. I'm not really worried about it, at least in this case, because the paper really is relevant.) – Ted Bunn Mar 30 '11 at 15:45
- I love that paper - never put two and two together that you were an author on it. Thanks. – Mark Eichenlaub Mar 30 '11 at 18:51
- As David's answer there suggests, it is absolutely acceptable to self-cite if that is disclaimed (you did so) and is really helpful to answer the question (it does). I wouldn't see the point of you having to repeat what is stated there (especially since you mention the main point here) or searching for another citation that you are probably not as acquainted with as with your own (and John Baez's, of course) work. – Tobias Kienzler Mar 31 '11 at 7:38

---

The simple answer is that the curvature of spacetime does not induce gravitational attraction. It describes it. In all of classical physics (not only general relativity), the motion of bodies can be described as motion along the geodesics of a suitable differential geometry. Body A acts on body B by influencing the curvature of the geodesic along which body B moves. See the illustrations on this page of my personal website for both classical electrodynamics and general relativity.

Comments:

- why is the link about "citing yourself" here? – JoeHobbit Mar 30 '11 at 13:44
- @user2843 Because Koantum's answer links to his own web page without identifying it as such. – Mark Eichenlaub Mar 30 '11 at 18:52
- Apologies, @Mark. I now made it clear that the link is to my personal website. – Koantum Mar 31 '11 at 2:52
- Great, thanks. – Mark Eichenlaub Mar 31 '11 at 4:18

---

The best way to see this, I think, is using Einstein's 1907 approximation.
Einstein noted that gravity slows down clocks, so that where the potential is more negative, clocks run slow, and where the potential is small, clocks run fast: the rate at which a clock ticks at position $x$ is $\sqrt{1+2\phi(x)}$ (with $\phi$ the Newtonian potential, negative near masses, in units where $c = 1$). Assume that space is completely flat, and only the clock rate changes from point to point. This is a curved spacetime, but the space part is flat. Then you ask: "what is the curve in spacetime which is extremal for the relativistic distance from point A to point B?" This is the analog of the question "what is the shortest path from A to B" in geometry. The integral for this is:

$$\int \sqrt{1 + 2\phi(x) - |v|^2}\; dt \approx \int \left( 1 + \phi(x(t)) - \frac{v^2}{2} \right) dt$$

which, if you multiply by $-m$ and discard the constant (the overall factor does not affect which paths are extremal), gives the action for a Newtonian particle in a gravitational potential. So Einstein's law reduces to Newton's. (A worked Euler–Lagrange check of this step appears after the answers below.)

The remaining work is to see that the equation for the time-time component of the metric reduces to Poisson's equation for static masses, and this is not too hard to do, but annoying to do twice or to write up. That this reduction works was one of Einstein's criteria for a good field equation.

---

If you are in flat spacetime and move from A to B, the easiest way is to travel along a segment of the straight line between points A and B. When there is gravitation, spacetime is curved, so the easiest way is still to follow the line which is shortest, but it is no longer a straight line, because of the curvature. The natural choice is the line which remains within spacetime and is "minimal", and this is exactly the physical trajectory of a small particle in such a curved space.

As the gravitational mass of a body and its inertial mass are exactly equal, there is an equivalence between this geometrical description (where the curvature depends on gravitational mass) and the dynamical one, which uses forces to describe particle motion (because inertial mass equals gravitational mass, the two pictures are perfectly equivalent).

So when you go from point A towards point B you follow not a straight line but a more complicated curve, such as a parabola or an ellipse, because it has the "minimal" property among accessible trajectories. Of course, you may run some kind of engine and travel on a different, non-minimal trajectory, for example a straight line. This costs you fuel, and is not minimal at all... As planets, meteorites, comets, thrown rocks, etc. do not have their own engines, they have to move along "minimal" trajectories.

As "minimal" lines in curved spacetime are exactly the trajectories of bodies in gravitational fields, these two pictures describe the same motion in different languages. Of course attraction has its cause in curvature when you use the geometrical language: many nearby trajectories draw closer and closer to each other. So you may say: gravitation is represented by curvature.

---

It is a geometrical theory, so best explained with graphics. [The figures from this answer are not preserved in this extraction.]
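An editorial addition spelling out the Euler–Lagrange step referenced in the 1907-approximation answer above (assuming units with $c = 1$ and the convention that $\phi$ is negative near masses). With the first-order Lagrangian from the expansion,

$$L(x, v) = 1 + \phi(x) - \frac{v^2}{2}, \qquad \frac{d}{dt}\frac{\partial L}{\partial v} = \frac{\partial L}{\partial x} \;\Longrightarrow\; \frac{d}{dt}(-v) = \nabla \phi \;\Longrightarrow\; \dot{v} = -\nabla\phi,$$

which is exactly Newton's law for a test particle in the potential $\phi$.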
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9422824382781982, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/tagged/algebraic-number-theory+intuition
# Tagged Questions

1 answer, 126 views

### How can a subfield of an abelian extension fail to be cyclic when subjected to a norm-like condition. (How can I understand the supplied explanation)

I recently posted a question on MathOverflow (if you're interested it can be found here). While some answers were quickly produced, there were a few points that I found confusing. I requested some ...

3 answers, 397 views

### Intuition regarding Chevalley-Warning Theorem

Three versions of the theorem are stated on pages 1-2 in these notes by Pete L. Clark: http://math.uga.edu/~pete/4400ChevalleyWarning.pdf Could anyone offer some intuitive way to think about this ...

3 answers, 581 views

### What is the intuition behind Gauss sums?

Let $\chi$ be a character on the field $F_p$, and fix some $a \in F_p$. We define a Gauss sum to be: $g_a (\chi) = \sum_{t\in F_p}\chi(t)\zeta^{at}$ where $\zeta$ is a primitive $p^{th}$ root of ...
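An editorial illustration (not from the original page) of the Gauss sum defined in the last excerpt: the following Python sketch computes $g_1(\chi)$ for the quadratic character (Legendre symbol) mod a small prime and checks the classical fact that $|g_1(\chi)|^2 = p$ for a nontrivial character. The choice p = 7 is arbitrary.

```python
import cmath

p = 7
zeta = cmath.exp(2j * cmath.pi / p)   # a primitive p-th root of unity

def chi(t):
    """Legendre symbol (t/p): the quadratic character on F_p."""
    if t % p == 0:
        return 0
    return 1 if pow(t, (p - 1) // 2, p) == 1 else -1

g = sum(chi(t) * zeta**t for t in range(p))   # Gauss sum with a = 1
print(abs(g) ** 2)   # ~ 7.0, i.e. |g|^2 = p
```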
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9455080032348633, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-algebra/130295-hnn-extension.html
# Thread:

1. ## HNN extension

I have a question. Let $G=\langle b,t \mid t^{-1}b^{\beta}tb^{\beta} \rangle$ be an HNN extension of the base group $\langle b \rangle$ with stable letter $t$. Is it true that $\langle b \rangle \cap \langle t \rangle=1$? I think it is true.

2. > Originally Posted by deniselim17: I have a question. Let $G=\langle b,t \mid t^{-1}b^{\beta}tb^{\beta} \rangle$ be an HNN extension of the base group $\langle b \rangle$ with stable letter $t$. Is it true that $\langle b \rangle \cap \langle t \rangle=1$? I think it is true.

As an HNN extension, we have that in fact $G=\langle\, \langle b\rangle, t \mid t^{-1}b^{k}t=\phi(b^{k})\,\rangle$, for some isomorphism $\phi$ between subgroups of $\langle b\rangle$. Since you give $G=\langle b,t \mid t^{-1}b^{\beta}tb^{\beta} \rangle$, this means you chose $\phi(b^{\beta})=b^{-\beta}$, inversion on the associated subgroup, so your group is built (as any HNN extension is) from $\langle b\rangle$ glued to itself via this involution. As $t$ is a "foreign" letter to the group $\langle b\rangle$, clearly $\langle b\rangle \cap \langle t\rangle=1$.

Tonio

3. > Originally Posted by deniselim17: [as quoted above]

HNN-extensions were constructed to show that we can embed a group in another group in a special way (if we are given two isomorphic subgroups of a group, then the group can be embedded in a bigger group such that these subgroups are conjugate; I can't remember who asked this question - anyone know? It's been bugging me since I started typing this!)

So, without proving anything, you know that if $G^{\ast} = \langle G, t : A^t = A\phi\rangle$ then $G$ embeds in $G^{\ast}$, and clearly $t \notin G$. My point is that the result is clear and is the whole point of HNN-extensions! Although it does still require proof...
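An editorial sketch (not part of the original thread) of how the "it still requires proof" step goes, via Britton's lemma for HNN extensions. Suppose $t^{n} = b^{m}$ in $G$ with $n \neq 0$. Then the word

$$w = t^{n} b^{-m}$$

represents the identity, yet every $t$-letter in $w$ occurs with the same sign, so $w$ contains no pinch (no subword $t^{-1} g t$ or $t g t^{-1}$ with $g$ in an associated subgroup). Britton's lemma says that any word containing at least one $t$-letter which represents the identity must contain a pinch, a contradiction. Hence $n = 0$, and then $b^{m} = 1$ in the base group; when $b$ has infinite order there, $m = 0$ as well. Therefore $\langle b \rangle \cap \langle t \rangle = 1$.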
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 23, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9505683183670044, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/182046/the-emptiness-problem-for-lunatic-and-crazy-turing-machines
# The emptiness problem for “lunatic” and “crazy” Turing machines

A Crazy Turing Machine is the same as a Turing machine with one stripe, except for the fact that after every ten steps the head jumps back to the beginning of the stripe.

A Lunatic Turing Machine is the same as a Turing machine with one stripe, except for the fact that after 10, 100, 1000, ... steps the head jumps back to the beginning of the stripe.

How come that for a crazy Turing machine we can decide the emptiness problem ($L(M)= \emptyset$), while for the lunatic one it stays in $co\text{-}RE \setminus R$? I didn't come up with any smart conclusion.

Comments:

- The standard word for the tape of a Turing machine is "tape" rather than "stripe". – Carl Mummert Aug 13 '12 at 12:45
- Thanks Carl, I'll keep it in mind for the next time. – Jozef Aug 13 '12 at 12:56
- Jozef, I don't really understand your remark at the end about the complexity of the usual emptiness problem. You write RE-co-RE, but this would seem to be empty. I think you mean to suggest that the usual emptiness problem is co-RE (or what I would call co-CE), which it is, since the programs accepting a non-empty set are enumerable, as one can try everything and then include a program on the list whenever it is found to accept something. – JDH Aug 13 '12 at 13:27
- @JDH: Thanks, I fixed it. – Jozef Aug 13 '12 at 13:29

## 1 Answer

The crazy Turing machine can never get beyond the $10^{th}$ cell of the tape, and so ultimately has only finitely many possible configurations. The complete spectrum of behavior of such a machine is therefore computable by a regular Turing machine, which can build the graph of all configurations, compute which configurations reach which others, and then determine whether there is an accepting path through these configurations. In particular, the emptiness problem for the crazy Turing machines is decidable.

According to my understanding of the lunatic Turing machines, however, they get longer and longer periods of time during which they may perform "normal" computation, and so they can systematically make progress on deciding a semi-decidable question. Whenever the lunatic behavior sets in, this will be detectable, and the machine can simply return to where it was and continue its calculation further. It was merely interrupted by a small hiccup. So these machines are fully as powerful as regular Turing machines, and so their emptiness problem is co-c.e. (since non-emptiness is verified by an accepting instance).

Comments:

- Thank you @JDH. – Jozef Aug 13 '12 at 12:50
- I should have known that the "crazy" one is the same as a bounded TM :) – Jozef Aug 13 '12 at 12:55
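To make the first paragraph of the answer concrete, here is an editorial Python sketch (not part of the original answer) of the decision procedure. The machine encoding and the 11-cell bound are illustrative assumptions: between homing jumps the head makes at most 10 moves from cell 0, so cells beyond index 10 are never read, and acceptance depends only on that prefix of the input.

```python
from itertools import product

def crazy_tm_nonempty(sigma, delta, q0, q_acc, blank='_', width=11):
    """Decide L(M) != {} for a 'crazy' TM whose head jumps back to cell 0
    after every 10 steps.  delta maps (state, symbol) -> (state, symbol, move)
    with move in {-1, +1}; missing entries mean 'halt'.  We enumerate every
    possible content of the reachable prefix (blanks included), since the
    rest of the input can never influence the computation."""
    for cells in product(sorted(sigma | {blank}), repeat=width):
        tape, state, head, steps = list(cells), q0, 0, 0
        seen = set()
        while True:
            if state == q_acc:
                return True              # some input is accepted: L(M) nonempty
            config = (state, tuple(tape), head, steps % 10)
            if config in seen:
                break                    # this input loops forever: try the next one
            seen.add(config)
            if (state, tape[head]) not in delta:
                break                    # halts without accepting
            state, tape[head], move = delta[(state, tape[head])]
            head = max(0, min(head + move, width - 1))
            steps += 1
            if steps % 10 == 0:
                head = 0                 # the 'crazy' jump back home
    return False
```

The search space is finite (at most (|Σ|+1)^11 tape fillings, each visiting finitely many configurations), which is exactly why emptiness is decidable here and not for ordinary Turing machines.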
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9519325494766235, "perplexity_flag": "middle"}
http://www.reference.com/browse/electron%20spin
Definitions

# Electron spin resonance (ESR) or electron paramagnetic resonance (EPR)

Technique of spectroscopic analysis (see spectroscopy) used to identify paramagnetic substances (see paramagnetism) and investigate the nature of the bonding within molecules by identifying unpaired electrons and their interaction with their immediate surroundings. Unpaired electrons, because of their spin, behave like tiny magnets and can be lined up in an applied magnetic field; energy applied by alternating microwave radiation is absorbed when its frequency coincides with that of precession of the electron magnets in the sample. The graph or spectrum of radiation absorbed as the field changes gives information valuable in chemistry, biology, and medicine.

Encyclopedia Britannica, 2008. Encyclopedia Britannica Online.

# Electron

The electron is a fundamental subatomic particle that was identified, and assigned the negative charge, in 1897 by J.J. Thomson and his team of British physicists. These electrically charged particles, together with the protons and neutrons that comprise atomic nuclei, make up atoms. Electron–electron interaction between atoms is the main cause of chemical bonding. Electrons also play an essential role in electricity and magnetism.

All electrons are identical particles that belong to the first generation of the lepton particle family. Each electron carries a negative elementary charge and participates in electromagnetic and weak interactions. It has a property of intrinsic angular momentum called spin, with a value of ħ/2. The mass of an electron is approximately 1/1836 of that of the proton, and it is believed to be a point particle with no apparent substructure. The properties of the electron are determined by its interaction with other particles.

## Etymology

The English name electron is a combination of the word electric and the suffix -on, with the latter now used to designate a subatomic particle. Both electric and electricity are derived from the Latin ēlectrum, which in turn came from the Greek word ēlektron (ήλεκτρον) for amber, a gemstone that is formed from hardened tree resin (the ancient Greeks noticed that amber, when rubbed with fur, attracted small objects). Apart from lightning, this phenomenon was man's earliest experience of electricity.

## History

As early as 1838–51, the British natural philosopher Richard Laming conceived the idea that an atom is composed of a core of matter surrounded by subatomic particles that had unit electrical charges. Beginning in 1846, the German physicist Wilhelm Weber theorized that electricity was composed of positively and negatively charged fluids, and that their interaction was governed by the inverse square law. After studying the phenomenon of electrolysis in 1874, the Anglo-Irish physicist G. Johnstone Stoney suggested that there existed a "single definite quantity of electricity." He was able to estimate the value of the charge e of a monovalent ion by means of Faraday's laws of electrolysis. However, Stoney believed these charges were permanently attached to atoms and could not be removed. In 1881, the German physicist Hermann von Helmholtz argued that both positive and negative charges were divided into elementary parts, each of which "behaves like atoms of electricity". In 1894, Stoney coined the term electron to represent these elementary charges.

### Identification

Progress in the study of electrons began to occur once cathode ray tubes could be made with a high vacuum in their interior.
The English chemist and physicist Sir William Crookes accomplished this during the 1870s, and was able to show that the luminescence rays appearing within the tube carried energy and moved from the cathode to the anode. Further, by applying a magnetic field, he was able to deflect the rays, thereby demonstrating that the beam behaved as though it were negatively charged. In 1879, he proposed that these properties could be explained by what he termed 'radiant matter'. He suggested that this was a fourth state of matter, consisting of negatively charged molecules that were being projected with high velocity from the cathode.

The German-born British physicist Arthur Schuster expanded upon Crookes's experiments by placing metal plates parallel to the cathode rays and applying an electrical potential between the plates. The resulting field deflected the rays toward the positive plate, providing further evidence that the rays carried negative charge. By measuring the amount of deflection for a given level of current, in 1890 Schuster was able to estimate the charge-to-mass ratio of the ray components. However, this produced such an unexpectedly large value that little credence was given to his calculations at the time.

In 1896, the British physicist J.J. Thomson, with his colleagues John S. Townsend and H. A. Wilson, performed experiments indicating that cathode rays really were unique particles, rather than waves, atoms or molecules as was believed earlier. Thomson made good estimates of both the charge e and the mass m, finding that cathode ray particles, which he called "corpuscles," had perhaps one thousandth of the mass of the least massive ion known (hydrogen). He also showed that their charge-to-mass ratio, e/m, was independent of the cathode material. He further showed that the negatively charged particles produced by radioactive materials, by heated materials and by illuminated materials were universal. The name electron was again proposed for these particles by the Irish physicist George F. FitzGerald, and it has since gained universal acceptance.

While studying naturally fluorescing minerals in 1896, the French physicist Henri Becquerel discovered that they emitted radiation without any exposure to an external energy source. These radioactive materials became the subject of much interest by scientists, including the New Zealand physicist Ernest Rutherford, who discovered they emitted particles. He designated these particles alpha and beta, based on their ability to penetrate matter. In 1900, Becquerel showed that the beta rays emitted by radium could be deflected by an electrical field, and that their mass-to-charge ratio was the same as for cathode rays. This evidence strengthened the view that electrons existed as components of atoms.

The electron's charge was more carefully measured by the American physicist Robert Millikan in his oil-drop experiment of 1909. This experiment used an electrical field to prevent a charged droplet of oil from falling as a result of gravity. This device could measure the electrical charge from as few as 1–150 ions with an error margin of less than 0.3%. Comparable experiments had been done earlier by Thomson's team, using clouds of charged water droplets generated by electrolysis. However, oil drops were more stable than water drops because of their slower evaporation rate, and thus more suited to precise experimentation over longer periods of time.
Around the beginning of the twentieth century, it was found that under certain conditions a charged particle caused a condensation of water vapor. In 1911, Charles Wilson used this principle to devise his cloud chamber, allowing the tracks of charged particles, such as fast-moving electrons, to be photographed. This and subsequent particle detectors allowed electrons to be studied individually, rather than in bulk as had been the case before.

### Atomic theory

By 1914, experiments by the physicists Ernest Rutherford, Henry Moseley, James Franck and Gustav Hertz had largely established the structure of an atom as a dense nucleus of positive charge surrounded by lower-mass electrons. In 1913, the Danish physicist Niels Bohr postulated that electrons resided in quantized energy states, with the energy determined by the angular momentum of the electron's orbits about the nucleus. The electrons could move between these states, or orbits, by the emission or absorption of photons at specific frequencies. By means of these quantized orbits, he accurately explained the spectral lines of hydrogen that were formed when the gas is energized by heat or electricity. However, Bohr's model failed to account for the relative intensities of the spectral lines, and it was unsuccessful in explaining the spectra of more complex atoms.

Chemical bonds between atoms were explained, by Gilbert Newton Lewis in 1916, as the interactions between their constituent electrons. As the chemical properties of the elements were known to largely repeat themselves according to the periodic law, in 1919 the American chemist Irving Langmuir suggested that this could be explained if the electrons in an atom were connected or clustered in some manner. Groups of electrons were thought to occupy a set of electron shells about the nucleus.

In 1924, the Austrian physicist Wolfgang Pauli observed that the shell-like structure of the atom could be explained if each quantum energy state was described by a set of four parameters, as long as each state was inhabited by no more than a single electron. (This prohibition against more than one electron occupying the same quantum energy state became known as the Pauli exclusion principle.) However, what physicists lacked was a physical mechanism to explain the fourth parameter, which had two possible values. This was provided by the Dutch physicists Samuel Goudsmit and George Uhlenbeck when they suggested that an electron, in addition to the angular momentum of its orbit, could possess an intrinsic angular momentum. This property became known as spin, and it explained the previously mysterious splitting of spectral lines observed with a high-resolution spectrograph, a phenomenon known as fine structure splitting.

### Modern particle physics

In his 1924 dissertation Recherches sur la théorie des quanta, the French physicist Louis de Broglie hypothesized that all matter possesses a wave–particle duality similar to photons. That is, under the appropriate conditions, electrons and other matter would show properties of either particles or waves. The wave-like nature of light is exhibited, for example, when light is passed through parallel slits, resulting in interference patterns. In 1927, a similar effect with a beam of electrons was demonstrated by the English physicist George Paget Thomson using a thin metal film, and by the American physicists Clinton Davisson and Lester Germer using a crystal of nickel.
The success of de Broglie's prediction led to the publication, by Erwin Schrödinger in 1926, of the wave equation that successfully describes how electron waves propagate. Rather than yielding a solution that determines the location of an electron over time, this wave equation gives the probability of finding an electron near a position. This approach became the theory of quantum mechanics, which provided an exact derivation of the energy states of an electron in a hydrogen atom. Once the electron spin and the interaction between multiple electrons are taken into consideration, the Schrödinger wave equation successfully predicted the configuration of electrons in atoms with higher atomic numbers than hydrogen. However, for atoms with multiple electrons, the exact solution to the wave equation is much more complicated, so approximations are often necessary.

In 1948, Richard Feynman proposed an alternative view of the electron's quantum mechanical behavior, known as the path integral formulation. He suggested that a particle simultaneously traverses every possible path to reach its destination. Thus, in the double-slit experiment, an electron passes through both of the slits, rather than choosing one or the other. Each of the possible paths can be assigned a value in such a manner that the averaged behavior matches the probabilities computed from the wave function. When an electron is detected following a particular path, the numerical contributions of all the other paths cancel out. This formulation has proved crucial to the subsequent development of theoretical physics.

With the development of the particle accelerator during the first half of the twentieth century, physicists began to delve deeper into the properties of subatomic particles. The first successful attempt to accelerate electrons using magnetic induction was made in 1942 by Donald Kerst. His first betatron reached energies of 2.3 MeV, while subsequent betatrons achieved 300 MeV. In 1947, synchrotron radiation was discovered with a 70 MeV electron synchrotron at GE. This radiation was caused by the acceleration of electrons, moving near the speed of light, through a magnetic field.

With a beam energy of 1.5 GeV, the first high-energy particle collider was ADONE, which began operations in 1968. This device accelerated electrons and positrons (the antiparticle of the electron) in opposite directions, effectively doubling the energy of their collision when compared to striking a static target. The Large Electron-Positron Collider at CERN, which was operational from 1989 to 2000, achieved energies of 209 GeV and made important measurements for the Standard Model of particle physics.

## Characteristics

### Classification

The electron belongs to the group of subatomic particles called leptons, which are believed to be fundamental particles. Electrons have lepton number 1, and have the lowest mass of any electrically charged lepton. In the Standard Model of particle physics, the electron is the first-generation charged lepton. It forms a weak isospin doublet with the electron neutrino, an uncharged first-generation lepton with little or no mass. The electron is very similar to the two more massive particles of higher generations, the muon and the tau lepton, which are identical in charge, spin, and interactions, but differ in mass. All members of the lepton group belong to the family of fermions. This family includes all elementary particles with half-odd-integer spin; the electron has spin ½.
Leptons differ from the other basic constituent of matter, the quarks, by their lack of strong interaction. The antiparticle of an electron is the positron, which has the same mass and spin as the electron but a positive rather than negative charge. The discoverer of the positron, Carl D. Anderson, proposed calling standard electrons negatrons, and using electron as a generic term to describe both the positively and negatively charged variants. This usage of the term "negatron" is still occasionally encountered today, and it may also be shortened to "negaton".

### Fundamental properties

When an electron is stationary, its rest mass is 9.11 × 10⁻³¹ kg. On the atomic scale, this is equal to 5.489 × 10⁻⁴ u, where 1 u is one-twelfth the mass of a neutral ¹²C atom. Based on Einstein's principle of mass–energy equivalence, this mass corresponds to a rest energy of 0.511 MeV, where an eV, or electron volt, is defined as the energy acquired by an electron being accelerated through an electrical potential of one volt. The relative mass of the electron (when compared to the remainder of a hydrogen atom) is given by the proton-to-electron mass ratio, which is about 1836. This ratio is one of the fundamental constants of physics, and the Standard Model of particle physics assumes this and other constants are unchanging. Astronomical measurements show that the ratio has held the same value for at least half the age of the universe. However, the rest energy of the electron has been shown to vary by 10⁻⁶–10⁻⁹ eV because of local fluctuations of temperature and magnetic field.

Electrons have an electric charge of −1.602 × 10⁻¹⁹ C, which is used as a standard unit of elementary charge for subatomic particles. Within the limits of experimental accuracy, the electron charge is identical to the charge of a proton, but with the opposite sign. As the symbol e is used for the constant of electrical charge, the electron is commonly symbolized by e⁻, where the minus sign indicates the negative charge.

The electron is described as a fundamental or elementary particle. It has no known substructure. Hence, for convenience, it is usually defined or assumed to be a point charge with no spatial extent: a point particle. Observation of a single electron in a Penning trap shows the upper limit of the particle's radius is 10⁻²² m. The classical electron radius is 2.8179 × 10⁻¹⁵ m. This is the radius that is inferred from the electron's electric charge, by using the classical theory of electrodynamics alone, ignoring quantum mechanics.

Several elementary particles are known to spontaneously decay into different particles. An example is the muon, which decays into an electron and two neutrinos with a mean lifetime of 2.2 × 10⁻⁶ seconds. However, the electron is thought to be stable on theoretical grounds: an electron decaying into a neutrino and a photon would mean that electrical charge is not conserved. The experimental lower bound for the electron's mean lifetime is 4.6 × 10²⁶ years, at a 90% confidence level.

### Quantum mechanics

As with all particles, electrons can also act as waves. This is called wave-particle duality and can be demonstrated using the double-slit experiment. The wave-like nature of the electron allows it to pass through two parallel slits simultaneously, rather than just one slit as would be the case for a classical particle. In quantum mechanics, the wave-like property of an electron is described mathematically by the wavefunction, which is represented by the Greek letter psi (Ψ).
When this function is squared, it gives the probability that an electron will be observed near a location, the electron density. Electrons are identical particles because they cannot be distinguished from each other by their intrinsic physical properties. In quantum mechanics, this means that a pair of interacting electrons must be able to swap positions without an observable change to their condition. That is, the probability distribution for an identical pair must remain unchanged after they switch positions. The wavefunction describing such an interaction can either remain the same following a particle swap or it can change sign; mathematically, the square of −Ψ has the same probability density as the function with a positive sign. The sign-changing case is called an antisymmetric wavefunction and it is characteristic of all identical fermions, including electrons. Bosons, such as the photon, have symmetric wavefunctions.

In the case of antisymmetry, solutions of the wave equation for interacting electrons result in a zero probability that each pair will occupy the exact same location or state. This is responsible for the Pauli exclusion principle, which precludes any two electrons from occupying the same energy state. This principle explains many of the properties of electrons. For example, it causes groups of bound electrons to occupy different orbitals in an atom, rather than all overlapping each other in the same orbit.

### Virtual particles

Physicists believe that empty space may be continually creating pairs of virtual particles, such as a positron and electron, which rapidly annihilate each other shortly thereafter. The net energy from this reaction is zero. The combination of the energy variation needed to create these particles, and the time during which they exist, falls under the threshold of detectability expressed by the Heisenberg uncertainty relation, $\Delta E \cdot \Delta t \ge \hbar$. In effect, the energy needed to create these virtual particles, ΔE, can be "borrowed" from the vacuum for a period of time, Δt, such that their product is no more than the Dirac constant, $\hbar \approx 6.6 \times 10^{-16}$ eV·s. Thus, for a virtual electron, Δt is at most $\hbar/(m_e c^2) \approx 1.3 \times 10^{-21}$ s.

While an electron–positron virtual pair is in existence, the Coulomb force from the ambient electric field surrounding an electron causes a created positron to be attracted to the original electron, while a created electron experiences a repulsion. This causes the two charged virtual particles to physically separate for a brief period before merging back together, and during this period they behave like an electric dipole. The combined effect of many such pair creations is to partially shield the charge of the electron, a process called vacuum polarization. Thus the effective charge of an electron is actually smaller than its true value, and the charge increases with decreasing distance from the electron. This polarization was confirmed experimentally in 1997 using the Japanese TRISTAN particle accelerator.

A comparable shielding effect is seen for the mass of the electron. The equivalent rest energy consists of the mass-energy of the "bare" particle plus the energy of the surrounding electric field. In classical physics, the energy of the electric field is dependent upon the size of the charged object, which, for a dimensionless particle, results in an infinite energy.
Instead, because of vacuum fluctuations, allowance must be made for an electron–positron pair appearing in the electric field, with the positron annihilating the original electron and the formerly virtual electron becoming a real electron via the emitted photon. This interaction creates a negative energy imbalance that counteracts the radius-dependency of the electric field. The resulting total mass is referred to as the renormalized mass, because a technique called renormalization is used by physicists to relate the observed and bare mass of the electron.

The electron has an intrinsic angular momentum, or spin, of ½ (measured in units of ħ), and an intrinsic magnetic moment along its spin axis. The concept of a dimensionless particle possessing properties that, in classical electromagnetism, normally require a physical size is puzzling. A possible explanation lies in the formation of virtual photons in the electric field generated by the electron. The continual creation and absorption of these photons causes the electron to move about in a jittery fashion (known as zitterbewegung). As photons possess angular momentum, this jittering of the electron causes a net precession, which, on average, results in a circulatory motion of the mass and charge. In atoms, this creation of virtual photons is also responsible for the Lamb shift, which causes a small difference in electron energy between quantum states that otherwise ought to be identical.

The gyromagnetic ratio of an electron is the ratio of its magnetic moment to its angular momentum. Virtual particles and antiparticles provide a correction of just over 0.1% to the electron's gyromagnetic ratio, compared to the value of exactly 2 predicted by Paul Dirac's single-particle model. The extraordinarily precise agreement of this prediction with the experimentally determined value is viewed as one of the great achievements of modern physics.

### Interaction

Electrons are a key element in electromagnetism, a theory that is accurate for macroscopic systems, and for classical modeling of microscopic systems. An electron generates an electric field that exerts an attractive force on a particle with a positive charge, such as the proton, and a repulsive force on a particle with a negative charge. The strength of this force is determined by Coulomb's law. The Coulomb force between charged particles is mediated by photons, which are quanta of electromagnetic energy. However, an isolated electron that is not undergoing acceleration is unable to emit or absorb energy via a photon; doing so would violate conservation of energy and momentum. Instead, virtual photons can transfer momentum (but no net energy) between two charged particles. It is this exchange of virtual photons that generates the Coulomb force.

Energy emission can occur when a moving electron is deflected by a charged particle, such as a proton. The deceleration of the electron results in the emission of Bremsstrahlung radiation. The outcome of an elastic collision between a photon and a solitary electron is called Compton scattering. This collision results in a transfer of momentum between the particles, which modifies the wavelength of the photon by an amount called the Compton shift. The maximum magnitude of this wavelength shift is h/mc, which is known as the Compton wavelength. For an electron, it has a value of 2.43 × 10⁻¹² m. The relative strength of the electromagnetic interaction between two charged particles, such as an electron and a proton, is given by the fine structure constant.
This value is a dimensionless quantity formed by the ratio of two energies: the electrostatic energy of repulsion at a separation of one Compton wavelength, and the rest energy of the charge. It is given by α = (7.29720 ± 0.00003) × 10⁻³, which is approximately equal to 1/137. This constant appears frequently in the physics of atoms and in the theory of quantum electrodynamics.

When an electron is in motion, it generates a magnetic field. This magnetic field is related to the motion of one or more electrons (the "current") with respect to an observer by the Ampère–Maxwell law. As an example, it is this property of induction which supplies the magnetic field that drives an electric motor. The full electromagnetic effect of a moving charge can be derived mathematically using the Liénard–Wiechert potential, which is valid even when the particle's velocity is close to the speed of light (relativistic velocities).

When an electron is moving through a magnetic field, it is subject to the Lorentz force, which acts in a direction perpendicular to the plane defined by the magnetic field and the electron velocity. This causes the electron to follow a helical trajectory through the field at a radius called the gyroradius. The curving motion involves a centripetal acceleration, and this acceleration causes the electron to radiate energy. At relatively low velocities the energy emission in a magnetic field is called cyclotron radiation, while for electrons moving at relativistic velocities it is termed synchrotron radiation. The energy emission in turn causes a recoil of the electron, known as the Abraham–Lorentz–Dirac force, which creates a friction that slows the electron. This force is caused by a back-reaction of the electron's own field upon itself.

In the theory of electroweak interaction, the electron forms a weak isospin doublet with the electron neutrino. This means that during weak interactions, electron neutrinos behave like electrons. Either member of this doublet can undergo a charged current interaction by emitting or absorbing a W boson and be converted into the other member. Charge is conserved during this reaction because the W boson also carries a charge, cancelling out any net change during the transmutation. Charged current interactions are responsible for the phenomenon of beta decay in a radioactive atom. Both the electron and electron neutrino can also undergo a neutral current interaction via a Z⁰ boson exchange, and this is responsible for neutrino-electron elastic scattering.

When electrons and positrons collide, they annihilate each other, giving rise to two gamma-ray photons emitted at roughly 180° to each other. If the electron and positron have negligible momentum, each gamma ray will have an energy of 0.511 MeV. On the other hand, high-energy photons may transform into an electron and a positron by a process called pair production, but only in the presence of a nearby charged particle, such as a nucleus.

### Atoms

An electron can be bound to an atom by the attractive Coulomb force generated by the nucleus. The wave-like behavior of a bound electron is described by a function called an atomic orbital. An orbital consists of a set of quantum states that have a particular energy, and only a discrete set of these orbitals exists around the nucleus. Electrons can transfer between different orbitals by the emission or absorption of photons with an energy that matches the difference in energy between those orbitals.
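For a rough illustration of this matching condition, the Bohr model of hydrogen gives level energies of

$E_n = -\frac{13.6\ \text{eV}}{n^2}$

so a transition from $n = 2$ down to $n = 1$ emits a photon of energy $E_2 - E_1 = 13.6 \times (1 - \tfrac{1}{4}) \approx 10.2$ eV, and absorbing a photon of the same energy drives the reverse transition.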
In order to escape the atom, the energy of the electron must be increased above its binding energy. This occurs with the photoelectric effect, where an incident photon exceeding the atom's ionization energy is absorbed by the electron.

A bound electron has a quantized angular momentum determined by its orbital state; this is analogous to the angular momentum of an orbit in classical mechanics. Because the electron is charged, this produces a magnetic moment that is proportional to the angular momentum. The net magnetic moment of an atom is equal to the vector sum of all its component orbital and spin magnetic moments. Because of the Pauli exclusion principle, pairs of electrons in an atom align their spins in opposite directions, resulting in different spin quantum numbers. Thus the magnetic moments of an atom's paired electrons cancel each other out, and any net moment is due to unpaired electrons, typically the outermost ones. The nucleus also contributes a magnetic moment, but this is negligible compared to the effect from the electrons.

The chemical bond between atoms occurs as a result of electromagnetic interactions, as described by the laws of quantum electrodynamics. The strongest bonds are formed by the sharing or transfer of electrons between atoms. Within a molecule, these electrons move under the influence of the nuclei, and occupy molecular orbitals.

### Conductivity

A body has an electric charge when that object has more or fewer electrons than are required to balance the positive charge of the nuclei. (For a single atom or molecule, the object is termed an ion.) When there is an excess of electrons, the object is said to be negatively charged. When there are fewer electrons than protons, the object is said to be positively charged. When the number of electrons and the number of protons are equal, their charges cancel each other and the object is said to be electrically neutral. A macroscopic body can develop an electric charge through rubbing, by the phenomenon of triboelectricity.

Electrons moving freely in vacuum, space or certain media are free electrons. When free electrons move, there is a net flow of charge called an electric current. A current of electrons acquires the cumulative electromagnetic properties of the individual particles, so it generates a magnetic field. Likewise, a current can be created by a changing magnetic field. These interactions are described mathematically by Maxwell's equations.

At a given temperature, each material has a level of electrical conductivity that determines the electric current when an electric potential is applied. Examples of good conductors include metals such as copper and gold. A material with metallic bonds has an electronic band structure that allows for delocalized electrons. These electrons are not associated with specific atoms, so when an electric field is applied, they are free to move like a gas through the material. However, unlike an atmospheric gas (which follows the Maxwell–Boltzmann distribution of energies), the states of this cloud of electrons obey Fermi–Dirac statistics; hence the electron's family name, fermions. Because of collisions between electrons and atoms, the drift velocity of electrons in a conductor is on the order of millimetres per second. However, the speed at which a current at one point in the material causes a current in other parts of the material, the velocity of propagation, is typically about 75% of light speed.
This occurs because electrical signals propagate as a wave, with the velocity dependent on the dielectric constant of the material. In dielectric materials, the electrons remain bound to their respective atoms and the material behaves as an insulator. Semiconductors have a variable level of conductivity that lies between the extremes of conduction and insulation. In superconductors, pairs of electrons move as Cooper pairs, in which their motion is coupled to nearby matter via lattice vibrations called phonons. The two electrons of a Cooper pair can be separated by roughly 100 nm.

### Motion and energy

The speed of an electron can approach, but never reach, the speed of light in a vacuum, c. This limitation is attributed to Einstein's theory of special relativity, which defines the speed of light as a constant regardless of the relative velocity of observers (their inertial frames). However, when relativistic electrons are injected into a dielectric medium such as water, where the local speed of light is significantly less than c, the electrons temporarily travel faster than light in the medium. As they interact with the medium, they generate a faint bluish light called Cherenkov radiation.

The effects of special relativity are based on a quantity known as the Lorentz factor γ, which is a function of the velocity v of the particle compared to c. The kinetic energy $K_e$ of an electron moving with velocity v is:

$K_e = (\gamma - 1) m_e c^2$

where $m_e$ is the electron mass. For example, the Stanford linear accelerator can accelerate an electron to roughly 51 GeV. This gives a value of about 100,000 for γ, since the mass of an electron is 0.511 MeV/c². The relativistic momentum of this electron is 100,000 times the classical momentum of an electron at the same speed.

Since an electron behaves as a wave, at a given velocity it has a characteristic de Broglie wavelength. This is given by $\lambda_e = h/p$ where h is Planck's constant and p is the momentum. At energies of just a few electron volts this wavelength determines the size of atoms, while at thousands of electron volts this results in the Bragg angles for electron diffraction. (J. J. Thomson's son G. P. Thomson discovered this angle to be much smaller than one degree.) For the 51 GeV electron above, the proper velocity is approximately γc, making the wavelength of those electrons small enough to explore structures well below the size of an atomic nucleus.

## Production

The big bang theory is the accepted scientific theory to explain the early stages in the evolution of the Universe. For the first millisecond of the big bang, the temperatures were over 10 billion K and photons had mean energies over a million electron volts. These photons were sufficiently energetic that they could react with each other to form pairs of electrons and positrons,

$\gamma + \gamma \leftrightharpoons e^{+} + e^{-}$

where γ is a photon, e⁺ is a positron and e⁻ is an electron. Likewise, positron-electron pairs annihilated each other, emitting photons of gamma rays with energies of 511 keV. An equilibrium between electrons, positrons and photons was maintained during this creation and destruction cycle. After 15 seconds had passed, however, the temperature of the universe dropped below the threshold where electron-positron formation could occur.
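As a rough order-of-magnitude estimate of that threshold (ignoring numerical factors from the photon energy distribution), pair creation requires thermal energies comparable to the electron rest energy:

$k_B T \sim m_e c^2 = 0.511\ \text{MeV} \quad \Longrightarrow \quad T \sim \frac{0.511 \times 10^{6}\ \text{eV}}{8.617 \times 10^{-5}\ \text{eV/K}} \approx 5.9 \times 10^{9}\ \text{K}$

which is consistent with the figure of 10 billion K quoted above.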
Most of the surviving electrons and positrons annihilated each other, releasing gamma radiation that briefly reheated the universe. For reasons that remain uncertain, there was a slight excess in the number of electrons over positrons, a situation known as baryon asymmetry. Hence a few electrons survived the annihilation process. This excess also matched the excess of protons over anti-protons, resulting in a net charge of zero for the universe. The surviving protons and neutrons began to undergo nucleosynthesis, forming isotopes of hydrogen and helium, with trace amounts of lithium. This process peaked after a few hundred seconds, and any leftover neutrons thereafter underwent negative beta decay with a half-life of about a thousand seconds, releasing a proton and electron in the process,

$n \Rightarrow p + e^{-} + \bar{\nu}_e$

where n is a neutron, p is a proton, e⁻ is an electron and $\bar{\nu}_e$ is an electron antineutrino. For roughly the next 300,000–400,000 years, the excess electrons remained too energetic to bind with atomic nuclei. Once atoms were formed, the universe became transparent to radiation and it continued to cool and expand.

The concentrations of mass in the universe allowed stars to form. Within a star, stellar nucleosynthesis results in the production of positrons from the fusion of atomic nuclei. These antimatter particles immediately annihilate with electrons, releasing gamma rays. The net result is a steady reduction in the number of electrons, and a matching increase in the number of neutrons. However, the process of stellar evolution can also result in the synthesis of radioactive isotopes. Some of these isotopes can subsequently undergo negative beta decay, emitting an electron and antineutrino from the nucleus. An example is the cobalt-60 (⁶⁰Co) isotope, which decays to form nickel-60 (⁶⁰Ni).

Electrons (and positrons) are also thought to be created at the event horizon of a black hole. According to classical physics, these massive stellar objects exert a gravitational attraction that is strong enough to prevent anything, including radiation, from escaping past the Schwarzschild radius. However, it is believed that quantum mechanical effects may allow Hawking radiation to be emitted at this distance. When pairs of virtual particles (such as an electron and positron) are created just inside the event horizon, the random spatial distribution of these particles may permit one of them to appear on the exterior, a process called quantum tunneling. The gravitational potential of the black hole can then supply the energy that transforms this virtual particle into a real particle, allowing it to radiate away into space. In exchange, the other member of the pair is given negative energy, which results in a net loss of mass-energy by the black hole. The rate of Hawking radiation increases with decreasing mass, eventually causing the black hole to evaporate away until, finally, it explodes.

Cosmic rays are particles travelling through space with high energies. Energies as high as 3.0 × 10²⁰ eV have been recorded. When these particles collide with nucleons in the Earth's atmosphere, a shower of particles is generated, including pions. More than half of the cosmic radiation observed from the Earth's surface consists of muons. This particle is a lepton which is produced in the upper atmosphere by the decay of pions.
Muons in turn can decay to form an electron or positron by means of the weak force. Thus, for the negatively charged pion $\pi^{-}$,

$\pi^{-} \Rightarrow \mu^{-} + \bar{\nu}_{\mu}$

$\mu^{-} \Rightarrow e^{-} + \bar{\nu}_e + \nu_{\mu}$

where $\mu^{-}$ is a muon, $\nu_{\mu}$ is a muon neutrino, $\bar{\nu}_{\mu}$ is a muon antineutrino and $\bar{\nu}_e$ is an electron antineutrino.

## Observation

In laboratory conditions, the interactions of individual electrons can be observed by means of particle detectors, which allow measurement of specific properties such as energy, spin and charge. More distant observation of electrons requires the detection of their radiated energy. For example, in high energy environments such as the corona of a star, free electrons form a plasma that radiates energy due to Bremsstrahlung. Electron gas can also undergo plasma oscillations, which are waves caused by synchronized variations in electron density, and these produce energy emissions that can be detected using radio telescopes.

The development of the Paul trap and Penning trap allows charged particles to be contained within a small region for long durations. This allows very precise measurements to be made of the particle properties. For example, in one instance a Penning trap was used to contain a single electron for a period of 10 months. Measurements allowed the dimensionless g-factor of the electron to be measured to a precision of eleven digits, which, in 1980, was a greater accuracy than for any other physical constant.

The first video images of an electron were captured by a team at Lund University in Sweden in February 2008. To capture this event, the scientists used extremely short flashes of light. To produce this light, newly developed technology for generating short pulses from intense laser light, called attosecond pulses, allowed the team at the university's Faculty of Engineering to capture the electron's motion for the first time. "It takes about 150 attoseconds for an electron to circle the nucleus of an atom. An attosecond is related to a second as a second is related to the age of the universe," explained Johan Mauritsson, an assistant professor in atomic physics at the Faculty of Engineering, Lund University.

The distribution of the electrons in solid materials can be visualized by angle-resolved photoemission spectroscopy (ARPES). This technique uses the photoelectric effect to measure reciprocal space, a mathematical representation of periodic structures that can be used to infer the original structure. ARPES can be used to determine the direction, speed and scattering of electrons within the material.

## Applications

At some level, virtually every developed technology depends upon electrons. The chemical industry is based upon the chemical properties of atoms, which in turn depend on the interaction of bound electrons. Thus the thermodynamic properties for the solid, liquid and gaseous phases of matter are all decided by the interactions of electrons in atoms. In the electronics industry, electrical devices rely on the flow of electrons. Even technology that generates electromagnetic radiation, such as lasers, depends upon the electron. There are also certain specialized applications that primarily use free electrons.
### Industry

Electron beams are used in welding, lithography, scanning electron microscopes and transmission electron microscopes. Low-energy electron diffraction (LEED) and reflection high-energy electron diffraction (RHEED) are surface-imaging techniques that use electrons. Electrons are also at the heart of cathode ray tubes, which are used extensively as display devices in laboratory instruments, computer monitors and television sets. In a photomultiplier tube, one photon strikes the photocathode, initiating an avalanche of electrons that produces a detectable current.

### Laboratory

The uniquely high charge-to-mass ratio of electrons means that they interact strongly with atoms, and are easy to accelerate and focus with electric and magnetic fields. Hence some of today's aberration-corrected transmission electron microscopes use 300 keV electrons with velocities greater than the speed light travels in water (approximately 1/2 to 2/3 of c), wavelengths below 2 picometers, transverse coherence widths over a nanometer, and longitudinal coherence widths 100 times that. This allows such microscopes to image scattering from individual atomic nuclei (high-angle annular dark field imaging, HAADF) as well as interference contrast from the de Broglie phase at the exit surface of a solid specimen (HRTEM), with lateral point resolutions down to 60 picometers. Magnifications approaching 100 million are needed to make the resulting image detail comfortably visible to the naked eye.

Quantum effects of electrons are also used in the scanning tunneling microscope to study features on solid surfaces with lateral resolution at the atomic scale (around 200 picometers) and vertical resolutions much better than that. In such microscopes, the quantum tunneling is strongly dependent on the tip-specimen separation, and precise control of the separation (vertical sensitivity) is made possible with a piezoelectric scanner.

### Medicine

In radiation therapy, electron beams are used for treatment of superficial tumours.

## External links

• The Discovery of the Electron from the American Institute of Physics History Center
• Particle Data Group
• Eric Weisstein's World of Physics: Electron
• Researchers Catch Motion of a Single Electron on Video
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 11, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9352117776870728, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/34016/what-are-the-k-rational-points-of-kt/34084
What are the k-rational points of k[[t]]?

Let $k$ be a field. What are the $k$-rational points of the affine $k$-scheme $\mathrm{Spec}(k[[t]])$, where $k[[t]]$ is the power series ring over $k$ (equivalently, what are the $k$-algebra morphisms $k[[t]] \rightarrow k$)? I'm only sure about one point, namely the map $t \mapsto 0$. Do I have to assume some sort of completeness of $k$ to get more points? Is there a nice presentation of $k[[t]]$, i.e. a quotient of some polynomial ring that is isomorphic to $k[[t]]$?

- Okay, a question in another direction: Why do people consider deformations parametrized by $k[[t]]$? Parametrizing over $k[t]$ makes perfect sense to me; I have a fiber over any $\alpha \in k$. But in the case of $k[[t]]$? – Georg S. Jul 31 2010 at 13:24

- Perhaps you should post this comment as a separate question. I know there are times when you can embed a complete DVR with residue field $k$ into a field $K$ with nicer properties than $k$ (e.g., characteristic zero rather than characteristic $p$), but there are probably much better answers out there. – Charles Staats Jul 31 2010 at 15:19

- Well, it is easier to give deformations over $k[[t]]$ because its spectrum is small compared to $k[t]$. The ring $k[[t]]$ can be written as a projective limit over $k[t]/(t^n)$ and those are local artin algebras, i.e. their spectra are just points with some tangent vectors attached. Now, you can use deformation theory (in the sense of Schlessinger) to produce deformations over artin algebras. If you have a system of deformations (say over each $k[t]/(t^n)$) then there are techniques (like Grothendieck's existence theorem) which sometimes allow you to pass to a family over $k[[t]]$. – Holger Partsch Jul 31 2010 at 15:20

- Dear Georg, Regarding your question about deformations: the topic you are (implicitly) asking about is whether deformations can be algebraized. It would be easier to answer if you posted it as a separate question (and there are several people on MO who could give you good answers about it). Here I will just say that if C is any smooth curve over k, and you had a family over C, then looking at the formal n.h. of a point will give you something over k[[t]]. In other words, k[t] is not the unique way of algebraizing k[[t]]; any smooth curve will do. Thus you shouldn't prejudge the ... – Emerton Aug 1 2010 at 4:24

- ... situation and expect to have a family over k[t], just because there is one over k[[t]]. For example, if you look at families of elliptic curves with an 11-torsion point, there is no interesting such family over an affine line, but there is an interesting such family over a (several times punctured) elliptic curve. In any event, people are very often interested in algebraic families of the type you are wondering about (this is the study of moduli problems), but computing formal deformations is typically much easier, and an important first step even if the moduli space is your goal. – Emerton Aug 1 2010 at 4:28

2 Answers

$k[[t]]$ is a local ring with maximal ideal $(t)$ and the kernel of every $k$-homomorphism $k[[t]] \to k$ is a maximal ideal, thus the maximal ideal. Thus it factors as $k[[t]] \to k[[t]]/(t) = k \to k$ and $t \mapsto 0$ is the unique $k$-rational point.

- Why is the kernel maximal? (This is probably obvious...) – Georg S. Jul 31 2010 at 13:26

- Every $k$-homomorphism to $k$ is surjective.
This also shows: Every $k$-rational point of a $k$-scheme is closed. – Martin Brandenburg Jul 31 2010 at 13:31

- Ah, of course! Thanks. Can you also give me a hint concerning my deformation question above? – Georg S. Jul 31 2010 at 13:34

Perhaps one answer to your question about deformations is something like the following. A deformation over a complete local ring A (such as k[[t]]) is just a family X $\to$ Spec(A). Suppose that the fibers belong to some sort of moduli space M, such as the moduli space of curves. In the functorial point of view of moduli spaces, the family X $\to$ Spec(A) corresponds to a morphism Spec(A) $\to$ M that assigns to a point of Spec(A) the moduli of the fiber over this point. So, one-parameter formal deformations (by this I just mean that A = k[[t]]) correspond precisely to the morphisms Spec(k[[t]]) $\to$ M.

The scheme Hom(Spec(k[[t]]), M) is called the space of arcs in M. If we fix the central fiber of the deformation then we get the space of arcs in M at the point corresponding to the central fiber. The space of arcs is a subtle and important invariant of a singularity. One can think of an arc (that is, a morphism Spec(k[[t]]) $\to$ M) as follows: if we had a curve in M then the arc would be the collection of jets this curve determines, i.e. all the derivatives of all orders of the curve (think of the way morphisms $\mathrm{Spec}(k[t]/(t^2)) \to M$ determine the tangent vectors at the image of the closed point). So these deformations are telling us something significant about the local structure of the moduli space.

The construction of these one-parameter formal deformations works regardless of the existence of any moduli space. It tells us what the space of arcs on the moduli space of whatever it is you are deforming should be.
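As a concrete illustration of the jet remark, take the simplest possible target, $M = \mathbb{A}^1_k = \mathrm{Spec}\, k[x]$: a morphism $\mathrm{Spec}(k[t]/(t^2)) \to \mathbb{A}^1_k$ is a $k$-algebra map $k[x] \to k[t]/(t^2)$, $x \mapsto a + bt$, i.e. a point $a$ together with a tangent vector $b$, while an arc $\mathrm{Spec}(k[[t]]) \to \mathbb{A}^1_k$ is a map $x \mapsto a_0 + a_1 t + a_2 t^2 + \cdots$, recording "derivatives" of all orders at once.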
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 43, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9284840822219849, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/177704/taking-the-partial-derivative-of-benedict-webb-rubine-equation
# Taking The Partial Derivative Of The Benedict–Webb–Rubin Equation

How do I begin taking the partial derivative of the below equation with respect to T?$$p = \frac{RT}{V}+\left(B_0 RT-A_0-\frac{C_0}{T^2}\right)\frac{1}{V^2}+(bRT-a)\frac{1}{V^3}+\frac{a\alpha}{V^6}+\frac{c\left(1+\frac{\gamma}{V^2}\right)}{T^2}\left(\frac{1}{V^3}\right)e^\frac{-\gamma}{V^2}$$ I need to show that $$\frac{\partial^2p}{\partial T^2} = \frac{6}{V^2T^4}\left({\frac{c}{V}\left(1+\frac{\gamma}{V^2}\right) e^\frac{-\gamma}{V^2}-C_0}\right)$$ Thank you.

## 1 Answer

You hold all variables other than $T$ fixed, and then differentiate with respect to $T$. To see that taking the first partial is easy, note that your expression has the form $$p=a_1 T +a_2 T+a_3+a_4T^{-2}+a_5 T+a_6+a_7 T^{-2}$$ where $\qquad a_1={R\over V}$, $\qquad a_2={B_0R\over V^2}$, $\qquad a_3={-A_0\over V^2}$, $\qquad a_4={-C_0\over V^2}$, $\qquad a_5= {bR\over V^3}$, $\qquad a_6={-a\over V^3}+{a\alpha\over V^6}$, and $\qquad a_7={c\over V^3}\left(1+{\gamma\over V^2}\right) e^{-\gamma/V^2}$. Note that none of the $a_i$ depend on $T$; thus, when taking the first partial of $p$, we are differentiating a sum whose terms are multiples of powers of $T$. We have $$\tag{1}\eqalign{ {\partial p\over\partial T} &=a_1+a_2+0-2a_4T^{-3}+a_5+0-2a_7T^{-3}\cr &=a_1+a_2+a_5-2a_4T^{-3}-2a_7T^{-3} }$$ Now take the partial of $(1)$ with respect to $T$ to obtain the second partial of $p$: $$\eqalign{ {\partial^2 p\over\partial T^2} &={\partial\over\partial T}\bigl(a_1+a_2+a_5-2a_4T^{-3}-2a_7T^{-3}\bigr)\cr &=6a_4T^{-4}+6a_7T^{-4}\cr &=6\cdot{-C_0\over V^2}\cdot{1\over T^4}+6\cdot{c\left(1+{\gamma\over V^2}\right)}\cdot{1\over V^3}\cdot e^{-\gamma/V^2}\cdot{1\over T^4}\cr &={6\over V^2T^4}\Bigl( {c\over V}\Bigl(1+{\gamma\over V^2}\Bigr) e^{-\gamma/V^2} -C_0\Bigr). }$$ (Note that the question as originally posted wrote $A\alpha$ and $C$ where the standard Benedict–Webb–Rubin equation, and this derivation, use $a\alpha$ and $c$; the symbols have been made consistent above.)
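As a quick sanity check, the whole computation can be verified symbolically. A sketch using sympy (all symbols other than $T$ are treated as constants; the variable names are my own):

```python
import sympy as sp

# all symbols except T are held fixed when taking partials in T
T, V, R, B0, A0, C0, a, b, alpha, gamma, c = sp.symbols(
    'T V R B_0 A_0 C_0 a b alpha gamma c', positive=True)

# the Benedict-Webb-Rubin pressure as written above
p = (R*T/V
     + (B0*R*T - A0 - C0/T**2)/V**2
     + (b*R*T - a)/V**3
     + a*alpha/V**6
     + c*(1 + gamma/V**2)/(T**2*V**3)*sp.exp(-gamma/V**2))

d2p = sp.diff(p, T, 2)  # second partial derivative with respect to T

target = 6/(V**2*T**4)*(c/V*(1 + gamma/V**2)*sp.exp(-gamma/V**2) - C0)

print(sp.simplify(d2p - target))  # prints 0, confirming the identity
```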
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9140210747718811, "perplexity_flag": "head"}
http://mathoverflow.net/questions/121735/recovering-an-abelian-category-out-of-its-derived-category
## Recovering an abelian category out of its derived category

I'm trying to learn more about derived category stuff and my curiosity has made me ask these questions. Sorry if I'm being sloppy, I'm a new learner. In Wikipedia it has been stated that since different abelian categories can give rise to equivalent derived categories, it is impossible to reconstruct $\mathcal{A}$ (an abelian category) from its derived category $D(\mathcal{A}).$

1. To what extent is it possible to recover $\mathcal{A}$? What would be the framework for studying such questions?

2. Is there any notion of a moduli space of t-structures on the derived category of a given abelian category?

## 2 Answers

The derived category of an abelian category has a t-structure, so obviously that's the first thing you want. To a t-structure corresponds a heart, which is an abelian category, whose derived category might be different than the one you started with. To further complicate matters, as you noted, you could have a heart whose derived category is actually equivalent to the one you started with, but the heart itself is not the original abelian category. It's not clear what you can recover from a triangulated category alone, or even from a triangulated category + t-structure.

If you like algebraic geometry and are willing to consider additional structures then you can recover the abelian category. The derived category of a scheme, considered as a monoidal category (coming from tensor products), recovers the original scheme (and thus the abelian category). The same is true if you start with a variety with an ample canonical bundle; then the category plus the bundle do recover the variety. Somehow this flexibility of derived categories is a nice feature, as it gives rise to interesting (and hidden?) "symmetries" and "relationships" between spaces.

As per the second question I can only think of what Sasha said, that is stability conditions. Given a stability condition one automatically gets a heart of a t-structure (which again may not have anything to do with the original abelian category) and the slices of the stability condition may be seen as a continuous family of t-structures. It would indeed be really nice to have such a thing as a moduli space of t-structures!

- Dear Jacob, you can recover a variety with ample or anti-ample canonical bundle from its derived category alone, you don't need the canonical bundle. – Piotr Achinger Feb 13 at 22:55

- To be precise, the varieties should be smooth and projective (Serre duality is used); and the reference for this is Bondal-Orlov, "Reconstruction of a variety from the derived category...". By the way, this uses only the graded structure, not the triangulated structure at all (!). For reconstructing a (Noetherian) scheme from the derived category considered as a monoidal triangulated category, see Balmer, "Presheaves of triangulated categories and reconstruction of schemes". – Adeel Feb 13 at 23:54

- dear Piotr, thanks for the correction, I always get the Bondal-Orlov stuff wrong! – Jacob Bell Feb 14 at 0:37

- Dear Jacob, thank you, those algebraic geometry examples are indeed very good motivations. – Ehsan M. Kermani Feb 14 at 5:12
There is a notion of "Bridgeland stability condition" which includes a t-structure. Those have a reasonable moduli space. See papers of Bridgeland for details.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9335025548934937, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/tagged/centrifugal-force+orbital-motion
# Tagged Questions

### Condition for circular orbit (1 answer, 150 views)

I am a little confused about the condition for circular orbit. Goldstein's Classical Mechanics has the condition for circular orbit as $$f'=0\tag1$$ where $f'$ is the effective force. I understand ...

### Increasing mass' effect on the balance between centripetal force and centrifugal force (4 answers, 624 views)

Okay, this is nothing more than a thought experiment which popped into my head while driving home from work today. Take the case of a single body orbiting another, larger body, as in a planet and a ...

### Why don't astronauts in orbit get stuck to the "ceiling"? (3 answers, 324 views)

When a shuttle is in orbit, it is essentially rotating around the "centre" of the Earth at a great speed. So why does there seem to be no centrifugal force sticking them to the 'ceiling' of the ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9405093193054199, "perplexity_flag": "middle"}
http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.hmj/1333113006
### Paperfolding sequences, paperfolding curves and local isomorphism

Francis Oger

Source: Hiroshima Math. J. Volume 42, Number 1 (2012), 37-75.

#### Abstract

For each integer $n$, an $n$-folding curve is obtained by folding a strip of paper in two $n$ times, possibly up or down, and unfolding it with right angles. Generalizing the usual notion of an infinite folding curve, we define complete folding curves as the curves without endpoint which are unions of increasing sequences of $n$-folding curves for $n$ an integer. We prove that there exists a standard way to extend any complete folding curve into a covering of $R^2$ by disjoint such curves, which satisfies the local isomorphism property introduced to investigate aperiodic tiling systems. This covering contains at most six curves.

Primary Subjects: 05B45
Secondary Subjects: 52C20, 52C23
Full-text: Open access
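To make the folding construction concrete, here is a small sketch (my own illustration, not from the paper) of how the crease sequence of an $n$-folding curve can be generated; each fold may be taken up or down, as the abstract describes:

```python
def fold(creases, up=1):
    """One more fold of the strip: the old creases reappear, followed by
    the new crease, followed by the old creases reversed and flipped."""
    return creases + [up] + [1 - c for c in reversed(creases)]

# a 4-folding curve, always folding the same way (the "regular" sequence)
creases = []
for _ in range(4):
    creases = fold(creases)
print(creases)  # [1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0]
```

Unfolding with right angles then turns each crease into a left or right turn, tracing the curve in the plane.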
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8893528580665588, "perplexity_flag": "middle"}
http://dwave.wordpress.com/category/quantum-computer-programming/
# Sparse coding on D-Wave hardware: structured dictionaries

Posted on April 29, 2013

The underlying problem we saw last time, that prevented us from using the hardware to compete with tabu on the cloud, was the mismatch between the connectivity of the problems sparse coding generates (which are fully connected) and the connectivity of the hardware.

The source of this mismatch is the quadratic term in the objective function, which is of the form $2 \sum_{j < m}^K w_j w_m \vec{d}_j \cdot \vec{d}_m$. The coupling terms are proportional to the dot product of the dictionary atoms.

Here's an idea. What if we demand that $\vec{d}_j \cdot \vec{d}_m$ has to be zero for all pairs of variables $j$ and $m$ that are not connected in hardware? If we can achieve this structure in the dictionary, we get a very interesting result. Instead of being fully connected, the QUBOs with this restriction can be engineered to exactly match the underlying problem the hardware solves. If we can do this, we get closer to using the full power of the hardware.

## L0-norm sparse coding with structured dictionaries

Here is the idea. Given

1. A set of $S$ data objects $\vec{z}_s$, where each $\vec{z}_s$ is a real valued vector with $N$ components;
2. An $N \times K$ real valued matrix $\hat{D}$, where $K$ is the number of dictionary atoms we choose, and we define its $k^{th}$ column to be the vector $\vec{d}_k$;
3. A $K \times S$ binary valued matrix $\hat{W}$;
4. And a real number $\lambda$, which is called the regularization parameter,

find $\hat{W}$ and $\hat{D}$ that minimize

$G(\hat{W}, \hat{D} ; \lambda) = \sum_{s=1}^S || \vec{z}_{s} - \sum_{j=1}^{K} w_{js} \vec{d}_j ||^2 + \lambda \sum_{s=1}^S \sum_{j=1}^{K} w_{js}$

subject to the constraints that $\vec{d}_j \cdot \vec{d}_m = 0$ for all pairs $j, m$ that are not connected in the quantum chip being used.

The only difference here from what we did before is the last sentence, where we add a set of constraints on the dictionary atoms.

## Solving the sparse coding problem using block coordinate descent

We're going to use the same strategy for solving this as before, with a slight change. Here is the strategy we'll use.

1. First, we generate a random dictionary $\hat{D}$, subject to meeting the orthogonality constraints we've imposed on the dictionary atoms.
2. Assuming this fixed dictionary, we solve the optimization problem for the weights $\hat{W}$. These optimization problems are now Chimera-structured QUBOs that fit exactly onto the hardware by construction.
3. Now we fix the weights to these values, and find the optimal dictionary $\hat{D}$, again subject to our constraints.

We then iterate steps 2 and 3 until $G$ converges to a minimum.

Now we're in a different regime than before: step 2 requires the solution of a large number of Chimera-structured QUBOs, not fully connected QUBOs. So that makes those problems better fits to the hardware. But now we have to do some new things to allow for both steps 1 and 3, and these initial steps have some cost.

The first of these is not too hard, and introduces a key concept we'll use for Step 3 (which is harder). In this post I'll go over how to do Step 1.

## Step 1: Setting up an initial random dictionary that obeys our constraints

Alright, so the first step we need to do is to figure out under what conditions we can achieve Step 1. There is a very interesting result in a paper called Orthogonal Representations and Connectivity of Graphs. Here is a short explanation of the result.

Imagine you have a graph on $V$ vertices.
In that graph, each vertex is connected to a bunch of others. Call $p$ the connectivity of the least connected vertex in the graph. Then this paper proves that you can define a set of real vectors in dimension $V - p$ where non-adjacent nodes in the graph can be assigned orthogonal vectors. So what we want to do, find a random dictionary $\hat{D}$ such that $\vec{d}_j \cdot \vec{d}_m = 0$ for all $j, m$ not connected in hardware, can be done if the dimension of the vectors $\vec{d}$ is at least $V - p$.

For Vesuvius, the number $V$ is 512, and the lowest connectivity node in a Chimera graph is $p = 5$. So as long as the dimension of the dictionary atoms is at least 512 - 5 = 507, we can always perform Step 1.

Here is a little more color on this very interesting result. Imagine you have to come up with two vectors $\vec{g}$ and $\vec{h}$ that are orthogonal (the dot product $\vec{g} \cdot \vec{h}$ is zero). What's the minimum dimension these vectors have to live in such that this can be done? Well, imagine that they both live in one dimension, so they are just numbers on a line. Then clearly you can't do it. However if you have two dimensions, you can. Here's an example: $\vec{g} = \hat{x}$ and $\vec{h} = \hat{y}$. If you have more than two dimensions, you can also, and the choices you make in this case are not unique. More generally, if you ask the question "how many orthogonal vectors can I draw in a $V$-dimensional space?", the answer is $V$: one vector per dimension.

So that is a key piece of the above result. If we had a graph with $V$ vertices where NONE of the vertices were connected to any others (minimum vertex connectivity $p = 0$), and want to assign vectors to each vertex such that all of these vectors are orthogonal to all the others, that's equivalent to asking "given a $V$-dimensional space, what's the minimum dimension of a set of vectors such that they are all orthogonal to each other?", and the answer is $V$.

Now imagine we start drawing edges between some of the vertices in the graph, and we don't require that the vectors living on these vertices be orthogonal. Conceptually you can think of this as relaxing some constraints, and making it 'easier' to find the desired set of vectors, so the minimum dimension of the vectors required so that this will work is reduced as the graph gets more connected. The fascinating result here is the very simple way this works. Just find the lowest connectivity node in the graph, call its connectivity $p$, and then ask "given a graph on $V$ vertices, where the minimum connectivity vertex has connectivity $p$, what's the minimum dimension of a set of vectors such that non-connected vertices in the graph are all assigned orthogonal vectors?". The answer is $V - p$.

(Null Space is also an ASCII-based adventure game: https://students.digipen.edu/~tbrosman/null_space_download.html .)

Now just knowing we can do it isn't enough. But thankfully it's not hard to think of a constructive procedure to do this. Here is one:

1. Generate a matrix $\hat{D}$ where all entries are random numbers between +1 and -1.
2. Renormalize each column such that each column's norm is one.
3. For each column in $\hat{D}$, from the leftmost to the rightmost in order, compute the null space of the already-processed columns that it is constrained to be orthogonal to, and then replace that column with a random column written in the null space basis.

If you do this you will get an initial random dictionary satisfying the orthogonality constraints, as required in our new procedure.
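To make this concrete, here is a minimal sketch of the construction (my own illustration; the function name and the encoding of the constraints as a set of index pairs are assumptions, not from the original post). It uses scipy.linalg.null_space, which was added to scipy after this post was written; the post's own null-space helper below does the same job:

```python
import numpy as np
from scipy.linalg import null_space

def random_structured_dictionary(N, K, orthogonal_pairs):
    """Draw a random N x K dictionary D with D[:, m] . D[:, j] = 0 for
    every pair (m, j), m < j, in orthogonal_pairs (the pairs of variables
    not connected in hardware). Assumes N >= V - p, so the null spaces
    below are never empty."""
    D = 2.0 * np.random.rand(N, K) - 1.0    # step 1: entries in [-1, +1]
    D /= np.linalg.norm(D, axis=0)          # step 2: unit-norm columns
    for j in range(K):                      # step 3: left to right
        # already-processed columns that column j must be orthogonal to
        prior = [m for m in range(j) if (m, j) in orthogonal_pairs]
        if not prior:
            continue
        basis = null_space(D[:, prior].T)   # basis orthogonal to all of them
        coeffs = 2.0 * np.random.rand(basis.shape[1]) - 1.0
        column = basis @ coeffs             # random vector in that subspace
        D[:, j] = column / np.linalg.norm(column)
    return D
```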
By the way, here is some Python code for computing a null space basis for a matrix $\hat{A}$. It's easy, but there isn't a native function in numpy or scipy that does it.

```python
import numpy
from scipy.linalg import qr

def nullspace_qr(A):
    # QR-decompose A.T: the first m columns of Q span the row space of A
    # (m = number of rows of A), so the remaining columns of Q form an
    # orthonormal basis of the null space, assuming A has full row rank.
    A = numpy.atleast_2d(A)
    Q, R = qr(A.T)
    ns = Q[:, R.shape[1]:].conj()
    return ns
```

OK so step 1 wasn't too bad! Now we have to deal with step 3. This is a harder problem, which I'll tackle in the next post.

# Sparse coding on D-Wave hardware: things that don't work

Posted on April 17, 2013

(Photo caption: Ice, ice baby.)

For Christmas this year, my dad bought me a book called Endurance: Shackleton's Incredible Voyage, by Alfred Lansing. It is a true story about folks who survive incredible hardship for a long time. You should read it. Shackleton's family motto was Fortitudine Vincimus, "by endurance we conquer". I like this a lot.

On April 22nd, we celebrate the 14th anniversary of the incorporation of D-Wave. Over these past 14 years, nearly everything we've tried hasn't worked. While we haven't had to eat penguin (yet), and to my knowledge no amputations have been necessary, it hasn't been a walk in the park. The first ten things you think of always turn out to be dead ends or won't work for some reason or other. Here I'm going to share an example of this with the sparse coding problem by describing two things we tried that didn't work, and why.

## Where we got to last time

In the last post, we boiled down the hardness of L0-norm sparse coding to the solution of a large number of QUBOs of the form

Find $\vec{w}$ that minimizes

$G(\vec{w}; \lambda) = \sum_{j=1}^{K} w_j [ \lambda + \vec{d}_j \cdot (\vec{d}_j -2 \vec{z}) ] + 2 \sum_{j < m}^K w_j w_m \vec{d}_j \cdot \vec{d}_m$

I then showed that using this form has advantages (at least for getting a maximally sparse encoding of MNIST) over the more typical L1-norm version of sparse coding. I also mentioned that we used a variant of tabu search to solve these QUBOs. Here I'm going to outline two strategies we tried to use the hardware to beat tabu that ended up not working.

## These QUBOs are fully connected, and the hardware isn't

The terms in the QUBO that connect variables $j$ and $m$ are proportional to the dot product of the $j^{th}$ and $m^{th}$ dictionary atoms $\vec{d}_j$ and $\vec{d}_m$. Because we haven't added any restrictions on what these atoms need to look like, these dot products can all be non-zero (the dictionary atoms don't need to be, and in general won't be, orthogonal). This means that the problems generated by the procedure are all fully connected: each variable is influenced by every other variable.

Unfortunately, when you build a physical quantum computing chip, this full connectivity can't be achieved. The chip you get to work with connects any given variable with only a small number of other variables. There are two ways we know of to get around the mismatch between the connectivity of a problem we want to solve and the connectivity of the hardware. The first is called embedding, and the second is by using the hardware to perform a type of large neighborhood local search as a component of a hybrid algorithm we call BlackBox.

## Solving problems by embedding

In a quantum computer, qubits are physically connected to only some of the other qubits. In the most recent spin of our design, each qubit is connected to at most 6 other qubits in a specific pattern which we call a Chimera graph. In our first product chip, Rainier, there were 128 qubits. In the current processor, Vesuvius, there are 512.
Chimera graphs are a way to use a regular repeating pattern to tile out a processor. In Rainier, the processor graph was a four by four tiling of an eight qubit unit cell. For Vesuvius, the same unit cell was used, but with an eight by eight tiling.

For a detailed overview of the rationale behind embedding, and how it works in practice for Chimera graphs, see here and here, which discuss embedding into the 128-qubit Rainier graph (Vesuvius is the same, just more qubits). The short version is that an embedding is a map from the variables of the problem you wish to solve to the physical qubits in a processor, where the map can be one-to-many (each variable can be mapped to many physical qubits). To preserve the problem structure we strongly 'lock together' qubits corresponding to the same variable.

In the case of fully connected QUBOs like the ones we have here, it is known that you can always embed a fully connected graph with $K$ vertices into a Chimera graph with $(K-1)^2/2$ physical qubits: Rainier can embed a fully connected 17-variable graph, while Vesuvius can embed a fully connected 33-variable graph.

(Figure: an embedding into Rainier, from this paper, for solving a problem that computes Ramsey numbers; qubits colored the same represent the same computational variable.)

So one way we could use Vesuvius to solve the sparse coding QUBOs is to restrict $K$ to be 33 or less and embed these problems. However this is unsatisfactory for two (related) reasons. The first is that 33 dictionary atoms isn't enough for what we ultimately want to do (sparse coding on big data sets). The second is that QUBOs generated by the procedure I've described are really easy for tabu search at that scale. For problems this small, tabu gives excellent performance with a per-problem timeout of about 10 milliseconds (about the same as the runtime for a single problem on Vesuvius), and since it can be run in the cloud, we can take advantage of massive parallelism as well. So even though on a problem-by-problem basis Vesuvius is competitive at this scale, when you gang up say 1,000 cores against it, Vesuvius loses (because there aren't a thousand of them available... yet). So this option, while we can do it, is out. At the stage we're at now this approach can't compete with cloud-enabled tabu. Maybe when we have a lot more qubits.

## Solving sparse coding QUBOs using BlackBox

BlackBox is an algorithm developed at D-Wave. Here is a high-level introduction to how it works. It is designed to solve problems where all we're given is a black box that converts possible answers to binary optimization problems into real numbers denoting how good those possible answers are. For example, the configuration of an airplane wing could be specified as a bit string, and to know how 'good' that configuration was, we might need to actually construct that example and put it in a wind tunnel and measure it. Or maybe just doing a large-scale supercomputer simulation is enough. But the relationship between the settings of the binary variables and the quality of the answer in problems like this is not easily specified in a closed form, like we were able to do with the sparse coding QUBOs.

BlackBox is based on tabu search, but uses the hardware to generate a model of the objective function around each search point that expands possibilities for next moves beyond single bit flips.
## Solving sparse coding QUBOs using BlackBox

BlackBox is an algorithm developed at D-Wave. Here is a high-level introduction to how it works. It is designed to solve problems where all we're given is a black box that converts possible answers to binary optimization problems into real numbers denoting how good those possible answers are. For example, the configuration of an airplane wing could be specified as a bit string, and to know how 'good' that configuration was, we might need to actually construct that example and put it in a wind tunnel and measure it. Or maybe just doing a large-scale supercomputer simulation is enough. But the relationship between the settings of the binary variables and the quality of the answer in problems like this is not easily specified in a closed form, like we were able to do with the sparse coding QUBOs.

BlackBox is based on tabu search, but uses the hardware to generate a model of the objective function around each search point that expands the possibilities for next moves beyond single bit flips. This modelling and sampling from hardware at each tabu step increases the time per step, but decreases the number of steps required to reach some target value of the objective function. As the cost of evaluating the objective function goes up, the gain from making fewer steps by making better moves at each step goes up. However, if the objective function can be evaluated very quickly, tabu generally beats BlackBox: tabu can make many more guesses per unit time, while BlackBox pays the additional cost of its modeling and hardware sampling step.

BlackBox can be applied to fully connected QUBOs of arbitrary size, and because of this it is better than embedding in that we lose the restriction to small numbers of dictionary atoms. With BlackBox we can try any size of problem and see how it does. We did this, and unfortunately BlackBox on Vesuvius is not competitive with cloud-enabled tabu search for any of the problem sizes we tried (which were, admittedly, still pretty small — up to 50 variables). I suspect that this will continue to hold, no matter how large these problems get, for the following reasons:

1. The inherently parallel nature of the sparse coding problem ($S$ independent QUBOs) means that we will always be up against multiple cores vs. a small number of Vesuvius processors. This factor can be significant — for a large problem with millions of data objects, this factor can easily be in the thousands or tens of thousands.
2. BlackBox is designed for objective functions that are really black boxes, so that there is no obvious way to attack the structure of the problem directly, and where it is very expensive to evaluate the objective function. This is not the case for these problems — they are QUBOs, and this means that attacks can be made directly based on this known fact. For these problems the current version of BlackBox, while it can certainly be used, is not in its sweet spot, and wouldn't be expected to be competitive with tabu in the cloud.

And this is exactly what we find — BlackBox on Vesuvius is not competitive with tabu on the cloud for any of the problem sizes we tried. Note that there is a small caveat here — it is possible (although I think unlikely) that for very large numbers of atoms (say low thousands) this could change, and BlackBox could start winning. However, for both of the reasons listed above, I would bet against this.

## What to do, what to do

We tried both obvious tactics for using our gear to solve these problems, and both lost to a superior classical approach. So do we give up and go home? Of course not! We shall go on to the end… we shall never surrender!!! We just need to do some mental gymnastics here and be creative.

In both of the approaches above, we tried to shoehorn the problem our application generates into the hardware. Neither solution was effective. So let's look at this from a different perspective. Is it possible to restrict the problems generated by sparse coding so that they exactly fit in hardware — so that we require the problems generated to exactly match the hardware graph? If we can achieve this, we may be able to beat the classical competition, as we know that Vesuvius is many orders of magnitude faster than anything that exists on earth for the native problems it's solving.

# Sparse coding on D-Wave hardware: setting up the problem

Posted on April 1, 2013 by

Sparse coding is a very interesting idea that we've been experimenting with.
It is a way to find 'maximally repeating patterns' in data, and to use these as a basis for representing that data. Some of what's going to follow here is quite technical. However, these are beautiful ideas. They are probably related to how human perception and cognition function. I think sparse coding is much more interesting and important than, say, the Higgs boson.

## Everything starts from data

*Twenty-five data objects. Each is a 28×28 pixel greyscale image.*

Sparse coding requires data. You can think of 'data' as a (usually large) set of objects, where each object can be represented by a list of real numbers. As an example, we'll use the somewhat pathological MNIST handwritten digits data set. But you can use any dataset you can imagine. Each data object in this set is a small (28×28 pixel) greyscale image of a handwritten digit. A 28×28 pixel greyscale image can be represented by 28×28 = 784 numbers, each in the range 0..255. The training set has 60,000 of these. We can represent the entire data set using a two-dimensional array, where there are 60,000 columns (one per image) and 784 rows (one for each pixel). Let's call this array ${Z}_0$.

## Technical detail: this thing is bigger than it has to be

*What the first few MNIST images look like, keeping an increasingly large number of SVD modes.*

One thing you may notice about MNIST is that the images all look mostly similar. In fact we can exploit this to get a quick compression of the data. The trick we will use is called Singular Value Decomposition (SVD). SVD quickly finds a representation of the data that allows you to reduce its dimensionality. In the case of MNIST, it turns out that instead of using 784 pixel values, we can get away with using around 30 SVD modes instead, with only a small degradation in image quality. If we perform an SVD on ${Z}_0$ and keep only the parts corresponding to the largest 30 singular values, we get a new matrix ${Z}$ which still has 60,000 columns, but only 30 rows. Let's call the $s^{th}$ column vector $\vec{z}_s$; it stores the $s^{th}$ compressed image. This trick, and others related to it, can always be used to pre-process any raw data we have.
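In numpy, the compression step looks roughly like this — a sketch that assumes the raw array `Z0` is already in memory (loading MNIST is left out, and at this scale a truncated or randomized SVD would be wiser than the full one):

```python
import numpy as np

# Z0: the 784 x 60000 array of raw MNIST pixel columns (assumed already loaded).
U, s, Vt = np.linalg.svd(Z0, full_matrices=False)

k = 30
Z = np.diag(s[:k]) @ Vt[:k, :]   # 30 x 60000: the compressed images z_s
recon = U[:, :k] @ Z             # back in pixel space, to eyeball the quality loss
```

Each column of `Z` is a 30-number stand-in for a 784-pixel image; `U[:, :k]` holds the 30 SVD modes used to map back to pixel space.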
## The Dictionary

Let's now create a small number — let's say 32 — of very special images. The exact number of these actually matters quite a bit (it's an example of something called a hyperparameter, many of which lurk in machine learning algorithms and are generally a giant pain in the ass), but the most important thing is that it should be larger than the dimension of the data (which in this case is 30). When this is the case, it's called an overcomplete basis, which is good for reasons you can read about here.

These images will be in the exact same format as the data we're learning over. We'll put them in a new two-dimensional array, which will have 32 columns (one for each image) and 30 rows (the same as the post-SVD compressed images above). We'll call this array $\hat{D}$ the dictionary. Its columns are dictionary atoms; the $j^{th}$ dictionary atom is the column vector $\vec{d}_j$. These dictionary atoms will store the 'maximally repeating patterns' in our data that we're looking for. At this stage, we don't know what they are — we need to learn them from the data.

## The sparse coding problem

Now that we have a data array, and a placeholder array for our dictionary, we are ready to state the sparse coding problem. To see how all this works, imagine we want to reconstruct an image (say $\vec{z}_s$) with our dictionary atoms. Imagine we can only either include or exclude each atom, and the reconstruction is a linear combination of the atoms. Furthermore, we want the reconstruction to be sparse, which means we only want to turn on a small number of our atoms. We can formalize this by asking for the solution of an optimization problem.

## L0-norm sparse coding

Define a vector of binary (0/1) numbers $\vec{w}$ of length 32. Now solve this problem: find $\vec{w}$ and $\hat{D} = [\vec{d}_1 \vec{d}_2 \ldots \vec{d}_{31} \vec{d}_{32}]$ (remember the vectors $\vec{d}_k$ are column vectors in the matrix $\hat{D}$) that minimize

$G(\vec{w}, \hat{D} ; \lambda) = || \vec{z}_s - \sum_{k=1}^{32} w_k \vec{d}_k ||^2 + \lambda \sum_{k=1}^{32} w_k$

The real number $\lambda$ is called a regularization parameter (another one of those hyperparameters). The larger this number is, the bigger the penalty for adding more dictionary atoms to the reconstruction — the rightmost term counts the number of atoms used in the reconstruction. The first term is a measure of the reconstruction error: minimizing it means minimizing the distance between the data (sometimes called the ground truth) $\vec{z}_s$ and the reconstruction of the image $\sum_{k=1}^{32} w_k \vec{d}_k$. [Note that this prescription for sparse coding is different from the one typically used, where the weights $\vec{w}$ are real-valued and the regularization term is of the L1 (sum over absolute values of the weights) form.]

You may see a simple way to globally optimize this. All you have to do is set $\vec{d}_1 = \vec{z}_s$, $\vec{d}_k = 0$ for all the other $k$, $w_1 = 1$ and $w_k = 0$ for all the other $k$ — in other words, store the image in one of the dictionary atoms and only turn that one on to reconstruct. Then the reconstruction is perfect, and you only used one dictionary atom to do it! OK, well, that's useless. But now say we sum over all the images in our data set (in the case of MNIST this is 60,000 images). Now the number of images, instead of being less than the number of dictionary atoms, is much, much larger. The trick of just 'memorizing' the images in the dictionary won't work any more.

## Now over all the data

Let's call the total number of data objects $S$. In our MNIST case, $S = 60{,}000$. We now need to find $\hat{W}$ (this is a 32×S array) and $\hat{D}$ that minimize

$G(\hat{W}, \hat{D} ; \lambda) = \sum_{s=1}^S || \vec{z}_{s} - \sum_{k=1}^{32} w_{ks} \vec{d}_k ||^2 + \lambda \sum_{s=1}^S \sum_{k=1}^{32} w_{ks}$

This is the full sparse coding prescription — in the next post I'll describe how to solve it, what the results look like, and introduce a bunch of ways to make good use of the dictionary we've just learned!
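As a concrete footnote, the single-image objective above is a couple of lines of numpy — my own illustration, with shapes as in the post:

```python
import numpy as np

def G(w, D, z, lam):
    """L0-style sparse coding objective for one image.

    w:   binary 0/1 vector, shape (32,)  -- which atoms are switched on
    D:   dictionary, shape (30, 32)      -- one atom per column
    z:   compressed image, shape (30,)
    lam: the regularization weight lambda
    """
    reconstruction = D @ w  # linear combination of the active atoms
    return np.sum((z - reconstruction) ** 2) + lam * np.sum(w)
```

Expanding the squared norm and dropping the constant $||\vec{z}_s||^2$ (using $w_k^2 = w_k$ for binary weights) is exactly what produces the QUBO form quoted in the follow-up post above.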
# Quantum computing for learning: Shaping reality both figuratively and literally

Posted on March 21, 2013 by

I'd like to document in snippets and thought-trains a little more of the story behind how my co-workers and I are trying to apply quantum computing to the field of intelligence and learning. I honestly think that this is the most fascinating and cool job in the world. The field of Artificial Intelligence (AI) – after a period of slowdown – is now once again exploding with possibility. Big data, large-scale learning, deep networks, high performance computing, bio-inspired architectures… There have been so many advancements lately that it's kind of hard to keep up! Similarly, the work being done on quantum information processing here at D-Wave is ushering in a new computational revolution. So, being a multi-disciplinary type and somewhat masochistic, I find it exciting to explore just how far we can take the union of these two fields.

Approximately 5 years ago, while working on my PhD at university, I started to have an interest in whether or not quantum computing could be used to process information in a brain-like way. I'm trying to remember where this crazy obsession came from. I'd been interested in brains and artificial intelligence for a while, and had done a bit of reading on neural nets. Probably it was because one of my friends was doing a PhD that involved robots, and I always thought it sounded super-cool. But I think that it was thinking about Josephson junctions that really got me wondering. Josephson junctions are basically switches. But they kind of have unusual ways of switching (sometimes when you don't want them to). And because of this, I always thought that Josephson junctions are a bit like neurons. So I started searching the literature to find ways in which researchers had used these little artificial neurons to compute. And surprisingly, I found very little. There were some papers about how networks of Josephson junctions could be used to simulate neurons, but no-one had actually built anything substantial. I wrote a bit about this in a couple of old posts (from my Physics and Cake blog).

I'd read about the D-Wave architecture and I'd been following the company's progress for some time. After reading a little about the promise of Josephson junction networks, and the pitfalls of the endeavour (mostly because making the circuits reproducible is extremely difficult), I began wondering whether or not the D-Wave processor could be used in this way. It's a network of qubits made from Josephson junctions, after all, and they're connected together so that they talk to each other. Yeah, kind of like neurons, really. Isn't that funny. And hey, those D-Wave types have spent 8 years getting that network of Josephson junctions to behave itself. Getting it to be programmable, addressable, robust, and scalable. Hmm, scalable…. I particularly like that last one. Brains are like, big. Lotsa connections. And also, I thought to myself (probably over tea and cake), if the neurons are qubits, doesn't that mean you can put them in superposition and entangled states? What would that even mean? Boy, that sounds cool. Maybe they would process information differently, and maybe they could even learn faster if they could be in combinations of states at the same time and … could you build a small one and try it out? The train of thought continued.

## From quantum physics to quantum brains

That was before I joined D-Wave. Upon joining the company, I got to work applying some of my physics knowledge to helping build and test the processors themselves. However, there was a little part of me that still wanted to actually find ways to use them. Not too long after I had joined the company there happened to be a competition run internally at D-Wave known as 'Apps Day', open to everyone in the company, where people were encouraged to try to write an app for the quantum computer. Each candidate got to give a short presentation describing their app, and there were prizes at stake. I decided to try to write an app that would allow the quantum computer to learn how to play the board game Go. It was called QUAGGA, named after an extinct species of zebra. As with similar attempts involving the ill-fated zebra, I too might one day try to resurrect my genetically-inferior code.
Of course, this depends on whether or not I ever understand the rules of Go well enough to program it properly. Anyway… back to Apps Day. There were several entries and I won a runner-up prize (my QUAGGA app idea was good, even though I hadn't actually finished coding it or run it on the hardware). But the experience got me excited, and I wanted to find out more about how I could apply quantum processing to applications, especially those in the area of machine learning and AI. That's why I moved from physics into applications development.

Since then, the team I joined has been looking into applying quantum technology to various areas of machine learning, in a bid to unite two fields which I have a really strong feeling are made for each other. I've tried to analyse where this hunch originates from. The best way to describe it is that I really want to create models of machine intelligence and creativity that are bio-inspired. To do that, I believe that you have to take inspiration from the mammalian brain, such as its highly parallel, hierarchical arrangement of substructures. And I just couldn't help but keep thinking: D-Wave's processors are highly parallel systems with qubits that can be in one of two states (similar to firing or not firing neurons), with connections between them that can be inhibitory or excitatory. Moreover, like the brain, these systems are INCREDIBLY energy efficient, because they are designed to do parallel processing. Modern CPUs are not – hence why brain simulations and machine learning programs take so much energy and require huge computer clusters to run. I believe we need to explore many different hardware and software architectures if we want to get smarter about intelligent computing and closer to the way our own minds work. Quantum circuits are a great candidate in that hunt for cool brain-like processing in silicon.

So what on earth happened here? I'd actually found a link between my two areas of ~~obsession~~ interest, and ended up working on some strange joint project that combined the best of both worlds. Could this be for real? I kept thinking that maybe I wanted to believe so badly I was just seeing the machine-learning messiah in a piece of quantum toast. However, even when I strive to be truly objective, I still find a lot of evidence that the results of this endeavour could be very fruitful. Our deep and ever-increasing understanding of physics (including quantum mechanics) is allowing us to harness and shape the reality of the universe to create new types of brains. This is super-cool. However, the thing I find even cooler is that if you work hard enough at something, you may discover that several fascinating areas are related in a deeper way than you previously understood. Using this knowledge, you can shape the reality of your own life to create a new, hybrid project idea to work on; one which combines all the things you love doing.

# It's like the quantum computer is playing 20 questions…

Posted on April 1, 2012 by

I've been thinking about the BlackBox compiler recently and came up with a very interesting analogy to the way it works. There are actually lots of different ways to think about how BlackBox works, and we'll post more of them over time, but here is a very high-level and fun one.

The main way that you use BlackBox is to supply it with a classical function which computes the "goodness" of a given bitstring by returning a real number (the lower this number, the better the bitstring was).
Whatever your optimization problem is, you need to write a function that encodes your problem into a series of bits $(x_1, x_2, x_3, \ldots, x_N)$ to be discovered, and which also computes how "good" a given bitstring (e.g. 0,1,1,…,0) is. When you pass such a function to BlackBox, the quantum compiler then repeatedly comes up with ideas for bitstrings and, using the information that your function supplies about how good its "guesses" are, it quickly converges on the best bitstring possible.

So using this approach, the quantum processor behaves as a co-processor to a classical computing resource. The classical computing resource handles one part of the problem (computing the goodness of a given bitstring), and the quantum computer handles the other (suggesting bitstrings). I realized that this is described very nicely by the two computers playing 20 questions with one another. The quantum computer suggests creative solutions to a problem, and then the classical computer is used to give feedback on how good the suggested solution is. Using this feedback, BlackBox will intelligently suggest a new solution. So in the 20 questions example, BlackBox knows NOT to make the next question "Is it a carrot?"

There is actually a deep philosophical point here. One of the pieces that is missing in the puzzle of artificial intelligence is how to make algorithms and programs more creative. I have always been an advocate of using quantum computing to power AI, but we now start to see concrete ways in which it could really start to address some of the elusive problems that crop up when trying to build intelligent machines. At D-Wave, we have been starting some initial explorations in the areas of machine creativity and machine dreams, but it is early days and the pieces are only just starting to fall into place.

I was wondering if you could use the QC to actually play 20 questions for real. This is quite a fun application idea. If anyone has any suggestions for how to craft 20 questions into an objective function, let me know. My first two thoughts were to do something with WordNet and NLTK. You could try either a pattern-matching or a machine-learning version of 'mining' WordNet for the right answer. This project would be a little Watson-esque in flavour.
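In that spirit, here is one toy way the objective could be set up — entirely my own sketch, with a hard-coded attribute table standing in for anything WordNet-derived, and not the actual BlackBox API. Each bit is the yes/no answer to a fixed question, and the function scores a candidate answer-string by how close it is to some known concept:

```python
# Toy sketch of a 20-questions-style objective function (hypothetical).
QUESTIONS = ["Is it alive?", "Is it edible?", "Is it bigger than a breadbox?",
             "Does it have legs?"]

CONCEPTS = {                  # attribute vectors; in practice these might be
    "carrot":   (1, 1, 0, 0), # mined from WordNet -- hard-coded here
    "elephant": (1, 0, 1, 1),
    "table":    (0, 0, 1, 1),
}

def goodness(bits):
    """Lower is better: Hamming distance to the closest known concept."""
    return min(sum(b != c for b, c in zip(bits, concept))
               for concept in CONCEPTS.values())

# A solver (classical or quantum) would now search over bitstrings,
# calling goodness() on each guess:
print(goodness((1, 1, 0, 0)))   # 0 -- matches "carrot" exactly
```

A solver fed this function never needs to know what the questions mean; it only sees which answer-strings score well, which is exactly the 20-questions dynamic described above.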
# The dreams of spiritual machines

Posted on March 2, 2012 by

When I was in middle school, every year we had to select a project to work on. These projects came from a list of acceptable projects, and were the typical science-ish projects you'd expect a seventh grader to take on. One year my project was about whooping cranes. Not sure why I picked that one. Maybe I thought it might be related to whooping cough.

One year the subject I picked was dreams. What were they? How did they come about? What, if anything, did they tell us about our waking life? I remember being intensely fascinated by the topic at the time, feeling that the answers I was getting to my questions from grown-ups and the encyclopedias checked out from the school library (there was no internet back then, at least in a form I could access) were not satisfactory at all. This was one of my earliest realizations that there were questions no-one yet knew the answers to.

The subject of dreams has come up in my adult life several times, and each time the same questions about them bubble up from my early encounter with them. An acquaintance of mine went through a period of having night terrors, where she would scream so loud that it would wake people in neighboring houses. She described them as a sense of horror and dread of the most intense and indescribable kind, with the sure knowledge that it would never end. This led to multiple 911 calls over periods of years. Several trips to specialists and tests revealed nothing out of the ordinary. Then one day they suddenly stopped. To this day no one has a good explanation for why they started, or why they stopped. One of my friends has multiple vivid, realistic dreams every night, and he remembers them. They are also often terrifying. I, on the other hand, rarely dream, or if I do, I don't remember them.

Recently I have been thinking of dreams again, and I have four computer scientists to thank. One of them is Bill Macready, who is my friend and colleague at D-Wave, and inventor of the framework I'll introduce shortly. The second is Douglas Hofstadter. The third is Geoff Hinton. The fourth is David Gelernter.

Gelernter is a very interesting guy. Not only is he a rock star computer scientist (Bill Joy called him "one of the most brilliant and visionary computer scientists of our time"), he is also an artist, entrepreneur and a writer with an MA in classical literature. He was injured badly opening a package from the Unabomber in 1993. He is the author of several books, but the one I want to focus on now is The Muse in the Machine, which is must-read material for anyone interested in artificial intelligence. In this book, Gelernter presents a compelling theory of cognition that includes emotion, creativity and dreams as a central, critically important aspect of the creation of machines that think, feel and act as we do. In this theory, emotion, creativity, analogical thought and even spirituality are viewed as being essential to the creation of machines that behave as humans do. I can't do the book justice in a short post – you should read it. I am going to pull one quote out of the book though, but before I do I want to briefly touch on what Geoff Hinton has to do with all of this.

Hinton is also a rock star in the world of artificial intelligence, and in particular in machine learning. He was one of the inventors of back propagation, and a pioneer in deep belief nets and unsupervised learning. A fascinating demo I really like starts around the 20:00 mark of this video. In this demo, he runs a deep learning system 'in reverse', in generative mode. Hinton refers to this process as the system "fantasizing" about the images it's generating; however, Hinton's fantasizing can also be thought of as the system hallucinating, or even dreaming, about the subjects it has learned. Systems such as these exhibit what I believe to be clear instances of creativity – generating instances of objects that have never existed in the world before, but share some underlying property. In Hinton's demo, this property is "two-ness".

Alright, so back to Gelernter, and the quote from The Muse in the Machine:

> A computer that never hallucinates cannot possibly aspire to artificial thought.

While Gelernter speaks a somewhat different language than Hinton, I believe that the property of a machine that he is referring to here – the ability to hallucinate, fantasize or dream – is exactly the sort of thing Hinton is doing with his generative digit model. When you run that model, I would argue that you are seeing the faintest wisps of the beginning of true cognition in a machine.

Douglas Hofstadter is probably the most famous of the four computer scientists I've been thinking about recently.
He is of course the author of Gödel, Escher, Bach, which every self-respecting technophile has read, but more importantly he has been a proponent of the need to think about cognition from a very different perspective than most computer scientists. For Hofstadter, creativity and analogical reasoning are the key points of interest he feels we need to understand in order to understand our own cognition. Here he is in the "Pattern-finding as the Core of Intelligence" introduction to his Fluid Analogies book:

> In 1977, I began my new career as a professor of computer science, aiming to specialize in the field of artificial intelligence. My goals were modest, at least in number: first, to uncover the secrets of creativity, and second, to uncover the secrets of consciousness, by modeling both phenomena on a computer.

Good goals. Not easy.

All four of these folks share a perspective that understanding how analogical thinking and creativity work is an important and under-studied part of building machines like us. Recently we've been working on a series of projects that are aligned with this sort of program. The basic framework is introduced here, in an introductory tutorial, and this basic introduction is extended here. One of the by-products of this work is a computing system that generates vivid dreamscapes. You can look at one of these by clicking on the candle photograph above, by following through the Temporal QUFL tutorial, or by clicking on the direct link below. The technical part of how these dreamscapes are generated is described in those tutorials. I believe these ideas are important.

These dreamscapes remind me of H.P. Lovecraft's Dreamlands, and this passage from Celephais:

> There are not many persons who know what wonders are opened to them in the stories and visions of their youth; for when as children we learn and dream, we think but half-formed thoughts, and when as men we try to remember, we are dulled and prosaic with the poison of life. But some of us awake in the night with strange phantasms of enchanted hills and gardens, of fountains that sing in the sun, of golden cliffs overhanging murmuring seas, of plains that stretch down to sleeping cities of bronze and stone, and of shadowy companies of heroes that ride caparisoned white horses along the edges of thick forests; and then we know that we have looked back through the ivory gates into that world of wonder which was ours before we were wise and unhappy.

I hope you like them.

# Quantum computing and light switches

Posted on November 25, 2011 by

So, as part of learning how to become a quantum ninja and program the D-Wave One, it is important to understand the problem that the machine is designed to solve. The D-Wave machine is designed to find the minimum value of a particular mathematical expression, which I can write down in one line:

$E(s_1,\ldots,s_N) = \sum_{i} h_i s_i + \sum_{\langle i,j\rangle} J_{ij}\, s_i s_j$

As people tend to be put off by mathematical equations in blogposts, I decided to augment it with a picture of a cute cat. However, unless you are very mathematically inclined (like kitty), it might not be intuitive what minimizing this expression actually means, why it is important, or how quantum computing helps. So I'm going to try to answer those three questions in this post.

## 1.) What does the cat's expression mean?

The machine is designed to solve discrete optimization problems. What is a discrete optimization problem? It is one where you are trying to find the best settings for a bunch of switches. Here's a graphical example of what is going on.
Let's imagine that our switches are light switches, which each have a 'bias value' (a number) associated with them, and they can each be set either ON or OFF:

*The light switch game*

The game that we must play is to set all the switches into the right configuration. What is the right configuration? It is the one where, when we set each of the switches to either ON or OFF (where ON = +1 and OFF = -1) and then add up all the switches' bias values multiplied by their settings, we get the lowest answer. This is where the first term in the cat's expression comes from: the bias values are called $h$'s and the switch settings are called $s$'s. So depending upon which switches we set to +1 and which we set to -1, we will get a different score overall.

You can try this game. Hopefully you'll find it easy, because there's a simple rule to winning: if we set all the switches with positive biases to OFF and all the switches with negative biases to ON, and add up the result, we get the lowest overall value. Easy, right? I can give you as many switches as I want, with many different bias values, and you just look at each one in turn and flip it either ON or OFF accordingly.

OK, let's make it harder. Now imagine that many of the pairs of switches have an additional rule, one which involves considering PAIRS of switches in addition to just individual switches… we add a new bias value (called $J$) which we multiply by BOTH of the switch settings that connect to it, and we add the resulting value we get from each pair of switches to our overall number too. Still, all we have to do is decide whether each switch should be ON or OFF, subject to this new rule. But now it is much, much harder to decide, because each switch's neighbours affect it. Even with the simple example shown with 2 switches in the figure above, you can't just follow the rule of setting them to be the opposite sign to their bias value anymore (try it!). With a complex web of switches having many neighbours, it quickly becomes very frustrating to try to find the right combination to give you the lowest value overall.

## 2.) It's a math expression – who cares?

We didn't build a machine to play a strange masochistic light switch game. The concept of finding a good configuration of binary variables (switches) in this way lies at the heart of many problems that are encountered in everyday applications. A few are shown in the figure below. Even the idea of doing science itself is an optimization problem (you are trying to find the best 'configuration' of terms contributing to a scientific equation which matches our real-world observations).

## 3.) How does quantum mechanics help?

With a couple of switches you can just try every combination of ONs and OFFs; there are only four possibilities: [ON ON], [ON OFF], [OFF ON] or [OFF OFF]. But as you add more and more switches, the number of possible ways that the switches can be set grows exponentially — with $N$ switches there are $2^N$ configurations. You can start to see why the game isn't much fun anymore. In fact, it is even difficult for our most powerful supercomputers. Being able to store all those possible configurations in memory, and moving them around inside conventional processors to calculate whether our guess is right, takes a very, very long time. With only 500 switches, there isn't enough time in the Universe to check all the configurations. Quantum mechanics can give us a helping hand with this problem.
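(To see the blow-up concretely before bringing in the quantum mechanics, here is a tiny brute-force version of the game — my own illustration; with 500 switches the loop below would outlast the Universe.)

```python
import itertools

def best_switch_settings(h, J):
    """Exhaustively minimize sum_i h_i*s_i + sum_(i,j) J_ij*s_i*s_j over s in {-1,+1}^N.

    h: list of N bias values; J: dict mapping index pairs (i, j) to couplings.
    Only feasible for small N -- the loop visits all 2**N configurations.
    """
    N = len(h)
    best_s, best_E = None, float("inf")
    for s in itertools.product([-1, +1], repeat=N):
        E = sum(h[i] * s[i] for i in range(N))
        E += sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
        if E < best_E:
            best_s, best_E = s, E
    return best_s, best_E

# Two switches with a coupling: the simple "oppose your bias" rule now fails.
# The rule suggests (-1, -1) with E = 0, but the true optimum is better:
print(best_switch_settings([1.0, 1.0], {(0, 1): 2.0}))   # ((-1, 1), -2.0)
```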
The fundamental power of a quantum computer comes from the idea that you can put bits of information into a superposition of states. This means that, using a quantum computer, our light switches can be ON and OFF at the same time. Now let's consider the same bunch of switches as before, but now held in a quantum computer's memory. Because all the light switches are on and off at the same time, we know that the correct answer (the correct ON/OFF setting for each switch) is represented in there somewhere… it is just currently hidden from us.

What the D-Wave quantum computer allows you to do is take this 'quantum representation' of your switches and extract the configuration of ONs and OFFs with the lowest value. Here's how you do it: you start with the system in its quantum superposition as described above, and you slowly adjust the quantum computer to turn off the quantum superposition effect. At the same time, you slowly turn up all those bias values (the $h$'s and $J$'s from earlier). As this is performed, the switches slowly drop out of their superposition and each chooses one classical state, either ON or OFF. At the end, each switch MUST have chosen to be either ON or OFF. The quantum mechanics working inside the computer helps the light switches settle into the right states to give the lowest overall value when you add them all up at the end. Even though there are $2^N$ possible configurations it could have ended up in, it finds the lowest one, winning the light switch game.

# The Developer Portal

Posted on November 23, 2011 by

Keen-eyed readers may have noticed a new section on the D-Wave website entitled 'developer portal'. Currently the devPortal is being tested within D-Wave; however, we are hoping to open it up to many developers in a staged way within the next year. We've been getting a fair amount of interest from developers around the world already, and we're anxious to open up the portal so that everyone can have access to the tools needed to start programming quantum computers! However, given that this way of programming is so new, we are also cautious about carefully testing everything before doing so. In short, it is coming, but you will have to wait just a little longer to get access!

A few tutorials are already available for everyone on the portal. These are intended to give a simple background to programming the quantum systems in advance of the tools coming online. New tutorials will be added to this list over time. If you'd like to have a look, you can find them here: DEVELOPER TUTORIALS

In the future we hope that we will be able to grow the community to include competitions and prizes, programming challenges, and large open source projects for people who are itching to make a contribution to the fun world of quantum computer programming.

# IEEE talk at Johns Hopkins University

Posted on October 23, 2011 by

I gave a talk at Johns Hopkins University on Monday entitled 'Why the world needs quantum computing'. The talk was for a general audience (no specific quantum physics background required). Here is a link to the talk. The video is available for download from this site!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 94, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9454793334007263, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-algebra/58660-direct-products.html
# Thread:

1. ## direct products

Show that the center of a direct product is the direct product of the centers:

$Z(G_1 \times G_2 \times \cdots \times G_n) = Z(G_1) \times Z(G_2) \times \cdots \times Z(G_n)$

Deduce that a direct product of groups is abelian if and only if each of the factors is abelian.

I know that $G_1 \times G_2 \times \cdots \times G_n$ is isomorphic to $G_1 G_2 \cdots G_n$.
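For what it's worth, here is a sketch of the standard argument (my own addition; no reply was recorded in the thread). Since multiplication in the direct product is coordinate-wise, an element is central exactly when each coordinate is:

$(g_1,\ldots,g_n)\in Z(G_1\times\cdots\times G_n)$
$\iff (g_1h_1,\ldots,g_nh_n)=(h_1g_1,\ldots,h_ng_n)$ for all $(h_1,\ldots,h_n)\in G_1\times\cdots\times G_n$
$\iff g_ih_i=h_ig_i$ for all $h_i\in G_i$ and all $i$
$\iff g_i\in Z(G_i)$ for each $i$.

For the deduction, note that a group $G$ is abelian if and only if $Z(G)=G$; by the above, $Z(G_1\times\cdots\times G_n)=G_1\times\cdots\times G_n$ holds exactly when $Z(G_i)=G_i$ for every $i$.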
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9246034026145935, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/72901-finding-area-bounded-two-cardioids.html
# Thread:

1. ## Finding area bounded by two cardioids

Find the area of the region common to the interiors of the cardioids $r = 1+\cos\theta$ and $r = 1-\cos\theta$.

I understand what the picture looks like; I'm having difficulty with the limits of integration. Since both cardioids are congruent, you can find the area of the top one and multiply the double integral by 2. In this case I'm just not sure what the limits of integration for $r$ would be — I think from zero to ?? — and I think the limits of integration for $\theta$ would be $0$ to $\pi$. Help!

One more thing: is the integrand just $r\,dr\,d\theta$?

2. Hello, s7b!

Are you sure you have the right graph?

Find the area of the region common to the interiors of the cardioids $r\:=\:1+\cos\theta\,\text{ and }\,r\:=\:1-\cos\theta$

*(sketch of the two overlapping cardioids)*

They intersect when: $1 + \cos\theta \:=\:1-\cos\theta \quad\Rightarrow\quad \theta \:=\:\pm\tfrac{\pi}{2}$

The region has four-way symmetry. We can find the area in Quadrant 1 and multiply by 4. The integral is: $\text{Area} \;=\;4 \times\tfrac{1}{2}\int^{\frac{\pi}{2}}_0\left(1 - \cos\theta\right)^2d\theta$

3. common area using symmetry …

$A = 4 \int_0^{\frac{\pi}{2}} \frac{(1 - \cos{\theta})^2}{2} \, d\theta$

or …

$A = 4 \int_{\frac{\pi}{2}}^{\pi} \frac{(1 + \cos{\theta})^2}{2} \, d\theta$
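Neither reply carries the computation through; for completeness, evaluating the first form (my own working, worth double-checking):

$A = 2\int_0^{\pi/2}\left(1 - 2\cos\theta + \cos^2\theta\right)d\theta = 2\left[\theta - 2\sin\theta + \frac{\theta}{2} + \frac{\sin 2\theta}{4}\right]_0^{\pi/2} = 2\left(\frac{3\pi}{4} - 2\right) = \frac{3\pi}{2} - 4 \approx 0.71$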
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9412811994552612, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/110607-find-d-such-f-x-converges.html
# Thread:

1. ## Find d such that f(x) converges

EDIT: I see now, too late, that the title of the thread says "Find d…", but it is supposed to be "Find c…".

Let $f(x)=x-(4+x^4+x^6)^c$. The question is: what is $c$ such that, as $x\to\infty$, $f(x)\to k$ where $k$ is some finite number?

Now what I did is something like this:

$\lim\limits_{x\to\infty}\left[x-(4+x^4+x^6)^c\right]=\lim\limits_{x\to\infty}x-\lim\limits_{x\to\infty}\left(x^6\left(\frac{4}{x^6}+\frac{1}{x^2}+1\right)\right)^c$

$=\lim\limits_{x\to\infty}x - \lim\limits_{x\to\infty}x^{6c} \cdot \lim\limits_{x\to\infty}\left(\frac{4}{x^6}+\frac{1}{x^2}+1\right)^c = \lim\limits_{x\to\infty}\left(x-x^{6c}\right)=k$, where $k\in\mathbb{R}$.

Now the only way such a limit converges to $k$ is for the powers of $x$ to be equal, that is $c=\frac{1}{6}$, and then $k=0$. Now the main problem is: how do I prove there is no other $c$ such that $f(x)$ converges as $x\to\infty$? I just need you to point me in the right direction.

2. Nothing? Anyway, I think I am supposed to express $(4+x^4+x^6)^c$ as a Taylor series. I just started to learn Taylor series today, so I have no idea how I am supposed to do that.
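The thread got no direct answer; here is a sketch of the kind of expansion presumably intended (my own addition). Factor out the dominant power and use the binomial series $(1+t)^c = 1 + ct + O(t^2)$ for small $t$:

$(4+x^4+x^6)^c = x^{6c}\left(1 + x^{-2} + 4x^{-6}\right)^c = x^{6c}\left(1 + c\,x^{-2} + O(x^{-4})\right)$

If $c < \frac{1}{6}$ (including $c \leq 0$), then $x$ dominates and $f(x)\to+\infty$; if $c > \frac{1}{6}$, then $x^{6c}$ dominates and $f(x)\to-\infty$. At $c = \frac{1}{6}$,

$f(x) = x - x\left(1 + \tfrac{1}{6}x^{-2} + O(x^{-4})\right) = -\tfrac{1}{6x} + O(x^{-3}) \to 0,$

so $c=\frac{1}{6}$ is the only value for which $f$ converges, with $k=0$ as the poster found.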
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9637304544448853, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/2048/solutions-to-power-equations
# Solution(s) to 'power equations'

I'm not sure a 'power equation' is the right name for the equation I'd like to know more about (and, specifically, about its solutions), but I don't know the 'proper' way to name it. The simplest 'power polynomial' is: $P(x) = x^x$. The simplest 'power equation' is:

(1) $x^x = c$ for some $c \in \mathbb{R}$.

What are the exact solutions $x$ (in $\mathbb{C}$) of (1), in terms of $c$, apart from the 'obvious' solutions $(x=0, c=0)$, $(x=1, c=1)$ and $(x=2, c=4)$, and all the other solutions of the form $x^x = n^n$ for $n \in \mathbb{N}$?

We could extend the power polynomial: $P_{2}(x) = x^{{ax}^{bx}} + x^{cx}$. What are the exact solutions of $P_{2}(x) = d$ for $a,b,c,d \in \mathbb{R}, x\in \mathbb{C}$? Or perhaps I should ask what the exact form of the solutions is. We could generalize the power polynomial even further to $P_{3}(x)$ and $P_{n}(x)$, but I don't know how to write down the latter, general polynomial.

Thanks in advance, Max

(P.S. I: References are always welcome. II: If you think this question belongs on MO, please tell me.)

- @Tom and MRA: thanks, this is a good start indeed. – Max Muller Aug 10 '10 at 13:26
- There is no reason to expect that such equations have solutions in terms of familiar functions. – Qiaochu Yuan Aug 10 '10 at 16:45
- There is also no reason to expect that anyone has ever studied this problem. These kinds of equations don't, to my knowledge, appear naturally in any problem. – Qiaochu Yuan Aug 10 '10 at 17:08
- x=0, c=0 isn't an "obvious solution." The expression $0^0$ is indeterminate. If anything, you should use $0^0=1$ here since $\lim_{x\rightarrow 0} x^x = 1$. – Corey Jun 27 '11 at 2:32

## 2 Answers

Maybe you should read about the Lambert W-function, which gives the solutions to expressions like $z=x^x$. However, I am not sure what to do in the case of a "power tower" like $P_2$.

These types of objects are called super (hyper) polynomials, which are polynomials of power towers or tetrations. You may want to look at http://en.wikipedia.org/wiki/Tetration for more information and references. These are fairly complicated structures and an object of current research.
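To spell out the Lambert-W connection from the first answer (a standard manipulation for real $x>0$ and $c>1$, ignoring branch questions): taking logarithms in $x^x = c$ gives $x\ln x = \ln c$; substituting $x = e^t$ turns this into $t e^t = \ln c$, whose solution is by definition $t = W(\ln c)$. Hence

$$x = e^{W(\ln c)} = \frac{\ln c}{W(\ln c)},$$

where the last equality uses the defining relation $W(z)e^{W(z)} = z$. The complex solutions come from the other branches $W_k$ of the Lambert function.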
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9413460493087769, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2008/12/09/direct-sums-of-representations/?like=1&_wpnonce=76a2d9cd23
# The Unapologetic Mathematician

## Direct Sums of Representations

We know that we can take direct sums of vector spaces. Can we take representations $\rho:A\rightarrow\mathrm{End}(V)$ and $\sigma:A\rightarrow\mathrm{End}(W)$ and use them to put a representation on $V\oplus W$? Of course we can, or I wouldn't be making this post! This is even easier than tensor products were, and we don't even need $A$ to be a bialgebra.

An element of $V\oplus W$ is just a pair $(v,w)$ with $v\in V$ and $w\in W$. We simply follow our noses to define

$\displaystyle\left[\left[\rho\oplus\sigma\right](a)\right](v,w)=\left(\left[\rho(a)\right](v),\left[\sigma(a)\right](w)\right)$

The important thing to notice here is that the direct summands $V$ and $W$ do not interact with each other in the direct sum $V\oplus W$. This is very different from tensor products, where the tensorands $V$ and $W$ are very closely related in the tensor product $V\otimes W$. If you've seen a bit of pop quantum mechanics, this is exactly the reason quantum systems exhibit entanglement while classical systems don't.

Okay, so we have a direct sum of representations. Is it a biproduct? Luckily, we don't have to bother with universal conditions here, because a biproduct can be defined purely in terms of the morphisms $\pi_i$ and $\iota_i$. And we automatically have candidates for the proper morphisms sitting around: the inclusion and projection morphisms on the underlying vector spaces! All we need to do is check that they intertwine the representations, and we're done. And we really only need to check that the first inclusion and projection morphisms work, because all the others are pretty much the same.

So, we've got $\iota_1:V\rightarrow V\oplus W$ defined by $\iota_1(v)=(v,0)$. Following this with the action on $V\oplus W$ we get

$\displaystyle\left(\left[\rho(a)\right](v),\left[\sigma(a)\right](0)\right)=\left(\left[\rho(a)\right](v),0\right)$

But this is the same as if we applied $\iota_1$ to $\left[\rho(a)\right](v)$. Thus, $\iota_1$ is an intertwiner.

On the other hand, we have $\pi_1:V\oplus W\rightarrow V$, defined by $\pi_1(v,w)=v$. Acting now by $\rho$ we get $\left[\rho(a)\right](v)$, while if we acted by $\rho\oplus\sigma$ beforehand we'd get

$\displaystyle\pi_1\left(\left[\rho(a)\right](v),\left[\sigma(a)\right](w)\right)=\left[\rho(a)\right](v)$

Just as we want. The upshot is that taking the direct sum of two representations in this manner is a biproduct on the category of representations.

Posted by John Armstrong | Algebra, Representation Theory
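A small footnote (my addition, not part of the original post): the remaining biproduct identities are purely linear-algebraic, holding already for the underlying vector spaces, which is why only the intertwining needed checking:

$\pi_1\circ\iota_1 = 1_V,\qquad \pi_2\circ\iota_2 = 1_W,\qquad \pi_1\circ\iota_2 = 0,\qquad \pi_2\circ\iota_1 = 0,\qquad \iota_1\circ\pi_1 + \iota_2\circ\pi_2 = 1_{V\oplus W}.$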
http://quant.stackexchange.com/questions/7048/other-means-of-calibrating-heston-models/7051
# Other means of calibrating Heston models

I understand that the simplest way of calibrating a Heston model to a volatility surface is to use Monte Carlo to simulate the vol and stock price trajectories, and then use the observed prices to drive an optimization. However, I am just wondering if there is a "cleaner" way to calibrate the model, and how it would compare to the MC method. Also, what might be the potential issues with calibrating Heston models using MC? And what are some variance reduction techniques that could be used during the calibration? Thanks!

## 4 Answers

You can find the derivation of the Heston characteristic function (its Fourier transform) in Gatheral (2006). Using the characteristic function, you can optimize the model on the prices. There are multiple approaches to the optimization, among others pattern search (which is very slow) and stochastic optimization (randomly jump around and stop after n iterations), but I recommend a mix of both. I often use adaptive simulated annealing for an initial calibration and then run a pattern search. Depending on the language you use, these are available as functions and are pretty simple to implement.

If I recall correctly, the Fourier transform/characteristic function of the Heston model is

$$\phi_T(u) = \exp\{C(u,\tau)\theta + D(u,\tau)v_0\}$$

where

$$C(u,\tau)=\ \kappa \left[r_{-} \tau - \frac{2}{\eta^2}\log\left(\frac{1-g e^{-d\tau}}{1-g}\right) \right]$$
$$D(u,\tau)=\ r_{-}\, \frac{1-e^{-d\tau}}{1-ge^{-d\tau}}$$
$$g = \frac{r_{-}}{r_{+}},\qquad r_{\pm} = \frac{b\pm d}{\eta^2},\qquad d = \sqrt{b^2-4ac}$$
$$a = -\frac{u^2}{2} - \frac{iu}{2},\qquad b = \kappa-\rho\eta iu,\qquad c = \frac{\eta^2}{2}$$

Gatheral provides derivations for SVJ, SVJJ, VarG, etc. as well.

- It is really difficult to implement, I think; the best way is probably just through numerical integration, I guess, but thanks for the answer. – AZhu Feb 8 at 0:22
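For what it's worth, the formulas above transcribe almost line-for-line into numpy — my own sketch, following the answer's notation; $\phi_T$ is evaluated with complex arithmetic, and no claim is made about branch issues of the complex logarithm for long maturities:

```python
import numpy as np

def heston_cf(u, tau, kappa, theta, eta, rho, v0):
    """Heston characteristic function, transcribing the formulas above.

    u may be a real or complex scalar or array; tau is the maturity.
    kappa: mean reversion speed, theta: long-run variance,
    eta: vol of vol, rho: spot/vol correlation, v0: initial variance.
    """
    u = np.asarray(u, dtype=complex)
    a = -u**2 / 2 - 1j * u / 2
    b = kappa - rho * eta * 1j * u
    c = eta**2 / 2
    d = np.sqrt(b**2 - 4 * a * c)
    r_minus = (b - d) / eta**2
    r_plus = (b + d) / eta**2
    g = r_minus / r_plus
    D = r_minus * (1 - np.exp(-d * tau)) / (1 - g * np.exp(-d * tau))
    C = kappa * (r_minus * tau
                 - (2 / eta**2) * np.log((1 - g * np.exp(-d * tau)) / (1 - g)))
    return np.exp(C * theta + D * v0)
```

Plugging such a characteristic function into a Fourier pricing representation (Lewis or Carr–Madan, for instance) then gives vanilla prices fast enough to sit inside a calibration loop.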
Doesn't the Heston model have some Fourier transform formulae for pricing vanillas? I think one could use those to calibrate to the vanillas. Can't provide references at this moment, on the road. Edit: check out http://www.visixion.com/dok/Visixion_Calibrating_Heston.pdf — I haven't read this closely but it sounds familiar.

- Nice paper, I was going to include it as well but you had it already referenced. – Freddy Jan 22 at 3:16

I highly recommend you stick with the error function (RMSE) value minimization approach. I love MC techniques for this and related problem solving, and thus do not recommend you use anything else, because of their simplicity and transparency. It comes down to using the right discretization function and possibly implementing variance reduction approaches. Re variance reduction, have you tried the standard approaches (common random numbers, antithetic variates, control variates, importance sampling and stratified sampling)?

Here is a reference to a paper that quite neatly describes potential pitfalls and model calibration around the Heston model: http://www.math.umn.edu/~bemis/IMA/MMI2008/calibrating_heston.pdf — and another one, for its elegance in describing things in simple terms: http://ta.twi.tudelft.nl/mf/users/oosterle/oosterlee/chen.pdf

Here is a link to actually implement the calibration in Matlab: http://www.mathworks.com/matlabcentral/fileexchange/29446-heston-model-calibration-and-simulation

- Yes, I suppose with the long-jumping discretizations Monte Carlo methods could be used for calibration as well, but I still have the feeling some folks invert the vanilla formulae to calibrate to a given set of European options (if that is the goal). Not 100% sure though. Nice links, will read this week hopefully — never enough time to read everything! – experquisite Jan 22 at 3:49
- @experquisite, I hear you; if I printed all the papers from links I stored on my reading list on my iPad, they would not fit into the bedroom. I see your point re inversion of the closed form. What I like is that you have control over the process using a discretization, plus so many variance reduction techniques at hand, plus much more powerful means to run MC (parallelization, concurrency, GPU matrix computations, …) — it no longer gives exotic desks an excuse to wait till the next morning to get their risk ;-) – Freddy Jan 22 at 3:56
- Yeah, it also gives hope that models won't be popularized solely based on their analytical tractability anymore, but on the merits of their accurate description of underlying dynamics, even if they must be numerically solved for everything. – experquisite Jan 22 at 4:02
- Nice way of thinking, gotta upvote that ;-) Very much in line with my thoughts: too much attention is paid, imho, to deriving analytic closed-form solutions, and not to getting the underlying dynamics right. Look at some volatility models — they arrive at totally messed-up smile dynamics. – Freddy Jan 22 at 4:13

Here's a decent study of calibration performance using fast Fourier transforms versus other techniques. It concludes Gaussian quadrature works better than the other techniques. http://www.frankfurt-school.de/dms/publications-cqf/CPQF_Arbeits6.pdf Edit: AZhu points out the link above is dead and that a working link is http://mpra.ub.uni-muenchen.de/2975/1/MPRA_paper_2975.pdf
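As an illustration of the simplest of those variance reduction ideas, here is a sketch of antithetic variates inside a full-truncation Euler scheme for Heston — my own toy code, not from the thread, with zero rates and parameter names matching the characteristic function above:

```python
import numpy as np

def heston_paths_antithetic(S0, v0, kappa, theta, eta, rho, T, n_steps, n_pairs, seed=0):
    """Simulate Heston terminal prices with antithetic variates.

    Full-truncation Euler: negative variance is floored at 0 in the drift
    and diffusion terms. Each Gaussian draw is reused with its sign flipped,
    so the function returns 2*n_pairs terminal prices, antithetic in pairs.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(2 * n_pairs, float(S0))
    v = np.full(2 * n_pairs, float(v0))
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_pairs)
        z2 = rng.standard_normal(n_pairs)
        Zs = np.concatenate([z1, -z1])          # antithetic spot shocks
        Zv_indep = np.concatenate([z2, -z2])
        Zv = rho * Zs + np.sqrt(1 - rho**2) * Zv_indep  # correlate vol shocks
        vp = np.maximum(v, 0.0)
        S *= np.exp(-0.5 * vp * dt + np.sqrt(vp * dt) * Zs)
        v += kappa * (theta - vp) * dt + eta * np.sqrt(vp * dt) * Zv
    return S

# Averaging a payoff over the antithetic pairs typically lowers the estimator's
# variance relative to the same number of fully independent paths.
```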
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 9, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9028873443603516, "perplexity_flag": "middle"}
http://terrytao.wordpress.com/tag/permutations/
# What's new

Updates on my research and expository papers, discussion of open problems, and other maths-related topics. By Terence Tao

## The number of cycles in a random permutation

23 November, 2011 in expository, math.CO, math.PR | Tags: bijective proof, cycles, permutations, Stirling numbers of the first kind | by Terence Tao | 15 comments

Let ${n}$ be a natural number, and let ${\sigma: \{1,\ldots,n\} \rightarrow \{1,\ldots,n\}}$ be a permutation of ${\{1,\ldots,n\}}$, drawn uniformly at random. Using the cycle decomposition, one can view ${\sigma}$ as the disjoint union of cycles of varying lengths (from ${1}$ to ${n}$). For each ${1 \leq k \leq n}$, let ${C_k}$ denote the number of cycles of ${\sigma}$ of length ${k}$; thus the ${C_k}$ are natural number-valued random variables with the constraint

$\displaystyle \sum_{k=1}^n k C_k = n. \ \ \ \ \ (1)$

We let ${C := \sum_{k=1}^n C_k}$ be the number of cycles (of arbitrary length); this is another natural number-valued random variable, of size at most ${n}$.

I recently had need to understand the distribution of the random variables ${C_k}$ and ${C}$. As it turns out this is an extremely classical subject, but as an exercise I worked out what I needed using a quite tedious computation involving generating functions that I will not reproduce here. But the resulting identities I got were so nice that they strongly suggested the existence of elementary bijective (or "double counting") proofs, in which the identities are proven with a minimum of computation, by interpreting each side of the identity as the cardinality (or probability) of the same quantity (or event), viewed in two different ways. I then found these bijective proofs, which I found to be rather cute; again, these are all extremely classical (closely related, for instance, to Stirling numbers of the first kind), but I thought some readers might be interested in trying to find these proofs themselves as an exercise (and I also wanted a place to write the identities down so I could retrieve them later), so I have listed the identities I found below.

1. For any ${1 \leq k \leq n}$, one has ${{\bf E} C_k = \frac{1}{k}}$. In particular, ${{\bf E} C = 1 + \frac{1}{2} + \ldots + \frac{1}{n} = \log n + O(1)}$.
2. More generally, for any ${1 \leq k \leq n}$ and ${j \geq 1}$ with ${jk \leq n}$, one has ${{\bf E} \binom{C_k}{j} = \frac{1}{k^j j!}}$.
3. More generally still, for any ${1 \leq k_1 < \ldots < k_r \leq n}$ and ${j_1,\ldots,j_r \geq 1}$ with ${\sum_{i=1}^r j_i k_i \leq n}$, one has
$\displaystyle {\bf E} \prod_{i=1}^r \binom{C_{k_i}}{j_i} = \prod_{i=1}^r \frac{1}{k_i^{j_i} j_i!}.$
4. In particular, we have Cauchy's formula: if ${\sum_{k=1}^n j_k k = n}$, then the probability that ${C_k = j_k}$ for all ${k=1,\ldots,n}$ is precisely ${\prod_{k=1}^n \frac{1}{k^{j_k} j_k!}}$. (This in particular leads to a reasonably tractable formula for the joint generating function of the ${C_k}$, which is what I initially used to compute everything that I needed, before finding the slicker bijective proofs.)
5. For fixed ${k}$, ${C_k}$ converges in distribution as ${n \rightarrow \infty}$ to the Poisson distribution of intensity ${\frac{1}{k}}$.
6. More generally, for fixed ${1 \leq k_1 < \ldots < k_r}$, ${C_{k_1},\ldots,C_{k_r}}$ converge in joint distribution to ${r}$ independent Poisson distributions of intensity ${\frac{1}{k_1},\ldots,\frac{1}{k_r}}$ respectively.
(A more precise version of this claim can be found in this paper of Arratia and Tavaré.)

7. One has ${{\bf E} 2^C = n+1}$.
8. More generally, one has ${{\bf E} m^C = \binom{n+m-1}{n}}$ for all natural numbers ${m}$.
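As a sample of the bijective style the post advertises (my sketch, for identity 1 only): double-count pairs ${(\sigma, c)}$ where ${c}$ is a ${k}$-cycle of ${\sigma}$. There are ${\binom{n}{k}}$ ways to choose the support of ${c}$, ${(k-1)!}$ ways to arrange it into a cycle, and ${(n-k)!}$ ways to permute the remaining elements, so

$\displaystyle {\bf E} C_k = \frac{1}{n!}\binom{n}{k}(k-1)!\,(n-k)! = \frac{1}{n!}\cdot\frac{n!}{k} = \frac{1}{k},$

and summing over ${k}$ recovers ${{\bf E} C = 1 + \frac{1}{2} + \ldots + \frac{1}{n}}$.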
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 43, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9053693413734436, "perplexity_flag": "middle"}