http://math.stackexchange.com/questions/202760/prove-the-trigonometric-identity?answertab=oldest
# Prove the trigonometric identity $$\cos^2\alpha-\cos^4\alpha+\sin^4\alpha=\frac{1}{2}-\frac{1}{2}\cos2\alpha$$ - Welcome to math.SE: since you are new, I wanted to let you know a few things about the site. In order to get the best possible answers, it is helpful if you say in what context you encountered the problem, and what your thoughts on it are; this will prevent people from telling you things you already know, and help them give their answers at the right level. – Julian Kuelshammer Oct 14 '12 at 13:25 ## 3 Answers Apply the formulas: 1) $1-\cos^2\alpha=\sin^2\alpha$ 2) $\cos2\alpha=\cos^2\alpha-\sin^2\alpha$ 3) $1=\sin^2\alpha+\cos^2\alpha$ Now turn to proving the given identity. $\cos^2\alpha-\cos^4\alpha+\sin^4\alpha=\frac{1}{2}-\frac{1}{2}\cos2\alpha$ $\cos^2\alpha(1-\cos^2\alpha)+\sin^4\alpha=\frac{1}{2}(1-\cos2\alpha)$ $\cos^2\alpha\sin^2\alpha+\sin^4\alpha=\frac{1}{2}(\sin^2\alpha+\cos^2\alpha-\cos^2\alpha+\sin^2\alpha)$ $\sin^2\alpha(\cos^2\alpha+\sin^2\alpha)=\frac{1}{2}\cdot 2\sin^2\alpha$ $\sin^2\alpha=\sin^2\alpha$ - This is badly formatted: One says in effect that if a certain equality holds, then $\sin^2\alpha=\sin^2\alpha$, and concludes that that equality holds. There should be "$=$" between, for example, $\cos^2\alpha-\cos^4\alpha+\sin^4\alpha$ and the thing on the line after it, $\cos^2\alpha(1-\cos^2\alpha)+\sin^4\alpha$, and so on. – Michael Hardy Sep 26 '12 at 13:09 Use the identities $\sin^2\alpha+\cos^2\alpha=1$ and $\cos2\alpha=1-2\sin^2\alpha$. Since $\cos^2\alpha-\cos^4\alpha=\cos^2\alpha(1-\cos^2\alpha)=\cos^2\alpha\cdot\sin^2\alpha$, we get $$\cos^2\alpha-\cos^4\alpha+\sin^4\alpha=\cos^2\alpha\cdot\sin^2\alpha+\sin^4\alpha$$ $$=\sin^2\alpha(\cos^2\alpha+\sin^2\alpha)$$ $$=\sin^2\alpha=\frac{1-\cos2\alpha}{2}$$ - $$\cos^2\alpha-\cos^4\alpha+\sin^4\alpha=\cos^2\alpha+(\sin^4\alpha-\cos^4\alpha)=$$ $$=\cos^2\alpha+(\sin^2\alpha+\cos^2\alpha)(\sin^2\alpha-\cos^2\alpha)=\cos^2\alpha+\sin^2\alpha-\cos^2\alpha=$$ $$=\sin^2\alpha=1/2-1/2\cos2\alpha$$ Over! - Yes, you are right! – Riemann Sep 26 '12 at 11:46 This is a better answer than the "accepted" one. – Michael Hardy Sep 26 '12 at 13:09
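For readers who want a quick machine check of the identity, here is a minimal sketch alongside the thread (not one of the answers), assuming Python with sympy is available:

```python
import sympy as sp

a = sp.symbols('alpha', real=True)

lhs = sp.cos(a)**2 - sp.cos(a)**4 + sp.sin(a)**4
rhs = sp.Rational(1, 2) - sp.Rational(1, 2) * sp.cos(2 * a)

# simplify(lhs - rhs) should reduce to 0, confirming the identity for all alpha
print(sp.simplify(lhs - rhs))  # expect: 0
```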
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 7, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8953561782836914, "perplexity_flag": "middle"}
http://physics.stackexchange.com/users/163/joe-fitzsimons?tab=activity&sort=comments&page=3
# Joe Fitzsimons reputation 1237 bio website jfitzsimons.org location Singapore, Singapore age 31 member for 2 years, 6 months seen 1 hour ago profile views 453 I have just moved to the Center for Quantum Technologies in Singapore, after spending the last 3 years as a Merton College JRF in Theoretical Physics and a Senior Research Fellow in Oxford University Department of Materials. My research focuses largely on theoretical aspects of quantum information processing. In particular I am interested in spin networks, measurement based computation, cryptography and computational complexity.

| | |
|-------------|----------------------|
| reputation | 5,596 |
| badges | 1237 |
| website | jfitzsimons.org |
| location | Singapore, Singapore |
| member for | 2 years, 6 months |
| seen | 1 hour ago |

| # 257 Comments | | | |-------|---------|---------| | Dec7 | comment | Uniqueness of eigenvector representation in a complete set of compatible observables@Moshe: I didn't bother looking at the Physics.SE link before answering, but now you've pointed it out I agree that genetth's answer was perfect. | | Dec6 | comment | Uniqueness of eigenvector representation in a complete set of compatible observablesSecondly, in quantum mechanics observable and Hermitian operator are synonymous. You can construct a physical measurement (in principle at least) for any Hermitian operator, and any physical observable is Hermitian. | | Dec6 | comment | Uniqueness of eigenvector representation in a complete set of compatible observables$\psi_i$ above are a basis for the Hilbert space in which all measurements are diagonal. If the set of measurements is maximal then it necessarily contains $D$ for some specific choice of basis. Since you specify the set of observables by their eigenvectors, you can explicitly construct $D$. | | Dec2 | comment | Examples of number theory showing up in physicsThat's weird, and certainly interesting. I'll give the paper a look. | | Dec2 | comment | Examples of number theory showing up in physicsThanks for taking the time to compose an answer. I don't really consider quantum algorithms as fundamental physics in the sense of this question, particularly given that the hidden subgroup stuff is driven by a generalization of problems from number theory (factoring/discrete logs). The graph state observation seems more related to the fact that you are looking at factoring a Hilbert space, which directly relates to primality of the dimensionality, etc. | | Dec2 | comment | Examples of number theory showing up in physics@Artem: It's true that you can arrive at the above result via finite fields, but the structure of the partial results is governed by number theoretic properties. 
I don't really see the way of arriving at a given result as particularly fundamental, as there are often multiple paths to the result. | | Dec2 | comment | What videos should everyone watch?You may want to expand the question with a bit more explanation as in the cstheory question. I've marked this CW. | | Dec1 | comment | Examples of number theory showing up in physicsThis seems more like engineering a physical system to embody certain number theoretic properties, rather than them occurring unexpectedly. | | Dec1 | comment | What papers should everyone read?@TsuyoshiIto: I must admit I didn't check the claim that it was 4 pages. | | Nov28 | comment | Quantum causal structure@lurscher: Good catch. I'm not really sure where things stand then. I've updated my answer to include the reference in your comment. | | Nov27 | comment | Causality and operationalism: from sets and functions to monads"Of course, states don't exist, only processes do." - That's one hell of a statement. Perhaps we would be better sticking to physics than philosophy. | | Nov25 | comment | Quantum causal structure@Peter: Sorry Peter, I was referring to the original question, not your comment. I don't dispute that the quantum gravity case is open, but we know so little about that area that it is hardly surprising that it is open. I know some of the results the question refers to, but can't make much sense of what the poster has in mind. I'll cast a virtual vote to close, but since my vote is binding, I won't actually close the question yet. I'd like to give the OP time to actually explain what they mean. If nothing happens in a few days, I'll kill the question. | | Nov25 | comment | Quantum causal structureI don't understand what you mean by quantum causal structure. Quantum mechanics is non-signalling so causal structure is the same as in the classical case. | | Nov25 | comment | Hilbert-Schmidt basis for many qubits - reference+1 from me. I use it a lot too, but couldn't think of anything interesting to say! | | Nov24 | comment | What papers should everyone read?Four pages is hardly unusual in physics. | | Nov21 | comment | CHSH violation and entanglement of quantum states@PiotrMigdal: Perhaps you should post that as an answer. | | Nov16 | comment | Accurate quantum state estimation via “Keeping the experimentalist honest”@ChrisFerrie: Sorry, I meant if the measurements can depend on $\sigma$ not $\rho$. I've edited my above comments to reflect this. | | Nov15 | comment | Accurate quantum state estimation via “Keeping the experimentalist honest”And hence you need only measure in the $X$ basis, even though this does not have sufficiently many linearly independent outcomes to uniquely identify an arbitrary density matrix. | | Nov15 | comment | Accurate quantum state estimation via “Keeping the experimentalist honest”This argument is for the case where the set of measurements is fixed and independent of $\sigma$. If the scheme need only work for certain classes of $\sigma$ then this imposes correlations between entries in the density matrix which reduces the number of linearly independent measurement operators required to uniquely identify it. An example of this is where $\sigma = |+\rangle\langle +|$, where purity implies that the state has expectation value 0 for $Z$ and $Y$ measurements, and if $\mbox{Tr}(\rho X) = \mbox{Tr}(\sigma X)$ then $\mbox{Tr}(\rho Y) = \mbox{Tr}(\sigma Y)$, etc. 
| | Nov15 | comment | Accurate quantum state estimation via “Keeping the experimentalist honest”Also, if you take any complete or over-complete basis for tomography and make the measurements arbitrarily weak, you still satisfy the criterion (though Alice's expected loss tends towards zero as the measurement tends towards the identity). |
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9313480854034424, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/9719?sort=newest
## What is the Zariski closure of the space of semisimple Lie algebras? Given Leonid Positselski's excellent answer and comments to this question, I expect that the present one is a hard question. Recall that the Lie algebra structures on a (finite-dimensional over $\mathbb C$, say) vector space $V$ are the maps $\Gamma: V^{\otimes 2} \to V$ satisfying $\Gamma^k_{ij} = -\Gamma^k_{ji}$ and $\Gamma^m_{il}\Gamma^l_{jk} + \Gamma^m_{jl}\Gamma^l_{ki} + \Gamma^m_{kl}\Gamma^l_{ij} = 0$; thus the space of Lie algebra structures is an algebraic variety in $(V^*)^{\otimes 2} \otimes V$. A Lie algebra structure $\Gamma$ is semisimple if the bilinear pairing $\beta_{ij} = \Gamma^m_{il}\Gamma^l_{jm}$ is nondegenerate; thus the semisimples are a Zariski-open subset of the space of all Lie algebra structures. Because the Cartan classification of isomorphism classes of semisimples is discrete (no continuous families), connected components of the space of semisimples are always contained within isomorphism classes. The semisimples are not dense among all Lie algebra structures: if $\Gamma$ is semisimple, then $\Gamma_{il}^l = 0$, whereas this is not true for the product of the two-dimensional nonabelian with an abelian. 1. Is there a (computationally useful) characterization of the Zariski closure of the space of semisimple Lie algebra structures? (LP gives more equations that any semisimple satisfies.) 2. Suppose that $\Gamma$ is not semisimple but is in the closure of the semisimples. How can I tell for which isomorphism classes of semisimples $\Gamma$ is in the closure of the isomorphism class? (I.e. $\Gamma$ is a limit of what algebras?) 3. To what extent can I understand the representation theory of algebras in the closure of the semisimples based on understanding their neighboring semisimple algebras? For 3., I could imagine the following situation. There is some natural "blow up" of the closure of the semisimples, where, at the very least, each element of the boundary is a limit of only one isomorphism class in the Cartan classification. Then any representation of the blown-down boundary is some combination of representations of the blown-up parts. - ## 3 Answers This is really a comment to rajamanikkam's answer, but it does not fit in the comment box. What rajamanikkam is describing in the last paragraph is essentially a Lie algebra contraction of $L$. It is well known that this is how one obtains Lie algebras which are close in some sense to the original Lie algebra. (I'm afraid that I do not speak the right language, so I am not sure whether this is in the Zariski closure.) One way to define a contraction of a given Lie algebra $L$, say complex of dimension $n$, is to consider a continuous curve $A(t)$ in $\mathfrak{gl}_n(\mathbb{C})$ which for $t$ in some interval, say $[1,\infty)$, lies in $\mathrm{GL}_n(\mathbb{C})$. For $t$ in that interval the Lie algebras $L(t)$ related to $L$ via $A(t)$ will be isomorphic to $L$, but if `$\lim_{t\to\infty} L(t)$` exists -- which is by no means the case for all curves $A(t)$ -- then it will give rise to a Lie algebra which may or may not be isomorphic to $L$. I'm not sure if contractions are sufficient to generate the full closure, or indeed whether this is the sort of closure that the original question intended. - Ah right, thanks! 
I am fairly sure that they do not generate the full closure - from the constructions I tried above, these seem to be "smaller" homomorphic images of the Lie algebra $L$ in part, combined with some "other stuff" to account for the rest of the dimension; for simple Lie algebras homomorphic images would be trivial so it's entirely "other stuff", and I'm not sure what restrictions there would be on this "other stuff". I'm sure someone with more experience can help. – Vinoth Dec 25 2009 at 6:46 A (hopefully helpful) comment. I've thought about the problem of finding the closure of one isomorphism class - I haven't got an answer but I had an idea that I hope is helpful towards a solution. Consider the closure of the set S of Lie algebras isomorphic to a fixed semisimple Lie algebra $L$ of dimension $n$, and fix a basis of $L$ which gives you the structure constants. Then there is a surjection from invertible matrices $GL_n(\mathbb{C})$ to $S$, namely by acting on a fixed standard basis of $V$ with $x \in GL_{n}(\mathbb{C})$ to give you another basis, and now you force this other basis to have the properties of the basis of $L$ fixed above, i.e. the structure constants - then trace this back to get the values of $\Gamma^k_{ij}$ defining this particular Lie algebra. To be precise with the above, I think it is best described as a transitive action of the algebraic group $GL_{n}(\mathbb{C})$ on the variety $S$. I think there are some matrices which act trivially however, and that these matrices correspond to automorphisms of the Lie algebra (which leave the structure constants invariant) – i.e. the point stabilizers correspond to automorphisms of the Lie algebras, so the homogeneous space has the structure of the quotient of $GL_{n}(\mathbb{C})$ by this point stabilizer, which is the Lie group of automorphisms of $L$. I think this could help in getting the closure of a single isomorphism class of Lie algebras (and since there are only finitely many isomorphism classes of semisimple Lie algebras of fixed dimension, should help with that problem too). But I’m not sure how – I tried naively by saying that perhaps this closure consists of the union of isomorphism classes which you get, in an intuitive sense, by replacing the invertible matrix $x$, by allowing singular matrices as well; but what I get from that seems to be some rubbish so I’m sure that path is mistaken. - As a warmup to this question you might want to think about the closure of semisimple associative algebras inside all finite dimensional associative algebras of a given dimension. As a warmup to that question you might want to think about the commutative case. As a warmup to that question you might look at a beautiful paper by Bjorn Poonen (in particular, Section 6 and Remark 6.11). - I'll definitely look at the Poonen paper. I had intended to ask my question dimension-by-dimension; in the "follow-down" question (or whatever it's called that this is the follow-up to) I was more explicit and picked a finite-dimensional vector space over $\mathbb{C}$ and asked for Lie algebra structures on it. – Theo Johnson-Freyd Dec 25 2009 at 6:59 Oh, I reread what you wrote: when you say (semisimple) algebra, you mean (semisimple) associative algebra, as opposed to (semisimple) Lie algebra. – Theo Johnson-Freyd Dec 25 2009 at 7:02 Good point, fixed. – Noah Snyder Dec 25 2009 at 17:59
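As a supplement to the thread (not from it), the semisimplicity test used in the question — nondegeneracy of the Killing form $\beta_{ij} = \Gamma^m_{il}\Gamma^l_{jm}$ — is easy to check numerically for a concrete example. A minimal sketch, assuming Python with NumPy, using the structure constants of $\mathfrak{sl}_2(\mathbb C)$ in the basis $(e, f, h)$:

```python
import numpy as np

# Structure constants Gamma[i, j, k] for sl_2(C) in the basis (e, f, h),
# defined by [x_i, x_j] = sum_k Gamma[i, j, k] x_k.
G = np.zeros((3, 3, 3))
G[0, 1, 2], G[1, 0, 2] = 1, -1    # [e, f] = h
G[2, 0, 0], G[0, 2, 0] = 2, -2    # [h, e] = 2e
G[2, 1, 1], G[1, 2, 1] = -2, 2    # [h, f] = -2f

# Jacobi identity: Gamma^m_il Gamma^l_jk + cyclic permutations = 0
jacobi = (np.einsum('ilm,jkl->ijkm', G, G)
          + np.einsum('jlm,kil->ijkm', G, G)
          + np.einsum('klm,ijl->ijkm', G, G))
print(np.allclose(jacobi, 0))       # True: G really defines a Lie algebra

# Killing form beta_ij = Gamma^m_il Gamma^l_jm; semisimple iff det(beta) != 0
beta = np.einsum('ilm,jml->ij', G, G)
print(np.linalg.det(beta))          # -128, nonzero, so sl_2 is semisimple
```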
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 42, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9413444995880127, "perplexity_flag": "head"}
http://mathforum.org/mathimages/index.php?title=Change_of_Coordinate_Systems&diff=10209&oldid=4290
Change of Coordinate Systems

From Math Images
Current revision

Change of Coordinates (Field: Calculus; Image Created By: Brendan John)

The same object, here a disk, can look completely different depending on which coordinate system is used.

Basic Description

It is a common practice in mathematics to use different coordinate systems to solve different problems. An example of a switch between coordinate systems follows: suppose we take a set of points in regular x-y Cartesian Coordinates, represented by ordered pairs such as (1,2), then multiply their x-components by two, meaning (1,2) in the old coordinates is matched with (2,2) in the new coordinates. This transformation is shown below in the following two images. In the image on the left we have a square with four points marked: (0.2,0.7), (0.3,0.9), (0.4,0.3), and (0.9,0.3). The image on the right has undergone the transformation: instead of having a square, we now have a rectangle with the points (0.4,0.7), (0.6,0.9), (0.8,0.3), and (1.8,0.3). We can see that in all cases, y dimensions and coordinates remain the same, while all x coordinates and dimensions are doubled.

Under this transformation, a set of points would be stretched out in the horizontal x-direction since each point becomes further from the vertical y-axis (except for points originally on the y-axis, which remain on the axis). We can also see that a set of points that was originally contained in a circle in the old coordinates would be contained by a stretched-out ellipse in the new coordinate system, as shown in the top two figures of this page's main image. Many other such transformations exist and are useful throughout mathematics, such as mapping the points in a disk to a rectangle.

A More Mathematical Explanation

Some of these mappings can be neatly represented by vectors and matrices, in the form $A\vec{x}=\vec{x'}$ where $\vec{x}$ is the coordinate vector of our point in the original coordinate system and $\vec{x'}$ is the coordinate vector of our point in the new coordinate system. For example, the transformation in the basic description, doubling the value of the x-coordinate, is represented in this notation by $\begin{bmatrix} 2 & 0 \\ 0 & 1 \\ \end{bmatrix}\vec{x} = \vec{x'}$ as can be easily verified.
In the main image of the page, the ellipse that is tilted relative to the coordinate axes is created by a combination of rotation and stretching, represented by the matrix $\begin{bmatrix} 2\cos(\theta) & -\sin(\theta)\\ 2\sin(\theta) & \cos(\theta) \\ \end{bmatrix}\vec{x} = \vec{x'}$

Some very useful mappings cannot be represented in matrix form, such as mapping points from Cartesian Coordinates to Polar Coordinates. Such a mapping, as shown in this page's main image, can map a disk to a rectangle. We can think of the disk as a series of rings wrapped around the origin, and the rectangle as a series of lines. Each of these rings is a different distance from the origin, and gets mapped to a different line within the rectangle. Each origin-centered ring in the disk consists of points at constant distance from the origin and angles ranging from 0 to $2\pi$. These points create a vertical line in Polar Coordinates. Each ring at a different distance from the origin creates its own line in the polar system, and the collection of these lines creates a rectangle. The transformation from Cartesian coordinates to Polar Coordinates can be represented algebraically by $\begin{bmatrix} r\\ \theta\\ \end{bmatrix} = \begin{bmatrix} \sqrt{x^2 + y^2}\\ \arctan(y/x)\\ \end{bmatrix}$

Three-Dimensional Coordinates

In 3 dimensions, similar coordinate systems and transformations between them exist. Three common systems are rectangular, cylindrical and spherical coordinates:

• Rectangular Coordinates use standard $x,y,z$ coordinates, where the three coordinates represent left-right position, up-down position, and forward-backward position, respectively. These three directions are mutually perpendicular.
• Cylindrical Coordinates use $r,\theta,z$, where $r, \theta$ are the same as two-dimensional polar coordinates and z is distance from the x-y plane as shown on the right.
• Spherical Coordinates use $\rho, \theta, \phi$, where $\rho$ is the distance from the origin, $\theta$ is rotation from the positive x-axis as in polar coordinates, and $\phi$ is rotation from the positive z-axis. Note that this standard varies from discipline to discipline. For example, the standard in physics is to switch the $\phi$ and $\theta$ labeling. Always be aware of what standard you should be using given a particular textbook or course. The mathematics standard noted above and shown in the image on the left is used for this page.

Converting between these coordinates

The conversion from rectangular (Cartesian) coordinates to cylindrical coordinates is almost identical to the conversion between Cartesian coordinates and polar coordinates.
$r= \sqrt{x^2+y^2}$
$\theta=\tan^{-1}\left(y/x\right)$
$z=z$

For example, we can calculate the cylindrical coordinates for the point given by (1,2,3) in Cartesian coordinates.
$r=\sqrt{x^2+y^2}=\sqrt{1^2+2^2}=\sqrt{5}$
$\theta =\tan^{-1}\left(y/x\right)=\tan^{-1}\left(2/1\right)\approx 1.1$ radians
$z=3$
So we have the point $(\sqrt{5}, 1.1, 3)$ in cylindrical coordinates.

The conversion from cylindrical coordinates to Cartesian coordinates is given by
$x=r \cos \theta$
$y=r \sin \theta$
$z=z$.

Now we can convert the point $(\sqrt{5}, 1.1, 3)$ in cylindrical coordinates back to Cartesian coordinates.
$x=\sqrt{5} \cos (1.1)=\sqrt{5}(0.454)\approx 1$
$y= \sqrt{5} \sin (1.1)=\sqrt{5}(0.891)\approx 2$
$z=3$.
We see that we do indeed get back the point (1,2,3).
The approximately equal to signs are due to rounding involved in dealing with the square root of five and sine and cosine.

In order to go from Cartesian to spherical coordinates, we have
$\rho=\sqrt{x^2+y^2+z^2}$
$\tan\phi=\frac{\sqrt{x^2+y^2}}{z}$
$\tan \theta = y/x$

Starting out with the point (2,1,2) in Cartesian coordinates, we find that
$\rho=\sqrt{x^2+y^2+z^2}=\sqrt{2^2+1^2+2^2}=\sqrt{4+1+4}=\sqrt{9}=3$
$\tan\phi=\frac{\sqrt{x^2+y^2}}{z}=\frac{\sqrt{2^2+1^2}}{2}=\frac{\sqrt{5}}{2}$
$\rightarrow \phi=\tan^{-1}\left(\sqrt{5}/2\right)\approx 0.841$ radians
$\tan \theta = y/x=1/2$
$\rightarrow \theta=\tan^{-1}(1/2)\approx 0.464$ radians
So we have the point $(\rho, \theta, \phi) = (3, 0.464, 0.841)$ in spherical coordinates.

The transformation from spherical coordinates to Cartesian coordinates is given by
$x=\rho \sin \phi \cos \theta$
$y=\rho \sin \phi \sin \theta$
$z= \rho \cos \phi$.

We can convert our previous result back to Cartesian coordinates:
$x=\rho \sin \phi \cos \theta=3\sin(0.841)\cos(0.464)\approx 2$
$y=\rho \sin \phi \sin \theta =3\sin(0.841)\sin(0.464)\approx 1$
$z= \rho \cos \phi =3 \cos(0.841)\approx 2$
Again, we retrieve our original ordered triple (2,1,2) by rounding.

We can also write the transformation from cylindrical coordinates to spherical coordinates:
$\rho=\sqrt{r^2+z^2}$
$\tan \phi = r/z$
$\theta = \theta$.

Beginning with the point $(3, \pi/3, 4)$ in cylindrical coordinates, we see that
$\rho=\sqrt{r^2+z^2}=\sqrt{3^2+4^2}=\sqrt{25}=5$
$\tan \phi = r/z=3/4$
$\rightarrow \phi=\tan^{-1}(0.75)\approx 0.644$
$\theta = \pi/3$.
So the point in spherical coordinates is $(5, \pi/3, 0.644)$.

Finally, the transformation from spherical to cylindrical coordinates is given by
$r=\rho \sin \phi$
$\theta =\theta$
$z=\rho \cos \phi$.

For one last example, we will take our previous result and transform it back to cylindrical coordinates
$r=\rho \sin \phi=5 \sin (0.644)\approx 3$
$\theta =\pi/3$
$z=\rho \cos \phi=5 \cos (0.644)\approx 4$
Therefore the point in cylindrical coordinates is $(3, \pi/3, 4)$ as expected.

Interactive Demonstration: Change of Coordinate Systems Applet

Future Ideas for this Page
• add examples of transformations between three dimensional coordinate systems.
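The conversion formulas above are straightforward to check numerically. The following is a minimal illustrative sketch (not part of the original page), assuming Python's standard math module; it reproduces the (1,2,3) cylindrical example and the (2,1,2) spherical example.

```python
import math

def to_cylindrical(x, y, z):
    # r = sqrt(x^2 + y^2), theta = atan2(y, x), z unchanged
    return math.hypot(x, y), math.atan2(y, x), z

def to_spherical(x, y, z):
    # rho = distance from the origin, theta as in polar coordinates,
    # phi measured from the positive z-axis (math convention used on this page)
    rho = math.sqrt(x*x + y*y + z*z)
    theta = math.atan2(y, x)
    phi = math.atan2(math.hypot(x, y), z)
    return rho, theta, phi

print(to_cylindrical(1, 2, 3))   # (2.236..., 1.107..., 3) ~ (sqrt(5), 1.1, 3)
print(to_spherical(2, 1, 2))     # (3.0, 0.4636..., 0.8410...)
```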
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 57, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8066497445106506, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/143840/issues-with-text-problems?answertab=active
# Issues with text problems When I tutor, I often see people who kind of know the stuff they cover in school at the moment and succeed at straight problems like: Find the derivative of $f(x) = \frac 12 x^2$ But when it comes to text problems, they struggle to get started with the problem at all. A hill can be modeled with a parabola like $f(x) = \frac 12 x^2$. Somebody wants to walk up that hill but cannot cope with slopes of more than 10%. Up to which point can that person walk? My problem is that I am not really sure what their issue is. When I ask them what the derivative is, they can tell me. If I ask what the tangent is, they also know that. But they somehow cannot solve this problem. When I solve such problems I start with "What do I know?", then "What does that mean in mathematical definitions?" and then try to think what the result has to look like. The steps in between are then easily filled in. How can I teach people this general, diffuse "art of problem solving"? - 6 Maybe their difficulties begin with "A hill can be modeled with a parabola like $f(x)=\frac{1}{2}x^2$" – Julian Kuelshammer? No — Salech Alhasov May 11 '12 at 11:40 I think this question is extremely difficult to answer. Personally, I don't believe that there is a "one size fits all" approach to problem solving which will work for everyone. Your general method of: (1) what do I have? (2) what do I want? (3) how do I turn what I have into what I want? is as good as one can do, in general (in my opinion). The problem in this particular question seems to be that the student is unsure of what the question is actually asking them to calculate. – Sam Jones May 11 '12 at 11:43 That is just an example question that I made up to illustrate the problem. The „not sure what is asked“ seems to be the biggest issue, but I do not have an idea how to help them understand questions better. – queueoverflow May 11 '12 at 11:45 2 Queueoverflow, I've upvoted this question because I find myself in the same situation. I've worked as a tutor for five years and the only conclusion I've come to is that I fail to interest the students in the subject and their brains won't start because of that. When they can't start doing an exercise, I'll often give them hints. They'll get it finally but when confronted with another similar problem, they'll have the same problems. – user23211 May 11 '12 at 11:50 1 A 10% slope means that when you go 100 meters (seen from above), you will gain 10 meters (10% of 100 meters) in height. That is a slope of 0.1. So you want to find the $x$ where $f'(x) = 0.1$. – queueoverflow May 11 '12 at 16:07 ## 5 Answers For this particular case: We may understand that the definition of a 10% slope is mathematically obvious. But, by couching it in these natural English language terms, it could give the impression to students that they require outside knowledge (whether or not they actually do need outside knowledge). Students may be aware that "10% slope" has a particular meaning in English, having seen it on road signs, but probably have not directly associated that to a mathematical meaning. This could be one source of their confusion. When I see 10% used in natural English, I wouldn't immediately think about how that converts to a decimal, say. More generally: Plenty of words have a slightly different meaning in natural English and mathematical language - including 'slope' itself. Students frequently (and correctly) understand that they have to handle their use of mathematical language in a very careful way. 
By phrasing a question using natural language, this link with precision might be broken, which can be confusing to students. It may not be clear to students how to cross-interpret a question that is partly in mathematical language and partly in natural English. It doesn't help that questions written in natural language are sometimes badly phrased! Students might have developed a fear of these questions because their understanding of the reality of the situation doesn't match their own internal mathematical knowledge. Indeed, there may be a disassociation between mathematical questions and actual reality. Question setters tend to insert mathematics into 'real' situations where they don't belong (or don't apply exactly correctly). In reality, as I understand, it would be unlikely and impractical to model a hill as a parabola. Apart from the apparent sign error, a hill is 3D and a slope can be overcome by tracking a zigzag ascent up diagonally. It is (perhaps) an unnatural and contrived question, phrased in natural language in a context which is unfamiliar to students. This could be a further source of confusion. I think that interpreting (and writing) natural language questions is a difficult skill that has to be learned (and taught). I think all mathematicians come across situations where they have suddenly realised how a particular skill can be used in an unfamiliar or surprising situation - it's clear that this might happen more often to novice students. To answer your question: I think you could help students by teaching this specifically: When I solve such problems I start with "What do I know?", then "What does that mean in mathematical definitions?" Interpreting an English statement is not straightforward, and students would benefit from learning how to do this. Explaining how you do this, yourself, is a good idea! - When I used to tutor Algebra at a community college one of the biggest challenges that students encountered was the abstract thinking needed to solve word problems. Just as you describe, the students would have no trouble with the mechanics of the math but would stumble when they were asked to move between the abstractions present in the mathematics and the concrete realities of the problems. So what to do about it? My approach was to explicitly try to explain the concept of abstraction. That the math is a representation of a reality that we wish to explain. I would do this initially outside the context of math. A typical approach might be: 1. Ask the student to draw a tree. (this usually resulted in a typical lollipop-type tree representation) 2. Ask them to explain why their drawing represents a tree. (During this discussion I am trying to get them to identify the "treeness" that makes a tree a tree - e.g., leaves and a trunk) 3. Once we have identified that a tree can be represented by leaves and a trunk, ask them to simplify and generalize (i.e., abstract) their drawing further (at the end of this we usually wind up with a circle on top of a rectangle that represents an abstract tree) 4. Depending on how well the student is getting it, we may go through a few more examples (e.g., a car, a house, etc...) until it makes sense that reality can be abstracted by identifying essential components. 5. After this I introduce math as a similar method of simplifying and representing the complexity of reality. 
Depending on the student's experience you could build off the tree example to show that the abstract tree could be represented by the diameter of the circle of "leaves" and the height of the trunk. 6. Finally I try to show how by abstracting the reality with math we can use the tools of math to ask and answer questions. The key in my experience is to get them to practice abstract thinking as a skill independent of the mechanics of solving the mathematical problems. - This is a very interesting question. On the one hand, we're told that we need to make maths more interesting by relating it to real life as in your sample question, but on the other, doing so creates the heavy cognitive load that Jim Conant refers to. To try to answer the question rather than just discuss this topic, these guys suggest that showing worked examples can lessen the load and help students to learn how to solve similar problems. And since problem solving is mentioned, there's Polya or John Mason. (I forget the source now and can't track it down, but I recall reading a quote by a teacher of problem solving that his approach was to give his students problems and not tell them the answer!) - 2 I remember, from my own student years, feeling rather bored and irritated by some math problems that were formulated as "real world applications" ... as this "hill that can be modeled by a parabola". It's not that I was allergic to applied math (I'm an engineer), rather that in that context I felt that it was an unhelpful, artificial and somewhat patronizing "motivation". Just noise. – leonbloy May 11 '12 at 17:32 One idea is cognitive load. The student knows enough to establish any particular piece of the problem, but has trouble juggling everything when there are too many components. I think word problems have a higher cognitive load than drill problems because the reader has to read through the whole thing and keep several ideas in their head at once before beginning to synthesize a response. This is related to math anxiety where in a sense a student's own negative perceptions about math and their ability to do it start to drain cognitive energy. I agree with Krishnakanth that this problem has been exacerbated by the educational system in some places, which teaches math as a list of recipes to follow and doesn't emphasize the analytical, critical thinking aspects. - The problem lies within the educational system: we're feeding the students bits and pieces and expecting them to connect the dots... I remember in school I was taught derivatives as just that, derivatives... later on I just found out that a derivative signifies the change in a function based on the input changes... that's just an example... So, I'd suggest our methodology of educating students should change from providing formulas and theories to making them understand the significance of the formulae and the practical use of them... In my opinion, making them realize how it can be used in real life would be the best way to teach... - 1 I think feeding the students bits and pieces and expecting them to connect the dots is a wonderful way of teaching! – user1729 May 11 '12 at 14:39 When I talk to people who are fascinated about some topic, I just need to give them pieces and they are all over it. Like I told another Vim enthusiast about the rectangular selection with `<C-v>`. I only mentioned it and he read in the manual how to use it. But I think this approach does not work if the person is not interested and curious. 
– queueoverflow May 11 '12 at 16:10 Although I understand your point, it's still a great leap to go from "signifies the change in a function based on the input changes" up to "realize how it can be used in real life". How would you make that connection? – Ronald May 11 '12 at 16:53 So both the problem and the solution lie within the education system... It is good to know that students are not required to actively participate. – Joshua Shane Liberman May 11 '12 at 18:11 The problem is that the self-motivated students do not require tutoring. So only the ones that are not interested end up with a tutor :-/ – queueoverflow May 11 '12 at 18:44
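For completeness, the made-up hill problem from the question reduces, once translated, to solving $f'(x) = 0.1$ for $f(x)=\tfrac12 x^2$, giving $x = 0.1$. A minimal sympy sketch of that translation (an added illustration, assuming sympy is available; not taken from any of the answers):

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = sp.Rational(1, 2) * x**2

# "10% slope" translates to f'(x) = 0.1; solve for x
slope = sp.diff(f, x)                                      # = x
walkable_until = sp.solve(sp.Eq(slope, sp.Rational(1, 10)), x)
print(walkable_until)   # [1/10], i.e. the person can walk up to x = 0.1
```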
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9738554358482361, "perplexity_flag": "middle"}
http://en.wikiversity.org/wiki/Units,_significant_figures,_and_standard_procedures
# Units, significant figures, and standard procedures

From Wikiversity

This lesson is a part of the School of Physics

In order to understand physics and work with physical theory accurately, there must first be a consistent understanding of the conventions and procedures of working out quantities and numbers.

## Units

Most quantities that can be measured must be expressed in the correct units. The unit differentiates between whether one is talking about a measure of time, or a measure of distance, or several other fundamental types of observables that have come to light during our study of the universe.

### Standard Prefixes

You already use units in everyday conversation. When you want to purchase something, the price of the item is measured in some combination of monetary units. In the U.S., these units are usually dollars and cents. In this system, a dollar is equivalent to 100 cents, although very few people would pay for an item using hundreds of cents. The American dollar is a simple example of a multiplicative unit. When reading the price of an item that costs more than 99 cents, you will most likely read it in terms of dollars, as that is the most convenient term to use. Seldom is it the case that one reads an item priced at 10 dollars as being "a thousand cents".

Similarly, scientists must use appropriate types of scale when talking about large and small quantities. You would not want to talk about the mass of the Earth in the same unit as the mass of a few subatomic particles, for example, unless making a pointed comparison. One of the technical reasons for this is simple accuracy. It is just not feasible to be as accurate in talking about the mass of the Earth as one would be in measuring the mass of a few salt crystals. While the salt crystals can easily be measured to the nearest gram, the Earth's mass to the nearest gram is changing constantly (due to atmospheric evaporation and bombardment by radiation, particles, and tiny meteorites).

Scientists have developed a standard for referring to large and small quantities by expressing them as multiples of powers of ten. Each power is referred to by appending a prefix to the standard unit. The most common prefixes are listed below:

| Multiple | SI Prefix | SI Abbreviation |
|------------|-----------|-----------------|
| $10^{12}$ | tera | T |
| $10^{9}$ | giga | G |
| $10^{6}$ | mega | M |
| $10^{3}$ | kilo | k |
| $10^{2}$ | hecto | h |
| $10^{1}$ | deka | da |
| $10^{-1}$ | deci | d |
| $10^{-2}$ | centi | c |
| $10^{-3}$ | milli | m |
| $10^{-6}$ | micro | µ |
| $10^{-9}$ | nano | n |
| $10^{-12}$ | pico | p |
| $10^{-15}$ | femto | f |
| $10^{-18}$ | atto | a |

In terms of dollars and cents, we would then call a dollar a hectocent and if we used c as the unit abbreviation for cents, we would write 200 c = 2 hc. We generally don't go around talking about dollars and cents in this manner, but these prefixes are a great way to quantify the myriad units we will encounter in the physical sciences. You may already be familiar with the kilogram as a unit of mass, and from the list above, you know that each kilogram is 1000 grams. You might purchase a few kilograms of cement or add a few grams of salt to a recipe, but you wouldn't want to add a few kilograms of salt to a recipe. This is again just an example of what physicists call order of magnitude.

### Types of Units

The units of length and distance may seem obvious to you from everyday experience, but there are many other important units (e.g., how would you measure the color of an object? We will learn more about color when we study electromagnetism), and there are ways of relating what may seem like completely different types of units to each other in a standard manner, including time and distance. 
### Types of Units

The units of length and distance may seem obvious to you from everyday experience, but there are many other important units (for example, how would you measure the color of an object? We will learn more about color when we study electromagnetism), and there are ways of relating what may seem like completely different types of units to each other in a standard manner, including time and distance. Finding out that some units are really just products or multiples of some familiar unit is part of the allure of discovering new physics.

Units, most often those of the SI standard, signify first and foremost what kind of quantity is being measured. All SI units can themselves be expressed in terms of the seven SI base units below:

* length, measured in metres, expressed as the symbol **m**
* mass, measured in kilogrammes, expressed as the symbol **kg**
* time, measured in seconds, expressed as the symbol **s**
* electric current, measured in amperes, expressed as the symbol **A**
* temperature, measured in kelvin, expressed as the symbol **K**
* amount of substance, measured in moles, expressed as the symbol **mol**
* luminous intensity, measured in candela, expressed as the symbol **cd**

The bold script indicating the symbols above is for emphasis; no special format is required when hand-writing these quantities provided that they are written clearly and in the correct case as shown.

There are many other SI units which have their own symbol, but every one of them is a shorthand for some product of the base units shown above (or equivalently, a product of other SI units which themselves are equivalent to some other product of base units, and so on). SI units are given their own symbol for two reasons: firstly, the product they represent might be long and awkward to write; and secondly, units relevant to important scientific discoveries may be renamed and given a symbol of their own to honour the scientist involved, if the theory which uses them is a particularly significant discovery.

### Writing Units

Many units belonging to quantities we can measure in physics are products of other units. As you know from mathematics, a product can include dividing one term by another, and from the Law of Indices we can also show that this is equivalent to multiplying the numerator by a negative power of the denominator. So, as an example, if we take a measurement of the velocity of an object, we express this in metres per second, which we can write as:

* $\mathrm{m/s}$

Or equivalently as:

* $\mathrm{m\,s^{-1}}$

According to the BIPM (Bureau International des Poids et Mesures), the international authority responsible for standardising systems of measurement, either of these methods is acceptable in scientific literature as long as the form of the unit is clear and unambiguous.
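To make "every SI unit is a product of powers of the base units" concrete, here is a small sketch (Python) that represents a unit as a table of base-unit exponents; the representation and helper names are my own illustration, not SI notation. It shows that $\mathrm{m/s}$ and $\mathrm{m\,s^{-1}}$ are literally the same object:

```python
# Sketch: a unit as a dictionary of base-unit exponents, e.g. m/s -> {"m": 1, "s": -1}.
# Representation and helper names are illustrative only.

def multiply(u, v):
    """Product of two units (add exponents)."""
    out = dict(u)
    for base, power in v.items():
        out[base] = out.get(base, 0) + power
        if out[base] == 0:
            del out[base]
    return out

def invert(u):
    """Reciprocal of a unit (negate exponents)."""
    return {base: -power for base, power in u.items()}

metre = {"m": 1}
second = {"s": 1}

metres_per_second = multiply(metre, invert(second))   # "m / s"
metre_second_minus1 = multiply(metre, {"s": -1})       # "m s^-1"

print(metres_per_second == metre_second_minus1)        # True: same unit, two spellings

# A derived unit such as the newton (kg m s^-2) is just another exponent table:
newton = {"kg": 1, "m": 1, "s": -2}
joule = multiply(newton, metre)                        # N * m
print(joule)                                           # {'kg': 1, 'm': 2, 's': -2}
```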
## Significant figures and Uncertainty

We will frequently be using measured quantities (which will be expressed as some number or collection of numbers) in equations and formulas in order to relate them to quantities we wish to predict the value of. In all of these cases, it will be very important to pay attention to the precision of the numbers that we have actually measured, as it is very easy to get lots of meaningless digits from blindly applying formulas, all the while forgetting that we did not measure the observables with the precision the formulas appear to give.

### Instrument Precision

As an example, when you measure the length of something short with a ruler, you may notice that between the centimeter marks there are ten marks, allowing you to eyeball measurements like 2.64 cm. However, note that the 0.04 cm in the previous measurement is not a precise measure (it is not infallible). The ruler does not give markings down to hundredths of a centimeter; you can really only say, "It looks like the length is 2.64 cm, but my instrument can only guarantee that the length is somewhere close to that reading, between 2.60 and 2.70 cm."

You have no hard evidence to support the 0.04; you only hypothesize that it's somewhere around the 4 hundredths mark. It could actually be near the 3 hundredths mark or near the 5 hundredths mark, if there were such markings, but it definitely would not be around the 6 hundredths mark. All numerical measurements have this sort of error involved, and scientists include the instrumental precision in every measurement by using the notation $2.64 \pm 0.01$ cm, which means the object you measured is between 2.63 and 2.65 centimeters in length. That is definitely a statement you can support using your ruler.

More advanced instruments also have statistical fluctuations, in either the property they are measuring or the process/physical properties of the parts being used to measure the phenomenon, and they all report (either live or in the manual) a statistical uncertainty. Check your scale at home for its uncertainty. While you may expect your scale to tell whether you have sneakers on or not, it may not be able to tell whether you got a haircut today.

Unqualified measurements, those without explicit errors written in, are usually assumed to have a unit error in the most precise digit unless otherwise stated. I.e., if you see a measurement given as 5.4 g, it is usually meant as a shorthand for $5.4 \pm 0.1$ g.

### Significant Figures

As we have seen above, all instruments have some finite precision. Significant figures are a method of keeping track of the order of magnitude of the error involved in a calculation. For example, you might read 3.26 cm off of a ruler. We say that such a measurement has 3 significant digits, which means all 3 digits present were measured on the ruler, implying the last digit, 6, is uncertain by some amount. Suppose you measure something to be 0.05 cm using the same ruler. This measurement has only one significant digit (we didn't measure those two zeroes; they are only placeholders, an artifact of our decimal system), and it is uncertain, making it a very weak measurement (you wouldn't use this type of ruler to measure the thickness of paper).

If you are ever confused about whether the digits in a measurement are significant and the context does not specify, simply write the measurement in scientific notation. The number of digits in the multiple of the power of ten is the number of significant digits. Above, 0.05 cm becomes $5 \times 10^{-2}$ cm, with one significant digit.

#### Addition and Subtraction

For simple calculations, it is important to recognize that you can't get more information from a formula than the information contained in the weakest measurement you put into the formula. I.e., if you measure one object with a normal centimeter ruler and a larger object with a meter ruler, the smallest marking of which is one centimeter, and then add the lengths together, you should not expect to get a length that is precise down to the hundredth of a centimeter. Why not? Because you don't know how many hundredths of centimeters may be present in the length of the larger object. It could be anywhere from none at all to hundreds, which would make your overly precise sum quite nonsensical. This yields the following rule when dealing with addition:

The result of an addition or subtraction of two measurements is only as precise as the least precise of the two measurements.

For example, if we have 3.4 g of water and 5.504 g of salt, we expect to get 8.9 g of salt water when we mix them together, not 8.904 g. We do not know what's going on in the hundredths and thousandths place of the mass of the water, only that it's between 3.3 g and 3.5 g of water that we have. For all we know, we may get 8.813 g of salt water as measured by a more precise instrument. As you can see, the 0.004 g carries no useful information.
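The addition rule can be spelled out in a few lines of code. This is a minimal sketch (Python); the function name and the explicit decimal-place bookkeeping are my own illustration, not part of the lesson:

```python
# Sketch: add two measurements and round to the least precise decimal place.
# Decimal places are given explicitly; names are illustrative.

def add_measurements(x, x_decimals, y, y_decimals):
    """Return the sum rounded to the coarser of the two decimal precisions."""
    decimals = min(x_decimals, y_decimals)
    return round(x + y, decimals)

# 3.4 g of water (1 decimal place) + 5.504 g of salt (3 decimal places)
print(add_measurements(3.4, 1, 5.504, 3))   # -> 8.9, not 8.904
```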
#### Multiplication and Division

For reasons similar to the above, if one is taking the product or quotient of two numbers, the result is always rounded to the precision of the least precise measurement involved. Suppose a car is measured by a radar gun to be travelling at 10 m/s (accurate to the nearest meter per second), and you time the car as taking 10.7 seconds to go between point A and point B at that speed. As we will learn later, we can get the distance travelled by the car by multiplying the speed by the time taken. We do this and get 107 m. Rounding to account for the least precise measurement, which only has 2 significant digits, we get 110 m as the approximate distance between A and B.

In this last value, note that the trailing 0 is only a placeholder and is not significant. Some texts will denote whether the trailing zeroes are significant or not by placing a decimal point after the last zero. This can get rather pedantic, so whenever the trailing zero is significant, we will simply alert the reader to the precision of the measurement, as in the radar gun measurement above.

We round to 110 m in order to give the correct impression of the uncertainty of the result. The value of 107 m has 3 significant digits, misleading one to believe there is only an error in the units place. If one looks at the possible range of distances (from 95.4 m to 118.8 m, using the lowest possible and highest possible values for the underlying measurements, respectively), one sees how inaccurate that would be.

## References

* The NIST Reference on Constants, Units and Uncertainty, http://physics.nist.gov/cuu/Units/units.html, retrieved 21.08.06
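To close the loop on the radar-gun example above, here is a matching sketch for the multiplication rule (Python; the rounding helper and function names are my own illustration), rounding a product to the number of significant figures of the weaker measurement:

```python
# Sketch: multiply two measurements and round to the fewer significant figures.
# Uses string formatting to round to N significant figures; names are illustrative.

def round_sig(x, sig):
    """Round x to `sig` significant figures."""
    return float(f"{x:.{sig}g}")

def multiply_measurements(x, x_sigfigs, y, y_sigfigs):
    return round_sig(x * y, min(x_sigfigs, y_sigfigs))

# 10 m/s (2 significant figures) * 10.7 s (3 significant figures)
print(multiply_measurements(10, 2, 10.7, 3))   # -> 110.0, not 107.0
```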
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9504069089889526, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/12737?sort=newest
## Necessary and sufficient criteria for a surface to cover a surface

Let $S$ and $S'$ be closed [Edit: orientable] surfaces; then it is well known that for $S'$ to cover $S$ it is necessary and sufficient that $\chi(S) \mid \chi(S')$. (Here $\chi$ denotes the Euler characteristic.) However, if $S$ and $S'$ are punctured surfaces then the above condition is necessary but no longer sufficient.

Is the question of determining necessary and sufficient criteria for $S'$ to cover $S$ answered in the research or expository literature? I think that I know such criteria and want to use them in a paper that I'm working on, but (a) I may be deceiving myself and (b) I want to know whether I should write up a proof anew or whether there's a suitable reference. Surely the relevant criteria have been rediscovered many times, but I've never seen them discussed in writing.

Edit: Thanks Pete, I should have demanded that my surfaces be orientable.

-

Are you sure your criterion is correct in the closed case? Let S be the closed, nonorientable surface of Euler characteristic -2 and let S' be the closed orientable surface of Euler characteristic -6. Then $\chi(S')/\chi(S) = 3$, so your divisibility condition is satisfied, but since $3$ is odd, the covering map does not factor through the orientation cover (a degree 2 cover) and therefore passing to an odd degree cover cannot make a nonorientable surface orientable. – Pete L. Clark Jan 23 2010 at 11:22

I had a previous answer which Dmitri Panov pointed out was wrong. Take II (as a comment this time): we start with an arbitrary branched covering of compact Riemann surfaces $S' \rightarrow S$. Then we remove a finite set of points on $S$ together with the complete preimage in $S'$. In order to get an unramified covering we must remove at least the branch locus from $S$; we can also remove any finite number of unbranched points if we wish. I think that every topological unramified cover $S' \rightarrow S$ arises in this way. Now we examine cases using Riemann-Hurwitz... – Pete L. Clark Jan 23 2010 at 12:00

Anyway, I realize I am not answering your actual question: i.e., do I know a reference? Unfortunately no, sorry, although I agree that it must have been worked out dozens of times. – Pete L. Clark Jan 23 2010 at 12:05

For a reference you might check out the literature on dessins d'enfants; I feel like I've seen such criteria discussed in that more combinatorial context. – Tom Church Jan 23 2010 at 18:32

## 2 Answers

In a recent paper by Calegari, Sun, and Wang, the authors cite

W.S. Massey, Finite covering spaces of 2-manifolds with boundary. Duke Math. J. 41 (1974), no. 4, 875-887.

which proves that Dmitri's conditions (1) and (2) are sufficient if $S$ and $S'$ are compact, orientable with non-empty boundary. (Dmitri's condition (3) is redundant, as Massey points out in the second page of his article: writing $\chi(S') = 2 - 2g(S') - p(S')$ and $\chi(S) = 2 - 2g(S) - p(S)$, Dmitri's condition (1) gives $2 - 2g(S') - p(S') = d(2 - 2g(S) - p(S))$, and reducing mod 2 gives that $p(S')$ and $dp(S)$ have the same parity.) But the proof therein is longer than the one that Dmitri gives for the case that he treats (if one takes the Ore result as given).

-

That's a neat trick: have people answer your question, wait a while, then write your own equivalent answer and mark it as "correct". How fair is that?
– Victor Protsak Jun 12 2010 at 5:59

I appreciated Dmitri's answer, but my question was asking for a reference that I could cite, and I finally found one on my own. – Jonah Sinick Jun 13 2010 at 20:46

I don't know the reference for this question, but I am pretty sure that it should follow from some known statement. Anyway, let me give the answer in the case when $S$ has negative Euler characteristic, is orientable, and is CONNECTED. At least you can compare with your own answer. Denote by $p(S)$ the number of punctures.

1) $\chi(S')/\chi(S) = d$ with $d$ a positive integer.

2) $p(S) \le p(S') \le p(S)d$.

3) $p(S)d - p(S')$ should be even.

Moreover, in the case $\mathrm{Genus}(S) = 0$ you have an additional condition $p(S)d - p(S') > d - 2$. This condition assures that $S'$ is connected.

I think that these conditions are necessary and sufficient. It is obvious that 1), 2) are necessary. Condition 3) comes from the fact that the permutation corresponding to going around all punctures on $S$ is a commutator, so it should be even. In order to show that these conditions are sufficient, you could use the old result of Ore which says that every even permutation is a commutator of two permutations.

Oystein Ore. Some remarks on commutators. Proc. Amer. Math. Soc., 2:307–314, 1951.

Let me prove that the conditions are sufficient in the case when the genus of $S$ is two or more.

Sketch of a proof. We want to show that there exists a collection of permutations in $S_d$, $s_1,\dots,s_{2g}, t_1,\dots,t_p$ ($p = p(S)$), that act transitively on $\{1,\dots,d\}$ such that $s_1s_2s_1^{-1}s_2^{-1}\cdots = t_1\cdots t_p$, where $t_1,\dots,t_p$ are given permutations whose product lies in the alternating group $A_d$. Then we choose $s_1$ and $s_2$ in such a way that $s_1s_2s_1^{-1}s_2^{-1} = t_1\cdots t_p$ (Ore's result) and take $s_3 = s_4$ to be a cycle of length $d$, while all other permutations $s_i$ are trivial. Clearly the action on $\{1,\dots,d\}$ is transitive. Now the existence of a cover follows by standard arguments.

If you manage to make the proof very short it is worth putting it in the article, or at least giving a hint. Otherwise, indeed, as Pete said, it would be nice to find a reference.

-

@Dmitri: I think what I said above is consistent with this. – Pete L. Clark Jan 23 2010 at 12:16

Pete, I agree with you, just the answer is slightly more tricky than I thought in the beginning. Hope it is correct though – Dmitri Jan 23 2010 at 12:32

@Dmitri: agreed. The first two conditions are obviously necessary, but I wouldn't have predicted 3). I would say that I changed my mind and that, as a reader, I would prefer to see a reference! – Pete L. Clark Jan 23 2010 at 12:42
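As a quick numerical companion to this thread, here is a small sketch (Python) that simply restates conditions (1)-(3) and the genus-0 extra condition from the answer above; the function and variable names are my own illustration, and the code checks the conditions only, it does not construct a cover:

```python
# Sketch: check the numerical conditions from the answer above for a punctured
# surface S' (genus g2, p2 punctures) to cover S (genus g1, p1 punctures).
# Assumes S is connected, orientable, with negative Euler characteristic.

def euler_char(genus, punctures):
    return 2 - 2 * genus - punctures

def covering_degree(g1, p1, g2, p2):
    """Return the degree d allowed by conditions (1)-(3) and the genus-0
    extra condition, or None if the conditions rule such a cover out."""
    chi1, chi2 = euler_char(g1, p1), euler_char(g2, p2)
    assert chi1 < 0, "the answer assumes chi(S) < 0"
    if chi2 % chi1 != 0:
        return None                       # condition (1) fails
    d = chi2 // chi1
    if d <= 0:
        return None
    if not (p1 <= p2 <= d * p1):
        return None                       # condition (2) fails
    if (d * p1 - p2) % 2 != 0:
        return None                       # condition (3); Massey notes it is redundant
    if g1 == 0 and not (d * p1 - p2 > d - 2):
        return None                       # extra condition when genus(S) = 0
    return d

# Thrice-punctured sphere covered by a 4-times-punctured sphere:
print(covering_degree(0, 3, 0, 4))   # -> 2
# A once-punctured genus-2 surface over the thrice-punctured sphere:
print(covering_degree(0, 3, 2, 1))   # -> None (condition (2) fails: fewer punctures than the base)
```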
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9413589239120483, "perplexity_flag": "head"}
http://mathoverflow.net/questions/48693/smooth-structures-on-the-connected-sum-of-a-manifold-with-an-exotic-sphere
## Smooth structures on the connected sum of a manifold with an exotic sphere

What can we say about the connected sum of a manifold $M^n$ with an exotic sphere? Is it possible that some of them are still diffeomorphic to $M^n$? Is it possible to classify all the exotic smooth structures for a given $M^n$?

-

3 When $n \neq 4$, $\mathbb R^n \# M \simeq \mathbb R^n$ for all homotopy $n$-spheres $M$. The group of homotopy spheres acts on the smooth structures of a given manifold, and some subgroup acts trivially. It's generally an interesting question as to what that subgroup is precisely but it can also be difficult to compute. – Ryan Budney Dec 8 2010 at 23:49

3 These matters are classically studied by surgery theory but I do not know any elementary exposition of your question in the literature. I struggled with these very issues while writing section 3 of arxiv.org/abs/0912.4874 which you may find useful. – Igor Belegradek Dec 9 2010 at 0:55

1 Farrell-Jones [JAMS 1989] showed that for certain closed hyperbolic manifolds taking connected sum with an exotic sphere always changes the smooth structure. On the other hand, they showed in [JDG 1993] that for a non-compact manifold of dimension >4 connected sum with an exotic sphere never changes the smooth structure; I am somewhat confused about their proof but this is what I think is claimed there. – Igor Belegradek Dec 9 2010 at 2:09

3 Google "inertia group" of M: this is the name given in surgery theory to the subgroup of the group of homotopy spheres whose connected sum doesn't change M. – Paul Dec 9 2010 at 3:15

2 Ryan Budney's and Paul's answers lead me to wonder when the group of smooth structures on the sphere acts transitively on the set of smooth structures of $M$. Also, when a group acts transitively on a set abstractly, the set generally has no distinguished points, but this action is not abstract. So I wonder if there are topological criteria that single out certain smooth structures. In particular, do some manifolds possess smooth structures that deserve to be called "standard" because they share some topological property that characterizes the standard smooth structure on the sphere? – David Feldman Dec 9 2010 at 5:27

## 2 Answers

Surgery theory provides a framework for classifying closed higher-dimensional manifolds, but unfortunately, a definitive classification is known only for a very few homotopy types. Here is how surgery attempts to classify smooth structures on a given manifold $M$.

A basic object is a smooth structure set $S(M)$, which is the set of equivalence classes of simple homotopy equivalences $f: N\to M$, where $f_1: N_1\to M$, $f_2: N_2\to M$ are considered equivalent if there is a diffeomorphism $d: N_1\to N_2$ such that $f_2\circ d$ is homotopic to $f_1$. For example, every homeomorphism is a simple homotopy equivalence, so $S(M)$ contains all manifolds homeomorphic to $M$.

The set $S(M)$ fits into the surgery exact sequence. Roughly, to every $f: N\to M$ one associates the so-called normal invariant, which lives in $[M, G/O]$, the homotopy classes of maps from $M$ to the classifying space $G/O$. In a sense, the normal invariant $n(f)$ records tangential data of $f$, but it is more complicated than that; e.g. $n(f)$ need not be trivial even when $f$ is a homeomorphism that preserves the tangent bundles.
If $n(f)$ is trivial, then by exactness $f$ lies in the orbit of the action of the surgery $L$-group. If $M$ is simply-connected, this action is given by connected sums of $f$ with (the identity maps of) homotopy spheres bounding parallelizable manifolds; if a homotopy sphere does not bound a parallelizable manifold, the connected sum may (will?) change the normal invariant. In the non-simply-connected case, understanding $L$-groups and their action may involve heavy algebra; of course, in this case the group of homotopy spheres bounding parallelizable manifolds still acts on $S(M)$, but there is much more stuff in the $L$-group than this.

Even if we are lucky enough to compute $S(M)$, we are not done yet, because $S(M)$ could contain "repetitions", e.g. there could be homeomorphisms $f_1: N\to M$, $f_2: N\to M$ that are different in $S(M)$ even though their domain $N$ is the same smooth manifold. Thus if we really want to have a list of manifolds homeomorphic to $M$, we should not count $f_1$, $f_2$ as different elements. Sorting out these repetitions has a strong homotopy-theoretic flavor, and is notoriously hard.

I mentioned some papers in comments where the above classification scheme was made to work, but again this is quite rare, to my knowledge. For example, even for products of several (maybe even two) spheres or complex projective spaces the classification seems to be unknown.

-

This is more a long comment on a special case of the first two questions than a complete answer, but I am surprised no one else mentioned this. One can, even without a proof or refutation of the smooth 4-dimensional Poincaré conjecture, state something about the case $M^4 = \sharp m S^2 \times S^2$, where $m$ is greater than or equal to a particular positive integer $k$.

Assume there exists an exotic 4-dimensional sphere $\textbf{S}^4$ homeomorphic but not diffeomorphic to a standard $S^4$. Then, by a theorem of Wall, there exists a positive integer $k$ such that $S^4 \sharp k S^2 \times S^2$ is diffeomorphic to $\textbf{S}^4 \sharp k S^2 \times S^2$. Thus, $S^4 \sharp m S^2 \times S^2$ is diffeomorphic to $\textbf{S}^4 \sharp m S^2 \times S^2$ for $m \ge k$. Trivially, $S^4 \sharp m S^2 \times S^2$ is diffeomorphic to $\sharp m S^2 \times S^2$. Thus, $\textbf{S}^4 \sharp m S^2 \times S^2$ is diffeomorphic to $\sharp m S^2 \times S^2$ for $m \ge k$.

-
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 60, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9400609135627747, "perplexity_flag": "head"}
http://crypto.stackexchange.com/questions/3952/is-it-possible-to-obtain-aes-128-key-from-a-known-ciphertext-plaintext-pair/3954
# Is it possible to obtain AES-128 key from a known ciphertext-plaintext pair?

I have a file which was encrypted with AES-128 in ECB mode. I know the format of the original file and know that all files in this format have the same headers. So, I have an encrypted block and the original block. Can I get the encryption key using this information?

-

## 2 Answers

As far as is publicly known, no, you can't. If you could, that would constitute a practical known-plaintext key recovery attack on AES, and the existence of such an attack would mean that AES would be considered totally insecure by modern cryptographic standards.

If you do figure out how to do that, publish it and you'll be famous. (Or, if you'd prefer money over fame, take it to your local/favorite intelligence agency or organized crime syndicate. But note that this option carries some significant risk, since whoever you sold that information to would now have a considerable interest in preventing you from selling it to anyone else.)

One exception to this is the case where the keyspace is (or you suspect it may be) sufficiently small to allow a brute force exhaustive search of it. That could e.g. be the case if the key is derived from a passphrase that may not have been chosen securely (that is, at random from a sufficiently large pool of combinations). In that case (especially if no deliberately slow key derivation function was used to strengthen the key) you could just write some code (or build some hardware) to try all likely passphrases until you find one that gives the correct encryption.

-

Thank you! I'm not a cracker. I am developing a cryptosystem which needs rapid deciphering of arbitrary blocks of the file. So I chose ECB instead of CBC and wanted to make sure that in this case I did not make a hole in my system. – Denis Bezrukov Oct 4 '12 at 11:36

1 @Denis Bezrukov: CBC also allows you to decipher an arbitrary block of the file; compared to ECB, the extra cost is only an XOR with the previous ciphertext block, following decryption. – fgrieu Oct 4 '12 at 12:21

1 – Ilmari Karonen Oct 4 '12 at 14:16

@Denis You are making a hole in your system by using ECB. There are other modes of operation that you can use which provide the ability to decrypt random blocks. Check the disk encryption theory Wikipedia article for a primer on this. – Stephen Touset Oct 4 '12 at 17:43

No, for practical definitions of possible, assuming the key was chosen truly randomly and no side-channel information is available (such as the power-consumption traces of the encrypting device, or the time it took, for many encryptions). The design of AES strives to be such that the best way to find the key from plaintext-ciphertext examples is to try keys among the $2^{128}$ possible keys. As far as we know this goal is reached for all practical purposes (within a small factor like 4, subject to debate, which we can neglect). If we tested ten thousand million keys per second for a year on a million million devices, the odds of hitting the right key would be less than 1 in a thousand million. (I've purposely avoided the ambiguous word "billion".)

-
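To illustrate the "small keyspace" exception concretely, here is a hedged sketch (Python, using the pycryptodome package). The candidate passphrase list and the SHA-256-based key derivation are my own toy assumptions chosen purely for illustration; nothing here bears on recovering a properly random 128-bit key.

```python
# Toy known-plaintext search over a tiny passphrase space (illustration only).
# Assumes keys are derived as the first 16 bytes of SHA-256(passphrase) -- a made-up,
# deliberately weak scheme chosen just to show the idea. Requires pycryptodome.
import hashlib
from Crypto.Cipher import AES

def derive_key(passphrase: str) -> bytes:
    return hashlib.sha256(passphrase.encode()).digest()[:16]

def find_passphrase(candidates, known_plaintext_block, known_ciphertext_block):
    """Try each candidate passphrase; return the one whose derived key maps the
    known 16-byte plaintext block to the known ciphertext block under AES-ECB."""
    for phrase in candidates:
        cipher = AES.new(derive_key(phrase), AES.MODE_ECB)
        if cipher.encrypt(known_plaintext_block) == known_ciphertext_block:
            return phrase
    return None

# Demo: build a "known pair" from a weak passphrase, then recover it.
header = b"FILEFORMAT v1.0 "                # a 16-byte known header block
secret = AES.new(derive_key("hunter2"), AES.MODE_ECB).encrypt(header)
print(find_passphrase(["password", "letmein", "hunter2"], header, secret))  # -> hunter2
```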
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9458363056182861, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/847/law-of-cosines-with-impossible-triangles
Law of cosines with impossible triangles Is there any mathematical significance to the fact that the law of cosines... $$\cos(\textrm{angle between }a\textrm{ and }b) = \frac{a^2 + b^2 - c^2}{2ab}$$ ... for an impossible triangle yields a cosine $< -1$ (when $c > a+b$), or $> 1$ (when $c < \left|a-b\right|$) For example, $a = 3$, $b = 4$, $c = 8$ yields $\cos(\textrm{angle }ab) = -39/24$. Or $a = 3$, $b = 5$, $c = 1$ yields $\cos(\textrm{angle }ab) = 33/30$. Something to do with hyperbolic geometry/cosines? - 3 Answers This is not a directly a matter of hyperbolic geometry but of complex Euclidean geometry. The construction of "impossible" triangles is the same as the construction of square roots of negative numbers, when considering the coordinates the vertices of those triangles must have. If you calculate the coordinates of a triangle with sides 1,3,5 or 3,4,8 you get complex numbers. In ordinary real-coordinate Euclidean geometry this means there is no such triangle. If complex coordinates are permitted, the triangle exists, but not all its points are visible in drawings that only represent the real points. In plane analytic geometry where the Cartesian coordinates are allowed to be complex, concepts of point,line, circle, squared distance, dot-product, and (with suitable definitions) angle and cosine can be interpreted using the same formulas. This semantics extends the visible (real-coordinate) Euclidean geometry to one where any two circles intersect, but possibly at points with complex coordinates. We "see" only the subset of points with real coordinates, but the construction that builds a triangle with given distances between the sides continues to work smoothly, and some formulations of the law of Cosines will continue to hold. There are certainly relations of this picture to hyperbolic geometry. One is that $cos(z)=cosh(iz)$ so you can see the hyperbolic cosine and cosine as the same once complex coordinates are permitted. Another is that the Pythagorean metric on the complex plane, considered as a 4-dimensional real space, is of the form $x^2 + y^2 - w^2 - u^2$, so that the locus of complex points at distance $0$ from the origin contains copies of the hyperboloid model of hyperbolic geometry. But there is no embedding of the hyperbolic plane as a linear subspace of the complex Euclidean plane, so we don't get from this an easier way of thinking about hyperbolic geometry. To help visualize what is going on it is illuminating to calculate the coordinates of a triangle with sides 3,4,8 or other impossible case, and the dot-products of the vectors involved. - Neat... I get it. It's one thing I thought about -- like imaginary numbers. Now, another question... How can the complex plane be considered as a 4-dimensional real space? I thought it was a two-dimensional real space. I can (I think) see how the two dimensional Euclidean plane with complex points could be interpreted as a 4-dimensional real space, with each point (a,b,c,d) in R4 being (a+bi,c+di) in the complex Euclidean plane (call it C2). Or symbolically, R4 = R2xR2 = CxC = C2. This make sense? – David Lewis Aug 6 '10 at 1:03 Yes, by complex Euclidean geometry I mean the geometry with points coordinatized by (z,w) with z = a+bi and w = c+di, which has 2 complex dimensions and thus 4 real dimensions. I guess "geometry" means "concepts invariant under transformations preserving the dot-product". 
The real part of the metric based on extending the usual inner-product to this complexified plane is $a^2 - b^2 + c^2 - d^2$, which has signature (2,2), though maybe it's better to use the Hermitian metric, which would be real and of the usual form $a^2 + b^2 + c^2 + d^2$. – T.. Aug 6 '10 at 1:13

One can prove the triangle inequality in any abstract inner product space, such as a Hilbert space; it is a consequence of the Cauchy-Schwarz inequality $\langle a, b \rangle^2 \le ||a||^2 ||b||^2$. In order for the triangle inequality to fail, the Cauchy-Schwarz inequality has to fail, and in order for Cauchy-Schwarz to fail (which corresponds to the "cosine" no longer being between $1$ and $-1$), some axiom of an abstract inner product space has to be given up.

One choice is to give up positive-definiteness; in other words, lengths of vectors are no longer always non-negative. This leads to geometries like Minkowski spacetime which are relevant to relativity. In Minkowski spacetime, there is a reverse triangle inequality instead.

Edit: I should also mention that the "unit sphere" in Minkowski spacetime is a model for hyperbolic geometry called the hyperboloid model.

-

– Jonathan Fischoff Jul 27 '10 at 23:45

So is it related to geometry on a hyperboloid? Can triangles (triples) impossible in Euclidean space work on a hyperboloid? – David Lewis Jul 27 '10 at 23:54

That depends on what you mean by triangle, and also on what you mean by length. I'm not too familiar with these weird geometries, so hopefully someone else can elaborate on this point. But a geodesic triangle on a hyperboloid is very different from a collection of three points in Minkowski spacetime and the "lengths" (spacetime separation) of the vectors between them. – Qiaochu Yuan Jul 28 '10 at 0:08

1 This is not directly related to Lorentzian geometry which has signature (n,1). The impossible triangles arise from complex geometry which in this case has signature (2,2). The hyperboloid model embeds into the 2-dimensional complex space using the same equation that embeds it into 3-dimensional real space but I see no advantage (or difference) in this. Fundamentally this question is not about hyperbolic geometry. – T.. Aug 4 '10 at 21:54

1 Also, in the hyperbolic plane, the triangle inequality is satisfied. Hyperbolic distance is a metric, and sides of triangles are (in all models of hyperbolic geometry, i.e., under any definition of "line" that might be used) geodesics for this metric. – T.. Aug 4 '10 at 21:57

For some a, b, and c that form a triangle: increasing the length of c increases the measure of angle C, and as m∠C approaches 180°, cos C approaches -1; decreasing the length of c decreases the measure of angle C, and as m∠C approaches 0°, cos C approaches 1. Extending this pattern, it makes sense that if c > a + b, c has gotten bigger past making a triangle with a and b, so cos C < -1, and if c < |a-b|, c has gotten smaller past making a triangle with a and b, so cos C > 1.

In hyperbolic geometry, the definition of lines (and hence triangles) is different and the sum of the measures of the angles in a triangle is less than 180°. There is a Hyperbolic Law of Cosines, but it's not quite the same. I don't think there's a sensible way to relate hyperbolic cosine to the Law of Cosines in Euclidean geometry.

-
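Following up on the first answer's suggestion to compute the coordinates of the "triangle" with sides 3, 4, 8 explicitly, here is a small sketch (Python). The vertex placement is one conventional choice of mine, not the only one; the point is that with complex coordinates the side lengths really do close up under the bilinear (not Hermitian) squared distance.

```python
# Sketch: coordinates of the "triangle" with side lengths a=3, b=4, c=8.
# Place the vertex between sides a and b at the origin, side a along the x-axis;
# the remaining vertex then needs a complex y-coordinate.
import cmath

a, b, c = 3.0, 4.0, 8.0
cosC = (a*a + b*b - c*c) / (2*a*b)          # -39/24, outside [-1, 1]
sinC = cmath.sqrt(1 - cosC*cosC)            # purely imaginary

P = (0, 0)                                   # vertex at the angle C
Q = (a, 0)                                   # along side a
R = (b * cosC, b * sinC)                     # side b, now with a complex coordinate

def squared_dist(u, v):
    # Bilinear (not Hermitian) squared distance: (x1-x2)^2 + (y1-y2)^2
    return (u[0]-v[0])**2 + (u[1]-v[1])**2

print(squared_dist(P, Q))   # 9.0        = 3^2
print(squared_dist(P, R))   # ~(16+0j)   = 4^2, up to rounding
print(squared_dist(Q, R))   # ~(64+0j)   = 8^2, so the "impossible" side lengths are realized
```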
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.916827380657196, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/311172/can-we-say-that-2-random-variables-x-y-are-independent-if-px-mid-y-y-i
# Can we say that 2 random variables $(X, Y)$ are independent if $P(X \mid Y=y)$ is a function without $y$ Discrete Case: Random variable $X, Y$ are independent if $P(X \mid Y=y)$ is a function (which can be deemed as pmf for random variable $X \mid Y$) without $y$. Continuous Case: Random variable $X, Y$ are independent if $f(X \mid Y=y)$ is a function (which can be deemed as pdf for random variable $X \mid Y$) without $y$. It seems to be true and I came across the use of that in many context. However, I cannot find any reference of the formalized theorem about it? Could anyone direct me any reliable reference? Update: Are there any non-Measure reference on such topic? - what is $P(X|Y=y)$, a probability? density function? – Sasha Feb 22 at 13:17 I guess, a random variable, and perhaps its meant that their density functions altogether are independent from $y$. – Berci Feb 22 at 13:56 @Sasha Hopefully I made it clearer after edit. – colinfang Feb 22 at 14:57 ## 1 Answer I suppose you mean the following: Let $(\Omega, \Sigma, P)$ be a probability space and for simplicity assume that $X,Y$ take their values in $(\mathbb{R},\mathcal{B}(\mathbb{R}))$ (any other measurable spaces, even different ones for each variable would be fine). Let $P(\cdot | Y=-):\: \Sigma \times \mathbb{R} \rightarrow [0,1]$ be a measurable function for fixed first variable such that $P(\cdot | Y=-) \circ (\mathrm{id_\Sigma},Y):\: \Sigma \times \Omega \rightarrow [0,1]$ is a regular conditional probability given $Y$. Then $P(X \in \cdot | Y=-):\: \mathcal{B}(\mathbb{R}) \times \mathbb{R} \rightarrow [0,1], P(X \in A | Y=y) := P(\{X \in A\} | Y=y)$ defines the conditional distribution of $X$ given $Y=-$. And your claim is that $X,Y$ are independent iff this map is constant in the second variable. If $X,Y$ are independent, then $$\int_{\{Y \in B\}} {P(X \in A)} \mathrm{d}P =P(X \in A) P(Y \in B) = P(X \in A, Y \in B) = \int_{\{Y \in B\}} {P(\{X \in A\} | \sigma(Y)) \mathrm{d}P}$$ for all $A,B \in \mathcal{B}(\mathbb{R})$, so by definition of the conditional probability, we have for all $A \in \mathcal{B}(\mathbb{R})$: $P(\{X \in A\} | \sigma(Y)) = P(X \in A)$ P-a.s. But then $P(\{X \in A\}|Y=-)$ can be chosen to be constant $=P(X \in A)$. If $P(X \in \cdot | Y=-)$ is constant for fixed first variable, we must have $$P(X \in A | Y=y_0) \cdot P(Y \in B) = \int_B {P(X \in A | Y =y)} \mathrm{d}P_Y(y) = \int_{\{Y \in B\}} {P(X \in A | \sigma(Y))} \mathrm{d}P = P(X\in A, Y \in B)$$ for all $y_0 \in \mathbb{R}$, $A,B \in \mathcal{B}(\mathbb{R})$. But then choosing $B=\mathbb{R}$, we see that $P(X \in A | Y=y_0) = P(X \in A)$, and the independence follows. -
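For the discrete case, the claim is easy to sanity-check numerically. Below is a small sketch (Python with NumPy); the particular pmf values are my own toy example. If the conditional distribution of $X$ given $Y=y$ is the same for every $y$, the joint pmf factorizes into the product of the marginals.

```python
# Toy numeric check of the discrete statement: if P(X=x | Y=y) does not depend on y,
# then the joint pmf factorizes as P(X=x) P(Y=y). The pmf values below are made up.
import numpy as np

cond_X = np.array([0.2, 0.5, 0.3])        # P(X = x | Y = y), the same for every y
p_Y = np.array([0.1, 0.6, 0.3])           # P(Y = y)

joint = np.outer(cond_X, p_Y)             # P(X = x, Y = y) = P(X = x | Y = y) P(Y = y)

p_X = joint.sum(axis=1)                   # marginal of X
p_Y_check = joint.sum(axis=0)             # marginal of Y

# Independence: the joint equals the product of the marginals.
print(np.allclose(joint, np.outer(p_X, p_Y_check)))   # True
```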
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 34, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9397544264793396, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/46526/potential-energy-tends-to-infinity-on-the-n-body-problem
# Potential Energy tends to infinity on the N-Body Problem

I need help to solve this problem related to the N-body problem; I don't understand quite well what I need to define or to express in order to solve it.

We assume a particular solution to the N-body problem, for all $t>0$, and $h>0$, where $h$ is the total energy of the N bodies; show that $U\rightarrow \infty$ as $t\rightarrow \infty$. Does this mean that the distance between a pair of particles goes to infinity? (No.)

In the N-body problem $U$ is given by $U=\sum_{1\leq i< j\leq N}\frac{Gm_{i}m_j}{\left \| q_i-q_j \right \|}$, where $G$ is the gravitational constant. The kinetic energy is $T=\sum_{i=1}^{N}\frac{\left \| p_i \right \|^2}{2m_i}=\frac{1}{2}\sum_{i=1}^{N}m_i{\left \| \dot{q}_i \right \|^2}$. The vector $q_{i}$ is the position vector of the $i$-th particle. So basically $U$ is the sum of all the potential energies between all the $N$ particles.

Also, by the Lagrange-Jacobi formula, with $I$ the moment of inertia and $T$ the kinetic energy, we can write
$$\ddot{I}=2T-U=T+h\quad,$$
where $h$ is a conserved quantity.

I think that if $U\rightarrow \infty$, then $T\rightarrow \infty$ (because $h$ is constant). The problem is that the only way that I see for $U\rightarrow \infty$ is when the distances between the particles $\left \| q_i-q_j \right \| \rightarrow 0$, but that means there will be a collision, and if we have a collision then $t\rightarrow t_1$ and not $\infty$, because a collision happens in a finite amount of time (Sundman's theorem of total collapse). As I said, I don't know what I have to define to show that $U\rightarrow \infty$ as $t\rightarrow \infty$; or maybe I need to define a $q_i(t)$ in such a way that $\left \| q_i-q_j \right \|$ gets very near to zero, but never zero, so that $t$ can go to $\infty$?

Also, what about the question of a pair of particles going to infinity? It is clear that they should not go to $\infty$, because then $U\rightarrow 0$, and we are trying to prove the other case.

-

1 Are you missing the -ve sign in front of the potential? I mean $U=-GMm/r$ so that $U \to -\infty$. Also, the definition of the total energy $h$ should be $h = T+U$ not $T-U$ – hwlau Dec 11 '12 at 5:06

## 1 Answer

From the virial theorem, stationary states are given by $2T=U$. The "particular solution" your teacher is assuming is a gravitational collapse where $U \gt 2T$ and therefore $U\rightarrow \infty$ as $t\rightarrow \infty$. Of course, the interparticle distance goes to zero in a collapse, but this is not a collision: in a collision there is a lower bound on the separation, and after the collision the particles increase their separation. In a collapse there is an asymptotic evolution towards a singularity.

-

Wow, interesting! It blew my mind. Thank you, now everything is clear – JHughes Dec 11 '12 at 20:15

You are welcome, and thanks for the vote. – juanrga Dec 12 '12 at 9:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 40, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9416642785072327, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/13757/how-was-avogadros-number-first-determined/13936
# How was Avogadro's number first determined? I read on Wikipedia how the numerical value of Avogadro's number can be found by doing an experiment, provided you have the numerical value of Faraday's constant, but it seems to me that Faraday's constant could not be known before Avogadro's number ever was because it's the electric charge per mole (how could you measure the charge per mole before you knew you had a mole?). I just want to know the method physically used and the reasoning and calculations done by the first person who found the number 6.02214179(30)×10^23 (or however accurate it first was). Note: I see on the Wikipedia page for Avogadro constant that the numerical value was first obtained by "Johann Josef Loschmidt who, in 1865, estimated the average diameter of the molecules in air by a method that is equivalent to calculating the number of particles in a given volume of gas." but I can't access any of the original sources that are cited. Can somebody explain or give an accessible link so I can read about what exactly Loschmidt did? - "how could you measure the charge per mole before you knew you had a mole?" You certainly do not need to know Avogadro's number to know that you have a mole of something!! If you want 1 mole of hydrogen gas, just measure out 1 gram of it. If you want 1 mole of water, measure out 18 grams of it. Etc. etc. – Steve B Jul 16 '12 at 15:23 ## 5 Answers It was determined from viscosity and Mean Free Path. See here for an explanation - The first estimate of Avogadro's number was made by a monk named Chrysostomus Magnenus in 1646. He burned a grain of incense in an abandoned church and assumed that there was one 'atom' of incense in his nose at soon as he could faintly smell it; He then compared the volume of the cavity of his nose with the volume of the church. In modern language, the result of his experiment was $N_A \ge 10^{22}$ ... quite amazing given the primitive setup. Please remember that the year is 1646; the 'atoms' refer to Demokrit's ancient theory of indivisible units, not to atoms in our modern sense. I have this information from a physical chemistry lecture by Martin Quack at the ETH Zurich. Here are further references (see notes to page 4, in German): http://edoc.bbaw.de/volltexte/2007/477/pdf/23uFBK9ncwM.pdf The first modern estimate was made by Loschmidt in 1865. He compared the mean free path of molecules in the gas phase to their liquid phase. He obtained the mean free path by measuring the viscosity of the gas and assumed that the liquid consists of densely packed spheres. He obtained $N_A \approx 4.7 \times 10^{23}$ compared to the modern value $N_A = 6.022 \times 10^{23}$. - Avogadro's number was estimated at first only to order of magnitude precision, and then over the years by better and better techniques. Ben Franklin investigated thin layers of oil on water, but it was only realized later by Rayleigh that Franklin had made a monolayer: http://en.wikipedia.org/wiki/Langmuir%E2%80%93Blodgett_film If you know it's a monolayer, you can estimate the linear dimensions of a molecule and then get an order of magnitude estimate of Avogadro's number (or something equivalent to it). Some of the early estimates of the sizes and masses of molecules were based on viscosity. E.g., the viscosity of a dilute gas can be derived theoretically, and the theoretical expression depends on the scale of its atoms or molecules. Textbooks and popularizations often present a decades-long experimental program as a single experiment. 
Googling shows that Loschmidt did a whole bunch of different work on gases, including studies of diffusion, deviations from the ideal gas law, and liquified air. He seems to have studied these questions by multiple techniques, but it sounds like he got his best estimate of Avogadro's number from rates of diffusion of gases. It seems obvious to us now that setting the scale of atomic phenomena is an intrinsically interesting thing to do, but it was not always considered mainstream, important science in that era, and it didn't receive the kind of attention you'd expect. Many chemists considered atoms a mathematical model, not real objects. For insight into the science culture's attitudes, take a look at the story of Boltzmann's suicide. But this attitude doesn't seem to have been monolithic, since Loschmidt seems to have built a successful scientific career. - There's also a (maybe small) push to just define Avogadro's number exactly as a fundamental constant, which would, if I understand correctly, also get rid of the problem of Le Grand K as a reference mass. See americanscientist.org/issues/pub/… – Willie Wong Aug 20 '11 at 14:34 – Georg Oct 22 '11 at 10:31 The first undeniably reliable measurements of Avogadro's number came right at the turn of the twentieth century, with Millikan's measurement of the charge of the electron, Planck's blackbody radiation law, and Einstein's theory of Brownian motion. Earlier measurements of Avogadro's number were really only estimates, they depended on the detailed model for atomic forces, and this was unknown. These three methods were the first model-independent ones, in that the answer they got was limited only by the experimental error, not by theoretical errors in the model. When it was observed that these methods gave the same answer three times, the existence of atoms became an established experimental fact. ### Millikan Faraday discovered the law of electrodeposition. When you run a current through a wire suspended in an ionic solution, as the current flows, material will deposit on the cathode and on the anode. what Faraday discovered is that the number of moles of the material is strictly proportional to the total charge that passes from one end to the other. Faraday's constant is the number of moles deposited per unit of charge. This law is not always right, sometimes you get half of the expected number of moles of material deposited. When the electron was discovered in 1899, the explanation of Faraday's effect was obvious--- the ions in solution were missing electrons, and the current flowed from the negative cathode by depositing electrons on the ions in solution, thereby removing them from the solution and depositing them on the electrode. Then Faraday's constant is the charge on the electron times Avogadro's number. The reason that you sometimes get half the expected number of moles is that sometimes the ions are doubly-ionized, they need two electrons to become uncharged. Millikan's experiment found the charge on the electron directly, by measuring the discreteness of the force on a droplet suspended in an electric field. This determined Avogadro's number. ### Planck's blackbody law Following Boltzmann, Planck found the statistical distribution of electromagnetic energy in a cavity using Boltzmann's distribution law: the probability of having energy E was $\exp(-E/kT)$. Planck also introduced Planck's constant to describe the discreteness of the energy of the electromagnetic oscillators. 
Both constants, $k$ and $h$, could be extracted by fitting the known blackbody curves. But Boltzmann's constant times Avogadro's number has a statistical interpretation: it is the "gas constant" $R$ you learn about in high school. So measuring Boltzmann's constant produces a theoretical value for Avogadro's number with no adjustable model parameters.

### Einstein's diffusion law

A macroscopic particle in a solution obeys a statistical law: it diffuses in space so that its average square distance from the starting point grows linearly with time. The coefficient of this linear growth is called the diffusion constant, and it seems hopeless to determine this constant theoretically, because it is determined by innumerable atomic collisions in the liquid.

But Einstein in 1905 discovered a fantastic law: that the diffusion constant can be understood immediately from the amount of friction force per unit velocity. The equation of motion for the Brownian particle is:

$$m\frac{d^2x}{dt^2} + \gamma \frac{dx}{dt} + C\eta(t) = 0$$

where $m$ is the mass, $\gamma$ is the friction force per unit velocity, and $C\eta$ is a random noise that describes the molecular collisions. The random molecular collisions at macroscopic time scales must obey the law that they are independent Gaussian random variables at each time, because they are really the sum of many independent collisions, to which a central limit theorem applies.

Einstein knew that the probability distribution of the velocity of the particle must be the Maxwell-Boltzmann distribution, by general laws of statistical thermodynamics: $p(v) \propto e^{-mv^2/2kT}$. Ensuring that this is unchanged by the molecular noise force determines $C$ in terms of $m$ and $kT$.

Einstein noticed that the $\frac{d^2x}{dt^2}$ term is irrelevant at long times. Ignoring the higher derivative term is called the "Smoluchowski approximation", although it is not really an approximation but a long-time exact description. It is explained here: Cross-field diffusion from Smoluchowski approximation. So the equation of motion for $x$ is $\gamma \frac{dx}{dt} + C\eta = 0$, and this gives the diffusion constant for $x$.

The result is that if you know the macroscopic quantities $m,\gamma,T$, and you measure the diffusion constant to determine $C$, you find Boltzmann's constant $k$, and therefore Avogadro's number. This method required no photon assumption and no electron theory; it was based only on mechanics. The measurements on Brownian motion were carried out by Perrin a few years later, and earned Perrin the Nobel prize.

-

The Avogadro number was discovered by Michael Faraday, but its importance and significance were realized much later by Avogadro while dealing with industrial synthesis and chemical reactions. In those days the chemists weren't aware of the law of equal proportions, which led to wastage of chemicals in industrial synthesis. Faraday passed 96480 C of electricity through hydrogen cations and found that 1 gram of hydrogen was formed. He then reasoned that if 1 electron with a charge of $1.6 \times 10^{-19}$ coulombs gave 1 hydrogen atom, then 96480 C must give $6.023 \times 10^{23}$ atoms of hydrogen. By this research scientists started calculating relative atomic masses of other atoms with respect to hydrogen. Later hydrogen became difficult for experiment, so C-12 was chosen for the determination of relative atomic masses.

-
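To put rough numbers on the Einstein/Perrin route described above, here is a small sketch (Python). The particle size, temperature, viscosity, and "measured" diffusion constant are illustrative values of about the right order of magnitude, chosen by me; they are not Perrin's actual data.

```python
# Sketch: estimating Avogadro's number the Einstein/Perrin way.
# Illustrative numbers only (not Perrin's data): a ~0.5 micron sphere in water.
import math

R_gas = 8.314          # J / (mol K), gas constant (macroscopically measurable)
T = 293.0              # K
eta = 1.0e-3           # Pa s, viscosity of water
radius = 0.5e-6        # m, radius of the suspended sphere

gamma = 6 * math.pi * eta * radius     # Stokes friction per unit velocity
D_measured = 4.3e-13                   # m^2 / s, the "measured" diffusion constant (assumed)

k_B = D_measured * gamma / T           # Einstein relation: D = k_B T / gamma
N_A = R_gas / k_B                      # Avogadro's number = R / k_B

print(f"k_B ~ {k_B:.2e} J/K, N_A ~ {N_A:.2e} per mole")   # N_A ~ 6e23
```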
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9549273252487183, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/86346/list
## Return to Question

3: added 110 characters in body

A graph $G$ is connected if and only if the second-smallest eigenvalue $\lambda_2$ of the Laplacian of $G$ is greater than zero. (See, e.g., the Wikipedia article on algebraic connectivity.) Is there an analogous statement for the eigenvalue $\lambda_2(M)$ of the Laplacian operator $\Delta$ for an $n$-dimensional connected, closed Riemannian manifold $M$? ($\Delta(f) = \nabla^2(f) = −\mathrm{div}(\mathrm{grad}(f))$.)

I am trying to understand the relationship between Laplacians on graphs and Laplacians on Riemannian manifolds. Pointers to help elucidate the connection would be greatly appreciated!

Addendum. See Richard Montgomery's interesting new comment on the Laplacian on the integer lattice.

2: Removed "connected."

A graph $G$ is connected if and only if the second-smallest eigenvalue $\lambda_2$ of the Laplacian of $G$ is greater than zero. (See, e.g., the Wikipedia article on algebraic connectivity.) Is there an analogous statement for the eigenvalue $\lambda_2(M)$ of the Laplacian operator $\Delta$ for an $n$-dimensional connected, closed Riemannian manifold $M$? ($\Delta(f) = \nabla^2(f) = −\mathrm{div}(\mathrm{grad}(f))$.) Of course, $M$ is already connected, so the analog, if it exists, cannot be that naively straightforward.

I am trying to understand the relationship between Laplacians on graphs and Laplacians on Riemannian manifolds. Pointers to help elucidate the connection would be greatly appreciated!

1: # Laplacians on graphs vs. Laplacians on Riemannian manifolds: $\lambda_2$?

A graph $G$ is connected if and only if the second-smallest eigenvalue $\lambda_2$ of the Laplacian of $G$ is greater than zero. (See, e.g., the Wikipedia article on algebraic connectivity.) Is there an analogous statement for the eigenvalue $\lambda_2(M)$ of the Laplacian operator $\Delta$ for an $n$-dimensional connected, closed Riemannian manifold $M$? ($\Delta(f) = \nabla^2(f) = −\mathrm{div}(\mathrm{grad}(f))$.) Of course, $M$ is already connected, so the analog, if it exists, cannot be that naively straightforward.

I am trying to understand the relationship between Laplacians on graphs and Laplacians on Riemannian manifolds. Pointers to help elucidate the connection would be greatly appreciated!
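As a quick numeric illustration of the graph-side statement (a sketch; the two example graphs are my own choices), the second-smallest Laplacian eigenvalue is positive exactly for the connected graph:

```python
# Sketch: second-smallest eigenvalue of the graph Laplacian detects connectivity.
# Example graphs are illustrative.
import numpy as np

def laplacian(adjacency):
    A = np.asarray(adjacency, dtype=float)
    return np.diag(A.sum(axis=1)) - A

# Path graph on 4 vertices (connected)
path = [[0, 1, 0, 0],
        [1, 0, 1, 0],
        [0, 1, 0, 1],
        [0, 0, 1, 0]]

# Two disjoint edges (disconnected)
two_edges = [[0, 1, 0, 0],
             [1, 0, 0, 0],
             [0, 0, 0, 1],
             [0, 0, 1, 0]]

for name, A in [("path", path), ("two disjoint edges", two_edges)]:
    eigs = np.sort(np.linalg.eigvalsh(laplacian(A)))
    print(name, "lambda_2 =", round(eigs[1], 6))
# path: lambda_2 ~ 0.585786 > 0 (connected)
# two disjoint edges: lambda_2 = 0.0 (disconnected)
```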
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 27, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8927294611930847, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/45963/do-black-holes-have-infinite-areas-and-volumes?answertab=active
# Do black holes have infinite areas and volumes? How to calculate the area / volume of a black hole? Is there a corresponding mathematical function such as rotating 1/x around the x-axis or likewise to find the volume? - 1 Presumably the area and volumes are that of and inside the event horizon. – namehere Dec 5 '12 at 3:55 5 How much do you know about integrating over forms? Also, the volume of a black hole is coordinate dependent. – Jerry Schirmer Dec 5 '12 at 4:11 ## 2 Answers The event horizon is a lightlike surface, and so its area is coordinate-invariant. For a Schwarzschild black hole, $$ds^2 = -\left(1-\frac{2m}{r}\right)dt^2 + \left(1-\frac{2m}{r}\right)^{-1}dr^2 + r^2(d\theta^2+\sin^2\theta\,d\phi^2)$$ The horizon surface is at Schwarzschild radial coordinate $r = 2m$, and so at any particular Schwarzschild time ($dt = 0$) has the metric $$dS^2 = (2m)^2(d\theta^2 + \sin^2\theta\,d\phi^2),$$ which is just the metric on a standard 2-sphere of radius $2m$. You can find the square area element explicitly as the determinant of the metric, $dA^2 = (2m)^4\sin^2\theta\,d\theta^2d\phi^2$, and integrating: $$A = 16\pi m^2 = \frac{16\pi G^2}{c^4}m^2.$$ The volume of the black hole is not invariant, as Jerry Schirmer says. If you try to apply the above Schwarzschild coordinates, then since the coefficients of $dt^2$ and $dr^2$ switch signs across the horizon, $t$ is spacelike and $r$ is timelike. Therefore, since the black hole is eternal, it could be said to have infinite volume (classically, but a real astrophysical black hole would have a finite but still extraordinarily high lifetime), as you'll be integrating $dt$ across its lifetime. Technically, the above argument is a bit flawed, because the Schwarzschild coordinate chart is not defined across the event horizon, so one should be more careful how they're continued across the horizon (e.g., with Kruskal-Szekeres coordinates). But this can be made more rigorous. In another coordinate chart, e.g., the Gullstrand-Painlevé coordinates adapted to a family of freely falling observers, $$ds^2 = -\left(1-\frac{2m}{r}\right)dt^2-2\sqrt{\frac{2m}{r}}\,dt\,dr + \underbrace{dr^2 + r^2(d\theta^2 + \sin^2\theta\,d\phi^2)}_{\text{Euclidean in spherical coord.}},$$ at any instant of time ($dt = 0$), space is precisely Euclidean; since the horizon is still $r = 2m$ in these coordinates, "the" volume is $$V_{\text{GP}} = \frac{4}{3}\pi(2m)^3.$$ If you pick yet another coordinate chart, you may get yet a different answer. - A better answer than mine! +1 – John Rennie Dec 5 '12 at 9:11 If you are a Schwarzschild observer the radial co-ordinate $r$ is defined as the circumference of a circle around the black hole divided by $2\pi$. Also the event horizon is at a Schwarzschild radius of $2M$ (in geometrised units), so for a Schwarzschild observer the area of the event horizon is simply $16\pi M^2$. But this is a somewhat trivial answer as in effect I'm just saying "well that's how we define $r$". I get the impression you're looking for a deeper answer than this, but you'll need to expand on your question a bit. Later: I've just noticed Is a black hole's surface area invariant for distant inertial observers? that states the area of the event horizon is an invariant. This means all observers will measure the area to be $16\pi M^2$. -
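As a small numerical aside (not part of either answer): plugging SI values into the area formula $A = \frac{16\pi G^2}{c^4}m^2$ above for a one-solar-mass black hole gives a horizon area of roughly $10^8\ \mathrm{m^2}$. The constants below are rounded example values.

```python
# Evaluate the horizon area A = 16*pi*G^2*M^2/c^4 for a one-solar-mass
# Schwarzschild black hole, using rounded SI constants.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

r_s = 2 * G * M_sun / c**2                      # Schwarzschild radius, ~2.95 km
area = 16 * math.pi * G**2 * M_sun**2 / c**4    # equals 4*pi*r_s^2

print(f"r_s  = {r_s/1000:.2f} km")
print(f"area = {area:.3e} m^2")                 # ~1.1e8 m^2
```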
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9127525091171265, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/174745-need-help-kinematics-question-print.html
# Need help with a kinematics question • March 16th 2011, 03:25 AM Unagi9 Need help with a kinematics question Hi, I have a kinematics question which I can't solve and I need help. Here it is. A particle moves in a straight line passing a fixed point O with a velocity of 3m/s. It moves in such a manner that t seconds after passing O, its velocity is given by v= at²+b. If the particle is again at O after 3 seconds, find its speed at that instant. Find the total distance traveled between t=0 and t=3. This is what I have tried so far: v= at²+b subst. t=0, v=3. 3 = a(0)² + b b=3 ∴ subst. b=3, t=3 into v= at²+b v = a(3)² + 3 v = 9a + 3 ---------------------------------------------------------------------------- I couldn't continue after arriving here, can't get the unknown, a. I only need help here. I understand how to get the answer for the distance part. Thanks (Happy) • March 16th 2011, 04:03 AM Sambit As far as I see, the problem involves 2 unknowns with 1 equation!!(Wondering) • March 16th 2011, 04:08 AM Unagi9 Ya! I've been thinking for a long time to get the other unknown • March 16th 2011, 06:10 AM earboth I don't understand the quoted part of your question: Quote: Originally Posted by Unagi9 ... A particle moves in a straight line passing a fixed point O If the particle is again at O ... In my opinion the particle can pass a certain place several times if it moves on a closed curve (circle, ellipse, ...). But then it doesn't move in a straight line. Or: If the particle moves in a straight line it passes a certain place only once. • March 16th 2011, 07:50 AM skeeter Quote: Originally Posted by Unagi9 A particle moves in a straight line passing a fixed point O with a velocity of 3m/s. It moves in such a manner that t seconds after passing O, its velocity is given by v= at²+b. If the particle is again at O after 3 seconds, find its speed at that instant. Find the total distance traveled between t=0 and t=3. the particle changes direction as it moves in a straight line ... $v(0) = 3 \, m/s$ $v(t) = at^2 + b$ $b = 3$ since the particle returns to point O at $t = 3$ , then its displacement from $t = 0$ to $t = 3$ is zero ... $\displaystyle \int_0^3 at^2 + 3 \, dt = 0$ $\left[\dfrac{at^3}{3} + 3t\right]_0^3 = 0$ $9a + 9 = 0$ $a = -1$ $v(t) = 3 - t^2$ $v(3) = 3 - 9 = -6$ , so its speed at $t = 3$ is $|-6| = 6 \, m/s$ total distance = $\displaystyle \int_0^3 |3-t^2| \, dt = \int_0^{\sqrt{3}} 3 - t^2 \, dt - \int_{\sqrt{3}}^3 3 - t^2 \, dt$ you can finish up by finding the total distance. • March 17th 2011, 12:15 AM Unagi9 Thanks skeeter! (Happy) I didn't know that you could use definite integrals.
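A quick check of skeeter's numbers with sympy (just a verification sketch; the symbolic-algebra tool is my choice, not part of the thread):

```python
# Reproduce skeeter's computation: find a from zero net displacement over
# [0, 3], then the speed at t = 3 and the total distance traveled.
import sympy as sp

t, a = sp.symbols('t a')
v = a * t**2 + 3                       # v(t) = a*t^2 + b with b = 3

# Displacement from t = 0 to t = 3 is zero, since the particle returns to O.
a_val = sp.solve(sp.integrate(v, (t, 0, 3)), a)[0]   # gives a = -1
v = v.subs(a, a_val)                   # v(t) = 3 - t^2

speed_at_3 = abs(v.subs(t, 3))         # |3 - 9| = 6 m/s
# Split the distance integral where v changes sign, at t = sqrt(3).
d = sp.integrate(v, (t, 0, sp.sqrt(3))) - sp.integrate(v, (t, sp.sqrt(3), 3))
print(a_val, speed_at_3, sp.simplify(d))   # -1, 6, 4*sqrt(3) ≈ 6.93 m
```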
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9437366724014282, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/313706/let-g-be-a-bipartite-graph-all-of-whose-vertices-have-the-same-degree-d-show-th
# Let G be a bipartite graph all of whose vertices have the same degree d. Show that there are at least d distinct perfect matchings in G Let G be a bipartite graph all of whose vertices have the same degree d. Show that there are at least d distinct perfect matchings in G. (Two perfect matchings M1 and M2 are distinct if M1 does not equal to M2 as sets.) I am probably not understanding the definition of distinct perfect matching correctly. For example, if we have a regular bipartite graph with d = 2 like the one below: It says M1 and M2 are different if they are different "as sets". Say we are looking at the perfect matching of X, or does it mean sets of edges or sets of vertices? If it's sets of vertices, it can be {Y1, Y2}, {Y2, Y3} and {Y1, Y2, Y3}. If it's sets of edges there are apparently 2 x 2 x 2 = 6 different sets. And besides these, I have no clue how to attack this problem anyway, regardless of what it meant by "sets"... I thought the answer would be d^(d/2) which is way off and absolutely wrong... Help with intuitive explanation and steps are appreciated!!! - 2 The tag graph is intended for questions about graphs of functions, see the tag-wiki and tag-excerpt. (The tag-excerpt is also shown when you are adding a tag to a question.) – Martin Sleziak Feb 25 at 6:52 Thanks for fixing that. – Angus Leo Feb 25 at 6:55 – joriki Feb 25 at 8:14 ## 1 Answer Hint: • It is trivial for $d = 1$. • For $d > 1$, remove edges of any perfect matching to get $(d-1)$-regular bipartite graph with $(d-1)$ distinct perfect matchings. Good luck! - This gives matchings that are not just distinct but disjoint! But you do have to say why there is even one perfect matching. – Gerry Myerson Feb 25 at 12:17 I don't see how you can always "remove edges of any perfect matching to get (d−1)-regular bipartite graph with (d−1) distinct perfect matchings." How? Is it possible that such removal cannot be done? Let's say we start removing one edge from each vertex from the side that has a perfect matching into the other side. What if 2 edges got removed connect with the same vertex on the other side and that vertex cannot maintain a d-1 degree??? – Angus Leo Feb 25 at 12:51 1 – dtldarek Feb 25 at 13:15 @AngusLeo You didn't provide any context for your question, so all I can provide is a hint. BTW, if $\geq 2$ edges connect the same vertex on the other side, would that be a matching? – dtldarek Feb 25 at 13:21 Never mind, I have figured it out. Thanks! :) – Angus Leo Feb 25 at 14:02
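Since the figure referenced in the question is not reproduced here, the sketch below uses a concrete 2-regular bipartite graph of my own choosing (a 6-cycle on X = {x1, x2, x3} and Y = {y1, y2, y3}) and counts its perfect matchings as sets of edges by brute force; for this example there are exactly 2 of them, consistent with the "at least d" bound.

```python
# Brute-force count of perfect matchings (as sets of edges) in a small
# d-regular bipartite graph, to illustrate the bound "at least d".
from itertools import permutations

X = ['x1', 'x2', 'x3']
Y = ['y1', 'y2', 'y3']
edges = {('x1', 'y1'), ('x1', 'y3'),
         ('x2', 'y1'), ('x2', 'y2'),
         ('x3', 'y2'), ('x3', 'y3')}   # every vertex has degree d = 2

matchings = set()
for perm in permutations(Y):           # try every bijection X -> Y
    pairs = tuple(sorted(zip(X, perm)))
    if all(pair in edges for pair in pairs):
        matchings.add(pairs)           # record the matching as a set of edges

for m in matchings:
    print(m)
print(len(matchings), ">= d = 2:", len(matchings) >= 2)
```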
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9271178841590881, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/22480?sort=oldest
## triangulations of torus, general, and Euler number. (Hopefully more interesting/relevant) Hi, everyone: I have been going over some simplicial homology recently, hoping to get some geometric insight that I don't know how to get from the algebraic machinery alone. I have been trying to find the homology of the torus this way, i.e., by triangulating it (i.e., finding a carrier for the torus), but the smallest triangulation I have been able to do has 18 triangles/faces --I checked it works; there are 8 vertices and 26 edges. Still: does anyone know of a simpler triangulation, i.e., one with a smaller total number of triangles (and, of course, fewer vertices and edges, resp.)? I had tried the long shot of solving the very simple equation V-E+F = 0 in positive integers, but this alone does not seem to help. Any ideas? Any ideas for finding minimal triangulations of surfaces, or higher-dimensional manifolds? Thanks. - My apology. I mistakenly, and carelessly, entered triangulated categories as tags. Sorry. – Herb Apr 25 2010 at 3:53 This can be found in an undergraduate textbook titled "Basic Concepts of Algebraic Topology" by Fred Croom. We really, really shouldn't be working standard questions from algebraic topology classes here. And yet, most of the algebraic topology questions are things I assign as exercises in my graduate classes. – Charlie Frohman Apr 26 2010 at 17:36 Well, the question about "finding minimal triangulations of ... higher-dimensional manifolds" is not at all trivial. – John Palmieri Apr 26 2010 at 18:08 1 The pattern is pretty clear. Low points, because they just created an account, name that can't be traced back to an individual. Homework question dressed up to look like a little more. – Charlie Frohman Apr 26 2010 at 18:44 To Charlie Frohman: This is not a homework question. I am computing the actual simplicial homology of spaces to get insights I do not know how to get by using the algebraic machinery alone (e.g., with simplicial homology). As to not stating my name, I have to admit I feel somewhat intimidated in this forum, being a first-year student at a school other than one of the top 10, especially after having read the resumes/CV's of many here. If this is against MO policy, I apologize, and I will drop out. – Herb Apr 28 2010 at 4:11 show 2 more comments ## 2 Answers If you're just looking to glue triangles together along their edges, you can do it with two triangles, glued together to form a square, and then with opposite sides of the square glued to form a torus in the usual way. The resulting mesh has one vertex and three edges. But if the triangles have to form a simplicial complex (meaning that the intersection of any two triangles is empty, a single vertex, or an edge) then I think the smallest mesh for a torus has 14 triangles, connected to each other in the pattern of the Heawood graph. The resulting mesh has seven vertices and 21 edges. It can be embedded into space as the Császár polyhedron. - 3 Herb, if you want to employ David's "triangulation" for computing homology of the torus, take a look at Hatcher's algebraic topology book. He explains how to use Delta complexes in place of simplicial complexes (David's decomposition of the torus into two triangles is a Delta complex).
This gives a theory in which it's easy to find (generalized) triangulations of spaces you may encounter, and also makes the resulting homology computations very clean. – Dan Ramras Apr 25 2010 at 23:22 For the particular case of a simplicial complex structure for a torus, David Eppstein is right: the minimal triangulation has 7 vertices, 21 edges, and 14 triangles. For a sphere, the minimal triangulation has $(v,e,f) = (4, 6, 4)$. For a real projective plane, the minimal triangulation has $(v,e,f) = (6, 15, 10)$. For the general situation of finding minimal triangulations of manifolds, Frank Lutz has written a nice preprint, and he also has some information and other references on The Manifold Page. There are plenty of unsolved problems in this area, it seems... -
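For a quick sanity check of the $(v,e,f) = (7, 21, 14)$ figures, here is a small Python sketch. The explicit triangle list (the $\mathbb{Z}_7$-orbits of {0, 1, 3} and {0, 2, 3}) is an assumption of this sketch, one standard description of a 7-vertex torus, not something taken from the answers above; the code only verifies the counts, the Euler characteristic, and that every edge lies in exactly two triangles.

```python
# Sanity check of (v, e, f) = (7, 21, 14) for a candidate 7-vertex torus:
# the triangles are the Z_7-orbits of {0, 1, 3} and {0, 2, 3} (an assumption
# of this sketch, not a construction taken from the thread).
from itertools import combinations
from collections import Counter

triangles = [frozenset((i + a) % 7 for a in orbit)
             for i in range(7) for orbit in ((0, 1, 3), (0, 2, 3))]

vertices = set().union(*triangles)
edge_count = Counter(frozenset(e) for t in triangles
                     for e in combinations(sorted(t), 2))

V, E, F = len(vertices), len(edge_count), len(triangles)
print(V, E, F, "Euler characteristic:", V - E + F)          # 7 21 14 0
print("every edge in exactly 2 triangles:",
      all(c == 2 for c in edge_count.values()))             # True (closed surface)
```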
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9357728958129883, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/125089/continuity-in-the-extended-complex-plane
# Continuity in the extended complex plane Ahlfors says that for a rational function $R(z) = \frac{P(z)}{Q(z)}$, we define $R(z)$ to be $\infty$ when $Q(z) = 0$. Then he says that $R(z)$ is clearly continuous. To me, $R(z)$ is clearly continuous at the points where $Q(z) \not = 0$. But for continuity, you need $|R(z) - \infty| = \infty < \epsilon$ for $z$ which are close to the zero. So $R(z)$ can't be continuous in the sense that it is continuous at every point. Is Ahlfors being informal here, or am I missing something fundamental? - ## 1 Answer First off, you need the additional assumption that $P(z)$ is nonzero where $Q(z)=0$ to make sense of this statement. Otherwise, all bets are off when you take the limit. The usual $\epsilon$-$\delta$ definition doesn't apply to infinity. Instead, we say that $\lim_{z \to z_0}f(z)=\infty$ if for all $M\in \mathbb{R}$ there exists a $\delta$ so that whenever $|z-z_0|<\delta$, $|f(z)|>M$. It now makes sense to say that $R$ is continuous at $z$ in the sense that the value of the function is equal to the limit as the independent variable approaches $z$. (Notice that in real analysis, we make a distinction between positive and negative $\infty$, but in complex analysis we just work with a single infinite limit point. A good visual for this is the Riemann Sphere, which you will no doubt encounter as you continue reading Ahlfors.) - 1 Note that this is not an ad hoc definition just for infinite limits; it's the application of the standard topological definition of a limit to the standard topology for the real numbers extended by $\pm\infty$ or the complex numbers extended by $\infty$. In addition to the open sets of unextended numbers, that topology has open sets containing $\infty$, namely the complements of compact sets. The above definition says precisely that the function is outside any compact set and thus inside any neighbourhood of $\infty$ for sufficiently small $\delta$. – joriki Mar 27 '12 at 16:35
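As a tiny numerical illustration of the $M$-$\delta$ definition above (my own example, using $R(z) = 1/z$ near its pole at $z_0 = 0$): for any bound $M$, taking $\delta = 1/M$ works, since $|1/z| = 1/|z| > M$ whenever $0 < |z| < 1/M$.

```python
# Numeric illustration of the M-delta definition of an infinite limit,
# using R(z) = 1/z near z0 = 0: for each M, delta = 1/M works.
import cmath

def R(z):
    return 1 / z

for M in (10.0, 1e3, 1e6):
    delta = 1 / M
    # sample a few points with 0 < |z - 0| < delta and check |R(z)| > M
    samples = [0.5 * delta * cmath.exp(1j * k) for k in range(8)]
    print(M, all(abs(R(z)) > M for z in samples))   # True for every M
```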
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 28, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.938349187374115, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/1002/fourier-transform-for-dummies/1054
# Fourier transform for dummies A vague question of Kevin Lin which didn't quite fit at Mathoverflow: So ... what is the Fourier transform? What does it do? Why is it useful (both in math and in engineering, physics, etc)? (Answers at any level of sophistication are welcome.) - 2 – J. M. May 8 '11 at 21:49 When I was learning about FTs for actual work in signal processing, years ago, I found R. W. Hamming's book Digital Filters and Bracewell's The Fourier Transform and Its Applications good intros to the basics. Strang's Intro. to Applied Math. would be a good next step. Do a discrete finite FT by hand of a pure tone signal over a few periods to get a feel for the matched filtering and the relation of constructive and destructive interference to orthogonality. – Tom Copeland May 9 '12 at 6:29 Is there a similar question about Laplace Transforms? – Alyosha May 1 at 19:45 ## 10 Answers The ancient Greeks had a theory that the sun, the moon, and the planets move around the Earth in circles. This was soon shown to be wrong. The problem was that if you watch the planets carefully, sometimes they move backwards in the sky. So Ptolemy came up with a new idea - the planets move around in one big circle, but then move around a little circle at the same time. Think of holding out a long stick and spinning around, and at the same time on the end of the stick there's a wheel that's spinning. The planet moves like a point on the edge of the wheel. Well, once they started watching really closely, they realized that even this didn't work, so they put circles on circles on circles... Eventually, they had a map of the solar system that looked like this: This "epicycles" idea turns out to be a bad theory. One reason it's bad is that we know now that planets orbit in ellipses around the sun. (The ellipses are not perfect because they're perturbed by the influence of other gravitating bodies, and by relativistic effects.) But it's wrong for an even worse reason than that, as illustrated in this wonderful youtube video: In the video, by adding up enough circles, they made a planet trace out Homer Simpson's face. It turns out we can make any orbit at all by adding up enough circles, as long as we get to vary their sizes and speeds. So the epicycle theory of planetary orbits is a bad one not because it's wrong, but because it doesn't say anything at all about orbits. Claiming "planets move around in epicycles" is mathematically equivalent to saying "planets move around in two dimensions". Well, that's not saying nothing, but it's not saying much, either! A simple mathematical way to represent "moving around in a circle" is to say that positions in a plane are represented by complex numbers, so a point moving in the plane is represented by a complex function of time. In that case, moving on a circle with radius $R$ and angular frequency $\omega$ is represented by the position $$z(t) = Re^{i\omega t}$$ If you move around on two circles, one at the end of the other, your position is $$z(t) = R_1e^{i\omega_1 t} + R_2 e^{i\omega_2 t}$$ We can then imagine three, four, or infinitely-many such circles being added. If we allow the circles to have every possible angular frequency, we can now write $$z(t) = \int_{-\infty}^{\infty}R(\omega) e^{i\omega t} \mathrm{d}\omega$$ The function $R(\omega)$ is the Fourier transform of $z(t)$.
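As a quick numerical illustration of the circles picture (an aside, not part of the answer): build a closed path from two circles, sample one period, and recover the radii with a discrete Fourier transform. The normalization used below, dividing numpy's FFT by the number of samples, is a convention chosen for this sketch.

```python
# Build a closed path z(t) from two circles and recover their radii
# from the sampled path with a discrete Fourier transform.
import numpy as np

N = 64
t = np.arange(N) * 2 * np.pi / N                              # one full period
z = 1.0 * np.exp(1j * 1 * t) + 0.25 * np.exp(1j * 3 * t)      # two circles, radii 1 and 0.25

c = np.fft.fft(z) / N          # c[k] is the radius of the circle at frequency k
print(round(abs(c[1]), 3), round(abs(c[3]), 3))               # 1.0  0.25
print(round(float(np.max(np.abs(np.delete(c, [1, 3])))), 6))  # all other radii ~ 0
```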
If you start by tracing any time-dependent path you want through two-dimensions, your path can be perfectly-emulated by infinitely many circles of different frequencies, all added up, and the radii of those circles is the Fourier transform of your path. Caveat: we must allow the circles to have complex radii. This isn't weird, though. It's the same thing as saying the circles have real radii, but they do not all have to start at the same place. At time zero, you can start however far you want around each circle. If your path closes on itself, as it does in the video, the Fourier transform turns out to simplify to a Fourier series. Most frequencies are no longer necessary, and we can write $z(t) = \sum_{k=-\infty}^\infty c_k e^{ik \omega_0 t}$ where $\omega_0$ is the angular frequency associated with the entire thing repeating - the frequency of the slowest circle. The only circles we need are the slowest circle, then one twice as fast as that, then one three times as fast as the slowest one, etc. There are still infinitely-many circles if you want to reproduce a repeating path perfectly, but they are countably-infinite now. If you take the first twenty or so and drop the rest, you should get close to your desired answer. In this way, you can use Fourier analysis to create your own epicycle video of your favorite cartoon character. That's what Fourier analysis says. The questions that remain are how to do it, what it's for, and why it works. I think I will mostly leave those alone. How to do it - how to find $R(\omega)$ given $z(t)$ is found in any introductory treatment, and is fairly intuitive if you understand orthogonality. Why it works is a rather deep question. It's a consequence of the spectral theorem. What it's for has a huge range. It's useful in analyzing the response of linear physical systems to an external input, such as an electrical circuit responding to the signal it picks up with an antenna or a mass on a spring responding to being pushed. It's useful in optics; the interference pattern from light scattering from a diffraction grating is the Fourier transform of the grating, and the image of a source at the focus of a lens is its Fourier transform. It's useful in spectroscopy, and in the analysis of any sort of wave phenomena. It converts between position and momentum representations of a wavefunction in quantum mechanics. Check out this question on physics.stackexchange for more detailed examples. Fourier techniques are useful in signal analysis, image processing, and other digital applications. Finally, they are of course useful mathematically, as many other posts here describe. - 1 – J. M. Dec 17 '11 at 9:39 Thanks! I will check it out. – Mark Eichenlaub Dec 17 '11 at 15:44 3 So. Freaking. Cool. – AndrewG Oct 25 '12 at 23:33 It took me quite a while to understand what exactly is meant by Fourier transform since it can refer to various algorithms, operations and results. Though I'm quite new in this topic, I'll try to give a short but hopefully intuitive overview on what I came up with (feel free to correct me): Let's say you have a function $f(t)$ that maps some time value $t$ to some value $f(t)$. Now we'll try to approximate $f$ as the sum of simple harmonic oscillations, i.e. sine waves of certain frequencies $\omega$. Of course, there are some frequencies that fit well to $f$ and some that approximate it less well. Thus we need some value $\hat{f}(\omega)$ that tells us how much of a given oscillation with frequency $\omega$ is present in the approximation of $f$. 
Take for example the red function from here which is defined as $$f(t) = \sin(t)+0.13\sin(3t)$$ The green oscillation with $\omega=1$ has the biggest impact on the result, so let's say $$\hat{f}(1)=1$$ The blue sine wave ($\omega=3$) has at least some impact, but its amplitude is much smaller. Thus we say $$\hat{f}(3)=0.13$$ Other frequencies may not be present in the approximation at all, thus we would write $$\hat{f}(\omega) = 0$$ for these. Now if we knew $\hat{f}(\omega)$ not only for some but all possible frequencies $\omega$, we could perfectly approximate our function $f$. And that's what the continuous Fourier transform does. It takes some function $f(t)$ of time and returns some other function $\hat{f}(\omega) = \mathcal{F}(f)$, its Fourier transform, that describes how much of any given frequency is present in $f$. It's just another representation of $f$, of equal information but with a completely different domain. Often though, problems can be solved much more easily in this other representation (which is like finding the appropriate coordinate system). But given a Fourier transform, we can integrate over all frequencies, put together the weighted sine waves and get our $f$ again, which we call the inverse Fourier transform $\mathcal{F}^{-1}$. Now why should one want to do that? Most importantly, the Fourier transform has many nice mathematical properties (i.e. convolution is just multiplication). It's often much easier to work with the Fourier transforms than with the function itself. So we transform, have an easy job with filtering, transforming and manipulating sine waves and transform back after all. Let's say we want to do some noise reduction on a digital image. Rather than manipulating a function $\text{image} : \text{Pixel} \to \text{Brightness}$, we transform the whole thing and work with $\mathcal{F}(\text{image}) : \text{Frequency} \to \text{Amplitude}$. Those parts of high frequency that cause the noise can simply be cut off - $\mathcal{F}(\text{image})(\omega) = 0, \omega > ...Hz$. We transform back et voilà. - Let me partially steal from the accepted answer on MO, and illustrate it with examples I understand: The Fourier transform is a different representation that makes convolutions easy. Or, to quote directly from there: "the Fourier transform is a unitary change of basis for functions (or distributions) that diagonalizes all convolution operators." This often involves expressing an arbitrary function as a superposition of "symmetric" functions of some sort, say functions of the form $e^{itx}$ — in the common signal-processing applications, an arbitrary "signal" is decomposed as a superposition of "waves" (or "frequencies"). ### Example 1: Polynomial multiplication This is the use of the discrete Fourier transform I'm most familiar with. Suppose you want to multiply two polynomials of degree $n$, given by their coefficients $(a_0, \ldots, a_n)$ and $(b_0, \ldots, b_n)$. In their product, the coefficient of $x^k$ is $c_k = \sum_i a_i b_{k-i}$. This is a convolution, and doing it naively would take $O(n^2)$ time. Instead, suppose we represent the polynomials by their values at $2n+1$ points. Then the value of the product polynomial (the one we want) at any point is simply the product of the values of our original two polynomials. Thus we have reduced convolution to pointwise multiplication. The Fourier transform and its inverse correspond to polynomial evaluation and interpolation respectively, for certain well-chosen points (roots of unity).
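Here is a minimal numpy sketch of Example 1 (the two small polynomials are arbitrary choices for illustration): evaluate both polynomials with an FFT, multiply the values pointwise, and interpolate the product back with the inverse FFT.

```python
# Multiply two polynomials by FFT evaluation, pointwise multiplication,
# and inverse-FFT interpolation; compare with the direct convolution.
import numpy as np

a = [1, 2, 3]                 # 1 + 2x + 3x^2
b = [4, 0, 5]                 # 4 + 5x^2

n = len(a) + len(b) - 1       # number of coefficients in the product
A = np.fft.fft(a, n)          # values of each polynomial at n roots of unity
B = np.fft.fft(b, n)
c = np.fft.ifft(A * B).real.round().astype(int)   # interpolate back

print(c)                      # [ 4  8 17 10 15]
print(np.convolve(a, b))      # the same coefficients from the direct convolution
```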
The Fast Fourier Transform (FFT) is a way of doing both of these in $O(n \log n)$ time. ### Example 2: Convolution of probability distributions Suppose we have two independent (continuous) random variables X and Y, with probability densities f and g respectively. In other words, $P(X \le x) = \int_{-\infty}^{x} f(t)\,dt$ and $P(Y \le y) = \int_{-\infty}^{y} g(t)\,dt$. We often want the distribution of their sum X+Y, and its density is given by a convolution: $f_{X+Y}(z) = \int f(t)g(z-t)\,dt$. This integration may be hard. But instead of representing the random variables by their densities, we can also represent them by their characteristic functions $\varphi_X(t) = E[e^{itX}]$ and $\varphi_Y(t) = E[e^{itY}]$. Then the characteristic function of X+Y is just: $\varphi_{X+Y}(t) = E[e^{it(X+Y)}] = \varphi_X(t)\varphi_Y(t)$ since they're independent. The characteristic function is the continuous Fourier transform of the density function; it is a change of representation in which convolution becomes pointwise multiplication. To quote again the answer on MO, many transformations we want to study (translation, differentiation, integration, …) are actually convolutions, so the Fourier transform helps in a wide number of instances. - Think about light coming from stars. The light has colour or "spectrum" but of course the data comes in a 1-D stream. The Fourier transform gives you the spectrum of the time series. You can also think about the EQ on your stereo -- the 2kHz slider, the 5kHz slider, etc. Those sliders are adjusting the constants in a Fourier-like realm. (see @leonbloy's caveats below) (Inverse Fourier just takes you back from spectrum to signal. So what does it mean that $\mathcal{F}^{-1} = \mathcal{F}$?) To get into the maths of it, remember that $\cos$ and $\sin$ are just phase-shifted versions of one another. Mathematically, you add together different amounts (amplitudes) of various phase-shifted $\sin$ waves and it's a surprising fact that doing so can add up to any function. (How would you get a straight line like $y={1 \over 3} x$ for example?) note: The transformed series don't have to be a time series exactly. You could parametrize lots of curves by $t$. For example handwriting or the outline of dinosaur footprints. Why is it useful in physics? One use is to express the definitiveness of Heisenberg uncertainty. A given wavefunction $\Psi$ in space (position) can be $\mathcal{F}(\Psi)$ to time (momentum). Since the time-space conversion is bijective, position & momentum (anti)covary i.e. you can't increase one without decreasing the other. Frank Wilczek makes use of $\mathcal{F}$ in this video explaining QCD for example. How is it used in engineering? Signal processing, image processing (PDF, jump to page 5), and video processing use the Fourier basis to represent things. - hint. the answer to the second puzzle involves limits. – isomorphismes Mar 24 '11 at 1:23 1 One caveat: in most uses, a "spectrum" (including the EQ) measures the energy per frequency, which relates to the absolute value of the Fourier transform. That is "a part" of the Fourier transform (you lack the "phase"), and hence, from the spectrum you cannot get back the signal. (A second caveat would relate to the fact that the EQ measures a time-windowed spectrum, which varies on time; the Fourier transform does not depend on time). – leonbloy Oct 14 '11 at 2:02 – isomorphismes Jan 4 at 8:24 I'll give an engineering answer. If you have a time series that you think is the result of an additive collection of periodic functions, the Fourier transform will help you determine what the dominant frequencies are.
This is the way guitar tuners work. They perform an FFT on the sound data and pick out the frequency with the greatest power (squares of the real and imaginary parts) and consider that the "note." This is called the fundamental frequency. There are many other uses, so you might want to add big list as a tag. - I interpreted the question as asking "explain why this is useful," rather than "list some examples of its use." Only the latter would deserve a `big-list` tag. – Larry Wang Jul 28 '10 at 20:31 You could think of a Fourier series expanding a function as a sum of sines and cosines analogous to the way a Taylor series expands a function as a sum of powers. Or you could think of the Fourier series as a change of variables. A fundamental skill in engineering and physics is to pick the coordinate system that makes your problem simplest. Since the derivatives of sines and cosines are more sines and cosines, Fourier series are the right "coordinate system" for many problems involving derivatives. - Here's some simple Matlab code to play around with if you like.

```
% This code will approximate the function f using the DFT
clear all
close all
a=0; b=2*pi;   % define interval, i.e. endpoints of domain(f)
N=20;          % number of sample points to take from f
% build vector of points in domain(f) to sample from
for j=1:N+1
    x(j) = (b-a)*(j-1)/N;
end
f = cos(x);    % approximate cos(x) with the resulting Fourier series
% build matrix of powers of roots of unity
for m=1:N+1
    for n=1:N+1
        F(m,n) = exp((m-1)*(n-1)*(2*pi*i)/N);
    end
end
% solve f = Fc by doing F\f
c = F\f';      % c is the vector of Fourier coefficients
% plot cos(x) on a fine grid
xx=0:0.01:2*pi;
plot(xx,cos(xx),'g')
hold on
% build the Fourier series using the coefficients from c
summ=0;
for k=1:length(c)
    summ = summ + c(k)*exp(i*(k-1)*x);
end
% plot the Fourier series on top of the actual cosine
plot(x,summ)
legend('actual','approx.')
```

As written you will have the first N=20 terms of the Fourier approximation to the cosine on the interval [a,b]=[0,2*pi]. Not very interesting as is... Reference: Gilbert Strang. - +1 for the MATLAB code – draks ... Jan 21 at 12:47 I'm in a calc 2 class and the Fourier series are sort of the crowning achievement of the class. Still, I had a hard time figuring out what it was used for. From what I know, and I could be wrong, signals or sin/cos waveforms can be additive or subtractive. So if you take a look at the picture at the top of the page, you'll see a green and blue signal. Well that's all well and good, but what happens if you add them together? Their periods are different, so it's not going to result in just an average of the two forms. So you end up with the red line. Its y value is large like the green's, but its period is smaller than that of the green. The top and bottom look more like the blue line. So this is what a Fourier series does. It takes two signals and puts them together to make a new signal. With more and more signals added together, you can approach very specific wave forms, like a square wave or a saw tooth wave (triangular). It's not perfect though, and the difference between green and red waves can be explained with the Gibbs Phenomenon. I hope this helps. - The Fourier transform returns a representation of a signal as a superposition of sinusoids. Fourier transforms are used to perform operations that are easy to implement or understand in the frequency domain, such as convolution and filtering.
If the signal is well-behaved, one can transform to and from the frequency domain without undue loss of fidelity. - A more complicated answer (yet it's going to be imprecise, because I haven't touched this in 15 years...) is the following. In a 3-dimensional space (for example) you can represent a vector v by its end point coordinates, x, y, z, in a very simple way. You choose three vectors which are of unit length and orthogonal with each other (a base), say i, j and k, and calculate the coordinates as such: x = v ∙ i y = v ∙ j z = v ∙ k In multidimensional space, the equations still hold. In a discrete infinite space, the coordinates and the base vectors become sequences. The dot product becomes an infinite sum. In a continuous infinite space (like the space of good functions) the coordinates and the bases become functions and the dot product an infinite integral. Now, the Fourier transform is exactly this kind of operation (based on a set of base functions which are basically a set of sines and cosines). In other words, it is a different representation of the same function in relation to a particular set of base functions. As a consequence, for example, functions of time, represented against functions of time and space (in other words integrated over time multiplied by functions of space and time), become functions of space, and so on. Hope it helps! - 3 The Fourier transform is a much more specific operation than this. The basis you choose is very special, and explaining why the Fourier transform is interesting should involve explaining that choice of basis. – Qiaochu Yuan Jul 28 '10 at 19:53 1 I said it would be imprecise... however it's a "for dummies" question, no? – Sklivvz♦ Jul 28 '10 at 19:59 1 I agree with Qiaochu, this answer is too vague to be useful. – Larry Wang Jul 28 '10 at 20:12 @Sklivvz: I didn't downvote this, but the point is that your answer just explains what a change of basis is, not what's special about the Fourier transform. – ShreevatsaR Jul 28 '10 at 21:20 My point was that a Fourier transform is a change of basis (which is what I personally find interesting about it) - which in turn (in my humble opinion) totally answers the question... but then again the whole point of this site is that one says what he thinks and then the opinion of others values the answer. So, fair enough. =) – Sklivvz♦ Jul 28 '10 at 21:47 show 1 more comment
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 40, "mathjax_display_tex": 7, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9353837966918945, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2007/09/03/the-weak-yoneda-lemma/?like=1&source=post_flair&_wpnonce=18b5244a97
The Unapologetic Mathematician

The Weak Yoneda Lemma

The Yoneda Lemma is so intimately tied in with such fundamental concepts as representability, universality, limits, and so on, that it's only natural for us to want to enrich it. Unfortunately, we're only ready to talk about bijections of sets, not about isomorphisms of $\mathcal{V}$-objects. So this will give back Yoneda when we consider $\mathbf{Set}$-categories, but in general it won't yet have the right feel. So let's say we've got a $\mathcal{V}$-functor $F:\mathcal{C}\rightarrow\mathcal{V}$, an object $K\in\mathcal{C}$, and a natural transformation $\eta:\hom_\mathcal{C}(K,\underline{\hphantom{X}})\rightarrow F$. We can construct the composite $\mathbf{1}\rightarrow\hom_\mathcal{C}(K,K)\rightarrow F(K)$, giving an element of the underlying set of $F(K)$. The weak Yoneda Lemma states that this construction gives a bijection between $\mathcal{V}\mathrm{-nat}(\hom_\mathcal{C}(K,\underline{\hphantom{X}}),F)$ — the set of $\mathcal{V}$-natural transformations from the $\mathcal{V}$-functor represented by $K$ and the $\mathcal{V}$-functor $F$ — and the underlying set of the $\mathcal{V}$-object $F(K)$. We have the function going one way. We must now take an "element" $\xi:\mathbf{1}\rightarrow F(K)$ and build from it a natural transformation with components $\eta_C:\hom_\mathcal{C}(K,C)\rightarrow F(C)$. And we must also show that it inverts the previous function. First off, since $F$ is a functor we have an arrow $\hom_\mathcal{C}(K,C)\rightarrow\hom_\mathcal{V}(F(K),F(C))$, which is the same as $F(C)^{F(K)}$. Now we can use the arrow $\xi$ to get an arrow $F(C)^{F(K)}\rightarrow F(C)^\mathbf{1}$, and $F(C)^\mathbf{1}$ is isomorphic to $F(C)$. Every step here is natural in each variable by the litany of natural maps we laid down. Now, if we compose this natural transformation with the identity arrow, it's not hard to see that we get back $\xi$. In fact, the identity arrow $i_K:\mathbf{1}\rightarrow\hom_\mathcal{C}(K,K)$ followed by the application of $F$ gives the identity arrow $i_{F(K)}:\mathbf{1}\rightarrow\hom_\mathcal{V}(F(K),F(K))$. But then the exponential $\hom_\mathcal{V}(\xi,1_{F(K)})$ just says to compose $\xi$ with the identity on $F(K)$, and we're left with $\xi$. For the other direction — that starting with a transformation, constructing an element, and then constructing another transformation gives us back the transformation we started with — I refer you to this diagram: We start with the transformation $\eta$ and construct the transformation along the lower-left of the diagram. The top row of the diagram is the identity (show it), and so the upper-right of the diagram is the original transformation $\eta$. I leave it to you to show that each of the three squares commutes, and thus our two constructions invert each other.

Posted by John Armstrong | Category theory

1 Comment » 1. [...] Strong Yoneda Lemma We gave a weak, "half-enriched" version of the Yoneda Lemma earlier. Now it's time to pump it up to a fully-enriched [...] Pingback by | September 12, 2007 | Reply
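Spelling the construction out in one line (this is only a restatement of the steps above in the post's notation, nothing new): the component at $C$ built from $\xi$ is the composite

$\eta_C\colon\ \hom_\mathcal{C}(K,C)\ \xrightarrow{\ F\ }\ F(C)^{F(K)}\ \xrightarrow{\ F(C)^{\xi}\ }\ F(C)^{\mathbf{1}}\ \cong\ F(C).$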
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 34, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9061500430107117, "perplexity_flag": "head"}
http://mathhelpforum.com/differential-geometry/160450-convergences.html
# Thread: 1. ## convergences Working in real numbers, true or false? Proof or counterexample: (a) If $a_n$ is a decreasing sequence of positive numbers, and $na_n \rightarrow 0$ as $n \rightarrow \infty$, then $\sum a_n$ converges. (b) If $\sum a_n$ converges, then $\sum \frac{a_n}{\sqrt{n}}$ converges. (c) If $\sum a_n$ converges, then $\sum \frac{|a_n|}{n}$ converges. (d) If $\sum a_n$ converges, then $\sum \frac{|a_n|}{n^{3/2}}$ converges. 2. Originally Posted by DontKnoMaff Working in real numbers, true or false? Proof or counterexample: (a) If $a_n$ is a decreasing sequence of positive numbers, and $na_n \rightarrow 0$ as $n \rightarrow \infty$, then $\sum a_n$ converges. Consider: $a_n=\frac{1}{n\ln(n)}$ $\displaystyle \lim_{n\to \infty} na_n=\lim_{n\to \infty}\frac{1}{\ln(n)}=0$ but: $\sum_{n=2}^{\infty}a_n$ diverges (you might have an interesting time proving this) CB 3. could compare it to $\sum_{n=1}^{\infty} \frac{1}{n}$. 4. Originally Posted by Time could compare it to $\sum_{n=1}^{\infty} \frac{1}{n}$. NO since $\frac{1}{n\ln(n)}<\frac{1}{n}$ unless you have something more sophisticated in mind. CB
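For a numerical feel for CB's counterexample (a sketch, not a proof): the partial sums keep growing roughly like $\ln(\ln n)$, which is the integral-test comparison one would use to prove the divergence.

```python
# Partial sums of 1/(n*ln(n)) keep pace with ln(ln(n)), so the series
# diverges even though n*a_n -> 0.
from math import log

s = 0.0
checkpoints = {10**3, 10**4, 10**5, 10**6}
for n in range(2, 10**6 + 1):
    s += 1.0 / (n * log(n))
    if n in checkpoints:
        print(f"n = {n:>7}   partial sum = {s:.4f}   ln(ln(n)) = {log(log(n)):.4f}")
```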
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 20, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8063913583755493, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/98775/ultimate-maximality-principle/98776
## Ultimate Maximality Principle I wonder if it's possible to formulate an "ultimate" maximality principle (UMP) and prove its consistency. I envision UMP to express the idea that no matter how we enlarge the universe of set theory V (by any means e.g. set forcing, class forcing, infinite model theory), we would gain n o t h i n g. Let W be the ultimate enlargement of V. Then UMP would say that a statement is true in W iff it's true in V. So any statement that is true in W is already true in V. Questions: 1) Are there available references in the literature concerning UMP? 2) If not, what is the prospect of UMP in foundational research? - ## 1 Answer There are several maximality principles that already have some of this flavor, with a growing literature surrounding them. For example, the maximality principle MP as introduced in my paper A simple maximality principle, and also in a paper of Stavi and Väänänen, is the scheme asserting that any statement $\varphi$ that is forceable by set forcing in such a way that it remains true in all further set forcing extensions, is already true in $V$. This axiom MP is actually equiconsistent with ZFC. Stronger versions of the axiom allow countable parameters (the axiom is inconsistent with uncountable parameters, since they can become countable by forcing). The strongest version of the axiom is the Necessary Maximality Principle NMP, which asserts that $\text{MP}(\mathbb{R})$ holds in all set forcing extensions, and this has determinacy consequences, but has strength below $\text{AD}_{\mathbb{R}}+\Theta$ is regular. The natural analogue of MP for class forcing is either inconsistent or not expressible in first order set theory. Another tack on the issue is the Inner Model Hypothesis of Sy Friedman, which aspires more in the universal direction of your question. Namely, the IMH asserts that if there is any outer model of the universe having an inner model satisfying a certain assertion, then there is already such an inner model with that feature. This axiom has the flavor of what you have wanted; it has a strong consistency strength, but it itself is inconsistent with the actual existence of large cardinals, as opposed to their existence in inner models. The penalty for the greater universality of the IMH is that it is not expressible in first order set theory as an axiom about $V$. One can, however, understand it as an external assertion about a countable model, treating that countable model as a kind of universe substitute. Both the MP and the IMH are naturally expressible in modal terms by the S5 axiom $\Diamond\square\varphi\to\square\varphi$, which expresses the idea that anything that could become necessarily true is already necessarily true. Benedikt Loewe and I explored the nature of the forcing modality in our paper The modal logic of forcing. Your proposed Ultimate Maximality Principle would seem to need a more detailed fleshing out: in the axiom you refer to an "ultimate" enlargement $W$ of $V$, but what is this $W$? After all, for any enlargement $W$ of $V$ we may continue to form the forcing extensions of $W$, so strictly speaking, there is no largest one. For example, $W$ itself would have forcing extensions, some with CH and others with $\neg\text{CH}$. Similarly, any set in $W$ can be made countable by forcing, and so if you are entertaining the idea of a single largest one, then every set in $V$ would have to be countable in it.
So the idea that one can achieve literal maximality as you describe becomes problematic, and this is the reason why the MP and the IMH make use of the S5 style maximality, which asserts that anything that could become true forever afterwards is already true, an assertion that takes the place of an actual maximal extension. Meanwhile, there is current work to investigate the extent to which we may have maximality-type principles for class forcing and for arbitrary extensions. For example, it appears that one may get it for extensions with the approximation and cover properties without much modification from the original work. - Thank you for your extensive reply Joel. I just read the papers you mentioned above. It's very interesting. In your last paragraph, you mention that there is current work on maximality-type principles for class forcing and for arbitrary extensions. Are you talking about Sy Friedman's work? He has a new paper on the Hyperuniverse program on his website. – Lianna Jun 5 at 15:19 I was referring to that and also to the program of replacing "set forcing" in various research contexts by "extensions with the approximation and cover properties". For example, this approach seems promising with the modal logic of forcing and with the maximality principle. Meanwhile, the hyperuniverse seems to be a natural context for the IMH and related principles, because of the manner in which they are formalized. – Joel David Hamkins Jun 5 at 16:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9496083855628967, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/23745/walter-lewin-lecture-16-ball-bouncing-on-wall
# Walter Lewin Lecture 16 - Ball bouncing on wall? I never did Physics in university and I consider that a mistake so I am correcting that now by teaching myself. To that extent I have been watching the MIT lecture videos by Walter Lewin and I am currently up to Lecture 16. The lecture is about Elastic Collisions. In this lecture he poses this question which I invite you to watch (The link should take you to the correct time in the video but if it does not then the question is posed at 22m52s). I suspect that the reason that the wall can have that momentum but no Kinetic Energy is that: $$p = mv$$ $$KE = \frac{mv^2}{2}$$ But in this case `m` is extremely large and therefore `v` is going to be very very small even when it is squared. Thus, if it happened to be a completely elastic collision and the ball bounced off then the value of 'v' for the wall would be nothing and the laws of physics hold. So I think that the wall will have a velocity of zero in the question and thus doubling it is zero. I suspect that in reality 'v' for the wall is not quite zero so the wall does get some small amount of kinetic energy. Therefore it will need to lose that energy somehow (otherwise you could keep throwing tennis balls against the wall until it exploded with energy). My suspicion is that walls lose this in vibrations and heat. I also suspect that is why floppy pieces of metal wobble around for a bit when you throw a ball against them. I obviously cannot ask Walter Lewin if I got that correct so I am asking here. Am I on the right track or did I get that completely wrong? Thank you so much for your help. - If you are interested in the motion of the wall, it is negligible, the wall velocity is zero. Now what is the use of the wall momentum and energy, if the wall does not move? – Vladimir Kalitvianski Apr 14 '12 at 9:33 ## 1 Answer You are right. Supposing that the wall (and in fact the whole Earth) is a perfectly rigid body, it will have both momentum and kinetic energy after the collision. And kinetic energy, expressed as huge mass times tiny velocity squared, will be negligible, while momentum, expressed as huge mass times tiny velocity, will not be negligible. $$M \gg m, V \ll v$$ $$\frac{M V^2}{2} \ll \frac{m v^2}{2}$$ $$m v \approx M V$$ - 3 Let me add a comment showing that the statements are not independent. It's useful to write the kinetic energy in terms of the momentum $p$, namely $E=(mv)^2/2m = p^2/2m$. Now, $p$ of the wall and the object are the same, up to the sign, due to the conservation of the momentum i.e. action and reaction. So you see that $p^2/2m$ is a decreasing function of $m$ for a fixed $p$, so the heavier object you have in the collision, the smaller kinetic energy it will carry. – Luboš Motl Apr 14 '12 at 10:04 Right! Good point, Luboš. – Pygmalion Apr 14 '12 at 11:42 Thank you for the good answer Pygmalion and that is a clever rearrangement of the Kinetic Energy equation Lubos. I will keep that in mind for the future and try to remember to apply it in problems. – Robert Massaioli Apr 15 '12 at 22:05
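A quick plug-in of numbers, using the standard one-dimensional elastic-collision result for a light ball striking a heavy stationary body; the tennis-ball figures and the Earth mass below are example values chosen here, not numbers from the lecture.

```python
# For a 1-D elastic collision of a light ball (m, v) with a heavy stationary
# body M, the heavy body ends up with velocity V = 2*m*v/(m + M): its momentum
# is about 2*m*v (not negligible) while its kinetic energy is tiny.
m, v = 0.058, 30.0          # tennis ball: kg, m/s
M = 5.97e24                 # treat wall + Earth as one rigid body, kg

V_wall = 2 * m * v / (m + M)
p_wall = M * V_wall                      # ~ 2*m*v
KE_wall = 0.5 * M * V_wall**2            # ~ (2*m*v)**2 / (2*M)

print(f"wall velocity  = {V_wall:.3e} m/s")      # ~5.8e-25 m/s
print(f"wall momentum  = {p_wall:.3f} kg m/s")   # ~3.48 (= 2*m*v)
print(f"wall KE        = {KE_wall:.3e} J")       # ~1e-24 J, vs the ball's ~26 J
```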
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9472547769546509, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/2110/why-does-space-expansion-not-expand-matter?answertab=active
# Why does space expansion not expand matter? REFORMULATED: I have looked at the other questions (ie "why does space expansion affect matter") but can't find the answer I am looking for. My question: There is always mention of space expanding when we talk about the speed of galaxies relative to ours. Why, if space is expanding, does matter not also expand? If a circle is drawn on balloon (2d plane), and the balloon expands, then the circle also expands. If matter is an object with 3 spatial dimensions, then when those 3 dimensions expand, so should the object. If that was the case, we wouldn't see the universe as expanding at all, because we would be expanding (spatially) with it. I have a few potential answers for this, which raise their own problems: 1. Fundamental particles are 'point sized' objects. They cannot expand because they do not have spatial dimension to begin with. The problem with this is that while the particles would not expand, the space between them would, leading to a point where the 3 non-gravity forces would no longer hold matter together due to distance 2. Fundamental particles are curled up in additional dimensions ala string theory. These dimensions are not expanding. Same problems as 1, with the added problem of being a bit unsatisfying. 3. The answer seems to be (from Marek in the previous question) that the gravitational force is so much weaker than the other forces that large (macro) objects move apart, but small (micro) objects stay together. However, this simple explanation seems to imply that expansion of space is a 'force' that can be overcome by a greater one. That doesn't sound right to me. I think some of the problems in this question verge into metaphysics, but I think from his (?) previous answer, Marek can probably explain the physical side of things a bit more thoroughly. I will leave it there cos anything else I write sounds rambling! - Good question but I think it's basically the same thing that has already been asked. – David Zaslavsky♦ Dec 21 '10 at 4:24 Sorry David, but I had read that question (love the auto-similar question thing by the way), and don't think it covers this at all. That poster seemed to ask about space expanding, and matter not moving, like it was sliding. I am asking why matter, which exists in a portion of space, does not get bigger when that portion gets bigger. – SoulmanZ Dec 21 '10 at 4:39 OK... well, it seems to me that your question and the earlier question are actually polar opposites. You're asking why matter doesn't move along with space, and the other poster was asking why matter does move along with space. And you can actually use either description validly. Either way, I think you'd just get essentially the same answers to this question as are already on the other one, and that's why I closed it. – David Zaslavsky♦ Dec 21 '10 at 7:20 2 If you really think this shouldn't be a duplicate, I'd definitely suggest editing your question to say that you read the other one and its answers. Try focusing on what you want to know that the other question (and answers) didn't cover. (A question like "I read <question> and its answers but there's something I still don't understand: <blah blah blah>?" has a pretty decent chance of being fine.) If your question is suitably edited, I'll be happy to reopen it. – David Zaslavsky♦ Dec 21 '10 at 7:24 @David: I actually think this is not the same question at all. This one has to do with gravity being weak at micro-scale. I.e. 
table isn't expanded because it's held together by quantum and EM effects. Space is expanded because there are no longer these effects on macro-scale. – Marek Dec 21 '10 at 9:18 show 2 more comments ## 6 Answers To accept that the space is expanding you have to admit that the ruler, made of atoms, is invariant, i.e. it always has the same length, and no one has provided a convincing argument of this. The space expansion relies on the belief that this is a fact. If atoms are expanding at the same rate, we would not be able to measure any expansion. If, on the contrary, the atoms are shrinking through time, we can measure a space expansion without anything de facto happening to the space. I don't know why the space is expanding except that we measure it. The matter may be contracting because the gravitoelectric fields have energy that is expanding and are sourced by the particles since matter is born and, obviously, we are not able to measure this fact in the lab. An undiscussed model, outside academia and not peer-reviewed, is 'A self-similar model of the Universe unveils the nature of dark energy', which does not need any Dark Energy, Inflation, etc. Concluding: to the question 'Why does space expansion not expand matter?': if matter expanded at the same rate, no one would be able to measure any variation. The measuring act is to obtain a ratio between two quantities, and both the numerator and the denominator (the standard) can change to obtain a specific measured value. But the standard is based on the 'atom' properties (in the first link and in this recent one, The physical basis of natural units and truly fundamental constants) that we presume invariable. Both links provide an insight about units, but the first link is much more interesting because it provides an insight on the rationale of the measuring act. EDIT add: Usually it is accepted that there are no effects of the space expansion on the Solar system scale, but recently it was reported, i.e. measured, that the Sun-Earth distance is increasing much more than expected: Experimental measurement of growth patterns on fossil corals: Secular variation in ancient Earth-Sun distances by Weijia Zhang, 2010 (behind paywall): experimental results indicate a special expansion with an average expansion coefficient of $0.57H_0$. Secular increase of astronomical unit from analysis of the major planet motions, and its interpretation by G. A. Krasinsky, 2004 (behind paywall): measured $\frac{d\mathrm{AU}}{dt}=15 \pm 4$ m/cy ... at present there is no satisfactory explanation of the detected secular increase of AU. -- Not peer reviewed, by Weijia: pdf of 'A test of the suggestion of an eternally constant Earth orbit in both Phanerozoic and Cryptozoic from astronomical observations and geological rhythms' (on http://www.paper.edu.cn ). The author reviewed all developments in lunar system research and paleontology since 1963, found three contradictions between different methods: ... This means that the ancient Earth is closer to the Sun. ... The revolution period of Earth is increasing, recorded by NASA. The semimajor axis of Earth is increasing, recorded by JPL. On page 13 we find a table with the measured values of the length of a sidereal year (increasing) after 1900. The increasing distance is deduced in the presented model, as seen at eq. 35), pag. 10 of the preliminary paper of 2002 (arxiv) by Alfredo Oliveira, A relativistic time variation of matter/space fits both local and cosmic data. So, to the question 'Why does space expansion not expand matter?'
the answer is because 'the space expansion is the result of the evanescence of matter', i.e. matter is shrinking. As an exercise: imagine you are sitting in the middle of a room and you start to see the walls moving away from you. When you wake from that dream, or hallucination if you are doped, how would you describe it? "I was sh-shr-shri-shrinking", as Alice in Wonderland naturally did, or "the house is getting bigger-BIGGER"? - Once more a downvote without a 'why' you think this model is wrong. Until now I've collected 0 (zero) arguments against it. (Some disagreeable (fr) words do not count.) – Helder Velez Sep 17 '12 at 0:17 If the question is interpreted as why atoms and other bound systems don't expand, the answer is that the general expansion of space cannot do continuous work against the electromagnetic force that holds an atom together or any other force that holds a bound system together. However, the accelerating expansion of the universe can exert a small "constant" negative force between the electrons and nucleus and make the atom very, very slightly bigger than it would have been in a non-accelerating expanding universe. But in the current best theory of dark energy, which is that it is a constant vacuum energy, this effect will be constant and the atoms have already expanded as much as they ever would. There are theoretical speculations of an acceleration of the accelerating expansion of the universe where this effect increases with time such that eventually, in an exponential way, the universe ends in a big rip where atoms and eventually nuclei would be ripped apart. On another website, I answered a question about whether we could extract energy from the expanding universe and this is the answer I wrote, which I think will be helpful in understanding this issue: The universe is expanding at 74 km/sec/Mpc (Mpc is a megaparsec, which is 3.26 million light years). So let's take two heavy objects and place them far from any galaxy cluster or other influence and space them just one parsec apart (3.26 light years). Then they will effectively be moving apart at 7.4 cm/sec. Now imagine that your monomolecular filament rope between the objects puts a force on the objects that will decelerate the objects. Then during the time that they are decelerating you can extract work from the objects. That work per second comes from the force the rope is exerting being applied over the 7.4 cm/sec that the objects are moving apart. However, once the force causes their relative velocity to drop to 0, you won't be able to get any more energy from the objects since they are no longer moving apart. There will still be a constant force on your rope but you need to have a force applied over a distance to get work. Now this is all from just the "Big Bang" expansion of space. Once the rope's force has gotten their relative velocity to zero, the two objects are like a gravitationally bound system and it will stop "expanding". However, in addition to the "standard" expansion of space, we now know that there is dark energy which is causing an accelerating expansion of the universe. This means that the two objects are not just "moving" apart at a constant 7.4 cm/sec but that this velocity is actually increasing with time. So if you set up your rope such that the force it is exerting on the objects results in a deceleration that is slightly smaller than this cosmic acceleration, you can extract work continuously and indefinitely.
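To make the arithmetic quoted above concrete, here is a minimal Python sketch (my own illustration, not part of the original answer), using the 74 km/s/Mpc figure and Hubble's law $v = H_0 D$:

```python
# Hubble-law arithmetic from the answer above: v = H0 * D.
H0 = 74.0                    # km/s per Mpc, as quoted in the answer
KM_PER_PC = 3.086e13         # kilometres in one parsec
KM_PER_MPC = 1.0e6 * KM_PER_PC

def recession_speed_km_s(distance_km):
    """Apparent recession speed of two comoving objects a given distance apart."""
    return H0 * distance_km / KM_PER_MPC

print(recession_speed_km_s(KM_PER_PC) * 1e5)   # ~7.4 cm/s for objects one parsec apart
print(recession_speed_km_s(KM_PER_MPC))        # 74 km/s for objects one megaparsec apart
```

The numbers only restate the answer's estimate; the physical caveats about bound systems and the lengthening rope are unchanged.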
Unfortunately, I have not been able to convert the dark energy measurements into units of acceleration in this particular case of objects at one parsec.  I suspect it is a small number but current estimates are that it is definitely positive.  Note that if your rope exerts more force that causes a deceleration larger than the cosmic acceleration then the objects will eventually stop moving apart and the work you can extract will drop to zero again. Note that from just the normal expansion of the universe you can only extract a finite total amount of energy, but that with the accelerated expansion you can extract a small but positive amount of energy per second forever.  However, your rope needs to get longer and longer with time (at the rate of 7.4 cm/sec, in this example), so, as they say TANSTAFL (there ain't no such thing as a free lunch). The rope needs to get longer because you have to have your very small force applied to continuously moving objects to get work done.  Since it will take continuous energy to make a continuously lengthening rope, and you cannot win this battle by starting with objects that are further apart since then the rope is lengthening at an even faster rate than the 7.4 cm/sec of this example.  You can increase the energy per second you extract by making the objects more massive, but then the force on the rope increases so you need to make a thicker rope. The bottom line is that I think this free energy project is impractical, even though it is theoretically possible.  The problem that needs to be solved is the energy cost of the continuously lengthening rope. - – Qmechanic♦ Sep 16 '12 at 23:03 I didn't think it was a problem since the other question was closed as a duplicate of this. Sure I will go delete it. – FrankH Sep 17 '12 at 0:10 What about two objects connected with a spring? It sounds like the 'outward' movement would be accelerated by the expansion factor, while the inward movement would be retarded by the same amount. Not disagreeing that it's impractical, but still interesting that it's even possible in theory. – kbelder Mar 12 at 15:59 @kbelder Two objects connected by a spring, when all oscillations die away due to friction (in the spring itself if not due to air etc) will have an equilibrium separation that would be very very slightly larger appart in an accelerated expanding universe and when there is oscillation out and back, they would just oscillate around that new equilibrium location. – FrankH Mar 12 at 17:17 The right answer is 3. Space expansion manifests itself like a repelling force. - "3"? What in the world does 3 mean? Was this supposed to be a comment on another answer? – FrankH Sep 16 '12 at 17:28 This was written for a question that closed during my composition of this. The question is how does the CC effect atomic physics, by Ashton. Dark energy has the mass-energy equivalent of a proton every 1-10 cubic meters. That is a pretty diffuse energy. An atom is on the scale of $10^{-8}$cm in length or has a volume of about $10^{-30}m^3$. So about that proportion of a proton’s mass-energy worth of dark energy acts on an atom, or perturbs its atomic levels. That is about $10^{-21}$ Gev or $10^{-12}$ev. That is very small. Now your question is not entirely without merit. Some very sensitive atomic measurements get atomic level splittings to within $10^{-6}$ev. I will not say for certain, but these atomic-quantum optics people can be quite clever on the bench. 
It is not entirely unimaginable that with squeezed states, entangled squeezed states of photons and electrons and so forth, this might be measured. If there is an EM response due to a level splitting, the wave would be around the sub-Hertzian range. The interaction Hamiltonian for the cosmological constant would be an inverted harmonic oscillator potential $H_{cc}~=~\Lambda r^2/3$. Some analysis for avoided crossings of energy levels and states and the rest might not be an unreasonable thing to work on. - This problem has been extensively studied. These are some references that I think answer various aspects of your question. I would say that it is not exactly a solved problem. In an expanding universe, what doesn't expand? Size of a hydrogen atom in the expanding universe Evolution of the Solar System and the Expansion of the Universe Multiparticle Dynamics in an Expanding Universe Life, The Universe, and Nothing: Life and Death in an Ever-Expanding Universe Update: This review (Rev.Mod.Phys.) is free from arXiv, Influence of global cosmological expansion on local dynamics and kinematics - 1 thanks very much. the first two specifically seem good, although the second is behind a paywall – SoulmanZ Dec 22 '10 at 3:21 Let's talk about the balloon first because it provides a pretty good model for the expanding universe. It's true that if you draw a big circle then it will quickly expand as you blow into the balloon. Actually, the apparent speed with which two of the points on the circle at a distance $D$ from each other would move relative to each other will be $v = H_0 D$, where $H_0$ is the rate at which the balloon itself is expanding. This simple relation is known as Hubble's law and $H_0$ is the famous Hubble constant. The moral of this story is that the expansion effect is dependent on the distance between objects and really only apparent for the space-time on the biggest scales. Still, this is only part of the full picture because even at small distances objects should expand (just slower). Let us consider galaxies for the moment. According to wikipedia, $H_0 \approx 70\, {\rm km \cdot s^{-1} \cdot Mpc^{-1}}$, so for the Milky Way, which has a diameter of $D \approx 30\, {\rm kpc}$, this would give $v \approx 2\,{\rm km \cdot s^{-1}}$. You can see that the effect is not terribly big, but given enough time, our galaxy should grow. But it doesn't. To understand why, we have to remember that space expansion isn't the only important thing that happens in our universe. There are other forces like electromagnetism. But most importantly, we have forgotten about good old Newtonian gravity that holds big massive objects together. You see, when equations of space-time expansion are derived, nothing of the above is taken into account because all of it is negligible on the macroscopic scale. One assumes that the universe is a homogeneous fluid where microscopic fluid particles are the size of the galaxies (it takes some getting used to to think about galaxies as being microscopic). So it shouldn't be surprising that this model doesn't tell us anything about the stability of galaxies; not to mention planets, houses or tables. And conversely, when investigating stability of objects you don't really need to account for space-time expansion unless you get to the scale of galaxies and even there the effect isn't that big. - if the expansion is accelerating, will it reach a theoretical point where that expansion does override gravity or EM? even nuclear forces?
I am still not entirely sure I can conceptualise the expansion as a pseudo-force, that is overcome by other forces. – SoulmanZ Dec 22 '10 at 0:37 – Marek Dec 22 '10 at 0:40 @SoulmanZ: it's not a pseudo-force. It's a completely usual force and the basic point about forces is that it doesn't matter where they are coming from. If you pull an object with a force of $10N$ to the one side using gravitation and to the other with a force of $10N$ using electromagnetism, it wouldn't move. So it's only important how big a force is and it shouldn't be surprising that certain forces are more important than others in certain situations. We use just gravitation to describe Solar System but we use electromagnetism to describe electric circuits. – Marek Dec 22 '10 at 0:44 1 @SoulmanZ: expansion is just an apparent force arising from GR, as described in my previous comment. But for the purposes of local observers it's a completely usual force. You look at the sky and you see galaxies speeding away from you, accelerating even. So there must be some force acting on them, you tell yourself. This is just an illusion created by GR though. In the same way, when you jump, something pulls you down, so you'd imagine there must be gravitation. But in fact, there are no forces acting on you, it's just that you are moving in the curved space-time. – Marek Dec 22 '10 at 0:59 1 @SoulmanZ: I am not sure how much sense is this making to you. Depending on your exposition to GR, this might appear to be a complete blabbering. Nevertheless, the moral of the story is that GR reduces all of the gravitational effects to movement in and movement of space-time. There are no gravitational forces in that description. But often it is very convenient to describe some of those GR phenomena as forces and work with simplified picture. – Marek Dec 22 '10 at 1:02 show 4 more comments
http://physics.stackexchange.com/questions/54975/quantum-entanglement-whats-the-big-deal/55209
# Quantum Entanglement - What's the big deal? Bearing in mind I am a layman - with no background in physics - please could someone explain what the "big deal" is with quantum entanglement? I used to think I understood it - that 2 particles, say a light-year apart spatially, could affect each other physically, instantly. Here I would understand the "big deal". On further reading I've come to understand (maybe incorrectly) that the spatially separated particles may not affect each other, but in knowing one's properties you can infer the other's. If that it the case, I don't see what the big deal is... 2 things have some properties set in correlation to each other at the point of entanglement, they are separated, measured, and found to have these properties...? What am I missing? Is it that the particles properties are in an "un-set" state, and only when measured do they get set? (i.e. the wave-function collapses). If this is true - why do we think this instead of the more intuitive thought that the properties were set at an earlier time? - 1 – EnergyNumbers Feb 24 at 18:46 1 I don't personally think this is a duplicate of that question, although they are, I agree, closely related. This question is asking for more of an intuition behind he strangeness of entanglement which I don't think the other question and its answers fully addresses. – joshphysics Feb 24 at 18:54 4 Dear @Pete, your basic reasoning is quite correct. Entanglement is nothing else than correlation between two objects ("subsystems") and this correlation is always a consequence of their mutual contact or common origin in the past. Entanglement is the most general type of correlation that may be described using the formalism of QM (none of the properties is determined) but it's still correlation, leads and requires no "action at a distance", and everyone who is seeing something totally mysterious behind the entanglement is overlooking the forest - that it's just correlation - over some trees. – Luboš Motl Feb 24 at 19:02 – Eduardo Guerras Valera Feb 24 at 21:09 2 @LubošMotl, my jaw dropped at your answer because I know you are very aware of Bell's inequality and the math behind it. Sure, it's correlation, but not correlation that can be done with hidden variables (no "action at a distance"). Am I totally missing your intent somehow? How can you reconcile what you just said with the experimental evidence (and actual devices based on) Bell's inequality? – Terry Bollinger Feb 25 at 3:24 show 1 more comment ## 7 Answers I understand your confusion, but here's why people often feel that quantum entanglement is rather strange. Let's first consider the following statement you make: 2 things have some properties set in correlation to each other at the point of entanglement, they are separated, measured, and found to have these properties A classical (non-quantum) version of this statement would go something like this. Imagine that you take two marbles and paint one of them black, and one of them white. Then, you put each in its own opaque box and to send the white marble to Los Angeles, and the black marble to New York. Next, you arrange for person L in Los Angeles and person N in New York to open each box at precisely 5:00 PM and record the color of the ball in his box. 
If you tell each of person L and person N how you have prepared the marbles, then they will know that when they open their respective boxes, there will be a 50% chance of having a white marble, and a 50% chance of having a black marble, but they don't know which is in the box until they make the measurement. Moreover, once they see what color they have, they know instantaneously what the other person must have measured because of the way the system of marbles was initially prepared. However, because you painted the marbles, you know with certainty that person L will have the white marble, and person N will have the black marble. In the case of quantum entanglement, the state preparation procedure is analogous. Instead of marbles, we imagine having electrons which have two possible spin states which we will call "up" denoted $|1\rangle$ and "down" denoted $|0\rangle$. We imagine preparing a two-electron system in such a way that the state $|\psi\rangle$ of the composite system is in what's called a superposition of the states "up-down" and "down-up" by which I mean $$|\psi\rangle = \frac{1}{\sqrt 2}|1\rangle|0\rangle + \frac{1}{\sqrt{2}}|0\rangle|1\rangle$$ All this mathematical expression means is that if we were to make a measurement of the spin state of the composite system, then there is a 50% probability of finding electron A in the spin up state and electron B in the spin down state and a 50% probability of finding the reverse. Now me imagine sending electron $A$ to Los Angeles and electron B to New York, and we tell people in Los Angeles and New York to measure and record the spin state of his electron at the same time and to record his measurement, just as in the case of the marbles. Then, just as in the case of the marbles, these observers will only know the probability (50%) of finding either a spin up or a spin down electron after the measurement. In addition, because of the state preparation procedure, the observers can be sure of what the other observer will record once he makes his own observation, but there is a crucial difference between this case and the marbles. In electron case, even the person who prepared the state will not know what the outcome of the measurement will be. In fact, no one can know with certainty what the outcome will be; there is an inherent probabilistic nature to the outcome of the measurement that is built into the state of the system. It's not as though there is someone who can have some hidden knowledge, like in the case of the marbles, about what the spin states of the electrons "actually" are. Given this fact, I think most people find it strange that once one observer makes his measurement, he knows with certainty what the other observer will measure. In the case of the marbles, there's no analogous strangeness because each marble was either white or black, and certainly no communication was necessary for each observed to know what the other would see upon measurement. But in the case of the electrons, there is a sort of intrinsic probability to the nature of the state of the electron. The electron truly has not "decided" on a state until right when the measurement happens, so how is it possible that the electrons always "choose" to be in opposite states given that they didn't make this "decision" right until the moment of measurement. How will they "know" what the other electron picked? Strangely enough, they do, in fact, somehow "know." Addendum. 
Certainly, as Lubos points out in his comment, there is nothing actually physically paradoxical or contradictory in entanglement, and it is just a form of correlation, but I personally think it's fair to call it a "strange" or "unintuitive" form of correlation. IMPORTANT DISCLAIMER I put a lot of things in quotes because I wanted to convey the intuition behind the strangeness of entanglement by using analogies; these descriptions are not meant to be scientifically precise. In particular, any anthropomorphisations of electrons should be taken with a large grain of conceptual salt. - 1 A nice answer. One of the experimenters can even have the facility to change the orientation of his/her apparatus quite randomly and very quickly, and yet the measurements do show correlations that violate Bell's inequality!! – JKL Feb 24 at 19:07 5 So how is it known that the electron's wave function hasn't collapsed until it has been measured? Is that what's confirmed in the double-slit experiment? – Pete Oakey Feb 24 at 19:45 3 Actually I'm asking how do we know it's in a superposition in the first place? – Pete Oakey Feb 24 at 22:37 1 I think he is asking how do you know that the superposition isn't just a classical lack of information. The answer lies in Bell's inequality, which I think he needs an intuitive explanation of. – Chris Feb 25 at 0:18 3 This nice analogy just claims you can't have hidden variables ("no one knows in advance" the outcome with electrons, in contrast to marbles). But this argument does not distinguish a classical mixture from proper entanglement, and thus, to my taste, does not really address the "weirdness". – Slaviks Feb 25 at 8:11 show 10 more comments Rather than repeat some very good standard answers, I want to discuss this issue from the perspective as to why classical systems should be viewed as strange. If we accept quantum mechanics as being fundamental, then in some sense we shouldn't really find things like entanglement to be strange at all. As pointed out by the answer given by joshphysics, as well as the answer given by Lubos Motl in the similar question, entanglement is really just correlation. The strangeness enters because we are accustomed to the idea of classical locality and separability of systems. Locality is best understood as the concept prohibiting action-at-a-distance, and is closely tied to Newton's Third Law of Motion. Newton's third law is the statement, Every action has an equal and opposite reaction, which basically tells us that forces on an object are the result of the interaction with another object. Action-at-a-distance is a situation where two objects separated in space share perfect correlation in their motion, implying that one object is directly responsible for the other object's activities. In Newtonian mechanics, there is no limit on velocity, so action-at-a-distance, while seemingly unbelievable, is not prohibited. This situation changed when it was realized that there is an ultimate speed limit to how fast two objects can communicate, or rather influence each other via the third law. This is the speed of light, as enshrined in the theories of special relativity and general relativity. This ultimate speed limit on the transfer of real information between two spatially separated regions is where our "classical intuition" fails (which is not a statement about human intuition, it is a statement about an apparent contradiction that arises in the logical statements one can make in the context of a particular theory).
So really the question isn't so much, "Why is quantum mechanics weird?" It's "Why does our classical intuition fail?" Much of this failure in our intuition is related to the separability of states, which is an inherent feature of classical mechanics. Separability of states is possible when one is able to describe composite states as direct products of subsystem state vectors. To explain this a little better, there is a postulate of quantum mechanics that states The Hilbert space of a composite system is the Hilbert space tensor product of the state spaces associated with the component systems This is written mathematically as $$\mathcal{H}_{AB} = \mathcal{H}_{A} \otimes \mathcal{H}_{B}$$ This can be imagined as just an abstract infinite dimensional space (just a really big space). The direct product $\otimes$ tells us to take every component of each space times every component of the other space (e.g. if I can provide a basis for one space as $x$,$y$,$z$ and the basis for the second space as $a$,$b$,$c$, the direct product space would be $xa$,$xb$,$xc$,$ya$,$yb$,$yc$,$za$,$zb$,$zc$). As implied above, each component subspace can be given a basis that spans the space (span = provide a complete coordinate system that can describe every point): $$\mathcal{H}_{A} \rightarrow \{ |a_i \rangle \}$$ and $$\mathcal{H}_{B} \rightarrow \{ |b_j \rangle \}$$ With our basis chosen, a pure state of the composite system can be written as: $$|\psi\rangle = \sum_{i,j} c_{ij} |a_i\rangle \otimes |b_j \rangle$$ As discussed in the wikipedia article, if the state $$|\psi\rangle \in \mathcal{H}_{A} \otimes \mathcal{H}_{B}$$ can be written as $$|\psi_A\rangle \otimes |\psi_B\rangle$$ where $$|\psi_i\rangle$$ is a pure state of the subsystem (e.g. also has an independent Hilbert space), then the system is described as separable. If it is not separable, it is entangled, and therefore: $$|\psi\rangle = \sum_{i,j} c_{ij} |a_i\rangle \otimes |b_j \rangle \neq |\psi_A\rangle \otimes |\psi_B\rangle$$ (Update: Example borrowed from Marcini and Severini: Let $|\psi_{A1}\rangle$, $|\psi_{A\perp}\rangle$ be orthogonal states in $\mathcal{H_A}$, and $|\psi_{B1}\rangle$, $|\psi_{B\perp}\rangle$ be orthogonal states in $\mathcal{H_B}$. Then $$|\psi_{A1}\rangle \otimes |\psi_{B1}\rangle \in \mathcal{H_A} \otimes \mathcal{H_B}$$ as well as $$a|\psi_{A1}\rangle \otimes |\psi_{B1}\rangle + b|\psi_{A\perp}\rangle \otimes |\psi_{B\perp}\rangle \in \mathcal{H_A} \otimes \mathcal{H_B}$$ with $a$,$b$ $\in \mathbb{C}$. The first can be factorized into states of the subsystems, the second cannot. The existence of this second state would result in the above inequality.) In our classical intuition, systems are separable, and it is only through some direct classical mechanical coupling that they show any correlation. So in the marble examples, there is some mechanical process that is involved in mixing marbles together. The marbles are still separable systems, and the correlation of one person finding a white marble, and one finding a black marble, is still rooted in classical statistical mechanics, simply by the fact that the marbles have a definite color associated with them before they are put in the box. This means that the state of color for either marble is known and is not correlated with the state of the other marble. It makes sense for one to talk about the marbles being in a black or white state in classical mechanics.
This is not a typical state in quantum mechanics, and systems having a definite state prior to observation is the root cause for the failure of our classical intuition We must understand that the full state space in the entangled system is much larger than the space of separable systems. There is a good analogy in understanding the different size of state spaces in the context of the Born Oppenheimer approximation (and Emilio Pisanty does a good job explaining the derivation in his answer to this SE question). The Born Oppenheimer approximation provides a justification for allowing for the separation of the nuclear and electronic states of a molecular system: $$\Psi_{Total} = \psi_{electronic} \times \psi_{nuclear}$$ This is possible by showing that one can ignore "vibronic coupling" associated with transitions of particles which would be represented by off-diagonal terms in the complete Hamiltonian matrix. Similarly in our "classical intuition" we can ignore many terms that describe the state of the system simply because their effects are too small to be considered. As systems become smaller, these effects are harder to ignore, and the notion of a quantum object being able to have a definite state (e.g. being a definitely black or white marble) prior to our observation is not possible. However, the correlation of the outcomes is not removable from the system, in this sense the correlation must be viewed as more fundamental then the definiteness of the state. This is a very different state of affairs than what we find in classical mechanics, where definiteness of state is viewed as more fundamental. So hopefully this gives a little more clarity as to why we think quantum entanglement is a "big deal". It requires a fundamental change in our understanding and approach to physics. - What happened here? I am in complete shock that so many respondents are answering "yeah, no big deal, nothing really spooky going on, yeah it's just correlation..." What in the world is everyone talking about? Peter Oakey, forget all the math for a minute. This will take a few minutes of detailed but entirely non-mathematical setup, but if you can bear with me I can explain to you in a very pointed way why entanglement is spooky and cannot be explained by classical correlations alone. First, we need something easily visualized with which to set up the situation. A clock with only one hand, an hour hand, works nicely. Did I mention that the hands on these clocks are a bit weird? Well, actually... a lot weird. Instead of being sharp pointy lines, the hands are painted onto a disk... badly. They are severely smeared, to the point that they are fully black only in the exact direction of the time they represent, e.g. 3 o'clock. From that direction they fade off into gray as you go around the disk on which the hand is painted. In fact, the disk remains pure white only on the exact opposite side from the pure black direction. So, if the pure black is pointing at 3 o'clock, the pure white is pointing the opposite direction at 9 o'clock. (I may add some simple graphics for this tomorrow, but it's too late tonight.) Through a Slot Darkly Um, did I mention that reading these clocks is also a bit weird? Well... a lot weird. That's because you are only permitted to read them by looking through a single slot that you can dial into any position you want, such as 12 o'clock. 
Now, you might think that would make seeing the hand impossible most of the time, but don't forget: the hands on these clocks are so badly smeared out that in most cases when you look through the slot you will see some kind of gray, probably at least 50 different shades of it. Occasionally, though, you will happen to see pure black or pure white. That means you got lucky and set your analyzer to one of the two positions from which you can read the clock with 100% certainty. Thus if you set the slot to 3 o'clock and see pure black, it means the clock was set to that same time, 3 o'clock. But notice that if you had instead set the slot to 9 o'clock, you would have seen the pure white that is always opposite to the pure black, and you again would have known with certainty that the time was 3 o'clock. Alas, if you had instead chosen any other setting for the slot, you would have seen only some shade of gray. Darker grays would have meant you were "closer" to the time on the clock, while lighter grays would have meant you are farther away from it. But for any of the gray shades you can only make a guess about the exact time. Game, Reset, Match Which leads to a final but very important oddity about these clocks: Every time you read one, the hand resets itself to match the orientation of your reading slot. Now that's really weird! How does this final twist work? It's somewhat random, actually, but in a way that is strongly guided by how gray the disk is at the point where you read it. If you happened to read pure black or pure white, there is no problem: The hand simply stays exactly where it was, on black or white. If instead you happened to see the shade of gray that lies $90^{\circ}$ away from pure black or pure white, e.g. 12 or 6 for a hand pointing at 3, then the dial resets in a fully random fashion, with a 50/50 chance of moving either pure black or pure white into the reading slot position after. Everything in between becomes a probability that is more in favor of black or white. Thus a very dark shade of gray will almost always cause the clock dial to rotate pure black into the reading slot position... but not quite every time. As long as the disk has some white mixed in with the black, the pure white side of the dial disk will occasionally get rotated into the reading position. Incidentally, in case you were wondering how to translate some weird shade of gray into a specific reading of the clock, this gray-based resetting feature provides the answer. What happens is that you final answer always is based on how your set your analyzing slot, specifically on the value that gets rotated into that position after you read the original value on the clock. So for example, if you set your analyzer to 12 o'clock, you will always get an answer of either 12 o'clock (pure black rotated into the 12 o'clock position) or 6 o'clock (pure white rotated into the 12 o'clock position). The original clock dial position no longer matters at that point, since the very act of reading the clock resets it and makes the new value into the only one that matters. Strange Times Strange times (and time pieces) indeed! But if you are wondering why I am putting in so many seemingly pointless restrictions, I assure you they are not as arbitrary as they seem. What I am doing it translating large chunks of quantum mechanics into a physical model that helps visualize certain types of quantum relationships. 
Because quantum mechanics deals with small systems that contain very little information, it is all about understanding these odd constraints that do not allow the huge freedoms to which we are so accustomed from the classical world. I'll call these constructions fuzzy clocks due to all the probabilities going on in reading them. Igor, Pull the Lever! Next comes the experimental arrangement using these clocks, one that is the same for both classical correlation and quantum entanglement: 1. Set two fuzzy clocks to have exactly opposite but randomly selected times, e.g. 1-and-7 or 10-and-4. Keep these times secret from everyone in the universe. 2. Place the fuzzy clocks in two spaceships A and B and fly them to locations very far apart. For example, you could fly them so far away from each other that light takes an hour just to get from one spaceship to the other. 3. Have your observers in each spaceship read their clocks. There are multiple ways to do this, but in this case we'll prearrange for the observers to use identical orientations of their slot readers. For the rest of this discussion, we'll assume their slot readers are set to 3 o'clock. Recall that when a slot reader is set to 3 o'clock, the final reading will always be either 3 o'clock (pure black) or 9 o'clock (pure white). That's because reading the clock causes it to reset (rotate) based on how much gray is seen through the slot. It is those new pure black or pure white values that become the final readings of the clocks. Workin' 12 to 6, What A Way to Make a Livin' Now let's focus on a particular subset of correlated fuzzy clocks, which is the ones that were set originally to either 12 or 6. What happens to these clocks when they are read by the 3 o'clock slot readers on both spaceships? Recall that any clock value initially set to 12 or 6 will for a 3 o'clock slot reader show the shade of gray that results in a 50/50 toss-up. So, half of these clocks will end up with pure black at the slot position (3 o'clock), and the other half with pure white (9 o'clock). Let's assume that ship A reads one of these 12-or-6 clocks and gets a value of pure black, meaning that it has been reset to 3 o'clock. What can the observer say then about what the other spaceship will see when looking at the correlated clock in the same way? Losing It Well... nothing, really. From the perspective of the observer, this worst-case scenario of 50/50 random reassignment has completely erased any information that would have been available about the time on the other fuzzy clock. So, all the observer on ship A can say for this group of clocks is "since this is the 12-or-6 clock group, ship B will have a 50/50 chance of reading black or white." Which is exactly correct: Ship B will get just as random a result in this case as ship A did. The correlation that potentially existed was in effectively erased by the nature of the reading procedure, so neither ship can say anything about what the other would have seen. That's the classical case: No correlation -- no predictability -- is possible between ships for the 12-or-6 clock pairs analyzed using 3 o'clock slots. Finding It So, what if the clocks are quantum entangled instead of just sharing a correlated past? Easy: When the observer on ship A sees pure black at 3 o'clock for a 12-or-6 clock pair, she knows that the observer on ship B will see pure white. Always. 100%. Oops. Um... how exactly did that happen? 
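For readers who want to see the quantum prediction behind this stated in numbers, here is a minimal Python sketch. It is an editorial illustration, not part of the clock story itself: it assumes the pair is prepared in the rotationally invariant singlet state (the usual idealization behind this kind of perfect anti-correlation) and checks that same-axis readings always come out opposite, whatever the common analyzer angle is.

```python
import numpy as np

# Spin-1/2 "up" eigenstate along an axis tilted by angle theta in the x-z plane.
def up_along(theta):
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

# Rotationally invariant singlet state (|01> - |10>)/sqrt(2).
singlet = (np.kron([1, 0], [0, 1]) - np.kron([0, 1], [1, 0])) / np.sqrt(2)

theta = 1.234                                    # any common analyzer angle for both ships
plus, minus = up_along(theta), up_along(theta + np.pi)

for a, label_a in [(plus, "A:+"), (minus, "A:-")]:
    for b, label_b in [(plus, "B:+"), (minus, "B:-")]:
        p = abs(np.kron(a, b) @ singlet) ** 2    # joint outcome probability
        print(label_a, label_b, round(p, 3))
# Prints 0.0 for the two "same" outcomes and 0.5 for the two "opposite" ones,
# for every choice of theta: same-axis readings are perfectly anti-correlated.
```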
Spooky Is as Spooky Does Spooky action at a distance remains a pretty good name for it, because I guarantee you are not going to be able to construct a meaningful explanation for it in terms of actual experimentally accessible parameters. Nor is it a hypothetical effect. Real examples of this effect are always more complicated than the intentionally extreme version I've used here, but it doesn't get any less weird. John Bell was the fellow who first figured out that this effect was real and testable, decades after minds as great as Einstein and Bohr came very close to it but missed seeing the opportunity. The fuzzy clocks provide a quite physical image of what has to happen. When one of the two spaceships A or B analyzes their clock, they cause it to reset (rotate) to the new time enforced by their slot position, e.g. from 12-or-6 to 3-or-9. In classical physics, that's the end of it. Each disk rotates into its new position locally and without any connection to the other disk. In entangled physics, the act of resetting the disk in A or B disturbs a very unforgiving conservation law, in this case the conservation of angular momentum (but other laws can also be used). It turns out that the universe is so unforgiving for such absolute conservation rules that issues such as the speed of light become secondary to ensuring that the quantity is absolutely conserved. So, spooky-style, the universe as a whole does not allow you to reset just one of the entangled disks, which would cause a slight non-conservation of angular momentum. Instead, when you must reset both. So, when A analyzes her 12-or-6 clock with a 3 o'clock slot analyzer, she ends up resetting both disks to the new 9-or-3 orientation. All of this happens "instantly," even across light-years, whatever "instantly" means in such cases. (It doesn't really mean much when entanglement is involved, which is why I usually avoid such terminology.) The Bottom (Entangled) Line There are many ways to get lost in the weeds in all of this. However, entanglement in terms of a "something" that instantly resets the options available to distant events, even while prohibiting the conveyance of conventional information (a point I've skipped over), is both quite real experimentally and quite weird conceptually. It's one of those little mysteries of the universe that is still worth contemplating from time to time. - 1 Your clocks not have conservation laws in them. It is conservation laws in the quantum mechanical system that assure if you find a -1/2 spin at -infinity the partner will have +1/2 at +infinity. It momentum conservation that says one has gone to - infinity and angular momentum conservation that tags the other once one is seen. Unless you consider conservation laws spooky? – anna v Feb 25 at 9:10 Anna, thanks: The clocks are hokey constructions intended only to capture in physical form the implications of an entangled pair of spin $\frac{1}{2}$ particles, e.g. an electron and positron created by a perfect two-gamma collision. The clock itself exhibits no entanglement, any more than a bra-ket notation does. And I think I agree completely and enthusiastically with you that "it is conservation laws that assure ... etc."? Gugg, thanks, I'll look this evening. – Terry Bollinger Feb 25 at 17:20 1 Link mangling is my forte, it's sort of like an exercise in Internet entanglement don't you know... 
:) – Terry Bollinger Feb 25 at 17:38 The "big deal" seems to be that, due to Bell's theorem* and "given" quantum mechanics, we can only choose between non-locality ("spooky action at a distance") being true and/or counterfactual definiteness being violated (possibly implying no "free will", whatever that means), if we want to choose at all. The first is "unintuitive" and (the possible implication of) the second is, well, a "big deal" for many people (including at least some scientists who argue that implicitly science relies on "free will"). *"You can't argue with a theorem." - Here is the answer that made me realise what the big deal is. The description below is basically an expanded version of this blog post, which I came across a long time ago. Imagine we are going to play a game. It's a cooperative game, so we'll either both win or both lose. If we win, we get lots of money, but if we lose we both die, so we should do our best to win. The game is played as followed: you will be taken on a spacecraft to Pluto, whereas I will stay here on Earth. When you arrive at Pluto, someone will flip a fair coin. Depending on its result, they will ask one of the following two questions: 1. Do you like dogs? 2. Do you like cats? You will then have to answer "yes" or "no". At the same moment, someone on Earth will flip a different fair coin and ask me one of the same two questions based on its result. The rules of the game are slightly strange. They are as follows: we win the game if we each give a different answer from the other, unless we're both asked about cats, in which case we have to give the same answer as each other in order to avoid losing. Since we're several light-hours apart there's no way we can communicate with each other during the game, but we can spend as long as we like discussing strategies before we go, and each of us can take anything we want along with us to help us answer the questions. Now, with a little bit of thought you should be able to convince yourself that in a classical world, the best we can do is to have a $75\%$ chance of winning the game. To do this, we just agree that no matter which question we're asked, you'll say "yes" and I'll say "no". If we do this, we'll win unless we both get asked about cats, and the probability of that happening is 1 in 4. It doesn't matter what we take with us - as long as it behaves according to the familiar rules of classical mechanics, it can't help us do any better than this simple strategy. In particular, it doesn't make any difference if we each take a hidden object with us, which we later measure in some way. However, in a quantum world things are slightly different: we can win the game $85.3\%$ of the time. I'm not going to go into the details of exactly how we achieve this, but it involves creating an entangled pair of particles, of which you take one and I take the other. Depending on whether you're asked about cats or dogs, you make one of two different measurements on your particle, and I do something similar. It just works out according to the rules of quantum mechanics that if we follow this procedure correctly, we'll win this game with a probability of $\cos^2(\pi/8)$ , or $85.3\%$. Many experiments that are equivalent to this game have been performed (they're called Bell test experiments) and the game is indeed won $85\%$ of the time. 
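To see numerically where the $85.3\%$ comes from, here is a small Python sketch. It is an illustration added here, not part of the game description above: up to relabelling of the answers, the cat/dog game is the standard CHSH game, and the sketch evaluates the singlet correlation $E(a,b)=-\cos(a-b)$ at one standard choice of measurement angles (the specific angles are an assumption made for the example).

```python
import numpy as np

# Correlation of +/-1 outcomes on a spin singlet when the two players
# measure along angles a and b: E(a, b) = -cos(a - b).
def E(a, b):
    return -np.cos(a - b)

# One standard choice of settings (illustrative assumption).
a0, a1 = 0.0, np.pi / 2            # first player's two settings
b0, b1 = np.pi / 4, -np.pi / 4     # second player's two settings

S = abs(E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1))
print(S)                         # ~2.828 = 2*sqrt(2); classical strategies are capped at 2
print(0.5 + S / 8)               # ~0.8536, the quantum winning probability
print(np.cos(np.pi / 8) ** 2)    # the same number, cos^2(pi/8)
```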
There are other games that can be constructed, which are slightly more complicated to explain, where using entanglement allows you to win $100\%$ of the time, even though in the classical world you can't avoid losing some of the time. A paper describing such a game (among other examples of such quantum games) can be found here. This is why entanglement is a big deal. It allows us to make things be correlated in this sort of way slightly more than they can be correlated in the classical world. It allows us to do something that wouldn't possible if entanglement didn't exist. As an aside, there's another reason why entanglement is a bit weird: in the cats and dogs game, why does entanglement only allow us to win $85\%$ of the time and not $100\%$? It turns out that you can invent universes with "alternative physics" in which this game can be won $100\%$ of the time, while still not letting information be transmitted faster than light, but in our universe, $85.3\%$ is the maximum possible score. The reason why entanglement should be limited in this way is an open question in the foundations of quantum mechanics. - Your mistake in interpreting entanglement as mere correlation is a very common one. In fact, Einstein's whole argument against quantum mechanics in EPR was to restore causality to quantum mechanics by interpreting entanglement as nothing more than a pre-existing correlation. However, Bell showed that this fails. Unfortunately, many people, Lubos Motl for example, have failed to understand this insight half a century after Bell's discovery. The entangled particles have to get their instructions on how to behave somehow, and this has to happen instantaneously. - @Gugg The Consistent histories authors (Griffiths) humiliate themselves and demonstrate that they have no clue at any level what they are talking about in the foundations of quantum mechanics. They give an example of different colored strips of paper, when you see one strip, you know the strip of the other paper. They haven't the slightest clue what Bell's theorem means, because Bell's theorem exactly means that this analogy is completely wrong. They don't understand Bertlemann's socks at all. Nature has spooky action at a distance. – user7348 Feb 26 at 19:14 @Gugg, I would like to mention that I am not the professor in that video. – user7348 Feb 26 at 19:16 – Gugg Feb 26 at 19:31 – user7348 Feb 26 at 19:47 Consider this. Bell's theorem depends on the assumption of counterfactual definiteness (CFD). This CH doesn't have CFD, so Bell's theorem doesn't rule it out. Bell's theorem actually proves that every type of quantum theory must necessarily violate either locality or CFD. It's not that it's invalid, it simply doesn't apply. How about that? – Gugg Feb 26 at 20:52 show 6 more comments there is no big deal. Usually people who don't uderstand it will tell you that it is a big deal... Let's say,you have 2 objects and observable with only two eigenvalues. One object is in state "+1" and the other "-1". The world that these objects live in has a rule that the sum of all these values is constant (zero in this case). Let's imagine that these object collide (interact in a manner that this observable can be changed). Now, the best guess (if you don't know any details) is just to assume that the system is in state "+1"×"-1" or "-1"×"+1". And that's it. If you look at the one object and determine the state, then you immediatelly know the state of the other, because of the conservation rule. 
What is kind of a big deal (but I'd rather say that it's just 'cool' and not a big deal), that there are states that preserve corelations for multiple observables ("+-"-"-+" spin state if measured along any axis will always produce correlated results). - 1 This is just plain, empirically wrong. The whole point of Bell's theorem is that you cannot reproduce quantum mechanical predictions with a local hidden variable model like the one you described! It doesn't suffice to look at how measurement results are correlated for any given, arbitrary measurement axis as you have suggested. You have to look at correlations when the two observers vary their measurement axes relative to each other. Please read the Wiki link, you will learn something cool; you might even consider it a 'big deal' after all! – Mark Mitchison Feb 26 at 22:41 But this doesn't change the fact, that allowed pure states that will make together the mixed state after collision must follow symmetries of the world... If you have two eletrons, one spin up, one spin down, than total angular momentum is zero and there is nothing you can do about it... I might not have expressed myself clearly - the conserved quantity determines allowed states. – asdf Feb 27 at 16:07 Sorry, but this answer demonstrates a complete failure to understand the difference between entanglement and classical correlations. This is made all the worse by the statement: "Usually people who don't uderstand it will tell you that it is a big deal...", when you clearly don't understand it at all. There is much more to entanglement that simply conservation laws being respected on-shell. Please read up some more about Bell's theorem, I'd recommend Bell's book "Speakable and Unspeakable...". – Mark Mitchison Feb 27 at 16:26 Bell's theorem will tell you what will be measured when you HAVE the state to begin with. It doesn't tell you how the state is chosen in the first place... – asdf Feb 27 at 17:14 Just to show what I meant - example: Let's assume that we have 2 particle system with Hamiltonian(Ss are spin operators)... H=Sx×Sx+Sy×Sy. Comutators [H,Sz×I] and [H,I×Sz] are non-zero, but [H,Sz×I+I×Sz]=0, that means that any unitary process will preserve sum of the spins along z axis, but individuals spins are not conserved. If there is a colision and we don't know any details, than we must assume the state with the maximum entropy, BUT we have to take into account our knowledge of the conserved quatity. – asdf Feb 27 at 18:04 show 4 more comments
http://mathoverflow.net/questions/22073/a-simple-generalization-of-the-littlewood-conjecture
## A Simple Generalization of the Littlewood Conjecture The Littlewood Conjecture asserts that for all real numbers $r$ and $s$, and for every `$\epsilon > 0$`, the inequality `$|x(rx-y)(sx-z)| < \epsilon$` is solvable in integers $x, y, z$ with `$x > 0$`. The Littlewood conjecture is clearly a consequence of the following: For all real numbers $r$ and $s$, and for every `$\epsilon > 0$` the inequalities `$|x(rx-y)| < 1, \,\,|sx-z| < \epsilon$` are solvable in integers $x, y, z$ with `$x > 0$`. Does anyone know a counter-example to the latter statement? Does anyone know of any references to it in the literature? Note that the inequality `$|x(rx-y)| < 1$` always has infinitely many solutions $(x,y)$ with `$x > 0$`. This is a consequence of Dirichlet's Approximation Theorem. So it is natural to ask: "How does $sx$ behave mod 1 as $(x,y)$ runs through the solutions of `$|x(rx-y)| < 1$`?" For example, can the closure of the $sx$ mod 1 contain some non-empty open set and be disjoint from another? Experiments with Sage seem to "suggest" that the numbers $sx$ are either dense mod 1 or have just finitely many limit points mod 1, depending on whether the numbers $1,r,s$ are linearly independent over the rationals. Again, any counter-examples or references relevant to this statement would be appreciated. - ## 2 Answers $\vert x(r x-y)\vert <1$ implies that $y/x$ is a convergent of the continued fraction expansion of $r$. This can be used to construct a counter-example as follows. Consider for $r$, for example, a quadratic irrational real number with constant continued fraction $[l,l,l,l,\dots]$ for a fairly large $l$. Let $d_1 < d_2 < d_3,\dots$ be the sequence of denominators appearing in the convergents of $r$. Consider $s$ of the form $s=1/2\sum_{n=1}^\infty \alpha_n/d_n$ with $\alpha_i\in\{0,1\}$ recursively defined such that the distance of $d_i(1/2\sum_{n=1}^i\alpha_n/d_n)$ to the nearest integer is $\geq 1/4$. This implies that the distance of $d_i(1/2\sum_{n=1}^\infty \alpha_n/d_n)$ to the nearest integer is $>1/4-\epsilon$ for $l$ large enough. Indeed, the sequence $d_1,d_2,\dots$ grows roughly like a geometric sequence with ratio $l$. This implies that $d_i(1/2\sum_{n=i+1}^\infty \alpha_n/d_n)$ is at most of absolute value roughly given by $1/(2l(1-1/l))=1/(2l-2)$, which can be made arbitrarily small by choosing $l$ large. - Roland, It is not quite true that the convergents of the continued fraction expansion of $r$ coincide with the solutions of `$|x(rx-y)| < 1$` in the way you describe. Non-convergents can satisfy such an inequality and moreover there can be solutions $(x,y)$ with $x$ and $y$ not relatively prime. See Rockett and Szusz, Continued Fractions, Section II.5. Does this fatally break the estimate given in the last two sentences of your argument? I'm not sure... but it needs another look. (Note: This is a corrected version of my initial comment) – SJR Apr 22 2010 at 6:08 You are of course right. My attempts to fix the resulting complications were unsuccessful. Perhaps it is more promising to consider for $r$ the golden number, for which my claim (without error on my part) is indeed correct. Unfortunately it is too small to give useful error-terms and it is difficult to push through the end of the argument.
– Roland Bacher Apr 22 2010 at 15:14 I can't quite follow what your argument is, but I would point out that it is a known fact that if a sequence $d_1,d_2,\dots$ grows geometrically (in the strong sense that there is a positive constant $c$ such that all ratios are at least $1+c$) then there must exist a real number $s$ and a positive $\epsilon$ such that for every $n$ the distance from $sd_n$ to the nearest integer is at least $\epsilon$. – gowers Apr 22 2010 at 16:01 But that settles it! As I think Roland was suggesting, if $r=(1+5^{1/2})/2$, then the positive values of $x$ that appear in solutions to the inequality `$|x(rx-y)| < 1$` are exactly the Fibonacci numbers, which grow exponentially in the sense you describe. Then the $s$ of your known fact gives a counter-example to my generalization of Littlewood's conjecture. Maybe for ANY irrational $r$ the $x$'s will grow exponentially, but I'm not sure about this. I think I see how to attack the proof of your "known fact". Is there a name attached to it? Thanks. – SJR Apr 22 2010 at 17:26 @gowers: Do you have a reference for the "known fact" of your last comment? I can prove it if the ratio $d_{n+1}/d_n$ is at least $2+c$, but "$1+c$" is giving me trouble. – SJR Oct 11 2010 at 15:11 I can't find the paper online, but this looks rather like a question that is answered by a paper of Pollington and Velani. Here is a link to an abstract of the paper. (It may be clear from the abstract that they answer your question -- I am feeling lazy and so have not checked.) http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=121841 - @gowers: Thanks for the reference. The theorem mentioned in the abstract doesn't seem to settle my question, at least directly... Maybe I can use the result mentioned in the abstract to push through an argument like the one that Roland gave? – SJR Apr 22 2010 at 3:30 Hmm, I have now thought a bit more and I think this paper is irrelevant -- though it is interesting in its own right. I like your question too. – gowers Apr 22 2010 at 11:58
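For anyone who wants to reproduce the kind of Sage experiment mentioned in the question, here is a rough Python sketch. The choices of $r$ (the golden ratio) and $s=\sqrt 2$ are purely illustrative; for the golden ratio the convergent denominators are the Fibonacci numbers, which, as noted in the comments above, are exactly the positive $x$ occurring in solutions of $|x(rx-y)|<1$, so the sketch simply prints $sx$ mod 1 along that sequence.

```python
# Rough reproduction of the experiment described in the question
# (illustrative parameters only, not the original Sage code).
def convergent_denominators(partial_quotients):
    """Denominators q_0, q_1, ... of the convergents of [a0; a1, a2, ...]."""
    qs = []
    q_prev, q = 1, 0            # q_{-2} = 1, q_{-1} = 0
    for a in partial_quotients:
        q_prev, q = q, a * q + q_prev
        qs.append(q)
    return qs

r = (1 + 5 ** 0.5) / 2          # golden ratio, continued fraction [1; 1, 1, ...]
s = 2 ** 0.5                    # an arbitrary illustrative choice of s

for x in convergent_denominators([1] * 25):   # Fibonacci numbers
    print(x, round((s * x) % 1.0, 4))
```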
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 47, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.947236955165863, "perplexity_flag": "head"}
http://gilkalai.wordpress.com/2012/01/18/a-theorem-about-infinite-cardinals-everybody-should-know/?like=1&source=post_flair&_wpnonce=bbf4ba6a50
Gil Kalai’s blog
## A Theorem About Infinite Cardinals Everybody Should Know
Posted on January 18, 2012
Cantor proved, and we all know, that for every cardinal $\kappa$ we have $2^{\kappa}>{\kappa}.$ This is a very basic fact about cardinal arithmetic, and it is nice that the proof works for finite and infinite cardinals equally well. (For the finite case it seems that Cantor’s proof is genuinely different from the ordinary proof by induction.) Do you know some other results about the arithmetic of infinite cardinals? We know that there are many statements that are independent of ZFC, the axioms of set theory, but are there some results which can be proved unconditionally? Here is a theorem of Shelah. For simplicity we will assume the special continuum hypothesis $2^{\aleph_0}=\aleph_1$. Theorem: $\prod_{i=0}^{\infty}\aleph_i<\aleph_{\omega_4}.$ Here $\omega_4$ is the first ordinal which corresponds to $\aleph_4$. Remark: without assuming the special continuum hypothesis, if $2^{\aleph_0}=\aleph_{\alpha}$ then the theorem asserts that $\prod_{i=0}^{\infty}\aleph_{\alpha+i}<\aleph_{\alpha+\omega_4}.$ Want to know more? Read Uri Avraham and Menachem Magidor’s chapter on Cardinal Arithmetic.
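As a quick reminder, Cantor’s argument (sketched here for completeness) runs as follows: given any map $f:A\to \mathcal{P}(A)$, the diagonal set $D=\{a\in A : a\notin f(a)\}$ cannot equal $f(a_0)$ for any $a_0\in A$, since $a_0\in D$ if and only if $a_0\notin f(a_0)$; so no $f$ is onto, and hence $2^{\kappa}>\kappa$ for finite and infinite $\kappa$ alike.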
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 9, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.919381856918335, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/21610-rates-change-print.html
rates of change • October 29th 2007, 08:59 PM mer1988 rates of change Sand falls from a hopper at a rate of 0.6 cubic meters per hour and forms a conical pile beneath. Suppose the radius of the cone is always half the height of the cone. (a) Find the rate at which the radius of the cone increases when the radius is 2 meters. dr/dt = meters per hour (b) Find the rate at which the height of the cone increases when the radius is 2 meters. dh/dt = meters per hour and A ruptured oil tanker causes a circular oil slick on the surface of the ocean. When its radius is 150 meters, the radius of the slick is expanding by 0.1 meter/minute and its thickness is 0.08 meter. (a) At that moment, how fast is the area of the slick expanding? (b) The circular slick has the same thickness everywhere, and the volume of oil spilled remains fixed. How fast is the thickness of the slick decreasing when the radius is 150 meters? • October 29th 2007, 11:12 PM kalagota Quote: Originally Posted by mer1988 Sand falls from a hopper at a rate of 0.6 cubic meters per hour and forms a conical pile beneath. Suppose the radius of the cone is always half the height of the cone. (a) Find the rate at which the radius of the cone increases when the radius is 2 meters. dr/dt = meters per hour (b) Find the rate at which the height of the cone increases when the radius is 2 meters. dh/dt = meters per hour a) $V_{cone}=\frac{1}{3} \pi r^2 h$ note that $h=2r$ and $\frac{dV}{dt}=0.6\ m^3/hr$ so $V_{cone}=\frac{1}{3} \pi r^2 h=\frac{2}{3}\pi r^3$ $\frac{dV}{dt}=\frac{6}{3}\pi r^2 \frac{dr}{dt}=2\pi r^2 \frac{dr}{dt}$ then evaluate at $r=2$ to solve for $\frac{dr}{dt}$ b) $h=2r$, then $\frac{dh}{dt}=2\frac{dr}{dt}$ Quote: Originally Posted by mer1988 and A ruptured oil tanker causes a circular oil slick on the surface of the ocean. When its radius is 150 meters, the radius of the slick is expanding by 0.1 meter/minute and its thickness is 0.08 meter. (a) At that moment, how fast is the area of the slick expanding? (b) The circular slick has the same thickness everywhere, and the volume of oil spilled remains fixed. How fast is the thickness of the slick decreasing when the radius is 150 meters? a) $A_{slick}=\pi r^2$ $\frac{dA}{dt}=2\pi r \frac{dr}{dt}$ use the given values to solve for the rate. b) $V_{slick}=\pi r^2 h$ this is just similar to the first item; the only difference is that this is a circular cylinder while the first one was a cone.
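For completeness, carrying the arithmetic through for the two part (a)'s (this last step is left to the reader in the replies above): from $\frac{dV}{dt}=2\pi r^2\frac{dr}{dt}$ with $\frac{dV}{dt}=0.6$ and $r=2$, $\frac{dr}{dt}=\frac{0.6}{8\pi}\approx 0.024$ meters per hour, and so $\frac{dh}{dt}=2\frac{dr}{dt}\approx 0.048$ meters per hour; from $\frac{dA}{dt}=2\pi r\frac{dr}{dt}$ with $r=150$ and $\frac{dr}{dt}=0.1$, $\frac{dA}{dt}=30\pi\approx 94.2$ square meters per minute.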
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9256203770637512, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?t=387947&page=3
## Could Dark Energy be Dark Matter cooling? This is amazing - I posted this question 2 years ago and then forgot about it - and just found it in my bookmarks... I love that a simple question has generated 3 pages of good debate from clever people... Job done! Thanks all for the replies. In the careers thread I always mention that I've been fascinated with the politics and economics of astrophysics, and the HET is one of the things that got me interested in that. Let's go back to the 1980's. Oil was super expensive, Texas was getting flooded with money, and people at the universities there were talking about using oil money to make some enormous telescopes. Then the boom collapsed, and people looked at the money available and there wasn't enough to build the perfect telescope. So then what people did was to figure out how to maximize science and minimize cost. Most of the cost of a telescope turns out to be in the mount. You have a multi-ton piece of metal and then you have this big machine to point the telescope at what you want to point it at. Another big cost is the internal mirrors. In order to get an image you need to mount a giant mirror inside, and that makes things expensive. So people figured out that instead of making a big expensive general telescope, you could with very little cost make a telescope that's really cheap but is good at one thing, which is doing spectroscopic surveys. Now if you are doing galaxy surveys, the cool thing is that you don't care where you point your telescope. You point your telescope at some random part of the sky and if it's not getting blocked by the Milky Way, then any random part of the sky is as good as any other random part. So you remove the ability to point the telescope everywhere and that saves money since you don't have huge motors. Now you are doing spectroscopy. That means no images. Without images you can then use fiber optics to move light from the top of the telescope down to the instruments. Again, lots of savings. Since you don't have to spend money pointing the telescope or dealing with internal mirrors you can use some of that to make the big light bucket at the bottom bigger. What's really cool is that all of these decisions were being made in the late-1980's and early-1990's, before people had even dreamed of dark energy. It turns out that HET is the perfect instrument for studying dark energy, but none of the designers realized that. Gosh. It's really not *that* complicated. We already know one "dark matter" particle: it's the neutrino. It has mass, but does not interact electromagnetically. If a narrow beam of one solar mass of neutrinos were to fly through the Solar System, we wouldn't see it, but sure as hell we would feel its attraction. (Thankfully, neutrinos never travel in such humongous massive tight beams.) So dark matter not merely CAN exist, we know it DOES EXIST, at least a part of it, and we are pretty confident what it is. It is not mysterious. But since we also are pretty sure that (known) neutrinos alone can't explain what we see, it's not too difficult to postulate that there exist other similar particles: ones which have mass, but do not interact electromagnetically. To match observations, we postulate that these particles are more massive than (known) neutrinos. So why are you guys so freaked out by "mysterious" dark matter? Are you feeling the same about neutrinos? I think not...
Quote by nikkkom But since we also are pretty sure that (known) neutrinos alone can't explain what we see, it's not too difficult to postulate that there exist other similar particles: ones which have mass, but do not interact electromagnetically. To match observations, we postulate that these particles are more massive than (known) neutrinos. It is in fact *very* difficult to postulate that there are other similar particles. Basically you start off with one equation that describes how particles behave. It's very, very difficult to add a new particle without causing the equation to either predict things that we don't see or be inconsistent theoretically. One particular problem with massive particles is that massive particles will decay into less massive particles unless there is something that prevents that from happening. For example, there is something called baryon number. Because the proton is the lightest particle with a non-zero baryon number, it can't decay to something lighter. However heavier particles can and do decay into the proton. So if you invent a "heavy particle" you are going to have to mathematically describe how that particle interacts with other particles, and this is rather difficult to do without tripping of something that we already know So why are you guys so freaked out by "mysterious" dark matter? Are you feeling the same about neutrinos? I think not... Everything is easy until you know why it's hard. The bottom line is that you just can't randomly add a particle. Adding a particle is like adding an element in the periodic table. If you want to add something at the end, no problem. If you want to add something between carbon and nitrogen, big problem. You can graph the known particles and the form a nice chart. There is no obvious place to put another particle. You can assume that there is a heavier neutrino, but that means you need a heavy quark and all of that violates experiments that say that you have only three families of particles. Quote by twofish-quant It is in fact *very* difficult to postulate that there are other similar particles. Basically you start off with one equation that describes how particles behave. It's very, very difficult to add a new particle without causing the equation to either predict things that we don't see or be inconsistent theoretically. You did not understand me. I am not saying that to add a particle to the Standard Model is very easy. I am somewhat familiar with the math involved, I know that it's not trivial. I am saying that some people seem to think that postulated dark matter is a very unusual kind of matter we never saw before, and thus they find it hard to believe it may be a viable theory. But dark matter is not something unlike we ever saw before - neutrinos are similar to it, and we know about neutrinos for what, 80 years already. Quote by twofish-quant You can graph the known particles and the form a nice chart. There is no obvious place to put another particle. You can assume that there is a heavier neutrino, but that means you need a heavy quark and all of that violates experiments that say that you have only three families of particles. Well, how about right-handed, so-called "sterile neutrinos"? Seesaw mechanism which gives them large mass? People are working on such models right now... Quote by nikkkom I am saying that some people seem to think that postulated dark matter is a very unusual kind of matter we never saw before, and thus they find it hard to believe it may be a viable theory. 
Dark matter passes the principle of "least weirdness". It's weird but everything else is even weirder. But dark matter is not something unlike we ever saw before - neutrinos are similar to it, and we know about neutrinos for what, 80 years already. We know that ordinary neutrinos are *not* similar to dark matter. You can invent something about weird neutrinos. One other thing is that even if you restrict yourself to baryonic matter, most of that material is dark. Well, how about right-handed, so-called "sterile neutrinos"? Seesaw mechanism which gives them large mass? People are working on such models right now... Yes, exactly.... http://arxiv.org/pdf/1102.4774.pdf http://arxiv.org/abs/1204.3902 But the point here is that you just can't invoke a new particle. Every time you invoke a new particle you have to do a ton of work to justify that new particle. Quote by twofish-quant We know that ordinary neutrinos are *not* similar to dark matter. Are you argumentative a-hole or something? I am not going to argue what level of similarity is required to qualify for word "similar". If you think neutrinos are sufficiently different from hypothetical dark matter particles (they have different mass! wooo hooo) so that word "similar" can't be applied, feel free to think that way. I don't care. One other thing is that even if you restrict yourself to baryonic matter, most of that material is dark. "Dark matter" is a misnomer. It is not in fact dark, it seems to be transparent. Baryonic matter is not. Even as a dilute gas, it is detectable by observations in EM. Recognitions: Gold Member Science Advisor Neutrinos are very 'hot' compared to dark matter. The only DM models that appear to work require non-relativistic velocities. Calm down. Quote by nikkkom If you think neutrinos are sufficiently different from hypothetical dark matter particles (they have different mass! wooo hooo) so that word "similar" can't be applied, feel free to think that way. I don't care. This is the problem with these sorts of arguments. You see two things and they are "similar". I see the same two things and they aren't. Now there are reasons why neutrinos don't look the same as other particles to me is that I did a lot of research on neutrino radiation hydrodynamics. To me saying that the dark matter particle and neutrinos are similar because they both interact with the weak force only is like saying that a bowling ball and an orange are similar because they are both round. This doesn't make much sense to a professional bowler or an orange grower. Now I'm not going to get annoyed if someone says that things look similar. Just don't get too annoyed at me if I tell you that they don't look similar to me, because they don't. Remember ***you*** are the one that asked: So why are you guys so freaked out by "mysterious" dark matter? Are you feeling the same about neutrinos? I think not.. Now I answer the question by saying, WIMP's may look like neutrinos to you, but to someone that has researched neutrinos for a decade, they look very different to me. Now if they look the same to you, that's fine, but you asked the question. "Dark matter" is a misnomer. It is not in fact dark, it seems to be transparent. Baryonic matter is not. Even as a dilute gas, it is detectable by observations in EM. If it's not ionized or not in compact bodies. Ionized hydrogen is quite difficult to detect. 
One interesting thing is that if you add up all of the baryonic matter that we can account for, it's still much, much less than the amount that we infer is out there from cosmology. Quote by JcX The idea of dark matter comes from the mathematical model of gravitational force. Scientists suggest that there must be matter that we don't see in our universe, maybe between galaxies, that exerts the gravitational effects that we feel. The content/percentage of dark matter is chosen to fit and balance all the gravitational effects that have been observed in the universe (expansion of the universe, perturbation of orbits, etc). We predicted the existence of it, but we can't be really sure of what dark matter is. Some theories, such as those which suggest a multi-dimensional universe, say that dark matter is actually matter in another universe. String Theory says that only gravitons are able to escape from the membrane of the dimension, hence only gravitational forces are able to penetrate between dimensions. Living in this dimension, we can feel the gravitational effects from other dimensions, but we don't see them; that's why we call them Dark Matter. That's one of the explanations that I liked a lot... although... it's hard to imagine. Is it? I was led to believe that it was more of an observational find rather than resolving a mathematical discrepancy. We are trying to account for the mass surrounding the centers of galaxies. Basically, astronomers assumed that mass density (hence radial speed) would decrease as the distance from the galactic center increases; however, to their surprise it did not and stayed fairly constant. There are various contenders for DM, mainly WIMPs and MACHOs; however, recent studies point towards WIMPs.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9638418555259705, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/176269-find-limit.html
Thread: 1. Find this limit $\displaystyle \frac{x^{-2} - 3^{-2}}{x-3}$: find the limit as $x$ approaches 3. I've tried changing the exponents into radical form so I can multiply the top and bottom by the conjugate, but I can't seem to make it work. Can someone solve this for me please? 2. $\displaystyle \frac{x^{-2} - 3^{-2}}{x - 3} = \frac{\frac{1}{x^2} - \frac{1}{3^2}}{x - 3}$ $\displaystyle = \frac{\frac{3^2 - x^2}{3^2x^2}}{x - 3}$ $\displaystyle = \frac{(3 - x)(3 + x)}{3^2x^2(x - 3)}$ $\displaystyle = \frac{-(x - 3)(3 + x)}{3^2x^2(x - 3)}$. Go from here.
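Finishing the step the reply leaves open (for completeness): cancel the common factor $x-3$ and let $x\to 3$, $\displaystyle \lim_{x\to 3}\frac{x^{-2}-3^{-2}}{x-3} = \lim_{x\to 3}\frac{-(3+x)}{3^2x^2} = \frac{-6}{81} = -\frac{2}{27}$.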
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9237836003303528, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/89584/prove-or-disprove-validity-forall-x-exists-y-px-supset-qy-supset
Prove or disprove validity: $(\forall x \exists y (P(x) \supset Q(y))) \supset(\exists y \forall x (P (x) \supset Q(y)))$ I have been working on this formula $(\forall x \exists y (P(x) \supset Q(y))) \supset (\exists y \forall x (P (x) \supset Q(y)))$ to either prove or disprove it. First, I was tempted to disprove it, but I changed my mind. I wrote down "for all x there exists some y satisfying the corresponding condition", and "there exists some y such that for all x the corresponding condition is satisfied." I think these statements refer to the same idea. Any suggestions? - If you replace the specific clause $P(x)\implies Q(y)$ with an arbitrary formula $\Phi(x,y)$, this formula isn't generally true. The left-hand side means "for every specific $x$, we can find some $y$ that makes $\Phi(x,y)$ true". The right-hand side means "there is a particular $y$ that makes $\Phi(x,y)$ true regardless of $x$". This is a much stronger statement in general. In other words, the two statements don't refer to the same idea in general. – mjqxxxx Dec 8 '11 at 15:05 – Oleksandr Kozlov Dec 8 '11 at 16:31 I have looked at Confused between Nested Quantifiers, but there is only one predicate. I think this is different. – mert Dec 8 '11 at 21:47 There are cases where P(x) and Q(y) have different truth values. – mert Dec 8 '11 at 21:53 2 Answers In the usual context of nonempty structures, the statement is valid, meaning that it is true in all models. One way to see this is that if there is any $y$ in the structure for which $Q(y)$ holds, then the statement is true, since we may always choose that $y$, and this will make $Q(y)$ true and hence the final implication true, regardless of any $x$, and so the whole implication is true. Otherwise, we are in the case where $Q(y)$ is always false, in which case $P(x)\to Q(y)$ is logically equivalent to $\neg P(x)$, and so the quantification over $y$ becomes irrelevant (provided the structure is nonempty). So again the statement is true. So the statement is true in any nonempty structure. Edit. Meanwhile, in the context of first-order logic allowing the empty structure (which is a bit unusual, and which is only possible if your language has no constant symbols), the statement is not valid, since it is false in the empty structure. This is because all universal statements hold vacuously in the empty structure and all existential statements fail in the empty structure, so the antecedent is true and the conclusion is false, and so the implication fails in the empty structure. Conclusion: your statement is valid if the language has constant symbols; valid if your logic disallows the empty structure; but otherwise it is not valid. - I need to prove its validity by proof theory. – mert Dec 8 '11 at 12:48 Then you need to consult the course instructor, since he (not us) knows the rules for proofs that you can use. – GEdgar Dec 8 '11 at 13:30 Ah, I had understood instead that you wanted us merely to prove the statement, which I did. If you want a formal derivation of it in some formal proof system, then we would need to know the details of your system; there are many proof systems. If your proof system is complete, then the existence of a formal derivation follows from the statement's validity. So it seems that you want an actual formal derivation (which I rarely find illuminating, and so I will leave this). But my argument suggests an approach: first derive $(\exists y Q(y))\vee(\forall y \neg Q(y))$ and then break into cases.
– JDH Dec 8 '11 at 13:43 $(\forall x \exists y (P(x) \supset Q(y))) \supset (\exists y \forall x (P(x) \supset Q(y)))$ The formula is not valid. Consider the following idea: $P^M$ is the relation that {x : x moves first} $Q^M$ is the relation that {x : x wins the game} For the left part of implication $(\forall x \exists y (P(x) \supset Q(y)))$ it indicates that for all $x$ making the first move, there exists some $y$ that wins the game. However, considering right part of implication $(\exists y \forall x (P(x) \supset Q(y)))$ there exists some $y$ that beats all first moves of $x$. This is obviously different and harder than the left part of implication, which can be false while first part is true so that makes the theory false. The idea is from this page. - 1 This answer is incorrect. If there is any $y$ that is a winner, then that $y$ will satisfy $Q(y)$, and so the final implication will be true, regardless of anything that is claimed about some $x$ or other, and so the whole implication will be true. Otherwise, there is no $y$ that is a winner, and so $P(x)\to Q(y)$ is logically equivalent to $\neg P(x)$, and the quantification over $y$ becomes irrelevant. This is the argument I give in my answer. I believe that you have in mind a two-place property $P(x,y)$, asserting something like, $y$ is the move that wins the game that began with move $x$. – JDH Dec 9 '11 at 0:02 Thus, you are refuting the implication $(\forall x\exists y P(x,y))\to(\exists y\forall x P(x,y))$, which is indeed not valid. – JDH Dec 9 '11 at 0:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 41, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.935606062412262, "perplexity_flag": "head"}
http://crypto.stackexchange.com/questions/1407/i-have-a-few-questions-about-the-random-oracle-model/1412
# I have a few questions about the random oracle model
1.) Proving the security of a scheme with the ROM takes two steps; first you prove that the scheme is secure in this idealized world where a random oracle exists, and then you implement this scheme in the real world by replacing the RO with a hash function. Why wouldn't you prove the security of the scheme with the hash function that you end up implementing the scheme with? Is it because no proof has been found or are there any other reasons? Could you give me some examples? 2.) Could you explain to me the "programmability" feature of the ROM? In my book (Katz-Lindell) it says: The reduction may choose values for the output of H as it likes (as long as these values are correctly distributed, i.e. uniformly random). 3.) If I understand correctly, the function H that will act as a random oracle can be fixed beforehand or it can be generated "on the fly", by generating a table of inputs and outputs: if there is a new input then the function generates a new output, but the table is not fully filled. What is the theoretical difference between the two? Thank you! - To prove security (question 1) you don't have to implement. – mikeazo♦ Dec 7 '11 at 12:51 Yes, I understand that. But let's assume that you want to prove the security of scheme A, and you prove this security in the ROM. And then you implement this scheme A and, instead of using the random oracle (because it doesn't exist), you implement it using a hash function B. So why don't you prove that scheme A is secure when you use function B, as opposed to proving that scheme A is secure with a random oracle? – dira Dec 7 '11 at 13:18 Right, the process you give is the process followed (typically referred to as the random oracle methodology). I just wanted to point out that both steps are not necessary to prove security. The reason you don't prove security using a hash function is that there are no practical hash functions which are provably secure. – mikeazo♦ Dec 7 '11 at 13:27 I see what you mean about the hash functions, thank you! I now realize I worded my question wrong, I did mean to say methodology. – dira Dec 7 '11 at 21:33 – Paŭlo Ebermann♦ Dec 7 '11 at 23:43 ## 2 Answers A random oracle is an ideal object; see this previous question for some details. What makes a random oracle convenient for proofs is the part about knowing nothing about the output for a given input if you do not try it. For instance, consider the following encryption scheme: • $H$ is a random oracle which outputs $n$-bit values. • The key is $K$, a string of $k$ bits. • A single message $m$ is encrypted by computing $c = m \oplus H(K || 1) || H(K || 2) || ...$ (you repeatedly "hash" with the random oracle the successive strings obtained by concatenating $K$ with a counter, and you concatenate the oracle outputs into a big stream which is XORed with the message to encrypt). If $H$ is a random oracle, then it is reasonably easy to prove that the encryption is secure up to a work factor of $2^{k-1}$ invocations of $H$: for any index $i$, you can learn nothing about bit $i$ of the generated stream ("nothing" as in "not even the slightest statistical bias") unless you do invoke $H$ on the exact input which yielded it; since there are $2^k$ possible values for $K$, the best possible attack is just trying them out (in any order), which, on average, will hit the right key after $2^{k-1}$ guesses. Each guess involves invoking $H$. Now, with a cryptographic hash function, things are not that easy.
A cryptographic hash function is defined as being resistant to preimages, second preimages, and collisions. These are much weaker properties. A function could be a good hash function and still fail to be a random oracle. This is especially true with the commonly used Merkle-Damgård function such as SHA-256 and SHA-512: these functions suffer from the so-called "length extension attack". Given $H(x)$, it is possible, under certain conditions but without knowing $x$, to compute $H(x||x')$ for some values of $x'$. A random oracle would not allow that. And this specific property totally destroys our attempts at proving the security of the encryption scheme described above. Nevertheless, the "length extension" does not seem to help in any way when you try to compute preimages or collisions. Indeed, no efficient preimage or collision attack is currently known on SHA-256 or SHA-512. To sum up, a random oracle is an ideal object which allows for easy proofs of constructions in which they are used, proofs which rely on properties that actual hash functions do not necessarily exhibit (even if they are "secure" hash functions). When we "implement" a random oracle, we use hash functions and other primitives in such a way to mimic the ideal properties of a random oracle. Existing hash functions are not good enough for that by themselves, as seen above. A common tool is HMAC, which uses a key and is built over an existing hash function, but invokes it twice in a specific way so as to avoid known shortcomings of concrete functions such as SHA-256. This is why, for instance, when building a cryptographically secure pseudorandom number generator as described by NIST, you may prefer the "HMAC_DRBG" construction over the faster, but "less proven" Hash_DRBG. There is still a bit of technicality, intuition, and downright faith, in using a given construction with hash function in a protocol where a random oracle should be used. But we have nothing better: we do not know if random oracles really exist (or even secure hash functions, for that matter). Whether a given implementation uses tables to store precomputed results has no influence whatsoever on theoretical security: a proof in the random oracle model relies on the number of times the oracle must have been invoked (on distinct inputs), but not on when the invocations took place. You can use internal tables as you wish. There is a subtlety on the birth date of the function: if the oracle is HMAC with a key, then the oracle "exists" since the key was generated, and all oracle invocations must have used that key; on the other hand, a key-less hash function such as SHA-256 can be thought of as a kind-of random oracle which has existed since SHA-256 was first defined, more than ten years ago, and the whole World may have been busy invoking it during the last decade. So using raw SHA-256 as a random oracle (if we ignore the bit about length extensions) is equivalent to considering that the attacker could be as powerful as the whole World with a ten-years computational head start. To avoid that, it is commonplace to define protocols which use a keyed function as random oracle. - This is great! Thanks a lot! – dira Dec 7 '11 at 21:25 I have one last question: All the queries to the random oracle are private, but "the reduction" can see all the queries that are made to the oracle. Does this mean that when proving with reduction, we introduce a new adversary who can see all the queries but no one else can? 
Doesn't this contradict the random oracle's definition if it is defined in a way that all the queries are private? – dira Dec 7 '11 at 21:30 @dira: it depends on the context. The proof ends up saying: "attacker's advantage is no more than $X$ when up to $q$ queries to the oracle are allowed". If the oracle is public (it is a hash function), then $q$ may be quite high; limit is on the computational power (e.g. $q = 2^{128}$). When the oracle is private, each query is part of an active attack, so it makes sense to disallow $q$ higher than some sensible value. – Thomas Pornin Dec 7 '11 at 21:56 I think I put the question in a wrong way. I was reading the random oracle model chapter in the Katz-Lindell book and it says that in the random oracle, all queries that any of the parties make to the oracle are public, and then it writes that: "The reduction sees all the queries that A makes to the random oracle. This does not contradict the fact that queries are private. While that is true in the formal model itself, here we are using A as a subroutine within a reduction." I just don't quite understand how it is different and how this is "allowed". – dira Dec 7 '11 at 22:12 Your second question was about programmability. This hasn't been directly addressed yet by Thomas' answer or the comments, so I'll focus on that question only. Unfortunately I don't know of a simple primitive that is secure in the random oracle model that requires programmability, but I'll use one that is hopefully clear once I explain the background. It's called the Fiat-Shamir heuristic; it's a nice trick to make non-interactive zero knowledge proofs. Before getting to Fiat-Shamir, consider how your favorite basic zero-knowledge proof works. Since this is Crypto SE, not CSTheory SE, hopefully you are thinking about proving knowledge of discrete logarithms and quadratic residues, not graph isomorphisms or 3-coloring graphs. ;) [Aside: technically these are not true zero-knowledge proofs, they are honest-verifier zero-knowledge proofs (sometimes called $\Sigma$-protocols) but we don't care about that distinction here] ### Schnorr's proof of knowledge of a discrete logarithm $P$ (for prover) comes along with two values $g$ and $y$ in some group $\mathbb{G}_q$ where the discrete logarithm is hard. She claims: I know the value $x$ such that $y=g^x$. As $x$ is the discrete logarithm of $y$ base $g$, computing $x$ directly is infeasible so $V$ (verifier) cannot initially be sure if she really knows $x$ or not. The Schnorr protocol lets $P$ prove knowledge of $x$ to $V$ in a way that does not disclose anything about $x$. It goes as follows: 1. $P$ generates a random value $a$, computes $b=g^a$, and sends $b$ to $V$ 2. $V$ generates a random value $c$ and sends $c$ back to $P$ 3. $P$ computes $d=a+cx$ and sends $d$ to $V$ 4. $V$ accepts $\langle b,c,d\rangle$ as proof for $\langle g,y \rangle$ iff $g^d=by^c$ ### Security Analysis We can ask ourselves, what do you we want in terms of security from such a protocol? $V$ is concerned that sending a bunch of numbers back and forth might not actually constitue a proof that $P$ knows such an $x$. If he can actually conclude that $P$ must know $x$ if $P$ can produce many accepting $\langle b,c,d\rangle$ transcripts, the proof is said to be sound. $P$ may be concerned that $V$ might learn some information about $x$ from seeing one or more accepting transcripts. This is supposed to be a proof that leaks zero information about $x$ (glossing over the honest verifier technicality). 
If it leaks zero information, it is said to be zero-knowledge. ### Soundness (via Extraction) To show the Schnorr protocol is sound, we are actually going to do it indirectly. We are first going to show it is something called "extractable" and then show that extractability implies soundness. I'm not going to give actual definitions or proofs, just a sketch of what is going on. Schnorr protocols have a special soundness property (called, you guessed it, special soundness): if there are two accepting transcripts $t_1=\langle b,c,d \rangle$ and $t_2=\langle b,c',d' \rangle$ where $t_1$ shares the same value of $b$ with $t_2$ but $c$ (and thus $d$) are different, then it is possible to calculate the value of $x$: $x=(d-d')/(c-c')$. If $P$ can reliably generate accepting transcripts, then there is no reason to suppose she couldn't generate $t_1$. Likewise $t_2$. And if she can produce both, then she "knows" $x$ in the sense that the knowledge required to produce $t_1$ and $t_2$ is sufficient to produce $x$ itself. When we eventually get to Fiat-Shamir, it will be important to have formalized this notion of "extractability" a little bit. Consider the situation where $P$ is a compiled binary program instead of a person. You can run $P$ which will perform the protocol and you can rewind $P$ to a previous internal state, but you can't decompile it or look at the internal state (this is called rewindable blackbox access; why these special powers are allowed in proving extractability is a topic for another time). We say that a protocol is extractable if you can get $x$ from interacting with such a black box. And we say a protocol is sound if it is extractable in this way (a blackbox that you can rewind). Both of these propositions have proofs in the literature with lots of fine-print I am omitting. Note that you can prove soundness in other ways than extractability or other flavours than blackbox-rewindable extractability (extractability is sufficient but not necessary). For Schnorr, it should be obvious how, but you do the following: 1. Let $P$ output $b$ 2. Give $P$ a random $c$ as input 3. Let $P$ output $d$ 4. Rewind $P$ to after step 1 and before step 2 5. Give $P$ a different random $c'$ as input 6. Let $P$ output $d'$ 7. Compute $x$ from $\langle b,c,d \rangle$ and $\langle b,c',d' \rangle$ ### Zero-knowledge (via Simulation) Similarly, we can indirectly prove the protocol is zero-knowledge by showing it has a different property: simulatability. In this case, we get a compiled binary of $V$ and have to reliably supply it with acceptable $b$ and $d$ values for the $c$'s it gives us. However the protocol is for knowledge of an $x$ we do not actually know! If we can simulate acceptable protocol runs without knowing $x$, then the values in the protocol must not really be leaking any information about $x$. So if the protocol is simulatable in this regard, then it is zero-knowledge. I mentioned before that Schnorr is not actually a zero-knowledge protocol. This creates some problems with simulating Schnorr transcripts that will get resolved when we use a random oracle with Fiat-Shamir. To simulate Schnorr protocols, we do the following: 1. Generate random value $d$ 2. Guess the value of $c$ 3. Supply $b=g^dy^{-c }$ as input to $V$ 4. Let $V$ output $c'$ 5. If $c'\neq c$ (you guessed wrong), rewind to step 2. Else continue 6. Supply $d$ to $V$ which will accept If the values of $c$ are really short (say a bit), then the simulator is efficient. 
For longer values, you can't prove the zero-knowledgeness of Schnorr with this method. There are a handful of tricks to convert Schnorr into something that is true zero-knowledge. ### Fiat-Shamir Heuristic Reading the above, you might do a double-take: on one hand, you can show that $x$ must be known if transcripts accept and on the other, you can generate transcripts that accept without $x$: what gives? If you look closely, you'll see that the simulated transcripts are generated out of order while the extractable ones are generated in order. In fact, by generating out of order, we cannot produce $\langle b,c,d \rangle$ and $\langle b,c',d'\rangle$ transcripts since the value of $b$ is no longer being chosen: it is determined by $d$ and $c$. The idea of Fiat-Shamir is to make Schnorr (and related) protocols non-interactive. This means $P$ can produce all three values $\langle b,c,d \rangle$ instead of relying on $V$ to provide $c$. Furthermore, since we know transcripts are simulatable, $P$ can produce a value of $c$ that has to have been generated after the value $b$ thus ruling out any simulation. How? It is really easy actually: set $c=\mathcal{H}(b)$. The verifier additionally checks that $c=\mathcal{H}(b)$. [Aside: there is actually a neat optimization here where you don't have to send the value $b$ at all but leave that aside]. Finally we can introduce random oracles. It turns out that if you use regular hash functions, you can't wrestle extractability or simulation out of the protocol. We'll try but ultimately we will require a random oracle that can be programmed. ### Extraction with Fiat-Shamir heuristic Recall that extraction relies on pairs of transcripts like $\langle b,c,d \rangle$ and $\langle b,c',d' \rangle$. With Fiat-Shamir, $c=\mathcal{H}(b)$ so if the values of $b$ between two transcripts are identical, then $c$ and thus $d$ will be as well. Therefore, we cannot get two such transcripts with a regular deterministic hash function. But if $\mathcal{H}$ is a programmable random oracle, we can get it to produce different values for the same input. Once again, we play the game of having rewindable blackbox access to $P$ but this time we also get the random oracle: 1. Let $P$ generate $b$ 2. See $P$ query $O$ with $b$ for $\mathcal{H}(b)$ 3. Generate random $c$ and program $c=\mathcal{H}(b)$ in $O$ 4. Let $O$ answer query 5. Let $P$ compute $d$ 6. Let $P$ output $\langle b,c,d \rangle$ 7. Rewind to end of step 2 8. Generate random $c'$ and program $c'=\mathcal{H}(b)$ in $O$ 9. Proceed as before and eventually let $P$ output $\langle b,c',d' \rangle$ A few notes: (1) because this is non-interactive, $P$ does not output $b$ after step 1, so we rely on the ability to see queries to the random oracle; (2) if the oracle generates answers "on the fly" (instead of entering the protocol with a codebook of all queries/responses), we don't actually have to program it with different values of $c$. We just rewind to before the point it is about to generate a response and let it generate a random value (which will overwhelmingly be different than in the first execution). This sheds some light on the original poster's third question. ### Simulation with Fiat-Shamir Similarly to extraction, the use of a random oracle makes simulation a breeze. 
Assuming you've read this far, you can probably see how so I will just say it in a sentence: Set a random value for $c$, compute $\langle b,c,d \rangle$ by choosing $d$ first, and when the verifier checks with the oracle that $c=\mathcal{H}(b)$, program $c$ as the response. - Wow, this was great! Thank you so much! – dira Dec 11 '11 at 23:25
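To make the Fiat-Shamir discussion above more tangible, here is a minimal Python sketch of the honest prover/verifier side of non-interactive Schnorr. The tiny group parameters are hypothetical toy values (far too small for real security), SHA-256 reduced mod $q$ stands in for the random oracle, and, as a small deviation from the text (where $c=\mathcal{H}(b)$), the challenge also hashes $g$ and $y$, which is common practice.

```python
import hashlib
import secrets

# Toy group parameters (illustrative only): p = 2q + 1 with q prime,
# and g = 4 generates the subgroup of order q in Z_p^*.
p, q, g = 2039, 1019, 4

def H(*ints):
    """Random-oracle stand-in: hash the inputs and reduce mod q."""
    data = b"|".join(str(i).encode() for i in ints)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

# Prover's secret x and public value y = g^x
x = secrets.randbelow(q)
y = pow(g, x, p)

# Non-interactive proof (Fiat-Shamir: the challenge comes from the "oracle")
a = secrets.randbelow(q)      # random nonce
b = pow(g, a, p)              # commitment
c = H(g, y, b)                # challenge
d = (a + c * x) % q           # response

# Verifier: recompute the challenge and check g^d == b * y^c (mod p)
assert c == H(g, y, b)
assert pow(g, d, p) == (b * pow(y, c, p)) % p
print("proof accepted for y =", y)
```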
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 174, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.924898087978363, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/stress-energy-tensor+fluid-dynamics
Tagged Questions 1answer 83 views Stress energy tensor of a perfect fluid and four-velocity In the following demonstration, there is an error, but I cannot find where. (I explicitely put the $c^2$ to keep track of units). We consider a metric $g_{\mu\nu}$ with a signature $(-, +, +, +)$ : ... 1answer 92 views Showing symmetry of the stress tensor by applying divergence theorem to $\int\int_{\delta V(t)} \vec{x}\times \vec{t} dS$ I'm currently working through the symmetry of the stress tensor, in relation to viscous flow. I am looking at this by examining the conservation of angular momentum equation for a material volume ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9268789291381836, "perplexity_flag": "head"}
http://nrich.maths.org/6076/solution?nomenu=1
## 'Harmonically' printed from http://nrich.maths.org/ ### Show menu The Harmonic Series. Murat Aygen from Turkey sent this solution to part (a) and Dapeng Wang from Claremont Fan Court School and Sunil Ghosh from the Royal Grammar School, Guildford sent in essentially the same solution. Let us partition the terms of our series, starting from every $k= 2^{i-1}+ 1$, as follows: $\sum_{k=1}^{n} \left({1\over{k}}\right) = 1 + \left({1\over{2}}\right) + \left({1\over{3}} + {1\over{4}}\right) + \left({1\over{5}} + {1\over{6}} + {1\over{7}} + {1\over{8}}\right) + \sum_{i=4}^{n} \left(\frac{1}{2^{i-1}+1} + \dots + {1\over2^{i}}\right)$ How many terms are there in each partition? We see that there are 1, 2, 4, 8, 16... terms in the successive partitions given by $2^i - 2^{i -1}$ where $2^i - 2^{i -1} = 2^{i -1}(2-1) = 2^{i -1}$. The smallest term in a partition is clearly the rightmost term ${1\over 2^i}$. This smallest term multiplied by the number of terms in the partition is equal to $${1\over 2^i}\times 2^{i -1} = {1\over 2}$$ which is always less than the sum of the terms of the partition. Anyway it is a positive constant! Since the number of terms in the series is as many as one wishes, we can form as many partitions as we wish whose partial sums are not less than 1/2. For reaching a sum of 100 only 200 partitions are needed. Noah and Ariya from The British School of Boston, USA and Aled from King Edward VI Camp Hill School for Boys sent excellent solutions to part (b) . Each of the areas of the pink rectangles is representative of one fraction in the sum $$S_n = 1 +{1\over 2} + {1\over 3} + {1\over 4} + ... + {1\over n}.$$This is verified by knowing that each rectangle x has a base of length 1 and a height of length 1/x. Every rectangle has an area greater than that under the curve $y=1/x$ it overlaps, as illustrated above (note that this will remain so because the function $y= 1/x$ is monotonic for positive numbers). Consider the area under the graph $y = 1/x$ between $x=a$ and $x=b$. This area lies between two rectangles and so we get $${b-a \over b} < \int_a^b{ 1\over x }dx = \ln b - \ln a < { b-a\over a}$$ If we evaluate the expression between $a = {1\over n}$ and $b= {1 \over {n-1}}$ we get: $${1\over n} < \ln {1 \over {n-1}} - \ln {1 \over n} = \ln n - \ln (n-1) < { 1\over n-1}$$ and this gives: $${1 \over 2} < \ln 2- \ln 1< { 1\over 1}$$ $${1 \over 3} < \ln 3- \ln 2< { 1\over 2}$$ $${1 \over 4} < \ln 4- \ln 3< { 1\over 3}$$ ...and so on $${1 \over n} < \ln n- \ln (n-1)< { 1\over n-1}$$ Summing these expressions (noting that $\ln1 = 0$) we get: $${1\over 2} + {1\over 3} + {1\over 4} + ... + {1\over n} < \ln n < 1 + {1\over 2} + {1\over 3} + {1\over 4} + ... + {1\over n-1} .$$ The series on each side of this inequality grow infinitely large and differ by less than 1 so the series grows like $\ln n$.
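A quick numerical check of the growth rate in part (b) (my own illustration, not part of the printed solution): the difference between the partial sums and $\ln n$ settles down to a constant, as the final inequality predicts.

```python
# Compare the harmonic partial sums H_n with ln n; their difference tends
# to a constant (the Euler-Mascheroni constant, roughly 0.5772).
import math

H = 0.0
for n in range(1, 10**6 + 1):
    H += 1.0 / n
    if n in (10, 1000, 10**6):
        print(n, round(H, 6), round(math.log(n), 6), round(H - math.log(n), 6))
```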
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 9, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9529218077659607, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/55588/how-do-you-calculate-this-limit-mathop-lim-limits-x-to-0-frac-sin/55596
# How do you calculate this limit $\mathop {\lim }\limits_{x \to 0} \frac{{\sin (\sin x)}}{x}$? How do you calculate this limit $$\mathop {\lim }\limits_{x \to 0} \frac{{\sin (\sin x)}}{x}?$$ without derivatives please. Thanks. - Hint: ${\displaystyle {\sin(\sin(x)) \over x} = {\sin(\sin(x)) \over \sin(x)} {\sin(x) \over x}}$. – Zarrax Aug 4 '11 at 14:51 Or, intuitively, since $\lim\limits_{x\to 0}\frac{\sin(x)}{x}=1$, then $\sin(x)\approx x$ when $x\approx 0$, so you expect $\sin(\sin(x))\approx \sin(x)\approx x$ when $x$ is very close to $0$. – Arturo Magidin Aug 4 '11 at 14:56 Thanks Zarrax, that's just the trick I needed. : D – mathsalomon Aug 4 '11 at 14:58 @mathsalomon Since a number of nice answers have been given already, please consider accepting one so that the question shows up as answered in the future. – Srivatsan Aug 31 '11 at 12:19 ## 4 Answers Write the limit as $$\lim_{x \to 0} \frac{\sin(\sin x)}{\sin x} \cdot \frac{\sin x}{x}.$$ It is well-known that $$\lim_{x \to 0} \frac{\sin x}{x} = 1,$$ and since $\sin x \to 0$ as $x \to 0$, we get that also $$\lim_{x \to 0} \frac{\sin(\sin x)}{\sin x} = 1.$$ Therefore the limit is $1 \cdot 1 = 1$. - Yess!! thank you very much :D – mathsalomon Aug 4 '11 at 14:58 @J.J Zarrax might have a bone to pick with you. – Peter Tamaroff Feb 22 '12 at 2:32 It might be worth noting that while the solution is pretty natural and standard, in this case you are actually calculating the derivative of $\sin(\sin(x))$ at $x=0$ by using the chain rule. – N. S. Aug 27 '12 at 16:57 Edit: The solution below does not follow the OP's guidelines that derivatives not be used. However, I will leave it since it's correct and shows how L'Hôpital's rule makes the problem much easier. If you think this answer should be deleted, please let me know why and I'll consider it. Since this limit is of $\frac{0}{0}$ form, we can apply L'Hôpital's rule, which yields $$\lim_{x\to 0} \frac{\sin (\sin x)}{x} = \lim_{x\to 0} \frac{\frac{d}{dx}\sin (\sin x)}{\frac{d}{dx}x} = \lim_{x \to 0} \frac{\cos(\sin x) \cos x}{1} = 1.$$ - Taking derivatives is not allowed :( – user9413 Aug 4 '11 at 15:02 @Chandru Oops. I didn't see that. Thanks. – Quinn Culver Aug 4 '11 at 15:05 Actually you cannot apply L'H here, because the limit is the definition of the derivative of $\sin( \sin (x))$ at $x=0$. – N. S. Aug 27 '12 at 16:56 @N.S. Why does that preclude use of L'H? – Quinn Culver Sep 9 '12 at 21:22 Because you USE the derivative of $\sin(\sin(x))$ to calculate ITSELF. That is circular logic.... – N. S. Sep 10 '12 at 0:10 Note that : • $$\sin(\sin{x}) = \sin{x} - \frac{(\sin{x})^{3}}{3!} + \frac{(\sin{x})^{5}}{5!} + \cdots$$ • $\displaystyle \lim_{x \to 0} \frac{\sin{x}}{x} =1$. - Thanks Chandru, but I cannot use the series expansion when I'm on the limits chapter. But thanks for the extraordinary speed in responding. – mathsalomon Aug 4 '11 at 14:57 @mathsalomon: You didn't mention that before :) – user9413 Aug 4 '11 at 14:58 Here is a page with a geometric proof that $$\lim_{x\to 0}\frac{\sin(x)}{x}=\lim_{x\to 0}\frac{\tan(x)}{x}=1$$ You can skip the Corollaries. Then you can use the fact that $\lim_{x\to 0}\sin(x)=0$ and the fact mentioned by J.J. and Zarrax that $$\lim_{x\to 0}\frac{\sin(\sin(x))}{x}=\lim_{x\to 0}\frac{\sin(\sin(x))}{\sin(x)}\lim_{x\to 0}\frac{\sin(x)}{x}=1$$ -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9305505156517029, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-math-topics/103875-continued-function-print.html
# Continued function? • September 23rd 2009, 04:23 AM batman Continued function? I want to solve y=f(y+f(y+f(y+f(y+...)))), where f(x) is some bounded function increasing in x. First, what is the name of this kind of problem? I know of continued fractions, but here f(x) isn't of the form f(x)=1/(a+x). Second, how can I find y? (numerically is fine) • September 25th 2009, 02:59 PM halbard Quote: Originally Posted by CaptainBlack If this has a solution for a given function f, then: $y=f(y+f(y))$ Hang on, if $y=f(y+f(y+f(y+f(y+\dots))))$ isn't $y=f(2y)$? Then all you need to find are the fixed points of the function $g(y)=f(2y)$. This can sometimes be realised by a simple iteration $y_{n+1}=g(y_n)$ for a suitable starting value $y_0$, but you shouldn't rule out the need for more advanced methods. • September 26th 2009, 02:01 AM CaptainBlack Quote: Originally Posted by halbard Hang on, if $y=f(y+f(y+f(y+f(y+\dots))))$ isn't $y=f(2y)$? Then all you need to find are the fixed points of the function $g(y)=f(2y)$. This can sometimes be realised by a simple iteration $y_{n+1}=g(y_n)$ for a suitable starting value $y_0$, but you shouldn't rule out the need for more advanced methods. Yes, now you mention it. That is what I started with but for some reason changed it to what I had posted (Shake) CB
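A minimal numerical sketch of the simple iteration suggested above, $y_{n+1}=g(y_n)$ with $g(y)=f(2y)$. The choice $f(x)=\arctan(x)$ is only an illustrative bounded, increasing function (not from the thread), and convergence is not guaranteed for every $f$ or starting value:

```python
import math

def f(x):
    return math.atan(x)          # example of a bounded, increasing function

def solve(y0=1.0, tol=1e-12, max_iter=1000):
    """Find a fixed point of g(y) = f(2y) by simple iteration."""
    y = y0
    for _ in range(max_iter):
        y_next = f(2.0 * y)
        if abs(y_next - y) < tol:
            return y_next
        y = y_next
    raise RuntimeError("iteration did not converge; try another y0 or another method")

y = solve()
print(y, f(2.0 * y))             # the two numbers agree at the fixed point
```

For this particular $f$ the iteration settles near $y\approx 1.166$, and since $|g'(y)|<1$ there the simple iteration converges; when it does not, a bracketing root-finder applied to $y-f(2y)=0$ is the usual fallback.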
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9453104734420776, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/37947/implications-of-unruh-inertia-to-theories-of-gravity?answertab=votes
# Implications of Unruh-inertia to theories of gravity If it turns out to be true that the galaxy rotation curves can be explained away by Unruh modes that become greater than the Hubble scale at accelerations around $10^{-10}\,\mathrm{m/s^2}$, as proposed here, what modifications would have to be made to the effective General Relativity equations? Also, if anyone knows at this early point, what would be the implications for gravity theories that rely on the equivalence principle at a fundamental level, e.g. string theory? - 1 Nice question, it would be even better to see good answers. ;-) – Luboš Motl Sep 21 '12 at 17:19 – Mitchell Porter Sep 21 '12 at 22:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9354651570320129, "perplexity_flag": "middle"}
http://linbaba.wordpress.com/2010/06/22/potts-model-and-monte-carlo-slow-down/
# Journey into Randomness ## Potts model and Monte Carlo Slow Down June 22, 2010 at 12:24 am (markov chain, Monte Carlo, probability) Tags: Ising, MCMC, Metropolis, Monte Carlo, Potts model A simple model of interacting particles The mean field Potts model is extremely simple: there are ${N}$ interacting particles ${x_1, \ldots, x_N}$ and each one of them can be in ${q}$ different states ${1,2, \ldots, q}$. Define the Hamiltonian $\displaystyle H_N(x) = -\frac{1}{N} \sum_{i,j} \delta(x_i, x_j)$ where ${x=(x_1, \ldots, x_N)}$ and ${\delta}$ is the Kronecker symbol. The normalization ${\frac{1}{N}}$ ensures that the energy is an extensive quantity so that the mean energy per particle ${h_N(x) = \frac{1}{N} H_N(x)}$ does no degenerate to ${0}$ or ${+\infty}$ for large values of ${N}$. The sign minus is here to favorize configurations that have a lot of particles in the same state. The Boltzman distribution at inverse temperature ${\beta}$ on ${\{1, \ldots, q\}^N}$ is given by $\displaystyle P_{N,\beta} = \frac{1}{Z_N(\beta)} e^{-\beta H_N(x)}$ where ${Z_N(\beta)}$ is a normalization constant. Notice that if we choose a configuration uniformly at random in ${\{1, \ldots, q\}^N}$, with overwhelming probability the ratio of particles in state ${k}$ will be close to ${\frac{1}{q}}$. Also it is obvious that if we define $\displaystyle L^{(N)}_k(x) = \frac{1}{N} \, \Big( \textrm{Number of particles in state }k \Big)$ then ${L=(L^{(N)}_1, \ldots, L^{(N)}_q)}$ will be close to ${(\frac{1}{q}, \ldots, \frac{1}{q})}$ for a configuration taken uniformly at random. Stirling formula even says that the probability that ${L}$ is close to ${\nu = (\nu_1, \ldots, \nu_q)}$ is close to ${e^{-N \, R(\nu)}}$ where $\displaystyle R(\nu) = \nu_1 \ln(q\nu_1) + \ldots + \nu_q \ln(q\nu_q).$ Indeed ${(\frac{1}{q}, \ldots, \frac{1}{q}) = \textrm{argmin} \, R(\nu)}$. The situation is quite different under the Boltzman distribution since it favorizes the configurations that have a lot of particles in the same state: this is because the Hamiltonian ${H_N(x)}$ is minimized for configurations that have all the particles in the same state. In short there is a competition between the entropy (there are a lot of configurations close to the ratio ${(\frac{1}{q}, \ldots, \frac{1}{q})}$) and the energy that favorizes the configurations where all the particles are in the same state. With a little more work, one can show that there is a critical inverse temperature ${\beta_c}$ such that: • for ${\beta < \beta_c}$ the entropy wins the battle: the most probable configurations are close to the ratio ${(\frac{1}{q}, \ldots, \frac{1}{q})}$ • for ${\beta > \beta_c}$ the energy effect shows up: there are ${q}$ most probable configurations that are the permutations of ${(a_{\beta},b_{\beta},b_{\beta}, \ldots, b_{\beta})}$ where ${a_{\beta}}$ and ${b_{\beta}}$ are computable quantities. The point is that above ${\beta_c}$ the system has more than one stable equilibrium point. Maybe more important, if we compute the energy of these most probable states $\displaystyle h(\beta) = \lim \frac{1}{N} H_N(\textrm{most probable state})$ then this function has a discontinuity at ${\beta=\beta_c}$. I will try to show in the weeks to come how this behaviour can dramatically slow down usual Monte-Carlo approach to the study of these kind of models. Hugo Touchette has a very nice review of statistical physics that I like a lot and a good survey of the Potts model. Also T. Tao has a very nice exposition of related models. 
The blog of Georg von Hippel is dedicated to similar models on lattices, which are far more complex than the mean field approximation presented here. MCMC Simulations It is extremely easy to simulate this mean field Potts model since we only need to keep track of the ratio ${L=(L_1, \ldots, L_q)}$ to have an accurate picture of the system. For example, a typical Markov Chain Monte Carlo approach would run as follows: • choose a particle ${x_i}$ uniformly at random in ${\{1,2, \ldots, N\}}$ • try to switch its value uniformly in ${\{1,2, \ldots, q\} \setminus \{x_i\}}$ • compute the Metropolis ratio • update accordingly. If we do that ${10^5}$ times for ${q=3}$ states at inverse temperature ${\beta=1.5}$ and for ${100}$ particles (which is fine since we only need to keep track of the ${3}$-dimensional ratio vector) and plot the result in barycentric coordinates we get a picture that looks like: Here I started with a configuration where all the particles were in the same state, i.e. ratio vector equal to ${(1,0,0)}$. We can see that even with ${10^5}$ steps, the algorithm struggles to go from one most probable position ${(a,b,b)}$ to the other two ${(b,a,b)}$ and ${(b,b,a)}$ – in this simulation, one of the most probable states has not even been visited! Indeed, this approach was extremely naive, and it is quite interesting to try to come up with better algorithms. Btw, Christian Robert’s blog has tons of interesting stuff related to MCMC and how to improve on the naive approach presented here.
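As a concrete illustration of this naive scheme, here is a minimal Python sketch. The parameter values ($N=100$, $q=3$, $\beta=1.5$, $10^5$ steps) are the ones quoted above; the state labels, the starting configuration and the closed-form energy difference are implementation choices of this sketch.

```python
import math
import random

N, q, beta, steps = 100, 3, 1.5, 10**5

x = [0] * N            # all particles in state 0, i.e. ratio vector (1, 0, 0)
counts = [N, 0, 0]     # counts[k] = number of particles in state k

history = []
for _ in range(steps):
    i = random.randrange(N)                                   # pick a particle uniformly
    old = x[i]
    new = random.choice([s for s in range(q) if s != old])    # propose one of the other q-1 states
    # H = -(1/N) * sum_k counts[k]^2 (up to an additive constant), hence:
    dH = -(2.0 / N) * (counts[new] - counts[old] + 1)
    if dH <= 0 or random.random() < math.exp(-beta * dH):     # Metropolis accept/reject
        x[i] = new
        counts[old] -= 1
        counts[new] += 1
    history.append([c / N for c in counts])                   # the ratio vector L

print(history[-1])     # final ratio vector; plotting 'history' in barycentric coordinates
                       # reproduces the kind of picture described in the post
```

The proposal is symmetric (uniform over the other $q-1$ states), so the acceptance probability reduces to $\min(1, e^{-\beta\,\Delta H})$.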
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 54, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9378984570503235, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/10063/inflation-factor-and-doubling-time
# Inflation factor and doubling time I get the general idea about cosmic inflation, but the numbers associated with it seem to be pulled out at random. For example, in The Elegant Universe, Brian Greene says that the universe doubled every 10^(-34) seconds about a hundred times to an overall factor of 10^30. Another astronomy text I read recently gives the doubling time as 10^(-35) secs and the overall expansion factor as 10^50. Where are all these numbers coming from ? For example, why couldn't the universe have doubled every 10^(-40) seconds many billions of times during inflation ? - ## 1 Answer Dear Cosmonut, the overall factor is usually expressed as a number of e-foldings. You need at least 60 e-foldings or so to explain why the current Universe is apparently so flat and homogeneous. And $e^{60}\approx 10^{26}$ or so. That's where the $10^{30}$ estimate comes from. Less than 60 e-foldings of inflation would mean that inflation doesn't solve a significant part of the problems it is supposed to solve. However, there is nothing wrong with inflation that lasts for a much longer time. There could have been 50, 100, or 10,000 e-foldings as well and there are papers that find circumstantial evidence why such a very long inflation could be preferred. We don't know how many e-foldings have taken place but if inflation is good for the things we think it is, it should be more than 60 e-foldings. The time scale when the inflation occurred is also unknown but it had to be a tiny fraction of a second after the true beginning of our Universe. The higher energy scale - typical "inflaton mass" etc. - you use, the shorter figure for the time after the big bang you get. The typical figure is that inflation began $10^{-36}$ seconds after the big bang and ended around $10^{-33}$ or $10^{-32}$ seconds after the big bang. That's the high-scale, GUT-scale inflation. Note that the shorter time scale is just a little bit longer than the Planck time $10^{-43}$ seconds. It's unlikely that inflation occurred earlier, closer to the Planck scale, because that would predict bigger-than-observed non-uniformities of the cosmic microwave background temperature (they're relatively small because the time when inflation occurred is larger than the Planck time). However, inflation could have occurred later than that. Of course, it had to occur before the epoch when the temperatures were close to the electroweak scale, e.g. before $10^{-15}$ seconds or so, because we know that the inflaton isn't too light (lighter than the electroweak scale) and because of other reasons. What inflation achieves is mostly of a qualitative character - making the space flat and uniform, diluting bad exotic objects, and so on - and this qualitative job may be done at many different time scales and with many different durations of the cosmic inflation. The inflaton couldn't have been directly detected so far so you shouldn't be surprised that all the numerical details about inflation remain largely unknown even though they're constrained by inequalities and they become sharper in particular models. - Thanks, Lubos. Very clear answer. – Cosmonut May 19 '11 at 20:07
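The arithmetic connecting doubling times, e-foldings and the overall expansion factor is easy to check; this small Python snippet only re-derives the numbers quoted above (60 e-foldings, a doubling time of $10^{-34}$ s, overall factors of order $10^{26}$–$10^{30}$):

```python
import math

# 60 e-foldings expressed as an overall expansion factor and as a number of doublings
N_efold = 60
print(f"e^{N_efold} = {math.exp(N_efold):.2e}")        # ~1.1e26, the 10^26-ish factor quoted above
print("equivalent doublings:", N_efold / math.log(2))  # ~86.6 doublings

# Greene's description: doubling ~100 times with a doubling time of 1e-34 s
print(f"2^100 = {2.0**100:.2e}")                       # ~1.3e30, the 10^30 overall factor
print("duration of 100 doublings:", 100 * 1e-34, "s")  # ~1e-32 s, the same ballpark as the
                                                       # GUT-scale inflation window quoted above
```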
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9629600644111633, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/152046-implicit-differentiation-partial-differentiation.html
# Thread: 1. ## Implicit differentiation/ partial differentiation How do I find the partial derivative with respect to y of the equation z = x^y? Even if x is held as a constant, I don't know how to differentiate it :S 2. Actually, is the answer just zero? 3. Follow these steps, where $z_y$ means the derivative of z with respect to y. $\displaystyle{z=x^y}$ $\displaystyle{ln(z) = y\cdot ln(x)}$ $\displaystyle{\frac{\partial}{\partial y} ln (z) = \frac{\partial}{\partial y}\left(y\cdot ln (x)\right)}$ $\displaystyle{\frac{z_y}{z} = ln (x)}$ $\displaystyle{z_y = z\cdot ln (x) = x^y\cdot ln (x)}$ Got it?
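A quick check of the result with SymPy (declaring $x>0$ so that $\ln x$ is defined):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
print(sp.diff(x**y, y))   # x**y*log(x), matching the derivation above
```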
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9457406401634216, "perplexity_flag": "middle"}
http://math.stackexchange.com/users/22446/thomas-e?tab=activity&sort=all&page=3
# Thomas E. reputation 2724 bio website location age 22 member for 1 year, 4 months seen 2 mins ago profile views 407 Doing math. | | | bio | visits | | | |-------------|------------------|---------|----------|------------|------------------| | | 3,197 reputation | website | | member for | 1 year, 4 months | | 2724 badges | location | | seen | 2 mins ago | | # 1,086 Actions | | | | |------|----------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Mar5 | revised | The space of continuous, bounded functions from a metric space $X$ to $\mathbb R$added 23 characters in body | | Mar5 | comment | Determining if these sets are compact?I believe this question is a duplicate of something that was asked just few days ago. Just can't find it. | | Mar5 | comment | Simplifying $\{ x \in \mathbb{R} : x^2 - x - 6 \geq 0 \}$ when possible@DominicMichaelis. I believe the question asks to simplify the current expression to interval notation. | | Mar5 | comment | Continuity Properties@OC89: In your proof, what exactly are $\varepsilon$ and $\delta$? How did you choose them? | | Mar5 | revised | Continuity Propertiesdeleted 24 characters in body | | Mar5 | comment | Continuity PropertiesBy $B(x,\varepsilon)$ I of course mean the $\varepsilon$-radius open ball around $x$. | | Mar5 | answered | Continuity Properties | | Mar5 | revised | Suppose that $x$ is a fixed nonnegative real number such that for all positive real numbers $E$ , $0\leq x\leq E$. Show that $x=0$.added 12 characters in body; edited title | | Mar5 | answered | Suppose that $x$ is a fixed nonnegative real number such that for all positive real numbers $E$ , $0\leq x\leq E$. Show that $x=0$. | | Mar5 | comment | How to show that the vector subspaces of $\mathbb{R}^{n}$ are closed in $\mathbb{R}^{n}$?@Oliver: You use the fact that $\mathbb{R}^{m}$ is complete and not that it is closed. The closedness of $\mathbb{R}^{m}$ alone implies via the map $T:U\to \mathbb{R}^{m}$ that $U$ is closed in its own subspace topology, but not necessarily in the ambient space $\mathbb{R}^{n}$. Similarly it is also open, again in its subspace topology, since $\mathbb{R}^{m}$ is open. And if you change the word closedness to completeness, then this is basically just the answer of Seirios with some details opened up. | | Mar3 | reviewed | Looks Good Rational numbers form a field. | | Mar3 | revised | Are these subsets of $\mathbb{R}$ homeomorphic?Edited the body. | | Mar3 | reviewed | Edit suggested edit on projection - linear alebra | | Mar3 | revised | projection - linear alebraedited body | | Mar3 | comment | Prove $\operatorname{dist}(\overline{A},\overline{B}) = \operatorname{dist}(A, B)$@vermiculus. Great. Let me know if some points need further clarification. | | Mar3 | comment | Prove $\operatorname{dist}(\overline{A},\overline{B}) = \operatorname{dist}(A, B)$@vermiculus. I'm not quite sure what you are after with that implication. You would have to justify it by using the definition, and thus infimum, anyways. 
And yes, the easiest way I see is just to show that $d(\overline{A},\overline{B})\leq d(a,b)$ for all $a\in A$, $b\in B$ to conclude that $d(\overline{A},\overline{B})\leq d(A,B)$. And for the reverse inequality, you should show that $d(A,B)\leq d(a,b)$ for all $a\in \overline{A}$, $b\in\overline{B}$, to conclude that $d(A,B)\leq d(\overline{A},\overline{B})$. | | Mar3 | comment | Prove $\operatorname{dist}(\overline{A},\overline{B}) = \operatorname{dist}(A, B)$@vermiculus. The point is to show that $d(\overline{A},\overline{B})\leq d(a,b)$ is true for all $a\in A,\,b\in B$. Thus, by taking infimum over all $a\in A$ and $b\in B$ in this inequality, we obtain $d(\overline{A},\overline{B})\leq d(A,B)$. But out of curiosity, how do you justify $d(\overline{A},\overline{B})\leq d(A,B)$ without showing that $d(\overline{A},\overline{B})\leq d(a,b)$ for all $a\in A$, $b\in B$? | | Mar3 | answered | Prove $\operatorname{dist}(\overline{A},\overline{B}) = \operatorname{dist}(A, B)$ | | Mar3 | comment | Prove $\operatorname{dist}(\overline{A},\overline{B}) = \operatorname{dist}(A, B)$You probably want inf over $x\in B$ in the definition of dist$(A,B)$. | | Mar3 | comment | $x_n \in \mathbb R, \quad x_n \to A \implies \max \{x_n,A-x_n\} \to ?$And with $B(x,\varepsilon)$ I of course mean the $\varepsilon$-radius ball around $x$. |
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 53, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.922364354133606, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/64286/dual-borel-conjecture-in-lavers-model
## Dual Borel conjecture in Laver’s model A set $X\subseteq 2^\omega$ of reals is of strong measure zero (smz) if $X+M\not=2^\omega$ for every meager set $M$. (This is a theorem of Galvin-Mycielski-Solovay, but for the question I am going to ask we may as well take it as a definition.) A set $Y$ is strongly meager (sm) if $Y+N\not=2^\omega$ for every Lebesgue null set $N$. The Borel conjecture (BC) says that every smz set is countable; the dual Borel conjecture (dBC) says that every sm set is countable. In Laver's model (obtained by a countable support iteration of Laver reals of length $\aleph_2$) the BC holds. Same for the Mathias model. In a paper that I (with Kellner+Shelah+Wohofsky) just sent to arxiv.org, we claim that it is not clear if Laver's model satisfies the dBC. QUESTION: Is that correct? Or is it perhaps known that Laver's model has uncountable sm sets? Additional remark 1: Bartoszynski and Shelah (MR 2020043) proved in 2003 that in Laver's model there are no sm sets of size continuum ($\aleph_2$). (The MR review states that the paper proves that the sm sets are exactly $[\mathbb R] ^{\le \aleph_0}$. This is obviously a typo in the review.) Additional remark 2: If many random reals are added to Laver's model (either during the iteration, or afterwards), then BC still holds, but there will be sm sets of size continuum, so dBC fails in a strong sense. - 4 The paper mentioned in the question is arxiv.org/abs/1105.0823 – Andres Caicedo May 8 2011 at 13:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9319534301757812, "perplexity_flag": "middle"}
http://mathhelpforum.com/trigonometry/75867-simple-trigonometric-equations.html
# Thread: 1. ## Simple Trigonometric Equations These are hard for me right now, it will be fine once I understand it but I'm not sure because it's very confusing! If I had cos 3x = 0.5, would I: Use cos^-1 0.5 = 60 and because cos is positive in the second quadrant, would I add the 60 to 90 to make 150... then divide both answers by 3 to get 20 and 50? Is this right, and would it work for any other including sin? Also if I had tan x = -root 3, would I solve it like: tan^-1 root 3 = 60 and because tan is positive in the third quadrant, would I add the 60 again to 180 to make 240 and are my solutions 60 and 240, and nothing else? 2. ## Trigonometry Hello db5vry Originally Posted by db5vry If I had cos 3x = 0.5, would I: Use cos^-1 0.5 = 60 and because cos is positive in the second quadrant, would I add the 60 to 90 to make 150... then divide both answers by 3 to get 20 and 50? Look at the attached diagram, showing $\cos 60^o = 0.5$. The other angle in the diagram whose cosine is also $0.5$ is in the fourth quadrant: it is $360^o - 60^o = 300^o$. If we allow angles of more than $360^o$, the next angle whose cosine is 0.5 is $360 + 60 = 420^o$; the next is $660^o$, and so on. So if $\cos 3x = 0.5$, the possible values of $3x$ are $60^o, 300^o, 420^o, 660^o, \dots$ To find the values of $x$, then, you'll need to divide each of these by $3$ to get: $x = 20^o, 100^o, 140^o, 220^o, \dots$ (Cosine is positive in the first and fourth quadrants, of course.) Grandad Attached Thumbnails 3. Originally Posted by Grandad Hello db5vryLook at the attached diagram, showing $\cos 60^o = 0.5$. The other angle in the diagram whose cosine is also $0.5$ is in the fourth quadrant: it is $360^o - 60^o = 300^o$. If we allow angles of more than $360^o$, the next angle whose cosine is 0.5 is $360 + 60 = 420^o$; the next is $660^o$, and so on. So if $\cos 3x = 0.5$, the possible values of $3x$ are $60^o, 300^o, 420^o, 660^o, \dots$ To find the values of $x$, then, you'll need to divide each of these by $3$ to get: $x = 20^o, 100^o, 140^o, 220^o, \dots$ (Cosine is positive in the first and fourth quadrants, of course.) Grandad Thank you for this help! Of course I remember cosine is positive in the fourth now! It's sine that is positive in the second, don't know why I thought that.. But that was excellent thanks very much!
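A quick numerical check of the pattern above, listing all solutions of $\cos 3x = 0.5$ in $[0^\circ, 360^\circ)$ (the two extra values $260^\circ$ and $340^\circ$ simply continue the sequence $60^\circ, 300^\circ, 420^\circ, \dots$ divided by $3$):

```python
import numpy as np

# candidate solutions (degrees): x = 20, 100, 140, 220, 260, 340
xs = np.array([20, 100, 140, 220, 260, 340])
print(np.cos(np.radians(3 * xs)))   # every entry is ~0.5

# brute-force comparison on a fine grid over [0, 360)
grid = np.arange(0.0, 360.0, 0.01)
hits = grid[np.isclose(np.cos(np.radians(3 * grid)), 0.5, atol=1e-6)]
print(hits)                          # recovers the same six angles
```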
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 24, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9616616368293762, "perplexity_flag": "middle"}
http://www.cfd-online.com/W/index.php?title=Navier-Stokes_equations&oldid=14164
# Navier-Stokes equations The Navier-Stokes equations are the basic governing equations for a viscous, heat conducting fluid. The momentum equation is a vector equation obtained by applying Newton's Law of Motion to a fluid element. It is supplemented by the mass conservation equation, also called the continuity equation, and the energy equation. Usually, the term Navier-Stokes equations is used to refer to all of these equations. ## Equations The instantaneous continuity equation (1), momentum equation (2) and energy equation (3) for a compressible fluid can be written as: $\frac{\partial \rho}{\partial t} + \frac{\partial}{\partial x_j}\left[ \rho u_j \right] = 0$ (1) $\frac{\partial}{\partial t}\left( \rho u_i \right) + \frac{\partial}{\partial x_j} \left[ \rho u_i u_j + p \delta_{ij} - \tau_{ji} \right] = 0, \quad i=1,2,3$ (2) $\frac{\partial}{\partial t}\left( \rho e_0 \right) + \frac{\partial}{\partial x_j} \left[ \rho u_j e_0 + u_j p + q_j - u_i \tau_{ij} \right] = 0$ (3) For a Newtonian fluid, assuming Stokes Law for mono-atomic gases, the viscous stress is given by: $\tau_{ij} = 2 \mu S_{ij}^*$ (4) Where the trace-less viscous strain-rate is defined by: $S_{ij}^* \equiv \frac{1}{2} \left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} \right) - \frac{1}{3} \frac{\partial u_k}{\partial x_k} \delta_{ij}$ (5) The heat-flux, $q_j$, is given by Fourier's law: $q_j = -\lambda \frac{\partial T}{\partial x_j} \equiv -C_p \frac{\mu}{Pr} \frac{\partial T}{\partial x_j}$ (6) Where the laminar Prandtl number $Pr$ is defined by: $Pr \equiv \frac{C_p \mu}{\lambda}$ (7) To close these equations it is also necessary to specify an equation of state. Assuming a calorically perfect gas the following relations are valid: $\gamma \equiv \frac{C_p}{C_v} ~~,~~ p = \rho R T ~~,~~ e = C_v T ~~,~~ C_p - C_v = R$ (8) Where $\gamma$, $C_p$, $C_v$ and $R$ are constant. The total energy $e_0$ is defined by: $e_0 \equiv e + \frac{u_k u_k}{2}$ (9) Note that the corresponding expression (15) for Favre averaged turbulent flows contains an extra term related to the turbulent energy. Equations (1)-(9), supplemented with gas data for $\gamma$, $Pr$, $\mu$ and perhaps $R$, form a closed set of partial differential equations, and need only be complemented with boundary conditions. ## Derivation ### Derivation of continuity equation A fundamental law of Newtonian mechanics states the conservation of mass in an arbitrary material control volume varying in time $V_{m}$. A material volume contains the same portions of a fluid at all times. It may be defined by a closed bounding surface $S_{m}$ enveloping a portion of a fluid at a certain time. Fluid elements cannot enter or leave this control volume. The movement of every point on the surface $S_{m}$ is defined by the local velocity $\boldsymbol{u}$. So one can define: $0= \frac {dM}{dt}= \frac {d}{dt} \int_{V_{m}} \rho \, dV.$ Applying the Reynolds transport theorem and divergence theorem one obtains: $0= \frac {dM}{dt}= \int_{V_{m}} \frac {\partial \rho}{\partial t} dV + \int_{S_m} \rho \boldsymbol{u}\cdot\boldsymbol{n} \, dS = \int_{V_{m}} \left( \frac{\partial \rho}{\partial t} + \boldsymbol{\nabla} \cdot \left( \rho \boldsymbol{u} \right) \right) dV$ Since this relation is valid for an arbitrary volume $V_{m}$, the integrand must be zero.
Note that now it can easily be assumed that the volume is a fixed control volume (where fluid particles can freely enter and leave the volume) by taking account of mass fluxes through the surface $S_{m}$. Thus $\frac{\partial \rho}{\partial t} + \boldsymbol{\nabla} \cdot \left( \rho \boldsymbol{u} \right)$ (1) at all points of the fluid. For an incompressible fluid the change rate of density is zero. One can simplify (1) to: $\boldsymbol{\nabla} \cdot \boldsymbol{u} = 0.$ ### Derivation of momentum equations Expansion, rotation and deformation of a fluid parsel #### forces and stresses $\textbf{V} = \textbf{i}u + \textbf{j}v + \textbf{k}w$ (2) $\rho \frac{D \textbf{V}}{Dt} = \rho \textbf{F} + \textbf{P}$ (3) where $\textbf{F}$ - mass force per volume unit $\textbf{P}$ - surface force per volume unit $\textbf{F} = \textbf{i}F_{x} + \textbf{j}F_{y} + \textbf{k}F_{z}$ (4) There are two types of forces: body(mass) forces and surface forces. Body forces act on the entire control volume. The most common body force is that due to gravity. Electromagnetic phenomena may also create body forces, but this is a rather specialized situation. Surface forces act on only surface of a control volume at a time and arise due to pressure or viscous stresses. We find a general expression for the surface force per unit volume of a deformable body. Consider a rectangular parallelepiped with sides $dx,dy,dz$ and hence with volume $dV = dxdydz$ At the moment we assume this parallelepiped isolated from the rest of the fluid flow , and consider the forces acting on the faces of the parallelepiped. Let the left forward top of a parallelepiped lies in a point O To both faces of the parallelepiped perpendicular to the axis $x$ and having the area $dydz$ applied resulting stresses , equal to $\textbf{p}_{x}$ and $\textbf{p}_{x}+\frac{\partial \textbf{p}_{x}}{ \partial x}$ respectively So we have for $x$ - direction $\frac{ \partial \textbf{p}_{x}}{ \partial x} dxdydz$ for $y$ - direction $\frac{ \partial \textbf{p}_{y}}{ \partial y} dxdydz$ for $z$ - direction $\frac{ \partial \textbf{p}_{z}}{ \partial z} dxdydz$ $\textbf{P} = \frac{\partial \textbf{p}_{x}}{ \partial x} + \frac{\partial \textbf{p}_{y}}{ \partial y} + \frac{\partial \textbf{p}_{z}}{ \partial z}$ (6) $\rho \frac{d \textbf{V}}{dt} = \rho \textbf{F} + \frac{\partial \textbf{p}_{x} }{ \partial x} + \frac{\partial \textbf{p}_{y} }{ \partial y} + \frac{\partial \textbf{p}_{z} }{ \partial z}$ (7) $\left. \begin{array}{c} \rho \frac{du}{dt}= \rho F_{x} + \frac{ \partial p_{xx}}{\partial x} + \frac{ \partial p_{yx}}{\partial y} + \frac{ \partial p_{zx}}{\partial z} \\ \rho \frac{dv}{dt}= \rho F_{y} + \frac{ \partial p_{xy}}{\partial x} + \frac{ \partial p_{yy}}{\partial y} + \frac{ \partial p_{zy}}{\partial z} \\ \rho \frac{dw}{dt}= \rho F_{z} + \frac{ \partial p_{xz}}{\partial x} + \frac{ \partial p_{yz}}{\partial y} + \frac{ \partial p_{zz}}{\partial z} \\ \end{array} \right\}$ (8) $\left. 
\begin{array}{c} \rho \left( \frac{\partial u}{ \partial t} + u \frac{ \partial u}{ \partial x} + v \frac{ \partial u}{ \partial y} + w \frac{ \partial u}{ \partial z} \right) = \rho F_{x} + \frac{ \partial p_{xx}}{\partial x} + \frac{ \partial p_{yx}}{\partial y} + \frac{ \partial p_{zx}}{\partial z} \\ \rho \left( \frac{\partial v}{ \partial t} + u \frac{ \partial v}{ \partial x} + v \frac{ \partial v}{ \partial y} + w \frac{ \partial v}{ \partial z} \right) = \rho F_{y} + \frac{ \partial p_{xy}}{\partial x} + \frac{ \partial p_{yy}}{\partial y} + \frac{ \partial p_{zy}}{\partial z} \\ \rho \left( \frac{\partial w}{ \partial t} + u \frac{ \partial w}{ \partial x} + v \frac{ \partial w}{ \partial y} + w \frac{ \partial w}{ \partial z} \right) = \rho F_{z} + \frac{ \partial p_{xz}}{\partial x} + \frac{ \partial p_{yz}}{\partial y} + \frac{ \partial p_{zz}}{\partial z} \\ \end{array} \right\}$ (9) The force due to the stress is the product of the stress and the area over which it acts. $\textbf{P}_{x} = \textbf{i} \sigma_{xx} + \textbf{j} \tau_{xy} + \textbf{k} \tau_{xz}$ (10) $\textbf{P}_{y} = \textbf{i} \tau_{yx} + \textbf{j} \sigma_{yy} + \textbf{k} \tau_{yz}$ (11) $\textbf{P}_{z}=\textbf{i}\tau_{zx} + \textbf{j} \tau_{zy} + \textbf{k}\sigma_{zz}$ (12) $\Pi = \left( \begin{array}{ccc} \sigma_{xx} & \tau_{xy} & \tau_{xz} \\ \tau_{yx} & \sigma_{yy} & \tau_{yz} \\ \tau_{zx} & \tau_{zy} & \sigma_{zz} \\ \end{array} \right)$ (13) $\left. \begin{array}{c} \rho \frac{du}{dt} = \rho F_{x} - \frac{\partial p}{\partial x} + \left( \frac{\partial \sigma_{x}'}{\partial x} + \frac{\partial \tau_{xy}}{\partial y} + \frac{\partial \tau_{xz}}{\partial z} \right) \\ \rho \frac{dv}{dt} = \rho F_{y} - \frac{\partial p}{\partial y} + \left( \frac{\partial \tau_{xy}}{\partial x} + \frac{\partial \sigma_{y}'}{\partial y} + \frac{\partial \tau_{yz}}{\partial z} \right) \\ \rho \frac{dw}{dt} = \rho F_{z} - \frac{\partial p}{\partial z} + \left( \frac{\partial \tau_{xz}}{\partial x} + \frac{\partial \tau_{yz}}{\partial y} + \frac{\partial \sigma_{z}'}{\partial z} \right) \end{array} \right\}$ (14) #### deformation and rotation $\left. \begin{array}{c} u = u_{0} + \left( \frac{\partial u}{\partial x} \right)_{0} \left( x - x_{0} \right) + \left( \frac{\partial u}{\partial y} \right)_{0} \left( y - y_{0} \right) + \left( \frac{\partial u}{\partial z} \right)_{0} \left( z - z_{0} \right) \\ v = v_{0} + \left( \frac{\partial v}{\partial x} \right)_{0} \left( x - x_{0} \right) + \left( \frac{\partial v}{\partial y} \right)_{0} \left( y - y_{0} \right) + \left( \frac{\partial v}{\partial z} \right)_{0} \left( z - z_{0} \right) \\ w = w_{0} + \left( \frac{\partial w}{\partial x} \right)_{0} \left( x - x_{0} \right) + \left( \frac{\partial w}{\partial y} \right)_{0} \left( y - y_{0} \right) + \left( \frac{\partial w}{\partial z} \right)_{0} \left( z - z_{0} \right) \\ \end{array} \right\}$ (15) $\left. \begin{array}{c} u= u_{0} + \omega_{y} \left( z - z_{0} \right) - \omega_{z} \left( y - y_{0} \right) \\ v= v_{0} + \omega_{z} \left( x - x_{0} \right) - \omega_{x} \left( z - z_{0} \right) \\ w= w_{0} + \omega_{x} \left( y - y_{0} \right) - \omega_{y} \left( x - x_{0} \right) \\ \end{array} \right\}$ (16) $\left.
\begin{array}{c} \omega_{x} = \frac{1}{2} \left( \frac{\partial w}{\partial y} - \frac{\partial v}{\partial z} \right) \\ \omega_{y} = \frac{1}{2} \left( \frac{\partial u}{\partial z} - \frac{\partial w}{\partial x} \right) \\ \omega_{z} = \frac{1}{2} \left( \frac{\partial v}{\partial x} - \frac{\partial u}{\partial y} \right) \\ \end{array} \right\}$ (18) $\left. \begin{array}{c} u_{solid} = u_{0} + \frac{1}{2} \left( \frac{\partial u}{\partial z} - \frac{\partial w}{\partial x} \right)_{0} \left( z - z_{0} \right) - \frac{1}{2} \left( \frac{\partial v}{\partial x} - \frac{\partial u}{\partial y} \right)_{0} \left( y - y_{0} \right) \\ v_{solid} = v_{0} + \frac{1}{2} \left( \frac{\partial v}{\partial x} - \frac{\partial u}{\partial y} \right)_{0} \left( x - x_{0} \right) - \frac{1}{2} \left( \frac{\partial w}{\partial y} - \frac{\partial v}{\partial z} \right)_{0} \left( z - z_{0} \right) \\ w_{solid} = w_{0} + \frac{1}{2} \left( \frac{\partial w}{\partial y} - \frac{\partial v}{\partial z} \right)_{0} \left( y - y_{0} \right) - \frac{1}{2} \left( \frac{\partial u}{\partial z} - \frac{\partial w}{\partial x} \right)_{0} \left( x - x_{0} \right) \\ \end{array} \right\}$ (19) $\left. \begin{array}{c} u = u_{solid} + u_{def} \\ v = v_{solid} + v_{def} \\ w = w_{solid} + w_{def} \\ \end{array} \right\}$ (20) $\left. \begin{array}{c} u_{def} = \left( \frac{ \partial u}{ \partial x} \right)_{0} \left( x - x_{0} \right) + \frac{1}{2} \left( \frac{ \partial v}{ \partial x} + \frac{ \partial u}{ \partial y} \right)_{0} \left( y - y_{0} \right) + \frac{1}{2} \left( \frac{ \partial u}{ \partial z} + \frac{ \partial w}{ \partial x} \right)_{0} \left( z - z_{0} \right)\\ v_{def} = \frac{1}{2} \left( \frac{ \partial v}{ \partial x} + \frac{ \partial u}{ \partial y} \right)_{0} \left( x - x_{0} \right) + \left( \frac{ \partial v}{ \partial y} \right)_{0} \left( y - y_{0} \right) + \frac{1}{2} \left( \frac{ \partial u}{ \partial z} + \frac{ \partial w}{ \partial x} \right)_{0} \left( z - z_{0} \right)\\ w_{def} = \frac{1}{2} \left( \frac{ \partial u}{ \partial z} + \frac{ \partial w}{ \partial x} \right)_{0} \left( x - x_{0} \right) + \frac{1}{2} \left( \frac{ \partial w}{ \partial y} + \frac{ \partial v}{ \partial z} \right)_{0} \left( y - y_{0} \right) + \left( \frac{ \partial w}{ \partial z} \right)_{0} \left( z - z_{0} \right)\\ \end{array} \right\}$ (21) $\dot{\epsilon}_{ij} \equiv \left( \begin{array}{ccc} \dot{\epsilon}_{x} & \dot{\epsilon}_{xy} & \dot{\epsilon}_{xz} \\ \dot{\epsilon}_{yx} & \dot{\epsilon}_{y} & \dot{\epsilon}_{yz} \\ \dot{\epsilon}_{zx} & \dot{\epsilon}_{zy} & \dot{\epsilon}_{z} \\ \end{array} \right) \equiv$ (11) $\equiv \left( \begin{array}{ccc} \frac{\partial u}{ \partial x} & \frac{1}{2} \left( \frac{\partial v}{ \partial x} + \frac{\partial u}{ \partial y} \right) & \frac{1}{2} \left( \frac{\partial w}{ \partial x} + \frac{\partial u}{ \partial z} \right) \\ \frac{1}{2} \left( \frac{\partial u}{ \partial y} + \frac{\partial v}{ \partial x} \right) & \frac{\partial u}{ \partial x} & \frac{1}{2} \left( \frac{\partial v}{ \partial x} + \frac{\partial u}{ \partial y} \right) \\ \frac{1}{2} \left( \frac{\partial u}{ \partial y} + \frac{\partial v}{ \partial x} \right) & \frac{1}{2} \left( \frac{\partial u}{ \partial y} + \frac{\partial v}{ \partial x} \right) & \frac{\partial u}{ \partial x} \end{array} \right)$ (22) #### Newtonian Fluids Newton came up with the idea of requiring the stress $\tau$ to be linearly proportional to the time rate at which at which strain occurs. 
Specifically he studied the following problem. There are two flat plates separated by a distance $h$. The top plate is moved at a velocity $V$, while the bottom plate is held fixed. Newton postulated (since then experimentally verified) that the shear force or shear stress needed to deform the fluid was linearly proportional to the velocity gradient: $\tau \propto \frac{V}{h}$ (2) The proportionality factor turned out to be a constant at moderate temperatures, and was called the coefficient of viscosity, $\mu$. Furthermore, for this particular case, the velocity profile is linear, giving $v / h= \partial u / \partial y$. Therefore, Newton postulated: $\tau = \mu \frac{\partial u}{\partial y}$ (2) Fluids that have a linear relationship between stress and strain rate are called Newtonian fluids. This is a property of the fluid, not the flow. Water and air are examples of Newtonian fluids, while blood is a non-Newtonian fluid. #### Stokes Hypothesis Stokes extended Newton's idea from simple 1-D flows (where only one component of velocity is present) to multidimensional flows. He developed the following relations, collectively known as Stokes relations $\sigma_{x} = 2 \mu \frac{ \partial u}{ \partial x} + \lambda \left( \frac{ \partial u}{ \partial x} + \frac{ \partial v}{ \partial y} + \frac{ \partial w}{ \partial z} \right)$ (12) $\sigma_{y} = 2 \mu \frac{ \partial v}{ \partial y} + \lambda \left( \frac{ \partial u}{ \partial x} + \frac{ \partial v}{ \partial y} + \frac{ \partial w}{ \partial z} \right)$ (12) $\sigma_{z} = 2 \mu \frac{ \partial w}{{\partial}z} + \lambda \left( \frac{ \partial u}{ \partial x} + \frac{ \partial v}{ \partial y} + \frac{ \partial w}{ \partial z} \right)$ (12) $\tau_{xy} = \tau_{yx} = \mu \left( \frac{\partial u}{\partial y} + \frac{\partial v}{\partial x} \right)$ (12) $\tau_{xz} = \tau_{zx} = \mu \left( \frac{\partial u}{\partial z} + \frac{\partial w}{\partial x} \right)$ (12) $\tau_{zy} = \tau_{yz} = \mu \left( \frac{\partial w}{\partial y} + \frac{\partial v}{\partial z} \right)$ (12) The quantity $\mu$ is called molecular viscosity, and is a function of temperature. The coefficient $\lambda$ was chosen by Stokes so that the sum of the normal stresses $\sigma_{x}$,$\sigma_{y}$ and $\sigma_{z}$ are zero. Then $\lambda = - \frac{2}{3}\mu$ (12) #### substitution $\left. 
\begin{array}{c} \rho \frac{du}{dt} = \rho F_{x}- \frac{\partial p}{ \partial x} + 2 \frac{\partial}{ \partial x} \left( \mu \frac{ \partial u }{ \partial x } \right) + \frac{\partial }{ \partial y} \left[ \mu \left( \frac{\partial u}{ \partial y} + \frac{\partial v}{ \partial x} \right)\right] + \frac{\partial }{ \partial z} \left[ \mu \left( \frac{\partial u}{ \partial z} + \frac{\partial w}{ \partial x} \right)\right] - \frac{2}{3} \frac{\partial}{\partial x}\left( \mu div \textbf{V}\right)\\ \rho \frac{dv}{dt} = \rho F_{y} - \frac{\partial p}{ \partial y} + \frac{\partial }{ \partial x} \left[ \mu \left( \frac{\partial u}{ \partial y} + \frac{\partial v}{ \partial x} \right)\right] + 2 \frac{\partial}{ \partial y} \left( \mu \frac{ \partial v }{ \partial y } \right) + \frac{\partial }{ \partial z} \left[ \mu \left( \frac{\partial v}{ \partial z} + \frac{\partial w}{ \partial y} \right)\right] - \frac{2}{3} \frac{\partial }{ \partial y} \left( \mu div \textbf{V} \right) \\ \rho \frac{dw}{dt} = \rho F_{z} - \frac{\partial p}{ \partial z}+ \frac{\partial }{ \partial x} \left[ \mu \left( \frac{\partial u}{ \partial z} + \frac{\partial w}{ \partial x} \right)\right] + \frac{\partial }{ \partial y} \left[ \mu \left( \frac{\partial v}{ \partial z} + \frac{\partial w}{ \partial y} \right)\right] + 2 \frac{\partial}{ \partial z} \left( \mu \frac{ \partial w }{ \partial z } \right) - \frac{2}{3} \frac{\partial }{ \partial z} ( \mu div \textbf{V} ) \\ \end{array} \right\}$ (12) #### Other formulation Cauchy's equation of motion $\rho \frac{Du_{i}}{Dt} = \rho g_{i} + \frac{\partial \tau_{ij}}{ \partial x_{j}}$ (14) The equation of motion for a Newtonian fluid is obtained by constitutive equation into Cauchy's equation to obtain $\rho \frac{Du_{i}}{Dt} = - \frac{\partial p}{ \partial x_{i}} + \rho g_{i} + \frac{\partial }{ \partial x_{j}} \left[ 2 \mu e_{ij} - \frac{2}{3} \mu \left( \nabla \cdot \textbf{u} \right) \delta_{ij} \right]$ (14) where $e_{ij}$ is the strain rate tensor $e_{ij} \equiv \frac{1}{2} \left( \frac{\partial u_{i}}{\partial x_{j}} + \frac{\partial u_{j}}{\partial x_{i}} \right)$ (15) If the temperature differences are small within the fluid, then $\mu$ can be taken outside the derivative, which then reduces to $\begin{array}{ccccc} \rho \frac{Du_{i}}{Dt} & = & - \frac{\partial p}{ \partial x_{i}} + \rho g_{i} & + & 2 \mu \frac{\partial e_{ij}}{ \partial x_{j}} - \frac{ 2 \mu}{3} \frac{ \partial }{ \partial x_{i}} \left( \nabla \cdot \textbf{u} \right) \\ & = & - \frac{\partial p}{ \partial x_{i}} + \rho g_{i} & + & \mu \left[ \nabla^{2} u_{i} + \frac{1}{3} \frac{ \partial }{ \partial x_{i}} \left( \nabla \cdot \textbf{u} \right) \right]\\ \end{array}$ (16) where $\nabla^{2} u_{i} \equiv \frac{\partial^{2} u_{i}}{ \partial x_{j} \partial x_{j}} = \frac{\partial^{2} u_{i}}{ \partial x^{2}_{1}} + \frac{\partial^{2} u_{i}}{ \partial x^{2}_{2}} + \frac{\partial^{2} u_{i}}{ \partial x^{2}_{3}}$ (16) is the Laplasian of $u_{i}$. 
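As a quick numerical illustration of the constitutive relations above (the Stokes relations with $\lambda = -\tfrac{2}{3}\mu$), here is a small NumPy sketch that assembles the viscous stress tensor from a velocity-gradient matrix; the numerical values of the gradient and of $\mu$ are made up for illustration:

```python
import numpy as np

# Hypothetical velocity gradient G[i, j] = du_i/dx_j at a single point (made-up numbers)
G = np.array([[0.10, 0.02, 0.00],
              [0.03, -0.04, 0.01],
              [0.00, 0.05, -0.02]])

mu = 1.8e-5                      # dynamic viscosity, e.g. air, in Pa*s
lam = -2.0 / 3.0 * mu            # Stokes hypothesis: lambda = -2/3 mu

S = 0.5 * (G + G.T)              # strain-rate tensor e_ij
div_u = np.trace(G)              # velocity divergence du_k/dx_k

# viscous stress: tau_ij = 2*mu*e_ij + lambda*(div u)*delta_ij
tau = 2.0 * mu * S + lam * div_u * np.eye(3)

print(tau)
print(np.trace(tau))             # ~0: the normal viscous stresses sum to zero
```

The printed trace is (numerically) zero, which is exactly the property Stokes used to fix $\lambda=-\tfrac{2}{3}\mu$.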
For incompressible fluids $\nabla \cdot \textbf{u} = 0$, and using vector notation, the incompressible Navier-Stokes equation reduces to $\rho \frac{Du_{i}}{Dt} = - \nabla p + \rho \textbf{g} + \mu \nabla^{2} \textbf{u}$ (16) If the viscous effects are negligible, we obtain the Euler equation $\rho \frac{Du_{i}}{Dt} = - \nabla p + \rho \textbf{g}$ (16) ### Derivation of the energy equation By applying the first law of thermodynamics to a material volume $V(t)$ we find $\frac{d}{dt} \int\limits_{V\left( t \right)} \rho E dV = W + Q$ (2) with $E$ the total energy per unit mass $E = e + \frac{1}{2} u_{\alpha}u_{\alpha}$ (2) Furthermore, $W$ is the rate of work expended by the surroundings on the fluid in $V(t)$, and $Q$ is the rate of heat addition $W = \underbrace{\int\limits_{V\left( t \right)} u_{\alpha} f^{b}_{\alpha} \rho dV }_{body force}+ \underbrace{\int\limits_{S\left( t \right)} u_{\alpha} f^{s}_{\alpha} dS}_{surface force}$ (2) $W = \int\limits_{V\left( t \right)} \left\{ \rho u_{\alpha} f^{b}_{\alpha} + \left( u_{\alpha} \tau_{\alpha \beta} \right)_{, \beta} \right\} dV$ (2) $Q = \int\limits_{V\left( t \right)} \rho q dV + \int\limits_{S\left( t \right)} \sigma dS$ (2) $\sigma = k \textbf{n} \cdot \textbf{{grad}} T$ (2) $Q = \int\limits_{V\left( t \right)} \left\{ \rho q + \left( k T_{, \alpha} \right)_{, \alpha} \right\} dV$ (2) $\int\limits_{V\left( t \right)} \left\{ \frac{\partial \rho E}{\partial t} + \left( \rho u_{\alpha} E \right)_{, \alpha } \right\} dV = \int\limits_{V\left( t \right)} \left\{ \left( u_{\alpha} \tau_{\alpha \beta} \right)_{, \beta} + \left( k T_{, \alpha} \right)_{, \alpha} + \rho u_{\alpha} f^{b}_{\alpha} + \rho q \right\} dV$ (2) $\frac{\partial \rho E}{\partial t} + \left( \rho u_{\alpha} E \right)_{, \alpha } = \left( u_{\alpha} \tau_{\alpha \beta} \right)_{, \beta} + \left( k T_{, \alpha} \right)_{, \alpha} + \rho u_{\alpha} f^{b}_{\alpha} + \rho q$ (2) ## Existence and uniqueness The existence and uniqueness of classical solutions of the 3-D Navier-Stokes equations is still an open mathematical problem and is one of the Clay Institute's Millenium Problems. In 2-D, existence and uniqueness of regular solutions for all time have been shown by Jean Leray in 1933. He also gave the theory for the existence of weak solutions in the 3-D case while uniqueness is still an open question. However, recently, Prof. Penny Smith submitted a paper, Immortal Smooth Solution of the Three Space Dimensional Navier-Stokes System, which may provide a proof of the existence and uniqueness.(It has a serious flaw, so the author withdrew the paper) ## History Claude Louis Marie Henri Navier’s name is associated with the famous Navier-Stokes equations that govern motion of a viscous fluid. He derived the Navier-Stokes equations in a paper in 1822. His derivation was however based on a molecular theory of attraction and repulsion between neighbouring molecules. Euler had already derived the equations for an ideal fluid in 1755 which did not include the effects of viscosity. Navier did not recognize the physical significance of viscosity and attributed the viscosity coefficient to be a function of molecular spacing. The equations of motion were rederived by Cauchy in 1828 and by Poisson in 1829. In 1843 Barre de Saint-Venant published a derivation of the equations that applied to both laminar and turbulent flows. However the other person whose name is attached with Navier is the Irish mathematician-physicist George Gabriel Stokes. 
In 1845 he published a derivation of the equations in a manner that is currently understood. ## References C. L. M. H. Navier (1822), "Memoire sur les lois du mouvement des fluides", Mem. Acad. Sci. Inst. France, 6, 389-440. Loiciansky, L.G. (1978), "Mechanics of Fluid and Gas", 5 edn., p. 736. Nauka, Moscow. Pieter Wesseling (2001), "Principles of computational fluid dynamics", Springer-Verlag Berlin Heidelberg.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 109, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9047446846961975, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/14939/does-godel-preclude-a-workable-toe/14944
# Does Godel preclude a workable TOE? Godel's incompleteness theorem prevents a universal axiomatic system for math. Is there any reason to believe that it also prevents a Theory of everything for physics? Edit: I haven't before seen a formulation of Godel that included time. The formulation I've seen is that any axiomatic system capable of doing arithmetic can express statements that will be either 1) impossible to prove true or false or 2) possible to prove both true and false. This leads to the question: are theories of (nearly) everything axiomatic systems capable of doing arithmetic? (Given they are able to describe a digital computer, I think it's safe to say they are.) If so, it follows that such a theory will be able to describe something that the theory either cannot analyse or can analyse only with an ambiguous result. (Might this be what forces things like the Heisenberg uncertainty principle?) - – Qmechanic♦ Feb 10 at 19:50 ## 6 Answers The answer is no, because a "Theory of Everything" means a computational method of describing any situation. It does not allow you to predict the eventual outcome of the evolution an infinite time into the future, but only to plod along, predicting the outcome little by little as you go on. Godel's theorem is a statement that it is impossible to predict the infinite time behavior of a computer program. Theorem: Given any precise way of producing statements about mathematics, that is, given any computer program which spits out statements about mathematics, this computer program either produces falsehoods, or else does not produce every true statement. Proof: Given the program "THEOREMS" which outputs theorems (it could be doing deductions in Peano Arithmetic, for example), write the computer program SPITE to do this: • SPITE prints its own code into a variable R • SPITE runs THEOREMS, and scans the output looking for the theorem "R does not halt" • If it finds this theorem, it halts. If you think about it, the moment THEOREMS says that "R does not halt", it is really proving that "SPITE does not halt", and then SPITE halts, making THEOREMS into a liar. So if "THEOREMS" only outputs true theorems, SPITE does not halt, and THEOREMS does not prove it. There is no way around it, and it is really trivial. The reason it has a reputation for being complicated is due to the following properties of the logic literature: • Logicians are studying formal systems, so they tend to be overly formal when they write. This bogs down the logic literature in needless obscurity, and holds back the development of mathematics. There is very little that can be done about this, except exhorting them to try to clarify their literature, as physicists strive to do. • Logicians made a decision in the 1950s to not allow computer science language in the description of algorithms within the field of logic. They did this purposefully, so as to separate the nascent discipline of CS from logic, and to keep the unwashed hordes of computer programmers out of the logic literature. Anyway, what I presented is the entire proof of Godel's theorem, using a modern translation of Godel's original 1931 method. For a quick review of other results, and for more details, see this mathoverflow answer: http://mathoverflow.net/questions/72062/what-are-some-proofs-of-godels-theorem-which-are-essentially-different-from-th/72151#72151. As you can see, Godel's theorem is a limitation on understanding the eventual behavior of a computer program, in the limit of infinite running time.
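For readers who prefer the computer-science phrasing, here is a schematic Python rendering of the SPITE construction described above. It is a sketch, not a working prover: `theorems()` is a placeholder standing in for a program that enumerates the theorems of some fixed formal system, and `inspect.getsource` is used as a shortcut for the quine step of printing one's own code.

```python
import inspect

def theorems():
    """Placeholder: yield, one by one, the statements the formal system proves."""
    while True:
        yield "2 + 2 = 4"        # a real prover would emit genuine theorems here

def spite():
    r = inspect.getsource(spite)              # step 1: obtain own source code (the quine step)
    target = f"the program {r!r} does not halt"
    for statement in theorems():              # step 2: scan the stream of proved theorems
        if statement == target:               # step 3: if our own non-halting is ever proved...
            return                            # ...halt, making the prover a liar

# If the prover only ever outputs true statements, spite() never returns -- and that very
# fact is a truth the prover can never prove, which is the content of the theorem.
```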
Physicists do not expect to figure out the eventual behavior of arbitrary systems. What they want to do is give a computer program which will follow the evolution of any given system to finite time. A TOE is like the instruction set of the universe's computer. It doesn't tell you what the output is, only what the rules are. A TOE would be useless for predicting the future, or rather, it is no more useful for prediction than Newtonian mechanics, statistics, and some occasional quantum mechanics for day-to-day world. But it is extremely important philosophically, because when you find it, you have understood the basic rules, and there are no more surprises down beneath. ### Incorporating Comments There were comments which I will incorporate into this answer. It seems that comments are only supposed to be temporary, and some of these observations I think are useful. Hilbert's program was an attempt to establish that set theoretic mathematics is consistent using only finitary means. There is an interpretation of Godel's theorem that goes like this: • Godel showed that no system can prove its own consistency • Set theory proves the consistency of Peano Arithmetic • Therefore Godel kills Hilbert's program of proving the consistency of set theory using arithmetic. This interpretation is false, and does not reflect Hilbert's point of view, in my opinion. Hilbert left the definition of "finitary" open. I think this was because he wasn't sure exactly what should be admitted as finitary, although I think he was pretty sure of what should not be admitted as finitary: 1. No real numbers, no analysis, no arbitrary subsets of Z. Only axioms and statements expressible in the language of Peano Arithmetic. 2. No structure which you cannot realize explicitly and constructively, like an integer. So no uncountable ordinals, for example. Unlike his followers, he did not say that "finitary" means "provable in Peano Arithmetic", or "provable in primitive recursive Arithmetic", because I don't think he believed this was strong enough. Hilbert had experience with transfinite induction, and its power, and I think that he, unlike others who followed him in his program, was ready to accept that transfinite induction proves more theorems than just ordinary Peano induction. What he was not willing to accept was axioms based on a metaphysics of set existence. Things like the Powerset axiom and the Axiom of choice. These two axioms produce systems which not only violate intuition, but are further not obviously grounded in experience, so that the axioms cannot be verified by intuition. Those that followed Hilbert interpreted finitary as "provable in Peano Arithmetic" or a weaker fragment, like PRA. Given this interpretation, Godel's theorem kills Hilbert's program. But this interpretation is crazy, given what we know now. Hilbert wrote a book on the foundations of mathematics after Godel's theorem, and I wish it were translated into English, because I don't read German. I am guessing that he says in there what I am about to say here. ### What Finitary Means The definition of finitary is completely obvious today, after 1936. A finitary statement is a true statement about computable objects, things that can be represented on a computer. This is equivalent to saying that a finitary statement is a proposition about integers which can be expressed (not necessarily proved) in the language of Peano Arithmetic. 
This includes integers, finite graphs, text strings, symbolic manipulations, basically, anything that Mathematica handles, and it includes ordinals too. You can represent the ordinals up to $\epsilon_0$, for example, using a text string encoding of their Cantor Normal form.

The ordinals which can be fully represented by a computer are limited by the Church-Kleene ordinal, which I will call $\Omega$. This ordinal is relatively small in traditional set theory, because it is a countable ordinal, which is easily exceeded by $\omega_1$ (the first uncountable ordinal), $\omega_\Omega$ (the Church-Kleene-th uncountable ordinal), and the ordinal of a huge cardinal. But it is important to understand that all the computational representations of ordinals are always less than this. So when you are doing finitary mathematics, it means that you are talking about objects you can represent on a machine; you should be restricting yourself to ordinals less than Church-Kleene. The following argues that this is no restriction at all, since the Church-Kleene ordinal can establish the consistency of any system.

### Ordinal Religion

Godel's theorem is best interpreted as follows: Given any (consistent, omega-consistent) axiomatic system, you can make it stronger by adding the axiom "consis(S)". There are several ways of making the system stronger, and some of them are not simply related to this extension, but consider this one. Given any system and a computable ordinal, you can iterate the process of strengthening up to that ordinal. So there is a map from ordinals to consistency strength.

This implies the following:

• Natural theories are linearly ordered by consistency strength.
• Natural theories are well-founded (there is no infinite descending chain of theories $A_k$ such that $A_k$ proves the consistency of $A_{k+1}$ for all k).
• Natural theories approach the Church Kleene ordinal in strength, but never reach it.

It is natural to assume the following

• Given a sequence of ordinals which approaches the Church Kleene ordinal, the theories corresponding to these ordinals will prove every theorem of Arithmetic, including the consistency of arbitrarily strong consistent theories.

Further, the consistency proofs are often carried out in constructive logic just as well, so really:

• Every theorem that can be proven, in the limit of the Church Kleene ordinal, gets a constructive proof.

This is not a contradiction with Godel's theorem, because generating an ordinal sequence which approaches $\Omega$ cannot be done algorithmically, it cannot be done on a computer. Further, any finite location is not really philosophically much closer to Church Kleene than where you started, because there is always infinitely more structure left undescribed. So $\Omega$ knows all and proves all, but you can never fully comprehend it. You can only get closer by a series of approximations which you can never precisely specify, and which are always somehow infinitely inadequate.

You can believe that this is not true, that there are statements that remain undecidable no matter how close you get to Church-Kleene, and I don't know how to convince you otherwise, other than by pointing to longstanding conjectures that could have been absolutely independent, but fell to sufficiently powerful methods. To believe that a sufficiently strong formal system resolves all questions of arithmetic is an article of faith, explicitly articulated by Paul Cohen in "Set Theory and the Continuum Hypothesis". I believe it, but I cannot prove it.
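To illustrate the earlier claim that ordinals below $\epsilon_0$ are finitary objects, here is a small sketch (the encoding and names are my own choices, not a standard library): Cantor normal form as nested tuples of (exponent, coefficient) pairs, together with the comparison that countdown arguments rely on.

```python
# Ordinals below epsilon_0 as finite data (illustrative sketch).
# An ordinal is a tuple of (exponent, coefficient) pairs in Cantor normal form,
# with exponents themselves ordinals, listed in strictly decreasing order.
# Zero is the empty tuple.

def cmp_ordinal(a, b):
    """Compare two ordinals in this encoding; return -1, 0 or 1."""
    for (ea, ca), (eb, cb) in zip(a, b):
        c = cmp_ordinal(ea, eb)        # compare leading exponents first
        if c != 0:
            return c
        if ca != cb:                   # then their coefficients
            return -1 if ca < cb else 1
    if len(a) != len(b):               # a proper prefix is the smaller ordinal
        return -1 if len(a) < len(b) else 1
    return 0

ZERO  = ()
ONE   = ((ZERO, 1),)                   # omega^0 * 1
OMEGA = ((ONE, 1),)                    # omega^1 * 1
EXAMPLE = ((OMEGA, 1), (ONE, 3), (ZERO, 5))   # omega^omega + omega*3 + 5

print(cmp_ordinal(ONE, OMEGA))         # -1: 1 < omega
print(cmp_ordinal(EXAMPLE, OMEGA))     #  1: omega^omega + ... > omega
```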
### Ordinal Analysis

So given any theory, like ZF, one expects that there is a computable ordinal which can prove its consistency. How close have we come to doing this? We know how to prove the consistency of Peano Arithmetic--- this can be done in PA, in PRA, or in Heyting Arithmetic (constructive Peano Arithmetic), using only the axiom

• Every countdown from $\epsilon_0$ terminates.

This means that the proof theoretic ordinal of Peano Arithmetic is $\epsilon_0$. That tells you that Peano arithmetic is consistent, because it is manifestly obvious that $\epsilon_0$ is an ordinal, so all its countdowns terminate.

There are constructive set theories whose proof theoretic ordinal is similarly well understood, see here: http://en.wikipedia.org/wiki/Ordinal_analysis#Theories_with_larger_proof_theoretic_ordinals

To go further requires an advance in our systems of ordinal notation, but there is no limitation of principle to establishing the consistency of set theories as strong as ZF by computable ordinals which can be comprehended. Doing so would complete Hilbert's program--- it would remove any need for an ontology of infinite sets in doing mathematics. You can disbelieve in the set of all real numbers, and still accept the consistency of ZF, or of inaccessible cardinals (using a bigger ordinal), and so on up the chain of theories.

### Other interpretations

Not everyone agrees with the sentiments above. Some people view the undecidable propositions like those provided by Godel's theorem as somehow having a random truth value, which is not determined by anything at all, so that they are absolutely undecidable. This makes mathematics fundamentally random at its foundation. This point of view is often advocated by Chaitin. In this point of view, undecidability is a fundamental limitation to what we can know about mathematics, and so bears a resemblance to a popular misinterpretation of Heisenberg's uncertainty principle, which considers it a limitation on what we can know about a particle's simultaneous position and momentum (as if these were hidden variables).

I believe that Godel's theorem bears absolutely no resemblance to this misinterpretation of Heisenberg's uncertainty principle. The preferred interpretation of Godel's theorem is that every sentence of Peano Arithmetic is still true or false, not random, and it should be provable in a strong enough reflection of Peano Arithmetic. Godel's theorem is no obstacle to us knowing the answer to every question of mathematics eventually.

Hilbert's program is alive and well, because it seems that countable ordinals less than $\Omega$ resolve every mathematical question. This means that if some statement is unresolvable in ZFC, it can be settled by adding a suitable chain of axioms of the form "ZFC is consistent", "ZFC+consis(ZFC) is consistent" and so on, transfinitely iterated up to a countable computable ordinal, or similarly starting with PA, or PRA, or Heyting arithmetic (perhaps by iterating up the theory ladder using a different step-size, like adding transfinite induction to the limit of all provably well-ordered ordinals in the theory). Godel's theorem does not establish undecidability, only undecidability relative to a fixed axiomatization, and this procedure produces a new axiom which should be added to strengthen the system. This is an essential ingredient in ordinal analysis, and ordinal analysis is just Hilbert's program as it is called today.
Generally, everyone gets this wrong except the handful of remaining people in the German school of ordinal analysis. But this is one of those things that can be fixed by shouting loud enough. ### Torkel Franzen There are books about Godel's theorem which are more nuanced, but which I think still get it not quite right. Greg P says, regarding Torkel Franzen: • I thought that Franzen's book avoided the whole 'Goedel's theorem was the death of the Hilbert program' thing. In any case he was not so simplistic and from reading it one would only say that the program was 'transformed' in the sense that people won't limit themselves to finitary reasoning. As far as the stuff you are talking about, John Stillwell's book "Roads to Infinity" is better. But Franzen's book is good for issues such as BCS's question (does Godel's theorem resemble the uncertainty principle). Finitary means computational, and a consistency proof just needs an ordinal of sufficient complexity. Greg P responded: • The issue is then what 'finitary' is. I guess I assumed it excluded things like transfinite induction. But it looks like you call that finitary. What is an example of non-finitary reasoning then? When the ordinal is not computable, if it is bigger than the Church Kleene ordinal, then it is infinitary. If you use the set of all reals, or the powerset of Z as a set with discrete elements, that's infinitary. Ordinals which can be represented on a computer are finitary, and this is the point of view that I believe Hilbert pushes in "The Grundlagen", but it's not translated. - – David Zaslavsky♦ May 22 '12 at 4:12 1 @Argus: The Grudnlagen is a whole book, you just gave a link to a short statement of formalism, which is not the main point of the Grundlagen. – Ron Maimon May 22 '12 at 19:38 @ronmaimon: he is talking about Hilberts book released after grundlagen was already published the link was to a translation of Hilbert statements that were directly related to grundlagens statements – Argus May 22 '12 at 21:18 @Argus: The grundlagen is a book after Godel's theorem and Gentzen, and it is not useful to pick out vague summary statements--- the meat is in the details. How did Hilbert respond to Godel's theorem? This is not clear. Did he propose ordinal analysis? Did he say that countable ordinals are finitary? None of the quotes reveal anything about this question, and I believe the book should. – Ron Maimon May 22 '12 at 21:27 1 – Argus May 22 '12 at 22:04 show 6 more comments I think Conway's Game Of Life is a great example here. We have the "Theory of Everything" for Conway's Game Of Life--the laws that determine the behavior of every system. They're extremely simple! These simple "rules of the game" are analogous to a "theory of everything" that would satisfy a physicist living in the Game Of Life universe. On the other hand, you can build a Turing-complete computer in The Game Of Life, which means you can formulate questions about the asymptotic behavior of (extremely complex) configurations of dots within the Game Of Life which have no mathematically provable answer. The two things aren't really related. Of course we can understand the extremely simple "theory of everything" for the Game Of Life. At the same time, of course we cannot mathematically prove the answer to every question about the asymptotic behavior of very complicated configurations of dots within the Game Of Life. Likewise, we can (one hopes) find the ToE for our universe. 
But we certainly will not be able to mathematically prove every possible theorem about the asymptotic behavior of things following the laws of the universe. No one expected to do that anyway. - I think we are agreeing to some extent. see my answer (primarily the first section) to this question. – BCS Sep 26 '11 at 3:27 Nobody that needs hope of a resolution as their motivation. So everyone besides me wants to prove that wrong maybe maybe not you would only have to disprove it once for the hundreds of examples supporting it. Same with any proof. Never gonna happen but hope beyond hope is a key aspect of the human condition – Argus May 25 '12 at 0:50 One way to look at this is in terms of Hilbert's 6th problem, i.e. axiomatizing physics. Now, it may be said that what Hilbert understood from "axiomatizing" is refuted by Godel's (and Gentzen's) results (see his 2nd problem). - Hilbert's axiomatization program is not refuted by Godel, but is enhanced by it. Although Hilbert's Grundlagen der Mathematische (sp? Foundation of Mathematics) is not available in English, the basic response to Godel's theorem outlined there is that of Gentzen and the rest of the German school. Gentzen worked with Hilbert, and followed his program. He proved the consistency of Peano Arithmetic by finitary means, and only a postwar politically motivated redefinition of finitary to exclude ordinal countdowns made his proof "infinitary". – Ron Maimon Sep 24 '11 at 4:01 I don't agree with your statement of Godel's theorem. Godel's incompleteness theorem says that in any formal language that is strong enough to do arithmetic (ie you can write down Peano's axioms) there will always be a true statement that can not be proven. What Godel did to prove this was to construct something like the liars paradox in any such language "this sentence is not provable." I don't think this has any effect on whether or not there is a workable TOE, but I don't know much about TOE. I feel like Godel's incompleteness theorem is misunderstood a lot. It makes no claims as to whether or not statements are true, it simply says we can not prove everything that is true, somethings just are. - What would the physical theory equivalent be of a true statement that can not be proven? A physical arrangement where the "next step" can't be deduced exactly? (Sounds like the Heisenberg uncertainty principle.) – BCS Sep 22 '11 at 23:05 The statment "we cannot prove everything that is true" is a completely wrong intepretation of Godel's theorem. Godel's theorem does not mean that there are unprovable theorems in an absolute sense, rather it says that given any (consistent, omega-consistent) axiom system S we can find a computational statement which is obviously true but unprovable, which is equivalent to the formal statement "S is consistent". What Godel's theorem says is that given a system S, you can add "S is consistent" to produce a stronger system, and iterate this process transfinitely over all ordinals you can name. – Ron Maimon Sep 24 '11 at 3:49 Every time you take a step up in the tower of axiom system, you prove more theorems. As you go higher, you need to name higher countable recursive ordinals, and this requires more complex computer programs. In the limit that the ordinals get closer to the Church-Kleene ordinal, every true theorem should be provable. This is not a contradiction, because the Church Kleene ordinal is not computable, so the process of getting there is infinitely complex. 
Set theory is higher up the chain than Peano Arithmetic, and various large cardinals are higher still. Each theory is indexed by an ordinal. – Ron Maimon Sep 24 '11 at 3:54 I don't think anyone here has made the claim that Godel's theorem makes any assertion about unprovable theorems in an absolute sense, but only about systems based on a finite set of axioms. If you allow an infinite set of axioms, then there is a trivial system that is consistent and complete: the system consisting of an axiom asserting every true proposition. – BCS Sep 28 '11 at 2:13 @RonMaimon: I am confused, you say that "The statment "we cannot prove everything that is true" is a completely wrong intepretation of Godel's theorem." Then you say "we can find a computational statement which is obviously true but unprovable." Maybe the differences are above my pay grade... – Sean Tilson Oct 6 '11 at 15:35 show 13 more comments tl;dr; All possible universes are finite is scale and are "to small" to be able to encode all possible conjectures so they can't operate on them and thus can't prove their truthfulness. Therefore a fully computable universe model can't violate Gödel's Theorem. Extracts form various other places in the answers: I think the answer becomes one of two things: Option A: Gödel's Theorem does not prevent the existence of mechanistic means for determining the the truthfulness of an arbitrary conjecture. (While I'm not sure that Gödel's preclude this, it is precluded by reduction to the halting problem.) Option B: that Gödel's Theorem implies that even given a valid, computable, TOE, there is no mapping between arithmetic conjectures and states of the universe such that some identifiable property will hold iff the conjectures is correct. This could be (and I suspect is) true simply by the set of all possible conjectures being larger (a higher order infinity, or larger ordinals) than the set of all possible state of universes that can exist under the TOE. - In Aether Wave Theory the Universe is random and it can be infinitely dimensional system of the nested density fluctuations of hypothetical infinitely dense Boltzmann gas. With introduction of sufficient number of dimensions all formal theories will converge mutually into relevant description of Universe, but their determinism will decrease during it. It means, we cannot have the deterministic and universal TOE at the same moment - it's sort of uncertainty principle of quantum mechanics. Even fuzzy theory can still make robust testable predictions, but if would become too general and "universal", it would change into self-referencing implicit tautology. This theorem can be understood with implicate geometry of AWT: the postulates of formal theories are forming zero rank tensors (tautologies) in casual space and the implications are higher rank tensors, the orientation of which is determined with logical time arrow (implication vectors). These postulates must remain mutually inconsistent, or we could substitute them and replace with single one - which would lead into reduction of theory into tautology and into the lost of its ability to provide testable predictions ("we can draw infinite number of vectors through single point in space"). It means, the axioms of formal theory must remain mutually inconsistent, or we couldn't use this theory for predictions, testable the less - which is basically, what the Goedel's theorems are about for the natural numbers set. 
There are already two theories which describe the observable Universe from intrinsic perspective of transverse energy waves (general relativity) and extrinsic perspective (quantum mechanics). IMO these two theories are most deterministic models usable for description of observable Universe, but they cannot be reconciled mutually in solely deterministic way at the price (we cannot mix the intrinsic and extrinsic perspectives in deterministic way). -
http://www.newton.ac.uk/programmes/MOS/seminars/2011031610009.html
# MOS

## Seminar

### Domains of Discontinuity for Anosov Representations and Generalized Teichmüller Spaces

Guichard, O (Paris-Sud 11)

Wednesday 16 March 2011, 10:00-11:00

Satellite

#### Abstract

Many representations of surface groups (in particular those belonging to "generalized" Teichmüller spaces) are known to satisfy a strong dynamical property: they are Anosov representations. We shall first explain more fully this notion due to F. Labourie. Secondly we will explain how an Anosov representation $\Gamma \to G$ (for any group $\Gamma$) can be interpreted as the holonomy representation of a geometric structure by constructing a domain of discontinuity with compact quotient for $\Gamma$ in a homogeneous $G$-space. Finally we shall see to what extent this construction can be used in interpreting the generalized Teichmüller spaces as moduli of geometric structures. This is joint work with Anna Wienhard.
http://mathoverflow.net/questions/20059/computing-3-points-gromov-witten-invariants-of-the-grassmannian
## Computing 3 points Gromov-Witten invariants of the Grassmannian

This is from an exercise in Koch, Vainsencher - An invitation to quantum cohomology.

Background

The exercise asks to compute the 3-points Gromov-Witten invariants of the Grassmannian $G = \mathop{Gr}(1, \mathbb{P}^3) = \mathop{Gr}(2, 4)$ via the enumerative interpretation. In particular my problem is with computing the invariant $I_2(p \cdot p \cdot p)$, where $p$ is the class of a point on $G$. This is the number of rational curves of degree $2$ through $3$ generic points on $G$. Here we see $G$ as embedded by the Plucker map, and the degree is defined accordingly. A rational curve $C \subset G$ of degree $d$ will sweep out a rational ruled surface $S$ of degree $d$ in $\mathbb{P}^3$; up to here I agree with the hints of the book. The problem is the following hint:

Show that the condition on $C$ of passing through a point $q \in G$ corresponds to the condition on $S$ of containing the line in $\mathbb{P}^3$ corresponding to $q$.

This seems to me plain false. Of course one implication is true, but it is absolutely possible that $S$ contains a line without $C$ passing through the corresponding point. For instance, when $d = 1$, $C$ is a line on the Grassmannian, and it is well-known that these have the form `$\{ \ell \mid a \in \ell \subset A \}$`, where $a$ is a point and $A$ a plane of $\mathbb{P}^3$. In this case $S$ is the plane $A$, so it contains many lines which do not pass through $a$, hence these lines are not parametrized by $C$. Similarly, when $d = 2$, the surface $S$ can be a smooth quadric, which has two distinct rulings of lines; one will correspond to lines parametrized by $C$, but the other one will not. To see that a smooth quadric can actually arise, just invert the construction. Starting from a smooth quadric $S$ take any line $\ell \subset S$. There is a natural map $\ell \to G$ given by sending a point $q \in \ell$ to the unique line in the other ruling passing through $q$. The image of this map is a curve $C \subset G$, such that the associated surface is $S$ itself. Given the hint, the book goes on to say

Show that $I_2(p \cdot p \cdot p) = 1$, by interpreting this number as a count of quadrics containing three lines.

Now I certainly agree that given three generic lines in $\mathbb{P}^3$ there is a unique quadric containing them. To see this, just choose $3$ points on each line: a quadric will contain the lines iff it contains the $9$ points, and it can be shown that these $9$ points give $9$ independent conditions. Still I do not see how this implies the count $I_2(p \cdot p \cdot p) = 1$. What I guess happens is that generically we will have two lines in one ruling and one line in the other, so that the curve $C \subset G$ which sweeps $S$ will only pass through one or two of the assigned points.

Question

What is the right count? Is there something wrong in what I said above? Is it even true that $I_2(p \cdot p \cdot p) = 1$?

- Maybe I am missing completely the point: isn't the Grassmannian you want a quadric in P^5? And you want the conics through three given points on this quadric right? The conic will be the unique conic in the plane spanned by these three points. By homogeneity it should also be easy to show that the tangent space at this point is zero-dimensional, and thus you really should get 1 as an answer. 
– damiano Apr 1 2010 at 11:50 Reading more carefully your question, I think that in the hint, the ruled surface is meant to only contain the lines that are part of the ruling, not the spurious ones that may come when you look at the scroll in P^3. Thus in the case of degree 1, you only get the lines through the point, in the case of degree two, you only get the lines in one ruling. I thus suspect that the three lines will be in the same ruling (otherwise they would intersect, which is not very generic) and hence the unique quadric containing them will be the 1 you need. You still need to make sure it is a reduced point. – damiano Apr 1 2010 at 12:07 The fact that it is reduced, under these hypothesis, is a general fact about Gromov-Witten invariants, proved earlier in the book. I agree it is easy to see that there is a unique conic, I just did not think about proving it directly. I still think the hint is wrong, but the point is that, as you suggest, 3 generic lines will lie on the same ruling of the unique quadric containing them, otherwise they would meet. So in the end the curve parametrizing that ruling is the unique desired curve. If you submit this as an answer, I will be glad to accept it. By the way, are you the damiano I know? – Andrea Ferretti Apr 1 2010 at 12:27 ## 1 Answer Reading more carefully your question, I think that in the hint, the ruled surface is meant to only contain the lines that are part of the ruling, not the spurious ones that may come when you look at the scroll in P^3. Thus in the case of degree 1, you only get the lines through the point, in the case of degree two, you only get the lines in one ruling. I thus suspect that the three lines will be in the same ruling (otherwise they would intersect, which is not very generic) and hence the unique quadric containing them will be the 1 you need. You still need to make sure it is a reduced point. (Si, sono il damiano che conosci!) -
http://mathhelpforum.com/discrete-math/31893-discrete-math-set-question-someone-help-thank-you.html
# Thread:

1. ## Discrete math set question someone help?? thank you

In a period of 4 weeks, Joe played tennis every day. He played at least one set every day, and played a total of 40 sets. (In each day the number of sets Joe played is an integer.) Show that there is a consecutive span of days during which Joe played exactly 15 sets of tennis.

2. Originally Posted by peiyilee
In a period of 4 weeks, Joe played tennis every day. He played at least one set every day, and played a total of 40 sets. Show that there is a consecutive span of days during which Joe played exactly 15 sets of tennis.

Suppose that $S_k$ is the sum total of all sets completed at the end of the kth day. Thus $1 \le S_1 < S_2 < \cdots < S_{27} < S_{28} = 40$. That is a set of 28 different sums. Construct a new set of 28 different sums, $S_1 + 15 < S_2 + 15 < \cdots < S_{27} + 15 < S_{28} + 15 = 55$. Now we have 56 numbers, each between 1 and 55. Use the pigeonhole principle to conclude.
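Here is a short sketch of the same prefix-sum argument in code; the example schedule is an arbitrary one chosen for illustration.

```python
# Given a 28-day schedule with at least one set per day and 40 sets in total,
# find a consecutive span of days whose sets sum to exactly 15 (sketch).

def find_span(sets_per_day, target=15):
    """Return (i, j) with sum(sets_per_day[i:j]) == target, or None."""
    prefix = [0]
    for s in sets_per_day:
        prefix.append(prefix[-1] + s)      # prefix[k] = S_k from the answer
    seen = {p: k for k, p in enumerate(prefix)}   # values are distinct here
    for j, p in enumerate(prefix):
        if p - target in seen:             # S_j = S_i + 15 for some earlier i
            return seen[p - target], j
    return None

# Example schedule: 20 days with 1 set, one day with 13 sets, 7 days with 1 set.
schedule = [1] * 20 + [13] + [1] * 7       # 28 days, 40 sets
print(find_span(schedule))                 # e.g. (0, 15): days 1..15 give 15 sets
```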
http://crypto.stackexchange.com/questions/3674/random-sequence-generator-function
# Random Sequence Generator function

By someone's suggestion, I am posting this question from math.stackexchange.com. I want to find a function or algorithm, whichever is suitable, which can provide me a random sequence. Like

Input: 3
Output: {1,2,3} or {1,3,2} or {2,1,3} or {2,3,1} or {3,1,2} or {3,2,1}

So if I enter a number N, the output will be a random permutation of the set {1,2,...,N}. How can I write this type of algorithm? Actually I want to find out the logic behind it.

- No closed form function can describe a truly random permutation by definition. But if $N = 2^k$, then block ciphers come a significant fraction of the way. – Thomas Aug 29 '12 at 17:27
– CodesInChaos Aug 29 '12 at 18:12

## 2 Answers

If for some reason the solution given by @poncho does not please you (e.g. you want $N$ to be of the order of a few billion but you do not have a few gigabytes of RAM), then there are other solutions, in which you get the permutation as an evaluable procedure (in other words, a block cipher). A practical solution is the Thorp shuffle. It is approximate, but the approximation can be made as good as needed by adding more rounds (except that, as a Feistel-derivative, it implements only even permutations, so if the attacker knows the output for $N-2$ inputs he can compute the last two outputs with 100% certainty). There is also a "perfect" solution but it involves some floating-point operations which need potentially unbounded accuracy, so in practice it is very expensive.

- 1 Can't any even-permutations-only generating cipher be made "perfect" by a trivial postprocessing conditionally switching two outputs (with a 50% probability based on the key)? – maaartinus Aug 31 '12 at 0:23
@maaartinus: I tend to think so, but it would deserve some careful analysis. – Thomas Pornin Aug 31 '12 at 13:12

The classical way to generate a random permutation is the Fisher-Yates shuffle; it takes an underlying random number generator, and produces a random permutation. With just a bit of care, it can generate each permutation with equal probability (assuming the underlying random number generator outputs are independent and uniformly distributed). The only downside is that the algorithm requires N to be small enough so that you hold the entire permutation in memory; that doesn't sound like that's a problem for you.

- Sorry, but I don't want to use any buffer to store anything. – Rahul Taneja Aug 29 '12 at 17:31
2 @RahulTaneja: you said you wanted the output to be the random permutation. If you didn't want to use a buffer, how is the function or algorithm supposed to return you the permutation? – poncho Aug 29 '12 at 17:32
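For concreteness, here is a minimal sketch of the Fisher-Yates shuffle mentioned in the last answer. It uses Python's `random` module; for cryptographic use one would draw the random indices from a CSPRNG (e.g. the `secrets` module) instead.

```python
import random

def random_permutation(n):
    """Return a uniformly random permutation of {1, ..., n} (Fisher-Yates)."""
    perm = list(range(1, n + 1))
    for i in range(n - 1, 0, -1):
        j = random.randint(0, i)          # uniform index in 0..i inclusive
        perm[i], perm[j] = perm[j], perm[i]
    return perm

print(random_permutation(3))   # e.g. [2, 3, 1]
```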
http://math.stackexchange.com/questions/tagged/notation
# Tagged Questions Question on the meaning, history, and usage of mathematical symbols and notation. Please remember to mention where (book, paper, webpage, etc.) you encountered any mathematical notation you are asking about. learn more… | top users | synonyms 4answers 43 views ### Any commutative associative operation can be extended to a function on nonempty finite sets This is a fact we use very frequently in general mathematics when we write such notations as $1+2+3+4$: since we know that $+$ is commutative and associative, we can just "drop the parentheses" and ... 1answer 34 views ### Set Notation (Axiom of Replacement) This question is related to the one I asked yesterday here in that it's related to another one of the Zermelo-Fraenkel Axioms. After looking over the notation used to describe the axiom, that is: ... 1answer 28 views ### On Landau notations How common it is to write e.g. $1-o(1)$ for a function that eventually approaches $1$ from below (or eventually equals $1$)? Would a better notation be $1-|o(1)|$ or what is meant is already obvious ... 1answer 54 views ### Is there a name for this given type of matrix? Given a finite set of symbols, say $\Omega=\{1,\ldots,n\}$, is there a name for an $n\times m$ matrix $A$ such that every column of $A$ contains each elements of $\Omega$? (The motivation for this ... 0answers 68 views ### Working with subsets, as opposed to elements. Especially in algebraic contexts, we can often work with subsets, as opposed to elements. For instance, in a ring we can define $$A+B = \{a+b\mid a \in A, b \in B\},\quad -A = \{-a\mid a \in A\}$$ ... 2answers 48 views ### Set Notation (Axiom of Infinity) I'm having trouble understanding the notation used in describing the axiom of infinity (which is number 6 in the Wolfram MathWorld page). I understand what the axiom is saying, but I'm trying to ... 0answers 12 views ### Complex representation and Dual representation notation Let's say we have a representation $\rho$ of $G$ on a vector space $V$. Wikipedia refers to the dual representation as $V^*$, but the dual vector space as $\overline{V}$. It does the opposite for the ... 0answers 18 views ### Notation for Hadamard division What is a reasonable notation for Hadamard division of two matrices? Several forum threads point to $\oslash$ as a possibility, but it feels "forced", for lack of a better word (I might go with a ... 2answers 47 views ### What exactly does this physically mean? Let X(w) be a real random variable on ($\Omega$ , P). The image X($\Omega$) the set of all the values X(w) can take ,written $\Omega^{X}$. For any set $B \subset \Omega^{X}$ the probability of the ... 1answer 36 views ### Confusing symbol in papers on hybrid logic In literature about hybrid logic I'm reading for my thesis I've come across the following symbol: ::= Now, I've never seen this notation before. I can also not ... 3answers 42 views ### How to interpret summation signs I'm taking a course in statistics, and I really need to brush up my math to be able to follow the book at times. I'm looking at formulas for sum of squares, and I am slightly confused about the ... 0answers 48 views ### Pronunciation of $H^{1}(G, M)$ How does one say the cohomology group $H^{1}(G, M)$ out loud in a talk/lecture? Do people just say "H1 of G comma M"? 
1answer 56 views ### “The whole is greater than the sum of its parts” as a mathematical expression [closed] I'm trying to come up with a coherent way to express the saying "The whole is greater than the sum of its parts" using mathematical constructs. The second half of the statement (greater than the sum ... 1answer 46 views ### Mathematical Symbol In the following paper, what does the symbol $\Phi$ in equation $3.1$ (page $3$) represent? Does it represent the normal distribution? 3answers 54 views ### Using $p\supset q$ instead of $p\implies q$ I saw that a use for the notation $p\supset q$ instead of $p\implies q$ that got me a bit confused. One occurrences is in this Wikipedia link. It seems to me opposite than what it should be, let me ... 1answer 39 views ### Is it a standard to say that $a \oplus a_{\small 1}=0$ or $a \veebar a_{\small 1}=0$? I am trying to express the following: $a$ or $a_{\small 1}=0$ but only one of them equals zero. so if $a=0$ then $a_{\small 1}\neq 0$ and if $a\neq 0$ then $a_{\small 1}=0$. And I'm ... 1answer 15 views ### Notation minimum of a column vector I'd like to know the notation to express the minimum of a column vector. Is this notation correct? \begin{equation} \min \left[\matrix{ \left|b_{n}-b_{n+1}\right| \cr ... 2answers 101 views ### Is it an abuse of notation to omit the leading zero in a decimal less than 1? Is it acceptable to write $.001$ rather than $0.001$ when using decimal notation? Are there contexts in which omitting the leading zero is acceptable, and other situations in which it is not? 1answer 41 views ### What is the equivalent of a diagonal in a non-square matrix or array? I have a non-square matrix $M$, that looks something like this: $M=\left[ \begin{array} & a & b & c \\ d & e & f \\ g & h & i \\ j & k & l \\ \end{array}\right]$ ... 0answers 34 views ### Definition(s) for variable binding in first-order logic The following statement made me realize that variable binding can be defined in first-order logic: The same holds for λ terms to define functions. There is no reason that they could not be ... 1answer 36 views ### Confusion over calculus notation (differentials/derivatives) I have read from multiple sources that dy/dx is not to be interpreted as a ratio as the idea of 'dy' and 'dx' themselves will lead to logical difficulties. However, I have seen in many areas (e.g. ... 9answers 1k views ### How to represent the floor function using mathematical notation? I'm curious as to how the floor function can be defined using mathematical notation. What I mean by this, is, instead of a word-based explanation (i.e. "The closest integer that is not greater than ... 3answers 100 views ### What is the meaning of the parentheses in $\phi^{-1}\left[\{\phi(g)\}\right]=gH=Hg$? I am studying homomorphisms is groups and i saw a theorem saying: For $g$ in a group $G$, the cosets $gH$ and $Hg$ are the same, and collapsed onto the single element $\phi(g)$ by $\phi$. That is, ... 1answer 32 views ### How to use a clamp function / median in mathematical notation? I'm writing some mathematical equations that describe some computations in my program and it's pretty important that it's written correctly. At one point, it clamps or truncates a value, $x$, into the ... 2answers 208 views ### Notation: Why write the differential first? From reading answers here, I've noticed that some people write integrals as $\int dx \; f(x)$, while other people write them as $\int f(x)\;dx$. I realize that there is no mathematical difference ... 
3answers 88 views ### Should I put interpunction after formulas? I am presently doing my first substantial piece of mathematical writing, hence this, probably somewhat silly, question. How does display-style mathematics interact with punctuation? More ... 6answers 84 views ### $(\mathbf{u}^T\mathbf{v})\mathbf{v} = \mathbf{u}^T(\mathbf{v}\mathbf{v})$ doesn't hold for $\mathbf{u}, \mathbf{v}\in\mathbb{R}^n$ - why? Suppose I have vectors $\mathbf{u}$ and $\mathbf{v}$ in $\mathbb{R}^n$. It is well defined to write $5\mathbf{v}$ or $c\mathbf{v}$ for scalar $c$. Since the inner product of $\mathbf{u}$ and ... 2answers 124 views ### What is the first cardinal number which is greater than $\omega$ [closed] What is the first cardinal number which is greater than $\omega$? How to denote it? Thank you very much. ADDed: Thanks Brain and Quinn for explaining for me. However, honestly saying, the ... 2answers 94 views ### What does “-2E-07x” means? [duplicate] I'm a programmer who had always been lacking some mathematical skills, yes it's a shame, I know. I'm making this little software for a biologist friend, and at some point I need to pull out a graph ... 0answers 25 views ### Can anybody recommend a comprehensive source for understanding mathematical notations? I foten struggle with understanding some of the mathematics written down in papers. This stuggle is often due to notation used. Therefore, I was wondering whether somebody is aware of a resource that ... 0answers 33 views ### Resources for learning formal math notation Does anyone know of some resources that provide a good introduction to common notation used in formal math? For example, I honestly don't know how to interpret $f: \mathbb{Z} \rightarrow \mathbb{Z}$. ... 6answers 95 views ### Function Notation due to our national cirriculum (the way in which it was taught in high school). We just said that f(x) means a function. Though I understand this isn't necessarily correct? In high school we used ... 2answers 202 views ### What does the notation $\twoheadrightarrow$ mean? I don't know what this double-arrow $\twoheadrightarrow$ means! 1answer 43 views ### What's the difference of naming a polynomial ring as $\mathbb{C}\{ x,y\}$ and $\mathbb{C} [x,y]$? I sometimes see both notations and I am led (maybe misled) to believe that they are the same thing. What is the formal difference between both of them? Or there isn't any? 2answers 34 views ### Matrix rows notation I'm working with a set of $M$ vectors $\{\mathbf{w}_i \in \mathbb{R}^N, \, i = 1, \ldots, M \}$. Since single vectors are usually considered as column vectors, I'm defining a matrix \mathbf{W} = ... 1answer 42 views ### Is this the correct notation? $\left [ \left \{ 1,2,3,4 \right \}! \right]^{-1}$Is this the correct notation if one wanted to obtain the factorial for each number in a sequence and then take the sequence and inverse each number in ... 9answers 574 views ### What could be better than base 10? Most people use base 10; it's obviously the common notation in the modern world. However, if we could change what became the common notation, would there be a better choice? I'm aware that it very ... 0answers 20 views ### Notation for Space of Multilinear Functions I'm in doubt if there is some "standard" notation for the space of multilinear functions on the cartesian product of $p$ vector spaces $V_i$ with values in another vector space $W$. I have seem for ... 
0answers 41 views ### Frequency of Math Symbols [duplicate] Does anyone know of a study that has calculated the frequency of math symbols based on some popular mathematics journals or math corpus? For example in English you have letter frequencies of the most ... 2answers 111 views ### Frequency of Math Symbols Does anyone know of a study that has calculated the frequency of math symbols based on some popular mathematics journals or math corpus? For example in English you have letter frequencies of the most ... 1answer 62 views ### what does z subscript something mean Decide a positive integer $N \in\mathbb Z$. Generate a uniformly distributed random positive integer sequence: $$v_1, v_2, \ldots,v_n\in\mathbb Z_N$$ My question is, what does $\mathbb Z_N$ really ... 1answer 35 views ### Any name for an isosceles triangle sides Is there an English translation for Finnish words kanta and kylki? Namely, if $ABC$ is an isosceles triangle with $AB=AC$ then $BC$ is kanta in Finnish and $AB$, $BC$ are both kylki. 1answer 17 views ### What is being maximised in the channel capacity formula? The channel capacity formula is given as such: $$C=\max_{p(x)}I(X,Y)$$ Does this mean that it is the maximum probability multiplied by the mutual information, or is something else being maximised ... 3answers 51 views ### Difference between $\land$ and braces I was wondering what are the difference between the $\land$ and $\begin{cases} \\ \\ \end{cases}$ symbol. As I know, they both mean "and". So far, I've noticed the $\land$ on statements (not sure ... 2answers 49 views ### Probability notation Hey guys, I was just wondering why in my textbook(A First Course in Probability, 8th edition) and basically everywhere I've looked at when we have some random variable(assume for the sake of the ... 2answers 43 views ### Elementary Set Theory - Relations I'm not exactly sure what to search for this problem I'm having, as I don't know the keywords, so I figured the best action would be to ask a question. I have this question: ... 2answers 53 views ### What does $\mathbb{\bar C}$ denote in complex analysis? What does $\bar A$ denote when $A \subseteq \mathbb{C}$? I've seen it used in some places as the algebraic closure, other places as $\bar A = A$ \ $\partial A$ and other places again as \$\bar A = ... 2answers 19 views ### Notation for number of value changes in a sequence Let $A=\{a_{1}, a_{2}, a_{3}, a_{4}, ...,a_{n}\}$ be a finite sequence , where $a \in \mathbb{N}$. I would like to know the notation for something similar to a change rate. If I programmed, what I ... 2answers 36 views ### Summation and Product Bounds If I have a sum or product whose upper index is less than its start index, how is this interpreted? For example: $$\sum_{k=2}^0a_k,\qquad \prod_{k=3}^1b_k$$ I want to say that they are equal to the ... 1answer 38 views ### Should brackets be placed around an exponentiated factorial? For example, one can derive an approximation of $\pi$ from Stirling's approximation with one additional term as $$\lim_{n \to \infty} \frac{72n(n!)^2}{n^{2n} e^{-2n} (12n+1)^2}$$ but is it correct ...
http://math.stackexchange.com/questions/94997/something-that-i-found-and-would-like-to-see-if-its-known?answertab=votes
# Something that I found, and would like to see if it's known.

Well, I am quite sure it's known (I mean, number theory has existed for thousands of years); warning beforehand, it may look like numerology, but I try not to go to mysticism.

So I was in a bus, and from boredom I started just adding numbers in the following way: $$1+1=2$$ $$2+2=4$$ $$4+4=8....$$ etc up to $32,768$ (it was quite boring, I can tell... :-)), I didn't have a calculator. And I noticed that if I keep adding the digits until I get a number from 1 to 10, I get that, for example, for $8+8=16$, $1+6=7$; now seven steps after this, at $512+512=1024$, for which $1+2+4=7$; and again after $7$ steps $32768+32768=65536$, and adding $6+5+5+3+6=25$, $2+5=7$.

So this led me to conjecture that this repetition may occur endlessly. Now of course I can write some code that will check for large numbers, but I am tired, long day. So if this is indeed the case (which could be disproved, but even then I would wonder when this repetition stops), then why? As I said, I am tired, it may make no sense, and I might have made mistakes in my calculations, and it may be trivial. Either way, if you have some answer, I would like to hear it.

- Interesting! I've never heard of such a pattern before so it may very well be original. I'd love to hear what happens when you write that computer program. – Samuel Reid Dec 29 '11 at 18:33
12 – Qiaochu Yuan Dec 29 '11 at 18:34
1 As a layman, the sheer number of patterns that mathematicians have already found and documented amazes me. I suppose math's been around for a long while, but still! – Reid Dec 30 '11 at 6:26

## 4 Answers

"Adding the digits until [you] get a number from $1$ to $10$" is the same as finding the remainder when dividing by $9$ (except that you would get $9$ instead of $0$ by adding digits, and there is no reason to stop with $10$: you can add the digits to get $1$). It turns out, by something called Euler's Theorem, that if $a$ is not divisible by $3$, then $a^6$ always leaves a remainder of $1$ when divided by $9$. In particular, $2^6$ has remainder $1$ when divided by $9$. Another property of remainders is that if $a$ leaves a remainder of $r$ when divided by $9$, and $b$ leaves a remainder of $s$, then $ab$ leaves the same remainder as $rs$. So, that means that if $2^n$ leaves a remainder of $r$, then $2^n\times 2^6 = 2^{n+6}$ will leave the same remainder as $r\times 1 = r$; that is, the same remainder as $2^n$. So adding the digits of $2^n$ until you get a single number between $1$ and $9$ will give you the same answer as doing it for $2^{n+6}$. This is what you observed: $8=2^3$, and $512=2^9 = 2^{3+6}$. You will get the same answer ($7$) with $2^{15}$, $2^{21}$, $2^{27}$, etc. I note that you were slightly off in describing $512$ as being "seven steps after" $8$: it's really only six steps later: $$8\stackrel{1}{\mapsto} 16 \stackrel{2}{\mapsto}32 \stackrel{3}{\mapsto}64\stackrel{4}{\mapsto}128\stackrel{5}{\mapsto}256\stackrel{6}{\mapsto}512.$$

- OK, thanks. I should have known it was from something I learned already, I thought to myself that this must be true but I guess that I forgot this theorem. – MathematicalPhysicist Dec 30 '11 at 3:43
5 – r.e.s. Dec 30 '11 at 4:20

This is indeed the case. The value you get out at the end of all your summations is just the value of your number mod 9 (this is because any natural number is congruent to the sum of its digits mod 9). For example, $1024=9\cdot113+7$ is congruent to 7 mod 9. 
Also note that the numbers you're getting out by your procedure of successive doubling are just the powers of 2. Finally, I think you're off by one in your counting of sevens -- the powers of two that are working for you are $2^4=16$, $2^{10}=1024$, $2^{16}=65536$, etc. So your fact is true because it's true for $2^4$, and each subsequent term differs by a factor of $2^6$ which is congruent to 1 mod 9 (either by direct computation or by Euler's Theorem -- $\phi(9)=6$). So in general, the sum of the sum of the .... sum of the digits of your values $2^{4+6k}$ is given by $$2^{4+6k}\equiv 2^4\cdot (2^6)^k\equiv 7\cdot 1^k\equiv \boxed{7}\pmod{9}.$$ - There is indeed a period of $6$. The sums of digits cycles through the numbers $1,2,4,8,7,5$. You are doubling the number each time, but then the sum of the digits of that number will double as well, since it is a linear function of the number. This then shows that if you start with $1$ you will go through the cycle $$1 \to 2 \to 4 \to 8 \to 16=7 \to 14=5 \to 10=1 \to \ldots$$ - 1 "You are doubling the number each time, but then the sum of the digits of that number will double as well, since it is a linear function of the number." -- This claim is strange, and not quite correct if interpreted literally; e.g., the sum of digits in $55$ is $10$ whereas it's $2$ for $110$. Are you doubling each digit independently and adding? – Srivatsan Dec 29 '11 at 20:25 1 And $1+0=1$. Therefore, doubled. – Raskolnikov Dec 30 '11 at 8:42 A repetition of this sort was bound to happen, and it always happens even under more general circumstances. First, as others have pointed out, the sum of the base-10 digits of a number $N$ is congruent to $N$ modulo $9$. The reason for this is that $10\equiv 1 \bmod 9$, and so $$\begin{align*} N &= a_t \cdot 10^t + a_{t-1} \cdot 10^{t-1} +\cdots +a_2 \cdot 10^2 + a_1 \cdot 10 +a_0 \\ & \equiv a_t\cdot 1^t + a_{t-1}\cdot 1^{t-1} +\cdots +a_2\cdot 1^2 + a_1 \cdot 1 +a_0 \bmod 9\\ & \equiv a_t + a_{t-1} +\cdots +a_2 + a_1 +a_0 \bmod 9. \end{align*}$$ Thus, if you add all the digits of $N$ and obtain $N_1$, then $N\equiv N_1\bmod 9$. If now we add all the digits of $N_1$ and obtain $N_2$, then $N\equiv N_1\equiv N_2 \bmod 9$. In this way we may create a sequence $N>N_1>N_2> \cdots$, and since all $N_i$ are natural numbers, we end up at some $1\leq N_t \leq 9$, such that $N_t\equiv N\bmod 9$, so $N_t$ is simply the remainder of division of $N$ by $9$. Let $a>1$ be any natural number relatively prime to $3$, whose sum of base-10 digits is $b$, and let $s$ be the order of $b\bmod 9$, i.e., $s$ is the smallest positive number such that $b^s\equiv 1 \bmod 9$. Then: • The sum of the base-10 digits of $a$ is $b\bmod 9$. • The sum of the base-10 digits of $a^{1+sk}$ is also $b\bmod 9$, for all $k\geq 0$, because $$a^{1+sk}\equiv b^{1+sk}\equiv b\cdot b^{sk}\equiv b\cdot (b^s)^k\equiv b \cdot 1\equiv b \bmod 9.$$ • For a fixed $t\geq 1$, the sum of the base-10 digits of $a^{t+sk}$ is $b^t\bmod 9$, for all $k\geq 0$, for similar reasons as above. • For a fixed $t\geq 1$, the sum of the base-10 digits of $$a^{t+sk} + a^{t+sk}$$ is $2\cdot b^t \bmod 9$, for all $k\geq 0$. • And more generally, for fixed $r\geq 1$ (relatively prime to $3$) and $t\geq 1$, the sum of the base-10 digits of $$a^{t+sk} + \cdots + a^{t+sk} = r\cdot a^{t+sk},$$ where we have added $r$ copies of $a^{t+sk}$, is $r\cdot b^t \bmod 9$, for all $k\geq 0$. Your example is the case where $a=2$, $b=2$, $s=6$, $t=3$ and $r=2$. 
According to the formula above, the sum of the digits must be $$r\cdot b^t \equiv 2\cdot 2^3\equiv 7 \bmod 9.$$ But any other choice works just as well. Pick $a=11$, $b\equiv a\equiv 2 \bmod 9$, $s=6$, $t=2$ and $r=5$. Then, the sum of the digits of the numbers $$11^{2+6k}+11^{2+6k}+11^{2+6k}+11^{2+6k}+11^{2+6k},$$ for all $k\geq 0$, is congruent to $$r\cdot b^t \equiv 5\cdot 2^2\equiv 2 \bmod 9.$$ For instance: • $11^{2}+11^{2}+11^{2}+11^{2}+11^{2}=605,$ and $6+5=11$ and $1+1=2$. • $11^{8}+11^{8}+11^{8}+11^{8}+11^{8}=1071794405,$ and $$1+0+7+1+7+9+4+4+5=38$$ and $3+8=11$, and $1+1=2$. • Etc. -
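A quick numerical check of the pattern discussed in these answers (a throwaway sketch): the repeated digit sum of $2^n$ equals $2^n \bmod 9$ and cycles with period $6$.

```python
# Verify that the repeated digit sum ("digital root") of 2**n matches 2**n mod 9
# and repeats with period 6.

def digital_root(n):
    while n > 9:
        n = sum(int(d) for d in str(n))
    return n

for n in range(1, 25):
    assert digital_root(2 ** n) == (2 ** n) % 9   # never 0, since 9 ∤ 2**n
    print(n, 2 ** n, digital_root(2 ** n))
# prints the repeating pattern 2, 4, 8, 7, 5, 1, 2, 4, 8, ...
```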
http://physics.stackexchange.com/questions/51271/force-and-torque-question-on-an-isolated-system?answertab=active
# Force and Torque Question on an isolated system

If there's a rigid rod in space, and you give some external force perpendicular to the rod at one of the ends for a short time, what happens? Specifically: What dependence does the moment of inertia have? If it rotates, what is the center of rotation? Does it matter that the rod is rigid? What happens if it's "springy", say a rubber rod instead? Is there a difference between exerting a force for a short period of time, and having an inelastic collision (say a ball hits the end of the rod instead of you pressing)?

- – chase lambert Jan 15 at 11:21
What is the center of rotation though? – chase lambert Jan 15 at 14:15

## 1 Answer

The moment of inertia of a rod with a perpendicular axis of rotation through the center of mass is $$I=\frac{1}{12}ml^2$$ where m is the mass and l the length. If you apply a torque $$\tau=r\times F$$ (all vectors), then $$\tau=I\alpha$$ lets you calculate the angular acceleration $\alpha$.

If the rod deforms, energy will be needed for deformation and creation of heat. If the force F is exerted over a longer time, its effect has to be integrated over that time. And no: in both cases it is the force integrated over the contact time (the impulse) that determines the resulting motion.

-
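As a numerical illustration of these relations (the mass, length, force and duration below are assumed example values, not taken from the question): a brief perpendicular push of force $F$ lasting $dt$ at one end of a free rod gives the centre of mass the velocity $F\,dt/m$ and, about the centre of mass, the angular velocity $(l/2)F\,dt/I$.

```python
# Small numeric sketch of the impulse relations for a free rod in space.
# All numbers are illustrative assumptions.

m, l = 2.0, 1.0                  # mass [kg] and length [m] of the rod
F, dt = 10.0, 0.01               # perpendicular force [N] applied for dt [s] at one end
I = m * l**2 / 12.0              # moment of inertia about the centre of mass

v_cm  = F * dt / m               # linear impulse / mass
omega = (l / 2) * F * dt / I     # angular impulse about the CM / I

print(v_cm, omega)               # 0.05 m/s and 0.3 rad/s for these numbers
```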
http://physics.stackexchange.com/questions/27313/holographic-renormalization-in-non-ads-non-cft?answertab=oldest
# Holographic Renormalization in non-AdS/non-CFT In AdS/CFT, the story of renormalization has an elegant gravity dual. Regularizing the theory is done by putting a cutoff near the conformal boundary of AdS space, and renormalization is done by adding counterterms on that surface. Mathematically this is also interesting, since this utilizes the Lorentzian generalization of the Graham-Fefferman expansion. But, in the spirit of “effective holography”, one ought to be able to do that in spacetimes which do not admit a conformal boundary. I am wondering if anyone has ever seen an attempt to systematically define holographic renormalization in such spaces, for example for p-branes ($p \neq 3$), the NS fivebrane, or the Sakai-Sugimoto model, etc. In such cases one can still take a cutoff surface at the UV of the theory, take the fields to be essentially non-fluctuating, but one does not have a conformal boundary and all the associated machinery. - ## 1 Answer I believe one has to distinguish two kinds of dualities. AdS/CFT, even in the context where it describes an RG flow (so not the pure AdS_5xS^5 case), is an exact duality to a four-dimensional theory, which interpolates between one well-defined conformal field theory in the UV and another conformal field theory in the IR. So holographic renormalization is in one-to-one correspondence with renormalization in the four-dimensional theory (that is to say, one can map the counterterms, and identify diff invariance with the renormalization group invariance of correlation functions). On the other hand, Sakai-Sugimoto is not a true duality, it only reduces in the IR to something like a four-dimensional theory (one would hope). The UV of the full Sakai-Sugimoto setup has nothing to do with the UV of QCD or any other four-dimensional theory. So in my opinion there is no reason that (whatever renormalization means in this context) it would resemble what we expect in QCD or any other RG flow in four dimensions. - I am not sure I fully agree. The cleanest case is a complete RG flow for field theory defined at all scales. But, most effective field theories are not defined at all scales, normally that does not prevent you from defining cut-off independent quantities in the IR. Of course, this is easier said than done in the holographic context, but it is entirely possible there are some papers discussing this which I’ve missed. – user566 Nov 1 '11 at 19:33 2 Yes you can do that, but above the scale of pion physics it won't be four-dimensional. And at the scale of pion physics there is nothing beyond Leutwyler+Gasser.The interesting thing about holographic RG is that you can see the onset of confinement and symmetry breaking in a controlled setup which mirrors four-dimensional physics. That's not the case in Sakai-Sugimoto (to my understanding). – Zohar Ko Nov 1 '11 at 20:02 Yeah, Sakai-Sugimoto may not be the best example, maybe Klebanov-Strassler is better place to start. – user566 Nov 1 '11 at 20:30 Yes, KS is much better. – Zohar Ko Nov 2 '11 at 15:12 1 Generally that would mean that there is no dual four-dimensional description in the UV, and my objection is in order. (In other words, in this context it is not clear what holographic RG is good for and what should it be compared to.) A cascade is a kind of a middle ground, where there is no ultimate UV fixed point, but also the departure from ordinary Wilsonian physics is not very significant. So in the case of a cascade I would think the idea of holographic RG should make sense. 
– Zohar Ko Nov 2 '11 at 19:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9232197999954224, "perplexity_flag": "middle"}
http://mathhelpforum.com/statistics/182806-choosing-5-balls-10-black-20-white-40-red.html
# Thread: 1. ## Choosing 5 balls from 10 black, 20 white, 40 red. 10 black, 20 white, 40 red The question is what is probability when we reach 5 of them, and we have 2 white, 2 red, 1 black....... And some guy solved this question on youtube, and his answer was 12.2% He solve like : (20C2 x 40C2 x 10C1)/ (70C5) I know he was right. BUt I don't understand it I solve by another way like : Doing it your way, you would calculate the probability for each of the 30 possible ways to draw 2W, 2R and one B. So, to draw WWRRB in that order, you get (20/70)(19/69)(40/68)(39/67)(10/66) Each iteration of 2W, 2R, and 1B has an identical probability. So, multiply by 30, and you get 12.2%. Can somebody explain me about first way? I really can't make sense!! 2. Originally Posted by daivinhtran 10 black, 20 white, 40 red The question is what is probability when we reach 5 of them, and we have 2 white, 2 red, 1 black....... And some guy solved this question on youtube, and his answer was 12.2% He solve like : (20C2 x 40C2 x 10C1)/ (70C5) I know he was right. BUt I don't understand it I solve by another way like : Doing it your way, you would calculate the probability for each of the 30 possible ways to draw 2W, 2R and one B. So, to draw WWRRB in that order, you get (20/70)(19/69)(40/68)(39/67)(10/66) Each iteration of 2W, 2R, and 1B has an identical probability. So, multiply by 30, and you get 12.2%. Can somebody explain me about first way? I really can't make sense!! Do you know what nCk means? You have to choose 5 balls from 70(10+20+40), you have 70C5 combinations , now there some conditions: i. 2 W balls from 20 possible. (20C2) ii. 2 R balls from 40 possible. (40C2) iii. 1 black ball from 10 possible. (10C1) Can you proceed? 3. Originally Posted by Also sprach Zarathustra Do you know what nCk means? You have to choose 5 balls from 70(10+20+40), you have 70C5 combinations , now there some conditions: i. 2 W balls from 20 possible. (20C2) ii. 2 R balls from 40 possible. (40C2) iii. 1 black ball from 10 possible. (10C1) Can you proceed? i KNOW WHAT 70c5 MEAN..... my ways, and his way 's so different..... It's not related with his way.... BUt it comes out same answer 4. When you have an unordered, without replacement, choice, you know the possible orderings. If, say, you were dealing with a poker hand, you would have $52\times 51\times 50\times 49\times 48$ possible orderings, but you have to divide out the redundant orderings. The five card hand can be arranged in $5\times 4\times 3\times 2\times 1$ ways, so the total number of unordered hands is $\frac{52\times 51\times 50\times 49\times 48}{5\times 4\times 3\times 2\times 1} = \frac{52!}{5!47!}$ Now consider what you wrote above: $\frac{20\times 19\times 40\times 39\times 10}{70\times 69\times 68\times 67\times 66}\times 30$ See something similar? Now write out the statements like $\binom{70}{5}$ and the other ones. With a little algebra, I'm sure you will find that one expression does, in fact, lead to the other. 
Spoiler: $\frac{\binom{20}{2}\times \binom{40}{2}\times \binom{10}{1}}{\binom{70}{5}}$ $=\frac{\frac{20\times 19\times 40\times 39\times 10}{4}}{\frac{70\times 69\times 68\times 67\times 66}{5\times 4\times 3\times 2\times 1}}$ $=\frac{20\times 19\times 40\times 39\times 10\times 120}{70\times 69\times 68\times 67\times 66\times 4}$ $=\frac{20\times 19\times 40\times 39\times 10\times 30}{70\times 69\times 68\times 67\times 66}$ You simply thought about the problem in another way, but that way misses the point of using the binomial coefficient: it simplifies notation and intuition. You may not be comfortable with thinking about it in terms of the choose operation, but you should work at it. Your way develops from the basic counting rules, but so does the choose operation. The advantage of the choose operation is that we can simplify the expression to the first one above, which carries a much simpler intuition: e.g., from choosing 2 whites from a bin of 20 whites, we have 20 choose 2 many. Which is easier to comprehend, that or your accounting for the various orders, that they're all equally likely, and that there are 30 of them? They say the same thing, ultimately, but the choose operation is simpler. 5. Originally Posted by daivinhtran 10 black, 20 white, 40 red The question is what is probability when we reach 5 of them, and we have 2 white, 2 red, 1 black....... And some guy solved this question on youtube, and his answer was 12.2% He solve like : (20C2 x 40C2 x 10C1)/ (70C5) I know he was right. BUt I don't understand it I solve by another way like : Doing it your way, you would calculate the probability for each of the 30 possible ways to draw 2W, 2R and one B. So, to draw WWRRB in that order, you get (20/70)(19/69)(40/68)(39/67)(10/66) Each iteration of 2W, 2R, and 1B has an identical probability. So, multiply by 30, and you get 12.2%. Can somebody explain me about first way? I really can't make sense!! The way he did it is by counting the number of ways to get 2 reds with 2 white and 1 black. There are 40 reds, so the number of ways to pick a pair of reds is 40C2. The number of ways to pick 2 whites from the 20 is 20C2. There are of course 10C1 = 10 ways to pick the black. In obtaining the group of 5 balls with 2 red, 2 white and 1 black, realise that ANY pair of whites can be matched with ANY pair of reds. Hence the number of possible pairs of reds is multiplied by the number of possible pairs of whites. Then the resulting groups of 4 can all be matched with any of the 10 blacks. So we multiply by 10 to find the number of different ways to obtain a group of 2 reds, 2 whites and 1 black. Then the total number of such groups of 5 possible is 20C2 X 40C2 X 10C1. (If you choose small numbers, it's easier to get used to. Suppose there are 3 reds, 4 whites and 2 blacks. There are 3 ways to pick a pair of reds, 6 ways to pick a pair of whites and 2 ways to pick a black. Therefore, there are 3X6X2 such groups or sets of 5 satisfying the condition. That gives the probability numerator. The denominator is the number of groups of 5 from the total without any restrictions placed) Then the denominator of the probability fraction is the number of ways to pick a group of 5 from 70. You could think of this method as "picking all 5 at the same time". Your method is more akin to picking them "sequentially" one after the other, so you've taken into account the orders. You must get the same result.
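For readers who want to see the two computations side by side, here is a short numerical check (added for illustration, not part of the original thread):

```python
from math import comb

# Counting argument: (20C2 * 40C2 * 10C1) / 70C5
p_counting = comb(20, 2) * comb(40, 2) * comb(10, 1) / comb(70, 5)

# Sequential argument: one ordering (W W R R B) times the 30 equally likely orderings
p_ordered = 30 * (20/70) * (19/69) * (40/68) * (39/67) * (10/66)

print(p_counting, p_ordered)   # both are about 0.1223, i.e. roughly 12.2%
```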
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9503037929534912, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-statistics/27234-moment-generating-function.html
# Thread: 1. ## Moment generating function Ok, let's use the brute approach. This is a continuation of my previuos post: http://www.mathhelpforum.com/math-he...ble-check.html which is meant to analytically calculate the moments of the following pdf (this is my problem!): $<br /> p(x;\lambda) =\left\{\begin{array}{cc}2 \lambda x\ e^{-\lambda{x^2}}&\mbox{ if } x\geq 0\\0 & \mbox{ if } x<0\end{array}\right. (\lambda>0)<br />$ I know that the even moments are: $<br /> E[x^{2n}]=\frac{n!}{\lambda^n}\ (n=0,1,2, ...)<br />$ In that previous post I'm using (unsuccesfully) the moment generating function: $<br /> M_X(t)=\int_0^\infty 2 \lambda x\ e^{-\lambda{x^2}+tx} dx\<br />$ You can see there why it does not take me to the expecetd results. How can I get to the expected results?? Perhaps developing with Taylor?? I am required NOT to use higher level tools, like Laplace/Fourier, etc... 2. The $E[x^{2n}]$ looks familiar to the expected value of the order statistics. 3. Originally Posted by paolopiace Ok, let's use the brute approach. This is a continuation of my previuos post: http://www.mathhelpforum.com/math-he...ble-check.html which is meant to analytically calculate the moments of the following pdf (this is my problem!): $<br /> p(x;\lambda) =\left\{\begin{array}{cc}2 \lambda x\ e^{-\lambda{x^2}}&\mbox{ if } x\geq 0\\0 & \mbox{ if } x<0\end{array}\right. (\lambda>0)<br />$ I know that the even moments are: $<br /> E[x^{2n}]=\frac{n!}{\lambda^n}\ (n=0,1,2, ...)<br />$ In that previous post I'm using (unsuccesfully) the moment generating function: $<br /> M_X(t)=\int_0^\infty 2 \lambda x\ e^{-\lambda{x^2}+tx} dx\<br />$ You can see there why it does not take me to the expecetd results. How can I get to the expected results?? This is a 1-sided Laplace transform, and can be done with a table of Laplace transforms and a bit of knowlege of LT properties. It looks to me as though (up to a scale factor) it's the derivative of the complementary error function. RonL 4. Originally Posted by paolopiace Ok, let's use the brute approach. This is a continuation of my previuos post: http://www.mathhelpforum.com/math-he...ble-check.html which is meant to analytically calculate the moments of the following pdf (this is my problem!): $<br /> p(x;\lambda) =\left\{\begin{array}{cc}2 \lambda x\ e^{-\lambda{x^2}}&\mbox{ if } x\geq 0\\0 & \mbox{ if } x<0\end{array}\right. (\lambda>0)<br />$ I know that the even moments are: $<br /> E[x^{2n}]=\frac{n!}{\lambda^n}\ (n=0,1,2, ...)<br />$ In that previous post I'm using (unsuccesfully) the moment generating function: $<br /> M_X(t)=\int_0^\infty 2 \lambda x\ e^{-\lambda{x^2}+tx} dx\<br />$ You can see there why it does not take me to the expecetd results. How can I get to the expected results?? Perhaps developing with Taylor?? I am required NOT to use higher level tools, like Laplace/Fourier, etc... I was going to ask that you post the original question that triggered your other posts. I should have guessed - you (more-or-less) have a Weibull distribution and you want its moments. But why get them from the moment generating function? - you're on a hiding to nothing there unless you want to work with the Error Function. Just use the definition of the nth moment and do the (simple but tedious) integral, which I'll post if I have time. 5. Originally Posted by mr fantastic I was going to ask that you post the original question that triggered your other posts. I should have guessed - you (more-or-less) have a Weibull distribution and you want its moments. 
But why get them from the moment generating function? - you're on a hiding to nothing there unless you want to work with the Error Function. Just use the definition of the nth moment and do the (simple but tedious) integral, which I'll post if I have time. $E(X^n) = 2\lambda \int_0^\infty x^{n+1} e^{-\lambda x^2} \, dx$. First tidy things up by making the substitution $u = \sqrt{\lambda} \, x \Rightarrow dx = \frac{du}{\sqrt{\lambda}}$: $E(X^n) = \frac{2}{\lambda^{n/2}} \int_0^\infty u^{n+1} e^{- u^2} \, du$. Now make the substitution $u = t^2 \Rightarrow dt = \frac{du}{2t}: \,$ $\, E(X^n) = \frac{1}{\lambda^{n/2}} \int_0^\infty t^{n/2} e^{-t} \, dt$. n is even: Let n = 2m say. Then you have: $\, E(X^{2m}) = \frac{1}{\lambda^{m}} \int_0^\infty t^{m} e^{-t} \, dt$. Repeated integration by parts (or spotting something interesting after the first application) gives the result you know: $<br /> E[x^{2m}]=\frac{m!}{\lambda^m},\ (m = 0, 1 , 2, ...).$ When n is odd life is not so ...... elementary. You need another new function - the Gamma Function. In terms of the Gamma Function, $\int_0^\infty t^{n/2} e^{-t} \, dt = \Gamma \left( \frac{n}{2} + 1 \right). \,$ Therefore $E(X^n) = \frac{1}{\lambda^{n/2}} \Gamma \left( \frac{n}{2} + 1 \right)$. Having clicked the above link and done the reading, you now know a well known property of the Gamma Function, namely $\Gamma (p + 1) = p \Gamma (p)$ and that $\Gamma (m + 1) = m!$. It follows that you can substitute n = 2m into the above general result for $E(X^n)$ and get the result you know for even moments. When n is odd, the well known property $\Gamma (p + 1) = p \Gamma (p)$ together with the well known (and easily proved) result $\Gamma \left( \frac{1}{2} \right) = \sqrt{\pi}$ can be used to generate a general formula for the odd moments. I leave this as a fairly simple exercise for you. Hint: You first might like to prove (by induction, perhaps) that $(1)(3)(5)(7) .......(2m - 3)(2m-1) = \frac{(2m)!}{m! 2^m}$. PS: Krizalid and TPH, I know that both the above substitutions could be done at once.
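A quick numerical sanity check of the moment formulas above (added here for illustration; it assumes SciPy is available):

```python
# Verify E[X^{2m}] = m!/lambda^m and E[X^n] = Gamma(n/2 + 1)/lambda^(n/2)
# for the density p(x) = 2*lambda*x*exp(-lambda*x^2) on x >= 0.
from math import factorial, gamma
from scipy.integrate import quad
import numpy as np

lam = 1.7   # an arbitrary positive value of lambda, just for the check

def moment(n):
    f = lambda x: x**n * 2 * lam * x * np.exp(-lam * x**2)
    return quad(f, 0, np.inf)[0]

for m in range(1, 5):                        # even moments, n = 2m
    print(2 * m, moment(2 * m), factorial(m) / lam**m)

for n in (1, 3, 5):                          # odd moments via the Gamma function
    print(n, moment(n), gamma(n / 2 + 1) / lam**(n / 2))
```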
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 25, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.92328941822052, "perplexity_flag": "middle"}
http://www.quantumdiaries.org/tag/sigma/
# Quantum Diaries Thoughts on work and life from particle physicists from around the world. ## Posts Tagged ‘sigma’ ### A sigma here, a sigma there… Wednesday, May 9th, 2012 Whenever we come across a new result one of the first things we ask is “How many sigma is it?!” It’s a strange question, and one that deserves a good answer. What is a sigma? How do sigmas get (mis)used? How many sigmas is enough? The name “sigma” refers to the symbol for the standard deviation, σ. When someone says “It’s a one sigma result!” what they really mean is “If you drew a graph and measured a curve that was one standard deviation away from the underling model then this result would sit on that curve.” Or to use a simple analogy, the height distribution for male adults in the USA is 178cm with a standard deviation of 8cm. If a man measured 170cm tall he would be a one sigma deviation from the norm and we could say that he’s a one sigma effect. As you can probably guess, saying something is a one sigma effect is not very impressive. We need to know a bit more about sigmas before we can say anything meaningful. The term sigma is usually used for the Gaussian (or normal) distribution, and the normal distribution looks like this: The normal distribution The area under the curve tells us the population in that region. We can color in the region that is more than one sigma away from the mean on the high side like this: The normal distribution with the one sigma high tail shaded This accounts for about one sixth of the total, so the probability of getting a one sigma fluctuation up is about 16%. If we include the downward fluctuations (on the low side of the peak) as well then this becomes about 33%. If we color in a few more sigmas, we can see that the probability of getting two, three, four and five sigma effect above the underlying distribution is 2%, 0.1%, 0.003%, and 0.00003%, respectively. To say that we have a five sigma result is much more than five times as impressive as a one sigma result! The normal distribution with each sigma band shown in a different color. Within one sigma is green, two sigma is yellow, three sigma is... well can you see past the second sigma? When confronted with a result that is (for example) three sigma above what we expect we have to accept one of two conclusions: 1. the distribution shows a fluctuation that has a one in 500 chance of happening 2. there is some effect that is not accounted for in the model (eg a new particle exists, perhaps a massive scalar boson!) Unfortunately it’s not as simple as that, since we have to ask ourselves “What is the probability of getting a one sigma effect somewhere in the distribution?” rather than “What is the probability of getting a one sigma effect for a single data point?”. Let’s say we have a spectrum with 100 data points. The probability that every single one of those data points will be within the one sigma band (upward and downward fluctuations) is 68% to the power 100, or $$2\times 10^{-17}$$, a tiny number! In fact, we should be expecting one sigma effects in every plot we see! By comparison, the probability that every point falls within the three sigma band is 76%, and for five sigma it’s so close to 100% it’s not even worth writing out. A typical distribution with a one sigma band drawn on it looks like the plot below. There are plenty of one and two sigma deviations. So whenever you hear someone says “It’s an X sigma effect!” ask them how many data points there are. Ask them what the probability of seeing an X sigma effect is. 
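The percentages quoted above, and the chance of such a fluctuation showing up somewhere among 100 independent data points, can be reproduced in a few lines (added here for illustration):

```python
# Normal tail probabilities for k-sigma fluctuations, and the probability of
# seeing at least one such fluctuation in 100 independent data points.
from scipy.stats import norm

for k in (1, 2, 3, 4, 5):
    p_up = norm.sf(k)                       # upward fluctuation of >= k sigma
    p_two_sided = 2 * norm.sf(k)            # up or down
    p_in_100 = 1 - (1 - p_two_sided) ** 100 # at least one such point out of 100
    print(k, p_up, p_two_sided, p_in_100)
# The first column reproduces 16%, 2%, 0.1%, 0.003%, 0.00003%.
```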
Three sigma is unlikely for 100 data points. Five sigma is pretty much unheard of for that many data points! A typical distribution of simulated data with a one sigma band drawn. So far we’ve only looked at statistical effects, and found the probability of getting an X sigma deviation due to fluctuations. Let’s consider what happens with systematic uncertainties. Suppose we have a spectrum that looks like this: A sample distribution with a suspicious peak. It seems like we have a two-to-three sigma effect at the fourth data point. But if we look more closely we can see that the fifth data point looks a little low. We can draw three conclusions here: 1. the distribution shows a fluctuation that has a one in 50 chance of happening (when we take all the data points into account) 2. there is some effect that is not accounted for in the model 3. the model is correct, but something is causing events from one data point to “migrate” to another data point In many cases the third conclusion will be correct. There are all kinds of non-trivial effects which can change the shape of the data points, push events around from one data point to another and create false peaks where really, there is nothing to discover. In fact I generated the distribution randomly and then manually moved 20 events from the 5th data point to the 4th data point. The correct distribution looks like this: The sample distribution, corrected. So when we throw around sigmas in conversation we should also ask people what the shape of the data points looks like. If there is a suspicious downward fluctuation in the vicinity of an upward fluctuation be careful! Similarly, if someone points to an upward fluctuation while ignoring a similarly sized downward fluctuation, be careful! Fluctuations happen all the time, because of statistical effects and systematic effects. Take X sigma with a pinch of salt. Ask for more details and look at the whole spectrum available. Ask for a probability that the effect is due to the underlying model. Most of the time it’s a matter of “A sigma here, a sigma there, it all balances out in the end.” It’s only when the sigma continue to pile up as we add more data that we should start to take things seriously. Right now I’d say we’re at the point where a potential Higgs discovery could go either way. There’s a good chance that there is a Higgs at 125GeV, but there’s also a reasonable chance that it’s just a fluctuation. We’ve seen so many bumps and false alarms over the years that another one would not be a big surprise. Keep watching those sigmas! The magic number is five. Tags: data analysis, sigma, Statistics Posted in Latest Posts | 21 Comments »
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9448170065879822, "perplexity_flag": "middle"}
http://complexzeta.wordpress.com/2007/05/31/arthur-merlin-games/
An Idelic Life
Algebraic number theory and anything else I feel like telling the world about

# Arthur-Merlin Games

Thursday, May 31, 2007 in computational complexity

We consider the graph isomorphism problem: Given two graphs $G_1$ and $G_2$, are they isomorphic? If you solve this problem and find that the two graphs are indeed isomorphic, it is very easy for you to convince me of this fact: We assume that $|G_1|=|G_2|=n$ (since if they have different numbers of vertices, they clearly cannot be isomorphic). For me to believe you, you can simply find an element $\pi\in S_n$ that induces an isomorphism of the two graphs. What if they aren’t isomorphic? You could try to find an invariant that differs. But that’s a rather ad hoc technique, and maybe you aren’t able to find one, but you still know that the two graphs are nonisomorphic (somehow). We can play a game that will help you convince me that the two graphs are nonisomorphic. I create a list of (say) 50 numbers $a_1,a_2,\ldots,a_{50}$, with each $a_i\in\{1,2\}$. Then I randomly choose permutations $\pi_i\in S_n$ and construct the graphs $\pi_i G_{a_i}$ and give you this list. Now you have to tell me whether each $a_i$ is equal to 1 or 2. If the two graphs are isomorphic, you’ll only get the graphs right half the time (since you’re just guessing in this case), so it’s very unlikely that you’ll always get it right. However, if they’re nonisomorphic, then you can (somehow) figure out which graphs on my list are isomorphic to $G_1$ and which to $G_2$ and get them all right. The class of problems (such as the graph nonisomorphism problem) that can be solved with this sort of interaction is called the Arthur–Merlin (or AM) complexity class.
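As an illustration (not from the original post), here is a toy simulation of the game for two small non-isomorphic graphs. A brute-force check over all vertex permutations stands in for Merlin's unspecified ability to distinguish the graphs; it is only feasible because the graphs are tiny.

```python
import itertools, random

def relabel(edges, pi):
    # apply a vertex permutation to an edge set (edges stored as sorted tuples)
    return frozenset(tuple(sorted((pi[u], pi[v]))) for u, v in edges)

def isomorphic(n, E1, E2):
    # brute force over all n! relabellings; fine for the tiny n used here
    return any(relabel(E1, pi) == E2 for pi in itertools.permutations(range(n)))

n = 5
G1 = frozenset({(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)})   # a 5-cycle
G2 = frozenset({(0, 1), (0, 2), (0, 3), (0, 4), (1, 2)})   # a star plus one edge

rounds, correct = 50, 0
for _ in range(rounds):
    a = random.choice([1, 2])                    # Arthur's secret choice
    pi = list(range(n)); random.shuffle(pi)      # random relabelling
    challenge = relabel(G1 if a == 1 else G2, pi)
    guess = 1 if isomorphic(n, challenge, G1) else 2   # Merlin's answer
    correct += (guess == a)

print(correct, "of", rounds)   # 50 of 50 here, since G1 and G2 are not isomorphic
```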
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 11, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9441261291503906, "perplexity_flag": "head"}
http://mathhelpforum.com/trigonometry/33711-distance-between-beam-cables.html
# Thread:

1. ## distance between a beam and cables

A weight is supported by cables attached to both ends of a balance beam, as shown in the figure. What angles are formed between the beam and the cables?

2. Originally Posted by veronica mars
A weight is supported by cables attached to both ends of a balance beam, as shown in the figure. What angles are formed between the beam and the cables?

Use the converse of the law of cosines: $\arccos\bigg[\frac{c^2-a^2-b^2}{-2ab}\bigg]=\theta$
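Since the figure is not reproduced here, the side lengths below are purely hypothetical; the point is only how the law of cosines gives each angle of the triangle formed by the beam and the two cables.

```python
from math import acos, degrees

# Hypothetical lengths: two cables a and b, and the beam c between the
# attachment points.  Replace with the values from the actual figure.
a, b, c = 3.0, 4.0, 6.0

angle_beam_cable_a = degrees(acos((a**2 + c**2 - b**2) / (2 * a * c)))
angle_beam_cable_b = degrees(acos((b**2 + c**2 - a**2) / (2 * b * c)))
angle_between_cables = degrees(acos((a**2 + b**2 - c**2) / (2 * a * b)))

print(angle_beam_cable_a, angle_beam_cable_b, angle_between_cables)
# The three angles sum to 180 degrees, as they must.
```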
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9724559783935547, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/45656/game-statistics-extracting-interesting-patterns-out-of-users-and-level
# Game statistics: Extracting interesting patterns out of users and level I made a small game and in course of time collected fair amount of data between users and level The level chart is long (120 levels) but looks somewhat like this $$\begin{array}{|c|c|} \text{level} & \text{No. of Users}\\\hline 5 & 20\\ 6 & 23\\ 7 & 15\\ 10 & 2\\\hline \end{array}$$ Being a math nitwit I don’t know how to extract useful information out of this. Can someone help me with this so that I can learn more about my users and give them things they want? - What exactly is your question? I think you need to figure out (or tell us) what "useful information" you want. Perhaps try gamedev.stackexchange.com? – Harry Stern Jun 16 '11 at 2:37 Are there more attributes? As it stands, (Level, Number) doesn't seem very informative. – Jack Henahan Jun 16 '11 at 2:38 ## 1 Answer The data isn't hugely rich, so there's only a certain amount of insight you can get. If you had more data (eg, number of attempts at each level, time spent on each level). In your position, the main question I would be trying to answer is: Which levels cause an unusual number of players to stop playing? I suspect that simply plotting the data you have onto a chart would help you spot the answer without any further calculations. I would do this by calculating the users lost for each level: ````UsersLost(level) = Users(level - 1) - Users(level) ```` (Depending on the shape of the data, you may find it more useful to model the proportion of users lost, rather than the absolute number.) I would look at the distribution of these numbers to help me determine the next information I was interested in. Again, plotting the results is a useful exploratory tool. Ultimately, I'd be looking for any levels where there's a big difference between the expected number of users lost and the actual number of users lost. These are the levels you want to investigate closely, to help identify the qualitative reason that they lose/keep users. At this stage, I think that you'd be better off asking on a forum more suited to exploratory analytics and the user experience -- someone suggested gamedev.stackexchange as a good starting point. Mathematics might be more useful when you have figured out what you're looking for (and need help finding it). - Your expression for `UsersLost` will be negative for level 6 - what might this mean? – Henry Jun 16 '11 at 7:36 It means there is a way for a user to count on level 6 without counting on level 5. Perhaps the game has a "skip level" facility? The general principles I outline should still apply as long as players progress somewhat linearly through the levels. – Rob Hunter Jun 18 '11 at 18:09
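As a small illustration of the suggested calculation (added here, using only the four levels quoted in the question):

```python
# "Users lost" per level, following UsersLost(level) = Users(prev) - Users(level).
users = {5: 20, 6: 23, 7: 15, 10: 2}

levels = sorted(users)
for prev, cur in zip(levels, levels[1:]):
    lost = users[prev] - users[cur]
    frac = lost / users[prev]
    print(f"level {prev} -> {cur}: lost {lost} users ({frac:.0%})")
# A negative number (level 5 -> 6 here) means users were counted at the later
# level without being counted at the earlier one, e.g. via a skip-level feature.
```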
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9467810988426208, "perplexity_flag": "middle"}
http://mathhelpforum.com/differential-geometry/190084-scalar-multiplication-axioms.html
# Thread:

1. ## scalar multiplication axioms

Scalar multiplication (the scalar, or inner, product) is defined generally as a function $S:E \times E \longrightarrow K$, where $K\in\{\mathbb{R},\mathbb{C}\}$ and $E$ is a linear space, for which 5 axioms hold: $1.\ S(x+y,z)=S(x,z)+S(y,z)\\2.\ S(\lambda x,y)=\lambda S(x,y)\\3.\ S(x,y)=\overline{S(y,x)}\\4.\ S(x,x)\geq 0\\5.\ S(x,x)=0 \Rightarrow x=\bar{0}$ It is necessary, for any function that defines a scalar product, that all of these axioms are true. I've noticed that some sources offer just 4 axioms, with the first and the second joined into one. Does it mean that the first and the second axioms are equivalent (given the others)? If not, then there must be a function for which the first axiom is false but the remaining ones are true. But I cannot think of such an example.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9493856430053711, "perplexity_flag": "head"}
http://nrich.maths.org/304/index
# Route to Root

##### Stage: 5 Challenge Level:

A sequence of numbers $x_1, x_2, x_3, \ldots$ , starts with $x_1 = 2$, and, if you know any term $x_n$, you can find the next term $x_{n+1}$ using the formula: $$x_{n+1} = \frac{1}{2}\bigl(x_n + \frac{3}{x_n}\bigr)$$ Calculate the first six terms of this sequence. What do you notice? Calculate a few more terms and find the squares of the terms. Can you prove that the special property you notice about this sequence will apply to all the later terms of the sequence? Write down a formula to give an approximation to the cube root of a number and test it for the cube root of 3 and the cube root of 8. How many terms of the sequence do you have to take before you get the cube root of 8 correct to as many decimal places as your calculator will give? What happens when you try this method for fourth roots or fifth roots etc.?
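A few lines of Python reproduce the numerical experiment (added here for illustration). The cube-root iteration at the end is one natural guess, a Newton-style update, and is not necessarily the formula the problem has in mind.

```python
# Square-root iteration from the problem: x -> (x + 3/x)/2, starting at 2.
x = 2.0
for n in range(1, 7):
    print(n, x, x * x)          # the squares approach 3
    x = 0.5 * (x + 3 / x)

# A Newton-style guess for the cube root of 8: y -> (2y + 8/y^2)/3.
y = 1.0
for n in range(1, 7):
    y = (2 * y + 8 / y**2) / 3
print(y)                        # converges quickly to 2
```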
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9305688142776489, "perplexity_flag": "head"}
http://quant.stackexchange.com/questions/4013/what-is-the-meaning-of-subadditivity-in-a-risk-measure?answertab=active
# What is the meaning of subadditivity in a risk measure? The subadditivity reads: $\rho(X_1+X_2) \leq \rho(X_1) + \rho(X_2)$ What is the meaning of this condition? I can vaguely accept that one should diversify the investment portfolio. Or, I can understand that $\rho(X_1+X_2)$ describes the situation of two assets $X_1$ and $X_2$ held together. Then what is the meaning of $\rho(X_1) + \rho(X_2)$? One person holds $X_1$ and another person holds $X_2$? I am having difficulty to interpret the right-hand-side. - ## 2 Answers Rewriting the condition as $$\rho\left({X_1+X_2 \over 2}\right) \leq {\rho(X_1) + \rho(X_2) \over 2}$$ You can interpret it as a portfolio containing the average holdings of two other portfolios has at most the risk of the average risk of the two other portfolios. There is no need to have any concept of anyone actually holding any of the portfolios. - As you inferred, this is related to the concept of diversification as a risk-mitigation tool. In short, think of $\rho$ as representing some risk measure, and $\rho(x)$ as the risk of asset $x$ under that measure. If subadditivity holds, then the risk of holding assets 1 and 2 simultaneously must be less than or equal to the sum of their individual risks: $\rho(x_1 + x_2) \leq \rho(x_1) + \rho(x_2)$. For example, volatility (standard deviation) is a subadditive risk measure. We know this intuitively from diversification: a portfolio is less volatile than the sum of its component volatilities. As it relates to finance, subadditivity is one of the four axioms characterizing "coherent" measures of risk. This class of risk measures was introduced in Artzner et al, 1998, see the bottom of page 6. Think of these as risk measures with desirable properties that won't be subverted by strange-behaving portfolios. It's important to note that subadditivity is not a statement of fact -- it's easy to define risk measures that are not subadditive -- but rather an axiom that risk measures must satisfy in order to be coherent. Artzner describes subadditivity nicely as the idea that "a merger does not create extra risk," and lists a number of practical points which follow from it. One interesting one is that if risk were not subadditive, then a person wanting exposure to asset 1 and asset 2 would be better off opening a separate account for each asset, as the (risk-based) margin requirement would be lower than if he held both in the same account. (Note this can be seen as a very literal interpretation of the right hand side of the equation.) The most (in)famous risk measure that does not satisfy this axiom is VaR. The VaR of a portfolio of two assets can be greater than the sum of their individual VaRs. This is because VaR is a quantile-based measure; see the Artzner paper for examples. - @Chang, upon browsing your other questions I've noticed that you appear quite familiar with coherent risk measures. I apologize if my answer is too basic for you in its treatment of such measures; hopefully it will still benefit others. – jlowin Aug 27 '12 at 17:41
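As a supplement to the answers above, the points about volatility and VaR can be checked numerically. The two-bond example below is not from this thread; it is the usual textbook construction showing that VaR can fail subadditivity while standard deviation cannot.

```python
# Two independent bonds, each losing 100 with probability 4% and 0 otherwise.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
L1 = 100 * (rng.random(n) < 0.04)   # loss of bond 1
L2 = 100 * (rng.random(n) < 0.04)   # loss of bond 2

def var95(losses):
    # 95% Value-at-Risk of a sample of losses (a simple empirical quantile)
    return np.quantile(losses, 0.95)

print(var95(L1), var95(L2), var95(L1 + L2))    # 0, 0, 100: VaR is not subadditive
print(L1.std() + L2.std() >= (L1 + L2).std())  # True: volatility is subadditive
```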
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.964404284954071, "perplexity_flag": "head"}
http://mathoverflow.net/questions/54516?sort=votes
## Is there an easy proof of the fact that the intermediate image functor respects weights? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) It was proven in BBD (see Corollary 5.3.2) that for an open immersion $j$ the functor $j_{!*}$ preserves weights of mixed sheaves. The proof relies on several previous results; it is especially complicated in the case when $j$ is not affine. Does an easier proof (or a plan of it:)) exist? I would like to have a proof that (mostly) relies on the properties of $j_{!*}$ (and on the 'formal' properties of weights). - What's BBD? – Harry Gindi Feb 6 2011 at 14:03 Beilinson, Bernstein, Deligne... – Donu Arapura Feb 6 2011 at 14:07 1 Perhaps this should be added to the question (for the non-experts). – Martin Brandenburg Feb 6 2011 at 18:19 ## 1 Answer The proof in BBD is not that complicated, and it doesn't matter much whether $j$ is affine or not. It uses the three following facts : • If $f$ is a morphism of schemes, then $f_*$ sends a complex of weight $\geq a$ to a complex of weight $\geq a$, and $f_!$ sends a complex of weight $\leq a$ to a complex of weight $\leq a$ (a very natural property of weights; of course that's not so easy to prove for weights of $\ell$-adic complexes, and it is the main result of Deligne's Weil II). Cf BBD 5.1.14. • If $K$ is an $\ell$-adic complex, then $K$ is of weight $\leq a$ (resp. $\geq a$) if and only if, for every $k\in\mathbb{Z}$, the $k$th perverse cohomology sheaf of $K$ (call it `${}^pH^k K$`) is of weight $\leq a+k$ (resp. $\geq a+k$). Cf BBD 5.4.1. Again, hard to prove, but a natural enough property of weights, and a reason in my opinion why perverse sheaves are so much more natural than constructible sheaves (one out of many). • If $j$ is a locally closed immersion (more generally, a quasi-finite map), then `$j_{!*}$` is the image of `${}^pH^0j_!$` in `${}^pH^0j_*$`. This is the definition of the intermediate extension. Now the result you want is obvious : Take `$j:X\rightarrow Y$` a quasi-finite morphism. If the perverse sheaf $K$ on $X$ is of weight $\leq a$, then `$j_!K$` is of weight $\leq a$ (as a complex), so the perverse sheaf `${}^pH^0j_!K$` is of weight $\leq a$, and so is its quotient `$j_{!*}K$`. Likewise for weights $\geq a$, using this time $j_*$. Note that you could also define $j_{!*}K$ (for $K$ pure of weight $a$) as the weight $\leq a$ part of $j_*K$, or as the weight $\geq a$ part of $j_!K$. I think it's not too hard to recover the usual properties of $j_{!*}K$ from that definition, but I would have to think more to see how to make it work for mixed (but not pure) perverse sheaves. Edited to add two remarks : (1) I don't think that it is so hard to go from the affine case to the general case. Consider an open embedding $j:U\rightarrow X$, let $i:Y\rightarrow X$ be the complement. Let $\pi:X'\rightarrow X$ be the blowup of $Y$ in $X$, and $j':U\rightarrow X'$ be the inclusion. Then $j'$ is affine, and, for every perverse sheaf $K$ on $U$, `$j_{!*}K$` is a direct factor of `${}^pH^0\pi_*j'_{!*}K$`, so the result for `$j_{!*}K$` follows if you know it for `$j'_{!*}K$`, without any need of BBD 5.3.1. (You don't need the decomposition theorem to prove my claim. It is an exercise in perverse sheaves to prove that the map `${}^pH^0\pi_*j'_!K={}^p H^0j_!K\rightarrow j_{!*}K$` factors through a map `${}^pH^0\pi_*j'_{!*}K\rightarrow j_{!*}K$`. Likewise, or by duality, there is a natural map `$j_{!*}K\rightarrow{}^pH^0\pi_*j_{!*}K$`. 
The composition `$j_{!*}K\rightarrow{}^pH^0\pi_*j'_{!*}K\rightarrow j_{!*}K$` is the identity when restricted to $U$, so it is the identity.) (2) If $K$ is pure, there is a slightly different way to prove what you want (you might be able to do something if $K$ is mixed too, but I didn't try to work it out). Notation : $j$ is an open immersion from $U$ to $X$. First, the problem is local in $X$, so you can assume that $X$ is affine. Then $Y:=X-U$ is defined by a finite number of functions on $X$. By induction over the number of functions necessary to define $Y$, you can reduce to the case where there exists a function `$f:X\rightarrow\mathbb{A}^1$` such that `$Y=f^{-1}(0)$`. Now you can use the result of Beilinson-Bernstein (cf "A proof of Jantzen conjectures") that the Jantzen filtration on `$j_!K$` coincides with (a shift of) the weight filtration if $K$ is pure. The Jantzen filtration on `$j_!K$` is induced by the monodromy filtration on the maximal extension `$\Xi_f K$`, and it is an exercise to identify the quotient `$j_{!*}K$` of `$j_!K$` with one of the graded pieces of this filtration and to conclude that it has the expected weight. This proof avoids BBD 5.2, but it relies on the article of Beilinson-Bernstein instead; as fat as I can tell, the methods Beilinson-Bernstein use to prove the result that you need are natural extensions of the methods of Weil II, and you have to assume Weil II anyway, so maybe this is slightly more natural. - Your reasoning is certainly very logical; yet it does not seem to follow BBD. In BBD 5.3.2 is not deduced from 5.4.1. Conversely, 5.4.1 relies on 5.3.7, which uses 5.3.5, which uses 5.3.4, and the latter is a corollary from 5.3.2.:) And 5.3.2 is deduced from 5.3.1, which relies on 5.2.1, whereas the proof of the latter is pretty complicated.:) – Mikhail Bondarko Feb 6 2011 at 19:58 You did not ask for a proof that follows BBD, you asked for a proof that uses the natural properties of $j_{!*}$ and the formal properties of weights. But none of the properties of weights are easy to prove for $\ell$-adic complexes, so there are no formal properties of weights. I think the best you can hope for is a proof that uses the natural properties of weights. The precise proof of those natural properties is going to depend on your particular context (and on your definitions). For example some of those properties are very hard to prove in the $\ell$-adic world... – Alex Feb 6 2011 at 20:16 ...but very easy to prove for motives. And some of these properties are still conjectural for motives. Note also that your statement is corollary 5.4.3 of BBD, which is deduced in BBD from theorem 5.4.1. So I was actually following BBD afer all. :) As for your original question, if what you want is a proof that is very simple and uses only the definition of weights of $\ell$-adic complexes but not, say, Weil II (which is very hard to prove), then I know of no such proof. Sorry. – Alex Feb 6 2011 at 20:18 Well, I do not know for sure which sort of a reasoning could help me.:) I would prefer one that relies on 5.1.14 of BBD. Could you say more about what is easy to prove for motives? – Mikhail Bondarko Feb 6 2011 at 20:47 Okay, the problem is that 5.3.1 (if a perverse sheaf is of weights between $a$ and $b$, then so is any subobject) is crucial, and I do not see a more direct way to get at it than the one in BBD. The definition of weights by looking at stalks is not well adapted to perverse sheaves... 
About motives, I should not have written "easy", because very few things are easy in maths, and I realized that I called part of your work easy although I do not think it is, so I did not mean any offense. I meant something like "the proof seems more natural to me" (more natural than the proof of Weil II). – Alex Feb 6 2011 at 22:49 show 2 more comments
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 66, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9522578120231628, "perplexity_flag": "head"}
http://mathoverflow.net/questions/70081/volume-of-fundamental-domain-and-haar-measure
## Volume of fundamental domain and Haar measure ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) In my research, I do need to know the Haar measure. I have spent some time on this subject, understanding theoretical part of the Haar measure, i.e existence and uniqueness, Haar measure on quotient. But I should confess I never felt confident with the Haar measure, essentially because theoretical part did not give me how to construct the Haar measure. Even when I was trying to understand Haar measure on $SL(n,\mathbb{R})$ via Iwasawa decomposition, I could not realize exactly how it works. I think, or I should say I fell, I am not the only one how has problem with this delicate subject. So I thought I would want to share this with you and see how people think about the Haar measure and how you compute, for instance, volume $${\rm Vol}(SL(n,\mathbb{Z})/SL(n,\mathbb{R}))$$ - Do you mean SL(n,R)/SL(n,Z)? – David Roberts Jul 12 2011 at 3:21 I do not see difference. Would you please explain me the difference – M.B Jul 12 2011 at 3:37 @David: I think it is a matter of convention whether one writes quotients from the left or from the right. – Qiaochu Yuan Jul 12 2011 at 3:44 6 @Qiaochu Yuan: but surely if he's quotienting on the left, then the slash should go the other way . . . – unknown (google) Jul 12 2011 at 3:50 mathoverflow.net/questions/36025/… – Steve Huntsman Jul 12 2011 at 4:01 ## 2 Answers In order to talk meaningfully about the volume of $SL(n,\mathbb{R})/SL(n,\mathbb{Z})$ you need to define a normalization for Haar measure. One way to think about it is as follows: the space $M_n(\mathbb{R})$ of $n$ by $n$ matrices is just $\mathbb{R}^{n^2}$, so it has a natural notion of Lebesgue measure $\lambda$. We can normalize $\lambda$ so that $\lambda(M_n(\mathbb{R})/M_n(\mathbb{Z})) = 1$. Now $SL(n,\mathbb{R})$ is a hypersurface in $M_n(\mathbb{R})$ given by $det = 1$. So we need to restrict $\lambda$ from $M_n(\mathbb{R})$ to the hypersurface. One natural way to do that is to define for $E \subset SL(n,\mathbb{R})$, $$\mu( E ) = \lambda(Cone(E)),$$ where $Cone(E)$ is the Euclidean cone which is the union of all line segments starting at the origin and ending at $E$. Now $\mu$ is $SL(n,\mathbb{R})$ invariant, and therefore it is the Haar measure. This also defines a natural normalization for the measure. With this normalization, it makes sense to ask for $\mu(SL(n,\mathbb{R})/SL(n,\mathbb{Z}))$. There is a beautiful formula due to Siegel: $$\mu(SL(n,\mathbb{R})/SL(n,\mathbb{Z})) = \frac{1}{n} \zeta(2) \dots \zeta(n)$$ (I think I probably did not get all the factors right with my normalization). I will outline two completely elementary approaches to proving this formula. (Later on you see that the two approaches are really the same, and that this all has to do with Tamagawa numbers). Approach 1: For a compactly supported function $f: \mathbb{R}^n \to \mathbb{R}$ we can define a function `$\hat{f}: SL(n,\mathbb{R})/SL(n,\mathbb{Z}) \to \mathbb{R}$` by the formula $$\hat{f}(\Delta) = \sum_{v \in \Delta'} f(v).$$ Here you think of $\Delta \in SL(n,\mathbb{R})/SL(n,\mathbb{Z})$ as a lattice in $\mathbb{R}^n$, and $\Delta'$ is the set of primitive vectors in $\Delta$. 
Then there is a formula due to Siegel (which you can prove by unfolding): $$\frac{1}{\mu(SL(n,\mathbb{R})/SL(n,\mathbb{Z}))}\int_{SL(n,\mathbb{R})/SL(n,\mathbb{Z})} \hat{f}(\Delta) \, d\mu(\Delta) = \frac{1}{\zeta(n)} \int_{\mathbb{R}^n} f.$$ You can now take $f$ to be the characteristic function of the ball of radius $\epsilon$ and then take $\epsilon \to 0$. The asymptotics of the integral on the left hand side is expressible in terms of $\mu(SL(n-1,\mathbb{R})/SL(n-1,\mathbb{Z}))$, so you get the formula for the volume by induction. Approach 2: Let $E \subset SL(n,\mathbb{R})$ be the fundamental domain for the action of $SL(n,\mathbb{Z})$. Pick a large parameter $R > 0$. Then, $$\mu(SL(n,\mathbb{R})/SL(n,\mathbb{Z})) = \mu(E) = \lambda(Cone(E)) = \frac{1}{R^{n^2}}\lambda( Cone(R E)).$$ But the volume of the cone $\lambda(Cone(RE))$ is asymptotic as $R \to \infty$ to the number of integer points in the cone, i.e. the cardinality of $Cone(RE) \cap M_n(\mathbb{Z})$. Now points in $M_n(\mathbb{Z}) \cap Cone(RE)$ parametrize integer lattices of covolume at most $R^n$. So if you count the number of sublattices of the standard lattice $\mathbb{Z}^n$ of index at most $R^n$ and take the leading term as $R \to \infty$ you also compute $\mu(SL(n,\mathbb{R})/SL(n,\mathbb{Z}))$. This will give the same answer as Approach 1. It turns out that both approaches make sense in other situations, e.g. volumes of moduli spaces of holomorphic differentials. In that setting they both sort of work, but give different information. - Thank you. I think I will learn a lot from your comment. Especially, since I have been working on your paper, where is giving a new proof of Siegel-Minkowski Mass formula. This might give me some ideas to be able to understand that work. – M.B Jul 12 2011 at 18:08 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. Supplementing the other answer (too long for a comment...) First, for examples of innocent-context computations, there is an on-line computation of volumes of $SL(n,\mathbb Z)\backslash SL(n,\mathbb R)$ and of $Sp(n,\mathbb Z)\backslash Sp(n,\mathbb R)$ here , written in essentially Siegel's style. The same style of computation can be done adelically, over arbitrary number fields, but still does effectively beg the question of normalization. Nevertheless, such computations show that the normalization can be determined inductively, and, in any case, that the global computation can be done nicely once we have a locally-everywhere normalization of measures. Yet-another approach is (after Langlands) to look at suitable residues of Eisenstein series (e.g., as in the Boulder conference, AMS Proc Symp IX). In effect, such a computation back-handedly normalizes the Haar measure... -
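For reference, the numerical values of the Siegel volume formula quoted in the first answer are easy to tabulate (with the caveat, stressed there, that the exact constant depends on the chosen normalisation of Haar measure):

```python
# vol(SL(n,R)/SL(n,Z)) = (1/n) * zeta(2) * zeta(3) * ... * zeta(n),
# evaluated for small n with mpmath.
from mpmath import zeta, mpf

for n in range(2, 7):
    vol = mpf(1) / n
    for k in range(2, n + 1):
        vol *= zeta(k)
    print(n, vol)
```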
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 43, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9341784119606018, "perplexity_flag": "head"}
http://en.wikipedia.org/wiki/Cauchy_distribution
# Cauchy distribution

Not to be confused with Lorenz curve.

| Parameters | $x_0\!$ location (real); γ > 0 scale (real) |
|---|---|
| Support | $\displaystyle x \in (-\infty, +\infty)\!$ |
| PDF | $\frac{1}{\pi\gamma\,\left[1 + \left(\frac{x-x_0}{\gamma}\right)^2\right]}\!$ |
| CDF | $\frac{1}{\pi} \arctan\left(\frac{x-x_0}{\gamma}\right)+\frac{1}{2}\!$ |
| Mean | undefined |
| Median | $x_0\!$ |
| Mode | $x_0\!$ |
| Variance | undefined |
| Skewness | undefined |
| Excess kurtosis | undefined |
| Entropy | $\log(\gamma)\,+\,\log(4\,\pi)\!$ |
| MGF | does not exist |
| CF | $\displaystyle \exp(x_0\,i\,t-\gamma\,|t|)\!$ |

(The infobox plots show the probability density function, with the purple curve being the standard Cauchy distribution, and the cumulative distribution function.)

The Cauchy distribution, named after Augustin Cauchy, is a continuous probability distribution. It is also known, especially among physicists, as the Lorentz distribution (after Hendrik Lorentz), Cauchy–Lorentz distribution, Lorentz(ian) function, or Breit–Wigner distribution. The simplest Cauchy distribution is called the standard Cauchy distribution. It has the distribution of a random variable that is the ratio of two independent standard normal random variables. This has the probability density function $f(x; 0,1) = \frac{1}{\pi (1 + x^2)}. \!$ Its cumulative distribution function has the shape of an arctangent function arctan(x): $F(x; 0,1)=\frac{1}{\pi} \arctan\left(x\right)+\frac{1}{2}$

The Cauchy distribution is often used in statistics as the canonical example of a "pathological" distribution. Both its mean and its variance are undefined. (But see the section Explanation of undefined moments below.) The Cauchy distribution does not have finite moments of order greater than or equal to one; only fractional absolute moments exist.[1] The Cauchy distribution has no moment generating function. Its importance in physics is the result of it being the solution to the differential equation describing forced resonance.[2] In mathematics, it is closely related to the Poisson kernel, which is the fundamental solution for the Laplace equation in the upper half-plane. In spectroscopy, it is the description of the shape of spectral lines which are subject to homogeneous broadening in which all atoms interact in the same way with the frequency range contained in the line shape. Many mechanisms cause homogeneous broadening, most notably collision broadening, and Chantler–Alda radiation.[3] In its standard form, it is the maximum entropy probability distribution for a random variate X for which[4] $\operatorname{E}\!\left[\ln(1+X^2) \right]=\ln(4)$

## Characterisation

### Probability density function

The Cauchy distribution has the probability density function $f(x; x_0,\gamma) = \frac{1}{\pi\gamma \left[1 + \left(\frac{x - x_0}{\gamma}\right)^2\right]} = { 1 \over \pi } \left[ { \gamma \over (x - x_0)^2 + \gamma^2 } \right],$ where x0 is the location parameter, specifying the location of the peak of the distribution, and γ is the scale parameter which specifies the half-width at half-maximum (HWHM). γ is also equal to half the interquartile range and is sometimes called the probable error. Augustin-Louis Cauchy exploited such a density function in 1827 with an infinitesimal scale parameter, defining what would now be called a Dirac delta function. The amplitude of the above Lorentzian function is given by $\text{Amplitude (or height)} = \frac{1}{\pi\gamma}.$ The special case when x0 = 0 and γ = 1 is called the standard Cauchy distribution with the probability density function $f(x; 0,1) = \frac{1}{\pi (1 + x^2)}. 
\!$ In physics, a three-parameter Lorentzian function is often used: $f(x; x_0,\gamma,I) = \frac{I}{\left[1 + \left(\frac{x-x_0}{\gamma}\right)^2\right]} = I \left[ { \gamma^2 \over (x - x_0)^2 + \gamma^2 } \right],$ where I is the height of the peak. ### Cumulative distribution function The cumulative distribution function is: $F(x; x_0,\gamma)=\frac{1}{\pi} \arctan\left(\frac{x-x_0}{\gamma}\right)+\frac{1}{2}$ and the quantile function (inverse cdf) of the Cauchy distribution is $Q(p; x_0,\gamma) = x_0 + \gamma\,\tan\left[\pi\left(p-\tfrac{1}{2}\right)\right].$ It follows that the first and third quartiles are (x0−γ, x0+γ), and hence the interquartile range is 2γ. The derivative of the quantile function, the quantile density function, for the Cauchy distribution is: $Q'(p; \gamma) = \gamma\,\pi\,{\sec}^2\left[\pi\left(p-\tfrac{1}{2}\right)\right].\!$ The differential entropy of a distribution can be defined in terms of its quantile density,[5] specifically $h_e(\text{Cauchy}(\gamma)) = \int_0^1 \log\,(Q'(p; \gamma))\,\mathrm dp = \log(\gamma)\,+\,\log(4\,\pi).\!$ ## Properties The Cauchy distribution is an example of a distribution which has no mean, variance or higher moments defined. Its mode and median are well defined and are both equal to x0. When U and V are two independent normally distributed random variables with expected value 0 and variance 1, then the ratio U/V has the standard Cauchy distribution. If X1, ..., Xn are independent and identically distributed random variables, each with a standard Cauchy distribution, then the sample mean (X1+ ... +Xn)/n has the same standard Cauchy distribution. To see that this is true, compute the characteristic function of the sample mean: $\phi_{\overline{X}}(t) = \mathrm{E}\left[e^{i\overline{X}t}\right]$ where $\overline{X}$ is the sample mean. This example serves to show that the hypothesis of finite variance in the central limit theorem cannot be dropped. It is also an example of a more generalized version of the central limit theorem that is characteristic of all stable distributions, of which the Cauchy distribution is a special case. The Cauchy distribution is an infinitely divisible probability distribution. It is also a strictly stable distribution.[6] The standard Cauchy distribution coincides with the Student's t-distribution with one degree of freedom. Like all stable distributions, the location-scale family to which the Cauchy distribution belongs is closed under linear transformations with real coefficients. In addition, the Cauchy distribution is the only univariate distribution which is closed under linear fractional transformations with real coefficients.[7] In this connection, see also McCullagh's parametrization of the Cauchy distributions. ### Characteristic function Let X denote a Cauchy distributed random variable. The characteristic function of the Cauchy distribution is given by $\phi_X(t; x_0,\gamma) = \mathrm{E}\left[e^{iXt} \right ] =\int_{-\infty}^\infty f(x;x_{0},\gamma)e^{ixt}\,dx = e^{ix_0t - \gamma |t|}.$ which is just the Fourier transform of the probability density.[citation needed] The original probability density may be expressed in terms of the characteristic function, essentially by using the inverse Fourier transform: $f(x; x_0,\gamma) = \frac{1}{2\pi}\int_{-\infty}^\infty \phi_X(t;x_0,\gamma)e^{-ixt}\,dt \!$ Observe that the characteristic function is not differentiable at the origin: this corresponds to the fact that the Cauchy distribution does not have an expected value. 
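A minimal simulation sketch (an illustrative aside, not part of the article itself, assuming NumPy): it draws standard Cauchy samples both as a ratio of two independent standard normals and via the quantile function, and prints running sample means, which keep jumping around and so anticipate the next section.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Standard Cauchy as the ratio of two independent standard normals ...
x_ratio = rng.standard_normal(N) / rng.standard_normal(N)

# ... and via the quantile function Q(p) = tan(pi * (p - 1/2)).
x_quant = np.tan(np.pi * (rng.uniform(size=N) - 0.5))

# Median and half-IQR stay near (x0, gamma) = (0, 1), but the running
# sample mean never settles down because the mean is undefined.
for n in (100, 10_000, 100_000):
    print(n, np.median(x_ratio[:n]), np.mean(x_ratio[:n]))
```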
## Explanation of undefined moments ### Mean If a probability distribution has a density function f(x), then the mean is $\int_{-\infty}^\infty x f(x)\,dx. \qquad\qquad (1)\!$ The question is now whether this is the same thing as $\int_0^\infty x f(x)\,dx-\int_{-\infty}^0 |x| f(x)\,dx.\qquad\qquad (2) \!$ If at most one of the two terms in (2) is infinite, then (1) is the same as (2). But in the case of the Cauchy distribution, both the positive and negative terms of (2) are infinite. This means (2) is undefined. Moreover, if (1) is construed as a Lebesgue integral, then (1) is also undefined, because (1) is then defined simply as the difference (2) between positive and negative parts. However, if (1) is construed as an improper integral rather than a Lebesgue integral, then (2) is undefined, and (1) is not necessarily well-defined. We may take (1) to mean $\lim_{a\to\infty}\int_{-a}^a x f(x)\,dx, \!$ and this is its Cauchy principal value, which is zero, but we could also take (1) to mean, for example, $\lim_{a\to\infty}\int_{-2a}^a x f(x)\,dx, \!$ which is not zero, as can be seen easily by computing the integral. Because the integrand is bounded and is not Lebesgue integrable, it is not even Henstock–Kurzweil integrable. Various results in probability theory about expected values, such as the strong law of large numbers, will not work in such cases. ### Higher moments The Cauchy distribution does not have finite moments of any order. Some of the higher raw moments do exist and have a value of infinity, for example the raw second moment: $\mathrm{E}[X^2] \propto \int_{-\infty}^{\infty} {x^2 \over 1+x^2}\,dx = \int_{-\infty}^{\infty}dx - \int_{-\infty}^{\infty} 1-{x^2 \over 1+x^2}\,dx = \int_{-\infty}^{\infty}dx - \int_{-\infty}^{\infty} {1 \over 1+x^2}\,dx = \int_{-\infty}^{\infty}dx-\pi = \infty.$ By re-arranging the formula, one can see that the second moment is essentially the infinite integral of a constant (here 1). Higher even-powered raw moments will also evaluate to infinity. Odd-powered raw moments, however, do not exist at all (i.e. are undefined), which is distinctly different from existing with the value of infinity. The odd-powered raw moments are undefined because their values are essentially equivalent to ∞ − ∞ since the two halves of the integral both diverge and have opposite signs. The first raw moment is the mean, which, being odd, does not exist. (See also the discussion above about this.) This in turn means that all of the central moments and standardized moments do not exist (are undefined), since they are all based on the mean. The variance — which is the second central moment — is likewise non-existent (despite the fact that the raw second moment exists with the value infinity). The results for higher moments follow from Hölder's inequality, which implies that higher moments (or halves of moments) diverge if lower ones do. ## Estimation of parameters Because the parameters of the Cauchy distribution don't correspond to a mean and variance, attempting to estimate the parameters of the Cauchy distribution by using a sample mean and a sample variance will not succeed. For example, if n samples are taken from a Cauchy distribution, one may calculate the sample mean as: $\overline{x}=\frac{1}{n}\sum_{i=1}^n x_i$ Although the sample values xi will be concentrated about the central value x0, the sample mean will become increasingly variable as more samples are taken, because of the increased likelihood of encountering sample points with a large absolute value. 
In fact, the distribution of the sample mean will be equal to the distribution of the samples themselves; i.e., the sample mean of a large sample is no better (or worse) an estimator of x0 than any single observation from the sample. Similarly, calculating the sample variance will result in values that grow larger as more samples are taken. Therefore, more robust means of estimating the central value x0 and the scaling parameter γ are needed. One simple method is to take the median value of the sample as an estimator of x0 and half the sample interquartile range as an estimator of γ. Other, more precise and robust methods have been developed [8][9] For example, the truncated mean of the middle 24% of the sample order statistics produces an estimate for x0 that is more efficient than using either the sample median or the full sample mean.[10][11] However, because of the fat tails of the Cauchy distribution, the efficiency of the estimator decreases if more than 24% of the sample is used.[10][11] Maximum likelihood can also be used to estimate the parameters x0 and γ. However, this tends to be complicated by the fact that this requires finding the roots of a high degree polynomial, and there can be multiple roots that represent local maxima.[12] Also, while the maximum likelihood estimator is asymptotically efficient, it is relatively inefficient for small samples.[13] The log-likelihood function for the Cauchy distribution for sample size n is: $\hat\ell(\!x_0,\gamma|\,x_1,\dotsc,x_n) = n \log (\gamma) - \sum_{i=1}^n (\log [(\gamma)^2 + (x_i - \!x_0)^2]) - n \log (\pi)$ Maximizing the log likelihood function with respect to x0 and γ produces the following system of equations: $\sum_{i=1}^n \frac{x_i - x_0}{\gamma^2 + [x_i - \!x_0]^2} = 0$ $\sum_{i=1}^n \frac{\gamma^2}{\gamma^2 + [x_i - x_0]^2} - \frac{n}{2} = 0$ Note that $\sum_{i=1}^n \frac{\gamma^2}{\gamma^2 + [x_i - x_0]^2}$ is a monotone function in γ and that the solution γ must satisfy $\min |x_i-x_0|\le \gamma\le \max |x_i-x_0|.$ Solving just for x0 requires solving a polynomial of degree 2n−1,[12] and solving just for γ requires solving a polynomial of degree n (first for γ2, then x0). Therefore, whether solving for one parameter or for both paramters simultaneously, a numerical solution on a computer is typically required. The benefit of maximum likelihood estimation is asymptotic efficiency; estimating x0 using the sample median is only about 81% as asymptotically efficient as estimating x0 by maximum likelihood.[11][14] The truncated sample mean using the middle 24% order statistics is about 88% as asymptotically efficient an estimator of x0 as the maximum likelihood estimate.[11] When Newton's method is used to find the solution for the maximum likelihood estimate, the middle 24% order statistics can be used as an initial solution for x0. ## Circular Cauchy distribution If X is Cauchy distributed with median μ and scale parameter γ, then the complex variable $Z = \frac{X - i}{X+i}$ has unit modulus and is distributed on the unit circle with density: $P_{cc}(\theta;\zeta)= \frac{1}{2\pi } \frac{1 - |\zeta|^2}{|e^{i\theta} - \zeta|^2}$ with respect to the angular variable θ = arg(z),[citation needed] where $\zeta = \frac{\psi - i}{\psi + i}$ and ψ expresses the two parameters of the associated linear Cauchy distribution for x as a complex number: $\psi=\mu+i\gamma\,$ The distribution $P_{cc}(\theta;\zeta)$ is called the circular Cauchy distribution[15][16](also the complex Cauchy distribution)[citation needed] with parameter ζ. 
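A minimal numerical sketch of the maximum-likelihood fit described in the estimation section above (my own illustration, not from the article; assumes NumPy and SciPy, and the helper name is hypothetical). It minimizes the negative of the stated log-likelihood, starting from the robust initial guess the text recommends: the sample median for x0 and half the interquartile range for γ.

```python
import numpy as np
from scipy.optimize import minimize

def fit_cauchy_mle(x):
    """Maximize the Cauchy log-likelihood l(x0, gamma) given above."""
    x = np.asarray(x, dtype=float)
    n = len(x)

    def neg_loglik(params):
        x0, log_gamma = params              # work with log(gamma) so gamma > 0
        gamma = np.exp(log_gamma)
        return -(n * np.log(gamma)
                 - np.sum(np.log(gamma**2 + (x - x0)**2))
                 - n * np.log(np.pi))

    q1, med, q3 = np.percentile(x, [25, 50, 75])
    start = [med, np.log(0.5 * (q3 - q1))]   # median and half the IQR
    res = minimize(neg_loglik, start, method="Nelder-Mead")
    return res.x[0], np.exp(res.x[1])        # (x0_hat, gamma_hat)
```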
The circular Cauchy distribution is related to the wrapped Cauchy distribution. If $P_{wc}(\theta;\psi)$ is a wrapped Cauchy distribution with the parameter ψ = μ + i γ representing the parameters of the corresponding "unwrapped" Cauchy distribution in the variable y where θ = y mod 2π, then $P_{wc}(\theta;\psi)=P_{cc}(\theta,e^{i\psi})\,$ See also McCullagh's parametrization of the Cauchy distributions and Poisson kernel for related concepts. The circular Cauchy distribution expressed in complex form has finite moments of all orders $\operatorname{E}[Z^r] = \zeta^r, \quad \operatorname{E}[\bar Z^r] = \bar\zeta^r$ for integer r ≥ 1. For |φ| < 1, the transformation $U(z, \phi) = \frac{z - \phi}{1 - \bar \phi z}$ is holomorphic on the unit disk, and the transformed variable U(Z, φ) is distributed as complex Cauchy with parameter U(ζ, φ). Given a sample z1, ..., zn of size n > 2, the maximum-likelihood equation $n^{-1} U \left(z, \hat\zeta \right) = n^{-1} \sum U \left(z_j, \hat\zeta \right) = 0$ can be solved by a simple fixed-point iteration: $\zeta^{(r+1)} = U \left(n^{-1} U(z, \zeta^{(r)}), \, - \zeta^{(r)} \right)\,$ starting with ζ(0) = 0. The sequence of likelihood values is non-decreasing, and the solution is unique for samples containing at least three distinct values.[17] The maximum-likelihood estimate for the median ($\hat\mu$) and scale parameter ($\hat\gamma$) of a real Cauchy sample is obtained by the inverse transformation: $\hat\mu \pm i\hat\gamma = i\frac{1+\hat\zeta}{1-\hat\zeta}.$ For n ≤ 4, closed-form expressions are known for $\hat\zeta$.[12] The density of the maximum-likelihood estimator at t in the unit disk is necessarily of the form: $\frac{1}{4\pi}\frac{p_n(\chi(t, \zeta))}{(1 - |t|^2)^2} ,$ where $\chi(t, \zeta) = \frac{ |t - \zeta|^2}{4(1 - |t|^2)(1 - |\zeta|^2)}$. Formulae for p3 and p4 are available.[18] ## Multivariate Cauchy distribution A random vector X = (X1, ..., Xk)′ is said to have the multivariate Cauchy distribution if every linear combination of its components Y = a1X1 + ... + akXk has a Cauchy distribution. That is, for any constant vector a ∈ Rk, the random variable Y = a′X should have a univariate Cauchy distribution.[19] The characteristic function of a multivariate Cauchy distribution is given by: $\phi_X(t) = e^{ix_0(t)-\gamma(t)}, \!$ where x0(t) and γ(t) are real functions with x0(t) a homogeneous function of degree one and γ(t) a positive homogeneous function of degree one.[19] More formally:[19] $x_0(at) = ax_0(t),$ $\gamma (at) = |a|\gamma (t),$ for all t. An example of a bivariate Cauchy distribution can be given by:[20] $f(x, y; x_0,y_0,\gamma)= { 1 \over 2 \pi } \left[ { \gamma \over ((x - x_0)^2 + (y - y_0)^2 +\gamma^2)^{1.5} } \right] .$ Note that in this example, even though there is no analogue to a covariance matrix, x and y are not statistically independent.[20] Analogously to the univariate density, the multidimensional Cauchy density also relates to the multivariate Student distribution. They are equivalent when the degrees of freedom parameter is equal to one. 
The density of a k dimension Student distribution with one degree of freedom becomes: $f({\mathbf x}; {\mathbf\mu},{\mathbf\Sigma}, k)= \frac{\Gamma\left(\frac{1+k}{2}\right)}{\Gamma(\frac{1}{2})\pi^{\frac{k}{2}}\left|{\mathbf\Sigma}\right|^{\frac{1}{2}}\left[1+({\mathbf x}-{\mathbf\mu})^T{\mathbf\Sigma}^{-1}({\mathbf x}-{\mathbf\mu})\right]^{\frac{1+k}{2}}} .$ Properties and details for this density can be obtained by taking it as a particular case of the multivariate Student density. ## Transformation properties • If $X \sim \textrm{Cauchy}(x_0,\gamma)\,$ then $kX+l \sim \textrm{Cauchy}(x_0{k}+l,\gamma |k|)\,$ • If $X \sim \textrm{Cauchy}(0,1)\,$ then[citation needed] $\tfrac{2X}{1-X^2} \sim \textrm{Cauchy}(0,1)\,$ • If $X \sim \textrm{Cauchy}(x_0,\gamma_0)\,$ and $Y \sim \textrm{Cauchy}(x_1,\gamma_1)\,$ are independent, then[citation needed] $X+Y \sim \textrm{Cauchy}(x_0+x_1,\gamma_0+\gamma_1)\,$ • If $X \sim \textrm{Cauchy}(0,\gamma)\,$ then $\tfrac{1}{X} \sim \textrm{Cauchy}(0,\tfrac{1}{\gamma})\,$ • McCullagh's parametrization of the Cauchy distributions:[citation needed] Expressing a Cauchy distribution in terms of one complex parameter $\psi=x_0+i\gamma$, define X ~ Cauchy(ψ) to mean X ~ Cauchy$(x_0,|\gamma|)$. If X ~ Cauchy(ψ) then: $\frac{aX+b}{cX+d}$ ~ Cauchy$\left(\frac{a\psi+b}{c\psi+d}\right)$ where a,b,c and d are real numbers. • Using the same convention as above, if X ~ Cauchy(ψ) then:[citation needed] $\frac{X-i}{X+i}$ ~ CCauchy$\left(\frac{\psi-i}{\psi+i}\right)$ where "CCauchy" is the circular Cauchy distribution. ## Related distributions • $\textrm{Cauchy}(0,1) \sim \textrm{t}(df=1)\,$ Student's t distribution • $\textrm{Cauchy}(\mu,\sigma) \sim \textrm{t}_{(df=1)}(\mu,\sigma)\,$ Non-standardized Student's t distribution • If $X, Y \sim \textrm{N}(0,1)\,$ then $\tfrac{X}{Y} \sim \textrm{Cauchy}(0,1)\,$ • If $X \sim \textrm{U}(0,1)\,$ then $\tan \left({\pi\left(x-\tfrac{1}{2}\right)}\right) \sim \textrm{Cauchy}(0,1)\,$ • If X ~ Log-Cauchy(0, 1) then ln(X) ~ Cauchy(0, 1) • The Cauchy distribution is a limiting case of a Pearson distribution of type 4[citation needed] • The Cauchy distribution is a special case of a Pearson distribution of type 7.[1] • The Cauchy distribution is a stable distribution: if X ~ Stable(1, 0, γ, μ), then X ~ Cauchy(μ, γ). • The Cauchy distribution is a singular limit of a Hyperbolic distribution[citation needed] • The wrapped Cauchy distribution, taking values on a circle, is derived from the Cauchy distribution by wrapping it around the circle. ## Relativistic Breit–Wigner distribution Main article: Relativistic Breit–Wigner distribution In nuclear and particle physics, the energy profile of a resonance is described by the relativistic Breit–Wigner distribution, while the Cauchy distribution is the (non-relativistic) Breit–Wigner distribution.[citation needed] ## References 1. ^ a b N. L. Johnson, S. Kotz, and N. Balakrishnan (1994). Continuous Univariate Distributions, Volume 1. New York: Wiley. , Chapter 16. 2. E. Hecht (1987). Optics (2nd ed.). Addison-Wesley. p. 603. 3. Park, Sung Y.; Bera, Anil K. (2009). "Maximum entropy autoregressive conditional heteroskedasticity model". Journal of Econometrics (Elsevier): 219–230. Retrieved 2011-06-02. 4. Vasicek, Oldrich (1976). "A Test for Normality Based on Sample Entropy". Journal of the Royal Statistical Society, Series B 38 (1): 54–59. 5. S.Kotz et al (2006). Encyclopedia of Statistical Sciences (2nd ed.). John Wiley & Sons. p. 778. ISBN 978-0-471-15044-2. 6. F. B. Knight (1976). 
"A characterization of the Cauchy type". Proceedings of the American Mathematical Society 55: 130–135. 7. Cane, Gwenda J. (1974). "Linear Estimation of Parameters of the Cauchy Distribution Based on Sample Quantiles". Journal of the American Statistical Association 69 (345): 243–245. JSTOR 2285535. 8. Zhang, Jin (2010). "A Highly Efficient L-estimator for the Location Parameter of the Cauchy Distribution". Computational Statistics 25 (1): 97–105. 9. ^ a b Rothenberg, Thomas J.; Fisher, Franklin, M.; Tilanus, C.B. (1966). "A note on estimation from a Cauchy sample". Journal of the American Statistical Association 59 (306): 460–463. 10. ^ a b c d Bloch, Daniel (1966). "A note on the estimation of the location parameters of the Cauchy distribution". Journal of the American Statistical Association 61 (316): 852–855. JSTOR 2282794. 11. ^ a b c Ferguson, Thomas S. (1978). "Maximum Likelihood Estimates of the Parameters of the Cauchy Distribution for Samples of Size 3 and 4". Journal of the American Statistical Association 73 (361): 211. JSTOR 2286549. 12. Cohen Freue, Gabriella V. (2007). "The Pitman estimator of the Cauchy location parameter". Journal of Statistical Planning and Inference 137: 1901. 13. Barnett, V. D. (1966). "Order Statistics Estimators of the Location of the Cauchy Distribution". Journal of the American Statistical Association 61 (316): 1205. JSTOR 2283210. 14. McCullagh, P., "Conditional inference and Cauchy models", , volume 79 (1992), pages 247–259. PDF from McCullagh's homepage. 15. K.V. Mardia (1972). Statistics of Directional Data. Academic Press. [] 16. J. Copas (1975). "On the unimodality of the likelihood function for the Cauchy distribution". Biometrika 62: 701–704. 17. P. McCullagh (1996). "Möbius transformation and Cauchy parameter estimation.". Annals of Statistics 24: 786–808. JSTOR 2242674. 18. ^ a b c Ferguson, Thomas S. (1962). "A Representation of the Symmetric Bivariate Cauchy Distribution". Journal of the American Statistical Association: 1256. JSTOR 2237984. 19. ^ a b Molenberghs, Geert; Lesaffre, Emmanuel (1997). "Non-linear Integral Equations to Approximate Bivariate Densities with Given Marginals and Dependence Function". Statistica Sinica 7: 713–738.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 77, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8361350893974304, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/213250/converse-to-runges-approximation-theorem
Converse to Runge's approximation theorem

I'm trying to prove the following fact: If $K$ is a compact set whose complement is not connected, then there exists a function $f$ holomorphic in a neighborhood of $K$ which cannot be approximated uniformly by polynomials on $K$. This is an exercise from Stein & Shakarchi's complex analysis, and the book gives the following hint: Pick a point $z_0$ in a bounded component of $K^{c}$, and let $f(z)=\displaystyle\frac{1}{(z-z_0)}$. If $f$ can be approximated uniformly by polynomials on $K$, show that there exists a polynomial $p$ such that $\left |{(z-z_0)p(z)-1}\right |<1$ on $K$. Use the maximum modulus principle to show that this inequality continues to hold for all $z$ in the bounded component of $K^{c}$ that contains $z_0$. I don't know how to begin the problem, any hint will be appreciated, please don't give answers. Thanks -

1 Answer

You can begin by considering the case where $K=\{z\in \Bbb C:|z|=1\}$ and $z_0=0$, which is simpler but uses the same ideas. These ideas are described here. -

OK, I hadn't seen it right away because I hadn't noticed that immediate consequence of the maximum modulus principle; you need only check that if $\Omega$ is the bounded component containing $z_0$, then $\bar{\Omega}-\Omega\subseteq{K}$. Thanks!!! – Camilo Arosemena Oct 14 '12 at 16:26
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.928852379322052, "perplexity_flag": "head"}
http://crypto.stackexchange.com/questions/563/pairing-friendly-curves-in-small-characteristic-fields?answertab=votes
# Pairing-friendly curves in small characteristic fields

There are several well-known techniques to generate pairing-friendly curves of embedding degrees 1 to 36 over prime fields GF(p): Cocks-Pinch, MNT, Brezing-Weng, and several others. In extension fields GF(p^n), however, one is confined to supersingular curves. In characteristic 2, embedding degrees are <= 4, and in characteristic 3 they are <= 6. In general, they are <= 2. The question is: is there any known method to generate ordinary pairing-friendly curves over small characteristic fields, with a reasonable (say, 3-20) embedding degree? -

## 2 Answers

To my knowledge the answer is no. Informally, the only known method to construct pairing-friendly curves is the CM method, which allows you to find an elliptic curve with strong constraints on its number of points if you put few constraints on the cardinality of the base field, or conversely a curve over a very constrained base field with only loose constraints on its number of points. (If you're interested in more precise statements and explicit examples, I suggest looking at Lay and Zimmer, Constructing elliptic curves with given group order over large finite fields, which despite its title covers both aspects. Article behind a pay wall unfortunately.) This works well to find pairing-friendly curves in large characteristic because you have plenty of room to let the cardinality of the base field vary. But it doesn't work well at all over small characteristic fields, because you put both huge constraints on the cardinality of the base field (it has to be a power of the characteristic) and on the number of points on the curve (to achieve a small embedding degree). In fact, I'd be curious to see even a single example of an ordinary curve with small embedding degree over a not too large binary field, like $\mathbb{F}_{2^{31}}$ say. Note that this problem is entirely unrelated to the question of constructing ordinary pairing-friendly curves of higher genus over large prime fields, which Freeman's article mentioned above addresses. -

I marked the above reply correct because I failed to specify genus in my question --- Freeman does seem to provide pairing-friendly curves in $J(F_{q^k})$ for genus 2. In genus 1, what you say makes sense. – Samuel Neves Jan 17 '12 at 19:45

If you check the output of Algorithms 4.2 and 5.1 in Freeman's paper, you'll find that all of his curves (and hence their Jacobians) are defined over prime fields. The $J(\mathbb{F}_{q^k})$ in the abstract is about finding where the full r-torsion of the Jacobian is contained. – Mehdi Tibouchi Jan 18 '12 at 1:11

I should perhaps note, however (and sorry for commenting twice), that in principle, it might be possible to construct pairing-friendly curves over extension fields of a form like $\mathbb{F}_{p^2}$ with the CM method (see e.g. the discussion in 4.1 of Barreto and Naehrig's paper). But $p$ still has to be large and you cannot fix it in advance, so it doesn't solve the problem in small characteristic. – Mehdi Tibouchi Jan 18 '12 at 1:23

Understood. Should not have just skimmed the paper... – Samuel Neves Jan 18 '12 at 1:51

What I think you're looking for is this paper. It's a modification of Cocks-Pinch that was published by Stanford last year. It allows the construction to work for most values of k over your extension field. -
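A rough sketch of the plain Cocks-Pinch construction named in the question (my own illustration, not something given in the thread; assumes SymPy, and the parameter defaults and retry loop are arbitrary). The point to notice is that the characteristic p falls out of the construction at the end; it cannot be fixed to a small prime in advance, which is exactly the freedom the CM method needs and which a small fixed characteristic takes away.

```python
from sympy import isprime, mod_inverse, randprime
from sympy.ntheory import primitive_root, sqrt_mod

def cocks_pinch(k, D, r_bits=160, tries=10_000):
    """Search for (p, r, t): a prime p and order-r subgroup with embedding degree k."""
    for _ in range(tries):
        r = randprime(2**(r_bits - 1), 2**r_bits)   # desired subgroup order
        if (r - 1) % k != 0:
            continue
        s = sqrt_mod(-D % r, r)                     # need -D to be a square mod r
        if s is None:
            continue
        g = primitive_root(r)
        zeta = pow(g, (r - 1) // k, r)              # primitive k-th root of unity mod r
        t = (zeta + 1) % r                          # trace of Frobenius mod r
        y = ((t - 2) * mod_inverse(s, r)) % r
        four_p = t * t + D * y * y                  # CM equation: 4p = t^2 + D*y^2
        if four_p % 4 == 0 and isprime(four_p // 4):
            return four_p // 4, r, t                # p is an output, not an input
    return None

# e.g. cocks_pinch(12, 3) hunts for an ordinary curve with embedding degree 12.
```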
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.947557270526886, "perplexity_flag": "middle"}
http://michaelnielsen.org/polymath1/index.php?title=Hyper-optimistic_conjecture
# Hyper-optimistic conjecture

Gil Kalai and Tim Gowers have proposed a “hyper-optimistic” conjecture. Let $c^\mu_n$ be the maximum equal-slices measure of a line-free set. For instance, $c^\mu_0 = 1$, $c^\mu_1 = 2$, and $c^\mu_2 = 4$. As in the unweighted case, every time we find a subset B of the grid $\Delta_n := \{(a,b,c): a + b + c = n\}$ without equilateral triangles, it gives a line-free set $\Gamma_B := \bigcup_{(a,b,c) \in B} \Gamma_{a,b,c}$. The equal-slices measure of this set is precisely the cardinality of B. Thus we have the lower bound $c^\mu_n \geq \overline{c}^\mu_n$, where $\overline{c}^\mu_n$ is the largest size of an equilateral-triangle-free subset of $\Delta_n$. The computation of the $\overline{c}^\mu_n$ is Fujimura's problem.

Hyper-optimistic conjecture: We in fact have $c^\mu_n = \overline{c}^\mu_n$. In other words, to get the optimal equal-slices measure for a line-free set, one should take a set which is a union of slices $\Gamma_{a,b,c}$. This conjecture, if true, will imply the DHJ theorem. Note also that all our best lower bounds for the unweighted problem to date have been unions of slices. Also, the k=2 analogue of the conjecture is true, and is known as the LYM inequality (in fact, for k=2 we have $c^\mu_n = \overline{c}^\mu_n = 1$ for all n).

## Small values of $c^\mu_n$

I have now found the extremal solutions for the weighted problem in the hyper-optimistic conjecture, again using integer programming. The first few values are

• $c^\mu_0=1$ (trivial)
• $c^\mu_1=2$ (trivial)
• $c^{\mu}_2=4$ with 3 solutions
• $c^{\mu}_3=6$ with 9 solutions
• $c^{\mu}_4=9$ with 1 solution
• $c^{\mu}_5=12$ with 1 solution
• $c^{\mu}_6=15$ with an incomplete list of solutions

Comparing this with the known bounds for $\overline{c}^\mu_n$ we see that the hyper-optimistic conjecture is true for $n \leq 6$.

## Asymptotic lower bound on $c^\mu_n$

Let $S\subset [0,n-2]$ be free of 3-term APs, with $|S| = r_3(n-1)$ (the largest possible size of a subset of $\{1,2,\dots,n-1\}$ without 3-term APs). By replacing S with n − S if necessary, we can assume that the number of pairs $(s,t)\in S\times S$ with s + t < n is at least $|S|^2/2$. We now split $S \times S$ into 3 pieces according to the congruence class of s + t modulo 3. One of those pieces must have size at least $|S|^2/6$; call that largest set of pairs P and denote $s+t \mod 3$ by ε. Unless you care about the technicalities, you can assume that n is a multiple of 3 and that ε = 0 and still catch the gist. We now define the set A, which will have no combinatorial lines. First, let $L=\{(a+s,a+t,a): (s,t)\in P, a=(n-s-t)/3\}$ if $\epsilon\equiv n \mod 3$, and $L=\{(a+s+1,a+t,a): (s,t)\in P, a=(n-s-t-1)/3\}$ if $\epsilon\equiv n+1 \mod 3$, and $L=\{(a+s+1,a+t+1,a): (s,t)\in P, a=(n-s-t-2)/3\}$ if $\epsilon\equiv n+2 \mod 3$. In all cases, a is necessarily a nonnegative integer by the assumption s + t < n and the 3-arity of $n-\epsilon\equiv n-s-t$. Now we can define $A=\bigcup_{x\in L} \Gamma_x$. Now we explain why A has no combinatorial lines. If $\ell_i$ (with $1\leq i \leq 3$) is a combinatorial line in A, then (as with all combinatorial lines in $[3]^n$), the difference of the first and third slice-coordinates (are these what people are calling c-statistics?) must be an arithmetic progression. These differences are in a translate of S, which has no nontrivial APs, and so the difference must be constant for all of the $\ell_i$. 
Likewise, the difference of the second and third coordinates gives an arithmetic progression in a translate of S, and so that difference must also be constant. This, since the coordinates must add up to n, completely ties down all coordinates, and so the $\ell_i$ must all lie in the same slice. Therefore, they form a trivial line. Now we compute the size of A, in the equal slices measure. Clearly, the size of A is just the size of L. The set L gets one element for each pair (s,t) in P, and from the first paragraph we concluded that $|P|$ is at least $|S|^2/6$. Recall that $|S| = r_3(n-1)$, and you have the following inequality: $c_n^{\mu} \geq \frac16 r_3(n-1)^2$. The latest and greatest regarding $r_3(n)$ is: for every ε > 0, if n is sufficiently large then $r_3(n) \geq n(\sqrt{360}/(e\pi^{3/2}) - \epsilon) \sqrt[4]{2\log_2(n)}2^{-2\sqrt{2\log_2(n)}}.$ This is O'Bryant's optimization of the Green & Wolf simplification of Elkin's strengthening of Behrend's construction. Say that three times fast. It all combines to give $$c_n^{\mu} > n^2 \cdot \left( \frac{60 \sqrt{2}}{e^2 \pi ^3}-\epsilon\right) \cdot 2^{-4\sqrt{2} \sqrt{\log_2 n }+\frac12 \log_2 \log_2 n}$$ for sufficiently large n.

## Slice densities

Given any $(a,b,c) \in \Delta_n$ and a line-free set A, define the slice density $\alpha_{a,b,c}$ to be the quantity $\alpha_{a,b,c} := |A \cap \Gamma_{a,b,c}|/|\Gamma_{a,b,c}|$. The equal-slices measure of A is thus the sum of all the slice densities. Clearly $0 \leq \alpha_{a,b,c} \leq 1$. We also have that $\alpha_{a+r,b,c}+\alpha_{a,b+r,c}+\alpha_{a,b,c+r} \leq 2$ for all upward-pointing triangles (a+r,b,c), (a,b+r,c), (a,b,c+r).

## n=2 by hand

One should in fact be able to get the Pareto-optimal and extremal statistics for the slice densities $\alpha_{a,b,c}$ in this case.

## n=3 by hand

$c^{\mu}_3=6$: If all three points of the form xxx are removed, then the remaining points have value 7, and we have covered all lines with any set of moving coordinates whose constant coordinates are all equal to a single value. This leaves the lines xab with a, b not equal. Each point of the set abc covers three of these lines; the entire set covers each of these lines, and there is no duplication. The only alternative is to remove a point abc and cover the lines with points of the form aab, which have a higher weight and only cover one line each; this would lower the weight. So the maximum weight occurs when all of abc is omitted along with the three points xxx, and the weight is 6.

If only two points of the form xxx are removed, then the weight is at most 8; say the point not removed is 222. Then we must cover the lines xx2 and x22; there are six such lines (three of each form). All the xx2 must be covered one at a time by either 112 or 332, and the x22 must be covered one at a time by 322 or 122. These points must be removed, and that lowers the weight to 8 − 3·(2/6) − 3·(2/6) = 6, so again the weight is at most 6.

If one point, say 111, is removed, then we must cover all lines of the form xx2, xx3, x22 and x33. Look at the pairs of lines such as xx2 and 33x: one with moving coordinates in two positions and a fixed coordinate equal to 2 or 3, say 2; the other with fixed coordinates equal to the other value, which in this case is 3 (so if the fixed coordinate(s) in one point of the pair are 2, in the other they will be 3). Then we must have one of the points 222, 112, 332 removed to block the first line of the pair, and for the second line we can use 333, 332, 331. However, we do not have the points 222 or 333 removed in this case, so we must have either 332 or the pair 112 or 331.
For every one of the six point 332 or 223 we will have a similar choice forcing either the removal of the point itself or the associated pair. After these choices have been made more points of the form aab can be added but there must be a subset corresponding to one set of the above six choices since in each case there are only two ways to cover the lines noted. If we start with the configuration 111 removed all six points with two 2’s and one three and two 3’s and one two removed and all points of the form abc removed then this configuration has weight 6 then we can perform a series of steps which will reach all combinatorial line free configurations. These steps are as follows: 1 Making choices as above and allowing the addition Of all possible abcs 2.Removal of points of the form aab and addition of all possible abc’s 3.Removal of abc It will be shown that with each step the weight decreases or remains the same so the weight is 6 or less This will give all line free configurations as we must have sub configuration corresponding to one of the six choices and all we can do is add points of the form aab and take the resulting set with the most possible Abc’s and them remove any arbitrary abcs that we wish to remove. Are the making of the six choices noted above and the addition of any points of the form abc where possible without forming a combinatorial line. At the start each point of the form abc cannot be added because it has two lines which are some permutations of x12 and x13 now look at the points possibly blocking x12 they are 112 212 and 312 initially point 312 which is removed could not be added because the two points 112 and 212 are not removed as each choice is moved then each of the removed lines of type 113 covers two lines of the form permutations of x13 similarly lines of type 133 covers two lines of the form permutations of x13 now each choice to replace a line of the form 332 increase the number of points removed with two coordinates the same by one thus lowers the weight by one third and blocks four lines of the form x13 or x12 thus after n such choices we have reduced the weight by n/3 and covered 4n such lines since every point of the of the form permutations of 123 starts out with 2 such lines which it is blocking and can only be added when they are filled we can only add at most 2n such points which since they have weight 1/6 at the end of n such steps the weight is unchanged now afterwards if we remove more points of the form aab they cover at most two lines of the form xab and thus allow at most two points of the form abc to be added thus the change in weight is at most -1/3 +1/6 +1/6=0 finally afterwards we can remove points of the form abc but that will only lower the weight. Finally we have no points of the form xxx removed but then we will have a line of the form xxx. ## n=4 by hand $c^{\mu}_4=9$: If we have all three points of the form aaaa removed then the remaining points have value 12 and we have covered all lines any set of moving coordinates with all constant points equal to one value this leaves the lines xxab, xabc and xaab. let us also remove the three sets of the form aabc, a, b, c not equal. These block the remaining lines and the size is now 9. Now suppose we try to replace the removed points of the form aabc by points of the form aaab or aabb. 
Now we must cover the lines of the form xaab each point of the form aabc covers two and has a weight 1/12 each point of the form aabb covers 4 and has a weight 1/6 each point of the form aaab covers three and has a weight of 1/3 and since each element of the form covers two elements of the form xaab that no other element of the set aabc covers and as noted above all lines of the form xaab are covered by the the elements of the three sets of the form aabc we cannot improve on the weight as we can only cover the xaab’s by the same or higher weight lowering the total from 9 so for this case the value is 9. If we have two points removed of the form aaaa then the weight is at most 13 say the point not removed is 2222 then we must cover the lines xxx2 and x222 we have eight such lines and all six xx22 must be covered one at a time by either 1122 or 3322 the four x222 must be covered one at a time by 3222 or 1222 the four xxx2 must be covered by 3332 or 1112 these points must be removed and the that lowers the weight To 13 - 4*(6/24) – 6*(1/6) – 4(6/24) = 10. We also have 24 lines of the form xabc which can only be covered two at a time by the aabc which have weight 1/12 so we need to cover these with at least 12 of these which we can do by choosing all points which are permutations of 1123 this reduces the number to 9 and again we have c^{\mu} must be 9. If one point of the form aaaa is removed say 1111 then we must cover all lines of the form xxx2 xxx3 and xx22 and xx33 and x222 and x333 Look at the pairs of lines such as xxx2 and 333x one with moving coordinates in three positions and a fixed coordinate equal to 2 or 3 say 2 the other with fixed coordinates equal to the other value which in this case is 3. Then we must have one of the points 2222 1112 3332 removed to block the first line of the pair and for the second line we can use 3333 3332 3331. However we do not have the points 2222 or 3333 removed in this case so we must have either 3332 or the pair 1112 or 3331. For every one of the eight points 3332 or 2223 we will have a similar choice forcing either the removal of the point itself or the associated pair. After these choices have been made more points of the form aaab can be added but there must be a subset corresponding to one set of the above eight choices since in each case there are only two ways to cover the lines noted. Look at the pairs of lines such as 22xx and xx33 one with moving coordinates in two positions and a two fixed coordinates equal to 2 or 3 say 2 the other with two fixed coordinates equal to the other value which in this case is 2 so if the fixed coordinate(s) in one point of the pair is 2 in the other they or it will Then we must have one of the points 2211 2222 2233 removed to block the first line of the pair and for the second line we can use 3333 2233 1133. However we do not have the points 2222 or 3333 removed in this case so we must have either 2233 or the pair 1133 or 2211. For every one of the six points with two threes and two twos we will have a similar choice forcing either the removal of the point itself or the associated pair. 
After these choices have been made more points of the form aabb can be added but there must be a subset corresponding to one set of the above six choices If we start with the configuration 1111 removed all eight points with three 2’s and one three and three 3’s and one two removed and all points with two 3’s and two 2’s removed and all points of the form aabc removed then this configuration has weight 8 then we can perform a series of steps in order which will reach all combinatorial line free configurations. These steps are as follows: 1 Making choices as above for elements of the form aaab and allowing the addition of all possible aabc’ss. 2 Making choices as above for elements of the form aabb and allowing the addition of all possible aabc’ss. 3.Removal of points of the form aabb or aaab and addition of all possible aabc’s. 4.Removal of aabc’s. At present none of the aabc can be added there are lines of the form xx12 and xx13 and x113 and x112 which each have to blocked to allow their addition What we are going to do is perform the previous steps in order. we will show that each step eliminates twice as much in weight of elements of the form aabc as it adds of all other forms. To facilitate the proof we will have to perform the operations in the order listed. However we will be able to reach every possible configuration. We must have a subconfiguration corresponding to one set of choices above then we can delete various elements of the form aabb and aaab to get all possible line free sets of the elements of the form aabb and aaab and then for each set we add the maximum possible number of elements of the form aabc. The point is doing this is that we can only add two units in weight of the form aabc as the lines of the form xabc must be covered and they Can only be covered by one unit in weight of elements of the form abc So if we have the two to one ratio or better and have to stop at two we will End up with a net change of 2-1 which will raise the weight to 9 or less And we still have the maximum weight of is 9. When we make a choice deleting a pair and adding say 2223 which will give 1113 and 2221 we block three lines of the form xx13 and three of the form xx12 thus allowing the addition of 6 points of weight 1/12 which give a total change of weight 1/2 which is exactly twice the net change in weight caused by the addition of 2223 and the removal of 1113 of 2221 Now with each substitution there will also be the blocking of lines of the form x113 or x112 however to allow the point say 2113 To be added we must block both x113 and 211x and to do this We must make choices adding both 2223 and 2333 which unblocks The line 2xx1 and forbids the addition of the point 2113 thus the increase in material of the form aabc is twice the net change in weight of the other types 1122 can block the 11×2 two at a time but their weight is 1/6 so even if they block two of the 11×2 and allow the addition of two 1132’s the weight will remain the same and since each replacement of 2233 by 1133 and 2211 has only one element of the form 2211 we are done with step one. For the choices of elements of the form aabb if we replace an element of the form 2223 by 1113 and 2221 we will be able to block three lines of the form x113 which gets rid of 3 elements of the form 2113 but the weight of the addional aaab element created is 1/4 and the weight of the removed elements is the same. So step two can not improve the weight. 
So we are done with the choices mentioned above and move on two steps 3 and 4 deleting individual elements. If we add an element of the form 2233 and delete 1133 and 2211 then it can unblock two lines of the form x113 and two of the form x112 thus allowing the addition of at most 4 points of the form 3112 thus giving a net change of -1/4 +4/12 and again the ratio is 2 to one. Then after all the above choices have been made we can delete elements of the form 2223 which will block at most three lines Of the form 2xx3 if created by the above choices And the ratio of points added to points deleted is at most one And so is less than 1/2 and we are done. The case 3332 is handled the same way. If we delete an element of the form 1112 we unblock three lines Of the form xx12 and three of x112 we free at most 6 points with Weight 1/12 giving a total weight of 1/2 the point removed is weight 1/4 so the ratio is 2 to 1 and we are done the case of 1113 is similar. If we delete an element of the form 3331 it can unblock three lines of the form xx31 and we free three points for possible addition The have total weight equal to the weight of 3331 so the ratio is less than 2 to 1. The case of an element of the form 2221 is similar. If we delete a point of the form 2233 We don’t unblock any lines so we can’t Add any points of the form aabc. If we delete a point of the form 1133 we unblock two lines Of the form xx13 and two of x113 and so can add four points With weight 1/12 which has total 1/3 which is twice the weight Of the point 1133 so the ratio is less than or equal to 2 and we are Done. The case of deleting a point of the form 1122 is handled similarly. So we are done with step three. If we delete a point of the form aabc the weight is less. So we have gone through all steps and the ratio in them 2 or less and thus in this case the weight is 9 or less and we are done. Since this is the final case we have proved the theorem and the weight is 9. Finally we have no points of the form aaaa removed but then we will have a line of the form xxxx.
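A small computational footnote (my own addition, not part of the write-up above): the Fujimura numbers $\overline{c}^\mu_n$ referred to in the "Small values" section can be reproduced by brute force for tiny n, since the only constraint is avoiding upward-pointing equilateral triangles in $\Delta_n$. A sketch in Python, practical up to about n = 4, which should print the values 1, 2, 4, 6, 9 quoted above:

```python
def fujimura(n):
    """Largest subset of {(a,b,c): a+b+c=n} with no upward equilateral triangle
    {(a+r,b,c), (a,b+r,c), (a,b,c+r)}, r >= 1 (brute force over all subsets)."""
    pts = [(a, b, n - a - b) for a in range(n + 1) for b in range(n + 1 - a)]
    idx = {p: i for i, p in enumerate(pts)}
    triangles = []
    for r in range(1, n + 1):
        for a in range(n - r + 1):
            for b in range(n - r + 1 - a):
                c = n - r - a - b
                triangles.append((idx[(a + r, b, c)],
                                  idx[(a, b + r, c)],
                                  idx[(a, b, c + r)]))
    best = 0
    for mask in range(1 << len(pts)):
        size = bin(mask).count("1")
        if size <= best:
            continue
        # keep the subset only if no triangle has all three vertices selected
        if all(not ((mask >> i) & (mask >> j) & (mask >> k) & 1)
               for i, j, k in triangles):
            best = size
    return best

print([fujimura(n) for n in range(5)])   # expected: [1, 2, 4, 6, 9]
```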
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 47, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.942276120185852, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?p=3934288
Physics Forums

## calculating axis of rotation

hey guys, I have a problem I need to solve for my master's thesis. I have an object that rotates in 3D space. There is no translation. Given that I have coordinates of one landmark at 2 instances of time, I need to calculate the axis of rotation and the angle of rotation. Is this sufficient information? Regards Yasith (Medical Doctor)

If you also know the center of rotation that is enough. (1) Calculate the vector from the center of rotation to the initial and final points. (2) Take the cross product of these two vectors. That is the axis of rotation. (You may need to normalize.) (3) Normalize the two vectors and take the dot product. That is the cosine of the angle of rotation. (4) The cos does not really tell you if the angle is positive or negative (i.e. it does not change when you exchange the two vectors). However, the cross product changes sign. So the angle and direction do provide the complete information. (Sorry, but I don't remember off the top of my head if you have to choose the positive or negative value of the arccos.) (5) If you don't know the pivot point, then you need at least 3 points. Hope this helps.

Thank you for the reply. Yes, the pivot point is not known; I can get 3 landmark points. If this is the case, how can I approach this problem?

Mentor

No, it doesn't, not even if you know the center of rotation. Suppose you use M Quack's procedure and determine that the object has rotated through 140 degrees between your two time points. How do you know it hasn't rotated through 500 degrees (140+360), or 860 degrees, or even more? There's aliasing going on here thanks to the wrapping function. Another aliasing problem arises due to the axis of rotation having a sign ambiguity by this method. Flip the sign and that 140 degrees becomes 220 degrees (or 580, or ...). Yet another problem is that it is rarely, if ever, a good idea to use the minimal number of measurements. A final problem is that the object may not have a fixed axis of rotation if this is an object rotating in free space. You will at best get a mean axis of rotation for the time interval in question. The Euler torque (aka "the polhode rolls without slipping on the herpolhode") means that such objects will have a different axis of rotation at some later time.

Thanks D H for your input. The problem I have at hand is to map the motion of a bone. There is no translation component to this problem. It is pure rotation. I am mapping landmarks on the bone at each 1 second interval. Therefore between each time point I want to map the instant axis of rotation and the angle of rotation. Any ideas on how I can approach this problem?

Mentor

Quote by yasith Any ideas on how I can approach this problem?

Search for hand+pose and hand+motion at Google Scholar or CiteSeer. Eun-Jung Holden's 1997 PhD thesis, "Visual Recognition of Hand Motion" appears to be directly applicable. Addendum: Another good one is Erol et al, "Vision-based hand pose estimation: A review", Computer Vision and Image Understanding 108 (2007). You have a pose estimation and motion detection problem here. There is lots and lots of technical literature on these subjects.

One point rotating around an axis traverses a circle. 
If you observe it at point A, B, C then those are three points on the circle. Lets say we want to know the measure of the arc from A to C. According to Euclid, it is 360 minus double the measure of the angle ABC. The axis of rotation is simple to calculate because it is perpendicular to both AB and BC, so you can take the cross product to find the right direction. Let N be the direction of the axis of rotation. The axis of rotation is the intersection of the perpendicular bisectors of the segments AB and BC. The equations for the first plane is: $(A-B)\cdot X = (1/2)(A\cdot A - B\cdot B)$. The second plane has a similar equation. If you find just one point P that satisfies both equations, then you can easily find the direction of the axis by taking the cross product of A-B and C-B (since it has to be perpendicular to both of those). A point and direction characterize a line, so you're done. Quote by D H No, it doesn't, not even if you know the center of rotation. Suppose you use M Quack's procedure and determine that the object has rotated through 140 degrees between your two time points. How do you know it hasn't rotated through 500 degrees (140+360), or 860 degrees, or even more? There's aliasing going on here thanks to the wrapping function. Another aliasing problem arises due to the axis of rotation having a sign ambiguity by this method. Flip the sign and that 140 degrees becomes 220 degrees (or 580, or ...). True in the general case, but probably not a problem here. But you are right, I should have pointed out this limitation. Yet another problem is that it is rarely, if ever, a good idea to use the minimal number of measurements. I fully agree with this. No redundancy = no idea about the errors of your measurement. A final problem is that the object may not have a fixed axis of rotation if this is an object rotating in free space. You will at best get a mean axis of rotation for the time interval of question. The Euler torque (aka "the polhode rolls without slipping on the herpolhode") means that such objects will have a different axis of rotation at some later time. I was indeed thinking about 1 point (mapping landmark) at 3 different times to define an arc. For a bone that can rotate in several directions this is not a good idea, you will get false results. It would be much better to identify as many landmark points as possible and then calculate the center of rotation, axis and angle from these (including some estimate of the error). I am sure that there more sophisticated and elaborate methods available in the literature than I can come up with here :-) BTW, yasith, what kind of input data are you using? this is a diagram to the problem I have a perfect 3D volume rendered model of the wrist in 3D motion which I take measurements from I have managed to map all the points of interest to a 3D coordinate system. Therefore my input is p = [x,y,z]. The problem is to calculate instant axis of rotation. Therefore Vargo I cant assume the center of rotation will stay constant across point A -> B -> C. Mathematically speaking (in other words, being anal about not making assumptions, to the point that my answer might not be relevant to your problem!), you do not have enough information to determine the rotation axis and angle. For a given pair of points, there are infinitely many (axis, angle) pairs that move point A to point B. Think of a globe, and imagine that you have a rotation that moves the north pole (point A) to some specific point B on the equator. 
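A sketch of Vargo's three-point construction above (my own illustration, assuming NumPy and ideal data): given one landmark seen at three instants A, B, C, with a fixed axis and no measurement noise, the axis direction, the circle centre and the signed A-to-C angle come straight from the two perpendicular-bisector planes plus the plane of the circle. D H's caveats still apply: the angle is only defined modulo full turns, and real data deserve more than the minimal number of points.

```python
import numpy as np

def axis_centre_angle(A, B, C):
    """Axis direction, circle centre and signed A->C angle from three positions
    of a single landmark (fixed axis and exact measurements assumed)."""
    A, B, C = (np.asarray(p, dtype=float) for p in (A, B, C))
    n = np.cross(A - B, C - B)            # normal to the circle = axis direction
    n = n / np.linalg.norm(n)             # degenerate if A, B, C are collinear
    # Centre: intersect the two perpendicular-bisector planes with the plane
    # of the circle through A, B and C.
    M = np.vstack([A - B, B - C, n])
    rhs = np.array([0.5 * (A @ A - B @ B),
                    0.5 * (B @ B - C @ C),
                    n @ A])
    centre = np.linalg.solve(M, rhs)
    u, v = A - centre, C - centre
    angle = np.arctan2(np.cross(u, v) @ n, u @ v)   # only known modulo 2*pi
    return n, centre, angle
```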
The axis itself could go through the equator (90 degrees east and west from point B), in which case point A moves along a line of longitude. Or the axis could go through that line of longitude (exactly halfway between A and B), so that point A moves along a smaller circle. And there are infinitely many more axes that would work, each with a different angle. If you first drew an arrow on the globe, pointing in some direction from the north pole, then observing the direction of the arrow after the rotation would allow you to deduce the axis and angle. And maybe the fact that your object isn't a sphere implicitly gives you this "arrow"? Anyway, you either need that, or a second landmark point (which is not antipodal to the first) to get the information you want.

Quote by Tinyboss: Mathematically speaking [...], you do not have enough information to determine the rotation axis and angle. [...] Anyway, you either need that, or a second landmark point (which is not antipodal to the first) to get the information you want.

Thank you, Tinyboss, for your response. I can track as many points as I need to solve this problem. So can the solution to this problem be achieved by tracking 2 points at 2 time instances? If so, can someone be kind enough to show me the working for this solution? Also, is there any software/MATLAB code to approach this problem? Thank you.

Assuming you also don't know the center of rotation, I'd proceed this way: If point A moves to point A', then you know that the center of rotation is somewhere on the equidistant plane (the plane of points in 3D space which are at equal distance from A and A'). Now use another pair of points B and B' to get another plane. Intersecting these planes gives you a line, and you know the center of rotation is somewhere on that line. Use a third pair C and C' to get a third plane, and the intersection of it with your line will be a single point, which is the center of rotation. Now that you know the center, you can find the axis and angle using at least two before/after pairs. I would have to work out the recipe for that; I don't know it off the top of my head, sorry!

Edit: Of course, I'm assuming there is no measurement error. Three nonparallel planes will always intersect in a single point, so that part will be okay. But when you start to work out the axis and angle, you might find there is no solution that moves all point pairs exactly as required. I don't have any experience in dealing with that kind of issue. Good luck!
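To make the last two posts concrete, here is a minimal numpy sketch (my own illustration, not code from the thread; the function and variable names are assumptions). It combines the equidistant-plane idea with M Quack's cross/dot-product recipe. One caveat worth flagging: for a pure rotation, every point of the axis is equidistant from a landmark and its image, so the bisector planes all contain the axis and the stacked linear system is rank-deficient; a least-squares solve then returns some point on the axis rather than a unique "center", which is all that is needed anyway.

```python
import numpy as np

def axis_angle_from_pairs(P, Pp):
    """Estimate a fixed rotation axis and angle from landmark pairs.

    P, Pp : (n, 3) arrays of landmark coordinates before/after the rotation,
            with n >= 3 non-collinear landmarks.  Assumes pure rotation.
    Returns (point_on_axis, unit_axis_direction, signed_angle_in_radians).
    """
    d = Pp - P
    # Equidistant ("perpendicular bisector") planes:  d_i . x = (|p'_i|^2 - |p_i|^2) / 2.
    # Every point of the rotation axis satisfies all of them, so lstsq returns
    # the minimum-norm point on the axis.
    rhs = 0.5 * ((Pp ** 2).sum(axis=1) - (P ** 2).sum(axis=1))
    c = np.linalg.lstsq(d, rhs, rcond=None)[0]

    # Axis direction: perpendicular to every displacement, i.e. the right
    # singular vector of d with the smallest singular value (the sign is
    # ambiguous, as D H pointed out).
    n = np.linalg.svd(d)[2][-1]

    # Angle: project one landmark (ideally one far from the axis) into the
    # plane normal to n and compare before/after (M Quack's steps 1-4).
    r0 = P[0] - c;  r0 -= n * (n @ r0)
    r1 = Pp[0] - c; r1 -= n * (n @ r1)
    scale = np.linalg.norm(r0) * np.linalg.norm(r1)
    angle = np.arctan2(n @ np.cross(r0, r1) / scale, (r0 @ r1) / scale)
    return c, n, angle
```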
Earlier you said you could take 1 measurement per second... Can you speed up those measurements so that you are taking them every .1 s? If so, then we can do some numerical calculus here and solve these problems. If not then we could expand on Tinyboss' suggestion. With 4 measurements, you could find the location O of the joint. All axes of rotation must pass through O, so given two measurements A and B, the axis of rotation would lie in the plane that is the perpendicular bisector of AB and it would pass through the point O. Unfortunately that leaves us with an infinite number of axes of rotation... So even knowing O is not enough information to pin down the axis of rotation from just two measurements. There is also the problem that the wrist joint is hardly a point, so whether there is a stable point O for all motions of the wrist is doubtful in the first place. I think what you need Yasith is measurements over short timescales. Then it would make sense to discuss "instantaneous" quantities and it would be practically feasible to calculate them using calculus. Mentor yasith, you mentioned this is a project for a masters thesis. What literature searches have you done? I quickly found several and posted a couple, one from over a decade ago, the other from about four years ago. I thought I had typed this before, but apparently it got lost... You have a set of 3D coordinates of points before and after the rotation, p --> p'. You want to find the center of rotation COR, c, the axis of rotation (e.g. defined by Euler angles), and the angle of rotation. That makes 3 variables for the x,y,z coordinates of the COR, one for the angle, and 3 for the Euler angles (but I think you can get away with 2). The equation describing the transformation for each points is $\vec{p}' - \vec{c} = U \cdot R(\theta) \cdot U^T \cdot (\vec{p} - \vec{c})$ $R(\theta) = \left( \begin{array}{ccc} \cos(\theta) & -\sin(\theta) & 0 \\ \sin(\theta) & \cos(\theta) & 0 \\ 0 & 0 & 1 \end{array}\right)$ describes a rotation about the z-axis by the angle $\theta$ $U = (\hat{u}_1, \hat{u}_2, \hat{u}_3)$ is a coordinate transformation from the z-axis to the acutal axis of rotation (u_1,2,3 are the basis vectors, the rotation will be around u_3). I have to look up how to best express U in terms of Euler angles. What you need to do is fit the free variables (coordinates of the COR, θ, and the orientation of the axis of rotation) of this expression to the data. It might be helpful to use a simpler procedure to produce a good initial guess that assures the rapid convergence of the fit. An easier(?) and dirtier solution would be this: Your (instantaneous) center of rotation is defined as the point (or axis) that has the following property: A line connecting the center of rotation to any point on the body is perpendicular to the velocity vector at that point. If the center of rotation is an axis, this line must be perpendicular to the axis as well. Therefore, you can get 3 velocity vectors on your body and define 3 disks in 3d space with the velocities' normal vectors. These will intersect at least at 1 point (or define the axis directly if you're lucky). Repeat this one more time for 3 more vectors, and you end up with 2 points in 3D space. Connect them and you will have the (instantaneous) axis of rotation. You can repeat this process for more sets of points if you want to verify your result. One pitfall here is that the second point you define can be the same as the first. 
This means that all of your new velocity vectors are parallel to the first ones, meaning you have to calculate the velocity at different points.

Edit: If you want to do it without tracking too many points, 4 in total should be enough. You can then do the process above by using only 3 of them at a time.

meldraft, it sounds as if this could work. However, I don't see an easy systematic way to use the redundancy and estimate errors using this method. Least-squares fits are standard in most data analysis packages these days, so my method should be easy to implement.

$U = \left(\begin{array}{ccc} \cos(\phi) & -\sin(\phi) & 0 \\ \sin(\phi) & \cos(\phi) & 0 \\ 0 & 0 & 1 \end{array}\right) \cdot \left(\begin{array}{ccc} \cos(\chi) & 0 & \sin(\chi) \\ 0 & 1 & 0 \\ -\sin(\chi) & 0 & \cos(\chi) \end{array}\right) = \left(\begin{array}{ccc} \cos(\phi) \cos(\chi) & -\sin(\phi) & \cos(\phi)\sin(\chi) \\ \sin(\phi) \cos(\chi) & \cos(\phi) & \sin(\phi)\sin(\chi) \\ -\sin(\chi) & 0 & \cos(\chi) \end{array}\right)$, where $0 \leq \phi < 2\pi$ and $0 \leq \chi \leq \pi$.

The axis of rotation is then $\left(\begin{array}{c} \cos(\phi) \sin(\chi) \\ \sin(\phi) \sin(\chi) \\ \cos(\chi) \end{array}\right)$.
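For completeness, here is a sketch of the least-squares fit M Quack describes, written with scipy (my own illustration, not code from the thread; a rotation vector is used in place of the Euler angles $\phi, \chi, \theta$, which is equivalent but avoids coordinate singularities). Note that for a single rotation the center $\vec{c}$ is only determined up to translation along the axis, so the fitted point is just one point on the instantaneous axis; the closed-form estimate from the earlier sketch makes a sensible initial guess.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, P, Pp):
    # params = (c_x, c_y, c_z, r_x, r_y, r_z): center of rotation plus a
    # rotation vector (unit axis times angle), playing the role of U R(theta) U^T.
    c, rotvec = params[:3], params[3:]
    R = Rotation.from_rotvec(rotvec).as_matrix()
    return ((Pp - c) - (P - c) @ R.T).ravel()      # p' - c  minus  R (p - c)

def fit_rotation(P, Pp, c0, axis0, angle0):
    """Refine center, axis and angle from all landmark pairs at once.

    c0, axis0, angle0 are initial guesses (e.g. from axis_angle_from_pairs);
    assumes a non-degenerate rotation so the fitted rotation vector is nonzero.
    """
    x0 = np.concatenate([c0, np.asarray(axis0) * angle0])
    sol = least_squares(residuals, x0, args=(P, Pp))
    c, rotvec = sol.x[:3], sol.x[3:]
    angle = np.linalg.norm(rotvec)
    return c, rotvec / angle, angle
```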
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9366199970245361, "perplexity_flag": "head"}
http://crypto.stackexchange.com/tags/mac/hot?filter=day
# Tag Info

## Hot answers tagged mac

### When truncating an AES MAC value by "w", how do I justify that "w" is still negligible?

One of the factors that determines how hard it is to forge a MAC for a given message is how long the MAC is. If it's 1 bit long, you can definitely produce the correct MAC in two tries. $2^n$ is the number of possible bit-strings of length $n$; $1/2^n$ is the probability that any random bit-string happens to be the MAC (of length $n$) for a given message ...
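To put numbers on that argument (my own illustration, not part of the quoted answer): the chance that a single random guess matches a tag truncated to $w$ bits is $2^{-w}$, which shrinks very quickly as $w$ grows.

```python
# Forgery probability for one random guess at a w-bit (truncated) MAC tag.
for w in (16, 32, 64, 96, 128):
    print(f"w = {w:3d} bits  ->  2^-{w} ~ {2.0 ** -w:.3e}")
```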
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8753220438957214, "perplexity_flag": "middle"}
http://quant.stackexchange.com/questions/tagged/risk-neutral-measure?sort=votes&pagesize=15
# Tagged Questions

The risk-neutral-measure tag has no wiki summary.

### Formal proof for risk-neutral pricing formula (5 answers, 1k views)
As you know, the key equation of risk neutral pricing is the following: $\exp^{-rt} S_t = E_Q[\exp^{-rT} S_T | \mathcal{F}_t]$ That is, discounted prices are Q-martingales. It makes real sense for ...

### St Petersburg lottery pricing & short investing horizons (2 answers, 154 views)
I am a statistician (no solid background in finance). Please forward me to a book \ chapter \ paper to resolve the following general question. Suppose we have a stock with the following monthly return ...

### How to transform process to risk-neutral measure for Monte Carlo option pricing? (2 answers, 337 views)
I am trying to price an option using the Monte Carlo method, and I have the price process simulations as inputs. The underlying is a forward contract, so at all times the mean of the simulations is ...

### Consistency of economic scenarios in nested stochastics simulation (0 answers, 117 views)
I am interested in references on research regarding the consistency of economic scenarios in nested stochastics for risk measurement. Background: Pricing by Monte-Carlo: For pricing complex ...

### How does one go from measure P to Q (risk-neutral) when modeling an asset paying dividends? (2 answers, 588 views)
I am really having a terrible time applying Girsanov's theorem to go from the real-world measure $P$ to the risk-neutral measure $Q$. I want to determine the payoff of a derivative based on an asset ...

### Version of Girsanov theorem with changing volatility (2 answers, 362 views)
Is there a version of Girsanov theorem when the volatility is changing? For example Girsanov theorem states that Radon Nikodym (RN) derivative for a stochastic equation is used to transform the ...

### Is drift rate the same as interest rate in risk-neutral random walk when using Monte Carlo for option pricing? (2 answers, 146 views)
When using following risk-neutral random walk $$\delta S = rS \delta t + \sigma S \sqrt{\delta t} \phi$$ where $\phi \sim N(0,1)$. Now when a text mentions drift = 5% does that mean that interest ...

### Black-Scholes and Fundamentals (2 answers, 193 views)
So basically $dS_t=\mu S_tdt+\sigma S_tdWt$ and $\mu=r-\frac12\sigma^2$ I have just been thinking about this later equation. This is very interesting because it ties together risk-free ...

### What mathematical characteristics are required from the asset price process in order to stay within the RNP framework? (2 answers, 127 views)
I'm currently doing a course in derivatives pricing and I'm having some trouble wrapping my head around the sweet spot where theory meets reality in terms of Risk Neutral Pricing. I know that the ...

### Financial Mathematics - Martingales example (2 answers, 424 views)
Was hoping somebody could help me with the following question. Prove that under the risk-neutral probability $\tilde{\mathsf P}$ the stock and the bank account have the same average rate of growth. ...

### American Option price formula assuming a logLaplace distribution? (1 answer, 105 views)
What are $d_1$ and $d_2$ for Laplace? may be running before walking. When I tried to use the equations provided, the pricing became extremely lopsided, with the calls being routinely double puts. ...

### How to choose model parameters? (1 answer, 113 views)
I'm studying math and attend this semester a course about interest rates. Now, some questions show up how exactly things are working in the real world. My examples will be about interest rates ...

### Risk neutral probability in binomial short rate model assumed to be 0.5? (1 answer, 173 views)
This should be a basic question but I have not been able to find a satisfying explanation. In the simplest binomial model, the risk neutral probability is computed using the up/down magnitude and the ...

### Measure change in a bond option problem (0 answers, 100 views)
This is not a homework or assignment exercise. I'm trying to evaluate $\displaystyle \ \ I := E_\beta \big[\frac{1}{\beta(T_0)} K \mathbf{1}_{\{B(T_0,T_1) > K\}}\big]$, where $\beta$ is the ...

### Risk Neutral Probability and invariant measure (1 answer, 223 views)
Is a risk-neutral probability a special case of an invariant measure?

### Pricing forward contract on a stock (1 answer, 86 views)
Please tell me where I've gone wrong (if I did in fact make a mistake). I'm pricing a long forward on a stock. The usual setup applies: This has payoff $S(T) - K$ at time $T$. We are at $t$ now. ...

### Pricing a Power Contract derivative security (0 answers, 82 views)
I'm trying to price a "power contract" and would appreciate guidance on the next step. The payoff at time $T$ is $(S(T)/K)^\alpha$, where $K > 0$, $\alpha \in \mathbb{N}$, $T > 0$. $S$ is ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9217156767845154, "perplexity_flag": "middle"}
http://gilkalai.wordpress.com/2008/05/01/extremal-combinatorics-i/
Gil Kalai’s blog

## Extremal Combinatorics I: Extremal Problems on Set Systems

Posted on May 1, 2008

The "basic notion seminar" is an initiative of David Kazhdan, who joined the HU math department around 2000. People give series of lectures about basic mathematics (or not so basic at times). Usually, speakers do not talk about their own research and not even always about their field. I gave two lecture series, one about "computational complexity theory" a couple of years ago, and one about extremal combinatorics or Erdös-type combinatorics a few months ago, which I later expanded to a series of five+one talks at Yale. One talk was on the Borsuk Conjecture, which I will discuss separately, and five were titled: "Extremal Combinatorics: A working tool in mathematics and computer science." Let me try blogging about it. The first talk was devoted to extremal problems concerning systems of sets.

Paul Erdös

### 1. Three warm up problems

Here is how we move very quickly from very easy problems to very hard problems with a similar flavour.

Problem I: Let $N = \{1,2, \dots , n \}$. What is the largest size of a family $\cal F$ of subsets of $N$ such that every two sets in $\cal F$ have non-empty intersection? (Such a family is called intersecting.)

Answer I: The maximum is $2^{n-1}$. You can achieve it by taking all subsets containing the element '1'. You cannot achieve more because from every pair of a set and its complement, you may choose only one set for the family.

Problem II: What is the largest size of a family $\cal F$ of subsets of $N$ such that for every two sets in $\cal F$ their union is not $N$?

You can protest and claim that Problem II is just the same as Problem I. Just move to complements. Or just use the same answer. OK, let's have another Problem II, then.

New Problem II: What is the largest size of a family $\cal F$ of subsets of $N$ such that for every two sets $S,R$ in $\cal F$ their intersection is non-empty and their union is not $N$?

An example of such a family is the set of all sets containing the element '1' and missing the element '2'. This family has $2^{n-2}$ sets. It took several years from the time this problem was posed by Erdös until Kleitman showed that there is no larger family with this property.

Problem III (Erdös-Sos Conjecture): Let $\cal F$ be a family of graphs with N as the set of vertices. Suppose that every two graphs in the family have a triangle in common. How large can $\cal F$ be?

Now, the total number of graphs on n vertices is $2 ^{{{n} \choose {2}}}$. (Note: we count labelled graphs and not isomorphism types of graphs.) A simple example of a family of graphs with the required property is all the graphs containing a fixed triangle. Say all graphs containing the edges {1,2},{1,3},{2,3}. This family contains 1/8 of all graphs. Is there any larger family of graphs with the required property? Erdös and Sos conjectured that the answer is no: you cannot get a larger family. This conjecture is still open.

Update (Oct 2010): Problem III should be referred to as the Simonovits-Sos conjecture. It was made by Simonovits and Sos in 1976. The conjecture was proved by Ellis, Filmus, and Friedgut in 2010.

Vera Sos

### 2. Two basic theorems about families with prescribed intersections.

Erdös-Ko-Rado Theorem: An intersecting family of k-subsets of $N$, when $2k \le n$, contains at most ${{n-1} \choose {k-1}}$ sets.
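As a quick sanity check, the extremal bounds stated so far (Problems I and II and the Erdös-Ko-Rado bound) can be verified by brute force for very small $n$. The sketch below is my own addition, not part of the original post; it simply searches over all subfamilies, so it is only feasible for tiny ground sets.

```python
from itertools import combinations

def max_family(sets, ok):
    """Size of the largest subfamily of `sets` in which every pair satisfies `ok`.
    Plain brute force, so only sensible for very small ground sets."""
    best = 0
    for r in range(1, len(sets) + 1):
        if any(all(ok(a, b) for a, b in combinations(fam, 2))
               for fam in combinations(sets, r)):
            best = r
    return best

n = 4
ground = frozenset(range(n))
power_set = [frozenset(c) for r in range(n + 1) for c in combinations(range(n), r)]

# Problem I: every two sets intersect -> expect 2^(n-1) = 8
print(max_family(power_set, lambda a, b: a & b))
# New Problem II: intersecting, and no union covers the ground set -> expect 2^(n-2) = 4
print(max_family(power_set, lambda a, b: a & b and a | b != ground))
# Erdos-Ko-Rado with n = 4, k = 2: intersecting 2-subsets -> expect C(3,1) = 3
two_subsets = [frozenset(c) for c in combinations(range(n), 2)]
print(max_family(two_subsets, lambda a, b: a & b))
```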
Fisher; deBruijn-Erdös Theorem: A family of subsets of $N$ so that every two (different) sets in the family have precisely a single element in common has cardinality at most n. (Erdös and deBruijn concluded that n non-collinear points in the plane determine at least n lines. Try to deduce it!)

The family of all k-subsets of N containing the element '1' is an example of equality for the Erdös-Ko-Rado theorem. For the Erdös-deBruijn theorem take the family {{1}, {1,2}, {1,3}, …, {1,n}}, or replace the first set by {2,3,…,n}, or take a finite projective plane.

The Fano plane, the finite projective plane of order 2.

### 3. The linear algebra proof of deBruijn Erdös Theorem.

The linear algebra proof of the Fisher; de Bruijn-Erdös Theorem goes roughly as follows: Suppose that there are m sets in the family $A_1,A_2,\dots, A_m$. Consider the incidence matrix of the family: the (i,j)-entry in this matrix is '1' if i belongs to $A_j$. The crucial fact is that the columns $c_1,c_2,\dots, c_m$ of the incidence matrix are linearly independent. This gives $m \le n$.

How do we go about proving that the columns are linearly independent? We first assume that all sets have cardinality at least 2. Then we write $s = \sum \alpha_i c_i$, and compute the inner product $\langle s,s\rangle=\sum \sum \alpha_i \alpha_j \langle c_i , c_j\rangle$. We note that if i and j are distinct $\langle c_i,c_j\rangle=1$ and that $\langle c_i,c_i\rangle=|A_i|$. And write $\langle s,s\rangle= \sum_{i=1}^m \alpha_i^2 (|A_i|-1) + (\sum_{i=1}^m \alpha_i)^2$. This can vanish only if all coefficients $\alpha_i$ are equal to 0.

Update: This proof is an example of "dimension arguments in combinatorics". For more examples and a general discussion see this post in Gowers's weblog.

### 4. Sperner's theorem

(I wanted to indicate how the Erdös-Ko-Rado theorem is proved. There are various proofs, and for the two proofs I like to give it is better to demonstrate the proof technique in a simple case.)

Sperner's theorem from 1927 asserts that the maximum size of a family $\cal F$ of subsets of N which is an antichain with respect to inclusion is the middle binomial coefficient ${{n} \choose {\lfloor n/2 \rfloor}}$. Lubell found a simple nice proof for Sperner's theorem: Let $\cal F$ be such an antichain and suppose that it has $s_k$ sets of cardinality k. Count pairs $(\pi, S)$ where $\pi = (\pi(1),\pi(2), \dots , \pi(n))$ is a permutation of {1,2, … ,n} and S is a set in the family which is initial w.r.t. $\pi$, namely S = {$\pi(1), \pi(2),\dots,\pi(k)$} for some k. Now, for every permutation $\pi$ you can find at most one initial S in the family $\cal F$ (because of the antichain condition). If S is a set of k elements, you can find precisely k! (n-k)! permutations $\pi$ for which S is initial. Putting these two facts together we get that $\sum_{k=0}^n s_k k! (n-k)! \le n!$ or, in other words, $\sum_{k=0}^n s_k/ {{n} \choose {k}} \le 1$. This inequality (called the LYM inequality) implies the required result.

Béla Bollobás, one of the discoverers of the LYM inequality.

There is a similar proof for the Erdös-Ko-Rado theorem. The idea is to count pairs $(\pi,S)$ where $S$ is a set in the family, $\pi$ is a circular permutation $(\pi(1),\pi(2), \dots , \pi(n))$ and $S$ is a continuous "interval" with respect to $\pi$. On the one hand there are (n-1)! cyclical permutations and, as is not hard to see, for each such permutation you can get at most k "intervals" which are pairwise intersecting. On the other hand, for every set S there are k!(n-k)! cyclic permutations on which S is a continuous interval. So $|{\cal F}| \cdot k! (n-k)! \le (n-1)!\, k$, and this gives the Erdös-Ko-Rado theorem.
What about the "not hard to see" part? This uses the fact that $2k \le n$. One way about it is to consider the interval J whose leftmost element z is furthest to the left, and notice that there are k intervals that intersect J whose leftmost element is to the right of z. Another way is to consider some interval J of length k and notice that the 2k-2 intervals intersecting it come in (k-1) pairs where each pair contains two disjoint intervals.

### 5. Turan's theorem and Turan's problem

The special case of Turan's theorem for graphs with no triangles was proved by Mantel in 1907: The maximum number $t_2(n)$ of edges in a graph on n vertices without a triangle is attained by a complete bipartite graph with n vertices where the sizes of the two parts are as equal as possible.

The full Turan's theorem was proved in the 40s: The maximum number $t_r(n)$ of edges in a graph on n vertices which does not contain a complete subgraph with (r+1) vertices is attained by a complete multi-partite graph with n vertices and r parts, where the sizes of the parts are as equal as possible.

Paul Turan

Proving Turan's theorem: It is difficult not to prove Turan's theorem; it seems that every approach to proving it succeeds. One approach is this: for simplicity consider the case of triangles. Take a vertex v with maximum degree and divide the other vertices of the graph into two parts: A – the neighbors of v, B – the remaining vertices. Now, note that the vertices in A form an independent set (i.e. there are no edges between vertices in A). For every vertex in B delete all edges containing it and instead, connect it to all vertices in A. Note that in the new graph, the degree of every vertex is at least as large as in the original graph. And, in addition, the new graph is bipartite. (One part is A.) It is left to show that the number of edges of a complete bipartite graph is maximized when the parts are as equal as possible.

Here is another proof. Delete a vertex from a graph G with n+1 vertices without $K_{r+1}$. The number of edges in the remaining graph is at most $t_r(n)$. Do it for all vertices and note that every edge is counted n-1 times. You get that the number of edges in G (hence $t_r(n+1)$) is at most the lower integral part of $t_r(n) \cdot (n+1)/(n-1)$. Lo and behold, this gives the right formula.

So let us conclude with Turan's 1940 problem. You want to know what is the maximum cardinality of a set of triples from {1,2,…,n} which does not contain a "tetrahedron", namely four triples of the form {a,b,c},{a,b,d},{a,c,d},{b,c,d}. If you do not know it already, try to guess and suggest the best example. Turan made a conjecture which is still open.

(May 4, a few typos corrected, thanks alef and mike.)

This entry was posted in Open problems and tagged Extremal combinatorics, Open problems, Paul Erdos, Simonovits-Sos conjecture, Sperner's theorem, Turan's problem.

### 9 Responses to Extremal Combinatorics I: Extremal Problems on Set Systems

1. alef says: took should be tool
2. Pingback: Local Events, Turan’s Problem and Limits of Graphs and Hypergraphs « Combinatorics and more
3. Pingback: Extermal Combinatorics II: Some Geometry and Number Theory « Combinatorics and more
4. Pingback: Plans and Updates « Combinatorics and more
5. Pingback: Extremal Combinatorics III: Some Basic Theorems « Combinatorics and more
6. Pingback: Mathematics, Science, and Blogs « Combinatorics and more
7. Pingback: Extremal Combinatorics on Permutations « Combinatorics and more
8. Pingback: The Simonovits-Sos Conjecture was Proved by Ellis, Filmus and Friedgut | Combinatorics and more
9. Pingback: Tentative Plans and Belated Updates II | Combinatorics and more
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 60, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9170736074447632, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?t=122924&page=9
Physics Forums

## The Should I Become a Mathematician? Thread

Recently I encountered a book "Mathematical problem solving methods" written by L.C. Larson. There are many problems from the Putnam competition. My question is: how important is it for a physicist (mathematician) to be able to solve this kind of problem?

Recognitions: Homework Help Science Advisor

Well, it is not essential, but it can't hurt. I myself have never solved a Putnam problem, and did not participate in the contest in college, but really bright, quick people who do well on them may also be outstanding mathematicians. My feeling from reading a few of them is they do not much resemble real research problems, since they can presumably be done in a few hours as opposed to a few months or years.

E.g. the famous Fermat problem was solved in several stages. First people tried a lot of special cases, i.e. special values of the exponent. None of these methods ever yielded enough insight to even prove it in an infinite number of cases. Then finally Gerhard Frey thought of linking the problem with elliptic curves, by asking what kind of elliptic curve would arise from the usual equation y^2 = (x-a)(x-b)(x-c) if a, b, c were constructed in a simple way from three solutions to Fermat's problem. He conjectured that the elliptic curve could not be "modular". This was indeed proved by Ribet I believe, and then finally Andrew Wiles felt there was enough guidance and motivation there to be worth a long hard attempt on the problem via the question of modularity. Then he succeeded finally, after a famous well-publicized error, and some corrective help from a student, at solving the requisite modularity problem. He had to invent and upgrade lots of new techniques for the task and it took him over 7 years.

I am guessing a Putnam problem is a complicated question that may, through sufficient cleverness, be solved by also linking it with some simpler insight, but seldom requires any huge amount of theory. However any practice at all in thinking up ways to simplify problems, apply old ideas to new situations, etc, or just compute hard quantities, is useful. I would do a few and see if they become fun. If not I would not punish myself.

Recognitions: Homework Help Science Advisor

You could start a Putnam thread here perhaps if people want to talk about these problems and get some first-hand knowledge. But in research the smartest people, although they often do best on these tests, do not always do the deepest research. That requires something else, like taste, courage, persistence, luck and inspiration.

One of my better results coincided with the birth of one of my children. Hironaka (a famous Fields Medalist) once told me, somewhat tongue in cheek, that others had noticed a correlation between making discoveries and getting married, and "some of us do this more than once for that reason".

I have noticed that success in research is, in the long run, related to long, hard, consistent work. I.e. if you keep at it faithfully, doing what you have noticed works, you will have some success. Don't be afraid to make mistakes, or to make lengthy calculations that may not go anywhere. And talk to people about it. This can be embarrassing, but after giving a talk on work that was still somewhat half-baked, I have usually finished it off satisfactorily.
Here is an example that may be relevant: Marilyn vos Savant seems to be an intelligent person, who embarrassed many well-educated mathematicians a few years back with a simple probability problem published in a magazine. But she not only cannot do any research in the subject without further training, but even does not understand much of what she has read about mathematics. Still she has parlayed her fame into a newspaper column and some books.

The great Grothendieck, so deep a mathematician that his work discouraged Rene Thom from even attempting to advance algebraic geometry, once proposed 57 as an example of a prime number. But he made all of us begin to realize that to understand geometry, and also algebra, one must always study not just individual objects or spaces, but mappings between those objects. This is called category theory. That is why the first few chapters of Hartshorne are about various types of maps: proper maps, finite maps, flat maps, etale maps, smooth maps, birational maps, generically finite maps, affine maps, etc.

If someone wanted to get a Ph.D. in mathematical physics, should they pursue an undergrad degree in math or physics? I would eventually like to do research in M-theory, but as a mathematical physicist. Thanks in advance for your reply.

Quote by courtrigrad: Do most people major just in math? Or do they have a minor in something else? ... What are some good combinations?

I didn't minor in anything else, but a subject where math is used heavily might not hurt. Physics, economics or computer science combined with math are somewhat obvious choices. Statistics and computer science would be a good combination if you're interested in raking in far more $$$ than any engineering, comp sci or business student. Depending on your interests, statistics and biology (biostatistician = $$$), statistics and economics, or statistics and another social science (psych, soc, etc) might be good combinations.

It depends on where you go to college what minors and majors will be available to you. At the college I go to, as part of the applied mathematics curriculum, we're required to get at least a minor in some other field, and as it is a tech school, the options are limited to mostly engineering and science fields.

Recognitions: Homework Help Science Advisor

We need input from some mathematical physicists here. My acquaintances who were mathematical physicists seem to have majored in physics and then learned as much math as possible. On the other hand some lecturers at math/physics meetings seem to be mathematicians, but I do not learn as much from them since I want to understand the physicists' point of view and I already understand the math. I would major in physics if I wanted to be any kind of physicist, and learn as much math as possible to use it there.

Quote by fournier17: If someone wanted to get a Ph.D. in mathematical physics, should they pursue an undergrad degree in math or physics? [...]

You could do an undergraduate degree in combined maths & physics, and afterwards you can pursue a PhD in theoretical physics (synonymous with mathematical physics).
Recognitions: Homework Help Science Advisor

From pmb_phy:

Quote by mathwonk: By the way Pete, if you are a mathematical physicist, some posters in the thread "who wants to be a mathematician" under Academic Guidance have been asking whether they should major in math or physics to become one. What do you advise?

I had two majors in college, physics and math. Most of what I do when I'm working in physics is only mathematical, so in that sense I guess you could say that I'm a mathematical physicist. I recommend to your friend that he double major in physics and math as I did. This way, if he wants to be a mathematician, he can utilize his physics when he's working on mathematical problems. E.g. it's nice to have solid examples of the math one is working with, especially in GR. Pete

Quote by loop quantum gravity: You could do an undergraduate degree in combined maths & physics, and afterwards you can pursue a PhD in theoretical physics (synonymous with mathematical physics).

Is theoretical physics the same as mathematical physics? If they are, then that's great; more potential graduate programs to which I can apply. However, I have heard that mathematical physics relies more on mathematics, and that theoretical physics is more physics than math. I have seen some graduate programs in mathematical physics that are in the math department of the university instead of the physics department.

Recognitions: Homework Help Science Advisor

Like many things in mathematics itself, the terms mathematical physics and theoretical physics mean different things to different people.

Recognitions: Homework Help Science Advisor

I wrote the following letter to my graduate committee today commenting on what seems to me wrong with our current prelims. These thoughts may help inform some students as to what to look for on prelims, and what they might preferably find there.

In preparing to teach grad algebra in fall, one thing that jumps out at me is not the correctness of the exams, but their diversity. One examiner will ask only examples, another only creative problems, another mostly statements of theorems. Only a few examiners ask straightforward proofs of theorems. Overall they look pretty fair, but I noticed after preparing my outline for the 8000 course that test preparation would be almost independent of the course I will teach. I.e. to do most of the old tests, all they need is the statements of the basic theorems and a few typical example problems. They do not need the proofs I am striving to make clear, and often not the ideas behind them. Anybody who can calculate with Sylow groups and compute small Galois groups can score well on some tests.

In my experience good research is not about applying big theorems directly, as such applications are already obvious to all experts. It is more often applying proof techniques to new but analogous situations after realizing those techniques apply. So proof methods are crucial. Also, discovering what to prove involves seeing the general patterns and concepts behind the theorems.

The balance of the exams is somewhat lopsided at times. Some people insist on asking two, three, or more questions out of 9 on finite group theory and applications of Sylow and counting principles, an elementary but tricky topic I myself essentially never use in my research. This is probably the one ubiquitous test topic and the one I need least. I don't mind one such question but why more?
The percentage of the test covered by the questions on one topic should not exceed that topic's share of the syllabus itself. if there are 6 clear topic areas on the syllabus, no one of them should take 3/9 of the test. also computing specific galois groups is to me another unnecessary skill in my research. It is the idea of symmetry that is important to me. When I do need them as monodromy groups, a basic technique for computing them is specialization, i.e. reduction mod p, or finding an action which has certain invariance properties, which is less often taught or tested. Here is an easy sample question that illustrates the basic idea of galois groups: State the FTGT, and use it to explain briefly why the galois group of X^4 - 17 over Q cannot be Sym(4). This kind of thing involves some understanding of symmetry. One should probably resist the temptation to ask it about 53X^4 - 379X^2 + 1129. As of now, with the recent division of the syllabus into undergraduate and graduate topics, more than half the previous tests cover undergraduate topics (groups, linear algebra, canonical forms of matrices.) This makes it harder to teach the graduate course and prepare people for the test at the same time, unless one just writes off people with weak undergrduate background, or settles for teaching them test skills instead of knowledge. Thus to me it is somewhat unclear what we want the students to actually know after taking the first algebra course. I like them to learn theorems and ideas for making proofs, since in research they will need to prove things, often by adapting known proof methods, but the lack of proof type question undermines their interest in learning how to prove things. The syllabus is now explicit on this point, but if we really want them to know how to state and prove the basic theorems we should not only say so, but enforce that by testing it. Suggestions: We might state some principles for prelims, such as: 1) include at least one question of stating a basic theorem and applying it. I.e. a student who can state all the basic theorems should not get a zero. 2) Include at least one request for a proof of a standard result at least in a special case. 3) include at least one request for examples or counterexamples. 4) try to mostly avoid questions which are tricky or hard to answer even for someone who "knows" all the basic material in the topic (such as a professor who has taught the course). I.e. try to test knowledge of the subject, rather than unusual cleverness or prior familiarity with the specific question. But do ask at least one question where application of a standard theorem requires understanding what that theorem says, e.g.: what is the determinant, minimal polynomial, and characteristic polynomial of an n by n matrix defining a k[X] module structure on k^n, by looking at the standard decomposition of that module as a product of cyclic k[X] modules. or explain why the cardinality of a finite set admitting an action by a p-group, is congruent modp to the number of fixed points. 5) point out to students that if they cannot do a given question, partial credit will be given for solving a similar but easier question, i.e. taking n= 2, or assuming commutativity, or finite generation. This skill of making the problem easier is crucial in research, when one needs to add hypotheses to make progress. 6) after writing a question, ask yourself what it tests, i.e. what is needed to solve it? 
These are just some ideas that arise upon trying to prepare to help students pass the prelim as well as prepare the write a thesis. Recognitions: Homework Help Science Advisor Alg prelim 2002. Do any 6 problems including I. I. True or false? Tell whether each statement is true or false, giving in each case a brief indication of why, e.g. by a one or two line argument citing an appropriate theorem or principle, or counterexample. Do not answer “this follows from B’s theorem” without indicating why the hypotheses of B’s theorem hold and what that theorem says in this case. (i) A commutative ring R with identity 1 ≠ 0, always has a non trivial maximal ideal M (i.e. such that M ≠ R). (ii) A group of order 100 has a unique subgroup of order 25. (iii) A subgroup of a solvable group is solvable. (iv) A square matrix over the rational numbers Q has a unique Jordan normal form. (v) In a noetherian domain, every non unit can be expressed as a finite product of irreducible elements. (vi) If F in K is a finite field extension, every automorphism of F extends to an automorphism of K. (vii) A vector space V is always isomorphic to its dual space V*. (viii) If A is a real 3 x 3 matrix such that AA^t = Id, (where A^t is the transpose of A), then there exist mutually orthogonal, non - zero, A - invariant subspaces V, W of R^3. In the following proofs give as much detail as time allows. II. Do either (i) or (ii): (i) If G is a finite group with subgroups H,K such that G = HK, and K is normal, prove G is the homomorphic image of a “semi direct product” of H and K (and define that concept). (ii) If G is a group of order pq, where p < q, are prime and p does not divide q-1, prove G is isomorphic to Z/p x Z/q. III. If k is a field, prove there is an extension field F of k such that every irreducible polynomial over k has a root in F. IV. Prove every ideal in the polynomial ring Z[X] is finitely generated where Z is the integers. V. If n is a positive integer, prove the Galois group over the rational field Q, of X^n - 1, is abelian. VI. Do both parts: (i) State the structure theorem for finitely generated torsion modules over a pid. (ii) Prove there is a one - one correspondence between conjugacy classes of elements of the group GL(3,Z/2) of invertible 3x3 matrices over Z/2, and the following six sequences of polynomials: (1+x, 1+x,1+x), (1+x, 1+x^2), (1+x+x^2+x^3), (1+x^3), (1+x+x^3), (1+x^2+x^3) [omitted(iii) Give representatives for each of the 6 conjugacy classes in GL(3,Z2).] VII. Calculate a basis that puts the matrix A : with rows ( 8, -4) and (9, -4) in Jordan form. VIII. Given k - vector spaces A, B and k - linear maps f:A-->A, g:B-->B, with matrices (x[ij]), (y[kl]), in terms of bases a1,...,an, and b1,...,bm, define the associated basis of AtensorB and compute the associated matrix of ftensorg: AtensorB--->AtensorB. Recognitions: Homework Help Science Advisor for advice on preparing for grad school, from me and others, see my posts 11 and 12 in the thread "4th year undergrad", near this one. how are Summer REUs regarded for graduate admissions? Recognitions: Homework Help Science Advisor They add something, especially if the summer reu guru says you are creative and powerful. i think my friend jeff brock (now full prof at brown) did one at amherst or williams and actually proved some theorems and got a big boost there. they are also taught by people who may be either refereeing or reviewing letters of grad school application.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9587496519088745, "perplexity_flag": "middle"}
http://cms.math.ca/10.4153/CMB-2012-038-2
Canadian Mathematical Society (CMB)

# Compact Subsets of the Glimm Space of a $C^*$-algebra

Published: 2012-10-28

• Aldo J. Lazar, School of Mathematical Sciences, Tel Aviv University, Tel Aviv 69978, Israel

## Abstract

If $A$ is a $\sigma$-unital $C^*$-algebra and $a$ is a strictly positive element of $A$ then for every compact subset $K$ of the complete regularization $\mathrm{Glimm}(A)$ of $\mathrm{Prim}(A)$ there exists $\alpha \gt 0$ such that $K\subset \{G\in \mathrm{Glimm}(A) \mid \Vert a + G\Vert \geq \alpha\}$. This extends a result of J. Dauns to all $\sigma$-unital $C^*$-algebras. However, there are a $C^*$-algebra $A$ and a compact subset of $\mathrm{Glimm}(A)$ that is not contained in any set of the form $\{G\in \mathrm{Glimm}(A) \mid \Vert a + G\Vert \geq \alpha\}$, $a\in A$ and $\alpha \gt 0$.

Keywords: primitive ideal space, complete regularization

MSC Classifications: 46L05 - General theory of $C^*$-algebras
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.6959922313690186, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/167142/general-theory-of-graph-coloring
# General theory of graph coloring

In Ben Steven's article Colored graphs and their properties I read:

We "color" a graph by assigning various colors to the vertices of that graph. [...] this process of coloring is generally governed by a set of coloring rules. For example, the most basic set of coloring rules, referred to as regular coloring, consists of a single rule: no two adjacent vertices may have the same color.

What I am looking for is a truly general theory of graph colorings and, respectively, of general coloring rules. The theory should be so general as to include symmetric (= orderless) context-free grammars.

- Where do the context-free grammars enter into it? – MJD Jul 5 '12 at 17:33
- Via the grammatical roles (= colors) that the grammatical constituents (= symbols of the alphabet) play. – Hans Stricker Jul 5 '12 at 17:42
- Any hint for downvoting is welcome. – Hans Stricker Jul 5 '12 at 20:39
- It wasn't me. – MJD Jul 5 '12 at 20:42
- A coloring is a function from the vertices of a (finite) graph to an initial segment of the natural numbers. A coloring rule is a subset of the set of all such functions. At that level of generality, I doubt there's much of a theory. – Gerry Myerson Jul 6 '12 at 1:52
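Taking Gerry Myerson's framing literally, a coloring rule is just a predicate on colorings, and the "regular" rule from the quoted article is easy to state in code. A small illustration (my own addition, using a hypothetical triangle graph; the function name is an assumption):

```python
from itertools import product

def proper_colorings(edges, n_vertices, n_colors):
    """All colorings (functions V -> {0,...,k-1}) satisfying the 'regular' rule:
    no two adjacent vertices share a color.  Any other 'coloring rule' would be
    a different predicate filtering the same set of functions."""
    return [c for c in product(range(n_colors), repeat=n_vertices)
            if all(c[u] != c[v] for u, v in edges)]

# Triangle graph: 3 colors give 6 proper colorings, 2 colors give none.
triangle = [(0, 1), (1, 2), (0, 2)]
print(len(proper_colorings(triangle, 3, 3)), len(proper_colorings(triangle, 3, 2)))
```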
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9213923811912537, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/113915/composition-of-topologically-connected-binary-relations
## Composition of (topologically) connected binary relations

My question seems far too basic to be unknown, but I could not find anything relevant... Let $X$, $Y$ and $Z$ be compact connected metric spaces, and let $F \subset X \times Y$ and $G \subset Y \times Z$ be closed binary relations such that both projections are surjective. Suppose that they are also connected as topological spaces. Is it true that $G \circ F$ is connected? Probably I should also mention that a combinatorial version of this statement for connected graphs is known to be true.

## 1 Answer

No. There are two (discontinuous) surjective maps $f,g:S^1\to S^1$ whose graphs are connected but the graph of $g\circ f$ (as well as its closure) is not. The map $f$ is defined as follows, using the standard parametrization of $S^1$ by $\mathbb R/2\pi\mathbb Z$. It is the identity on the complement of the arc (parametrized by) $[\pi/2,\pi]$. The arc $[\pi/2,\pi]$ is slightly stretched from the point $\pi/2$ so that its image is the arc $[\pi/2,\pi+\varepsilon]$. The map $g$ is similar but with a discontinuity at 0 rather than $\pi$. The graphs of $f$ and $g$ can be parametrized by an interval and hence connected. The graph of $g\circ f$ is not, because the map has two simple discontinuities (at 0 and $\pi$) that divide the circle into two components. The graphs are not closed in $S^1\times S^1$, but adding points $(\pi,\pi)$ and $(0,0)$ fixes this issue.

- Oh, thank you. So I was very much wrong about graphs too. I thought it would follow easily from the amalgamation property for linear orders, but apparently it does not... – Alexander Shamov Nov 20 at 11:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9726842045783997, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/149283/steady-state-solution-of-diffusion-decay-pde
# Steady state solution of diffusion-decay PDE. Apologies for my overly simple problem. I am looking at the generic diffusion-decay PDE $$u_t=D\nabla^2u-\delta u(x,y,t),~u(0,0,t)=u_0,$$ and I am interested in the steady-state profile of $u(x,y,t)$, i.e. a solution to $$0=D\nabla^2u-\delta u(x,y,t),~u(0,0)=u_0.$$ Using the ansatz $$u(x,y,t) = F e^{\lambda x} + G e^{-\lambda x} + J e^{\lambda y} + K e^{-\lambda y},$$ I find one possible solution with $$\lambda^2=\delta/D,~F=G=J=K=u_0/4.$$ The problem I have now is that this solution grows away from the origin which I find puzzling as I expected to find a solution that has a peak in the origin and decays away from it. Could anyone point me in the right direction please? Thank you. Edit: Apologies for my slow response but @Andrew's answer was a little intimidating (and still is, although I don't seem to see it now - did they delete their answer?) and so I had to do a little background reading. Thanks @Willie Wong for your answer and for giving me some intuition as to why my expectation is wrong. Also thanks to @Andrew for pointing me in the direction of fundamental solutions and so forth. Following @Andrew I found these lecture notes: http://www.stanford.edu/class/math220b/handouts/laplace.pdf Where they construct a radial solution for the Laplace equation using the ansatz $$u(\mathbf{x},t)=v(|\mathbf{x}|,t),$$ and knowing the derivative of the absolute value function and defining radius $r=|\mathbf{x}|$ I get: $$v_t=D(v_{rr} + \frac{1}{r} v_r) - \delta v~;~D,\delta>0~;~r>0$$ For the steady state equation $$0=D(v_{rr} + \frac{1}{r} v_r) - \delta v~;~D,\delta>0~;~r>0$$ I use the ansatz $$v(r)=v_0 \exp{\lambda r},$$ which gives me $$0=D v_0 \exp{(\lambda r)} (\lambda^2 + \frac{\lambda}{r} - \delta)$$ so using $D>0$, $v_0> 0$ I get $$\lambda_{1,2}=-\frac{1}{2r} \mp \frac{1}{2} \sqrt{r^{-2}+4 \delta / D}.$$ Of course this solution blows up towards the origin so I am again a little puzzled and would appreciate any advice / help! In those notes I linked above, they construct the fundamental solution (starting around Eqn 3.2) which is probably what I want but I don't yet understand how some of the steps work. - ## 1 Answer If $D$ and $\delta$ are both positive, as indicated by the nomenclature diffusion-decay, that is the solution you should find. At a critical point $\nabla u = 0$ you have that $$D\nabla^2 u = \delta u$$ suggesting that if $u$ is positive at the critical point, it cannot be a maximum (as other wise $\nabla^2 u < 0$ and leads to a contradiction) and if $u$ is negative at the critical point, it cannot be a minimum. This means that your expectation to have a peak in the origin and decays away from it is unrealistic and untenable. Physically speaking, your equation has two processes, a diffusion and a decay, both are trying to drive the function toward 0. If you don't continuously inject more stuff in from infinity (which is represented by the growing solutions), the only reasonable steady state is $u(x,y) = 0$. - Thanks for your answer! Do you know if the other person deleted their answer? I don't seem to be able to see the answer they gave. Do you have any thoughts on the steps I have taken trying to get a radial solution (see my edits above)? – Name May 29 '12 at 15:10 – Willie Wong♦ May 29 '12 at 15:55 What you are doing is not quite right: $\lambda$ should be constant in your ansatz (I think), yet what you derived has $\lambda$ depending on $r$. Solutions of the radial problem should be in the form of Bessel functions. 
– Willie Wong♦ May 29 '12 at 15:59

Yeah, OK, I think I misunderstood the lecture notes I linked to. Do I understand what you wrote correctly when I say: we want to find a radial solution $v$ such that $\int_{-\infty}^{\infty}{(v(s) D \nabla^2 v - \delta v(s))ds} = v(0)$? This is how I understand the Dirac delta notation, but I don't think I understand the intuition behind this. I chose an exponential ansatz because I have a process where material is released at $r=0$ and diffuses and decays everywhere else in the two-dimensional domain. So in my head an exponential seemed reasonable. – Name May 29 '12 at 16:38

I updated my edit to show more steps. I can also solve the homogeneous equation of the radial ODE I get, finding $v_{\text{hom}} = -c_1 \log{r}+c_2$. However, this homogeneous solution also blows up towards the origin, so that doesn't help me? – Name May 29 '12 at 16:50
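As a side note (mine, not part of the exchange above): for the radial steady-state equation $0 = D(v_{rr} + v_r/r) - \delta v$ the two independent solutions are the modified Bessel functions $I_0(\kappa r)$ and $K_0(\kappa r)$ with $\kappa = \sqrt{\delta/D}$. The decaying one is $K_0$, which grows only logarithmically as $r \to 0$; this is the behaviour of the fundamental solution constructed in the linked notes. A quick numerical check with scipy (the constants are illustrative assumptions):

```python
import numpy as np
from scipy.special import k0   # modified Bessel function of the second kind, order 0

# Radially symmetric steady state of  D * laplacian(u) - delta * u = 0  in 2D,
# decaying away from a source at the origin:  u(r) ~ const * K_0(kappa * r),
# kappa = sqrt(delta / D); the constant depends on the source strength.
D, delta = 1.0, 0.5
kappa = np.sqrt(delta / D)
r = np.linspace(0.05, 10.0, 200)     # K_0 diverges (logarithmically) at r = 0
u = k0(kappa * r)
print(u[:3], u[-3:])                 # decays monotonically towards 0 as r grows
```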
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 11, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9545765519142151, "perplexity_flag": "head"}
http://dsp.stackexchange.com/questions/6421/face-recognition-using-independent-component-analysis-ica
# Face recognition using independent component analysis (ICA)

Reading about independent component analysis (ICA), I learned that one of its applications is face recognition. I think in this problem we have a database of images and a test image to be recognized. However, I can't figure out what the independent components are and what the mixture (multivariate signal) is.

-

## 2 Answers

For your data set of images, first vectorize the images by raster scanning them, making them vectors. Thus, say you have $M$ images, each of size $64\times 64$ pixels. Then the total number of pixels per image is $N=64^2$, which means $N=4096$. Now, you have an image matrix of size $M\times N$. For this image matrix, what you want to do is find the $M$ independent components, again of length $N$. In other words, you want to decompose this set of images into another set of images, except those images make up the independent components of the original faces. (Similar to eigen-faces, but different.) To answer your question about the mixture, the set of faces that you start with is the mixture. In other words, the set of faces that you have is assumed to be a mixture of independent faces that you are trying to find. ICA will find those independent features for you, and for natural images, independent components turn out to be features with extremely high kurtosis, namely, the edges.

- thank you....... – Hesham Abouelsoaod Jan 8 at 12:15

The basic idea in ICA is to separate a mixture into statistically independent components. Sometimes that's the end of the problem (e.g., solving the cocktail party problem). In face recognition the weights of the mixture components are used to create a feature vector to uniquely and succinctly identify a person. A notable difference is that you are trying to estimate the components not of one mixture, but of several (the faces, that is). There's an enlightening presentation with code here.

- Technically speaking, ICA cannot solve the cocktail party problem because it assumes an instantaneous mixture model, which the time delay between microphones completely destroys, and it becomes a convolutive mixture problem instead. – Mohammad Jan 5 at 4:52 @Emre thank you. – Hesham Abouelsoaod Jan 8 at 12:14 Emre how can we get in touch with you? – Ktuncer Jan 14 at 0:29 Emre, the e-mail is probably visible to you but not to us. – Ktuncer Jan 15 at 15:08 @Ktuncer: fixed – Emre Jan 15 at 18:50
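To make the first answer concrete, here is a minimal sketch (an addition, not from the thread) using scikit-learn's FastICA on a stand-in image matrix; in a real application the random `faces` array would be replaced by the vectorized face database, and the number of components is a free choice.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical data: M face images, each 64x64, already raster-scanned to rows.
M, N = 200, 64 * 64
faces = np.random.rand(M, N)          # stand-in for a real face database

ica = FastICA(n_components=40, random_state=0, max_iter=1000)
weights = ica.fit_transform(faces)    # shape (M, 40): mixing weights per face
components = ica.components_          # shape (40, N): independent "basis faces"

# Each row of `components` can be reshaped to 64x64 and viewed as an image;
# the rows of `weights` serve as feature vectors for recognition.
test_features = ica.transform(faces[:1])
```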
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.940854012966156, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/11521?sort=votes
## What is the affine invariant used in computer vision?

The affine invariant for 4 coplanar points ABCD is said to be `Area(ACD)/Area(ABC)`. Can somebody provide a proof of this, i.e. explain why this quantity is invariant under affine transformations?

-

## 1 Answer

The ratio of areas of $ABC$ and $ACD$ is the ratio in which the line $AC$ divides the segment $BD$ (and it is the ratio of the heights of $B$ and $D$ over $AC$ respectively). This latter ratio is affine invariant, as affine transformations preserve length ratios on any line. Do make sure that your points don't collapse onto a single line though.

-
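A quick numerical sanity check (a sketch added here, not part of the original answer): under an affine map $x\mapsto Mx+t$ every signed triangle area is multiplied by $\det M$, so the ratio `Area(ACD)/Area(ABC)` is unchanged. The points and the map below are arbitrary examples.

```python
import numpy as np

def tri_area(p, q, r):
    # Signed area of triangle pqr (half the 2D cross product).
    return 0.5 * ((q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0]))

rng = np.random.default_rng(0)
A, B, C, D = (rng.standard_normal(2) for _ in range(4))

# Random invertible affine map x -> M x + t.
M = rng.standard_normal((2, 2))
while abs(np.linalg.det(M)) < 1e-3:
    M = rng.standard_normal((2, 2))
t = rng.standard_normal(2)
f = lambda p: M @ p + t

ratio_before = tri_area(A, C, D) / tri_area(A, B, C)
ratio_after = tri_area(f(A), f(C), f(D)) / tri_area(f(A), f(B), f(C))
print(ratio_before, ratio_after)   # the two ratios agree
```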
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9280160069465637, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/188039/showing-that-the-roots-of-a-polynomial-with-descending-positive-coefficients-lie?answertab=active
# Showing that the roots of a polynomial with descending positive coefficients lie in the unit disc.

Let $P(z)=a_nz^n+\cdots+a_0$ be a polynomial whose coefficients satisfy $$0<a_0<a_1<\cdots<a_n.$$ I want to show that the roots of $P$ lie in the unit disc. The obvious idea is to use Rouche's theorem, but that doesn't quite work here, at least with the choice $f(z)=a_nz^n, g(z)=$ (the rest). Any ideas?

- I think this is related to the Schur-Cohn criterion – Cocopuffs Aug 28 '12 at 19:28 – Hans Lundmark Aug 28 '12 at 19:40

## 1 Answer

The thing to do is to look instead at the polynomial $$Q(z) = (1-z)P(z) = (1-z)\left(\sum_{i=0}^n a_iz^i \right) = a_0 -a_n z^{n+1} + \sum_{i=1}^n (a_i-a_{i-1})z^i$$ Now, suppose $z$ with $|z|>1$ is a root of $P(z)$, and hence a root of $Q(z)$. Therefore, we have $a_0 + \sum_{i=1}^n (a_i-a_{i-1})z^i = a_n z^{n+1}$. Then, we have \begin{aligned} |a_n z^{n+1}| &= |a_0 + \sum_{i=1}^n (a_i-a_{i-1})z^i| \\ & \le a_0 + \sum_{i=1}^n (a_i-a_{i-1})|z^i| \\ & < a_0|z^n| + \sum_{i=1}^n (a_i-a_{i-1})|z^n| \\ & = |a_n z^n|\end{aligned} a contradiction, since $|a_n z^{n+1}|<|a_n z^n|$ forces $|z|<1$. (The strict inequality in the third line uses $|z|>1$, and the final equality is the telescoping sum $a_0+\sum_{i=1}^n(a_i-a_{i-1})=a_n$.) For a nice article on integer polynomials, see here. (Your problem is Proposition 10)

- The proof looks short enough that you could present it as an answer. External links might be broken some time in the future... – Fabian Aug 28 '12 at 19:37 How did you get the idea to construct $Q(z)$? – MJD Aug 28 '12 at 20:17 +1 Very nice proof. – DonAntonio Aug 29 '12 at 2:52 yea, thanks, the construction of $Q(z)$, makes the condition $a_i$ are monotonic into practice. – van abel Sep 6 '12 at 8:59
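A quick numerical check of the statement (an added sketch, not part of the thread): draw random strictly increasing positive coefficients and confirm with NumPy that every root lies inside the unit disc.

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(1000):
    n = rng.integers(2, 12)
    # strictly increasing positive coefficients a_0 < a_1 < ... < a_n
    a = np.cumsum(rng.random(n + 1) + 1e-3)
    # np.roots expects coefficients from highest degree to lowest: a_n, ..., a_0
    roots = np.roots(a[::-1])
    assert np.all(np.abs(roots) < 1.0), "found a root outside the unit disc"
print("all roots inside the unit disc")
```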
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8951942920684814, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/10832?sort=oldest
## Standard name for basis-independent submatrices? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Given a linear map $T:H\to H$ on an inner-product space $H$ and a subspace $K\subseteq H$, define the map $T_K = \pi_K T \pi_K^* :K \to K$, where $\pi_K:H\to K$ is the orthogonal projection. As an important special case, if $H=\mathbb{R}^n$ and $K$ is a coordinate subspace, then with respect to standard bases, $T_K$ is represented by a principal submatrix of the matrix of $T$. Is there a standard, or at least widely recognized, name for $T_K$ when $K$ is not a coordinate subspace of $\mathbb{R}^n$? The book Matrix Analysis by R. Bhatia calls $T_K$ the compression of $T$ to $K$, but I haven't seen that word used in this way elsewhere (and it's a tricky word to google). - I don't think I have heard any name other than “compression”, but it's not terribly much used I think, so it is probably fairly useless as a search term. – Harald Hanche-Olsen Jan 5 2010 at 20:09 It's neat to see that this was useful: arxiv.org/abs/1001.1954 – Jonas Meyer Jan 20 2010 at 6:43 @Jonas: yes, needing the right terminology for that paper was exactly the reason for the question. So thanks for the helpful answer! – Mark Meckes Jan 20 2010 at 14:05 ## 1 Answer The standard name in operator theory is "compression", and its partner in crime is "dilation". I.e., A is a compression of B if and only if B is a dilation of A (although sometimes "dilation" is reserved for cases where the compression respects powers). The Wikipedia entry is not proof, but here it is anyway. As for searches, you'll get some relevant hits from "compression of an operator" with quotes. Here are some examples. Some further remarks: Sz.-Nagy and Foiaș in Harmonic analysis of operators on Hilbert space (1970) use the notation $\text{pr }T$ for the compression of $T$ onto $K$ (see page 10), but apparently without ever giving it a name. The notation is suggestive of "projection", and that is the terminology used by Sarason in "Generalized interpolation in $H^\infty$" (1967). Lebow goes into more detail on terminology in "A note on normal dilations" (1965), saying in particular that Sz.-Nagy used "projection". In fact, this is the terminology used by Sz.-Nagy in the celebrated appendix to Riesz and Sz.-Nagy's Functional analysis (1955), which in turn refers to Halmos's paper "Normal dilations and extensions of operators" (1950) as the first place where "compression" and "dilation" were used. The terminology "strong compression" may be used when the compression respects powers, and this is the same as saying that $K$ is semi-invariant for $T$ (see Sarason's "On spectral sets having connected complement" (1965)). If $K$ is reducing for $T$, i.e., if both $K$ and $K^\perp$ are invariant subspaces for $T$, then Lebow calls the compression a "reduction". Dixmier gives some terminology in von Neumann algebras (translated 1981 printing) for the case when the compression is applied to an entire von Neumann algebra of operators, which clashes somewhat with the terminology of Lebow. A von Neumann algebra compressed to the space of a projection in the algebra is called a "reduced" von Neumann algebra (page 19), even though the space is reducing only if the projection is in the center. The compression of a von Neumann algebra onto the space of a projection in the commutant (in which case the compression is a normal $*$-homomorphism) is called an "induction". 
If $P$ denotes the orthogonal projection you called $\pi_K$, then Dixmier uses the notation $T_K$ or $T_P$ for the compression, but without ever giving a name to the construction for single operators. On the other hand, Jones and Sunder use "reduction" for what Dixmier calls "induction", more in tune with Lebow, on page 21 of Introduction to subfactors (1997). I stand by my answer that by now "compression" is most standard for single operators, and it is satisfying to find out that we have Halmos to thank for this. -
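As a small numerical illustration of the definition (a sketch added here, not part of the thread): if the columns of $V$ form an orthonormal basis of $K$, the matrix of the compression $T_K=\pi_K T\pi_K^*$ in that basis is $V^\top T V$; for a coordinate subspace this is exactly a principal submatrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 2
T = rng.standard_normal((n, n))         # a linear map on R^5

# Coordinate subspace spanned by e_1 and e_3: compression = principal submatrix.
V = np.zeros((n, k))
V[0, 0] = V[2, 1] = 1.0
T_K = V.T @ T @ V                        # matrix of pi_K T pi_K^* in the basis V
print(np.allclose(T_K, T[np.ix_([0, 2], [0, 2])]))   # True

# A non-coordinate subspace: orthonormalize two random vectors via QR.
Q, _ = np.linalg.qr(rng.standard_normal((n, k)))
T_Q = Q.T @ T @ Q                        # compression of T to span(Q)
```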
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 31, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9228529930114746, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/73772/concrete-example-of-infty-categories
## Concrete example of $\infty$-categories. ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) I've seen many different notion of $\infty$-categories, actual I've seen the operadic-globular ones of Batanin and Leinster and the opetopic too and eventually I'll see the simplicial ones too. Although there are so many notion of $\infty$-category so far I've only seen the following examples: • $\infty$-grupoids as fundamental groupoids topological spaces; • $(\infty,1)$-categories, mostly via topological example and application in algebraic geometry (in particular in derived algebraic geometry); • strict $(\infty,\infty)$-categories, and their $n$-dimensional versions, for instance the various categories of strict-$n$-categories (here I intend $n \in \omega+{\infty}$). There are other example of $\infty$-categories, especially from algebraic topology or algebraic geometry, but also mathematical physics and computer science and logic? In particular I wondering if there's a concrete example, well known, weak $(\infty,\infty)$-category. (Edit:) after the a discussion with Mr.Porter I think adding some specifications may help: I'm looking for models/presentations of $\infty$-weak-categories for which is possible to give a combinatorial description, in which is possible to make manipulations and explicit calculations, but also $\infty$-categories arising in practice in various mathematical context. - 11 There is the $(\infty, n)$-category of bordisms. It is very interesting for algebraic topologists. – Chris Schommer-Pries Aug 26 2011 at 14:14 Chris, do you know of an up-to-date account of this in the literature? (I know of a couple of accounts, but they might not be considered the most up to date). If you do, I encourage you to post it as an answer, because it's a very important sort of example. Jacob Lurie's name should probably be mentioned as well. – Todd Trimble Aug 26 2011 at 16:16 4 Is Lurie's paper sketching the classification of topological field theories not considered up-to-date enough now? – Jeffrey Giansiracusa Aug 26 2011 at 16:23 4 In James Cranch's thesis (front.math.ucdavis.edu/1011.3243) there is the $(\infty,1)$-category of spans, which is an example of a rather different flavour from the ones that you mention. – Neil Strickland Aug 26 2011 at 17:03 1 Jeffrey: you're probably right -- I'm only asking. – Todd Trimble Aug 26 2011 at 17:10 show 2 more comments ## 6 Answers As per Todd's suggestion I am posting this as an answer. The $(\infty, n)$-category of bordisms is an important example for many reasons, the most imporant of which is its role in the Baez-Dolan cobordism hypothesis. There are several constructions of it, but one of the more modern ones, in the language of $(\infty,n)$-categories ($n$-fold complete Segal spaces), is given in Jacob Lurie's On the Classification of Topological Field Theories. - ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. I think it's worth mentioning the examples of weak $\infty$-groupoids constructed from type theory due to van den Berg and Garner and Lumsdaine. These examples are constructed out of syntax and so are very concrete in a certain sense (cf. Tom Leinster's answer). These models of $\infty$-groupoids arise from Martin-Löf type theory which was designed in such a way as to exhibit good computational properties. 
So there is a certain sense in which it is possible to compute with types (although it is a more technical sense of "compute" than you may have in mind). Of course, one expects that many of these examples (at least of $(\infty,1)$-categories) to be very closely related (as are, e.g., fundamental groupoids of spaces and Kan complexes). - This probably isn't "well known", and you might think it's cheating. Nevertheless, it's a very concrete, maximally weak, example of an $\infty$-category that isn't an $(\infty, n)$-category for any $n < \infty$. It's simply the free $\infty$-category on one cell in each dimension. For "free" to make sense, you have to use an appropriate definition of $\infty$-category (e.g. any one in which $\infty$-categories are defined as algebras for some particular monad or operad). For Segal-type definitions, it's not clear that it does make sense. I say "very concrete" because the cells and operations of this free $\infty$-category can be described in an explicit combinatorial way, much as the elements of a free group admit an explicit combinatorial description -- except that this, of course, is more complex. - 1 @tomleinster This should be the free $\infty$-category on the terminal globular set, shouldn't it? Is there any reference in which there's the explicit combinatorial description of this weak category? – Giorgio Mossa Sep 1 2011 at 16:06 1 I think the original reference should be Batanin's paper "Monoidal Globular Categories as a natural setting..." (which unfortunately doesn't seem to be freely available). Somewhere in among the various categories of pasting diagrams and so forth in Leinster's book "Higher Operads, Higher Categories" (freely available on his website) lies this particular object, but I'm not sure exactly where. Street's computads (see the nlab page) are supposed to enable explicit combinatorial, generators-and-relations treatment of higher categories. – Tim Campion May 27 at 2:57 1 It's hard for a non-expert to gauge just what what role computads play -- Batanin erroneously claimed (I think in the paper mentioned above?) that they form a presheaf category; this was corrected by Makkai and Zawadowski in their paper "The category of 3-computads is not cartesian closed". In Leinster's book, I think one roughly finds the idea "computads = many-in/many-out version of opetopes", and the fact that they don't form a presheaf category is taken as a sort of "no-go" theorem for using them in an "operadic" way. But Batanin has some sort of preprint saying that they're still useful – Tim Campion May 27 at 3:03 available from his webpage web.science.mq.edu.au/~mbatanin/papers.html . But that preprint is pretty old, and my impression as a non-expert is that Batanin and Weber's work over the last decade doesn't really work through stuff combinatorially... I would really love for someone to explain to me what's wrong or unclear with this or any other of my above statements. – Tim Campion May 27 at 3:07 It's plausible that there's a translation between the notion of an "$A_\infty$ disk-like n-category" (c.f. my paper "The blob complex" with Kevin Walker) and the usual notion of an $(\infty,n)$-category. (Roughly, the only difference should be that disk-like requires lots of duality.) In that case the blob complex of an $(n{-}k)$-manifold with coefficients in any disklike $n$-category gives an $(\infty, k)$-category. - The obvious concrete example is any Kan complex considered as a weak infinity groupoid. 
If that is not concrete enough, take a space and its singular complex is a weak infinity category. If you want category as against groupoid, the homotopy coherent nerve of a simplicially enriched category $\mathcal{B}$, is another example (provided $\mathcal{B}$ is `locally Kan' i.e. fibrant.) Thus setting size issues aside, the category of topological spaces yields an infinity category. (Look up homotopy coherent nerve in the nLab if you need. It is a very neat idea.) (Edit: I should have started by asking what `concrete' means for you.) (Edit number 2:) I see my original answer did not address the last part of the question. For that you are requiring morphisms in all dimensions to be potentially non-(invertible up to higher cells) so there is a non-reversiblility about things. Chris's examples give some idea of this but there is a nice set of ideas that have not been fully explored as yet that may give another. The context in which this arises is that of directed homotopy. This arises is computer science when modelling concurrent and distributed computing. An action takes time and resources, so is non reversible. If you model things by a directed space (and there are various interpretations of that idea see Marco Grandis' book for instance), and then use directed $n$-simplices for all $n$ and the test spaces, you get a singular complex with quite `singular' properties! (Look at directed space in the nLab for some ideas of what is going on here.) I tried to capture some of this in a paper (Enriched categories and models for spaces of evolving states, Theoretical Computer Science, 405, (2008), pp. 88 - 100.) The structure would seem to be related to the Segal category types constructions, but an adequate description is still lacking. The challenge is then to solidify this link and then to find out if it does give an adequate model of the sorts of situation modelled by the directed spaces in the first instance. - By concrete in this context I mean a model in which one can actually do manipulations or carry out some calculations. An example from group theory: a concrete group is for instance the group of word over an alphabet, or the group of permutation; other examples from algebra and category theory are the polinomial rings but also the free-category over a graph. In this case what I'm looking for is some example of ∞-weak-categories in which it's easy to see what are the n-morphisms, for each n∈N, and its also easy to describe the compositions between the morphisms. I hope this answer your question. – Giorgio Mossa Aug 26 2011 at 21:47 3 In that case, take any simplicial group, and look at the Moore complex. The simplicial group sits on its underlying simplical set which is a Kan complex, and there is a filler algorithm which gives a (weak) composite of a pasting scheme of faces of a horn of a simplex. Have a look at the Menagerie notes a fairly short version of which can be got from the nLab page of that name. In short you want to study the combinatorial homotopy (in J.H.C.Whitehead's sense) of weak infinity categories, a worthy aim. (I can say more if you like but I never know how much detail to give in a comment!) – Tim Porter Aug 27 2011 at 10:36 1 To my mind, the most concrete example of a multiple weak category in which one can easily see the compositions is the singular cubical complex $S^\square X$ of a space $X$. (Even better in some ways is the cubical singular complex $R X_*$ of a filtered space, but that is another story.) 
This has the advantage one can easily formulate multiple compositions $[\alpha_{(r)}]$ where $(r)$ is a multi-index. I don't know how to do that simplicially or globularly. – Ronnie Brown Jan 29 2012 at 11:30 A concrete example of a weak $\infty$-category, but not well studied abstractly, is the cubical singular complex of a space, preferably with the connections introduced in our 1981 JPAA paper with Philip Higgins. The clear advantage of the cubical setup is the command of multiple compositions, using an easy matrix notation. Thus one can express for the diagram that the big square is the composition of the little squares, by simply writing something like $\alpha=[\alpha_{ij}]$. (How does one do that simplicially, or globularly?) This and higher dimensional versions are useful in expressing algebraic inverses to subdivision, a useful tool in local-to-global problems. Because higher groupoids are nonabelian, unlike higher groups, one can also obtain nonabelian results in algebraic topology. The book "Nonabelian algebraic topology: filtered spaces, crossed complexes, cubical homotopy groupoids" (EMS, (2011), pdf available from here) has a large number of uses of algebraic and geometric (e.g. homotopical) conclusions derived from rewriting such multiple arrays. The main results were conjectured and eventually proved by cubical methods. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 45, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9401081800460815, "perplexity_flag": "middle"}
http://mathhelpforum.com/algebra/167554-virginia-state-standards-learning-algebra-problem-print.html
# Virginia state standards of Learning Algebra problem Printable View • January 5th 2011, 03:17 PM vaironxxrd Virginia state standards of Learning Algebra problem Hello Guys i just can't understand this question because i forgot much of my algebra's problem.. and as you can see this one has no explanation so if you guys can explain why is 3 the right answer ( got it after you do the problem wrong) and give me some examples to solve if you guys can. http://s2.postimage.org/5iqv2c10/Alg..._operation.jpg • January 5th 2011, 03:21 PM Archie Meade Quote: Originally Posted by vaironxxrd Hello Guys i just can't understand this question because i forgot much of my algebra's problem.. and as you can see this one has no explanation so if you guys can explain why is 3 the right answer ( got it after you do the problem wrong) and give me some examples to solve if you guys can. http://s2.postimage.org/5iqv2c10/Alg..._operation.jpg $f(x)=x^2-x-6$ $f(x)=0$ $x^2-x=6\Rightarrow\ x(x-1)=6$ The factors of 6 that differ by 1 are ? There is a positive and negative solution. • January 5th 2011, 03:21 PM dwsmith Do you know how to factor? $(x\pm\alpha)(x\pm\beta)=0$ $\alpha*\beta=-6$ $x\beta\pm x\alpha=-x$ • January 5th 2011, 03:21 PM rtblue Factor: $\displaystyle x^2-x-6=(x-3)(x+2)$ We are finding the zeros of this function, so we set it equal to zero. $(x-3)(x+2)=0$ By the zero product property, either x-3 or x+2 can be zero. we have x-3=0 and x+2=0 solving for x, we get x=3, x=-2 Here, try the following problem: Find both zeros of: $x^2+x-12$ • January 5th 2011, 03:42 PM Plato I am not sure what you expect us to do for you. But this is the answer to your question: $f(3)=0$. That is why 3 is the correct answer. Now if you do not understand why, then that is a different matter. Sometimes we all must suffer from what we have forgotten. • January 5th 2011, 03:45 PM vaironxxrd I don't like all of you said i am suffering the consequences • January 5th 2011, 03:46 PM vaironxxrd Quote: Originally Posted by Plato I am not sure what you expect us to do for you. But this is the answer to your question: $f(3)=0$. That is why 3 is the correct answer. Now if you do not understand why, then that is a different matter. Sometimes we all must suffer from what we have forgotten. Yea Im suffering the consequences for moving state • January 5th 2011, 03:48 PM dwsmith Quote: Originally Posted by vaironxxrd I don't like all of you said i am suffering the consequences Only one person said that not all. Regardless of what state you are reside, you need to know how to solve polynomials in basic math. • January 5th 2011, 03:56 PM Archie Meade I felt it better to use a straightforward method before introducing factoring and so on. There are a few ways to solve. $3$ and $-2$ are the two possible answers. Here is an explanation of why 3 is an answer. If two values are equal, then when we subtract them the answer is zero. $6-6=0$ $\left(x^2-x\right)-6=0$ Therefore $x^2-x=6$ $x^2=x(x)$ so $x(x)-x(1)=6$ Factor as x is common $x(x-1)=6$ The factors of 6 that differ by 1 are 3 and 2. Therefore x is 3 and (x-1) is 2. However $(-3)(-2)=6$ and so $x=-2$ and $x-1=-3$ is an alternative. If that makes sense, you could try $(x-3)(x+2)=x^2-x-6$ later. • January 5th 2011, 04:26 PM Plato Frankly I have written for this kind of test, albeit in a different state in the US. I will tell you the objective of the question. You are given an option of say five different choices. Probability three of which are wildly off. So that leaves only two to check. 
Which gives $f(a)=0~?$ What this tests is your understanding of the meaning of a zero of a function? If you have to actually take time to solve the equation, then that docks time from you on the rest of the test. So understanding the basic concepts improves your overall score. • January 5th 2011, 05:17 PM vaironxxrd Quote: Originally Posted by dwsmith Only one person said that not all. Regardless of what state you are reside, you need to know how to solve polynomials in basic math. Sorry i meant " Like all of you said" And yea one person said that • January 5th 2011, 05:22 PM vaironxxrd Quote: Originally Posted by Plato Frankly I have written for this kind of test, albeit in a different state in the US. I will tell you the objective of the question. You are given an option of say five different choices. Probability three of which are wildly off. So that leaves only two to check. Which gives $f(a)=0~?$ What this tests is your understanding of the meaning of a zero of a function? If you have to actually take time to solve the equation, then that docks time from you on the rest of the test. So understanding the basic concepts improves your overall score. oh ok i understand, i am now trying my hardest on this kind of stuff like Algebra geometry, and focusing on practicing my weak points in math • January 5th 2011, 05:23 PM Archie Meade In the case of your question, were you given values from which to choose ? In that case, you only need place the values into the equation to see which one gives you zero, as Plato showed. $x^2-x-6=0$ If x=2 $2^2-2-6=(4-2)-6=2-6$ which is not zero. If x=5 $5^2-5-6=(25-5)-6=20-6$ which is not zero. If x=3 $3^2-3-6=(9-3)-6=6-6$ and that is zero. No need for factoring... • January 5th 2011, 05:26 PM vaironxxrd Quote: Originally Posted by Archie Meade In the case of your question, were you given values from which to choose ? In that case, you only need place the values into the equation to see which one gives you zero, as Plato showed. $x^2-x-6=0$ If x=2 $2^2-2-6=(4-2)-6=2-6$ which is not zero. If x=5 $5^2-5-6=(25-5)-6=20-6$ which is not zero. If x=3 $3^2-3-6=(9-3)-6=6-6$ and that is zero. No need for factoring...
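For readers who want to check such factorings by machine, a minimal SymPy sketch (an illustration added here, not part of the original thread):

```python
import sympy as sp

x = sp.symbols('x')
f = x**2 - x - 6

print(sp.factor(f))      # (x - 3)*(x + 2)
print(sp.solve(f, x))    # [-2, 3]  -> the zeros of f

# The practice problem from the thread: find both zeros of x^2 + x - 12.
print(sp.solve(x**2 + x - 12, x))   # [-4, 3]
```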
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 33, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9528744220733643, "perplexity_flag": "middle"}
http://www.mashpedia.com/Outlier
# Outlier Language Figure 1. Box plot of data from the Michelson-Morley Experiment displaying outliers in the middle column. In statistics, an outlier[1] is an observation that is numerically distant from the rest of the data. Grubbs[2] defined an outlier as: An outlying observation, or outlier, is one that appears to deviate markedly from other members of the sample in which it occurs. Outliers can occur by chance in any distribution, but they are often indicative either of measurement error or that the population has a heavy-tailed distribution. In the former case one wishes to discard them or use statistics that are robust to outliers, while in the latter case they indicate that the distribution has high kurtosis and that one should be very cautious in using tools or intuitions that assume a normal distribution. A frequent cause of outliers is a mixture of two distributions, which may be two distinct sub-populations, or may indicate 'correct trial' versus 'measurement error'; this is modeled by a mixture model. In most larger samplings of data, some data points will be further away from the sample mean than what is deemed reasonable. This can be due to incidental systematic error or flaws in the theory that generated an assumed family of probability distributions, or it may be that some observations are far from the center of the data. Outlier points can therefore indicate faulty data, erroneous procedures, or areas where a certain theory might not be valid. However, in large samples, a small number of outliers is to be expected (and not due to any anomalous condition). Outliers, being the most extreme observations, may include the sample maximum or sample minimum, or both, depending on whether they are extremely high or low. However, the sample maximum and minimum are not always outliers because they may not be unusually far from other observations. Naive interpretation of statistics derived from data sets that include outliers may be misleading. For example, if one is calculating the average temperature of 10 objects in a room, and nine of them are between 20 and 25 degrees Celsius, but an oven is at 175 °C, the median of the data will be between 20 and 25 °C but the mean temperature will be between 35.5 and 40 °C. In this case, the median better reflects the temperature of a randomly sampled object than the mean; however, naively interpreting the mean as "a typical sample", equivalent to the median, is incorrect. As illustrated in this case, outliers may be indicative of data points that belong to a different population than the rest of the sample set. Estimators capable of coping with outliers are said to be robust: the median is a robust statistic, while the mean is not. ## Occurrence and causes In the case of normally distributed data, roughly 1 in 22 observations will differ by twice the standard deviation or more from the mean, and 1 in 370 will deviate by three times the standard deviation; see three sigma rule[3] for details. In a sample of 1000 observations, the presence of up to five observations deviating from the mean by more than three times the standard deviation is within the range of what can be expected, being less than twice the expected number and hence within 1 standard deviation of the expected number – see Poisson distribution, and not indicative of an anomaly. If the sample size is only 100, however, just three such outliers are already reason for concern, being more than 11 times the expected number. 
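A short calculation (an added sketch, not part of the article) reproducing the figures quoted above with SciPy: the two-sided tail probabilities beyond 2 and 3 standard deviations, and the Poisson-approximated expected number of 3-sigma deviations for samples of 1000 and 100 observations.

```python
from scipy.stats import norm

# Two-sided tail probability beyond 2 and 3 standard deviations.
p2 = 2 * norm.sf(2)    # about 1/22
p3 = 2 * norm.sf(3)    # about 1/370 (roughly 0.3 %)

for n in (1000, 100):
    lam = p3 * n       # Poisson rate for the count of |deviation| > 3 sigma
    print(n, lam)      # ~2.7 expected for n = 1000, ~0.27 for n = 100
```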
In general, if the nature of the population distribution is known a priori, it is possible to test if the number of outliers deviate significantly from what can be expected: for a given cutoff (so samples fall beyond the cutoff with probability p) of a given distribution, the number of outliers will follow a binomial distribution with parameter p, which can generally be well-approximated by the Poisson distribution with λ = pn. Thus if one takes a normal distribution with cutoff 3 standard deviations from the mean, p is approximately .3%, and thus for 1,000 trials one can approximate the number of samples whose deviation exceeds 3 sigmas by a Poisson distribution with λ = 3. ### Causes Outliers can have many anomalous causes. A physical apparatus for taking measurements may have suffered a transient malfunction. There may have been an error in data transmission or transcription. Outliers arise due to changes in system behaviour, fraudulent behaviour, human error, instrument error or simply through natural deviations in populations. A sample may have been contaminated with elements from outside the population being examined. Alternatively, an outlier could be the result of a flaw in the assumed theory, calling for further investigation by the researcher. Additionally, the pathological appearance of outliers of a certain form appears in a variety of datasets, indicating that the causative mechanism for the data might differ at the extreme end (King effect). ### Caution Unless it can be ascertained that the deviation is not significant, it is ill-advised to ignore the presence of outliers. Outliers that cannot be readily explained demand special attention – see kurtosis risk and black swan theory. ## Identifying outliers There is no rigid mathematical definition of what constitutes an outlier; determining whether or not an observation is an outlier is ultimately a subjective exercise. Outlier detection[4][5] Model-based methods which are commonly used for identification assume that the data are from a normal distribution, and identify observations which are deemed "unlikely" based on mean and standard deviation: It is proposed to determine in a series of $m$ observations the limit of error, beyond which all observations involving so great an error may be rejected, provided there are as many as $n$ such observations. The principle upon which it is proposed to solve this problem is, that the proposed observations should be rejected when the probability of the system of errors obtained by retaining them is less than that of the system of errors obtained by their rejection multiplied by the probability of making so many, and no more, abnormal observations. (Quoted in the editorial note on page 516 to Peirce (1982 edition) from A Manual of Astronomy 2:558 by Chauvenet.) • Dixon's Q test • ASTM E178 Standard Practice for Dealing With Outlying Observations Other methods flag observations based on measures such as the interquartile range. For example, if $Q_1$ and $Q_3$ are the lower and upper quartiles respectively, then one could define an outlier to be any observation outside the range: $\big[ Q_1 - k (Q_3 - Q_1 ) , Q_3 + k (Q_3 - Q_1 ) \big]$ for some constant $k$. Other approaches are distance-based[10][11] and density-based,[12] and all of them frequently use the distance to the k-nearest neighbors to label observations as outliers or non-outliers. ## Working with outliers The choice of how to deal with an outlier should depend on the cause. 
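Before the individual strategies below, here is a minimal sketch (an addition, not part of the article) of the interquartile-range flagging rule just described, with the common choice $k=1.5$ and the oven example from the introduction as data:

```python
import numpy as np

def iqr_outliers(x, k=1.5):
    """Flag points outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return (x < lo) | (x > hi)

# Room temperatures in degrees Celsius, plus the oven at 175 C.
data = np.array([22.1, 23.4, 21.8, 24.0, 22.9, 23.1, 21.5, 175.0])
print(data[iqr_outliers(data)])   # [175.]
```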
### Retention Even when a normal distribution model is appropriate to the data being analyzed, outliers are expected for large sample sizes and should not automatically be discarded if that is the case. The application should use a classification algorithm that is robust to outliers to model data with naturally occurring outlier points. ### Exclusion Deletion of outlier data is a controversial practice frowned on by many scientists and science instructors; while mathematical criteria provide an objective and quantitative method for data rejection, they do not make the practice more scientifically or methodologically sound, especially in small sets or where a normal distribution cannot be assumed. Rejection of outliers is more acceptable in areas of practice where the underlying model of the process being measured and the usual distribution of measurement error are confidently known. An outlier resulting from an instrument reading error may be excluded but it is desirable that the reading is at least verified. In regression problems, an alternative approach may be to only exclude points which exhibit a large degree of influence on the parameters, using a measure such as Cook's distance.[13] If a data point (or points) is excluded from the data analysis, this should be clearly stated on any subsequent report. ### Non-normal distributions The possibility should be considered that the underlying distribution of the data is not approximately normal, having "fat tails". For instance, when sampling from a Cauchy distribution,[14] the sample variance increases with the sample size, the sample mean fails to converge as the sample size increases, and outliers are expected at far larger rates than for a normal distribution. ### Alternative models In cases where the cause of the outliers is known, it may be possible to incorporate this effect into the model structure, for example by using a hierarchical Bayes model or a mixture model.[15][16] ## References 1. Barnett, V. and Lewis, T.: 1994, Outliers in Statistical Data. John Wiley & Sons., 3rd edition. 2. Grubbs, F. E.: 1969, Procedures for detecting outlying observations in samples. Technometrics 11, 1–21. 3. Benjamin Peirce, "Criterion for the Rejection of Doubtful Observations", Astronomical Journal II 45 (1852) and Errata to the original paper. 4. Peirce, Benjamin (May 1877–1878). "On Peirce's criterion". 13: 348–351. doi:10.2307/25138498. JSTOR 25138498. 5.  . NOAA PDF Eprint (goes to Report p. 200, PDF's p. 215). 6. Peirce, Charles Sanders (1982 [1986 copyright]). "On the Theory of Errors of Observation [Appendix 21, according to the editorial note on page 515]". In Kloesel, Christian J. W., et alia. Writings of Charles S. Peirce: A Chronological Edition. Volume 3, 1872-1878. Bloomington, Indiana: Indiana University Press. pp. 140–160. ISBN 0-253-37201-1. 7. Knorr, E. M. and Ng, R. T.: 1998, Algorithms for Mining Distance-Based Outliers in Large Datasets. In: Proceedings of the VLDB Conference. New York, USA, pp. 392–403 8. Ramaswamy, S., Rastogi, R., and Shim, K.: 2000, ‘Efficient Algorithms for Mining Outliers from Large Data Sets’. In: Proceedings of the ACM SIGMOD Conference on Management of Data. Dallas, TX, pp.427–438. 9. Markus Breunig and Hans-Peter Kriegel and Raymond T. Ng and Jörg Sander: 2000, LOF: Identifying Density-Based Local Outliers. In: Proceedings of the ACM SIGMOD Conference. pp. 93-104 10. Cook, R. Dennis (Feb 1977). "Detection of Influential Observations in Linear Regression". 
Technometrics (American Statistical Association) 19 (1): 15–18. 11. Roberts, S. and Tarassenko, L.: 1995, A probabilistic resource allocating network for novelty detection. Neural Computation 6, 270–284. 12. Bishop, C. M. (August 1994). "Novelty detection and Neural Network validation". Proceedings of the IEE Conference on Vision, Image and Signal Processing 141 (4): 217–222. doi:10.1049/ip-vis:19941330 • ISO 16269-4, Statistical interpretation of data — Part 4: Detection and treatment of outliers • Strutz, Tilo (2010). Data Fitting and Uncertainty - A practical introduction to weighted least squares and beyond. Vieweg+Teubner. ISBN 978-3-8348-1022-9.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 6, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8893107175827026, "perplexity_flag": "middle"}
http://regularize.wordpress.com/2012/08/
# regularize Trying to keep track of what I stumble upon Monthly Archive August 30, 2012 ## Open position in Optimization at my math department Posted by Dirk under Math | Tags: job advertisement, optimization | It’s out! Our department has a vacant position for optimization to fill! In particular we are looking for somebody working in continuous (nonlinear) optimization. Well, I know there have been a number of open positions with a similar scope recently, but I also know there are plenty of excellent people working in this field. In addition to the official advertisement (which can be found here (from the website of  TU Braunschweig) or here (from academics.de)), here is some further advertisement: The math department here is a medium sized department. It covers quite broad range of mathematics: • Numerical Linear Algebra (Fassbender, Bollhöfer) • PDEs (Sonar, Hempel) • Modelling (Langemann) • Stochastics (Kreiss, Lindner, Aurzada) • Applied Analysis/Mathematical Physics (Bach, myself) • Algebra and Discrete Mathematics (Eick, Löwen, Opolka) and, of course, Optimization (Zimmermann) – in fact, I usually find some expert around for all the questions I have which are a bit outside my field. All groups are active and (as far as I can see) working together smoothly. The department is located in the Carl-Friedrich Gauss Faculty which is also the home of the departments for Computer Science, Business Administration and Social Sciences. At the least in Computer Science and Business Administration there are some mathematically oriented groups, e.g. • the Algorithms Group by Sandor Fekete (a genuine optimizer) • the Institute for Theoretical Computer Science by Jiri Adamek • the Institute for Scientific Computing by Hermann Matthies (a genuine applied mathematician) • the group Business Information Systems by Dirk Mattfeld (doing partly operations research) and there are several groups with some mathematical background and interesting fields of applications (computer graphics, robotics,…). Moreover, the TU has a lot of engineering institutes with strong background in mathematics and cool applications. In addition to a lively and interesting research environment, the university treats its staff well (as far as I can see) and administrative burden or failures are not harming too much (in fact less then at other places, I’ve heard)! In case you have any questions concerning the advertisement, feel free to ask (in addition to the head of the search committee, Jens-Peter Kreiss) me. Deadline for application is October 14th 2012. August 25, 2012 ## ISMP over – non-convex and non-smooth minimization, l^1 and l^p Posted by Dirk under Conference, Math, Optimization, Signal and image processing, Sparsity | Tags: Basis pursuit denoising, conference, ismp, non-convex optimization, sparsity | ISMP is over now and I’m already home. I do not have many things to report on from the last day. This is not due the lower quality of the talks but due to the fact that I was a little bit exhausted, as usual at the end of a five-day conference. However, I collect a few things for the record: • In the morning I visited the semi-planary by Xiaojun Chenon non-convex and non-smooth minimization with smoothing methods. Not surprisingly, she treated the problem $\displaystyle \min_x f(x) + \|x\|_p^p$ with convex and smooth ${f:{\mathbb R}^n\rightarrow{\mathbb R}}$ and ${0<p<1}$. 
She proposed and analyzed smoothing methods, that is, to smooth the problem a bit to obtain a Lipschitz-continuous objective function ${\phi_\epsilon}$, minimizing this and then gradually decreasing ${\epsilon}$. This works, as she showed. If I remember correctly, she also treated “iteratively reweighted least squares” as I described in my previous post. Unfortunately, she did not include the generalized forward-backward methods based on ${\text{prox}}$-functions for non-convex functions. Kristian and I pursued this approach in our paper Minimization of non-smooth, non-convex functionals by iterative thresholding and some special features of our analysis include:

• A condition which excludes some (but not all) local minimizers from being global.
• An algorithm which avoids these non-global minimizers by carefully adjusting the steplength of the method.
• A result that the number of local minimizers is still finite, even if the problem is posed in ${\ell^2({\mathbb N})}$ and not in ${{\mathbb R}^n}$.

Most of our results hold true if the ${p}$-quasi-norm is replaced by functions of the form

$\displaystyle \sum_n \phi_n(|x_n|)$

with special non-convex ${\phi}$, namely fulfilling a list of assumptions like

• ${\phi'(x) \rightarrow \infty}$ for ${x\rightarrow 0}$ (infinite slope at ${0}$) and ${\phi(x)\rightarrow\infty}$ for ${x\rightarrow\infty}$ (mild coercivity),
• ${\phi'}$ strictly convex on ${]0,\infty[}$ and ${\phi'(x)/x\rightarrow 0}$ for ${x\rightarrow\infty}$,
• for each ${b>0}$ there is ${a>0}$ such that for ${x<b}$ it holds that ${\phi(x)>ax^2}$, and
• local integrability of some section of ${\partial\phi'(x) x}$.

As one easily sees, ${p}$-quasi-norms fulfill the assumptions and some other interesting functions as well (e.g. some with very steep slope at ${0}$ like ${x\mapsto \log(x^{1/3}+1)}$).

• Jorge Nocedal gave a talk on second-order methods for non-smooth problems and his main example was a functional like

$\displaystyle \min_x f(x) + \|x\|_1$

with a convex and smooth ${f}$, but different from Xiaojun Chen, he only considered the ${1}$-norm. His talk is among the best plenary talks I have ever attended and it was a great pleasure to listen to him. He carefully explained things and put them in perspective. In the cases he skipped slides, he made me feel that I either did not miss an important thing, or understood them even though he didn't show them. He argued that it is not necessarily more expensive to use second-order information in contrast to first-order methods. Indeed, the ${1}$-norm can be used to reduce the number of degrees of freedom for a second-order step. What was pretty interesting is that he advocated semismooth Newton methods for this problem. Roland and I pursued this approach some time ago in our paper A Semismooth Newton Method for Tikhonov Functionals with Sparsity Constraints and, if I remember correctly (my notes are not complete at this point), his family of methods included our ssn-method. The method Roland and I proposed worked amazingly well in the cases in which it converged, but the method suffered from non-global convergence. We had some preliminary ideas for globalization, which we could not tune enough to retain the speed of the method, and abandoned the topic. Now that the topic will most probably be revived by the community, I am looking forward to fresh ideas here.
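For readers who want to experiment, here is a minimal forward-backward (iterative soft-thresholding) sketch for the convex model $\min_x \tfrac12\|Ax-b\|_2^2+\lambda\|x\|_1$. This only illustrates the prox/thresholding idea mentioned above; it is neither the non-convex algorithm from the paper nor the semismooth Newton method, and the matrix and sparse vector below are arbitrary test data.

```python
import numpy as np

def soft_threshold(x, t):
    # prox of t*||.||_1: componentwise shrinkage towards zero
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, n_iter=1000):
    """Forward-backward splitting for min_x 0.5*||Ax-b||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                  # forward (gradient) step on the smooth part
        x = soft_threshold(x - step * grad, step * lam)   # backward (prox) step
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))
x_true = np.zeros(200)
x_true[[3, 77, 150]] = [1.0, -2.0, 0.5]
b = A @ x_true

x_hat = ista(A, b, lam=0.05)
# The largest entries of the iterate typically sit on the true support {3, 77, 150}.
print(np.sort(np.argsort(np.abs(x_hat))[-3:]))
```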
August 23, 2012 ## ISMP – inverse problems with uniform noise and TV does not preserve edges Posted by Dirk under Conference, Math, Regularization, Signal and image processing | Tags: conference, ill-posed problems, image processing, ismp, parameter choice, regularization, tikhonov | [5] Comments Today there are several things I could blog on. The first is the planary by Rich Baraniuk on Compressed Sensing. However, I don’t think that I could reflect the content in a way which would be helpful for a potential reader. Just for the record: If you have the chance to visit one of Rich’s talk: Do it! The second thing is the talk by Bernd Hofmann on source conditions, smoothness and variational inequalities and their use in regularization of inverse problems. However, this would be too technical for now and I just did not take enough notes to write a meaningful post. As a third thing I have the talk by Christian Clason on inverse problems with uniformly distributed noise. He argued that for uniform noise it is much better to use an ${L^\infty}$ discrepancy term instead of the usual ${L^2}$-one. He presented a path-following semismooth Newton method to solve the problem $\displaystyle \min_x \frac{1}{p}\|Kx-y^\delta\|_\infty^p + \frac{\alpha}{2}\|x\|_2^2$ and showed examples with different kinds of noise. Indeed the examples showed that ${L^\infty}$ works much better than ${L^2}$ here. But in fact it works even better, if the noise is not uniformly distributed but “impulsive” i.e. it attains bounds ${\pm\delta}$ almost everywhere. It seems to me that uniform noise would need a slightly different penalty but I don’t know which one – probably you do? Moreover, Christian presented the balancing principle to choose the regularization parameter (without knowledge about the noise level) and this was the first time I really got what it’s about. What one does here is, to choose ${\alpha}$ such that (for some ${\sigma>0}$ which only depends on ${K}$, but not on the noise) $\displaystyle \sigma\|Kx_\alpha^\delta-y^\delta\|_\infty = \frac{\alpha}{2}\|x_\alpha^\delta\|_2^2.$ The rational behind this is, that the left hand side is monotonically non-decreasing in ${\alpha}$, while the right hand side is monotonically non-increasing. Hence, there should be some ${\alpha}$ “in the middle” which make both somewhat equally large. Of course, we do neither want to “over-regularize” (which would usually “smooth too much”) nor to “under-regularize” (which would not eliminate noise). Hence, balancing seems to be a valid choice. From a practical point of view the balancing is also nice because one can use the fixed-point iteration $\displaystyle \alpha^{n+1} = 2\sigma\frac{\|Kx_{\alpha^n}^\delta - y^\delta\|_\infty}{\|x_{\alpha_n}^\delta\|_2^2}$ which converges in a few number of iterations. Then there was the talk by Esther Klann, but unfortunately, I was late so only heard the last half… Last but not least we have the talk by Christiane Pöschl. If you are interested in Total-Variation-Denoising (TV denoising), then you probably have heard many times that “TV denoising preserves edges” (have a look at the Wikipedia page – it claims this twice). What Christiane showed (in a work with Vicent Caselles and M. Novaga) that this claim is not true in general but only for very special cases. In case of characteristic functions, the only functions for which the TV minimizer has sharp edges are these so-called calibrated sets, introduced by Caselles et el. 
Building on earlier works by Caselles and co-workers she calculated exact minimizers for TV denoising in the case that the image consists of characteristic functions of two convex sets or of a single star shaped domain, that is, for a given set $B$ she calculated the solution of $\displaystyle \min_u\int (u - \chi_B)^2dx + \lambda \int|Du|.$ This is not is as easy as it may sound. Even for the minimizer for a single convex set one has to make some effort. She presented a nice connection of the shape of the obtained level-sets with the morphological operators of closing and opening. With the help of this link she derived a methodology to obtain the exact TV denoising minimizer for all parameters. I do not have the images right now but be assured that most of the time, the minimizers do not have sharp edges all over the place. Even for simple geometries (like two rectangles touching in a corner) strange things happen and only very few sharp edges appear. I’ll keep you posted in case the paper comes out (or appears as a preprint). Christiane has some nice images which make this much more clear: For two circles edges are preserved if they are far enough away from each other. If they are close, the area “in between” them is filled and, moreover, obey this fuzzy boundary. I remember myself seeing effects like this in the output of TV-solvers and thinking “well, it seems that the algorithm is either not good or not converged yet – TV should output sharp edges!”. For a star-shaped shape (well, actually a star) the output looks like this. The corners are not only rounded but also blurred and this is true both for the “outer” corners and the “inner” corners. So, if you have any TV-minimizing code, go ahead and check if your code actually does the right things on images like this! Moreover, I would love to see similar results for more complicated extensions of TV like Total Generalized Variation, I treated here. August 22, 2012 ## ISMP – alternatingly projecting on non-convex sets and demixing by convex optimization Posted by Dirk under Conference, Math, Optimization, Sparsity | Tags: Basis pursuit denoising, compressed sensing, conference, ismp, ivanov, sparsity | Today I report on two things I came across here at ISMP: • The first is a talk by Russell Luke on Constraint qualifications for nonconvex feasibility problems. Luke treated the NP-hard problem of sparsest solutions of linear systems. In fact he did not tackle this problem but the problem to find an ${s}$-sparse solution of an ${m\times n}$ system of equations. He formulated this as a feasibility-problem (well, Heinz Bauschke was a collaborator) as follows: With the usual malpractice let us denote by ${\|x\|_0}$ the number of non-zero entries of ${x\in{\mathbb R}^n}$. Then the problem of finding an ${s}$-sparse solution to ${Ax=b}$ is: $\displaystyle \text{Find}\ x\ \text{in}\ \{\|x\|_0\leq s\}\cap\{Ax=b\}.$ In other words: find a feasible point, i.e. a point which lies in the intersection of the two sets. Well, most often feasibility problems involve convex sets but here, the first one given by this “${0}$-norm” is definitely not convex. One of the simplest algorithms for the convex feasibility problem is to alternatingly project onto both sets. This algorithm dates back to von Neumann and has been analyzed in great detail. To make this method work for non-convex sets one only needs to know how to project onto both sets. For the case of the equality constraint ${Ax=b}$ one can use numerical linear algebra to obtain the projection. 
The non-convex constraint on the number of non-zero entries is in fact even easier: For ${x\in{\mathbb R}^n}$ the projection onto ${\{\|x\|_0\leq s\}}$ consists of just keeping the ${s}$ largest entries of ${x}$ while setting the others to zero (known as the "best ${s}$-term approximation"). However, the theory breaks down in the case of non-convex sets. Russell treated the problem in several papers (have a look at his publication page) and in the talk he focused on the problem of constraint qualification, i.e. what kind of regularity has to be imposed on the intersection of the two sets. He could show that (local) linear convergence of the algorithm (which is observed numerically) can indeed be justified theoretically. One point which is still open is the phenomenon that the method seems to be convergent regardless of the initialization and (even more surprisingly) that the limit point seems to be independent of the starting point (and also seems to be robust with respect to overestimating the sparsity ${s}$). I wondered if his results are robust with respect to inexact projections. For larger problems the projection onto the equality constraint ${Ax=b}$ is computationally expensive. For example it would be interesting to see what happens if one approximates the projection with a truncated CG iteration as Andreas, Marc and I did in our paper on subgradient methods for Basis Pursuit.

• Joel Tropp reported on his paper Sharp recovery bounds for convex deconvolution, with applications (joint work with Michael McCoy). However, in his title he used demixing instead of deconvolution (which, I think, is more appropriate and leads to less confusion). With "demixing" they mean the following: Suppose you have two signals ${x_0}$ and ${y_0}$ of which you observe only the superposition of ${x_0}$ and a unitarily transformed ${y_0}$, i.e. for a unitary matrix ${U}$ you observe

$\displaystyle z_0 = x_0 + Uy_0.$

Of course, without further assumptions there is no way to recover ${x_0}$ and ${y_0}$ from the knowledge of ${z_0}$ and ${U}$. As one motivation he used the assumption that both ${x_0}$ and ${y_0}$ are sparse. After the big bang of compressed sensing nobody is surprised that one turns to convex optimization with ${\ell^1}$-norms in the following manner:

$\displaystyle \min_{x,y} \|x\|_1 + \lambda\|y\|_1 \ \text{such that}\ x + Uy = z_0. \ \ \ \ \ (1)$

This looks a lot like sparse approximation: Eliminating ${x}$ one obtains the unconstrained problem

\begin{equation*} \min_y \|z_0-Uy\|_1 + \lambda \|y\|_1. \end{equation*}

Phrased differently, this problem aims at finding an approximate sparse solution of ${Uy=z_0}$ such that the residual (one could also say "noise") ${z_0-Uy=x}$ is also sparse. This differs from the common Basis Pursuit Denoising (BPDN) in the penalty used for the residual (which in BPDN is the squared ${2}$-norm). This is due to the fact that in BPDN one usually assumes Gaussian noise, which naturally leads to the squared ${2}$-norm. Well, one man's noise is the other man's signal, as we see here. Tropp and McCoy obtained very sharp thresholds on the sparsity of ${x_0}$ and ${y_0}$ which allow for exact recovery of both of them by solving (1). One thing which makes their analysis simpler is the following reformulation: They treated the related problem

\begin{equation*} \min_{x,y} \|x\|_1 \ \text{such that}\ \|y\|_1\leq\alpha,\ x+Uy=z_0 \end{equation*}

(which I would call the Ivanov version of the Tikhonov problem (1)).
This allows for precise exploitation of prior knowledge by assuming that the number ${\alpha_0 = \|y_0\|_1}$ is known. First I wondered if this reformulation was responsible for their unusually sharp results (sharper than the results for exact recovery by BPDN), but I think it's not. I think this is due to the fact that they have this strong assumption on the "residual", namely that it is sparse. This can be formulated with the help of the ${1}$-norm (which is "non-smooth") in contrast to the smooth ${2}$-norm, which is what one gets as prior for Gaussian noise. Moreover, McCoy and Tropp generalized their result to the case in which the structure of ${x_0}$ and ${y_0}$ is formulated by two functionals ${f}$ and ${g}$, respectively. Assuming a kind of non-smoothness of ${f}$ and ${g}$ they obtain the same kind of results, and especially matrix decomposition problems are covered.

August 21, 2012

## ISMP, second day

Posted by Dirk under Conference, Math, Optimization, Sparsity | Tags: conference, ismp, Optimal control, parameter choice |

The second day of ISMP started (for me) with the session I organized and chaired. The first talk was by Michael Goldman on Continuous Primal-Dual Methods in Image Processing. He considered the continuous Arrow-Hurwicz method for saddle point problems

$\displaystyle \min_{u}\max_{\xi} K(u,\xi)$

with ${K}$ convex in the first and concave in the second variable. The continuous Arrow-Hurwicz method consists of solving

$\displaystyle \begin{array}{rcl} \partial_t u(t) &=& -\nabla_u K(u(t),\xi(t))\\ \partial_t \xi(t) &=& \nabla_\xi K(u(t),\xi(t)). \end{array}$

His talk revolved around the case that ${K}$ comes from a functional which contains the total variation, namely he considered

$\displaystyle K(u,\xi) = -\int_\Omega u\text{div}(\xi) + G(u)$

with the additional constraints ${\xi\in C^1_C(\Omega,{\mathbb R}^2)}$ and ${|\xi|\leq 1}$. For the case of ${G(u) = \lambda\|u-f\|^2/2}$ he presented a nice analysis of the problem including convergence of the method to a solution of the primal problem and some a-posteriori estimates. This reminded me of Showalter's method for the regularization of ill-posed problems. The Arrow-Hurwicz method looks like a regularized version of Showalter's method and hence, early stopping does not seem to be necessary for regularization. The related paper is Continuous Primal-Dual Methods for Image Processing.

The second talk was given by Elias Helou and was on Incremental Algorithms for Convex Non-Smooth Optimization with Applications in Image Reconstructions. He presented his work on a very general framework for problems of the class

$\displaystyle \min_{x\in X} f(x)$

with a convex function ${f}$ and a convex set ${X}$. Basically, he abstracted the properties of the projected subgradient method. This consists of taking subgradient descent steps for ${f}$ followed by projection onto ${X}$ iteratively: With a subgradient ${g^n\in\partial f(x^n)}$ this reads as

$\displaystyle x^{n+1} = P_X(x^n -\alpha_n g^n).$

He extracted the conditions one needs from the subgradient descent step and from the projection step and formulated an algorithm which consists of successive application of an "optimality operator" ${\mathcal{O}_f}$ (replacing the subgradient step) and a feasibility operator ${\mathcal{F}_X}$ (replacing the projection step). The algorithm then reads as

$\displaystyle \begin{array}{rcl} x^{n+1/2} &=& \mathcal{O}_f(x^n,\alpha_n)\\ x^{n+1} &=& \mathcal{F}_X(x^{n+1/2}) \end{array}$

and he showed convergence under the extracted conditions.
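To make the special case concrete, here is a plain sketch of my own of the projected subgradient iteration that this framework abstracts (with the classical diminishing step sizes ${\alpha_n = 1/n}$), applied to a small toy problem:

```python
import numpy as np

def projected_subgradient(subgrad, project, x0, steps=2000):
    # x^{n+1} = P_X(x^n - alpha_n g^n) with alpha_n = 1/n
    x = project(x0)
    for n in range(1, steps + 1):
        x = project(x - (1.0 / n) * subgrad(x))
    return x

# toy problem: minimize f(x) = ||x - c||_1 over the nonnegative orthant X = {x >= 0};
# the solution is the componentwise positive part of c, here (0, 2, 0.5)
c = np.array([-1.0, 2.0, 0.5])
x = projected_subgradient(lambda x: np.sign(x - c),        # a subgradient of f
                          lambda x: np.maximum(x, 0.0),    # projection onto X
                          np.zeros(3))
print(x)
```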
The related paper is Incremental Subgradients for Constrained Convex Optimization: a Unified Framework and New Methods.

The third talk was by Jerome Fehrenbach on Stripe removal in images, applications in microscopy. He considered the problem of a very specific noise which appears in the form of stripes (and arises, for example, in "single plane illumination microscopy"). In fact he considered a slightly more general case and the model he proposed was as follows: The observed image is

$\displaystyle u_{\text{OBS}} = u + n,$

i.e. the usual sum of the true image ${u}$ and noise ${n}$. However, for the noise he assumed that it is given by

$\displaystyle n = \sum_{j=1}^m \psi_j*\lambda_j,$

i.e. it is a sum of different convolutions. The ${\psi_j}$ are kind of shape functions which describe the "pattern of the noise" and the ${\lambda_j}$ are samples of noise processes, following specific distributions (could be white noise realizations, impulsive noise or something else). He then formulated a variational method to identify the variables ${\lambda_j}$ which reads as

$\displaystyle \min \|\nabla(u_{\text{OBS}} - \sum_{j=1}^m \psi_j*\lambda_j)\|_1 + \sum_j \phi_j(\lambda_j).$

Basically, this is the usual variational approach to image denoising, but now the optimization variable is the noise rather than the image. This is due to the fact that the noise has a specific complicated structure and the usual formulation with ${u = u_{\text{OBS}} +n}$ is not feasible. He used the primal-dual algorithm by Chambolle and Pock for this problem and showed that the method works well on real-world problems.

Another theme which caught my attention here is "optimization with variational inequalities as constraints". At first glance that sounds pretty awkward. Variational inequalities can be quite complicated things, so why on earth would somebody consider these things as side conditions in optimization problems? In fact there are good reasons to do so. One reason is if you have to deal with bi-level optimization problems. Consider an optimization problem

$\displaystyle \min_{x\in C} F(x,p) \ \ \ \ \ (1)$

with convex ${C}$ and ${F(\cdot,p)}$ (omitting regularity conditions which could be necessary to impose) depending on a parameter ${p}$. Now consider the case that you want to choose the parameter ${p}$ in an optimal way, i.e. such that it solves another optimization problem. This could look like

$\displaystyle \min_p G(x)\quad\text{s.t.}\ x\ \text{solves (1)}. \ \ \ \ \ (2)$

Now you have an optimization problem as a constraint. Now we use the optimality condition for the problem (1): For differentiable ${F}$, ${x^*}$ solves (1) if and only if

$\displaystyle \forall y\in C:\ \nabla_x F(x^*(p),p)(y-x^*(p))\geq 0.$

In other words: We can reformulate (2) as

$\displaystyle \min_p G(x)\quad\text{s.t.}\ \forall y\in C:\ \nabla_x F(x^*(p),p)(y-x^*(p))\geq 0. \ \ \ \ \ (3)$

And there it is, our optimization problem with a variational inequality as constraint. Here at ISMP there are entire sessions devoted to this, see here and here.

August 20, 2012

## ISMP first day

Posted by Dirk under Conference, Math, Signal and image processing, Sparsity | Tags: Basis pursuit denoising, conference, Inverse problems, ismp, regularization, sparsity |

The scientific program at ISMP started today and I planned to write a small personal summary of each day. However, it is a very intense meeting. Lots of excellent talks, lots of people to meet and little spare time. So I'm afraid that I have to deviate from my plan a little bit.
Instead of a summary of every day I just pick out a few events. I remark that these picks do not reflect quality, significance or something like this in any way. I just pick things for which I have something to record for personal reasons.

My day started after the first plenary with the session Testing environments for machine learning and compressed sensing in which my own talk was located. The session started with the talk by Michael Friedlander on the SPOT toolbox. Haven't heard of SPOT yet? Take a look! In a nutshell it's a toolbox which turns MATLAB into "OPLAB", i.e. it allows one to treat abstract linear operators like matrices. By the way, the code is on github.

The second talk was by Katya Scheinberg (who is giving a semi-plenary talk on derivative-free optimization at the moment…). She talked about speeding up FISTA by cleverly adjusting step-sizes and over-relaxation parameters and generalizing these ideas to other methods like alternating direction methods. Notably, she used the "SPEAR test instances" from our project homepage! (And credited them as "surprisingly hard sparsity problems".)

My own talk was the third and last one in that session. I talked about the issue of constructing test instances for Basis Pursuit Denoising. I argued that the naive approach (which takes a matrix ${A}$, a right hand side ${b}$ and a parameter ${\lambda}$ and lets some great solver run for a while to obtain a solution ${x^*}$) may suffer from "trusted method bias". I proposed to use "reverse instance construction" which is: First choose ${A}$, ${\lambda}$ and the solution ${x^*}$ and then construct the right hand side ${b}$ (I blogged on this before here; a small numerical sketch of the idea appears further below).

Last but not least, I'd like to mention the talk by Thomas Pock: He talked about parameter selection in variational models (think of the regularization parameter in Tikhonov, for example). In a paper with Karl Kunisch titled A bilevel optimization approach for parameter learning in variational models they formulated this as a bi-level optimization problem. An approach which seemed to have been overdue! Although they treat somewhat simple inverse problems (well, denoising), albeit with not-so-easy regularizers, it is a promising first step in this direction.

August 20, 2012

## ISMP in Berlin – Reception

Posted by Dirk under Conference, Math, Optimization | Tags: conference, ismp |

Today I arrived at ISMP in Berlin. This seems to be the largest conference on optimization and is hosted by the Mathematical Optimization Society. (As a side note: The society recently changed its name from MPS (Mathematical Programming Society) to MOS. Probably the conference will be called ISMO in a few years…)

The reception today was special in comparison to conference receptions I have attended so far. First, it was held in the Konzerthaus which is a pretty fancy neo-classical building. Well, I've been to equally fancy buildings at conference receptions at GAMM or AIP conferences already, but the distinguishing feature this evening was the program. As usual it featured welcome notes by important people (notably, the one by the official government representative was accurate and entertaining!), prizes and music. The music was great, the host (G.M. Ziegler) did a great job and the ceremony felt like a show rather than an opening reception. From the prizes I'd like to mention two:

• The Beale-Orchard-Hayes prize was given to Boyd and Grant for their great software CVX.
• The Lagrange prize was given to Candes and Recht for their paper on matrix completion by convex programming.

After this reception I am looking even more forward to the rest of this conference. As a side note: Something seems to be wrong with me and optimization conferences. It seems like every time I visit such a conference, I lose my cell phone. Happened to me at SIOPT 2011 in Darmstadt and happened to me again today…
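Coming back to the "reverse instance construction" from the first-day post above: the following is a rough sketch of my own of the idea (not the construction from the paper behind the talk, which is more careful). It just enforces the optimality condition ${A^T(b-Ax^*)\in\lambda\,\partial\|x^*\|_1}$ for ${\min_x \tfrac12\|Ax-b\|_2^2+\lambda\|x\|_1}$ and may fail for unlucky choices, in which case one has to retry.

```python
import numpy as np

def reverse_bpdn_instance(A, x_star, lam):
    S = np.flatnonzero(x_star)                              # support of the desired solution
    # least-norm w with A[:,S]^T w = lam * sign(x_star_S)
    w, *_ = np.linalg.lstsq(A[:, S].T, lam * np.sign(x_star[S]), rcond=None)
    off = np.setdiff1d(np.arange(A.shape[1]), S)
    if np.max(np.abs(A[:, off].T @ w), initial=0.0) > lam + 1e-10:
        raise ValueError("certificate violated off the support - retry with other x_star/lam")
    return A @ x_star + w                                   # the right hand side b

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 200))
x_star = np.zeros(200)
x_star[[3, 50, 120]] = [1.0, -2.0, 0.5]
b = reverse_bpdn_instance(A, x_star, lam=0.1)
```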
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 138, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9510558843612671, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/97117/functions-with-zero-derivative-on-manifolds-are-constant
# Functions with zero derivative on manifolds are constant. This seems obvious, but I'm having trouble carrying through the details. Suppose there is a smooth function $f$ with zero derivative on a manifold $M$ with $n$ connected components. Why is $f$ constant on each connected component? Detailed answers are very much appreciated. Thanks! - 4 What exactly do you mean by derivative here? – Mariano Suárez-Alvarez♦ Jan 7 '12 at 5:58 1 I suggest assuming $n=1$. Do you know how to prove it when $M=\mathbb R^k$? – Jonas Meyer Jan 7 '12 at 6:00 @MarianoSuárez-Alvarez I guess that's part of my question. I only know how to prove this when $M=\mathbb{R}$, and I don't know enough differential geometry to define derivatives on manifolds. But wikipedia claims this is true. – Potato Jan 7 '12 at 6:15 2 But then you should probably pick a textbook dealing with the subject and learn that first! It is extraordinarily understandable that you be having problems with proving this if you do not know what derivatives are in this context. – Mariano Suárez-Alvarez♦ Jan 7 '12 at 6:17 1 (The only way to detailedly answer this starting from what a derivative is to the claim you want to prove is to more or less write out an exposition of what a manifold is and what a smooth function on it is: this is not the best way to use this site) – Mariano Suárez-Alvarez♦ Jan 7 '12 at 6:19 show 2 more comments ## 1 Answer Let $M$ be an $m$-manifold. We'll concentrate on a connected component of $M$, say $U$. Pick $p\in U$, let $V_p$ be a neighborhood of $p$ in $U$ admitting a local Euclidean chart, and let $\phi: D\subset\mathbb{R}^m\rightarrow V_p$ be a coordinate chart. If $f: M\rightarrow \mathbb{R}$ is a differentiable function on $M$, this really means that $f\circ \phi: \mathbb{R}^m\rightarrow \mathbb{R}$ is a differentiable function (in fact, this is the definition of a differentiable function on $M$). Now prove that $f\circ \phi$ is constant using standard calculus. So $f$ is constant on $V_p$. From the fact that $U$ is connected, conclude by standard topological arguments that $f$ is constant on $U$. I would suggest picking up a book on differential geometry and topology. - This is crystal clear. Thank you! – Potato Jan 7 '12 at 7:09 1 One wonders how you followed this along, being that it does not contain a definition of the derivative you mention in your question (in fact, it does not even mention the hypothesis! :D ) – Mariano Suárez-Alvarez♦ Jan 7 '12 at 9:41 Is there something unclear about the differential of a map from $\mathbb{R}^m$ to $\mathbb{R}^n$? Presumably he does know how to differentiate multivariable maps. If not, then he should pick up a book on elementary calculus, before looking to geometry (my answer does make a reference to calculus, so he knows where to look :-D). – William Jan 7 '12 at 10:02 @MarianoSuárez-Alvarez What it means to be a differentiable function on a manifold is defined in the answer. – Potato Jan 7 '12 at 22:51 @WNY Do not worry. I am familiar with multivariable analysis, just not so much with differential geometry. – Potato Jan 7 '12 at 22:52
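To spell out the two steps of the accepted answer a bit more explicitly (standard material, written in my own words):

```latex
Let $p \in U$, where $U$ is a connected component of $M$, and choose a chart
$\phi : D \to V_p$ with $D \subset \mathbb{R}^m$ open and convex. Since $df = 0$,
the map $g = f \circ \phi$ has vanishing differential on $D$, so for $u, v \in D$
the mean value theorem applied to $t \mapsto g(u + t(v-u))$ gives $g(u) = g(v)$;
hence $f$ is constant on $V_p$.

Now set $c = f(p)$ and $S = \{ q \in U : f(q) = c \}$. By the argument above every
point of $U$ has a neighborhood on which $f$ is constant, so $S$ and
$U \setminus S$ are both open. As $U$ is connected and $p \in S$, we get $S = U$,
i.e. $f \equiv c$ on the whole component.
```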
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 28, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9327939748764038, "perplexity_flag": "head"}
http://crypto.stackexchange.com/questions/5254/explanation-of-the-decision-diffie-hellman-ddh-problem
# Explanation of the Decision Diffie-Hellman (DDH) problem

I'm extremely new to crypto, and very much inexperienced. Lately I've been reading about the Diffie-Hellman key-exchange methods, and specifically about the computational Diffie-Hellman assumption vs. the decisional Diffie-Hellman assumption. Specifically I'm referencing Dan Boneh's paper on the DDH problem. However, I'm having some trouble understanding the difference between CDH and DDH. What are the differences? Similarities?

## 1 Answer

The Computational Diffie-Hellman (CDH) problem is: Given some group $G$ and a group element $g$, and the elements $g^a$ and $g^b$, compute the value $g^{ab}$.

The Decisional Diffie-Hellman (DDH) problem is: Given some group $G$ and a group element $g$, and the elements $g^a$, $g^b$ and $g^c$, determine whether $g^c = g^{ab}$.

These are obviously related problems; the difference is that the CDH problem asks us to derive the DH shared secret, while the DDH problem just asks us to recognize it. In addition, the CDH problem appears to be potentially harder, in this sense: if we're given an oracle to solve the CDH problem, we can easily solve the DDH problem (simply by handing $g$, $g^a$ and $g^b$ to the oracle, having it compute the value $g^{ab}$, and comparing that value to $g^c$). In contrast, there's no known generic way to solve the CDH problem given a DDH oracle.

As for why the DDH problem comes up so often (and in cases where one would naively expect the CDH problem to be appropriate), well, many protocols do a DH-type computation, and then use the value $F(g^{ab})$ (for some function $F$); breaking the protocol may allow an attacker to recover the value $F(g^{ab})$. Now, recovering this value would allow us to solve the DDH problem (by comparing that value to $F(g^c)$); hence, breaking the protocol would allow us to solve the DDH problem; or equivalently, if the DDH problem is hard, the protocol is secure. However, if the function $F$ is one-way, recovering the value $F(g^{ab})$ doesn't give us the value $g^{ab}$, and so there is no reduction of the security of the protocol to the CDH problem.
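For concreteness, here is a toy numerical illustration of the reduction described in the answer (my own sketch; the group parameters are tiny made-up values, and the "CDH oracle" simply brute-forces a discrete logarithm, which only works because the group is so small):

```python
p, q, g = 467, 233, 4          # p = 2q + 1; g generates the subgroup of order q in Z_p^*

def dlog(h):
    # brute-force discrete log, standing in for the power a CDH oracle would need
    x = 1
    for a in range(q):
        if x == h:
            return a
        x = (x * g) % p
    raise ValueError("h is not in the subgroup")

def cdh_oracle(ga, gb):
    # returns g^(ab) from g^a and g^b
    return pow(gb, dlog(ga), p)

def ddh_decide(ga, gb, gc):
    # DDH from a CDH oracle: compute g^(ab) and compare with g^c
    return cdh_oracle(ga, gb) == gc

a, b, c = 57, 101, 99
print(ddh_decide(pow(g, a, p), pow(g, b, p), pow(g, a * b % q, p)))   # True: a genuine DH triple
print(ddh_decide(pow(g, a, p), pow(g, b, p), pow(g, c, p)))           # False: a random g^c
```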
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.905255138874054, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/63375/what-function-is-this/63384
# What function is this? I'm trying to find a function. And although it seems to be very simple at first I can't figure it out. Maybe I just need some sleep, and maybe someone could help me out. ````given an Integer x between 0 and 100: if x is between 0-10 then f(x)=0 if x is between 11-20 then f(x)=11 if x is between 21-30 then f(x)=21 if x is between 31-40 then f(x)=31 ... if x is between 91-100 f(x)=91 ```` I'm trying to find the simplest f possible. The best I could do is: ````f(x) = x / 10 * 10 ```` But it's not right. What function is f? EDIT I'm accepting @Didier's solution but I'm going with this one instead. $$f(x) = \begin{cases} 1+10\lfloor (x-1)/10 \rfloor & \mbox{if } x >10; \\ 0 & \mbox{if } x \leq 10 \end{cases}$$ - What language are you using? As I mentioned in comments to the answers, your problem becomes slightly easier if your environment treats `0` and `1` as Booleans... – J. M. Sep 10 '11 at 18:56 I'm on Javascript – eduardocereto Sep 10 '11 at 19:09 1 JS, huh? Well then: `(x < 11 ? 0 : (10*Math.floor((x-1)/10)+1))` ... – J. M. Sep 10 '11 at 19:23 – J. M. Sep 10 '11 at 19:29 ## 4 Answers Let $H$ denote the Heaviside step function, using the convention that $H(x)=1$ if $x\ge0$ and $H(x)=0$ if $x<0$. Then, for every $x$, $$f(x)=H(x-11)+10\,\sum\limits_{k=1}^9H(x-1-10\,k).$$ An equivalent formulation, based on Iverson bracket, is $$f(x)=[x\ge11]+10\,\sum\limits_{k=1}^9[x\ge1+10\,k].$$ Still equivalently, but less rigorously, $$f(x)=[x\ge11]+10\,[x\ge11]+10\,[x\ge21]+10\,[x\ge31]+\cdots+10\,[x\ge91].$$ - I have no idea of what you mean. But it looks like cheating to me. A magical function that get's rid of the different range problem. You sure there's no simpler function? – eduardocereto Sep 10 '11 at 18:46 If OP is working in a C-ish language, one could formally replace the unit step functions with Iverson brackets... – J. M. Sep 10 '11 at 18:48 2 @eduardo: What's cheating here? Your function is piecewise-defined, and unit step functions are used for representing piecewise-defined functions among other things... – J. M. Sep 10 '11 at 18:49 Sorry I was just checking. I'm not a pro mthematician like you guys. But I'm still trying to understand this function. The $\sum$ is puzzling me. I think I'll go with 2 separate functions, one for the first range and one for the second. – eduardocereto Sep 10 '11 at 18:54 @eduardo: The $\sum$ means you can use a `for` loop or `while` loop, as the case may be... – J. M. Sep 10 '11 at 18:58 show 2 more comments In C notation, ````(x > 10 ? 1 : 0)*((x-1)/10*10 +1) ```` - Yup, the inline conditional would be best, but I'd have done `(x <= 10 ? 0 : 1 + (x-1)/10*10)` or something... – J. M. Sep 10 '11 at 19:03 This is very useful, thanks for providing this, I'm accepting Didier's because it's more in line with my question about a single function to solve the case. Though your solution is better in pratical terms to me. – eduardocereto Sep 10 '11 at 19:10 $f(x) = 1+10\lfloor (x-1)/10 \rfloor$ works, except for the first range. Are you sure that $f(x)$ is not $1$ for $0 \le x \le 10$? - yes I'm sure. But thanks for the effort. – eduardocereto Sep 10 '11 at 18:39 The function would be $$f(x)=10\left\lfloor\frac{x-1}{10}\right\rfloor+1$$ where $\lfloor \;\;\;\rfloor$ denotes the floor function, except that the definition for $0\leq x\leq 10$ does not match the pattern of the other parts of the definition - if you defined $f(x)=1$ for $1\leq x\leq 10$ and $f(0)=-9$, then the above function would be correct for all inputs. 
- This is exactly what is driving me crazy. The first range doesn't match. But the question is correct. – eduardocereto Sep 10 '11 at 18:41 2 If you multiply $f(x)$ with $[x-11 \geq 0]$ where $[p]$ is an Iverson bracket, then his problem's solved. – J. M. Sep 10 '11 at 18:53 – eduardocereto Sep 10 '11 at 18:56
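For completeness, the accepted formula as a tiny Python function (my own rendering, assuming an integer x between 0 and 100 as in the question):

```python
def f(x):
    # 0 on 0..10, then 11 on 11..20, 21 on 21..30, ..., 91 on 91..100
    return 0 if x <= 10 else 1 + 10 * ((x - 1) // 10)

assert [f(x) for x in (0, 10, 11, 20, 21, 91, 100)] == [0, 0, 11, 11, 21, 91, 91]
```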
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9338437914848328, "perplexity_flag": "middle"}
http://stats.stackexchange.com/questions/118/why-square-the-difference-instead-of-taking-the-absolute-value-in-standard-devia/121
# Why square the difference instead of taking the absolute value in standard deviation? In the definition of standard deviation, why do we have to square the difference from the mean to get the mean (E) and take the square root back at the end? Can't we just simply take the absolute value of the difference instead and get the expected value (mean) of those, and wouldn't that also show the variation of the data? The number is going to be different from square method (the absolute-value method will be smaller), but it should still show the spread of data. Anybody know why we take this square approach as a standard? The definition of standard deviation: $\sigma = \sqrt{E\left[\left(X - \mu\right)^2\right]}.$ Can't we just take the absolute value instead and still be a good measurement? $\sigma = E\left[|X - \mu|\right]$ - 7 In a way, the measurement you proposed is widely used in case of error (model quality) analysis -- then it is called MAE, "mean absolute error". – mbq♦ Jul 19 '10 at 21:30 In accepting an answer it seems important to me that we pay attention to whether the answer is circular. The normal distribution is based on these measurements of variance from squared error terms, but that isn't in and of itself a justification for using (X-M)^2 over |X-M|. – Russell S. Pierce Jul 20 '10 at 7:59 1 Do you think the term standard means this is THE standard today ? Isn't it like asking why principal component are "principal" and not secondary ? – robin girard Jul 23 '10 at 21:44 My understanding of this question is that it could be shorter just be something like: what is the difference between the MAE and the RMSE ? otherwise it is difficult to deal with. – robin girard Jul 24 '10 at 6:08 – whuber♦ Nov 27 '10 at 21:53 show 2 more comments ## 16 Answers If the goal of the standard deviation is to summarise the spread of a symmetrical data set (i.e. in general how far each datum is from the mean), then we need a good method of defining how to measure that spread. The benefits of squaring include: • Squaring always gives a positive value, so the sum will not be zero. • Squaring emphasizes larger differences - a feature that turns out to be both good and bad (think of the effect outliers have). Squaring however does have a problem as a measure of spread and that is that the units are all squared, where as we'd might prefer the spread to be in the same units as the original data (think of squared pounds or squared dollars or squared apples). Hence the square root allows us to return to the original units. I suppose you could say that absolute difference assigns equal weight to the spread of data where as squaring emphasises the extremes. Technically though, as others have pointed out, squaring makes the algebra much easier to work with and offers properties that the absolute method does not (for example, the variance is equal to the expected value of the square of the distribution minus the square of the mean of the distribution) It's important to note however that there's no reason you couldn't take the absolute difference if that is your preference on how you wish to view 'spread' (sort of how some people see 5% as some magical thresh hold for p-values, when in fact it's situation dependent). Indeed, there are in fact several competing methods for measuring spread. 
My view is to use the squared values because I like to think of how it relates to the Pythagorean Theorem of Statistics: c = sqrt(a^2 + b^2) ...this also helps me remember that when working with independent random variables, variances add, standard deviations don't. But that's just my personal subjective preference. An much more indepth analysis can be read here. - 13 "Squaring always gives a positive value, so the sum will not be zero." and so does absolute values. – robin girard Jul 22 '10 at 9:54 10 @robin girard: That is correct, hence why I preceded that point with "The benefits of squaring include". I wasn't implying that anything about absolute values in that statement. I take your point though, I'll consider removing/rephrasing it if others feel it is unclear. – Tony Breyal Jul 22 '10 at 13:19 3 – Thylacoleo Aug 13 '10 at 5:15 Thank you for the link to that analysis – Jack Aidley Jan 23 at 14:03 The squared difference has nicer mathematical properties; it's continuously differentiable (nice when you want to minimize it), it's a sufficient statistic for the Gaussian distribution, and it's (a version of) the L2 norm which comes in handy for proving convergence and so on. The mean absolute deviation (the absolute value notation you suggest) is also used as a measure of dispersion, but it's not as "well-behaved" as the squared error. - said "it's continuously differentiable (nice when you want to minimize it)" do you mean that the absolute value is difficult to optimize ? – robin girard Jul 23 '10 at 21:40 6 @robin: while the absolute value function is continuous everywhere, its first derivative is not (at x=0). This makes analytical optimization more difficult. – Vince Jul 23 '10 at 23:59 1 Yeah, finding quantiles in general (which includes optimizing absolute values) tends to churn up linear programming type problems, which -- while they're certainly tractable numerically -- can get fiddly. They typically don't have an analytical closed-form solution, and are a bit slower and a bit more difficult to implement than least-square-type solutions. – Rich Jul 24 '10 at 2:55 I do not agree with this. First, theoretically, the problem may be of different nature (because of the discontinuity) but not necessarily harder (for example the median is easely shown to be arginf_m E[|Y-m|]). Second, practically, using a L1 norm (absolute value) rather than a L2 norm makes it piecewise linear and hence at least not more difficult. Quantile regression and its multiple variante is an example of that. – robin girard Jul 24 '10 at 6:01 6 Yes, but finding the actual number you want, rather than just a descriptor of it, is easier under squared error loss. Consider the 1 dimension case; you can express the minimizer of the squared error by the mean: O(n) operations and closed form. You can express the value of the absolute error minimizer by the median, but there's not a closed-form solution that tells you what the median value is; it requires a sort to find, which is something like O(n log n). Least squares solutions tend to be a simple plug-and-chug type operation, absolute value solutions usually require more work to find. – Rich Jul 24 '10 at 9:10 One way you can think of this is that standard deviation is similar to a "distance from the mean". Compare this to distances in euclidean space - this gives you the true distance, where what you suggested (which, btw, is the absolute deviation) is more like a manhattan distance calculation. - 6 Nice analogy of euclidean space! – c4il Jul 19 '10 at 21:38 Yeah. 
Great analogy. – Daniel Rodriguez Oct 31 '11 at 4:10 Except that in one dimension the $l_1$ and $l_2$ norm are the same thing, aren't they? – naught101 Mar 29 '12 at 5:20 1 @naught101: It's not one dimension, but rather $n$ dimensions where $n$ is the number of samples. The standard deviation and the absolute deviation are (scaled) $l_2$ and $l_1$ distances respectively, between the two points $(x_1, x_2, \dots, x_n)$ and $(\mu, \mu, \dots, \mu)$ where $\mu$ is the mean. – ShreevatsaR Nov 16 '12 at 7:21 Squaring the difference from the mean has a couple of reasons. • Variance is defined as the 2nd moment of the deviation (the R.V here is (x-$\mu$) ) and thus the square as moments are simply the expectations of higher powers of the random variable. • Having a square as opposed to the absolute value function gives a nice continuous and differentiable function (absolute value is not differentiable at 0) - which makes it the natural choice, especially in the context of estimation and regression analysis. • The squared formulation also naturally falls out of parameters of the Normal Distribution. - The answer that best satisfied me is that it falls out naturally from the generalization of a sample to n-dimensional euclidean space. It's certainly debatable whether that's something that should be done, but in any case: Assume your $n$ measurements $X_i$ are each an axis in $\mathbb R^n$. Then your data $x_i$ define a point $\bf x$ in that space. Now you might notice that the data are all very similar to each other, so you can represent them with a single location parameter $\mu$ that is constrained to lie on the line defined by $X_i=\mu$. Projecting your datapoint onto this line gets you $\hat\mu=\bar x$, and the distance from the projected point $\hat\mu\bf 1$ to the actual datapoint is $\sqrt{\frac{n-1} n}\hat\sigma=\|\bf x-\hat\mu\bf 1\|$. This approach also gets you a geometric interpretation for correlation, $\hat\rho=\cos \angle(\vec{\bf\tilde x},\vec{\bf\tilde y})$. - 1 This is correct and appealing. However, in the end it appears only to rephrase the question without actually answering it: namely, why should we use the Euclidean (L2) distance? – whuber♦ Nov 24 '10 at 21:07 That is indeed an excellent question, left unanswered. I used to feel strongly that the use of L2 is unfounded. After having studied a little statistics, I saw the analytic niceties, and since then have revised my viewpoint into "if it really matters, you're probably in deep water already, and if not, easy is nice". I don't know measure theory yet, and worry that analysis rules there too - but I've noticed some new interest in combinatorics, so perhaps new niceties have been/will be found. – sesqu Nov 24 '10 at 21:39 8 @sesqu Standard deviations did not become commonplace until Gauss in 1809 derived his eponymous deviation using squared error, rather than absolute error, as a starting point. However, what pushed them over the top (I believe) was Galton's regression theory (at which you hint) and the ability of ANOVA to decompose sums of squares--which amounts to a restatement of the Pythagorean Theorem, a relationship enjoyed only by the L2 norm. Thus the SD became a natural omnibus measure of spread advocated in Fisher's 1925 "Statistical Methods for Research Workers" and here we are, 85 years later. – whuber♦ Nov 24 '10 at 21:56 5 (+1) Continuing in @whuber's vein, I would bet that had Student published a paper in 1908 entitled, "Probable Error of the Mean - Hey, Guys, Check Out That MAE in the Denominator!" 
then statistics would have an entirely different face by now. Of course, he didn't publish a paper like that, and of course he couldn't have, because the MAE doesn't boast all the nice properties that S^2 has. One of them (related to Student) is its independence of the mean (in the normal case), which of course is a restatement of orthogonality, which gets us right back to L2 and the inner product. – G. Jay Kerns Nov 25 '10 at 3:38 The reason that we calculate standard deviation instead of absolute error is that we are assuming error to be normally distributed. It's a part of the model. Suppose you were measuring very small lengths with a ruler, then standard deviation is a bad metric for error because you know you will never accidentally measure a negative length. A better metric would be one to help fit a Gamma distribution to your measurements: $E(\log(x)) - \log(E(x))$ Like st. dev., this is also non-negative and differentiable, but it is a better error statistic for this problem. - I like your answer. The sd is not always the best statistic. – RockScience Nov 25 '10 at 3:03 There are many reasons; probably the main is that it works well as parameter of normal distribution. - 4 I agree. Standard deviation is the right way to measure dispersion if you assume normal distribution. And a lot of distributions and real data are an approximately normal. – Łukasz Lew Jul 20 '10 at 14:40 – Neil G Mar 12 '12 at 7:40 @NeilG Good point; I was thinking about "casual" meaning here. I'll think about some better word. – mbq♦ Mar 12 '12 at 10:41 Just so people know, there is a Math Overflow question on the same topic. Why-is-it-so-cool-to-square-numbers-in-terms-of-finding-the-standard-deviation The take away message is that using the square root of the variance leads to easier maths. A similar response is given by Rich and Reed above. - Yet another reason (in addition to the excellent ones above) comes from Fisher himself, who showed that the standard deviation is more "efficient" than the absolute deviation. Here, efficient has to do with how much a statistic will fluctuate in value on different samplings from a population. If your population is normally distributed, the standard deviation of various samples from that population will, on average, tend to give you values that are pretty similar to each other, whereas the absolute deviation will give you numbers that spread out a bit more. Now, obviously this is in ideal circumstances, but this reason convinced a lot of people (along with the math being cleaner), so most people worked with standard deviations. - 1 Your argument depends on the data being normally distributed. If we assume the population to have a "double exponential" distribution, then the absolute deviation is more efficient (in fact it is a sufficient statistic for the scale) – probabilityislogic Jul 16 '11 at 5:08 2 Yes, as I stated, "if your population is normally distributed." – Eric Suh Sep 8 '11 at 19:49 I think the contrast between using absolute deviations and squared deviations becomes clearer once you move beyond a single variable and think about linear regression. There's a nice discussion at http://en.wikipedia.org/wiki/Least_absolute_deviations, particularly the section "Contrasting Least Squares with Least Absolute Deviations" , which links to some student exercises with a neat set of applets at http://www.math.wpi.edu/Course_Materials/SAS/lablets/7.3/73_choices.html . 
To summarise, least absolute deviations is more robust to outliers than ordinary least squares, but it can be unstable (small change in even a single datum can give big change in fitted line) and doesn't always have a unique solution - there can be a whole range of fitted lines. Also least absolute deviations requires iterative methods, while ordinary least squares has a simple closed-form solution, though that's not such a big deal now as it was in the days of Gauss and Legendre, of course. - Because squares can allow use of many other mathematical operations or functions more easily than absolute values. Example: squares can be integrated, differentiated, can be used in trigonometric, logarithmic and other functions, with ease. - I wonder if there is a self fulfilling profecy here. We get – probabilityislogic Mar 13 '12 at 12:04 Naturally you can describe dispersion of a distribution in any way meaningful (absolute deviation, quantiles, etc.). One nice fact is that the variance is the second central moment, and every distribution is uniquely described by its moments if they exist. Another nice fact is that the variance is much more tractable mathematically than any comparable metric. Another fact is that the variance is one of two parameters of the normal distribution for the usual parametrization, and the normal distribution only has 2 non-zero central moments which are those two very parameters. Even for non-normal distributions it can be helpful to think in a normal framework. As I see it, the reason the standard deviation exists as such is that in applications the square-root of the variance regularly appears (such as to standardize a random varianble), which necessitated a name for it. - Estimating the standard deviation of a distribution requires to choose a distance. Any of the following distance can be used: $d_{n}((X)_{i=1..I},\mu)=(\sum \left | X-\mu \right |^n)^{1/n}$ We usually use the natural euclidean distance (n=2), which is the one everybody uses in daily life. The distance that you propose is the one with $n=1$. Both are good candidates but they are different. One could decide to use $n=3$ as well. I am not sure that you will like my answer, my point contrary to others is not to demonstrate that $n=2$ is better. I think that if you want to estimate the standard deviation of a distribution, you can absolutely use a different distance. - Did you mean $n=1$ instead of the (undefined) $n=0$? – whuber♦ Jan 5 '11 at 3:25 Yes indeed, thx – RockScience Jan 5 '11 at 3:40 It depends on what you are talking about when you say "spread of the data". To me this could mean two things: 1. The width of a sampling distribution 2. The accuracy of a given estimate For point 1) there is no particular reason to use the standard deviation as a measure of spread, except for when you have a normal sampling distribution. The measure $E(|X-\mu|)$ is a more appropriate measure in the case of a Laplace Sampling distribution. My guess is that the standard deviation gets used here because of intuition carried over from point 2). Probably also due to the success of least squares modelling in general, for which the standard deviation is the appropriate measure. Probably also because calculating $E(X^2)$ is generally easier than calculating $E(|X|)$ for most distributions. Now, for point 2) there is a very good reason for using the variance/standard deviation as the measure of spread, in one particular, but very common case. You can see it in the Laplace approximation to a posterior. 
With Data $D$ and prior information $I$, write the posterior for a parameter $\theta$ as: $$p(\theta|DI)=\frac{\exp\left(h(\theta)\right)}{\int \exp\left(h(t)\right)dt}\;\;\;\;\;\;h(\theta)\equiv\log[p(\theta|I)p(D|\theta I)]$$ I have used $t$ as a dummy variable to indicate that the denominator does not depend on $\theta$. If the posterior has a single well rounded maximum (i.e. not too close to a "boundary"), we can taylor expand the log probability about its maximum $\theta_{max}$. If we take the first two terms of the taylor expansion we get (using prime for differentiation): $$h(\theta)\approx h(\theta_{max})+(\theta_{max}-\theta)h'(\theta_{max})+\frac{1}{2}(\theta_{max}-\theta)^{2}h''(\theta_{max})$$ But we have here that because $\theta_{max}$ is a "well rounded" maximum, $h'(\theta_{max})=0$, so we have: $$h(\theta)\approx h(\theta_{max})+\frac{1}{2}(\theta_{max}-\theta)^{2}h''(\theta_{max})$$ If we plug in this approximation we get: $$p(\theta|DI)\approx\frac{\exp\left(h(\theta_{max})+\frac{1}{2}(\theta_{max}-\theta)^{2}h''(\theta_{max})\right)}{\int \exp\left(h(\theta_{max})+\frac{1}{2}(\theta_{max}-t)^{2}h''(\theta_{max})\right)dt}$$ $$=\frac{\exp\left(\frac{1}{2}(\theta_{max}-\theta)^{2}h''(\theta_{max})\right)}{\int \exp\left(\frac{1}{2}(\theta_{max}-t)^{2}h''(\theta_{max})\right)dt}$$ Which, but for notation is a normal distribution, with mean equal to $E(\theta|DI)\approx\theta_{max}$, and variance equal to $$V(\theta|DI)\approx \left[-h''(\theta_{max})\right]^{-1}$$ ($-h''(\theta_{max})$ is always positive because we have a well rounded maximum). So this means that in "regular problems" (which is most of them), the variance is the fundamental quantity which determines the accuracy of estimates for $\theta$. So for estimates based on a large amount of data, the standard deviation makes a lot of sense theoretically - it tells you basically everything you need to know. Essentially the same argument applies (with same conditions required) in multi-dimensional case with $h''(\theta)_{jk}=\frac{\partial h(\theta)}{\partial \theta_j \partial \theta_k}$ being a Hessian matrix. The diagonal entries are also essentially variances here too. The frequentist using the method of maximum likelihood will come to essentially the same conclusion because the MLE tends to be a weighted combination of the data, and for large samples the Central Limit Theorem applies and you basically get the same result if we take $p(\theta|I)=1$ but with $\theta$ and $\theta_{max}$ interchanged: $$p(\theta_{max}|\theta)\approx N\left(\theta,\left[-h''(\theta_{max})\right]^{-1}\right)$$ (see if you can guess which paradigm I prefer :P ). So either way, in parameter estimation the standard deviation is an important theoretical measure of spread. - $\newcommand{\var}{\operatorname{var}}$ Variances are additive: for independent random variables $X_1,\ldots,X_n$, $$\var(X_1+\cdots+X_n)=\var(X_1)+\cdots+\var(X_n).$$ Notice what this makes possible: Say I toss a fair coin 900 times. What's the probability that the number of heads I get is between 440 and 455 inclusive? Just find the expected number of heads ($450$), and the variance of the number of heads ($225=15^2$), then find the probability with a normal (or Gaussian) distribution with expectation $450$ and standard deviation $15$ is between $439.5$ and $455.5$. Abraham de Moivre did this with coin tosses in the 18th century, thereby first showing that the bell-shaped curve is worth something. 
- Are mean absolute deviations not additive in the same way as variances? – Russell S. Pierce Feb 9 at 23:30 No, they're not. – Michael Hardy Feb 10 at 18:14 See http://www.graphpad.com/curvefit/linear_regression.htm See Minimizing sum-of-squares section - 2 That looks like a potentially good resource, but this should have been either left as a comment or expanded upon. – Andy W Nov 10 '11 at 19:31
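As a small numerical check of the coin-tossing example above (my own addition): with 900 fair tosses, the continuity-corrected normal approximation with mean 450 and standard deviation 15 can be compared against the exact binomial probability.

```python
from math import erf, sqrt, comb

def norm_cdf(x, mu, sigma):
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

approx = norm_cdf(455.5, 450, 15) - norm_cdf(439.5, 450, 15)   # P(440 <= heads <= 455), approx.
exact = sum(comb(900, k) for k in range(440, 456)) / 2**900    # exact binomial probability
print(approx, exact)   # both come out at roughly 0.40
```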
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 56, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9347123503684998, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/6743/the-inverse-galois-problem-what-is-it-good-for/6754
## The inverse Galois problem, what is it good for?

Several years ago I attended a colloquium talk of an expert in Galois theory. He motivated some of his work by its relation with the inverse Galois problem. During the talk, a guy from the audience asked: "why should I, as a number theorist, care about the inverse Galois problem?" I must say that as a young graduate student who works on Galois theory, I was amazed or even perhaps shocked by this question. But later, I realized that I should have asked myself this question long ago. Can you pose reasons to convince a mathematician (not just a number theorist) of the importance of the inverse Galois problem? Or maybe why it's unimportant, if you want to ruin the party ;)

- I'm not sure, by the way, that this should be community wiki. – Pete L. Clark Nov 29 2009 at 8:31
- Almost 200 years after Galois, and we still don't know which groups arise as the Galois group of an irreducible polynomial over Q; and someone asks why should we care??? – JS Milne Jan 5 2010 at 5:17
- I am a layman in this subject, but I am curious to know: even if people were able to show every simple group is realisable over Q, how far are we from solving the actual inverse Galois problem? – Ying Zhang May 2 2010 at 19:38
- Some simple groups are known to be realizable over Q and there are some very recent discoveries too (e.g. Gabor's usage of modular representations), but we are very far from realizing all simple groups. For example the Mathieu group M11 is not known to be realizable (if I am not mistaken). – Lior Bary-Soroker Jun 1 2010 at 21:19
- You're thinking of $M_{23}$. It's known by now that all the other sporadic simple groups arise, and for many of them, including $M_{11}$, there are explicit examples or even explicit families of examples. – Noam D. Elkies Jan 12 at 5:36

## 8 Answers

For me, it's one of those questions that would not be so interesting if the answer is Yes but which would probably be very interesting if the answer is No. If not all groups are Galois groups over Q, then there is probably some structure that can be regarded as an obstruction, and then this structure would probably be essential to know about. For instance, not all groups are Galois groups over local fields -- they have to be solvable. This is by basic properties of the higher ramification filtration, which is, surprise, essential to know about if you want to understand local fields. So you could say it's an approach to finding deeper structure in the absolute Galois group. Why not just do that directly? The problem with directly looking for structure is that it's not a yes/no question, and so sometimes you lose track of what exactly you're doing (although in new and fertile subjects often you don't). So the inverse Galois problem has the advantage of being a yes/no question and the advantage that things would be really interesting if the answer is No. Unfortunately, I think the answer is expected to be Yes, though correct me if I'm wrong. -
The narrowest version of the inverse Galois problem, find all of the Galois groups of finite extensions of $\mathbb{Q}$, might not be all that interesting. A better question would be the following: Let $G$ be a finite group and let $\mathbb{F}$ be a field of characteristic 0 (or more generally a perfect field). Can you describe the set (or moduli space if you like) of all Galois extensions of $\mathbb{F}$ over $G$? For instance if $G = C_2$, it's a good question with a good answer; the question is a model of taking square roots of elements of $\mathbb{F}$. With that special case in mind, it's always a good question. It can be viewed as a theory of nonabelian surds. If for a given field $\mathbb{F}$ and a given finite group $G$, you don't even know if there are any points in the moduli space of extensions with Galois group $G$, then you hardly know anything. In particular, $\mathbb{Q}$ is an important field, and there are many specific finite groups for which people don't even know that much. - The previous answers are all on point; let me just say a little more. First, the IGP as a problem is a sink, not a source (or something with both inward and outward flow!): I know of no nontrivial consequences of assuming that every finite group over Q (or even over every Hilbertian field) is a Galois group. This does not mean it's a bad problem: the same holds for Fermat's Last Theorem. As with FLT, if IGP were easy to prove, then it would be of little interest. (As a good example, if you know Dirichlet's theorem on primes in arithmetic progressions, it's easy to prove that every finite abelian group occurs as a Galois group of Q. What does this tell you about the maximal abelian extension of Q? Not much -- the Kronecker-Weber theorem is an order of magnitude deeper.) But as with FLT, the special cases of IGP that have been established use a wide array of fascinating techniques and provide an important border-crossing between algebra and geometry. Arguably more interesting than IGP itself is the Regular Inverse Galois Problem: for any field K and any finite group G, there exists a regular function field K(C)/K(t) with Galois group isomorphic to G. (If K is Hilbertian -- e.g. a global field -- then RIGP for K implies IGP for K.) Now RIGP is of great interest in arithmetic geometry: given any finite group G there are infinitely many moduli spaces (Hurwitz spaces) attached to the problem of realizing G regularly over K (because we have discrete invariants which can take infinitely many possible values, like the number of branch points). If even one of these Hurwitz schemes has a K-rational point, then G occurs regularly over K. In general, the prevailing wisdom about varieties over fields like Q is that they should have very few rational points other than the ones that stare you in the face. (Yes, it is difficult or impossible to formalize this precisely.) So it is somewhat reasonable to say that the chance that a given Hurwitz space -- say of general type -- has a Q-rational point is zero, but what about the chance that at least one of infinitely many Hurwitz spaces, related to each other by various functorialities, has a Q-rational point? To me that is one of mathematics' most fascinating questions: to learn the answer either way would be tremendously exciting. - "So it is somewhat reasonable to say that the chance that a given Hurwitz space -- say of general type, as most of them are -- has a Q-rational point is zero" Just a note: what "most" means here may depend strongly on how you count. 
When G is fixed and your discrete invariants (like number of branch points) vary, it's not at all clear that most of the resulting Hurwitz spaces are general type! – JSE Nov 29 2009 at 15:18 Corrected accordingly. Thanks. – Pete L. Clark Nov 29 2009 at 21:24 I personally know of no immediate applications of a positive (or negative) answer to the inverse Galois problem. At the same time, the problem seems to me a useful standard against which to gauge mathematical progress. Answering the inverse Galois problem for solvable extensions required class field theory (one of the pinnacles of early 20th century mathematics). This can be seen as evidence that the ability to solve the inverse Galois problem will entail a deeper understanding of a variety of mathematical things. - 5 I agree with you. It is similar to Fermat last problem. The equation is just one of a gazillion others. But still it drove crazy the mathematical society for centuries, and was a catalysis for many very interesting mathematics. You can give the inverse Galois problem the credit for Hilbert's irreducibility theorem, which is one of my favorites. – Lior Bary-Soroker Nov 25 2009 at 2:18 3 I also agree. In fact, the problem is more natural than other famous open problems so it is an obvious challenge mathematicians face and progress in mathematics measured. There are views (see ihes.fr/~gromov/topics/SpacesandQuestions.pdf ) that dismiss the importance of "natural" problems rather than deep emerging problems. But even if you agree to this opinion (and I tend not to agree) Natural old-standing problems stand as an objective measure for progress in math. – Gil Kalai Nov 28 2009 at 19:59 when you say “Answering the inverse Galois problem for solvable extensions required class field theory”, what do you mean? do you mean solvable extension over Q or over local fields? Thank you. – natura Jan 5 2010 at 4:51 I also agree that IGP is a very natural problem. Galois theory is one of the very first (highly) nontrivial example of equivalence of seemingly different categories, and it's one the most beautiful. – natura Jan 5 2010 at 4:54 Bauer's Theorem (a simple consequence of the Chebotarev Density Theorem) states that a finite Galois extension K of an algebraic number field F is uniquely determined (as a subield of some fixed algebraic closure of F) by the set of primes of F which split completely in K. Thus knowing all possible Galois groups is the same as knowing all possible splitting laws in finite Galois extensions. Being able to describe these splitting laws in some explicit fashion is basically "nonabelian reciprocity", which is THE most important problem in algebraic number theory, so the "inverse Galois problem" is of FUNDAMENTAL importance to all number theorists. - Any branch of mathematics after the first few definitions will make everyone routinely ask themselves some basic questions. I consider Inverse Galois Problem is one such. If the question (i) is not highly technical, (ii) can be understood at very early stages and (iii) does not sound concocted then it justifies itself. These are the natural questions the subject should attempt to answer. (It is irrelevant if solving them requires Fields medallists or undergraduates). Let me list more questions in the same category (not necessarily of the same level of difficulty!) 1. Which divisors of $|G|$ are orders of subgroups of $G$? 2. Which connected open subsets of the complex plane are biholomorphic to the unit disc? 3. 
3. For which numbers $d$ is the ring $\mathbf{Z}[\sqrt d]$ a UFD?
4. Which finite groups occur as subgroups of $\mathbf{SO}(3)$?
5. Which integers are represented by an indefinite/definite integral quadratic form?
6. Which projective curves are subvarieties of the projective plane?

I have been under the impression that this is the way mathematicians think. If someone questions the relevance of the above questions it would be difficult for me to communicate with that person. -

I agree that the inverse Galois problem is interesting by itself. However, it seems to me that it is a sink, as Pete formulated it. Many of the questions you raise above have beautiful implications for other topics in math, and outside of math, thus making them interesting in a much deeper sense. – Lior Bary-Soroker Jan 13 at 6:35

Just a link: http://mathoverflow.net/questions/10736/families-of-number-fields-of-prime-discriminant . If we consider not only the group, but also put some other restrictions on the extension (such as the discriminant, ramification, etc.), then we have the above problem, which is quite interesting. -

P.S. Infinitely many Galois extensions inside an algebraic closure can have isomorphic Galois groups, so the "explicit" description of the primes which split in a finite Galois extension is the crux of the matter when it comes to nonabelian reciprocity. -
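To make the $G = C_2$ case from the first answer concrete, here is a small worked statement (a sketch, under the extra assumption, not stated in the thread, that the characteristic of $\mathbb{F}$ is not $2$): every Galois extension of $\mathbb{F}$ with group $C_2$ is of the form $\mathbb{F}(\sqrt{a})$ for some non-square $a \in \mathbb{F}^{\times}$, and $\mathbb{F}(\sqrt{a}) = \mathbb{F}(\sqrt{b})$ inside a fixed algebraic closure exactly when $a/b$ is a square. So the "moduli space" of $C_2$-extensions can be described as $$\bigl(\mathbb{F}^{\times}/(\mathbb{F}^{\times})^{2}\bigr)\setminus\{\bar{1}\}, \qquad \bar{a} \longleftrightarrow \mathbb{F}(\sqrt{a}),$$ which is the precise sense in which the $C_2$ question is about taking square roots of elements of $\mathbb{F}$.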
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9484173059463501, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/15482/how-do-i-calculate-the-position-on-the-bloch-sphere-of-a-quantum-gate-with-a-giv?answertab=active
# How do I calculate the position on the Bloch sphere of a quantum gate with a given diagonal matrix?

In quantum computation there are several principal quantum gates that have corresponding matrix representations. One of these is the Z gate, whose matrix is $\left[\begin{smallmatrix} 1 & 0 \\ 0 & -1 \end{smallmatrix}\right]$. ... anyway, I've found the eigenvalues (equal to +1, -1) using the characteristic equation, and used them to derive the corresponding eigenvectors, which come together quite nicely in a 2×2 matrix $\left[\begin{smallmatrix} 1 & 0 \\ 0 & 1\end{smallmatrix}\right]$, equal to the identity. So, in diagonalizing this matrix, I find that the diagonal matrix $D$ is the same matrix as the one for gate $Z$. ... the next step and where I'm stuck is to find the corresponding point on the Bloch sphere for this gate. In order to do that, I need to compute how to take the diagonalized matrix, call it $D_z$, and derive two things: (a) its diagonal representation $| 0 \rangle \langle 0 | - | 1 \rangle \langle 1 |$, and (b) the normalized eigenvalues $a, b$ for $Z$, where $Z = a|0\rangle + b|1\rangle$ and which must be normalized, i.e. $|a|^2 + |b|^2 = 1$. The $a$ and $b$ terms correspond to the probabilities of measuring 0 or 1 for the state, respectively (I think). After I have the values for $a$ and $b$, I'll be able to locate the gate on the Bloch sphere because the calculation of its coordinates on the sphere is straightforward: $a = \cos(\theta / 2)$, and $b = e^{i\phi}\sin(\theta/2)$. -

## 2 Answers

There's actually an extremely nice way to uncover the Bloch sphere representation for any density operator. (Pure states are just a special case.)

Definition. I'm not sure how your (lecturer? book? other learning source?) defines the Bloch sphere, but the definition that makes the most sense from a fundamental perspective is that, for any density operator $\rho$, the point on (or inside) the Bloch sphere corresponding to $\rho$ is the vector $(r_x, r_y, r_z)$ such that $$\rho = \tfrac{1}{2}( I + r_x X + r_y Y + r_z Z )$$ where $I$ is the identity and $X, Y, Z$ are the (other) 2×2 Pauli operators.

Proof sketch. It's easy to show that the operators $I, X, Y, Z$ are linearly independent (what linear combinations of them add to the zero operator?) and are Hermitian (each is equal to its own conjugate-transpose). From this you can show that they span the set of all 2×2 Hermitian operators; and as they are linearly independent, they're actually a basis set for those operators. So any density operator — which is also Hermitian — will decompose into $I, X, Y, Z$ in a unique way. (It's possible to show that its coefficient in $I$ is always ½ by considering the trace. Do you see how?)

Answer. You should try to prove the things I've said above — it isn't hard, and it's using math that will be useful to you later anyway — but for the problem of finding the Bloch sphere representation, all you need to do is solve for $(r_x, r_y, r_z)$ in the equation above. If you like, you can even obtain these coefficients by a simple formula. (Hint: what is the trace of the product of two different matrices chosen from $I,X,Y,Z$? What does this mean for $\mathrm{tr}(\rho P)$ for $P \in \{I,X,Y,Z\}$?)

Another remark — In the future, you don't have to really do any work to find the eigenvalues of a diagonal matrix $D$.
It's easy to show that the standard basis vectors $\mathbf e_j = [\; 0 \; \cdots \; 0 \;\; 1 \;\; 0 \; \cdots \; 0 \;]^\top$ are eigenvectors for any diagonal matrix, and that the eigenvalues are exactly the coefficients on the diagonal (with multiplicity given by how often each is repeated). It's also easy to show that $D - \lambda I$ is invertible for any other $\lambda$, so that these are all the eigenvalues. -

Niel this is great stuff! Since you asked, I'm teaching myself Q.I.C. from Nielsen and Chuang a.k.a. "mike and ike". My hope is that within a year I can get to a point where I can design quantum circuits that are useful. – bwkaplan Oct 8 '11 at 15:22

@bwkaplan: Best of luck in your studies! – Niel de Beaudrap Oct 8 '11 at 15:33

I'm having some trouble finding any pure states e.g. such as the Z gate on the Bloch sphere representation you've provided. Someone in chat pointed to theorem 2.5 in Nielsen and Chuang providing a reason as to why. Pure states fail both criteria for density operators. Perhaps I've misinterpreted your reasoning and ask that you please clarify. – bwkaplan Oct 13 '11 at 21:18

The Bloch sphere representation only holds for 2x2 density operators, which are positive semidefinite and have trace 1. The Z operator fails both of these criteria. Because it does not have trace 1, the coefficient of the identity matrix will not be 1/2; in fact, it will be zero, as the trace of Z is zero. (Note: can you derive what properties hold for the coefficients of a 2x2 positive semidefinite matrix when it is decomposed as a linear combination of the operators I, X, Y, and Z?) For an arbitrary Hermitian matrix, you need instead a vector $(r_I, r_X, r_Y, r_Z)$ with real coefficients. – Niel de Beaudrap Oct 13 '11 at 23:06

Adding some more to Niel's post, let $\vec{u} = (u_x,u_y,u_z)$ be a unit vector. Then $\sigma_u = u_x\sigma_x+u_y\sigma_y+u_z\sigma_z$ is the Pauli spin operator in the $\vec{u}$ direction. As with any Pauli spin operator, $\sigma_u^2 = 1$. This is because $\sigma_x,\sigma_y$, and $\sigma_z$ square to 1, and all cross terms cancel by anticommutativity. Therefore $(1+\sigma_u)/2$ is easily shown to be idempotent and since it has trace 1, it is the pure density matrix of a state. But the state has to be spin in the $\vec{u}$ direction because it is an eigenoperator of spin in the $\vec{u}$ direction (with eigenvalue 1): $\sigma_u\;\;(1+\sigma_u)/2 = 1\;\; (1+\sigma_u)/2$. Thus to get the corresponding state (spinor), take any nonzero column vector of $(1+\sigma_u)/2$ and normalize. -
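As a quick numerical illustration of the decomposition discussed in the first answer (and of the projector trick in the second), here is a minimal Python/NumPy sketch; the sample state is an arbitrary choice for illustration and is not taken from the question:

```python
import numpy as np

# Pauli matrices; together with the identity they form a basis for 2x2 Hermitian matrices.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch_vector(rho):
    """Return (r_x, r_y, r_z) with rho = (I + r_x X + r_y Y + r_z Z) / 2."""
    return tuple(np.trace(rho @ P).real for P in (X, Y, Z))

# Example: the pure state (|0> + |1>)/sqrt(2); its density matrix is the projector |+><+|.
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())
print(bloch_vector(rho))          # approximately (1.0, 0.0, 0.0)

# The projector (I + sigma_u)/2 from the second answer, with u along +z:
proj = (I2 + Z) / 2
col = proj[:, 0] / np.linalg.norm(proj[:, 0])   # any nonzero column, normalised
print(col)                         # the spinor |0>, i.e. [1, 0]
```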
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 39, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9375305771827698, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/282402/at-what-speed-should-it-be-traveling-if-the-driver-aims-to-arrive-at-town-b-at-2
# At what speed should it be traveling if the driver aims to arrive at Town B at 2.00 pm?

A car will travel from Town A to Town B. If it travels at a constant speed of 60 km/h, it will arrive at 3.00 pm. If it travels at a constant speed of 80 km/h, it will arrive at 1.00 pm. At what speed should it be traveling if the driver aims to arrive at Town B at 2.00 pm? -

Peaceyou: An acceptance rate of 17% is terrible. You've asked eight questions and have left seven without feedback. If people take the time to write thoughtful, formatted replies then the least you could do is tick a box next to the most helpful answer. – Fly by Night Jan 20 at 2:02

## 3 Answers

Let $d$ be the distance between Town A and Town B. Let $x$ be the number of hours such that the driver started $x$ hours before 3.00 pm. We then have: $$d = 60 \cdot x$$ $$d = 80 \cdot (x - 2)$$ Set the two equations equal to get: $$80(x-2) = 60x$$ $$80x - 160 = 60x$$ $$20x = 160$$ $$x = 8$$ Hence, the driver started $8$ hours before 3.00 pm. We want to find out how fast the driver should go if he wants to arrive after $x - 1$ hours (one hour less, since $3 - 2 = 1$), i.e. after $x - 1 = 7$ hours. Solving for $d$, we have $d = 480$. If $m$ is the required speed, then $$480 = 7m,$$ so the driver should drive at $480/7 \approx 68.6 \text{ km/hr}$. -

The trip became $120$ minutes ($2$ hours) shorter by using $\frac34$ of a minute per kilometer ($80$ km/hr) instead of $1$ minute per kilometer ($60$ km/hr). Since the savings from going faster was $\frac14$ of a minute per kilometer, the trip must be $480$ kilometers long, so it took $8$ hours at $60$ km/hr, and we set off at 7 AM. Therefore, to arrive at 2 PM, we should travel $480$ kilometers in $7$ hours, or $68\frac{4}{7}$ km/hr. -

Since both journeys are made at constant speeds, the SUVAT equation $s = ut + \frac{1}{2}at^2$ (where $s$ measures displacement, $u$ is the initial velocity, $a$ the necessarily constant acceleration and $t$ the time) becomes $s=ut$. This is as we expect: if speed is constant then Distance = Speed $\times$ Time. Since the distances of the two journeys are equal we have $u_1t_1=u_2t_2$, where $u_i$ denotes the velocity of the $i^{\text{th}}$ journey in km/h and $t_j$ the time taken for the $j^{\text{th}}$ journey. Let us assume that the first journey took $t_1$ hours. Since the second journey took two hours less, we have $t_2=t_1-2$. Thus $u_1t_1 = u_2t_2$ becomes $60t_1 = 80(t_1-2)$ and hence $t_1=8$ hours. It also follows that $t_2 = 8-2 = 6$ hours. For a third journey arriving at $2$ pm, we must have $t_3 = 7$ hours. Again, the distance is the same and so we have $u_1t_1=u_3t_3$, which becomes $60 \times 8 = 7u_3$, and hence $u_3=68\frac{4}{7}$ km/h. -
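A quick sanity check of the arithmetic in the answers above, as a throwaway Python sketch (not part of any of the original answers):

```python
# Travel time t (hours) at 60 km/h satisfies 60*t = 80*(t - 2).
t_slow = 160 / 20            # 80t - 160 = 60t  =>  20t = 160  =>  t = 8 hours
distance = 60 * t_slow       # 480 km between the towns
t_target = t_slow - 1        # arriving at 2 pm instead of 3 pm saves one hour
print(distance / t_target)   # 68.571..., i.e. 480/7 km/h
```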
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 46, "mathjax_display_tex": 7, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9444621801376343, "perplexity_flag": "middle"}
http://nrich.maths.org/2363
### Flexi Quads

A quadrilateral changes shape with the edge lengths constant. Show the scalar product of the diagonals is constant. If the diagonals are perpendicular in one position are they always perpendicular?

### Flexi Quad Tan

As a quadrilateral Q is deformed (keeping the edge lengths constant) the diagonals and the angle X between them change. Prove that the area of Q is proportional to tan X.

### Multiplication of Vectors

An account of multiplication of vectors, both scalar products and vector products.

# Air Routes

##### Stage: 5 Challenge Level:

London is situated at longitude $0^o$, latitude $52^o$ North and Cape Town at longitude $18^o$ East, latitude $34^o$ South. Taking the earth to be a sphere with unit radius (and ultimately scaling by 6367 kilometres for the radius of the earth) work out coordinates for both places, then find the angle LOC where L represents London, O the centre of the earth and C Cape Town. Hence find the distance on the surface of the earth between the two places. If a plane flies at an altitude of 6 kilometres and the journey takes 11 hours what is the average speed?

[You might also like to try the problems 'Over The Pole', which is a little simpler, and 'Flight Path' which is similar to 'Air Routes' but the method of solution given there is a bit different.]
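One way to carry out the suggested calculation is sketched below in Python; the coordinates, the 6367 km radius, the 6 km altitude and the 11 hour journey time all come from the problem statement, and the printed values are only rough checks to be confirmed by hand:

```python
import numpy as np

R_EARTH = 6367.0   # km, radius given in the problem
ALTITUDE = 6.0     # km, cruising altitude given in the problem
HOURS = 11.0       # journey time given in the problem

def unit_vector(lat_deg, lon_deg):
    """Cartesian coordinates of a point on the unit sphere from latitude and longitude."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

L = unit_vector(52.0, 0.0)     # London: 52 degrees North, longitude 0
C = unit_vector(-34.0, 18.0)   # Cape Town: 34 degrees South, 18 degrees East

angle_LOC = np.arccos(np.clip(np.dot(L, C), -1.0, 1.0))   # angle at the centre O, in radians
print(R_EARTH * angle_LOC)                         # surface distance, roughly 9700 km
print((R_EARTH + ALTITUDE) * angle_LOC / HOURS)    # average speed at altitude, in km/h
```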
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8978756666183472, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/48550/why-does-a-capacitor-discharge-a-percentage-of-the-original-energy-in-the-same-t/48551
# Why does a capacitor discharge a percentage of the original energy in the same time?

Suppose I charge a capacitor ($220\mu{F}$) using a 6V battery, and then measure the time it takes to discharge 90% of the initial energy over a resistor (${100k}\Omega$); I then charge the same capacitor using a 12V battery and measure the time it takes to discharge 90% of its initial energy again (over the same resistor). Why are both times the same? Especially given that the second time there is 4 times more starting energy than the first time. ($E=\frac{1}{2}CV^2$.) -

## 1 Answer

Why are both times the same? Because the time constant of the circuit hasn't changed. For an RC circuit, the time constant $\tau$ is just the product of the resistance and the capacitance: $\tau = RC$. When you write and solve the differential equation for the RC circuit with an initial voltage across the capacitor $V_0$, the solution is: $v_C(t) = V_0 e^{-t/\tau}$ By inspection, the time to decay to a percentage of the initial value is independent of the initial value.

More intuitively, the power delivered to the resistor goes as the square of the voltage, just as the stored energy does. So, when the initial voltage is doubled, the initial stored energy is quadrupled, but the initial power delivered by the capacitor is also quadrupled; the energy is therefore lost at the same fractional rate, and the time to dissipate a given percentage of it is unchanged. -
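To put numbers on this for the component values in the question, here is a small Python sketch; it only evaluates the formulas from the answer, and the printed times are approximate:

```python
import math

R = 100e3      # ohms
C = 220e-6     # farads
tau = R * C    # time constant: 22 s

# Energy goes as the square of the voltage, so E(t) = E0 * exp(-2t/tau).
# The time to lose 90% of the energy is therefore independent of V0:
t_90 = (tau / 2) * math.log(10)    # about 25.3 s
print(t_90)

for V0 in (6.0, 12.0):
    E0 = 0.5 * C * V0**2
    remaining = math.exp(-2 * t_90 / tau)
    print(V0, E0, remaining)       # the remaining fraction is 0.1 for both batteries
```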
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9093337059020996, "perplexity_flag": "head"}
http://mathoverflow.net/questions/43683/subfactor-summer-reading-list/43696
Subfactor summer reading list

Many people I talk to lament the nonexistence of a coherent source for learning the theory of subfactors. Could someone suggest a nice (ordered) list of books/papers to work through to obtain a suitable background in this theory, assuming the audience is comprised of mathematicians familiar with the basics of von Neumann algebra theory? -

3 Answers

So it really depends on why you want to learn about subfactors. I'll try and give different reading lists based on different motivations.

The basics of $II_1$-subfactors: If you're familiar with $II_1$-factors, then "Introduction to subfactors" (MR1473221) is a good place to start. Of course, Jones' original "Index for subfactors" (MR696688) is an enjoyable read as well. A more advanced and comprehensive treatment is Evans and Kawahigashi's "Quantum symmetries on operator algebras" (MR1642584).

Classification: When mathematicians encounter a family of mathematical objects, we feel the need to classify them. Subfactors were first (and still!) classified by their principal graphs, which is the principal part of the Bratteli diagram of the standard invariant, or tower of relative commutants (see JS, EK, or Bisch's "Bimodules and higher relative commutants" MR1424954). The first classifications (index $\leq 4$) were completed by Jones, Ocneanu (MR996454) (very hard to find this source), and Popa (MR1278111). Popa showed that "amenable" subfactors of the hyperfinite $II_1$-factor (the index is the norm squared of the principal graph) are completely classified by their principal graphs.

Axiomatization: There are several axiomatizations for the standard invariant of a subfactor: Ocneanu's paragroups (see EK), Popa's $\lambda$-lattices (MR1334479), and Jones' planar algebras (see section below on planar algebras). These three different axiomatizations play together nicely, and it's good to have an overview on what's really going on here. Unfortunately, there is no good unified source for this. Yet. However, you should think of it this way: The standard invariant (or "representation theory") of a finite index $II_1$-subfactor $N\subset M$ is a unitary $2$-category with $2$ $0$-morphisms called $N$ and $M$, $1$-morphisms given by various bimodule summands of the basic constructions of $N\subset M$, and $2$-morphisms given by various intertwiner spaces. This is what Ocneanu calls a "paragroup" because it resembles the tensor category of representations of a finite group. This $2$-category is unitary, has nice duals, and satisfies Frobenius reciprocity, and other cool stuff as well. In particular, we can draw planar diagrams to represent different $2$-morphisms in the spaces. So this $2$-category has the structure of a planar algebra. For this category theory stuff, see Mueger's "From subfactors to categories and topology I" (MR1966524). In many cases, we can recover the special $2$-category from connections on a bipartite graph or on a commuting square (see EK, JS, Popa).

Reconstruction: Popa proved that one can start with the standard invariant of a subfactor and reconstruct a subfactor with the same standard invariant (MR1334479). In the "strongly amenable" case (a bit more technical than "amenable"), you get a subfactor of the hyperfinite $II_1$-factor. You can also do the reconstruction completely planar algebraically.
This result is due to Guionnet-Jones-Shlyakhtenko (arXiv:0712.2904), and a really easy version to understand was given by Jones-Shlyakhtenko-Walker (arXiv:0807.4146) (Kodiyalam and Sunder also have a version of this).

Planar Algebras: If you want to know what a planar algebra is, see Peters' construction of the Haagerup subfactor planar algebra (arXiv:0902.1294), or Morrison-Peters-Snyder "Skein-theory for the $D_{2n}$ planar algebras" (MR2559686). If you want to know how a subfactor actually gives a planar algebra, see the first section of Jones-Penneys (arXiv:1007.3173), which relies on some proofs in Jones' "Planar Algebras I" (arXiv:math/9909027).

Examples: A great class of examples is the Bisch-Haagerup subfactors (MR1386923), which are just slightly harder than group-subgroup examples (see JS). Some of the most important examples rely on the above reconstruction theorems. For example, there are the exotic Haagerup and Asaeda-Haagerup subfactors (MR1686551) and the composite Fuss-Catalan subfactors (MR1437496).

I'll keep updating this post. Right now I have office hours. Sections to come include: Type III, Recent results -

Excellent, Dave! Thanks!!!! – Jon Bannon Oct 27 2010 at 0:59

Maybe the best choice today is the book by Jones and Sunder, "Introduction to subfactors", at least the first few chapters to get a grip of the basics. After this it depends on your interests in which direction you could continue. I don't know enough about the field to give any specific references. -

This is probably a good start. I have this book and have read it. I guess the question really should be where to go from there? There is a lot more to subfactor theory than is contained in that book, if I'm not mistaken. – Jon Bannon Oct 26 2010 at 17:14

I also think a good starting point is Jones and Sunder. If you are interested in type III subfactors, the literature seems to me a bit sparse, but there are some lecture notes: Kosaki - "Type III factors and index theory". In algebraic quantum field theory one studies "nets" which associate to certain space-time regions (under certain conditions) a type III factor. Subfactors play an important role. There is e.g. the paper: Longo, Rehren - "Nets of Subfactors" -

1 Ah, those lecture notes by Kosaki look interesting, didn't know they exist. Thanks! – Pieter Naaijkens Nov 11 2010 at 12:04

Thanks, Marcel! – Jon Bannon Nov 11 2010 at 17:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9099792242050171, "perplexity_flag": "middle"}
http://medlibrary.org/medwiki/Newton's_law_of_universal_gravitation
# Newton's law of universal gravitation

Prof. Walter Lewin explains Newton's law of gravitation in MIT course 8.01[1]

Satellites and projectiles all obey Newton's law of gravitation

Newton's law of universal gravitation states that every point mass in the universe attracts every other point mass with a force that is directly proportional to the product of their masses and inversely proportional to the square of the distance between them. (Separately it was shown that large spherically symmetrical masses attract and are attracted as if all their mass were concentrated at their centers.) This is a general physical law derived from empirical observations by what Newton called induction.[2] It is a part of classical mechanics and was formulated in Newton's work Philosophiæ Naturalis Principia Mathematica ("the Principia"), first published on 5 July 1687. (When Newton's book was presented in 1686 to the Royal Society, Robert Hooke made a claim that Newton had obtained the inverse square law from him – see History section below.)

In modern language, the law states the following: Every point mass attracts every single other point mass by a force pointing along the line intersecting both points. The force is proportional to the product of the two masses and inversely proportional to the square of the distance between them:[3] $F = G \frac{m_1 m_2}{r^2}\ $, where: F is the force between the masses, G is the gravitational constant, m1 is the first mass, m2 is the second mass, and r is the distance between the centers of the masses. Assuming SI units, F is measured in newtons (N), m1 and m2 in kilograms (kg), r in meters (m), and the constant G is approximately equal to 6.674×10⁻¹¹ N m² kg⁻².[4]

The value of the constant G was first accurately determined from the results of the Cavendish experiment conducted by the British scientist Henry Cavendish in 1798, although Cavendish did not himself calculate a numerical value for G.[5] This experiment was also the first test of Newton's theory of gravitation between masses in the laboratory. It took place 111 years after the publication of Newton's Principia and 71 years after Newton's death, so none of Newton's calculations could use the value of G; instead he could only calculate a force relative to another force.

Newton's law of gravitation resembles Coulomb's law of electrical forces, which is used to calculate the magnitude of electrical force between two charged bodies. Both are inverse-square laws, in which force is inversely proportional to the square of the distance between the bodies. Coulomb's Law has the product of two charges in place of the product of the masses, and the electrostatic constant in place of the gravitational constant.

Newton's law has since been superseded by Einstein's theory of general relativity, but it continues to be used as an excellent approximation of the effects of gravity. Relativity is required only when there is a need for extreme precision, or when dealing with gravitation for extremely massive and dense objects.
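As a small illustration of the scalar formula above, here is a hedged Python sketch; the masses and separation in the example call are arbitrary placeholder values, not figures taken from the article:

```python
G = 6.674e-11   # gravitational constant in N m^2 kg^-2, as quoted above

def gravitational_force(m1, m2, r):
    """Magnitude (in newtons) of the Newtonian attraction between two point masses."""
    return G * m1 * m2 / r**2

# Arbitrary example values: two 1000 kg masses separated by 10 m.
print(gravitational_force(1000.0, 1000.0, 10.0))   # about 6.7e-7 N
```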
## History

### Early History

A recent assessment (by Ofer Gal) about the early history of the inverse square law is that "by the late 1660s", the assumption of an "inverse proportion between gravity and the square of distance was rather common and had been advanced by a number of different people for different reasons".[6] The same author does credit Hooke with a significant and even seminal contribution, but he treats Hooke's claim of priority on the inverse square point as uninteresting since several individuals besides Newton and Hooke had at least suggested it, and he points instead to the idea of "compounding the celestial motions" and the conversion of Newton's thinking away from 'centrifugal' and towards 'centripetal' force as Hooke's significant contributions.

### Plagiarism dispute

In 1686, when the first book of Newton's Principia was presented to the Royal Society, Robert Hooke accused Newton of plagiarism by claiming that he had taken from him the "notion" of "the rule of the decrease of Gravity, being reciprocally as the squares of the distances from the Center". At the same time (according to Edmond Halley's contemporary report) Hooke agreed that "the Demonstration of the Curves generated thereby" was wholly Newton's.[7] In this way arose the question as to what, if anything, Newton owed to Hooke. This is a subject extensively discussed since that time, and on which some points still excite some controversy.

### Hooke's work and claims

Robert Hooke published his ideas about the "System of the World" in the 1660s, when he read to the Royal Society on 21 March 1666 a paper "On gravity", "concerning the inflection of a direct motion into a curve by a supervening attractive principle", and he published them again in somewhat developed form in 1674, as an addition to "An Attempt to Prove the Motion of the Earth from Observations".[8] Hooke announced in 1674 that he planned to "explain a System of the World differing in many particulars from any yet known", based on three "Suppositions": that "all Celestial Bodies whatsoever, have an attraction or gravitating power towards their own Centers" [and] "they do also attract all the other Celestial Bodies that are within the sphere of their activity";[9] that "all bodies whatsoever that are put into a direct and simple motion, will so continue to move forward in a straight line, till they are by some other effectual powers deflected and bent..."; and that "these attractive powers are so much the more powerful in operating, by how much the nearer the body wrought upon is to their own Centers". Thus Hooke clearly postulated mutual attractions between the Sun and planets, in a way that increased with nearness to the attracting body, together with a principle of linear inertia. Hooke's statements up to 1674 made no mention, however, that an inverse square law applies or might apply to these attractions. Hooke's gravitation was also not yet universal, though it approached universality more closely than previous hypotheses.[10] He also did not provide accompanying evidence or mathematical demonstration. On the latter two aspects, Hooke himself stated in 1674: "Now what these several degrees [of attraction] are I have not yet experimentally verified"; and as to his whole proposal: "This I only hint at present", "having my self many other things in hand which I would first compleat, and therefore cannot so well attend it" (i.e.
"prosecuting this Inquiry").[8] It was later on, in writing on 6 January 1679|80 to Newton, that Hooke communicated his "supposition ... that the Attraction always is in a duplicate proportion to the Distance from the Center Reciprocall, and Consequently that the Velocity will be in a subduplicate proportion to the Attraction and Consequently as Kepler Supposes Reciprocall to the Distance."[11] (The inference about the velocity was incorrect.[12]) Hooke's correspondence of 1679-1680 with Newton mentioned not only this inverse square supposition for the decline of attraction with increasing distance, but also, in Hooke's opening letter to Newton, of 24 November 1679, an approach of "compounding the celestial motions of the planetts of a direct motion by the tangent & an attractive motion towards the central body".[13] ### Newton's work and claims[] Newton, faced in May 1686 with Hooke's claim on the inverse square law, denied that Hooke was to be credited as author of the idea. Among the reasons, Newton recalled that the idea had been discussed with Sir Christopher Wren previous to Hooke's 1679 letter.[14] Newton also pointed out and acknowledged prior work of others,[15] including Bullialdus,[16] (who suggested, but without demonstration, that there was an attractive force from the Sun in the inverse square proportion to the distance), and Borelli[17] (who suggested, also without demonstration, that there was a centrifugal tendency in counterbalance with a gravitational attraction towards the Sun so as to make the planets move in ellipses). D T Whiteside has described the contribution to Newton's thinking that came from Borelli's book, a copy of which was in Newton's library at his death.[18] Newton further defended his work by saying that had he first heard of the inverse square proportion from Hooke, he would still have some rights to it in view of his demonstrations of its accuracy. Hooke, without evidence in favor of the supposition, could only guess that the inverse square law was approximately valid at great distances from the center. According to Newton, while the 'Principia' was still at pre-publication stage, there were so many a-priori reasons to doubt the accuracy of the inverse-square law (especially close to an attracting sphere) that "without my (Newton's) Demonstrations, to which Mr Hooke is yet a stranger, it cannot believed by a judicious Philosopher to be any where accurate."[19] This remark refers among other things to Newton's finding, supported by mathematical demonstration, that if the inverse square law applies to tiny particles, then even a large spherically symmetrical mass also attracts masses external to its surface, even close up, exactly as if all its own mass were concentrated at its center. Thus Newton gave a justification, otherwise lacking, for applying the inverse square law to large spherical planetary masses as if they were tiny particles.[20] In addition, Newton had formulated in Propositions 43-45 of Book 1,[21] and associated sections of Book 3, a sensitive test of the accuracy of the inverse square law, in which he showed that only where the law of force is accurately as the inverse square of the distance will the directions of orientation of the planets' orbital ellipses stay constant as they are observed to do apart from small effects attributable to inter-planetary perturbations. 
In regard to evidence that still survives of the earlier history, manuscripts written by Newton in the 1660s show that Newton himself had arrived by 1669 at proofs that in a circular case of planetary motion, 'endeavour to recede' (what was later called centrifugal force) had an inverse-square relation with distance from the center.[22] After his 1679-1680 correspondence with Hooke, Newton adopted the language of inward or centripetal force. According to Newton scholar J. Bruce Brackenridge, although much has been made of the change in language and difference of point of view, as between centrifugal or centripetal forces, the actual computations and proofs remained the same either way. They also involved the combination of tangential and radial displacements, which Newton was making in the 1660s. The lesson offered by Hooke to Newton here, although significant, was one of perspective and did not change the analysis.[23] This background shows there was basis for Newton to deny deriving the inverse square law from Hooke.

### Newton's acknowledgment

On the other hand, Newton did accept and acknowledge, in all editions of the 'Principia', that Hooke (but not exclusively Hooke) had separately appreciated the inverse square law in the solar system. Newton acknowledged Wren, Hooke and Halley in this connection in the Scholium to Proposition 4 in Book 1.[24] Newton also acknowledged to Halley that his correspondence with Hooke in 1679-80 had reawakened his dormant interest in astronomical matters, but that did not mean, according to Newton, that Hooke had told Newton anything new or original: "yet am I not beholden to him for any light into that business but only for the diversion he gave me from my other studies to think on these things & for his dogmaticalness in writing as if he had found the motion in the Ellipsis, which inclined me to try it ..."[15]

### Modern controversy

Since the time of Newton and Hooke, scholarly discussion has also touched on the question of whether Hooke's 1679 mention of 'compounding the motions' provided Newton with something new and valuable, even though that was not a claim actually voiced by Hooke at the time. As described above, Newton's manuscripts of the 1660s do show him actually combining tangential motion with the effects of radially directed force or endeavour, for example in his derivation of the inverse square relation for the circular case. They also show Newton clearly expressing the concept of linear inertia—for which he was indebted to Descartes' work, published in 1644 (as Hooke probably was).[25] These matters do not appear to have been learned by Newton from Hooke. Nevertheless, a number of authors have had more to say about what Newton gained from Hooke and some aspects remain controversial.[26] The fact that most of Hooke's private papers had been destroyed or have disappeared does not help to establish the truth. Newton's role in relation to the inverse square law was not as it has sometimes been represented. He did not claim to think it up as a bare idea.
What Newton did was to show how the inverse-square law of attraction had many necessary mathematical connections with observable features of the motions of bodies in the solar system; and that they were related in such a way that the observational evidence and the mathematical demonstrations, taken together, gave reason to believe that the inverse square law was not just approximately true but exactly true (to the accuracy achievable in Newton's time and for about two centuries afterwards – and with some loose ends of points that could not yet be certainly examined, where the implications of the theory had not yet been adequately identified or calculated).[27][28] About thirty years after Newton's death in 1727, Alexis Clairaut, a mathematical astronomer eminent in his own right in the field of gravitational studies, wrote after reviewing what Hooke published, that "One must not think that this idea ... of Hooke diminishes Newton's glory"; and that "the example of Hooke" serves "to show what a distance there is between a truth that is glimpsed and a truth that is demonstrated".[29][30]

## Bodies with spatial extent

If the bodies in question have spatial extent (rather than being theoretical point masses), then the gravitational force between them is calculated by summing the contributions of the notional point masses which constitute the bodies. In the limit, as the component point masses become "infinitely small", this entails integrating the force (in vector form, see below) over the extents of the two bodies. In this way it can be shown that an object with a spherically-symmetric distribution of mass exerts the same gravitational attraction on external bodies as if all the object's mass were concentrated at a point at its centre.[3] (This is not generally true for non-spherically-symmetrical bodies.)

For points inside a spherically-symmetric distribution of matter, Newton's Shell theorem can be used to find the gravitational force. The theorem tells us how different parts of the mass distribution affect the gravitational force measured at a point located a distance r0 from the center of the mass distribution:[31]

• The portion of the mass that is located at radii r < r0 causes the same force at r0 as if all of the mass enclosed within a sphere of radius r0 was concentrated at the center of the mass distribution (as noted above).

• The portion of the mass that is located at radii r > r0 exerts no net gravitational force at the distance r0 from the center. That is, the individual gravitational forces exerted by the elements of the sphere out there, on the point at r0, cancel each other out.

As a consequence, for example, within a shell of uniform thickness and density there is no net gravitational acceleration anywhere within the hollow sphere. Furthermore, inside a uniform sphere the gravity increases linearly with the distance from the center; the increase due to the additional mass is 1.5 times the decrease due to the larger distance from the center. Thus, if a spherically symmetric body has a uniform core and a uniform mantle with a density that is less than 2/3 of that of the core, then the gravity initially decreases outwardly beyond the boundary, and if the sphere is large enough, further outward the gravity increases again, and eventually it exceeds the gravity at the core/mantle boundary. The gravity of the Earth may be highest at the core/mantle boundary.
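The two bullet points above translate directly into a formula for the field of a uniform ball; the following Python sketch assumes uniform density (as in the "inside a uniform sphere" remark), and the mass and radius in the example loop are rough placeholder values, not data from the article:

```python
G = 6.674e-11   # N m^2 kg^-2

def g_uniform_sphere(r, M, R):
    """Gravitational acceleration at distance r from the centre of a uniform ball
    of mass M and radius R, using the shell theorem described above."""
    if r < R:
        enclosed = M * (r / R) ** 3      # only the mass inside radius r contributes
        return G * enclosed / r**2       # equals G*M*r/R**3, i.e. linear in r
    return G * M / r**2                  # outside the ball: ordinary inverse-square law

# Placeholder values of roughly terrestrial size: M = 6e24 kg, R = 6.4e6 m.
for r in (3.2e6, 6.4e6, 1.28e7):
    print(r, g_uniform_sphere(r, 6e24, 6.4e6))
```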
## Vector form

Field lines drawn for a point mass using 24 field lines

Gravity field surrounding Earth from a macroscopic perspective

Gravity in a room: the curvature of the Earth is negligible at this scale, and the force lines can be approximated as being parallel and pointing straight down to the center of the Earth

Newton's law of universal gravitation can be written as a vector equation to account for the direction of the gravitational force as well as its magnitude. In this formula, quantities in bold represent vectors. $\mathbf{F}_{12} = - G {m_1 m_2 \over {\vert \mathbf{r}_{12} \vert}^2} \, \mathbf{\hat{r}}_{12}$ where F12 is the force applied on object 2 due to object 1, G is the gravitational constant, m1 and m2 are respectively the masses of objects 1 and 2, |r12| = |r2 − r1| is the distance between objects 1 and 2, and $\mathbf{\hat{r}}_{12} \ \stackrel{\mathrm{def}}{=}\ \frac{\mathbf{r}_2 - \mathbf{r}_1}{\vert\mathbf{r}_2 - \mathbf{r}_1\vert}$ is the unit vector from object 1 to 2. It can be seen that the vector form of the equation is the same as the scalar form given earlier, except that F is now a vector quantity, and the right hand side is multiplied by the appropriate unit vector. Also, it can be seen that F12 = −F21.

## Gravitational field

Main article: Gravitational field

The gravitational field is a vector field that describes the gravitational force which would be applied on an object in any given point in space, per unit mass. It is actually equal to the gravitational acceleration at that point. It is a generalization of the vector form, which becomes particularly useful if more than 2 objects are involved (such as a rocket between the Earth and the Moon). For 2 objects (e.g. object 2 is a rocket, object 1 the Earth), we simply write r instead of r12 and m instead of m2 and define the gravitational field g(r) as: $\mathbf g(\mathbf r) = - G {m_1 \over {{\vert \mathbf{r} \vert}^2}} \, \mathbf{\hat{r}}$ so that we can write: $\mathbf{F}( \mathbf r) = m \mathbf g(\mathbf r).$ This formulation is dependent on the objects causing the field. The field has units of acceleration; in SI, this is m/s². Gravitational fields are also conservative; that is, the work done by gravity from one position to another is path-independent. This has the consequence that there exists a gravitational potential field V(r) such that $\mathbf{g}(\mathbf{r}) = - \nabla V( \mathbf r).$ If m1 is a point mass or the mass of a sphere with homogeneous mass distribution, the force field g(r) outside the sphere is isotropic, i.e., depends only on the distance r from the center of the sphere. In that case $V(r) = -G\frac{m_1}{r}.$

## Problematic aspects

Newton's description of gravity is sufficiently accurate for many practical purposes and is therefore widely used.
Deviations from it are small when the dimensionless quantities φ/c² and (v/c)² are both much less than one, where φ is the gravitational potential, v is the velocity of the objects being studied, and c is the speed of light.[32] For example, Newtonian gravity provides an accurate description of the Earth/Sun system, since $\frac{\Phi}{c^2}=\frac{GM_\mathrm{sun}}{r_\mathrm{orbit}c^2} \sim 10^{-8}, \quad \left(\frac{v_\mathrm{Earth}}{c}\right)^2=\left(\frac{2\pi r_\mathrm{orbit}}{(1\ \mathrm{yr})c}\right)^2 \sim 10^{-8}$ where $r_\mathrm{orbit}$ is the radius of the Earth's orbit around the Sun. In situations where either dimensionless parameter is large, general relativity must be used to describe the system. General relativity reduces to Newtonian gravity in the limit of small potential and low velocities, so Newton's law of gravitation is often said to be the low-gravity limit of general relativity.

### Theoretical concerns with Newton's expression

• There is no immediate prospect of identifying the mediator of gravity. Attempts by physicists to identify the relationship between the gravitational force and other known fundamental forces are not yet resolved, although considerable headway has been made over the last 50 years (See: Theory of everything and Standard Model). Newton himself felt that the concept of an inexplicable action at a distance was unsatisfactory (see "Newton's reservations" below), but that there was nothing more that he could do at the time.

• Newton's Theory of Gravitation requires that the gravitational force be transmitted instantaneously. Given the classical assumptions of the nature of space and time before the development of General Relativity, a significant propagation delay in gravity leads to unstable planetary and stellar orbits.

### Observations conflicting with Newton's formula

• Newton's Theory does not fully explain the precession of the perihelion of the orbits of the planets, especially of planet Mercury, which was detected long after the life of Newton.[33] There is a 43 arcsecond per century discrepancy between the Newtonian calculation, which arises only from the gravitational attractions from the other planets, and the observed precession, made with advanced telescopes during the 19th Century.

• The predicted angular deflection of light rays by gravity that is calculated by using Newton's Theory is only one-half of the deflection that is actually observed by astronomers. Calculations using General Relativity are in much closer agreement with the astronomical observations.

The observed fact that the gravitational mass and the inertial mass are the same for all objects is unexplained within Newton's Theories. General Relativity takes this as a basic principle. See the Equivalence Principle. In point of fact, the experiments of Galileo Galilei, decades before Newton, established that objects that have the same air or fluid resistance are accelerated by the force of the Earth's gravity equally, regardless of their different inertial masses. Yet, the forces and energies that are required to accelerate various masses are completely dependent upon their different inertial masses, as can be seen from Newton's Second Law of Motion, F = ma. The problem is that Newton's Theories and his mathematical formulas explain and permit the (inaccurate) calculation of the effects of the precession of the perihelions of the orbits and the deflection of light rays.
However, they did not and do not explain the equivalence of the behavior of various masses under the influence of gravity, independent of the quantities of matter involved.

### Newton's reservations

While Newton was able to formulate his law of gravity in his monumental work, he was deeply uncomfortable with the notion of "action at a distance" which his equations implied. In 1692, in his third letter to Bentley, he wrote: "That one body may act upon another at a distance through a vacuum without the mediation of anything else, by and through which their action and force may be conveyed from one another, is to me so great an absurdity that, I believe, no man who has in philosophic matters a competent faculty of thinking could ever fall into it." He never, in his words, "assigned the cause of this power". In all other cases, he used the phenomenon of motion to explain the origin of various forces acting on bodies, but in the case of gravity, he was unable to experimentally identify the motion that produces the force of gravity (although he invented two mechanical hypotheses in 1675 and 1717). Moreover, he refused to even offer a hypothesis as to the cause of this force on grounds that to do so was contrary to sound science. He lamented that "philosophers have hitherto attempted the search of nature in vain" for the source of the gravitational force, as he was convinced "by many reasons" that there were "causes hitherto unknown" that were fundamental to all the "phenomena of nature". These fundamental phenomena are still under investigation and, though hypotheses abound, the definitive answer has yet to be found. And in Newton's 1713 General Scholium in the second edition of Principia: "I have not yet been able to discover the cause of these properties of gravity from phenomena and I feign no hypotheses... It is enough that gravity does really exist and acts according to the laws I have explained, and that it abundantly serves to account for all the motions of celestial bodies."[34]

### Einstein's solution

These objections were explained by Einstein's theory of general relativity, in which gravitation is an attribute of curved spacetime instead of being due to a force propagated between bodies. In Einstein's theory, energy and momentum distort spacetime in their vicinity, and other particles move in trajectories determined by the geometry of spacetime. This allowed a description of the motions of light and mass that was consistent with all available observations. In general relativity, the gravitational force is a fictitious force due to the curvature of spacetime, because the gravitational acceleration of a body in free fall is due to its world line being a geodesic of spacetime.

## Extensions

Newton was the first to consider in his Principia an extended expression of his law of gravity including an inverse-cube term attempting to explain the Moon's apsidal motion.

## See also

• Gauss's law for gravity
• Kepler orbit, the analysis of Newton's laws as it applies to orbits
• Newton's cannonball
• Newton's laws of motion
• Static forces and virtual-particle exchange

## Notes

1. Walter Lewin (October 4, 1999) (in English) (ogg). Work, Energy, and Universal Gravitation. MIT Course 8.01: Classical Mechanics, Lecture 11. (videotape). Cambridge, MA USA: MIT OCW. Event occurs at 1:21-10:10. Retrieved December 23, 2010.
2. Proposition 75, Theorem 35: p.956 - I. Bernard Cohen and Anne Whitman, translators: Isaac Newton, The Principia: Mathematical Principles of Natural Philosophy. Preceded by A Guide to Newton's Principia, by I. Bernard Cohen. University of California Press 1999. ISBN 0-520-08816-6, ISBN 0-520-08817-4.
3. Mohr, Peter J.; Taylor, Barry N.; Newell, David B. (2008). "CODATA Recommended Values of the Fundamental Physical Constants: 2006". 80 (2): 633–730. Bibcode:2008RvMP...80..633M. doi:10.1103/RevModPhys.80.633. Direct link to value.
4. H W Turnbull (ed.), Correspondence of Isaac Newton, Vol 2 (1676-1687), (Cambridge University Press, 1960), giving the Halley-Newton correspondence of May to July 1686 about Hooke's claims at pp.431-448, see particularly page 431.
5.
6. Purrington, Robert D. (2009). The First Professional Scientist: Robert Hooke and the Royal Society of London. Springer. p. 168. ISBN 3-0346-0036-4. Extract of page 168.
7. See page 239 in Curtis Wilson (1989), "The Newtonian achievement in astronomy", ch.13 (pages 233-274) in "Planetary astronomy from the Renaissance to the rise of astrophysics: 2A: Tycho Brahe to Newton", CUP 1989.
8. Page 309 in H W Turnbull (ed.), Correspondence of Isaac Newton, Vol 2 (1676-1687), (Cambridge University Press, 1960), document #239.
9. See Curtis Wilson (1989) at page 244.
10. Page 297 in H W Turnbull (ed.), Correspondence of Isaac Newton, Vol 2 (1676-1687), (Cambridge University Press, 1960), document #235, 24 November 1679.
11. Page 433 in H W Turnbull (ed.), Correspondence of Isaac Newton, Vol 2 (1676-1687), (Cambridge University Press, 1960), document #286, 27 May 1686.
12. Pages 435-440 in H W Turnbull (ed.), Correspondence of Isaac Newton, Vol 2 (1676-1687), (Cambridge University Press, 1960), document #288, 20 June 1686.
13. Bullialdus (Ismael Bouillau) (1645), "Astronomia philolaica", Paris, 1645.
14. Borelli, G. A., "Theoricae Mediceorum Planetarum ex causis physicis deductae", Florence, 1666.
15. D T Whiteside, "Before the Principia: the maturing of Newton's thoughts on dynamical astronomy, 1664-1684", Journal for the History of Astronomy, i (1970), pages 5-19; especially at page 13.
16. Page 436, Correspondence, Vol.2, already cited.
17. D T Whiteside, "The pre-history of the 'Principia' from 1664 to 1686", Notes and Records of the Royal Society of London, 45 (1991), pages 11-61; especially at 13-20.
18. See page 10 in D T Whiteside, "Before the Principia: the maturing of Newton's thoughts on dynamical astronomy, 1664-1684", Journal for the History of Astronomy, i (1970), pages 5-19.
19. Discussion points can be seen for example in the following papers: N Guicciardini, "Reconsidering the Hooke-Newton debate on Gravitation: Recent Results", in Early Science and Medicine, 10 (2005), 511-517; Ofer Gal, "The Invention of Celestial Mechanics", in Early Science and Medicine, 10 (2005), 529-534; M Nauenberg, "Hooke's and Newton's Contributions to the Early Development of Orbital mechanics and Universal Gravitation", in Early Science and Medicine, 10 (2005), 518-528.
20. See for example the results of Propositions 43-45 and 70-75 in Book 1, cited above.
21. The second extract is quoted and translated in W.W. Rouse Ball, "An Essay on Newton's 'Principia'" (London and New York: Macmillan, 1893), at page 69.
22. The original statements by Clairaut (in French) are found (with orthography here as in the original) in "Explication abregée du systême du monde, et explication des principaux phénomenes astronomiques tirée des Principes de M. Newton" (1759), at Introduction (section IX), page 6: "Il ne faut pas croire que cette idée ... de Hook diminue la gloire de M. Newton", [and] "L'exemple de Hook" [serve] "à faire voir quelle distance il y a entre une vérité entrevue & une vérité démontrée".
23. Misner, Charles W.; Thorne, Kip S.; Wheeler, John Archibald (1973). Gravitation. New York: W. H. Freeman and Company. ISBN 0-7167-0344-0. Page 1049.
24. Max Born (1924), Einstein's Theory of Relativity (The 1962 Dover edition, page 348 lists a table documenting the observed and calculated values for the precession of the perihelion of Mercury, Venus, and the Earth.)
25. The Construction of Modern Science: Mechanisms and Mechanics, by Richard S. Westfall. Cambridge University Press. 1978.

Content in this section is authored by an open community of volunteers and is not produced by, reviewed by, or in any way affiliated with MedLibrary.org. Licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License, using material from the Wikipedia article on "Newton's law of universal gravitation", available in its original form here: http://en.wikipedia.org/w/index.php?title=Newton's_law_of_universal_gravitation
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 8, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9414095282554626, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/109456?sort=votes
## Are extensions of linear algebraic groups (over a field) themselves linear algebraic?

The title says it all. A very similar question was asked and answered about linear groups, but none of the counterexamples are algebraic: http://mathoverflow.net/questions/22814/are-extensions-of-linear-groups-linear

If $A$, $B$ are affine and there is a rational section of $C \to A$ in $1 \to B \to C \to A \to 1$, then $C \to A$ is affine, so $C$ is affine. But if not?

- @Michael: Do you mean that in your exact sequence, $B$ and $A$ are affine algebraic groups, $C$ is merely algebraic, and you are asking if $C$ is affine? Or do you only mean that $C$ is an abstract group? Also, do you make any assumptions about the field? (If the field is perfect and $C$ is algebraic then the answer is positive; if $C$ is not required to be algebraic then the answer is negative in the case when the field is ${\mathbb R}$.) – Misha Oct 12 at 13:29
- What you said first. An extension in the category of algebraic groups. – Michael Thaddeus Oct 12 at 13:33
- Sure. If $1 \rightarrow G' \rightarrow G \rightarrow G'' \rightarrow 1$ is a short exact sequence of fppf group sheaves over a scheme $S$ with $G''$ representable and $G'$ $S$-affine and fppf over $S$, then $G$ is representable and $G \rightarrow G''$ is affine and fppf (so $G$ is $S$-affine if $G''$ is; same for fppf). This is proved by identifying $G$ as a $G'$-torsor sheaf over $G''$ for the fppf topology (sheaf quotient maps have "local" sections!) and using effectivity of fppf descent for affine morphisms. It is explained in Oort's LNM book on commutative (!) group schemes. – grp Oct 12 at 13:50

## 1 Answer

Yes. The point is that $C$ is a $B$-torsor over $A$. Since being affine is a local property in the fpqc topology, $C$ is affine over $A$.

Sorry, I had not noticed grp's comment, or I wouldn't have posted an answer. As to why there are local sections, well, to me that's by the definition of an extension. Alternatively, assuming you are over a field, the injectivity of $A \to B$ means, I suppose, that $A$ is an embedding of algebraic groups. This defines a free action of $A$ on $C$; take the quotient $B/A$ (as an fppf sheaf, or étale, if $A$ is smooth); the projection $B \to B/A$ has local sections, by construction. It's a basic result that $B/A$ is represented by a group scheme. Then the exactness of the sequence should mean that $B \to C$ induces an isomorphism of $B/A$ with $C$. If you are over an algebraically closed field of characteristic 0, exactness of the sequence can be checked, in fact, at the level of closed points.

- Very good, but that's basically the same argument I briefly indicated, only with the fppf or fpqc topology replacing the Zariski topology. My question for you, and for commenter grp above, is why do sections exist in the fppf or fpqc topologies? That is, why is a group extension a torsor for such topologies? I'll see if I can find this in Oort's book. – Michael Thaddeus Oct 12 at 15:18
- PS: sorry, of course I meant local sections. – Michael Thaddeus Oct 12 at 15:19
- @Michael: I assumed (and probably Angelo did too) that you know the equivalence of several equivalent definitions of "group extension" (without which it is hard to work with this concept in a nice way). What definition are you using (especially if you aren't assuming smoothness of the groups)? – grp Oct 12 at 16:08
- The map $C\to A$ is itself fppf (if your algebraic groups are reduced, flatness follows from generic flatness and homogeneity; if they are not necessarily reduced, flatness should be somehow part of the definition of being surjective). Then you can base change $C\to A$ by the map $C\to A$ itself: here you have a tautological section. – Olivier Benoist Oct 12 at 16:14
- @Olivier: Since Michael seems to be working over a general field, one should say "smooth" rather than "reduced". As you know, over any imperfect field there are reduced linear algebraic groups that are not smooth, and their relative Frobenius morphism is a finite surjective homomorphism which is not flat. Also, I see your intent by saying "flatness should somehow be part of the definition of being surjective", but this seems a bit risky since surjective has its own useful (ordinary) meaning for scheme maps. However, "fppf" requires fewer letters than "surjective", so using French solves it. :) – grp Oct 12 at 17:36
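An editorial aside, not part of the quoted thread: the "tautological section" in the last two comments can be written out explicitly. Writing $\pi\colon C \to A$ for the quotient map of the question's sequence $1 \to B \to C \to A \to 1$ (the name $\pi$ is introduced here, not in the thread), and granting that $\pi$ is fppf, as discussed above, one has

$$C \times_A C \;\xrightarrow{\ \sim\ }\; B \times C, \qquad (c_1, c_2) \longmapsto \bigl(c_1 c_2^{-1},\, c_2\bigr),$$

so after base change along $\pi$ itself the map $C \to A$ acquires the section $c \mapsto (c, c)$ and becomes the trivial $B$-torsor. Since $B$ is affine and affineness descends along fppf (indeed fpqc) covers, $C \to A$ is affine, and $C$ is affine whenever $A$ is.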
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 47, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9458792805671692, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/tagged/differential-geometry+tensors
# Tagged Questions

### Curvature and spacetime (0 answers, 67 views)
Suppose that it is given that the Riemann curvature tensor in a special kind of spacetime of dimension $d\geq2$ can be written as $$R_{abcd}=k(x^a)(g_{ac}g_{bd}-g_{ad}g_{bc})$$ where $x^a$ is a ...

### Ricci identity/Riemann curvature tensor and covectors (1 answer, 112 views)
Can somebody please explain to me how the following statement is true? The Riemann curvature tensor $R^c_{dab}$ is given by the Ricci identity $(\nabla_a\nabla_b-\nabla_b\nabla_a)V^c\equiv$ ...

### Do partial derivatives commute on tensors? (1 answer, 128 views)
For example; is $$\partial_{\rho}\partial_{\sigma}h_{\mu\nu} - \partial_{\sigma}\partial_{\rho}h_{\mu\nu}=0$$ correct?

### Difference between $\partial$ and $\nabla$ in general relativity (1 answer, 173 views)
I read a lot in Road to Reality, so I think I might use some general relativity terms where I should only special ones. In our lectures we just had $\partial_\mu$ which would have the plain partial ...

### Expectation of 2-form field $B_{MN}$ in string theory (0 answers, 54 views)
In the context of string theory, in particular when we're dealing with a low energy effective action, if we have an effective action of the form: $S_{eff} \sim S^{(0)} + \alpha S^{(1)} + (\alpha)^2$ ...

### Diffeomorphisms, Isometries And General Relativity (1 answer, 249 views)
Apologies if this question is too naive, but it strikes at the heart of something that's been bothering me for a while. Under a diffeomorphism $\phi$ we can push forward an arbitrary tensor field $F$ ...

### Tensor Introduction (1 answer, 164 views)
I have recently started learning about tensors during my course on Special Relativity. I am struggling to gain an intuitive idea for invariant, contravariant and covariant quantities. In my book, ...

### What is a tensor? (6 answers, 846 views)
I have a pretty good knowledge of physics but couldn't understand what a tensor is. I just couldn't understand it, and the wiki page is very hard to understand as well. Can someone refer me to a good ...

### From Manifold to Manifold? (3 answers, 129 views)
Tensor equations are supposed to stay invariant in form wrt coordinate transformations where the metric is preserved. It is important to take note of the fact that invariance in form of the tensor ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9326726198196411, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-statistics/180855-using-discrete-continuous-rvs-distribution.html
# Thread:

1. ## Using discrete and continuous rvs in a distribution

Suppose that W, the amount of moisture in the air on a given day, is a gamma random variable with parameters $(t, \beta)$. Suppose also that, given that $W = w$, the number of accidents during that day - call it N - has a Poisson distribution with mean $w$. Show that the conditional distribution of W given that N = n is the gamma distribution with parameters $(t+n, \beta +\sum_{i = 1}^n x_i)$.

I would like some help to write the formula for the second supposition, as it goes from a continuous rv to a discrete one. Thanks

2. Originally Posted by FGT12 (quoting post 1)

So we know the distribution of W and N|W. To get the distribution of W|N note that $f_{W|N}(w|n) \propto f_{W, N} (w, n) = f_{N|W} (n|w) f_W (w)$. You should be able to recognize the RHS as the kernel of a gamma with the appropriate parameters.

3. When I evaluate the RHS I get $\frac{e^{-(n+\beta)w} w^{x_1+...+x_n} (\beta)^t }{x_1!x_2!...x_n!\Gamma(t) }$. I do not see how to go further with this question.

4. That shouldn't be what you get when you evaluate the RHS. Come to think of it, you didn't even define what $x_i$ is... I think what you intended was that $X_i | W$ is distributed Poisson with mean W, $i = 1, ..., n$, and you want the distribution of $W| X_1, ..., X_n$. Can you post the question exactly as it is written? Among other things that don't make sense, if you interpret the question as I wrote it above you get $(t + \sum x_i, \beta + n)$ and not $(t + n, \beta + \sum x_i)$.

5. Question: Let $W$ be a gamma random variable with parameters $(t, \beta )$, and suppose that conditional on $W = w$, $X_1, X_2, ..., X_n$ are independent exponential random variables with rate $w$. Show that the conditional distribution of $W$ given that $X_1=x_1, X_2=x_2,..., X_n=x_n$ is gamma with parameters $(t+n, \beta + \sum_{i=1}^{n}x_i )$.

6. Originally Posted by FGT12 (quoting post 5)

That is false; it should be that $W|X = x$ is $(t + \sum x_i, \beta + n)$.

$\displaystyle f_{W|X} (w|x) \propto f_{X|W} (x|w) f_W (w)$

$\displaystyle= \left(\prod_{i = 1} ^ n \frac{w^{x_i} e^{-w}}{x_i !}\right) \frac{\beta^t}{\Gamma(t)} w^{t - 1} e^{-\beta w} = \frac{\beta^t}{\Gamma(t) \prod_{i = 1} ^ n x_i !} w^{t + \sum x_i - 1} e^{-(\beta + n) w}$

$\displaystyle\propto w^{t + \sum x_i - 1} e^{-(\beta + n)w}$

(valid for positive w), which is the kernel of a Gamma $(t + \sum x_i, \beta + n)$.

7. What exactly do you mean by the kernel in this instance? How can I recognise other kernels for other distributions?

And is there no way of getting to $f(w)=\frac{(\beta+n)e^{-(\beta+n)w}((\beta+n)w)^{t+\sum x_i -1}}{\Gamma(t+\sum x_i)}$? Could we integrate between minus infinity and infinity so that it equals one?

8. When I say the "kernel" I mean the part of the density that matters, i.e. everything but a normalizing constant (in this case, the $x_i$ are considered fixed, so you can get rid of anything that is only a function of the $x_i$ as well as any other fixed constants). You can retrieve the normalizing constant because the density must integrate to 1. If you can show that a pdf is proportional to the kernel of something you know, then you know what the pdf is, because you can get the normalizing constant by integrating the kernel. The technique I used is nice because it saves you from having to calculate the marginal of X, which requires integration. To give another example, here are a couple of useful kernels for the normal distribution: $e^{\frac {-1} {2 \sigma^2} (x - \mu)^2}$ as well as $e^{\frac{-1}{2\sigma^2}(x^2 - 2x\mu)}$.

9. ## Re: Using discrete and continuous rvs in a distribution

So I'm working on this same problem and wondering, how is the claim of proportionality that you use here justified? I thought that it would be $f_{W|X}(w|x) P_{X}(x) = f_{X|W}(x|w)P_{W}(w)$, so that when you divide, you're not dividing by a constant but instead dividing by a function of $x$.

10. ## Re: Using discrete and continuous rvs in a distribution

Yeah, that's fine. For finding the law of W|X you can think of all the stuff on the right side of the conditioning bar as being constants when you do any proportionality stuff.

11. ## Re: Using discrete and continuous rvs in a distribution

Oh right, duh, the thing we are to prove is that the resulting distribution has parameters which are themselves functions of $x_{i}$! Making sense now, thank you!

12. ## Re: Using discrete and continuous rvs in a distribution

Okay, I lied, I've been off-and-on staring at this some more and I'm back to not really getting it. I intuitively understand the idea of how, conditional on $X$, things in terms of $X$ are like a constant, but I'm not sure how to make rigorous use of that idea. Here's an outline of what I've done, followed by a more detailed description if it's helpful. By some simple algebraic manipulation and Bayes's Law, I get $P_{W|X}(w|x) = \frac{P_{X|W}(x|w)P_{W}(w)}{P_{X}(x)}$, where the expressions in the numerator are described in the assumptions of the problem. From that, I combine expressions with a base of $w$ and with a base of $e$. The result is $w^{t+n-1}e^{-w(\beta +\sum x_{i})}$, which seems to me the (as you call it) "kernel" of a gamma distribution with parameters $t+n, \, \, \beta+\sum x_{i}$. Now I know that you earlier said this cannot be right, and maybe that's why I'm running into problems; however, I'm not seeing how what I've done is wrong or how anything else could work. But as a result of so organizing my terms, my "coefficient" is now $\frac{\beta^{t}}{\Gamma (t)P_{X}(x)}$. For this to truly be a gamma distribution in those parameters, I need my coefficient to be $\frac{\Big( \beta+\sum x_{i} \Big)^{t+n}}{\Gamma(t+n)}$. So how do I do this? I don't have freedom to choose what any of the terms are, so it doesn't seem like I am able to compensate for this difference by assigning some value to a constant coefficient or anything like that.

A more detailed derivation of the expression that I ultimately obtain:

$$P(X_{1}=x_{1}, ..., X_{n}=x_{n}|W=w)\,P(W=w) = P(W=w|X_{1}=x_{1}, ..., X_{n}=x_{n})\,P(X_{1}=x_{1}, ..., X_{n}=x_{n})$$

$$\Longrightarrow\quad P(W=w|X_{1}=x_{1}, ..., X_{n}=x_{n}) = \frac{P(X_{1}=x_{1}, ..., X_{n}=x_{n}|W=w)\,P(W=w)}{P(X_{1}=x_{1}, ..., X_{n}=x_{n})}$$

$$= \frac{w^{n}e^{-w(x_{1}+...+x_{n})}\,\frac{\beta^{t}}{\Gamma (t)}\,w^{t-1}e^{-\beta w}}{P(X_{1}=x_{1}, ..., X_{n}=x_{n})} = \frac{\beta^{t}}{\Gamma (t)\, P(X_{1}=x_{1}, ..., X_{n}=x_{n})}\,w^{t+n-1}e^{-w(\beta + \sum_{i=1}^{n}x_{i})}$$

13. ## Re: Using discrete and continuous rvs in a distribution

By the way, I just noticed that, in your earlier statement, you were using the Poisson distribution, which is what the original poster had originally posted. However, in the original poster's reply (time-stamped May 18th 8:51 AM), when he wrote exactly what the problem was asking, he wrote the exponential distribution. So really, this problem should be about each $X_{i}|W$ being exponential with rate $w$.
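A short editorial sketch, not part of the quoted thread: the exponential-likelihood statement (post 5, confirmed in post 13) is easy to check numerically by normalizing likelihood times prior on a grid and comparing against the claimed Gamma$(t+n,\ \beta+\sum_i x_i)$ density. All parameter values and the grid choice below are illustrative assumptions, not taken from the thread.

```python
# Numerical check: prior W ~ Gamma(shape=t, rate=beta),
# likelihood X_i | W = w ~ Exponential(rate=w), i = 1..n,
# claimed posterior W | X = x ~ Gamma(shape=t + n, rate=beta + sum(x)).
import numpy as np
from scipy import stats

t, beta = 2.5, 1.3                     # illustrative prior shape and rate
x = np.array([0.4, 1.7, 0.9, 2.2])     # an illustrative observed sample
n = len(x)

w = np.linspace(1e-6, 40.0, 400_000)   # grid covering essentially all posterior mass
dw = w[1] - w[0]

# Unnormalized posterior on the grid: likelihood * prior.
log_lik = n * np.log(w) - w * x.sum()                      # sum of Exponential(w) log-pdfs
log_prior = stats.gamma.logpdf(w, a=t, scale=1.0 / beta)   # Gamma(t, rate beta) prior
post = np.exp(log_lik + log_prior)
post /= post.sum() * dw                                    # normalize numerically

# Claimed closed form from the thread (post 12 / post 13).
claimed = stats.gamma.pdf(w, a=t + n, scale=1.0 / (beta + x.sum()))

print(np.max(np.abs(post - claimed)))  # should be tiny (quadrature error only)
```

Swapping the likelihood for the Poisson pmf used in post 6 and rerunning the same check should instead reproduce the Gamma$(t+\sum_i x_i,\ \beta+n)$ parameters, which is exactly the mix-up post 13 points out.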
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 54, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9540112018585205, "perplexity_flag": "head"}