url
stringlengths
17
172
text
stringlengths
44
1.14M
metadata
stringlengths
820
832
http://mathhelpforum.com/advanced-algebra/29864-inverse-integer-modulo-m.html
# Thread: 1. ## Inverse of an integer modulo m Hi, I have a test tomorrow, and I have been trying for a very long time to figure out how to do this. Thank you in advance for any help you can give me. This is the question: Find the inverse of 5 modulo 16. Please explain how to do this in steps. I already know the answer, I just don't know how to get it. Thanks 2. Originally Posted by lisaak Hi, I have a test tomorrow, and I have been trying for a very long time to figure out how to do this. Thank you in advance for any help you can give me. This is the question: Find the inverse of 5 modulo 16. Please explain how to do this in steps. I already know the answer, I just don't know how to get it. Thanks First of all, 5 and 16 are relatively prime so 5 does have a multiplicative inverse mod 16. By definition, you need to find the integer value of a such that 5a = 1 modulo 16. One quick way here is to see that $a = \frac{16n+1}{5}$ where n is an integer. It's not hard to see that n = 4 works and so a = 13. 3. Originally Posted by mr fantastic First of all, 5 and 16 are relatively prime so 5 does have a multiplicative inverse mod 16. By definition, you need to find the integer value of a such that 5a = 1 modulo 16. One quick way here is to see that $a = \frac{16n+1}{5}$ where n is an integer. It's not hard to see that n = 4 works and so a = 13. And I guess you saw that n = -1 works too => a = -3. The multipicative inverse is not unique. 4. Okay, so what if the numbers are larger? How do I find the inverse of say 40 mod 81? Because this one is not easy to see as in the previous example. a= (81n+1)/40 5. alright, here is what I'm thinking: since 40x is congruent to 1 mod 81 40x-1=81y (y in Z) 40x+81z=1 (where z is -y) So working the Euclidean Algorithm backwards I can express the equation as 1=81-40*2 So, 40(-2) is congruent to 1 mod 81 81+(-2)=79 Thus, 79 is the multiplicative inverse. Does this make sense? 6. Originally Posted by lisaak alright, here is what I'm thinking: since 40x is congruent to 1 mod 81 40x-1=81y (y in Z) 40x+81z=1 (where z is -y) So working the Euclidean Algorithm backwards I can express the equation as 1=81-40*2 So, 40(-2) is congruent to 1 mod 81 81+(-2)=79 Thus, 79 is the multiplicative inverse. Does this make sense? Well you got a correct answer so it looks good to me!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9404527544975281, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/tagged/topology
# Tagged Questions The topology tag has no wiki summary. 1answer 52 views ### Topological vs. non-topological noetherian charges What (if any) is the relationship between the conserved (non-topological) noetherian charges and topological charges? Namely, is there any "generalization" of the Noether's first theorem that includes ... 2answers 161 views ### Excluding big bang itself, does spacetime have a boundary? My understanding of big bang cosmology and General Relativity is that both matter and spacetime emerged together (I'm not considering time zero where there was a singularity). Does this mean that ... 1answer 131 views ### Our Universe Can't be Looped? [duplicate] With reference to the Twin-Paradox (I am new with this), now information of who has actually aged comes from the fact that one of the twins felt some acceleration. So if universe was like a loop, and ... 1answer 61 views ### Consequences of Compactness in Physics If we understand spacetime as a $4$-dimensional manifold $M$, from the point of view of physics what are the consquences of a subset of it being compact? My point here is simple: in math we usually ... 2answers 81 views ### Are all points in the universe connected? Is it true that every point in the universe is connected or could be so theoretically? If so how is this mediated? Is it through the quantum nature of the fabric of space or is it through the ... 0answers 59 views ### Do we expect that the universe is simply-connected? [duplicate] I heard recently that the universe is expected to be essentially flat. If this is true, I believe this means (by the 3d Poincare conjecture) that the universe cannot be simply-connected, since the ... 1answer 45 views ### Topological phase transitions - breaking of continuous translational invariance [closed] I'm relatively new to the theoretical side of physics. I have a question about topology, continuous symmetry breaking and phase transitions. Your help is much appreciated! Ok so I have an infinite ... 0answers 113 views ### Tangent bundles and $\mathbb{C}P^n$ and $\mathbb{C}^n$ As discussed here the complex projective space $\mathbb{C}P^n$ is the set of all lines on $\mathbb{C}^n$ passing through the origin. It would seem natural to assume that any $\mathbb{C}P^n$ can be ... 1answer 104 views ### Topology for physicists [duplicate] Which are the best introductory books for topology, algebraic geometry, manifolds etc, needed for string theory? 1answer 102 views ### How is the direction of time determined in general relativity? In special relativity every frame has its own unique time axis, represented in Minkowski diagrams by a fan-out of time vectors that grows infinitely dense as you approach the surface of the light cone ... 1answer 176 views ### Chiral edge state as topological properity of bulk state As far as I know, quantum hall effect and quantum spin hall effect has chiral edge state. Chiral edge state is usually closely related with delocalization, since back scattering is forbidden. However, ... 1answer 64 views ### Proof of quantization of magnetic charge of monopoles using homotopy groups Suppose we place a monopole at the origin $\{{\bf 0}\}$, and the gauge field is well-definded in region $\mathbb R^3-\{0\}$ which is homomorphic to a sphere $S^2$. Then the total manifold is $U(1)$ ... 1answer 205 views ### Quantum dimension in topological entanglement entropy In 2D the entanglement entropy of a simply connected region goes like \begin{align} S_L \to \alpha L - \gamma + \cdots, \end{align} where $\gamma$ is the topological entanglement entropy. $\gamma$ is ... 1answer 230 views ### First Chern number, monoples and quantum Hall states The first Chern number $\cal C$ is known to be related to various physical objects. Gauge fields are known as connections of some principle bundles. In particular, principle $U(1)$ bundle is said to ... 1answer 82 views ### Gauss-Bonnet theorem in the Hawking/Ellis book At the page 336 of Hawking, Ellis: The Large Scale Structure of Space-Time, the Gauss-Bonnet theorem is stated as $$\int_H \hat{R}\ d\hat{S} = 2\pi \chi(H) \qquad (1)$$ with \hat{R} = R_{abcd} ... 0answers 115 views ### Is it mathematically possible or topologically allowable for cutouts, or cavities, to exist in a 3-manifold? A few weeks back, I posted a related question, Could metric expansion create holes, or cavities in the fabric of spacetime?, asking if metric stretching could create cutouts in the spacetime manifold. ... 1answer 240 views ### Questions about Thouless-Kohmoto-Nightingale-den Nijs (TKNN) paper I am reading the famous and concise Thouless-Kohmoto-Nightingale-den Nijs (TKNN) paper Quantized Hall Conductance in a Two-Dimensional Periodic Potential, Phys. Rev. Lett. 49, 405–408 (1982), where I ... 2answers 209 views ### Topology and Quantum mechanics I have a very simple question. Can we know about the topology of the underlying space-time manifolds from Quantum mechanics calculations? If the Space-time is not simply connected, how can one measure ... 1answer 168 views ### What is the simplest possible topological Bloch function? Kohmoto (1985) pointed out in Topological Invariant and the Quantization of the Hall Conductance how TKNN's calcuation of Hall conducance is related to topology, in which topologically nontriviality ... 0answers 89 views ### Alternate geodesic completions of a Schwarzschild black hole The Kruskal-Szekeres solution extends the exterior Schwarzschild solution maximally, so that every geodesic not contacting a curvature singularity can be extended arbitrarily far in either direction. ... 1answer 209 views +300 ### Does a charged or rotating black hole change the genus of spacetime? For a Reissner–Nordström or Kerr black hole there is an analytic continuation through the event horizon and back out. Assuming this is physically meaningful (various site members hereabouts think ... 3answers 259 views ### Could metric expansion create holes, or cavities in the fabric of spacetime? Is it possible for metric expansion to create holes, or cavities in the fabric of spacetime? According to the Schwarzschild metric, the metric expansion of space around a black hole goes to infinity ... 1answer 80 views ### Proper times of two observers in a three-torus Consider two observer in a tree-torus space of size $L$. Observer $A$ is at rest, while observer $B$ moves in the $x$-direction with constant velocity $v$. $A$ and $B$ began at the same event, and ... 3answers 338 views ### Homotopy $\pi_4(SU(2))=\mathbb{Z}_2$ Recently I read a paper using $$\pi_4(SU(2))=\mathbb{Z}_2.$$ Do you have any visualization or explanation of this result? More generally, how do physicists understand or calculate high dimension ... 0answers 141 views ### Lagrangian for Goldstone mode + topological excitation The XY-model Hamiltonian is the following, $${\cal H}~=~-J\sum_{\langle i,j\rangle} \cos (\theta_i -\theta_j).$$ The Goldstone mode corresponds to term $(\nabla \theta)^2$ in the effective ... 2answers 415 views ### How is the topological $Z_2$ invariant related to the Chern number? (e.g. for a topological insulator) This question relates to the $Z_2$ invariant defined e.g. for topological insulators: Is it correct to relate $Z_2$ = 1 to an odd Chern number and $Z_2$ = 0 to an even Chern number? If yes, is it ... 1answer 317 views ### Chern number in condensed matter physics In mathematics, the Chern number is defined in terms of the Chern class of a manifold. What is the exact definition of Chern number in condensed matter physics, i.e. quantum hall system? 2answers 262 views ### (Co)homology of the universe In this post let $U$ be the universe considered as a manifold. From what I gather we don't really have any firm evidence whether the universe is closed or open. The evidence seems to point towards it ... 1answer 171 views ### Is a preferred reference frame of the universe the old aether? About two years ago I posted a question about a symmetrical twin paradox: Here. Recently a new answer was posted and an intense discussion ensued: Here. One of the points discussed concerns a ... 0answers 140 views ### 7 sphere, is there any physical interpretation of exotic spheres? Basically an exotic sphere is topologically a sphere, but doesn't look like a one. Or more accurately: homeomorphic but not diffeomorphic to the standard Euclidean n-sphere The first exotic ... 5answers 127 views ### Why is the world sheet of an open string a cylinder? I went to a lecture a few weeks ago and was told the following: The world sheet of a closed string is a normal, standing cylinder. The world sheet of an open string is a cylinder on its side. This ... 1answer 129 views ### Can closed loops evade the spin-statistic theorem in 3 dimensions? The famous spin-statistics result asserts that there are only bosons and fermions, and that they have integer and integer-and-a-half spin respectively. In two-dimensional condensed matter systems, ... 1answer 171 views ### Large gauge transformations I would like to understand what is the importance of large gauge transformations. I read that these gauge transformation cannot be deformed to the identity, but why should we care about that? 2answers 124 views ### Graph Invariants and Statistical Mechanics Many intuitive knot invariants including Jones' polynomial are inspired by statistical mechanics. Further profound connections have been explored between knot theory and statistical mechanics. I was ... 1answer 282 views ### What is topological degeneracy in condensed matter physics? What is topological degeneracy in strongly correlated systems such as FQH? What is the difference between topological degeneracy and ordinary degeneracy? Why is topological degeneracy important for ... 1answer 99 views ### Why are topological solitons present in some phases for lattice models? Over a spatial continuum, it is easy to see why some topological solitons like vortices and monopoles have to be stable. For similar reasons, Skyrmions also have to be stable, with a conserved ... 2answers 126 views ### Is a compact universe consistent with the results of (for example) the Michelson-Morley experiment? If the universe is compact then there is a twin paradox that is resolvable only by selecting a preferred inertial reference frame (arXiv). I was under the impression that the lack of a preferred ... 2answers 361 views ### Aharonov-Bohm Effect and Flux Quantization in superconductors Why is the magnetic flux not quantized in a standard Aharonov-Bohm (infinite) solenoid setup, whereas in a superconductor setting, flux is quantized? 2answers 246 views ### On Aharonov–Bohm effect Aharonov–Bohm effect in brief is due to some singularities in space. In books it's infinite solenoid most of the time, which makes some regions of space not simply connected. What intrigues me is the ... 1answer 98 views ### how does nature prevent transient toroidal event horizons? .. and does it really need to? Steps to construct a (transient) toroidal event horizon in a asymptotically flat Minkowski spacetime: 1) take a circle of radius $R$ 2) take $N$ equidistant points in ... 1answer 113 views ### what is wrong with the following argument about stokes law in compact universes? I want to understand what is wrong with the following argument: in a topologically compact spacetime, a closed 3D boundary separates the spacetime in two connected components, because of this ... 0answers 69 views ### are pinch-off bubbles valid solutions to general relativity? are bubbles of spacetime pinching-off allowed solutions to general relativity? With "pinch-off bubble" i really mean a finite 3D volume of space whose 2D boundary decreases until it reaches zero and ... 3answers 181 views ### What are some mechanics examples with a globally non-generic symplecic structure? In the framework of statistical mechanics, in books and lectures when the fundamentals are stated, i.e. phase space, Hamiltons equation, the density etc., phase space seems usually be assumed to be ... 1answer 117 views ### geometry inside the event horizon I'm trying to understand intuitively the geometry as it would look to an observer entering the event horizon of a schwarszchild black hole. I would appreciate any insights or corrections to the above. ... 4answers 477 views ### Topology needed for Differential Geometry [duplicate] I am a physics undergrad, and need to study differential geometry ASAP to supplement my studies on solitons and instantons. How much topology do I need to know. I know some basic concepts reading from ... 2answers 342 views ### Book covering Topology required for physics and applications I am a physics undergrad, and interested to learn Topology so far as it has use in Physics. Currently I am trying to study Topological solitons but bogged down by some topological concepts. I am not ... 1answer 83 views ### Does General Relativity require that Spacetime must be a orientable? [duplicate] Possible Duplicate: Can spacetime be non-orientable? Apart from the constraints put on the topology of spacetime by QFT (Parity For example), if the global topology of a universe is that of ... 0answers 127 views ### What are the topics of string theory that are comprehensible with only a mathematical background on Manifolds and Algebraic Topology? What are the topics of string theory that are comprehensible with only a mathematical background on manifolds and algebraic topology? Also, I have read only the first four chapters in Peskin & ... 1answer 206 views ### What is the fate of a 3-Torus universe? Since it is flat, will it expand forever like a flat and open universe or collapse like a closed and curved universe? 2answers 341 views ### Does spacetime in general relativity contain holes? Are there physical models of spacetimes, which have bounded (four dimensional) holes in them? And do the Einstein equations give restrictions to such phenomena? Here by holes I mean ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 32, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9117551445960999, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/111749/how-to-interpret-a-double-integral
# How to interpret a double integral If you have an integral $\int_c^{d}\int_{a}^{b} f(x_1,x_2)\,dx_2\, dx_1$. I am not sure how to visualize this.I know that you are adding two dimensional rectangles but I cannot see the relationship between the formula and the visualization. Do you basically add all the rectangles in the x2 direction first and get a function of x1 and then add all the rectangles in the x2 direction to get a function of x2? It's easy to interpret a single variable integral but I am not sure what's actually being done in a double integral.I know it's the volume under a particular function in the xyz plane but I cannot determine the "algorithm" that is performed to actually compute that volume. - 1 If the integrand is nonnegative, you can think of it as the volume of the solid bounded by the graph of $f$ above and the rectangle $[a,b]\times [c,d]$ below (the sides of the solid are determined by the rectangle). – David Mitra Feb 21 '12 at 18:46 ## 1 Answer You can think $f(x_1, x_2)$ as height function over $X_1- X_2$ plane. $$\int_c^d\int_a^b f(x_1,x_2) dx_2dx_1$$ will be volume of solid bounded by 1- graph $f$ and 2- $X_1-X_2$ plane 3- $X_2=a \text{ to }X_2= b$ and 4- $X_1=c \text{ to } X_1= d$. This is just a vague visualization. Because $f$ may not be graph globally....and it may be negative... But then you can make partition of domain suitably and break the integration... and Each partition, you can think given integration as SIGNED volume... -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9201307892799377, "perplexity_flag": "head"}
http://mathoverflow.net/questions/23758/published-results-when-to-take-them-for-granted/24084
## Published results: when to take them for granted? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Two kinds of papers. There are two kinds of papers: self-contained ones, and those relying on published results (which I believe are the vast majority). Checking the result. Of course, one should check carefully other's results before using them. There are several incentives to do that: become a real specialist; expand one's knowledge of concepts and techniques; find and mention a gap in the proof should that happen; get the ability to interact with more people ("I read your paper..."). So ideally, in a sense, checking a result before using it should always be the case. Trusting peer-review. Yet, the very idea of academic peer-reviewed publications is to allow readers to locate results deemed trustable. The implied degree of trustability varies among scientific disciplines, but one would expect mathematics to have to most stringent one: a proof is either correct or it is not. Given this, it is sometimes very tempting to use a result as a kind of "useful axiom", especially if that result has been proven with concepts very far from one's own area(s) of expertise, or if it is the culmination of several long papers: in those cases it would require a substantial amount of time, maybe even years, to personally check the results in their own right. Someone wanting to move forward quickly (or with a short-term position) may not want to go into this. How to decide? Some cases are clear-cut (e.g. most people would accept the classification of finite simple groups), while others are borderline. My questions on that matter are: 1. Are there rules of thumb that you have come up with when deciding between checking a result, and taking it as an axiom ? 2. When accepting without checking, how do you phrase it? 3. Has it ever occured to you that taking a result for granted actually backfired: what happened, and what would you do differently (job interview, retraction of publication)? EDIT (friday 7 may): many thanks to those who have replied, very interesting comments! (Also, please note that since there is no "best answer" to that kind of question I will not single out one over the others.) - 2 It's not clear that peer review is meant to imply that published results are trustable. One possibly reasonably interpretation is that published papers have some threshhold probability of correctness. Thus peer review doesn't save you from needing to check results; it merely saves you the time to avoid checking the stuff that is most likely to be wrong. I'm not endorsing this viewpoint, but it's worth keeping in mind that some referees may be acting with this in mind. – Mark Meckes May 6 2010 at 18:29 4 Perhaps I should mention the personal experience that makes me say this: I once got a (positive) referee report in which the referee said that he or she had "sampled some of the proofs" in the paper and found them to be correct. – Mark Meckes May 6 2010 at 18:30 2 @Mark: Your referee was more honest than most. Having been involved in refereeing hundreds of papers over many decades, I know how demanding the job can be. Editors usually make it clear that the referee is not responsible for the correctness of a proof but should make a reasonable effort to evaluate it. – Jim Humphreys May 6 2010 at 18:49 1 @Jim: True enough. I wouldn't completely vouch for the correctness of every detail in all the papers I've refereed, either. What struck me was the implication that this referee had completely ignored some of the proofs (and it wasn't a long paper). – Mark Meckes May 6 2010 at 18:54 1 I believe the instructions to referees from the AMS say explicitly that the responsibility for correctness rests with the author rather than the referee... – Gerald Edgar May 7 2010 at 15:19 show 1 more comment ## 8 Answers In a word: never. But slightly more usefully, here's my 50øre. If you publish a paper that depends on the result, are you going to be embarrassed if the referee says, "Can you clarify your use of Theorem X?". If you feel happy saying, "A,B, and C all published result depending on it, so I figured I was safe." then go ahead. If you're not quite so sure that A, B, or C check things quite so carefully as you do, check it yourself. So, for example, if it's a result about differential topology on loop spaces then I would check it very carefully because I ought to know about that stuff and I would be embarrassed if the referee said that. But, say, Kuiper's result on the contractibility of the general linear group, then I figure it's not quite my area of expertise and plenty of other people have used that result in the meantime that if someone finds a mistake now then my minor embarrassment is going to vanish into nothingness besides the other things that are going to come crashing down. To put it a slightly different way, suppose that you prove X, which depends on Y. Then someone proves W depending on your X. Later, Y is found to be false. When you and the person who proved W happen to be at the same conference, do you a) hide in a corner and hope that they don't see you, or b) go to the pub and have a good laugh about it all. If you think it'll be (a), then you should have checked Y. If (b), then you're in the clear. - 1 Rather selfish point of view, but also extremely practical. Just to illustrate that mathematicians are people too (as opposed to some sort of Romantic tragic hero). – Willie Wong May 7 2010 at 14:21 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. There obviously won't be a single answer that fits all circumstances, but here is my pennyworth. If a result is sufficiently accepted by experts you have good reason to trust, then the result can be trusted. (Obviously the better you understand it the better, but sometimes one has to save time.) If a result does not satisfy the first criterion, then be very suspicious of it unless you are given, or can think of, some accompanying reason for its being true (rather than a long calculation that just happens to work). - Here is a rule of thumb I learnt the hard way: if the article A you are using seems to be under-quoted, that is to say if articles published later quote other sources while A seems to be perfectly acceptable, then beware of A. Suggesting this rule of thumb pains me greatly because I think there are already too much path and cultural dependancy in citation practices in mathematics. That is I think the importance of some authors and/or works are downplayed and underestimated because people tend to quote the version of a result they learnt first in their mathematical education, or the one published by someone they know, or the version they actually read in details instead of the one with historical and/or mathematical precedence. And yet here I am suggesting one should beware of rarely quoted papers. But the truth is, even though I could still be considered a relatively young mathematician, I already maintain a quite long list of serious mistakes in articles with impeccable pedigree (excellent journals, ICM class authors...) so I suspect everyone does the same. The problem is that upon this subject, everyone (including me) seems to operate on an "everyone knows" basis. So everyone (in my field) knows that Deligne's Travaux de Shimura (EDIT: actually in Deligne's article in the Corvallis volume, as pointed out in comment by the person who found and corrected the mistake in the first place) contains a sign mistake, everyone knows that Bloch-Kato's article on Tamagawa numbers contain a recurring misprint (except Dummigan, Stein and Watkins sweat to prove a lemma apparently because they didn't know about it), everyone knows that Skinner-Wiles on residually reducible representations contains a mistake (except I agonized over it several weeks before asking a senior member of my department who immediately replied, "yes, everyone knows about it") etc. So if an article is underquoted, it might be because "everyone" knows that there is a problem with it, so maybe it is a good idea to double or triple check in that case. Watch it especially if the authors themselves seem to have "forgotten" about their own paper (in that case "everyone knows" might mean "the authors know"). As you gathered, I have made mistakes because I took for granted a published result. So my answer to your number 3 is: in that particular case, nothing spectacular happened, I was the only one to notice my own mistake (the referees missed it), I alerted colleagues and friends whom I knew had used the same result or intended to quote my work and put a sentence in my next article saying that one had to put an extra hypothesis in my previous work on the subject because an extra hypothesis was required in one of the sources. EDIT: Reading comments, I realize that my answer could be read as meaning that I actually think that everyone knows the mistakes I mention, and hence support the "everyone knows" attitude. This was meant as irony, pointing out to the fact that in reality, very few people know about such mistakes (in my experience), and that many people (me included) have sweat for weeks only to discover that some people thought this was so well-known as to be self-evident. I apologize for the ambiguous statement and record that my position is that this "everyone knows the mistakes in other articles" actually produces the result "someone, somewhere noticed the mistake for a few weeks" and "generation of researchers will reproduce the mistake or lose time trying to figure out what went wrong". - 2 Actually, the signs in Deligne's Travaux de Shimura are correct. It's in his Corvallis article that he got them wrong... – JS Milne May 7 2010 at 12:50 4 Making a mistake in a list about notorious mistakes! Ah, the irony! Thanks a lot for the correction. – Olivier May 7 2010 at 13:49 9 @Olivier: I find the "everyone knows" mentality about errors in published work annoying and sometimes infuriating. At first glance, you seemed to be supporting it. Reading your response carefully, I found that you have justified my sentiments quite well: i.e., the problem with "everyone knows" is that too often, manifestly not everyone knows. Thanks for your answer. – Pete L. Clark May 7 2010 at 14:49 @Pete: I am happy that I am not the only one annoyed. It is great time that with the electronic availability of most mathematical articles, a common open sourced technology of maintaining errata together with the soft copy of the original article and good search engine for the greater good of the mathematical community be established and of widespread use. So that people saying "everyone knows this error is there" would be ashamed not to have checked if this errata entry had been entered and would be even more ashamed of not having entered it him/herself. – ogerard May 7 2010 at 16:03 3 Maybe I am naive, but why not publish an erratum? – Boris Bukh Feb 21 2011 at 13:55 show 2 more comments No-one has quite answered: • When accepting without checking, how do you phrase it? I think? So, here's a thought. I'm always of the opinion that you should give the maximal amount of detail where referencing something. Give the exact Theorem number in the paper you reference (not just by results of [12] it follows that...''). Explain carefully the hypotheses and conclusions. Of course, the more accepted a result is, the less detail you need to give. If, however, you actually want to reference the proof then I'd be very careful. E.g. you might observe that the paper shows X=>Y, but the proof works for the weaker X'. I'd be tempted to give a sketch, or outline exactly the changes needed. My guess is that a lot of subtle errors can be introduced by referencing proofs: I've heard it said that most mathematical results are true, but many proofs are subtly wrong. So it might be true that the proof in [12] shows that X'=>Y, but maybe the author stated it only for X because there is a subtle error in the proof, and really the stronger X is required. (This, of course, is also a good test of your own proofs, but I'm heading off topic...) - 2 Excellent advice. Detailed references are much more helpful to the reader. And I have seen serious gaps result from referencing proofs. – Nate Eldredge May 7 2010 at 15:15 This question was discussed on E. Kowalski's blog. Here is a comment I made: Dear Emmanuel, You have raised an interesting issue, with (I believe) no simple answer. I think that Terry’s suggestion on how to deal with the situation is a sensible one. I might add another piece of advice. (Note that, as with Terry’s advice, this is not advice on how to address this issue in one’s writing, but rather, how one should proceed when confronted with this situation in one’s research, so as to avoid blunders.) Most pieces of mathematics (including Weil II, for example) fit into a framework (and I don’t here mean a logical framework, but rather a narrative framework), with illustrative analogies to other parts of mathematics (in the case of the Weil conjectures, there are important analogies with algebraic topology and Hodge theory), interconnections between various results in the area, key motivations and heuristics, and so on, and one can often learn these even if learning the actual details of the arguments is out of the question. If there is such a narrative that one can learn, I would say it is normally a good idea to learn it, since it will give one a better feeling for the results being cited, and a better feeling for how to apply them correctly. On the other hand, if such a narrative structure isn’t available, it will probably be harder to test the correctness of one’s understanding of the results, since (short of actually reading the proof), there is nothing to check against. Perhaps in such a situation, it is probably a good idea, if possible, to verify with an expert that one is really applying the result in a correct manner. Good expository literature can also help a lot (both to learn the narrative, if one is available, or at least to learn one’s way around the results that one wants to apply). On the question of how one should phrase the citation in such a situation (of citing a result whose proof one doesn’t know): I think that having a good understanding of how to apply a result is itself a valid and important skill, whether or not one knows how to prove the result. (Similarly, we value good drivers/pilots of vehicles, as well as the engineers who build the vehicles themselves.) I don’t think that there is any intellectual dishonesty in citing a result with confidence, if one is genuinely confident that it is true (and trust in a group of established experts is a genuine and legitimate source of confidence) and one is genuinely confident that one understands the statement and the ways in which it can be applied. On the other hand, if one doesn’t have this genuine confidence with regard to a result that one is applying in some argument, then one could be heading for a blunder, and I would say that caution is required, not just in the citation, but in the construction of the argument itself. Just to add to this: if a result is generally certified by experts, is well-established, and widely used and understood (even if not by you personally), then there is surely no problem in quoting it, applying it, and relying on it. (As Andrew notes in his answer, if such a result does somehow collapse in the future, you will have plenty of good company with whom to commiserate about the collapse of you own work.) On the other hand, if a result is not like this, you should be more cautious in applying it, not for any ethical reason (at least in my view), but so as to avoid having your own work built on an unstable foundation. As I write above, when you can't verify the result yourself, do your best at least to see that it fits into a reasonable narrative framework, and also try to find experts that you trust who can certify the results correctness, and that you are applying it correctly. - The questions raised are real but can't be answered by giving rules of thumb, I'm afraid. Mathematics is hierarchical by nature and has a long history, with results often building on earlier ones. Peer review of published work varies a lot in thoroughness, but even done conscientiously can't root out all subtle errors. Most of us bring a bit of skepticism to results we haven't understood deeply on our own. Many of us publish results which are not quite right (making later corrections when feasible). It's always risky to quote stuff at random from areas you are not a specialist in, even if you trust the people involved. If you are a specialist, you probably trust some people more than others to get it right; but even so you try to check details. A great many mathematics papers contain at least minor errors, in most cases correctable but not self-correcting. Some real mistakes become famous and spawn important research. You write: How to decide? Some cases are clear-cut (e.g. most people would accept the classification of finite simple groups), while others are borderline. Actually, most people I know accept the classification only conditionally, as do some of the real experts in the subject who are still working to codify a complete proof. A result like this, plausible as it looks, depends on a huge amount of published (and some unpublished) work. I think it's still customary to point out in papers when the CFSG is being used to get other results. If you have to quote results you haven't understood from scratch, you should try to access the MathSciNet database in order to read a review and follow up later citations. Or try to ask an expert. Lacking that, quote at your own risk and without offering too firm an endorsement, e.g., "Theorem X in paper Y states that Z". - P. Halmos in "I want to be a mathematicien" has a short section on refereeing where he exposes his point view: The role of the referee in his eyes is not to certify correctness of a paper (this is the authors job, according to Halmos) but he has to "smell" a paper and to advise the editors on its interest (I am citing from memory and hope that there is not too much distorsion). As a referee, I am following his advice in the following sense: If I enjoy reading a paper then I recommend it generally for publication (and in this case I check also more or less carefully the proofs). If I can find no pleasure and no interest then I suggest either another referee if the paper seems interesting nevertheless or I recommend rejection. In the last case, I do generally not check proofs (and I tell the editor and the authors so). In some sense, mathematics should mainly be interesting, errors in very interesting and stimulating papers can be tolerated to some extend since they will generally get quickly corrected. (This is of course only true for exceptional papers, most papers have close to zero readers anyway.) Mathoverflow is somehow a mirror of the mathematical litterature (except that the process is much faster on MO): Erroneous statements get quickly commented and eventually corrected or retracted. I believe that the possibility of making errors is a necessary part of any creative process. Computers make (generally) no errors. They are also more stupid than the most disabled human who is not braindead. - It is the life's work of many people to make this question obsolete. More precisely, we aim to make mathematical assistants which are both pervasive and easy-to-use so that, in say 50 year's time, all high-quality journals would immediately reject a paper which has not been checked by one of these systems. [There are sub-fields of computer science where the time horizon for this seems closer to 5 years, with the 'best' papers already being machine-checked today]. Note that some people mis-interpret such statements. In the past, it is true that formal verification was extremely difficult and it intruded too much into the actual results and their write-up. But this has changed tremendously of late, where 'modern' verified papers (through the liberal use of literate programming tools and ideas) look the same as non-verified papers, they just come with attachments which contain the fully formal parts. Mathematical papers can then retain their human-oriented aspects of communicating the crucial ideas and insights, while allowing a certain lightening of the formalism in the text. This is coming. Current mid-career mathematicians don't have to worry about this too much, but I would certainly advise the younger generation to keep an eye on these developments. - 3 I'm a bit skeptical about this. It might work in some algebraic areas, but in (nonalgebraic) geometry I can't imagine this happening. It's already hard enough to write a paper which is based on subtle geometric or visual reasoning without the burden of trying to formalize everything. The subject would never progress! And in the end, there are so many subtle places where the formalization can go astray that I don't think it would add much. – Andy Putman May 10 2010 at 15:18 2 @Andy: our ultimate goal is to make it a benefit (relieving mathematicians of tedium) rather than a burden (ie, imposing it). Even in geometry, I think in some cases it will be essential. For example, Hales' reviewers were simply unable to fully confirm his claimed proof of the Kepler conjecture, despite running a three year seminar on it. His project to mechanize his proof seems to be the only sensible response to this state of affairs. – Neel Krishnaswami May 10 2010 at 21:26 @Neel: we are in complete agreement. – Jacques Carette May 11 2010 at 7:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.965133011341095, "perplexity_flag": "middle"}
http://quant.stackexchange.com/questions/2765/what-is-the-optimal-strategy-when-there-is-an-equal-chance-for-gain-or-loss-but/2766
# What is the optimal strategy when there is an equal chance for gain or loss but the size of the potential gain is larger? I'm investigating a situation where the chance for gain or loss is the same, but the amount gained is greater than the amount that is lost. For example, the gain would be about 30% of the trade amount, and the loss would be 23% of the trade amount. While there is slightly more to it than that, that is the core of it -- random/even chance for hitting the gain or the loss the way the trade is structured, and approximately the percentages indicated. Please note either the gain or loss will be reached. If one has amount A to invest, what considerations need to be taken into account to make a situation like this profitable, or is it not possible for it to be profitable (e.g. due to many successive losses)? - ## 1 Answer This is practically a textbook case begging for the Kelly criterion. In your specific example, the optimal trade size is $f^*A$, where $f^*$ maximizes the average rate of return $$\mathbb{E}[\log (X)]=0.5\log(1+0.3f)+0.5\log(1-0.23f).$$ Here $f$ is the fraction of the current capital to trade. A straightforward calculation yields that $$f^*=\frac{0.3-0.23}{2\times 0.3\times 0.23}\approx 0.5072$$ In general, if you expect to gain $gX$ with probability $p$ or lose $lX$ with probability $q$ on a trade of the size $X$, then the optimal (Kelly) bet is $$f^*=\frac{pg-ql}{gl}.$$ Some caveats might be worth noting. • The Kelly framework assumes that sequential trades are (sufficiently) independent. • Since the exact payoffs $g$, $l$ and probabilities $p$, $q$ are typically not known, it is safer to bet less than Kelly. Betting double the optimal Kelly bet reduces the growth rate of capital to zero (see e.g. "Good and Bad Properties of the Kelly Criterion" by Bill Ziemba). - Thanks much for your detailed answer. I am surprised at how high the ratio (0.5072) came out with these numbers. – Ray Jan 10 '12 at 1:12 Related, do you by chance know of any formulae that describe "dollar-cost-averaging" for short term applications. E.g. if an instrument varies approximately 20% a day in price, are there formulae that describe the effect of purchasing this instrument perhaps 10-12 times over 3 days at regular intervals, always spending the same amount, to average one's price down? Of course not every trade will succeed, as the item purchased may not rise back up, but is there formula that can help understand this better? Thanks much. – Ray Jan 10 '12 at 1:27 @Ray: Thanks for your comments. I cannot give you a precise reference at the moment although this should be rather standard stuff in the theory of portfolio management. You might want to post this as a separate question. – olaker♦ Jan 10 '12 at 20:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9332993626594543, "perplexity_flag": "middle"}
http://cms.math.ca/10.4153/CJM-2011-082-2
Canadian Mathematical Society www.cms.math.ca | | | | | |----------|----|-----------|----| | | | | | | | | Site map | | | CMS store | | location:  Publications → journals → CJM Abstract view # Poisson Brackets with Prescribed Casimirs Read article [PDF: 329KB] http://dx.doi.org/10.4153/CJM-2011-082-2 Canad. J. Math. 64(2012), 991-1018 Published:2011-11-15 Printed: Oct 2012 • Pantelis A. Damianou, Department of Mathematics and Statistics, University of Cyprus, P.O. Box 20537, 1678 Nicosia, Cyprus • Fani Petalidou, Department of Mathematics and Statistics, University of Cyprus, P.O. Box 20537, 1678 Nicosia, Cyprus Features coming soon: Citations   (via CrossRef) Tools: Search Google Scholar: Format: LaTeX MathJax PDF ## Abstract We consider the problem of constructing Poisson brackets on smooth manifolds $M$ with prescribed Casimir functions. If $M$ is of even dimension, we achieve our construction by considering a suitable almost symplectic structure on $M$, while, in the case where $M$ is of odd dimension, our objective is achieved by using a convenient almost cosymplectic structure. Several examples and applications are presented. Keywords: Poisson bracket, Casimir function, almost symplectic structure, almost cosymplectic structure MSC Classifications: 53D17 - Poisson manifolds; Poisson groupoids and algebroids 53D15 - Almost contact and almost symplectic manifolds
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7329378724098206, "perplexity_flag": "middle"}
http://www.haskell.org/haskellwiki/index.php?title=User:Michiexile/MATH198/Lecture_1&oldid=30054
# User:Michiexile/MATH198/Lecture 1 ### From HaskellWiki Revision as of 12:11, 12 September 2009 by Michiexile (Talk | contribs) IMPORTANT NOTE: THESE NOTES ARE STILL UNDER DEVELOPMENT. PLEASE WAIT UNTIL AFTER THE FIRST LECTURE WITH HANDING ANYTHING IN, OR TAKING THE NOTES AS READY TO READ. ## 1 Welcome, administrativia I'm Mikael Vejdemo-Johansson. I can be reached in my office 383-BB, especially during my office hours; or by email to mik@math.stanford.edu. I encourage, strongly, student interactions. I will be out of town September 24 - 29. I will monitor forum and email closely, and recommend electronic ways of getting in touch with me during this week. I will be back again in time for the office hours on the 30th. ## 2 Introduction ### 2.1 Why this course? An introduction to Haskell will usually come with pointers toward Category Theory as a useful tool, though not with much more than the mention of the subject. This course is intended to fill that gap, and provide an introduction to Category Theory that ties into Haskell and functional programming as a source of examples and applications. ### 2.2 What will we cover? The definition of categories, special objects and morphisms, functors, natural transformation, (co-)limits and special cases of these, adjunctions, freeness and presentations as categorical constructs, monads and Kleisli arrows, recursion with categorical constructs. Maybe, just maybe, if we have enough time, we'll finish with looking at the definition of a topos, and how this encodes logic internal to a category. Applications to fuzzy sets. ### 2.3 What do we require? Our examples will be drawn from discrete mathematics, logic, Haskell programming and linear algebra. I expect the following concepts to be at least vaguely familiar to anyone taking this course: • Sets • Functions • Permutations • Groups • Partially ordered sets • Vector spaces • Linear maps • Matrices • Homomorphisms ## 3 Category ### 3.1 Graphs We recall the definition of a (directed) graph. A graph G is a collection of edges (arrows) and vertices (nodes). Each edge is assigned a source node and a target node. $source \to target$ Given a graph G, we denote the collection of nodes by G0 and the collection of arrows by G1. These two collections are connected, and the graph given its structure, by two functions: the source function $s:G_1\to G_0$ and the target function $t:G_1\to G_0$. We shall not, in general, require either of the collections to be a set, but will happily accept larger collections; dealing with set-theoretical paradoxes as and when we have to. A graph where both nodes and arrows are sets shall be called small. A graph where either is a class shall be called large. If both G0 and G1 are finite, the graph is called finite too. The empty graph has $G_0 = G_1 = \emptyset$. A discrete graph has $G_1=\emptyset$. A complete graph has $G_1 = \{ (v,w) | v,w\in G_0\}$. A simple graph has at most one arrow between each pair of nodes. Any relation on a set can be interpreted as a simple graph. • Show some examples. A homomorphism $f:G\to H$ of graphs is a pair of functions $f_0:G_0\to H_0$ and $f_1:G_1\to H_1$ such that sources map to sources and targets map to targets, or in other words: • s(f1(e)) = f0(s(e)) • t(f1(e)) = f0(t(e)) By a path in a graph G from the node x to the node y of length k, we mean a sequence of edges $(f_1,f_2,\dots,f_k)$ such that: • s(f1) = x • t(fk) = y • s(fi) = t(fi − 1) for all other i. Paths with start and end point identical are called closed. For any node x, there is a unique closed path () starting and ending in x of length 0. For any edge f, there is a unique path from s(f) to t(f) of length 1: (f). We denote by Gk the set of paths in G of length k. ### 3.2 Categories We now are ready to define a category. A category is a graph C equipped with an associative composition operation $\circ:G_2\to G_1$, and an identity element for composition 1x for each node x of the graph. Note that G2 can be viewed as a subset of $G_1\times G_1$, the set of all pairs of arrows. It is intentional that we define the composition operator on only a subset of the set of all pairs of arrows - the composable pairs. Whenever you'd want to compose two arrows that don't line up to a path, you'll get nonsense, and so any statement about the composition operator has an implicit "whenever defined" attached to it. The definition is not quite done yet - this composition operator, and the identity arrows both have a few rules to fulfill, and before I state these rules, there are some notation we need to cover. #### 3.2.1 Backwards! If we have a path given by the arrows (f,g) in G2, we expect $f:A\to B$ and $g:B\to C$ to compose to something that goes $A\to C$. The origin of all these ideas lies in geometry and algebra, and so the abstract arrows in a category are supposed to behave like functions under function composition, even though we don't say it explicitly. Now, we are used to writing function application as f(x) - and possibly, from Haskell, as f x . This way, the composition of two functions would read g(f(x)). On the other hand, the way we write our paths, we'd read f then g. This juxtaposition makes one of the two ways we write things seem backwards. We can resolve it either by making our paths in the category go backwards, or by reversing how we write function application. In the latter case, we'd write x.f, say, for the application of f to x, and then write x.f.g for the composition. It all ends up looking a lot like Reverse Polish Notation, and has its strengths, but feels unnatural to most. It does, however, have the benefit that we can write out function composition as $(f,g) \mapsto f.g$ and have everything still make sense in all notations. In the former case, which is the most common in the field, we accept that paths as we read along the arrows and compositions look backwards, and so, if $f:A\to B$ and $g:B\to C$, we write $g\circ f:A\to C$, remembering that elements are introduced from the right, and the functions have to consume the elements in the right order. The existence of the identity map can be captured in a function language as well: it is the existence of a function $u:G_0\to G_1$. Now for the remaining rules for composition. Whenever defined, we expect associativity - so that $h\circ(g\circ f)=(h\circ g)\circ f$. Furthermore, we expect: 1. Composition respects sources and targets, so that: • $s(g\circ f) = s(f)$ • $t(g\circ f) = t(g)$ 2. s(u(x)) = t(u(x)) = x In a category, arrows are also called morphisms, and nodes are also called objects. This ties in with the algebraic roots of the field. We denote by HomC(A,B), or if C is obvious from context, just Hom(A,B), the set of all arrows from A to B. This is the hom-set or set of morphisms, and may also be denoted C(A,B). If a category is large or small or finite as a graph, it is called a large/small/finite category. A category with objects a collection of sets and morphisms a selection from all possible set-valued functions such that the identity morphism for each object is a morphism, and composition in the category is just composition of functions is called concrete. Concrete categories form a very rich source of examples, though far from all categories are concrete. ### 3.3 New Categories from old As with most other algebraic objects, one essential part of our tool box is to take known objects and form new examples from them. This allows us generate a wealth of examples from the ones that shape our intuition. Typical things to do here would be to talk about subobjects, products and coproducts, sometimes obvious variations on the structure, and what a typical object looks like. Remember from linear algebra how subspaces, cartesian products (which for finite-dimensional vectorspaces covers both products and coproducts) and dual spaces show up early, as well as the theorems giving dimension as a complete descriptor of a vectorspace. We'll go through the same sequence here; with some significant small variations. A category D is a subcategory of the category C if: • $D_0\subseteq C_0$ • $D_1\subseteq C_1$ • D1 contains 1X for all $X\in D_0$ • sources and targets of all the arrows in D1 are all in D0 • the composition in D is the restriction of the composition in C. Written this way, it does look somewhat obnoxious. It does become easier though, with the realization - studied closer in homework exercise 2 - that the really important part of a category is the collection of arrows. Thus, a subcategory is a subcollection of the collection of arrows - with identities for all objects present, and with at least all objects that the existing arrows imply. A subcategory $D\subseteq C$ is full if D(A,B) = C(A,B) for all objects A,B of D. In other words, a full subcategory is completely determined by the selection of objects in the subcategory. A subcategory $D\subseteq C$ is wide if the collection of objects is the same in both categories. Hence, a wide subcategory picks out a subcollection of the morphisms. The dual of a category is to a large extent inspired by vector space duals. In the dual C * of a category C, we have the same objects, and the morphisms are given by the equality C * (A,B) = C(B,A) - every morphism from C is present, but it goes in the wrong direction. Dualizing has a tendency to add the prefix co- when it happens, so for instance coproducts are the dual notion to products. We'll return to this construction many times in the course. Given two categories C,D, we can combine them in several ways: 1. We can form the category that has as objects the disjoint union of all the objects of C and D, and that sets $Hom(A,B)=\emptyset$ whenever A,B come from different original categories. If A,B come from the same original category, we simply take over the homset from that category. This yields a categorical coproduct, and we denote the result by C + D. Composition is inherited from the original categories. 2. We can also form the category with objects $\langle A,B\rangle$ for every pair of objects $A\in C, B\in D$. A morphism in $Hom(\langle A,B\rangle,\langle A',B'\rangle)$ is simply a pair $\langle f:A\to A',g:B\to B'\rangle$. Composition is defined componentwise. This category is the categorical correspondent to the cartesian product, and we denot it by $C\times D$. In these three constructions - the dual, the product and the coproduct - he arrows in the categories are formal constructions, not functions; even if the original category was given by functions, the result is no longer given by a function. Given a category C and an object A of that category, we can form the slice category C / A. Objects in the slice category are arrows $B\to A$ for some object B in C, and an arrow $\phi:f\to g$ is an arrow $s(f)\to s(g)$ such that $f=g\circ\phi$. Composites of arrows are just the composites in the base category. Notice that the same arrow φ in the base category C represents potentially many different arrows in C / A: it represents one arrow for each choice of source and target compatible with it. There is a dual notion: the coslice category $A\backslash C$, where the objects are paired with maps $A\to B$. Slice categories can be used, among other things, to specify the idea of parametrization. The slice category C / A gives a sense to the idea of objects from C labeled by elements of A. We get this characterization by interpreting the arrow representing an object as representing its source and a type function. Hence, in a way, the Typeable type class in Haskell builds a slice category on an appropriate subcategory of the category of datatypes. Alternatively, we can phrase the importance of the arrow in a slice categories of, say, Set, by looking at preimages of the slice functions. That way, an object $f:B\to A$ gives us a family of (disjoint) subsets of B indexed by the elements of A. Finally, any graph yields a category by just filling in the arrows that are missing. The result is called the free category generated by the graph, and is a concept we will return to in some depth. Free objects have a strict categorical definition, and they serve to give a model of thought for the things they are free objects for. Thus, categories are essentially graphs, possibly with restrictions or relations imposed; and monoids are essentially strings in some alphabet, with restrictions or relatinos. ### 3.4 Examples • The empty category. • No objects, no morphisms. • The one object/one arrow category 1. • A single object and its identity arrow. • The categories 2 and 1 + 1. • Two objects, A,B with identity arrows and a unique arrow $A\to B$. • The category Set of sets. • Sets for objects, functions for arrows. • The catgeory FSet of finite sets. • Finite sets for objects, functions for arrows. • The category PFn of sets and partial functions. • Sets for objects. Arrows are pairs $(S'\subseteq S,f:S'\to T)\in PFn(S,T)$. • PFn(A,B) is a partially ordered set. $(S_f,f)\leq(S_g,g)$ precisely if $S_f\subseteq S_g$ and $f=g|_{S_f}$. • There is an alternative way to define a category of partial functions: For objects, we take sets, and for morphisms $S\to T$, we take subsets $F\subseteq S\times T$ such that each element in S occurs in at most one pair in the subset. Composition is by an interpretation of these subsets corresponding to the previous description. We'll call this category PFn'. • Every partial order is a category. Each hom-set has at most one element. • Objects are the elements of the poset. Arrows are unique, with $A\to B$ precisely if $A\leq B$. • Every monoid is a category. Only one object. • The category of Sets and injective functions. • The category of Sets and surjective functions. • The category of k-vector spaces and linear maps. • The category with objects the natural numbers and Hom(m,n) the set of $m\times n$-matrices. • The category of Data Types with Computable Functions. • Our ideal programming language has: • Primitive data types. • Constants of each primitive type. • Operations, given as functions between types. • Constructors, producing elements from data types, and producing derived data types and operations. • We will assume that the language is equipped with • A do-nothing operation for each data type. Haskell has id . • An empty type 1, with the property that each type has exactly one function to this type. Haskell has () . We will use this to define the constants of type t as functions $1\to t$. Thus, constants end up being 0-ary functions. • A composition constructor, taking an operator $f:A\to B$ and another operator $g:B\to C$ and producing an operator $g\circ f:A\to C$. Haskell has (.) . • This allows us to model a functional programming language with a category. • The category with objects logical propositions and arrows proofs. ### 3.5 Homework For a passing mark, a written, acceptable solution to at least 2 of the 5 questions should be given no later than midnight before the next lecture. For each lecture, there will be a few exercises marked with the symbol *. These will be more difficult than the other exercises given, will require significant time and independent study, and will aim to complement the course with material not covered in lectures, but nevertheless interesting for the general philosophy of the lecture course. 1. Prove the general associative law: that for any path, and any bracketing of that path, the same composition may be found. 2. Suppose $u:A\to A$ in some category C. 1. If $g\circ u=g$ for all $g:A\to B$ in the category, then u = 1A. 2. If $u\circ h=h$ for all $h:B\to A$ in the category, then u = 1A. 3. These two results characterize the objects in a category by the properties of their corresponding identity arrows completely. 3. For as many of the examples given as you can, prove that they really do form a category. Passing mark is at least 60% of the given examples. • Which of the categories are subcategories of which other categories? Which of these are wide? Which are full? 4. For this question, all parts are required: 1. For which sets is the free monoid on that set commutative. 2. Prove that for any category C, the set Hom(A,A) is a monoid under composition for every object A. 5. * Read up on ω-complete partial orders. Suppose S is some set and $\mathfrak P$ is the set of partial functions $S\to S$ - in other words, an element of $\mathfrak P$ is some pair $(S_0,f:S_0\to S)$ with $S_0\subseteq S$. We give this set a poset structure by $(S_0,f)\leq(S_1,g)$ precisely if $S_0\subseteq S_1$ and $f(s)=g(s)\forall s\in S_0$. • Show that $\mathfrak P$ is a strict ω-CPO. • An element x of S is a fixpoint of $f:S\to S$ if f(x) = x. Let $\mathfrak N$ be the ω-CPO of partially defined functions on the natural numbers. We define a function $\phi:\mathfrak N\to\mathfrak N$ by sending some $h:\mathbb N\to\mathbb N$ to a function k defined by 1. k(0) = 1 2. k(n) is defined only if h(n − 1) is defined, and then by k(n) = n * h(n − 1). Describe $\phi(n\mapsto n^2)$ and $\phi(n\mapsto n^3)$. Show that φ is continuous. Find a fixpoint (S0,f) of φ such that any other fixpoint of the same function is less than this one. Find a continuous endofunction on some ω-CPO that has the fibonacci function F(0) = 0,F(1) = 1,F(n) = F(n − 1) + F(n − 2) as the least fixed point. Implement a Haskell function that finds fixed points in an ω-CPO. Implement the two fixed points above as Haskell functions - using the ω-CPO fixed point approach in the implementation. It may well be worth looking at Data.Map to provide a Haskell context for a partial function for this part of the task.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 75, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9204946160316467, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/240879/what-is-the-relation-between-two-invertible-functions?answertab=active
# What is the relation between two invertible functions Lets say that if f(x) and g(x) are invertible. 1- is (f(x)+g(x)) also invertible? 2- is f(g(x)) invertible too? for the first one lets say that f(x)=x and g(x)=-x then f(x)+g(x)=x+(-x)=0 and f(x)=0 is not invertible. so my conculusion is that f(x)+g(x) can be uninvertible if f(x) and g(x) are invertible. Am I right? For the second one I have no clue. - ## 3 Answers I have no clue whether the following answers have made headway. As a result, I'll take a stab at this: For (1), you had to provide an adequate counterexample. Now, for (2), consider the following: Let $f$ be a function from set $X$ to set $Y$. Likewise, let $g$ be a function from set $Y$ to set $Z$. Thus, $g\circ f$ is a function from set $X$ to set $Z$. We see that $g\circ f$ has an inverse since we can perform $f^{-1}\circ g^{-1}$ to get back to $X$. Let's decode that. Does it make more sense to think that $f$ is a bridge from place $X$ to place $Y$ and that $g$ is a bridge from place $Y$ to place $Z$? Well, if we have bridges back---that is, $g^{-1}$ being a bridge from place $Z$ to place $Y$ and $f^{-1}$ being a bridge from place $Y$ to place $X$---is it not self evident that to get from place $Z$ to place $X$ we simply take bridge $g^{-1}$ and then bridge $f^{-1}$, and we call this combined route $f^{-1}\circ g^{-1}$? :) For sake of completeness, here's what I just said in the dry-Algebra language: Let $f: X\to Y$ and $g: Y \to Z$. Thus, $g\circ f: X \to Z$. If we have $f^{-1}: Y\to X$ and $g^{-1}: Z \to Y$, we have that $f^{-1}\circ g^{-1}: Z \to X$. Letting the identity map be $I$, we have $f^{-1}\circ g^{-1}\circ g\circ f=I$. Therefore, $f^{-1}\circ g^{-1}$ is the inverse map of $g \circ f$. - +1 Very nicely expressed! – amWhy Nov 20 '12 at 0:16 Thank you! I completely fudged up the order of my $f$'s and $g$'s at first, but I have corrected that. – 000 Nov 20 '12 at 0:25 You'll get the hang of typesetting; it takes some getting used to! I still can't believe how quickly some users are able to post elaborate responses! – amWhy Nov 20 '12 at 0:27 You are right about 1. 2 is correct provided $Range(g) \subseteq dom(f)$. In any case it has to be or else $f(g(x))$ would not be well defined. - HINT: You are right about (1). For (2), suppose that you know that $f\big(g(x)\big)$ is some particular number $y$. Does the invertibility of $f$ guarantee that in principle you can find $g(x)$? And what then does the invertibility of $g$ tell you? - so is this correct: f(x)=x^3 and g(x)=x^(2/3). Then f(g(x))=(x^(2/3))^3 which is f(g(x))=x^2 which is not invertible. – Muffin Nov 19 '12 at 21:20 @Alex90: Your $g$ isn’t invertible, so this example can’t tell you anything. If $f$ and $g$ are invertible, so is their composition; see if you can follow the hint to see how to explain why. – Brian M. Scott Nov 19 '12 at 21:27 Honestly I dont understand anything from your hint since im a beginner. 1st year in university. Why isnt g invertible? I get only one x for each y. – Muffin Nov 19 '12 at 21:38 @Alex90: $(-8)^{2/3}=(-2)^2=4=2^2=8^{2/3}$, so $g(-8)=g(8)$, and $g$ is not invertible. Look: if $f\big(g(x)\big)=y$, and $f$ is invertible, then isn’t it true that $g(x)=f^{-1}(y)$? Now keep going to find $x$. – Brian M. Scott Nov 19 '12 at 21:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 58, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9393743872642517, "perplexity_flag": "head"}
http://mathoverflow.net/questions/95637/connected-compact-semisimple-lie-group-finite-fundamental-group/95667
connected compact semisimple lie group finite fundamental group Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) I was told that the fundamental group of a connected, compact, semisimple Lie group is finite, with the outline of a possible way to prove this fact. Is there any source however that fleshes this out in detail / are there several ways to prove this fact? Thanks! (The result is often known as Weyl's theorem, I think for his take on the proof, Knapp provides a fairly detailed exposition of his perspective.) - I can think of at least four substantially different proofs off the top of my head and I'm sure there are more, so it is not surprising that Knapp's proof is different from the one that was first suggested to you. So if you want to see a fleshed-out version of the outline that you were first given it might help to include a few details that you remember in your question. – Paul Siegel May 1 2012 at 17:37 2 While I'm thinking about this, my personal favorite argument is to compute the Ricci curvature of a compact Lie group relative to a bi-invariant Riemannian metric in terms of the Lie bracket and conclude that the Ricci curvature is positive if the group is semi-simple. This implies that the Ricci curvature of the universal cover is also positive and hence the universal cover is compact by the Bonnet-Meyers theorem. – Paul Siegel May 1 2012 at 17:45 @Paul: oops, I just saw your comment... This is exactly the proof I mentioned in the answer I just posted below. I apologize for the repetition. – Renato G Bettiol May 1 2012 at 18:06 6 Answers There is a quick proof via Lie algebra cohomology: Let $G$ denote your compact, connected, semisimple Lie group, and let $\mathfrak g$ denote its Lie algebra. Then $$H^1(G;\mathbb R) = H^1(\mathfrak g;\mathbb R) = \text{Hom}_{\mathbb R} (\mathfrak g/[\mathfrak g, \mathfrak g], \mathbb R) = 0,$$ whence $H_1(G;\mathbb Z)$ is finite. But $\pi_1(G)$ is abelian hence is isomorphic to $H_1(G;\mathbb Z)$. QED. - 1 By the way, this argument is most likely due to Chevalley--Eilenberg. I would give a more precise reference but my erratic internet connection is making this difficult. – Faisal May 1 2012 at 15:18 How easy are the first two equalities? – Igor Rivin May 1 2012 at 23:34 1 Depends on how you set things up. E.g. let's view $H^\ast(\mathfrak g;\mathbb R)$ as being computed by the complex $(\wedge^q\mathfrak g^\ast,d)$ (I'll omit the formula for $d$...). Then the second equality is trivial: $f\in\mathfrak g^\ast$ is in $H^1$ iff $df=0$ (there are no 1-coboundaries). The formula for $d$ here is $df(X,Y)=f([X,Y])$, whence $f\in H^1 \iff f\in (\mathfrak g/[\mathfrak g,\mathfrak g])^\ast$. – Faisal May 2 2012 at 1:03 2 The first 'equality' follows from the observation that the complex of left invariant forms computes $H^\ast_{\rm dR}(G)$ (not difficult to prove) together with the fact that said complex can be identified (via evaluation at the identity) with the complex $\wedge^q\mathfrak g^\ast$. You can find all the details in, e.g., Chevalley--Eilenberg. – Faisal May 2 2012 at 1:04 Thanks, I will check it out! Part of my question is which proof is the quickest from nothing... – Igor Rivin May 2 2012 at 1:48 You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. I think you can get a much faster (and maybe easier...) proof using Riemannian geometry, as follows: First, recall that a semi-simple connected Lie group $G$ is compact if and only if its Killing form $B$ is negative-definite (the proof is easy, see, e.g., Thm 2.28 in these notes). The side we will use ($G$ compact semi-simple $\Rightarrow$ $B$ neg.-def.) actually follows directly from $B(X,X)=tr(ad(X)\cdot ad(X))$ using an orthonormal basis with respect to an auxiliary bi-invariant metric to compute this trace. Now, the Ricci curvature of any bi-invariant metric on $G$ (that exists because $G$ is compact) can be computed as: $$Ric(X,Y)=-\frac14 B(X,Y),$$ see Remark 2.27 in the same notes. By the observation above, since $G$ is compact and semi-simple, its Killing form $B$ is negative-definite. Hence the above formula gives $Ric>0$. So, by the Bonnet-Myers Theorem, $G$ must have finite fundamental group. Q.E.D. - Besides Samelson's short 1946 research note linked by Mrc Plm, it's also useful to mention his longer 1952 survey on topology of Lie groups here (see Section 10 and references for various proofs of Weyl's theorem on finiteness of the fundamental group). By now the whole subject has been treated in numerous textbooks and lecture notes, from a variety of viewpoints. Which approach you take depends a lot on your own background and interests. But the finiteness by itself is too limited a goal, since case-by-case study of the simple compact Lie groups computes each fundamental group in an elegant way relative to the roots and weights of a maximal torus. P.S. Lucy has found the answer to the question asked about finiteness of the fundamental group (via Knapp's book), though the result is actually Weyl's theorem and not just sometimes called that. As Johannes points out, there is a full treatment in V.7 of the Springer GTM 98 by Brocker and tom Dieck which has the advantage of integrating the topological questions with structure, classification, and representation theory of arbitrary compact connected Lie groups; note their nice summary (7.13). Like other basic theorems, Weyl's theorem has been developed from a variety of directions as indicated in the answers here, though for me the actual computation of the fundamental group for each simple type is an essential part of the picture. - Another proof is in Bröcker-tom Dieck's book on compact Lie groups, p. 223 ff. It is based on an analysis of root systems. - See theorem B in Samelson a "Note on Lie groups". He presents two proofs: one via differential forms and one via differential geometry: http://www.ams.org/journals/bull/1946-52-10/S0002-9904-1946-08663-2/home.html - Every connected Liegroup, which has a semisimple Liealgebra with a definite Killing form is compact. The Liealgebra of a compact Liegroup is always the direct sum of an semisimple and abelian Liealgebra, where the killing form of the semisimple part is negative definite. So we can conclude, that the universal cover of your liegroup is compact and the finiteness of the fundamentalgroup follows immediately. The fundamentalgroup is even always abelian. The first two things can be found nicely in Serre's "Lie Algebras and Lie Groups" in Theorem 6.2 and 6.3. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 28, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9393360614776611, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/5682/what-are-the-uses-of-hopf-algebras-in-physics
# What are the uses of Hopf algebras in physics? Hopf algebra is nice object full of structure (a bialgebra with an antipode). To get some idea what it looks like, group itself is a Hopf algebra, considered over a field with one element ;) usual multiplication, diagonal comultiplication, obvious units and inverse for the antipode. For a less pathological example, a group algebra and universal enveloping algebra can be both quite naturally turned into Hopf algebra. It all obviously relates to representation theory and lots of other neat stuff. So much for the math-related stuff. Now, I've heard that there should also be some applications to physics. • For one thing, (and this will be very vague and probably also wrong) Feynman diagrams should somehow carry a structure of Hopf algebra with multiplication given by joining of two lines into a vertex and comultiplication as splitting. It reminds me of cobordisms but I am not sure it really makes sense. Any idea whether something like this works? • Besides that, I heard people try to formalize renormalization using Hopf algebras. I found some papers but I am not sure where to start. Anyone care to give an overview how does this stuff work and whether it's leading anywhere? • Anything else? Sorry, if this is way to vague and there is actually whole industry of Hopfy physics. If so, just try to give some most important examples, preferably with references. - I am guessing this is motivated by Connes-Kreimer's work? – MBN Feb 22 '11 at 21:56 @MBN: if you mean my question, it's motivated purely by my study of group theory and few random phrases I've heard here and there. If you mean the renormalization formalization then I am completely clueless and that's why I am asking :) – Marek Feb 22 '11 at 22:05 Ok, it was just a guess. Then you can google their names and the key words form your question. – MBN Feb 22 '11 at 22:08 4 @Marek: both your points are right. One can use a Hopf-algebraic structure to encode the combinatorics of subdivergences and overlapping divergences in Feynman diagrams (Kreimer). Alternatively, the Hopf algebra can be defined directly on the Feynman diagrams (Connes & Kreimer). For a mild introduction into the early development, I would in a shameless self-advertisement recommend my diploma thesis (unfortunately only in Czech). However, since this was already some time ago and I changed field after then, I don't know whether this idea really "led anywhere". – Tomáš Brauner Feb 22 '11 at 22:43 @Tomáš: thanks for the pointers, especially that diploma thesis. For once being Slovak pays off :) – Marek Feb 22 '11 at 23:03 show 8 more comments ## 7 Answers This is essentially an addition to the list of @4tnemele I'd like to add some earlier work to this list, namely Discrete Gauge Theory. Discrete gauge theory in 2+1 dimensions arises by breaking a gauge symmetry with gauge group $G$ to some lower discrete subgroup $H$, via a Higgs mechanism. The force carriers ('photons') become massive which makes the gauge force ultra-short ranged. However, as the gauge group is not completely broken we still have the the Aharanov-Bohm effect. If H is Abelian this AB effect is essentially a 'topological force'. It gives rise to a phase change when one particle loops around another particle. This is the idea of fractional statistics of Abelian anyons. The particle types that we can construct in such a theory (i.e. the one that are "color neutral") are completely determined by the residual, discrete gauge group $H$. To be more precise: a particle is said to be charged if it carries a representation of the group H. The number of different particle types that carry a charge is then equal to the number of irreducible representations of the group H. This is similar to ordinary Yang-Mills theory where charged particles (quarks) carry the fundamental representation of the gauge group (SU(3). In a discrete gauge theory we can label all possible charged particle types using the representation theory of the discrete gauge group H. But there are also other types of particles that can exist, namely those that carry flux. These flux carrying particles are also known as magnetic monopoles. In a discrete gauge theory the flux-carrying particles are labeled by the conjugacy classes of the group H. Why conjugacy classes? Well, we can label flux-carrying particles by elements of the group H. A gauge transformation is performed through conjugacy, where $|g_i\rangle \rightarrow |hg_ih^{-1}\rangle$ for all particle states $|g_i\rangle$ (suppressing the coordinate label). Since states related by gauge transformations are physically indistinguishable the only unique flux-carrying particles we have are labeled by conjugacy classes. Is that all then? Nope. We can also have particles which carry both charge and flux -- these are known as dyons. They are labeled by both an irrep and a conjugacy class of the group $H$. But, for reasons which I wont go into, we cannot take all possible combinations of possible charges and fluxes. (It has to do with the distinguishability of the particle types. Essentially, a dyon is labeled by $|\alpha, \Pi(g)\rangle$ where $\alpha$ is a conjugacy class and $\Pi(N(g))$ is a representation of the associated normalizer $N(\alpha)$ of the conjugacy class $\alpha$.) The downside of this approach is the rather unequal setting of flux carrying particles (which are labeled by conjugacy classes), charged particles (labeled by representations) and dyons (flux+compatible charge). A unifying picture is provided by making use of the (quasitriangular) Hopf algebra $D(H)$ also known as a quantum double of the group $H$. In this language all particles are (irreducible) representations of the Hopf algebra $D(H)$. A Hopf Algebra is endowed with certain structures which have very physical counterparts. For instance, the existence of a tensor product allows for the existence of multiple particle configurations (each particle labeled by their own representation of the Hopf algebra). The co-multiplication then defines how the algebra acts on this tensored space. the existence of an antipode (which is a certain mapping from the algebra to itself) ensures the existence of an anti-particle. The existence of a unit labels the vacuum (=trivial particle). We can also go beyond the structure of a Hopf algebra and include the notion of an R-matrix. In fact, the quasitriangular Hopf Algebra (i.e. the quantum double) does precisely this: add the R-matrix mapping. This R-matrix describes what happens when one particle loops around another particle (braiding). For non-Abelian groups $H$ this leads to non-Abelian statistics. These quasitriangular Hopf algebras are also known as quantum groups. Nowadays the language of discrete gauge theory has been replaced by more general structures, referred to by topological field theories, anyon models or even modular tensor categories. The subject is huge, very rich, very physical and a lot of fun (if you're a bit nerdy ;)). Sources: http://arxiv.org/abs/hep-th/9511201 (discrete gauge theory) http://www.theory.caltech.edu/people/preskill/ph229/ (lecture notes: check out chapter 9. Quite accessible!) http://arxiv.org/abs/quant-ph/9707021 (a simple lattice model with anyons. There are definitely more accessible review articles of this model out there though.) http://arxiv.org/abs/0707.1889 (review article, which includes potential physical realizations) - 1 Very interesting! I feel that I need to learn more about Quantum groups, Hopf algebras and Quantum doubles! What are the ups and downs in using Quantum groups to describe Anyons, compared to modular tensor categories? – Heidar Feb 23 '11 at 19:01 2 There is alot of mathematical literature out there on this and I always find it a bit overwhelming, given the complexity of the subject. So my advice would be: start small, and stick to the more physically oriented articles in the beginning ;) Now, Modular Tensor Categories are, simply put, just more general than quantum groups. They do not completely overlap -- as far as I understand you can construct MTC's from the representation theory of quantum groups at so-called roots of unity. – Olaf Feb 23 '11 at 21:08 Amazing stuff, thank you very much. – Marek Feb 23 '11 at 22:22 This is an expanded version of the comment I made before. It concerns solely the application of Hopf algebras to renormalization in quantum field theory and the combinatorics of Feynman diagrams. Other applications of quantum groups and Hopf algebras to low-dimensional physics etc. have been mentioned by others. Renormalization of multiloop Feynman graphs brings the problem of disentangling divergences coming from various subgraphs and removing all of them properly by local counterterms. This can be done systematically in perturbation theory by working from one loop up and keeping track of the loop order each counterterm contributes at e.g. by the powers of the Planck's constant. Carrying out this procedure blindly, one cannot make a mistake and at N loops, all subdivergences are removed by counterterms of order up to N-1, leaving only an overall local divergence that can be removed by a new counterterm of order N. Yet, it may be useful to know a priori e.g. how to renormalize a single given multiloop diagram without having to go all the way from one loop up. The combinatorial recursive solution to this problem was found by Bogoliubov, Parasiuk, Hepp, and Zimmermann (BPHZ). It was originally noticed by Kreimer that the nesting of divergences can be encoded in an oriented rooted tree graph, whose root corresponds to the whole Feynman diagram, the other nodes to its subdiagrams, and a link connecting two nodes expresses the fact that one diagram is a subset of the other. All such rooted trees can be given the structure of a Hopf algebra. Roughly speaking, a coproduct of a tree is given by all splittings into "cut-off branches" and the "remainder of the tree", connected to the root. In physics terms, this corresponds to all the ways one can shrink different Feynman subdiagrams into points. The antipode of the Hopf algebra then contains exactly the same information as the BPHZ recursion. The Kreimer Hopf algebra depends only on the topology of Feynman diagrams. Also, resolution of overlapping divergences within this framework is subtle. That is why Connes and Kreimer proposed to define a Hopf algebra structure directly on Feynman diagrams. This formulation treats nested and overlapping divergences on the same footing and is designed to allow for additional structure such as tensor structure of the diagrams or dependence on external momentum. Similarly to the previous, the coproduct is defined by all splittings of the Feynman diagram into a subdiagram and a graph where this subdiagram is replaced by a counterterm vertex. The antipode again encodes the BPHZ recursion. Within this approach, one can for example show that the Ward identities in QED generate an ideal in the Hopf algebra, and the renormalization can be carried out on the corresponding factor-Hopf algebra. This only expresses in mathematical terms the well known fact that renormalization preserves gauge invariance. As I pointed out before, a mild introduction to this topic can be found in my diploma thesis from 2002, available at my home page. Unfortunately it is written in Czech, so for those non-Czech/Slovak I would recommend e.g. the review paper by Kreimer, hep-th/0202110. So far, this was for lovers of neat mathematics. For the others, now comes the disappointment: what is this all actually good for? As @Luboš Motl pointed out in his comment, this idea, no matter how mathematically elegant, had better also contribute something to physics. I doubt it can ever improve our physical understanding of renormalization in quantum field theory; that has been well settled since Wilson. However, it sounds like something that might be helpful in automatizing multiloop calculations. For instance, Broadhurst and Kreimer in hep-th/9912093 used the formalism to resum and renormalize chain-rainbow diagrams in Yukawa theory up to 30 loops with a high numerical precision. Yet, I asked a colleague who is an expert in multiloop calculations and he says that this approach has not become widely used. It seems that it is advocated more or less only by its inventors. Turning my back on a field I worked on myself? That is science :) - 1 Very helpful. Thanks. – Peter Morgan Feb 23 '11 at 13:01 http://arxiv.org/abs/hep-th/9904014 from Brouder and http://arxiv.org/abs/q-alg/9707029 from Kreimer is, to me, the most puzzling application, doing a coproduct of trees that, when formalised by Connes, provided a different angle to perturbative renormalization. A lot of references can be found in the arxiv, but not a definitive monograph. - I can mention some applications in condensed matter physics, but I must warn you that I know close to nothing about Hopf algebraes. In the field of topological order (and topological quantum computation), Hopf algebras has been recently used to construct some very general models with emergent gauge fields, topolgical order and (non-abelian) anyons among other uses. See for example But I think that Hopf algebras are new thing in this field. Quantum groups on the other hand has been used for longer period, for example to study Fractional Quantum Hall Effect But the main tool in this field is modular tensor categories ($\mathbb C$-additive monoidal categories, with braiding structure and some more). There are many papers to cite, but a random one is I think the main connection between these approaches originate from the fact that many topological phases in condensed matter physics are (in the low-energy/long wavelength limit) described by non-Abelian Chern-Simons theories. And there are well-known connection to knot theory, modular tensor categories, representations theory of mapping class groups, Quantum groups and Hopf algebraes (I'm sure you know much much more about this than me). I hope you can use this, rather vague, answer. - 1 Very interesting! – Marek Feb 23 '11 at 8:36 More references: Mack and Schomerus: Discussion of Hopf algebras as the general symmetry structure of quantum states. I think there is quite a bit of work on Hopf algebra for two dimensional chiral CFTs, especially the Wess-Zumino-Witten model. I don't remember references off the top of my head. Specific type of Hopf algebra, called the Yangian, plays now an enormous role in understanding the simplification of amplitudes in gauge theories, and potential integrability of N=4 SYM in the planar limit. The literature is too vast to do the subject any justice, but here is one review. Hope that helps a bit, looks to me like it is a general structure and you probably want to zoom in before getting into details. - Thanks. Planar $N=4$ SYM interests me also for other reasons (dualities with usual lattice models) so finding Hopf algebras also here is a very pleasant surprise. – Marek Feb 23 '11 at 8:42 I think of this kind of math that the hope is that tidying up a complex process like regularization/renormalization by using a relatively neat structure --which a Hopf algebra certainly is-- may give us a hint towards something quite different that we wouldn't have thought of doing if we just left things messy. Different kinds of tidying up may suggest quite different ideas to try next. Feynman says at one point (I'd be glad if someone can tell me where -- a Question, perhaps, if not) that one should try every mathematical way one can think of using to think about a problem, spend lots of time finding the relationships between the different ways, then mull the problem, which may well suggest a new mathematical way to use to think about the problem, then mull again. Repeat until something publishable emerges. From this point of view, I think Hopf algebras are something that anyone who wants to do serious Physics research has to become familiar with, because they are seriously tidy, whether they're useful or not for your present purposes. For Physics, however, I have come to think that Hopf algebras are too tied to perturbation theory and to Feynman diagrams to get us out into a different conceptual playground. Happy to be proved wrong, of course. The level of abstraction of Hopf algebras is close enough to my math limits that getting them internalized has proved a challenge. My feeling is that no-one has yet found a way to present Hopf algebras at an engineering level of concept and usability. I suspect the 3rd or 4th monograph might get to that point, but we haven't even had the first yet. I also haven't yet seen a review article that has hit the mark. I think an engineering level presentation will find a ready market, because although Tomáš Brauner's expert colleague and others may not have used these methods much, one can be sure that from time to time he and other experts wonder whether it would be a good idea to use them and whether it might be helpful to teach at least some aspects of renormalization using something like Hopf algebras. - Hopf algebras are also used in quantum gravity: 1. In the context of quantum gravity in 2+1 dimensions coupled to particles it is well known that the Poincare (or de Sitter, or anti de Sitter) local symmetry group becomes quantum deformed and turns out to be Drinfeld double of SU(2) (and similarly when the cosmological constant is not zero). The good references are Quantum group symmetry and particle scattering in (2+1)-dimensional quantum gravity. hep-th/0205021 Lessons from (2+1)-dimensional quantum gravity arXiv:0710.5844 [gr-qc] For more general picture you may look at the papers of S. Majid like Quantum geometry and the Planck scale q-alg/9701001 Hopf algebras are also used in the context of quantum gravity in 3+1 dimensions, see, eg. my review Introduction to doubly special relativity hep-th/0405273 -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9318594932556152, "perplexity_flag": "head"}
http://mathhelpforum.com/differential-equations/108720-fourth-order-runge-kutta-equation.html
# Thread: 1. ## Fourth-order Runge-Kutta for this equation. Hi, For my 3rd year project I'm meant to code different O.D.E solving algorithms and run them on GPU and compare the results to running them on CPU. The first algorithm I am meant to code is the fourth order Runge-Kutta for a O.D.E in the form: dx/dt = ax(t), x0 = 0 (I think that is how it is written) It's solution is meant to be in the form of x = e^(at)x0 (again, I think) Could someone show me how to set up the runge kutta algorithm? I am totally confused about the t and the x in the Runge-Kutta equation. 2. Originally Posted by kenny Hi, For my 3rd year project I'm meant to code different O.D.E solving algorithms and run them on GPU and compare the results to running them on CPU. The first algorithm I am meant to code is the fourth order Runge-Kutta for a O.D.E in the form: dx/dt = ax(t), x0 = 0 (I think that is how it is written) It's solution is meant to be in the form of x = e^(at)x0 (again, I think) Could someone show me how to set up the runge kutta algorithm? I am totally confused about the t and the x in the Runge-Kutta equation. We have an ODE of the form: $\frac{dx}{dt}=f(x)$ Because $t$ does not appear explicitly on the right-hand-side of the ODE the equations for the 4-order RK algorithm become: $x_{n+1}=x_n+\frac{h}{6}(k_1+2k_2+2k_3+k_4)$ with: $k_1=f(x_n)$ $k_2=f(x_n+hk_1/2)$ $k_3=f(x_n+hk_2/2)$ $k_4=f(x_n+hk_3)$ Where $x_n=x(nh)$, so we start from $x_0=x(0)=0$ and step forward from there. The full equations for 4-order RK can be found on the Wikipedia page. CB
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9391152262687683, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/18588/why-are-differential-equations-for-fields-in-physics-of-order-two/44049
# Why are differential equations for fields in physics of order two? What is the reason for the observation that across the board fields in physics are generally governed by second order (partial) differential equations? If someone on the street would flat out ask me that question, then I'd probably mumble something about physicists wanting to be able to use the Lagrangian approach. And to allow a positive rotation and translation invariant energy term, which allows for local propagation, you need something like $-\phi\Delta\phi$. I assume the answer goes in this direction, but I can't really justify why more complex terms in the Lagrangian are not allowed or why higher orders are a physical problem. Even if these require more initial data, I don't see the a priori problem. Furthermore you could come up with quantities in the spirit of $F\wedge F$ and $F \wedge *F$ and okay yes... maybe any made up scalar just doesn't describe physics or misses valuable symmetries. On there other hand in the whole renormalization business, they seem to be allowed to use lots and lots of terms in their Lagrangians. And if I understand correctly, supersymmetry theory is basically a method of introducing new Lagrangian densities too. Do we know the limit for making up these objects? What is the fundamental justification for order two? - – Qmechanic♦ Dec 21 '11 at 14:24 Please wait until the last moment for putting the 'answered' mark. This is a very interesting question and I want to see what the big ones here have to say. Leave that temping bounty waiting as much as possible. – Eduardo Guerras Valera Nov 13 '12 at 23:26 ## 7 Answers First of all, it's not true that all important differential equations in physics are second-order. The Dirac equation is first-order. The number of derivatives in the equations is equal to the number of derivatives in the corresponding relevant term of the Lagrangian. These kinetic terms have the form $${\mathcal L}_{\rm Dirac} = \bar \Psi \gamma^\mu \partial_\mu \Psi$$ for Dirac fields. Note that the term has to be Lorentz-invariant – a generalization of rotational invariance for the whole spacetime – and for spinors, one may contract them with $\gamma_\mu$ matrices, so it's possible to include just one derivative $\partial_\mu$. However, for bosons which have an integer spin, there is nothing like $\gamma_\mu$ acting on them. So the Lorentz-invariance i.e. the disappearance of the Lorentz indices in the terms with derivatives has to be achieved by having an even number of them, like in $${\mathcal L}_{\rm Klein-Gordon} = \frac{1}{2} \partial^\mu \Phi \partial_\mu \Phi$$ which inevitably produce second-order equations as well. Now, what about the terms in the equations with fourth or higher derivatives? They're actually present in the equations, too. But their coefficients are powers of a microscopic scale or distance scale $L$ – because the origin of these terms are short-distance phenomena. Every time you add a derivative $\partial_\mu$ to a term, you must add $L$ as well, not to change the units of the term. Consequently, the coefficients of higher-derivative terms are positive powers of $L$ which means that these coefficients including the derivatives, when applied to a typical macroscopic situation, are of order $(L/R)^k$ where $1/R^k$ comes from the extra derivatives $\partial_\mu^k$ and $R$ is a distance scale of the macroscopic problem we are solving here (the typical scale where the field changes by 100 percent or so). Consequently, the coefficients with higher derivatives may be neglected in all classical limits. They are there but they are negligible. Einstein believed that one should construct "beautiful" equations without the higher-derivative terms and he could guess the right low-energy approximate equations as a result. But he was wrong: the higher derivative terms are not really absent. Now, why don't we encounter equations whose lowest-order derivative terms are absent? It's because their coefficient in the Lagrangian would have to be strictly zero but there's no reason for it to be zero. So it's infinitely unlikely for the coefficient to be zero. It is inevitably nonzero. This principle is known as Gell-Mann's anarchic (or totalitarian) principle: everything that isn't prohibited is mandatory. - Thanks for the answer. What is the reason that "their coefficients are powers of a microscopic scale or distance scale $L$"? In the last paragraph you use this again, where it's implied that the lower order derivatives are a priori related to a bigger scale, which then outweighs the later ones associated with higher orders. Is there a justification, which goes back to axiomatic assumptions or is it "just" an empirical insight from dealing with effective field theories? – Nick Kidman Dec 21 '11 at 15:08 Dear @Nikolaj, $L$ determining the coefficients is microscopic because microscopic scales are the natural ones for the formulation of the laws of physics. By definition, microscopic scales are the scales associated with the elementary particles. These general discussions talk about many things at the same moment. For example, in GR, the typical scale is the Planck length, $10^{-35}$ meters, which is the shortest one. In other theories, the typical scale is longer. But it's always microscopic because it determines the internal structure/behavior of the fields and particles which are small. – Luboš Motl Dec 21 '11 at 16:06 The comment that the derivatives are not just related, they produce long scale was meant to be as a self-evident tautology. What I mean is that if we consider a field that is changing in space, e.g. as a wave with wavelength $R$, then the derivative will pick a factor of order $1/R$, too. For example, the derivative of $\sin(x/R)$, the wave of length $2\pi R$, is $\cos(x/R)/R$. Cos and sin is almost the same thing, of the same order 1, and we therefore picked an extra factor of $1/R$. All these things are order-of-magnitude estimates. Macroscopic usage of field theory has a macroscopic $R$. – Luboš Motl Dec 21 '11 at 16:09 I'm not sure if I successfully pointed out my problem in the comment. My question is: What is the justification for assuming the coefficient of smaller orders would describe a bigger scale? What speaks against a situation, where the fourth order term has a small coefficient, but the second order term has an even smaller one? Then in the classical limit, just the fourth order expression would survive. – Nick Kidman Dec 21 '11 at 18:36 Dear @Nikolaj, it's likely that I don't understand your continued confusion at all. Whether a term may be neglected depends on the relative magnitude of the two terms, the neglected one and the surviving one. So I am estimating the ratio of higher-derivative terms and two-derivative terms and it scales like $(L/R)^k$, a small number, so the higher-derivative terms may be neglected if the two-derivative terms are there. It doesn't matter how you normalize both of these terms in an "absolute way". What matters for being able to neglect one term is the ratio of the two terms. – Luboš Motl Dec 21 '11 at 18:53 show 6 more comments One can rewrite any pde of any order as a system of first order pde's, hence the assumption behind question is somewhat questionable. Also there exist first order PDE's of relevance to physics (Dirac equation, Burgers equation, to name just two). However, it is common that quantities in physics appear in conjugate pairs of potential fields and their associated field strength, defined by the potential gradient. Now the gradients of field strength act as generalized forces that try to move the system to an equilibrium state at which these gradients vanish. (They will succeed only if there is sufficient friction and no external force.) In a formulation where only one half of each conjugate pair is explicit in the equations, a second order differential equation results. - Here we will for simplicity limit ourselves to systems that have an action principle. (For fundamental and quantum mechanical systems, this are often the case.) Let us reformulate OP's question as follows: Why does the Euler-Lagrange equations of motion for a relativistic system (non-relativistic system) have at most two derivatives (time-derivatives), respectively? (Here the precise order depends on whether one considers the Lagrangian or the Hamiltonian formulation, which are related via Legendre transformation. In case of a singular Legendre transformation, one should use the Dirac-Bergmann or the Jackiw-Faddeev method to go back and forth between the two formalisms. See also this Phys.SE post.) Answer: The higher-derivative terms are in certain theories suppressed for dimensional reasons by the natural scales of the problem. This may e.g. happen in renormalizable theories. But the generic answer is that the equations of motion actually doesn't have to be of order $\leq 2$. However, for a generic higher-order quantum theory, if higher-derivative terms are not naturally suppressed, this typically leads to ghosts of the so-called bad type with wrong sign of the kinetic term, negative norm states and unitarity violation. Explicit appearances of higher time-derivatives may be removed on the paper by introducing more variables, either via the Ostrogradsky method, or equivalently, via the Lagrange multiplier method. However, the positivity problem is not cured by such rewritings, and the quantum system remains ill-defined. See also e.g. this and this Phys.SE answer. Hence one can often not make consistent sense of higher-order theories, and this may be why OP seldom faces them. Finally, let us mention that it is nowadays popular to study effective higher-derivative field theory, with the possibly unfounded hope, that an underlying, supposedly well-defined, unitary description, e.g. string theory, will cure all pathologies. - Weinberg gives a pretty good answer for this in Volume 1 of his QFT opus: 2nd order differential equations appear in the field theories relevant to particle physics because of the relativistic mass-shell condition $p^2 = m^2$. If we have a quantum field $\phi$, and we think of its fourier modes $\phi(p)$ as creating particles with 4-momentum $p$, then the mass-shell condition provides a constraint: $(p^2 - m^2)\phi(p) = 0$, because we don't want particle creation off-shell. Fourier-transform this back to position space, and you find that $\phi$ has to obey a 2nd order differential equation. - This doesn't apply to general relativity, where nevertheless equations are of second order. – Arnold Neumaier Nov 12 '12 at 16:39 1 It does tell you that the linearized Einstein equations should be second order. And it explains why the renormalization flow should be defined in such a way that the kinetic term is fixed, which is a important assumption implicit in Lubos' answer. – user1504 Nov 12 '12 at 16:42 First of all, it's not true that all important differential equations in physics are second-order. The Dirac equation is first-order. This is correct. However, physical evolution equations are second (in time) order hyperbolic equations. In fact, each component of Dirac spinor follows a second order equation, namely, Klein-Gordon equation. Now, what about the terms in the equations with fourth or higher derivatives? They're actually present in the equations, too. Neither the Standard Model (SM) Lagrangian nor the Einstein-Hilbert (EH) action contain higher than second order temporal derivatives. These are the actions which are experimentally tested and these two theories are the most fundamental scientific theories we have. We know that there are physics beyond these two theories and people have good candidates to the underlying theories, but physics is an experimental science and these theories are not experimentally verified. The effective SM Lagrangian (a Lorentz invariant theory with the gauge symmetries of the SM but with irrelevant operators) does contain higher than second order temporal derivatives. Equally for the EH action plus higher order scalars. Two clarifications are however in order: • These irrelevant terms are not experimentally verified. Almost everyone is sure that neutrino mass terms (which are irrelevant operators but do not contain higher order derivatives) exist in order to explain neutrino oscillations, but so far we do not have direct measurements of neutrino masses thus we are not allowed to claim that these terms exist. Summarizing: the effective SM is not a verified theory. • The origin of these irrelevant terms is a consequence of integrating out fields with a mass much greater than the energy scale we are interested in. This could be the case of the neutrino mass term and a right-handed neutrino. For instance, in quantum electrodynamics, if one is interested in the physics at much lower energies than the electron mass, one can integrate (or nature integrates-out) out the electron field obtaining an effective Lagrangian (Euler-Heisenberg Lagrangian) with terms with higher order derivatives like $\frac{\alpha ^2}{m_e^4}~F_{\mu\nu}~F^{\mu\nu}~F_{\rho\sigma}~F^{\rho\sigma}$ (which contains four derivatives). These are terms suppressed by coupling constants ($\alpha$) and high-energy scales ($m_e$). There are terms with a number of derivates arbitrarily high, and they come from inverses of differential operators. This makes that the higher order derivatives do not enter in the zeroth-order equation of motion. However, in a fundamental theory (in contrast to an effective one), finite higher order derivatives are not allowed in interactive theories (there are some exceptions with gauge fields, but for example a generic $f(R)$ theory of gravity is inconsistent). The reason is that those theories are not bounded from bellow (see Why are only derivatives to the first order relevant?) or, in some quantizations, contain negative norm states. These terms are among the forbidden operators in Gell-Mann's totalitarian principle. In summary, evolution equations are order two because of existence of a normalizable vacuum state and unitarity (including here the fact that physical states must have positive norm). Newton was right when he wrote $$\ddot x=f(x,\dot x)$$ - It was already noted in other answers that fields in physics are not always governed by second order partial differential equations (PDEs). It was said, e.g., that the Dirac equation is a first-order PDE. However, the Dirac equation is a system of PDEs for four complex functions - components of the Dirac spinor. It was also mentioned that any PDE is equivalent to a system of PDEs of the first order. I mentioned previously that the Dirac equation in electromagnetic field is generally equivalent to a fourth-order partial differential equation for just one complex component, which component can also be made real by a gauge transform (http://akhmeteli.org/wp-content/uploads/2011/08/JMAPAQ528082303_1.pdf (my article published in the Journal of Mathematical Physics) or http://arxiv.org/abs/1008.4828 ). Let me also mention my article http://arxiv.org/pdf/1111.4630.pdf , where it is shown that the equations of spinor electrodynamics (the Dirac-Maxwell electrodynamics) are generally equivalent to a system of PDEs of the third order for complex four-potential of electromagnetic field (producing the same electromagnetic field as the usual real four-potential of electromagnetic field). - The reason for equations of physics, not being of second order is due to the so-called Ostrogradskian instability. (see paper by Woodard). This is a theorem, which states that equations of motion with higher-order derivatives are in principle unstable or non-local. This is easily shown using the Lagrangian and Hamiltonian formalism. The key point is that in order to get an equation of motion of third order in the derivatives, we need a Lagrangian that depends on the coordinates and the generalized velocities and accelerations: $L(q,\dot{q},\ddot{q})$. By performing a Legendre transformation to obtain the Hamiltonian, this implies that we need two generalized momenta. The Hamiltonian results to be linear in at least one of the momenta and therefore it is unbounded from below (it can become negative). This corresponds to a phase space in which there are no stable orbits. I would like to write the proof here, but it was already answered in this post. There the question is why Lagrangians only have one derivative, but it is actually closely related, since one can always find the equations of motion from a Lagrangian and viceversa. Citing Woodard: "It has long seemed to me that the Ostrogradskian instability is the most powerful, and the least recognized, fundamental restriction upon Lagrangian field theory. It rules out far more candidate Lagrangians than any symmetry principle. Theoretical physicists dislike being told they cannot do something and such a bald no-go theorem provokes them to envisage tortuous evasions. ... The Ostrogradskian instability should not seem surprising. It explains why every single system we have so far observed seems to be described, on the fundamental level, by a local Lagrangian containing no higher than first time derivatives. The bizarre and incredible thing would be if this fact was simply an accident." - 2 – Qmechanic♦ Mar 24 at 20:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 37, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9421980381011963, "perplexity_flag": "head"}
http://mathoverflow.net/questions/83943/a-module-with-extim-r-0-for-all-i-0
## A Module with $Ext^i(M,R) = 0$ for all $i > 0$ ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Let $M$ be a finitely generated module over a noetherian local ring $R$. We can take our ring to be Cohen-Macaulay. Suppose $M$ satisfies the condition $Ext^i(M,R) = 0$ for all $i > 0$. We want to know if $M$ is projective? One can easily show from the given condition that for any module $N$ of finite projective dimension, we do have $Ext^1(M,N) = 0$. Thus if (for example) our ring $R$ were regular local ring (which means any f.g module will have finite projective dimension), then we get the desired result (that $M$ is projective). Now, Regular local => Cohen-Macaulay. So my first question is can we say the same with only Cohen-Macaulay condition on $R$ (that $M$ is projective)? If it helps, we may assume that $M$ itself has finite projective dimension. My second question is that can we write any f.g. module on (say) a noetherian local ring, as direct limit of modules having finite projective dimension? - 1 There are many counterexamples, mostly (but not all) having to do with the ring being Gorenstein. If that's not a familiar word, you might look it up; it fits between regular and CM. Also, if M has finite projective dimension and all the Ext's vanish, then M is free -- this is a relatively easy exercise. – Graham Leuschke Dec 20 2011 at 15:32 1 I think if you assume $M$ of finite projective dimension, then the answer is yes over any ring (you don't need CM), because if $M$ is not free and you take a minimal free resolution of $M$ and apply $\mathrm{Hom}_R(\:\cdot\:,R)$ to it, the last map on the left after you apply Hom cannot be surjective and will have a cokernel. It cannot be surjective because the resolution is minimal. – Mahdi Majidi-Zolbanin Dec 20 2011 at 15:32 Sorry Graham, looks like I answered the exercise! – Mahdi Majidi-Zolbanin Dec 20 2011 at 15:34 1 Compare with the Auslander-Reiten conjecture that $Ext^i(M,M\oplus R)=0$ for all $i>0$ implies $M$ is projective. They were originally interested in Artin algebras but people have considered things like Gorenstein rings. – Benjamin Steinberg Dec 20 2011 at 16:12 @Mahdi, thanks. @Benjamin, thanks. I was not aware of this conjecture. – Amit Dec 21 2011 at 16:33 ## 1 Answer As regards the second question: consider for example $R = k[[x]]$ where $k$ is a field. By the structure theory for modules over a PID, indecomposable finitely generated $R$-modules are cyclic, of the form $R/(x^n)$ for $n\geq 0$ or $R$. No module of the form $R/(x^n)$ contains a free submodule, so they can't be a direct limit of free modules. - Thanks for this example. – Amit Dec 21 2011 at 10:54 Actually, this is a terrible example. All these modules have finite projective dimension! I should have used $k[x]/(x^2)$; the residue field has infinite projective dimension and is not a direct limiit of modules of finite projective dimension. – Graham Leuschke Dec 21 2011 at 11:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 30, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9044005870819092, "perplexity_flag": "head"}
http://nrich.maths.org/6431
### Weekly Challenge 43: A Close Match Can you massage the parameters of these curves to make them match as closely as possible? ### Weekly Challenge 28: the Right Volume Can you rotate a curve to make a volume of 1? ### Weekly Challenge 46: the Jabber-notty Can you invert this confusing sentence from Lewis Carrol? # Weekly Challenge 44: Prime Counter ##### Stage: 5 Challenge Level: The prime counting function $\Pi(x)$ counts how many prime numbers are less than or equal to $x$ for any positive value of $x$. Since the primes start $2, 3, 5, 7, 11, 13, \dots$ we therefore have, for example, $\Pi(11) = 5$ and $\Pi(8) = 4$. It is believed by mathematicians that $\frac{x}{\ln(x)}$ is a good approximation to $\Pi(x)$. It is believed to get progressively better as $x$ increases to very, very large numbers. How well does it work for lower values of $x$ (up to the 100,000th prime) Use the following interactivity to examine the percentage accuracy of this approximation for these values. This text is usually replaced by the Flash movie. Use a few sensible values / choice of axes to try to create a useful graphical representation of $\ln(\Pi(x))$ against $\ln(x)$ for $x$ taking values up to about a million. Use your curve to try to predict $\Pi(x)$ for a few values of $x$ away from your data points. How close are your estimates for whole number multiples of $100,000$? Use your judgement to try to extrapolate your curve to make approximations as to \Pi(10^7)\quad\quad \Pi(10^8)\quad\quad \Pi(10^9) Did you know ... ? Although prime numbers are distributed with some large-scale regularity across the natural numbers, as this problem indicates, there is no known process which generates the prime numbers. The NRICH Project aims to enrich the mathematical experiences of all learners. To support this aim, members of the NRICH team work in a wide range of capacities, including providing professional development for teachers wishing to embed rich mathematical tasks into everyday classroom practice. More information on many of our other activities can be found here.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9154692888259888, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/16403/knights-and-knaves-who-are-b-and-c-task-26-from-what-is-the-name-of-this-boo
# Knights and knaves: Who are B and C? (task 26 from “What Is the Name of This Book?”) I have the following issue #26 from What Is the Name of This Book? of R. Smullyan: There is a wide variety of puzzles about an island in which certain inhabitants called "knights" always tell the truth, and others called "knaves" always lie. It is assumed that every inhabitant of the island is either a knight or a knave. I shall start with a well-known puzzle of this type and then follow it with a variety of puzzles of my own. According to this old problem, three of the inhabitants — A, B, and C — were standing together in a garden. A stranger passed by and asked A, "Are you a knight or a knave?" A answered, but rather indistinctly, so the stranger could not make out what he said. The stranger than asked B, "What did A say?" B replied, "A said that he is a knave." At this point the third man, C, said, "Don't believe B; he is lying!" The question is, what are B and C? I supposed that truth tables can be used, and composed the following: ```` | | | F1 | F2 | G ===|===|===|==============|========|========= A | B | C | B ↔ (A ↔ ¬A) | C ↔ ¬B | F1 ^ F2 ===|===|===|==============|========|========= 1 | 1 | 1 | 0 | 0 | 0 1 | 1 | 0 | 0 | 1 | 0 1 | 0 | 1 | 1 | 1 | 1 1 | 0 | 0 | 1 | 0 | 0 0 | 1 | 1 | 0 | 0 | 0 0 | 1 | 0 | 0 | 1 | 0 0 | 0 | 1 | 1 | 1 | 1 0 | 0 | 0 | 1 | 0 | 0 ```` Provided that: 1. We use $A$, when A is a knight, and $\neg A$, when A is a knave. 2. $F1$ is what B said ($A \leftrightarrow \neg A$), i. e. B said that A said he's knave. Therefore, B is telling the truth if and only if he's a knight ($B$). 3. $F2$ means that C is a knight if and only if he's telling the truth, i. e. B is a knave ($\neg B$). 4. $G$ allows us to select only those claims amongst $F1$ and $F2$ which are true. can I safely say that we have only two cases, when $G$ is true and the following conclusions can be made: 1. B is a knave, because there are $0$s (false) in the appropriate rows. 2. C is a knight, because B is telling lies, and there are $1$s (true) in the appropriate rows. 3. We cannot say what is A exactly, because we couldn't make out what he said, and there are two cases in the table with $0$ and $1$ in the appropriate rows, where $G$ is true. Please tell me if my calculations and the truth table are right, not only the conclusion. The best answer is one, which either explains what I'm missing in my truth table, or contains a correct one instead of mine, being supposedly wrong. I'm trying to figure out how they can be used, and I guess this issue is quite simple to play with, after all you have the same reasoning in your mind. Thanks in advance. - Your calculation of F1 is wrong, and I also think you don't need it (why assume that there must be a knight?). Besides, you can reason that B must be lying (since A must have said he's a knight), and so C must be telling the truth. – Yuval Filmus Jan 5 '11 at 7:13 ## 5 Answers Your table is incorrect in the F1 column in that $A \vee B \vee C$ should evaluate to 0 when all three are 0 (the bottom line). Otherwise your calculations are OK. Edit: this column has been removed. The definition of F1 is not what you want. F1 is supposed to represent whether A spoke the truth, so should just be A. Edit: as this column has been removed, this does not apply. Edit: this is incorrect as I misread the table. See the paragraph below. The definition of G is the biggest error. G should be $(A \leftrightarrow F1) \wedge (B \leftrightarrow F2) \wedge (C \leftrightarrow F3)$. This is the heart of the matter. You want A to have spoken the truth if and only if A is a knight, and the same for B and C. G should always be of this form (maybe more terms if you have more individuals involved) and will pick out the lines where the truth value of the statements matches the type of the individuals. Added: G is correct. Your F1 says "B spoke properly for his type" and F2 says "C spoke properly for his type". So you want to find the cases both spoke properly. The fact that the 1's appear opposite B=0, C=1 both times says that B is a knave and C is a knight. - I suspect all are clear that we don't know what A said, we only know what he could've said and what couldn't. Because if he would have said something like "I have bananas in my ears", we wouldn't even consider his claims, or even would be unable to answer what B and C are. – Yasir Arsanukaev Jan 7 '11 at 14:44 That is true, but we know the truth value of what he said matches his type. This is why I said F1 should be A. If you follow it through, this is not contributing to the solution at all, because we don't know what A is at the end. I was trying to stay in the spirit of your table. The heart of the other arguments that B is a knave comes out in your table under F2 when you say $B \leftrightarrow (A \leftrightarrow \neg A)$. As $(A \leftrightarrow \neg A)$ is always false, this requires B to be false. – Ross Millikan Jan 7 '11 at 15:00 A could have said a lot of things. The one thing he cannot have said is "I am a knave", which marks B as a liar. If B claimed A said "I have bananas in my ears", even observing no bananas in A's ears would only tell us that (at least) one is a liar. – Ross Millikan Jan 7 '11 at 15:02 I just got rid of $F1$ in the question, it has actually been put there because of one of the next issues after #26 (this one) stating that there would be at least one knight amongst inhabitants. Nothing seems to confuse me any more. – Yasir Arsanukaev Jan 7 '11 at 15:22 @Yasir Arsanukaev: Looking at your table again, I see that you put the equivalence I said you should have in G into the F1 and F2 columns. So F1 represents "B spoke in conformance with his type", not "B told the truth". In that case your G is correct. I will edit my response. – Ross Millikan Jan 7 '11 at 17:58 A truth table is really the wrong tool for this; and your table contains errors. For example, you assert that A v B v C is always true, even in the case when A, B, and C are all false! Truth tables (when correctly filled out) are useful in some situations, but the proper approach to this problem is to take a more direct deductive route. First, we adopt the rule that knights always speak true statements, and knaves never do, and everyone is either a knight or a knave (and not both, unless they never speak!). Under these assumptions, no one can say "I am a knave": a knight cannot say it, because it would not be a true statement; and a knave cannot say it because it would be a true statement. Therefore, when B states "A said 'I am a knave'", then it is immediately the case that B is stating a falsehood, and thus we have proven that B is a knave. Since C states a truth (i.e., that B has stated a falsehood), C cannot be a knave; therefore we have proven C is a knight. That is much simpler and clearer than a logic table... in my humble opinion! - The book has explanation similar to yours. I'd like to know how the truth table should look like though, so that I know what I'm missing besides $F1$ is not needed as @Yuval and you already mentioned. – Yasir Arsanukaev Jan 5 '11 at 7:35 I edited to indicate that your calculation of F1 is incorrect. It's still, in my opinion, not a particularly fruitful way to approach this problem. – Chas Brown Jan 5 '11 at 7:40 The problem is this: We have three inhabitants A, B and C. B said "A said "A is a knave"", and C said "B is a knave". So the information we have is (if you read the first paragraph of the chapter I referred to in my previous answer and carefully follow the instructions): $k_B \leftrightarrow (k_A \leftrightarrow \neg k_A)$ $k_C \leftrightarrow \neg k_B$ Your truth table for both formulas (as it is now written) is correct, you simply used A instead of $k_A$, but that's all. To know which are the possible situations under the givens, you just look for the rows that give simultaneously a value 1 for both formulas. These are the two rows with a 1 in the last column of your table. What the table is telling you, then, is that there are only two possible situations. In both of them B is a knave and C is a knight. So that's true no matter the situation. Instead, you cannot tell what A is because A can either be a knight or a knave. - I agree with Chas Brown that the truth table is more work-intensive than required, but it can be used. What you need to do is (correctly) calculate the truth value of the statements for each possibility of knight/knave assignment. Then see if the truth value corresponds to the type for each person. This is a poor selection of a problem to demonstrate the method. But we will try. The first column would be A's statement, which as Chas points out, is "I am a knight" regardless of his status. The second column would be the truth value of B's statement, which as Chas says is always false. The third column would be the truth value of C's statement, which is always true. Then the fourth column would be whether the truth value in the second column corresponds to B's type. This is correct if B is a knave. Similarly the fifth column would be whether the truth value in the third column corresponds to C's type, which is true if C is a knight. The sixth column would be the and of the fourth and fifth. Lines where the sixth column shows true are possible assignments. - To learn about how to use truth tables, and formal logic in general, to solve Smullyan's puzzles on Knights and Knaves you can read Smullyan's own "Logical Labyrinths" esp. chapter 8: Liars, Truth-tellers and propositional logic:
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9585407376289368, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/43489/list
2 retag 1 # Analysis of a quadratic diophantine equation Hi! This is my first post on Math Overflow. I have two equations: $a(3a-1) + b(3b-1) = c(3c-1)$ and $a(3a-1) - b(3b-1) = d(3d-1)$. I'm trying to find properties of $a$ and $b$ that lead to solutions, where $a, b, c, d \in \mathbb{N}$. I'm having trouble applying any of the techniques in my abstract algebra book, as they mostly only apply to linear Diophantine equations. So far, I only really have managed to deduce the following things: $2b(3b-1) + d(3d-1) = a(3a-1) +b(3b-1)$ $2b(3b-1) = c(3c-1) - d(3d-1)$ $2b(3b-1) = (c-d)(3(c+d)-1)$ Any ideas on where to go from here would be greatly appreciated. Thanks!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9673390984535217, "perplexity_flag": "head"}
http://mathhelpforum.com/algebra/124860-two-angles-complementary-print.html
# Two angles are complementary. Printable View • January 21st 2010, 05:49 PM bball20 Two angles are complementary. Two angles are complementary. The sum of the measure of the first angle and half the second angle is 68 degrees. Find the measures of the angles: What is the measure of the smaller angle? What is the measure of the other angle? • January 21st 2010, 05:56 PM skeeter Quote: Originally Posted by bball20 Two angles are complementary. The sum of the measure of the first angle and half the second angle is 68 degrees. Find the measures of the angles: What is the measure of the smaller angle? What is the measure of the other angle? let $a$ and $b$ represent the measure of the two angles. $\textcolor{red}{a+b = 90}$ $\textcolor{blue}{a + \frac{b}{2} = 68}$ solve. • January 21st 2010, 05:56 PM emathinstruction Quote: Originally Posted by bball20 Two angles are complementary. The sum of the measure of the first angle and half the second angle is 68 degrees. Find the measures of the angles: What is the measure of the smaller angle? What is the measure of the other angle? We know the angles sum to 90. Let x be the measure of the first angle and y be the measure of the second angle. So $x+y=90$ Also, the measure of the first angle (x) and half the second angle (.5y) is 68 So $x+.5y=68$ can you solve those equations • January 21st 2010, 06:21 PM bball20 Thanks! Small one is 44 other is 46 All times are GMT -8. The time now is 05:52 PM.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9300076961517334, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/12669/are-the-rest-masses-of-fundamental-particles-certainly-constants/12670
# Are the rest masses of fundamental particles certainly constants? In particular I am curious if the values of the rest masses of the electron, up/down quark, neutrino and the corresponding particles over the next two generations can be defined as constant or if there is an intrinsic uncertainty in the numbers. I suppose that since there is no (as of yet) mathematical theory that produces these masses we must instead rely on experimental results that will always be plagued by margins of error. But I wonder does it go deeper than this? In nature do such constant values exist or can they be "smeared" over a distribution like so many other observables prior to measurement? (Are we sampling a distribution albeit a very narrow one?) Does current theory say anything about this? (i/e they must be constant with no wiggle room vs. no comment) I am somewhat familiar with on-shell and off-shell particles- but I must confess I'm not sure if this figures into my question. I'd like to say that as an example I'm talking about the rest mass of the electron as could be determined by charge and charge/mass ratios. But perhaps this very number is itself influenced by off-shell electron masses? Perhaps this makes no sense. Needless to say I'd appreciate any clarification. - ## 7 Answers They most certainly are not. You are right that there is no theory that explains masses (these are input as parameters) but note that our current theories used to explain e.g. LHC data (that is, quantum field theories) inevitably come with a scale attached: you need to describe upto what energies you do physics otherwise the theory just doesn't make sense [insert usual story here about renormalization and infinities often told to scare little children before their going to bed]. Now, this shouldn't come as such a surprise since there are new particles awaiting discovery just behind the corner, so claiming that we have a complete theory would be preposterous. Instead, what we claim is that we have a good theory that works upto some scale. Consequently, all of the parameters that are inserted by hand must depend on the scale. Again, this is because theories at different scales are potentially completely different (e.g. at the "present scale" there is no supersymmetry assumed while it is conceivable that at a little higher scale our theories will have to include it) and so the parameters of the theories that are used to connect the theory with experiment potentially have no relation to each other. This phenomenon is known as running of coupling constants or, briefly, the running coupling. The moral is that all the rest masses and interaction "constants" depend on some scale. They shouldn't be thought as something inherently deep about the nature but just as fitting parameters that describe only effective masses and effective coupling. To illustrate why they are just effective: consider an electron in classical physics. We can measure its charge by usual methods. This value is the long-distance low-energy $e(E \to 0)$ limit of the scale dependent coupling $e(E)$. As you increase the energy and try to probe electron at shorter distances you will find that lots of others electron-positron pairs appear, screening the electron, and the charge that you will measure will be different due to these changed conditions (we talk about the polarization of vacuum). Just for the sake of completeness: one could say that $E \to 0$ limit is the most important thing about couplings and that we should take that as definition. If so, then these long-distance couplings are indeed constants as one was used in classical physics. But this point of view is worthless in particle physics where people instead try to make $E$ as high as possible to obtain a theory valid at high scales (since this is what they need at LHC). - Why the downvote? – Marek Jul 24 '11 at 15:54 Initially I too was concerned about the downvote. I read the link and followed up with reading on renormalization. This does not seem do be just a trick of math but rather it is modeling what happens at smaller and smaller distances. In which case I see we view mass as not constant at all but rather a scale related parameter. But is it not the masses at E-->0 that are adjusted during renormalization- and these are true constants? – jaskey13 Jul 24 '11 at 22:28 @jaskey: yes, there are two (interrelated) effects present: that of renormalization (known as subtraction of infinities) and renormalization group (i.e. scale dependence). In the example with electron I mentioned, even at the $E \to 0$ limit we need to use renormalization since there is always screening of charge due to electron's own field and this turns outs to be infinite if computed naively. One uses renormalization to make it finite. But then if we change the scale $E$ we'll find that the (now finite) charge changes again (due to higher-energy scale effects). – Marek Jul 24 '11 at 22:47 I guess I can think of three possible ways in which masses could be non-constant. (1) They could change due to quantum-mechanical fluctuations, (2) they could be slightly different for different particles at the same time, (3) or they could change over cosmological time intervals. Number 1 seems to be what you had in mind, but I don't think it works. The standard picture is that for a particle of mass m, its momentum p and mass-energy E can fluctuate, but the fluctuations are always such that $m=\sqrt{E^2-p^2}$ (with c=1) stays the same. Re #2, here's some good info: Are all electrons identical? Re #3, one thing to watch out for is that it is impossible, even in principle, to tell whether a unitful fundamental constant is changing. The notion only makes sense when you talk about unitless constants: Duff, http://arxiv.org/abs/hep-th/0208093 However, it certainly does make sense to talk about changes in the unitless ratios of fundamental constants, such as the ratio of two masses or the fine structure constant. There are claims by Webb et al. J.K. Webb et al., http://arxiv.org/abs/astro-ph/0012539v3 that the fine structure constant has changed over cosmological timescales. Chand et al., Astron. Astrophys. 417: 853, failed to reproduce the result, and IMO it's bogus. I'm not aware of any similar tests for the ratios of masses of fundamental particles. If you change the ratio of masses of the electron and proton, it will change the spectrum of hydrogen, but at least to first order, the change would just be a rescaling of energies, which would be indistinguishable from a tiny change in the Doppler shift. Brans-Dicke gravity (Physical Review 124 (1961) 925, http://loyno.edu/~brans/ST-history/ ) has a scalar field that can be interpreted as either a local variation in inertia or a local variation in the gravitational constant G. This could in some sense be interpreted as meaning that, e.g., electrons at different locations in spacetime had different masses, but all particles would be affected in the same way, so there would be no effect on ratios of masses -- hence the ambiguity between interpreting it as a variation in inertia or a variation in G. B-D gravity has a unitless constant $\omega$, and the limit $\omega\rightarrow\infty$ corresponds to general relativity. Solar system tests constrain $\omega$ to be at least 40,000, so B-D gravity is basically dead these days. - Or (most relevantly), they could be a consequence of the renormalization group theory of QFT where it turns out they actually aren't constants at all. This is e.g. the famous unification of forces in GUT theories: coupling parameters of electroweak (this has two components $U(1)$ and $SU(2)$, and consequently is described by two parameters, but these components are not the same as EM and weak parameters we are used to in low-energy physics) and strong force depend on the scale and at GUT scale all three of them meet at the same point. – Marek Jul 24 '11 at 6:27 About point #1. Since we find rest mass as the invariant of the energy/momentum four-vector- we similarly find an invariant length for the position/time four-vector. I see the position-momentum uncertainty relation and then the energy-time uncertainty relation. Somewhere (think it was Griffiths Q&M) I read the energy-time relation is a byproduct of extending the position-momentum relation into special relativity (3-->4 vectors) If this is correct then should there be an uncertainty relationship for the invariant length of a particles four vector and it's rest mass? Or can't one do that? Why? – jaskey13 Jul 24 '11 at 22:18 if you think about it, any time you perform a measuremeant, you must use a device which is limited by quantum mechanical laws. Any device measuring the weight of a particle must record the weight via some state transition internal to the device... (e.g. the device must change in some way when the particle is placed near it so even if the particle is at rest when the observation is made, the device must have some transition in internal position and momenta to record an observation). So I would say that quantum mechanics does impose a limit on how close we can get to finding the actual rest mass of a particle. That limit is always loosely proportional to plancks constant or plancks constant times 1/2. - notice that the uncertainty doesn't come from the mass itself, but from the process of measuring, which by definition will involve some particle motion, which will in turn limit the uncertainty to the heisenberg limits. – Timtam Jul 24 '11 at 5:06 Then by your answer you would contend that the rest masses are real constants but forever out of our reach due to our limited measuring process? Perhaps I may be stepping into philosophy here- but how can one say something is real (or constant) if it cannot be verified as such by observation? – jaskey13 Jul 24 '11 at 22:55 I'm not saying it's constant, i'm saying there will always be some uncertainty due to quantum mechanics. – Timtam Jul 25 '11 at 0:01 thus it is impossible to verify whether any observable is "truly" a constant. But to my mind getting the uncertainty to a factor of \$10^-34 is good enough approximation that to me, i'll just go ahead and assume it's a constant... what is motivating this question by the way – Timtam Jul 25 '11 at 0:04 I read text books but am not in a university- so when I cannot resolve something I ask it here. I feel that I have over and over again seen the assumption of rest mass as a constant for a given particle but seen no reason as to why this should be so- especially given the nature of the quantum world. I'd like to see theoretical motivation for constancy- but as of yet (and especially considering the array of answers) I see none – jaskey13 Jul 25 '11 at 1:26 show 2 more comments The rest masses of fundamental particles certainly are NOT constants ! I will do copy/past from pages 6-9 (of 20) of a recent document by Alfredo (Independent researcher) the sufficient information to show how atomic properties can change, including mass, at a cosmological level. Starting only from data and making no hypothesis he formally presents a dilation(scaling) model of the universe where the atoms are not invariant and physical laws hold, without contradiction with GR, and compares the model with FRW and $\Lambda$CDM models. (the whole paper deserves your attention, it is very clear and it only uses basic physics, imo accessible to undergraduated students. He makes a full study on how we measure, units, local and field constants and laws, how can exist a variation and why we are not aware of such, and a lot more) quoting Alfredo: How the universe can be scaling We have seen that if space expansion traces a scaling phenomenon, we should expect to detect varying field constants; we have now to find out why that is not observed. The first thing to do is to look up to the dimension functions of field and some other constants: $$\begin{array}{ccl} \left[G\right] & = & M^{-1}L^{3}T^{-2}\\ \left[\varepsilon\right] & = & M^{-1}Q^{2}L^{-3}T^{2}\\ \left[c\right] & = & LT^{-1}\\ \left[h\right] & = & ML^{2}T^{-1}\\ \left[\sigma\right] & = & M^{-3}L^{-8}T^{5}. \end{array}$$ The equations of field constants ($G,\varepsilon$ and c) display a peculiar characteristic: the summation of exponents of the dimension function of each field constant is zero! This is unexpected and does not happen with the other constants. It means that if all the four base units concerned change by the same factor, $$M=Q=L=T,$$ then the measuring units of field constants hold invariant, $[G]=[\varepsilon]=[c]=1$. To see the relevance of this, let us consider that the atomic units of mass, charge, length and time change all at the same rate in relation to the space units. In that case, because of the property shown above, the atomic units of the field constants hold invariant in relation to the space ones and, therefore, the field constants are invariant in both systems (they are invariant in space units by definition of these ones). The geometry of space would be scaling in atomic units while the value of field constants would hold invariant--- which is exactly what cosmic data seems to display. The fact that the dimensions of field constants display null summation of exponents can just be a coincidence, but it is also the kind of indication we were looking for, a property embedded in physical laws. This is the only way we can consider a previously unknown fundamental property without conflicting with established physics. We have now the fundamental understanding that can support a scaling (dilation) model of the universe and we will now proceed to the formal development of that model. ... Hence, one of the systems of units is defined from matter properties, designated here by atomic system and identified by A ("A" from "atomic") and the other is the space system of units, identified by S ("S" from "space"); the later is such that space properties (geometry and field constants) remain invariant in it, which is required to qualify the S system as internally defined in relation to space. Thus, the conditions that define the S system are the following: - The units of S are such that the S measures of field constants hold invariant; - The length unit of S is such that the wavelength of a propagating radiation in vacuum is time invariant. The base quantities are Mass (M), Charge (Q), Time (T), Length (L) and Temperature ($\theta$), and the ratio between A and S base units is denoted by $M_{AS},Q_{AS},T_{AS},L_{AS},\theta_{AS}$. Note that the ratio between the A and S units of any quantity or constant is therefore expressed by the respective dimension function; ... Postulates The model will be deducted not from hypotheses but from relevant observational results, which are stated as postulates: 1. In atomic units (A), all local and field constants are time-independent. 2. $L_{AS}\,$decreases with time. The first postulate is not fully supported in experience, as we cannot state it with the required error margin; however, we have also no sound indication from observations that it might be otherwise. The second postulate represents the observed phenomenon of space expansion in atomic units, stated in this unusual way because it is presented as a function of $L_{AS}\,$, i.e., of the ratio between atomic and space length units and not the inverse, as usual. ... S units, by definition, are such that (eq.1) \begin{equation} \frac{dG_{S}}{dt_{S}}=\frac{d\varepsilon_{S}}{dt_{S}}=\frac{dc_{S}}{dt_{S}}=0. \end{equation} Since the field constants are time-invariant also in atomic units, as stated by postulate 1, and since the two systems of units are identical at $t=0$, then the values of these constants are the same in the two systems at whatever time moment: (eq.2) $$\begin{array}{ccccc} G_{A} & = & G_{S} & = & G\\ \varepsilon_{A}^{\vphantom{l}^{\vphantom{L}}} & = & \varepsilon_{S} & = & \varepsilon\\ c_{A}^{\vphantom{l}^{\vphantom{L}}} & = & c_{S} & = & c. \end{array}$$ The relation between the S and A values of each constant is the one between the respective A units and S units, which is given by the dimension function, therefore (eq.3) $$\begin{array}{ccccl} \dfrac{G_{S}}{G_{A}} & = & \left[G\right]_{AS} & = & {M_{AS}^{-1}}{L_{AS}^{3}}{T_{AS}^{-2}}=1\\ \dfrac{\varepsilon_{S}^{\vphantom{l}^{\vphantom{L}}}}{\varepsilon_{A}} & = & \left[\varepsilon\right]_{AS} & = & {M_{AS}^{-1}}{Q_{AS}^{2}}{L_{AS}^{-3}}{T_{AS}^{2}}=1\\ \dfrac{c_{S}^{\vphantom{l}^{\vphantom{L}}}}{c_{A}} & = & \left[c\right]_{AS} & = & L_{AS}{T_{AS}^{-1}}=1. \end{array}$$ This set of equations implies $M_{AS}=Q_{AS}=T_{AS}=L_{AS}$. By postulate 2, $L_{AS}$ is a time function, therefore the solution can be presented as: (eq.4) $$\begin{equation} M_{AS}(t)\,=Q_{AS}(t)\,=T_{AS}(t)\,=L_{AS}(t)\,.\, \end{equation}$$ Note that temperature is independent of this result. The next step is to define this time function, which is the space scale factor law. As all the above four base quantities follow this function, it is convenient to identify it by a specific designation; in this work this scaling law is identified by the symbol $\mathcal{\alpha}$: (eq.5) $$\begin{equation} \alpha(t)\,=\, L_{AS}(t). \end{equation}$$ ... The scaling law To make no hypothesis on the cause of the expansion is to consider that expansion is due to a fundamental property; to consider otherwise would imply a specific hypothesis on a particular phenomenon driving the expansion. Therefore, for this model, the space expansion is due to a fundamental property, tracing a self-similar phenomenon. Likewise, as no hypothesis is made on how fundamental properties may vary with position on space and time, it is assumed that they do not depend on it. This implies that the scaling has a constant time rate in some physically relevant system of units, i.e., that the scaling law is exponential in such system of units. There are only two possibilities in the framework established for this model: either space expansion is exponential in A units ($L_{SA}(t_{A})=\alpha^{-1}(t_{A})$ is exponential) or matter evanesces exponentially in S units ($L_{AS}(t_{S})=\alpha(t_{S})$ is exponential). The former case does not fit observations; only the later case is possible. The general expression for a scaling law exponential in S units is (eq.6) $$\begin{equation} \alpha(t_{S})=k_{1}e^{k_{2}\cdot t_{s}}\,; \end{equation}$$ at the moment $t_{A}=t_{S}=0$ it is $\alpha(0)=L_{AS}(0)=1,$ so $k_{1}=1$; note now that (eq.7) $$\begin{equation} \frac{dt_{S}}{dt_{A}}=T_{AS}=\alpha\, \end{equation}$$ which shows that the variation of the measure of time is inversely proportional to the time unit; and that (eq.8) $$\begin{equation} r_{A}=r_{S}{L_{AS}^{-1}}=r_{S}\cdot\alpha^{-1}, \end{equation}$$ where r is the distance to some point, or its length coordinate; as the rate of space expansion at t=0 is, by definition, the value of Hubble constant, represented by $H_{0}$, then (eq.9) $$\begin{equation} H_{0}=\left(\frac{1}{r_{A}}\frac{dr_{A}}{dt_{A}}\right)_{0}=-k_{2}, \end{equation}$$ therefore (eq.10) $$\begin{equation} \alpha(t_{S})=e^{-H_{0}\cdot t_{S}}. \end{equation}$$ Hubble constant is the present space expansion rate for an atomic observer and is the matter evanescence rate (negative) for a space observer. - 1 If you decided that masses depend on time, then there is no need to prove that masses depend on time ;-) – Vladimir Kalitvianski Jul 25 '11 at 9:02 @Vladimir I see no logic in your sentence. It's a fallacy. I will try to rephrase it in a logical way. "If you decided to show that masses can depend on time, you must use a referential 'above' the particles, then you need to prove that 'physical laws'(eq.3) are still valid, as the author did using dimensional analysis. – Helder Velez Jul 25 '11 at 13:45 Sorry this is utterly ridiculous! – Columbia Jul 25 '11 at 14:29 @Columbia When I was a little boy I needed 40 steps to cross the road in front of my home. Now I measure it to be 20 steps and we have a consensus, among all that grew with me, that the road is shrinking, backed with careful measures. My special world is made of equal kids and when I was told that our height and weight had changed I reply utterly ridiculous!. Mass/length atom invariance is a hidden claim of BBT (it is unavoidable and it was never proved, and not been explicit makes it worse). Columbia please read the argument carefully and try to find a flaw. It should be utterly simple. – Helder Velez Jul 25 '11 at 15:20 Marek, you know well that the total wave function of any system is a product of wave functions of independent degrees of freedom (or "particles") of this system. Each particle wave function may be in the ground or excited state and is determined with some specific to it constants, including mass. This fact does not depend on the state of other wave functions in this product so if a heavy particle wave function remains in its ground state (not yet born particle), it does not influence the other constants (masses of already born particles). When we discover a new heavy particle and use its excited wave function, its presence does not influence the previous particle masses. Masses in such a construction are scale and energy independent. For example the photon mass is the same in presence of electron and in absence of it, and vice versa. The energy in an experiment is not a scale of the theory but a given energy of the system, here somebody fooled you. At a given total energy the particle populations may be quite different. It's an external parameter to the theory; the theory should work at any energy used in calculations. So the "running constants" is not the physical property of particles or nature but a feature of the human failure do describe the excitation processes properly. Remember the useless self-action we introduce in the theory (for the sake of what?). We are obliged to get rid of its effects in the end. As well, we couple coupled already things, we couple them with a wrong understanding of physics and obtain immediately lots of problems. And you promote all that as "physics". -1 for "... all of the parameters that are inserted by hand must depend on the scale.". - so you are downvoting answers that are based on the standard physics literature, accepted world-wide in the physics community and are fundamental insights that people (e.g. K. Wilson) earned a Nobel prize for? Well, actually, it makes sense that people like you who don't have a slightest clue about physics downvote correct answers. It just makes me little concerned about the future of this site, if more mystifiers like you appear... – Marek Jul 25 '11 at 7:29 – jaskey13 Jul 26 '11 at 21:00 @jaskey13: I do not expect you to read and accept my point of view but the toy problem I presented in my essay is exactly solvable, so you can see connection between bare constants, physical constants and cut-off contained in $\eta$. You will see that no $\Lambda$-dependent things is involved in the exact calculations so everything cut-of- or scale-dependent is a big science about nothing. As well, you can consider my opus to be mainstream because there is no error in it. – Vladimir Kalitvianski Jul 26 '11 at 21:08 The rest masses are certain as they correspond to the ground states of QM systems where the energy is certain. It is valid for any system - "elementary" or compound. High temperatures may create the energy uncertainty and it may influence the "measured" mass. The mass "measurement" can be indirect via precise transitions expressed with theoretical formulas involving mass. Many current theories fail to simply use the experimental data on masses and charges and it is obviously not a nature feature but a human being imperfection. If you read Marek's answer, you will learn that, as soon as Marek does not know all the particles the nature can produce, he and his theory cannot be wrong. So masses and charges are running for him, they run his errands. And the main errand is to describe the experimental data with a wrong theory. Difficult task but fortunately feasible with help of running constants. EDIT: For those who does not know that most of "our theories" are non renormalizable, the usual blah-blah about "scale dependence" is not applicable at all (it does not help). EDIT 2: I see downvoters do not want to leave the rest masses at rest. - So would you say then that there is no physical meaning to renormalization? Do charged particles remain cloaked in a cloud of virtual copies of themselves and their own anti-particle in that sense or not? – jaskey13 Jul 24 '11 at 22:41 1 @jaskey13: renormalization by its purpose is repairing wrong results. Non yet renormalized theory gives too bad results. Who does renormalizations? It's we, not the nature, not the particles themselves. The physical meaning of renormalizations is to get reasonable results from bad ones. This is a human intervention in bad solutions. There is no physics in it at all. And there is no vacuum polarization. Another thing is that sometimes we get reasonable results in this way. It means that another, "renormalized" theory is good and is somewhere near. We just have not found it non perturbatively. – Vladimir Kalitvianski Jul 24 '11 at 22:53 @Marek: Wilson never dealt with the true QFT; he used QFT methods in his phase transition problems where QFT methods are not applicable by definition, strictly speaking. That is why his QFT is called an "effective" (smoothed) description. Then QFT people, who could not make up their minds for decades what the value for the cut-off to choose in their RG, decided to call their sorry QFTs "effective" too and choose the cut-off according to the energy. This choice was as good as their previous choices where cut-off tended to infinity but such a choice "explained" renormalizations "a la Wilson". It is a sloppiness and weakness of QFT theorists that you promote here. None received Nobel prise for cut-off dependent constants in QFT, do not exaggerate. I downvote when I see you do not understand physics, purpose of physics, and repeat as a parrot what sinful people use as a pretext to hide their failures. - Could you please cite something for me to consider why renormalization is so incorrect and any efforts to resolve these problems without it? – jaskey13 Jul 25 '11 at 11:59 – Vladimir Kalitvianski Jul 25 '11 at 12:10 It's a PowerPoint document with comments below slides. It is quite self-consistent. I show how we can advance a wrong coupling and why renormalizations may luckily work. It is renormalization practicers who are happy with renormalizations, not the theory developers. – Vladimir Kalitvianski Jul 25 '11 at 12:13 I don't see the link? Also wasn't Dirac the initial "renormalizer" by filling his sea of electron-holes with electrons? – jaskey13 Jul 25 '11 at 12:17 The link is docs.google.com/… As to P. Dirac, he used renormalizations, as Lorentz, Abraham, and many others before him but he never was satisfied with it. You know, people write equations for physical particles with physical parameters in mind. If the calculations give bad results - it is the theory who is wrong. "Repairing" those bad solutions is repairing that bad theory. So, the logic of QFT fathers was right - we have to find a better (renormalized exactly) theory to obtain the right results directly. – Vladimir Kalitvianski Jul 25 '11 at 12:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 31, "mathjax_display_tex": 11, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.947191596031189, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/170111/asymptotics-of-an-expression-of-the-root-of-a-polynomial/171137
# Asymptotics of an expression of the root of a polynomial Given that $x_0$ is the unique positive solution of $(2-x)^{n+1}=x(x+1)\cdots(x+n)$, try to find the asymptotic value of $$M=\prod_{k=0}^n\left(\frac{k+2}{k+x_0}\right)^{k+2}$$ with absolute error $o(1)$ as $n\to\infty$, where $H_n$ denotes $n$-th harmonic number $\sum_{k=1}^n1/k$. Source (background) IMO2012 problem 2 My answer, which I'm not sure whether is right, is posted isolatedly. - 1 If you have a complete answer for your own question, you should post it as an answer. – tomasz Jul 15 '12 at 14:14 @tomasz It's okay. – Frank Science Jul 15 '12 at 14:46 ## 1 Answer All $O$-notations and $o$-notations work for $n\to\infty$. First we have $2^{n+1}\ge n!x_0$, hence $x_0\le2^{n+1}/n!=O(2^n/n!)$. Take logarithm, we derive that \begin{equation} \begin{split} (n+1)\ln(2-x_0)&=\ln x_0+\sum_{k=1}^n\ln(x_0+k)\\ &=\ln n!+\ln x_0+\sum_{k=1}^n\ln\left(1+\frac{x_0}k\right) \end{split} \end{equation} therefore \begin{gather} \ln(2-x_0)=\ln2+\ln(1-x_0/2)=\ln2+O(x_0)\\ \sum_{k=1}^n\ln(1+x_0/k)=\sum_{k=1}^nO(x_0/k)=O(x_0H_n) \end{gather} \begin{equation} \begin{split} \ln x_0&=(n+1)\ln(2-x_0)-\sum_{k=1}^n\ln\left(1+\frac{x_0}k\right)-\ln n!\\ &=(n+1)(\ln 2+O(x_0))-\ln n!-O(x_0H_n)\\ &=-\ln n!+(n+1)\ln 2+O(nx_0) \end{split} \end{equation} \begin{equation} x_0=\frac{2^{n+1}(1+O(nx_0))}{n!} \end{equation} Notice that $nx_0=O(2^n/(n-1)!)=o(1)$, so we have $x_0\sim2^{n+1}/n!$, thus \begin{equation} x_0=\frac{2^{n+1}}{n!}+O(nx_0^2) \end{equation} Now we can observe $x_0$ more closely \begin{equation} \begin{split} \ln(2-x_0)&=\ln2+\ln(1-x_0/2)=\ln2-x_0/2+O(x_0^2)\\ &=\ln2-2^n/n!+O(nx_0^2) \end{split} \end{equation} \begin{equation} \begin{split} \sum_{k=1}^n\ln(1+x_0/k) &=\sum_{k=1}^n(x_0/k+O(x_0/k)^2)\\ &=H_nx_0+O\left(x_0\sum_{k\ge1}1/k^2\right)\\ &=H_n(2^{n+1}/n!+O(nx_0^2))+O(x_0^2)\\ &=\frac{2^{n+1}H_n}{n!}+O(x_0^2\cdot n\log n) \end{split} \end{equation} \begin{equation} \ln x_0=-\ln n!+(n+1)\ln2-\frac{2^n(n+2H_n+1)}{n!}+O(n^2x_0^2) \end{equation} \begin{equation} \begin{split} x_0&=\frac{2^{n+1}}{n!}\exp\left(-\frac{2^n(n+2H_n+1)}{n!}\right)(1+O(n^2x_0^2))\\ &=\frac{2^{n+1}}{n!}\left(1-\frac{2^n(n+2H_n+1)}{n!}\right)(1+O(n^2x_0^2))\\ &=\frac{2^{n+1}}{n!}\left(1-\frac{2^n(n+2H_n+1)}{n!}\right)+O(n^2x_0^3) \end{split} \end{equation} Next, we compute $\ln M$ \begin{equation} \begin{split} \ln M&=\sum_{k=0}^n(k+2)(\ln(k+2)-\ln(k+x_0))\\ &=\sum_{k=1}^{n+2}k\ln k-\sum_{k=0}^n(k+2)\ln(k+x_0) \end{split} \end{equation} where \begin{equation} \begin{split} \sum_{k=0}^n(k+2)\ln(k+x_0)&=2\ln x_0+\sum_{k=1}^n(k+2)(\ln k+\ln(1+x_0/k))\\ &=2\ln x_0+\sum_{k=1}^n(k+2)\ln k+\sum_{k=1}^n(k+2)\ln(1+x_0/k)\\ &=2\left((n+1)\ln(2-x_0)-\sum_{k=1}^n\ln(1+x_0/k)-\ln n!\right)\\ &\qquad+\sum_{k=1}^n(k+2)\ln k+\sum_{k=1}^n(k+2)\ln(1+x_0/k)\\ &=2(n+1)\ln(2-x_0)+\sum_{k=1}^nk\ln k+\sum_{k=1}^nk\ln(1+x_0/k) \end{split} \end{equation} thus \begin{multline} \ln M=(n+1)\ln(n+1)+(n+2)\ln(n+2)\\ -2(n+1)\ln(2-x_0)-\sum_{k=1}^nk\ln(1+x_0/k) \end{multline} therefore \begin{equation} x_0^2=\frac{4^{n+1}}{n!^2}(1+O(nx_0))^2=\frac{4^{n+1}}{n!^2}+O(nx_0^3) \end{equation} \begin{equation} \begin{split} \ln(2-x_0)&=\ln2+\ln(1-x_0/2)=\ln2-x_0/2-x_0^2/8+O(x_0^3)\\ &=\ln2-\frac{2^n}{n!}\left(1-\frac{2^n(n+2H_n+1)}{n!}\right)-\frac18\frac{4^{n+1}}{n!^2}+O(n^2x_0^3)\\ &=\ln2-\frac{2^n}{n!}+\frac{4^n}{n!^2}\left(n+2H_n+\frac12\right)+O(n^2x_0^3) \end{split} \end{equation} \begin{equation} \begin{split} \sum_{k=1}^nk\ln(1+x_0/k)&=\sum_{k=1}^nk(x_0/k-x_0^2/2k^2+O(x_0/k)^3)\\ &=nx_0-\frac12H_nx_0^2+O\left(x_0^3\sum_{k\ge1}1/k^2\right)\\ &=n\frac{2^{n+1}}{n!}\left(1-\frac{2^n(n+2H_n+1)}{n!}\right)-\frac12H_n\frac{4^{n+1}}{n!^2}+O(n^3x_0^3)\\ &=2n\frac{2^n}{n!}-\frac{4^n}{n!^2}(2n^2+4nH_n+2n+2H_n)+O(n^3x_0^3) \end{split} \end{equation} We have enough stuff to estimate $\ln M$ now. \begin{multline} \ln M=(n+1)\ln(n+1)+(n+2)\ln(n+2)-2(n+1)\ln2\\ +\frac{2^{n+1}}{n!}-\frac{4^n}{n!^2}(n+2H_n+1)+O(n^3x_0^3) \end{multline} thus \begin{equation} \begin{split} M&=\frac{(n+1)^{n+1}(n+2)^{n+2}}{4^{n+1}}\\ &\qquad\qquad\exp\left(\frac{2^{n+1}}{n!}\right)\exp\left(-\frac{4^n}{n!^2}(n+2H_n+1)\right)\\ &\qquad\qquad(1+O(n^3x_0^3)) \end{split} \end{equation} Finally, we have \begin{gather} \exp\left(\frac{2^{n+1}}{n!}\right)=1+\frac{2^{n+1}}{n!}+\frac{2\cdot4^n}{n!^2}+O(x_0^3)\\ \exp\left(-\frac{4^n}{n!^2}(n+2H_n+1)\right)=1-\frac{4^n}{n!^2}(n+2H_n+1)+O(n^2x_0^4) \end{gather} and \begin{equation} \begin{split} M&=\frac{(n+1)^{n+1}(n+2)^{n+2}}{4^{n+1}}\\ &\qquad\qquad\left(1+\frac{2^{n+1}}{n!}-\frac{4^n}{n!^2}(n+2H_n-1)\right)\\ &\qquad\qquad(1+O(n^3x_0^3)) \end{split} \end{equation} Notice that the absolute error \begin{equation} O\left(\frac{n^3(n+1)^{n+1}(n+2)^{n+2}}{4^{n+1}}x_0^3\right) \end{equation} approaches $0$ when $n\to\infty$. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.3228013217449188, "perplexity_flag": "head"}
http://mathhelpforum.com/pre-calculus/44873-parabolic-arch-bridge.html
# Thread: 1. ## Parabolic Arch Bridge A horizontal bridge is in the shape of a parabolic arch. Given the information below, what is the height h of the arch 2 feet from shore? Given Data: *width across bridge (from one side of the parabola to the other side of the parabola) is 20 feet *The height of the arch (the vertex of the parabola under the bridge to the water below) is 10 feet *the height h we want is located under the bridge itself 2 feet from the shore 2. Unfortunately, we cannot see your drawing. Answer would be only speculation. "A horizontal bridge is in the shape of a parabolic arch" -- What does that mean? Is it horizontal or is it parabolic? A drawing would clear up this confusion. 3. Center your parabola at the origin. You can use the formula $y=a(x-h)^{2}+k$ to find the parabola equation with the given info. h=0, k=10, y=0, x=10 Plug them in and solve for a. Then you have your equation. Then just plug in x=8 to find the height at 2 feet from the end. 4. ## Well-done! I thank the second person who took time to go over the question and provide the necessary steps. 5. Originally Posted by magentarita I thank the second person who took time to go over the question and provide the necessary steps. That really hurts. This second person to whom you are referring took a wild stab at what you meant. If this guess was close enough to be of benefit to you, that is great, but this does NOT mean that you provided sufficient information. If you simply would have answerd my objections, there would have been no guessing involved. Please make an effort to provide COMPLETE problem statements and any necessary drawings. 6. ## TKHunny...Sorry...Rita TKHunny: I meant nothing bad by my reply. I think you are a great mathematician. The people who post questions here (LIKE ME) do not have a solid relationship with math. I have read your replies and consider you to be one of the best mathematicians online. I thank you and sorry for hurting your feelings. Rita 7. No worries. You would have to try WAY harder than that to hurt my feelings. Let's learn some mathematics. 8. ## I agree... I agree...let's learn math and forget about the world.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9508509039878845, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/29333?sort=oldest
## Warmup (you've probably seen this before) Suppose $\sum_{n\ge 1} a_n$ is a conditionally convergent series of real numbers, then by rearranging the terms, you can make "the same series" converge to any real number $x$. To do this, let `$P=\{n\ge 1|a_n\ge 0\}$` and `$N=\{n\ge 1|a_n<0\}$`. Since $\sum_{n\ge 1} a_n$ converges conditionally, each of $\sum_{n\in P}a_n$ and $\sum_{n\in N}a_n$ diverge and $\lim a_n=0$. Starting with the empty sum (namely zero), build the rearrangement inductively. Suppose $\sum_{i=1}^m a_{n_i}=x_m$ is the (inductively constructed) $m$-th partial sum of the rearrangement. If $x_m\le x$, take $n_{m+1}$ to be the smallest element of $P$ which hasn't already been used. If $x_m> x$, take $n_{m+1}$ to be the smallest element of $N$ which hasn't already been used. Since $\sum_{n\in P}a_n$ diverges, there will be infinitely many $m$ for which $x_m\ge x$, so $n_{m+1}$ will be in $N$ infinitely often. Similarly, $n_{m+1}$ will be in $P$ infinitely often, so we've really constructed a rearrangement of the original series. Note that `$|x-x_m|\le \max\{|a_n|\bigm| n\not\in\{n_1,\dots, n_m\}\}$`, so $\lim x_m=x$ because $\lim a_n=0$. Suppose $\sum_{n\ge 1}v_n$ is a conditionally convergent series with $v_n\in \mathbb R^k$. Can the sum be rearranged to converge to any given $w\in \mathbb R^k$? Obviously not! If $\lambda$ is a linear functional on $\mathbb R^k$ such that $\sum \lambda(v_n)$ converges absolutely, then $\lambda$ applied to any rearrangement will be equal to $\sum \lambda(v_n)$. So let's also suppose that $\sum \lambda(v_n)$ is conditionally convergent for every non-zero linear functional $\lambda$. Under this additional hypothesis, I'm pretty sure the answer should be "yes". - ## 4 Answers The Levy--Steinitz theorem says the set of all convergent rearrangements of a series of vectors, if nonempty, is an affine subspace of ${\mathbf R}^k$. There is an article on this by Peter Rosenthal in the Amer. Math. Monthly from 1987, called "The Remarkable Theorem of Levy and Steinitz". Also see Remmert's Theory of Complex Functions, pp. 30--31. As an example, taking $k = 2$, suppose $v_n = ((-1)^{n-1}/n,(-1)^{n-1}/n)$. Then the convergent rearrangments fill up the line $y = x$. The linear function $\lambda(x,y) = x-y$ of course kills the series, which makes Anton's observation explicit in this instance. The Rosenthal article, at the end, discusses Anton's question. Indeed if there is no absolute convergence in any direction then the set of all rearranged series is all of ${\mathbf R}^k$. Note by the above example that this condition is stronger than saying the series in each standard coordinate is conditionally convergent. Rosenthal said this stronger form of the Levy-Steinitz theorem was in the papers by Levy (1905) and Steinitz (1913). He also refers to I. Halperin, Sums of a Series Permitting Rearrangements, C. R. Math Rep. Acad. Sci. Canada VIII (1986), 87--102. - ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. I discuss (with references, but without proof) the Levy-Steinitz Theorem in Section 2 of the following document: http://www.math.uga.edu/~pete/UGAVIGRE08.pdf In particular, the version I give describes precisely the set of limits of convergent rearrangements in terms of the subspace of directions of absolute convergence of the series. As a special case, if no one-dimensional projection is absolutely convergent, then indeed one can rearrange the series to converge to any vector in $\mathbb{R}^n$. - To a conditionally convergent series $\sum_{n\geq 1}v_n$ in $\mathbb{R}^d$ one can attach so called convergence functionals $f$, which are linear functionals $f:\mathbb{R}^d\to\mathbb{R}$ with the property $\sum_{n=1}^{\infty}|f(v_n)|<\infty$. Let $\Gamma ((v_n))$ be the set of all these functionals. Then the set of values of the possible rearrangements of the series $\sum_{n=0}^{\infty}v_n$ is exactly the affine space $\sum_{n=0}^{\infty}v_n + \Gamma ((v_n))_0$, where $\Gamma ((v_n))_0$ denotes the annihilator of $\Gamma ((v_n))$, i.e. $\bigcap_{f\in\Gamma ((v_n))}\mathrm{ker}(f)$. This is precisely the Steinitz` Theorem mentioned by KConrad. Let me just add that this result does not hold in general for infinite-dimensional spaces. However, a generalization of Steinitz theorem seems very approachable for locally convex spaces -> see e.g. "The Steinitz theorem on rearrangement of series for nuclear spaces" by W. Banaszczyk (1990), in Journal für die reine und angewandte Mathematik 403, 187-200. EDIT: added the condition on $v_n$ per KConrad´s comment. - You should also add the condition that the specific series you wrote down a_1 + a_2 + a_3 + ... is convergent. It wouldn't make sense to take the translate of that annihilator by a divergent ordering of the terms. – KConrad Jun 24 2010 at 21:51 I meant the same sequence/series $a_n$ as in the OP. But you are right, for the sake of completeness I should add it. – ex falso quodlibet Jun 24 2010 at 23:16 You don't want the series to be in R, since there nothing of much interest is happening (linear functionals on the real line?). Your series and linear functionals belong in R^k, as you had edited it once before. Maybe change a_n to v_n? – KConrad Jun 25 2010 at 0:25 Ok, notation also fixed to be in correspondence with the original post. – ex falso quodlibet Jun 25 2010 at 0:36 The exponents in the R^d were enclosed in the \mathbb, so they weren't showing up (at least on my screen). I took them outside of \mathbb and now it looks okay. – KConrad Jun 25 2010 at 2:10 show 1 more comment One way to think of this is to have a vector whose $x$-component is conditionally convergent and whose $y$-component is absolutely convergent. Then the answer is seen to be obviously "no". Next, change to a different basis and do the same thing, and go to higher dimensions, and you then quickly see that you can have an affine space to some point of which every rearrangement converges, but to any point of which some rearrangement converges. - 1 Not precisely worded correctly. Some rearrangements do not converge at all. Thus "to some point of which every rearrangement converges" is wrong. – Gerald Edgar Jun 25 2010 at 3:02 OK: "....to some point of which every convergent rearrangement converges". – Michael Hardy Jun 25 2010 at 16:13 Your example doesn't satisfy the hypothesis that the series remains conditionally convergent after applying any linear functional. Specifically, it becomes absolutely convergent after applying the linear functional $x$. The example demonstrates that it's possible to have a 1-dimensional affine space of limit points, but the question is essentially whether it's always possible to have a higher-dimensional space of possible limit points. – Anton Geraschenko♦ Jun 26 2010 at 0:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 56, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9212150573730469, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/39215/newtons-3rd-law-how-can-i-break-things?answertab=votes
# Newton's 3rd Law: How can I break things? If I punch a wooden board hard enough and it breaks in two, has the board still exerted a force of equal magnitude on my fist? When the board breaks in two due to my force, the halves have a component of acceleration in the direction of my striking fist...that implies the board did not exert an equal and opposite reaction, no? - ## 3 Answers The board did exert an equal and opposite force, but your mass is considerably greater than the mass of the board so all the friction between you and the ground keeps you from accelerating. If you punched the board while standing on perfectly slippery ice, or in a vacuum, you would accelerate also, but at a much smaller rate than the board due to the mass difference. If you could ignore all losses (friction, the board breaking, etc.) then all Newton's 3rd Law really is saying is that the center of mass of the system doesn't change when you punch the board. So the board accelerates in the direction of your punch and you move away from it at rates that keep the system center of mass constant. - Interesting, thanks. – simplysimple Oct 6 '12 at 18:38 Yes . . . very very interesting. – Velox Feb 27 at 17:54 If I punch a wooden board hard enough and it breaks in two, has the board still exerted a force of equal magnitude on my fist? Yes. When the board breaks in two due to my force, the halves have a component of acceleration in the direction of my striking fist...that implies the board did not exert an equal and opposite reaction, no? Incorrect, the board did exert an opposite reaction - where do you think your bruised knuckles will come from? The transfer of momentum from your body to the board requires an opposite force to decelerate your fist or you could punch through arbitrarily thick material. That the opposite force will be equal in magnitude is a consequence of conversation of momentum applied to a 2-body interaction - modern physics no longer postulates the 3rd law. - Thanks, I didn't know what pain was until now. – simplysimple Oct 6 '12 at 18:40 If I punch a wooden board hard enough and it breaks in two, has the board still exerted a force of equal magnitude on my fist? Good question but short answer: Yes. Exactly..! The Newton's third law (a contact force) applies to every-day life like Jumping on the floor, Walking on slippery ground, balloons etc. So, the board has also exerted an equal force on your fists... How is this? (similar to $\vec{F_{AB}}=-\vec{F_{BA}}$) Whenever you exert a force on an object, the object also exerts an equal normal force on you so that both cancels out. Without it, your fist won't stop accelerating and you'd probably pierce through the ground. One thing to be noted is that these action-reaction pairs always act on two different bodies and depends on their masses. Here, the board does not have enough mass to withstand your force and hence the break-through. It could be seen from this example. As a ball falls on earth, you could tell that the ball is pulled by earth or according to 3rd law, earth is pulled by ball). Amazed of it, Eh..? Take the mass of ball to be $10kg$, then force exerted by Earth on ball is $F=mg=98N$. According to the 3rd law, this force is also exerted by the ball on earth. Hence, acceleration of Earth towards ball is $a=F/m=\frac{98}{5.98×10^{24}}=16.38×10^{-24}ms^{-2}$ which is too small to be measured. There are greater probabilities for the forces to be same, but not necessarily their acceleration..! A simulation for Newton's cradle which is also an example of the 3rd law... When the board breaks in two due to my force, the halves have a component of acceleration in the direction of my striking fist...that implies the board did not exert an equal and opposite reaction, no? NO... Actually the Newton's 3rd law is stated as: • All forces result from interactions between pairs of objects, each object exerting a force on the other. The two resultant forces have same strength but in exactly opposite directions. It seems that the breaking of board has confused you. The board-breaking is due to a Stress-Strain mechanism which is not necessary here..! It depends on the type of material, magnitude of force exerted on it, etc. When you apply the force on the board, the work done by you is stored as potential energy in the board. When it exerts the equal force on you, it also experiences the same force and when this force exceeds the breaking stress, the board BREAKS..! The potential energy is then dissipated in the form of heat. My thought is that, Simultaneous action-reaction pairs are observed both on board and on your fist. But, the board can't withstand the action-reaction due to it's low mass and small breaking stress..! - 1 You may want to clarify that the "yes" applies to the first question asked, not the second. – Chris White Oct 6 '12 at 17:47 1 Thanks for the calculation, it clarifies the concept. I would upvote if I could. – simplysimple Oct 6 '12 at 18:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9569975733757019, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/6738/simple-explanation-of-quantum-mechanics/6740
# Simple explanation of quantum mechanics Can you please describe quantum mechanics in simple words? When ever I read this word (quantum computers, quantum mechanics, quantum physics, quantum gravity etc) I feel like fantasy, myth and something very strange that i can never understand. So what is quantum mechanics? - The essence of quantum mechanics is quantum entanglement. Everything else follows from that. – QGR Mar 12 '11 at 10:53 Actually, a question perhaps for the experts: what is the status of recovering quantum mechanics (as a logical formalism) from "physical" considerations? I.e. the von Neumann program. The last I heard was something like "we get an orthonormal lattice". – genneth Mar 12 '11 at 11:22 – Marek Mar 12 '11 at 11:58 I don't know if a "simple" explanation of QM exists but this paper can be helpful: Quantum Theory Needs No Interpretation Authors: Fuchs, Christopher A.; Peres, Asher. Physics Today, vol. 53, issue 3, p. 70 – xavimol Mar 12 '11 at 13:44 @Marek @xavimol: I'm familiar with both of those pieces. The last paragraph of Fuchs et al. outlines what I'm asking for: what physical principles can we use to reconstruct the mathematical edifice known as quantum mechanics? – genneth Mar 12 '11 at 14:31 show 3 more comments ## 7 Answers Deepak's answer covers the question, except if "simple" means "in terms an english major would understand". I have been trying to formulate such an answer, fwiw: Mechanics covers the macroscopic world, the world we see even with glass microscopes and simple telescopes. It explains the motion and interaction of bodies, from large, to quite small. Wave mechanics covers the macroscopic behavior of waves as observed in the sea and the behavior of light. The theories by the beginning of the 20th century were beautifully established so that some physicists thought that real physics was finished, and engineering was all that was left. Then came the quanta. They came from various fronts. Chemists studying the elements and measuring atomic weights came up with a table of elements that had discrete numbers. The photo electric effect showed that light was not always behaving as a classical wave, because light could hit material and kick out a single electron. Then lines were found in the light spectra of elements. This forced physicist to think of energy coming in discrete numbers instead of a continuum, quanta of energy. The first model of an atom, the Bohr model, had the nucleus like a miniature sun and the electrons as planets around it. The discrete lines observed though, meant that the orbits had fixed positions. Electrons could only occupy certain orbits, get kicked to a higher one and release a photon falling back, with a specific line. Classical mechanics could not solve this conundrum: why there were discrete orbits and why the electrons did not fall into the nucleus anyway since the nucleus is positive and the electrons negative and the attraction inevitable classically. Physicists were forced to postulate quantum mechanics and developed a whole new set of theories of how the microscopic atomic world worked: in quanta of energy. In this new mathematical theory the electrons stay in orbit around the nucleus because they can only change orbits by quanta of energy. They cannot fall into the positive nucleus because there is a lowest stable ground state, which is the lowest energy an electron can have. An atom can gain a quantum of energy and the electron can jump to a higher energy state; it can not go lower than the ground state. One can never take a micro photo of the electron, only a probability distribution where it might be, can be computed by the new theories. Studying the probability distributions coming out of the solution of quantum mechanical problems, it was found that in the microscopic world particles sometimes behave as waves, and waves (light) sometimes behave as particles, depending on the circumstances under study. Experiments confirmed all this. Quantum mechanics explains beautifully the light spectra of atoms, the periodic table of elements and nuclear interactions and a lot of phenomena, from transistors to lasers. The price payed is a loss of intuitive understanding of "particle" and "motion" , new intuitions have to be developed to understand the predictions of a quantum mechanical world that need long study and perseverance. Specifically : quantum physics includes a) quantum mechanics: solutions of problems with known potential energy using the Schrodinger or Klein Gordon equations. The problems are treated as particles moving in a potential well b)second quantization, where particles are treated as creation and annihilation operators acting on the vacuum, and c) quantum strings where one has quantized strings and the particles are energy levels on these strings. d) whatever new coming down the theoretical pike.( It is turtles all the way down :) ) To make any sense of this you have to read further. quantum computers, utilize the knowledge gained by quantum mechanics to create compact computers, and my knowledge is covered by the wikipedia article quantum gravity is an attempt to extend quantum mechanics to general relativity, which is a classical theory. There are computational difficulties in doing this. Theorists are aiming at a unified theory of everything, called TOE. To get that, one has to quantize gravity, which means that the gravitational field should be coming in quanta, called gravitons. This is ongoing research and connected with string theory research, since, up to now, string theories are the only ones that have come up with both the quantum levels needed to describe particles and also to quantize gravity. Deepak's answer is a good beginning and also sb1's answer. Otherwise start with a quantum mechanics course. - 1 Excellent answer @anna +1. Any "simple explanation" of QM is only helped by providing some historical context. In reading it I felt that someone might find the notion of a "ground state" an obstacle. Perhaps you could elaborate on this concept a bit. – user346 Mar 12 '11 at 21:12 @Deepak Vaid thanks. enlarged on ground state. – anna v Mar 13 '11 at 4:48 Thanks, this was easier to understand. Can you please add a few more lines to last three definitions? – LifeH2O Mar 13 '11 at 9:53 "Once more unto the breach ..." The aspect of quantum mechanics that distinguishes it the most (IMHO) from classical mechanics, is that of superposition of states - that at any given moment a system is described by a state which can be a linear combination of various, physically realizable, outcomes: $$|\Psi\rangle = c_1 |\psi_1\rangle + c_2 |\psi_2 \rangle + \dots$$ Indeed one can give a simple visual analogy. Think of that prototypical classical mechanical system - the pool (or "billiards" depending on which time-zone you happen to be in) table. The pool balls obey simple classical trajectories, assuming the absence of non-uniformities on the table, which consist of straight lines interrupted by reflection off of the sides of the table. In classical mechanics one can enumerate all such trajectories and construct the resulting phase space for the system. If you were to ask me to characterize the state of the pool table at any instant, I would do so by choosing a point in phase space whose co-ordinates specify the locations and momenta of all the balls on the table at that instant. Given this information and Newton's second law I can determine, to arbitrary accuracy , the future trajectories of all the pool balls. Quantum Mechanics introduces the entirely new possibility that I could specify a state which is a linear combination of eigen (or "physically realizable") states. In other words a given pool ball could be at location $x$ and at $x'$ at the same time. Even to a laymen this possibility open up great new vistas. Engineers can only imagine what sort of machines they could construct if they could build a transmission that could operate at more than one speed simultaneously. Computational scientists wonder at the thought that a quantum system could exist in a superposition of $0$ and $1$ simultaneously - this possibility is in fact being realized in the exploding field of quantum computation. In return for all these nice possibilities that superposition opens up we must accept significant trade-offs - there is an inherent uncertainty in whatever physical quantity we measure and we must accept restrictions on which observables can be measured simultaneously. To illustrate the first trade-off (uncertainty in measurement outcomes) I can - in theory - construct a quantum computer which employs "massive parallelism" and solves a problem by considering all possible solutions simultaneously, but the trade-off is that I can never be absolutely certain that the solution it spits out at the end is the right one. The second trade-off is perhaps more significant in terms of its implications for our notions of "reality". Coming back to the pool ball, superposition allows me to say that it is at location $x$ and $x'$ simultaneously. However, the more accurately I try to localize the ball at these two locations, the less accurately I can localize the ball's momenta. Or as when a cop pulled Heisenberg over for speeding and asked "Do you know how fast you were going?", he replied "No. But I know exactly where I am" [Note for the purists and experts: The language of this answer is directed at a beginner and not for a conference proceeding, therefore I have avoided technicalities and mathematics. Please keep this in mind when pointing out errors and such.] - 3 Nice exposition, +1. I'd just like to add that superposition is not something that is unique to quantum theory and actually abounds in all linear systems. In particular, everyone should be familiar with superpositions of E-M fields. The key hallmark of quantum mechanics is non-compatibility of observables, i.e. the non-commutative deformation of the phase space. – Marek Mar 12 '11 at 11:53 Thank you @Marek. I'd rather be in your good graces, than not :). You're right about superposition of fields. I mean it more in the context of the path-integral approach. In QM the evolution of a system can be described by a superposition of more than one, classical trajectory - as in whether the electron goes through the right slit or the left one, or both! – user346 Mar 12 '11 at 12:06 1 @space_cadet: I would agree with Marek that the key is non-commutativity of observables. After all, if all observables commuted, we could still use the quantum formalism, superposition and all, but nothing quantum would be happening. In a recent lecture series by Claude Cohen-Tannoudji, he focuses on the existence of off-diagonal elements of the density matrix, and shows how this idea ties together many/most of the novel quantum effects we are familiar with. – genneth Mar 12 '11 at 14:34 @genneth the path-integral formalism is not contingent upon the commutativity (or not) of observables, correct? I think these two aspects (superposition and uncertainty) are both inseparable parts of the quantum picture. I try to emphasize as much in my answer. – user346 Mar 12 '11 at 15:48 1 Thanks @genneth for this reference. I think anyone should see for themselves once how classical mechanics is done in the quantum formalism, to be able to make the distinction between the different formalisms (which are mostly historical) and the real differences. I agree that linearity is not it, and non-commutativity is the key concept. – user566 Mar 12 '11 at 17:57 show 12 more comments Feynman explained a whole lot in plain language in "QED: the strange theory of light and matter". But as Zee takes pains to point out, one must pay PARTICULAR attention to what Feynman says, because he is not popularizing or dumbing it down, except for not getting too much into the notion of spin. - 2 this is not an answer. -1. A reference can always be listed in the comments thread. – user346 Mar 12 '11 at 18:22 2 @Deepak: bear in mind that Joel can't post comments with his reputation (needs at least 50, I think). It's okay to leave comments as answers until he can post comments, in my opinion. – Marek Mar 12 '11 at 20:51 Oooops. Many thanks for pointing that out @Marek. @Joel if you were to make any edit (however minor) to your question it would allow me to revoke my downvote. – user346 Mar 12 '11 at 21:04 ... and I just realized that I could that just as well. Edited and -1 revoked! Cheers. – user346 Mar 16 '11 at 22:51 Consider a cavity or a black body in thermal equilibrium. According to classical notions, every harmonic oscillation mode in that system contains on average the same amount of energy, in equilibrium. But the density of modes rises proportionally to the frequency. Thus the system contains an infinite amount of energy, which obviously doesn't happen in reality. Quantization resolves this. If the modes of the oscillation come at discrete energy levels at much larger steps than the average equilibrium thermal energy, that energy can't occupy every mode equally any more. This however comes with the price of uncertainty. Consider the interaction of a photon and electron in a measurement process. Because of the wave particle duality, energy of the photon is proportional to the frequency of it's wave. If you want a sharp picture of the electron, a point particle, you need high frequency photons to scatter with it. The electron gains kinetic energy in the scattering and so taking an infinitely sharp picture of the electron is practically impossible, you end up with a fuzzy picture no matter what. - In the early part of the twentieth century, physics was revolutionized. The foundation of the subject changed once and for all. It was discovered that nature in its deeper scheme of things does not operate according to the deterministic rules of classical Newtonian physics. Heisenberg discovered his famous uncertainty principle in 1926. Physicists came to know that values of certain pairs of classical dynamical variables like position and its conjugate momentum can not be measured to arbitrary precision. The more you are certain about one of them the less you know about the value of the other. The simultaneous precise knowledge of these pairs of variables (at least in principle) are essential to predict the outcome of any experiment in classical physics. Classical mechanics used to assert that in principle nature is completely deterministic. However the uncertainty principle put an end to that dream. The uncertainty principle is a consequence of the fact that there is a wave particle duality for all the so called "particles"(like electron) and "waves" (like electromagnetic wave) in nature. One can say that some times under certain conditions, it is helpful to think a particle like electron as waves or a wave like em wave as particles. Schrödinger discovered his famous equation for particles with non-relativistic speed which describe the evolution of a characteristic function $\psi$ in space with time. Quantum mechanics is based on certain basic postulates. 1) It is assumed that to every observable physical quantity there exists a hermitian operator. The measurable values of the observable is the various eigen values of the operator. 2) To every physical system there corresponds an abstract Hilbert space. A state of the system is represented by a vector in this space on which the operator corresponding to the observable acts. 3) The outcome of a measurement of an observable in a particular state is given by the expectation value of the corresponding operator in that state. 4) The operator corresponding to a dynamical variable is obtained by replacing the classical canonical variables by corresponding quantum mechanical operators. 5) Any pair of canonically conjugate operators will satisfy the Heisenberg commutation rules. The edifice of quantum mechanics is based on these basic postulates. What quantum mechanics really implies is that light, electrons, protons etc. can not be described by exclusively in particle terms or wave terms. They are like neither particles nor waves. Particles or waves are approximate classical concepts and to describe quantum reality these classical concepts should be viewed as complimentary concepts rather than contradictory. In the language of Neils Bohr, "However far the phenomena transcend the scope of classical physical explanation, the account of all the evidence must be expressed in classical terms". This is because "by the word 'experiment' we refer to a situation where we can tell others what we have done and what we have learned" so that "the account of the experimental arrangement and of the results of the observations must be expressed in unambiguous language with suitable application of the terminology of the classical physics". - While listing the axioms of quantum mechanics might not satisfy the OP's criterion of simplicity, there is nothing wrong per se with this answer. In particular for anyone trying to understand quantum physics many points of view are more helpful than just one. So +1. – user346 Mar 12 '11 at 15:51 In classical mechanics the state of a particle is described by a point in phase space. We can directly measure that state. But in the microscopic world this not possible. Then we postulate that we can describe the state by a ket vector and operator in this space. This operator when acting on this state gives the physical quantities as the eigen value. - There's an easy way to explain the concepts in quantum mechanics. There are two key principles at work: superposition and complementarity. Let's take factoring numbers as an example of superposition. Given any counting number, there is one and only one way to represent it as a product of its primes. If the number you are given to factor is prime, then the trivial solution is to give back the same number you are given. If the number is composite, then you have to find which primes multiply together to produce the number given. For example, if you are given the number 14, then you give back the numbers 2 and 7. Whatever number you are given, no matter how long it takes to perform the calculation, you have all the information necessary to return its primes. That's the easy part. The hard part is complementarity. Conceptually, it is very close to 'selection'. Suppose we reverse the previous calculation in the sense that someone gives you the primes and you are asked to compute the composite number. This would be even easier than the factoring problem except that the primes are given to you in a funny way. In the case of 14, you may be given the primes in the following order: 222777, 2727, 22272777, or any number of unspecified combinations of 2 and 7. You might ask 'with this redundancy, how is one to tell apart different orders of the same prime?'. The answer lies in the number of primes that you have been given. In this case, all our examples have an equal number of 2's and 7's. If we were given twice as many 2's as 7's i.e. 222277, then the number is 2 times 2 times 7, or 28. The point is, the individual answers don't track; they only make sense taken as a whole. Nobody in QM knows how individually random events come together to make a predictable structure. This is essentially what makes QM so hard to figure out! -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9395121335983276, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/38344/can-there-be-two-continuous-real-valued-functions-such-that-at-least-one-has-rati/38350
## Can there be two continuous real-valued functions such that at least one has rational values for all x? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Of course, no continuous real valued non-constant function can attain only rational or irrational values, but can there be a pair of nowhere-constant continuous functions f and g such that for all x, at least one of f(x) and g(x) is rational? Or maybe a countable collection of continuous functions, {f1, f2...} such that for all x there is n such that fn(x) is rational? Thanks - 1 I assume that you want f(x) and g(x) to be nonconstant? – Andy Putman Sep 10 2010 at 19:45 Yeah - edited to fix that. – mathahada Sep 10 2010 at 19:47 1 You probably want non-locally constant, since there are trivial counterexamples with two functions making steps with constant rational value on intervals. – Joel David Hamkins Sep 10 2010 at 19:48 R is connected, so locally constant is equivalent to constant. – Ricky Demer Sep 10 2010 at 19:50 Sorry, I meant non-constant on any interval. – Joel David Hamkins Sep 10 2010 at 19:59 show 1 more comment ## 3 Answers If you allow the functions to be constant on some intervals, then there are some easy examples, and Ricky has provided one. But if you rule that out, then there can be no examples, even with countably many functions. To see this, suppose that $f_n$ is a list of countably many continuous functions which are never constant on an interval. Enumerate the pairs $(r,n)$ of rational numbers $r$ and natural numbers $n$ in a countable list $\langle (r_0,n_0), (r_1,n_1),\ldots\rangle$. Let $C_0$ be any closed interval. If the closed interval $C_i$ is defined, consider the function $f_{n_i}$ and the rational value $r_i$. Since $f_{n_i}$ is not constant value $r_i$ on $C_i$, we may shrink the interval to $C_{i+1}\subset C_i$ such that $f_{n_i}$ on $C_{i+1}$ is bounded away from $r_i$. By compactness, there is some $x\in C_i$ for all $i$. Thus, $f_n(x)$ is not $r$ for any rational number $r$. - 2 And this argument amounts to the proof of the Baire Category theorem, mentioned by Mikhail... – Joel David Hamkins Sep 10 2010 at 20:21 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. Since your functions are locally non-constant, the preimage of any (rational) point is nowhere dense (in $\mathbb{R}$; if a preimage of a point with respect to a continious function is dense on an interval, then the function is constant on this interval). Hence the union of all preimages of all rational points is of Baire category one (in $\mathbb{R}$); so it is not equal to the whole $\mathbb{R}$. - 2 +1. This way of thinking about it shows that it is consistent with $ZFC+\neg CH$ that you can't even do it with $\aleph_1$ many functions (or more, even), since it is known to be consistent with $\neg CH$ that the ideal of meager sets has more than countable additivity. – Joel David Hamkins Sep 10 2010 at 20:44 +1. @Joel, do you know where I could find a proof of that consistency result? (I suppose what I'm really interested in is a respectable article stating it, since I probably wouldn't understand a proof.) – Ricky Demer Sep 10 2010 at 20:51 1 It is part of Cichon's diagram (see en.wikipedia.org/wiki/Cichon_diagram). This seems to be one of the standard results in the field of cardinal characteristics. It is probably in the survey article by Andreas Blass: math.lsa.umich.edu/~ablass/set.html – Joel David Hamkins Sep 10 2010 at 20:58 Hey Joel. Why it doesn't work with uncountably many functions? It is enough to have f(x) = ax for any real a. – mathahada Sep 10 2010 at 21:44 1 I am saying merely that it is consistent with the axioms of set theory that you could extend the result to $\aleph_1$ many functions, but of course, this is only possible when $\aleph_1\lt 2^{\aleph_0}$, which is to say, when the Continuum Hypothesis fails. The result can never hold for continuum $2^{\aleph_0}$ many functions, as your example shows. – Joel David Hamkins Sep 10 2010 at 21:55 show 1 more comment f(x) := closest point in [-2,-1] to x g(x) := closest point in [+1,+2] to x If x≤0, then g(x) is rational. If 0≤x, then f(x) is rational. The question becomes more interesting if you demand that the functions be nowhere locally constant. - I was not familiar with this term up until now but yes, that's what I meant. Perhaps a better formulation would be: is there a curve in the plane such that all of its points have at least one rational coordinate – mathahada Sep 10 2010 at 19:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 30, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9357032179832458, "perplexity_flag": "head"}
http://mathoverflow.net/questions/16365?sort=oldest
## “Oldest” bug in computer algebra system? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) The goal of this question is to find an error in a computation by a computer algebra system where the 'correct' answer (complete with correct reasoning to justify the answer) can be found in the literature. Note that the system must claim to be able to perform that computation, not implementing a piece of (really old) mathematics is sad, but is a different topic. From my knowledge of the field, there are plenty of examples of 19th century mathematics where today's computer algebra system get the wrong answer. But how far back can we go? Let me illustrate what I mean. James Bernoulli in letters to Leibniz (circa 1697-1704) wrote that [in today's notation, where I will assume that $y$ is a function of $x$ throughout] he could not find a closed-form to $y' = y^2 + x^2$. In a letter of Nov. 15th, 1702, he wrote to Leibniz that he was however able to reduce this to a 2nd order LODE, namely $y''/y = -x^2$. Maple can find (correct) closed-forms for both of these differential equations, in terms of Bessel functions. An example that is 'sad' but less interesting is $$r^{n+1}\int_0^{\pi}\cos(r\rho \cos (\omega))\sin(\omega)^{2n+1}d\omega$$ with $n$ assumed to be a positive integer, $r>0$ and $\rho$ real; this can be evaluated as a Bessel functions but, for example, Maple can't. Poisson published this result in a long memoir of 1823. One could complain that (following Schloemilch, 1857) that he well knew that $$J_n(z) = \sum_{0}^{\infty} \frac{(-1)^m(z/2)^{n+2*m}}{m!(n+m)!}$$ Maple seems to think that this sum is instead $J_n(z)\frac{\Gamma(n+1)}{n!}$, which no mathematician would ever write down in this manner. Another example which gets closer to a real bug is that Lommel in 1871 showed that the Wronskian of $J_{\nu}$ and $J_{-\nu}$ was $-2\frac{sin(\nu\pi)}{\nu z}$. Maple can compute the Wronskian, but it cannot simplify the result to $0$. This can be transformed into a bug by using the resulting expression in a context where we force the CAS to divide by it. For a real bug, consider $$\int_{0}^{\infty} t^{-\lambda} J_{\mu}(at) J_{\nu}(bt)$$ as investigated by Weber in 1873. Maple returns an unconditional answer, which a priori looks fine. If, however, the same question is asked but with $a=b$, no answer is returned! What is going on? Well, in reality that answer is only valid for one of $0\lt a\lt b$ or $0\lt b \lt a$. But it turns out (as Watson explains lucidly on pages 398-404 of his master treatise on Bessel functions, this integral is discontinuous for $a=b$. Actually, the answer given is also problematic for $\lambda=\mu=0, \nu=1$. And for the curious, the answer given is $$\frac{2^{-\lambda}{a}^{\lambda-1-\nu}{b}^{\nu} \Gamma \left( 1/2\nu+1/2\mu-1/2\lambda+1/2 \right)} { \Gamma\left( 1/2\mu+1/2\lambda+1/2-1/2\nu\right) \Gamma \left( \nu+1 \right)} {F(1/2-1/2\mu-1/2\lambda+1/2\nu,1/2\nu+1/2\mu-1/2\lambda+1/2;\nu+1;{\frac {{b}^{2}}{{a}^{2}}})}$$ EDIT: I first asked this question when the MO community was much smaller. Now that it has grown a lot, I think it needs a second go-around. A lot of mathematicians use CASes routinely in their work, so wouldn't they be interested to know the 'age' gap between human mathematics and (trustable) CAS mathematics? - 2 This is pretty much a duplicate of this thread: mathoverflow.net/questions/11517/… – Ryan Budney Feb 25 2010 at 4:34 2 Can you give an example to get things started? – Steve D Feb 25 2010 at 4:35 1 @Ryan: I don't agree - that other thread threw up excessively wide-ranging answers and so was unfocussed. This is a clearly better question. It is possible that it will throw up a clearly correct historical source, but I rather think it won't, though, and would better be community wiki. – Charles Stewart Feb 25 2010 at 11:48 Hi Charles. I suspect this question is maybe too focused. Presumably the author made a typo, for example, as I'm not awareof any 19th century computer algebra packages, as this is before the electronic computer. Perhaps this question has a simple answer. The first computer algebra package is reportedly Schoonschip (1963). Presumably it wasn't error free? – Ryan Budney Feb 25 2010 at 13:06 2 The only way I can see one could be surpised by that is that one very. very grossly underestimate the difficulty of simulating, say, Bernoulli and that one has never wrote any code (for anyone who's programmed anything knows bugs are essentially inevitable!) – Mariano Suárez-Alvarez Feb 25 2010 at 18:26 show 7 more comments ## 5 Answers I don't know if you mean this but have a look here (there some bugs that seem to be quite elementary): - http://www.walkingrandomly.com/?p=801 - http://www.walkingrandomly.com/?p=578 - http://www.walkingrandomly.com/?p=88 - ...search for "bug" on this site Hope this helps - 1 Somewhat like that, but I want the link to an early paper which showed that > 100 years ago, mathematicians already knew how to get the correct answer. – Jacques Carette Feb 25 2010 at 17:33 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. I'm not sure I completely understand the question, but many versions of Maple give the wrong answer when counting the number of partitions of $n$, for some $n$. Obviously, mathematicians have known how to do this since at least Euler. (One could argue that mathematicians have known how to count for a very long time, indeed.) - This is closer. An actual answer would have quoted a paper of Euler's which contains a specific enough method that, if implemented on today's computer, would correctly and efficiently compute the number of partitions of n. The sizes where Maple returns the wrong answer are large enough that few humans would have ever computed these without mechanical help (with 'computers' like Gauss excepted). – Jacques Carette Feb 25 2010 at 20:41 1 What is typically attributed to Euler is the [pentagonal number theorem][1] which gives a recurrence for computing the number of partitions of n. This could certainly be implemented easily on today's computers and I suspect (though haven't tried) that it would work reasonably fast on the numbers in question. That said, I think most 'state-of-the-art' algorithms use [Rademacher's formula][2] which is more efficient and was certainly not known to Euler. [1]: en.wikipedia.org/wiki/Pentagonal_number_theorem [2]: en.wikipedia.org/wiki/Partition_(number_theory) – Jason Bandlow Feb 25 2010 at 21:09 1 And here is a link to Euler's work: front.math.ucdavis.edu/math.HO/0510054 – Jason Bandlow Feb 25 2010 at 21:13 2 @Jacques Carette: As mentioned in the link to the OEIS, you can at least turn this into an example known by 1920. Simply compute p(11269) mod 5. An identity by Ramanujan says that is 0, while Maple would say 1. – aorq Mar 28 2010 at 5:23 If I recall correctly from ~30 years ago, on the Apple ][ the calculation 7^2 would return 49.0001. More than 100 years ago (or even 100 years before that), mathematicians already knew that the square of an integer is an integer. - Hmmm. I certainly don't remember anything of the kind. Do you mean in Applesoft BASIC? If you go to <a href="calormen.com/applesoft/">this</a>; Applesoft interpreter and run the program "10 PRINT 7^2", then you will get 49. – James Borger Sep 16 2010 at 9:17 Let's try this again: calormen.com/applesoft – James Borger Sep 16 2010 at 9:18 I was good with computers when they had 6502 processors, but then it all changed... – James Borger Sep 16 2010 at 9:19 1 and, until you prove me wrong, -1 for sullying The Woz's name! – James Borger Sep 16 2010 at 9:23 In mathematica, if you look at the dirichlet characters modulo 4, you don't actually get the characters. - What year would you associate to this? – Jacques Carette Sep 15 2010 at 16:07 I think Dirichlet introduced characters in the 1830s, in his proof of the existence of primes in arithmetic progressions. – Gerry Myerson Sep 16 2010 at 3:49 According to Wolfram Alpha and the tables in [2], $\pi(10^{10}) = 455, 052, 511$. Nevertheless, in Zagier's paper we find that $\pi(10^{10}) = 455, 052, 512$. Wonder whether someone has already noted this discrepancy between the sources elsewhere. Naturally, the discrepancy implies the existence of a bug in either the routines of Zagier or in WA's implementation of the prime counting function. I don't think that it's only a typo in Zagier' note because, if memory serves me right, there are some other texts in the literature that endorse the computations of Zagier (for instance, see [1, page 7].). References [1] A. E. Ingham. The distribution of prime numbers. Cambridge Mathematical Library. [2] H. Riesel. Prime Numbers and Computer Methods for Factorization. Second Edition, 1994, Birkhäuser. [3] D. Zagier. "The first 50 million primes". Math. Intelligencer, 0 (1977). - The discrepancy has been noted elsewhere, e.g., research.att.com/~njas/sequences/A006880 which says "Lehmer gave the incorrect value 455052512 for the 10th term." Unfortunately, it gives no Lehmer citation, and doesn't even say which Lehmer! It has been noted elsewhere at MO that D N Lehmer, father of D H Lehmer, was one of the last mathematicians to count 1 as a prime, so maybe there's no bug here, just two different definitions. – Gerry Myerson Sep 16 2010 at 3:13 1 My speculation in my previous comment was off the mark. J C Lagarias, V S Miller, and A M Odlyzko, Computing $\pi(x)$: the Meissel-Lehmer method, Math Comp 44 (1985) 537-560, say "[D H] Lehmer used an IBM 701 to calculate $\pi(10^{10})=455,052,512$ (a value later shown [by J Bohman, On the number of primes less than a given limit, BIT 12 (1972) 576-577] to be too large by 1)." This paper also notes that Meissel's calculation of $\pi(10^9)$ was too small by 56, and Bohman's value of $\pi(10^{13})$ was too small by 941! – Gerry Myerson Sep 16 2010 at 3:34 2 Don't think it's just a matter of definitions... All of the other entries in the table would be wrong in that case! – J. H. S. Sep 16 2010 at 5:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9468073844909668, "perplexity_flag": "middle"}
http://mathhelpforum.com/statistics/194585-statistical-probility-baye-s-theorem.html
# Thread: 1. ## Statistical probility, bayes' theorem Hi, I am looking for a solution to the following problem, which according to the question can be solved using bayes' theorem: The same event occurs a 1000 times Every time, the chance of it being positive is 17% Everytime 11 events in a row are negative, the 12th is positive with a 100% chance. There are no external events apart from the aforementioned rules What is the expected amount of positive events, and how did you reach this conclusion? My approach was invalid because it was to simplified, yet the end result was close to the actual answer (which still hasn't been provided) (1000-1000/12)*0.17+1000/12=239.17 That answer is incorrect. Does anyone have a solution? Edit: recieved a hint: the chance that the event will be negative 11 times in a row is 12.69%. This implies that that number has something to do with it, otherwise the tip would not have been given 2. ## Re: Statistical probility, baye's theorem Originally Posted by brentvos The same event occurs a 1000 times Every time, the chance of it being positive is 17% Every 12th event will be positive regardless of chance. There are no external events apart from the aforementioned rules That is simply a series of statements. What is the question? 3. ## Re: Statistical probility, baye's theorem The question is: What's the amount of positive events given the conditions, and how did you come to that conclusion. Thanks for the help! 4. ## Re: Statistical probility, baye's theorem Hello, brentvos! I am looking for a solution to the following problem which, according to the question, can be solved using Bayes' Theorem. I don't understand the hint; this is not conditonal probability. The same event occurs a 1000 times Every time, the chance of it being positive is 17% Every 12th event will be positive regardless of chance. There are no external events apart from the aforementioned rules Find the expected number of positive events. My approach was invalid because it was too simplified, . How do you know this? yet the end result was close to the actual answer . And how do you know this . . (which still hasn't been provided). . . . . . . . . . . if you don't know the answer? Every 12th event will be positive. .It will be positive. $\left[\frac{1000}{12}\right] = 83$ times. For the other 917 events, it will be positive 17% of the time. . . $0.17 \times 917 \,=\,155.89$ expected positive events. Therefore, the expectation is: . $83 + 155.89 \:=\:238.89$ positive events. 5. ## Re: Statistical probility, baye's theorem The question giver told us it had something to do with Bayes' theorem. I didn't understand the hint either, but hoped one of you would. As for your answer, except for different decimals, I had the same answer as you did. This was incorrect, as told by the question giver. He gave another tip, saying that the possibility of not having a positive event 11 times in a row is 12.9% thereby declaring that it has something to do with the given statement. I'll edit the original post to include this 6. ## Re: Statistical probility, baye's theorem Originally Posted by brentvos The question giver told us it had something to do with Bayes' theorem. I didn't understand the hint either, but hoped one of you would. As for your answer, except for different decimals, I had the same answer as you did. This was incorrect, as told by the question giver. He gave another tip, saying that the possibility of not having a positive event 11 times in a row is 12.9% thereby declaring that it has something to do with the given statement. Well I agree with Soroban's answer. Perhaps we all are misreading the intended question. But rather I suspect that this a case in which the author has a very specific method in mind. So for someone not privy to his/her style any effort at a solution would simply guessing. 7. ## Re: Statistical probility, baye's theorem Ah I've found the mistake. I copied it wrong. I changed OP, but the change is: Every time 11 events in a row have been resulted negative, the 12th is positive. (the first 11 are negative, the 12th therefore is positive, regardless of any previous chance) Sorry for the mistake, would you be so kind to look at it again? 8. ## Re: Statistical probility, bayes' theorem Originally Posted by brentvos Hi, I am looking for a solution to the following problem, which according to the question can be solved using bayes' theorem: The same event occurs a 1000 times Every time, the chance of it being positive is 17% Everytime 11 events in a row are negative, the 12th is positive with a 100% chance. There are no external events apart from the aforementioned rules What is the expected amount of positive events, and how did you reach this conclusion? My approach was invalid because it was to simplified, yet the end result was close to the actual answer (which still hasn't been provided) (1000-1000/12)*0.17+1000/12=239.17 That answer is incorrect. Does anyone have a solution? Edit: recieved a hint: the chance that the event will be negative 11 times in a row is 12.69%. This implies that that number has something to do with it, otherwise the tip would not have been given Hi Brentvos, The probability that the ith event is positive is 0.17 for i = 1, 2, 3, ..., 11. For i = 12, 13, 14, ..., 100, the event will be positive with probability 1, if the preceding 11 events are all negative; this happens with probability 0.83^11. If at least one of the 11 preceding events is positive, which happens with probability 1 - 0.83^11, the ith event will be positive with probability 0.17. So the total probability of a positive event is 0.83^11 + (1 - 0.83^11) * 0.17. Therefore the expected number of positive events is $11 \times 0.17 + 989 \times [0.83^{11} + (1 - 0.83^{11}) * 0.17]$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.955565869808197, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/118903/elementary-applications-of-linear-algebra-over-finite-fields/118922
## Elementary applications of linear algebra over finite fields ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) I'm teaching axiomatic linear algebra again this semester. Although the textbooks I'm using do everything over the real or complex numbers, for various reasons I prefer to work over an arbitrary field when possible. I always introduce at least $\mathbb{F}_2$ as an example of a finite field. To help motivate this level of generality, I'd like to cover some application of linear algebra over finite fields. Ideally it shouldn't make explicit reference to linear algebra or finite fields in its setup, and should require as little background as possible (the students have taken calculus, but not necessarily any other advanced math — in particular applications to group theory are out). I've looked around a little, but haven't found anything so far that requires little enough overhead to fit into a single 50-minute lecture and wouldn't seem either too abstract or too arbitrary to motivate such students. Any suggestions? Alternatively, I'd be interested in elementary applications of linear algebra over any other field which isn't a subfield of $\mathbb{C}$. - 8 If you have $n+1$ positive integers, all of whose prime factors belong come from a set of size at most $n$, then some nonempty subproduct of your positive integers is a perfect square. (Look at the exponents on the primes as giving you a vector over $\mathbb{F}_2$, and note that there must be a linear dependence relation.) This is important in (e.g.) the quadratic sieve factoring algorithm. – Anonymous Jan 14 at 17:16 3 The book "Thirty-three miniatures" is full of delightful applications of linear algebra, and a preliminary version is available at the author's web page kam.mff.cuni.cz/~matousek . In particular, finite fields are used in "miniature 27" to explain the fastest known algorithm for checking the associativity of an arbitrary binary operation on a finite set. – boumol Jan 14 at 17:48 @buomol: I have that book, and the sections I'd looked at seemed either too abstract or to have too much overhead for this class. I hadn't noticed that Miniature 27 involves finite fields, though -- I'll have to think about whether that one fits. – Mark Meckes Jan 14 at 18:14 1 To clarify my last comment, I don't mind abstraction per se in this class -- I do after all give the definition of a field on the first day. But since many of the students are encountering this level of abstraction for the first time, I want applications that will feel more concrete to the students in order to motivate the abstraction. Also, I will definitely cover one or two of the sections of Matousek's beautiful book which deal with real scalars; I just wasn't sure I wanted to use any of his miniatures involving finite fields. – Mark Meckes Jan 14 at 18:31 No elementary applications of linear algebra over a field of functions? Not that I expected any, but it would have been interesting to see. – Mark Meckes Jan 15 at 15:44 ## 13 Answers How about binary linear codes? You can "see" the Hamming distance between codewords, and use linear transformations to encode/decode - I now feel foolish given my comments above, because Matousek has a nice section on error correcting codes near the very beginning of his book, which I somehow missed entirely. – Mark Meckes Jan 14 at 18:34 A neat application of linear codes also arises in network coding: en.wikipedia.org/wiki/… – Tobias Fritz Jan 14 at 23:48 How about $q$-ary linear codes? – Steve Huntsman Jan 15 at 0:09 BTW, a very concrete/friendly way to handle $q$-ary codes is facilitated by the approach here: mathdl.maa.org/images/cms_upload/Wardlaw47052.pdf – Steve Huntsman Jan 15 at 0:15 It's perhaps silly to bother accepting an answer to a CW question, but I've decided I will definitely discuss binary linear codes, so I'm going ahead and accepting this. (I may or may not discuss some of the other applications suggested here, but of course there's only so much time in a semester.) – Mark Meckes Jan 16 at 21:07 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. You can use linear algebra over $\mathbb{F}_2$ to solve the game "Lights Out": http://en.wikipedia.org/wiki/Lights_Out_%28game%29 - 2 This at least used to be the canonical answer to this question (when Lights Out was better known). – Allen Knutson Jan 14 at 20:24 1 Can be played online here: addictinggames.com/puzzle-games/lightsout.jsp – Qiaochu Yuan Jan 15 at 0:34 The linear algebra behind the game is explained here: math.ksu.edu/~dmaldona/math551/lights_out.pdf – Martin Brandenburg Jan 15 at 9:29 I suggest Linear Feedback Shift Register (LFSR) as an easy example. They can be used as pseudo random number generators and have a wide practical use in communication and cryptography, GPS, GSM, CRC, WIFI, .. (non-math) applications which are usually accepted as usefull. Usually they work over $\mathbb{F}_2$, but other fields are possible. Basically you have to work with polynomials (including long division) over $\mathbb{F}_2$. The need for primitive polynomials may motivate some more advanced considerations. A brief summary for mathematicians is Nayuki's blog. I would explicitly pick the CRC algorithm. A description is located for example in this lecture(pdf) from D.Culler. This also relates to linear codes, which is also a good idea. More easy is an application as fancy counter. If you ever wondered how the shuffle mode of your media player works. - Suppose you want to compute the period of the Fibonacci sequence $\bmod p$. This reduces to examining the powers of the matrix $\left[ \begin{array}{cc} 1 & 1 \\ 1 & 0 \end{array} \right]$ over $\mathbb{F}_p$, which requires either diagonalizing it over $\mathbb{F}_p$ or over $\mathbb{F}_{p^2}$ (or, when $p = 5$, using a nontrivial Jordan block). From here you can write down a nice number that is divisible by the period, depending on the value of $p \bmod 5$ (this uses a little quadratic reciprocity). Edit: I suppose this requires some number-theoretic background to do properly. Never mind. - Possibly the simplest application is Berlekamp's Oddtown theorem. One reference is Section 12.2 of http://math.mit.edu/~rstan/algcomb.pdf. - The game of Projective Set is tantamount to finding a linear dependence on $7$ (distinct, nonzero) vectors in $\mathbb{F}_2^6$. Linear algebra over $\mathbb{F}_2$ shows that there's always a solution, and moreover that the number of solutions is always $2^k-1$ for some positive integer $k$. How large can this $k$ get, and how many of the ${63 \choose 7} = 553270671$ possible deals attain this maximal $k$? [Thanks to Zach Abel for introducing me to this game.] - This is similar to the card game SET!, see mathoverflow.net/questions/13638/… – Stephan Müller Jan 15 at 14:24 1 Yes, it's something like SET, where one must find affine lines in a subset of $\mathbb{F}_3^4$; that's probably why the game is called "Projective Set". The $\mathbb{F}_2^6$ game is less familiar, but has the advantage for teaching that one can use ideas from an intro linear-algebra course not just to describe the game but also to obtain results such as it takes $7$ cards to guarantee a valid subset (and with $6$ cards the probability of success is $1 - \prod_{n=0}^5 (64-2^n)/(63-n) = 61363/104371 \doteq 58.8\%$). – Noam D. Elkies Jan 15 at 16:09 (1) An obvious but a very conceptual application is the basic fact that the cardinality of every finite field $F$ is a power of a prime, as $F$ can be considered as a vector space over $\mathbb{F}_p$ for $p=char F$. (2) Another application is the construction of finite projective planes. Consider a vector space $V$ of dimension $3$ over a finite field $F$. Then consider the projective geometry $PG(V)$ whose points are the $1$-dimensional subspaces of $V$ and whose lines are the $2$-dimensional subspaces of $V$. In particular, the Fano projective place can be obtained by this method over $\mathbb{F}_2$. - There is a purely group-theoretic group for (1): Use Cauchy's Theorem and $p = \mathrm{exp}(F,+)$. – Martin Brandenburg Jan 15 at 1:18 (1) you can use $\mathbb{F}_2$ to prove that every group $G$ with $Aut(G)\cong 0$ is either $0$ or $\mathbb{Z}/2\mathbb{Z}$. This is done by noting that the group is abelian, since all conjugation-automorphisms are the identity. Then for abelian groups one has the automorphism $g\mapsto -g$ so all elements are self inverse. At this point one gets that $G$ is a $\mathbb{F}_2$-vector space. Since any vector space of dimension $≥2$ admits nontrivial automorphisms, the result follows. (2) also you might want to take a look at this: http://gowers.wordpress.com/2008/07/31/dimension-arguments-in-combinatorics/ - I've given (1) as an exam problem in a more advanced course, but these students have never heard of groups. – Mark Meckes Jan 14 at 20:08 Sometimes math problems crop up in high school competitions that are secretly linear algebra problems in disguise. For instance, #6 of USAMO 2008 (http://amc.maa.org/a-activities/a7-problems/USAMO-IMO/q-usamo/-pdf/usamo2008.pdf) can be approached via linear algebra over $\mathbb{F}_2$. - 5 I'd like problems like this one better if I could understand why everyone would insist on being in a room containing an even number of one's friends. – Mark Meckes Jan 14 at 20:23 1 Don't you play (spontaneous with a single referee) games which need two teams, each team having the same number of players? Gerhard "Neither Do I. So There!" Paseman, 2013.01.14 – Gerhard Paseman Jan 14 at 23:57 3 Anyone who's ever been single knows the phenomenon whereby everyone else in the room is a couple. But I've never heard anyone say they liked that. – Tom Leinster Jan 15 at 0:38 @Gerhard, no I don't, but to apply this problem you're either talking about two games each being played by two matched teams of a priori arbitrary sizes, or one game between two teams whose sizes are even but not necessarily equal. To me this kind of problem seems like it would work better as a motivating example in a combinatorics course than in a linear algebra course. – Mark Meckes Jan 15 at 15:39 I don't see a motivation based in linear algebra, but my comment above was based on a contrived scenario from reading your comment: in MetaMagiGame (a name I just made up) a person is selected at random from a room of people, and that person must devise or choose a game which he or she must referee and must pit half of their friends against the other half, while the non-friends look on. Tom Leinster's scenario sounds more plausible to me. Gerhard "Game Parties Beat Party Games" Paseman, 2013.01.15 – Gerhard Paseman Jan 15 at 18:49 This one is pretty good. Kaplansky wanted squarefree numbers $x$ such that $$\sigma ( x^3) = y^2,$$ where $\sigma$ is the sum of divisors function. Somewhere around here I have a short note of his. He referred to this (and some very similar problems) as Ozanam's problem, from page 56 of Dickson's History, volume 1. Let's see; for each prime $p$ up to some bound, I had the computer factor $\sigma ( p^3),$ especially recording the exponents on the output primes $q.$ So, to the best of my memory (about 18 years ago), I wound up with a big matrix with entries in the field with two elements; each column meant a prime $p,$ each row was saying whether the exponent for the prime $q$ was even or odd. Then, a solution was a column vector, also of 0's and 1's, which my big matrix mapped to the zero vector. So, I did Gauss elimination over the field of two elements. And found hundreds of solutions. I will see if I can find something written about this. For that matter, the program or programs I wrote should still be there in my MSRI account. Note: i am having a little trouble remembering if it is the matrix i describe above or its transpose. So perhaps a little care is needed. It definitely worked, though, and quickly. I also built in some procedure where i could force some prime to be included, then see if i could find solutions with that restriction. As I recall, that needed more handholding for the computer, more attention by me. - There is a Martin Gardner problem, reprinted in his Unexpected Hanging collection, that goes like this: Miranda beat Rosemary in a set of tennis, winning 6–3. There were five service breaks. Who served first? One solution is as follows. The wins by the player who served first may be represented by a vector $\mathbb{F}_2^9$ that is the sum of (1,0,1,0,1,0,1,0,1) and another vector of weight 5. Such a vector must have even weight, so the player who served first won an even number of games. Thus Miranda served first. In the book, Gardner writes that his original solution was long and cumbersome, and that the shortest solution he received was by Goran Ohlin: "Whoever served first, served five games, and the other player served four. Suppose the first server won $x$ of the games she served and $y$ of the other four games. The total number of games lost by the player who served them is then $5-x+y$. This equals $5$ [we were told that the non-server won five games]. Therefore $x=y$, and the first server won a total of $2x$ games. Because only Miranda won an even number of games, she must have been the first server." Though more elementary in some sense, this solution seems more ad hoc and less conceptual to me than the above argument using $\mathbb{F}_2$. - Rubik's Clock and Its Solution by Dénes and Mullen (Math. Mag. 68 (1995), 378–381) uses linear algebra modulo 12 to solve the Rubik's clock puzzle. - A trivial application: The group $(\mathbb{Z}/2\mathbb{Z})^n$ is generated by no less than $n$ elements. [Probably this is the easiest way to prove that the free profinite group on $n$ letters cannot be topologically generated by less elements. And more abstractly, it is in general hard to find the minimal number of generators of a group, UNLESS it has a vector space quotient, and then we have dimension theory of vector spaces.] - 1 Yes every dihedral group is generated by two elements of order 2, so we need to use that the group is abelian. If an abelian group is generated by $m$ elements and all of them have order $2$, then we get an epimorphism from $\mathbb{Z}/2 \oplus \dotsc \oplus \mathbb{Z}/2$, thus the group has $\leq 2^m$ elements. – Martin Brandenburg Jan 15 at 1:14 @Berlusconi see the beginning of the answer. @Martin, I agree to what you write, but the vector space approach is as simple. – Lior Bary-Soroker Jan 17 at 15:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 76, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9473340511322021, "perplexity_flag": "middle"}
http://mathhelpforum.com/statistics/184582-z-score-probability-print.html
# Z-Score and probability Printable View • July 14th 2011, 01:52 PM sfspitfire23 Z-Score and probability Fellas, Say I have a list of numbers (normally distributed) with a mean of -200 and a SD of 50. Now, say I would like to find the probability of getting a less than -100 and greater than +100. For the -100 I get Z=2 and for +100 I get z=6. Now I integrate against a standard normal distribution from (-infty to 2) for the -100 and from (6 to infty) for the 100 one. Something seems wrong with this method...Basically, I'm trying to figure out how to calculate the probability that the Dow Jones makes a 500 point swing in either direction given that its returns are normally distributed. Thanks! • July 14th 2011, 02:14 PM Siron Re: Z-Score and probability To find the probability after calculated the z-score you can also use 'the table of the normal distribution'. • July 14th 2011, 02:25 PM pickslides Re: Z-Score and probability Quote: Originally Posted by sfspitfire23 Basically, I'm trying to figure out how to calculate the probability that the Dow Jones makes a 500 point swing in either direction given that its returns are normally distributed. Thanks! Then you are looking for $\displaystyle P(-200-500<X<-200+500) = P(-700<X<300)$ $\displaystyle = P\left(\frac{-700-(-200)}{50}<Z<\frac{300-(-200)}{50}\right)=\dots$ • July 14th 2011, 03:02 PM sfspitfire23 Re: Z-Score and probability Sorry pickslides, I should have been more clear. The numbers I used in the problem are not from the Dow Jones. I just used that as an example. What I understand you saying is this: If I want to find the probability of a 500 point swing, I should do this: P(mean-500<X<mean+500)=P(t<X<y)=P(t-mean/sigma<Z<y-mean/sigma). Then if I wanted the prob of just a 500 point decline it would be P(mean-500<)=P(t<X)=P(t-mean/sigma<Z) and integrate from -infty to Z. Right? Thx • July 14th 2011, 03:12 PM pickslides Re: Z-Score and probability Quote: Originally Posted by sfspitfire23 If I want to find the probability of a 500 point swing, I should do this: P(mean-500<X<mean+500)=P(t<X<y)=P(t-mean/sigma<Z<y-mean/sigma). Right? Thx This seems ok, but I would just use a table as suggested in post #2. • July 14th 2011, 10:07 PM CaptainBlack Re: Z-Score and probability Quote: Originally Posted by sfspitfire23 Fellas, Say I have a list of numbers (normally distributed) with a mean of -200 and a SD of 50. Now, say I would like to find the probability of getting a less than -100 and greater than +100. For the -100 I get Z=2 and for +100 I get z=6. Now I integrate against a standard normal distribution from (-infty to 2) for the -100 and from (6 to infty) for the 100 one. Something seems wrong with this method...Basically, I'm trying to figure out how to calculate the probability that the Dow Jones makes a 500 point swing in either direction given that its returns are normally distributed. Thanks! What evidence do you have that they are normally distributed (or stationary)? CB All times are GMT -8. The time now is 06:16 AM.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.941308856010437, "perplexity_flag": "middle"}
http://www.reference.com/browse/brute
Definitions Nearby Words # Brute-force search In computer science, brute-force search or exhaustive search, also known as generate and test, is a trivial but very general problem-solving technique, that consists of systematically enumerating all possible candidates for the solution and checking whether each candidate satisfies the problem's statement. For example, a brute-force algorithm to find the divisors of a natural number n is to enumerate all integers from 1 to n, and check whether each of them divides n without remainder. For another example, consider the popular eight queens problem, which asks to place eight queens on a standard chessboard so that no queen attacks any other. A brute-force approach would examine all the 64! /56! = 178,462,987,637,760 possible placements of 8 pieces in the 64 squares, and, for each arrangement, check whether no queen attacks any other. Brute-force search is simple to implement, and will always find a solution if it exists. However, its cost is proportional to the number of candidate solutions, which, in many practical problems, tends to grow very quickly as the size of the problem increases. Therefore, brute-force search is typically used when the problem size is limited, or when there are problem-specific heuristics that can be used to reduce the set of candidate solutions to a manageable size. The method is also used when the simplicity of implementation is more important than speed. This is the case, for example, in critical applications where any errors in the algorithm would have very serious consequences; or when using a computer to prove a mathematical theorem. Brute-force search is also useful as "baseline" method when benchmarking other algorithms or metaheuristics. Indeed, brute-force search can be viewed as the simplest metaheuristic. Brute force search should not be confused with backtracking, where large sets of solutions can be discarded without being explicitly enumerated (as in the textbook computer solution to the eight queens problem above). ## Implementing the brute-force search ### Basic algorithm In order to apply brute-force search to a specific class of problems, one must implement four procedures, first, next, valid, and output. These procedures should take as a parameter the data P for the particular instance of the problem that is to be solved, and should do the following: 1. first (P): generate a first candidate solution for P. 2. next (P, c): generate the next candidate for P after the current one c. 3. valid (P, c): check whether candidate c is a solution for P. 4. output (P, c): use the solution c of P as appropriate to the application The next procedure must also tell when there are no more candidates for the instance P, after the current one c. A convenient way to do that is to return a "null candidate", some conventional data value Λ that is distinct from any real candidate. Likewise the first procedure should return Λ if there are no candidates at all for the instance P. The brute-force method is then expressed by the algorithm `c $gets$ first(P)` `while c $neq$ Λ do` ` if valid(P,c) then output(P, c)` ` c $gets$ next(P,c)` For example, when looking for the divisors of an integer n, the instance data P is the number n. The call first(n) should return the integer 1 if n $geq$ 1, or Λ otherwise; the call next(n,c) should return c + 1 if c $<$ n, and Λ otherwise; and valid(n,c) should return true if and only if c is a divisor of n. (In fact, if we choose Λ to be n + 1, the tests are unnecessary, and the algorithm simplifies considerably.) ### Common variations The brute-force search algorithm above will call output for every candidate that is a solution to the given instance P. The algorithm is easily modified to stop after finding the first solution, or a specified number of solutions; or after testing a specified number of candidates, or after spending a given amount of CPU time. ## Combinatorial explosion The main disadvantage of the brute-force method is that, for many real-world problems, the number of natural candidates is prohibitively large. For instance, if we look for the divisors of a number as described above, the number of candidates tested will be the given number n. So if n has sixteen decimal digits, say, the search will require executing at least 1015 computer instructions, which will take several days on a typical PC. If n is a random 64-bit natural number, which has about 19 decimal digits on the average, the search will take about 10 years. This steep growth in the number of candidates, as the size of the data increases, occur in all sorts of problems. For instance, if we are seeking a particular rearrangement of 10 letters, then we have 10! = 3,628,800 candidates to consider; which a typical PC can generate and test in less than one second. However, adding one more letter — which is only a 10% increase in the data size — will multiply the number of candidates by 11 — a 1000% increase. For 20 letters, the number of candidates is 20!, which is about 2.4×1018 or 2.4 million million million; and the search will take about 10,000 years. This unwelcome phenomenon is commonly called the combinatorial explosion. ## Speeding up brute-force searches One way to speed up a brute-force algorithm is to reduce the search space, that is, the set of candidate solutions, by using heuristics specific to the problem class. For example, consider the popular eight queens problem, which asks to place eight queens on a standard chessboard so that no queen attacks any other. Since each queen can be placed in any of the 64 squares, in principle there are 648 = 281,474,976,710,656 (over 281 million million) possibilities to consider. However, if we observe that the queens are all alike, and that no two queens can be placed on the same square, we conclude that the candidates are all possible ways of choosing of a set of 8 squares from the set all 64 squares; which means 64!/56!/8! = 4,426,165,368 (less than 5 thousand million) candidate solutions — about 1/60,000 of the previous estimate. Actually, it is easy to see that no arrangement with two queens on the same row or the same column can be a solution. Therefore, we can further restrict the set of candidates to those arrangements where queen 1 is on row 1, queen 2 is in row 2, and so on; all in different columns. We can describe such an arrangement by an array of eight numbers c[1] through c[8], each of them between 1 and 8, where c[1] is the column of queen 1, c[2] is the column of queen 2, and so on. Since these numbers must be all different, the number of candidates to search is the number of permutations of the integers 1 through 8, namely 8! = 40,320 — about 1/100,000 of the previous estimate, and 1/7,000,000,000 of the first one. As this example shows, a little bit of analysis will often lead to dramatic reductions in the number of candidate solutions, and may turn an intractable problem into a trivial one. This example also shows that the candidate enumeration procedures (first and next) for the restricted set may be just as simple as those of the original set, or even simpler. In some cases, the analysis may reduce the candidates to the set of all valid solutions; that is, it may yield an algorithm that directly enumerates all the solutions (or finds one solution, as appropriate), without wasting time with tests and the generation of invalid candidates. For example, consider the problem of finding all integers between 1 and 1,000,000 that are evenly divisible by 417. A naive brute-force solution would generate all integers in the range, testing each of them for divisibility. However, that problem can be solved much more efficiently by starting with 417 and repeatedly adding 417 until the number exceeds 1,000,000 — which takes only 2398 (= 1,000,000 ÷ 417) steps, and no tests. ## Reordering the search space In applications that require only one solution, rather than all solutions, the expected running time of a brute force search will often depend on the order in which the candidates are tested. As a general rule, one should test the most promising candidates first. For example, when searching for a proper divisor of a random number n, it is better to enumerate the candidate divisors in increasing order, from 2 to n - 1, than the other way around — because the probability that n is divisible by c is 1/c. Moreover, the probability of a candidate being valid is often affected by the previous failed trials. For example, consider the problem of finding a 1 bit in a given 1000-bit string P. In this case, the candidate solutions are the indices 1 to 1000, and a candidate c is valid if P[c] = 1. Now, suppose that the first bit of P is equally likely to be 0 or 1, but each bit thereafter is equal to the previous one with 90% probability. If the candidates are enumerated in increasing order, 1 to 1000, the number t of candidates examined before success will be about 6, on the average. On the other hand, if the candidates are enumerated in the order 1,11,21,31...991,2,12,22,32 etc., the expected value of t will be only a little more than 2. More generally, the search space should be enumerated in such a way that the next candidate is most likely to be valid, given that the previous trials were not. So if the valid solutions are likely to be "clustered" in some sense, then each new candidate should be as far as possible from the previous ones, in that same sense. The converse holds, of course, if the solutions are likely to be spread out more uniformly than expected by chance. ## Alternatives to brute force search There are many other search methods, or metaheuristics, which are designed to take advantage of various kinds of partial knowledge one may have about the solution. Heuristics can also be used to make an early cutoff of parts of the search. One example of this is the minimax principle for searching game trees, that eliminates many subtrees at an early stage in the search. In certain fields, such as language parsing, techniques such as chart parsing can exploit constraints in the problem to reduce an exponential complexity problem into a polynomial complexity problem. The search space for problems can also be reduced by replacing the full problem with a simplified version. For example, in computer chess, rather than computing the full minimax tree of all possible moves for the remainder of the game, a more limited tree of minimax possibilities is computed, with the tree being pruned at a certain number of moves, and the remainder of the tree being approximated by a static evaluation function.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 5, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9182054400444031, "perplexity_flag": "head"}
http://en.wikipedia.org/wiki/Minkowski_space
# Minkowski space For spacetime graphics, see Minkowski diagram. For Minkowski space associated to a number field, see Minkowski space (number field). Special relativity Part of a series on Foundations Mathematical formulation Consequences Minkowski spacetime World line Spacetime diagrams Light cone People In mathematical physics, Minkowski space or Minkowski spacetime (named after the mathematician Hermann Minkowski) is the mathematical space setting in which Einstein's theory of special relativity is most conveniently formulated. In this setting the three ordinary dimensions of space are combined with a single dimension of time to form a four-dimensional manifold for representing a spacetime. In theoretical physics, Minkowski space is often contrasted with Euclidean space. While a Euclidean space has only spacelike dimensions, a Minkowski space also has one timelike dimension. Therefore the symmetry group of a Euclidean space is the Euclidean group and for a Minkowski space it is the Poincaré group. The spacetime interval between two events in Minkowski space is either: 1. space-like, 2. light-like ('null') or 3. time-like. ## History In 1906 it was noted by Henri Poincaré that, by taking time to be the imaginary part of the fourth spacetime coordinate √−1 , a Lorentz transformation can be regarded as a rotation of coordinates in a four-dimensional Euclidean space with three real coordinates representing space, and one imaginary coordinate, representing time, as the fourth dimension. Since the space is then a pseudo-Euclidean space, the rotation is a representation of a hyperbolic rotation, although Poincaré did not give this interpretation, his purpose being only to explain the Lorentz transformation in terms of the familiar Euclidean rotation.[1] This idea was elaborated by Hermann Minkowski,[2] who used it to restate the Maxwell equations in four dimensions, showing directly their invariance under the Lorentz transformation. He further reformulated in four dimensions the then-recent theory of special relativity of Einstein. From this he concluded that time and space should be treated equally, and so arose his concept of events taking place in a unified four-dimensional space-time continuum. In a further development,[3] he gave an alternative formulation of this idea that did not use the imaginary time coordinate, but represented the four variables (x, y, z, t) of space and time in coordinate form in a four dimensional affine space. Points in this space correspond to events in space-time. In this space, there is a defined light-cone associated with each point (see diagram above), and events not on the light-cone are classified by their relation to the apex as space-like or time-like. It is principally this view of space-time that is current nowadays, although the older view involving imaginary time has also influenced special relativity. Minkowski, aware of the fundamental restatement of the theory which he had made, said: The views of space and time which I wish to lay before you have sprung from the soil of experimental physics, and therein lies their strength. They are radical. Henceforth space by itself, and time by itself, are doomed to fade away into mere shadows, and only a kind of union of the two will preserve an independent reality. – Hermann Minkowski, 1908 For further historical information see references Galison (1979), Corry (1997), Walter (1999). ## Structure Formally, Minkowski space is a four-dimensional real vector space equipped with a nondegenerate, symmetric bilinear form with signature (−,+,+,+) (Some may also prefer the alternative signature (+,−,−,−); in general, mathematicians and general relativists prefer the former while particle physicists tend to use the latter.) In other words, Minkowski space is a pseudo-Euclidean space with n = 4 and n − k = 1 (in a broader definition any n > 1 is allowed). Elements of Minkowski space are called events or four-vectors. Minkowski space is often denoted R1,3 to emphasize the signature, although it is also denoted M4 or simply M. It is perhaps the simplest example of a pseudo-Riemannian manifold. ### The Minkowski inner product This inner product is similar to the usual Euclidean inner product, but is used to describe a different geometry; the geometry is usually associated with relativity. Let M be a 4-dimensional real vector space. The Minkowski inner product is a map η: M × M → R (i.e. given any two vectors v, w in M we define η(v,w) as a real number) which satisfies properties (1), (2), and (3) listed here, as well as property (4) given below: | | | | |----|---------------|---------------------------------------------------------------| | 1 | bilinear | η(au+v, w) = aη(u,w) + η(v,w) for all a ∈ R and u, v, w in M. | | 2 | symmetric | η(v,w) = η(w,v) for all v, w ∈ M. | | 3 | nondegenerate | if η(v,w) = 0 for all w ∈ M then v = 0. | Note that this is not an inner product in the usual sense, since it is not positive-definite, i.e. the quadratic form η(v,v) need not be positive for nonzero v. The positive-definite condition has been replaced by the weaker condition of nondegeneracy (every positive-definite form is nondegenerate but not vice-versa). The inner product is said to be indefinite. These misnomers, "Minkowski inner product" and "Minkowski metric," conflict with the standard meanings of inner product and metric in pure mathematics; as with many other misnomers, the usage of these terms is due to similarity to the mathematical structure. Just as in Euclidean space, two vectors v and w are said to be orthogonal if η(v,w) = 0. Minkowski space differs by including hyperbolic-orthogonal events in case v and w span a plane where η takes negative values. This difference is clarified by comparing the Euclidean structure of the ordinary complex number plane to the structure of the plane of split-complex numbers. The Minkowski norm of a vector v is defined by $\|v\| = \sqrt{|\eta(v,v)|}.$ This is not a norm in the usual sense (because it fails to be subadditive), but it does define a useful generalization of the notion of length to Minkowski space. In particular, a vector v is called a unit vector if ||v|| = 1 (i.e., η(v,v) = ±1). A basis for M consisting of mutually orthogonal unit vectors is called an orthonormal basis. By the Gram–Schmidt process, any inner product space satisfying conditions (1), (2), and (3) above always has an orthonormal basis. Furthermore, the number of positive and negative unit vectors in any such basis is a fixed pair of numbers, equal to the signature of the inner product. This is Sylvester's law of inertia. Then the fourth condition on η can be stated: 4 signature The bilinear form η has signature (−,+,+,+) or (+,−,−,−). Which signature is used is a matter of convention. Both are fairly common. See sign convention. ### Standard basis A standard basis for Minkowski space is a set of four mutually orthogonal vectors {e0,e1,e2,e3} such that −(e0)2 = (e1)2 = (e2)2 = (e3)2 = 1 These conditions can be written compactly in the following form: $\langle e_\mu, e_\nu \rangle = \eta_{\mu \nu}$ where μ and ν run over the values (0, 1, 2, 3) and the matrix η is given by $\eta = \begin{pmatrix}-1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}$ This tensor is frequently called the "Minkowski tensor". Relative to a standard basis, the components of a vector v are written (v0,v1,v2,v3) and we use the Einstein notation to write v = vμeμ. The component v0 is called the timelike component of v while the other three components are called the spatial components. In terms of components, the inner product between two vectors v and w is given by $\langle v, w \rangle = \eta_{\mu \nu} v^\mu w^\nu = - v^0 w^0 + v^1 w^1 + v^2 w^2 + v^3 w^3$ and the norm-squared of a vector v is $v^2 = \eta_{\mu \nu} v^\mu v^\nu = - (v^0)^2 + (v^1)^2 + (v^2)^2 + (v^3)^2$ ## Alternative definition The section above defines Minkowski space as a vector space. There is an alternative definition of Minkowski space as an affine space which views Minkowski space as a homogeneous space of the Poincaré group with the Lorentz group as the stabilizer. See Erlangen program. Note also that the term "Minkowski space" is also used for analogues in any dimension: if n≥2, n-dimensional Minkowski space is a vector space or affine space of real dimension n on which there is an inner product or pseudo-Riemannian metric of signature (n−1,1), i.e., in the above terminology, n−1 "pluses" and one "minus". ## Lorentz transformations and symmetry Standard configuration of coordinate systems for Lorentz transformations. The Poincaré group is the group of all isometries of Minkowski spacetime including boosts, rotations, and translations. The Lorentz group is the subgroup of isometries which leave the origin fixed and includes the boosts and rotations; members of this subgroup are called Lorentz transformations. Among the simplest Lorentz transformations is a Lorentz boost. The archetypal Lorentz boost is $\begin{bmatrix} U'_0 \\ U'_1 \\ U'_2 \\ U'_3 \end{bmatrix} = \begin{bmatrix} \gamma&-\beta \gamma&0&0\\ -\beta \gamma&\gamma&0&0\\ 0&0&1&0\\ 0&0&0&1\\ \end{bmatrix} \begin{bmatrix} U_0 \\ U_1 \\ U_2 \\ U_3 \end{bmatrix}\$ where $\gamma = { 1 \over \sqrt{1 - {v^2 \over c^2}} }$ is the Lorentz factor, and $\beta = { v \over c} \,.$ All four-vectors in Minkowski space transform according to the same formula under Lorentz transformations. Minkowski diagrams illustrate Lorentz transformations. ## Causal structure Main article: Causal structure Vectors are classified according to the sign of η(v,v). When the standard signature (−,+,+,+) is used, a vector v is: Timelike if η(v,v) < 0 Spacelike if η(v,v) > 0 Null (or lightlike) if η(v,v) = 0 This terminology comes from the use of Minkowski space in the theory of relativity. The set of all null vectors at an event of Minkowski space constitutes the light cone of that event. Note that all these notions are independent of the frame of reference. Given a timelike vector v, there is a worldline of constant velocity associated with it. The set {w : η(w,v) = 0 } corresponds to the simultaneous hyperplane at the origin of this worldline. Minkowski space exhibits relativity of simultaneity since this hyperplane depends on v. In the plane spanned by v and such a w in the hyperplane, the relation of w to v is hyperbolic-orthogonal. Once a direction of time is chosen, timelike and null vectors can be further decomposed into various classes. For timelike vectors we have 1. future directed timelike vectors whose first component is positive, and 2. past directed timelike vectors whose first component is negative. Null vectors fall into three classes: 1. the zero vector, whose components in any basis are (0,0,0,0), 2. future directed null vectors whose first component is positive, and 3. past directed null vectors whose first component is negative. Together with spacelike vectors there are 6 classes in all. An orthonormal basis for Minkowski space necessarily consists of one timelike and three spacelike unit vectors. If one wishes to work with non-orthonormal bases it is possible to have other combinations of vectors. For example, one can easily construct a (non-orthonormal) basis consisting entirely of null vectors, called a null basis. Over the reals, if two null vectors are orthogonal (zero inner product), then they must be proportional. However, allowing complex numbers, one can obtain a null tetrad which is a basis consisting of null vectors, some of which are orthogonal to each other. Vector fields are called timelike, spacelike or null if the associated vectors are timelike, spacelike or null at each point where the field is defined. ### Causality relations Let x, y ∈ M. We say that 1. x chronologically precedes y if y − x is future directed timelike. 2. x causally precedes y if y − x is future directed null or future directed timelike ## Reversed triangle inequality If v and w are two equally directed timelike four-vectors, then $|v+w| \ge |v|+|w|,$ where $|v|:=\sqrt{-\eta_{\mu \nu}v^\mu v^\nu}.$ ## Locally flat spacetime Strictly speaking, the use of the Minkowski space to describe physical systems over finite distances applies only in the Newtonian limit of systems without significant gravitation. In the case of significant gravitation, spacetime becomes curved and one must abandon special relativity in favor of the full theory of general relativity. Nevertheless, even in such cases, Minkowski space is still a good description in an infinitesimal region surrounding any point (barring gravitational singularities). More abstractly, we say that in the presence of gravity spacetime is described by a curved 4-dimensional manifold for which the tangent space to any point is a 4-dimensional Minkowski space. Thus, the structure of Minkowski space is still essential in the description of general relativity. In the realm of weak gravity, spacetime becomes flat and looks globally, not just locally, like Minkowski space. For this reason Minkowski space is often referred to as flat spacetime. ## References 1. *Poincaré, Henri (1905/6), "Sur la dynamique de l’électron", Rendiconti del Circolo matematico di Palermo 21: 129–176 • Wikisource translation: On the Dynamics of the Electron 2. Minkowski, Hermann (1907/8), "Die Grundgleichungen für die elektromagnetischen Vorgänge in bewegten Körpern", Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen, Mathematisch-Physikalische Klasse: 53–111 • Wikisource translation: The Fundamental Equations for Electromagnetic Processes in Moving Bodies. 3. Minkowski, Hermann (1908/9), "Raum und Zeit", Physikalische Zeitschrift 10: 75–88 • Various English translations on Wikisource: Space and Time • Galison P L: Minkowski's Space-Time: from visual thinking to the absolute world, Historical Studies in the Physical Sciences (R McCormach et al. eds) Johns Hopkins Univ.Press, vol.10 1979 85-121 • Corry L: Hermann Minkowski and the postulate of relativity, Arch. Hist. Exact Sci. 51 1997 273-314 • Francesco Catoni, Dino Boccaletti, & Roberto Cannata (2008) Mathematics of Minkowski Space, Birkhäuser Verlag, Basel. • Naber, Gregory L. (1992). The Geometry of Minkowski Spacetime. New York: Springer-Verlag. ISBN 0-387-97848-8. • Roger Penrose (2005) Road to Reality : A Complete Guide to the Laws of the Universe, chapter 18 "Minkowskian geometry", Alfred A. Knopf ISBN 9780679454434 . • Shaw, Ronald (1982) Linear Algebra and Group Representations, § 6.6 "Minkowski space", § 6.7,8 "Canonical forms", pp 221–42, Academic Press ISBN 0-12-639201-3 . • Walter, Scott (1999). "Minkowski, Mathematicians, and the Mathematical Theory of Relativity". In Goenner, Hubert et al. (ed.). The Expanding Worlds of General Relativity. Boston: Birkhäuser. pp. 45–86. ISBN 0-8176-4060-6.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 10, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.863000750541687, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/276018/metric-tensor-of-complex-numbers-hamiltonian-mechanics?answertab=votes
# Metric tensor of complex numbers & Hamiltonian Mechanics The Euclidean $\mathbb{R}^2$ geometric space can be mapped onto $\mathbb{C}$. In other words I see it like this $$\vec{v} = x\vec{x}+y\vec{y} = x\vec{1}+y\vec{i}= \begin{bmatrix}x \\y\end{bmatrix}$$ Where in the complex topology (loosely used here) $\vec{1},\vec{i}$ have a curvature given by: $$\eta=\begin{bmatrix}1 & 0\\0 & -1\end{bmatrix}$$ as opposed to the normal Euclidean $\eta = \begin{bmatrix}1 & 0\\0 & +1\end{bmatrix}$. where the inner product is defined as $<v|v>=v \cdot\eta \cdot v$ This means that the gradient is $\nabla =\begin{bmatrix}+ \frac{\partial }{\partial x }\\ -\frac{\partial }{\partial y }\end{bmatrix}$ which is very useful for Hamiltonian Mechanics. (1) Normally, the gradient gives the steepest descent, but what is its interpertation with this metric? (2) I was hoping someone can explain to me what this metric tensor of complex numbers means, in terms of the curvature (or some geometric concept) and what its connection to complex algebra is. I know it somehow defines circles and makes vector algebra work well on them. (3) Since Hamilton's equations of motion can be recast as $\dot{\vec{v}} =\nabla H$, where H is the Hamiltonian, this also gives the phase space vector field. If you can provide some insight to the connection with Hamiltonian Mechanics, that would be wonderful. [references are appreciated] - ps. My background is in physics so I apologize for the surely sloppy mathematical notation and nomenclature, it demonstrates a lack of understanding on my part, apologies :) – AimForClarity Jan 11 at 20:49 The complex plane equipped with that (1,-1) bilinear form has a hyperbolic geometry (as opposed to the Euclidean geometry you get with the (1,1) metric.) That's about all I can think of saying... :/ – rschwieb Jan 11 at 21:47 @rschwieb Thank you, how do you see the hyperbolic nature mathematically is it just the $\nabla \vec{v}$, where $\vec{v}=(x,y)$? – AimForClarity Jan 11 at 22:07 @AimForClarity I'm not sure I understand why you write "curvature" for $\eta$. I do not understand that part of the question. – Jorge Campos Jan 11 at 22:28 Well I'm not sure I do wither. I wrote that because It seems to me that this is like the Riemann curvature tensor used in general relativity, although this is more of a hunch than any hard proof. I am not really familiar with differential geometry or topology. Maybe someone else can shed light on this. – AimForClarity Jan 11 at 22:32 show 1 more comment ## 1 Answer As I see it, passing to complex variables is just a simplification. Instead of having a metric tensor $\delta=$diag$(1,1)$, since $\mathrm{i}^2=-1$ you have to change it into $\eta=$diag$(1,-1)$ in order to obtain the same inner product. If you consider a Hamiltonian system with $N$-degrees of freedom you might be interested in the simplification obtained by "complexifying" the phase space. Map $(q^i,p_i)$ to $z_i=q^i+\mathrm{i} p_i\in \mathbb{C}$. Then you can write the Hamilton equations of motion for that system in the more compact form $$\mathrm{i}\frac{dz}{dt}= 2\nabla_\bar{z}H,$$ where $H=H(q^i(z),p_i(z))$ is the Hamiltonian and the components of $\nabla_\bar{z}\,f$ are $$\frac{\partial f}{\partial \bar{z}^i}:=\frac{1}{2}\left(\frac{\partial f}{\partial q^i}-\mathrm{i}\frac{\partial f}{\partial p_i}\right).$$ - Why is there the factor of 2? Also, is there someplace I can look where you saw this? – AimForClarity Jan 11 at 22:33 the $2$ factor comes from the definition of the partial with respect to $\bar{z}$. // These equations are $N$ complex equations, so $2N$ real eqs. I think I saw that once on one of Marsden's books. Let me search... – Jorge Campos Jan 11 at 22:47 Which one of Marsden's books? – AimForClarity Jan 12 at 0:36 I've found it. It's "Introduction to Mechanics and Symmetry" by Marsden and Ratiu, Exercise 2.1-1. – Jorge Campos Jan 12 at 21:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9361153841018677, "perplexity_flag": "head"}
http://mathhelpforum.com/pre-calculus/75989-theta-naught.html
# Thread: 1. ## theta naught I'm taking a precalculus class in college and our text is seriously lacking in explanation. There is this one topic which I don't understand and every time someone tries to explain it, I get don't fully grasp the logic. The concept of theta naught. Is this everything that isn't theta? Here is a word problem from the text: Here is an example of one of the problems: Sven is running clockwise around a circular track. Sven runs at 3.5 meters per second, and it takes him 82 seconds to complete a lap of the track. From his starting point, it takes him 14 seconds to reach the southernmost point of the track. After running for 20 minutes, how far (in a straight line) is Sven from the northmost point of the track? So, my problem again is theta naught. I understand how to find angular speed and how to find the radius. I know the formula for finding their coordinates but within that formula we are to take X=rCos(wt+(theta naught)). I've been told that its always calculated from the positive x axis but what I don't get is literally what exactly I'm trying to get. I think if I can understand what it literally is, this will make sense to me. Is it everything but the theta? 2. Usually "naught" means $x_0$, or this case $\theta_0$ meaning the initial value of this term. It doesn't mean everything but theta.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9675130248069763, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/32396?sort=votes
Good reduction and blow-ups Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Let $X$ be a projective variety over $\mathbb{Z}$, and suppose that $X$ has everywhere good reduction. Let $Y$ be the blow-up of $X$ at an integral point. Then is it the case that $Y$ also has everywhere good reduction? The example situation that I have in mind is the following (my main motivation is del Pezzo surfaces). Take $X= \mathbb{P}^2$ over $\mathbb{Z}$. This clearly has good reduction everywhere. Next let $Y$ be the blow-up of $\mathbb{P}^2$ at the integral point $(0:0:1)$. This can be realised as the subvariety of $\mathbb{P}^2 \times \mathbb{P}^1$ (with variables $x_0,x_1,x_2$ and $y_1,y_2$) given by the equation $x_1 y_2 = x_2 y_1$. Then $Y$ has everywhere good reduction (at least if my caluclations are correct). I am curious to know if this happens after successively blowing up more integral points to obtain other del Pezzo surfaces. Note however that I am not claiming that all del Pezzo surfaces have everywhere good reduction! Thanks in advance! - 2 Answers It is true if the integral point $T$ is actually a section (as in your example), because you then blow-up a smooth scheme $X\to {\rm Spec}(\mathbb Z)$ along a smooth center $T\simeq {\rm Spec}(\mathbb Z)$. In general, as $T$ is flat over $\mathbb Z$, the fiber $Y_p$ of $Y$ at a prime $p$ is the blow-up of $X_p$ along $T_p$. At a $p$ ramified for $T\to {\rm Spec}(\mathbb Z)$, $T_p$ is non-reduced and in general $Y_p$ is not smooth. As an example, take $X=\mathbb P^2={\rm Proj}\ \mathbb Z[x,y,z]$ and $T=V_+(x, y^2-2z^2)$. Then $Y$ has singular fiber at $2$. Sorry, I was a little to optimistic on the compatibility of the blowing-up of $X$ with the base change $X_p\to X$. However the conclusion is the same. Suppose for simplicity that the generic fiber of $X$ is geometrically connected. Let $p$ be any prime number and let $(X_p)'$ be the blow-up of $X_p$ along $T_p$. Then we have a canonical closed immersion $(X_p)'\to Y_p$ which commutes with $(X_p)'\to X_p$ and $Y_p\to X_p$. Suppose now that $Y$ is smooth, then as $X_p$ and $Y_p$ are connected and smooth of the same dimension and $(X_p)'\to X_p$ is birational, $(X_p)'\to Y_p$ is an isomorphism. Hence $(X_p)'$ must be smooth too. But in general this is not the case as $T_p$ is not necessarily reduced (in the above example $(X_2)'$ is a normal singular surface). - Thanks for your answer but I just wanted to clarify: In general if $X \to S$ is a smooth morphism of schemes, then the blow-up $Y$ of $X$ at an $S-$valued point is also smooth over $S$? – Daniel Loughran Jul 19 2010 at 8:42 You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. More generally, if $X\to S$ is flat and finitely presented, and if $T$ is a closed subscheme of $X$ which is a relative local complete intersection over $S$, then the blow-up $Y$ of $T$ in $X$ is flat over $S$ and commutes with every base change $S'\to S$. This is just because the powers of the defining ideal $I$ of $T$ are all flat and commute with base change, and $Y$ is by definition Proj($\bigoplus_{n\geq0}I^n$). This applies in particular if $X$ and $T$ are both smooth over $S$. In this case, compatibility with base change implies that the geometric fibers of $Y\to S$ are blow-ups of smooth subvarieties in smooth varieties, hence smooth. Since $Y$ is flat over $S$, it is smooth. - Thanks for your help. I think that I have what I need now to prove my lemma.. – Daniel Loughran Jul 19 2010 at 16:32 @Laurent : Salut, et bienvenu ! – Chandan Singh Dalawat Jul 20 2010 at 9:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 76, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.948207676410675, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?p=4020478
Physics Forums Conjecture Regarding rotation of a set by a sequence of rational angles. Consider the following sequence, where the elements are rational numbers mulriplied by $\pi$: $(\alpha_{i}) = \hspace{2 mm}\pi/4,\hspace{2 mm} 3\pi/8,\hspace{2 mm} \pi/4,\hspace{2 mm} 3\pi/16,\hspace{2 mm} \pi/4,\hspace{2 mm} 3\pi/8,\hspace{2 mm} \pi/4,\hspace{2 mm} 3\pi/32,\hspace{2 mm} \pi/4,\hspace{2 mm} 3\pi/8,\hspace{2 mm} \pi/4,\hspace{2 mm} 3\pi/16,\hspace{2 mm} \pi/4,\hspace{2 mm} 3\pi/8,\hspace{2 mm} \pi/4,\hspace{2 mm} 3\pi/64,\hspace{2 mm} \pi/4,\hspace{2 mm} \cdots$ Let $K \subset ℝ^{2}$ be a compact set. Also let $R_{\alpha_{i}}$ denote the rotation by $\alpha_{i}$. Suppose $R_{\alpha_{i}}K = \hspace{2 mm} K$ for each $\alpha_{i} \in (\alpha_{i})$. Question: Is it true that for all $\theta \in [0, 2\pi)$ $R_{\theta}K = \hspace{2 mm} K$. Note: If instead we had the sequence $(n\alpha)$ where $\alpha$ is an irrational number, it is trivial that the conjecture holds. This is trivial due to the following fact from the study of continued fractions: Given any real number on a circle, it can be approximated arbitrarily close by multiples of an irrational number. But if $\alpha$ is a rational number this doesn't hold since after a finite number of rotations you will get back to where you started from. However in the question above we don't have rotations by a fixed rational number and the answer is not immediate! PhysOrg.com science news on PhysOrg.com >> Front-row seats to climate change>> Attacking MRSA with metals from antibacterial clays>> New formula invented for microscope viewing, substitutes for federally controlled drug Recognitions: Homework Help Science Advisor There are some things not making sense to me in this question. Most obviously, the sequence clearly consists of irrationals, so I guess you mean rational multiples of pi. Secondly, I don't see any significance in showing it as a sequence, including repeats. It seems to be used only as a set - so why the repeats? That said, if I've understood the question... K is closed under rotations by 3nπ.2-m, for all positive integers m, n. Given θ, let xi be the bits of the binary fraction expressing θ/3π. From this you can construct a sequence of rotations converging on θ. Quote by haruspex There are some things not making sense to me in this question. Most obviously, the sequence clearly consists of irrationals, so I guess you mean rational multiples of pi. That's exactly right. Thank you for pointing that out. I corrected the question. Quote by haruspex Secondly, I don't see any significance in showing it as a sequence, including repeats. It seems to be used only as a set - so why the repeats? The reason for the repeats is the following: I'm preforming a symmetrization on the set K and the algorithm is such that it produces the above sequence. I got confused myself because once I got the sequence, I thought the rule is that I must follow the sequence to get arbitrary close to a θ. But you're absolutely right. Once I show $R_{\alpha_{i}}K = K$ for each $\alpha_{i}$, I'm done. This is because of the role that "n" is playing in your solution. When I get a little too excited I need someone to check I'm not doing something stupid. Thanks for the comment! Thread Tools | | | | |-----------------------------------------------------------------------------------------------|----------------------------|---------| | Similar Threads for: Conjecture Regarding rotation of a set by a sequence of rational angles. | | | | Thread | Forum | Replies | | | Calculus & Beyond Homework | 5 | | | Calculus & Beyond Homework | 5 | | | Calculus & Beyond Homework | 5 | | | General Math | 42 | | | General Math | 1 |
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9505913257598877, "perplexity_flag": "middle"}
http://denishaine.wordpress.com/2013/02/14/veterinary-epidemiologic-research-linear-regression/
denis haine Veterinary Epidemiologic Research: Linear Regression 14 02 2013 This post will describe linear regression as from the book Veterinary Epidemiologic Research, describing the examples provided with R. Regression analysis is used for modeling the relationship between a single variable Y (the outcome, or dependent variable) measured on a continuous or near-continuous scale and one or more predictor (independent or explanatory variable), X. If only one predictor is used, we have a simple regression model, and if we have more than one predictor, we have a multiple regression model. Let’s download the data coming together with the book. There’s a choice of various format, including R: ```temp <- tempfile() download.file("http://ic.upei.ca/ver/sites/ic.upei.ca.ver/files/ver2_data_R.zip", tmp) # fetch the file into the temporary file load(unz(tmp, "ver2_data_R/daisy2.rdata")) # extract the target file from the temporary file unlink(tmp) # remove the temporary file ### some data management daisy2 <- daisy2[daisy2$h7 == 1, ] # we only use a subset of the data daisy2[, c(4, 7, 11, 15:18)] <- lapply(daisy2[, c(4, 7, 11, 15:18)], factor) library(Hmisc) daisy2 <- upData(daisy2, labels = c(region = 'Region', herd = 'Herd number', cow = 'Cow number', study_lact = 'Study lactation number', herd_size = 'Herd size', mwp = "Minimum wait period for herd", parity = 'Lactation number', milk120 = 'Milk volume in first 120 days of lactation', calv_dt = 'Calving date', cf = 'Calving to first service interval', fs = 'Conception at first service', cc = 'Calving to conception interval', wpc = 'Interval from wait period to conception', spc = 'Services to conception', twin = 'Twins born', dyst = 'Dystocia at calving', rp = 'Retained placenta at calving', vag_disch = 'Vaginal discharge observed', h7 = 'Indicator for 7 herd subset'), levels = list(fs = list('No' = 0, 'Yes' = 1), twin = list('No' = 0, 'Yes' = 1), dyst = list('No' = 0, 'Yes' = 1), rp = list('No' = 0, 'Yes' = 1), vag_disch = list('No' = 0, 'Yes' = 1)), units = c(milk120 = "litres")) summary(daisy2$milk120, na.rm = TRUE) Min. 1st Qu. Median Mean 3rd Qu. Max. 1110 2742 3226 3225 3693 5630 sd(daisy2$milk120, na.rm = TRUE) [1] 703.364 daisy2 <- daisy2[!is.na(daisy2$milk120), ] # get rid of missing observations for milk production ``` To use R to download zipped file, see this answer on Stackoverflow by Dirk Eddelbuettel. The regression model could be written: $Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \epsilon$. A first simple model, with a categorical independent variable would be in R: ```lm.milk <- lm(milk120 ~ parity, data = daisy2) (lm.milk.sum <- summary(lm.milk)) Call: lm(formula = milk120 ~ parity, data = daisy2) Residuals: Milk volume in first 120 days of lactation [litres] Min 1Q Median 3Q Max -2234.73 -384.50 -0.63 376.52 2188.48 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 2629.63 29.31 89.712 < 2e-16 *** parity2 715.30 42.45 16.849 < 2e-16 *** parity3 794.67 44.05 18.039 < 2e-16 *** parity4 834.51 48.50 17.205 < 2e-16 *** parity5 812.19 50.77 15.998 < 2e-16 *** parity6 929.63 66.92 13.893 < 2e-16 *** parity7 795.26 218.86 3.634 0.000287 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 613.5 on 1761 degrees of freedom Multiple R-squared: 0.2419, Adjusted R-squared: 0.2393 F-statistic: 93.65 on 6 and 1761 DF, p-value: < 2.2e-16 ``` The output shows in order: the model specified, the distribution of residuals (which should be normal with mean = 0 and so median close to 0), the $\beta$ coefficients and their standard errors, the standard error of the residuals, the $R^2$ and adjusted $R^2$ and then a F test for the null hypothesis $H_0: \beta_1 = \beta_2 = ...\beta_k = 0$ (all the coefficients equal zero except the intercept, i.e. is our model better to explain variance than a model predicting the dependent variable mean for all data points). But where’s the ANOVA table? ```anova(lm.milk) Analysis of Variance Table Response: milk120 Df Sum Sq Mean Sq F value Pr(>F) parity 6 211463774 35243962 93.653 < 2.2e-16 *** Residuals 1761 662708166 376325 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 ``` How would you calculate this F-test yourself? The F value is given by $F = \frac{(SST - SSE) / (p - 1)}{SSE / (n - p)}$, with SST the total sum of square, SSE the sum of square for the residuals, n and p the degrees of freedom for total and residuals. We thus have $F_{(p - 1, n - p)}$ to get a p-value. ```(src.var.total <- sum((daisy2$milk120 - mean(daisy2$milk120))^2)) #source of variation: total [1] 874171940 (src.var.res <- sum(lm.milk$res^2)) #source of variation: error (or residual) [1] 662708166 (f.stat <- ((src.var.total - src.var.res) / (lm.milk.sum$fstatistic[2])) / + (src.var.res / lm.milk.sum$fstatistic[3])) #F-statistic 93.65301 1 - pf(f.stat, lm.milk.sum$fstatistic[2], lm.milk.sum$fstatistic[3]) # p-value 0 ``` We could also compute ourself $R^2$: ```1 - sum(lm.milk$res^2) / sum((daisy2$milk120 - mean(daisy2$milk120))^2) [1] 0.2419018 ``` We can also show some actual values, predicted ones and residuals: ```head(data.frame(daisy2[, c(7:8)], fitted.value = fitted(lm.milk), + residual = resid(lm.milk)), n = 10) parity milk120 fitted.value residual 1 5 3505.8 3441.824 63.97631 2 5 3691.3 3441.824 249.47631 3 5 4173.0 3441.824 731.17626 4 5 3727.3 3441.824 285.47631 5 5 3090.8 3441.824 -351.02369 6 4 5041.2 3464.141 1577.05893 7 5 3861.2 3441.824 419.37621 8 5 4228.4 3441.824 786.57616 9 6 3431.1 3559.258 -128.15761 10 5 4445.5 3441.824 1003.67626 ``` Next we could plot actual values and intervals for prediction both for the mean and new observations. We first create a new data frame to hold the values of parity for which we want the predictions, then compute the prediction and confidence bands and plot them with ggplo2. ```confidence <- data.frame(parity = as.factor(1:7)) # Confidence bands confidence.band <- predict(lm.milk, int = "c" , newdata = confidence) confidence.band <- as.data.frame(cbind(confidence.band, parity = c(1:7))) # Prediction bands prediction.band <- predict(lm.milk, int = "p" , newdata = confidence) prediction.band <- as.data.frame(cbind(prediction.band, parity = c(1:7))) library(ggplot2) ggplot(data = daisy2, aes(x = as.numeric(parity), y = milk120)) + geom_point() + geom_smooth(data = confidence.band, aes(x = parity, y = lwr), method = lm, se = FALSE, colour = "blue") + geom_smooth(data = confidence.band, aes(x = parity, y = upr), method = lm, se = FALSE, colour = "blue") + geom_smooth(data = prediction.band, aes(x = parity, y = lwr), method = lm, se = FALSE, colour = "green") + geom_smooth(data = prediction.band, aes(x = parity, y = upr), method = lm, se = FALSE, colour = "green") + xlab("Parity") + ylab("Milk volume in first 120 days of lactation") ``` Finally, let’s have a look at a multiple regression and see how we can assess the significance of predictor variables. We will look at the effect of milk production, parity, retained placenta and vaginal discharge on the waiting period of the cows. ```daisy2$milk120.sq <- daisy2$milk120^2 daisy2$parity.cat[as.numeric(daisy2$parity) >= 3] <- "3+" daisy2$parity.cat[daisy2$parity == 2] <- "2" daisy2$parity.cat[as.numeric(daisy2$parity) < 2] <- "1" lm.wpc <- lm(wpc ~ milk120 + milk120.sq + as.factor(parity.cat) + rp + vag_disch, data = daisy2) summary(lm.wpc) Call: lm(formula = wpc ~ milk120 + milk120.sq + as.factor(parity.cat) + rp + vag_disch, data = daisy2) Residuals: Interval from wait period to conception Min 1Q Median 3Q Max -69.21 -37.98 -14.66 24.46 219.46 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 1.232e+02 2.134e+01 5.775 9.29e-09 *** milk120 -3.426e-02 1.339e-02 -2.558 0.0106 * milk120.sq 4.507e-06 2.011e-06 2.241 0.0251 * as.factor(parity.cat)2 5.083e+00 4.056e+00 1.253 0.2103 as.factor(parity.cat)3+ 8.887e+00 3.665e+00 2.425 0.0154 * rpYes 9.771e+00 4.572e+00 2.137 0.0327 * vag_dischYes 9.838e+00 6.086e+00 1.616 0.1062 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 51.47 on 1529 degrees of freedom (232 observations deleted due to missingness) Multiple R-squared: 0.01306, Adjusted R-squared: 0.009191 F-statistic: 3.373 on 6 and 1529 DF, p-value: 0.002622 ``` The F-test tells us if any of the predictors is useful in predicting the length of the waiting period. The F-statistic is significant and we can reject the null hypothesis that no predictors are useful. If we had failed to reject the null hypothesis, we would still be interested in the possibility of non-linear transformations of variables (already done here) or the presence of outliers, masking the true effect. Also we could have not enough data to demonstrate a real effect. Now that we rejected this null hypothesis, maybe not all predictors are necessary to predict the response. We can see from the summary table which predictors are significant. However we can test a single predictor, with an F-test: ```lm.wpc2 <- lm(wpc ~ milk120 + milk120.sq + as.factor(parity.cat) + rp, data = daisy2) anova(lm.wpc, lm.wpc2) Analysis of Variance Table Model 1: wpc ~ milk120 + milk120.sq + as.factor(parity.cat) + rp + vag_disch Model 2: wpc ~ milk120 + milk120.sq + as.factor(parity.cat) + rp Res.Df RSS Df Sum of Sq F Pr(>F) 1 1529 4050745 2 1530 4057666 -1 -6921.1 2.6125 0.1062 ``` or with a t-test, as $t_i = \hat\beta_i/SE(\hat\beta_i)$ with $n - p$ degrees of freedom. Note that $t^2$ equals the F-statistic. The same approach can be used to test a group of predictors. For example, if we want to see if we have to keep both milk production and its squared value in the model: ```lm.wpc3 <- lm(wpc ~ as.factor(parity.cat) + rp + vag_disch, data = daisy2) anova(lm.wpc, lm.wpc3) Analysis of Variance Table Model 1: wpc ~ milk120 + milk120.sq + as.factor(parity.cat) + rp + vag_disch Model 2: wpc ~ as.factor(parity.cat) + rp + vag_disch Res.Df RSS Df Sum of Sq F Pr(>F) 1 1529 4050745 2 1531 4076138 -2 -25394 4.7925 0.008416 ** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 ``` Information • Date : February 14, 2013 • Tags: epidemiology, R, statistics, veterinary • Categories : R, statistics Cancel
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 11, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8823592066764832, "perplexity_flag": "middle"}
http://www.durofy.com/mathematics/the-derivative-the-binomial-theorem/
## The Derivative & The Binomial Theorem Posted Jun 29th, 2010 Author Discuss Category If we observe closely, we find that the various branches of mathematics are all linked together in some way or the other. I show here one such observation. The binomial theorem can actually be expressed in terms of the derivatives of xn instead of the use of combinations. Lets start with the standard representation of the binomial theorm, $large%20(x+a)^{n}=x^{n}+%20^{n}C_{1}ax^{n-1}+%20^{n}C_{2}a^{2}x^{n-2}+^{n}C_{3}a^{3}x^{n-3}+...^{n}C_{n}a^{n}$ We could then rewrite this as a sum, $huge%20(x+a)^{n}=sum_{0}^{n}^{n}C_{r}a^{r}x^{n-r}$ Another way of writing the same thing would be, $large%20(x+a)^{n}=x^{n}+nax^{n-1}+frac{n(n-1)}{1.2}a^{2}x^{n-2}+frac{n(n-1)(n-2)}{1.2.3}a^{3}x^{n-3}+...a^{n}$ We observe here that the equation can be rewritten in terms of the derivatives of xn. The coefficient of a in the second terms is the first derivative of xn, similarily the coefficient of a2/2! in the second term is the second derivative of x… Lets look at the last term of the expansion, The coefficient of xn/n! should now be the nth derivative of xn. Which is very true… The following simple relation holds for all n… $large%20frac{mathrm{d}%20}{mathrm{d}%20x}^{n}(x^{n})=n!$ Hence, the binomial expansion can now be written in terms of derivatives! We have, $large%20(x+a)^{n}=x^{n}+aD_{1}+frac{D_{2}}{2!}a^{2}+frac{D_{3}}{3!}a^{3}...frac{D_{n}}{n!}a^{n}$ where Dr represents the rth derivate of xn. Hence, we can now write this as a sum, http://latex.codecogs.com/gif.latex?huge%20(x+a)^{n}=sum_{0}^{n}frac{D_{r}a^{r}}{r!} So, we now have the expansion in terms of combinations as well as in terms of derivatives! $huge%20(x+a)^{n}=sum_{0}^{n}^{n}C_{r}a^{r}x^{n-r}=sum_{0}^{n}frac{D_{r}a^{r}}{r!}$n instead of the use of combinations. Lets start with the standard representation of the binomial theorm, $(x+a)^{n}=x^{n}+%20^{n}C_{1}ax^{n-1}+%20^{n}C_{2}a^{2}x^{n-2}+^{n}C_{3}a^{3}x^{n-3}+...^{n}C_{n}a^{n}$ We could then rewrite this as a sum, $\huge%20(x+a)^{n}=\sum_{0}^{n}^{n}C_{r}a^{r}x^{n-r}$ Another way of writing the same thing would be, $(x+a)^{n}=x^{n}+nax^{n-1}+\frac{n(n-1)}{1.2}a^{2}x^{n-2}+\frac{n(n-1)(n-2)}{1.2.3}a^{3}x^{n-3}+...a^{n}$ We observe here that the equation can be rewritten in terms of the derivatives of xn. The coefficient of a in the second terms is the first derivative of xn, similarily the coefficient of a2/2! in the second term is the second derivative of x… Lets look at the last term of the expansion, The coefficient of xn/n! should now be the nth derivative of xn. Which is very true… The following simple relation holds for all n… $\huge%20\frac{\mathrm{d}%20}{\mathrm{d}%20x}^{n}(x^{n})=n!$ Hence, the binomial expansion can now be written in terms of derivatives! We have, $\large%20(x+a)^{n}=x^{n}+aD_{1}+\frac{D_{2}}{2!}a^{2}+\frac{D_{3}}{3!}a^{3}...\frac{D_{n}}{n!}a^{n}$ where Dr represents the rth derivate of xn.Hence, we can now write this as a sum, $\huge%20(x+a)^{n}=\sum_{0}^{n}\frac{D_{r}a^{r}}{r!}$ Or as the sum, $\huge%20(x+a)^{n}=\sum_{0}^{n}\frac{D_{r}x^{r}}{r!}$ So, we now have the expansion in terms of combinations as well as in terms of derivatives! $\huge%20(x+a)^{n}=\sum_{0}^{n}^{n}C_{r}a^{r}x^{n-r}=\sum_{0}^{n}\frac{D_{r}a^{r}}{r!}$ ## Related Posts via Taxonomies #### Discuss - 2 Comments 1. rick woods says: Great. now, can we generate the nth. der, without generating all the derivatives in betweeen 2. maria andros says: Great work keep it coming ## About Durofy Durofy - An Engineering & Technology Blog - Provides free Tutorials & Resources on Engineering, Technology, Internet, Computers, Social Media, Web Design, Programming, Business, & Mathematics. ## Social ### Recent Posts #### How to Detect Spam in Shortened URLs May 18th, 2013 URL Shorteners like goo.gl provide an easy way to send long URLs that look short and neat. However, [...] #### Microsoft's Creative Parody of Google Chrome's "Now Everywhere" Advertisement May 18th, 2013 Sometimes a parody is much more creative than the original itself. Such is the case with Google Chro[...] #### Top 16 Free Online Advertising Platforms May 18th, 2013 There are a number of online platforms available on the web. These online platforms provide wonderfu[...] #### Gmail - Create Calender Events Directly from Time References in Emails May 16th, 2013 Gmail now let's you create calender events directly from your emails. If your email has time / date [...] #### The Easiest Way to Find / Share Your Blackberry BBM Pin May 15th, 2013 Ever wondered what's the easiest and the fastest way to find out / share your BBM pin? Blackberry[...]
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.885680079460144, "perplexity_flag": "middle"}
http://mathhelpforum.com/math-topics/206774-mechanics-forces-acting-object-slope.html
3Thanks • 2 Post By ebaines • 1 Post By skeeter # Thread: 1. ## Mechanics forces acting on a object in a slope answer to q15 is 2.42ms can anyone solve these questions please 2. ## Re: Mechanics forces acting on a object in a slope Originally Posted by abdulrehmanshah answer to q15 is 2.42ms can anyone solve these questions please First, one question per thread. Second, these are all Newton's 2nd Law problems. Can you show us how you set this up? -Dan 3. ## Re: Mechanics forces acting on a object in a slope well can u please solve question 15 only once u solve it i will make change the topic to solved ok i copied it from a book Originally Posted by topsquark First, one question per thread. Second, these are all Newton's 2nd Law problems. Can you show us how you set this up? -Dan 4. ## Re: Mechanics forces acting on a object in a slope forces acting on mass A ... $T - Mg\sin{\theta} = Ma$ forces acting on mass B ... $mg - T = ma$ combining equations ... $mg - Mg\sin{\theta} = Ma + ma$ $\frac{m - M\sin{\theta}}{M + m} \cdot g = a$ this acceleration will be constant as mass A slides up 2.8 m ... use an appropriate kinematics equation to find the velocity of mass A when mass B hits the ground. for the last 0.2 m, the only force acting on mass A will be the component of weight acting parallel to the incline, so mass A's acceleration will change ... $g\sin{\theta} = a_2$ use the velocity found when mass B hits the ground, the acceleration $a_2$, and the remaining displacement to find the final velocity of mass A when it hits the pulley. 5. ## Re: Mechanics forces acting on a object in a slope this is where i got to but i cant get to the final velocity as 2.42ms^-1 can u please help me here i write equation 2as=v^2-u^2. 2*4.9*0.2=v^2-u^2 what should be u(i.e initial velocity) Originally Posted by skeeter forces acting on mass A ... $T - Mg\sin{\theta} = Ma$ forces acting on mass B ... $mg - T = ma$ combining equations ... $mg - Mg\sin{\theta} = Ma + ma$ $\frac{m - M\sin{\theta}}{M + m} \cdot g = a$ this acceleration will be constant as mass A slides up 2.8 m ... use an appropriate kinematics equation to find the velocity of mass A when mass B hits the ground. for the last 0.2 m, the only force acting on mass A will be the component of weight acting parallel to the incline, so mass A's acceleration will change ... $g\sin{\theta} = a_2$ use the velocity found when mass B hits the ground, the acceleration $a_2$, and the remaining displacement to find the final velocity of mass A when it hits the pulley. 6. ## Re: Mechanics forces acting on a object in a slope I suggest that you consider conservation of energy principles. There are two phases to this problem - the first phases is the 4 Kg mass being pulled up the ramp by the force of the 3Kg mass, and then the second phase is after the 3Kg mass hits the ground the tension in the rope becomes zero and the 4Kg mass continues to coast up uphill but slows as it rises. For phase 1 you have $\small\Delta KE + \Delta PE = 0$, where $\small \Delta KE = \frac 1 2 (m_A + M_B) v_1^2$ and $\small \Delta PE = m_A g (2.8m) \sin 30 = m_B g 2.8m$, so you can solve for $v_1$ . For phase 2: $\small \Delta KE = \frac 1 2 m_A (v_2^2-v_1^2) = m_A g (0.2m \sin 30)$ 7. ## Re: Mechanics forces acting on a object in a slope what is the value of m Originally Posted by ebaines I suggest that you consider conservation of energy principles. There are two phases to this problem - the first phases is the 4 Kg mass being pulled up the ramp by the force of the 3Kg mass, and then the second phase is after the 3Kg mass hits the ground the tension in the rope becomes zero and the 4Kg mass continues to coast up uphill but slows as it rises. For phase 1 you have $\small\Delta KE + \Delta PE = 0$, where $\small \Delta KE = \frac 1 2 (m_A + M_B) v_1^2$ and $\small \Delta PE = m_A g (2.8m) \sin 30 = m_B g 2.8m$, so you can solve for $v_1$ . For phase 2: $\small \Delta KE = \frac 1 2 m_A (v_2^2-v_1^2) = m_A g (0.2m \sin 30)$ 8. ## Re: Mechanics forces acting on a object in a slope K.E=0.5*m*V where V is final velocity what have u done with phase 2 Originally Posted by ebaines I suggest that you consider conservation of energy principles. There are two phases to this problem - the first phases is the 4 Kg mass being pulled up the ramp by the force of the 3Kg mass, and then the second phase is after the 3Kg mass hits the ground the tension in the rope becomes zero and the 4Kg mass continues to coast up uphill but slows as it rises. For phase 1 you have $\small\Delta KE + \Delta PE = 0$, where $\small \Delta KE = \frac 1 2 (m_A + M_B) v_1^2$ and $\small \Delta PE = m_A g (2.8m) \sin 30 = m_B g 2.8m$, so you can solve for $v_1$ . For phase 2: $\small \Delta KE = \frac 1 2 m_A (v_2^2-v_1^2) = m_A g (0.2m \sin 30)$ 9. ## Re: Mechanics forces acting on a object in a slope "m" stands for "mass." You are told that the mass of block A is 4 Kg and the mass of block B is 3 Kg. The change in kinetic energy of an object is: $\frac 1 2 mv_2^2 - \frac 1 2 mv_1^2$. 10. ## Re: Mechanics forces acting on a object in a slope for the initial motion 2.8 m up the incline ... $v_f^2 = v_0^2 + 2a(\Delta x)$ $v_f = \sqrt{2a(\Delta x)} = \sqrt{\frac{2g(2.8)}{7}} = \, 2.8 m/s$ for the final 0.2 m up the incline ... $v_f^2 = 2.8^2 - g(0.2)$ $v_f = 2.42 \, m/s$ 11. ## Re: Mechanics forces acting on a object in a slope how to change the topic to solved
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 32, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9067740440368652, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/257556/why-lim-r-to-infty-int-c-r-fraceizzdz-0
# Why $\lim_{R\to\infty} \int_{C_R}\frac{e^{iz}}{z}dz=0$? Why does $$\lim_{R\to\infty} \int_{C_R}\frac{e^{iz}}{z}dz=0$$ where $C_R = \{Re^{it}: 0\le t\le \pi\}$? - ## 3 Answers Put $z:=x+iy\,\,,\,\,x,y\in\Bbb R\,$ , so if $\,Re^{it}=z=x+iy\,\,,\,R\to\infty\Longrightarrow x^2+y^2\to\infty$ and $\,0\leq t\leq \pi\Longrightarrow y\geq 0\,$: $$\left|\oint_{C_R}\frac{e^{iz}}{z}dz\right|\xrightarrow [R\to\infty\Longrightarrow y\to\infty]{}0$$ Added: The above follows from Jordan's Lemma since $$\left|\frac{1}{Re^{it}}\right|=\frac{1}{R}\xrightarrow [R\to\infty]{}0$$ - $y$ does go to $\infty$ on some of the circle, but not on all of the circle. It is not really valid to pull $y$ out of the integral like that. – robjohn♦ Dec 13 '12 at 5:22 Of course it is: By Cauchy's Estimate Formula, $$\left|\int_{C_R}\frac{e^{iz}}{z}dz\right|\leq\max_{z\in C_R}\left|\frac{e^{i(x+iy)}}{z}\right|\cdot l(C_R)\leq \frac{e^{-y}}{R}\cdot R\pi$$ Perhaps there's a mistake somewhere else, but the fact of "pulling out y", if this is what you meant, is correct. – DonAntonio Dec 13 '12 at 11:55 $y=\mathrm{Im}(z)$ and $z$ is the variable of integration, so the scope of $y$ is inside the integral. If $z=Re^{it}$, then $y=R\sin(t)$ inside the integral, but outside the integral only $R$ is known. – robjohn♦ Dec 13 '12 at 12:21 It's definitely not correct: $$\max_{z\in C_R} \left| \frac{e^{i(x+iy)}}{z} \right| = \frac1R.$$ – mrf Dec 13 '12 at 13:09 Will you please read Theorem 2 in tinyurl.com/9wtzao3 ? I'm sure you know this but for some reason you seem to be forgetting it. With this, and since $\,|e^{i(x+iy)}|=|e^{-y}e^{ix}|=e^{-y}\,$, we get the formula above. It is not that I'm taking the integration variable (or part of it) "out" of the integral, of course. I'm just estimating, or bounding above, the module of the integral using that theorem of Cauchy, since this is all is needed to show the integral's limit is zero when $\,R\to\infty\,$ . I hope this clears up things unless something else is bothering you. – DonAntonio Dec 13 '12 at 13:15 show 6 more comments $$\begin{eqnarray} \lim_{R\to\infty} \int_{C_R}\frac{e^{iz}}{z}dz &=& \lim_{R\to\infty} \int_0^\pi d\left(R e^{i t}\right) \frac{\exp\left(iR e^{i t}\right)}{R e^{i t}} \\ &=& i \lim_{R\to\infty} \int_0^\pi dt \ R e^{i t} \frac{\exp\left(iR e^{i t}\right)}{R e^{i t}} \\ &=& i \lim_{R\to\infty} \int_0^\pi dt \ \exp\left[iR \left(\cos t + i \sin t\right)\right] \\ &=& i \lim_{R\to\infty} \int_0^\pi dt \ e^{i R \cos t} e^{- R \sin t} \\ &=& 0 \end{eqnarray}$$ since $\sin t \ge 0$ for $0 \le t \le \pi$ and $\left|e^{i R \cos t}\right| = 1$. - Since $\left|e^{iz}\right|=e^{-y}$ and $\left|\frac{\mathrm{d}z}{z}\right|=\mathrm{d}t$, we have $$\begin{align} \left|\lim_{R\to\infty}\int_{C_R}\frac{e^{iz}}{z}\,\mathrm{d}z\,\right| &\le\lim_{R\to\infty}\int_0^\pi e^{-R\sin(t)}\,\mathrm{d}t\\[6pt] &=0 \end{align}$$ by dominated convergence. In fact, $$\begin{align} \int_0^\pi e^{-R\sin(t)}\,\mathrm{d}t &=2\int_0^{\pi/2} e^{-R\sin(t)}\,\mathrm{d}t\\ &\le2\int_0^{\pi/2} e^{-2Rt/\pi}\,\mathrm{d}t\\ &=\frac\pi R\int_0^R e^{-u}\,\mathrm{d}u\\ &\le\frac\pi R \end{align}$$ - I think you meant dominated convergence theorem. – TCL Dec 13 '12 at 3:19 @TCL: Indeed. Thanks. – robjohn♦ Dec 13 '12 at 3:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9309054613113403, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/2552/how-to-calculate-cord-tension-in-a-vertical-circle
# How to calculate cord tension in a vertical circle? Mass m is connected to the end of a cord at length R above its rotational axis (the axis is parallel to the horizon, the position of the mass is perpendicular to the horizon). It is given an initial velocity, V0, at a direction parallel to the horizon. The initial state is depicted at position A in the image. The forces working on the mass are MG from the earth and T the tension of the cord. How can I calculate the tension of the cord when the mass is at some angle $\theta$ from its initial position (position B in the image)? . Here's what I thought: Since the mass is moving in a circle then the total force in the radial direction is T - MG*$\cos\theta$ = M*(V^2)/R and so T = MG*$\cos\theta$+M*(V^2)/R but since MG applies acceleration in the tangential direction then V should also be a function of $\theta$ and that is where I kind of got lost. I tried to express V as the integration of MG*$\sin\theta$, but I wasn't sure if that's the right approach. - ## 2 Answers Some hints to get you started. The mass is always on the circle. The radius is constant. Therefore the tension supplied must always be equal to the centrifugal forces acting on the mass. This is the composition of the rotational centrifugal force and gravity. $$F_{tension} = F_{rot}+F_{grav}$$ From the point of view of the equations this is a pendulum - you will obtain a set of equations of the form $$\ddot\theta(t) = k \sin \theta(t)$$ - Imagine the initial velocity being $v_0 = 0\,{\rm m/s}$ (i.e. bungee jumping). How does that fall into your answer? – Marek Jan 6 '11 at 13:13 The assumption is that the cord is always stretched to R, so v>v_min(theta). But you are right, this is also a case that should be considered (I doubt that this is what the OP is looking for though) – Sklivvz♦ Jan 6 '11 at 13:48 If you know the motion path, then you know what the acceleration of the mass should be at any angle $\theta$ Position $$\vec{r}=[\mbox{-}R\,\sin\theta,R\,\cos\theta]$$ Velocity $$\vec{v}=[\mbox{-}R\dot{\theta}\,\cos\theta,\mbox{-}R\dot{\theta}\,\sin\theta]$$ Acceleration $$\vec{a}=[R\dot{\theta}^{2}\,\sin\theta-R\ddot{\theta}\,\cos\theta,\mbox{-}R\dot{\theta}^{2}\cos\,\theta-R\ddot{\theta}\,\sin\theta]$$ The Sum of the forces acting on the mass are $$\sum\vec{F}=[T\,\sin\theta,\mbox{-}T\,\cos\theta]-[0,m\, g]$$ Newtons law of motion is used to solve for the angular acceleration $\ddot{\theta}$ and the tension $T$ (must be positive). $$\ddot{\theta}=\frac{g}{R}\,\sin\theta$$ and $$T=m\,R\,\dot{\theta}^2-m\,g\,\cos\theta$$ So to get the tension, you not only need the angle $\theta$, but also the angular speed $\dot{\theta}$. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.926446795463562, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/21626/magnetic-field-in-a-cavity
# Magnetic field in a cavity We are given an infinitely long cylinder of radius $b$ with an empty cylinder (not coaxial) cut out of it, of radius $a$. The system carries a steady current (direction along the cylinders) of size $I$. I am trying to find the magnetic field at a point in the hollow. I am told that the answer is that the magnetic field is uniform throughout the cavity. and is proportional to $d\over b^2-a^2$ where $d$ is the distance between the centers of the cylinders. Attempt: I have found by using Ampere's law that the magnetic field at a point at distance r from the axis in a cylinder of radius $R$ carrying a steady current, $I$, is given by $\mu_0 I r\over 2\pi R^2$. So I thought I would use superposition. But what I get is ${\mu_0 I \sqrt{(x-d)^2+y^2}\over 2\pi b^2}-{\mu_0 I \sqrt{(x)^2+y^2}\over 2\pi a^2}$. However this is not the given answer! - You need to assume that the current is distributed equally over the remaining cross sectional area of the wire. – Ron Maimon Jul 14 '12 at 4:16 ## 2 Answers This is a problem of superposition--- you can imagine this is a uniform cylinder carrying a current, and the cut-out part is another uniform cylinder carrying a current of the same current density in the opposite direction. Then you superpose the two fields of the two cylinders, and you get the total field. Ampere's law tells you that the magnetic field inside a cylinder is linearly proportional to the distance from the center, and goes around the center line as given by the right hand rule. So that for a uniform cylinder with current density j going in the z-direction, $$B_x \propto - j(y-y_0)$$ $$B_y \propto + j(x-x_0)$$ for the opposite current density cylinder, you just reverse the sign of j. When you add, the x and y dependent parts cancel out, so that the field is constant. - Hint: Take the position vector of center of hole as $\vec{a}$, and take the relative p.v. of a point in the cavity as $\vec{r_1}$. Solve vectorially, you'll be surprised by the result. A similar surprise occurs when you have the same cylinder with a uniform charge density. Maybe you should try this first, magnetic fields involve the extra cross product (It gets out of the way here, though) Yes, superposition is useful here. - Thanks, in my attempt I used the origin as the center of the hole and that answer is what I got... I can't seem to get the right answer :( – light Feb 29 '12 at 11:51 Vectors can make complicated stuff easier. Use the origin as the center of the main cylinder. Note that you applied Pythagoras theorem to non-perpendicular lines. Bad idea. Have you tried vectors yet? It really makes the solution beautiful. – Manishearth♦ Feb 29 '12 at 11:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9452193975448608, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/174422/projective-representations-of-loop-groups
# Projective representations of loop groups If $G$ is a Lie group and we take its loop group $LG$ why do we deal with projective representations of $LG$ and central extensions thereof? Where does the extra complexity come in to require us to consider this extra, very complicated, step? - When are you ever required to do anything in mathematics? – Qiaochu Yuan Jul 24 '12 at 1:56 I asked this because I am reading about these on behest of my advisor with little motivation. I told him I was keen on mathematical physics and he told me to read Loop groups by Segal and Pressley. – Sven Jul 25 '12 at 1:53 It just seems like strange wording to me. Nobody is required to centrally extend a loop group. People want to centrally extend loop groups. – Qiaochu Yuan Jul 25 '12 at 2:13 ## 2 Answers Projective representations are needed in quantum mechanics, and I think that is the main motivation behind studying projective representations. The point is that in QM, it is rays in a Hilbert space which have physical meaning. So the symmetry group is required to act on and transform these rays rather than vectors. - So the representations we study happen to be projective because of some physical meaning? Is there no mathematical reason? – Sven Jul 23 '12 at 22:29 I am not sure. If there is any mathematical reason I too would like to know :) – user10001 Jul 23 '12 at 22:41 But from Lie algebra point of view study of projective representations will also give you ordinary ones. Right ? – user10001 Jul 23 '12 at 22:44 @Sven: I disagree with the premise that a physical justification for studying projective representations is not a mathematical justification for studying projective representations. Much of mathematics owes its existence to physical considerations and this seems disrespectful of the role of physics in mathematics to me. – Qiaochu Yuan Jul 24 '12 at 1:55 I meant no offense, if it were a physical constraint I would be perfectly happy with that explanation. I asked because in my limited experience the physical constraint is not independent of a mathematical one. – Sven Jul 25 '12 at 1:48 Well, we are usually concerned about these exotic objects at the algebraic level. They pop up when we consider spinor representations of the orthogonal group. Lets write $\gamma_{\alpha}$ for the generators of the Clifford algebra which are represented by $\Gamma_{\alpha}$. They satisfy $$\Gamma_{\alpha}\Gamma_{\beta}+\Gamma_{\beta}\Gamma_{\alpha}=2\eta_{\alpha\beta}$$ where $\eta_{\alpha\beta}$ is nondegenerate. (If it's degenerate, we have a Grassmann algebra and not a Clifford algebra, but it's not terrible.) So we have commutators of the generators of the Clifford algebra. We will leave it as an exercise for the reader to determine $$[\Gamma_{\alpha},\Gamma_{\beta}]=\text{something quadratic}$$ and $$[\Gamma_{\alpha},\Gamma_{\mu}\Gamma_{\nu}]=2\eta_{\alpha\mu}\Gamma_{\nu}-2\eta_{\alpha\nu}\Gamma_{\mu}.$$ Now we expect commutators of quadratic terms to look like $$[\Gamma_{\alpha}\Gamma_{\beta},\Gamma_{\mu}\Gamma_{\nu}]\sim a\Gamma_{\sigma}\Gamma_{\tau}+b\cdot\textbf{1}$$ and wouldn't it be lovely if we could make this an equality and ditch the constant multiple of the identity operator $b\textbf{1}$? This would make the representation of quadratic guys a closed algebra! We find if we write an arbitrary element of these generated algebra as $a\cdot\textbf{1}+b^{\mu\nu}\Gamma_{\mu}\Gamma_{\nu}$ (where we use Einstein summation convention) we have an algebra, provided one condition on the matrix $b^{\mu\nu}$ namely we need $$b^{\mu\nu}=-b^{\nu\mu}.$$ This says we have a "representation" of the orthogonal Lie algebra. Is it a representation? Yes and no. No, it's technically not a "vanilla representation"; but with all this extra structure, we have a central extension of the orthogonal lie algebra. Wouldn't it be lovely, after all this work, if we could do the same thing to other Lie algebras? That's how we get started with losing ourselves in the jungle of loop groups, its central extensions, etc. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9489650726318359, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2007/02/17/
# The Unapologetic Mathematician ## The First Isomorphism Theorem Today I want to walk through what’s called the “First Isomorphism Theorem” for groups. There are two more, but the first is really more interesting in my view. I’ll start with a high-level sketch: kernels of homomorphisms are normal subgroups, images are quotient groups, and every homomorphism is a quotient followed by an isomorphism. First I’m going to need a couple homomorphisms. If we’ve got a group $G$ and a normal subgroup $N$, there’s immediately a homomorphism $\pi_{(G,N)}:G\rightarrow G/N$. Just send each $g$ to its coset $gN$. It should be clear that every coset gets hit at least once, so this is an epimorphism, and that its kernel is exactly $N$. We call $\pi_{(G,N)}$ the “canonical projection” or the “canonical epimorphism” from $G$ to $G/N$. On the other hand, if $G$ is a group and $H$ is any subgroup, we have a homomorphism $\iota_{(G,H)}:H\rightarrow G$ given by just sending every element of $H$ to itself inside $G$. This is such a natural identification to make that it feels a little weird to think of it as a homomorphism at all, but it actually turns out to be quite useful. The kernel of $\iota_{(G,H)}$ is trivial, making it a monomorphism. We call it the “canonical injection” or “canonical monomorphism” from $H$ to $G$. Now consider any homomorphism $f:G\rightarrow H$. If $k$ is in the kernel of $f$ — $f(k)$ is the identity $e_H$ of $H$ — and $g$ is any element of $G$, we calculate $f(gkg^{-1}) = f(g)f(k)f(g^{-1}) = f(g)f(g)^{-1} = e_H$ so $gkg^{-1}$ is in the kernel as well. Thus the kernel is a normal subgroup. So every kernel is a normal subgroup, and the canonical projection shows that every normal subgroup shows up as the kernel of some homomorphism. Now we can write any homomorphism $f$ as a composition $G\rightarrow^{\pi_{(G,{\rm Ker}(f))}}G/{\rm Ker}(f)\rightarrow^{f'} {\rm Im}(f)\rightarrow^{\iota_{(H,{\rm Im}(f)}}H$ where I’ve written the name of each composition next to its arrow. That is, we first project onto the quotient of the domain by the kernel of $f$, then we send that to the image of $f$ by a homomorphism we call $f'$, and finally we inject the image into the codomain. As a bonus, $f'$ is an isomorphism! Okay, so how do we define $f'$? If we write the kernel of $f$ as $N$, we need to figure out what to do with a coset $gN$. If $g$ and $gn$ are two elements of $gN$, then $f(gn) = f(g)f(n) = f(g)$, so $f$ sends every element of $gN$ to the same element of $H$. We define $f'(gN) = f(g)$. Now let’s say $f'(gN) = e_H$. This means that $f(g)=e_H$, so $g$ is in $N$ already, and $gN$ is the identity of $G/N$. The kernel of $f'$ is trivial, so $f'$ is a monomorphism. On the other hand, every element in ${\rm Im}(f)$ is $f(g)$ for some $g$, so each one is also $f'(gN)$ for some $gN$. That makes $f'$ an epimorphism, and thus an isomorphism. Q.E.D. Every homomorphism works like this: you divide out some kernel, hit the quotient group with an isomorphism, and include the result into the target group. Since isomorphisms don’t really change anything about a group and the inclusion is pretty simple too, all the really interesting stuff goes on in the first step. The homomorphisms that can come out of $G$ are essentially determined by the normal subgroups of $G$. Because of this, we call a group with no nontrivial normal subgroups “simple”. The kernel of an homomorphism from a simple group is either trivial or the whole group. What we’re starting to see here is the tip of a much deeper approach to algebra. The internal structure of a group is intimately bound up with the external structure of the homomorphisms linking it to other groups. Each one determines, and is determined by the other, and this duality can be a powerful tool for answering questions on one side by turning them into questions on the other side. ## About this weblog This is mainly an expository blath, with occasional high-level excursions, humorous observations, rants, and musings. The main-line exposition should be accessible to the “Generally Interested Lay Audience”, as long as you trace the links back towards the basics. Check the sidebar for specific topics (under “Categories”). I’m in the process of tweaking some aspects of the site to make it easier to refer back to older topics, so try to make the best of it for now.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 61, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9325217604637146, "perplexity_flag": "head"}
http://mathoverflow.net/questions/58907/a-naive-question-about-composition-factor-of-a-representation/58910
## A naive question about composition factor of a representation ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Let $G$ be a Lie group, and $(\pi,V)$ is a continuous representation of $G$ which has finite composition series. A question I have which might be somehow naive is that: for any irreducible representation $(\sigma,W)$ of $G$, is it true that $(\sigma,W)$ occurs as one composition factor if and only if the set $Hom_G(V,W)$ is nonzero? I have no idea how difficult or how easy this question might be, and any reference or answer is appreciated. Edit: Thanks a lot for all of your answers, comments and examples. Now if $G$ is real reductive, $(\pi,V)$ is smooth admissible. Is there a way to determine all of the composition factors of $V$? - I'm not too familiar with the etiquette on MO myself, but since your original question was already answered, you might want to make a separate question about real reductive groups. – Kimball Mar 21 2011 at 14:48 ## 3 Answers No. This holds only if $V$ is semisimple. Consider the case when $V$ has two composition factors. This means that one is an invariant subspace and the other is the quotient of $V$ by this invariant subspace. If there is also an invariant subspace isomorphic to the quotient then $V$ is the direct sum of these two representations and so $V$ is decomposable. - ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. Bruce's answer is perfectly satisfactory, but you might want to see an explicit example. Let $B$ be the group of all upper triangular matrices in $GL_2({\mathbb R})$ and let $\pi:B\to GL_2({\mathbb R})$ be the inclusion map, which you might consider as a representation on $\mathbb R^2$. Let $\chi_1,\chi_2:B\to GL_1({\mathbb R})$ be the representations given by $$\chi_j\left(^{a_1}\ ^x_{a_2} \right)=a_j.$$ Then $\chi_1$ is a subrepresentation and $\chi_2$ is a quotient of $\pi$. So both are subquotients, but $Hom_B(\pi,\chi_1)$ is zero. - The answers given by Bruce and Anton are sensible, but I'd add that it's important in Lie theory to distinguish the behavior of reductive groups from that of arbitrary Lie groups: finite dimensional representations of the former are semisimple (motivating the label "reductive") but often not in general. It's also important to distinguish finite and infinite dimensional representations, since you use the term "continuous". Even for a well-behaved simple Lie group, typical infinite dimensional representations of finite length are often not semisimple: those in the "principal series", etc. This is mirrored in a more elementary way in related Lie algebra representations, starting with Verma modules: these have finite composition series but are only rarely semisimple (then only when they are in fact simple). -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9495893120765686, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/tagged/hamiltonian-formalism?page=1&sort=active&pagesize=50
Tagged Questions The hamiltonian-formalism tag has no wiki summary. 1answer 32 views Finding Hamilton's equations given a Hamiltonian I am trying to find Hamilton's equations for a general Hamiltonian given by $$H[u]=\int_\mathbf{R} \phi(u,u_x)dx$$ Suppose \frac{\delta f[u]}{\delta u(x)}\equiv \frac{\partial f}{\partial ... 2answers 79 views What would happen if energy was conserved but phase space volume wasn't? (and vice-versa) I'm trying to understand the relationship between the two conservation laws. As I understand, Liouville's result is a weaker condition: it relies merely on the particular form assumed by Hamilton's ... 1answer 58 views Peculiar Hamiltonian Phase space I was solving an exercise of classical mechanics : Consider the following hamiltonian $H(p,q,t) = \frac{p^2}{2m} + \lambda pq + \frac{1}{2}m\lambda^2\frac{q^6}{q^4+\alpha^4}$ Where ... 3answers 111 views Physical interpretation of Poisson bracket properties In classical Hamiltonian mechanics evolution of any observable (scalar function on a manifold in hand) is given as $$\frac{dA}{dt} = [A,H]+\frac{\partial A}{\partial t}$$ So Poisson bracket is a ... 1answer 295 views About Turbulence modeling There is a paper titled "Lagrangian/Hamiltonian formalism for description of Navier-Stokes fluids" in PRL. After reading the paper, the question arises how far can we investigate turbulence with this ... 0answers 300 views Calculation of the non-Gaussity parameter for primordial cosmological perturbations by the ADM Formalism Maldacena has used the ADM Formalism in one of his papers (http://arxiv.org/abs/astro-ph/0210603) in computing the the three point correlation function (i.e the non-Gaussianity) parameter for ... 1answer 117 views Phase space in quantum mechanics and Heisenberg uncertainty principle In my book about quantum mechanics they give a derivation that for one particle an area of $h$ in $2D$ phase space contains exactly one quantum mechanical state. In my book about statistical physics ... 1answer 49 views Is symplectic form in Hamiltonian mechanics a physical quantity? Is symplectic form $dp_i \wedge dq_i$ in Hamiltonian mechanics a physical quantity? It feels to me to be something different than say energy, momentum or mass. Like just certain structure. The real ... 2answers 208 views primary constraints for constrained Hamiltonian systems I would be most thankful if you could help me clarify the setting of primary constraints for constrained Hamiltonian systems. I am reading "Classical and quantum dynamics of constrained Hamiltonian ... 1answer 173 views Can I find a potential function in the usual way if the central field contains $t$ in its magnitude? I'm working on a classical mechanics problem in which the problem states that a particle of mass $m$ moves in a central field of attractive force of magnitude: $$F(r, t) = \frac{k}{r^2}e^{-at}$$ ... 1answer 56 views Quantum mechanical analogue of conjugate momentum In classical mechanics, we define the concept of canonical momentum conjugate to a given generalised position coordinate. This quantity is the partial derivative of the Lagrangian of the system, with ... 2answers 154 views Elimination of velocities from momenta equations for singular Lagrangian this doubt is related to Generalized Hamiltonian Dynamics paper by Dirac. Consider the set of $n$ equations : $p_i$ = $∂L/∂v_i$, (where $v_i$ is $q_i$(dot) = $dq_i/dt$, or time derivative of ... 2answers 349 views Lorentz invariance of the 3 + 1 decomposition of spacetime Why is allowed decompose the spacetime metric into a spatial part + temporal part like this for example $$ds^2 ~=~ (-N^2 + N_aN^a)dt^2 + 2N_adtdx^a + q_{ab}dx^adx^b$$ ($N$ is called lapse, $N_a$ is ... 0answers 300 views Square of Laplace–Runge–Lenz vector in Hydrogen atom [closed] I have a problem. I've tried this question, but I don't get the correct expression. Can someone give me some ideas? Thanks! Consider the Hydrogen Atom Hamiltonian: H = (\mathbf p^2/2 ... 1answer 103 views rate of change of spring potential energy $\frac{dU}{dt}$ Suppose we have a setup like this. In orange are two wooden sticks sort of things, and they are attached to the block of mass $m$(as usual) at a joint which is hinge type something. A similar ... 2answers 84 views Heisenberg evolution equation for $\hat{\phi}$ Consider quantum Hamiltonian of free massive scalar particle: \hat{H} = \int d^3x \left[\frac{1}{2} \hat{\pi}^2 (t, \vec{x}) + \frac{1}{2} \partial_i \hat{\phi}(t, \vec{x}) \partial_i \hat{\phi}(t, ... 2answers 146 views From Lagrangian to Hamiltonian in Fermionic Model While going from a given Lagrangian to Hamiltonian for a fermionic field, we use the following formula. $$H = \Sigma_{i} \pi_i \dot{\phi_i} - L$$ where \$\pi_i = \dfrac{\partial L}{\partial ... 2answers 154 views Ordering Ambiguity in Quantum Hamiltonian While dealing with General Sigma models (See e.g. Ref. 1) $$\tag{10.67} S ~=~ \frac{1}{2}\int \! dt ~g_{ij}(X) \dot{X^i} \dot{X^j},$$ where the Riemann metric can be expanded as, \tag{10.68} ... 1answer 132 views The relation between Hamiltonian and Energy I know Hamiltonian can be energy and be a constant of motion if and only if: Lagrangian be time-independent, potential be independent of velocity, coordinate be time independent. Otherwise ... 1answer 52 views Deriving equations of motion of polymer chain with Hamilton's equations This is related to a question about a simple model of a polymer chain that I have asked yesterday. I have a Hamiltonian that is given as: \$H = \sum\limits_{i=1}^N \frac{p_{\alpha_i}^2}{2m} + ... 1answer 52 views Hamiltonian of polymer chain I'm reading up on classical mechanics. In my book there is an example of a simple classical polymer model, which consists of N point particles that are connected by nearest neighbor harmonic ... 2answers 161 views Weyl Ordering Rule While studying Path Integrals in Quantum Mechanics I have found that [Srednicki: Eqn. no. 6.6] the quantum Hamiltonian $\hat{H}(\hat{P},\hat{Q})$ can be given in terms of the classical Hamiltonian ... 2answers 214 views Quantum Mechanics Notation for BRA KET I've been given this homework problem, but I do not understand its notation. Please perform the following where the wavefunctions are the normalized eigenfunctions of the harmonic oscillator ... 0answers 40 views The consistency conditions of constrained Hamiltonian systems I am studying the Hamiltonian description of a constrained system. There are some questions puzzled me for days, which I have been stuck on it. From the lagrangian, we can obtain the primary ... 0answers 49 views Second order action ADM formalism I am trying to derive the second order action $$S_{(2)}~=~\frac{m_{pl}^{2}}{8}\int a^{2}[(h_{ij}')^{2}-(\partial_{i}h_{ij})^{2}]d^{4}x,$$ used for tensor fluctuations derived from the ADM ... 2answers 191 views The string Poisson bracket Where does the factor $\frac{1}{T}$ ($T$ is the string tension) in this Poisson bracket come from? \{X^{\mu}(\tau,\sigma),\dot{X}^{\nu}(\tau,\sigma')\} ~=~ ... 3answers 86 views Other application of Liouville's theorem besides thermodynamics Are there any other important practical and theoretical consequences of Liouville's theorem on the conservation of phase space volume besides the calculation of the microcanonical potential in ... 1answer 79 views Is Hamilton-Jacobi equation valid for only conserved systems? From derivation of Hamilton-Jacobi (HJ) equation one can see that it is only applicable for conserved systems, but from some books and Wikipedia one reads the HJ equation as ... 1answer 93 views Does a constant of motion always imply a Hamiltonian formulation? If a continuous dynamical system has a constant of motion that is a function of all its variables, and is not already evidently Hamiltonian, is it always possible to use a change of variables and ... 2answers 108 views Are Poisson brackets of second-class constraints independent of the canonical coordinates? Say we have a constraint system with second-class constraints $\chi_N(q,p)=0$. To define Dirac brackets we need the Poisson brackets of these constraints: $C_{NM}=\{\chi_N(q,p),\chi_M(q,p)\}_P$ . Is ... 1answer 80 views Eikonal approximation for wave optics. Why follow the unit vector parallel to the Pointing vector? The description of the passage from wave optics to geometrical optics claims that light rays are the integral curves of a certain vector field (the Pointing vector direction, normalized to 1). Here ... 1answer 81 views Noise spectrum of two systems and interacting Hamiltonian I've been discovering recently the concept of noise spectrum, defined as: $$S_{xx}[\omega] = \int dt<x(t)x(0)>\text{e}^{-i\omega t}$$ Roughly the Fourrier transform of the two-point function. ... 5answers 1k views Why not using Lagrangian, instead of Hamiltonian, in non relativistic QM? When we studied classical mechanics on the undergraduate level, on the level of Taylor, we covered Hamiltonian as well as Lagrangian mechanics. Now when we studied QM, on the level of Griffiths, we ... 1answer 157 views Find the Hamiltonian given $\dot p$ and $\dot q$ I have these equations: $$\dot p=ap+bq,$$ $$\dot q=cp+dq,$$ and I have to find the conditions such as the equations are canonical. Then, I have to find the Hamiltonian $H$. To answer to the first ... 1answer 75 views Find generating function $F_1$ for canonical trasformation I'd like to know the steps to follow to find the generating function $F_1(q,Q)$ given a canonical transformation. For example, considering the transformation $$q=Q^{1/2}e^{-P}$$ $$p=Q^{1/2}e^P$$ ... 1answer 55 views Solution of motion in hamiltonian formalism I have these canonical equations: $$\dot p = - \alpha pq$$ $$\dot q =\frac{1}{2} \alpha q^2$$ I have to find $q(t)$ and p$(t)$, considering initial conditions $p_0$ and $q_0$. I thought to simply ... 2answers 77 views Hamiltonian constraint in spherical Friedmann cosmology I'm taking a GR course, in which the instructor discussed the 'Hamiltonian constraint' of spherical Friedmann cosmology action. I'm not quite clear about the definition of 'Hamiltonian constraint' ... 3answers 189 views Factors of $c$ in the Hamiltonian for a charged particle in electromagnetic field I've been looking for the Hamiltonian of a charged particle in an electromagnetic field, and I've found two slightly different expressions, which are as follows: H=\frac{1}{2m}(\vec{p}-q \vec{A})^2 ... 1answer 124 views Graphical representation of Hamilton's equation of motion [closed] Position time graph for the Hamilton's equations motion for a simple pendulum. 2answers 257 views Hamiltonian or not? Is there a way to know if a system described by a known equation of motion admits a Hamiltonian function? Take for example $$\dot \vartheta_i = \omega_i + J\sum_j \sin(\vartheta_j-\vartheta_i)$$ ... 3answers 68 views Constructing a Hamiltonian (as a polynomial of $q_i$ and $p_i$) from its spectrum For a countable sequence of positive numbers $S=\{\lambda_i\}_{i\in N}$ is there a construction producing a Hamiltonian with spectrum $S$ (or at least having the same eigenvalues for $i\leq s$ for ... 1answer 145 views Meaning of a canonical transformation “preserving” a differential form? In Chapter 9 of Arnold's Mathematical Methods of Classical Mechanics, we find the following definition: Let $g$ be a differentiable mapping of the phase space $\mathbb R^{2n}$ to $\mathbb R^{2n}$. ... 1answer 288 views Hamilton's equations for a simple pendulum I don't get how to use Hamilton's equations in mechanics, for example let's take the simple pendulum with $$H=\frac{p^2}{2mR^2}+mgR(1-\cos\theta)$$ Now Hamilton's equations will be: \dot ... 9answers 2k views Book about classical mechanics I am looking for a book about "advanced" classical mechanics. By advanced I mean a book considering directly Lagrangian and Hamiltonian formulation, and also providing a firm basis in the geometrical ... 5answers 541 views What does symplecticity imply? Symplectic systems are a common object of studies in classical physics and nonlinearity sciences. At first I assumed it was just another way of saying Hamiltonian, but I also heard it in the context ... 2answers 131 views Hamiltonian and non conservative force I have to find the Hamiltonian of a charged particle in a uniform magnetic field; the potential vector is $\vec {A}= B/2 (-y, x, 0)$. I know that $$H=\sum_i p_i \dot q_i -L$$ where $p_i$ is ... 1answer 199 views Cyclic Coordinates in Hamiltonian Mechanics I was reading up on Hamiltonian Mechanics and came across the following: If a generalized coordinate $q_j$ doesn't explicitly occur in the Hamiltonian, then $p_j$ is a constant of motion ... 1answer 181 views Sympletic structure of General Relativity Inspired by physics.SE: http://physics.stackexchange.com/questions/15571/does-the-dimensionality-of-phase-space-go-up-as-the-universe-expands/15613 It made me wonder about symplectic structures in ... 1answer 115 views Canonical transformation and Hamilton's equations I was trying to prove, that for a transformation to be Canonical, one must have a relationship: $$\left\{ Q_a,P_i \right\} = \delta_{ai}$$ Where $Q_a = Q_a(p_i,q_i)$ and $P_a = P_a(p_i,q_i)$. Now ... 2answers 142 views Good book for Analytical Mechanics What is a good book for Analytical Mechanics? To be more specific, I would prefer a book that: Is written "for mathematicians", i.e. with high mathematics precision (for example, with less emphasis ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 53, "mathjax_display_tex": 18, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8945648074150085, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/39037?sort=votes
## boolean functions and averaging / counting ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Hey guys, I have a slightly imprecise question. I would like say something about a whole set of binary strings evaluated by a binary function by just looking at some type of average. The easiest example I can think of is probably a binary function $f: \{0,1\}^n \rightarrow \{0,1\}$ that is linear with $f(0) = 0$. Now in order to count the number of assignments resulting in $1$ I can do the following: $1/2^n * \sum_{x \in \{0,1\}^n} f(x) = f(1/2^n * \sum_{x \in \{0,1\}^n} x) = f(1/2 e)$ where $e = (1,\dots, 1)$ is the all-one vector and thus $f(1/2 e) * 2^n$ gives me the answer i am looking for. I vaguely recall that I have seen something like this beforehand and I guess that there is something like a whole theory about this type of combinatorial argument out there. It is also somehow about inferring the structure of the boolean function by evaluating it at non-boolean inputs. For example, I think that this is part of the idea of the algebraization as a barrier to showing P != NP where one of the oracles get enhanced power by not only being able to evaluate a certain function at 0/1 assignments but also any other point contained in $[0,1]^n$. I would really appreciate any pointers or references or just names for what I am actually looking for. Thanks a lot, Alberto - I still don't know what your question is, and your example computation doesn't make sense to me, either. – Darsh Ranjan Sep 17 2010 at 2:23 3 Alberto, can you try re-writing your question so that you explain yourself more clearly? Are you saying the $e$ is the expected value (and not the constant 'e' used in exponentiation)? Are you asking if since you have a linear function, the average of the function $f$ over all possible values of binary strings of length $n$ is equal to the function of the average of the all of the expected binary strings, is equal to the function of one-half of the expected value of all binary strings of length $n$? Please restate your question more clearly. – sleepless in beantown Sep 17 2010 at 6:20 sorry guys, you are absolutely right. The "e" here is the all-one vector. The example was just meant to point out the type of argument: I am looking for. Here, as the function is linear (as in being a homomorphism, i.e., f(0) = 0), I can compute the average of the evaluation function by computing the evaluation on the average. Than I can count by multiplying $2^n$. – Alberto Sep 17 2010 at 16:17 I do not get “by evaluating it at non-boolean inputs” in your example. Since you could compute the same value as 2^{n−1}⋅f(e), where e is the all-one vector, it seems to me that you just chose to evaluate f at a non-boolean point when it was not necessary. – Tsuyoshi Ito Sep 17 2010 at 22:43 By the way, I guess you should write (1/2)e or e/2 instead of 1/2e. – Tsuyoshi Ito Sep 17 2010 at 22:45 show 3 more comments ## 3 Answers There's a canonical way to extend a boolean function to the unit n-cube: you replace the boolean arguments by real numbers that are the probabilities of independent events, and the new output is the probability of the compound event defined by the original boolean function. For example, take the boolean function $(a,b,c)\mapsto (a\wedge b) \vee c$. If $a$, $b$, and $c$ are independent events with respective probabilities $p$, $q$, and $r$, then you can check that the probability of the event $(a\wedge b) \vee c$ is $pq +r -pqr$. I wrote a somewhat long post about this here: http://mathoverflow.net/questions/4930/finding-minimal-or-canonical-expressions-for-boolean-truth-tables/4951#4951 I bring this up because if $f$ is an n-ary boolean function and $f^*$ is the probability version, then $f^*(1/2,...,1/2)$ is precisely the average value of $f$ over all possible boolean inputs. (That's obvious: if you flip a fair coin to determine the truth value for each of the n arguments of $f$, then by definition, $f^*(1/2,...,1/2)$ is the probability of $f$ evaluating to "true," but that probability is also clearly the average value of $f$ in this case.) That seems to be what you're looking for. (Unfortunately, computing $f^*$ for any reasonably complicated function $f$ is pretty much intractable: indeed, there is no polynomial-time algorithm unless P = #P, by the simple fact stated above.) - ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. One view of this is integration. If the curve (function) you have is sufficiently well understood, you can estimate the area under it by various approximation rules using a few points, e.g. Simpson's Rule. Thus you can generalize your linear example. The other ways below involve obtaining the average through evaluating the function at some of the points in {0,1}^n . They don't speak to how to use this to infer properties of the function. If your function is monotonic increasing, then in principle you can find the set of points which are least and sent to one under the function, and then use some form of inclusion-exclusion to get the count, or perhaps something simpler. If your function domain can be nicely partitioned so that the result is a disjoint union of monotonic functions, then you have a divide-and-conquer strategy to implement. If your function looks like XOR of several variables in some places, those places contribute a weight 1/2 to the average, and you might be able to handle the other places by something similar to the above. The quickest way to estimate the average is by random sampling. The average of the function is likely to be close to the average of a random sample.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 31, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9363369345664978, "perplexity_flag": "head"}
http://mathoverflow.net/questions/93771?sort=votes
## Proper compact connected subgroup of $Spin(n)$ ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) What are the proper compact connected subgroups of $Spin(n)$ of maximal rank where $Spin(n)$ is the spin group, that is, the universal cover of the special orthogonal group $SO(n)$? In fact, I am only interested in the highest dimension of a compact connected subgroup of $Spin(n)$ of maximal rank. I am not sure if this is an easier question. - What's wrong with a maximal torus? That satisfies all of your conditions and is certainly the one of minimum dimension. – Robert Bryant Apr 11 2012 at 15:15 Yes, it is an easier question. I assume that by $Spin(n)$ you mean the compact group $G=Spin(n)$ over $\mathbf{R}$. Then a connected subgroup of maximal rank (i.e. containing a maximal torus) of minimal dimension is a maximal torus. Its dimension is $rk(G)$, equal to $n/2$ or $(n-1)/2$ depending on whether $n$ is even or odd. The "next lowest" dimension of a connected subgroup is $rk(G)+2$. – Mikhail Borovoi Apr 11 2012 at 15:16 Sorry, for the confusion. I made a mistake which I edited. In fact, I am looking for the subgroup of highest dimension which satisifies these properties. – berl13 Apr 11 2012 at 15:20 ## 2 Answers I think that the answer here is just the double cover of the obvious answer for $SO(n)$, which is $U(n/2)$ when $n$ is even and $SO(n{-}1)$ when $n$ is odd. You can double-check this by consulting the Dynkin tables of maximal subgroups. Added after Mikhail's comment: Mikhail actually went to the tables and checked (which I had not) and observed that, when $n$ is even, the maximal subgroup $SO(n{-}2)\times SO(2)$ of maximal rank has larger dimension than $U(n/2)$ when $n>8$. (They have equal dimension when $n=8$ and the former has smaller dimension when $n<8$.) Thus, the above answer needs to be divided into parts when $n$ is even. By the way, the double covers of the subgroups $SO(6)\times SO(2)$ and $U(4)$ in $Spin(8)$ are actually conjugate by an outer automorphism of $Spin(8)$, so they are essentially the same. This is a consequence of triality as discovered by Cartan. - Ok, thank you for your help. – berl13 Apr 11 2012 at 15:59 2 I double-checked this. When $n\ge 8$ is even, the answer is (the cover of) $SO(n−2)\times SO(2)$, which differs from $U(n/2)$ when $n>8$. – Mikhail Borovoi Apr 11 2012 at 17:55 @Mikhail: Yes, I agree that you are right. $U(n/2)$ is a maximal subgroup of maximal rank, but it's not the largest dimension, even among the ones of rank $n/2$ (except when $n$ is small). – Robert Bryant Apr 11 2012 at 18:03 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. A subgroup of maximal rank of maximal dimension is certainly a maximal subgroup of maximal rank. Maximal connected subgroups of maximal rank in $Spin(n)$ correspond to maximal reductive Lie subalgebras of maximal rank in $so(n)_{\mathbf{C}}$. Such subalgebras in semisimple Lie algebras were classified by Dynkin in 1952, see Onishchik and Vinberg (Eds.), Lie Groups and Lie Algebras III, Encyclopaedia of Mathematical Sciences, vol. 41, Tables 5 and 6. For $so(n)$ all such subalgebras are $so(2k)\oplus so(n-2k)$, and also $gl(n/2)$ for $n$ even. The subalgebras of highest dimension are probably $so(n-1)$ for $n$ odd and $gl(n/2)$ for $n$ even. EDIT: For $n=2l\ge 10$, the subalgebra of highest dimension and of maximal rank in $so(n)$ is $so(n-2)\oplus so(2)$ of dimension $2l^2-5l+4=l^2+l(l-5)+4$, and NOT $gl(n/2)$ of dimension $l^2$. For example, for $n=10$ we have ${\rm dim} (so(8)\oplus so(2))=29$, while ${\rm dim}\ gl(5)=25$. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 55, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.92787766456604, "perplexity_flag": "head"}
http://mathoverflow.net/questions/47524/when-do-we-study-maps-into-an-object-or-from-the-object-to-another-object/47527
## When do we study maps into an object or from the object to another object? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) In many Mathematical theories, to study an object, we usually consider the set of all maps from that object to some other object. For example, in differential geometry, we study the smooth maps from a manifold $M$ to $\mathbb{R}$. Or in Algebraic Geometry, we consider the structure sheaf, which is the set of maps from a variety to $\mathbb{A}^1$. So, is there any heuristic idea about why we don't do the other way around, i.e. study the set of all maps from some object to the object we want to study (at least in the two examples above)? Would this give us any more information? And also, is there any subject in which we do that? Edit: One more clarification that might make my question clearer. In algebraic geometry, when we write an $R-$scheme $\mathrm{Spec} A$, already implicitly, we are viewing $A$ as the ring of all $R-$functions from $\mathrm{Spec} A$ to $\mathrm{Spec} R[x]$. Edit (based on Qiaochu Yuan's answer): maps in seem to give us local information while maps out gives us global one, at least in Differential Geometry and Algebraic geometry. For example, to learn about the tangent space at a point, we look at the map $I\to M$ (in differential geometry) and $\mathrm{Spec} k[x]/(x^2) \to X$ (in algebraic geometry). Is there any more example along these lines? - 11 In algebraic geometry, people certainly study maps from curves (especially $\mathbb{P}^1$) to various varieties. The study of rational curves (ie, exactly such maps) on a given variety is an area of very active research. Higher dimensional generalizations have also begun to be explored. – Karl Schwede Nov 27 2010 at 19:07 11 The question is a little misleading. There are plenty of cases where one learns about an object by mapping into it. For example, in differential geometry, the study of geodesics, minimal surfaces,... can shed a lot of light on the target manifold, as can the theory of harmonic maps (a.k.a. sigma models in the Physics literature). – José Figueroa-O'Farrill Nov 27 2010 at 19:18 10 In algebraic topology, we study maps from simplices to spaces -- homology. We also study maps from spheres to spaces -- homotopy groups. As Karl says, in algebraic geometry the study of maps of curves into varieties has been very fruitful. This has also been fruitful in symplectic geometry -- look up Gromov-Witten theory. It should not be surprising that maps to an object are interesting, because of Yoneda's lemma, which says that maps to an object contain essentially all information about that object. See mathoverflow.net/questions/3184/… – Kevin Lin Nov 27 2010 at 19:25 4 In dynamics (or ODE, or ergodic theory) we study orbits and trajectories, i.e. maps from Z or R into the space of interest. (One also studies factor maps from the space into simpler spaces, so it goes both ways in this case.) And in virtually any subject of mathematics, we study elements of a space, i.e. maps from a point into the space... – Terry Tao Nov 27 2010 at 20:33 5 IMO the premise of the question is mis-informed. In particular the differential geometry example is largely missing the point of the subject -- the study of the properties of smooth maps $M \to \mathbb R$ is part of differential topology, and even then it fits into the study of smooth maps $M \to N$, and in that setting the directionality of the map becomes rather irrelevant. – Ryan Budney Nov 27 2010 at 23:22 show 9 more comments ## 5 Answers In combinatorics, one considers both maps out of a space $X$ (colourings) and maps into a space $X$ (tuples). But there is one key difference between the two: if $X$ has $n$ elements, then the number of maps from, say, $\{0,1\}$ to $X$ is polynomial in $n$ (it has order $n^2$), while the number of maps from $X$ to $\{0,1\}$ is exponential in $n$ (it has order $2^n$). Thus we see that maps out of $X$ into a simple space form a much larger, and presumably thus much richer, space than maps into $X$ from a simple space. For instance, deciding whether a four-colouring of a graph with certain specified properties exists is usually a harder problem than deciding whether a four-tuple in a graph with certain specified properties exists, although both questions can be interesting. Of course, the situation could be different for other categories than the combinatorial one (particularly if some sort of duality is available)... - 2 As always,incredibly insightful and right on point,Terry.Wish I'd had you as my combinatorics professor-I might have actually "gotten" it.Not that either John Kennedy or Christopher Hanusa didn't try,trust me........ – Andrew L Nov 27 2010 at 21:48 9 Andrew L, I seem to remember that not so long ago, you were advertising your new plan to limit your contributions to this site to the domain of mathematics. The first sentence of the above comment would have perfectly sufficed to show your appreciation. – Alex Bartel Nov 28 2010 at 6:48 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. Let $V$ be a variety over a field $K$. In the category of varieties (or schemes) over $K$. I am very interested in studying all the morphisms from $\operatorname{Spec}(K)$ to $X$. I could hardly be less interested in studying the set of all morphisms from $X$ to $\operatorname{Spec}(K)$: this is, trivially, a single point. On the other hand, if your variety $V$ is affine -- say $\operatorname{Spec} A$ -- then we are really saying that we prefer to study $K$-algebra maps from $A$ to $K$ (i.e., maps from $A$) rather than $K$-algebra maps from $K$ to $A$ (i.e., maps into $A$). This points to a curious feature of your question: it is probably most natural to construe it in terms of categories. But in this setup, if you just switch to the opposite category, the answer switches around! Nevertheless I think your question is a real one. One could just as well ask: why do most categories come with a "natural orientation", i.e., why do we prefer the category to its opposite category? I think there's something to this question as well. - 1 P.S.: If one is going to mention categories at all, I suppose "Yoneda Lemma" should occur in the answer somewhere. But there are others who enjoy talking about this material more than I... – Pete L. Clark Nov 27 2010 at 19:31 1 Thanks for your answer. This is exactly the question I am asking: when we write $X = \mathrm{Spec} A$, already implicitly, we are viewing $A$ as the ring of functions from $X$ to $\mathrm{Spec} K[x]$. – Brian Nov 27 2010 at 19:36 @Pete: agreed. I would like to especially recommend this paper: maths.gla.ac.uk/~tl/categories/yoneda.ps . I quote the last page: "two objects look the same if and only if they look the same from all viewpoints". This is formally explored in that paper, and in full generality, I believe. – Bruno Stonek Nov 27 2010 at 20:35 4 "Why do most categories come with a "natural orientation"" is a very nice interpretation of the question! – Martin Brandenburg Nov 27 2010 at 22:58 @Martin: at least for the category of affine schemes I think the answer is "it behaves more like Set than the opposite category." – Qiaochu Yuan Nov 28 2010 at 0:28 show 2 more comments Here is a low-level observation, which I think I read on a different MO thread somewhere. If the objects in your category behave like the category of sets, then a map into an object $X$ can be "local" (its image might be a small subobject), but a map out of $X$ must always be "global" (it has to be defined on all of $X$). So in some sense even a single map out of $X$ (e.g. a Morse function in the category of smooth manifolds) can capture much more of the structure of $X$ than a single map into $X$. Of course if your category behaves like the opposite of the category of sets then the opposite is true. And by the Yoneda lemma both maps into an object and maps out of an object classify it up to isomorphism, so I don't think it necessarily makes sense to privilege either point of view in general. There is some really interesting general discussion of these issues in Lawvere and Schanuel's Conceptual Mathematics. - Do you mean morse function? – Dylan Wilson Nov 28 2010 at 1:04 Oops. Yes, I did. – Qiaochu Yuan Nov 28 2010 at 1:10 Indeed, one often chooses a category of nice objects depending on the niceness of the 'maps out'. For example, one chooses locally convex vector spaces because they have enough maps to the base field. One considers completely regular spaces because they have enough maps to the unit interval to separate subspaces. Compact Hausdorff spaces are particularly nice in some respects because the ring of complex functions is a unital $C^\ast$ algebra. – David Roberts Nov 28 2010 at 1:11 Thanks! I also started to think about the local vs. global like you. In the "local case", we do actually learn something about the local property when we look at "maps in," like the map $I\to M$ (in differentiable manifolds) and $\mathrm{Spec}k[x]/(x^2) \to X$ in Algebraic geometry. I am wondering if there is any other example along these lines. – Brian Nov 28 2010 at 3:40 1 Session 6 of Conceptual Mathematics: "The point of view about maps indicated by the terms 'naming,' 'listing,' 'exemplifying,' and 'parameterizing' is to be considered as 'opposite' to the point of view indicated by the words 'sorting,' 'stacking,' 'fibering,' and 'partitioning'." (p. 83) lawvere and Schanuel then go on to explain this 'opposition' philosophically. – David Corfield Nov 28 2010 at 12:27 Here are two important generalizations of the notion of topological space: 1) C*-algebras. Given a space X, one considers the set of continuous functions from X to ℂ. One then looks at all the properties of this set (it's an algebra, it's a Banach space, ...). One relaxes a bit the conditions (allow the multiplication to be non-commutative): there you get the notion of a C*-algebra. 2) Stacks. Given a space X, one considers the collection of all maps Y→ X, where Y is an arbitrary space. We then look at at their properties (they form a category over Top1, there exists a notion of precomposition with a map Y'→Y, they behave well w.r.t open covers, ...). Relax some conditions, and you'll get the notion of a stack. By focusing on out of a space X, you get the notion of C*-algebra. By focusing on maps into a space X, you get the notion of stack. 1Here, Top refers to the category of topological spaces. Stacks are encountered more often in algebraic geometry. In that case, one uses the category of schemes in place of Top. - To me, it seems that your question is essentially "Why does contravariance occur more frequently than covariance?" If one has a contravariant representable functor, then one is implicitly studying maps into a fixed (universal) object from the object one is interested in studying. I think perhaps one reason why contravariance is more natural than covariance is what Qiaochu indicates in his answer: contravariant functors have more of an opportunity to be "local." For instance, let $X, Y$ be topological spaces, and $f: X \to Y$ a continuous map. Then a sheaf on $Y$ when pulled back to $X$ at a point $x$ depends only on the local nature near the single point $f(x)$. Things are not so nice when pushing forward a sheaf. Thus it happens that pull-backs preserve stalks, while push-forwards need not (unless you are working with a particularly nice map $f$). In particular, if one has a bundle on $Y$, the local nature implies that it can be pulled back to $X$, while pushing a vector bundle forward will only give a sheaf, not necessarily a locally free one (i.e. a bundle). In algebraic geometry, one of the first representable functors one encounters is the one that represents projective space. Namely, fix a field $k$ and an integer $n$; then a map from a $k$-scheme $X$ into $\mathbb{P}^n_k$ is given by a line bundle on $X$ and $n+1$ global sections generating it (up to isomorphism). This is contravariant because you can pull-back line bundles and the generating property of global sections. You can't push this forward in a reasonable manner. (This universal property generalizes to projective space bundles over any scheme.) At an even more basic level, the definition of a sheaf itself is contravariant (it's a contravariant functor from the category of open sets with inclusions to the category of sets that satisfies unique gluing axioms). -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 74, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9270007014274597, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/108467-partial-derivative-without-knowing-actual-function.html
# Thread: 1. ## partial derivative without knowing actual function? My textbook, Advanced Macroeconomics (Romer), introduces a function $Y=F(K,AL)$, which can be written $\frac{Y}{AL}=F(\frac{K}{AL},1)$ and is often shortened to $y=f(k)$. It then suggests that $\frac{\partial F(K,AL)}{\partial K} = ALf'(\frac{K}{AL})(\frac{1}{AL})$. But how can they take the partial with respect to K if they don't specify the actual function? They say what the function depends on, but not what the actual function is. What is happening here? 2. Looks like they used the Chain Rule: $\frac{\partial}{\partial K}\left(AL\cdot F\left(\frac{K}{AL},1\right)\right)=AL\frac{\parti al F}{\partial K}\left(\frac{K}{AL},1\right)\cdot\frac{\partial}{ \partial K}\left(\frac{K}{AL}\right)=AL\frac{\partial F}{\partial K}\left(\frac{K}{AL},1\right)\cdot\frac{1}{AL}.$ 3. Originally Posted by Scott H Looks like they used the Chain Rule: $\frac{\partial}{\partial K}\left(AL\cdot F\left(\frac{K}{AL},1\right)\right)=AL\frac{\parti al F}{\partial K}\left(\frac{K}{AL},1\right)\cdot\frac{\partial}{ \partial K}\left(\frac{K}{AL}\right)=AL\frac{\partial F}{\partial K}\left(\frac{K}{AL},1\right)\cdot\frac{1}{AL}.$ That looks about right. A little strange to find the partial without knowing what the actual function is, but I guess it would work out if we plugged in the actual function? 4. Correct, as the Chain Rule applies to every function. 5. Originally Posted by Scott H Correct, as the Chain Rule applies to every function. Hmm, so say you discovered that the function is $Y=F(K,AL)=K^\alpha(AL)^{1-\alpha}$. Can we figure out what the partial with respect to K is from $f'(\frac{K}{AL})$? 6. Yes. Differentiating the function normally, we obtain $\frac{\partial}{\partial K}(K^{\alpha}(AL)^{1-\alpha})=\alpha K^{\alpha-1}(AL)^{1-\alpha}.$ Using the formula derived from the Chain Rule, we obtain $\begin{aligned}<br /> \frac{\partial}{\partial K}\left(AL\cdot F\left(\frac{K}{AL},1\right)\right)&=AL\frac{\part ial F}{\partial K}\left(\frac{K}{AL},1\right)\cdot\frac{1}{AL}\\<br /> &=AL\alpha\left(\frac{K}{AL}\right)^{\alpha-1}(1)^{1-\alpha}\cdot\frac{1}{AL}\\<br /> &=\frac{\alpha K^{\alpha-1}}{(AL)^{\alpha-1}}\\<br /> &=\alpha K^{\alpha-1}(AL)^{1-\alpha}.<br /> \end{aligned}<br />$ 7. That's great, thanks for your help!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9078240394592285, "perplexity_flag": "middle"}
http://www.unitaryflow.com/2010/04/vec-bund-fundam-physics.html
## Thursday, April 22, 2010 ### Are vector bundles fundamental in Physics? Vector bundles and gauge theory The idea in Gauge Theory is that the fields of the known forces can be expressed starting with some principal bundles and their associated vector bundles. To be more precise, let's consider Maxwell's electromagnetic field $F_{ab}$. It can be represented with the help of a principal bundle of group $U(1)$, and a connection on this bundle. The connection corresponds to the electromagnetic potential, and the curvature to the electromagnetic field. It is known that we can modify the potential to $A_a(t,x)\mapsto A_a(t,x) + \partial_a \theta(t,x)$, and obtain the same $F_{ab}$. In terms of bundles, this transformation corresponds to a gauge transformation of $\mathbb C$ by the action of $e^{i\theta(t,x)}$. The connection will appear to depend on the gauge, but the curvature is gauge invariant. A bundle is just another manifold Both principal bundles and vector bundles are differential manifolds (that is, topological spaces which looks locally, from topological viewpoint, like a vector space with with a fixed number of dimensions, and on which we can define partial derivatives). A fiber bundle over spacetime looks locally like the cartesian product between the spacetime and a fixed manifold named fiber. For the vector bundles the fiber is a vector space, for the principal bundle it is a Lie group. The $U(1)$ bundle looks locally like a cartesian product between the spacetime and a circle. This space is 5-dimensional, and it was used by Kaluza and Klein in their attempt to unify electromagnetism with gravity by using a 5-dimensional version of general relativity. After the electromagnetic force was understood as a gauge field, Yang and Mills provided a generalization which allowed us to see as gauge fields also the strong and electroweak forces. It seemed as easy as replacing the $U(1)$ group with a non-abelian group like $U(2)$ for the electroweak force, and $SU(3)$ for the color force. New bundles resulted, and they can be viewed as well as spacetimes with more dimensions, from which some are compactified. The obvious problem with these extra dimensions is that we cannot "see" them. What explanation is that we cannot test? To avoid this questions, these dimensions are referred as corresponding to "internal spaces", and the Kaluza-Klein interpretation is in general avoided, being preferred that in terms of bundles. What is more fundamental, the field or the connection? It was believed that the potential is only a mathematical trick to simplify Maxwell's equations, and that it has no correspondent in reality. There are some reasons to change this view. One is, as I explained here, in chapter III., the following. Maxwell's equations contain constraints imposed on the field for equal time, that is, between the values of the field at spacelike separated points (Gauss' law). This may seem a little bit acausal, because requires the initial conditions at two spacelike separated points to be related. Of course, the separation between the two points is infinitesimal, but it still exists, and has non-local consequences. In terms of the potential, these constraints are no longer needed. If we consider the connection as fundamental, then the curvature will be a derived field. It will still obey Gauss' law, but this time just as a consequence of being associated to the connection, which is the true fundamental field. And the connection is not constrained. Taking a charged field, such as the Dirac electron field, under a gauge transformation it is multiplied by $e^{i\theta(t,x)}$. The Dirac-Maxwell equations maintain their form, if we apply the corresponding gauge transformation to the potential. This allows us to perform an experiment to see whether the potential is a real field, or just a mathematical trick. This experiment was imagined by Werner Ehrenberg and Raymond E. Siday, and Aharonov and Bohm, a decade later. It was verified experimentally by S. Olariu and I. Iovitzu Popescu, and confirmed two years later by Osakabe et. al.. Basically, this effect shows that the electromagnetic potential has a fundamental nature. But how can a potential be the fundamental quantity? Which potential, considering that there can be an infinity such choices, related by $A_a(t,x)\mapsto A_a(t,x) + \partial_a \theta(t,x)$? The only way known for this is if it represents a connection on a $U(1)$-bundle. This way, the potential is just the expression of the connection, in a particular frame on the bundle. Gauge transformations are just changes of that frame. The Aharonov-Bohm effect is interpreted topologically as an effect of the holonomy of a connection on this bundle (which is the electromagnetic potential). These properties are captured by Wilson’s loops. Are those "internal spaces" real? It is easy to check the number of dimensions of our space: it is the number of coordinates required to indicate the position of a point, that is, 3. The number of numbers needed to express a rotation, 3(3-1)/2=3, indicates also that we live in a 3-dimensional space. How can we check the extra, "internal" dimensions? We just count the numbers needed to represent them. Since the electromagnetic potential can be changed in a way indicating a rotation of a circle, we conclude that the internal space has one dimension. It is the same as in the case of the 3-dimensional space. The only difference is that we can actually move in this space, and this is why we consider it real. We cannot move in the internal dimensions. But can we, at least, send particles to move in those dimensions? In fact we can. The Aharonov-Bohm effect shows that we can rotate the wavefunction of an electron. We can compare the rotation of a part of the wavefunction of an electron with that of another part. To do this, we just make them interfere, and see the relative rotation between them. Isn't this remind us of comparing the speed of light in two arms of the Michaelson-Morley interferometer? Only that the Aharonov-Bohm effect succeded, and showed that there is an "internal rotation". Now, it is time to remember the notion of existence as it is used by mathematicians. Something exists from a mathematical viewpoint if it is logically consistent. The 5-dimensional spacetime (3+1+1) of the electromagnetism exists, in this respect. Did the rotation verified by the Aharonov-Bohm effect confirm its physical existence? In fact, we can take for the internal space, instead of a circle, the complex space $\mathbb C$. The group $U(1)$ acts as well on this space, and we can think that the physical spacetime is in fact 6-dimensional (3+1+2). What is the true number of dimensions? I would say that this number is given by the number of dimensions of the $U(1)$-bundle, that is, 3+1+1. And the internal space happens to be a circle because the $U(1)$ group itself is, topologically, a circle. It has one dimension too. And both the circle bundle and the $\mathbb C$ bundle are associated to this principal bundle, that is, they are obtained from representations of the $U(1)$ group. OK, so the space dimensions are more real for us, because we can move almost freely in these dimensions. Time is the fourth, at least mathematically, and some people can accept that it is the fourth physically too. They think that this is true, because of the great beauty and symmetry of the Lorentz group. But the internal dimensions, have they more than a mathematical existence? We can ask as well whether the three space dimensions are true or not. What if the real number of space dimensions is two, as the holographic principle suggests? Do we have a criterion to distinguish between real dimensions and simple mathematical constructions in physics? Can this criterion be the experiment? Reactions: #### No comments: Subscribe to: Post Comments (Atom) Search for Books: keyword / book title / author / ISBN ## Themes • Essence (2) • Geometry of Illusion (2) • Geometry of Physics (3) • Quantum Theory (4) • Singularities (3) • Smooth Quantum Mechanics (7) • Symmetry (1) • Time (5) ## Blog Archive • ►  2013 (1) • ►  April (1) • ►  2012 (3) • ►  September (1) • ►  April (1) • ►  January (1) • ►  2011 (5) • ►  August (1) • ►  May (1) • ►  February (3) • ►  2009 (7) • ►  June (1) • ►  February (2) • ►  January (4) • ►  2008 (6) • ►  December (4) • ►  October (1) • ►  September (1) Posts Posts Comments Comments ## Follow by Email Search for Books: keyword / book title / author / ISBN
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9268171787261963, "perplexity_flag": "head"}
http://quant.stackexchange.com/questions/3329/how-to-get-greeks-using-monte-carlo-for-arbitrary-option
# How to get greeks using Monte-Carlo for arbitrary option? Let's assume I have an arbitrary option that I can price using Monte-Carlo simulation. What is the general approach (i.e. without relying on specific option type) to calculating the greeks in this case? Edit: I woud like to add a few links on the topic that I found useful: - That vibrato Monte Carlo thesis is very interesting. – Brian B Apr 27 '12 at 15:00 ## 4 Answers You need to compute your greeks as finite differences, but the full procedure may be pretty tricky. I will use vega $\aleph$ as the example here. Let's begin by designating your Monte Carlo estimator as a function $V(\sigma,s,M)$ where $\sigma$ is the volatility as usual, $s$ is the seed to your random number generator, and $M$ is the sample count. To begin with, recall that the Monte Carlo estimate of any value converges with the square root of the sample count. In particular, if you choose, say, $M=100$, you can run your estimator $N=500$ times to get estimate $\{V_n\}_{i=1}^{500}$, obtaining the standard deviation $\Sigma_{100}$ of those estimates. Having done this, we now know the standard error of the estimator for any $M$ to be $$e_M \approx \Sigma_{100} \sqrt{\frac{100}{M}}$$ There are three possible cases: 1. You can control the random seed $s$, or the set of random samples, used by the Monte Carlo estimator 2. You cannot control $s$. 3. You cannot even control the sample count $M$. In the first case, you can use the fact that $s$ has been controlled to get a reasonable estimate of vega with relatively little extra work. Find an $M$ such that the error in option price $e_M$ is tolerable. Choose a seed $s_0$ and a small increment $\Delta\sigma$ in the volatility, and compute $$\aleph^{(1)} = \frac{V(\sigma+\Delta\sigma,s_0,M)-V(\sigma-\Delta\sigma,s_0,M)}{2\Delta\sigma}$$ and $$\aleph^{(2)} = \frac{V(\sigma+\frac12\Delta\sigma,s_0,M) - V(\sigma-\frac12\Delta\sigma,s_0,M)}{\Delta\sigma}$$ If $\aleph^{(1)} \approx \aleph^{(2)}$ then you have a good estimate and you are done. The reason this works so nicely is that, by controlling the seed, our difference computations $$\delta=V(\sigma+\Delta\sigma,s_0,M)-V(\sigma-\Delta\sigma,s_0,M)$$ are direct Monte Carlo estimators of the vega, since the shared seed implies the samples $x_i$ match in the difference of sums. That is $$\delta = ( \frac1M \sum_{i=1}^M f(x_i, \sigma+\Delta\sigma) ) -( \frac1M \sum_{i=1}^M f(x_i, \sigma-\Delta\sigma) ) \\ =\frac1M \sum_{i=1}^M f(x_i, \sigma+\Delta\sigma)-f(x_i, \sigma-\Delta\sigma)$$ The second case where you cannot control the seed, on the other hand, is rather more difficult. Here, you will have a different error $e$ to the true value every time you run the function. For brevity, let's let $$e_\pm = V(\sigma\pm \Delta\sigma,s_\pm,M).$$ Of course we do not know the value of $e_\pm$ or of $s_\pm$, but we do at least have our estimate of the size of $e_\pm$ as noted above. Therefore, the error in $\delta$ is approximately $e_M \sqrt{2}$. You need to choose $M$ so large that $$\delta \gg e_M \sqrt{2}.$$ Not knowing the value of $\delta$ a priori makes this difficult, but usually in a trading context one can specify an acceptable absolute error $\epsilon$ in vega. In that case, we can demand $$\epsilon < \frac{e_M \sqrt{2}}{\Delta\sigma}$$ which translates to $$M > \Sigma_{100}^2 {\frac{200}{\epsilon^2 \Delta\sigma^2}}.$$ The third case, where you can control neither the random seed $s$ nor the sample count $M$ should be treated as the second case above. You simply treat each run of the algorithm as a single sample. - The most general answer is to shift your input to approximate the first derivative. Given that you need Monte Carlo to price this, it may get expensive. But that's the way it goes as when you have no analytical solutions as there aint't no free lunch ... - Numerical derivatives are iffy business, but I agree that it seems to be your best choice. As you probably know; be aware of the how the precision decreases quickly(!) as higher orders are measures. – AdAbsurdum Apr 24 '12 at 7:34 Download the code from http://fmsoption.codeplex.com to see how to do that for vanilla options. You are right, you need implementations for transcendental functions that are written for dual numbers. You will find them in the fmsdual project. If you just want browse some source code, see http://fmsoption.codeplex.com/SourceControl/changeset/view/10924#145366. Note that `eps` is machine epsilon ~= 2e-16. (!) - 1 For the curious, the "dual number" approach referenced here is an automatic differentiation package, where the bookkeeping mostly handled by C++ templates and extensions to the standard numeric types. Derivatives of transcendental functions are handled by automatic differentiation of the numerical analytic series approximations used to calculate function values. (Please correct any mistakes I have made in that) – Brian B May 10 '12 at 14:12 I beleive AD refers to techniques for automatically generating functions for the derivatives. Dual numbers don't do that for you. – Keith A. Lewis May 11 '12 at 14:05 Another way to do this is to use dual numbers. http://fmsdual.codeplex.com. They let you calculate an arbitrary number of derivatives while running a single Monte Carlo. Here is an example of how to use it: ````// Monte Carlo derivatives void fms_test_monte(size_t N) { ::srand(static_cast<unsigned int>(::time(0))); double a = 0.5; dual::number<double,3> A(a, 1); dual::number<double,3> E(0.,1); for (int i = 0; i < N; ++i) { double x = 1.0*rand()/RAND_MAX; E = E + (x - A)*(x - A); } E = E/(1.*N); // X uniform [0,1] // E(X - a)^2 = 1/3 - 2a 1/2 + a^2 ensure (fabs(E._(0) - (1./3 - a + a*a)) < sqrt(1./N)); // d/da E(X - a)^2 = -2 E(X - a) = 2a - 1 ensure (fabs(E._(1) - (2*a - 1)) < sqrt(1./N)); // d^2/da^2 E(X - a)^2 = 2 ensure (fabs(E._(2) - 2) < sqrt(1./N)); } ```` - A novel method. It appears mathematically equivalent to finite differences. Also worth noting is that all samples need to remain in memory, at least if I read this stuff right. It would be informative to see code applied to a nontrivial estimator function. – Brian B Apr 27 '12 at 15:36 You are reading it wrong. Not all samples need to remain in memory. It is not mathematically equivalent to finite differences, you seem to be completely missing the point of dual numbers. You have the source code, let me know if you need help applying it to a non-trivial estimator. Monte Carlo aside, dual numbers allow you to calculate derivatives down to machine precision. – Keith A. Lewis May 7 '12 at 9:56 You're right, I now see it is not finite differences. I would find the simplicity of your example far more convincing if you demonstrated calculating, say, delta of an average strike option under the the Black-Scholes stochastic model. It seems to me the transcendental functions involved make this difficult even for vanilla options. Path dependencies will make the problem much worse. – Brian B May 7 '12 at 16:24 I'll also note that, as @Dirk and I read the question, Alexey does not necessarily have the source code to the option pricer. – Brian B May 7 '12 at 16:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 31, "mathjax_display_tex": 9, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8947598934173584, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/15471/sum-of-rational-numbers/15472
# Sum of rational numbers The sum of a finite number of rational numbers is of course a rational number, but the sum of an infinite number of rational numbers might be an irrational number. Can someone give me some intuition why this sum might be irrational? I just "don't feel it." - 1 As Hans observes below, any real number is an infinite sum of rationals, so if you have any intuition for why there are irrational real numbers, there you go. I feel like adding: it is OK not to have intuition about infinite sums. (Maybe even that it is counterproductive to have expectations about what infinite sums ought to do, or ought not to do, until you have a great deal of experience with them.) A lot of stuff about infinite sums cannot be "felt" at first; only learned. If you "don't feel it", that might even be a sign that you understand it better than someone who does :) – anon Dec 24 '10 at 23:05 Now I most certainly fell it. To be honest I read that sum of rational numbers might be irrational yesterday (I havent thought about it earlier). The example that I found was that $\sum\limits_{i=1}^{\infty}\frac{1}{F_{i}}$, where F means fibonacci number, is irrational. From this example it wasn't clear for me why is it true, but the 'decimal' example that Trevor wrote is convincing. – Tomek Tarczynski Dec 24 '10 at 23:26 But still some things about rational and irrational numbers are not so obvious. For example: There is a rational number between every two irrational numbers, so how is it possible that there is 'so much more' irrational numbers than rational. – Tomek Tarczynski Dec 24 '10 at 23:30 Infact, an irrational number or a real number is defined as a limit of a sequence of rational numbers. – user17762 Jan 8 '11 at 22:23 ## 5 Answers Look at this. So: $3.14159 \dots = 3 + \frac{1}{10} + \frac{4}{10^2} +\frac{1}{10^3} + \frac{5}{10^4}+ \frac{9}{10^5} + \dots$ The above expression is $\pi$. - The problem is psychological: you think of the "infinite sum" of rational numbers as an obvious, intuitive concept, but it's not. It has a precise mathematical meaning, and that precise mathematical meaning only works if you allow the sums to be real numbers (which themselves have a precise mathematical meaning). The definitions which allow these "infinite sums" to make sense are much less trivial than someone who's never worked through them would think. - Any irrational number $x$ is the limit of a sequence of rational numbers $a_n$ (take for example the decimal expansion truncated after $n$ decimals, for $n=1,2,3,\dots$). Then $$x = a_1 + (a_2-a_1) + (a_3-a_2) + \dots$$ is a sum of rational numbers, but it is irrational by construction. - An irrational number is a "gap" inside the rationals. The decimal expansion of any irrational number is an infinite sum converging to it. It gives better and better approximations to the irrational number, which is unfortunately just not "there". There are many more examples of such limiting constructions. Sometimes you get new objects and sometimes you don't. An example is the delta function which can be approximated using bona fide functions. So there are functions arbitrarily "close" (in some sense) to the delta function, but the delta function itself is just "too good" to be an actual function. As an aside, if these approximations are good enough, you can prove that the number is transcendental (Legendre's or Roth's theorem). - Dear I am thinking about it because I am also a student of Mathematics. We know that $e$ is an irrational number. The value of $e$ is $$= 1 + \frac{1}{1} + \frac{1}{2}! + \frac{1}{3}! + \ldots$$ $$= 1 + \frac{1}{1} + \frac{1}{2} + \frac{1}{6} + \frac{1}{24} + \ldots$$ $$= 2.7182 \ldots$$ (Irrational Number) So sum of infinite rational numbers may be irrational. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9447548985481262, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/83132/list
## Return to Answer 1 [made Community Wiki] The skills that students are practicing in related rates problems are: 1. Differentiating a known equation implicitly with respect to time. 2. Interpreting the time derivative of a quantity as a rate of change. The main reason that related rates problems feel so contrived is that calculus books do not want to assume that the students are familiar with any of the equations of science or economics. Every related rates problem inherently involves differentiating a known equation, and the only equations that the calculus book assumes are the equations of geometry. Thus, you can find related rates problems involving various area and volume formulas, related rates problems involving the Pythagorean Theorem or similar triangles, related rates problems involving triangle trigonometry, and so forth. A few of these problems are compelling -- for example, computing the speed of an airplane based on ground observations of its altitude and apparent angular velocity -- but most of them do feel a bit contrived. The reality, of course, is that students are familiar with many of the basic equations and concepts of science and economics, and there's no rule against using these in problems. For example, you can make up all sorts of compelling related rates problems by starting with any physics or chemistry equation and imagining a situation where you might want to take its derivative: 1. The kinetic energy of an object is $K = \frac{1}{2}mv^2$. If the object is accelerating at a rate of $9.8 \text{m}/\text{s}^2$, how fast is the kinetic energy increasing when the speed is $30 \;\text{m}/\text{s}$? 2. An ideal gas satisfies $PV = nRT$, where $n$ is the number of moles and $R \approx 8.314\;\; \text{J}\; \text{mol}^{-1} \text{K}^{-1}$. Give the rate at which the temperature and volume of the gas are increasing, and then ask about the rate of change in pressure when the volume and temperature reach certain amounts. 3. The total energy stored in a capacitor is $\frac{1}{2} Q^2 / C$, where $Q$ is the amount of charge stored in the capacitor and $C$ is the capacitance. Give the value of $C$ and the rate at which $Q$ is decreasing, and ask about the rate at which the capacitor is losing energy when the energy is a certain amount. 4. In astronomy, the absolute magnitude $M$ of a star is related to its luminosity $L$ by the formula $$M \;=\; M_{\text{sun}} -\; 2.5\; \log_{10}(L/L_{\text{sun}}).$$ where $M_{\text{sun}} = 4.75$ and $L_{\text{sun}} = 3.839 \times 10^{26} \text{watts}$. (Note that, by convention, brighter stars have lower magnitude.) If the absolute magnitude of a variable star is decreasing at a rate of $0.09 / \text{week}$, how quickly is the luminosity of the star increasing when the magnitude is $3.8$? It's easy to make these up: just think of any equation in science or economics whose derivative might be interesting. Wikipedia and/or science textbooks can be helpful for finding equations from a wide variety of fields.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9454770088195801, "perplexity_flag": "head"}
http://mathhelpforum.com/differential-equations/64534-ordinary-differential-equation.html
# Thread: 1. ## Ordinary Differential equation A uniform chain of total length 8 metres, settle 1 metre above the ledge and 2 metres hang on the other side. If x represents the length. The motion of the chain is $(x+1)v(dv/dx) + v^2 = (x-1)g$ where v is velocity and g is a gravitational constant show that by making the substitution $u = v^2$ we obtain ${du}/{dx} + (^{2}/_{x+1})u=2(^{x-1}/_{x+2})g$ now do i have to integrate this in order to subsitute? any help on this would be appreciated as i have no idea where to start cheers people!. 2. Why not use an integrating factor? $\phi (x) = e^{\int p(x)dx}$ $\phi (x) u(x) = \int \phi (x) r(x) dx$ Where $\frac{du}{dx} + p(x)u = r(x)$ Is the format. Sorry, the above is what you would do to solve after substitution. You don't need to integrate BEFORE you substitute. I'm fairly sure your of the latter equation should be $2 (\frac{x-1}{x+1})g$, no? If that's the case: Divide through by $x+1$ After that apply the following considerations: $u = v^2$ $\frac{du}{dv} = 2v$ $vdv = \frac{1}{2}du$ $\frac{dv}{dx}v = \frac{1}{2}\frac{du}{dx}$ Should help! 3. ## thanks brilliant cheers got it!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9262871146202087, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?t=37947&page=10
Physics Forums Thread Closed Page 10 of 13 « First < 7 8 9 10 11 12 13 > ## Work equation? Yes, you are right; I do know that force does not equal energy, but force is related to energy. In order for a force to be applied, there is a needed energy source. Has my question been answered? No. Quote by Alkatran This means that the magnet is transferring the force of gravity into the fridge, which is pushing on the ground. Since the fridge is pushing on the ground the ground is pushing back on the fridge. Since when did you refer to gravity making things push? I think it should be the ground is pulling the fridge, and the fridge is pulling the ground. Yea, doesn't make much sense putting it in the pull form. So, you are saying that magnets are conductors for gravitational fields? One problem I see in this. If gravity is lending this force, it's basically unlimited, because that magnet sits there until a force is pulling it away. This "unlimited" amount of force this gravity is providing in order ot keep the net force 0 requires an energy source of unlimited energy, seeing that force is related to energy. This energy seems as though it is created on the spot as a constant supply to the magnet. Give me your arguement on this so I can improvise mine. I'm not able to make a direct arguement based on what you have wrote, yet. So, I'm waiting. I have to go somewhere right now, so if I don't reply, don't think it's because I don't have a plausible answer. Nereid, everyone is entitled to their own opinion. Whether it be right or wrong. Recognitions: Homework Help Science Advisor Quote by urtalkinstupid Yes, you are right; I do know that force does not equal energy, but force is related to energy. In order for a force to be applied, there is a needed energy source. You don't need a change in energy for a force, because two forces can cancel each other out (so no energy change). Perhaps an energy source, I don't really know, above my level. Has my question been answered? No. Quote by urtalkinstupid Since when did you refer to gravity making things push? I think it should be the ground is pulling the fridge, and the fridge is pulling the ground. Yea, doesn't make much sense putting it in the pull form. So, you are saying that magnets are conductors for gravitational fields? One problem I see in this. If gravity is lending this force, it's basically unlimited, because that magnet sits there until a force is pulling it away. This "unlimited" amount of force this gravity is providing in order ot keep the net force 0 requires an energy source of unlimited energy, seeing that force is related to energy. This energy seems as though it is created on the spot as a constant supply to the magnet. Gravity is pulling the fridge into the ground, so the fridge is pushed/pulled against the ground, and the ground is pushing back. THEY'RE JUST WORDS. Quote by urtalkinstupid Give me your arguement on this so I can improvise mine. I'm not able to make a direct arguement based on what you have wrote, yet. So, I'm waiting. I have to go somewhere right now, so if I don't reply, don't think it's because I don't have a plausible answer. AKA I can't come up with something to argue about. Recognitions: Gold Member Homework Help Science Advisor Staff Emeritus Quote by urtalkinstupid Nereid, everyone is entitled to their own opinion. Whether it be right or wrong. Yes, but there is no entitlement to waste bandwidth at this (privately owned) website. When I locked that other thread and advised you all that PF is not a chatroom for children, I was specifically thinking of both yourself and beatrix kiddo. Recognitions: Gold Member Science Advisor Staff Emeritus Quote by urtalkinstupid This force needs a soucre. That source is energy Let's do a thought experiment. Say you have two walls facing each other on opposite sides of your room. You put a hook on each wall. You then take a piece of rope and tie the two hooks together. You exert some energy making the rope as taut as you possibly can. You crank it down and tie a strong knot in it. The rope now has tension; it is pulling the two walls together. The walls are strong, however, and don't move. The tension in the rope will be the same tomorrow or in the year 3000 as it is today, as will the forces on the walls. It certainly took energy to tighten the rope in the first place, but it doesn't require any energy to keep it taut. If you assert that the rope requires energy to stay taut, where does this energy come from? Why does the rope use energy when it's taut, but not when it's just laying on the floor? If the rope uses an exhaustible source of energy to stay taut, what happens when that energy source runs out? Does the rope somehow untie itself and fall off the hooks? Does it stay the same length but magically just stop pulling on the walls? Does it turn into soup and drip onto the ground? (according to the Standard-Model equations)... $$F=a\frac{E}{c^2}$$ This equation does not say what you think it says. You think it says that force requires a source of energy, presumably just because F appears on the left and E on the right. This is not sound reasoning. It's like saying that voltage requires a "source of current" because V = IR has voltage on the left and current on the right. What you're doing is simply expressing a relationship between these quantities. Of course, E/c^2 is just the mass, so your equation is really just F=ma, or Newton's second law of motion. Forces and accelerations are related by mass. Mass and energy are related through c. Thus you can say that "force and energy are related through acceleration and c," but you're not saying anything new or novel. You're certainly not saying forces require sources of energy. So, go out, pull something, and tell me if you get tired or not. You act the same way as gravity does. No, you don't. We've already explained to you that the human body is a complex machine, with individual muscle fibers contracting and then relaxing. You already wowed us with your high-school biology curriculum. We've already been over this. If your muscle fibers could contract and then simply stay locked in that position, you'd never get tired. They don't do that, though. In order for the earth to keep the moon in orbit, there would have to be an unlimited amount of energy. Gravity is a force, where does the force of attraction get its energy from? You can keep saying it, but it's still wrong. - Warren Recognitions: Gold Member Science Advisor Staff Emeritus Quote by urtalkinstupid Nereid, everyone is entitled to their own opinion. Whether it be right or wrong. And indeed they are (I don't think I said otherwise, did I?). Since you did not answer my question, let me try to ask it in another way (perhaps you didn't understand my question): PF is a forum for the discussion of physics, and other sciences. One of the cornerstones of science today is, in simple terms, the scientific method (please let me know if you are unfamiliar with what this is). Since PF is about science, I personally expect that everyone who posts to the science threads in PF - and that includes Theory Development - has at least the intention of respecting the scientific method. If a person has issues with the scientific method, then PF has a section where folk may discuss and debate that very topic. When I read your posts, you appear (to me) to disparage the scientific method, and to consider it unworthy of your time to learn about it (which may explain why you don't appear to be interested to discuss the nature of science, in the Philosophy of Science and Mathematics section for example). A good example of what I mean is your apparent unwillingness to accept or consider scientific method-based questions and critiques of your own ideas. To ask again: why are you here? Quote by Alkatran Perhaps an energy source, I don't really know, above my level. That's what I'm trying to say. Quote by chroot This equation does not say what you think it says. You think it says that force requires a source of energy, presumably just because F appears on the left and E on the right. This is not sound reasoning. It's like saying that voltage requires a "source of current" because V = IR has voltage on the left and current on the right. What you're doing is simply expressing a relationship between these quantities. Of course, E/c^2 is just the mass, so your equation is really just F=ma, or Newton's second law of motion. Forces and accelerations are related by mass. Mass and energy are related through c. Thus you can say that "force and energy are related through acceleration and c," but you're not saying anything new or novel. You're certainly not saying forces require sources of energy. That equation relates energy and mass to force. There are two types of forces: those that arise from mass and those that arise from energy. Energy forces are the kinds that work at a distance. I.E. Earth-Moon system, because that is a lot of force (energy) to keep moon in orbit. The space in between them is said to be the force of attraction. This has to be energy, there is no mass to constitute the force in between them. Ok, so it does not require an energy source, but an energy source would better explain how the attraction works. New or novel, nice job on being redundant. It takes energy to push or pull for anything. This energy is directed through a force. Mass is just a compact form of energy; I'm sure you all know that. Ok, new analogy. You weigh a certain amount of Newtons. Gravity pulls on you that exact force, thus cancelling it, right? You go up to a box. The box weighs 20N and, you push with 20N. The forces cancel out, thus making you unable to push the box. Now, you pull on the box with 30N. Not only are you moving the box, but you are also doing work. You are the only thing that is losing energy, not the box. How can the box not lose energy? You go and wrestle with a friend. You both pull each other with 20N of force; you two don't move. One pulls the other with 30N while the other with 20. You both get tired in this situation. It requires an energy for BOTH sources to keep on doing it. Yes, the human body is copmlicated, but the overall outcome is that your body takes a mass and converts it to energy to be used as the force applier. Everything needs some type of source, whether it be mass or energy, to apply a continous source. If they apply a continous force forever, this requires an unlimited source. chroot, AP is college-level classes. So, get it right. Recognitions: Gold Member Science Advisor Staff Emeritus urtalkinstupid, I asked you some specific questions. So did Nereid. Why are you not answering them? Quote by urtalkinstupid There are two types of forces: those that arise from mass and those that arise from energy. And once again, this is nothing but abject speculation. Ok, so it does not require an energy source And thus falls this new theory of yours, just like the last one. - Warren Mentor Blog Entries: 1 Quote by urtalkinstupid You weigh a certain amount of Newtons. So far, so good. Gravity pulls on you that exact force, thus cancelling it, right? Huh? The pull of gravity is your weight. Are you saying gravity cancels itself? You go up to a box. The box weighs 20N and, you push with 20N. I assume you mean lift with 20N? The forces cancel out, thus making you unable to push the box. It would require a slight bit of extra force to accelerate the box from rest. Now, you pull on the box with 30N. Not only are you moving the box, but you are also doing work. I assume you mean that you exert an upward force of 30N on the box. It will accelerate. And yes you are doing work on the box. You are the only thing that is losing energy, not the box. You are converting chemical energy into heat and mechanical energy, some of which you are transfering to the box. How can the box not lose energy? Huh? The box gains energy. You go and wrestle with a friend. You both pull each other with 20N of force; you two don't move. I hope you realize that you always exert the same force on each other (assuming an ideal rope): that's Newton's 3rd law. Whether you accelerate or not depends on the net force on you. The rope pulling on you is just one force. The ground also exerts a force on you. One pulls the other with 30N while the other with 20. LOL... can't happen. You both get tired in this situation. It requires an energy for BOTH sources to keep on doing it. Yes, the human body is copmlicated, but the overall outcome is that your body takes a mass and converts it to energy to be used as the force applier. The reason why it takes energy for you to exert a force is not because "forces require energy", but because exerting a force involves your muscles in continual movement, contracting and relaxing. You are a biological system, not an inanimate object. Everything needs some type of source, whether it be mass or energy, to apply a continous source. If they apply a continous force forever, this requires an unlimited source. Nonsense. chroot, AP is college-level classes. So, get it right. I trust you're not taking AP physics! Sorry, I didn't see your questions chroot. Quote by chroot If you assert that the rope requires energy to stay taut, where does this energy come from? Why does the rope use energy when it's taut, but not when it's just laying on the floor? Ok, so [itex]F=ma[/tex], and we all know mass is related to energy. Mass is a compact form of energy, thus giving energy the greater quantity. It takes an emmence amount of energy to compose mass, it takes much more energy to make a suitable force between two objects. (or in this case three). Take away gravity and frictional forces, what are you left with? A loosely fit rope between two walls. It is no longer taut. No longer is energy acting through force on the objects. This energy arises between the forces that are applied. It's source?...I don't know. It's not my case to state that. That's simply my question that I'm asking you people. The rope uses energy when it is on the floor. It is held down by gravity, this is a force, and it is in the form of energy. Quote by chroot If the rope uses an exhaustible source of energy to stay taut, what happens when that energy source runs out? Does the rope somehow untie itself and fall off the hooks? Does it stay the same length but magically just stop pulling on the walls? Does it turn into soup and drip onto the ground? I stated above, "Take away gravity and frictional forces, what are you left with?" You take away forces, and the energy that keeps the rope taut is gone. It doesn't untie itself, it simply gets loose, allowing the walls to move in or accelerate in one directionas a system of the two walls and rope. Wall and rope soup...Sounds like the soup of the day. Quote by Nereid And indeed they are (I don't think I said otherwise, did I?). No, but I implied that you took it into assumption that your opinion was right. Otherwise you wouldn't question my presence on this forum. Quote by Nereid To ask again: why are you here? I'm here for the heck of it. I like this site, though I'm liked by very few...none. You people have actually inspired me to make a website based on the Standard odel. Isn't that exciting. A site made by me with no absurd theories! Perhaps, I will understand the Standard Model more?? Maybe, I'm here to play as the devil's advocate. Just to spur up debates. Who knows? Quote by Doc Al Huh? The pull of gravity is your weight. Are you saying gravity cancels itself? Sorry, poorly worded. What I meant was gravity is what holds you down to the Earth's surface. Not what I said. Told you guys I'm bad at wording, heh. Quote by Doc Al I assume you mean lift with 20N? No, I actually meant what I said, this time. Lift makes a better scenario though. Doc Al, you are cool unlike others. Quote by Doc Al It would require a slight bit of extra force to accelerate the box from rest. I'm aware of that; I added that in there for clarity. As you noted in the progression of this scenario. Quote by Doc Al Huh? The box gains energy You said it yourself: Quote by Doc Al You are converting chemical energy into heat and mechanical energy, some of which you are transfering to the box. Quote by Doc Al LOL... can't happen. It can. If one is more powerful than the other, one pulls with more force. Just like lifting a box. If you lift with more force than the box has, you overcome its force. Heh, I'm taking AP Physics B. Doc Al, at least you aren't mean like the others. Recognitions: Gold Member Science Advisor Staff Emeritus Quote by urtalkinstupid The rope uses energy when it is on the floor. It is held down by gravity, this is a force, and it is in the form of energy. Then you're saying the rope uses energy in being acted upon gravitationally, and it also uses energy in being held taut. This means that the taut rope is actually using more energy than the rope on the ground, since the taut rope is having to expend energy both in having weight and in being taut. If the rope is using more energy, shouldn't it run out of that energy more quickly? If so, you have a clear experiment that can be done to test your theory. It doesn't untie itself, it simply gets loose The tension in the rope is maintained via intermolecular bonds. The atoms in the rope are bound together chemically. If this rope is to just suddenly run out of energy, give up and go limp, it must actually break chemical bonds to do so. This means that the rope, after giving up, will be fundamentally different from the original rope. Since it ran out of energy, you should now be able to do all sorts of paradoxical things with it. For example, tie that piece of rope between two tractors and have them pull against it. If the rope is no longer capable of supporting tension (it ran out of energy to do so) then it will simply stretch and stretch forever -- it can't exert any more forces, but it can't untie itself from the tractors either. It must just keep getting longer. This is the "rope soup" I was getting at. Now, people have been using ropes and building materials for a very long time. The Earth itself has been around for almost 5 billion years, and its crust still seems to have the energy required to exert a force on me to keep me from falling through it. If this phenomenon (materials running out of energy to exert forces) really happens, why have we never seen it anywhere in the entire universe? Maybe, I'm here to play as the devil's advocate. Just to spur up debates. Who knows? We do not welcome such people here. - Warren Recognitions: Gold Member Science Advisor Staff Emeritus Quote by urtalkinstupid No, but I implied that you took it into assumption that your opinion was right. Otherwise you wouldn't question my presence on this forum. There you go again, making unwarranted assumptions I'm here for the heck of it. I like this site, though I'm liked by very few...none. You people have actually inspired me to make a website based on the Standard odel. Isn't that exciting. A site made by me with no absurd theories! Perhaps, I will understand the Standard Model more?? Maybe, I'm here to play as the devil's advocate. Just to spur up debates. Who knows? Do you consider PF to be a site where physics (and other sciences) is discussed, as science? Do you recognise that discussion of physics, as a science, should be conducted on its own terms? In case this isn't clear, let me give you an analogy: if we are having a discussion on apple pie in the context of cooking, recipes and so forth, I personally would not consider it appropriate to talk about sexual fantasies concerning apple pies in that discussion, or whether the Sun is powered by a giant apple pie. urtalkinstudid, just so that you don't make any further unwarranted assumptions, let me be clear as to my intention: I think the evidence is overwhelming that you are a troll, and so feel that you should be immediately banned from PF. However, I first want to make sure that you really do understand what PF is and what it's trying to do. (for the avoidance of doubt, I personally have no power to ban anyone) Mentor Blog Entries: 1 Quote by urtalkinstupid It can. If one is more powerful than the other, one pulls with more force. Just like lifting a box. If you lift with more force than the box has, you overcome its force. Two very different situations: (1) Two guys yanking on a rope: the force they exert is always the same. Or: You and superman are arm-wrestling: I don't care how strong he is, whatever force he exerts on you will exactly equal the force that you exert on him. Note that these forces are on different objects, so they don't "cancel". This is Newton's 3rd law: learn it. (2) Lifting a box. The acceleration of the box depends on the total force on the box. You lift with 30N, gravity pulls with 20N, so the box accelerates. This is Newton's 2nd law: learn it. Heh, I'm taking AP Physics B. Then you'd better learn about Newton's laws before that class starts! Doc Al, at least you aren't mean like the others. Give it time. Recognitions: Homework Help Science Advisor Quote by urtalkinstupid Ok, so [itex]F=ma[/tex], and we all know mass is related to energy. Mass is a compact form of energy, thus giving energy the greater quantity. It takes an emmence amount of energy to compose mass, it takes much more energy to make a suitable force between two objects. It doesn't take energy to make a force, we've already told you this. A force isn't energy either, unless it's over a distance. It's like using a charge to make a distance, makes no sense. Quote by urtalkinstupid Take away gravity and frictional forces, what are you left with? A loosely fit rope between two walls. It is no longer taut. No longer is energy acting through force on the objects. This energy arises between the forces that are applied. It's source?...I don't know. It's not my case to state that. That's simply my question that I'm asking you people. The rope uses energy when it is on the floor. It is held down by gravity, this is a force, and it is in the form of energy. Same argument as above. Your posts are so full of BS it's scary. Quote by urtalkinstupid I stated above, "Take away gravity and frictional forces, what are you left with?" You take away forces, and the energy that keeps the rope taut is gone. It doesn't untie itself, it simply gets loose, allowing the walls to move in or accelerate in one directionas a system of the two walls and rope. Wall and rope soup...Sounds like the soup of the day. Stop trying to argue by being clever (soup of the day), it won't work here and should only be done when you're actually making a valid point. I refer you to Chroot's post about the rope stretching forever. Quote by urtalkinstupid No, but I implied that you took it into assumption that your opinion was right. Otherwise you wouldn't question my presence on this forum. If you have an opinion you MUST think it's right. That's what an opinion is. I'm more for the ban every post. this is so ridiculous.. if anyone should be banned, its people who aren't questioning the current model. stupid is just pointing out what he thinks provides evidence for his case. just because u don't agree with it doesn't mean u have the right to ban him. this is TD and criticism is welcome, but to the point where someone gets banned, especially if they aren't saying anything vulgar, is crossing the line. are u afraid this is going to be another neutrino debate, soon? i was actually hoping for it, with the exclusion of another whack ultimatum... It doesn't take energy to make a force but mass is energy and it takes mass to make force... Your posts are so full of BS it's scary. well help to eliminate the bull-**** and answer the question... If you have an opinion you MUST think it's right. duh.. but doesn't mean it is right... Thread Closed Page 10 of 13 « First < 7 8 9 10 11 12 13 > Thread Tools | | | | |-------------------------------------|-------------------------------|---------| | Similar Threads for: Work equation? | | | | Thread | Forum | Replies | | | Introductory Physics Homework | 1 | | | Introductory Physics Homework | 5 | | | Introductory Physics Homework | 5 | | | Differential Equations | 13 | | | General Physics | 1 |
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9675948023796082, "perplexity_flag": "middle"}
http://mathhelpforum.com/differential-geometry/157256-accumulation-points.html
# Thread: 1. ## accumulation points how can you prove this theorem from the book? A finite set has no accumulation points. I know that it's true, but fail to show it mathematically 2. Are you working in the real numbers? What ideas have you had so far? 3. Originally Posted by EmmWalfer how can you prove this theorem from the book? A finite set has no accumulation points. I know that it's true, but fail to show it mathematically Well, if you construct an epsilon neighborhood around a proposed limit point x in your finite set, then the intersection of your set with the epsilon neighborhood of x must contain points in your finite set other than x for x to be a limit point (accumulation point). If you can show this doesn't hold, then you are done. 4. The definition of "accumulation point" is: p is an accumulation point of set A if and only if every neighborhood of p contains at least one point of A (other than p itself). Suppose A is finite. Then the set of all distances from p to points in A (other that p itself) is finite and so contains a smallest value. Take the radius of your neighborhood to be smaller than that value. (I notice now, that is pretty much what Danneedshelp said!) 5. Originally Posted by HallsofIvy The definition of "accumulation point" is: p is an accumulation point of set A if and only if every neighborhood of p contains at least one point of A (other than p itself). Suppose A is finite. Then the set of all distances from p to points in A (other that p itself) is finite and so contains a smallest value. Take the radius of your neighborhood to be smaller than that value. (I notice now, that is pretty much what Danneedshelp said!) I am not a 100% this correct, but I recall a lemma from my intro to real analysis book that stated something along to lines of: a point $x$ is a limit point (accumulation point) of a set $A$ iff for every $\epsilon>0$, the neighborhood $N_{\epsilon}(x)$ contains infinitly many points of the set $A$. I think you would have to prove this to use it, but it answers your original question rather quickly.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9594689607620239, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?p=3781675
Physics Forums ## Why does the band gap exist? Hello, do you know why no electron can stay in the band gap? Is it impossible at every energy? Thank you! PhysOrg.com physics news on PhysOrg.com >> Promising doped zirconia>> Nanocrystals grow from liquid interface>> New insights into how materials transfer heat could lead to improved electronics Do you know no electron can stay between -13.6eV and -3.4eV energy levels in an isolated Hydrogen atom? Now, apply the analogy. Yes but why they cannot stay there? What does impede them to stay in that levels^ ## Why does the band gap exist? Bloch's theorem says that anytime you have a periodic potential (like in a lattice of a metal or semiconductor where the atoms are equally spaced apart), then the solutions to Schrodinger's Equation will be plane waves, i.e. $\psi$ ~ $e^{i kx}$ where k is the wave number. When you actually solve a particular problem, you will find certain restrictions on k, that is, you will find for certain values of k, no such plane wave solutions exist. Since k is related to the energy, then you also get restrictions on the energy. That is, for certain values of the energy, there will be no valid solutions to Shrodinger's Equation. These energy levels where no solution exists are referred to as Energy gaps, or Band gaps. The reason the electron can't be on one of these gaps is because there is no solution to Shrodinger's equation in these regions, hence they are forbidden. Recognitions: Science Advisor Because there are no energy levels for them to sit in. Energy levels are time-independent solutions to the Schrodinger equation. Claude. Thread Tools | | | | |---------------------------------------------------|------------------------------------|---------| | Similar Threads for: Why does the band gap exist? | | | | Thread | Forum | Replies | | | General Discussion | 31 | | | Atomic, Solid State, Comp. Physics | 10 | | | Atomic, Solid State, Comp. Physics | 3 | | | Advanced Physics Homework | 0 | | | Introductory Physics Homework | 14 |
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8904867768287659, "perplexity_flag": "middle"}
http://mathhelpforum.com/discrete-math/107790-set-union-its-finite-subsets.html
# Thread: 1. ## Is a set the union of its finite subsets? Hi: Let A be a set. Is A the union of all its finite subsets? By the way: how can I write a capital cursive in Latex? Regards and thanks for reading. Enrique. 2. I would say yes it is. A union is everything included in the selected sets. Here, there is only one set, and we are not excluding any part of it, so it is in Union of itself. I am no expert., just trying to help. 3. Originally Posted by ENRIQUESTEFANINI Hi: Let A be a set. Is A the union of all its finite subsets? By the way: how can I write a capital cursive in Latex? Regards and thanks for reading. Consider this very carefully. $\bigcup\limits_{n \in A} {\{ n\} } = ?$ [tex]A~,~\mathcal{A}~,~\mathbb{A}[/tex] gives $A~,~\mathcal{A}~,~\mathbb{A}$ 4. It was very kind of you. A short time after writing I saw the key was considering the singletons. Best regards, Enrique.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9514103531837463, "perplexity_flag": "middle"}
http://mathhelpforum.com/algebra/166260-simplifying-roots.html
# Thread: 1. ## Simplifying roots Alright, I've got a final in College Algebra tomorrow but can't figure out how my teacher got this answer. Here was the original problem (as best I can type it): 1 over the 5th root of 9 I am told to rationalize the denominator. I got to this: 5th root of 9 to the 4th power over 9 This works as the answer, but it can simplify even further apparently. I can't figure out how it simplified further into: 5th root of 27 over 3 Help? 2. Originally Posted by Kieth89 Alright, I've got a final in College Algebra tomorrow but can't figure out how my teacher got this answer. Here was the original problem (as best I can type it): 1 over the 5th root of 9 I am told to rationalize the denominator. I got to this: 5th root of 9 to the 4th power over 9 This works as the answer, but it can simplify even further apparently. I can't figure out how it simplified further into: 5th root of 27 over 3 Help? $\displaystyle \frac{\sqrt[5]{9^4}}{9} = \frac{\sqrt[5]{3^8}}{9} = \frac{\sqrt[5]{3^5 \cdot 3^3}}{9} = \frac{3\sqrt[5]{3^3}}{9} = \frac{\sqrt[5]{3^3}}{3}$ 3. How did you get the root sign to type 4. Originally Posted by Kieth89 Alright, I've got a final in College Algebra tomorrow but can't figure out how my teacher got this answer. Here was the original problem (as best I can type it): 1 over the 5th root of 9 I am told to rationalize the denominator. I got to this: 5th root of 9 to the 4th power over 9 This works as the answer, but it can simplify even further apparently. I can't figure out how it simplified further into: 5th root of 27 over 3 Help? $\displaystyle\frac{1}{\sqrt[5]{9}}=\frac{1}{\sqrt[5]{3^2}}=\frac{1}{3^{\frac{2}{5}}}=\frac{3^{\frac{3} {5}}}{3^{\frac{2}{5}}\;3^{\frac{3}{5}}}$ 5. Could you help me with this one, too? $\sqrt{(3+x)^2-(3-x)^2}$ Alright, I know that I can't take the things out of the root until they are being multiplied, so what I did at first was just what the problem says: $\sqrt{(3+x)(3+x)+(-3+x)(-3+x)} = \sqrt{9+6x+x^2+9-6x+x^2}<br /> = \sqrt{2x^2+18} =$ $\sqrt{2(X^2+9)} = ?$ I don't know if I messed up somewhere, or am not seeing the key, but I'm not getting any further than that. The correct answer is supposed to be $2\sqrt{3x}$ 6. ## another way by fractional exponents (probably not any easier tho) $<br /> \frac{1}{9^{\frac{1}{5}}} \rightarrow \frac{1}{3^{\frac{2}{5}}}<br />$ $<br /> \frac{1}{3^{\frac{2}{5}}}<br /> \times<br /> \frac<br /> {3^<br /> {\frac{3}{5}}<br /> }<br /> {3^<br /> {\frac{3}{5}}<br /> }<br /> \rightarrow<br /> \frac<br /> {3^{\frac{3}{5}}}<br /> {3^{\frac{5}{5}}}<br /> \rightarrow<br /> \frac<br /> {27^<br /> {\frac{1}{5}}<br /> }<br /> {3}<br />$ 7. $\displaystyle \sqrt{(3+x)^2 - (3 - x)^2} = \sqrt{9 + 6x + x^2 - (9 - 6x + x^2)}$ $\displaystyle = \sqrt{9 + 6x + x^2 - 9 + 6x - x^2}$ $\displaystyle = \sqrt{12x}$ $\displaystyle = \sqrt{4\cdot 3x}$ $\displaystyle =\sqrt{4}\cdot \sqrt{3x}$ $\displaystyle = 2\sqrt{3x}$ 8. Originally Posted by Prove It $\displaystyle \sqrt{(3+x)^2 - (3 - x)^2} = \sqrt{9 + 6x + x^2 - (9 - 6x + x^2)}$ $\displaystyle = \sqrt{9 + 6x + x^2 - 9 + 6x - x^2}$ $\displaystyle = \sqrt{12x}$ $\displaystyle = \sqrt{4\cdot 3x}$ $\displaystyle =\sqrt{4}\cdot \sqrt{3x}$ $\displaystyle = 2\sqrt{3x}$ So did distributing the minus first mess it all up for me? 9. $<br /> \sqrt{9 +6x+x^2-9+6x-x^2}<br /> \rightarrow<br /> \sqrt{12x}<br /> \rightarrow<br /> \sqrt{3}\sqrt{4}\sqrt{x}<br /> \rightarrow<br /> 2\sqrt{3x}<br />$ 10. Originally Posted by Kieth89 So did distributing the minus first mess it all up for me? order of operations, remember? exponents first, then multiply by -1 ... $-(3-x)^2 = -(9 - 6x + x^2) = -9 + 6x - x^2$ 11. Oh...it's always something simple.. Thanks for the help
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 22, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9698947668075562, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/77887/intersection-of-curves/77914
## Intersection of curves ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Let $f(x,y)=0$ and $g(x,y)=0$ be curves in $\mathbb R^2$. Assume that the origin $(0,0)\in \mathbb R^2$ is a $d$-fold point of $f$ and an $e$-fold point of $g$, respectively. Let $f_d(x,y)$ be the sum of the terms of degree $d$ in $f(x,y)$, $g_e(x,y)$ be the sum of the terms of degree $e$ in $g(x,y)$. If $f_d(x,y)$ and $g_e(x,y)$ have a common factor of positive degree, then the intersection multiplicity $I_O(f,g)>de.$ - 5 mfn -- 1. what is the question? 2. there is a much better chance of getting a good answer if you make the question easier to read (hint: dollars). – algori Oct 12 2011 at 3:51 7 @algori: the dollar hint makes you sound like you're asking for a bribe. Of course, if you'd asked for latex instead, who knows what it would make it sound like... – Thierry Zell Oct 12 2011 at 12:11 ## 1 Answer This is proved in Fulton's "Algebraic Curves", available online here. The precise reference is Section 3.3, property (5) on p. 37. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9082209467887878, "perplexity_flag": "head"}
http://mathoverflow.net/questions/43581/are-schematic-fixed-points-of-a-cohen-macaulay-scheme-cohen-macaulay
## Are schematic fixed-points of a Cohen-Macaulay scheme Cohen-Macaulay? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) I'm not sure how long this iterative questions can go on, but let me try again. Let's say $X$ is a Cohen-Macaulay scheme with an action of $\mathbb{G}_m$ (i.e. if $X$ is affine, a grading on the coordinate ring). Are the schematic fixed points $X^{\mathbb{G}_m}$ of $X$ Cohen-Macaulay? - 2 I am not sure what the usual business is, but it is not true that if $\mathbb G_{\rm m}$ acts on a variety $X$ with a fixed point $p$, then this induces an action of $\mathbb G_{\rm m}$ on $\mathop{\rm Spec}\mathcal O_{X,p}$. – Angelo Oct 25 2010 at 21:39 Is this an issue of not being able to find a $\mathbb{G}_m$-invariant affine open containing $p$? – Ben Webster♦ Oct 25 2010 at 22:23 Those invariant affines are not cofinal in all neighborhoods. Algebraically, the localization of C[x] at the ideal (x) does not admit a grading...does it? – David Treumann Oct 25 2010 at 22:34 Ah, right. That was complete nonsense. Removed. I think you probably you can reduce to the graded local case, but let me not worry about that. – Ben Webster♦ Oct 25 2010 at 23:03 Let $T^1$ act on ${\mathbb A}^3$ with weights $0,1,1$. Then on the ${\mathbb P}^2$, the fixed-point set is not equidimensional. So you can only hope to have a local statement. – Allen Knutson Oct 26 2010 at 0:39 show 1 more comment ## 2 Answers Here is a counterexample. Consider the action of $\mathbb G_{\rm m}$ on $\mathbb A^4$ defined by $t \cdot(x,y,z,w) = (x, y, tz, t^{-1}w)$, and let $X$ be the invariant closed subscheme with ideal $(xy, y^2 + zw)$; this is a complete intersection, hence it is Cohen-Macaulay. The fixed point subscheme is obtained by intersecting with the fixed point subscheme in $\mathbb A^4$, which is given by $z = w = 0$; hence it is the subscheme of $\mathbb A^2$ given by $xy = y^2 = 0$, which is of course the canonical example of a non Cohen-Macaulay scheme. Developing this idea a little, one can show that any kind of horrible singularity can appear in the fixed point subscheme of a $\mathbb G_{\rm m}$-action on a complete intersection variety. - Murphy's law strikes again! – Ben Webster♦ Oct 27 2010 at 6:50 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. Edit: the following does not answer Ben's question. It gives an example of the subring fixed by $G_m$ being not CM, while the question asked about the subscheme of fixed points, see the comments for more details. Let $R$ be the (homogenous) cone of a curve $C$ of genus $g>0$, for example $R=\mathbb C[x,y,z]/(x^3+y^3+z^3)$. Let $S=R[u,v]$, $X=\text{Spec}(S)$ and $G_m$ acts by $a.(x,y,z,u,v) = (ax,ay,az,a^{-1}u, a^{-1}v)$. Then $A= S^{G_m}$ would be a homogenous coordinate ring for $Y= C\times \mathbb P^1$, so it is not Cohen-Macaulay (if $A$ is CM, it would mean that $H^1(Y,\mathcal O_Y)=0$, impossible, see here for an explanation). (I learned this idea from Hochster, let me try to find a reference) - Hailong- this is the categorical quotient, not schematic fixed points. The schematic fixed points here are a single point with reduced structure and thus Cohen-Macaulay. – Ben Webster♦ Oct 25 2010 at 23:12 Hmm, sorry, I always thought of this notation as the invariants. What do you mean by schematic fixed pts? Are they just literally the pts of $X$ fixed by the group action? – Hailong Dao Oct 26 2010 at 0:10 @Hailong: you applied the fixed-point functor to the algebra rather than to the space. That is, the notation itself has a consistent meaning. (I'm not surely how to reasonable define schematic fixed points; in characteristic zero, with a connected group, one could look where the generating vector fields vanish.) – Allen Knutson Oct 26 2010 at 0:43 Dear Hailong: No ad hoc constructions/definitions are required. For any scheme $X$ over a ring $k$ and action on $X$ by a $k$-gp scheme $G$, define functor $X^G$ on $k$-algebras as follows: for $k$-algebra $R$, $X^G(R)$ is set of $x \in X(R)$ fixed by the $G_R$-action on $X_R$ (i.e., for any $R$-algebra $R'$, $x$ viewed in $X(R')$ is $G(R')$-invariant). Is this represented by a closed subscheme of $X$? Yes, provided $X$ is locally of finite type and separated over $k$ and $G$ is affine and fppf over $k$ with connected fibers. (See Prop. A.8.10(1)ff. in "Pseudo-reductive groups" for details.) – BCnrd Oct 26 2010 at 1:28 1 @ Allen and BCnrd: thank you! I will leave my answer, so people might benefit from your explanations and avoid my mistakes! – Hailong Dao Oct 26 2010 at 2:07 show 1 more comment
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 63, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.912219762802124, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/44425/solving-inclined-plane-using-diff-frame?answertab=oldest
# Solving inclined plane using diff. frame I was trying to solve frictionless inclined plane problem using a diff. frame as shown in the figure, and can't figure out that acceleration along the plane = g.sin(Θ), but I think it should be = g/sin(Θ) as according to the figure. I have gone through numerous examples and theory regarding such cases, but all of them use the frame with x axis along the incline plane and y axis perpendicular to it. - It does not matter what frame you resolve the components at, the answer should be the same. – ja72 Nov 18 '12 at 0:42 ## 1 Answer I think $\cos(90^\circ-\theta)=\frac{g}{a}$ is wrong. Angle between $\vec{g}$ and $\vec{a}$ is that, but that is not the same thing. Think about what happens when $\theta=0$ in the acceleration infinite? To solve this problem you need to find the projection of $\vec{g}$ along the incline. See picture below that might help you. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9712932705879211, "perplexity_flag": "head"}
http://everythingscience.co.za/grade-12/06-motion-in-two-dimensions/06-motion-in-two-dimensions-06.cnxmlplus
# Science • Home • Read a textbook • Practise for exams • Order You are here: Home › Chapter summary # Chapter summary • Projectiles are objects that move through the air. • Objects that move up and down (vertical projectiles) on the earth accelerate with a constant acceleration g which is approximately equal to 9,8 m·s−2 directed downwards towards the centre of the earth. • The equations of motion can be used to solve vertical projectile problems. $vf=vi+gtΔx=(vi+vf)2tΔx=vit+12gt2vf2=vi2+2gΔx$(1) • Graphs for vertical projectile motion are similar to graphs for motion at constant acceleration. If upwards is taken as positive the $Δx$ vs t, v vs t ans a vs t graphs for an object being thrown upwards look like this: • Momentum is conserved in one and two dimensions. $p=mvΔp=mΔvΔp=FΔt$(2) • An elastic collision is a collision where both momentum and kinetic energy is conserved. $pbefore=pafterKEbefore=KEafter$(3) • An inelastic collision is where momentum is conserved but kinetic energy is not conserved. $pbefore=pafterKEbefore≠KEafter$(4) • The frame of reference is the point of view from which a system is observed.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 5, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9117769598960876, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/61608/complex-functions-integration
# Complex functions, integration Let $L_iL_j-L_jL_i = i\hbar\varepsilon_{ijk}L_k$ where $i,j,k\in\{1,2,3\}$ Let $u$ be any eigenstate of $L_3$. How might one show that $\langle L_1^2 \rangle = \int u^*L_1^2u = \int u^*L_2^2u = \langle L_2^2\rangle$ ? I can show that $\langle u|L_3L_1L_2-L_1L_2L_3|u\rangle=\int u^*(L_3L_1L_2-L_1L_2L_3)u =0$. And I know that $L_3$ can be written as ${1\over C}(L_1L_2-L_2L_1)$. Hence I have $\int u^*L_1L_2^2L_1u=\int u^*L_2L_1^2L_2u$. But I don't seem to be getting the required form... Help will be appreciated. Added: The $L_i$'s are operators that don't necessarily commute. - $L_i$ are operators on what? $u$ is a function defined over what space? – Willie Wong♦ Sep 3 '11 at 17:18 @Willie W: The $L_i$'s operate on the space of functions. In fact, I must say this reminds me of the angular momentum operator and eigenstates/eigenfunctions (can't distinguish those 2)... – Tom L Sep 3 '11 at 17:25 @Willie W: You are quite right. I have edited the question to give it verbatim. Hopefully it makes more sense now? :) – Tom L Sep 3 '11 at 17:45 Are the $L_i$ assumed to be self-adjoint? – Willie Wong♦ Sep 3 '11 at 18:11 @Willie W: Yup. Sorry :S – Tom L Sep 3 '11 at 18:12 ## 1 Answer Okay then. In below, we fix $u$ an eigenfunction of $L_3$, and denote by $\langle T\rangle:= \langle u|T|u\rangle$ for convenience for any operator $T$. Using self-adjointness of $L_3$, we have that $L_3 u = \lambda u$ where $\lambda \in \mathbb{R}$. And furthermore $$\langle L_3T\rangle = \langle L_3^* T\rangle = \lambda \cdot \langle T\rangle = \langle TL_3\rangle$$ which we can also write as $$\langle [T,L_3] \rangle = 0$$ for any operator $T$. This implies $$\frac{1}{\hbar}\langle L_1^2\rangle = \langle -iL_1L_2L_3 + iL_1L_3L_2 \rangle = \langle -iL_3L_1L_2 +i L_1L_3L_2\rangle = \frac{1}{\hbar}\langle L_2^2\rangle$$ The first and third equalities are via the defining relationship $[L_i,L_j] = i \hbar \epsilon_{ijk} L_k$. The middle equality is the general relationship derived above, applied to the first summand. (And is precisely the identity that you said you could show in the question.) Remark: it is important to note that the expression $\langle u| [T,A] |u\rangle = 0$ holds whenever $A$ is self adjoint and $u$ is an eigenvector for $A$. This does not imply that $[T,A] = 0$. This is already clear in a finite dimensional vector space where we can represent operators by matrices: consider $A = \begin{pmatrix} 1 & 0 \\ 0 & 2\end{pmatrix}$ and $T = \begin{pmatrix} 1 & 1 \\ 0 & 0\end{pmatrix}$. The commutator $[T,A] = \begin{pmatrix} 0 & 1 \\ 0 & 0\end{pmatrix}$, which is zero on the diagonals (as required), but is not the zero operator. - Thank you very much!! – Tom L Sep 3 '11 at 19:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 31, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9322868585586548, "perplexity_flag": "head"}
http://www.physicsforums.com/showpost.php?p=2799187&postcount=4
View Single Post ## integration help for expectation of a function of a random variable hmm...now that I try it fully, that integral doesn't work out. Are you sure you copied down the problem correctly? If yes, then I'm assuming there's a typo because the answer WolframAlpha is giving is: $$\frac{\pi e^{\frac{1}{2a}} \ \ \ erfc(\frac{1}{\sqrt{2}\sqrt{a}})}{\sqrt{a}}$$, where erfc(z) is the complementary error function. It exists and I've read up on its definition; however, unless your teacher has mentioned it in class yet, I doubt it's the correct answer. Most likely, there's an error the problem you stated.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9501345753669739, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/48610/difference-between-a-theory-in-logic-and-a-system-of-axioms/48613
# Difference between a “theory” in logic and a “system of axioms” In logic, a $\Sigma$-theory $T$ is just a set of sentences obtained from the signature $\Sigma$. As I understand, what logician calls "theory" is what a mathematician calls "system of axioms". But what a mathematician calls "theory" is the set of all sentences, that can be proven from the system of axioms. My questions are: 1) Is it correct what I said above ? 2) Is there a special name that logicians use/is used in logic, to designate the set of all sentences that can be proven from (in logic-speak) a theory/(in mathematics speak) system of axioms ? - ## 4 Answers It may help clarify the issue with "axioms" by looking at how the meaning of that word has changed. In Euclid's Elements, and for a long time after that, an "axiom" or "postulate" was not just any sentence: an axiom had to be obviously true and self-evident, so that no proof was required. In this traditional sense, the negation of the parallel postulate would not qualify as an "axiom", because it's not obviously true. For example, the fuss over the parallel postulate started because it wasn't clear that the parallel postulate was sufficiently self-evident. In modern logic, we worry much less about the "self-evident" requirement [1]. When we are working in complete generality, any set $S$ of sentences can be regarded as a set of axioms. The set of all sentences that can be deduced from $S$ is then the deductive closure of $S$. With this reductive meaning of "axiom", there is no longer much difference between a theory and a set of axioms. We could consider every sentence in the theory to be an axiom, for example, while Euclid would not accept every statement provable form his postulates as a postulate. The word "theory", as matt says in his answer, is used in several ways. Sometimes it is used to mean a deductively closed set of sentences, and sometimes it is used to mean just any set of sentences, which might also be called a set of axioms. In most settings, we can replace a set of axioms with its (unique) deductive closure when necessary, so the difference between the two conventions is not very substantial. There is one more meaning of "theory" I want to point out. I mentioned theories that are generated by taking the deductive closure of a set of sentences that are treated as axioms. Another way to form a theory is to take some class of semantic structures, in the same signature, and then form the set of all sentences that are true in all structures of the class. For example, one can form the "theory of abelian groups" and the "theory of the real line". Such theories will always be deductively closed. The difference here is that we did not start with a set of axioms. [1] We do worry about self-evidence when we are trying to justify a foundational theory such as ZFC. And the axioms we assume are often obviously true, in which case there is no issue. But we also look at axioms like the axiom of determinacy, which is disprovable in ZFC. Some of the traditional usage remains, but only in certain contexts. - The fact is that the meaning of the term theory varies within mathematical logic, and different logicians and logic texts define this term differently. Some say that a theory is any set of sentences, others insist that it be deductively closed. You asked for a bit of notation, and I have seen various ways of denoting the deductive closure of a set $T$ of sentences, such as $\text{Cons}(T)$ for "conseqeuences" and also $\text{Thm}(T)$ for "the theorems of $T$". None of the principal concepts that we apply to theories depend on the difference, and logicians are usually happy to move from one definition to another (e.g. at a lecture) with ease. • A theory $T$ is complete if $T\vdash\varphi$ or $T\vdash\neg\varphi$ for every $\varphi$. • A theory $T$ is consistent if it does not derive a contradiction. • A theory $T$ is satisfiable if there is a model $M\models T$. • A theory $T$ is finitely axiomatizable if there is a finite set $T_0$ derivable from $T$ that also proves all of $T$. (One may assume without loss that $T_0\subset T$.) • A theory $T$ is computably axiomatizable if there is a computably decidable set of axioms $T_0$ derivable from $T$ that also proves all of $T$. (By an interesting observation of Craig, this is equivalent to the assertion that the set of theorems of $T$ is c.e., or that $T$ has a c.e. set of axioms.) All of the above properties work for either concept of theory, either as a set of sentences or a deductively closed set of sentences. One subtle differences arises with the last point (computable axiomatizability), since if you have the set-of-sentences understanding of theory, then you may not necessarily assume that $T_0\subset T$. Let me mention also another similar issue, namely, that in common usage, as opposed to formal definition, many logicians use the term"*theory" to mean "consistent theory". (Finally, I would like to second User6312 remarks opposing the casual exclusion in the question of logic from mathematics. ) - The two definitions are floating around even within logic. Some authors just say any set of sentences is a theory; some require them to be closed under deduction / semantic consequence. It typically doesn't matter, since, for example, if $T$ is a set of sentences and $T\models \phi$, then (exercise) for every sentence $\psi$, $T\models \psi$ if and only if $T\cup \{\phi\}\models \psi$. The more important distinction regarding theories containing or not containing sentences is between complete and incomplete theories (though, depending on how you define theories, you might see complete theories defined in terms of deduction or containment). - The relationship between logician and mathematician has precisely the same character as the relationship between functional analyst and mathematician: a special case. More precisely, a mathematician is a logician when (s)he is doing logic. The same person may write papers that have a different subject classification. And a theory, in first-order logic anyway, is a deductively closed set of sentences. This hardly needs saying, since if it is mathematicians' notion of theory, then it is a mathematical (and therefore logical) consequence of the fact that a logician is a mathematician. Added, evidence: We talk about a theory being finitely axiomatizable. Surely this cannot mean that a set of axioms is finitely axiomatizable. There are a number of examples of this general character. Also, there is sometimes discussion about alternate axiomatizations, of, for example, the theory of groups. This supports the notion that by the theory of groups we mean the body of theorems. When we discuss model completeness, or completeness, of certain theories, we do not necessarily have a specific set of axioms in mind, since axiomatizations differ. - The convention you describe (i.e., requiring theories to be deductively closed) is a reasonable one. However, your answer says that your convention is the only one. As a factual matter this is simply wrong: many people and texts use the other convention (including me, not that I feel very strongly about it). I don't know whether the business about logicians being mathematicians (not that it matters, but I think there are still some logicians who are philosophers) is meant seriously or not, but it seems to merely add obscurity to your answer. – Pete L. Clark Jun 30 '11 at 10:39 @Pete L. Clark: Indeed it makes little difference, since most theories are specified by giving a list of axioms. And it is common (and useful) to make a formal definition, and then implicitly modify it in practice. About logic, I meant mathematical logic. I was expressing my annoyance with the still not uncommon tendency to think of logic as peripheral, not quite mathematics, or alternately purer, deeper, a foundation for everything. – André Nicolas Jun 30 '11 at 10:51 2 Concerning your added evidence, a set $T$ of axioms is finitely axiomatizable when there is a finite subset of $T$ from which one may prove all of $T$. – JDH Jun 30 '11 at 11:13 @JDH: That is not the meaning of the term. And here we are not dealing with opinion, as was the case with the meaning of the word "theory." – André Nicolas Jun 30 '11 at 11:22 2 In my experience, most mathematical logicians would agree with the statement I make in my comment above. – JDH Jun 30 '11 at 11:35 show 5 more comments
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 37, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9666828513145447, "perplexity_flag": "head"}
http://mathoverflow.net/questions/37610?sort=votes
## Demonstrating that rigour is important ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Any pure mathematician will from time to time discuss, or think about, the question of why we care about proofs, or to put the question in a more precise form, why we seem to be so much happier with statements that have proofs than we are with statements that lack proofs but for which the evidence is so overwhelming that it is not reasonable to doubt them. That is not the question I am asking here, though it is definitely relevant. What I am looking for is good examples where the difference between being pretty well certain that a result is true and actually having a proof turned out to be very important, and why. I am looking for reasons that go beyond replacing 99% certainty with 100% certainty. The reason I'm asking the question is that it occurred to me that I don't have a good stock of examples myself. The best outcome I can think of for this question, though whether it will actually happen is another matter, is that in a few months' time if somebody suggests that proofs aren't all that important one can refer them to this page for lots of convincing examples that show that they are. Added after 13 answers: Interestingly, the focus so far has been almost entirely on the "You can't be sure if you don't have a proof" justification of proofs. But what if a physicist were to say, "OK I can't be 100% sure, and, yes, we sometimes get it wrong. But by and large our arguments get the right answer and that's good enough for me." To counter that, we would want to use one of the other reasons, such as the "Having a proof gives more insight into the problem" justification. It would be great to see some good examples of that. (There are one or two below, but it would be good to see more.) Further addition: It occurs to me that my question as phrased is open to misinterpretation, so I would like to have another go at asking it. I think almost all people here would agree that proofs are important: they provide a level of certainty that we value, they often (but not always) tell us not just that a theorem is true but why it is true, they often lead us towards generalizations and related results that we would not have otherwise discovered, and so on and so forth. Now imagine a situation in which somebody says, "I can't understand why you pure mathematicians are so hung up on rigour. Surely if a statement is obviously true, that's good enough." One way of countering such an argument would be to give justifications such as the ones that I've just briefly sketched. But those are a bit abstract and will not be convincing if you can't back them up with some examples. So I'm looking for some good examples. What I hadn't spotted was that an example of a statement that was widely believed to be true but turned out to be false is, indirectly, an example of the importance of proof, and so a legitimate answer to the question as I phrased it. But I was, and am, more interested in good examples of cases where a proof of a statement that was widely believed to be true and was true gave us much more than just a certificate of truth. There are a few below. The more the merrier. - 10 There's a clear advantage to knowing a 'good' proof of a statement (or even better, several good proofs), as it is an intuitively comprehensible explanation of why the statement is true, and the resulting insight probably improves our hunches about related problems (or even about which problems are closely related, even if they appear superficially unrelated). But if we are handed an 'ugly' proof whose validity we can verify (with the aid of a computer, say), but where we can't discern any overall strategy, what do we gain? – Colin Reid Sep 3 2010 at 13:53 8 What kind of person do you have in mind who would suggest proofs are not important? I can't imagine it would be a mathematician, so exactly what kind of mathematical background do you want these replies to assume? – KConrad Sep 3 2010 at 15:33 7 Colin Reid- I think one can differentiate between a person understanding and a technique understanding. The latter applies even if we cannot understand the proof. We know that the tools themselves "see enough" and "understand enough", and that in itself is a significant advance in our understanding. But we still want a "better proof", because a hard proof makes us feel that our techniques aren't really getting to the heart of the problem- we want techniques which understand the problem more clearly. – Daniel Moskovich Sep 3 2010 at 16:26 12 Concerning the Zeilberger link that Jonas posted, sorry but I think that essay is absurd. If Z. thinks that the fact that only a small number of mathematicians can understand something makes it uninteresting then he should reflect on the fact that most of the planet won't understand a lot of Z's own work since most people don't remember any math beyond high school. Therefore is Z's work dull and pointless? He has written other essays that take extreme viewpoints (like R should be replaced with Z/p for some unknown large prime p). – KConrad Sep 5 2010 at 1:39 14 Every proof has it's own "believability index". A number of years ago I was giving a lecture about a certain algorithm related to Galois Theory. I mentioned that there were two proofs that the algorithm was polynomial time. The first depended on the classification of finite simple groups, and the second on the Riemann Hypothesis for a certain class of L-functions. Peter Sarnak remarked that he'd rather believe the second. – Victor Miller Sep 6 2010 at 15:56 show 20 more comments ## 37 Answers i once got a letter from someone who had overwhelming numerical evidence that the sum of the reciprocals of primes is slightly bigger than 3 (he may have conjectured the limit was π). The sum is in fact infinite, but diverges so slowly (like log log n) that one gets no hint of this by computation. - 42 This reminds me of a classical mathematical foklore "get rich" scheme (or scam). The ad in a newspaper says: send in 10 dollars and get back unlimited amount of money in monthly installments. The dupe follows the instructions and receives 1 dollar the first month, 1/2 dollar the second month, 1/3 dollar the third month, ... – Victor Protsak Sep 6 2010 at 2:37 20 I remember the first time I learned that the harmonic series is divergent. I was in high school, in my first calculus class; it was in 2001. I was really surprised and couldn't really believe that it could be divergent, so I programmed my TI-83 to compute partial sums of the harmonic series. I let it run for the entire day, checking in on the progress periodically. If I recall correctly, by the end of the day the partial sum remained only in the 20s. Needless to say, I was not convinced of the divergence of the series that day. – Kevin Lin Sep 6 2010 at 7:09 9 If one wants to carry this to the extreme, any divergent series with the property that the n-th term goes to zero will converge on a calculator as the terms will eventually fall below the underflow value for the calculator, and hence be considered to be zero. – Chris Leary Jan 1 2012 at 23:54 ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. I would like to preface this long answer by a few philosophical remarks. As noted in the original posting, proofs play multiple roles in mathematics: for example, they assure that certain results are correct and give insight into the problem. A related aspect is that in the course of proving an intuitively obvious statement, it is often necessary to create theoretical framework, i.e. definitions that formalize the situation and new tools that address the question, which may lead to vast generalizations in the course of the proof itself or in the subsequent development of the subject; often it is the proof, not the statement itself, that generalizes, hence it becomes valuable to know multiple proofs of the same theorem that are based on different ideas. The greatest insight is gained by the proofs that subtly modify the original statement that turned out to be wrong or incomplete. Sometimes, the whole subject may spring forth from a proof of a key result, which is especially true for proofs of impossibility statements. Most examples below, chosen among different fields and featuring general interest results, illustrate this thesis. 1. Differential geometry a. It had been known since the ancient times that it was impossible to create a perfect (i.e. undistorted) map of the Earth. The first proof was given by Gauss and relies on the notion of intrinsic curvature introduced by Gauss especially for this purpose. Although Gauss's proof of Theorema Egregium was complicated, the tools he used became standard in the differential geometry of surfaces. b. Isoperimetric property of the circle has been known in some form for over two millenia. Part of the motivation for Euler's and Lagrange's work on variational calculus came from the isoperimetric problem. Jakob Steiner devised several different synthetic proofs that contributed technical tools (Steiner symmetrization, the role of convexity), even though they didn't settle the question because they relied on the existence of the absolutely minimizing shape. Steiner's assumption led Weierstrass to consider the general question of existence of solutions to variational problems (later taken up by Hilbert, as mentioned below) and to give the first rigorous proof. Further proofs gained new insight into the isoperimetric problem and its generalizations: for example, Hurwitz's two proofs using Fourier series exploited abelian symmetries of closed curves; the proof by Santaló using integral geometry established more general Bonnesen inequality; E.Schmidt's 1939 proof works in $n$ dimensions. Full solution of related lattice packing problems led to such important techniques as Dirichlet domains and Voronoi cells and the geometry of numbers. 2. Algebra a. For more than two and a half centuries since Cardano's Ars Magna, no one was able to devise a formula expressing the roots of a general quintic equation in radicals. The Abel–Ruffini theorem and Galois theory not only proved the impossibility of such a formula and provided an explanation for the success and failure of earlier methods (cf Lagrange resolvents and casus irreducibilis), but, more significantly, put the notion of group on the mathematical map. b. Systems of linear equations were considered already by Leibniz. Cramer's rule gave the formula for a solution in the $n\times n$ case and Gauss developed a method for obtaining the solutions, which yields the least square solution in the underdetermined case. But none of this work yielded a criterion for the existence of a solution. Euler, Laplace, Cauchy, and Jacobi all considered the problem of diagonalization of quadratic forms (the principal axis theorem). However, the work prior to 1850 was incomplete because it required genericity assumptions (in particular, the arguments of Jacobi et al didn't handle singular matrices or forms. Proofs that encompass all linear systems, matrices and bilinear/quadratic forms were devised by Sylvester, Kronecker, Frobenius, Weierstrass, Jordan, and Capelli as part of the program of classifying matrices and bilinear forms up to equivalence. Thus we got the notion of rank of a matrix, minimal polynomial, Jordan normal form, and the theory of elementary divisors that all became cornerstones of linear algebra. 3. Topology a. Attempts to rigorously prove the Euler formula $V-E+F=2$ led to the discovery of non-orientable surfaces by Möbius and Listing. b. Brouwer's proof of the Jordan curve theorem and of its generalization to higher dimensions was a major development in algebraic topology. Although the theorem is intuitively obvious, it is also very delicate, because various plausible sounding related statements are actually wrong, as demonstrated by the Lakes of Wada and the Alexander horned sphere. 4. Analysis The work on existense, uniqueness, and stability of solutions of ordinary differential equations and well-posedness of initial and boundary value problems for partial differential equations gave rise to tremendous insights into theoretical, numerical, and applied aspects. Instead of imagining a single transition from 99% ("obvious") to 100% ("rigorous") confidence level, it would be more helpful to think of a series of progressive sharpenings of statements that become natural or plausible after the last round of work. a. Picard's proof of the existence and uniqueness theorem for a first order ODE with Lipschitz right hand side, Peano's proof of the existence for continuous right hand side (uniqueness may fail), and Lyapunov's proof of stability introduced key methods and technical assumptions (contractible mapping principle, compactness in function spaces, Lipschitz condition, Lyapunov functions and characteristic exponents). b. Hilbert's proof of the Dirichlet principle for elliptic boundary value problems and his work on the eigenvalue problems and integral equations form the foundation for linear functional analysis. c. The Cauchy problem for hyperbolic linear partial differential equations was investigated by a whole constellation of mathematicians, including Cauchy, Kowalevski, Hadamard, Petrovsky, L.Schwartz, Leray, Malgrange, Sobolev, Hörmander. The "easy" case of analytic coefficients is addressed by the Cauchy–Kowalevski theorem. The concepts and methods developed in the course of the proof in more general cases, such as the characteristic variety, well-posed problem, weak solution, Petrovsky lacuna, Sobolev space, hypoelliptic operator, pseudodifferential operator, span a large part of the theory of partial differential equations. 5. Dynamical systems Universality for one-parameter families of unimodal continuous self-maps of an interval was experimentally discovered by Feigenbaum and, independently, by Coullet and Tresser in the late 1970s. It states that the ratio between the lengths of intervals in the parameter space between successive period-doubling bifurcations tends to a limiting value $\delta\approx 4.669201$ that is independent of the family. This could be explained by the existence of a nonlinear renormalization operator $\mathcal{R}$ in the space of all maps with a unique fixed point $g$ and the property that all but one eigenvalues of its linearization at $g$ belong to the open unit disk and the exceptional eigenvalue is $\delta$ and corresponds to the period-doubling transformation. Later, computer-assisted proofs of this assertion were given, so while Feigebaum universality had initially appeared mysterious, by the late 1980s it moved into the "99% true" category. The full proof of universality for quadratic-like maps by Lyubich (MR) followed this strategy, but it also required very elaborate ideas and techniques from complex dynamics due to a number of people (Douady–Hubbard, Sullivan, McMullen) and yielded hitherto unknown information about the combinatorics of non-chaotic quadratic maps of the interval and the local structure of the Mandelbrot set. 6. Number theory Agrawal, Kayal, and Saxena proved that PRIMES is in P, i.e. primality testing can be done deterministically in polynomial time. While the result had been widely expected, their work was striking in at least two respects: it used very elementary tools, such as variations of Fermat's little theorem, and it was carried out by a computer science professor and two undergraduate students. The sociological effect of the proof may have been even greater than its numerous consequences for computational number theory. - 3 I meant the inspirational effect due to (a) elementary tools used; and (b) the youth of 2/3 of the authors. – Victor Protsak Sep 7 2010 at 8:09 28 It is indeed great and inspiring to see very young people cracking down famous problems. Recently, I find it no less inspiring to see old people cracking down famous problems. – Gil Kalai Sep 7 2010 at 8:57 1 As far as Euler's formula goes, the book Proofs and Refutations by Imre Lakatos en.wikipedia.org/wiki/Proofs_and_Refutations shows how many interesting questions and new considerations can be derived from a seemingly-obvious formula. – Thierry Zell Mar 19 2011 at 5:30 show 1 more comment When I teach our "Introduction to Mathematical Reasoning" course for undergraduates, I start out by describing a collection of mathematical "facts" that everybody "knew" to be true, but which, with increasing standards of rigor, were eventually proved false. Here they are: 1. Non-Euclidean geometry: The geometry described by Euclid is the only possible "true" geometry of the real world. 2. Zeno's paradox: It is impossible to add together infinitely many positive numbers and get a finite answer. 3. Cardinality vs. dimension: There are more points in the unit square than there are in the unit interval. 4. Space-filling curves: A continuous parametrized curve in the unit square must miss "most" points. 5. Nowhere-differentiable functions: A continuous real-valued function on the unit interval must be differentiable at "most" points. 6. The Cantor Function: A function that is continuous and satisfies f'(x)=0 almost everywhere must be constant. 7. The Banach-Tarski paradox: If a bounded solid in R^3 is decomposed into finitely many disjoint pieces, and those pieces are rearranged by rigid motions to form a new solid, then the new solid will have the same volume as the original one. - 3 Regarding 5: cf the comment here: mathoverflow.net/questions/22189/… gives a strictly increasing function whose derivative is zero almost everywhere. Intuitively such a thing shouldn't exist, but applying the definitions rigourously shows it is true. – David Roberts Sep 4 2010 at 4:08 19 Historical examples tend to retroactively attribute stupid errors that were not the original, and still subtle, issue. In 3 and 7 the equivalences are not geometric (see Feynman's deconstruction of Banach-Tarski as "So-and-So's Theorem of Immeasurable Measure"). For #1, Riemannian geometry doesn't address historical/conceptual issue of non-Euclidean geometry, which was about logical status of the Parallel Axiom, categoricity of the axioms, and lack of 20th-century mathematical logic framework. Zeno's contention that motion is mysterious remains true today, despite theory of infinite sums. – T. Sep 4 2010 at 8:15 3 It seems to me that items 3-7 are regarded by most people as "monsters" and as such not really worthy of serious consideration. As for items 1 and 2, I think that not only have most people not heard of them, when they do hear of them, they regard them either as jokes or don't really get the point at all. So it doesn't seem to me that these are convincing arguments for most people. (They are, to be sure, convincing arguments for me.) To some extent, I'm sure this is something that can only be appreciated by some experience. I think for instance that the Pythagorean theorem is [out of space – Carl Offner Apr 28 2011 at 2:58 Surely calculus is the ultimate treasure trove for such examples. In antiquity, the Egyptians, Greeks, Indians, Chinese, and many others could calculate integrals with a pretty good degree of certainty via the method of exhaustion and its variants. But it is not without reason that Newton and Leibniz are credited with the invention of calculus. Because once you had a formalism- a proof- of the product rule, chain rule, taylor expansions, calculation of an integral- in fact, once you had the formalism in hand to make such a proof possible- then with that came an understanding, and from that sprung the most powerful analytic machine known to man, that is calculus. Without a formalism, Zeno's paradox was just that- a paradox. With the concept of limits and of epsilon-delta proofs, it becomes a triviality. Thus, in my opinion, proof is important in that it leads to mathematics. Mathematics is important in that it leads to understanding patterns, and patterns govern all of science and the universe. If you can prove something, you understand it, or at least "your concepts understand it". If you can't prove it, you're nothing more than a goat, knowing the sun will rise in the morning from experience or from experiment, but having not the slightest inkling of why. The specific example, then, is "calculating integrals" and "solving differential equations". With the reader's indulgence, an example of a mathematical proof saving lives. My friend's mum is an aeronautical engineer at a place which designs fighter jets. There was some wing design, whose wind resistance satisfied some PDE. They numerically simulated it by computer, and everything was perfect. My friend's mum, who had studied PDE's seriously in university and thought this one could be solved rigourously, set about finding an exact solution, and lo-and-behold, there was some unexpected singularity, and if wind were to blow at some speed from some direction then the wing would shear off. She pointed this out, was awarded a medal, and the wing design was changed. Lives saved by a proof. I'm sure there are a thousand examples like that. - 43 Do you have a citation for the wing story? Otherwise if I repeat it the story becomes "I read about this guy on the internet who had a friend whose mother...". – Dan Piponi Sep 3 2010 at 17:12 42 Not to politicise this, but is it clear which of (1) having a properly-working fighter jet or (2) the opposite, saves more lives in the end? – José Figueroa-O'Farrill Sep 3 2010 at 18:31 36 All stories about wings falling off airplanes due to design errors are jokes or urban legends. In practice wings are not only tested extensively, but are built with huge error tolerances. And I doubt that anyone could find an exact solution of a realistic differential equation modeling 3 dimensional air flow over an airplane wing. – Richard Borcherds Sep 3 2010 at 22:03 17 Way to ruin the story Richard! – Robby McKilliam Sep 3 2010 at 22:33 18 Tacoma Narrows was designed with tolerances too. It's just that the tolerances didn't anticipate the resonant frequency of the structure. – Ryan Budney Sep 3 2010 at 22:49 show 13 more comments Mumford in Rational equivalence of 0-cycles on surfaces gave an example where an intuitive result of Severi, who claimed the space of rational equivalence classes was finite dimensional, was just completely wrong: it is infinite dimensional for most surfaces. This is a typical example of why the informal non-rigorous style of algebraic geometry was abandoned: too many of the "obvious" but unproved results turned out to be incorrect. - ### Claim The trefoil knot is knotted. ### Discussion One could scarcely find a reasonable person who would doubt the veracity of the above claim. None of the 19th century knot tabulators (Tait, Little, and Kirkman) could rigourously prove it, nor could anybody before them. It's not clear that anyone was bothered by this. Yet mathematics requires proof, and proof was to come. In 1908 Tietze proved the beknottedness of the trefoil using Poincaré's new concept of a fundamental group. Generators and relations for fundamental groups of knot complements could be found using a procedure of Wirtinger, and the fundamental group of the trefoil complement could be shown to be non-commutative by representing it in $SL_2(\mathbb{Z})$, while the fundamental group of the unknot complement is $\mathbb{Z}$. In general, to distinguish even fairly simple knots, whose difference was blatantly obvious to everybody, it was necessary to distinguish non-abelian fundamental groups given in terms of Wirtinger presentations, via generators and relations. This is difficult, and the Reidemeister-Schreier method was developed to tackle this difficulty. Out of these investigations grew combinatorial group theory, not to mention classical knot theory. All because beknottedness of a trefoil requires proof. ### Claim Kishino's virtual knot is knotted. ### Discussion We are now in the 21st century, and virtual knot theory is all the rage. One could scarecely find a reasonable person who would argue that Kishino's knot is trivial. But the trefoil's lesson has been learnt well, and it was clear to everyone right away that proving beknottedness of Kishino's knot was to be a most fruitful endeavour. Indeed, that is how things have turned out, and proofs that Kishino's knot is knotted have led to substantial progress in understanding quandles and generalized bracket polynomials. ### Summary Above we have claims which were obvious to everybody, and were indeed correct, but whose proofs directly led to greater understanding and to palpable mathematical progress. - 10 I very much like your answer although I'd put what I consider to be a different spin on it. To me the key point of interest in showing the trefoil is non-trivial is that it shows that one can talk in a quantitative, analytical way about a concept that at first glance seems to have nothing to do with standard conceptions of what mathematics is about. A trefoil has no obvious quantitative, numerical thing associated to it. In contrast, the statement $\pi > 3$ is very much steeped in traditional mathematical language so it's rather unsurprising that mathematics can say things about it. – Ryan Budney Sep 6 2010 at 20:18 7 Let me make my point more hyperbolically. That one can rigorously show that a trefoil can't be untangled, this is one of the most effective mechanisms one can use to communicate to people that modern mathematics deals with sophisticated and substantial ideas. Mathematics as a subject wasn't solved with the development of the Excel spreadsheet. :) – Ryan Budney Sep 7 2010 at 0:28 5 =HOMFLYPOLYNOMIAL(A3:B5) – Mariano Suárez-Alvarez Jan 2 2012 at 3:54 show 3 more comments The evidence for both quantum mechanics and for general relativity is overwhelming. However, one can prove that without serious modifications, these two theories are incompatible. Hence the (still incomplete) quest for quantum gravity. - 28 Devils Advocate: Couldn't one argue that this demonstrates the opposite? Our theories are mathematically incompatible, yet they can compute the outcome of any fundamental physical experiment to a half dozen digits. Clearly, this shows that mathematical consistency is overrated! – David Speyer Sep 6 2010 at 12:06 16 @David, you may be right: quantum mechanics and general relativity are incompatible, and the first time that they come into mathematical conflict the universe will end. This should be around the time when the first black hole evaporates, around $10^{66}$ years from now. – Peter Shor Sep 6 2010 at 14:01 3 Indeed nonrigirous mathematical computations and heuristic arguments in physics are spectacularly successful for good experimental predictions and even for mathematics. Yet the accuracy David talked about is only in small fragments of standard physics. Of course, just like asking what is the purpose of rigor we can also ask what is the purpose gaining the 7th accurate digit in experimental predictions. The answer that allowing better predictions like rigorous proofs enlighten us is a good partial answer. – Gil Kalai Sep 7 2010 at 5:17 1 There were intensice discussions regarding the role of rigor over physics weblogs. Here is an example from Distler's blog "musing": golem.ph.utexas.edu/~distler/blog/archives/… – Gil Kalai Sep 7 2010 at 8:07 show 1 more comment Here's an example: In the Mathscinet review of "Y-systems and generalized associahedra", by Sergey Fomin and Andrei Zelevinsky, you find: Let $I$ be an $n$-element set and $A=(a_{ij})$, $i,j\in I$, an indecomposable Cartan matrix of finite type. Let $\Phi$ be the corresponding root system (of rank $n$), and $h$ the Coxeter number. Consider a family $(Y_i(t))_{i\in I,\,t\in\Bbb{Z}}$ of commuting variables satisfying the recurrence relations $$Y_i(t+1)Y_i(t-1)=\prod_{j\ne i}(Y_j(t)+1)^{-a_{ij}}.$$ Zamolodchikov's conjecture states that the family is periodic with period $2(h+2)$, i.e., $Y_i(t+2(h+2))=Y_i(t)$ for all $i$ and $t$. That conjecture claims that an explicitly described algebraic map is periodic. The conjecture can be checked numerically by plugging in real numbers with 30 digits, and iterating the map the appropriate number of times. If you see that time after time, the numbers you get back agree with the initial values with a 29 digit accuracy, then you start to be pretty confident that the conjecture is true. For the $E_8$ case, the proof presented in the above paper involves a massive amount of symbolic computations done by computer. Is it really much better than the numerical evidence? Conclusion: I think that we only like proofs when we learn something from them. It's not the property of "being a proof" that is attractive to mathematicians. - 4 Gian-Carlo Rota would agree, for he said (in "The Phenomenology of Mathematical Beauty") that we most value a proof that enlightens us. – Joseph O'Rourke Sep 5 2010 at 3:00 2 That's a very interesting example, even if it is of the opposite of what I asked. My instinct is to think it's good that there's a proof, but I'm not sure how to justify that. And obviously I'd prefer a conceptual argument. – gowers Sep 5 2010 at 15:26 13 And now, of course, we finally understand exactly what Gian-Carlo meant: A proof enlightens us if: 1) it is the first proof, 2) it is accepted, and 3) it has at least 10 up votes! – Gil Kalai Sep 6 2010 at 21:05 Maybe some of the answers to this question about "eventual counterexamples" - ie, which could plausibly be true for all $n$ but which fail for some large number - are relevant? Some highlights from that question: • $gcd(n^5−5,(n+1)^5−5)=1$ is true for n=1,2,…,1435389 but not for n=1435390; and many similar • factors of $x^n−1$ over the rationals have no coefficient exceeding 1 in absolute value - until $n=105$ • The Strong Law of Small Numbers, a fun paper by Guy - 4 Nitpick: The title of Guy's paper is "The Strong Law of Small Numbers". – Ravi Boppana Sep 3 2010 at 14:01 1 Oh yes... muscle memory must have taken over as I typed. Thanks! – Tom Smith Sep 3 2010 at 14:04 show 1 more comment [Edited to correct the Galileo story] An old example of a plausible result that was overthrown by rigor is the 17th-century example of the hanging chain. Galileo once said (though he later said otherwise), and Girard claimed to have proved, that the shape was a parabola. But this was disproved by Huygens (then aged 17) by a more rigorous analysis. Some decades later, the exact equation for the catenary was found by the Bernoullis, Leibniz, and Huygens. In the 20th century, some people thought it plausible that the shape of the cable of a suspension bridge is also a catenary. Indeed, I once saw this claim in a very popular engineering mathematics text. But a rigorous argument shows (with the sensible simplifying assumption that the weight of the cable is negligible compared with the weight of the road) that the shape is in fact a parabola. - 17 The story about Galileo and the hanging chain is a myth: he was well aware that it is approximately but not exactly a parabola, and even commented that the approximation gets better for shorter chains. If you take a very long chain with the ends close together, which Galileo was perfectly capable of doing, it is obviously not a parabola – Richard Borcherds Sep 3 2010 at 21:43 3 It may be a mistranslation: my guess is he meant that a catenary RESEMBLES a parabola. Later on in the same book he makes it clear that he knows they are different. – Richard Borcherds Sep 3 2010 at 22:44 2 Years ago, I saw in "Scientific American" a 2-page ad for some calculator. One of the two pages was a photograph of a suspension bridge. Across the top was the equation of a catenary. – Andreas Blass Sep 4 2010 at 1:27 9 I was very fascinated by Richard Borcherds's comments and looked at two different translations of the Galileo's book (I also found a quote from the original text, but my Italian is not good enough). The hanging line is definitely described to take the shape of a parabola, but this statement is given in a section describing quick ways of sketching parabolae. Indeed, later in the book, Galileo talks about the shape of a hanging chain being parabolic only approximately: "the Fitting shall be so much more exact, by how much the describ'd Parabola is less curv'd, i.e. more distended". – Aleksey Pichugin Sep 4 2010 at 11:15 4 @J.M.:Jungius may have been the first to publish a proof that the catenary is not a parabola (1669), but the proof of Huygens in his letter to Mersenne was earlier (1646). – John Stillwell Sep 8 2010 at 15:10 show 4 more comments I think the recent work on compressed sensing is a good example. As I understand from listening to a talk by Emmanuel Candes - please correct me if I get anything wrong - the recent advances in compressed sensing began with an empirical observation that a certain image reconstruction algorithm seemed to perfectly reconstruct some classes of corrupted images. Candes, Romberg, and Tao collaborated to prove this as a mathematical theorem. Their proof captured the basic insight that explained the good performance of the algorithm: $l_1$ minimization finds a sparse solution to a system of equations for many classes of matrices. It was then realized this insight is portable to other problems and analogous tools could work in many other settings where sparsity is an issue (e.g., computational genetics). If Candes, Romberg, and Tao had not published their proof, and if only the empirical observation that a certain image reconstruction works well was published, it is possible (likely?) that this insight would never have penetrated outside the image processing community. - I don't think that proofs are about replacing 99% certainty with 99.99% (or 100%, if the proof is simple enough). In one of his problems he studied early on, Fermat stated that it was important to find out whether a prime divides only numbers $a^n-1$, or also numbers of the form $a^n+1$. For $a = 2$ and $a = 3$ he saw that the answer seemed to depend on the residue class of $p$ modulo $4a$. He did not really come back to investigate this problem more closely; Euler did, but couldn't find the proof. Gauss's proofs did not remove the remaining 1 % uncertainty, it brought in structure and allowed to ask the next question. Just looking at patterns of prime divisors of $a^n \pm 1$ wouldn't have led to Artin reciprocity. - 1. Nonexistence theorems can not be demonstrated with numerical evidence. For example, the impossibility of classical geometric construction problems (trisecting the angle, doubling the cube) could only be shown with a proof that the efforts in the positive direction were futile. Or consider the equation $x^n + y^n = z^n$ with $n > 2$. [EDIT: Strictly speaking my first sentence is not true. For example, the primality of a number is a kind of nonexistence theorem -- this number has no nontrivial factorization -- and one could prove the primality of a specific number by just trying out all the finitely many numerical possibilities, whether by naive trial division or a more efficient rigorous primality test. Probabilistic primality tests, such as the Solovay--Strassen or Miller--Rabin tests, allow one to present a short amount of compelling numerical evidence, without a proof, that a number is quite likely to be prime. What I should have written is that nonexistence theorems are usually not (or at least some of them are not) demonstrable by numerical evidence, and the geometric impossibility theorems which I mentioned illustrate that. I don't see how one can give real evidence short for those theorems other than by a proof. Lack of success in making the constructions is not convincing: the Greeks couldn't construct a regular 17-gon by their rules, but Gauss showed much later that it can be done.] 2. You can't apply a theorem to all commutative rings unless you have a proof of the result which works that broadly. Otherwise math just becomes conjectures upon conjectures, or you have awkward hypotheses: "For a ring whose nonzero quotients all have maximal ideals, etc." Emmy Noether revolutionized abstract algebra by replacing her predecessor's tedious computational arguments in polynomial rings with short conceptual proofs valid in any Noetherian ring, which not only gave a better understanding of what was done before but revealed a much broader terrain where earlier work could be used. Or consider the true scope of harmonic analysis: it can be carried out not just in Euclidean space or Lie groups, but in any locally compact group. Why? Because, to get things started, Weil's proof of the existence of Haar measure works that broadly. How are you going to collect 99% numerical evidence that all locally compact groups have a Haar measure? (In number theory and representation theory one integrates over the adeles, which are in no sense like Lie groups, so the "topological group" concept, rather than just "Lie group", is really crucial.) 3. Proofs tell you why something works, and knowing that explanatory mechanism can give you the tools to generalize the result to new settings. For example, consider the classification of finitely generated torsion-free abelian groups, finitely generated torsion-free modules over any PID, and finitely generated torsion-free modules over a Dedekind domain. The last classification is very useful, but I think its statement is too involved to believe it is valid as generally as it is without having a proof. 4. Proofs can show in advance how certain unsolved problems are related to each other. For instance, there are tons of known consequences of the generalized Riemann hypothesis because the proofs show how GRH leads to those other results. (Along the same lines, Ribet showed how modularity of elliptic curves would imply FLT, which at the time were both open questions, and that work inspired Wiles.) - 4 Re 1: Not all evidence is numerical! It had been known for a long time that $x^n+y^n=z^n$ doesn't admit a solution in non-constant polynomials in 1 variable for $n\geq 3,$ Kummer proved FLT for regular prime exponents, etc. I wouldn't try to assign numerical "confidence rating" to these developments prior to Wiles and Taylor-Wiles. – Victor Protsak Sep 3 2010 at 23:32 2 The kind of thing I had in mind was the following argument for Goldbach's conjecture. The mere fact that it is true up to some large n is not that convincing, but the fact that if you assume certain randomness heuristics for the primes you can predict how many solutions there ought to be to p_1+p_2=2n and that more refined prediction is true up to a large n is, to my mind, very convincing indeed. – gowers Sep 4 2010 at 17:21 2 Since you bring up prime heuristics on the side of numerical evidence in lieu of a proof, I will point out one problem with them. Cramer's "additive" probabilistic model for the primes, which suggested results that seemed consistent with numerical data, does predict relations that are known to be wrong. See Granville's paper dms.umontreal.ca/~andrew/PDF/icm.pdf, especially starting on page 5. – KConrad Sep 5 2010 at 1:49 1 I think what we learn from that example is that those kinds of heuristics have to be treated with care when we start looking at very delicate phenomena such as gaps between primes. But the randomness properties you'd need for Goldbach to be true (in its more precise form where you approximate the number of solutions) are much more robust, so far more of a miracle would be needed for the predictions they make to be false. – gowers Sep 5 2010 at 19:18 1 Oh, I'm not disputing the value of the Hardy--Littlewood type prime distribution heuristics. At the same time, I should point out that if you apply their ideas on prime values of polynomials (so not Goldbach, but still good w.r.t. numerical data) to the distribution of irreducible values of polynomials with coefficients in F_q[u], there are provable counterexamples and this has an explanation: the Mobius function on F_q[u] has predictable behavior along some irreducible polynomial progressions. For a survey on this, see math.uconn.edu/~kconrad/articles/texel.pdf. – KConrad Sep 6 2010 at 1:20 show 5 more comments Here is an example: 19 century geometers extended Euler's formula V-E+F=2 to higher dimensions: the alternating sum of the number of i-faces of a d-dimensional polytope is 2 in odd dimensions and 0 in even dimensions. The 19th centuries proofs were incomplete and the first rigorous proof came with Poincare and used homology groups. Here, what enabled a rigorous proof was arguably even more important than the theorem itself. - 1 This is not so much an example of increased care in an arguement as the development of critical technology needed to prove a result.The need for such technology is not always clear to mathematicains when they begin to formulate such arguments.The important question here is whether or not the general form of Euler's formula could have been proven WITHOUT it.In dimensions less then or equal to 3,there are many alternate proofs using purely combinatorial arguements.I'm not sure it can be proven without homology in higher-dimensional spaces. – Andrew L Sep 4 2010 at 23:39 1 Andrew: Yes, it can be proven without homology for higher dimensional polytopes: see ics.uci.edu/~eppstein/junkyard/euler/shell.html – David Eppstein Sep 5 2010 at 5:02 2 It does happen that techniques developed in order to give a proof are more very important. In this case, as it turned out a few decades after Poincare, the high dimensional Euler's theorem can be proved without algebtaic topology, and in the 70s even the specific gaps in the 19th century proofs was fixed, but the new technology allows for extensions of the theorem that cannot be proved by elementary method and it also shed light on the original Euler theorem: That the Euler characteristic is a topological invariant. – Gil Kalai Sep 5 2010 at 5:10 One can rigorously prove that pyramid schemes cannot run forever, and that no betting system with finite monetary reserves can guarantee a profit from a martingale or submartingale. But there are countless examples of people who have suffered monetary loss due to their lack of awareness of the rigorous nature of these non-existence proofs. Here is a case in which having a non-rigorous 99% plausibility argument is not enough, because one can always rationalise that "this time is different", or that one has some special secret strategy that nobody else thought of before. In a similar spirit: a rigorous demonstration of a barrier (e.g. one of the three known barriers to proving P != NP) can prevent a lot of time being wasted on pursuing a fruitless approach to a difficult problem. (In contrast, a non-rigorous plausibility argument that an approach is "probably futile" is significantly less effective at preventing an intrepid mathematician or amateur from trying his or her luck, especially if they have excessive confidence in their own abilities.) [Admittedly, P!=NP is not a great example to use here as motivation, because this is itself a problem whose goal is to obtain a rigorous proof...] - 8 I doubt if the main problem is that people are not aware of the rigorous nature of non-existence proofs. First, for most people the meaning of a regorous mathematical proof makes no sense and has no importance. The empirical fact that pyramid schemes always ended in a collapse in the past should be more convincing. But even people who realize that the pyramid scheme cannot run forever (from past experience or from mathematics) may still think that they can make money by entering early enough. (The concept "run forever" is an abstraction whose relevance should also be explained.) – Gil Kalai Sep 8 2010 at 7:01 2 @Gil: this is where a proof can give more than what you set out to prove. For the pyramid scheme, not only can we prove it cannot run forever, but we can also extract quantitative evidence to show that the odds of you getting in early enough are close to zero. Of course, this will not convince the numerically illiterate, but I'm convinced there is a non-negligible portion of the population that you could reach in this way. – Thierry Zell Mar 19 2011 at 5:20 show 1 more comment I have found that a strong indicator of research ability is a student wanting to know why something is true. There is also an interesting distinction between an explanation and a proof. (I gave up using the word "proof" for first year students of analysis, and changed it to "explanation", a word they could understand. This was after a student complained I gave too many proofs!) A proof takes place in a certain conceptual landscape, and the clearer and better formed this is the easier it is to be sure the proof is right, rather than a complicated manipulation. So part of the work of a mathematician is to develop these landscapes: Grothendieck was a master of this! Of course the more professional a person is in an area the nearer an explanation comes to a rigorous proof. But in fact we do not write down all the details. It is more like giving directions to the station and not listing the cracks in the pavement, though warning of dangerous holes. The search for an explanation is also related to the search for a beautiful proof. So we should not neglect the aesthetic aspect. - This question is begging for someone to state the obvious, so here goes. Take for example the existence and uniqueness of solutions to differential equations. Without these theorems, the mathematical models used in many branches of the physical sciences are incapable of making actual predictions. If potentially the DE has no solutions, or the model provides infinitely many solutions, your model has no predictive power. So the model isn't really science. In that regard, the point of proof in mathematics is to create a foundation that allows for quantitative physical sciences to exist to have a firm philosophical foundation. Moreover, the proofs of existence and uniqueness shed light on the behaviour of solutions, allowing one to make precise predictions about how good various approximations are to actual solutions -- giving a sense for how computationally expensive it is to make reliable predictions. - 6 I regret to say that most physicists seem to neither know nor care about rigorous proofs of existence and uniqueness theorems. But they seem to have no trouble doing good physics without them. – Richard Borcherds Sep 3 2010 at 21:55 4 In physics, having a formula or approximation scheme depending on N parameters (= dim. of phase space) shows existence and uniqueness locally, with global questions of singularities, attractors, topology etc understood by calculation and simulation. For classical ODE this is almost always enough and there are very few cases where careful analysis and error estimates overturned accepted physics ideas. There are more cases where physics heuristics drove the mathematics and some where they changed intuitions that prevailed in the math community. – T. Sep 4 2010 at 7:58 4 @T it sounds like you're assuming existence and uniqueness. What are you approximating? Approximations don't matter if you don't know what you're approximating. Moreover, if you're approximating one of infinitely-many solutions, this gives you no sense for how all the solutions (with certain initial conditions) behave, and limits your ability to predict anything. – Ryan Budney Sep 4 2010 at 18:26 4 May I point out that models which do not satisfy existence or uniqueness can be very useful. For an example, consider the Navier-Stokes equations (you get the Clay Prize for proving existence). It is quite possible that there are initial conditions where the solution does not exist. This could happen because the Navier-Stokes equations assume that the fluid is incompressible, and all real fluids are (at least to some very small degree) compressible. Even if existence were to fail to be satisfied, these equations would still have enormous predictive power, and be real science. – Peter Shor Sep 5 2010 at 17:04 5 From a physical point of view, uniqueness is a claim about causality. Given our belief in causality, any proposed law of nature that does not obey uniqueness may well be considered physically defective. But perhaps more important than uniqueness is the stronger property of continuous dependence on the data. (The former asserts that the same data will lead to the same solution; the latter asserts that nearby data leads to nearby solutions.) Once one has this, one has some confidence that one's model can be numerically simulated with reasonable accuracy, and also be resistant to noise. – Terry Tao Sep 8 2010 at 5:46 show 8 more comments Tim Gowers wrote: ## But I was, and am, more interested in good examples of cases where a proof of a statement that was widely believed to be true and was true gave us much more than just a certificate of truth. How about Stokes' Theorem ? The two-dimensional version involving line and surface integrals is "proved" in most physics textbooks using a neat little picture dividing up the surface into little rectangles and shrinking them to zero. Similarly, the Divergence Theorem related volume and surface integrals is demonstrated with very intuitive ideas about liquid flowing out of tiny cubes. But to prove these rigorously requires developing the theory of differential forms whose consequences go way beyond the original theorems - 1 In fact, the original motivation for the Bourbaki project was the scandalous state of affairs that no rigorous proof of the Stokes theorem could be located in the (French?) literature. The rest is history... – Victor Protsak Sep 8 2010 at 8:44 2 I don't see how this example demonstrates the importance of rigour. Most engineers or physicists (e. g. doing electrodynamics) can get by perfectly well with "tiny-cubes" proof of Stokes' theorem and nothing ever gets wrong from using this intuitive approach. – Michal Kotowski Sep 8 2010 at 22:35 3 @MichalKotowski: that is not inconsistent with the notion that mathematics has benefited hugely from a rigorous theory of differential forms. – gowers Sep 10 2010 at 17:19 show 2 more comments Many examples that have been given are of statements that one could at least formulate, and conjecture, without rigorous proof. However, one of the most important benefits of rigorous proof is that it allows us to step surefootedly along long, intricate chains of reasoning into regions that we previously never suspected existed. For example, it is hard to imagine discovering the Monster group without the ability to reason rigorously. In any other field besides mathematics, as soon as a line of abstract argument exceeds a certain (low) threshold of complexity, it becomes doubtful, and unless there is some way to corroborate the conclusion empirically, the argument lapses into controversy. If you are trying to search a very large space of possibilities, then it is indispensable to be able to close off certain avenues definitively so that the search can be focused effectively on the remaining possibilities. Only in mathematics are definitive impossibility proofs so routine that we can rely on them as a fundamental tool for discovering new phenomena. The classification of finite simple groups is a particularly spectacular example, but I would argue that almost any unexpected mathematical object—the BBP formula for $\pi$, the Lie group $E_8$, the eversion of the sphere, etc.—is the product of a sustained search involving the systematic and rigorous elimination of dead end after dead end. Of course, once an object is discovered, you might try to argue that mathematical rigor was not really necessary and that someone could have stumbled across it with a combination of luck, persistence, and insight. However, I find such an argument disingenuous. Mathematical rigor allows us to distribute the workload across the entire community; each reasoner can contribute his or her piece without worrying that it will be torn to shreds by controversy. Searches can therefore be conducted on a massively greater scale than would be possible otherwise, and the productivity is correspondingly magnified. - I think that the question itself is entirely misleading. It tacitly assumes as if mathematics could be separated into two parts: mathematical results and their proofs. Mathematics is nothing other than the proofs of mathematical results. Mathematical statements lacks any value, they are neither good nor bad. From the mathematical point of view, it is entirely immaterial whether the answer to a mathematical question like `Is there an even integer greater than two that is not the sum of two primes?' is yes or no. Mathematicians simply do not interested in the right answer. What they would like to do is to solve the problem. That is the main difference between natural sciences or engineering on the one hand, and mathematics on the other. A physicist would like to know the right answer to his question and he does not interested in the way it is obtained. An engineer needs a tool that he can use in the course of his work. He does not interested in the way a useful device works. Mathematics is nothing other than a specific set consisting of different solutions to different problems and, of course, some unsolved problems waiting to be solved. Proofs are not important for mathematics, they constitute the body of knowledge we call mathematics. - 7 This is a very Bourbakist view. Much of interesting mathematics does not conform to it, because ideas and open problems are just as important in mathematics as rigorous proofs (even leaving aside the distinction between theory and proofs that is not appreciated by non-mathematicians). – Victor Protsak Sep 5 2010 at 4:27 3 Victor, Gyorgy's point of view does not conflict with the importance of ideas and open problems. Still for a large part of mathematics proofs is the essential part. The relation between a mathematical result and its proof can often be compared to the relation between the title of a picture or a poem or a musical piece and the content. – Gil Kalai Sep 5 2010 at 5:20 4 Gil, gowers' question addressed the distinction between an intuitive proof and a rigorous proof and my comment was written with that in mind. Using your artistic analogies, let me say that a piece of music cannot be reduced to the set of notes, nor a poem to the set of words, that comprise it (the analogy obviously breaks down for "modern art", such as atonal music and dadaist poetry). – Victor Protsak Sep 5 2010 at 7:28 Based on the recent update to the question, Fermat's Last Theorem seems like the top example of a proof being far more valuable than the truth of the statement. Personally it's a rare occurrence for me to use the nonexistence of a rational point on a Fermat curve but for instance it is quite common for me to use class numbers. - Circle division by chords, http://mathworld.wolfram.com/CircleDivisionbyChords.html, leads to a sequence whose first terms are 1, 2, 4, 8, 16, 31. It's simple and effective to draw the first five cases on a blackboard, count the regions, and ask the students what's the next number in the sequence. - 7 I like that example and have used it myself. However, the conjecture that the number of regions doubles each time has nothing beyond a very small amount of numerical evidence to support it, and the best way of showing that it is false is, in my view, not to count 31 regions but to point out that the number of crossings of lines is at most n^4, from which it follows easily that the number of regions grows at most quartically. – gowers Sep 5 2010 at 15:23 13 More mischievously, the sequence is 1, 2, 4, 8, 16, ... , 256, ... – Richard Borcherds Sep 6 2010 at 4:28 3 That I never knew. What a great fact! – gowers Sep 6 2010 at 19:25 I tend to think that mathematics or---better---the activity we mathematicians do, is not so much defined by (let me use what's probably nowadays rather old fashioned language) its material object of study (whatever it may be: it is surely very difficult to try to pin down exactly what it is that we are talking about...) but by its formal object, by the way we know what we know. And, of course, proofs are the way we know what we know. Now: rigour is important in that it allows us to tell apart what we can prove from what we cannot, what we know in the way that we want to know it. (By the way, I don't think that it is fair to say that, for example, the Italians were not rigorous: they were simply wrong) - 3 I once heard a less accurate but more punch version of this in some idiosyncratic history of mathematics lectures: "in mathematics, we don't know what it is we are doing, but we know *how to do it*". – Yemon Choi Sep 8 2010 at 5:08 3 Yemon, there is a famous definition of mathematics, perhaps, from "What is mathematics?" by Courant and Robbins: mathematics is what mathematicians do. – Victor Protsak Sep 8 2010 at 8:53 show 1 more comment A rich source of examples may be found in the study of finite element methods for PDEs in mixed form. Proving that a given mixed finite element method provided a stable and consistent approximation strategy was usually done 'a posteriori': one had a method in mind, and then sought to establish well-posedness of the discretization. This meant a proliferation of methods and strategies tailored for very specific examples. In the bid for a more comprehensive treatment and unifying proofs, the finite element exterior calculus was developed and refined (eg., the 2006 Acta Numerica paper by Arnold, Falk and Winther). The proofs revealed the importance of using discrete subspaces which form a subcomplex of the Hilbert complex, as well as bounded co-chain projections (we now call this the 'commuting diagram property). These ideas, in turn, provided an elegant design strategy for stable finite element discretizations. - Isn't the point that human reason is generally frail altogether, especially when making conclusions by using long serial chains of arguments? So in mathematics where such extended arguments are routine, we want their soundness to be as close to ideal as possible. Of course, even generally accepted proofs are occasionally later seen to be lacking, but to give up proofs as the ideal changes the very nature of mathematics. I heard that the example of parts of the Italian school of algebraic geometry of the 19th century was an important example of this overextension of intuition. Furthermore, it is only in the attempt at proof that the real nature of the reasons why a statement is true are finally exposed. So the reformulation and refoundation of algebraic geometry in the 20th century is said to have exposed revolutionary new ways of seeing mathematics in general. Finally, it is only by proof that the limits of applicability of a theorem are really understood. This comes into play many times in physics, say where some "no-go theorem" is elided because its assumptions are not valid in some new realm. - 1 I'm not downvoting this, but it seems overly vague to me. I think for this issue specific examples of problems that intuitive methods get wrong are more convincing than generalities about the nature of human reasoning. The Italian school example is a good one but again needs to be made more specific. – David Eppstein Sep 3 2010 at 18:18 1 Regarding the topic of Itatlian algebraic geometry, this question and some of the comments and answers may be of interest: mathoverflow.net/questions/19420/… (The linked email of David Mumford, in the comment by bhwang, is particularly interesting.) – Emerton Sep 3 2010 at 18:49 I have the tendency to think that the need for absolute certainty is related to the arborescent structure of mathematic. The mathematics of today rest upon layers of more ancient theories and after piling up 50 layers of concepts, if you are only sure of the previous layers with a confidence of 99%, a disaster is bound to happen and a beautiful branch of the tree to disappear with all the mathematicians living on it. This is rather unique in natural sciences with the exception of extremely sophisticated computer programms but, in mathematics, you will have to fix by yourself an equivalent of 2K bug. Of course, there are people who are willing to take the risk to see what they have achieved collapse in front of their eyes by working under the assumption that an unproven, but highly plausible, result is true (like Riemann hypothesis or Leopoldt conjecture). In some cases this is actually a good way to be on top of the competition (think of the work of Skinner and Urban on the main conjecture for elliptic curves which rests upon the existence of Galois representations that were not proven to exist before the completion of the proof of the Fundamental Lemma). - Richard Lipton recently blogged about this question in the context of why a potential proof of $P \neq NP$ would be important. I am probably bastardizing his words, but one of the reasons he gives is that a proof may give new insight and methods of attack to other problems. He cites Wiles' proof of Fermat's Last Theorem as an example of this phenomenom. - "Sufficient unto the day is the rigor thereof."-E.H.Moore. There's a lot of discussion over not only the role of rigor in mathematics,but whether or not this is a function of time and point in history.Clearly,what was a rigorous argument to Euler would not pass muster today in a number of cases. Passing from generalities to specific cases,I think the prototype of statements which were almost universally accepted as true without proof was the early-19th century notion that globally continuous real valued functions had to have at most a finite number of nondifferentiable points.Intuitively,it's easy to see why in a world ruled by handwaving and graph drawing,this would be seen as true. Which is why the counterexamples by Bolzano and Weirstrauss were so conceptually devastating to this way of approaching mathematics. Edit: I see Jack Lee already mentioned this example "below" in his excellent list of such cases. But to be honest,I don't think his first example is really about rigor so much as a related but more profound change in our understanding how mathematical systems are created. The main reason no one thought non-Euclidean geometries made any sense was because most scientists believed Euclidean geometry was an empirical statement about the fundamental geometry of the universe.Studies of mechanics supported this until the early 20th century;as long as one stays in the "common sense" realm of the world our 5 senses perceive and we do not approach relativistic velocities or distances,this is more or less true. Eddington's eclipse experiments finally vindicated not only Einstein's conceptions,but indirectly,non-Euclidean geometry-which until that point,was greeted with skepticism outside of pure mathematics. - 1 This seems more like a discussion on Jack Lee's post than an answer. – Jonas Meyer Sep 4 2010 at 23:27 show 3 more comments The way current computer algebra systems (that I know of) are designed is a compromise between ease of use and mathematical rigor. Although in practice, most of the answers given by CASes are correct, the lack of rigor is still a problem because the user cannot fully trust the results (even under the assumption that the software is bug-free). Now, it might sound like just another case of "99% certainty is enough," but in practice it means having to verify the results independently afterwards, which could be considered unnecessary extra work. The root of the problem seems to be that a CAS manipulates expressions when it should output theorems instead. In many cases, the expressions simply don't have any rigorous interpretation. For example, variables are usually not introduced explicitly and thus not properly quantified; in the result of an indefinite integral they might even appear out of nowhere. Dealing with undefinedness is another problem. All of this is inherent in the architecture of computer algebra systems, so it cannot be fixed properly without switching to a different design. The extra 1% of certainty may indeed not justify such a change. But if rigor had been emphasized more from the start, maybe we would have trustworthy CASes now. I think this line of thought can be generalized. (As a non-mathematician) I can't help but wonder how mathematics would have progressed without the widespread introduction of rigor in the 19th century. I can't really imagine what things would be like if we still didn't have a proper definition of what a function is. So maybe rigor is indeed not strictly necessary in particular cases, but it has shaped mathematical practice in general. - 2 I tend to not use CAS packages that aren't open-source. Knowing the precise details of the implementation of the algorithm allows you to understand its limitations, when it will and will not function. There is still uncertainty in this of course -- did I really understand the algorithm? Is the computer hardware faulty? Does the compiler not compile the code correctly? And so on. Open-source also has the advantage that the algorithms are re-usable and not "canned". – Ryan Budney Sep 4 2010 at 20:40 In response to the request for an example of a statement that was widely but erroneously believed to be true: does Gauss's conjecture that $\pi(n) < \operatorname{li}(n)$ for every integer $n \geq 2$, disproved by Littlewood in 1914, qualify? - 2 It's a good example of the need for rigor. But I've always been skeptical of the story that it was widely believed to be true, since any competent mathematician familiar with Riemann's 1857 explicit formula for π(n) would have realized that there are almost certainly going to be occasional exceptions. (Littlewood removed the word "almost" by a more careful analysis.) – Richard Borcherds Sep 8 2010 at 19:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 46, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9539579153060913, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/33303/demonstrate-magnets-adhere-to-conservation-of-energy-pursuant-to-the-laws-of-the/33307
# Demonstrate magnets adhere to conservation of energy pursuant to the laws of thermodynamics I am looking for a way to demonstrate that magnets adhere to the laws of thermodynamics, in particular the requirement that energy in a closed system be conserved. To adhere to the requirement that energy not be lost, I would expect that the energy required to create a magnet would be offset by the energy exerted when that magnet exerts force. My (elementary, if you will pardon the pun) understanding of magnets is that they exhibit a magnetic field because of a kind of polarization of the electrons, that looks something like this for a "perfectly" magnetized metal: ````--------------------- --------------------- --------------------- ```` whereas an unmagnetized object would not have this polarization (if that is the right word), and might "look" something like this: ````|\/-/-\|-/|\/-/-\|-/ /|--/-\|-/|\/-|\\|\- \-/-/-\-/|\/-|/-\|-\ ```` The former object would exhibit the maximum force possible for the given material. The latter object would exhibit no magnetic force. One would expect that the energy required to align the electrons would never exceed (but where inefficient it may be less than) the force that is exerted by the magnetic field. As an example, suppose the polarization of a magnet from a completely unpolarized state uses 1 kJ of energy. To adhere to the laws of thermodynamics, the maximum amount of force the magnet can exert is 1 kJ, after which point it would be depolarized. One would expect the strength of the magnetic field to dissipate as it exerts force. Is there a demonstration that one can perform to help visualize the conservation of energy by way of "charging" and "discharging" a magnet? - 1 – Mono Aug 2 '12 at 4:14 ## 2 Answers The answer is that the magnetic field energy is always the integral of $B^2\over 2$, up to factors which are different in different systems of units. So magnetizing a magnet requires as much energy as you get in the field after you are done, and when you bring magnets closer, so extracting energy from the field, the total integral of $B^2$ is diminished. The total amount of energy you can extract from the magnetic reconfiguration is never greater than the total magnetic energy, and the total magnetic energy is the minimum work that you need to do to magnetize the system initially. The general proof that magnetic fields conserve energy comes from the fact that the interactions of electrons, nuclei and electromagnetic fields have a Hamiltonian formulation. The Hamiltonian formulation automatically implies conservation of energy. If you aren't concerned about loop quantum effects, which require renormalization, or if you are working classically, the Hamiltonian is obtained by adding the electromagnetic field Hamiltonian $\int {1\over 2} (E^2+B^2) d^3x$ to the mechanical energy of the electrons and nuclei, and replacing the momentum $p$ by $p-eA$ where A is the vector potential, and adding $q\phi$ where $\phi$ is the scalar potential, and adding appropriate coupling to the magnetic moments of the electrons and nuclei. - There's no "electron polarization" in a magnet. I'll concentrate in this answer in the typical example of ferromagnetic materials (like iron, nickel or cobalt) which exhibit the following list of properties that explain their macroscopic behavior, but there are of course other kinds of materials which can develop magnetic attributes by subtly different mechanisms. So, for ferromagnetic species: • At the atomic level, they're all elements (which, excepting rare-earth metals, all belong to the d-block or "transition metals") that have a number of electrons in their outer shell such that the latter stands incomplete. In an atom, each atomic orbital is occupied by at most two electrons with anti-parallel spin, because of Pauli Exclusion Principle; also, by Hund's rules, if the number of electrons in a shell isn't enough to fill it, they distribute themselves each one singly occupying a different orbital, and in these cases this set of unpaired electrons (with their associated spin) are able to produce a net magnetic moment for the atom as a whole. The atoms can be then imagined as microscopic magnets pointing in the direction of the net magnetic moment imparted by their non-paired electrons. • Individual atoms with a net magnetic moment can be, in principle, randomly oriented inside the cristal lattice of the solid (and so cancelling each other's moment and preventing the emergence of a macroscopic field). But in ferromagnetic materials another effect (the exchange interaction between nearby electrons with equally oriented spins) induces the allignment of microsocopic aggregates of atoms in the cristal lattice, which in this way becomes thermodynamically favoured over the random orientation of magnetic moments. As the exchange interaction effect is significative only at short range, these aggregates don't extend far away into the solid lattice, and so instead they form crystal grains called magnetic domains (they are identifiable in the crystallographic structure of the metal exploiting the surface magneto-optic Kerr effect). • Then again, individual magnetic domains are thermodynamically favoured to be randomly oriented in a solid phase, and that's why ferromagnetic materials don't naturally exhibit magnetic properties. Nevertheless, the magnetic domains can be alligned by an external magnetic field (such as the one produced by an electric current in a solenoid configuration) and this indeed implies that work is done for reducing the entropy of the material associated to the allignment process. Then again, being the randomly-oriented domains the thermodynamically stable state, magnetized ferromagnets tend to de-magnetize with the passage of time, and as a result their magnetic field response becomes an hysteresis loop. The energy involved in setting-up the induced magnetic field of the magnet, is both irreversibly transformed into the thermal energy dissipated from the displacements of the domains, and (reversibly) stored in the magnetic field itself; the energy comes from the external power supply which powers the induction coil (electricity) and gets wirelessly transmitted through space to the magnet by the changing field in formation, which in turn generates a voltage drop in the coil. So energy is conserved (the amount added to the system as electrical power is converted to magnetic energy in the field and heat released) and the entropy of the universe strictly increases (because, even if the magnet's entropy is reduced by the magnetization process, the heat released to the environment causes an increase in entropy more than enough than needed to compensate it). Also, when it goes down the hysteresis loop, the decreasing magnetic field intensity induces eddy currents in the material, and the energy initially stored in the field finally becomes also irreversibly lost as heat to the environment. That means that in the end, all of the electric power initially introduced in the system gets converted to heat when the solid phase relaxes to it's thermodynamical stable state. - Your explanation is just what OP means by electron polarization--- he means that the upaired electron spins are aligned, as they are by the complicated mechanism you sketched out, so it's not wrong. This doesn't really answer the question--- the question was "prove perpetual motion is impossible with magnets". Also the dimensional thing at the end, by "maximum force the magnet can exert" he means the "maximum work the magnet can do", he isn't using the right language, but you can see what he means, and it's not nonsense. – Ron Maimon Aug 2 '12 at 3:17 I don't agree with you it's right to use the word "polarization" freely when talking about electromagnetic properties of materials -that is a technical, well-defined term which refers to the development of surface charges by microscopic induction of dipoles in dielectrics. He might not be talking nonsense, but the confusion that the OP seems to have between the concepts of force and work is usually settled by reading some introductory texts in physics. – Mono Aug 2 '12 at 3:31 But I agree that the final sentence is unnecessarily aggressive. I'll remove it. – Mono Aug 2 '12 at 3:33 1 Just read "magnetization" for "polarization", and "maximum work" for "maximum force". There's no point in berating someone over using jargon wrong when the intent is clear. – Ron Maimon Aug 2 '12 at 7:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9371723532676697, "perplexity_flag": "middle"}
http://chemwiki.ucdavis.edu/Physical_Chemistry/Thermodynamics/State_Functions/Entropy/Entropy_of_Mixing
More # Entropy of Mixing ##### Table of contents A gas will always flow into a newly available  volume and does so because its molecules are rapidly bouncing off one another and hitting the walls of their container, readily moving into a new allowable space. It follows from the second law of thermodynamics that a process will occur in the direction towards a more probable state. In terms of entropy, this can be expressed as a system going from a state of lesser probability (less microstates) towards a state of higher probability (more microstates). This corresponds to increasing the W in the equation $$S=k_BlnW$$. ### The Mixing of Ideal Gases For our example, we shall again consider a simple system of two ideal gases, A and B, with a number of moles $$n_A$$ and $$n_B$$, at a certain constant temperature and pressure in volumes of $$V_A$$ and $$V_B$$, as shown in Figure 1. These two gases are separated by a partition so they are each sequestered in their respective volumes. If we now remove the partition (like opening a window in the example above), we expect the two gases to randomly diffuse and form a homogenous mixture as we see in Figure 2. Figure 1. (Left) Two Gases $$A$$ and $$B$$ in their respective volumes and  (right) A homogenous mixture of gases $$A$$ and $$B$$. To calculate the entropy change, let us treat this mixing as two separate gas expansions, one for gas A and another for B. From the statistical definition of entropy, we know that $\Delta S=nRln \dfrac{V_2}{V_1} \;.$ Now, for each gas, the volume $$V_1$$ is the initial volume of the gas, and $$V_2$$ is the final volume, which is both the gases combined, $$V_A+V_B$$. So for the two separate gas expansions, $\Delta S_A=n_A R ln \dfrac{V_A+V_B}{V_A}$ $\Delta S_B=n_B R ln \dfrac{V_A+V_B}{V_B}$ So to find the total entropy change for both these processes, because they are happening at the same time, we simply add the two changes in entropy together. $\Delta_{mix}S = \Delta S_{A}+\Delta S_{B}=n_{A}Rln\dfrac{V_{A}+V_{B}}{V_{A}}+n_{B}Rln\dfrac{V_{A}+V_{B}}{V_{B}}$ Recalling the ideal gas law, PV=nRT, we see that the volume is directly proportional to the number of moles (Avogadro's Law), and since we know the number of moles we can substitute this for the volume: $\Delta_{mix}S=n_{A}Rln\dfrac{n_{A}+n_{B}}{n_{A}}+n_{B}Rln\dfrac{n_{A}+n_{B}}{n_{B}}$ Now we recognize that the inverse of the term $$\frac{n_{A}+n_{B}}{n_{A}}$$ is the mole fraction $$x_{A}=\frac{n_{A}}{n_{A}+n_{B}}$$, and taking the inverse of these two terms in the above equation, we have: $\Delta_{mix}S=-n_{A}Rln\dfrac{n_A}{n_A+n_B}-n_BRln \dfrac{n_A}{n_A+n_B}x_{B} = -n_A Rlnx_A -n_B Rlnx_B$ since $$lnx^{-1}=-lnx$$ from the rules for logarithms. If we now factor out R from each term: $\Delta_{mix}S=-R(n_{A}lnx_{A}+n_{B}lnx_{B})$ represents the equation for the entropy change of mixing. This equation is also commonly written with the total number of moles: $\Delta_{mix}S=-nR(x_A lnx_A+x_Blnx_B)$. where the total number of moles is $$n=n_A+n_B$$ Notice that when the two gases will be mixed, their mole fraction will be less than one, making the term inside the parentheses negative, and thus the entropy of mixing will always be positive. This observation makes sense, because as you add a component to another for a 2-component solution, the mole fraction of the other component will decrease, and the log of a number less than 1 is negative. Multiplied by the negative in the front of the equation gives a positive quantity. This equation also applies to ideal solutions as well as ideal gases. ### References 1. Chang, Raymond. Physical Chemistry for the Biosciences. Sausalito, California: University Science Books, 2005. ### Contributors • Konstantin Malley (UCD) The ChemWiki has 9226 Modules. Viewing 1 of 1 comments: view all http://www.che.ksu.edu/files/che/imported/7.pdf Posted 18:05, 12 May 2013 Viewing 1 of 1 comments: view all
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 16, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9190560579299927, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-algebra/99250-splitting-field.html
# Thread: 1. ## Splitting Field Let $F$be a field of characteristic $p$. Prove that $x^p-a$either is irreducible or splits in $F$... 2. if you prove one thing you are through ... all roots of this polynomial are equal (in some appropriate extension) 3. Originally Posted by ynj Let $F$be a field of characteristic $p$. Prove that $x^p-a$ either is irreducible or splits in $F$... i'll prove something more than what the problem is asking! i'll show that either $x^p -a$ is irreducible or $x^p - a = (x + c)^p,$ for some $c \in F$: suppose $x^p - a$ is not irreducible. then $x^p - a = g(x)(f(x))^m,$ for some polynomials $f(x), g(x) \in F[x],$ where $f(x)$ is irreducible with $1 \leq \deg f(x) < p, \ 1 \leq m \leq p,$ and $\gcd(f(x),g(x))=1.$ differentiating (formally) will give us: $0=px^{p-1}=g'(x)(f(x))^m + mg(x)f'(x)(f(x))^{m-1}.$ hence $mg(x)f'(x)=-g'(x)f(x),$ which is impossible unless $m=p, \ \deg g(x) = 0$ and $\deg f(x)= 1$ because $\gcd(f'(x),f(x))=\gcd(f(x),g(x))=1.$ so suppose $m=p, \ g(x) = \alpha$ and $f(x)=\beta x + \gamma.$ then $x^p-a = \alpha (\beta x + \gamma)^p=\alpha(\beta^p x^p + \gamma^p)$ and hence $\alpha \beta^p = 1, \ \alpha \gamma^p = -a.$ letting $c = \gamma \beta^{-1}$ we'll get $x^p - a = (x + c)^p. \ \Box$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 28, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9192222952842712, "perplexity_flag": "head"}
http://mathoverflow.net/questions/101196?sort=oldest
## Kähler potentials that depend only on geodesic distance ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Hermitian symmetric spaces of constant curvature have the property that the potential for their Kähler metric can be expresed as some function of the geodesic distance. Does anyone know if there are any general results concerning Kähler manifolds with this property? - Could you elaborate on this question? How can a metric fail to be a function of geodesic distance? – Igor Rivin Jul 3 at 14:58 @Igor: He means that a potential for the metric is of the form $f(x)=h(d(p,x))$ where $p$ is a fixed point and $h$ is some function of one variable. – Misha Jul 3 at 21:52 For a hermitian symmetric space of constant curvature, this holds for any point $p$ (as defined in Misha's comment). It should not be difficult to give an example where it holds for only one particular choice of $p$ but not for others. So presumably the question is about manifolds where the potential can be written as a function of distance from a point $p$, for any choice of $p$? – Deane Yang Jul 3 at 22:13 Also, I am under the impression that the statement is not literally true for a compact manifold, since the Hessian of the distance function has to degenerate at the cut locus. It seems to me that the question needs to be stated more carefully. – Deane Yang Jul 3 at 22:14 @Misha: Correct Misha. The Fubini-Study, Begman, and the Euclidean metrics all have potentials of this form. @Yang: I'm speaking locally here; obviously the cut locus will need to be avoided. However, the base point $p$ is arbitrary; you simply avoid the cut locus of $p$. I've thought about this some more and I realized that the potential is also a function of $|z|^2=\sum_i|z_i|^2$. Here $(z_1,\cdots, z_n)$ are local coordinates. Rotationally symmetric metrics have this property. – Oliver Jones Jul 4 at 0:02 ## 2 Answers Update: I have had a little time to think about this further and have been able to show that if a smooth Kähler metric $g$ on a complex $n$-manifold $M$ has a potential $f$ that is a function of the distance from a point $p\in M$, then the metric must be locally rotationally symmetric about $p$, i.e., the group of local $g$-isometries that fix $p$ is isomorphic to $\mathrm{U}(n)$. In particular, this shows that, if it has this property with respect to every point $p\in M$, then it must have constant holomorphic sectional curvature. I'll describe the steps in my argument, but I won't put in the details (which are somewhat long) unless there is interest. I'm leaving my original answer below, for notational purposes, even though that preliminary analysis was inconclusive. Fix notation as in the original answer: There is an open $p$-neighborhood $U\subset M$ on which there exists a Kähler potential $f$ for $g$ that is a function of the $g$-distance $r:U\to\mathbb{R}$ from $p$, i.e., $f = h\bigl(r^2\bigr)$ for some function $h$ and $\frac{i}{2}\partial\bar\partial f = \Omega$, where $\Omega$ is the Kähler form of $g$. (Clearly, one can assume that $h(0)=0$, and it is not hard to show that $h$ must be smooth on $[0,\epsilon)$ for some $\epsilon>0$ and must satisfy $h'(0) = 1$.) Because $r$ satisfies $|\nabla r|^2 = 1$, it follows that $|\nabla f|^2 = 4 r^2 h'(r^2)^2 = \phi(f)$ for some function $\phi$ that satisfies $\phi(0)=0$ and $\phi'(0)=4$. Elementary identities then show that $f$ must satisfy the equation $$\partial f \wedge \bar\partial f \wedge \bigl(\partial\bar\partial f\bigr)^{n-1} = \frac{1}{4n}\phi(f)\ \bigl(\partial\bar\partial f\bigr)^{n}.$$ Conversely, if $f$ is any solution of this equation on a neighborhood of $p\in M$ that satisfies $f(p) =0$, $df_p=0$, and $\frac{i}2\partial\bar\partial f >0$ at $p$ (in the sense of $(1,1)$-forms, then the Kähler form $\frac{i}2\partial\bar\partial f$ deinfes a Kähler metric $g$ such that $|\nabla f|^2 = \phi(f)$ and, from this it easily follows that $f$ is a function of the $g$-distance from $p$. It remains now to show that such an $f$ must be rotationally symmetric in the above sense with respect to some $\mathrm{U}(n)$-action on a neighborhood of $p$. This problem can be simplified in the sense that, if such an $f$ exists, it can be shown that there is a function $\psi$ defined on a neighborhood of $0$ such that $\psi(0)=0$ and $\psi'(0)=1$ and such that $s = \psi(f)$ will satisfy $$\partial s \wedge \bar\partial s \wedge \bigl(\partial\bar\partial s\bigr)^{n-1} = \frac{1}{n}s\ \bigl(\partial\bar\partial s\bigr)^{n}. \tag1$$ In other words, one can reduce to the case $\phi(t) = 4t$ (which is the same as saying that one reduces to the case in which $f = r^2$). Thus, we assume that $s$ satisfies (1) from now on. The next step is to show, using a straightforward Taylor series argument, that, for any $k\ge2$, there exists a local, $p$-centered holomorphic coordinate chart $z_k:U_k\to\mathbb{C}^n$ in which $s - |z_k|^2$ vanishes to order $k$. This shows that the Kähler metric associated to $s$ is flat to infinite order at $p$, but, since we do not (yet) know that $g$ is real analytic, this does not show that $s$ is flat everywhere. (Also, one does not (yet) know that the sequence of coordinate charts $z_k$ has a convergent subsequence.) Finally, one uses the fact that each geodesic through $p$ lies in a totally $g$-geodesic complex curve passing through $p$ to show that the $(2,0)$ part of the Hessian of $s$ restricts to each such curve to be generated by pullback of a holomorphic map from the complex curve to the Hermitian symmetric space $\mathrm{Sp}(n,\mathbb{R})/\mathrm{U}(n)$. Since this map must have its differential vanish to infinite order at $p$ (by the argument in the previous paragraph), it follows that it must be a constant map, which implies that the $(2,0)$ part of the Hessian of $s$ must vanish identically, i.e., that $\mathrm{Hess}(s) = g$. From this, it is elementary to conclude that $g$ must be flat and hence $s$ must be invariant under the $\mathrm{U}(n)$-rotations. Since the original $f$ is a function $s$, it, too, must be invariant under this $\mathrm{U}(n)$-action, and, in particular, must be of the form $f = h\bigl(|z|^2\bigr)$ for some function $h$, as desired. Original Answer: This is only a preliminary answer, but it's too long to go into a comment, so I'm putting it here. I'll add to it when I have the time to figure out more about it. It seems that there are two questions here, a pointwise one and a local one: First, let's say that a (smooth) Kähler metric $g$ on a complex manifold $M$ is polar at $p\in M$ if there is an open $p$-neighborhood $U\subset M$ on which there exists a Kähler potential $f$ for $g$ that is a function of the $g$-distance $r_p:U\to\mathbb{R}$ from $p$, i.e., $f = h\bigl({r_p}^2\bigr)$ for some function $h$ and $\frac{i}{2}\partial\bar\partial f = \Omega$, where $\Omega$ is the Kähler form of $g$. (Clearly, one can assume that $h(0)=0$, and it is not hard to show that $h$ must be smooth on $[0,\epsilon)$ for some $\epsilon>0$ and must satisfy $h'(0) = 1$.) Let's say that such an $f$ is a polar potential for $(g,\Omega)$ at $p$. If a polar potential at $p$ exists on some $\epsilon$-ball about $p$, it is unique on that ball. Now, a polar potential satisfies a natural differential equation, namely, the differential equation that states that the $g$-gradient lines of $f$ are $g$-geodesics. This is, typically, an overdetermined equation for a pseudoconvex potential $f$, so one can hope to get some information by using this. In fact, calculation shows that this implies that each of the geodesics emanating from $p$ lies in a unique, totally geodesic complex curve passing through $p$. Moreover, the induced metrics on all of these complex curves (one tangent to each complex line in $T_pM$) are isometric and, moreover, they are each possess an isometric rotation about $p$ within the complex curve. In complex dimension $1$, this implies that the metric is rotationally symmetric about $p$, so the polar potentials are the rotationally symmetric ones. In particular, in dimension $1$, a Kähler metric that is polar at every point has constant curvature and hence is locally symmetric. In higher dimension, since the curves are totally geodesic and isometric, they all have the same curvature at $p$ and hence it follows that the ambient metric has constant holomorphic sectional curvature at $p$. In higher dimensions, it appears that, while rotationally symmetric (i.e., $\mathrm{U}(n)$-invariant) pseudoconvex potentials are polar with respect to the center of rotation, these are not the only solutions of this equation. Already in dimension $2$, it appears, from the structure equations, that there are (local) pseudoconvex polar potentials that are not rotationally symmetric. I haven't had time to integrate the structure equations, though, so I can't yet say what these solutions look like. However, by the above argument, a Kähler metric that is polar at every point must have constant holomorphic sectional curvature, and hence the only metrics that have this property at every point are the complex space forms. - Robert, is the term \textit{polar} your terminology? If so, what motivated it? I found a paper by Gromov entitled "K\"{a}hler Hyperbolicity and $L_2$-Hodge Theory" in which he states that for Hermitian symmetric spaces the K\"{a}hler potential is expressible as a function of geodesic distance. I didn't see any restrictions on curvature. Have I missed something? – Oliver Jones Jul 10 at 3:36 @Oliver: Yes, I chose polar arbitrarily so that I'd have something to call the property; I was thinking of 'pole of rotation', nothing deeper than that. I'm surprised that Gromov would claim this; it's not true for the product of the Poincaré disk and the complex plane, which is an Hermitian symmetric space, albeit a reducible one. Perhaps Gromov meant to write 'rank 1 Hermitian symmetric space' and inadvertently omitted the rank 1 part? I can put in a sketch of the argument if you are interested. – Robert Bryant Jul 10 at 12:38 @Oliver: Here's a way to see that the irreducible higher rank Hermitian symmetric spaces can't be polar (at any point): The polar property is clearly inherited by any totally geodesic complex submanifold (with the induced Kähler structure). So, for example, if the real Grassmannians $Gr(2,n)=SO(n)/(SO(2)SO(n))$ were polar for $n>3$, then $Gr(2,4)=S^2xS^2$ would be polar, but it's not. The Kähler potential for $S^2$ is $f(r) = 8\log(\sec(r/2))$ and the distance function for $S^2xS^2$ is $r = \sqrt(r_1^2+r_2^2)$, but, clearly, no function of this $r$ can be a Kähler potential for $S^2xS^2$. – Robert Bryant Jul 10 at 19:48 @Robert: Gromov had the proviso that the Hermitian symmetric space has no Euclidean factor. That takes care of your product example. Sorry for the omission. His paper is available online or I can send it to you if you're interested. – Oliver Jones Jul 10 at 22:08 @Oliver: I looked at Gromov's paper; the example you quote is on the second page. He clearly intended the polar claim for all Hermitian symmetric spaces, just not the claim of boundedness for $df$ (which is what he really cared about) if it has an Euclidean factor. In spite of his claim, Hermitian symmetric spaces of rank bigger than one do not have the polar property (whether it has an Euclidean factor or not). The argument that I gave above for $S^2xS^2$ works for the product of two Poincaré disks as well, which shows that this product, which has no Euclidean factor, isn't polar either. – Robert Bryant Jul 11 at 5:16 show 2 more comments ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. @Robert: This was too long for the comment field and so I'm putting it here. I have another example which backs up what you're saying. Hermitian symmetric spaces of compact type admit a K\"{a}hler embedding into complex projective space. If we consider two points $x$ and $y$ on the embedded submanifold $M$ then there are two distances; the geodesic distance $d_M(x,y)$ in the submanifold and the Fubini-Study distance $d_{FS}(x,y)$ in the projective space. An implication of your result is that it is not possible to express $d_{FS}(x,y)$ as a function of $d_M(x,y)$ (except for complex space forms). I looked at the case of the complex Grassmannian for which there are explicit formulas for geodesic distance in terms of stationary angles. It's clear that you are correct; a simple calculation shows that it's not possible to express the Fubini-Study distance solely as a function of the geodesic distance. Thanks very much for your answer. - @Oliver: Well, since the Kähler isometric embedding of a compact type Hermitian symmetric space into a complex projective space that you speak of is not totally geodesic in the higher rank case, the argument I was mentioning above doesn't really apply. I don't see how you are drawing your nonexistence conclusion. I can see that this says that there is a Kähler potential that is expressed in terms of $d_{FS}$ and it's clear that $d_{FS}$ is not a function of $d_{M}$, but how does that prove that there's not a different Kähler potential that is expressible in terms of $d_{M}$? – Robert Bryant Jul 12 at 11:59 @Robert: yes, good point. When formulating the example above I had in mind Calabi's diastasis potential. This has a nice property with respect to Kahler submanifolds. However, the diastasis is constructed from an arbitrary potential and I don't think it would be difficult to show that if there existed a polar potential then the diastasis must also be polar. I'll check. – Oliver Jones Jul 12 at 22:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 162, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9493653178215027, "perplexity_flag": "head"}
http://mathoverflow.net/questions/tagged/monodromy
## Tagged Questions 1answer 199 views ### Fibered knot with periodic homological monodromy It is well-known that there exist pseudo-Anosov automorphisms of surfaces that act trivially on the homology: they form the Torelli group. Similarly there exists pseudo-Anosov auto … 0answers 191 views ### lifts of maps to $\mathcal{M}_{1,1}$ Hi, here's there's a construction about elliptic curves that I do not completely understand. Suppose I consider the two following families of elliptic curves over $\mathbb{C}^*$. … 2answers 150 views ### Genus one fibered links It is well-known that the only genus one fibered knots are the trefoil and the figure-eight. On the other hand, there exist infinitely many fibered links for any fixed higher genus … 1answer 112 views ### An analog of Picard-Lefschetz theory for finite coverings in lieu of embeddings Suppose that $f\colon X\to \mathbb P^N$ is a finite morphism, where $X$ is a smooth projective variety over $\mathbb C$. Then one may consider monodromy of the (singular) cohomolog … 1answer 205 views ### weight monodromy conjecture for curves? Hi, Is there a simple proof of the weight monodromy conjecture in the case of a curve over a mixed characteristic local field? Thanks! 0answers 284 views ### Grothendieck monodromy theorem for l-adic sheaves Hi, Suppose that $F$ is a local field, $G_F$ its Galois group, $I$ the inertia subgroup, $k$ its residue field. Let $X$ be a finite type scheme over $k$. Let $C$ be a constructibl … 3answers 460 views ### Monodromy group of 1-dimensional families of hyperelliptic curves If $f: \mathcal{C} \rightarrow \mathcal{P}_{2g+2}$ is a general family of hyperelliptic curves (defined over $\mathbb{C}$), we know that the algebraic connected monodromy group \$M … 0answers 378 views ### Rigid Uniformization vs Grothendieck’s Local Monodromy Theory I've noticed that some interesting results about abelian varieties can each be proven using one of two ways: the theory of rigid uniformization of abelian varieties or Grothendieck … 1answer 294 views ### Quasi-unipotent monodromy for general families This must be a naive question, but I'm wondering about the definition of the quasi-unipotent monodromy for general (not only 1-parameter families). The problem is that usually in t … 3answers 450 views ### Relationship between monodromy representations and isomorphism of flat vector bundles This question is somehow related to this one. Let $M$ be a smooth (compact, if you wish) connected manifold. Then, it is well known that there is an equivalence between the isom … 1answer 263 views ### What is the mod l monodromy of a generic trigonal curve? For a hyperelliptic curve H, the mod 2 monodromy is smaller than $GSp_{2g}(F_2)$ -- since the two torsion of the Jacobian H is generated by differences of Weierstrass point, the mo … 0answers 122 views ### Irreducibility of monodromy of eigenspaces of families of cyclic coverings In the article "La conjecture de Weil", Deligne proves that for the primitive cohomology of a universal family $f:X \rightarrow S$ for $M_{d,n}$ the moduli stack of hypersurfaces o … 2answers 249 views ### Analogue of Shafarevich-Ogg’s theorem over complex numbers Let $f:E\to D^*$ be a family of complex elliptic curves parametrized by the punctured open disk $D^*.$ Assume that the monodromy on $H^1$ is trivial (i.e. $R^1f_*\mathbb Z$ is a co … 1answer 602 views ### Monodromy groups of families of abelian varieties: a reference request In Serre's letter to Vigneras of 2 Oct 1986, he summarizes a course he's giving in Paris, explaining how to control the image of the mod-l Galois representations attached to abelia … 1answer 448 views ### monodromy and global cohomology Let $C$ be a compact Riemann surface, and let $U$ be a Zariski open subset in $C.$ Let $L$ be a local system (with coefficients $\mathbb C$ or $\mathbb Q_{\ell}$) on $U.$ For each … 15 30 50 per page
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 29, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8800892233848572, "perplexity_flag": "middle"}
http://vladimirkalitvianski.wordpress.com/2009/05/22/problem-of-infinitely-big-corrections/
# Reformulation instead of Renormalizations by Vladimir Kalitvianski ## Problem of infinitely big corrections In this web log I would like to share my findings on reformulation of problems with big (infinite or divergent) perturbative corrections and discuss them. (The blog is regularly updated so do not pay attention to the date – it is a starting date.) I myself encountered big (divergent) analytical perturbative corrections in practice long ago; it was the beginning of my scientific career (1981-1982 years). It was a simple and exactly solvable Sturm-Liouville problem, with transcendental eigenvalue equations solvable exactly only numerically. Analytical solutions (series) were divergent. First I thought to develop a renormalization prescription to cope with  the “bad” perturbative expansion, as I was taught to at the University, but soon I managed to reformulate the whole problem with choosing a better initial approximation by a better variable choice (variable change (see [1])). Since then I have been persuaded that we have to seek a physically/mathematically better initial approximation each time when the perturbative corrections in calculations are too big (in particular, infinite). In fact, here may be at least two types of difficulties: 1) A particular physical and mathematical problem has exact, physically meaningful solutions, but perturbation theory (PT) corrections are divergent, like in the Sturm-Liouville problem considered in my articles. Then a better choice of  the initial approximation may improve the PT series behaviour. No renormalizations are necessary here (although possible, see Appendix 5 in [1]). 2) A particular physical and mathematical problem has not any physically meaningful solutions and PT corrections are divergent, like in theories with self-actions. In this case no formal variable change can help – it is a radical reformulation of the theory (new physical equations) which is needed. In about 1985, considering non-relativistic scattering of charged projectiles from atoms, I derived the positive charge atomic form-factors $f_{nn^\prime}(\vec{q})$ surprisingly unknown to the wide public (English publication is in [2]).  These form-factors described correctly the physics of elastic, inelastic, and inclusive scattering to large angles. Briefly, according to my results, scattering from an atom with a very large momentum transfer is inelastic rather than the elastic, Rutherford. All textbooks describe it in a wrong way – they obtain an elastic cross section due to erroneously neglecting an essential (“coupling”) term. This physics is quite analogous to that of QED with its soft radiation which accompanies any scattering in reality (also inelastic channel), but which is not obtained in the first Born approximation in the theory. QED does not obtain the soft radiation due to decoupling the quantized field from the charge in the initial approximation. Solution for a coupled system (charge + filed oscillators) is not known. In my “atomic” case the corresponding “coupled” solution is formally known and unambiguous, at least conceptually, and this helped me construct a better initial approximation in QED – by a physical ansatz, so that I obtain now the soft radiation automatically. Let me underline here that the QFT Hamiltonians are guessed. And the “standard guess” includes a self-action term first appeared in H. Lorentz works. The self-action idea was supposed to preserve the energy-momentum conservation laws in the point-like electron dynamics, but it failed – it led to infinite correction to the electron mass and “runaway” exact solutions after discarding the infinity (after mass “renormalization”).  In other words, the self-action ansatz in a point-like charge model is just wrong. Many physicists have tried to resolve this problem – to advance new equations with new physics.  They were M. Born, L. Infeld, P. Dirac, R. Feynman, and many many others. As I said, in this case no variable change can help – it is a reformulation of the theory (equations) which is needed and what has been sought by researchers. I personally found that the energy-momentum conservation laws can be preserved in a different, more physical way, if one considers the electron and the electromagnetic filed as features of one compound system: intrinsically coupled charge and field. A physical and mathematical hint of this coupling is the following: as soon as the charge acceleration excites the field oscillators, the charge is a part of these oscillators. Then the external force work splits into two parts – acceleration of the center of inertia of the compound system and exciting its “internal” degrees of freedom (oscillators). So I propose to start from different theory formulation – without self-action, but with another coupling mechanism. This should be done non perturbatively – from the very beginning, just by constructing a better, more physical initial Hamiltonian. Here my understanding corresponds to that of P. Dirac’s who insisted in searching new physical ideas and new Hamiltonians (see, for example, The Inadequacies of Quantum Field Theory by P. Dirac. Reminiscences about a Great Physicist / Ed. B. Kursunoglu, E.P. Wigner. — Cambridge: Univ. Press, 1987. P. 194-198.) In the “mainstream” theories it is the renormalizations that fulfil this “dirty job” perturbatively – they discard unnecessary self-action contributions to the fundamental constants at each PT order. Renormalizations are in fact a transition to another, different result or to the perturbative solution of  different, unknown equations. Recently I found a similar explicit statement by P. Dirac in his “The Requirements of Fundamental Physical Theory”,  Europ. J. Phys. 1984. V. 5. P. 65-67 (Lindau Lecture of 1982). Being done perturbatively, such a transition is not quite visible. Usually everything is presented as the constant redefinitions in the frame of the same theory. As a result, it is not clear at all to what formulation without self-action the renormalized solutions correspond and if they are physical at all. A very simplified analysis of the renormalization “anatomy” in its “working” in an exactly solvable problem is presented in [3] (see also Transparent_Renormalization_1.pdf). In this web-log, in order to demonstrate all this, I am going to present flawless and transparent examples rather than hand waving. References to available publications are the following (they are English translations and adaptations of my Russian publications): [1] “On Perturbation theory for the Sturm-Liouville Problem with Variable Coefficients”, http://arxiv.org/abs/0906.3504. [2] “Atom as a “Dressed” Nucleus”, http://arxiv.org/abs/0806.2635 (invited and published in CEJP, V. 7, N. 1, pp. 1-11, (2009), http://www.springerlink.com/content/h3414375681x8635/?p=309428ad758845479b8aeb522c6adfdd&pi=0), and [3] “Reformulation instead of Renormalizations”,   (an APPENDIX recently added ), http://arxiv.org/abs/0811.4416. [4] “A Toy Model of Renormalization and Reformulation”, http://arxiv.org/abs/1110.3702 With time I am going to develop, improve them and add new examples to this blog. I was repeatedly told that my style of writing is too absolutist and imperfect anyway. I apologize for that. It is not my goal to offend anyone. I do not consider the people advocating self-action and renormalizations as stupid or evil. I consider them as “trapped” and innocent. My expositions, made simple on purpose, are written just to present the moment when and how we all got trapped in this trap. This subject turned out to be extremely tricky for researchers and the only known “resort” has been the “renormalization prescription” for a too long time. Fortunately now there is another physical and mathematical solution and I try to advance it in my works. First of all it is, of course, a new physical insight that makes it possible to reformulate physical problems in the micro-physics. It “contradicts” to the very idea of “elementary” (in the true sense!) particles. That is why it has been hard for fundamental physicists to figure it out – the mainstream development in micro-physics is based on attempts to deal with “elementary”, independent, separated particles. This idea turned out to be blocking the right insight. On the other hand, the quasi-particle ideas and solutions are widely used in many-body problems. Agree, if some particles are in interaction, they can form compound (non elementary) systems. And some compound systems cannot be ever “disassembled”, unlike bricks in a wall. Some compound systems are “welded” by nature rather than made of “separable” bricks. In a compound system the observable variables are those of quasi-particles [3]. So, the electron and the quantized electromagnetic field, always coupled together, form a compound system – I call it an electronium. The photons in it remain photons, the electron remains the electron; what is different is the way how they are coupled in the electronium. The electron is not free any more, but it moves in electronium around the electronium center of inertia, somewhat similarly to the nucleus motion in an atom [2] (the nuclei in atoms are not free). Indeed, it is known that charge-field interaction cannot be “switched off”, even “adiabatically”. The notion of electronium implements this intrinsic property of the charge nature by construction. The photons are just excited states of the electronium – they are quasi-particles describing the “relative” or “internal” motion of this compound system [2, 3]. The electron (a charge) is a part of oscillators and is the external force application point. In the frame of such a compound system the energy-momentum conservation laws hold without the electron’s “self-action”. That is why no corrections to mass (=rest energy) and charge (=coupling constant between “particle” and “wave” subsystems) arise in my approach. The true understanding of electronium is only possible in Quantum Mechanics. It is based on the notion of charge form-factor. The latter describes the charge “cloud” in a bound state. It is practically unknown, but true, that the positive (nucleus electric) charge in an atom is quantum mechanically smeared, just like the negative (electron) charge [3] in a smaller volume. It is also described with an atomic (positive charge or “second”) form-factor, so the positive charge in an atom is not “point-like”. The positive charge “cloud” in atoms is small, but finite. It gives a natural “cut-off” or regularization factor in atomic calculations just because of taking the electron-nucleus coupling exactly rather than perturbatively. Similarly, the electron charge in electronium is quantum mechanically smeared. This gives correct physical and mathematical description of quantum electrodynamics: emission, absorption, scattering, bound states, and all that – without infinities since the electronium takes into account exactly the charge-field coupling – by construction. Thinking of electron as of a free point-like particle is not correct since the point-like free “elementary particle” appears as the inclusive, secondary picture, not a fundamental one (see [2] for details). The point-like electron “emerges” from this theory as the inclusive, classical or average picture. Any mathematician knows that the “better” is the initial approximation in a Taylor series, the smaller are corrections to it. (“Better” here means closer to the exact function.) So the problem of “big” corrections is often the problem of “bad” choice of the initial approximation in an iterative procedure. It is the case 1. In the theoretical physics it holds as well as in the mathematics – the problems are formulated as mathematical problems describing a given physical situation. Theorists choose the total Hamiltonians and the initial approximations following their ideas about physical reality. Unfortunately one can easily obtain the case 2 where the very formulation is non physical and the divergences just show it. I consider the point-like electron model, free electromagnetic field, and the “self-action” ansatz (by H. Lorentz) to be the worst ones although explainable historically. It failed as a physical model (corrections to mass, runaway solutions). Worse, it has given a bad example to follow – the mass renormalization and the perturbative “treatment” of the non-physical remainder. The notion of “infinite bare” mass and an “infinite mass counter-term” is the top of “bad” physics. As long as we follow the flawed approach, we will not advance in physical description of many phenomena. This is what we see nowadays. Fortunately the theory can be reformulated in quite physical terms. The only sacrifice to do on this way is the idea of “elementariness” of electron in the sense of its being “free” of electromagnetic field and being just a “point-like” in reality. My research is not finished yet - I am quite busy with other things at my job. I do not hold an academic position. On the contrary, I am on subcontract works implying no freedom and strict timing for each subcontract. As soon as I find a grant or a position (or at least a part time position) to be able to devote myself to the relativistic calculations, I will carry out the Lamb shift and anomalous magnetic moment calculations at higher orders. If you hold a post in science with sufficient responsibilities , you may take an initiative to make my researching possible. I cannot do everything on my own and the resistance of renormalizators is very high. If you are an extremely rich person, consider sponsoring my research via my PayPal account (all you need for that is my e-mail address). Any constructive proposals/discussions/questions are welcome. Vladimir Kalitvianski. vladimir.kalitvianski@wanadoo.fr ———————- P.S. Funny video of coffee cup experiments at work. You may think it’s a telekinesis, but it’s not: ### Like this: Tags: electronium, kalitvianski, QED, reformulation, renormalization This entry was posted on May 22, 2009 at 13:27 and is filed under Uncategorized. You can follow any responses to this entry through the RSS 2.0 feed. You can leave a response, or trackback from your own site. ### 3 Responses to “Problem of infinitely big corrections” 1. carlbrannen Says: October 5, 2010 at 20:27 | Reply One of the mysteries of Koide’s mass equation (reference: http://arxiv.org/abs/0812.2103 ) is why it works on the measured masses rather than the masses before renormalization. Consequently, it does seem that these ideas mate perfectly with what you’re doing. By the way, regarding “I was repeatedly told that my style of writing is too absolutist and imperfect anyway.” I believe that progress in physics comes about only through great effort. To make that effort requires that the worker believe in what they are doing. I encourage other people to believe in their theories and to expend a lot of energy trying to understand them better. So I don’t have a problem either with people being absolutist, or with them disbelieving my own ideas. It is just healthy physics for us all to disagree. • Vladimir Kalitvianski Says: October 11, 2010 at 12:48 | Reply Masses before renormalization – nobody can tell what it is. In fact, there are only measured masses in the theory. Perturbative “corrections” to them are just discarded, and this is the true meaning of the renormalization, see “Transparent Renormalization” paper in my research forum http://groups.google.com/group/qed-reformulation. This discarding removes unnecessary self-action and leaves the interaction in the total Hamiltonian. 2. Peter Morgan Says: January 28, 2011 at 23:36 | Reply Came here from PhysicsStackExchange, where I seem to be learning quite a bit quite fast; I hope I really am and that it continues. Like Carl Brannen, I encourage you to keep at it, even though I think it’s not clear enough fast enough what your big idea is from this post (there’s no poll entry that seemed to fit my response very well). I have my own idea, which broadly is to work with quantum fields as statistical fields of their own particular kind, and your idea doesn’t seem to have a fit with mine. Given finite time, just as you say of yourself, I expect only to glance at your stuff from time to time in the future to see whether you’ve managed to move forward. Good luck. %d bloggers like this:
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 1, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9257926344871521, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/47571/path-integral-with-zero-energy-modes/47637
Path integral with zero energy modes Consider the field integral for the partition function of a free non-relativistic electron in a condensed matter setting, i.e. $$Z = ∫D\bar\psi D\psi \exp\left(-\sum_{k,ω} \bar\psi_{k,ω} (-iω + \frac{k^2}{2m} - \mu) \psi_{k,ω}\right)$$ where the action is written in Fourier representation and $\mu$ denotes the chemical potential. Now, this integral is well known to be the determinant $$Z = \det\left(β\left(-iω+\frac{k^2}{2m} - \mu\right)\right)$$ which is equal to the product of all eigenvalues of the quadratic form in brackets. But here is my problem: How to calculate a quadratic path integral if the quadratic form has some eigenvalues that are equal to zero? If the chemical potential $\mu$ is positive, then all all momenta with $\frac{k^2}{2m} = \mu$ (the Fermi surface) will represent an eigenvalue equal to zero and would force the determinant to become zero. Of course, I could just drop the problematic eigenvalues from the determinant and call it a day, but unfortunately, I would like to understand quantum anomalies a bit better and zero energy eigenmodes are important for understanding the axial quantum anomaly for example of the $1+1D$ Schwinger model. Fujikawa's book on quantum anomalies argues that the axial anomaly comes from an asymmetry of zero modes of the Dirac operator, but I am very confused because a zero mode would make the determinant of the Dirac operator and hence the path integral vanish. How to make sense of this? - The $\omega$ are discrete Matsubara frequencies. In the case of fermions these can never be zero. For bosons, zero modes can indeed appear. This is related to Bose condensation. You can have anomalies in non-relativistic systems with a Fermi surface, but that's a more complicated story, see for example arxive:1203.2697. – Thomas Dec 26 '12 at 1:00 Ok, so you're saying that the $ω_n$ are always odd multiples of $2π/T$ and $ω=0$ is never among them. But then, I don't understand how zero modes of the Dirac operator can influence the path integral and be responsible for the axial quantum anomaly. (Maybe I should spell out the latter in more detail.) – Greg Graviton Dec 26 '12 at 10:50 You would first have to include chiral fermions and a coupling to gauge fields. – Thomas Dec 27 '12 at 0:18 1 Answer Quillen generalized the definition of the determinant of an oparator to a form applicable to operators with zero modes, between finite or infinite dimensional Hilbert spaces: $D: \mathrm{H_1} \rightarrow \mathrm{H_2}$ According to this generalization, the determinant is not a C-number but an element of a one dimensional vector space : $\mathrm{Det}(D) = (\wedge^{top}( \mathrm{H_1}/\mathrm{ker}(D)))^{\dagger} \wedge^{top}\mathrm{img}(D))$ Where $\wedge^{top}$ denotes the top wedge product. This basically means that we do not include the zero modes in the eigenvalue product. For example consider a three dimensional matrix $A$ without zero modes, then its determinant according to Quillen is: $\mathrm{Det}(A) = e_1^{\dagger} \wedge e_2^{\dagger} \wedge e_3^{\dagger} \wedge A e_1 \wedge A e_2 \wedge A e_3 = \mathrm{det}(A) e_1^{\dagger} \wedge e_2^{\dagger} \wedge e_3^{\dagger} \wedge e_1 \wedge e_2 \wedge e_3$ Where $\mathrm{det}$ is the conventional matrix determinant. Notice that the Quillen determinant in this case is just the conventional determinant multiplied by the one dimensional unit vector $e_1^{\dagger} \wedge e_2^{\dagger} \wedge e_3^{\dagger} \wedge e_1 \wedge e_2 \wedge e_3$. Now, it is not difficult verify that the determinant of a diagonal matrix $A = \mathrm{diag} [ \lambda_1, \lambda_2, , 0]$ with zero eigenvalues will be just the product of its nonvanishing eigenvalues times the unit vector composed from the top wedge product the einvectors with nonvanishing eigenvalues: $\mathrm{Det}(A) =\lambda_1 \lambda_2,e_1^{\dagger} \wedge e_2^{\dagger} \wedge e_1 \wedge e_2 \equiv det^{'}(A) e_1^{\dagger} \wedge e_2^{\dagger} \wedge e_1 \wedge e_2$ Where $det^{'}(A)$ is the determinant on the subspace excluding the zero modes. Please notice that now $e_3$ disappeared from the top wedge product. Relation to anomalies: The scalar value $\lambda_1 \lambda_2$ of the Quillen determinant is basis dependent, because if one applies a unitary transformation: $A \rightarrow U^{\dagger} A U$ Only the full top wedge product $e_1^{\dagger} \wedge e_2^{\dagger} \wedge e_3^{\dagger} \wedge e_1 \wedge e_2 \wedge e_3$ is invariant but not the partial one: $e_1^{\dagger} \wedge e_2^{\dagger} \wedge e_1 \wedge e_2$ .Thus the scalar value of the determinant changes. Thus in this case: $\mathrm{Det}(U^{\dagger} A U) =c(A, U) \mathrm{det^{'}}(A)e_1^{\dagger} \wedge e_2^{\dagger} \wedge e_1 \wedge e_2$ Where $c(A, U)$ is a scalar depending on $A$ and $U$. Consequently, the Quillen determinant is not invariant under unitary transformations. Applying two consecutive unitary transformations one observes that the additional scalar must satisfy the relation: $c(A, UV) = c(V^{\dagger} A V, U) c(A, V)$ This relation is called the one cocycle condition. This phenomenon occurs when $\mathrm{D}$ is a Dirac operator in the background of a gauge field. Due to the fact that there exist zero modes, a unitary transformation on the spinors and the gauge fields gives rise to a scalar multiple to the determinant stemming from the anomaly. Basically, there is one type of function of a gauge field and a unitary operator which satisfies the one cocycle condition (up to a constant multiple). Please see the following lecture notes and the following article by M. Blau for further reading. - Thanks for your answer! If I understand that correctly, the unitary transformation $U$ is assumed to preserve $\ker D$? Otherwise, the transformed wedge product would not be a scalar multiple of the old one, but a multiple of, say $e_1^\dagger\wedge e_3^\dagger \wedge e_1 \wedge e_3$. (I'm thinking of a permutation $Ue_1 = e_1, Ue_2 = e_3, Ue_3 = e_2$) – Greg Graviton Dec 27 '12 at 10:15 Thanks for the references as well. Unfortunately, I am unable to access the article by Blau from my university. :( – Greg Graviton Dec 27 '12 at 10:16 @Greg, 1) The unitary transformation U does not need to preserve $\mathrm{ker}(D)$, sorry for not emphasizing that one must project on the top form after performing the action, because the top form subspace is one dimensional. I'll try to add in a few days an explicit computation of the scalar multiple and the cocycle condition. 2) I'll be happy to help if you need me to send a copy of the article. – David Bar Moshe Dec 28 '12 at 5:10 – Greg Graviton Dec 28 '12 at 16:55 – David Bar Moshe Dec 30 '12 at 4:34 show 1 more comment
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 34, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8870165348052979, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-algebra/191022-basis-0-1-n-operations-induced-boolean-algebra-0-1-a-2.html
# Thread: 1. ## Re: a "basis" in {0,1}^n with operations induced from the boolean algebra {0,1} i think you can for $\{0,1\}^n$, especially since you know that spanning is the hard part. it might be more challenging to prove the more general result (an arbitrary idempotent semiring). 2. ## Re: a "basis" in {0,1}^n with operations induced from the boolean algebra {0,1} OK. It's actually very easy when you know what you want to prove. Unless I'm mistaken of course. Let $X\subseteq \{0,1\}^n$. Suppose X spans the space. Then there must be a combination of vectors in X that is equal to $e_i.$ Suppose $e_i\not\in X.$ Then since $X\neq\{0\},$ every vector in X has 1 on some coordinate different from $i$. No combination of such vectors can be equal to $e_i$. Therefore for any $i=1,2,...,n,\, e_i$ must be in X. If we suppose that X is independent, there can be no other vectors in it.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9456386566162109, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/156463-functions.html
# Thread: 1. ## Functions Maximum and minimum value of the function f(x)=1/2x^3-x^2 on the interval: -1 ≤x ≤1 2. What function are you talking about? $f(x)=\frac{1}{2x^3-x^2}$ or $f(x)=\frac{1}{2x^3}-x^2$ 3. I am talking about the function 0.5 of x^3 - x^2 4. Originally Posted by dvmasaka I am talking about the function 0.5 of x^3 - x^2 This really didn't help. $0.5\times (x^3 - x^2)$ or $(x^3 - x^2)^{0.5}$ or $\frac{1}{2 (x^3 - x^2)}$ or .... 5. I am going to assume you mean (1/2)x^3- x^2 just because it is simplest. Your notation is still ambiguous. Now, what have you learned about this? Do you know this theorem: "A function takes on it maximum and minimum values on an interval in one these places: 1) at the endpoints of the interval 2) where the derivative does not exist 3) where the derivative is 0. If I have the right function, it is a polynomial and its derivative always exists. Can you find the derivative?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9101680517196655, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/111329/varieties-with-infinitely-many-etale-covers-and-rational-points
## Varieties with infinitely many etale covers and rational points ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Let $X$ be a (smooth projective geometrically connected) variety over a field $k$. Consider the set Et$(X,k)$ of finite etale covers $Y\to X$ over $k$, with $Y$ geometrically connected over $k$. Assume Et$(X,k)$ is infinite. Consider the following question: Does $X$ have a $k$-rational point? The answer should be negative in general. In fact, I think one can construct a surface with infinitely many etale covers but no rational points by taking the product of two curves $C$ and $D$ over $k$, where $C$ has infinitely many etale covers and a rational point, but $D$ doesn't have any rational points. Then $C\times D$ has no rational points, but infinitely many covers. What if $X$ is a curve? Is $X(k)$ non-empty? Note that the converse is true if we consider curves of positive genus. That is, if $X$ is a curve of positive genus over $k$ with a $k$-rational point, then it has infinitely many etale covers. I'm mainly interested in the characteristic zero case, but comments on the situation in positive characteristic would also be interesting. - What if $X$ is an elliptic curve ? – Chandan Singh Dalawat Nov 3 at 3:02 @Chandan: Every elliptic curve $E$ has a $k$-rational point, by definition; namely $0$. More general $E(T)$ is a group (hence non-empty) for any $k$-scheme $T$. – Johan Commelin Nov 3 at 8:29 Of course. I got confused as to whether you wanted a rational point or didn't want any... – Chandan Singh Dalawat Nov 3 at 8:46 @Jan Hendrik: Do you allow étale covers $X_{k'}\to X$ obtained by finite separable extensions of $k$ ? – Qing Liu Nov 3 at 10:37 @Qing Liu. No, I want them to be geometrically connected over $k$. I should have written that in the question. – Jan Hendrik Nov 3 at 11:31 ## 1 Answer Maybe I misunderstand something, but don't all curves have etale covers? Embed $X$ in $J^1$ (divisors of degree $1$ modulo linear equivalence). Then $J^1$ is a torsor for the Jacobian $J$ and since $J$ has etale covers, e.g. coming from multiplication by an arbitrary $n$, $J^1$ does too. Certainly, for those curves with a rational divisor of degree one, they have covers, as $J^1$ is isomorphic to $J$. EDIT: Upon further reflection, I guess it's not true that $J^1$ always has covers, as it may not be in the divisible part of the Weil-Chatelet group of $J$. But there definitely exist curves with no points having divisors of degree one, and therefore covers of arbitrarily large degree. However, you question is a good one and you might be heading in the direction of Grothendieck's section conjecture: For finitely generated fields $k$, $X(k)$ is non-empty if and only if there is a section $G_k \to \pi_1(X)$ of the canonical projection $\pi_1(X) \to G_k$, where $G_k$ is the absolute Galois group of $k$ and $\pi_1$ is the etale fundamental group. - Are you aware of a nice example of such a curve? It seems very plausible to me but I don't know how you'd construct one. – Will Sawin Nov 3 at 14:48 3 @Will: every curve over a finite field has a divisor of degree 1, but not necessarily à rational point. – Laurent Moret-Bailly Nov 3 at 15:39 3 Maybe I also misunderstand the question, but I don't see what the difficulty is over a number field. Take any genus one curve $X$ without a rational point, $3x^3+4y^3+5z^3=0$ over $Q$, for example. Then $X$ will have a finite-to-one map $f:X\rightarrow E$ to its Jacobian $E$, which is an elliptic curve. You can then pull back, say, any isogeny of $E$ of degree prime to that of $f$. You can generalize this easily to other curves without rational points. – Minhyong Kim Nov 3 at 16:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 58, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9462056159973145, "perplexity_flag": "head"}
http://psychology.wikia.com/wiki/Kendall_tau_rank_correlation_coefficient?oldid=142387
# Kendall tau rank correlation coefficient Talk0 31,735pages on this wiki Revision as of 13:37, December 26, 2011 by Dr Joe Kiff (Talk | contribs) (diff) ← Older revision | Latest revision (diff) | Newer revision → (diff) Assessment | Biopsychology | Comparative | Cognitive | Developmental | Language | Individual differences | Personality | Philosophy | Social | Methods | Statistics | Clinical | Educational | Industrial | Professional items | World psychology | Statistics: Scientific method · Research methods · Experimental design · Undergraduate statistics courses · Statistical tests · Game theory · Decision theory In statistics, the Kendall rank correlation coefficient, commonly referred to as Kendall's tau (τ) coefficient, is a statistic used to measure the association between two measured quantities. A tau test is a non-parametric hypothesis test which uses the coefficient to test for statistical dependence. Specifically, it is a measure of rank correlation: that is, the similarity of the orderings of the data when ranked by each of the quantities. It is named after Maurice Kendall, who developed it in 1938,[1] though Gustav Fechner had proposed a similar measure in the context of time series in 1897.[2] ## Definition Let (x1, y1), (x2, y2), …, (xn, yn) be a set of joint observations from two random variables X and Y respectively, such that all the values of (xi) and (yi) are unique. Any pair of observations (xi, yi) and (xj, yj) are said to be concordant if the ranks for both elements agree: that is, if both xi > xj and yi > yj or if both xi < xj and yi < yj. They are said to be discordant, if xi > xj and yi < yj or if xi < xj and yi > yj. If xi = xj or yi = yj, the pair is neither concordant nor discordant. The Kendall τ coefficient is defined as: $\tau = \frac{(\text{number of concordant pairs}) - (\text{number of discordant pairs})}{\frac{1}{2} n (n-1) } .$[3] ### Properties The denominator is the total number of pairs, so the coefficient must be in the range −1 ≤ τ ≤ 1. • If the agreement between the two rankings is perfect (i.e., the two rankings are the same) the coefficient has value 1. • If the disagreement between the two rankings is perfect (i.e., one ranking is the reverse of the other) the coefficient has value −1. • If X and Y are independent, then we would expect the coefficient to be approximately zero. ## Hypothesis test The Kendall rank coefficient is often used as a test statistic in a statistical hypothesis test to establish whether two variables may be regarded as statistically dependent. This test is non-parametric, as it does not rely on any assumptions on the distributions of X or Y. Under a null hypothesis of X and Y being independent, the sampling distribution of τ will have an expected value of zero. The precise distribution cannot be characterized in terms of common distributions, but may be calculated exactly for small samples; for larger samples, it is common to use an approximation to the normal distribution, with mean zero and variance $\frac{2(2n+5)}{9n (n-1)}$.[4] ## Accounting for ties A pair {(xi, yi), (xj, yj)} is said to be tied if xi = xj or yi = yj; a tied pair is neither concordant nor discordant. When tied pairs arise in the data, the coefficient may be modified in a number of ways to keep it in the range [-1, 1]: ### Tau-a Tau-a statistic tests the strength of association of the cross tabulations. Both variables have to be ordinal. Tau-a will not make any adjustment for ties. ### Tau-b Tau-b statistic, unlike tau-a, makes adjustments for ties and is suitable for square tables. Values of tau-b range from −1 (100% negative association, or perfect inversion) to +1 (100% positive association, or perfect agreement). A value of zero indicates the absence of association. The Kendall tau-b coefficient is defined as: $\tau_B = \frac{n_c-n_d}{\sqrt{(n_0-n_1)(n_0-n_2)}}$ where $\begin{array}{ccl} n_0 & = & n(n-1)/2\\ n_1 & = & \sum_i t_i (t_i-1)/2 \\ n_2 & = & \sum_j u_j (u_j-1)/2 \\ t_i & = & \mbox{Number of tied values in the } i^{th} \mbox{ group of ties for the first quantity} \\ u_j & = & \mbox{Number of tied values in the } j^{th} \mbox{ group of ties for the second quantity} \end{array}$ ### Tau-c Tau-c differs from tau-b as in being more suitable for rectangular tables than for square tables. ## Significance tests When two quantities are statistically independent, the distribution of $\tau$ is not easily characterizable in terms of known distributions. However, for $\tau_A$ the following statistic, $z_A$, is approximately characterized by a standard normal distribution when the quantities are statistically independent: $z_A = {3 (n_c - n_d) \over \sqrt{n(n-1)(2n+5)/2} }$ Thus, if you want to test whether two quantities are statistically dependent, compute $z_A$, and find the cumulative probability for a standard normal distribution at $-|z_A|$. For a 2-tailed test, multiply that number by two and this gives you the p-value. If the p-value is below your acceptance level (typically 5%), you can reject the null hypothesis that the quantities are statistically independent and accept the hypothesis that they are dependent. Numerous adjustments should be added to $z_A$ when accounting for ties. The following statistic, $z_B$, provides an approximation coinciding with the $\tau_B$ distribution and is again approximately characterized by a standard normal distribution when the quantities are statistically independent: $z_B = {n_c - n_d \over \sqrt{ v } }$ where $\begin{array}{ccl} v & = & (v_0 - v_t - v_u)/18 + v_1 + v_2 \\ v_0 & = & n (n-1) (2n+5) \\ v_t & = & \sum_i t_i (t_i-1) (2 t_i+5)\\ v_u & = & \sum_j u_j (u_j-1)(2 u_j+5) \\ v_1 & = & \sum_i t_i (t_i-1) \sum_j u_j (u_j-1) / (2n(n-1)) \\ v_2 & = & \sum_i t_i (t_i-1) (t_i-2) \sum_j u_j (u_j-1) (u_j-2) / (9 n (n-1) (n-2)) \end{array}$ ## Algorithms The direct computation of the numerator $n_c - n_d$, involves two nested iterations, as characterized by the following pseudo-code: ```numer := 0 for i:=1..N do for j:=1..(i-1) do numer := numer + sgn(x[i] - x[j]) * sgn(y[i] - y[j]) return numer ``` Although quick to implement, this algorithm is $O(n^2)$ in complexity and becomes very slow on large samples. A more sophisticated algorithm[5] built upon the Merge Sort algorithm can be used to compute the numerator in $O(n \cdot \log{n})$ time. Begin by ordering your data points sorting by the first quantity, $x$, and secondarily (among ties in $x$) by the second quantity, $y$. With this initial ordering, $y$ is not sorted, and the core of the algorithm consists of computing how many steps a Bubble Sort would take to sort this initial $y$. An enhanced Merge Sort algorithm, with $O(n \log n)$ complexity, can be applied to compute the number of swaps, $S(y)$, that would be required by a Bubble Sort to sort $y_i$. Then the numerator for $\tau$ is computed as: $n_c-n_d = n_0 - n_1 - n_2 + n_3 - 2 S(y)$, where $n_3$ is computed like $n_1$ and $n_2$, but with respect to the joint ties in $x$ and $y$. A Merge Sort partitions the data to be sorted, $y$ into two roughly equal halves, $y_{left}$ and $y_{right}$, then sorts each half recursive, and then merges the two sorted halves into a fully sorted vector. The number of Bubble Sort swaps is equal to: $S(y) = S(y_{left}) + S(y_{right}) + M(Y_{left},Y_{right})$ where $Y_{left}$ and $Y_{right}$ are the sorted versions of $y_{left}$ and $y_{right}$, and $M(\cdot,\cdot)$ characterizes the Bubble Sort swap-equivalent for a merge operation. $M(\cdot,\cdot)$ is computed as depicted in the following pseudo-code: ```function M(L[1..n], R[1..m]) n := n + m i := 1 j := 1 nSwaps := 0 while i + j <= n do if i > m or R[j] < L[i] then nSwaps := nSwaps + m - (i-1) j := j + 1 else i := i + 1 return nSwaps ``` A side effect of the above steps is that you end up with both a sorted version of $x$ and a sorted version of $y$. With these, the factors $t_i$ and $u_j$ used to compute $\tau_B$ are easily obtained in a single linear-time pass through the sorted arrays. A second algorithm with $O(n \cdot \log{n})$ time complexity, based on AVL trees, was devised by David Christensen.[6] ## References 1. ↑ Kendall, M. (1938). A New Measure of Rank Correlation. 30 (1–2): 81–89. 2. ↑ Kruskal, W.H. (1958). Ordinal Measures of Association. 53 (284): 814–861. 3. ↑ 4. ↑ 5. ↑ Knight, W. (1966). A Computer Method for Calculating Kendall's Tau with Ungrouped Data. 61 (314): 436–439. 6. ↑ Christensen, David (2005). Fast algorithms for the calculation of Kendall's τ. 20 (1): 51–62. • Abdi, H. (2007). Encyclopedia of Measurement and Statistics, Thousand Oaks (CA): Sage. • Kendall, M. (1948) Rank Correlation Methods, Charles Griffin & Company Limited # Photos Add a Photo 6,465photos on this wiki • by Dr9855 2013-05-14T02:10:22Z • by PARANOiA 12 2013-05-11T19:25:04Z Posted in more... • by Addyrocker 2013-04-04T18:59:14Z • by Psymba 2013-03-24T20:27:47Z Posted in Mike Abrams • by Omaspiter 2013-03-14T09:55:55Z • by Omaspiter 2013-03-14T09:28:22Z • by Bigkellyna 2013-03-14T04:00:48Z Posted in User talk:Bigkellyna • by Preggo 2013-02-15T05:10:37Z • by Preggo 2013-02-15T05:10:17Z • by Preggo 2013-02-15T05:09:48Z • by Preggo 2013-02-15T05:09:35Z • See all photos See all photos >
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 49, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8892044425010681, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/48014-find-volume-solid.html
# Thread: 1. ## Find the volume of the solid Find the volume of the solid whose base is the region between the curve $y=x^3$ and y-axis from $y=0$ to $y=1$, and whose cross sections taken perpendicular to the y-axis are squares. I don't understand "whose cross sections taken perpendicular to y-axis are squares". How can I solve this problem? If possible, would you please explain step by step? Thank you very much. 2. Picture a bunch of squares stacked up perp.to the y-axis in your given region. This is NOT a volume of revolution. Since we are perp to the y-axis we will integrate wrt y. $x=y^{\frac{1}{3}}$ The area of a square is $x^{2}$ So, we have $\int_{0}^{1}y^{\frac{2}{3}}dy$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9069352746009827, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/437/what-equation-describes-the-wavefunction-of-a-single-photon/462
What equation describes the wavefunction of a single photon? The Schrödinger equation describes the quantum mechanics of a single massive non-relativistic particle. The Dirac equation governs a single massive relativistic spin-½ particle. The photon is a massless, relativistic spin-1 particle. What is the equivalent equation giving the quantum mechanics of a single photon? - 10 There is no quantum mechanics of a photon, only a quantum field theory of electromagnetic radiation. The reason is that photons are never non-relativistic and they can be freely emitted and absorbed, hence no photon number conservation. – Igor Ivanov Nov 9 '10 at 20:51 Igor, you should make your comment into an answer. It's at the moment the best one posted. – j.c. Nov 10 '10 at 18:09 OK, posted as an answer with an additional note. – Igor Ivanov Nov 10 '10 at 20:01 7 Answers There is no quantum mechanics of a photon, only a quantum field theory of electromagnetic radiation. The reason is that photons are never non-relativistic and they can be freely emitted and absorbed, hence no photon number conservation. Still, there exists a direction of research where people try to reinterpret certain quantities of electromagnetic field in terms of the photon wave function, see for example this paper. - 7 You can also say that the wavefunction of a photon is defined as long as the photon is not emitted or absorbed. The wavefunction of a single photons is used in single-photon interferometry, for example. In a sense, it is not much different from the electron, where the wave-function start to be problematic when electrons start to be created or annihilated... – Frédéric Grosshans Nov 17 '10 at 10:19 1 I agree. For the electrons there is a possibility to slow them down to non-relativistic speeds, but there is no such possibility for photons. I would also add that there is an interesting discussion about photons and electrons in the Peierls's book "Surprises in theoretical physics". – Igor Ivanov Nov 18 '10 at 21:46 The maxwell equations, just like in classical electrodynamics. You'll need to use quantum field theory to work with them though. http://en.wikipedia.org/wiki/Relativistic_wave_equations http://en.wikipedia.org/wiki/Quantum_electrodynamics - The general concept of quantum mechanics is that particles are waves. On of hand-waveing "derivations" of quantum mechanics is assumption that phase of particles behaves in the same way as phase of light $\exp( i \vec{k}\cdot \vec{x} - iE t / \hbar)$ (see Feynman Lectues on Physics, Volume 3, Chapter 7-2). For light that is monochromatic (or almost monochromatic), just take Maxwell Equations plus add assumption that one photon can't be partially absorbed. Most of the time it suffices to use the paraxial approximation, or even - plane wave approximation. It works for standard quantum mechanics setups like Elitzur–Vaidman bomb-tester. For nonmonochronatic light its much more complicated. More on nature of quantum mechanics of one photon: Iwo Bialynicki-Birula, On the Wave Function of the Photon, Acta Physica Polonica 86, 97-116 (1994). - According to Wigner's analysis, the single photon Hilbert space is spanned by a basis parameterized by energy-momenta on the forward light cone boundary, and a helicity of $\pm 1$. However, a manifestly Lorentz covariant description in position space has to include a fictitious longitudinal photon with a helicity of 0. This degree of freedom is pure gauge, and decouples. Interestingly enough, the state norm is now positive semidefinite, instead of positive definite, with the transverse modes having positive norm and the longitudinal ones having zero norm. - you're on fire ! – user346 Dec 31 '10 at 7:49 There is a slight confusion in this question. In quantum field theory, the Dirac equation and the Schrödinger equation have very different roles. The Dirac equation is an equation for the field, which is not a particle. The time evolution of a particle, ie, a quantum state, is always given by the Schrödinger equation. The hamiltonian for this time evolution is written in terms of fields which obey a certain equation themselves. So, the proper answer is: Schrödinger equation with a hamiltonian given in terms of a massless vector field whose equation is nothing else but Maxwell's equation. - Yup. Like this? http://www.nist.gov/pml/div684/fcdc/upload/preprint.pdf - 2 You should include more than a bare link. Especially a bare link to a PDF. Copying the abstract, for instance, would make this a much better answer. – dmckee♦ Jun 1 '11 at 4:46 1 @kaonix , a +1 because I learned something, but dmckee is right that you should quote the abstract for a good answer. Also use the link tab on the editor to insert a link, or at least put the http:// in front so that one does not have to copy and paste to see the link. I will edit it for you. – anna v Jun 1 '11 at 4:54 A single photon is described quantum mechanically by the Maxwell equations, where the solutions are taken to be complex. The Maxwell equations can be written in the form of the matrix Dirac equation, where the Pauli two-component matrices, corresponding to spin 1/2 electrons, are replaced by analogous three-component matrices, corresponding to spin 1 photons. Since the Dirac equation and corresponding Maxwell equation are fully relativistic, there is no problem with the mass of the photon being zero, as there would be for a Schroedinger-like equation. See http://www.nist.gov/pml/div684/fcdc/upload/preprint.pdf. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9028967618942261, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/48248/what-do-named-tricks-share/50450
What do named “tricks” share? Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) There are a number of theorems or lemmas or mathematical ideas that come to be known as eponymous tricks, a term which in this context is in no sense derogatory. Here is a list of 10 such tricks (the last of which I learned at MO): Edit: List augmented from the comments and answers: • the Eilenberg–Mazur swindle • the Parshin trick • the Atiyah rotation trick • the Higman trick • Rosser's trick • Scott's trick • the Craig trick • the Uhlenbeck trick • the Alexander trick • Grilliot's trick • Zarhin's trick [For any abelian variety $A$, $(A \times A^{\vee})^4$ is principally polarizable.] Further Edit. And although my original interest was in eponymous (=named-after-someone) tricks, several non-eponymous tricks have been mentioned, so I'll gather those here as well: • the determinant trick • the kernel trick • the W-trick Some of those listed above do not yet have Wikipedia pages (hint, hint—Thierry). I (JOR) am not seeking to extend this list (although I would be incidentally interested to learn of prominent omissions), but rather I am wondering: Is there some aspect or trait shared by the mathematical ideas or techniques that, over time, come to be named "tricks"? I am aware this is a borderline question; feel free to close if it unduly distracts. - 10 "An idea which can be used only once is a trick. If one can use it more than once it becomes a method." Quoted from books.google.co.uk/… – Andrey Rekalo Dec 4 2010 at 4:45 8 That construction of Whitney ought to have a more dignified name. – Tom Goodwillie Dec 4 2010 at 4:58 4 The Eilenberg Swindle is another good one. – Sean Tilson Dec 4 2010 at 5:36 5 You could add the Eilenberg Swindle to your list. (A Swindle sounds even more disreputable than a Trick.) – Charles Rezk Dec 4 2010 at 5:36 1 Why is the page linking to the Whitney trick linking to a "Global Oneness" site? <br> <br> Why do they even have a page on Whitney embedding on a site about spirituality? I'm very confused. – Simon Rose Dec 4 2010 at 5:42 show 16 more comments 8 Answers How about the following (which I think applies to some of these tricks but not others): a trick is something whose usefulness is not fully captured by any particular set of hypotheses, so it would limit its usefulness to write it down as a lemma. - 3 This is my favorite answer. It captures the notion that a trick is one of the essentially human contributions to math, a creative touch that isn't part of some axiomatic or algorithmic approach that something more mechanical might invent. It's generalizable but not categorizable. – Ryan Reich Dec 4 2010 at 22:04 2 I like this answer, but I'm not sure it really distinguishes between tricks and methods. For instance, I don't know of a lemma one could write down that would do justice to the probabilistic method in combinatorics. But actually I'm tempted to say that "Take a random X" feels like a trick that can be applied over and over again, perhaps for this very reason. – gowers Dec 5 2010 at 22:24 3 @Ryan: I must also leap to the defence of the poor old computer. I don't see any reason in principle that a clever program could not invent mathematical tricks. (I think a proof that a program couldn't invent tricks would prove that humans couldn't either.) – gowers Dec 5 2010 at 22:26 You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. I'll take a stab at this. I think that the term "trick" is used to connote a technique that achieves something as if by magic. If I make a cake by combining flour, sugar, and eggs and baking, that is simply a standard technique, but if I make the cake by putting the ingredients into a top hat and waving a wand over it, that is a magic trick. The way that the Weyl unitary trick makes complex groups behave like compact ones seems like a magic trick. (For those of you trying to follow this half baked analogy, the cake is complete reducibility of representations, the oven is integration, and the hat is ... uhhh.... ) - 13 Ha ha, half baked :) – Zev Chonoles Dec 4 2010 at 7:56 I think this is the answer. – timur Dec 4 2010 at 15:29 3 The hat is orthogonality? – Qiaochu Yuan Dec 4 2010 at 16:12 1 The hat is Fourier transform, of course! – Paul Siegel Dec 6 2010 at 12:20 One well-known trick is a way to evaluate the Gaussian integral $G = \int_\mathbb{R} e^{-x^2}dx = \sqrt{\pi}$ by writing $$G^2 = \left(\int_\mathbb{R} e^{-x^2}dx\right)\left(\int_\mathbb{R} e^{-y^2}dy\right) = \int_{\mathbb{R}^2} e^{-(x^2+y^2)}dxdy$$ which when transformed to polar coordinates becomes $$G^2 = 2\pi \int_0^\infty e^{-r^2} r dr = \pi \int_0^\infty e^{-u} du = \pi$$ via the substitution $u=r^2$. It appears this idea is due to Poisson. In a 2005 note in the American Mathematical MONTHLY, R. Dawson has observed that this is a trick that only works once; there are no other integrals that can be evaluated by this method. Specifically: Theorem. Any Riemann-integrable function $f$ on $\mathbb{R}$, such that $f(x)f(y) = g(\sqrt{x^2+y^2})$ for some $g$, is of the form $f(x)=ke^{ax^2}$. See: Dawson, Robert J. MacG. On a "singular" integration technique of Poisson. American Mathematical Monthly 112 (2005), 270-272. So if a technique is a trick that works twice, this one is definitely still a trick. - 1 Note that this trick has something in common with the Rabinowitsch, Cayley and Eilenberg tricks and probably some others on the list: in order to solve a $k$-dimensional problem, you go into more than $k$ dimensions. This also seems to be a distinguishing feature of things called tricks. – darij grinberg Dec 5 2010 at 22:04 2 I think Dawson was anticipated by James Clerk Maxwell. A result called Maxwell's theorem says that if $X_1, \dots, X_n$ are independent real-valued random variables and their joint density is spherically symmetric, then all of them are normally distributed, i.e. the probability density of each of them is a Gaussian function. – Michael Hardy Dec 5 2010 at 22:17 1 Constantine Georgakis, "A Note on the Gaussian Integral", Mathematics Magazine, February, 1994, page 47 This paper gives what its author considers "a better alternative to the usual method of reduction to polar coordinates" for evaluating this integral. See en.wikipedia.org/wiki/Gaussian_integral . – Michael Hardy Dec 5 2010 at 22:19 While the specific form $f(x) f(y) = g(\sqrt{x^2+y^2})$ applies only to Gaussians, there are further uses of this kind of transformation: in one direction, to the volumes of Euclidean spheres in higher dimension (imagine you already know $\Gamma(1/2)$ but not the area of a circle); in another direction, to the usual proof of $B(x,y) = \Gamma(x) \Gamma(y) / \Gamma(x+y)$; combining these, to Dirichlet integrals; and by analogue, to the relation between Gauss sums and Jacobi sums — and probably others that don't come to mind right now. – Noam D. Elkies Jul 9 2011 at 6:11 @Michael Hardy: that "better alternative" in the paper by Georgakis in fact is due to Laplace. See york.ac.uk/depts/maths/histstat/…. – KConrad Mar 26 2012 at 3:15 http://en.wikipedia.org/wiki/Rosser%27s_trick "A technique is a trick that works twice" Note that Grothendieck never published his proof of the Grothendieck-Riemann-Roch theorem because he felt that the proof depended on an "astuce" (trick) rather than flowing naturally. - 5 It is interesting that the accepted English translation of "astuce" is "trick", whereas that of the adjectival form "astucieux/euse" is "clever" or "astute". This seems to give the concept of a trick a better connotation in French than in English. (I am tempted to load up Lord of the Rings with the French subtitles on to see whether Gollum accuses Frodo and Sam of being "astucieux".) – Pete L. Clark Dec 4 2010 at 16:32 2 @Pete: you're absolutely right. For instance, to translate the expression "dirty trick" into French, you would not use the word "astuce" because "astuce" has a uniformly positive connotation of praise (yes, even in that Gollum context). – Thierry Zell Dec 4 2010 at 16:53 Scott's Trick is called a "trick" because it is not actually necessary for the completion of the proof in which it is involved; however, without the trick the proof is massively more tedious. Although the other tricks may not have a widely-agreed-upon-reason for being a trick, I suspect that they may be called such for similar reasons. - 1 Same for the Rabinowitsch trick. Also as pointed in another answer, the Rabinowitsch trick works only once (although similar ideas must be used, albeit in less famous circumstances). – Thierry Zell Dec 4 2010 at 16:10 Well, it’s only unnecessary if you’re assuming AC (which was not, if I understand right, quite as entrenchedly automatic among set theorists back in the ’60s as it is now). It’s necessary if you want to talk about cardinalities — and more generally, construct quotients of classes by equivalence relations — in ZF and many other set theories. – Peter LeFanu Lumsdaine Dec 6 2010 at 5:31 I've long known the adage that a "trick" works only once whereas a "method" works in multiple instances, or maybe is expected to work in yet unanticipated future instances. But there's another POV: a trick is something whose efficacy cannot be anticipated, but only by hindsight is seen to work. All methods I've seen of finding $\int \sec x \ dx$ are "tricks". I've always leaned toward viewing unanticipatability as the essence of trickhood. But I also like Qiaochu Yuan's answer. - 2 There is an algorithm to find the integral of any rational function of trigonometric functions: use a rational parameterization (e.g. using tan x/2), then use partial fractions (or a residue computation, etc.). I don't see how this is a "trick" in any sense. It is a direct corollary of the fact that the circle is birational to the projective line. – Qiaochu Yuan Dec 4 2010 at 17:37 1 There's the argument often used in calculus texts: $$\sec x = \frac{\sec x(\tan x + \sec x)}{\sec x + \tan x} = \frac{\sec x \tan x + \sec^2 x}{\sec x + \tan x} = \frac{du}{u}$$ where $u$ is the last denominator above. That seems like a "trick". I've seen other methods but I don't remember what they are right now. One of them seemed to take some of the mystery out of it, but it was still a trick. But I can't show a residue computation to a first-year calculus class. The "Weierstrass trick" (was it Weierstrass?) of $\tan(x/2)$, etc., does seem a bit more like a "method". – Michael Hardy Dec 4 2010 at 17:50 2 Yes, I agree that that argument is trick-like. The rational parameterization of the circle is not. It is a natural and beautiful geometric argument (just taking the slope of a line between two points) and I think it could easily be presented to a first-year calculus class. – Qiaochu Yuan Dec 4 2010 at 18:05 4 I had a student come up with a different way to do this, on the fly, on the final exam. Write $$\sec(x) = {1\over \sqrt{1 - \sin^2(x)}},$$ then substitute $u = \sin(x)$ and use partial fractions! – Jeff Strom Dec 5 2010 at 2:31 2 I usually integrate this function differently (using something trick-like, but natural one): $$\frac{dx}{\cos(x)} = \frac{\cos(x)dx}{{\cos^2(x)}} = \frac{d(\sin(x)}{1 - {\sin^2(x)}}$$ – Ostap Chervak Jun 14 2011 at 15:00 show 5 more comments Would regarding a scalar as the trace of a $1\times1$ matrix be considered a "trick"? Here's an occasion where that's useful: http://en.wikipedia.org/wiki/Estimation_of_covariance_matrices#Maximum-likelihood_estimation_for_the_multivariate_normal_distribution - While tricks have names because they wind up being associated with with some particular mathematician, tricks are tricks because something important goes on "behind the curtain." For instance, to prove $$(a_1 b_1 + \cdots + a_n b_n)^2 \leq ({a_1}^2 + \cdots + {a_n}^2) ({b_1}^2 + \cdots + {b_n}^2),$$ write \begin{align*} A &= ({a_1}^2 + \cdots + {a_n}^2)\\ B &= (a_1 b_1 + \cdots + a_n b_n)\\ C &= ({b_1}^2 + \cdots + {b_n}^2), \end{align*} then we must show $$B^2 \leq AC.$$ Equality clearly holds when $A = 0$. Otherwise, since $\mathbb{R}$ has no negative squares, for all $x \in \mathbb{R}$, $$0 \leq (a_1 x - b_1)^2 + \cdots + (a_n x - b_n)^2.$$ Expanding the squares, $$0 \leq Ax^2 - 2Bx + C.$$ The quadratic expression vanishes whenever $$x = \frac{B}{A} \pm \sqrt{\left(\frac{B}{A}\right)^2 - \frac{C}{A}}.$$ If $x = \dfrac{B}{A}$, then $$0 \leq A\left(\frac{B}{A}\right)^2 - 2 B\left(\frac{B}{A}\right) + C = \frac{B^2}{A} - 2 \frac{B^2}{A} + C = - \frac{B^2}{A} + C,$$ thus $$B^2 \leq AC.$$ -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 12, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9324443936347961, "perplexity_flag": "middle"}
http://en.m.wikibooks.org/wiki/Template:Computer_Programming/Error_Handling/6
# Template:Computer Programming/Error Handling/6 As you see the function demands a precondition of `X >= 0` - that is the function can only be called when X ≥ 0. In return the function promises as postcondition that the return value is also ≥ 0. In a full DbC approach, the postcondition will state a relation that fully describes the value that results when running the function, something like `result ≥ 0 and X = result * result`. This postcondition is √'s part of the contract. The use of assertions, annotations, or a language's type system for expressing the precondition `X >= 0` exhibits two important aspects of Design by Contract: 1. There can be ways for the compiler, or analysis tool, to help check the contracts. (Here for example, this is the case when `X ≥ 0` follows from X's type, and √'s argument when called is of the same type, hence also ≥ 0.) 2. The precondition can be mechanically checked before the function is called. The 1st aspect adds to safety: No programmer is perfect. Each part of the contract that needs to be checked by the programmers themselves has a high probability for mistakes. The 2nd aspect is important for optimization — when the contract can be checked at compile time, no runtime check is needed. You might not have noticed but if you think about it: $A^2 + B^2$ is never negative, provided the exponentiation operator and the addition operator work in the usual way. We have made 5 nice error handling examples for a piece of code which never fails. And this is the great opportunity for controlling some runtime aspects of DbC: You can now safely turn checks off, and the code optimizer can omit the actual range checks. DbC languages distinguish themselves on how they act in the face of a contract breach: 1. True DbC programming languages combine DbC with exception handling — raising an exception when a contract breach is detected at runtime, and providing the means to restart the failing routine or block in a known good state. 2. Static analysis tools check all contracts at analysis time and demand that the code written in such a way that no contract can ever be breached at runtime.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9205428957939148, "perplexity_flag": "middle"}
http://mathhelpforum.com/algebra/138296-need-help-verifying-answer-2.html
# Thread: 1. Affirmative, human. 2. Originally Posted by maddas Affirmative, human. how would i solve this problem? and if i did it right would the answer be (b) ? 3. Originally Posted by bringash how would i solve this problem? and if i did it right would the answer be (b) ? Dear bringash, For $\sqrt{5x}$ to be defined, $x\geq{0},$-----(1) For $\sqrt{x+4}$ to be defined, $x\geq{-4}$------(2) By 1 and 2, $x\geq{-4}$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9437847137451172, "perplexity_flag": "middle"}
http://mathhelpforum.com/differential-geometry/195252-finding-specific-operator.html
# Thread: 1. ## finding a specific operator Hi! I am kinda going mad trying to find the following operator. Here goes the problem: Let $(z_n)$ be a complex seq. such that $\lim z_n=0$ and let $(x_n)$ be a seq. of pos. numbers such that $\lim x_n=\infty$. Show that there exists (i.e. find) a compact operator $P$ on $l^2$ so that the following two are satisfied: (a) $\lim_n \frac{\|(z_n I-P)^{-1}\|}{ x_n}=\infty$ and (b) All the $z_n$ lie in the resolvent of $P$. Thank You! 2. ## Re: finding a specific operator I've not done the computations, but did you try with $P$ diagonal? 3. ## Re: finding a specific operator yes I did try... I'm not sure how that helps. 4. ## Re: finding a specific operator It would reduce the problem to find a sequence instead of an operator. 5. ## Re: finding a specific operator I'm sorry but I just can't get it to work. How am I supposed to find it? 6. ## Re: finding a specific operator There is something I don't understand: if $z_n=0$ for all $n$, then $P$ has to be both compact and invertible in order to make the expression have a sense. It's impossible since $\ell^2$ is infinite dimensional. 7. ## Re: finding a specific operator The assumptio is not $z_n=0$ $\forall n$, it is that $\lim_{n\rightarrow \infty} z_n=0$ 8. ## Re: finding a specific operator Yes, but $z_n=0$ for all n is a particular case of $\lim_{n\to \infty}z_n=0$. So we have to assume that $z_n\neq 0$ for all $n$ otherwise $P$ would be invertible. 9. ## Re: finding a specific operator I see what you mean. Ok, how would I proceed after removing that special case. 10. ## Re: finding a specific operator Originally Posted by Zeke Hi! I am kinda going mad trying to find the following operator. Here goes the problem: Let $(z_n)$ be a complex seq. such that $\lim z_n=0$ and let $(x_n)$ be a seq. of pos. numbers such that $\lim x_n=\infty$. Show that there exists (i.e. find) a compact operator $P$ on $l^2$ so that the following two are satisfied: (a) $\lim_n \frac{\|(z_n I-P)^{-1}\|}{ x_n}=\infty$ and (b) All the $z_n$ lie in the resolvent of $P$. Thank You! As girdav pointed out, none of the $z_n$ can be 0. So make that assumption. Following girdav again, let P be the operator on $\l^2(\mathbb{N})$ whose matrix (with respect to the standard basis) is diagonal, with diagonal elements $\alpha_n.$ Let $Z = \{z_n\}_{n\in\mathbb{N}}\cup\{0\}.$ For each n, choose $\alpha_n$ so that $\alpha_n\notin Z$ and $|\alpha_n-z_n| < (nx_n)^{-1}.$ Then P is compact (because $\alpha_n\to0$). The spectrum of P is $A = \{\alpha_n\}_{n\in\mathbb{N}}\cup\{0\}.$ Since, for each n, $z_n\notin A,$ it follows that (b) holds. Finally, $\|(z_nI-P)^{-1}\| = \max_{k\in\mathbb{N}}|z_n-\alpha_k|^{-1}\geqslant|z_n-\alpha_n|^{-1}>nx_n.$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 42, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9321587681770325, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/279015/visualizing-concepts-in-mathematical-logic
# Visualizing Concepts in Mathematical Logic If you were forced to speculate or offer anecdotal evidence, how would you say excellent practicioners of mathematical logic coneptually grasp statements like: $$\vdash ((P \rightarrow Q) \rightarrow Q) \rightarrow Q$$ ...on an intuitive level? Specifically, I'm curious if excellent practicioners will typically VISUALIZE such statements in any sort of way that doesn't involve mentally picturing the literal statement with its syntax as written above. For example, are Venn or Euler Diagrams typically a good way to go about things, or is that a bad idea in the long-run? Personally, I know that with Analysis/Group Theory/Topology I have been successful in finding rough visualizations of pretty much every concept involved (with the understanding that mental pictures are not LITERAL representations of the relevant concepts and in fact can often be quite misleading if one is not careful with them); however, with logic I am finding this more difficult since the mathematical objects in question seem in many ways to be largely, explicitly syntactical. What this means is that the more I rougly convert formal statements into "intuitive" pictures, the more those intuitive pictures start to exactly resemble a specific interpretation which gets in the way of those sentences being explictly syntactical in nature. As a consequence of this mess, I find myself getting confused in lacking basic intuitions over whether elementary statements are true or false before I attempt to prove them, unlike in other mathematical subjects. In short, does anyone have any anecdotal or even speculative advice about how to be successful in visualizing (or NOT visualizing) the objects of Math Logic? How does one gain an intuitive feel for this subject? Thanks! - 2 Use a truth table. – Brian Jan 15 at 2:39 One obstruction I have (but maybe others don't) is that $P \rightarrow Q$ being true whenever $P$ is false immediately doesn't sync of up with my intuition. For example, consider: If I had been born with blue eyes, I would have proved Goldbach's Conjecture by age 13. Now, since I was born with brown eyes, this statement is "true." But, since we are talking intuition here, I believe in a strong way that the preceding statement is false. Relatedly, you might check out: A Counterexample to Modus Ponens. Vann McGee. The Journal of Philosophy, jstor.org/stable/2026276. – B.D Jan 15 at 2:42 – MJD Jan 15 at 2:49 – B.D Jan 15 at 2:56 A mathematician was once asked, what do you see then you read $f(x,y)=0$? He replied, "I see $f(x,y)=0$". So it depends, some mathematicians are more geometric, some more algebraic. – alancalvitti Jan 15 at 3:00 show 2 more comments ## 2 Answers Something that I sometimes find useful is to consider what's called the Curry-Howard Isomorphism: statements of logic correspond to the types of computer programs, and the statements are theorems if and only if the corresponding type is the type of a program that can actually exist. In this correspondence, $A\to B$ represents the type of a function in a program that takes an argument of type $A$ and calculates a value of type $B$. A function of type $A\to A$ is obviously possible, because the function just takes the $A$ that it got and gives it right back: ```` define function f(a) { return a; } ```` And $A\to(B\to A)$ is also possible: the function needs to take an argument of type $A$ and then give back a function of type $B\to A$. It can do this, because it can give back a second function that ignores the $B$ that it got, and that gives back the $A$ that the first function got to begin with: ```` define function f(a) { define function g(b) { return a; } return g; } ```` But in contrast, $A\to B$ is not a theorem, because how can you write a function that gets an argument of type $A$ and gives back a result of some other, completely arbitrary type? Where could it get a $B$ from if all it has is an $A$? The isomorphism also identifies logical AND with pairing: a value of type $A\land B$ is a pair that contains both an $A$ and a $B$, and either (or both) can be extracted. It identifies logical OR with what's called a disjoint union: A value of type $A\lor B$ is either an $A$ or a $B$, but not both, and the program can tell which of the two it is. Similarly there are interpretations for negation, for constant true and false values, and so forth. For a more elaborate example, a program that corresponds to the theorem $\lnot\lnot(P\lor\lnot P)$, see this answer. The only catch is that not every classically true theorem corresponds to a program in a completely straightforward way. Some theorems of logic, such as $((A\to B)\to A)\to A$, (Peirce's law) don't correspond to a simple program. Instead they correspond to programs that use advanced features like continuations or exceptions. Still it's often helpful to think of logical theorems as specifications for programs; if you can think of a program which performs the corresponding calculation, then the statement must be a theorem. Viewed in this way, the statement you originally asked about, $((P\to Q)\to Q)\to Q$, seems quite unlikely: you need a function $f$ which gets an argument $x$, which is itself a function of type $(P\to Q)\to Q$, and $f$ needs to produce a value of type $Q$. If $f$ could call $x$, then $x$ would return the needed value. But it can't call $x$ because to do so it needs to give it an argument of type $P\to Q$, and it doesn't have one. - "But in contrast, A→B is not a theorem, because how can you write a function that gets and argument of type A and gives back a result of some other, completely arbitrary type?" I'm not sure I follow this without a very precise definition of "type". It is quite common to write a program that returns a different kind of data than it was given. – Austin Mohr Jan 15 at 3:05 1. Yes, you are quite right, and if you are interested, I suggest you follow this up, because I think the subject is fascinating. 2. To write a program that returns a different sort of data than it receives requires that your language have some built-in function or operator for converting one to the other, something like C's `atoi` function which converts strings to integers. Such a built-in function of type `string→int` is analogous to having a contextual assumption in the logic, on the left-hand side of the $\vdash$. – MJD Jan 15 at 3:09 1 3. Even in the presence of such built-in functions, $A\to B$ is impossible, because $B$ is completely unconstrained. You may be able to write a program that takes a value of one type and returns a value of another, but you can't write a program that returns a value of a completely arbitrary type. If your function takes a `string` and returns an `int`, I can object that I didn't want an `int`, I wanted a `(int\to(double, string))\to array of float`, and your function doesn't produce one. – MJD Jan 15 at 3:12 I think part of what makes excellent practitioners excellent is that they have many different intuitive representations of the things they work with, and can switch between them effortlessly according to what works well in a given situations. The important thing is to not stick with a single visualization and use that for everything, but to be familiar with a range of possibilities that you can quickly test to see if one of them is helpful for the task at hand. The Curry-Howard isomorphism (see MJD's answers) is great to know, especially if you already have (functional) programming experience. Closely related to it is the intuitionistic conception of proofs as conversational games, an informal version of which is often used to explain the semantics of quantifiers to beginning students ("the adversary chooses an $\epsilon>0$ and then we have to produce a $\delta>0$ such that ..."). This can be extended to propositional connectives: If we want to prove $P\land Q$, the adversary chooses whether he wants to see our proof of $P$ or of $Q$, but to prove $P\lor Q$, we get to choose which of the proofs to present. If we must prove $P\to Q$, the adversary starts by giving us his proof of $P$, and afterwards we must prove $Q$. More specifically visual, Venn diagrams can indeed be useful, but become cumbersome if there are many propositional letters. Personally I often find it surprisingly helpful simply to visualize the abstract syntax tree of a formula instead of its linear written syntax. This is related to (but distinct from) visualizing a propositional formula as a Boolean circuit containing gates and wires. Even something as primitive as truth tables can be helpful for some purposes. For example, my best intuition about how the possible two-variable Boolean functions relate to each other is in terms of two-dimensional truth tables: $$\begin{array}{c|cc} \to & 0 & 1 \\ \hline 0 & 1 & 1 \\ 1 & 0 & 1 \end{array} \quad \begin{array}{c|cc} \land & 0 & 1 \\ \hline 0 & 0 & 0 \\ 1 & 0 & 1 \end{array} \quad \begin{array}{c|cc} \oplus & 0 & 1 \\ \hline 0 & 0 & 1 \\ 1 & 1 & 0 \end{array} \quad \begin{array}{c|cc} \text{a} & 0 & 1 \\ \hline 0 & 0 & 0 \\ 1 & 1 & 1 \end{array} \quad \ldots$$ where, for example "$\to$" and "$\land$" is similar because their truth tables both have a wedge of three identical squares with an odd one in one of the corners. I can make one into the other by flipping and inverting, which tells me that an AND gate can become an IMPLIES gate by adding inverters to some of the inputs and outputs, and vice versa. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 51, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9427041411399841, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/48927/godels-incompleteness-theorem-and-real-closed-fields
# Gödel's incompleteness theorem and real closed fields I am familiar with the result of Gödel's incompleteness theorem. I find it hard though, to convince myself that when we replace normal number arithmetic with real closed fields, that there is an axiomatic system that is complete. After all, $\mathbb{N} \subset \mathbb{R}$, so why can't we use the more 'general' system for arithmetic? I know other people also struggle with this. Is there a more intuitive explantion as to why this result is possible and not contradictory to the incompleteness theorem? - 7 The point is that there is no first-order sentence in the theory of real closed fields which is satisfied only by the subring generated by $1$ in a real closed field, so we can't talk about the natural numbers. – Qiaochu Yuan Jul 1 '11 at 19:01 ## 2 Answers The reason that the result does not contradict the Incompleteness Theorem is that the natural numbers are not definable over the theory of real-closed fields. Roughly speaking, this means that there is no formula $F(x)$ of our language such that $F(a)$ is true in the reals if and only if $a$ is a natural number. If there were such a formula, your intuition would be correct, and we could translate any problem about the integers under addition and multiplication into a problem about the reals. Then the theory of real-closed fields, since it is recursively axiomatizable, could not be complete. We can produce a formula in the language of real-closed fields that "says" $x=1$. We can also produce a formula that says $x=2$, and a formula that says $x=3$, and so on. Then we can say also $x=1$, $2$, or $3$. But there is no single formula that says that $x$ is a natural number. (We cannot use infinitely long formulas.) There is, by the way, also no formula that says that $x$ is rational. There is a similar phenomenon in the theory of algebraically closed fields of characteristic $0$. The theory is complete. From the fact of completeness we can immediately deduce, from the Incompleteness Theorem, that the natural numbers are not definable in the theory. Added: There is a complete (in the informal sense!) classification of the sets that are definable over the theory of real-closed fields. The idea has been used to establish connections between algebraic geometry and model theory. A number of important recent results come from exploiting that connection. - Is there any reference for the classification in the addition - or good place to read about it? – Mark Bennet Jul 1 '11 at 20:55 2 @Mark Bennet: The major figure is Ehud Hrushovski. There is a book about his methods. Other important figures are Zil'ber, Wilkie, MacIntyre, Dickmann. There are many others, it is a very lively field. – André Nicolas Jul 1 '11 at 21:10 ## Did you find this question interesting? Try our newsletter email address As has been mentioned, one cannot apply the theory of real-closed fields to solve Diophantine problems since $\rm\:\mathbb Z\:$ is not definable in the elementary language of real-closed fields. However, unless you have studied model theory, such an answer will probably lend little intuition on the matter. Towards such, consider the following remarks. The primary reason that the first-order theory of the reals proves to be much simpler than Diophantine (integer) analogs is that the associated geometry is much simpler. Namely, the sets that are definable by real-polynomial inequations - so-called semi-algebraic sets - can be decomposed into a finite number of cells (e.g. cylinders) where, on each cell, every defining polynomial has constant sign. Therefore testing the truth of a first-order statement reduces to a finite number of tests on constant-sign cells, which is trivially decided by simply evaluating the polynomials at any point in the cell. This yields an algorithm for deciding the truth of first order statements about the reals - the so-called cylindrical algebraic decomposition (CAD) algorithm (an effective realization of Tarski's famous method of quantifier elimination for the reals). For example, in the one-dimensional case the sets definable in the first-order language of $\rm\:\mathbb R\:$ are simply finite unions of intervals. So, e.g. to test if $\rm\ f(x) > 0\ \Rightarrow g(x) > 0\$ we can employ Sturm's Theorem to partition $\:\mathbb R\:$ by the finite number of roots of $\rm\:f,\:g\:,\:$ and then pick a point $\rm\:r_0\:$ in each interval and test if $\rm\ f(r_0)>0\ \Rightarrow\ g(r_0)>0\:.\:$ The CAD algorithm works analogously in higher dimensions, by projecting higher-dimensional cells down to lower dimensions. The key property is a result of Tarski and Seidenberg that semi-algebraic sets are preserved by projections. The essential innate structure has been generalized in model-theoretic study of o-minimal structures. Contrast this to the study of (first-order) Diophantine equations. Anyone who has studied number theory quickly appreciates the immense complexity of the structure of solution sets of integer polynomial equations, e.g. Pell equations, elliptic curves, etc. Undecidability results imply that there can be no simple uniform recursive decision procedure like that for the reals. Each leap into a higher dimension may require completely novel ideas. But this shouldn't be viewed as a detriment. Rather, it yields an unending source of rich problems that will constantly challenge our ingenuity. - Dubuque: The leap in complexity, from the point of view of logic, stops after a while, probably by a dozen variables or so. Problems with more variables can be reduced, with (nothing comes for nothing) a jump in degree. Not that this is much consolation! – André Nicolas Jul 1 '11 at 22:14 @use That's not necessarily true for specific classes of problems. In any case my point is simply that never ending ingenuity is required. – Gone Jul 1 '11 at 22:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9464432001113892, "perplexity_flag": "head"}
http://crypto.stackexchange.com/questions/5062/key-scheduling-of-international-data-encryption-algorithm-idea
Key Scheduling of International Data Encryption Algorithm (IDEA) How to perform the key scheduling in International Data Encryption Algorithm (IDEA), I took a research but I can't understand how to perform it, "further groups of eight keys are created by rotating the main key left 25 bits between each group of eight.", this is the part where I cannot understand the key scheduling of IDEA - – calccrypto Oct 16 '12 at 13:39 1 Answer Have a look at the corresponding paper. The key is a 128-bit string $K$. The round keys for round $r=1,...,8$ are derived as follows: • take the version of $K$ cyclically rotated to the left $25\cdot r$ times; • split the string obtained into eight substrings and call them $Z_1^{(r)}$, ..., $Z_8^{(r)}$. The keys $Z_7^{(r)}$ and $Z_8^{(r)}$ are never used. The subkeys $Z_1^{(9)}$, ..., $Z_4^{(9)}$ are used to mask the output. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9340927004814148, "perplexity_flag": "middle"}
http://mathhelpforum.com/math-challenge-problems/92588-limit-2-a.html
# Thread: 1. ## Limit (2) Let $f: [0,1] \longrightarrow \mathbb{R}$ be continuous. We all know that $\lim_{n\to\infty} \frac{1}{n} \sum_{j=1}^n f \left(\frac{j}{n} \right)=\int_0^1 f(x) \ dx.$ Now let $L=\lim_{n\to\infty} \frac{1}{n} \sum_{j=1}^n (-1)^{j-1}f \left(\frac{j}{n} \right).$ What do you think the value of $L$ is? 2. Is it Zero 3. Originally Posted by pankaj Is it Zero that's the natural candidate and it's absolutely correct. so now we need a proof! 4. Can you direct me to a proof of the limit that everyone knows except for me? 5. $\int_{a}^{b}f(x)dx=\lim_{h\to 0}h(f(a+h)+f(a+2h)+f(a+3h)+.....+f(a+nh))$ If a=0 and b=1,we get $h=\frac{b-a}{n}=\frac{1-0}{n}=\frac{1}{n}.$ It is then that we have $\int_{0}^{1}f(x)dx=\lim_{n\to \infty}\frac{1}{n}\sum_{r=1}^nf(\frac{r}{n})$= $\lim_{h\to 0}h(f(h)+f(2h)+f(3h)+.....+f(nh))$ $\lim_{n\to \infty}\frac{1}{n}\sum_{r=1}^n(-1)^{r-1}f(\frac{r}{n})$ $<br /> =\lim_{n\to \infty}\frac{1}{n}(f(\frac{1}{n})-f(\frac{2}{n})+f(\frac{3}{n})-..........+(-1)^{n-1}f(\frac{n}{n}))<br />$ $<br /> =\lim_{h\to 0}h(f(h)-f(2h)+f(3h)-f(4h)+f(5h)-....+(-1)^{n-1}f(nh))<br />$ = $\lim_{h\to 0}h(f(h)+f(3h)+f(5h)+....)-\lim_{h\to 0}h(f(2h)+f(4h)+f(6h)+...)$ $<br /> =\frac{1}{2}\lim_{h\to 0}(2h)(f(h)+f(3h)+f(5h)+....)-\frac{1}{2}\lim_{h\to 0}(2h)(f(2h)+f(4h)+f(6h)+...)<br />$ $<br /> =\frac{1}{2}\int_{0}^1f(x)dx-\frac{1}{2}\int_{0}^1f(x)dx<br />$ $<br /> =0<br />$ 6. I do not believe that I have done it right. 7. Originally Posted by NonCommAlg Let $f: [0,1] \longrightarrow \mathbb{R}$ be continuous. We all know that $\lim_{n\to\infty} \frac{1}{n} \sum_{j=1}^n f \left(\frac{j}{n} \right)=\int_0^1 f(x) \ dx.$ Now let $L=\lim_{n\to\infty} \frac{1}{n} \sum_{j=1}^n (-1)^{j-1}f \left(\frac{j}{n} \right).$ What do you think the value of $L$ is? Write $L_n = \frac{1}{n} \sum_{j=1}^n (-1)^{j-1}f \left(\frac{j}{n} \right)$. Then $L_{2n} = \frac{1}{2n} \sum_{j=1}^n \Big[f \left(\frac{2j-1}{n} \right)-f\left(\frac{2j}{n} \right)\Big]$ hence $L_{2n} \leq \frac{1}{2n} n \mbox{ sup }_{1 \leq j \leq n}\Big[f \left(\frac{2j-1}{n} \right)-f\left(\frac{2j}{n}\right)\Big] = \frac{1}{2}\mbox{ sup }_{1 \leq j \leq n}\Big[f \left(\frac{2j-1}{n} \right)-f\left(\frac{2j}{n}\right)\Big]$ and similarily $L_{2n} \geq \frac{1}{2}\mbox{ inf }_{1 \leq j \leq n}\Big[f \left(\frac{2j-1}{n} \right)-f\left(\frac{2j}{n}\right)\Big]$ so that $\frac{1}{2}\mbox{ inf }_{1 \leq j \leq n}\Big[f \left(\frac{2j-1}{n} \right)-f\left(\frac{2j}{n}\right)\Big] \leq L_{2n} \leq \frac{1}{2}\mbox{ sup }_{1 \leq j \leq n}\Big[f \left(\frac{2j-1}{n} \right)-f\left(\frac{2j}{n}\right)\Big]$ but both sides go to 0 as $n \rightarrow \infty$ since $f$ is continous! 8. Originally Posted by Bruno J. Write $L_n = \frac{1}{n} \sum_{j=1}^n (-1)^{j-1}f \left(\frac{j}{n} \right)$. Then $L_{2n} = \frac{1}{n} \sum_{j=1}^n \Big[f \left(\frac{2j-1}{n} \right)-\left(\frac{2j}{n} \right)\Big]$ n here should be 2n. hence $L_{2n} \leq \frac{1}{n} n \mbox{ sup }_{1 \leq j \leq n}\Big[f \left(\frac{2j-1}{n} \right)-\left(\frac{2j}{n}\right)\Big] = \mbox{ sup }_{1 \leq j \leq n}\Big[f \left(\frac{2j-1}{n} \right)-\left(\frac{2j}{n}\right)\Big]$ and similarily $L_{2n} \geq \mbox{ inf }_{1 \leq j \leq n}\Big[f \left(\frac{2j-1}{n} \right)-\left(\frac{2j}{n}\right)\Big]$ so that $\mbox{ inf }_{1 \leq j \leq n}\Big[f \left(\frac{2j-1}{n} \right)-\left(\frac{2j}{n}\right)\Big] \leq L_{2n} \leq \mbox{ sup }_{1 \leq j \leq n}\Big[f \left(\frac{2j-1}{n} \right)-\left(\frac{2j}{n}\right)\Big]$ but both sides go to 0 as $n \rightarrow \infty$ since $f$ is continous! even $L_{2n} \to 0$ will not necessarily imply that $L_n \to 0$ because we don't know that $\lim L_n$ exists. 9. Thanks for the heads up! I was in a hurry when I wrote that (excuses excuses ). Indeed $L_{2n}\rightarrow 0$ does not imply the result, but a slight modification of the argument should show that $L_{n}\rightarrow 0$, ex. by writing $L_{2n+1} = \frac{1}{2n+1}\Big[\sum_{j=1}^n\Big[f\Big(\frac{2j-1}{n}\Big)-f\Big(\frac{2j}{n}\Big)\Big]+f\Big(\frac{2n+1}{n}\Big)\Big]$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 39, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.95467609167099, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/293036/i-want-to-find-the-real-number-c-for-which-we-have-pxc-pyc
# I want to find the real number c for which we have: $P(X<c)=P(Y<c)$ Consider two normal random variables $X$ and $Y$: $$X\sim N(m_1,s_1), \qquad Y\sim N(m_2,s_2)$$ I want to find the real number $c$ for which we have: $$P(X<c)=P(Y<c)$$ - ## 2 Answers I'll assume that by $s_1$ you mean the variance, so that $\sqrt{s_1}$ is the standard deviation. Since the cumulative distribution function of the normal distribution is one-to-one, what you need is only this: $$\frac{c-m_1}{\sqrt{s_1}} = \frac{c-m_2}{\sqrt{s_2}}.$$ It follows that $$\sqrt{s_2}(c-m_1) = \sqrt{s_1}(c-m_2)$$ $$(\sqrt{s_2}-\sqrt{s_1})c = m_1\sqrt{s_2} - m_2\sqrt{s_1}$$ $$c=\cdots$$ I'll let you do the rest. - Two obvious solutions are when $\lim_{x\rightarrow\infty}$ or when $\lim_{x\rightarrow-\infty}$. Then the solution exists independent of the parameters. One other special case is when the likelihood ratio $(f_1/f0)(x)$ is monotone increasing. In this case the solution exists if and only if when $\lim_{x\rightarrow\infty}$ or $\lim_{x\rightarrow-\infty}$, which is true iff when $s_1=s_2$. For an arbitrary solution refer to Michael Hardy's solution. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9077214598655701, "perplexity_flag": "head"}
http://en.wikipedia.org/wiki/Immersion_(mathematics)
# Immersion (mathematics) The Klein bottle, immersed in 3-space. For a closed immersion in algebraic geometry, see closed immersion. In mathematics, an immersion is a differentiable map between differentiable manifolds whose derivative is everywhere injective. Explicitly, f : M → N is an immersion if $D_pf : T_p M \to T_{f(p)}N\,$ is an injective map at every point p of M (where the notation TpX represents the tangent space of X at the point p). Equivalently, f is an immersion if it has constant rank equal to the dimension of M: $\operatorname{rank}\,D_p f = \dim M.$ The map f itself need not be injective, only its derivative. A related concept is that of an embedding. A smooth embedding is an injective immersion f : M → N which is also a topological embedding, so that M is diffeomorphic to its image in N. An immersion is precisely a local embedding – i.e. for any point x ∈ M there is a neighbourhood, U ⊂ M, of x such that f : U → N is an embedding, and conversely a local embedding is an immersion. An injectively immersed submanifold that is not an embedding. If M is compact, an injective immersion is an embedding, but if M is not compact then injective immersions need not be embeddings; compare to continuous bijections versus homeomorphisms. ## Regular homotopy A regular homotopy between two immersions f and g from a manifold M to a manifold N is defined to be a differentiable function H : M × [0,1] → N such for all t in [0, 1] the function Ht : M → N defined by Ht(x) = H(x, t) for all x ∈ M is an immersion, with H0 = f, H1 = g. A regular homotopy is thus a homotopy through immersions. ## Classification Hassler Whitney initiated the systematic study of immersions and regular homotopies in the 1940s, proving that for 2m < n+1 every map f : Mm → Nn of an m-dimensional manifold to an n-dimensional manifold is homotopic to an immersion, and in fact to an embedding for 2m < n; these are the Whitney immersion theorem and Whitney embedding theorem. Stephen Smale expressed the regular homotopy classes of immersions f : Mm → Rn as the homotopy groups of a certain Stiefel manifold. The sphere eversion was a particularly striking consequence. Morris Hirsch generalized Smale's expression to a homotopy theory description of the regular homotopy classes of immersions of any m-dimensional manifold Mm in any n-dimensional manifold Nn. The Hirsch-Smale classification of immersions was generalized by Mikhail Gromov. ### Existence The Möbius strip does not immerse in codimension 0 because its tangent bundle is non-trivial. The primary obstruction to the existence of an immersion i : Mm → Rn is the stable normal bundle of M, as detected by its characteristic classes, notably its Stiefel–Whitney classes. That is, since Rn is parallelizable, the pullback of its tangent bundle to M is trivial; since this pullback is the direct sum of the (intrinsically defined) tangent bundle on M, TM, which has dimension m, and of the normal bundle ν of the immersion i, which has dimension n−m, for there to be a codimension k immersion of M, there must be a vector bundle of dimension k, ξk, standing in for the normal bundle ν, such that TM ⊕ ξk is trivial. Conversely, given such a bundle, an immersion of M with this normal bundle is equivalent to a codimension 0 immersion of the total space of this bundle, which is an open manifold. The stable normal bundle is the class of normal bundles plus trivial bundles, and thus if the stable normal bundle has cohomological dimension k, it cannot come from an (unstable) normal bundle of dimension less than k. Thus, the cohomology dimension of the stable normal bundle, as detected by its highest non-vanishing characteristic class, is an obstruction to immersions. Since characteristic classes multiply under direct sum of vector bundles, this obstruction can be stated intrinsically in terms of the space M and its tangent bundle and cohomology algebra. This obstruction was stated (in terms of the tangent bundle, not stable normal bundle) by Whitney. For example, the Möbius strip has non-trivial tangent bundle, so it cannot immerse in codimension 0 (in R2), though it embeds in codimension 1 (in R3). In 1960, William S. Massey (Massey 1960) showed that these characteristic classes (the Stiefel–Whitney classes of the stable normal bundle) vanish above degree n−α(n), where α(n) is the number of “1” digits when n is written in binary; this bound is sharp, as realized by real projective space. This gave evidence to the Immersion Conjecture, namely that every n-manifold could be immersed in codimension n−α(n), i.e., in R2n−α(n) and was proven in 1985 by Ralph Cohen (Cohen 1985). ### Codimension 0 Codimension 0 immersions are equivalently relative dimension 0 submersions, and are better thought of as submersions. A codimension 0 immersion of a closed manifold is precisely a covering map, i.e., a fiber bundle with 0-dimensional (discrete) fiber. By Ehresmann's theorem and Phillips' theorem on submersions, a proper submersion of manifolds is a fiber bundle, hence codimension/relative dimension 0 immersions/submersions behave like submersions. Further, codimenson 0 immersions do not behave like other immersions, which are largely determined by the stable normal bundle: in codimension 0 one has issues of fundamental class and cover spaces. For instance, there is no codimension 0 immersion S1 → R1, despite the circle being parallelizable, which can be proven because the line has no fundamental class, so one does not get the required map on top cohomology. Alternatively, this is by invariance of domain. Similarly, although S3 and the 3-torus T3 are both parallelizable, there is no immersion T3 → S3 – any such cover would have to be ramified at some points, since the sphere is simply connected. Another way of understanding this is that a codimension k immersion of a manifold corresponds to a codimension 0 immersion of a k-dimensional vector bundle, which is an open manifold if the codimension is greater than 0, but to a closed manifold in codimension 0 (if the original manifold is closed). ## Multiple points A k-tuple point (double, triple, etc.) of an immersion f : M → N is an unordered set {x1, ..., xk} of distinct points xi ∈ M with the same image f(xi) ∈ N. If M is an m-dimensional manifold and N is an n-dimensional manifold then for an immersion f : M → N in general position the set of k-tuple points is an n−k(n−m)-dimensional manifold. An embedding is an immersion without multiple points (where k > 1). Note, however, that the converse is false: there are injective immersions that are not embeddings. The nature of the multiple points classifies immersions; for example, immersions of a circle in the plane are classified up to regular homotopy by the number of double points. At a key point in surgery theory it is necessary to decide if an immersion f : Sm → N2m of an m-sphere in a 2m-dimensional manifold is regular homotopic to an embedding, in which case it can be killed by surgery. Wall associated to f an invariant μ(f) in a quotient of the fundamental group ring Z[π1(N)] which counts the double points of f in the universal cover of N. For m > 2, f is regular homotopic to an embedding if and only if μ(f) = 0 by the Whitney trick. One can study embeddings as "immersions without multiple points", since immersions are easier to classify. Thus, one can start from immersions and try to eliminate multiple points, seeing if one can do this without introducing other singularities – studying "multiple disjunctions". This was first done by André Haefliger, and this approach is fruitful in codimension 3 or more – from the point of view of surgery theory, this is "high (co)dimension", unlike codimension 2 which is the knotting dimension, as in knot theory. It is studied categorically via the "calculus of functors" by Thomas Goodwillie, John Klein, and Michael S. Weiss. ## Examples and properties • The Klein bottle, and all other non-orientable closed surfaces, can be immersed in 3-space but not embedded. The quadrifolium, the 4-petaled rose. • A mathematical rose with k petals is an immersion of the circle in the plane with a single k-tuple point; k can be any odd number, but if even must be a multiple of 4, so the figure 8 is not a rose. • By the Whitney–Graustein theorem the regular homotopy classes of immersions of the circle in the plane are classified by the winding number which is also the number of double points counted algebraically (i.e. with signs). • The sphere can be turned inside out: the standard embedding f : S2 → R3 is related to f1 = −f0 : S2 → R3 by a regular homotopy of immersions ft : S2 → R3. • Boy's surface is an immersion of the real projective plane in 3-space; thus also a 2-to-1 immersion of the sphere. • The Morin surface is an immersion of the sphere; both it and Boy's surface arise as midway models in sphere eversion. ### Immersed plane curves This curve has total curvature 6π, and turning number 3, though it only has winding number 2 about p. Main articles: Whitney–Graustein theorem, Total curvature, and Turning number Immersed plane curves have a well-defined turning number, which can be defined as the total curvature divided by 2π. This is invariant under regular homotopy, by the Whitney–Graustein theorem – topologically, it is the degree of the Gauss map, or equivalently the winding number of the unit tangent (which does not vanish) about the origin. Further, this is the only invariant – any two plane curves with the same turning number are regular homotopic. Every immersed plane curve lifts to an embedded space curve via separating the intersection points, which is not true in higher dimensions. With added data (which strand is on top), immersed plane curves yield knot diagrams, which are of central interest in knot theory. While immersed plane curves, up to regular homotopy, are determined by their turning number, knots have a very rich and complex structure. ### Immersed surfaces in 3-space The study of immersed surfaces in 3-space is closely connected with the study of knotted (embedded) surfaces in 4-space, by analogy with the theory of knot diagrams (immersed plane curves (2-space) as projections of knotted curves in 3-space): given a knotted surface in 4-space, one can project it to an immersed surface in 3-space, and conversely, given an immersed surface in 3-space, one may ask if it lifts to 4-space – is it the projection of a knotted surface in 4-space? This allows one to relate questions about these objects. A basic result, in contrast to the case of plane curves, is that not every immersed surface lifts to a knotted surface.[1] In some cases the obstruction is 2-torsion, such as in Koschorke's example,[2] which is an immersed surface (formed from 3 Möbius bands, with a triple point) that does not lift to a knotted surface, but it has a double cover that does lift. A detailed analysis is given in (Carter & Saito 1998), while a more recent survey is given in (Carter, Kamada & Saito 2004). ## Generalizations Main article: Homotopy principle A far-reaching generalization of immersion theory is the homotopy principle: one may consider the immersion condition (the rank of the derivative is always k) as a partial differential relation (PDR), as it can be stated in terms of the partial derivatives of the function. Then Smale–Hirsch immersion theory is the result that this reduces to homotopy theory, and the homotopy principle gives general conditions and reasons for PDRs to reduce to homotopy theory. ## References • Adachi, Masahisa (1993), Embeddings and immersions, ISBN 978-0-8218-4612-4, translation Kiki Hudson • Arnold, V. I.; Varchenko, A. N.; Gusein-Zade, S. M. (1985), Singularities of Differentiable Maps: Volume 1, Birkhäuser, ISBN 0-8176-3187-9 • Bruce, J. W.; Giblin, P. J. (1984), Curves and Singularities, Cambridge University Press, ISBN 0-521-42999-4 • • Carter, J. Scott; Saito, Masahico (1998), Knotted Surfaces and Their Diagrams, Mathematical Surveys and Monographs 55, p. 258, ISBN 978-0-8218-0593-0 • Carter, J. Scott; Kamada, Seiichi; Saito, Masahico (2004), Surfaces in 4-space • Gromov, M. (1986), Partial differential relations, Springer, ISBN 3-540-12177-3 • Hirsch M. Immersions of manifolds. Trans. A.M.S. 93 1959 242—276. • Koschorke, Ulrich (1979), "Multiple points of Immersions and the Kahn-Priddy Theorem", Math Z. (169): 223–236 • Smale, S. A classification of immersions of the two-sphere. Trans. Amer. Math. Soc. 90 1958 281–290. • Smale, S. The classification of immersions of spheres in Euclidean spaces. Ann. of Math. (2) 69 1959 327—344. • Spring, D. (2005), "The Golden Age of Immersion Theory in Topology: 1959-1973", (42): 163–180 • Wall, C. T. C.: Surgery on compact manifolds. 2nd ed., Mathematical Surveys and Monographs 69, A.M.S.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8944560885429382, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/20353/calculating-lagrangian-density-from-first-principle
# Calculating lagrangian density from first principle In most of the field theory text they will start with lagrangian density for spin 1 and spin 1/2 particles. But i could find any text where this lagrangian density is derived from first principle. - ## 3 Answers Try Steven Weinbergs comprehensive The Quantum Theory of Fields (Vol. 1, "Foundations"). He follows a very systematic approach from "first principles", i.e. from Wigner's classification of unitary irreducible representations of the Poincaré group, over free fields for different mass/spin configurations (including spin 1 and 1/2, which in different formulation lead up to the Klein-Gordon and Dirac equations) to perturbation theory and Lagrangian densities (and lots more). If you're interested in a more compact treatment of the "first principles" part only (but not Lagrangian densities!), plus theorems that can be proven as a direct consequence of them, such as PCT or spin/statistics, the standard textbook/primer of mathematical QFT is Streater/Wightman, PCT, spin and statistics, and all that. - Seek for 'Klein-Gordon equation' and 'Dirac equation' - they can be found in any textbook concerning basic relativistic quantum mechanics (such as, e.g. Landau ). Klein-Gordon (spin=0 and any natural spin after modifications) comes directly from the energy-momentum conservation of special relativity $p_\mu p^{\mu} = -m^2$, whereas Dirac equation for fractional spins is guessed as 'square root' of Klein-Gordon (in certain sense). - The "first principle" for any Lagrangian is the corresponding equation. If you advance, for any particular reason, an equation, you may construct its Lagrangian knowing the structure of the Lagrange equations:$$\frac{d}{dt}\frac{\partial L}{\partial \dot {\phi}}=\frac{\partial L}{\partial {\phi}}$$ -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.855758786201477, "perplexity_flag": "middle"}
http://jeremykun.com/tag/unsupervised-learning/
# k-Means Clustering and Birth Rates Posted on February 4, 2013 by A common problem in machine learning is to take some kind of data and break it up into “clumps” that best reflect how the data is structured. A set of points which are all collectively close to each other should be in the same clump. A simple picture will clarify any vagueness in this: Here the data consists of points in the plane. There is an obvious clumping of the data into three pieces, and we want a way to automatically determine which points are in which clumps. The formal name for this problem is the clustering problem. That is, these clumps of points are called clusters, and there are various algorithms which find a “best” way to split the data into appropriate clusters. The important applications of this are inherently similarity-based: if our data comes from, say, the shopping habits of users of some website, we may want to target a group of shoppers who buy similar products at similar times, and provide them with a coupon for a specific product which is valid during their usual shopping times. Determining exactly who is in that group of shoppers (and more generally, how many groups there are, and what the features the groups correspond to) if the main application of clustering. This is something one can do quite easily as a human on small visualizable datasets, but the usual the digital representation (a list of numeric points with some number of dimensions) doesn’t yield any obvious insights. Moreover, as the data becomes more complicated (be it by dimension increase, data collection errors, or sheer volume) the “human method” can easily fail or become inconsistent. And so we turn to mathematics to formalize the question. In this post we will derive one possible version of the clustering problem known as the k-means clustering or centroid clustering problem, see that it is a difficult problem to solve exactly, and implement a heuristic algorithm in place of an exact solution. And as usual, all of the code used in this post is freely available on this blog’s Google code page. ## Partitions and Squared Deviations The process of clustering is really a process of choosing a good partition of the data. Let’s call our data set $S$, and formalize it as a list of points in space. To be completely transparent and mathematical, we let $S$ be a finite subset of a metric space $(X,d)$, where $d$ is our distance metric. Definition: We call a partition of a set $S$ a choice of subsets $A_1, \dots, A_n$ of $S$ so that every element of $S$ is in exactly one of the $A_i$. A couple of important things to note about partitions is that the union of all the $A_i$ is $S$, and that any two $A_i, A_j$ intersect trivially. These are immediate consequences of the definition, and together provide an equivalent, alternative definition for a partition. As a simple example, the even and odd integers form a partition of the whole set of integers. There are many different kinds of clustering problems, but every clustering problem seeks to partition a data set in some way depending on the precise formalization of the goal of the problem. We should note that while this section does give one of many possible versions of this problem, it culminates in the fact that this formalization is too hard to solve exactly. An impatient reader can safely skip to the following section where we discuss the primary heuristic algorithm used in place of an exact solution. In order to properly define the clustering problem, we need to specify the desired features of a cluster, or a desired feature of the set of all clusters combined. Intuitively, we think of a cluster as a bunch of points which are all close to each other. We can measure this explicitly as follows. Let $A$ be a fixed subset of the partition we’re interested in. Then we might want to optimize the sum of all of the distances of pairs of points within $A$ to be a measure of it’s “clusterity.” In symbols, this would be $\displaystyle \sum_{x \neq y \in A} d(x, y)$ If this quantity is small, then it says that all of the points in the cluster $A$ are close to each other, and $A$ is a good cluster. Of course, we want all clusters to be “good” simultaneously, so we’d want to minimize the sum of these sums over all subsets in the partition. Note that if there are $n$ points in $A$, then the above sum involves $\choose{n}{2} \sim n^2$ distance calculations, and so this could get quite inefficient with large data sets. One of the many alternatives is to pick a “center” for each of the clusters, and try to minimize the sum of the distances of each point in a cluster from its center. Using the same notation as above, this would be $\displaystyle \sum_{x \in A} d(x, c)$ where $c$ denotes the center of the cluster $A$. This only involves $n$ distance calculations, and is perhaps a better measure of “clusterity.” Specifically, if we use the first option and one point in the cluster is far away from the others, we essentially record that single piece of information $n - 1$ times, whereas in the second we only record it once. The method we will use to determine the center can be very general. We could use one of a variety of measures of center, like the arithmetic mean, or we could try to force one of the points in $A$ to be considered the “center.” Fortunately, the arithmetic mean has the property that it minimizes the above sum for all possible choices of $c$. So we’ll stick with that for now. And so the clustering problem is formalized. Definition: Let $(X,d)$ be a metric space with metric $d$, and let $S \subset (X,d)$ be a finite subset. The centroid clustering problem is the problem of choosing a positive integer $k$ and a partition $\left \{ A_1 ,\dots A_k \right \}$ of $S$ so that the following quantity is minimized: $\displaystyle \sum_{i=1}^k\sum_{x \in A_i} d(x, c(A_i))$ where $c(A_i)$ denotes the center of a cluster, defined as the arithmetic mean of the points in $A_i$: $\displaystyle c(A) = \frac{1}{|A|} \sum_{x \in A} x$ Before we continue, we have a confession to make: the centroid clustering problem is prohibitively difficult. In particular, it falls into a class of problems known as NP-hard problems. For the working programmer, NP-hard means that there is unlikely to be an exact solution to the problem which is better than trying all possible partitions. We’ll touch more on this after we see some code, but the salient fact is that a heuristic algorithm is our best bet. That is, all of this preparation with partitions and squared deviations really won’t come into the algorithm design at all. Formalizing this particular problem in terms of sets and a function we want to optimize only allows us to rigorously prove it is difficult to solve exactly. And so, of course, we will develop a naive and intuitive heuristic algorithm to substitute for an exact solution, observing its quality in practice. Lloyd’s Algorithm The most common heuristic for the centroid clustering problem is Lloyd’s algorithm, more commonly known as the k-means clustering algorithm. It was named after its inventor Stuart Lloyd, a University of Chicago graduate and member of the Manhattan project who designed the algorithm in 1957 during his time at Bell Labs. Heuristics tend to be on the simpler side, and Lloyd’s algorithm is no exception. We start by fixing a number of clusters $k$ and choosing an arbitrary initial partition $A = \left \{ A_1, \dots, A_k \right \}$. The algorithm then proceeds as follows: repeat: compute the arithmetic mean c[i] of each A[i] construct a new partition B: each subset B[i] is given a center c[i] computed from A x is assigned to the subset B[i] whose c[i] is closest stop if B is equal to the old partition A, else set A = B Intuitively, we imagine the centers of the partitions being pulled toward the center of mass of the points in its currently assigned cluster, and then the points deciding selectively who to pull towards them. (Indeed, precisely because of this the algorithm may not always give sensible results, but more on that later.) One who is in tune with their inner pseudocode will readily understand the above algorithm. But perhaps the simplest way to think about this algorithm is functionally. That is, we are constructing this partition-updating function $f$ which accepts as input a partition of the data and produces as output a new partition as follows: first compute the mean of centers of the subsets in the old partition, and then create the new partition by gathering all the points closest to each center. These are the fourth and fifth lines of the pseudocode above. Indeed, the rest of the pseudocode is merely pomp and scaffolding! What we are really after is a fixed point of the partition-updating function $f$. In other words, we want a partition $P$ such that $f(P) = P$. We go about finding one in this algorithm by applying $f$ to our initial partition $A$, and then recursively applying $f$ to its own output until we no longer see a change. Perhaps we should break away from traditional pseudocode illegibility and rewrite the algorithm as follows: define updatePartition(A): let c[i] = center(A[i]) return a new partition B: each B[i] is given the points which are closest to c[i] compute a fixed point by recursively applying updatePartition to any initial partition. Of course, the difference between these pseudocode snippets is just the difference between functional and imperative programming. Neither is superior, but the perspective of both is valuable in its own right. And so we might as well implement Lloyd’s algorithm in two such languages! The first, weighing in at a whopping four lines, is our Mathematica implementation: closest[x_, means_] := means[[First[Ordering[Map[EuclideanDistance[x, #] &, means]]]]]; partition[points_, means_] := GatherBy[points, closest[#, means]&]; updatePartition[points_, old_] := partition[points, Map[Mean, old]]; kMeans[points_, initialMeans_] := FixedPoint[updatePartition[points, #]&, partition[points, initialMeans]]; While it’s a little bit messy (as nesting 5 function calls and currying by hand will inevitably be), the ideas are simple. The “closest” function computes the closest mean to a given point $x$. The “partition” function uses Mathematica’s built-in GatherBy function to partition the points by the closest mean; GatherBy[L, f] partitions its input list L by putting together all points which have the same value under f. The “updatePartition” function creates the new partition based on the centers of the old partition. And finally, the “kMeans” function uses Mathematica’s built-in FixedPoint function to iteratively apply updatePartition to the initial partition until there are no more changes in the output. Indeed, this is as close as it gets to the “functional” pseudocode we had above. And applying it to some synthetic data (three randomly-sampled Gaussian clusters that are relatively far apart) gives a good clustering in a mere two iterations: Indeed, we rarely see a large number of iterations, and we leave it as an exercise to the reader to test Lloyd’s algorithm on random noise to see just how bad it can get (remember, all of the code used in this post is available on this blog’s Google code page). One will likely see convergence on the order of tens of iterations. On the other hand, there are pathologically complicated sets of points (even in the plane) for which Lloyd’s algorithm takes exponentially long to converge to a fixed point. And even then, the solution is never guaranteed to be optimal. Indeed, having the possibility for terrible run time and a lack of convergence is one of the common features of heuristic algorithms; it is the trade-off we must make to overcome the infeasibility of NP-hard problems. Our second implementation was in Python, and compared to the Mathematica implementation it looks like the lovechild of MUMPS and C++. Sparing the reader too many unnecessary details, here is the main function which loops the partition updating, a la the imperative pseudocode: def kMeans(points, k, initialMeans, d=euclideanDistance): oldPartition = [] newPartition = partition(points, k, initialMeans, d) while oldPartition != newPartition: oldPartition = newPartition newMeans = [mean(S) for S in oldPartition] newPartition = partition(points, k, newMeans, d) return newPartition We added in the boilerplate functions for euclideanDistance, partition, and mean appropriately, and the reader is welcome to browse the source code for those. ## Birth and Death Rates Clustering To test our algorithm, let’s apply it to a small data set of real-world data. This data will consist of one data point for each country consisting of two features: birth rate and death rate, measured in annual number of births/deaths per 1,000 people in the population. Since the population is constantly changing, it is measured at some time in the middle of the year to act as a reasonable estimate to the median of all population values throughout the year. The raw data comes directly from the CIA’s World Factbook data estimate for 2012. Formally, we’re collecting the “crude birth rate” and “crude death rate” of each country with known values for both (some minor self-governing principalities had unknown rates). The “crude rate” simply means that the data does not account for anything except pure numbers; there is no compensation for the age distribution and fertility rates. Of course, there are many many issues affecting the birth rate and death rate, but we don’t have the background nor the stamina to investigate their implications here. Indeed, part of the point of studying learning methods is that we want to extract useful information from the data without too much human intervention (in the form of ambient knowledge). Here is a plot of the data with some interesting values labeled (click to enlarge): Specifically, we note that there is a distinct grouping of the data into two clusters (with a slanted line apparently separating the clusters). As a casual aside, it seems that the majority of the countries in the cluster on the right are countries with active conflict. Applying Lloyd’s algorithm with $k=2$ to this data results in the following (not quite so good) partition: Note how some of the points which we would expect to be in the “left” cluster are labeled as being in the right. This is unfortunate, but we’ve seen this issue before in our post on k-nearest-neighbors: the different axes are on different scales. That is, death rates just tend to vary more wildly than birth rates, and the two variables have different expected values. Compensating for this is quite simple: we just need to standardize the data. That is, we need to replace each data point with its deviation from the mean (with respect to each coordinate) using the usual formula: $\displaystyle z = \frac{x - \mu}{\sigma}$ where for a random variable $X$, its (sample) expected value is $\mu$ and its (sample) standard deviation is $\sigma$. Doing this in Mathematica is quite easy: Transpose[Map[Standardize, Transpose[L]]] where L is a list containing our data. Re-running Lloyd’s algorithm on the standardized data gives a much better picture: Now the boundary separating one cluster from the other is in line with what our intuition dictates it should be. ## Heuristics… The Air Tastes Bitter We should note at this point that we really haven’t solved the centroid clustering problem yet. There is one glaring omission: the choice of $k$. This question is central to the problem of finding a good partition; a bad choice can yield bunk insights at best. Below we’ve calculated Lloyd’s algorithm for varying values of $k$ again on the birth-rate data set. Lloyd’s algorithm processed on the birth-rate/death-rate data set with varying values of k between 2 and 7 (click to enlarge). The problem of finding $k$ has been addressed by many a researcher, and unfortunately the only methods to find a good value for $k$ are heuristic in nature as well. In fact, many believe that to determine the correct value of $k$ is a learning problem in of itself! We will try not to go into too much detail about parameter selection here, but needless to say it is an enormous topic. And as we’ve already said, even if the correct choice of $k$ is known, there is no guarantee that Lloyd’s algorithm (or any algorithm attempting to solve the centroid clustering problem) will converge to a global optimum solution. In the same fashion as our posts on cryptoanalysis and deck-stacking in Texas Hold ‘Em, the process of finding a minimum can converge to a local minimum. Here is an example with four clusters, where each frame is a step, and the algorithm progresses from left to right (click to enlarge): One way to alleviate the issues of local minima is the same here as in our other posts: simply start the algorithm over again from a different randomly chosen starting point. That is, as in our implementations above, our “initial means” are chosen uniformly at random from among the data set points. Alternatively, one may randomly partition the data (without respect to any center; each data point is assigned to one of the $k$ clusters with probability $1/k$). We encourage the reader to try both starting conditions as an exercise, and implement the repeated algorithm to return that output which minimizes the objective function (as detailed in the “Partitions and Squared Deviations” section). And even if the algorithm will converge to a global minimum, it might not be the case that it does so efficiently. As we already mentioned, solving the problem of centroid clustering (even for a fixed $k$) is NP-hard. And so (assuming $\textup{P} \neq \textup{NP}$) any algorithm which converges to a global minimum will take exponentially long on some pathological inputs. The interested reader will see this exponentially slow convergence even in the case of k=2 for points in the plane (that is as simple as it gets). These kinds of reasons make Lloyd’s algorithm and the centroid clustering problem a bit of a poster child of machine learning. In theory it’s difficult to solve exactly, but it has an efficient and widely employed heuristic used in practice which is often good enough. Moreover, since the exact solution is more or less hopeless, much of the focus has shifted to finding randomized algorithms which on average give solutions that are within some constant-factor approximation of the true minimum. ## A Word on Expectation Maximization This algorithm shares quite a bit of features with a very famous algorithm called the Expectation-Maximization algorithm. We plan to investigate this after we spend some more time on probability theory on this blog, but the (very rough) idea is that the algorithm operates in two steps. First, a measure of “center” is chosen for each of a number of statistical models based on given data. Then a maximization step occurs which chooses the optimal parameters for those statistical models, in the sense that the probability that the data was generated by statistical models with those parameters is maximized. These statistical models are then used as the “old” statistical models whose centers are computed in the next step. Continuing the analogy with clustering, one feature of expectation-maximization that makes it nice is it allows the sizes of the “clusters” to have varying sizes, whereas Lloyd’s algorithm tends to make its clusters have equal size (as we saw with varying values of $k$ in our birth-rates example above). And so the ideas involved in this post are readily generalizable, and the applications extend to a variety of fields like image reconstruction, natural language processing, and computer vision. The reader who is interested in the full mathematical details can see this tutorial. Until next time! ### Like this: Posted in Algorithms, Optimization, Set Theory | | Cancel
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 63, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9290087819099426, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/53899/question-in-the-exactness-of-the-induced-sequence-of-an-exact-sequence-of-module
# Question in the exactness of the induced sequence of an exact sequence of module homomorphisms using the representation functor I am referring to Theorem 6.3 p. 143 from Lang's Algebra (but the question description will be as self-contained as possible). Let $A$ be a commutative ring and $M$, $W$, $V$, $U$ be $A$-modules. Let the sequence $0 \longrightarrow W \stackrel{\lambda}{\longrightarrow} V \stackrel{\phi}{\longrightarrow} U \longrightarrow 0$ be exact. By Proposition 2.1 p. 122, the induced sequence $0 \longrightarrow Hom_A(U,M) \stackrel{\phi^'}{\longrightarrow} Hom_A(V,M) \stackrel{\lambda^'}{\longrightarrow} Hom_A(W,M)$ is exact. If we let $M$ be $A$ as a module over itself, then we obtain $0 \longrightarrow Hom_A(U,A) \stackrel{\phi^'}{\longrightarrow} Hom_A(V,A) \stackrel{\lambda^'}{\longrightarrow} Hom_A(W,A)$ exact or in Lang's notation $0 \longrightarrow U^{\vee} \stackrel{\phi^'}{\longrightarrow} V^{\vee} \stackrel{\lambda^'}{\longrightarrow} W^{\vee} \label{eq-2}$, where $V^{\vee}$ is the dual module of $V$. Now, this is where i have a problem: it is mentioned in the proof of theorem 6.3 that since $A$ is projective (because it is free), then we also have exactness from the right ,i.e. $0 \longrightarrow U^{\vee} \stackrel{\phi^'}{\longrightarrow} V^{\vee} \stackrel{\lambda^'}{\longrightarrow} W^{\vee} \label{eq-2} \label{eq-3} \longrightarrow 0$ is exact. I don't see where this exactness from the right comes from since $A$ being projective means that the functor $Hom_A(A,\cdot)$ is exact, while to obtain dual spaces we use the functor $Hom_A(\cdot,A)$. Any insights? Thank you:-) - ## 1 Answer You are missing that Lang assumes that the $A$-modules $W,V,U$ are finite free. The exactness of the dual sequence is then immediate. - I am studying the same problem after several months, i got stuck exactly at the same point and now i can not see why the exactness is immediate. Could you please give me a hint? Thanks. – Manos Dec 10 '11 at 18:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9316475987434387, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-statistics/89891-linear-combination-normal-random-variables.html
# Thread: 1. ## Linear combination of normal random variables "Fact: Any linear combination of independent normal random variables has a normal distribution." Is the condition "independent" here absolutely necessary? If we remove the word "independent", would the linear combination still be normally distributed? Why or why not? Thank you! 2. yes you need indep... Let X be any normal rv. Then Y=X-X is not a normal. P(Y=0)=1 is not normal. 3. Say if we have X~Normal(0,1), Y~Normal(0,1) E(X+Y) = E(X)+E(Y) = 0 V(X+Y) = V(X)+V(Y)+2Cov(X,Y) = 2+2Cov(X,Y) Can we now say that X+Y~Normal(0, 2+2Cov(X,Y) ) ? Thanks! 4. Originally Posted by kingwinner Say if we have X~Normal(0,1), Y~Normal(0,1) E(X+Y) = E(X)+E(Y) = 0 V(X+Y) = V(X)+V(Y)+2Cov(X,Y) = 2+2Cov(X,Y) Can we now say that X+Y~Normal(0, 2+2Cov(X,Y) ) ? Thanks! No! (This counterexample is wrong, see Laurents post below) Counterexample:Lets say your r.v pair X,Y has a joint char function $\phi(s_1,s_2) = \mathbb{E}(e^{s_1X+s_2Y}) = \text{exp}\left({\frac{s_1 ^2 + s_2 ^2}2 + s_1s_2}\right)$. Clearly they are marginally gaussian as you require and the sum distribution is not gaussian. However if your question is if "independence" is necessary,then its not. Counterexample: What if X = 3Z+Y and where Z and Y are independent normal? 5. Unfortunately, I haven't learnt joint characteristic function in my class yet (I've learnt moment generating functions, however), so I don't have the required background to understand your example. I am sorry about that. What if X = 3Z+Y and where Z and Y are independent normal? Since they are indepednent, X must be normally distributed as well. Now my question is, if the random variables are NOT indepednent (see my example below), would a linear combination of those random variables also be normally distributed? In short, my question is: Suppose that X~Normal(0,1), Y~Normal(0,1), where X and Y are NOT independent Can we say that X+Y~Normal(0, 2+2Cov(X,Y) ) for sure? Thanks! 6. Originally Posted by kingwinner Since they are indepednent, X must be normally distributed as well. You didnt get my point. I wanted to say X+Y is normal distributed even though X and Y are not independent. Now my question is, if the random variables are NOT indepednent (see my example below), would a linear combination of those random variables also be normally distributed? In short, my question is: Suppose that X~Normal(0,1), Y~Normal(0,1), where X and Y are NOT independent Can we say that X+Y~Normal(0, 2+2Cov(X,Y) ) for sure? Thanks! My previous post is trying to tell you that there is no general answer. Some times the sum could be normal, sometimes not. 7. Originally Posted by Isomorphism No! Counterexample:Lets say your r.v pair X,Y has a joint char function $\phi(s_1,s_2) = \mathbb{E}(e^{s_1X+s_2Y}) = \text{exp}\left({\frac{s_1 ^2 + s_2 ^2}2 + s_1s_2}\right)$. Clearly they are marginally gaussian as you require and the sum distribution is not gaussian. To be precise, what you're dealing with is rather a moment generating function (kind of Laplace transform) than a characteristic function (kind of Fourier transform). By the way, it is not easy to prove that a function is a moment generating function for some probability distribution (it involves Bochner's theorem, which is uneasy to check in general), so that you should prove first that your function is indeed a m.g.f.. Actually it is a m.g.function, because this is that of a Gaussian vector... Marginals are indeed standard Gaussian r.v., but the m.g.f. of the sum is $\mathbb{E}[e^{s(X+Y)}]=\exp(2s^2)$ (take $s_1=s_2=s$), which is the m.g.f. of a centered Gaussian with variance 4... Hence this is no counterexample. Matheagle gave a working counterexample. Since Dirac measures can be seen as "limit cases" of Gaussian distribution when the variance goes to 0, I tend to prefer the following one: Let $X,\varepsilon$ be independent r.v. where $X$ is a standard Gaussian r.v., and $\varepsilon$ has a distribution given by $P(\varepsilon=+1)=P(\varepsilon=-1)=1/2$. Then let $Y=\varepsilon X$, so that $Y$ is a standard Gaussian, while $X+Y=(1+\varepsilon)X$ is 0 with probability 1/2, hence it is not Gaussian, and it is not degenerate. 8. Originally Posted by Laurent To be precise, what you're dealing with is rather a moment generating function (kind of Laplace transform) than a characteristic function (kind of Fourier transform). By the way, it is not easy to prove that a function is a moment generating function for some probability distribution (it involves Bochner's theorem, which is uneasy to check in general), so that you should prove first that your function is indeed a m.g.f.. Actually it is a m.g.function, because this is that of a Gaussian vector... Marginals are indeed standard Gaussian r.v., but the m.g.f. of the sum is $\mathbb{E}[e^{s(X+Y)}]=\exp(2s^2)$ (take $s_1=s_2=s$), which is the m.g.f. of a centered Gaussian with variance 4... Hence this is no counterexample. Whoops! Perhaps I should have gone with my standard counter $\phi(s_1,s_2) = \mathbb{E}(e^{s_1X+s_2Y}) = (1 + s_1s_2)\text{exp}\left({\frac{s_1 ^2 + s_2 ^2}2}\right)<br />$ In my basic random process classes these examples were considered good enough. So does this new example confirm to the rigorous approach? 9. Originally Posted by Isomorphism Whoops! Perhaps I should have gone with my standard counter $\phi(s_1,s_2) = \mathbb{E}(e^{s_1X+s_2Y}) = (1 + s_1s_2)\text{exp}\left({\frac{s_1 ^2 + s_2 ^2}2}\right)<br />$ Where do you get this example from? If my computations are correct, this still doesn't work, because what you gave seems to be the m.g.f. of the "density" $f(x,y)=\frac{1}{2\pi}(1+xy)\exp\left(-\frac{x^2+y^2}{2}\right)$ on $\mathbb{R}^2$, which is negative sometimes... This illustrates what I said about Bochner's theorem in my last post: not any function, even if it equals 1 at 0 and is "smooth", is a m.g.f.. Being a m.g.f. is a strong condition that can't be checked at first sight. 10. Again, if Y=-X, then W=X+Y is not normal. P(W=0)=1. The negative of a N(0,1) is a N(0,1), so both X and Y are st normals, but their sum is not a normal, not even continuous. 11. However if your question is if "independence" is necessary,then its not. Counterexample: What if X = 3Z+Y and where Z and Y are independent normal? You didnt get my point. I wanted to say X+Y is normal distributed even though X and Y are not independent. Then X=3Z-Y would be a better example However, he talked about independence for "any linear combination" of normal distribution. Not for X+Y in particular. With Gaussian vectors, we were told that $X_1,\dots,X_n$ form a Gaussian vector (meaning that any linear combination of its components follows a normal distribution) if (and only if ?) they're independent. 12. He's going to keep asking this question until the COWS (MOOO) come home. He knows the answer by now. "ANY linear combination of uncorrelated normal random variables has a normal distribution." Is this a correct statement? Why or why not? Thanks! 14. Originally Posted by kingwinner "ANY linear combination of uncorrelated normal random variables has a normal distribution." Is this a correct statement? Why or why not? Thanks! Did you check on our counterexamples first? In mine (post #7), the r.v. $X$ and $Y$ are uncorrelated. 15. I guess the cows aren't home yet.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 20, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9134197235107422, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/89577-indeterminate-forms.html
# Thread: 1. ## indeterminate forms Could someone justify the following statement please. I'm having trouble visualizing what has been said here: If by direct substitution of a given value c into a rational function is made and the following result occurs $r(c)=\frac{p(c)}{q(c)}=\frac{0}{0}$ then it can be concluded that $(x-c)$ is a factor of both $p(x)$ and $q(x)$. Anything to shed light here would be appreciated. Thanks. 2. Originally Posted by VonNemo19 Could someone justify the following statement please. I'm having trouble visualizing what has been said here: If by direct substitution of a given value c into a rational function is made and the following result occurs $r(c)=\frac{p(c)}{q(c)}=\frac{0}{0}$ then it can be concluded that $(x-c)$ is a factor of both $p(x)$ and $q(x)$. Anything to shed light here would be appreciated. Thanks. if x = c is a zero of a function, then x-c is a factor. 3. Originally Posted by VonNemo19 Could someone justify the following statement please. I'm having trouble visualizing what has been said here: If by direct substitution of a given value c into a rational function is made and the following result occurs $r(c)=\frac{p(c)}{q(c)}=\frac{0}{0}$ then it can be concluded that $(x-c)$ is a factor of both $p(x)$ and $q(x)$. Anything to shed light here would be appreciated. Thanks. I am assuming $p(x), q(x)$ are polynomials otherwise this does not make sense i.e $\frac{\sin(\pi)}{\tan(\pi)}=\frac{0}{0}$ but neither have a factor of $x-\pi$ The above is using the fact that If f is a polynomial and $f(c)=0$ then $(x-c)$ divides f(x) example $f(x)=x^2-3x+2$ note that $f(1)=1^2-3(1)+2=0$ so by the above $(x-1)$ divides $x^2-3x+2$ i.e this factors into $f(x)=(x-1)(x-2)$ 4. I can kind of see that, but if you could explain why, I'd really appreciate it. 5. Good stuff Empty dude. And yes of course they are polynomials. I kind of paraphrased what it said in the book. I'm no math writer.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 22, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9677991271018982, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/28070/thermodynamics-related-problems
# Thermodynamics related problems [closed] 1. If the lengths of two bars of different solids are inversely proportional to their respective coefficients of linear expansions at the same initial temperature. How should I mathematically express this? 2.For the effective coefficient of linear expansion expression, I got: $L_1a_1+L_2a_2=a_eL_e$ where e is the effective length and coefficient. The question though, is how should I get an expression for either $L_1 \ or\ L_2$ - Hi Dystopian, and welcome to Physics Stack Exchange! This is actually a site for conceptual questions, not just help doing problems. We expect you to show what you've done on the problem, narrow it down to the specific concept that is giving you trouble, and ask about that - don't just ask for the solution. If you edit your question to reflect that, I'll be happy to reopen it. See our FAQ and homework-like question policy for more information. (Also, we prefer you only ask one question per post.) – David Zaslavsky♦ May 9 '12 at 18:00 3. is very simple - you have to use volumetric temperature expansion of the liquid $\beta$; 2. is unclear; 1. do what the problem says, find the lengths of two bars so that they are inversely proportional to their respective coefficients of linear expansions... – Pygmalion May 9 '12 at 18:00 I suggest you open a new question for the 3rd question asking the principle how thermometer works. 1st and 2nd questions are not appropriate here. – Pygmalion May 9 '12 at 18:08 Sorry, I'll start editing. – Dystopian May 9 '12 at 18:13 Can't answer here. It is closed... – Pygmalion May 10 '12 at 11:43 ## closed as too localized by Qmechanic♦, David Zaslavsky♦May 9 '12 at 17:58 This question is unlikely to help any future visitors; it is only relevant to a small geographic area, a specific moment in time, or an extraordinarily narrow situation that is not generally applicable to the worldwide audience of the internet. For help making this question more broadly applicable, see the FAQ.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9366707801818848, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/53952/maximal-ideals-in-polynomial-rings-over-algebraically-closed-fields-when-weak-n
## Maximal ideals in polynomial rings over algebraically closed fields - when Weak Nullstellensatz does not apply ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Weak nullstellensatz describes maximal ideals in polynomial rings over algebraically closed fields at least when the cardinality number of variables is finite. Lang obtained the same conclusion also when the transcendence degree of the field over its prime field exceeds the number of variables (I don't know if "weak nullstellensatz" officially now includes Lang's extension, but for here let's say it does.) How explicitly can we describe the maximal spectrum of polynomial rings over algebraically closed fields when weak nullstellensatz fails? - 1 @David: I am pretty sure that "weak Nullstellensatz" does not include the extension by Lang of which you speak. In fact, I know the former pretty well (my notes on commutative algebra contain something like five proofs of it), but I think I have never heard of this result of Lang. It certainly sounds nice: could you give a reference? – Pete L. Clark Feb 1 2011 at 6:29 2 Never mind: it was easy enough to find: Lang, Serge Hilbert's Nullstellensatz in infinite-dimensional space. Proc. Amer. Math. Soc. 3, (1952). 407–410. I'll certainly take a look. – Pete L. Clark Feb 1 2011 at 6:35 Pete - I'm happy to have brought Lang's article to your attention! It always interests me how results/proofs do or don't get into the canon. For example, the American Mathematical Monthly has an endless supply of improvements on basic textbook proofs, but most seem ignored as new textbooks copy proofs out of old... – David Feldman Feb 1 2011 at 6:51 1 An explicit description of all maximal ideals seems unlikely to me in the general case. Lang's article gives an example where the residue field is isomorphic to $k(t)$, and this example can probably be adapted to find arbitrary residue fields $K/k$ with $\operatorname{trdeg}(K/k) \leq$ the number of variables. – François Brunault Feb 1 2011 at 14:29 1 @François If I may make a comparison, an explicit description of even a single non-principal ultrafilter on $\Bbb N$ more than seems unlikely - such a description would surely embody a proof of an axiom, BPIT, provably independent of $ZF$ . But that has not prevented the development of a whole literature concerning the structure of $\beta{\Bbb N}$ (including more independence results). So my question which asks for a description of the maximal spectrum need not fall to the difficulty of describing the individual ideals. – David Feldman Feb 2 2011 at 6:04 show 4 more comments
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9123685956001282, "perplexity_flag": "middle"}
http://quant.stackexchange.com/questions/212/solving-path-integral-problem-in-quantitative-finance-using-computer/231
# Solving Path Integral Problem in Quantitative Finance using Computer I've asked this question here at Physics SE, but I figured that some parts would be more appropriate to ask here. So I'm rephrasing the question again. We know that for option value calculation, path integral is one way to solve it. But the solution I get from the Black-Scholes formula (derived from the above question): $$\begin{array}{rcl}\mathbb{E}\left[ F(e^{x_T})|x(t)=x \right] & = & \int_{-\infty}^{+\infty} F(e^{x_T}) p(x_T|x(t)=x) dx_T \\ & = & \int_{-\infty}^{+\infty} F(e^{x_T}) \int_{\tilde{x}(t)=x}^{\tilde{x}(T)=x_T} p(x_T|\tilde{x}(\tilde{t})) p(\tilde{x}(\tilde{t})|x(t)=x) d\tilde{x}(\tilde{t}) dx_T \end{array}$$ is very cryptic and simply unusable on a computer. My question is, how can we program this solution? Or more generally, how can we devise computer algorithms to solve path integral problem in quantitative finance? - Not to disregard this question as unrelated, but I'd suggest that you could find answer on how to code this on stackoverflow.com. Just a suggestion. – S_H Feb 7 '11 at 17:17 1 @Harpreet, not to sure whether it's suitable for SO. The current form of solution of path integral, as it stands, is not codable on a computer. – Graviton Feb 8 '11 at 0:30 ## 3 Answers There are many numerical approaches to solving stochastic integrals such as the above. Assuming that there is no closed form slight-of-hand, the easiest approach is the Monte Carlo approach. I would recommend referring to Glasserman's excellent "Monte Carlo Methods in Financial Engineering" If you are not familiar with MC, think of it as evaluating millions of possible paths in N dimensional space (the space of your random variable x time) and computing the expectation from a probability weighted average. Making MC work for you involves: • modeling your distribution accurately • being able to randomly sample your distribution over the simulation in such as way as to have uniformly sampled on its cumulative probability function • having a good random N dimensional number generator with period > total # of samples • various tricks to reduce the required sample space - You can use Monte Carlo methods to generate paths. - It seems to me that you are making the problem more complicated than it is in fact. What is the process $X_t$ and what is the motivation to find this expectation as a path integral? If you would like to find the value of integral on the trajectory of the diffusion process I think it is undefined. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.948081910610199, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/31457/how-to-define-a-field?answertab=oldest
# How to define a field? [duplicate] Possible Duplicate: What is a field, really? What are electromagnetic fields made of? What is a field ? What is magnetic field or other fields made of or what it is, How do u define it (To my knowledge field is thought to exist without a proof and theories are built over it) ? - 3 Possible overlap/duplicate: physics.stackexchange.com/questions/30517/… and physics.stackexchange.com/questions/13157/… – DJBunk Jul 6 '12 at 12:27 Thanks, The answers in the post tell me that a thing could be defined only if its not fundamental but field is a fundamental so it can't de defined – Abhay Kumar Jul 6 '12 at 12:31 A field isn't a fundamental object from a mathematical viewpoint. You can define riggidly how fields work/look like – Michael Jul 6 '12 at 12:42 I didn't get you Micheal – Abhay Kumar Jul 6 '12 at 12:45 1 Perhaps you mean something different when talking about fields. But in QFT, a field itself is just some function of space-time (see my answer below). The really interesting part is, how these fields transform. Or more precisely, what type of elements are assigned to each point in space. This is how we define the particle associated to that field – Michael Jul 6 '12 at 12:50 show 3 more comments ## marked as duplicate by Qmechanic♦, David Zaslavsky♦Jul 6 '12 at 17:21 This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question. ## 2 Answers Mathematical Answer: A field is simple a function of space(-time) that assigns some value, vector or pretty much anything, to a certain point in space(-time). Nothing fancy really. A normal function like $f(x) = x^2$ can be viewed as a field that assigns a real value to the space $\mathbb R$. We call the field by the type of element it assigns. So a Vector field assigns a Vector to each point in space. You can define your fields over whatever elements you wish. Heck, you could create an apple field, by assigning a certain number of apples to each point in space. Physical Answer: It's pretty much the same as the mathematical answer. The only difference lies in how you interpret these fields. So a Vector field, that assigns a vector to each point in space can be viewed as a magnetic field. For example, think of a ball (the earth). Now, think of a vector field on this ball, i.e. some function that assigns a vector to each point. We say this vector field is smooth, if the vectors of two nearby points only differ by some small $\epsilon$-vector. Think of the vectors as hairs on the ball. Smoothness then just means that the hair looks tidy and two neighbouring hairs have almost the same direction. You can reconstruct the earth magnetic field by choosing an appropriate function. Interestingly, if you do this, we have a mathematical theorem that says, that such a vector field must essentially have (at least) 2 poles (or a pole of multiplicity 2). In our case, the North and South pole. Or in short "You can't comb the hair on a coconut". This theorem is called the Hairy ball theorem. - Thanks, I get your point. Can you tell me why does a field exist. What makes magnet have magnetic field around it ? – Abhay Kumar Jul 6 '12 at 12:47 – Michael Jul 6 '12 at 13:00 I am starting with your argument: To my knowledge field is thought to exist without a proof and theories are built over it. If you've problem with this mathematical modelling of physical problem, ask yourself, "Do you know what Energy is?" Man, its all that number to describe how things happen in the world around us. The field is actually distribution of physical quantity on each point of spacetime (you can say only Space in classical sense, but it'd pose major problems while describing today's complex fields). But, why field when we can measure physical quantities on any point without it? Well, it simplifies things with graphical model. With one field equation (or, family of equations), you can tell the physical reality of spacetime in your problem domain. Its can be seen as attribute of spacetime. With it, you can tell anything related to physical quantity at once in entire region of Spacetime. Quantum Field Here it gets real value. Unlike classical field, quantum fields aren't continuous. Its quantized with discrete values in spacetime. To explain it, quantum of fields were introduced. They are basically boson particles also known as messenger particles or force carriers. In an electromagnetic field, electromagnetic force is exchanged by photon bosons which are quantum (or, messenger particles; or, force carriers) of electromagnetic field. Similarly, Graviton Bosons are quantum of gravitational field. And, popular Higgs Bosons are quantum of Higgs field. There's always gap between them so that the field isn't continuous. Higher field intensity means higher density of force carriers (Think, why not the model became easier)! - 1 Quantum fields are not "quantized with discrete values in spacetime". Please don't just make up random stuff when you feel like answering a question. – user1504 Jul 6 '12 at 13:17 @user1504 Tell me where am I wrong? Can you find a physical quantity continuously over a quantum field? – Sachin Shekhar Jul 6 '12 at 13:24 @Sachin: Yes, you can. It is highly unusual for a quantum field to take only discrete values. The spectrum of a field operator is continuous in all of the standard examples. – user1504 Jul 6 '12 at 14:43 @user1504 So.. is that the only amplitude which is quantum of field? – Sachin Shekhar Jul 6 '12 at 15:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.933012843132019, "perplexity_flag": "middle"}
http://cs.stackexchange.com/questions/6058/how-can-lazy-learning-systems-simultaneously-solve-multiple-problems
# How can lazy learning systems simultaneously solve multiple problems? On the english Wikipedia it says about lazy learning systems: Because the target function is approximated locally for each query to the system, lazy learning systems can simultaneously solve multiple problems […] What does this mean? I can only guess what "approximated locally" is supposed to say but even then I have no idea how one is supposed to follow from the other. - ## 3 Answers Let's see: Because the target function is approximated locally for each query to the system, If i am going to predict the class (for example) of a brand new instance i compare it with the most similar in the train dataset. Imagine a 2D graphic with lots of points and i put the new point, so i compare with the other points locally, and the class of the nearest to it will help me say what is its class. lazy learning systems can simultaneously solve multiple problems and deal successfully with changes in the problem domain. Maybe, these problems are the ones created by the eager methods that try to simulate the hyperplanes (boudaries) in a function, in logic predicates, etc between the classes and these hyperplanes make lots of mistakes. And simple local comparisons with the similarity with the train dataset could do a better work. - I fail to see how that relates to the simultaneously-part. Could you clarify? – Shedeki Sep 12 '12 at 21:24 The simultaneously is just to say that the lay learning methods are powerful and can solve lots of problems, that i assume to be the ones created by the eager, (see answer) and the changes in the problem domain occur when the data changes, new data is added, and so the lazy learning deals with this changes, while the eager don't (because the models are already created). But, there are better concepts to lazy learning, because it is a research area‌​. – Augusto Sep 13 '12 at 15:54 ## "approximated locally" Eager learning methods use all available training examples to build a classifier in advance that is later used for classification of all query instances. Lazy learning, or instance based learning, is a learning method that delays the building of the classifier until a query is made to the system. The algorithm is trying to find similar examples from the training data to the query and uses them to build hypothesis for the classification. Examples that are used are localized near the query by some similarity. For example if we have points in the plane that are classified with (+) sign and (-) sign, the eager learning will build single rule about how to classify any new point. In contrast, the lazy learning will aproximate only the nearest points signs to predict what will be the sign of the new point. ## "simultaneously solve" I guess that here the author means the online learning. This means that each new query is added to the traing data after its value is known. Because of this, the eager learning must update its hypothesis after each new query and should process each query one at a time. In contrast, the lazy learning can take many simultanious queries (if they are not locally close) because it uses only the examples, locally close to it. - After pondering the question for a while longer and talking to a professor of mine, I was able to gain enough insight to say that it is actually not that difficult to answer. Still, I will consider an example: Given a number of points on plane which, in addition to a position, also have further characteristics, e.g. a color. If the problem is to estimate the color of a new, proposed point at a given position, it can be solved by looking at its $k$-nearest neighbors and determined through a majority vote, i.e. if most of its neighbors are red, assume that the new point will be red as well. If another characteristic is, say, size, this can be determined in the same way without the need to run the $k$-nearest neighbors algorithm again, thereby solving two problems simultaneously. An interesting side-note: By searching the internet for the quote that I have taken from Wikipedia, one may turn up at least one book and one bachelor's thesis that use the exact same wording – apparently without citing any references or giving further explanations that might indicate any understanding on the respective author's side. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9489800930023193, "perplexity_flag": "middle"}
http://www.reference.com/browse/Cubic_reciprocity
Definitions # Cubic reciprocity In mathematics, cubic reciprocity is any of various results connecting the solvability of two related cubic equations in modular arithmetic. ## Algebraic setting The law of cubic reciprocity is most naturally expressed in terms of the Eisenstein integers, that is, the ring E of complex numbers of the form $z = a + b,omega$ where and a and b are integers and $omega = frac\left\{1\right\}\left\{2\right\}\left(-1 + isqrt 3\right) = e^\left\{2pi i/3\right\}$ is a complex cube root of unity. If $pi$ is a prime element of E of norm P and $alpha$ is an element coprime to $pi$, we define the cubic residue symbol $left\left(frac\left\{alpha\right\}\left\{pi\right\}right\right)_3$ to be the cube root of unity (power of $omega$) satisfying $left\left(frac\left\{alpha\right\}\left\{pi\right\}right\right)_3 equiv alpha^\left\{\left(P-1\right)/3\right\} mod pi.$ We further define a primary prime to be one which is congruent to -1 modulo 3, still in the ring E; since any prime will still be prime when multiplied by a unit of the ring E, a sixth root of unity, this is not a deep restriction. Then for distinct primary primes $pi$ and $theta$ the law of cubic reciprocity is simply $left\left(frac\left\{pi\right\}\left\{theta\right\}right\right)_3 = left\left(frac\left\{theta\right\}\left\{pi\right\}right\right)_3$ with the supplementary laws for the units and for the prime $1-omega$ of norm 3 that if $pi = -1 + 3\left(m+nomega\right)$ then $left\left(frac\left\{omega\right\}\left\{pi\right\}right\right)_3 = omega^\left\{m+n\right\}$ $left\left(frac\left\{1-omega\right\}\left\{pi\right\}right\right)_3 = omega^\left\{2m\right\}.$ Since $left\left(frac\left\{thetaphi\right\}\left\{pi\right\}right\right)_3 = left\left(frac\left\{theta\right\}\left\{pi\right\}right\right)_3 left\left(frac\left\{phi\right\}\left\{pi\right\}right\right)_3$ the cubic residue of any number can be found once it is factored into primes and units. ## Note on the definition of "primary" The definition here of primary is the traditional one, going back to the original papers of Ferdinand Eisenstein. The presence of the minus sign is not easily compatible with modern definitions, for example in discussing the conductor of a Hecke character. But if so desired, it is straightforward to move the minus sign elsewhere, as −1 is a cube, in fact, the cube of −1. ## References • David A. Cox, Primes of the form $x^2+ny^2$, Wiley, 1989, ISBN 0-471-50654-0. • K. Ireland and M. Rosen, A classical introduction to modern number theory, 2nd ed, Graduate Texts in Mathematics 84, Springer-Verlag, 1990. • Franz Lemmermeyer, Reciprocity laws: From Euler to Eisenstein, Springer Verlag, 2000, ISBN 3-540-66957-4. ## External links Last updated on Thursday October 09, 2008 at 13:12:49 PDT (GMT -0700) Search another word or see Cubic_reciprocityon Dictionary | Thesaurus |Spanish
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 17, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8593876361846924, "perplexity_flag": "head"}