Columns: url (string, length 17-172), text (string, length 44-1.14M), metadata (string, length 820-832)
http://en.wikipedia.org/wiki/Modus_ponens
# Modus ponens

In propositional logic, modus ponendo ponens (Latin for "the way that affirms by affirming"; often abbreviated to MP or modus ponens[1][2][3][4]) or implication elimination is a valid, simple argument form and rule of inference.[5] It can be summarized as "P implies Q; P is asserted to be true, so therefore Q must be true." The history of modus ponens goes back to antiquity.[6]

While modus ponens is one of the most commonly used concepts in logic, it must not be mistaken for a logical law; rather, it is one of the accepted mechanisms for the construction of deductive proofs that includes the "rule of definition" and the "rule of substitution".[7] Modus ponens allows one to eliminate a conditional statement from a logical proof or argument (the antecedents) and thereby not carry these antecedents forward in an ever-lengthening string of symbols; for this reason modus ponens is sometimes called the rule of detachment.[8] Enderton, for example, observes that "modus ponens can produce shorter formulas from longer ones",[9] and Russell observes that "the process of the inference cannot be reduced to symbols. Its sole record is the occurrence of ⊦q [the consequent] . . . an inference is the dropping of a true premise; it is the dissolution of an implication".[10] A justification for the "trust in inference is the belief that if the two former assertions [the antecedents] are not in error, the final assertion [the consequent] is not in error".[11]

In other words: if one statement or proposition implies a second one, and the first statement or proposition is true, then the second one is also true. If P implies Q and P is true, then Q is true.[12] An example is:

If it's raining, I'll meet you at the movie theater.
It's raining.
Therefore, I'll meet you at the movie theater.

Modus ponens can be stated formally as:

$\frac{P \to Q,\; P}{\therefore Q}$

where the rule is that whenever an instance of "P → Q" and "P" appear by themselves on lines of a logical proof, Q can validly be placed on a subsequent line; furthermore, the premise P and the implication "dissolve", their only trace being the symbol Q that is retained for use later, e.g. in a more complex deduction.

It is closely related to another valid form of argument, modus tollens. Both have apparently similar but invalid forms such as affirming the consequent, denying the antecedent, and evidence of absence. Constructive dilemma is the disjunctive version of modus ponens. Hypothetical syllogism is closely related to modus ponens and sometimes thought of as "double modus ponens."

## Formal notation

The modus ponens rule may be written in sequent notation:

$P \to Q,\; P\;\; \vdash\;\; Q$

where ⊦ is a metalogical symbol meaning that Q is a syntactic consequence of P → Q and P in some logical system; or as the statement of a truth-functional tautology or theorem of propositional logic:

$((P \to Q) \land P) \to Q$

where P and Q are propositions expressed in some logical system.

## Explanation

The argument form has two premises (hypotheses). The first premise is the "if–then" or conditional claim, namely that P implies Q. The second premise is that P, the antecedent of the conditional claim, is true. From these two premises it can be logically concluded that Q, the consequent of the conditional claim, must be true as well. In artificial intelligence, modus ponens is often called forward chaining.

An example of an argument that fits the form modus ponens:

If today is Tuesday, then John will go to work.
Today is Tuesday.
Therefore, John will go to work.

This argument is valid, but validity has no bearing on whether any of the statements in the argument are actually true; for modus ponens to yield a sound argument, the premises must also be true. An argument can be valid but nonetheless unsound if one or more premises are false; if an argument is valid and all the premises are true, then the argument is sound. For example, John might be going to work on Wednesday. In this case, the reasoning for John's going to work (because it is Wednesday) is unsound. The argument is only sound on Tuesdays (when John goes to work), but it is valid on every day of the week. A propositional argument using modus ponens is said to be deductive.

In single-conclusion sequent calculi, modus ponens is the Cut rule. The cut-elimination theorem for a calculus says that every proof involving Cut can be transformed (generally, by a constructive method) into a proof without Cut, and hence that Cut is admissible.

The Curry–Howard correspondence between proofs and programs relates modus ponens to function application: if f is a function of type P → Q and x is of type P, then f x is of type Q.

## Justification via truth table

The validity of modus ponens in classical two-valued logic can be clearly demonstrated by use of a truth table.

p   q   p → q
T   T   T
T   F   F
F   T   T
F   F   T

In instances of modus ponens we assume as premises that p → q is true and p is true. Only one line of the truth table (the first) satisfies these two conditions (p and p → q). On this line, q is also true. Therefore, whenever p → q is true and p is true, q must also be true.

## References

1. Stone, Jon R. (1996). Latin for the Illiterati: Exorcizing the Ghosts of a Dead Language. London, UK: Routledge. p. 60.
2. Copi and Cohen
3. Hurley
4. Moore and Parker
5. Enderton 2001:110
6. Susanne Bobzien (2002). "The Development of Modus Ponens in Antiquity", Phronesis 47.
7. Alfred Tarski 1946:47. Also Enderton 2001:110ff.
8. Tarski 1946:47
9. Enderton 2001:111
10. Whitehead and Russell 1927:9
11. Whitehead and Russell 1927:9
12. Jago, Mark (2007). Formal Logic. Humanities-Ebooks LLP. ISBN 978-1-84760-041-7.

## Sources

• Alfred Tarski 1946. Introduction to Logic and to the Methodology of the Deductive Sciences, 2nd Edition, reprinted by Dover Publications, Mineola NY. ISBN 0-486-28462-X (pbk).
• Alfred North Whitehead and Bertrand Russell 1927. Principia Mathematica to *56 (Second Edition), paperback edition 1962, Cambridge at the University Press, London UK. No ISBN, no LCCCN.
• Herbert B. Enderton, 2001. A Mathematical Introduction to Logic, Second Edition, Harcourt Academic Press, Burlington MA. ISBN 978-0-12-238452-3.
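Returning to the "Justification via truth table" section: the same check can be done mechanically. The following Python sketch (an illustration added here, not drawn from the sources above) enumerates all four assignments of P and Q, verifies that ((P → Q) ∧ P) → Q is a tautology, and confirms that the only row on which both premises hold also has Q true.

```python
from itertools import product

def implies(p, q):
    # Material conditional: false only when p is true and q is false.
    return (not p) or q

rows = list(product([True, False], repeat=2))

# ((P -> Q) and P) -> Q is true on every row, i.e. it is a tautology.
assert all(implies(implies(p, q) and p, q) for p, q in rows)

# The only row where both premises hold is P = True, Q = True, and there Q holds.
premise_rows = [(p, q) for p, q in rows if implies(p, q) and p]
print(premise_rows)  # [(True, True)]
```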
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8634839057922363, "perplexity_flag": "middle"}
http://mathhelpforum.com/number-theory/48655-question-about-proof-prime-number-theorem-print.html
# A question about proof of prime number theorem

• September 11th 2008, 05:20 AM
shawsend
A question about proof of prime number theorem
The following are the basic steps in one proof of the prime number theorem (many steps are left out):
$\pi(x)\sim \frac{x}{\ln(x)}\quad\text{iff}\quad \psi(x)\sim x$
Now,
$\psi_0(x)=-\frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}\frac{\zeta'(s)}{\zeta(s)}\frac{x^s}{s}\,ds$
where:
$\psi_0(x)=\left\{\begin{array}{ccc} \psi(x) & \text{for} & x\ne p^m \\ \psi(x)-1/2\ln(p) & \text{for} & x=p^m \end{array}\right.$
That is, $\psi_0(x)$ differs from $\psi(x)$ only when x is a prime power, the difference being $1/2\ln(p)$. Now, via residue integration:
$\psi_0(x)=x-\sum_{\rho}\frac{x^{\rho}}{\rho}-\ln(2\pi)-1/2\ln\left(1-1/x^2\right)$
where the sum is over all the non-trivial zeros of the zeta function. Dividing through by x and letting x tend to infinity:
$\frac{\psi(x)}{x}\to 1-\lim_{x\to\infty}\frac{1}{x}\sum_{\rho}\frac{x^{\rho}}{\rho}$
I think it can be shown that:
$\sum_{\rho}\frac{x^{\rho}}{\rho}=\textbf{O}(\sqrt{x})$
and therefore:
$\frac{\psi(x)}{x}\sim 1$
and thus $\pi(x)\sim\frac{x}{\ln(x)}$
I'm not sure about the order of the sum. Can someone confirm this or explain further how this sum is bounded?
• September 12th 2008, 09:04 AM
wisterville
Hello,
I am not an expert, this is what I found in books (mainly "The theory of the Riemann zeta-function" by S. J. Patterson). The explicit formula for $\psi_0(x)$ is due to von Mangoldt. If we let $S(x,T)=\sum_\rho \frac{x^{\rho}}{\rho}$ where $\rho$ runs over the zeros of the zeta function with $|\mathrm{Im}(\rho)|<T$, then $|x^{\rho}|\le x$, $1/\rho=O(1/T)$, and there are $O(\log T)$ such zeros. Thus $S(x,T)=O((x\log T)/T)$. I don't know where you got $\lim_{T\to\infty}S(x,T)=O(\sqrt{x})$.
Bye.
• September 12th 2008, 10:15 AM
shawsend
Ok. I was wrong (I thought it might be $\sqrt{x}$). Thanks a bunch. I'll try to find that book. I got a question about the number of zeros: I thought the number of roots between $0$ and $T$ is approximately: $\frac{T}{2\pi}\ln\left(\frac{T}{2\pi}\right)-\frac{T}{2\pi}$. Can someone explain to me how that's $\textbf{O}(\ln(T))$?
• September 13th 2008, 09:07 AM
wisterville
Hello,
Quote: Originally Posted by shawsend
I got a question about the number of zeros: I thought the number of roots between $0$ and $T$ is approximately: $\frac{T}{2\pi}\ln\left(\frac{T}{2\pi}\right)-\frac{T}{2\pi}$. Can someone explain to me how that's $\textbf{O}(\ln(T))$?
Sorry, I was wrong. Forget my first post. I hope someone wiser might help.
Bye.
• September 15th 2008, 04:38 AM
shawsend
Hey guys, Wikipedia under Chebyshev function gives:
$\sum_{\rho}\frac{x^{\rho}}{\rho}=\textbf{O}(\sqrt{x}\ln^2 x)$
When this is substituted into the expression for $\frac{\psi(x)}{x}$, I get:
$\lim_{x\to\infty}\frac{\textbf{O}(\sqrt{x}\ln^2 x)}{x}\to 0$
which is what one expects. Would be interesting to show how this order is determined. I'll try.
• September 16th 2008, 06:14 AM
wisterville
Hello,
Quote: Originally Posted by shawsend
Hey guys, Wikipedia under Chebyshev function gives: $\sum_{\rho}\frac{x^{\rho}}{\rho}=\textbf{O}(\sqrt{x}\ln^2 x)$
The Wikipedia says that you can prove this estimate "if the Riemann Hypothesis is TRUE." (In fact, the estimate is equivalent to RH.) Prove it, and you get a prize!
Bye.
• September 16th 2008, 10:14 AM
shawsend
Hey Wisterville.
I think the two are separate and Wikipedia is alluding to the fact that the sum would not be of this order if other zeros outside the critical line were included. I believe the sum can be considered completely independently of the Riemann Hypothesis like this: What is the order of the sum $\sum_{\rho}\frac{x^{\rho}}{\rho}$ assuming $\rho=1/2+it$ and the density of the set $\{\rho_n\}$ in the range $(0,T)$ is of order $\frac{T}{2\pi}\ln\frac{T}{2\pi}-\frac{T}{2\pi}$.
Note: The sum is taken symmetrically over the zeros:
$\sum_{\rho}\frac{x^{\rho}}{\rho}=\lim_{T\to\infty}\sum_{|t|\leq T}\frac{x^{\rho}}{\rho};\quad \rho=1/2+it$
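As a rough numerical illustration of the asymptotics in this thread (a sketch added here, not taken from the references mentioned above), one can tabulate $\pi(x)$ and the Chebyshev function $\psi(x)$ with a plain sieve and watch $\pi(x)\ln(x)/x$ and $\psi(x)/x$ drift slowly toward 1:

```python
import math

def sieve(n):
    """Primality flags for 0..n via the sieve of Eratosthenes."""
    flags = [True] * (n + 1)
    flags[0] = flags[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if flags[p]:
            flags[p * p :: p] = [False] * len(flags[p * p :: p])
    return flags

N = 10 ** 6
is_prime = sieve(N)

def pi(x):
    """Prime-counting function pi(x)."""
    return sum(is_prime[: x + 1])

def psi(x):
    """Chebyshev psi(x): sum of log p over all prime powers p^k <= x."""
    total = 0.0
    for p in range(2, x + 1):
        if is_prime[p]:
            pk = p
            while pk <= x:
                total += math.log(p)
                pk *= p
    return total

for x in (10 ** 4, 10 ** 5, 10 ** 6):
    print(x, pi(x) * math.log(x) / x, psi(x) / x)
# Both ratios approach 1, but slowly, consistent with the error terms discussed above.
```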
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 34, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.940983772277832, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/45446/almost-complex-structures-in-floer-theory
## Almost complex structures in Floer theory

When defining the Floer cohomology $HF(L_0,L_1)$ of 2 Lagrangians in a symplectic manifold $(M,\omega)$, one first has to choose some extra data such as a 1-parameter family of almost complex structures $(J_t)$. Usually one requires that $J_t$ be compatible with $\omega$, i.e. that $g(u,v)=\omega(u,J_tv)$ defines a Riemannian metric. However there is also the related notion of a tame $J$, one such that $\omega(u,Ju)>0$ for all nonzero $u$. My question is: What goes wrong if we try to use tame but not necessarily compatible $J_t$ to define $HF(L_0,L_1)$? -

Tameness is somewhat an open condition which is easier to handle. On the other hand, $J$-holomorphic curves for compatible $J$ are minimal surfaces, which are nicer. But anyway, as Michael said in the following answer, as far as I know, there's not much difference between the two choices. – Guangbo Xu Nov 9 2010 at 21:56

## 1 Answer

As far as I know, nothing goes wrong if one uses tame instead of compatible almost complex structures (if anyone knows better please correct me). Either one of these conditions implies that the area of a holomorphic curve is bounded in terms of the integral of the symplectic form on it. This is what one needs to get Gromov compactness and to set up the Novikov ring. I don't think tameness or compatibility is needed elsewhere in the theory. -

2 The linear analysis underlying a Floer theory typically involves (in the $L^2$ version) operators of the form $(d/ds)+A$ acting on maps $\mathbb{R}\to H$ for some Hilbert space $H$. Here $A$ should be a densely-defined symmetric operator on $H$. In Lagrangian Floer theory, this formulation arises when the inner product on $H$ is derived from the metric associated with a compatible $J$. In the tame setting one would presumably have to proceed differently. I'm not sure whether anyone has done this. – Tim Perutz Nov 9 2010 at 22:58

1 Tim, what you say is correct, but can easily be worked around. For the linear analysis, you only need A to be asymptotically symmetric at the punctures, and it doesn't need to be with respect to the metric given by $\omega(\cdot, J\cdot)$. I can't find a reference right now, but I am certain I have seen the details worked out at least once in the tame case, essentially by deforming the metric near the intersections of the Lagrangians. (If you are willing to have tame except compatible in a neighbourhood of the Lagrangian intersection, then there is no need to do any work at all.) – Sam Lisi Nov 15 2010 at 15:03
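To spell out the area bound mentioned in the answer, here is a short sketch of my own (the standard symmetrization trick; sign conventions may differ from any particular reference): a tame $J$ still produces a metric, and the energy of a Floer strip is still computed by the symplectic area.

```latex
% Sketch: the energy identity needs only tameness.
% A tame J gives a Riemannian metric after symmetrizing:
\[
  g_J(v,w) = \tfrac{1}{2}\bigl(\omega(v,Jw) + \omega(w,Jv)\bigr),
  \qquad g_J(v,v) = \omega(v,Jv) > 0 \quad \text{for } v \neq 0 .
\]
% If u(s,t) solves the Floer equation \partial_s u + J_t \partial_t u = 0,
% then \partial_t u = J_t \partial_s u, and pointwise
\[
  u^*\omega(\partial_s,\partial_t)
    = \omega(\partial_s u,\, J_t\,\partial_s u)
    = \lvert \partial_s u \rvert_{g_{J_t}}^{2}
    = \tfrac{1}{2}\bigl(\lvert \partial_s u\rvert^{2} + \lvert \partial_t u\rvert^{2}\bigr)_{g_{J_t}} ,
\]
% so the energy of the strip equals \int u^*\omega, which is exactly the bound
% needed for Gromov compactness and for setting up the Novikov ring.
```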
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9466386437416077, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/263416/example-of-stationary-2d-gaussian-process-with-non-symmetric-auto-covariance-fun?answertab=oldest
# Example of stationary 2D Gaussian process with non-symmetric auto-covariance function

Let $(X_t, Y_t)$ be a stationary 2D Gaussian process, assumed centered so that $\mathbb{E}\left(X_t\right) = \mathbb{E}(Y_t) = 0$. I am looking for an explicit example of a valid auto-covariance matrix, i.e.: $$R(h) = \begin{pmatrix} \mathbb{E}\left(X_t X_{t+h}\right) & \mathbb{E}\left(X_t Y_{t+h}\right) \\ \mathbb{E}\left(Y_t X_{t+h}\right) & \mathbb{E}\left(Y_t Y_{t+h}\right) \end{pmatrix}$$ such that $R(h)$ is not symmetric for $h\not=0$. Thank you. -

2 I fail to see why this was down-voted... If I am being signaled about a flaw in the question, please speak up. I'll try to improve it. – Sasha Dec 21 '12 at 22:07
Somebody downvoted this question? Wow. – Did Dec 21 '12 at 22:17

## 1 Answer

Consider your favorite stationary Gaussian process $(X_t)_t$ and define $(Y_t)_t$ by $Y_t=X_{t+\ell}$, for some nonzero $\ell$. Then $\mathbb E(X_tY_{t+h})=\mathbb E(X_0X_{\ell+h})$ is the covariance at distance $|\ell+h|$, while $\mathbb E(Y_tX_{t+h})=\mathbb E(X_0X_{h-\ell})$ is the covariance at distance $|\ell-h|$. -

Thanks a lot! Very useful for me. – Sasha Dec 21 '12 at 22:03
This construction leads to a Gaussian process with degenerate covariance of some distinct-positive-time slices of the process, that is covariance of $(X_{t_1}, Y_{t_1}, \ldots, X_{t_n}, Y_{t_n} )$. I guess I need to revise the question. – Sasha Dec 29 '12 at 6:07
I actually found that a vector valued ARMA process fits the bill. – Sasha Dec 29 '12 at 6:24
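To see the asymmetry concretely, here is a small NumPy simulation added as an illustration: it uses an AR(1) process as the stationary Gaussian ingredient together with the answer's construction $Y_t = X_{t+\ell}$. The two empirical cross-covariances at lag $h$ estimate $r(h+\ell)$ and $r(h-\ell)$ and come out visibly different.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stationary Gaussian AR(1): X_t = phi * X_{t-1} + eps_t, started in stationarity.
phi, n, ell, h = 0.8, 200_000, 2, 1
eps = rng.standard_normal(n)
x = np.zeros(n)
x[0] = eps[0] / np.sqrt(1 - phi**2)
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

y = np.roll(x, -ell)            # Y_t = X_{t + ell}
m = n - ell - h                 # drop the wrapped-around tail
x, y = x[:m], y[:m]

def cross_cov(a, b, lag):
    # Empirical E[a_t * b_{t+lag}] for centered series.
    return np.mean(a[: len(a) - lag] * b[lag:])

print("E[X_t Y_{t+h}] ~", cross_cov(x, y, h))   # estimates r(h + ell) = r(3)
print("E[Y_t X_{t+h}] ~", cross_cov(y, x, h))   # estimates r(|h - ell|) = r(1)
```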
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9140896201133728, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/34421/does-the-mass-of-a-batterys-change-when-charged-discharged/34424
# Does the mass of a battery change when charged/discharged?

... and if so, how much? Is it possible to detect it, or is it beyond any measurement? I'd say there are two possible scenarios (depending on the battery type) and both seem interesting:

1. The battery reacts chemically with its environment.
2. The battery doesn't exchange any matter with its environment except electrons.

I suppose there should be some difference at least due to the principle of energy-matter equivalence, but the difference is most likely immeasurable. -

## 1 Answer

Yes, the total mass of a battery increases when the battery is charged and decreases when it is discharged. The difference boils down to Einstein's $E=mc^2$ that follows from his special theory of relativity. Energy is equivalent to mass and $c^2$, the squared speed of light, is the conversion factor.

I would omit scenario I. If the lithium is leaking from a battery, or if any atoms (and it's the nuclei that I am talking about) are moving in or out, the mass of the battery is obviously changing by the mass of these nuclei (or whole atoms). That probably doesn't need an extra explanation. So we will continue with scenario II in which the atoms inside the battery are only rearranged into different configurations or different molecules but the identity and the number of the nuclei inside the battery is constant.

Let me just emphasize that the energy can't be calculated from masses of the electrons. Electrons are not lost when a battery is discharged. If a battery is losing electric energy, it doesn't mean that it's losing the electric charge! They're just moved from one electrode closer to the other and it's just the motion through the wire stretched between the electrodes (and the electric field inside the wires) that powers the electric devices. But the whole battery is always electrically neutral; because it contains a fixed number of protons, it must contain a fixed (the same) number of electrons, too.

Instead, the energy difference really boils down to different electrostatic potential energies of the electrons relative to the nuclei. One could say that when a battery is being discharged, its electrons are moving to places that are closer to the nuclei, perhaps other nuclei, on average, and the modified interaction energy affects the amount of energy=mass stored in the electromagnetic field. (There are also interaction energies of electron pairs and kinetic energies of electrons – $m_e c^2 (1/\sqrt{1-v^2/c^2} - 1)$ – but let me simplify it by the potential energies of protons-electrons which are dominant and have the right sign. Well, it could actually be pedagogical to borrow the electrons' kinetic energies as the source of the mass difference because for them, we immediately see that the relativistic mass is $m_e/\sqrt{1-v^2/c^2}$ which depends on the velocity and the average squared velocity of the electrons depends on how we arrange the molecules i.e. on whether or not the battery is charged.)

Yes, the change of the mass is pretty much negligible and can't be measured by current scales. For example, the Chevrolet Volt has batteries that may store 16 kWh. Multiply it by 1,000 and 3,600 to get the value in Joules; divide it by $10^{17}$ which is (approximately) the squared speed of light and you get the mass difference in kilograms. It's about $$16 \times 1,000 \times 3,600 / 10^{17} = 0.6 \times 10^{-9}$$ That's half a microgram – for this huge Chevrolet Volt battery.
One can't really measure it this precisely because pieces of the battery evaporate, the battery may absorb some dust, humidity etc. The mass difference above is comparable to the mass of a water droplet of diameter 0.1 mm or so. Even the national prototype kilograms http://en.wikipedia.org/wiki/International_Prototype_Kilogram#Stability_of_the_international_prototype_kilogram have masses that differ from the mass of the international prototype kilogram by dozens of micrograms. From 1900, each of them has changed by a dozen of micrograms. So the unit of "kilogram" isn't even defined "internationally" with the accuracy needed to distinguish the masses of the battery before and after. However, it's plausible that a fancy device could measure the mass difference more directly; the difference of the mass isn't infinitesimal, after all. But when you're touching the electrodes, you must be careful not to scratch them, not even a little bit, and not to allow the paint to evaporate when the battery gets warmer, not even a little bit, and so on. The measurement problem would become much more manageable with a nuclear battery, of course. ;-) If you let some uranium decay by fission, it creates lots of energy (e.g. in Temelín) and the mass $m=E/c^2$ decreases by 0.1 percent or so. If you had a thermonuclear power plant running on hydrogen, the products of the fusion would be about 1% lighter than the hydrogen at the beginning. That would of course be measurable in principle. Nuclear energy is much more concentrated (about 1 million times higher densities in Joules per kilogram: 1 MeV per nucleus i.e. per atom) than the chemical energy (and batteries run on chemical energies: about 1 eV per atom) so the relative change of the mass would be 1 million times more significant, too. A hypothetical (science-fiction) matter-antimatter fuel producing energy from complete annihilation of matter against antimatter (note that both of them have a positive $m$) to electromagnetic waves (quickly converted to heat etc.) would lower the original mass of the solid material $m=E/c^2$ down to zero i.e. by 100%; the objects that would absorb the heat (or the energy partly converted to more useful forms) would get heavier by the same amount. -
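The arithmetic in the answer is easy to reproduce. This small Python check (added here, using the exact speed of light rather than the rounded $10^{17}$) recovers roughly 0.6 micrograms for a 16 kWh pack:

```python
c = 299_792_458.0            # speed of light, m/s
E = 16 * 3.6e6               # 16 kWh in joules (1 kWh = 3.6e6 J)

dm = E / c**2                # mass equivalent, kg
print(f"energy stored:   {E:.3e} J")
print(f"mass difference: {dm:.3e} kg (~{dm * 1e9:.2f} micrograms)")
# ~5.76e7 J gives ~6.4e-10 kg, i.e. about 0.6 micrograms,
# matching the half-a-microgram estimate in the answer above.
```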
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9443594813346863, "perplexity_flag": "middle"}
http://mathhelpforum.com/algebra/174867-find-function-knowing-three-points-graph.html
# Thread: 1. ## Find the function by knowing three points on the graph A quadratic function goes through the points (2,3) (3,4) (6,-5). Find the function. Please help me with this. 2. Originally Posted by Anna55 A quadratic function goes through the points (2,3) (3,4) (6,-5). Find the function. Please help me with this. $y = ax^2 + bx + c$ So using (2, 3): $3 = 4a + 2b + c$ Do this for all three points and you will have three equations in the unknown variables a, b, and c. -Dan 3. Thank you I understand now.
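The three conditions described in the reply above form a 3-by-3 linear system. A short NumPy sketch (added as an illustration) solves it and checks the resulting quadratic against all three points:

```python
import numpy as np

# y = a*x^2 + b*x + c through (2, 3), (3, 4), (6, -5)
pts = [(2, 3), (3, 4), (6, -5)]
A = np.array([[x**2, x, 1] for x, _ in pts], dtype=float)
rhs = np.array([y for _, y in pts], dtype=float)

a, b, c = np.linalg.solve(A, rhs)
print(a, b, c)              # -1.0  6.0  -5.0, i.e. y = -x^2 + 6x - 5

for x, y in pts:            # verify the fit at each given point
    assert abs(a * x**2 + b * x + c - y) < 1e-9
```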
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8772444725036621, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/tagged/pigeonhole-principle?page=2&sort=newest&pagesize=15
Tagged Questions Questions involving the pigeonhole principle in Combinatorial Analysis. 1answer 201 views Pigeonhole principle and sequences problem Could you please tell me if this is the right approach to tackle this problem.I translated it from Spanish into English, so please excuse the wording and let me know if there's something that is not ... 3answers 134 views Proof using pigeonhole and greatest integer (floor) function. The question is to prove that if m is a positive integer then, $$[mx] = [x] + \left[x+\frac{1}{m}\right] +\left[x+\frac{2}{m}\right] + \cdots + \left[x+\frac{(m-1)}{m}\right]$$ for \$x \in ... 2answers 177 views Pigeonhole Principle Points in a Triangle Suppose we have an equilateral triangle with side length $1$. In this equilateral triangle, we place $8$ points either on the boundary or inside the triangle itself. Then what is the maximum possible ... 10answers 4k views 100 Soldiers riddle One of my friends found this riddle. There are 100 soldiers. 85 lose a left leg, 80 lose a right leg, 75 lose a left arm, 70 lose a right arm. What is the minimum number of soldiers losing all ... 1answer 165 views If any x points are elected out of a unit square, then some two of them are no farther than how many units apart? If 5 points are randomly positioned in a unit square, no two points can be greater than square root of 2 divided by 2 apart; divide up the unit square into four squares, and, based on the pigeonhole ... 1answer 486 views Pigeonhole: Practical Applications in Computer Science Most of the problems I've seen involving the pigeonhole principle have so far seemed fairly artificial. As I'm studying CompSci I'm interested what kind of practical, real world problems in CompSci ... 2answers 294 views Pigeonhole: 12 numbers between 10 to 100 - 2 have a difference divisible by 11 Prove that given 12 numbers between 10 to 100 - 2 have a difference divisible by 11. I didn't understand the answer given in my lecture and thought that as usual I'd probably get a clearer answer ... 1answer 670 views Combinatorics - pigeonhole principle question This is for self-study. This question is from Rosen's "Discrete Mathematics And Its Applications", 6th edition. An arm wrestler is the champion for a period of 75 hours. (Here, by an hour, we mean a ... 2answers 232 views How to recognize a pigeonhole problem? I'm going to split this into 2 questions, the first I think might have an answer, the second may not. First, is there a general way to recognize a pigeonhole problem as such? I mean are there some ... 1answer 32 views Min Number of Values from {1,2,…,9} Such that diff of 2 picked values is 5 This is a question from Shcaum's whose answer I don't understand. Our textbook has 2 pages on the pigeonhole principle and I'm having quite a bit of difficulty with it. Give the set ${1,2,...,9}$ ... 2answers 480 views Subsets with equal sums I have a problem to solve but I am in need of your help. Subjects with equal sums: Prove that for every set $A$ which consists of $10$ double digit natural numbers( numbers among $10, \ldots, 99$), ... 1answer 52 views Possibility of constructing a desirable subset Here is a question.I am quoting it: Question by user Nahum Litvin Let A be a set of 100 natural numbers. prove that there is a set B B⊆A such that the sum of B's elements can be divided by ... 2answers 211 views regarding Pigeonhole principle Let A be a set of 100 natural numbers. 
prove that there is a set B $$B\subseteq A$$ such that the sum of B's elements can be divided by 100 I am stuck for a few days now. Please help! 2answers 152 views Pigeonhole principle problem The problem I'm working on says: A basketball player has been training for 112 hours during 12 days. He has trained an integer number of hours every day. Prove that there was two consecutive days ... 1answer 203 views Pigeonhole principle to prove division Here's a little question that we were shown in class: Let $S = \{1,2,\ldots,200\}$ and let $A \subseteq S$ such that $|A| = 101$. Prove that there are two elements of $A$ such that one is a ... 2answers 436 views Some three consecutive numbers sum to at least $32$ Here's a question we got for homework: We write down all the numbers from $1$ to $20$ in a circle. Prove that there is a sequence of $3$ numbers whose sum is at least $32$. I assume we need the ... 2answers 159 views Question about the Pigeonhole Principle The question is: Let $A = \{1,2,3,4,5,6,7,8\}$. If five integers are selected from $A$, must at least one pair of the integers have a sum of $9$? The book explains the solution by dividing the ... 2answers 143 views Bit strings (pigeonhole principle) Here is how the question is posed: Let $s_1$, $s_2$, $s_3, \ldots, s_{90}$ be 90 bit strings of length nine or less. Prove that there exist two strings $s_i$ and $s_j$ with $i \neq j$ that contain ... 5answers 269 views The Pigeon Hole Principle and the Finite Subgroup Test I am currently reading this document and am stuck on Theorem 3.3 on page 11: Let $H$ be a nonempty finite subset of a group $G$. Then $H$ is a subgroup of $G$ if $H$ is closed under the ... 1answer 188 views $16$ natural numbers from $0$ to $9$, and square numbers: how to use the pigeonhole principle? There are $16$ natural numbers placed next to each other. Each is a number from $0$ to $9$. These are in any order, and you can have as many repeats as you want (e.g. all $16$ numbers can be zero, or ... 19answers 2k views What is your favorite application of the Pigeonhole Principle? The pigeonhole principle states that if $n$ items are put into $m$ "pigeonholes" with $n > m$, then at least one pigeonhole must contain more than one item. I'd like to see your favorite ... 1answer 113 views Pigeonhole Principle used for Finding Numbers I’m doing a review exercise that gives me the list of numbers from 100 to 1000. I need to find the number of different numbers that have a 0. I suppose I could do this with the Pigeonhole principle, ... 2answers 281 views Maximum number of mutually orthogonal latin square pairs (definition provided) An $n\times n$ matrix is defined to be a "latin square" if each row and column is a permutation of the first $n$ natural numbers. Two squares of same order are orthogonal if the $n^2$ pairs ... 2answers 1k views In a group of 6 people either we have 3 mutual friends or 3 mutual enemies. In a room of n people? A group of 6 people each pair is either a friend (acquaintance) or an enemy (stranger). It is to be proven that there are either 3 mutual friends or 3 mutual enemies in this group. I have an ad-hoc ... 4answers 332 views Prove that 2 students live exactly five houses apart if There are 50 houses along one side of a street. A survey shows that 26 of these houses have students living in them. Prove that there are two students who live EXACTLY five houses apart on the street. ... 
4answers 433 views Another pigeonhole principle question Have another question for you today: A course has seven elective topics, and students must complete exactly three of them in order to pass the course. If 200 students passed the course, show that at ... 2answers 156 views using pigeonhole principle for a hand of thirteen cards Say I shuffle and deal a hand of thirteen cards. How can I apply the pigeonhole principle in these cases: The hand has at least four cards in the same suit The hand has at exactly four cards in some ... 2answers 189 views Pigeonhole principle question confusion Now I understand it. I just learnt this principle. I am doing a problem in which there's a box with many red socks, green socks and blue socks. First question was how many minimum socks should I pick ... 2answers 618 views Chess Master Problem From Introductory Combinatorics by Richard Brualdi We have a chess master. He has 11 weeks to prepare for a competition so he decides that he will practice everyday by playing at least 1 game a day. ... 1answer 317 views Pigeonhole principle question Suppose a graph with 12 vertices is colored with exactly 5 colors. By the pigeonhole principle, each color appears on at least two vertices. True or false? The correct answer is false, but I assumed ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 42, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.946187436580658, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/tagged/metals?sort=faq&pagesize=15
# Tagged Questions The metals tag has no wiki summary. 1answer 162 views ### Impurity scattering temperature dependence Is there any temperature dependence of relaxation time in impurity scattering of conducting electrons? It seems to me that there is none. But, some people claim that there is. So if you could ... 8answers 8k views ### Will a hole cut into a metal disk expand or shrink when the disc is heated? Suppose you take a metal disc and cut a small, circular hole in the center. When you heat the whole thing, will the hole's diameter increase or decrease? and why? 1answer 731 views ### What is the penetration length of static electric field into conducting metals? How large is the penetration length for static electric field into good conductors? I have two versions: (1) few atomic spacings $$a\sim n_{e}^{-1/3},$$ and (2) Debye length computed by Fermi ... 3answers 4k views ### In electrostatics, why the electric field inside a conductor is zero? In electromagnetism books, such as Griffiths or the like, when they talk about the properties of conductors in case of electrostatics they say that the electric field inside a conductor is zero. I ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.922777533531189, "perplexity_flag": "middle"}
http://peswiki.com/index.php/Electric_charge
# PowerPedia:Electric charge

Electric charge is a property of matter that determines its electromagnetic interactions. Electrically charged matter is influenced by, and produces, electromagnetic fields. The interaction between a moving charge and the electromagnetic field is the source of the electromagnetic force. In mainstream physics, it is a conserved property of some subatomic particles, and the electromagnetic interaction it gives rise to is one of the four fundamental forces.

## Characteristics

There are only two kinds of electrical charge. One is called positive and the other is called negative. Two objects that have been charged alike repel each other. Two objects that have been oppositely charged attract each other.

### Subatomic "particle"

Electric charge is a characteristic of some subatomic particles, and is quantized when expressed as a multiple of the so-called elementary charge e. Electrons by convention have a charge of −1, while protons have the opposite charge of +1. Quarks have a fractional charge of −1/3 or +2/3. The antiparticle equivalents of these have the opposite charge. There are other charged particles.

In general, same-sign charged particles repel one another, while different-sign charged particles attract. This is expressed quantitatively in Coulomb's law, which states that the magnitude of the repelling force is proportional to the product of the two charges and inversely proportional to the square of the distance between them.

Formally, a measure of charge should be a multiple of the elementary charge e (charge is quantized), but since it is an average, macroscopic quantity, many orders of magnitude larger than a single elementary charge, it can effectively take on any real value. Furthermore, in some contexts it is meaningful to speak of fractions of a charge; e.g. in the charging of a capacitor.

The electric charge of a macroscopic object is the sum of the electric charges of its constituent particles. Often, the net electric charge is zero, since naturally the number of electrons in every atom is equal to the number of the protons, so their charges cancel out. Situations in which the net charge is non-zero are often referred to as static electricity.

Image: the electric field (buphy.bu.edu).

The electric charge can be distributed non-uniformly (e.g., due to an external electric field), and then the material is said to be polarized, and the charge related to the polarization is known as bound charge (while the excess charge brought from outside is called free charge). An ordered motion of charged particles in a particular direction (typically these are the electrons) is known as electric current. The SI unit of electric charge is the coulomb, which represents approximately 6.24 × 10^18 elementary charges (the charge on a single electron or proton). The coulomb is defined as the quantity of charge that has passed through the cross-section of a conductor carrying one ampere within one second. The symbol Q is often used to denote a quantity of electric charge.

### Conservation of charge

The total electric charge of an isolated system remains constant regardless of changes within the system itself. This law is inherent to all processes known to physics and can be derived in a local form from the gauge invariance of the wave function.
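As a small numeric check of the coulomb figure quoted above (an added illustration), dividing one coulomb by the elementary charge does give about 6.24 × 10^18 charges:

```python
elementary_charge = 1.602176634e-19   # elementary charge in coulombs (exact SI value)

charges_per_coulomb = 1.0 / elementary_charge
print(f"{charges_per_coulomb:.3e}")   # ~6.241e+18, the figure cited in the text

# One ampere flowing for one second transports one coulomb through the cross-section,
# i.e. roughly 6.24e18 elementary charges.
```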
Because the time derivative of charge is called electric current, the conservation of charge results in the charge-current continuity equation. More generally, the net change in charge density ρ within a volume of integration V is equal to the area integral over the current density J on the surface of the volume S, which is in turn equal to the net current I:

$- \frac{\partial}{\partial t} \int_V \rho\, dV = \int_S \mathbf{J} \cdot d\mathbf{S} = I$

The charge is a relativistic invariant. This means that any particle that has charge q, no matter how fast it goes, always has charge q. This property has been experimentally verified by showing that the charge of one helium nucleus (two protons and two neutrons bound together in a nucleus and moving around at high speeds) is the same as two deuterium nuclei (one proton and one neutron bound together, but moving much more slowly than they would if they were in a helium nucleus).

## History

As reported by the Ancient Greek philosopher Thales of Miletus around 600 BC, charge (or electricity) could be accumulated by rubbing fur on various substances, such as amber. The Greeks noted that the charged amber buttons could attract light objects such as hair. They also noted that if they rubbed the amber for long enough, they could even get a spark to jump. This property derives from the triboelectric effect.

In 1600 the English scientist William Gilbert returned to the subject in De Magnete, and coined the modern Latin word electricus from ήλεκτρον (elektron), the Greek word for "amber", which soon gave rise to the English words electric and electricity. He was followed in 1660 by Otto von Guericke, who invented what was probably the first electrostatic generator. Other European pioneers were Robert Boyle, who in 1675 stated that electric attraction and repulsion can act across a vacuum; Stephen Gray, who in 1729 classified materials as conductors and insulators; and C. F. Du Fay, who proposed in 1733 [1] that electricity came in two varieties which cancelled each other, and expressed this in terms of a two-fluid theory. When glass was rubbed with silk, Du Fay said that the glass was charged with vitreous electricity, and when amber was rubbed with fur, the amber was said to be charged with resinous electricity.

One of the foremost experts on electricity in the 18th century was Benjamin Franklin, who argued in favor of a one-fluid theory of electricity. Franklin imagined electricity as being a type of invisible fluid present in all matter; for example he believed that it was the glass in a Leyden jar that held the accumulated charge. He posited that rubbing insulating surfaces together caused this fluid to change location, and that a flow of this fluid constitutes an electric current. He also posited that when matter contained too little of the fluid it was "negatively" charged, and when it had an excess it was "positively" charged. Arbitrarily (or for a reason that was not recorded) he identified the term "positive" with vitreous electricity and "negative" with resinous electricity. William Watson arrived at the same explanation at about the same time.

We now know that the Franklin/Watson model was close, but too simple. Matter is actually composed of several kinds of electrically charged particles, the most common being the positively charged proton and the negatively charged electron.
Rather than one possible electric current there are many: a flow of electrons, a flow of electron "holes" which act like positive particles, or, in electrolytic solutions, a flow of both negative and positive particles, called ions, moving in opposite directions. To reduce this complexity, electrical workers still use Franklin's convention and they imagine that electric current (known as conventional current) is a flow of exclusively positive particles. The conventional current simplifies electrical concepts and calculations, but it ignores the fact that within some conductors (electrolytes, semiconductors, and plasma), two or more species of electric charges flow in opposite directions. The flow direction for conventional current is also backwards compared to the actual electron drift taking place during electric currents in metals, the typical conductor of electricity, which is a source of confusion for beginners in electronics.

The discrete nature of electric charge was demonstrated by Robert Millikan in his oil-drop experiment. The electric charge was measured with an electrometer.

## References, sources, and further reading

• Old Man, "Electric Charge On a Planet". sci.physics.relativity, sci.physics, Aug 1 2003.
• Charles François de Cisternay Du Fay, "Two Kinds of Electrical Fluid: Vitreous and Resinous" (1733), sparkmuseum.com.
• Larry Mead, "Does electrical charge distort spacetime?". sci.physics, Dec 3 1998.
• Ron Kurtus, "Electrical Charges", school-for-champions.com, 16 August 2005.
• "Electric charge and Coulomb's law", physics.bu.edu, 7-6-99.
• "How fast does a charge decay?"
• Science Aid: "Electrostatic charge", an easy-to-understand page on electrostatic charge.
• Wikipedia contributors, Wikipedia: The Free Encyclopedia. Wikimedia Foundation.
• Alberto Mesquita Filho, "Electron and electric charge". alt.sci.physics.new-theories, Sep 10 2000.

# See also

• PowerPedia main index
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9485387206077576, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/21121/dyer-lashof-based-spectral-sequence-for-homotopy-classes-of-maps-between-infinite/21305
## Dyer-Lashof based spectral sequence for homotopy classes of maps between infinite loop spaces (spectra).

The homology of an infinite loop space, which represents a spectrum, is an algebra over the Dyer-Lashof algebra (see for example Cohen-Lada-May's Springer volume, or for part of the story the more accessible Luminy notes of Bisson-Joyal). Has anyone used this to construct a spectral sequence converging under some assumptions to $[X,Y]$, the homotopy classes of infinite-loop-maps between $X$ and $Y$, which starts with some kind of derived (Ext/Tor) maps between their homology in the category of algebras over the Dyer-Lashof algebra? Have any calculations been done with such a spectral sequence? -

Note: because of "duality" between the Dyer-Lashof algebra action on the homology of $\Omega^\infty \Sigma^\infty X$ and the Steenrod action on the cohomology of $X$, one might recover something close to the Adams spectral sequence in these cases. – Dev Sinha Apr 12 2010 at 23:03

Does Peter May say something about this in his paper "A general algebraic approach to Steenrod operations" (ams.org/mathscinet-getitem?mr=281196)? I can't find my copy, so I don't remember what spectral sequences he writes down. Probably there are no calculations in that paper, in any event. – Bill Kronholm Apr 13 2010 at 15:14

1 No, not there, but a spectral sequence of the sort requested does appear in "The geometry of iterated loop spaces", pages 155-156. That is probably the first reference to such a spectral sequence, but not the best. I believe Kraines and Lada made some calculational use of such a spectral sequence. – Peter May Jan 2 2012 at 0:07

## 2 Answers

This might not quite be what you're looking for, Dev, but you should check out Paul Goerss and Mike Hopkins' "Multiplicative ring spectra project," on Paul's webpage. They construct such a spectral sequence using Andre-Quillen cohomology in "Moduli spaces of commutative ring spectra," and "Andre-Quillen (co-)homology for simplicial algebras over simplicial operads." A relevant theorem would be 4.3 in the first reference, which gives the spectral sequence. Though this doesn't use Dyer-Lashof operations, they appear in section 6 (especially Prop 6.4) where Goerss and Hopkins give a second spectral sequence which computes the $E_2$ term of the original spectral sequence. The new $E_2$ term is given in terms of an $Ext$ functor in the category of unstable modules over the Dyer-Lashof algebra. They use this machinery to show in section 7 that the space of $E_\infty$ maps between Lubin-Tate spectra is homotopically discrete. If you're looking for computations using these spectral sequences, that's a great place to start. -

I'm not sure if this is what you want, but Haynes Miller constructs a spectral sequence computing the homology of a connective spectrum $E$ from the homology of $E_0$ as a Hopf algebra over the Dyer-Lashof algebra in the 1978 Pacific Journal of Mathematics paper "A spectral sequence for the homology of an infinite delooping." -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9043184518814087, "perplexity_flag": "middle"}
http://mathhelpforum.com/discrete-math/66485-conditional-connective-logic.html
# Thread:

1. ## Conditional Connective in Logic

Hello. Having a bit of trouble understanding the truth table for the statement: $P \to Q$. Could somebody run me through the scenarios and explain how they affect the truth value of $P \to Q$. In particular, the scenario where $P$ is false, but $Q$ is true... how does this imply that $P \to Q$ is true?

So:
$P$ true, $Q$ true: $P \to Q$?
$P$ true, $Q$ false: $P \to Q$?
$P$ false, $Q$ true: $P \to Q$?
$P$ false, $Q$ false: $P \to Q$?

I know the answers are true, false, true, true. But can someone explain the logic behind it?

2. Originally Posted by Mush
Hello. Having a bit of trouble understanding the truth table for the statement: $P \to Q$. Could somebody run me through the scenarios and explain how they affect the truth value of $P \to Q$. In particular, the scenario where $P$ is false, but $Q$ is true... how does this imply that $P \to Q$ is true?
So:
$P$ true, $Q$ true: $P \to Q$?
$P$ true, $Q$ false: $P \to Q$?
$P$ false, $Q$ true: $P \to Q$?
$P$ false, $Q$ false: $P \to Q$?
I know the answers are true, false, true, true. But can someone explain the logic behind it?

The "logic" behind it is that it is DEFINED that way! You are asking a slightly different question, I think - you want to connect that with our "everyday" idea of "if... then...". And the difficulty with that is our usual concept of "if A then B" does NOT assign a truth value in the case that A is false, no matter what B is. You have to be careful to distinguish between "If A then B" and "A if and only if B". "If A then B" only talks about what happens if A is TRUE. It says nothing at all about what happens if A is false. But for symbolic logic purposes, we must have a value in all cases and it is simplest to assign "true". I like to think of it as "innocent until proven guilty".

Suppose a teacher says to his class, "If you get "A" on every test, I will give you an "A" in the course." Okay, you get an "A" on every test and get an "A" in the course (the "T->T" case). His statement was obviously true. On the other hand, suppose you get a "B" on every test and get a B for the course (the "F->F" case). Again, his statement is true. But suppose you got an A on every test but one and a B on that one. If he gives you an "A" was his statement false (the F-> T case)? No, because he never said what would happen if you DIDN'T get an A on every test. The only way you could be SURE his statement was false was if you got an A on every test and did NOT get an A in the course (the "T->F" case).

3. Originally Posted by HallsofIvy
The "logic" behind it is that it is DEFINED that way! You are asking a slightly different question, I think - you want to connect that with our "everyday" idea of "if... then...". And the difficulty with that is our usual concept of "if A then B" does NOT assign a truth value in the case that A is false, no matter what B is. You have to be careful to distinguish between "If A then B" and "A if and only if B". "If A then B" only talks about what happens if A is TRUE. It says nothing at all about what happens if A is false. But for symbolic logic purposes, we must have a value in all cases and it is simplest to assign "true". I like to think of it as "innocent until proven guilty". Suppose a teacher says to his class, "If you get "A" on every test, I will give you an "A" in the course." Okay, you get an "A" on every test and get an "A" in the course (the "T->T" case). His statement was obviously true.
On the other hand, suppose you get a "B" on every test and get a B for the course (the "F->F" case). Again, his statement is true. But suppose you got an A on every test but one and a B on that one. If he gives you an "A" was his statement false (the F-> T case)? No, because he never said what would happen if you DIDN'T get an A on every test. The only way you could be SURE his statement was false was if you got an A on every test and did NOT get an A in the course (the "T->F" case). Indeed. I had the hunch that I was correct in saying that P being false gave no indication of the falsity of the statement P-> Q... but due to the nature of boolean algebra, the statement had to be either true or false. So, to conclude, if the premises neither negate nor confirm the conditional statement, then we assume it is true by default for the purposes of accordance with boolean algebra?
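The grading story above can be tabulated directly. This small Python sketch (an illustration, not from the thread) prints the four rows of the material conditional; the only False row is the T->F case, matching "true, false, true, true" from the first post.

```python
print(" P      Q      P -> Q")
for p in (True, False):
    for q in (True, False):
        # Material conditional: false only in the "T -> F" row.
        print(f"{str(p):6} {str(q):6} {(not p) or q}")
```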
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 34, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9673330187797546, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/65360?sort=newest
## Which maximal closed subgroups of Lie groups are maximal subgroups? Which maximal closed subgroups of Lie groups are maximal subgroups? - 3 What's an example of a maximal closed subgroup of a non-discrete Lie group? – Pete L. Clark May 18 2011 at 19:56 Pete: See arxiv.org/PS_cache/math/pdf/0605/0605784v3.pdf . – David Feldman May 18 2011 at 21:50 For noncompact Lie groups it's probably essential in this question to separate the semisimple (or reductive) ones from the others, since the compact group preprint mentioned here and its many references indicate already the richness of the problem in the compact case where the structure theory of semisimple groups predominates. In general, there is also a need to compare real and complex Lie groups. But at least in the semisimple (or reductive) case, parallel work on algebraic groups should be a helpful guide. Is there any relevant literature on solvable Lie groups? – Jim Humphreys May 18 2011 at 22:14 @David: thanks, that was helpful. I worked through the commutative case in my head and saw that that was bad for maximal subgroups. I should have thought about it more: I do know that every compact subgroup of $\operatorname{GL}_n(\mathbb{R})$ is contained in an orthogonal group... – Pete L. Clark May 19 2011 at 7:50 A related question is mathoverflow.net/questions/60315/… – Alain Valette Aug 8 2011 at 8:17 ## 1 Answer There is a paper by M. Golubitsky, "Primitive actions and maximal subgroups of Lie groups", J. Differ. Geom. 7 (1972), 175-191: from the Introduction: "...there exist maximal Lie subgroups whose Lie algebras are not maximal subalgebras". -
http://naml.us/blog/2009/08
# Notes Geoffrey Irving ## Archive for August, 2009 ### Insurance choice can be bad Sunday, August 30th, 2009 This is a followup to the previous post about health insurance elaborating on the fact that it can be bad to let individuals make choices about their insurance policy. I stated without much detail that “assuming sufficient options and perfect competition, the result of this individual choice would be exactly the same as if the insurers were allowed to use knowledge of $K$.” The “sufficient options” assumption is important (and not necessarily realistic), so more explanation is warranted. Imagine there’s a genetic test that predicts the occurrence of a particular disease with overwhelming probability. Let’s call this disease H (for Huntington’s disease or maybe HIV/AIDS). Further imagine that the disease is treatable, but the treatment is expensive (not true for Huntington’s yet, unfortunately). Say the price of treatment is $c$. If insurers are allowed to administer the genetic test and adjust policy prices accordingly, prices will converge towards being $c$ greater for those with $H$. If you have $H$, you’ll pay the entire cost yourself. This situation is clearly bad, so we’ll ban insurers from knowing about the genetic test. However, individuals still know about the genetic test, and are allowed to make decisions accordingly. Let’s say an insurer provides two insurance policies, identical except that one pays for the treatment for $H$ and one does not. Anyone who knows that they’re $H$-negative will buy the policy that doesn’t treat $H$, and anyone who has $H$ will buy the other. If the insurer is allowed to charge different amounts for the two policies, they will adjust the prices to match the different expectations of cost. This different is $c$. Can we ban insurers from having one policy that covers $H$ and one that doesn’t? Possibly, but it’s hard. First, we have to choose all or none; if one insurer covers $H$ and a different one doesn’t, the non-$H$ people will flock to the second insurer, and the same thing happens. Second, the connection between $H$ and the treatment for $H$ may be far from obvious, and certainly can’t be expected to be known at the time we pass any particular law. For example, Down’s syndrome increases the likelihood of recurrent ear infections, so allowing policies to not cover recurrent ear infections would penalize anyone with Down’s syndrome. This might be harmless by itself, but a thousand similar options could add up quickly. There are probably examples of insurance policy choices which wouldn’t be problematic, but figuring out which these are is an extreme subtle proposition. Moreover, this issue will become rapidly more important as our knowledge about genetic risk factors and relations between different diseases expands. I have no confidence that these nuances can be encoded in any kind of government regulation. Does this mean we can only have one insurance policy for everyone if we want to be fair? Unfortunately to a first approximation, it seems like the answer is yes. I’d love to hear details if anyone knows of a type of policy choice which doesn’t suffer from this problem, though. Tags: choice, insurance Posted in economics, politics | 4 Comments » ### Free market insurance is incompatible with knowledge Sunday, August 30th, 2009 Yes, it's an extreme title, but it's true. The idea of insurance is to average risk over a large group of people. 
If advance information exists about the outcomes of individuals, it's impossible for a fully competitive free market to provide insurance. In particular, free markets cannot provide health insurance. To see this, consider a function $u:S\to R$ which assigns a utility value to each point of a state space $S$. For example, one of the elements of $S$ could be "you will have cancer in 23 years". This outcome is bad, so the corresponding $u\left(s\right)$ would be a large, negative number. We also have a probability distribution $p:S\to R$ over $S$. Without insurance, the expected value of $u$ is $E\left(u\right)={\sum }_{s\in S}p\left(s\right)u\left(s\right)$. With insurance, we can average over a large number of people to change the utility function to be closer to the average. For simplicity, we'll consider only the case of perfect insurance, where the new utility function is exactly the average. In the perfect insurance model, we pay an insurance company $E\left(u\right)+o$, and in return they agree to pay us $-u\left(s\right)$ depending on the particular outcome $s$. $o$ is an extra amount to cover administrative costs, risks due to lack of independence and finite numbers of customers, and profit (in the case of imperfect competition). Assuming no one has any prior knowledge of the state $s$, the only way for different insurers to compete in the perfect insurance model is to reduce overhead. Everyone looks the same, so there's no advantage in charging different amounts to different people. The insurers profit from anyone with $u\left(s\right)>E\left(s\right)$ and lose money from anyone with $u\left(s\right)<E\left(s\right)$, but there's nothing they can do about it if they can't tell the difference in advance. Now assume there's some prior knowledge about the state, say $S=K×U$ where $K$ is known in advance and $U$ is unknown. In the absence of regulation, it becomes possible for an insurance company to charge different amounts based on the different $k\in K$. In particular, it's possible for an insurer to sell policies only to people with a favorable value of $k$, and charge $E\left(u|k\right)>E\left(u\right)$. In a free market, anyone with a favorable value of $k$ will flock to these cheaper policies. Insurers offering policies to those with unfavorable values of $k$ will have to raise rates in order to stay in business, since they will have lost the customers from which they make money. Assuming a sufficient level of competition, the price of all insurance policies will converge on $E\left(u|k\right)+o\left(k\right)$. The result is that we're now insuring only over the uncertainty contained in $U$, not $K$. In the worst case, if $K=S$, $E\left(u|k\right)=u\left(s\right)$ and insurance vanishes completely. Whether this is good or bad policy-wise depends on what $K$ and $U$ look like. For car insurance, $K$ includes whether the driver was considered at fault in accidents in the past, whether they've driven drunk, whether they drive a muscle car or a Honda Civic, etc. Charging different amounts depending on these factors seems fair, since intuitively these factors can be considered the "fault" of the individual. Similarly, charging more for home owners insurance if you live in the path of a hurricane is also (arguably) reasonable. In the case of car insurance, even with these known factors out of a way, the space of uncertainty $U$ is still quite large. It includes the actions of other drivers, random equipment failure, invisible road conditions, etc. 
It is impossible for insurers to predict these factors, which means that private, free market insurance can efficiently insure against them. For health insurance,the space of known factors includes all past medical history and preexisting conditions, public genetic information including gender and race, healthy or unhealthy lifestyle, etc. In many cases, it includes information about the current medical problem, since insurers have significant control over what kind of treatment people can receive once they are diagnosed. Now, we can argue about whether it's fair to blame people for unhealthy lifestyles, but I highly doubt anyone will argue that black men should be held responsible for their higher rates of prostate cancer. If we accept that the space of known factors $K$ is too large, the only way to reduce it is to apply some type of regulation to reduce the effective size of $K$. A fair amount of subtlety is required to make such regulation effective. For example, let's say we ban insurers from discriminating based on race, but still allow them to collect information about healthy lifestyle. It's healthy to play sports, so the insurer might ask whether the person plays basketball. People who play basketball are more likely to be black than those who don't (caveat: I'm just guessing here), and therefore it's quite possible that they have higher risks of prostate cancer. Unless the government is smarter than the insurers (impossible, since the insurers have access to the text of laws), the only reliable way to solve this is to ban knowledge of $K$ entirely. However, banning insurers from using knowledge of $K$ is dangerous unless you also ban customers from using knowledge of $K$. In an extreme case, it would be very bad to allow people to buy insurance policies in response to accidents of unexpected diagnoses. Everyone would wait until they needed medical coverage to buy insurance, and all insurers would rapidly go out of business. In general, if individuals are allowed to use any information prohibited to insurers, and the space of available policies is large enough, sufficiently diligent individuals with favorable $k$ values can use this information to lower their insurance premiums without raising their risk. Insurers will have to raise their premiums in response, which results in an increase in cost for those with unfavorable $k$ values. In fact, assuming sufficient options and perfect competition, the result of this individual choice would be exactly the same as if the insurers were allowed to use knowledge of $K$! Wow. I didn't fully understand that point before writing this post. The conclusion is that if we believe true health insurance is a good thing, and that health insurance means insuring over factors which can be known in advantage, free markets don't work either for insurers or for individuals. We can't allow insurers to base prices on prior knowledge, and we can't even allow individuals to choose which policy they buy based on their knowledge of their own medical history. Hmm. The individual side of this is somewhat unfortunate, but I don't see any way around this argument. Followup: Here are more details about the individual side. Tags: insurance Posted in economics, politics | 7 Comments » ### Duck talk Friday, August 21st, 2009 I gave a short presentation on the ideas behind duck at DESRES today. Here are the slides. Caveat: I made these slides in the two hours before the presentation. 
Posted in Uncategorized | No Comments » ### The Verbosity Difference Tuesday, August 18th, 2009 I like conciseness. Syntactic sugar increases the amount of code I can fit on a single screen, which increases the amount of code I can read without scrolling. Eye saccades are a hell of a lot faster than keyboard scrolling, so not having to scroll is a good thing. However, I recently realized that simple absolute size is actually the wrong metric with which to judge language verbosity, or at least not the most important one. Consider the evolution of a chunk of C++ code. We start with a single idea, and encode it as a single class to encapsulate the structure. We add a class declaration, some constructors and a destructor, perhaps even a private operator= to disallow copying. Fine. After this boilerplate, we add various methods to the class to encode the actual behavior. The class also develops a few fields, because fields let us easily share data between the related methods. Next we have another idea. Conceptually the new idea is distinct from the original one, so we should really make a new class. However, we’ve just gone through all the work of setting up a C++ class, with it’s constructors, destructor, private operator=, access specifiers, etc., and it’d be a shame to have to redo all that effort. Maybe it won’t be so bad if we just add the new idea into the same class… Boom. Now we have two ideas merged into the same class. You can’t pass around one idea without passing around the other. You can’t rewrite one without analyzing dependency chains to make sure the class fields doesn’t overlap between concepts. After a while, we start to forget that the ideas were ever really distinct. That’s right: the language has actually made us stupider. You can’t blame the programmer here. We were only maximizing our local utility. We might be smart, but we’re not omniscient, and we can’t always be bothered to follow style manuals. The problem also can’t be ascribed to the overall verbosity of C++; it’s quite possible that the code would be larger if it was written in C, since C++ class syntax, fields, etc. really can make for smaller (source) code. The problem is that the marginal cost of adding a new class is greater than the marginal cost of extending an existing class. If it was easier to make a new class, we would have done so. But we would also have made a new class if it was harder to add methods to an existing class, because then the trade-off would have been different. In other words, what matters is the difference in verbosity between the “right way” and the “wrong way”, not the absolute level of verbosity. Therefore, the conclusion is that any new abstraction with a large startup cost but a low marginal cost is bad, because people end up merging them in disgusting ways. Examples include interfaces (adding one more method is easier than splitting one interface into two), Haskell type classes (see fail), and monads (once you’ve converted your code into monadic form, making the monad do something else is easy). Similarly, any abstraction which merges two benefits into one language construct is also bad, even if the extra benefits are free. The best example of this is inheritance, which merges the benefits of code reuse and subtyping. If I’m making a new class, and it would be really convenient to be able to call one of them methods in an old class, I may end up inheriting from that class in order to save typing even if subtyping makes no sense. 
By contrast, if I’d been writing the same code in C, that function I really wanted to call would probably just be a function, and I’d just call it. Object oriented programming makes you stupider. Happily, it’s easy to notice when you’re running into one of these language flaws. Most of us have a good sense for what the right way of doing things is. If we set out to write a new piece of code, the right way will generally be the first thing that comes to mind, but then we’ll remember that doing it the right way is hard. We’ve probably trained ourselves not to notice this conflict after years of painful compromise, so all we have to do is untrain ourselves. Tags: abstractions, C++, non-monotonic utility, object oriented programming, verbosity Posted in code, languages, psychology | 2 Comments » ### Marshmallows and Achievement Gaps Sunday, August 16th, 2009 Here are links related to a few interesting studies that came up in a discussion with Ross. I figured I’d post them here so I have somewhere to point other people: #### Marshmallows and Delayed Gratification Walter Mischel did a study where he put children in a room, gave them single marshmallow, and told them that if they held off from eating the marshmallow for a while they would get two marshmallows later. He then left the room and watched via hidden camera to see how long they would hold out. Several years later, he happened to do a follow-up study on the same kids, and discovered that the time they held out was strongly correlated to their grades, whether they went to college, SAT scores, etc. Here are some links: #### Racial and Gender Achievement Gaps Claude Steele and Joshua Aronson did a study where they gave the GRE exam to African Americans and European American students. The two groups performed at roughly the same level. However, if they told the students they were taking an intelligence test, the black students performed significantly worse. There are a lot of variant of this experiment for different kinds of tests or gaps (physical activities, gender, etc.) with similar results. I.e., you can dramatically change test scores by saying or not saying a single particular sentence to the test takers before the test. I agree with Dan Ariely that these studies can and should be interpreted extremely optimistically. If the driving factors behind success or failure are this simple or fragile, we should be able to find easy ways to make huge improvements. Tags: achievement gap, delayed gratification, marshmallows, stereotype threat Posted in psychology | No Comments » ### The Anonymous, Recursive Suggestion Box Sunday, August 16th, 2009 Good discussion with Ross today, resulting in one nice, concrete idea. Consider the problem of suggesting policy improvements to the government. In particular, let’s imagine someone has a specific, detailed policy change related to health care, financial regulation, etc. Presumably, the people who know the most about these industries are (or were) in the industries themselves, so you could argue that they can’t be trusted to propose ideas that aren’t just self-serving. Maybe it’s possible for someone to build a reputation of trustworthiness, but that’s hard and would ideally be unrelated to the actual ideas proposed. Instead of relying on reputation, we’ll remove the issue entirely by making the suggestion box anonymous. Now we have an anonymous suggestion box on a website. People go to it and propose ideas. 
There are a few good ideas, and a vast amount of bad, malicious, and nonsense ideas (including spam). Eliminating the spam is easy (I have a single, completely public email address and get roughly one spam message per day, from which I conclude that the spam issue is solved). In order to eliminate the bad or malicious ideas, we need to be able to judge their correctness in a logical manner. For this, we rely on distributed intelligence: other people are allowed to judge whether each idea is good or bad. To get loaded words out of the picture, let's replace "good" and "bad" with the words "true" and "false". "Ideas" become propositions of the form "Implementing this idea would be good" (yes, "good" is still there, but keep reading). Let's assume voting isn't a completely reliable system for determining the truth or falsity of ideas (otherwise, we're done). Therefore, some true propositions will get a lot of false votes, and vice versa. To solve this, we allow people to propose arguments for or against each proposition. These have the form of some statement, like "That proposition is false because the author is a moron", together with a more detailed argument for why the statement is true. Now we let people vote on two more things: 1. Whether the truth of the statement would imply that the original proposition is true or false. 2. Whether the argument for the truth of the statement itself is sound. If we get enough votes in favor of both (1) and (2), we conclude that the original idea is true (or false), and discount the votes for or against the original idea. This is the key part, so I'll restate it. If we have propositions $A$, $B$, and $B \Rightarrow A$, then enough votes for both $B$ and $B \Rightarrow A$ override any votes against $A$. You can't kill a good idea unless you can kill the arguments for it as well (a toy sketch of this rule appears at the end of this post). Now we have to get recursive. What if $B$ gets a lot of votes, but is actually wrong? Then you let people propose arguments for the falsity of $B$, and so on. What if there are two competing arguments which appear to contradict? Then you let people propose arguments about why there isn't a contradiction. There are a lot of logical issues to deal with, but people can post arbitrary arguments written in normal human languages and we have the full power of human intelligence to judge them, so we're not limited by artificial logical restrictions. This isn't a formal proof system. Unfortunately, we are limited by what happens along the full recursive tree. If people lie about the propositions all the way down, and manage to flood away all the counter arguments, the system will fail. However, this is basically a problem of spam, and can be solved in the usual way. If you detect that someone is consistently voting opposite the correct answer, you flag them as malicious and discount their votes. This rule is circular, but that's what probabilistic analysis is for: we take all the data and compute the most likely assignment of truth values to propositions and spam flags to people. There's some threshold of validity that you need to achieve in order for such a solver to converge to the correct answer, but that level of trust is often quite low due to network effects and self-reinforcement. In other words, contradictions don't fit together. Since this is a website, we have to identify whether the "users" are actually people.
We could do this conventionally with a system like ReCAPTCHA, but since we're in recursive mode it's much cooler to instead ask users to judge the correctness of randomly selected propositions. If you want to vote on whether a proposition is true or false, or propose a new proposition, you need to spend a little time judging the ideas of others. If someone comes up with a way to trick this system by writing a program that can judge the truth or falsity of arbitrary English propositions, this discussion may be obsolete (thanks to Ross for this particular bit of reasoning). Other issues probably abound, but they can be fixed by allowing people to suggest improvements to the system. If deemed reasonable, these ideas can be implemented and tested in parallel with the existing system, resulting in a potentially large number of competing systems for determining truth values from the same data set. The data set itself could probably be made freely available (under a suitable license), so that others could build competing systems. I don't think this system would be all that difficult to implement. Thanks to the previous paragraph, if it reached a sufficient level of quality it would start to improve itself. Maybe that would even get scary. Of course, if we apply this to a realm like politics, the truth or falsity of various statements will be very controversial, and different people will have legitimately different opinions. This can be solved by adding side conditions to the statements, like "If you believe in flat tax systems, we should do this" or "If you believe that health care is a basic human right, we should do that." More importantly, however, there is a vast range of ideas that any rational person should agree to. Statements like "the proposed health care bills do not include death panels", and "given an otherwise equivalent choice between taxing a public good and a public evil, we should tax the latter." I think it's fair to say that the U.S. would be better off if we could agree on the statements that don't need side conditions. Note: I've done zero checking to see if this has been proposed or implemented before (this discussion happened just now), so I'm curious if anyone knows related references or links. Another note: presumably this would be set up as a nonprofit supported by donations of some kind. If this system actually existed, I would probably be willing to donate at least \$1000. Tags: distributed system, suggestion box, voting Posted in computer science, economics, politics | 7 Comments »
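As promised above, here is a toy encoding of the override rule from the suggestion-box post: a proposition's direct votes can be outweighed by a sufficiently supported argument of the form "B, and B implies A". This is my own sketch for illustration only; the class names and the threshold are invented and nothing here is the author's actual design.

```python
# Toy model of the "recursive suggestion box" override rule.
from dataclasses import dataclass, field

@dataclass
class Proposition:
    text: str
    votes_true: int = 0
    votes_false: int = 0
    # each argument is a pair (premise B, votes saying "B implies this proposition")
    arguments_for: list = field(default_factory=list)

THRESHOLD = 100  # invented support threshold

def judged_true(p: Proposition) -> bool:
    # An argument overrides direct votes when both the premise and the
    # implication have enough support ("you can't kill a good idea unless
    # you kill the arguments for it as well").
    for premise, implication_votes in p.arguments_for:
        if judged_true(premise) and implication_votes >= THRESHOLD:
            return True
    return p.votes_true > p.votes_false

b = Proposition("supporting claim B", votes_true=500, votes_false=20)
a = Proposition("idea A is good", votes_true=10, votes_false=400,
                arguments_for=[(b, 350)])
print(judged_true(a))  # True: the well-supported argument outweighs the direct votes
```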
http://mathhelpforum.com/advanced-algebra/159386-prove-inverse-transformation-linear-print.html
# prove inverse transformation is linear • October 12th 2010, 04:24 PM sdh2106 prove inverse transformation is linear Could someone help me with the following problem? Suppose A is a linear transformation from the x-y plane to itself. Show that A^-1 is also a linear transformation (if it exists). Thanks for the help! • October 12th 2010, 04:55 PM Also sprach Zarathustra Quote: Originally Posted by sdh2106 Could someone help me with the following problem? Suppose A is a linear transformation from the x-y plane to itself. Show that A^-1 is also a linear transformation (if it exists). Thanks for the help! Suppose $A:R^2\to R^2$ is a linear transformation which is a bijection. Let us show that $A^{-1}:R^2\to R^2$ is also a linear transformation. Suppose $a,b \in R^2$. Since $A$ is a bijection, there exist unique vectors $c,d \in R^2$ for which $A(c)=a$ and $A(d)=b$. From the linearity of $A$ we also have $A(c+d)=A(c)+A(d)=a+b$ and $A(kc)=kA(c)=ka$. Now, by the definition of the inverse transformation, $A^{-1}(a)=c$, $A^{-1}(b)=d$, $A^{-1}(a+b)=c+d$ and $A^{-1}(ka)=kc$. Hence $A^{-1}(a+b)=c+d=A^{-1}(a)+A^{-1}(b)$ and $A^{-1}(ka)=kc=kA^{-1}(a)$; therefore $A^{-1}$ is a linear transformation. • October 14th 2010, 05:20 AM sdh2106 great explanation. thanks again!
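For readers who like to sanity-check such statements numerically, here is a small sketch (mine, not from the thread): it picks a random invertible 2×2 matrix and verifies the two identities proved above; the random seed, the matrix, and the tolerance are arbitrary illustrative choices.

```python
# Numerical sanity check that the inverse of an invertible linear map is linear.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2))
assert abs(np.linalg.det(A)) > 1e-6   # make sure A is invertible (a bijection)
A_inv = np.linalg.inv(A)

a, b = rng.normal(size=2), rng.normal(size=2)
k = 3.7

# A^{-1}(a + b) == A^{-1}(a) + A^{-1}(b)
assert np.allclose(A_inv @ (a + b), A_inv @ a + A_inv @ b)
# A^{-1}(k a) == k A^{-1}(a)
assert np.allclose(A_inv @ (k * a), k * (A_inv @ a))
print("additivity and homogeneity of A^{-1} verified numerically")
```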
http://math.stackexchange.com/questions/201369/fourier-series-help
# Fourier Series Help I have the following function: $t(x) = e^{-j k_0 d_0}e^{-i (n-1) k_0 \frac{d_0}{2} \cos(2\pi x/\lambda)}$, which can be written in a Fourier series as $t(x) = \sum_q(C_q e^{-i q 2 \pi x/\lambda})$, where $C_q$ are the Fourier coefficients. However, I am relatively new to Fourier series and am really confused about the steps involved in this derivation. Could somebody help me out? - – Seyhmus Güngören Sep 23 '12 at 22:06 It would be very advisable you go to the site's FAQ and read there (3rd. paragraph) about how to properly write mathematics here with LaTeX. Your expression for $\,t(x)\,$ looks so absurdly messy that it is very likely many people here don't even try to understand it and leave the question behind... It would also be nice if you write $\,i\,$ instead of $\,j\,$ for the imaginary unit as this is the usual mathematical symbol for it, unlike what happens sometimes in physics. – DonAntonio Sep 24 '12 at 11:10 I cleaned it up a bit. – John Roberts Sep 24 '12 at 14:37 I'm having no luck with this one. Can somebody help me improve this question? – John Roberts Sep 26 '12 at 13:52
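Since no answer was posted here, one practical route is to compute the coefficients numerically from the standard formula $C_q = \frac{1}{\lambda}\int_0^{\lambda} t(x)\,e^{+i q 2\pi x/\lambda}\,dx$. The sketch below is my own illustration, with made-up values for $k_0$, $d_0$, $n$ and $\lambda$ (the question gives none); it also cross-checks against the Jacobi–Anger expansion, which expresses an exponential of a cosine as a sum over Bessel functions.

```python
import numpy as np
from scipy.special import jv  # Bessel functions, used only for the cross-check

# Hypothetical parameter values (not given in the question).
k0, d0, n, lam = 2 * np.pi / 0.5, 1.0, 1.5, 0.5
z = (n - 1) * k0 * d0 / 2

def t(x):
    # t(x) = exp(-i k0 d0) * exp(-i (n-1) k0 (d0/2) cos(2 pi x / lam))
    return np.exp(-1j * k0 * d0) * np.exp(-1j * z * np.cos(2 * np.pi * x / lam))

# C_q = (1/lam) * integral over one period of t(x) * exp(+i q 2 pi x / lam) dx,
# approximated here by an average over a uniform grid.
x = np.linspace(0.0, lam, 4096, endpoint=False)
for q in range(-3, 4):
    Cq_numeric = np.mean(t(x) * np.exp(1j * q * 2 * np.pi * x / lam))
    # Analytic cross-check from the Jacobi-Anger expansion:
    # exp(-i z cos(theta)) = sum_q (-i)^q J_q(z) exp(-i q theta)
    Cq_bessel = np.exp(-1j * k0 * d0) * (-1j) ** q * jv(q, z)
    print(q, Cq_numeric, Cq_bessel)
```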
http://physics.stackexchange.com/questions/8162/whats-so-special-about-ads
# What's so special about AdS? This question is coming from someone who has very little experience with M-Theory but is intrigued by the AdS/CFT correspondence and is beginning to study it. Why is the gauge/gravity duality discussed almost always in the context of anti-deSitter space? What is unique about it? What are the difficulties in studying it in Schwarzschild, deSitter, etc.? References to work done on the gauge/gravity duality in these more physical spacetimes would be much appreciated. - ## 5 Answers I'm going to provide a possible controversial answer, in an attempt to provoke some discussion. I do this in good faith, and in the belief that what I state are true, and back with references (comments of "citation needed" will be very handy). I context this with: I'm a condensed matter theorist, and I think the usual exposition of AdS/CFT has the cart before the horse. I will take a long detour, but hopefully I will come back at the end and answer the actual question. Let's start with a spin 1/2 chain on a 1D lattice, infinite in extent. The Hilbert space is a product of 2 dimensional spaces. Let the Hamiltonian be anti-ferromagnetic Ising with an external magnetic field, so that at a critical field strength we will get a quantum phase transition from anti-ferromagnetic to ferromagnetic. We deal only with the ground state (i.e. at zero temperature). Let's then make a couple of observations: away from the phase transition, the correlation length is finite, and the entanglement entropy of any given block of length $L$ is asymptotically a constant (as $L \rightarrow \infty$); at the phase transition, the correlation length is infinite, and the entanglement entropy goes as $\log(L)$. Note that these are quite special features of the ground state, since the typical (defined as average over the canonical Haar measure) state has entanglement entropy which scales as $L$. Therefore, instead of writing the ground state with full generality $$\left| \Omega \right\rangle = \sum_{s_1,s_2,\ldots} c_{s_1,s_2,\ldots} \left|s_1\right\rangle\otimes\left|s_2\right\rangle\otimes\ldots$$ where we would have to specify the matrix $c$ with an exponentially large number of dimensions (spanning the full Hilbert space), we're going to restrict our attention to so-called Matrix Product States (MPS) with the form: $$\left|\Omega\right\rangle = \sum_{s_1,s_2,\ldots} \mathrm{Tr}\left(\hat A^{s_1} \hat A^{s_2} \ldots \right) \left|s_1\right\rangle\otimes\left|s_2\right\rangle\otimes\ldots$$ where the matrices $\hat A^{s_i}$ are arbitrary matrices of dimension $m$. Essentially, we're staring in the corner of Hilbert space which is spanned by a linearly increasing number of dimensions. Now, as $m \rightarrow \infty$ we recover the full Hilbert space, but away from the critical point, a finite $m$ suffices to fully (exactly) describe the ground state, because of the prior point about finite entanglement entropy; essentially, the dimension $m$ controls how much entanglement is possible between adjacent sites, and the MPS ansatz fully spans all such states. But, as mentioned, the entanglement in a critical state is not bounded. In this case, we can use a different ansatz, the Multi-scale Entanglement Renormalisation Ansatz (MERA). The construction is difficult to describe in words, but easier in pictures. If we use tensor network diagrams (first identified by Penrose and called spin networks), we depict each tensor as a blob with a number of legs equal to its rank. 
Treating the matrices $\hat A^{s_i}$ as 3-rank tensors (one extra due to the spin index), we can draw the MPS as: where the lower legs are the spin indices. The MERA is then (but imagine that the "tree" continues upwards without end). The essence is that we reify coarse graining (i.e. renormalisation) into the ground state description by a tree of disentanglers and coarse graining. Again, if we do this right, this can describe the ground state with perfect accuracy. These tensor network diagrams also give a picturesque reason for why the entanglement entropy scales as a constant and as $\log(L)$ respectively. The argument is that the entanglement is localised at the boundary of a block (as it has to, since each connection in that network can only support a finite amount of entanglement), but the "boundary" actually scales differently in the two cases: the the non-critical case, it is just the edges of a 1D chain, which clearly don't care about the bulk; in the critical case, it needs to include not only the bottom layer, but all the layers above it, and there are $\log(L)$ layers. So far, everything is basically (up to corner cases) true. Let's now turn to more conjectural/interpretational stuff. Focus on MERA. Notice that if we treat it as a space, then a natural distance measure is the number of "hops" we need to do from one vertex to another; notice also, that in the continuum limit this is a homogeneous hyperbolic space, i.e. AdS. In the original Ising model, at the critical point, the field theory should be conformally invariant, and thus be a CFT. This is all but AdS/CFT, except we haven't specified that the MERA coefficients are computed by a quantum gravitational theory (it probably can't be, I think... the central charge is 1, and nothing is supersymmetric). Now, at this point, you might think "Aha! See? AdS/CFT is of primary importance to even mundane things like condensed matter!" However, I'd like to present some evidence that actually, AdS/CFT is a mundane consequence of a very clever idea, which is to geometrically interpret the information in a ground state. Let's consider instead an interacting fermion system in 1D. The usual electrons with Coulomb repulsion will do. It is known that the physical ground state is that of non-interacting solitons of fractionalised electrons: holons (carrying the charge) and spinons (carrying the spin). Our ansatz will then be that of MERA, but at a certain depth in the tree, we duplicate everything above it --- so that we end up with two 1D systems, one for holons and one for spinons. In the geometric picture above, it will be as if we glued an extra AdS space onto the usual one, so that we get a fork. The reason this suggests that actually the ground state should come first and the holography principle second is two fold: 1. Holography only holds for special states like the ground state, where the entanglement entropy scales sub-bulk. 2. The internal AdS space might not be AdS, or even admit any sort of nice geometrical picture, and even if it does, it might not be given by some sort of Lagrangian based field theory. So, back to the question: "what is special about AdS?" Other answers will no doubt focus on the special geometry that makes the maths work, but I would answer that the key is never the inner space, but the boundary: the (super-)CFT. The inner space, in this case, AdS, just comes along for a ride. If we had some other kind of boundary theory, we'd have some other kind of inner space, or not a space at all! References: Seminal (?) 
paper on correspondence between MERA and holography: http://arxiv.org/abs/0905.1317 Branching MERA as exotic holography: http://pirsa.org/10110076 - Took ages to get to the punchline and probably quite offtopic to the question but I love this answer, so +1 :) – Marek Apr 6 '11 at 23:10 Hi @genneth, that is a nice answer. I edited your final link to point to a more convenient version of the talk. Could you say a few words on what the various labels and elements are in the tree graphs you showed. And yes, Roger Penrose was the original creator of the notion of spin-networks. Tensor-networks are generalizations of these. – user346 Apr 7 '11 at 3:45 ps: the reason AdS is considered "special" is because, as @Daniel mentions in his answer, the symmetry group of AdS coincides with the conformal group. That is a pretty unique characteristic of AdS AFAIK – user346 Apr 7 '11 at 3:47 You say this about the number of hops required to go from one vertex to another that in the continuum limit this is a homogeneous hyperbolic space, i.e. AdS. I don't see how that happens. Can you clarify? – user346 Apr 7 '11 at 4:02 Some interesting ideas there, but I think your point 1 is incorrect. Holography in it's simplest form, the ads/CFT correspondence, applies to asymptotically ads spaces. The ground state is ads, and every excited state of the theory is some other configuration which approaches ads asymptotically. Nobody. would be interested in a correspondence that applies to one state, or a few special states, only. – user566 Apr 7 '11 at 4:22 show 7 more comments AdS$_d$ in any space-time dimension $d\geq 2$ is maximally symmetric with isometry group so(d-1,2) (for Minkowski signature). This group coincides with the conformal group in d-1 dimensions (again for Minkowski signature). For instance, for AdS$_5$ you obtain the isometry group so(4,2), which is the conformal group in 4 dimensions. This matching is a slightly trivial consistency check that something like an AdS$_5$/CFT$_4$ correspondence can work. Another special feature of AdS as opposed to dS is that it provides a stable vacuum in most theories (whereas dS is only meta-stable), and that it is compatible with SUSY (whereas dS is not). Spaces that asymptote to AdS have very special properties, too. Brown and Henneaux showed in d=3 that any consistent quantum theory of gravity must be dual do a 2-dimensional conformal field theory, in the sense that the Hilbert space must fall into irreducible representations of two copies of the Virasoro algebra, with central charge determined by Newton's constant and the cosmological constant. This was an important precursor of the AdS/CFT correspondence, where such a duality is realized explicitly (but in higher dimensions). Minkowski space is also maximally symmetric and stable, but not as susceptible to holography as AdS. In summary, AdS spaces are simple and have interesting physical properties, which is why they are used quite frequently. - There are some extensions of the AdS/CFT correspondence, so it is more accurate to call the more general set of dualities the gauge/gravity correspondence. In all such dualities one has a gravity dual of some quantum field theory, and the theory in question determines the boundary conditions on all the bulk fields (including the asymptotics of the bulk geometry). States of those theories correspond to small fluctuations (normalizable modes) moving in the bulk of that spacetime, and the vacuum state usually corresponds to the maximally symmetric ("empty") space. 
There are many such examples which are not AdS even asymptotically, though sadly asymptotically dS is not yet one of them (partially because it is not clear what the expression "asymptotically dS" really means). Asymptotically flat examples, alas with linear dilaton background, do exist. But, within the set of all holographic dualities there is something special in spaces which are asymptotically AdS. Those correspond to theories which become conformal at short distances. Using Wilsonian language, those are theories whose renormalization group flow can be continued to all energy scales, so they are completely well-defined quantum field theories without a cutoff. Such field theories are defined as relevant deformations of fixed points in the UV, and the holographic translation of that statement is that the gravity dual is asymptotically AdS. Continuing with the Wilsonian language, usually one needs to use QFT only as an effective field theory, and it is then inherently defined with a UV cutoff. Such more general quantum field theories (defined only up to a certain energy scale) correspond to instances of gauge-gravity duality which are not asymptotically AdS. The correspondence between EFT with a cutoff and non-asymptotically-AdS space (at least on some occasions dubbed the "non-AdS/non-CFT correspondence") is less well-understood than AdS/CFT (with the amount of work on AdS/CFT, this applies to many other subjects...). But, it is a very useful and interesting subject, in some ways more so than the original AdS/CFT correspondence, the one that opened the flood gates. In any event, the type of boundary conditions imposed on the space is only restrictive when discussing global questions. Any local process you are interested in (say, the formation and evaporation of black holes) can be embedded in asymptotically AdS space with an arbitrarily small cosmological constant. I wouldn't then think about the set of examples provided by AdS/CFT as "unphysical" in any way - it may not address all the possible questions one may be interested in, but it is the best way to address a whole bunch of fascinating ones. - AdS space is basically just hyperbolic space, with a time direction. Here's a nice geometrical fact. Consider 2d hyperbolic space in the Poincaré disc model. (Generalizing to higher dimensions is straightforward.) The metric is $ds^2 = \frac{dr^2 + r^2 d\theta^2}{(1-r^2)^2}$. The corresponding area element is $\frac{rdrd\theta}{(1-r^2)^2}$. So consider the circle at fixed $r = r_0$. This has circumference $2\pi r_0\frac{1}{1-r_0^2}$, and area $2\pi \int_0^{r_0} \frac{rdr}{(1-r^2)^2}$. For $r_0 = 1 - \epsilon$, with $\epsilon \ll 1$, these are $\frac{\pi}{\epsilon} - \frac{\pi}{2} + {\cal O}(\epsilon)$ and $\frac{\pi}{2\epsilon} - \frac{\pi}{2} + {\cal O}(\epsilon)$, respectively. What does this mean? It means that, for circles large compared to the curvature radius of the space, perimeter and area scale in the same way as you make the circle larger. (As opposed to flat space, where one scales like the square of the other.) I think this is a hint about holography; in some sense, AdS is the space in which holography becomes almost trivial, because $d$ and $d-1$ dimensional volumes are almost identical, which is why we understand holography much better in AdS. (Of course, this isn't unrelated to the ideas about conformal symmetry, etc. But I think this geometric fact sheds some light and is easy to understand without getting into details of physics.)
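These asymptotics are easy to confirm numerically. The following sketch is my own aside (not the answerer's): it evaluates the circumference $2\pi r_0/(1-r_0^2)$ and the area integral, whose closed form is $\pi\big(\tfrac{1}{1-r_0^2}-1\big)$, at $r_0 = 1-\epsilon$. Both blow up like $1/\epsilon$, and their ratio tends to $1/2$, which is the sense in which perimeter and area grow at the same rate.

```python
# Leading-order behaviour of circles in the Poincare disc model:
# circumference(r0) = 2*pi*r0 / (1 - r0**2)
# area(r0)          = 2*pi * integral_0^{r0} r / (1 - r**2)**2 dr = pi * (1/(1 - r0**2) - 1)
import numpy as np

for eps in [1e-2, 1e-3, 1e-4]:
    r0 = 1.0 - eps
    circumference = 2 * np.pi * r0 / (1 - r0**2)
    area = np.pi * (1.0 / (1 - r0**2) - 1.0)   # closed form of the area integral
    print(f"eps={eps:g}  eps*circumference={eps * circumference:.4f}  "
          f"eps*area={eps * area:.4f}  area/circumference={area / circumference:.4f}")
# eps*circumference -> pi, eps*area -> pi/2, and the ratio tends to 1/2:
# perimeter and area grow at the same rate, unlike in flat space.
```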
- because d and d−1 dimensional volumes are almost identical, which is why we understand holography much better in Ad, very nice perspective. +1 – user346 Apr 7 '11 at 8:22 The $AdS$ in a Euclidean form is a is a hyperbolic space. In two dimensions hyperbolic plane ${\cal H}^2$ is a simply connected manifold with constant Gaussian curvature $-1$. The two dimensional $AdS_2$ holds near the event horizon of a black hole, which is the Poincare disk ${\cal D}^2~=~\{z:|z|~<~z_0\}$, and related to the Rindler spacetime the upper half plane ${\cal H}^2~=~\{z:Im(z)~>~0\}$. The group of isometries $Iso({\cal H}^2)$ is the set of smooth transformations $z^\prime~=~gz$ which satisfy the hyperboloid metric for $s~=~s(z,~gz)$. The half plane and the Poincar{\'e} disk are related by a conformal transformation, so the transformation of coordinates are given by the same group. In the half plane the isometries are the fractional linear transformations, or modular group $$g:{\cal H}~\rightarrow~{\cal H},~z:~\mapsto~gz:~=~{{az~+~b}\over{cz~+~d}},~\left(\matrix{a & b\cr c & d}\right)~\in~SL(2,~{\bf R}).$$ The matrices $g$ and $-g$ are the same fractional linear transformations, so the isometries $Iso({\cal H}^2)$ may be identified with the projective Klein group $PSL(2,~{\mathbb R})~=$ $SL(2,~{\mathbb R})/\{\pm 1\}$. The group is restricted further by the isotropy subgroup which leaves elements of ${\cal H}^2$ unchanged $z~=~gz$. Any such $g~\in~PSL(2,~{\mathbb R})$ defines the $SO(2)$ rotation group. Then ${\cal H}^2~=~PSL(2,~{\mathbb R})/SO(2)$. The discrete structure, or $PSL(2,{\mathbb Z})$ is manifested in the tessellation symmetry of the hyperbolic half-plane or disk. These symmetries are seen in the Escher prints called limit circles. These discrete structures give the MERA structure which Genneth references. The “piling up” of structure towards the boundary is a renormalization of the Ising spin system, for spins at the vertices in the tessellation. What follows is in part a brief outline of the discrete coset $AdS$ completion due to Charles Frances. http://www.math.u-psud.fr/~frances/ The boundary space $\partial AdS_{n+1}$ is a Minkowski spacetime, or a spacetime $E_n$ that is simply connected that with the $AdS$ is such that $AdS_{n+1}\cup E_n$ is the conformal completion of $AdS_{n+1}$ under the discrete action of a Kleinian group. For the Lorentzian group $SO(2,~n)$ there exists the discrete group $SO(2,n,Z)$ which is a Mobius group. For a discrete subgroup $\Gamma$ subset $SO(2,~n,~Z)$ that obeys certain regular properties for accumulation points in the discrete set $AdS_{n+1}/\Gamma$ is a conformal action of $\Gamma$ on the sphere $S_n$. This is then a map which constructs an $AdS/CFT$ correspondence. Given that $AdS_n~=~O(n,2)/O(n,1)$ this coset structure is a Clifford-Klein form, or double coset structure. The lightlike geodesics in $E^n~=~M^n$, the Minkowski spacetime, are copies of $RP^1$, which at a given point p define a set that is the lightcone $C(p)$. The point p is the projective action of $\pi(v)$ for $v$ a vector in a local patch $R^{n,2}$ and so $C(p)$ is then $\pi(P\cap C^{n,2})$, for $P$ normal to $v$, and $C^{n,2}$ the region on $R^{n,2}$ where the interval vanishes. 
The space of lightlike geodesics is a set of invariants and then due to a stabilizer on $O(n,2)$, so the space of lightlike curves $L_n$ is identified with the quotient $O(n,2)/P$, where $P$ is a subgroup defined the quotient between a subgroup with a Zariski topology, or a Borel subgroup, and the main group $G~=~O(n,~2)$. This quotient $G/P$ is a projective algebraic variety, or flag manifold and $P$ is a parabolic subgroup. The natural embedding of a group $H~\rightarrow~G$ composed with the projective variety $G~\rightarrow~G/P$ is an isomorphism between the $H$ and $G/P$. This is then a semi-direct product $G~=~P~\rtimes~ H$. For the $G$ any $GL(n)$ the parabolic group is a subgroup of upper triangular matrices. This is the Heisenberg group. This connection to Heisenberg groups and parabolic groups is particularly interesting. The structure here has a $\theta$-function realization, and is related to the density of states in string theory. This leads one in my opinion into some very deep structure which is not at all completely explored. - what happened to your account? – user346 Apr 7 '11 at 16:03
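As a small concrete companion to the $PSL(2,\mathbb{R})$ discussion in the last answer (my own sketch, not part of the thread): the fact that real fractional linear transformations with $ad-bc=1$ preserve the upper half-plane follows from the identity $\operatorname{Im}(gz) = \operatorname{Im}(z)/|cz+d|^2$, which is easy to spot-check numerically.

```python
# Check that real Moebius maps z -> (a z + b)/(c z + d) with a d - b c = 1
# send the upper half-plane {Im z > 0} to itself, using
# Im(g z) = (a d - b c) * Im(z) / |c z + d|**2.
import numpy as np

rng = np.random.default_rng(1)
for _ in range(5):
    a, b, c = rng.normal(size=3)
    while abs(a) < 1e-3:          # avoid division by a tiny a below
        a = rng.normal()
    d = (1.0 + b * c) / a          # chosen so that a*d - b*c = 1
    z = complex(rng.normal(), abs(rng.normal()) + 0.1)   # a point with Im z > 0
    gz = (a * z + b) / (c * z + d)
    predicted = z.imag / abs(c * z + d) ** 2
    assert gz.imag > 0
    assert np.isclose(gz.imag, predicted)
print("all sampled Moebius maps preserve the upper half-plane")
```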
http://mathoverflow.net/questions/62942/how-to-find-vertex-of-parallelotope-closest-to-given-point-p-in-rn-or-minimi
## How to find the vertex of a parallelotope closest to a given point P in R^n? (Or: minimize a quadratic form over {+-1}.) Is it NP-hard? Consider a parallelotope in R^n and some point "P" in R^n. What algorithms (other than brute force) can be suggested to find the vertex of the parallelotope closest to "P"? Is the problem NP-hard? A parallelotope has 2^n vertices, and not every set of 2^n points in R^n is the vertex set of a parallelotope, so a clever algorithm should somehow use this additional structure, which a brute-force search over the 2^n points does not. == Reformulation: after choosing the origin at the center of the parallelotope, we arrive at the following algebraic version of the problem: minimize, over $x_i \in \{-1,+1\}$, the quadratic form $\sum_{i,j} a_{ij} x_i x_j - \sum_i v_i x_i$. - ## 2 Answers This looks almost like the Max-CUT problem to me (you have minimize instead of maximize, but you can just flip the signs of the matrix $A$). In general, your problem is an instance of a binary quadratic program, so it will be hard to solve. Have a look at some solvers on this webpage - This is equivalent to the MIMO detection problem, which is NP-hard. Here is a paper with a semidefinite relaxation: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.163.3233&rep=rep1&type=pdf Then, there exist some easy instances of the problem, if the matrix of your quadratic form is negative semidefinite and rank deficient. Check these for example
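Since the reformulation is small enough to play with directly, here is a brute-force baseline (my own sketch; the random $A$ and $v$ are placeholders, not data from the question) that minimizes $\sum_{i,j} a_{ij} x_i x_j - \sum_i v_i x_i$ over $x \in \{-1,+1\}^n$. Any heuristic or relaxation, such as the semidefinite one mentioned above, can be checked against it for small $n$.

```python
# Exhaustive minimization of x^T A x - v^T x over x in {-1,+1}^n.
# Only feasible for small n (2^n candidates); useful as a ground truth
# when testing heuristics or SDP-style relaxations.
from itertools import product
import numpy as np

def brute_force_min(A, v):
    n = len(v)
    best_val, best_x = np.inf, None
    for signs in product([-1.0, 1.0], repeat=n):
        x = np.array(signs)
        val = x @ A @ x - v @ x
        if val < best_val:
            best_val, best_x = val, x
    return best_val, best_x

rng = np.random.default_rng(0)
n = 10                          # 2^10 = 1024 candidates
A = rng.normal(size=(n, n))
A = (A + A.T) / 2               # symmetrize; only the symmetric part matters
v = rng.normal(size=n)
val, x = brute_force_min(A, v)
print("minimum value:", val)
print("argmin over {-1,+1}^n:", x)
```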
http://mathhelpforum.com/algebra/169226-solving-radical-equation.html
# Thread: 1. ## Solving a radical equation....... I cannot figure this part out for the life of me. I have it written in the picture so hopefully it's readable. Please let me know if you need another picture. 2. Originally Posted by redlinethecar I cannot figure this part out for the life of me. I have it written in the picture so hopefully it's readable. Please let me know if you need another picture. What picture? 3. Sorry, I fixed it. My mind is not working right now. 4. $\left(\sqrt{x+2}+2\right)^2=\left(\sqrt{x+2}+2\right)\left(\sqrt{x+2}+2\right)$ $=\left(\sqrt{x+2}\right)\left(\sqrt{x+2}\right)+\left(\sqrt{x+2}\right)2+2\left(\sqrt{x+2}\right)+2(2)$ just as $5^2=25$: $(3+2)^2=(3+2)(3+2)=3(3+2)+2(3+2)=3(3)+3(2)+2(3)+2(2)$ $=9+6+6+4=25$
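For double-checking expansions like this one, a computer algebra system does the bookkeeping mechanically. The snippet below is an illustrative aside (not part of the thread) using SymPy; it confirms that $(\sqrt{x+2}+2)^2 = (x+2) + 4\sqrt{x+2} + 4$.

```python
# Symbolic check of the expansion (sqrt(x+2) + 2)^2 = (x + 2) + 4*sqrt(x+2) + 4.
import sympy as sp

x = sp.symbols('x', positive=True)        # keeps sqrt(x+2) real and simple
expr = (sp.sqrt(x + 2) + 2) ** 2
print(sp.expand(expr))                    # -> x + 4*sqrt(x + 2) + 6
assert sp.simplify(sp.expand(expr) - ((x + 2) + 4 * sp.sqrt(x + 2) + 4)) == 0
```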
http://math.stackexchange.com/questions/59442/throw-a-die-n-times-observe-results-are-a-monotonic-sequence-what-is-probabi
# Throw a die $N$ times, observe results are a monotonic sequence. What is probability that all 6 numbers occur in the sequence? I throw a die $N$ times and the results are observed to be a monotonic sequence. What is probability that all 6 numbers occur in the sequence? I'm having trouble with this. There are two cases: when the first number is 1, and when the first number is 6. By symmetry, we can just consider one of them and double the answer at the end. I've looked at individual cases of $N$, and have that For $N = 6$, the probability is $\left(\frac{1}{6}\right)^2 \frac{1}{5!}$. For $N = 7$, the probability is $\left(\frac{1}{6}\right)^2 \frac{1}{5!}\left(\frac{1}{6} + \frac{1}{5} + \frac{1}{4} + \frac{1}{3} + \frac{1}{2} + 1\right)$. I'm not sure if the above are correct. When it comes to $N = 8$, there are many more cases to consider. I'm worried I may be approaching this the wrong way. I've also thought about calculating the probability that a number doesn't occur in the sequence, but that doesn't look to be any easier. Any hints/corrections would be greatly appreciated. Thanks - Do you mean that in the sequence of $N$ outcomes, you look for a longest monotonic subsequence ? Should the sequence be increasing or decreasing ? – Sasha Aug 24 '11 at 12:43 Apologies, see my edit. I need to find the probability that all 6 numbers occur, given that the N results form a monotonic sequence. – TRY Aug 24 '11 at 12:45 ## 3 Answers As you observe, you can reduce the problem to monotonically increasing. Consider how many cases give a monotonically increasing sequence from 1 to 6. That requires that you throw a 1 on the first throw; that on five of the later throws you increase by 1; and that on the other later throws you remain that same. So there are $\binom{N-1}{5}$ choices for the throws which change. Now, how many possible monotonically increasing sequences are there? One way to look at this combinatorically is to take the $N$ elements of the sequence, prepend $1$, and postpend $6$. There are $N+1$ places in this sequence where the value can increase, and $5$ of them are used (with repetition). That gives $\binom{N+5}{5}$ Now to convert the reduction into the final answer we want the number of monotonically increasing or decreasing sequences with all values over the number of monotonically increasing or decreasing sequences. A sequence cannot be monotonically increasing through all values and monotonically decreasing through all values, but it can be both monotonically increasing and monotonically decreasing if it is constant, so we must apply inclusion-exclusion for the denominator. Therefore we have $$\begin{eqnarray} P(\text{All 6 included}|\text{Monotonic}) & = & \frac{ 2\binom{N-1}{5} }{ 2\binom{N+5}{5} - 6 } \\ & = & \frac{(N-1)!\;N!}{(N-6)!\;((N+5)! - 360\;N!)} \\ & = & \frac{(N-5)(N-4)(N-3)(N-2)(N-1)}{(N+5)(N+4)(N+3)(N+2)(N+1) - 360} \end{eqnarray}$$ - @Byron, excellent point. Thanks. – Peter Taylor Aug 24 '11 at 14:21 1 As Tom states in his solution below, there is a subtle problem with reducing the question to monotonically increasing only: monotonic includes constant. – David Bevan Jan 3 '12 at 18:21 I have a slightly different answer to the above, comments are very welcome :) The number of monotonic sequences we can observe when we throw a dice $N$ times is 2$N+5\choose5$-$6\choose1$ since the six sequences which consist of the same number repeatedly are counted as both increasing and decreasing (i.e. we have counted them twice so need to subtract 6 to take account of this). 
The number of increasing sequences involving all six numbers is $\binom{N-1}{5}$ (as has already been explained). Similarly, the number of decreasing sequences involving all six numbers is also $\binom{N-1}{5}$. Therefore I believe that the probability of seeing all six numbers given a monotonic sequence is $2\binom{N+5}{5}-\binom{6}{1}$ divided by $2\binom{N-1}{5}$. This is only slightly different to the above answers, but if anyone has any comments as to whether you agree or disagree with my logic, or if you require further explanation, I'd be interested to hear from you.

- I think you mean, divide $2\binom{N-1}{5}$ by $2\binom{N+5}{5}-6$. Well done for pointing out the subtle error in considering increasing and decreasing sequences individually. – David Bevan Jan 3 '12 at 18:19

Consider monotonically increasing outcomes. Let there be $k_1$ ones, $k_2$ twos, and so on. Clearly $k_1+k_2+k_3+k_4+k_5+k_6 = N$ and $k_i>0$. The probability is then the number $C$ of such configurations divided by $6^N$. To count how many such $\{k_i\}$ there are, it is best to use generating functions.
$$C = [t]_n \left( \frac{t}{1-t} \right)^6 = [t]_{n-6} \left( \frac{1}{1-t} \right)^6 = \binom{n-1}{5}$$
Added: In order to compute the conditional probability, we should count how many monotonically increasing sequences of outcomes there are. To this end we need to drop the $k_i>0$ requirement, while keeping $\sum_{i=1}^6 k_i = n$. Let this count be $T$; then
$$T = [t]_n \left( \frac{1}{1-t} \right)^6 = \binom{n+5}{5}$$
The final result is thus
$$p = \frac{\binom{n-1}{5}}{\binom{n+5}{5} } = \frac{(n)^{(6)}}{(n)_6}$$
where $(n)_m$ is the Pochhammer symbol and $(n)^{(m)}$ is the falling factorial.

- As Tom states in his solution below, there is a subtle problem with reducing the question to monotonically increasing only: monotonic includes constant. – David Bevan Jan 3 '12 at 18:22
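As a sanity check on the closed form, here is a small brute-force enumeration in base R (a sketch only; it lists all $6^N$ outcome sequences, so it is practical only for roughly $N \le 8$):

```r
# Enumerate all 6^N sequences for a small N and compare the empirical
# conditional probability with the closed form 2*C(N-1,5) / (2*C(N+5,5) - 6).
N <- 7
seqs <- as.matrix(do.call(expand.grid, rep(list(1:6), N)))
mono <- apply(seqs, 1, function(x) all(diff(x) >= 0) || all(diff(x) <= 0))
all6 <- apply(seqs, 1, function(x) length(unique(x)) == 6)
sum(mono & all6) / sum(mono)                       # exact conditional probability
2 * choose(N - 1, 5) / (2 * choose(N + 5, 5) - 6)  # closed form; the two should agree
```

For $N = 7$ both lines print $12/1578 \approx 0.0076$.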
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 38, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9483253955841064, "perplexity_flag": "head"}
http://stats.stackexchange.com/questions/14400/random-walks-in-multinomial-case
# Random walks in multinomial case Model: A vector $X=(X_1, X_2, X_3)$ that follows a trinomial distribution with parameters $p=1/3$ and $n$. (I have a coin with three sides $S1$, $S2$, $S3$). I flip the coin $n$ times. The coin has a probability $p=1/3$ to be flipped to the side $S1$, and similarly with $S2$ and $S3$. $X_1$ counts the number of times the coin is flipped to $S1$ ($X_2$ and $X_3$ are defined similarly). Questions: 1. Let $0 \leq \alpha \leq n$. I Want to find $T(\alpha)= P (((X_1 - X_2) \geq \alpha) \cap ((X_1 - X_3) \geq \alpha))$, or a lower bound on $T(\alpha)$ 2. For what values of $\alpha$, The lower bound on $T(\alpha)$ does not depend on $n$ (is a constant) ? If $X$ was following a binomial distribution, the problem would have been easy to solve using random walks, but I do not know how to solve it in the multidimensional case. Any idea? Thank you. Two-dimensional case $X= (X_1, X_2)$ follows a binomial distribution with parameters $p=1/2$ and $n$. (A coin is flipped $n$ times. the coin is up with probability $p=1/2$ and down with probability $p$. $X_1$ counts the number of ups and $X_2$ counts the number of downs) Let $Y= X_1 - X_2$ (We have $n$ coin flips, when the coin is flipped up, we add (+1) to $Y$, when the the coin is flipped down, we add (-1) to $Y$). We can see this as a random walk, when the coin is up we go to the right and when it is down we go to the left. $\mathrm{Var}(Y)=n = \sqrt n ^2$. $E(Y)=0$ By the Central Limit Theorem, $P(Y \geq \alpha) \approx 1 - \Phi(\alpha / \sqrt n)$ (normal distribution). - Could you elaborate on the sense in which this is a "random walk"? You haven't described a stochastic process; something is missing. – whuber♦ Aug 17 '11 at 13:48 @whuber: I edited the question. I hope it is ok now. – user2094 Aug 17 '11 at 14:28 Exactly how does a trinomial distribution lead to a random walk? Now we can form three differences $X_i-X_j$; what do we do with them? You need to be explicit. Even your description of the "two-dimensional" random walk is faulty: there's still no stochastic process in evidence. – whuber♦ Aug 17 '11 at 14:44 I don't know how to use to random walk in multidimensional case, I just say that random walks help to solve the problem in the two-dimensional case. The stochastic process: I flip a coin $n$ times. the coin is up with probability $p=1/2$ and down with probability $p$. $X_1$ counts the number of ups and $X_2$ counts the number of downs. – user2094 Aug 17 '11 at 14:49 1 OK I added a description. (I have a coin with three sides S1, S2, S3). I flip the coin n times. The coin has a probability p=1/3 to be flipped to the side S1, and similarly with S2 and S3. X1 counts the number of times the coin is flipped to S1 (X2 and X3 are defined similarly). – user2094 Aug 17 '11 at 14:58 show 5 more comments ## 1 Answer The probabilities for this problem can be calculated explicitly for quite large $n$. To get a very good approximation for even moderately large $n$, we can use the multivariate central limit theorem. Define $U_1 = X_1 - X_2$ and $U_2 = X_1 - X_3$. Note that by symmetry, $$\newcommand{\e}{\mathbb{E}}\renewcommand{\Pr}{\mathbb{Pr}}\newcommand{\Cov}{\mathrm{Cov}}\e U_1 = \e U_2 = 0 \> .$$ We also have, by the bilinearity of the covariance operator, that $$\Cov(U_1, U_2) = \Cov(X_1,X_1) - \Cov(X_1,X_3) - \Cov(X_1,X_2) + \Cov(X_2,X_3) \>.$$ Now, $\Cov(X_1,X_1) = n p (1-p)$ where $p = 1/3$ here. 
An only slightly more difficult calculation yields $$\Cov(X_1,X_2) = - n p^2 \>,$$ and, of course, by symmetry again, $\Cov(X_1,X_2) = \Cov(X_1,X_3) = \Cov(X_2,X_3)$. Hence, $\Cov(U_1, U_2) = n p (1-p) + n p^2 = np = n/3$ and by a similar calculation, $\Cov(U_1, U_1) = \Cov(U_2,U_2) = 2 n p = 2 n / 3$. Observe that $U_1$ and $U_2$ are each the sums of independent and identically distributed random variables. For example, if $\xi_i \in \{1,2,3\}$ is the outcome of the $i$th draw, then $U_1 = \sum_{i=1}^n 1_{(\xi_i = 1)} - 1_{(\xi_i = 2)}$ where $1_{(\cdot)}$ is the indicator function. Hence, by the multivariate central limit theorem, we conclude that $$\sqrt{\frac{3}{n}} (U_1,U_2) \xrightarrow{d} \mathcal{N}(0, \Sigma)$$ where $$\Sigma = \left(\begin{array}{cc}2 & 1 \\ 1 & 2\end{array}\right).$$ Now, since $$T(\alpha) = \Pr( \{X_1 - X_2 \geq \alpha \} \cap \{X_1 - X_3 \geq \alpha \} ) = \Pr( U_1 \geq \alpha, U_2 \geq \alpha)$$ then, we can approximate $T(\alpha)$ as follows $$T(\alpha) \approx \int_{\alpha\sqrt{3/n}}^\infty \int_{\alpha\sqrt{3/n}}^\infty \frac{1}{2 \sqrt{3} \pi} e^{-\frac{1}{3}(u_1^2 - u_1 u_2 + u_2^2 )} \mathrm{d}u_1 \mathrm{d}u_2 \> .$$ Below is some very brief $R$ code that compares a simulation of the true process against a simulation using the normal approximation assuming $n = 100$ underlying trinomial trials. First, the picture. Here is the code. ````set.seed(.Random.seed[1]) n <- 100 N <- 10000 X <- matrix( sample(1:3, n*N, replace=T), nc=n ) xt <- apply(X,1,table) dxt <- cbind( xt[1,]-xt[2,], xt[1,]-xt[3,] ) xtt <- table(apply(dxt,1,min)) L <- matrix( c(sqrt(2), 1/sqrt(2), 0, sqrt(3/2)), 2 ) Y <- L %*% matrix( rnorm( 2*N ), nr=2 ) * sqrt(n) / sqrt(3) ytt <- table( floor(apply(Y,2,min)) ) plot( names(xtt), xtt/N, type="h", xlab="a", ylab="Density of T(a)" ) lines( names(ytt), ytt/N, col="red", type="h" ) legend( "topright", legend=c("actual", "normal approx."), lty="solid", col=c("black", "red"), bg="white", inset=0.02 ) ```` There are also R packages to calculate bivariate normal densities and probabilities. Both mnormt and fMultivar are examples, but I don't know enough to recommend any one over another. - Thank you very much for your detailed response. May I ask you another question ? If I generalize my problem the general f-nomial case (f=2 for the binomial and f=3 for the trinomial), so I have $f$ variables $X_1, X_2, .... X_f$. I want to calculate the probability $P(\alpha)$ that some $X_i$ is greater than all the other $X_j, j\neq i$ by at least $\alpha$. Suppose I fix $\alpha=\sqrt(N)$, do you think that I can find an approximation or a lower bound to $P(\alpha)$ that does NOT depend on $f$, even if it is very low. Thank you. – user2094 Sep 5 '11 at 7:19 You want a lower bound as opposed to an upper bound? I'm not sure a nontrivial such bound will exist, but I will think about it a little. – cardinal Sep 5 '11 at 18:37 By lower bound I mean a value $V$ that is lower than the probability I want to compute, to be able that the probability is at least equal to $V$. – user2094 Sep 7 '11 at 16:22 1 I deleted some previous comments that were no longer relevant. I will try to update this answer with a more general one in the next few days if you are still interested. – cardinal Sep 21 '11 at 17:05 Yes, thank you, I would be very grateful. – user2094 Sep 24 '11 at 10:19
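As a rough cross-check that avoids extra packages, one can also estimate $T(\alpha)$ directly by Monte Carlo and compare it with draws from $\mathcal{N}(0, \tfrac{n}{3}\Sigma)$. The snippet below is only a sketch, with $n$ and $\alpha$ chosen arbitrarily:

```r
set.seed(1)
n <- 100; N <- 20000; alpha <- 10
X <- matrix(sample(1:3, n * N, replace = TRUE), nrow = N)   # N independent runs of n draws
U1 <- rowSums(X == 1) - rowSums(X == 2)
U2 <- rowSums(X == 1) - rowSums(X == 3)
mean(U1 >= alpha & U2 >= alpha)               # Monte Carlo estimate of T(alpha)

cf <- chol(matrix(c(2, 1, 1, 2), 2))          # Sigma = t(cf) %*% cf
Z <- matrix(rnorm(2 * N), ncol = 2) %*% cf * sqrt(n / 3)
mean(Z[, 1] >= alpha & Z[, 2] >= alpha)       # bivariate normal approximation
```

The two estimates should agree up to Monte Carlo error.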
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 72, "mathjax_display_tex": 7, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.918033242225647, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/262702/spectra-of-operators
# Spectra of operators

Please help me prove a theorem: If $\mathfrak{U}$ is a complex, commutative Banach algebra with identity and $x\in\mathfrak{U}$, then
$$\sigma(x)=\{\phi(x):\phi \text{ is a homomorphism of } \mathfrak {U} \text{ onto } \mathbb{C}\}$$

- Then what? How is $\sigma(x)$ defined? – Nils Matthes Dec 20 '12 at 15:55
- This theorem is from the book Invariant Subspaces by Heydar Radjavi and Peter Rosenthal. – Matema Tika Dec 20 '12 at 16:04

## 1 Answer

First, some elementary facts about onto homomorphisms $\phi$. We have $\phi(0)=0$ and $\phi(e)=1$. If $x$ is invertible, then $\phi(x)\neq 0$.

If $\lambda=\phi(x)$ for such a homomorphism, then $\phi(x-\phi(x)e)=0$, which implies that $x-\phi(x)e$ is not invertible, hence $\lambda\in \sigma(x)$.

If $\lambda\neq\phi(x)$ for all $\phi$, we have to show that $x-\lambda e$ is invertible. We have that $x-\lambda e\notin\ker\phi$ for every onto homomorphism $\phi$. In Rudin's book Functional Analysis it is shown that each maximal ideal is the kernel of an onto homomorphism and that each proper ideal is contained in a maximal ideal. Can you conclude from that?

The result actually says that $\widehat x$, the Gelfand transform of $x$, defined on the set of onto homomorphisms by $\widehat x(\phi):=\phi(x)$, has the spectrum of $x$ as its range.
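One way to conclude from that, in case it helps: if $x-\lambda e$ were not invertible, then $(x-\lambda e)\mathfrak{U}$ would be a proper ideal, hence contained in some maximal ideal $M=\ker\phi$ for an onto homomorphism $\phi$, and then
$$\phi(x-\lambda e)=0 \quad\Longrightarrow\quad \phi(x)=\lambda,$$
contradicting the assumption that $\lambda\neq\phi(x)$ for all $\phi$. So $x-\lambda e$ is invertible and $\lambda\notin\sigma(x)$, which gives the reverse inclusion.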
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9380438923835754, "perplexity_flag": "head"}
http://mathhelpforum.com/geometry/157003-halloween.html
Thread:

1. Halloween

My colleague's daughter wanted to be a stop sign for Halloween, and he needed to cut out such a sign from a piece of cardboard that was 3 feet by 4 feet. What cuts did he make so that the stop sign (a regular octagon) was as big as possible?

2. Intuitively, we want to make the height as large as possible, leaving a little left-over material on the sides. (We could alternately think of it as making the sides as large as possible and leaving a little left over for the height; it doesn't matter.) Using this way of thinking, obviously, we're thinking of the cardboard as having the 3-foot side running vertically and the 4-foot side running horizontally.

When we see the octagon as triangles and rectangles, we know that the interior rectangle has the same height as the sides of the octagon, which we'll call $a$. Looking at the triangles, then, we want to find the leg length of an isosceles right triangle with hypotenuse $a$. Whatever that leg length is, call it $b$, we want to double it, add it to $a$, and set that equal to 3. When we solve for $a$, we'll know the value of $b$.

Thus he started a cut on the left side (could be on the right; again, not really important how you view this), $b$ units down from the top, at a 45-degree angle from the edge of the board (since the interior angle of an octagon is 135°), until he cut that piece off. He did a symmetrical cut at the bottom. He also made a cut $a+2b$ units to the right of the upper-left corner (measured from the point before anything had been cut), straight down the cardboard. He then made cuts on the right that were symmetric to those on the left. I'll see if I can make and post a .pdf showing the picture that I used to think about this.

3. The red line is $b$ and the blue line is $a$.

4. Originally Posted by matgrl
My colleague's daughter wanted to be a stop sign for Halloween, and he needed to cut out such a sign from a piece of cardboard that was 3 feet by 4 feet. What cuts did he make so that the stop sign (a regular octagon) was as big as possible?

Adding some calculations to ragnar's description:

1. All sides of the octagon have the length $a$.
2. The width of the board is 3'. It is made up of two corner segments of length $k$ and one octagon side, so $2k + a = 3$. Since $a$ is the hypotenuse of an isosceles right triangle with side length $k$, you know: $k^2+k^2 = a^2~\implies~k = \frac12 a \cdot \sqrt{2}$. Thus the 1st equation becomes: $a\cdot \sqrt{2}+a=3~\implies~a=\dfrac{3}{1+\sqrt{2}}~\approx~1.24264'$

5. Elegant solution, earboth!

6. Are there any simpler ways to do this?
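If it helps to see the numbers, here is a quick check of earboth's calculation in base R (just a sketch; units are feet):

```r
a <- 3 / (1 + sqrt(2))   # octagon side length, from a*sqrt(2) + a = 3
k <- a / sqrt(2)         # leg of each corner triangle that gets cut off
c(side = a, corner = k, total = 2 * k + a)   # total comes back to 3, as it should
```

Since a regular octagon is as wide as it is tall, the finished sign is about 3 feet across in both directions, so it fits comfortably within the 4-foot dimension of the board.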
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9713718295097351, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/57802/list
## Return to Question 2 added "geometrically" to certain properties This question is somehow related to the question What properties define open loci in excellent schemes?. Let $f:X\to S$ be a proper (or even projective) morphism between schemes (of finite type over a field or over $\mathbb{Z}$). For $t\in S$, $X_t$ is the fiber of $f$ over $t$. Let $P$ be a property of schemes. We consider the locus: $$U_P = \{ t\in S : X_t \text{ has property } P \}.$$ For which properties $P$ is the set $U_P$ open if 1. $f$ is flat, 2. $f$ is smooth? Examples of such $P$'s I know or suspect to be open in flat families are "being geometrically reduced", "being geometrically smooth" or "being $S_n$". In smooth families, a nice example is that of "being Frobenius split" (we assume that $S$ has characteristic $p$). Copy-paste from the aforementioned thread: Question 1: Do you know other interesting classes of open properties? Question 2: Are there good heuristic reasons for why a certain property should be open? Phrased a bit more ambitiously, are there common techniques for proving openness for certain class of properties? More specific questions: • how about properties $R_n$ and normality? • is being Frobenius split open in flat families? • in general take a property local rings $Q$ and consider $P =$ "all local rings of $X$ satisfy $Q$". Which of the properties $Q$ listed in the cited thread give $P$'s which are open in flat families? 1 # What properties define open loci in families? This question is somehow related to the question What properties define open loci in excellent schemes?. Let $f:X\to S$ be a proper (or even projective) morphism between schemes (of finite type over a field or over $\mathbb{Z}$). For $t\in S$, $X_t$ is the fiber of $f$ over $t$. Let $P$ be a property of schemes. We consider the locus: $$U_P = \{ t\in S : X_t \text{ has property } P \}.$$ For which properties $P$ is the set $U_P$ open if 1. $f$ is flat, 2. $f$ is smooth? Examples of such $P$'s I know or suspect to be open in flat families are "being reduced", "being smooth" or "being $S_n$". In smooth families, a nice example is that of "being Frobenius split" (we assume that $S$ has characteristic $p$). Copy-paste from the aforementioned thread: Question 1: Do you know other interesting classes of open properties? Question 2: Are there good heuristic reasons for why a certain property should be open? Phrased a bit more ambitiously, are there common techniques for proving openness for certain class of properties? More specific questions: • how about properties $R_n$ and normality? • is being Frobenius split open in flat families? • in general take a property local rings $Q$ and consider $P =$ "all local rings of $X$ satisfy $Q$". Which of the properties $Q$ listed in the cited thread give $P$'s which are open in flat families?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 44, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9124196767807007, "perplexity_flag": "middle"}
http://mathhelpforum.com/algebra/83464-squares-natural-numbers-arithmetic-progression.html
# Thread:

1. ## Squares of natural numbers in an arithmetic progression

Hi, I've got a problem solving this task:

"Prove that if there is a square of a natural number in an arithmetic progression of natural numbers, then there is an infinite number of squares of natural numbers in this progression."

I assume that the difference of this progression must also be natural (possibly 0) so that all the members are natural. I found out that any square of a natural number $x$ can be expressed as a sum of the first $x$ odd numbers, which form an arithmetic progression too (starting from 1, with difference 2): $x^2 = 1 + 3 + 5 + 7 + ... + (2x-1)$, or $\sum_{n=1}^{x} (2n-1)$. But I'm stuck at this point. Any suggestions on how to proceed?

2. Originally Posted by pinkparrot
Hi, I've got a problem solving this task:

"Prove that if there is a square of a natural number in an arithmetic progression of natural numbers, then there is an infinite number of squares of natural numbers in this progression."

I assume that the difference of this progression must also be natural (possibly 0) so that all the members are natural. I found out that any square of a natural number $x$ can be expressed as a sum of the first $x$ odd numbers, which form an arithmetic progression too (starting from 1, with difference 2): $x^2 = 1 + 3 + 5 + 7 + ... + (2x-1)$, or $\sum_{n=1}^{x} (2n-1)$. But I'm stuck at this point. Any suggestions on how to proceed?

Hint: if $a+kd=n^2,$ then: $(n+md)^2=a+(k+m^2d + 2mn)d.$
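To see the hint in action, here is a small numeric check in R (a sketch; the progression with $a=4$, $d=3$ is chosen arbitrarily and contains $25 = 5^2$):

```r
a <- 4; d <- 3; k <- 7; n <- 5             # a + k*d = 25 = n^2, so n^2 is in the progression
m <- 0:6
lhs <- (n + m * d)^2                        # candidate squares
rhs <- a + (k + m^2 * d + 2 * m * n) * d    # the same values written as terms of the progression
all(lhs == rhs)                             # TRUE: every (n + m*d)^2 lies in the progression
```

Since the index $k + m^2 d + 2mn$ grows with $m$ (when $d \ge 1$), these give infinitely many distinct squares in the progression, which is exactly what the problem asks for.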
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9167959690093994, "perplexity_flag": "head"}
http://mathhelpforum.com/algebra/118100-matrix-using-row-operations.html
Thread:

1. Matrix- using row operations

1.) 7x + 5y - z = 94
x + 5y + 2z = 52
2x + y + z = 29

2. Originally Posted by kelsikels
1.) 7x + 5y - z = 94
x + 5y + 2z = 52
2x + y + z = 29

Set up:
$\left(\begin{array}{ccc|c} 7 & 5 & -1 & 94 \\ 1 & 5 & 2 & 52 \\ 2 & 1 & 1 & 29 \end{array}\right)$

Move the 2nd row to the top row. You'll see why.
$\left(\begin{array}{ccc|c} 1 & 5 & 2 & 52 \\ 7 & 5 & -1 & 94 \\ 2 & 1 & 1 & 29 \end{array}\right)$

Take the 2nd row: subtract 7 times the top row from each entry.
$\left(\begin{array}{ccc|c} 1 & 5 & 2 & 52 \\ 0 & -30 & -15 & -270 \\ 2 & 1 & 1 & 29 \end{array}\right)$

Divide all entries in the 2nd row by -15.
$\left(\begin{array}{ccc|c} 1 & 5 & 2 & 52 \\ 0 & 2 & 1 & 18 \\ 2 & 1 & 1 & 29 \end{array}\right)$

Third row: subtract 2 times the top row from each entry.
$\left(\begin{array}{ccc|c} 1 & 5 & 2 & 52 \\ 0 & 2 & 1 & 18 \\ 0 & -9 & -3 & -75 \end{array}\right)$

Divide each entry in the 3rd row by -3.
$\left(\begin{array}{ccc|c} 1 & 5 & 2 & 52 \\ 0 & 2 & 1 & 18 \\ 0 & 3 & 1 & 25 \end{array}\right)$

Third row: subtract 3/2 times the 2nd row.
$\left(\begin{array}{ccc|c} 1 & 5 & 2 & 52 \\ 0 & 2 & 1 & 18 \\ 0 & 0 & -\frac{1}{2} & -2 \end{array}\right)$

Multiply the entries in the 3rd row by -2.
$\left(\begin{array}{ccc|c} 1 & 5 & 2 & 52 \\ 0 & 2 & 1 & 18 \\ 0 & 0 & 1 & 4 \end{array}\right)$

This can be solved by back substitution now. If you want to practice your row skills, find the answer by back substitution, then check yourself: find the answer directly by reducing it to this form:
$\left(\begin{array}{ccc|c} 1 & 0 & 0 & x \\ 0 & 1 & 0 & y \\ 0 & 0 & 1 & z \end{array}\right)$

You might also do the steps above yourself and check my work. Maybe I tossed in a monkey wrench somewhere along the way ...?
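For anyone who wants to confirm the arithmetic, a numerical solver reproduces the same answer as back substitution; for example, in R (a sketch):

```r
A <- matrix(c(7, 5, -1,
              1, 5,  2,
              2, 1,  1), nrow = 3, byrow = TRUE)
b <- c(94, 52, 29)
solve(A, b)   # 9 7 4, i.e. x = 9, y = 7, z = 4
```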
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8938398361206055, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/38205?sort=newest
## Markov chain convergence problem.

Consider a Markov chain matrix $P$ of size $n \times n$ ($n$ states). $P$ is known to satisfy:

1. There are at least two absorbing states; one of them is denoted by null (thus we have that $P_{\text{null},\text{null}} = 1$).
2. For the set of states that are not absorbing (called the set $H$), we have that $P_{h,\text{null}} > 0$ for all $h$ in $H$.
3. Not all states are recurrent.
4. Aperiodic (the return to some states can occur at irregular times).

Is it true that $P^n$ converges as $n$ goes to infinity? Is this result well known, or is the proof simple? Thanks.

## 2 Answers

Yes, uniqueness holds. Condition 2 implies that every state $j$ is either absorbing $(j\not\in H)$ or transient $(j\in H)$. Define the absorption time to be $T=\inf (n\geq 0: X_n\not\in H)$. This $T$ is almost surely finite for any starting state $i$; that is, the chain is eventually absorbed. If $j\in H$, then $p^n_{ij}=P_i(X_n=j)\leq P_i(T>n)\to 0=:Q_{ij}$ as $n\to\infty$. If $j\not\in H$, then $p^n_{ij}=P_i(X_n=j)\uparrow P_i(X_T=j)=:Q_{ij}$ as $n\to\infty$.

Yes. $P^n$ converges to a matrix $Q$ with (i) $Q_{i,i}=1$ for each $i\not\in H$ ($i$ is absorbing), and (ii) $\sum_{j\not\in H} Q_{i,j}=1$ for all $i\in H$. To see (ii) we need that $\sum_{j\not\in H}P^n_{i,j}\rightarrow 1$ for all $i\in H$. For this, note that $\sum_{j\not\in H}P^n_{i,j}$ is the probability of going from $i$ to an absorbing state in $n$ steps, and so if $P_{i,\text{null}}\ge\lambda>0$ for all $i\in H$ then for all $j\in H$,
$$P^n_{i,j}\le (1-\lambda)^n\rightarrow 0.$$
To get a unique such $Q$ we need to show, for each absorbing state (say, for null), that
$$\lim_n \ P^n_{i,\text{null}}\quad\text{exists}$$
for each $i\in H$. But $P^n_{i,\text{null}}\le P^{n+1}_{i,\text{null}}$ since once we get to null we stay there.

- Let's call another absorbing state foo. Can't we have in addition (1') $P_{i,\mathrm{foo}}>0$ for all $i \in H$? – Gerald Edgar Sep 9 2010 at 18:15
- @Gerald Edgar: You're right. I fixed my answer in response to your comment. – Bjørn Kjos-Hanssen Sep 9 2010 at 18:50
- Is the proof of uniqueness complete? It is not clear to me. Thanks. – Gerardo Sep 9 2010 at 20:30
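A tiny numerical illustration of the limit described in the answers (a base-R sketch; the 3-state chain and its probabilities are made up for the example):

```r
P <- matrix(c(1,   0,   0,     # state 1 = "null", absorbing
              0,   1,   0,     # state 2, another absorbing state
              0.3, 0.2, 0.5),  # state 3, transient, with P[3,1] > 0
            nrow = 3, byrow = TRUE)
Pn <- diag(3)
for (i in 1:200) Pn <- Pn %*% P   # crude stand-in for P^200
round(Pn, 6)   # third row tends to (0.6, 0.4, 0): absorption probabilities, no mass left in H
```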
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 32, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9272111058235168, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/61225?sort=newest
## How many “different” colorings (excluding exchanges) exist for a given map (graph)?

In particular I'm interested in regular maps, excluding all maps that can be colored with 2 or 3 colors. For what I need to analyze, maps have to be regarded as differently colored if the same coloring cannot be obtained by subsequent exchanges of colors. In other words, once a map has been properly colored, I don't want to count all other configurations that derive from subsequent exchanges of colors. Given the arbitrary nature of choosing colors, these derived configurations are equivalent (for what I'm analyzing) to the first one, since they could have been obtained just by choosing different colors in the first place. Instead, there are colorings that differ in such a way that exchanging colors won't help to transform one configuration into the other. In the following picture the graphs named (A) and (B) are the only ones that cannot be converted into one another by swapping colors.

http://4coloring.files.wordpress.com/2011/04/3-colored-in-12-different-ways.png

My question is: how many "different" colorings (in the meaning I explained) exist for a given map? I've only found an article on http://en.wikipedia.org/wiki/Graph_coloring that counts all possible colorings, including swaps. Is there a paper that can help me on this? I already posted it to "math stackexchange" but, so far, I haven't received the answer I was looking for.

- Let me see if I understand you correctly: you are looking at maps that are 4-colorable, but not 3-colorable. So given any coloring, you can simply move the colors around in exactly 4! ways (since you have to use all 4 colors) and get essentially the same coloring. So isn't the number you're looking for simply the number of colorings divided by 4! ? – Thierry Zell Apr 10 2011 at 18:11
- @all: Thanks for the info. Yes, it is the problem I'm facing. But to get the "number of colorings" the only method I found is to compute the chromatic polynomial, which is known only for few graphs and is hard to find for more complex cases. Do you know of papers that directly approach the computation of the "number of colorings without exchanges of colors"? I've implemented a brute force algorithm to color a given map with four colors. I'll try to extend it to find all possible colorings manually ... excluding exchanges. youtube.com/user/mariostefanutti#p/u/2/… – Mario Stefanutti Apr 12 2011 at 15:02

## 5 Answers

As a youthful folly I once wrote a paper "On the algebra of the four color problem", Ens. Math 11 (1965), 175-193. It can be found here: http://retro.seals.ch/digbib/view?rid=ensmat-001:1965:11::337&id=browse&id2=browse5&id3=1 In this paper the permutations of colors are systematically "quotiented out" via a certain homological process.

- Printed. I'm not that expert but I'll try to read it. Thanks! – Mario Stefanutti Apr 12 2011 at 15:07

How many colors do you want to have? I felt the question was ambiguous. If you want to have an arbitrarily fixed number of colors available, or if you want to have an arbitrarily fixed number of colors actually used, you will be computing something equivalent to the chromatic polynomial.
If you want only 4 colors, or only 5 colors, the problem need not be equivalent to the chromatic polynomial. I disagree with Emil Jerábek's statement that the number "(presumably) cannot be computed by any algorithm significantly more efficient than a brute-force search." It depends on what you mean by "significantly more efficient". In practical terms, a significant improvement might well be possible for relatively small graphs or graphs of special type. - By “significantly more efficient”, I meant computable in time $2^{n^{o(1)}}$. – Emil Jeřábek Dec 22 at 15:56 Counting the number of 4-colorings (or indeed, $k$-colorings for any fixed $k\ge3$) of a planar graph is a `$\#P$`-complete problem (as proved by Vertigan), hence it (presumably) cannot be computed by any algorithm significantly more efficient than a brute-force search. - Computer experiments yield some interesting numbers. The following numbers do not rule out the maps that are 3-colorable. Without thinking too deeply, my guess is that the number that would be ruled out would be extremely small, but this is only a guess. Under reasonable restrictions (valance three on all vertices, regions intersect in connected sets, sufficient connectivity, etc.) the number of such colorings is not large for numbers of regions where computation is feasible. For 12 regions (the current limit of our computations and patience), the maximum number of colorings is 172. The minimum, of course is 1. However, the second to largest number of colorings for 12 regions is 92. Then comes 85, then 84, then 76, then 64 and so forth. The smallest number of colorings that never appears with 12 regions is a highly suggestive 31. For 11 regions, the maximum is 85, then 48, then 44, then 41, then 40, then 29, ... The smallest number not to appear with 11 regions is an equally suggestive 15. If one starts to suspect a pattern, the smallest number not to appear with 10 regions is 10. With 10 regions the maximum is 44, then 28, then 21, then 20, etc. The estimated time to work with 13 regions with the current program is one month. It could definitely be made faster, but 14, 15, 16 regions are definitely out of reach. For the curious, the number of graphs investigated with 12 regions was 27360612. Certain obvious symmetries were not used to cut down the number but it is not clear how much the cut down would have been. Nothing beyond a factor of (somewhat less than) two was obvious. Note that the assumption that regions intersect in connected sets is a large assumption. Without that assumption, the number of colorings would explode. Bottom line: I too would be interested in any information about the total number (now known to always be at least one) of colorings, modulo permutations of the colors, for planar graphs. - Hi, how did you make these computations? I was planning to implement this feature into the program I'm building, but I'm having trouble to eliminate maps that "seems" different but that are actually the same map (Homeomorphic maps). See this other post: mathoverflow.net/questions/62328/… – Mario Stefanutti Jun 28 2011 at 14:48 Are you looking for a polynomial time solution? If not, you can find the number of colorings of the graph with n colors and divide it by n! to get the number of different colorings as you defined above. One way to find the number of colorings for each n would be to find the chromatic polynomial. http://en.wikipedia.org/wiki/Chromatic_polynomial - Thanks for the answer. I inserted a note to comment made by "Thierry Zell". 
It applies also to this comment. Thanks again. – Mario Stefanutti Apr 12 2011 at 15:06
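The "divide by $n!$" bookkeeping from the last answer is easy to check on one small example. The R sketch below takes the wheel over a 5-cycle (a planar map with six regions that needs four colors), counts its proper 4-colorings by brute force, and divides out the color permutations; because this graph is not 3-colorable, every proper 4-coloring uses all four colors, so dividing by $4!$ is exact here.

```r
edges <- rbind(cbind(1:5, c(2:5, 1)),   # the 5-cycle of outer regions
               cbind(1:5, 6))           # each outer region also touches the central hub
cols <- as.matrix(do.call(expand.grid, rep(list(1:4), 6)))   # all 4^6 color assignments
proper <- apply(cols, 1, function(cl) all(cl[edges[, 1]] != cl[edges[, 2]]))
sum(proper)                  # 120 proper 4-colorings in total
sum(proper) / factorial(4)   # 5 "different" colorings in the question's sense
```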
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9388248920440674, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?t=515315
## find subgroups of finitely generated abelian groups

Is there an "easy" method for finding subgroups of finitely generated abelian groups using the First Isomorphism Theorem? I seem to remember something like this but I can't quite get it.

For example, the subgroups of $G=Z_2\oplus Z$ are easy... you only have $0\oplus nZ$ and $Z_2\oplus nZ$ for $n\geq 0.$ But if you have a different group, say $G=Z_6\oplus Z_4$, it's possible the subgroups aren't of the form $\langle a\rangle\oplus\langle b\rangle$, correct? Like $\langle(2,2)\rangle$. How would you describe all the subgroups? I can do it by brute force... I'm looking for a quick, easier answer if one exists... even in only some situations.

EDIT: maybe this makes more sense if I only need to know subgroups of a specific index?

It seems you're looking for the subgroup lattice of finitely generated abelian groups? Well, the following article may help: http://www.google.be/url?sa=t&source...5OYWYg&cad=rja

Also keep in mind that if G and H are groups such that gcd(|G|,|H|)=1, then $Sub(G\times H)\cong Sub(G)\times Sub(H)$. So in your example
$$Sub(\mathbb{Z}_6\times \mathbb{Z}_4)\cong Sub(\mathbb{Z}_3)\times Sub(\mathbb{Z}_2\times \mathbb{Z}_4)$$
so you only need to find the subgroups of $\mathbb{Z}_2\times \mathbb{Z}_4$. The cyclic subgroups of this group are
$$\{(0,0)\},\langle(1,0)\rangle,\langle(0,1)\rangle,\langle(0,2)\rangle,\langle(1,1)\rangle,\langle(1,2)\rangle$$
so all the subgroups are just products of the above groups.
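Since $\mathbb{Z}_2\times \mathbb{Z}_4$ has only 8 elements, its subgroups can also be enumerated by brute force. Below is a rough base-R sketch (the helper names are made up for the example); it relies on the fact that a nonempty subset of a finite group closed under the operation is automatically a subgroup:

```r
elems <- expand.grid(a = 0:1, b = 0:3)     # the 8 elements of Z_2 x Z_4
id <- which(elems$a == 0 & elems$b == 0)
add <- function(i, j) which(elems$a == (elems$a[i] + elems$a[j]) %% 2 &
                            elems$b == (elems$b[i] + elems$b[j]) %% 4)
closed <- function(S) {                    # is S closed under addition?
  for (i in S) for (j in S) if (!(add(i, j) %in% S)) return(FALSE)
  TRUE
}
subsets <- lapply(1:(2^8 - 1), function(m) which(bitwAnd(m, 2^(0:7)) > 0))
subgroups <- Filter(function(S) (id %in% S) && closed(S), subsets)
length(subgroups)   # 8: the trivial group, 3 of order 2, 3 of order 4, and the whole group
```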
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8894826173782349, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/31332/provably-intractable-problems/31344
## Provably intractable problems ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Let f(n) be a space-constructible superpolynomial function. Then BQP $\subseteq$ PSPACE $\subset$ SPACE(f(n)), so in particular, SPACE(f(n)) $\not\subseteq$ BQP. Let L be a problem such that every problem in SPACE(f(n)) is BQP-reducible to L. Then L $\notin$ BQP. Are there any problems that have been proven to not be in BQP for which that is not known to be provable by the above method? - For PSPACE $\subset$ SPACE(f(n)), see en.wikipedia.org/wiki/Space_hierarchy_theorem – Ricky Demer Jul 10 2010 at 21:44 ## 1 Answer There are few complexity class separations known which do not follow from some type of diagonalization (a complexity hierarchy theorem of some kind). I know of none for $\mathbf{BQP}$. One canonical example of a separation that doesn't seem to follow from a diagonalization argument is $\mathbf{AC}^0 \subsetneq \mathbf{NC}^1$, which instead follows from the Ajtai-Furst-Saxe-Sipser theorem that the parity of $n$ bits does not have polynomial size circuits of unbounded fan-in and constant depth. Now, if by "the above method" you meant something more specific than just diagonalization, then there is just a little something else you can say about $\mathbf{BQP}$. Adleman, DeMarrais, and Huang proved that $\mathbf{BQP} \subseteq \mathbf{PP}$: Leonard M. Adleman, Jonathan DeMarrais, Ming-Deh A. Huang: Quantum Computability. SIAM J. Comput. 26(5): 1524-1540 (1997) (Recall that $\mathbf{PP}$ consists of languages recognized by randomized polynomial time algorithms with "exponential precision". Without loss of generality, we may say that an input is "accepted" by such an algorithm if and only if the probability of outputting $1$ is strictly greater than $1/2$. Note this probability could be $1/2+1/2^{n^{\Omega(1)}}$. It is known that $\mathbf{PP} \subseteq \mathbf{PSPACE}$, but the other direction is unknown.) Just like $\mathbf{PSPACE}$ has a superpolynomial analogue $\mathbf{SPACE}(f(n))$, $\mathbf{PP}$ has a superpolynomial analogue $\mathbf{PTIME}(f(n))$, so in your above argument you can replace $\mathbf{SPACE}(f(n))$ with $\mathbf{PTIME}(f(n))$. Note the latter is contained in the former. - By "the above method", I meant space hierarchy theorem with PSPACE at the bottom. I did know BQP $\subseteq$ PP, slightly better is BQP $\subseteq$ AWPP. The relevant questions are how good are the hierarchy theorems for PTIME and AWPTIME. – Ricky Demer Jul 11 2010 at 2:32 For PTIME(f), since there is an effectively computable list of all PTIME(f) machines, the time hierarchy will be quite tight, certainly no worse than the hierarchy for DTIME(f). On the other hand, AWPP is a "semantic" class and so I'd imagine the known time hierarchies there would mimic that of BPP and other such classes. They are strictly weaker. A pretty good survey of these issues can be found at: eccc.hpi-web.de/report/2007/004 – Ryan Williams Jul 11 2010 at 3:47 The issue I can't find a way around is that switching the answer of each path won't change the answer the machine gives if the paths split exactly. – Ricky Demer Jul 11 2010 at 4:54 Let p in (0,1) be arbitrary. One can redefine PP so that the acceptance condition becomes: probability of outputting 1 in the algorithm is strictly greater than p. (Consider what happens when you allow an exponential number of extra "dummy" computation paths that ignore the input and always accept...) This makes it easy to do the complementation. 
– Ryan Williams Jul 11 2010 at 5:44 I managed to work out the proof without having to use that. – Ricky Demer Jul 11 2010 at 8:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9062830209732056, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/36145/list
## Return to Answer

2 added 4 characters in body

The paper M. L. Cartwright, J. E. Littlewood. On non-linear differential equations of the second order. I. The equation $y''-k(1-y^2)y+y=b\lambda k\cos(\lambda t+a)$ J. London Math. Soc. 20, (1945) was not only written during the war, but also was stimulated by the war. Subsequently it played an important role in prehistory of hyperbolic dynamics. In 1960 Stephen Smale conjectured that Morse-Smale systems are the only structurally stable systems. It was pointed out to Smale that his conjectures are likely to be false. Rene Thom argued that hyperbolic automorphism does not lie in the closure of Morse-Smale systems. Norman Levinson wrote to Smale with a reference to the above paper in which Cartwright and Littlewood studied certain differential equation of second order with periodic forcing. This work arose from war-related studies involving radio waves. The equation leads to a flow on R3. According to Levinson this flow has infinitely many periodic orbits; this phenomenon is robust which can be seen from the paper and also it was directly proved for a different equation in his own work. This led Smale to discovery of the famous horseshoe and subsequent explosive development in smooth dynamics.

1 [made Community Wiki]

The paper M. L. Cartwright, J. E. Littlewood. On non-linear differential equations of the second order. I. The equation $y''-k(1-y^2)y+y=b\lambda k\cos(\lambda t+a)$ J. London Math. Soc. 20, (1945) was not only written during the war, but also was stimulated by the war. Subsequently it played an important role in prehistory of hyperbolic dynamics. In 1960 Stephen Smale conjectured that Morse-Smale systems are the only structurally stable systems. It was pointed out to Smale that his conjectures are likely to be false. Rene Thom argued that hyperbolic automorphism does not lie in the closure of Morse-Smale systems. Norman Levinson wrote to Smale with a reference to the above paper in which Cartwright and Littlewood studied certain differential equation of second order with periodic forcing. This work arose from war-related studies involving radio waves. The equation leads to a flow on R3. According to Levinson this flow has infinitely many periodic orbits; this phenomenon is robust which can be seen from the paper and also it was directly proved for a different equation in his own work. This led Smale to discovery of the famous horseshoe and subsequent explosive development in smooth dynamics.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9714000225067139, "perplexity_flag": "middle"}
http://crypto.stackexchange.com/questions/2686/how-can-encryption-involve-randomness/2687
# How can encryption involve randomness? If an encryption algorithm is meant to convert a string to another string which can then be decrypted back to the original, how could this process involve any randomness? Surely it has to be deterministic, otherwise how could the decryption function know what factors were involved in creating the encrypted string? - If the ciphertext is longer than the plaintext, you can fit additional information in there. – CodesInChaos May 23 '12 at 21:44 So people will not know which segments of the ciphertext are the real information? – CJ7 May 24 '12 at 0:11 – Paŭlo Ebermann♦ May 27 '12 at 0:08 ## 2 Answers Well, the idea behind randomized encryption is that a single plaintext $P$ can encrypt into many different ciphertexts $C_1, C_2, ..., C_n$, and that when we encrypt, we pick one of those ciphertexts randomly. Of course, because the decryptor has no way to knowing apriori which one we picked, it must be able to map any of those ciphertexts back into the original plaintext. If the ciphertext $C_i$ was exactly as long as the plaintext $P$, then there would be an obvious problem; if the plaintext was $k$ bits long (and hence there are $2^k$ distinct plaintexts), and there are $n$ ciphertexts for each plaintext, we have $n2^k$ ciphertexts (which must all be distinct), and only $2^k$ bit patterns available to express them. The obvious solution to this is that the ciphertexts must be longer than the corresponding plaintext. In particular, if each ciphertext was at least $\log n$ bits longer, then everything fits nicely; we have $n2^k = 2^{k + \log n}$ ciphertexts and $2^{k + \log n}$ bit patterns to express them. Now, the obvious question is: why does anyone bother? The answer to that is, well, it provides better protection than deterministic methods. It is generally the case that we'll send multiple messages with the same key. If we happen to send the same message twice, deterministic encryption would make that obvious to the attacker (because the ciphertexts will be exactly identical), and that is information we'd rather the attacker not have. Even if we'll never send the same message twice, we may send related messages. While it is possible to design a deterministic encryption method that doesn't leak any information when given related messages, it's harder than you'd think. In contrast, the goal behind nondetermanstic encryption is to make all the messages look perfectly random (even if we decide the send the same message multiple times); that turns out to be a rather easier goal to achieve. One common way nondetermanistic encryption is implemented with CBC mode; the encryptor chooses a random block (known as an IV; 128 bits if he is using AES), and uses that to encrypt the message. He sends the IV along with the encrypted message to the decryptor, who can uniquely decrypt the message. One nice property about CBC mode is that it is easy to prove that if the IV is chosen randomly, and that the underlying block cipher is secure, then an attacker cannot distinguish the encryption from a random source. - Say you have an algorithm whose security properties are not very good if a lot of fairly predictable data is encrypted with the same key. You can fix this by adding randomness to the process. You encrypt like this: 1. You generate a random key. 2. You encrypt the data with that random key. 3. You encrypt the random key with the shared key. 4. You send the encrypted data from step 2 along with the encrypted key from step 3. You decrypt like this: 1. 
You decrypt the encrypted random key with the shared key. 2. You decrypt the encrypted data with the random key that you just decrypted. The shared key that is re-used is only used to encrypt random data. The possibly predictable input is only encrypted with a random key that is never reused. Now, even an attacker who gets to choose what data you encrypt has no control over what data is encrypted with the persistent key. Also, this means that an attacker cannot tell if two encrypted outputs correspond to the same plaintext just by comparing them. If an attacker can get the system to encrypt either the plaintext he suspects and he intercepts the ciphertext, he can tell what the original message encrypted by simple comparison. Adding randomness defeats this attack too. For example, an attacker could intercept a daily encrypted message that was always "nothing to report". An attacker could just wait for the day the ciphertext is different from the previous day and infer that the plaintext has changed and therefore that something was happening. You can defeat this by including something like a serial number or something in the plaintext, but this creates the complexity of having to figure out what's adequate to accomplish that task, requiring the people who compose the plaintext to thoroughly understand the properties of the encryption algorithm. If it's a block cipher, if you do "nothing to report - 10:43PM January 7", will the first few bytes of the encrypted message match if the next message is "nothing to report - 10:43PM January 8" (This is a good reason never to encrypt anything but random data with a key you plan to reuse.) This is just a simple example, but it shows two things. First, it shows how an encryption process can be perfectly reversible but still involve randomness. Second, it shows how using randomness in an encryption process can improve the security properties. - This normally is the reason one uses an initialization vector, not so much to change keys very often. – Paŭlo Ebermann♦ May 24 '12 at 7:50 @PaŭloEbermann: You can think of the persistent key and the IV jointly as the "key" used to encrypt the data. – David Schwartz May 24 '12 at 12:28
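Here is a toy R sketch of that four-step structure, purely to illustrate the data flow. The "cipher" is a throwaway XOR keystream, the names (`toy_encrypt`, `encrypt_message`, ...) are invented for the example, R's `sample()` is not a cryptographic random source, and none of this is real cryptography:

```r
toy_keystream <- function(key, n) {          # stand-in for a real cipher: a tiny LCG
  s <- sum(key) %% 65537
  ks <- integer(n)
  for (i in seq_len(n)) { s <- (75 * s + 74) %% 65537; ks[i] <- s %% 256 }
  ks
}
toy_encrypt <- function(key, bytes) bitwXor(bytes, toy_keystream(key, length(bytes)))
toy_decrypt <- toy_encrypt                   # XOR with the same keystream undoes itself

shared_key <- c(19, 55, 202, 254)            # long-term shared key, reused for every message
msg <- as.integer(charToRaw("nothing to report"))

encrypt_message <- function(shared_key, msg) {
  session_key <- sample(0:255, 16, replace = TRUE)    # step 1: fresh random key
  list(data = toy_encrypt(session_key, msg),          # step 2: encrypt the data with it
       key  = toy_encrypt(shared_key, session_key))   # step 3: encrypt the random key
}                                                     # step 4: send both parts together
decrypt_message <- function(shared_key, ct) {
  session_key <- toy_decrypt(shared_key, ct$key)      # recover the random key first
  rawToChar(as.raw(toy_decrypt(session_key, ct$data)))
}

ct1 <- encrypt_message(shared_key, msg)
ct2 <- encrypt_message(shared_key, msg)
identical(ct1$data, ct2$data)    # FALSE (almost surely): same plaintext, different ciphertexts
decrypt_message(shared_key, ct1) # "nothing to report"
```

The point of the sketch is only the structure: the reused shared key never touches the predictable plaintext, and encrypting the same message twice produces different ciphertexts because a fresh session key is drawn each time.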
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9310130476951599, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/90700/where-is-number-theory-used-in-the-rest-of-mathematics/90711
## Where is number theory used in the rest of mathematics? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Where is number theory used in the rest of mathematics? To put it another way: what interesting questions are there that don't appear to be about number theory, but need number theory in order to answer them? To put it another way still: imagine a mathematician with no interest in number theory for its own sake. What are some plausible situations where they might, nevertheless, need to learn or use some number theory? Edit It was swiftly pointed out by Vladimir Dotsenko that the borders between number theory and algebraic geometry, and between number theory and algebra, are long and interesting. One could answer the question in many ways by naming features on that part of the mathematical landscape. But I'd be most interested in hearing about uses for number theory that aren't so obviously near the borders of the subject. Background In my own work, I often find myself needing to learn bits and pieces of other parts of mathematics. For instance, I've recently needed to learn new bits of analysis, algebra, topology, dynamical systems, geometry, and combinatorics. But I've never found myself needing to learn any number theory. This might very well just be a consequence of the work I do. I realized, though, that (independently of my own work) I knew of no good answer to the general question in the title. Number theory has such a long and glorious history, with so many spectacular achievements and famous results, that I thought answers should be easy to come by. So I was surprised that I couldn't think of much, and I look forward to other people's answers. - 1 This is too empty for an answer, so I'll just type a comment. 1) For your third interpretation I have at least one relevant experience of my own - "factor systems" (some) number theorists deal with when talking about central simple algebras over number fields did contribute to the development of homological algebra in general, and learning that somehow expanded my homological algebra horizons a bit. 2) Sometimes, it's hard to say where the border between number theory, commutative algebra, and algebraic geometry lies. Around that "triple point" there should be many potential answers. – Vladimir Dotsenko Mar 9 2012 at 15:05 2 For instance Number Theory is used in Algebraic Geometry when studying problems in characteristic $p$. But even in characteristic 0, some algebraic varieties arise from arithmetic constructions. Hilbert modular surfaces or Shimura varieties are examples of this situation. Also, the recent classification of fake projective planes by Prasad & Yeung (2007) uses Number Theory in a crucial way. I think there are countless examples like these – Francesco Polizzi Mar 9 2012 at 15:06 1 Also the study of the Ring of Endomorphisms for abelian varieties needs the understanding of some Number Theory, especially number fields and quaternion algebras. – Francesco Polizzi Mar 9 2012 at 15:09 4 Thanks very much for the comments. Can I suggest that people write this kind of thing as answers rather than comments, though? That way, replies to your comments are organized more neatly. – Tom Leinster Mar 9 2012 at 15:11 1 Number theory is used in the representation theory of finite groups to address rationality questions. Algebraic integrality seems to come up just about everywhere. 
## 35 Answers

Here are a few examples. In some, number theory provided an essential motivation. In the others, it plays a more direct role.

1) Are there nonisometric Riemannian manifolds that are isospectral (eigenvalues of the Laplacian match, including multiplicities)? An example was given by Milnor in the 1960s, which depended on prior work of Witt involving theta-functions (modular forms) of lattices. In the 1980s, Sunada created examples systematically by exploiting the analogy with the number theorist's construction of pairs of nonisomorphic number fields that have the same zeta-function. These number field pairs are found with Galois theory (find a finite group $G$ admitting a pair of nonconjugate subgroups having appropriate properties and then find a Galois extension of the rationals with Galois group isomorphic to $G$). There is a well-known analogy between Galois theory and covering spaces, and Sunada used this to translate the group-theoretic conditions for Galois groups into the setting of Riemannian manifolds. For more on this story, see the Wikipedia page here, where you'll see that the nonisometric isospectral pairs found between the work of Milnor and Sunada were closely related to other parts of number theory (quaternion algebras over the rationals).

2) Lens spaces are distinguished from each other using quadratic residues.

3) Knot theory uses continued fractions. See one of the answers to the MO question here. (Some of the other answers to that question could also be regarded as more applications of number theory, to the extent that you consider finite continued fractions to be number theory.)

4) The construction of Ramanujan graphs uses number theory. Also look here.

5) Frobenius proved that the only ${\mathbf R}$-central division algebras that are finite-dimensional are ${\mathbf R}$ and the quaternions. If you want to see infinitely many other examples of noncommutative division rings that are finite-dimensional over their centers, especially if you want examples that are more than 4-dimensional, you probably should learn number theory since the simplest examples come from cyclic Galois extensions of the rationals. Verifying the examples really work requires knowing a rational number is not a norm from a particular number field, and that amounts to showing a certain Diophantine equation has no rational solutions.

6) The classical induction theorems of Artin and Brauer about representations of finite groups were motivated by the desire to prove Artin's conjecture on Artin $L$-functions. Although number theory appears in the proof in the context of algebraic integers, the main point I want to make is that a conjecture from number theory provided an essential motivation to imagine the theorems might be true in the first place.

7) Several concepts of general importance in mathematics were originally developed within number theory. The most prominent example is ideals, which were first defined by Dedekind in his work on algebraic number theory. The first examples of finite abelian groups were unit groups mod $m$ and class groups of quadratic forms. The first finitely generated abelian groups to be studied as such were unit groups in number fields (Dirichlet's unit theorem). The first application of the pigeonhole principle was in Dirichlet's proof of the solvability of Pell's equation. The motivation for Steinitz's 1910 paper setting out a general theory of fields was Hensel's creation of $p$-adic numbers.
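(Editorial aside, not part of the original answer: items 3 and 7 above both touch classical machinery that is easy to play with directly. The Python sketch below is an illustration added here, not anything from the thread; it finds the fundamental solution of Pell's equation $x^2 - dy^2 = 1$ by walking along the continued-fraction convergents of $\sqrt{d}$.)

```python
from math import isqrt

def pell_fundamental(d):
    """Fundamental solution of x^2 - d*y^2 = 1 for a positive nonsquare d,
    found among the continued-fraction convergents of sqrt(d)."""
    a0 = isqrt(d)
    m, den, a = 0, 1, a0
    h_prev, h = 1, a0          # numerators of the convergents of sqrt(d)
    k_prev, k = 0, 1           # denominators of the convergents
    while h * h - d * k * k != 1:
        # standard recurrence for the continued fraction of sqrt(d)
        m = den * a - m
        den = (d - m * m) // den
        a = (a0 + m) // den
        h, h_prev = a * h + h_prev, h
        k, k_prev = a * k + k_prev, k
    return h, k

for d in (2, 3, 13, 61):
    x, y = pell_fundamental(d)
    assert x * x - d * y * y == 1
    print(d, (x, y))
```

For $d = 61$ the smallest solution already runs to ten digits, which is why the continued-fraction convergents, rather than brute-force search, are the natural tool.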
- 1 Wonderful! Thanks. – Tom Leinster Mar 9 2012 at 19:25

Already one of the Bernoullis, I think Daniel, asked Euler what number theory was good for. He replied that solving diophantine equations such as $x^2 + y^2 = 1$ and similar ones via rational parametrizations would allow one to transform integrals $\int dx/\sqrt{1-x^2}$ into rational ones. Perhaps not that exciting today, but it's a time-honoured answer.
- 2 @Franz Lemmermeyer: Could you please elaborate more on this? – John Jiang Mar 14 2012 at 3:14
3 The standard rational parametrization of the unit circle gives x = 2t/(1+t^2) and y = \sqrt{1-x^2} = (1-t^2)/(1+t^2). Substitute this into the integral and you get an integral over a rational function. – Franz Lemmermeyer Mar 14 2012 at 18:34

If you want to classify 3-manifolds, then at some point you need to start caring about number theory. Most (closed, aspherical, atoroidal) 3-manifolds are hyperbolic, and so arise as the quotient of hyperbolic 3-space by a Kleinian group. Understanding such groups requires number theory. (I'm not an expert on this topic, so I encourage others to edit this answer to make it better.)

- An old example from differential topology: In surgery theory -- the work of Kervaire and Milnor on exotic spheres -- you need to know a sufficient condition for a homogeneous quadratic equation over $\mathbb Z$ in many variables to have a nontrivial solution.
- 3 I feel like many applications of number theory are about using a known result from number theory. One thing I'm really curious about is: is there some situation where a purely geometric problem was turned into a hard and unsolved question in number theory and stimulated research on both sides? Or has number theory always moved ahead compared with other subjects? – temp Mar 9 2012 at 18:50
3 @temp: After the topological research of Milnor led him to think about quadratic forms, he did a lot of wonderful research on his own on this algebraic/number-theoretic subject, inventing his K_2, proposing his famous Milnor's conjecture now solved using mixed motives, etc. (BTW, this story is told in the last Notices of the AMS in an interview of John Milnor). What I am not sure of is whether any of the new results by Milnor on quadratic forms had topological applications (which, if that was true, would answer nicely your question). – Joël Mar 10 2012 at 14:28
5 @temp: A purely geometric problem from ancient Greece was the construction of regular n-gons by straightedge and compass. Gauss turned the problem into number theory: it can be done iff n is a power of 2 times a product of distinct Fermat primes. Moreover, Gauss went back to the geometry and indicated that the same ideas could be applied to equal subdivision of the lemniscate, without giving the details. Abel worked on this, and it later developed into the theory of complex multiplication. – KConrad Mar 11 2012 at 3:21

Assume you are a (differential) geometer and you want to construct locally symmetric spaces of higher rank. Such a space must have a (globally) symmetric space $X$ as its universal covering space, and this can be written as $X=G/K$ where $G$ is the identity component of the isometry group of $X$ and $K$ is the stabiliser of some point in $X$. To get a locally symmetric space of finite volume, you then have to find a lattice $\Gamma\subset G$, i.e.
a discrete subgroup such that $\Gamma\backslash G$ has finite volume with respect to the (right-invariant) Haar measure on $G$. Then if $\Gamma$ is torsion-free you get $\Gamma\backslash X$ as a locally symmetric space. Now how does one construct such lattices? One method is by arithmetic groups, and it is a matter of taste whether you want to consider them as objects of number theory. Suffice it to say that their study requires a lot of techniques from other areas in number theory in a broad sense. It is quite technical to define an arithmetic group, but it is easy to give some examples that already give you some flavour: `$\mathrm{SL}_n(\mathbb{Z})$` as a lattice in `$\mathrm{SL}_n(\mathbb{R})$`, similarly `$\mathrm{Sp}_{q}(\mathbb{Z})$` in `$\mathrm{Sp}_q(\mathbb{R})$` or more elaborate constructions where you start from an algebraic number field and an algebra over that field and take some subgroup of the automorphism group of that algebra. Note that in the two cases I presented to you the lattices are not torsion-free, but you can find finite index subgroups which are torsion-free and therefore give you locally symmetric spaces. As I said there is a fairly technical definition of an arithmetic lattice in a Lie group, and the first guess of everybody hearing of this for the first time is that this should be something exceptional - why should a "generic" lattice be constructible by number-theoretic methods? And indeed, the example of $\operatorname{SL}_2(\mathbb{R})$ supports that guess. The associated symmetric space $\operatorname{SL}_2(\mathbb{R})/\operatorname{SO}(2)$ is the hyperbolic plane $\mathbb{H}^2$. There are uncountably many lattices in $\operatorname{SL}_2(\mathbb{R})$ (with the associated locally symmetric spaces being nothing other than Riemann surfaces), but only countably many of them are arithmetic. But in higher rank Lie groups, there is the following truly remarkable theorem known as Margulis arithmeticity: Let $G$ be a connected semisimple Lie group with trivial centre and no compact factors, and assume that the real rank of $G$ is at least two. Then every irreducible lattice $\Gamma\subset G$ is arithmetic. - 2 @Asaf: I do not agree. The easiest constructions are certainly those from tilings of the hyperbolic plane by hyperbolic polygons, like triangle groups. – Robert Kucharczyk Mar 10 2012 at 17:55 show 4 more comments The book The Unreasonable effectiveness of number theory mentions amongst other things that number theory crops up in stability questions in dynamical systems, e.g. the small-divisor problem in KAM-theory where diophantine approximation is used. Another book with several applications is: Number Theory: An Introduction to Mathematics whose blurb contains "As a source for information on the 'reach' of number theory into other areas of mathematics, it is an excellent work." - 1 And small-divisors problem happen also in some nonlinear initial-boundary value problems for PDEs. They prevent from applying the Implicit Function Theorem. – Denis Serre Mar 9 2012 at 15:54 show 3 more comments The http://en.wikipedia.org/wiki/Feit-Thompson_conjecture is about a diophantine equation whose non-solvability would simplify the proof of the Feit-Thompson theorem. - Do you count the application of prime number theory and factorization in cryptography? - 2 I guess so! Internet credit card security is the example that everyone seems to use to justify the benefits of pure mathematics, so I shouldn't have forgotten that one... 
– Tom Leinster Mar 9 2012 at 16:48 What is your idea about a problem in number theory that says: $\frac{p^q-1}{p-1}$ never divides $\frac{q^p-1}{q-1}$ if $p,q$ are distinct primes. This is a $\textbf{conjecture}$ and the validity of this conjecture would simplify the proof of solvability of groups of odd order, (W. Fiet, J. G. Thompson, $\textit{Pacific J. Math.}$, 13, no.3 (1963), 775-1029), rendering unnecessary the detailed use of generators and relations. An other interesting application of number theory is in real world. Many years ago, cables used for communication. A lot of cables must be gathered near to each other for more efficiency. But for blocking the noises of each cable to the other cable, a special arrange of cables needed. For this arrangement and neighboring of cables, scientist used reminder theorem and number theory. I think first time, Bell company's scientists invented it. Also, I think this relation between group theory, graph theory and number theory is very nice example: Suppose the order of group $G$, $|G|$, is $n=p_1^{k_1}p_2^{k_2}\ldots p_s^{k_s}$. We fix two prime numbers $p_i$ and $p_j$ of divisors of $n$ and define a graph $\Gamma(G)$ as fallow: The vertices of $\Gamma(G)$ are the elements of group $G$ and two vertices $g_1$ and $g_2$ are adjacent if and only if $o(g_1g_2)=p_ip_j$, where $o$ means the order of element $g_1g_2$ as a group element. These graphs have very nice structures and well defined as $\textit{Prime Graph}$ of group. - It is used in homotopy theory (topological modular/automorphic forms). Class field theory is used e.g. in this article http://front.math.ucdavis.edu/math.AT/0607665 of Niko Naumann. Also, it is a trend in algebraic geometry to reduce geometric questions to finitely generated fields, so to the realm of arithmetic geometry. - 2 The article you cite is, in principle, pure arithmetic geometry; but, of course, you're right and it has applications in homotopy theory in the work of Behrens and Lawson. Another example is the theory of Lubin -Tate formal group laws, which is essential in modern stable homotopy theory. Congruences between modular forms, also a number theory classic, came up recently in connection with the f-invariant in topology. – Lennart Meier Mar 9 2012 at 22:39 There is some interesting, recent work on the PORC conjecture (which is about Group Theory, specifically, finite $p$-groups). In some sense this is treading close to number theory even in its statement, I guess: PORC Conjecture (Higman) Let $n$ be a fixed positive integer. The number $f(p,n)$ of nonisomorphic $p$-groups of order $p^n$ is given by a polynomial in $p$ whose coefficients depend only on the residue class of $p$ modulo some fixed $N$. (PORC="Polynomial On Residue Classes") The statement itself makes reference to number theory, of course; but the recent work, by du Sautoy and Vaughan-Lee (Non-PORC behaviour of a class of descendant $p$-groups, in the arXiv) delves deep into number theory and arithmetic geometry (as does a previous paper by du Sautoy associating the problem of counting nilpotent groups with elliptic curves). - One of the first conditional proofs of the undecidability of Hilbert's tenth problem, by Davis and Putnam in 1959, assumed the existence of arbitrarily long arithmetic progressions of primes, as well as a more technical conjecture of Julia Robinson. 
The number-theoretic hypothesis was soon removed by Robinson (basically by replacing primes by almost primes), and the latter hypothesis removed by Matiyasevich, leading to his famous theorem. So I guess this is an example of how number theory was used in the rest of mathematics. (The existence of arbitrarily long arithmetic progressions of primes was eventually proven several decades later, although the techniques used in the proof were mostly analytical rather than number-theoretic.) - 3 I was thinking of this "example", too. Nevertheless, I decided not to enter it because I arrived at the conclusion that Hilbert's problem was in some sense a number-theoretical one (a tough one at that). Anyway, I suppose that I'm misunderstanding something here. P.S. The second link doesn't work. – J. H. S. Mar 10 2012 at 19:43 2 Another number-theoretic aspect of the work on Hilbert's 10th problem was the description of all solutions to Pell's equation. – KConrad Mar 10 2012 at 21:07 1 My personal view is that the study of special Diophantine equations is number theory, but the study of general Diophantine equations is logic or computability theory. (Admittedly, this viewpoint is only really defensible once one has results such as Matiyasevich's theorem.) – Terry Tao Mar 13 2012 at 2:16 9 "...was eventually proven several decades later..." such modesty! – David Roberts Mar 14 2012 at 4:49 show 2 more comments Number theory naturally arises when analysing nonlinear partial differential equations on the torus, basically one wants to understand the extent to which nonlinear resonances between frequencies can occur, and such frequencies live on an integer lattice if the spatial domain is a torus, so one is naturally led to questions of counting lattice points in some explicit algebraic (or semi-algebraic) set, which is a problem that can often be tackled by number-theoretic methods. For instance, the basic divisor bound in number theory - that the number of divisors of a large integer n is $O(n^{o(1)})$ - already leads to some highly useful consequences for such equations; I discuss this point briefly at http://terrytao.wordpress.com/2008/09/23/the-divisor-bound/ . (This application of number theory is not unrelated to the use of number theory to understand small divisors in dynamics, as alluded to in other responses.) - Here is another example from dynamical systems. As has already been explained in some other posts, the fine structure of dynamical systems often depends on subtle number-theoretic properties of some involved constants. Assume you have a domain $G\subseteq\mathbb{C}$ and a holomorphic map $f\colon G\to G$ with a fixed point $z\in G$, and you want to study how successive iterates of $f$ around $z$ behave. For sake of simplicity assume that $z=0$. Now locally around $0$ we can approximate $f$ by a linear function $\zeta\mapsto a\zeta$ where $a=f'(0)$, so a simple heuristic says that if we want to understand high powers of $f$, we should understand high powers of $a$. It is then quite clear that the only interesting case is $|a|=1$, so let us assume $a=\mathrm{exp}2\pi it$. Then there are some relations between the growth of the entries in the continued fraction expansion of $t$ and the behaviour of $f$ around $z=0$ under iteration. You can read more about this in the book "Complex Dynamics in one Variable" by John Milnor. Keywords are Siegel disks and Brjuno numbers. - I will add to Kevin Walker's and Robert Kucharczyk's responses. 
I am drawing the following from a nice survey paper by Marc Lackenby, Finite covering spaces of 3-manifolds, in Proceedings of the International Congress of Mathematicians in Hyderabad, India, 2010. There are several open questions about finite covering spaces of hyperbolic 3-manifolds, such as whether every hyperbolic 3-manifold has a finite cover that: has positive first Betti number $b_1$, admits a $\pi_1$-injective embedded surface, or fibers over the circle, for example. Arithmetic lattices seem to be useful in achieving partial results to some of these questions because they permit the use of tools from number theory. For example, Lackenby quotes the following results about arithmetic hyperbolic manifolds (i.e., $\mathbb{H}^3$ modulo an arithmetic lattice): • Every arithmetic hyperbolic 3-manifold admits a closed orientable immersed $\pi_1$-injective surface. • Let an arithmetic hyperbolic 3-manifold $M$ have an invariant trace field $k$ and quaternion algebra $B$. If at every finite place $\nu$ where $B$ ramifies, the completion $k_\nu$ contains no quadratic extension of $\mathbb{Q}_p$ ($p$ a rational prime with $\nu$ dividng $p$), then $M$ has a finite cover with positive $b_1$. • If an arithmetic hyperbolic 3-manifold $M$ has $b_1>0$, then $M$ has finite covers with arbitrarily large $b_1$. • If an arithmetic hyperbolic 3-manifold $M$ contains a closed immersed totally geodesic surface, then $M$ has a finite cover which fibers over the circle. One can also use number theory to construct finite covers ("congruence covers") of any hyperbolic 3-manifold through a process I do not really understand. I understand that The Arithmetic of Hyperbolic 3-manifolds, by MacLachlan and Reid, is a good resource for the use of number theory in this subject. You might also look at the survey I mentioned above if you want something briefer. - 1 The 3-manifold questions addressed in the 2nd paragraph have been resolved, due to the work of Kahn-Markovic, of Wise, and of Agol. – Lee Mosher Aug 21 at 19:54 To develop a point already mentioned: To some extent algebraic geometry, including complex algebraic geometry, is a part of number theory. The reason for this is that any algebraic variety, say over $\mathbb{C}$, is defined by polynomial equations involving only finitely many coefficients, so the coefficients live in a ring which is a finitely generated $\mathbb{Z}$-algebra $A$, and $A$ belongs to the domain of number theory. For example, if $m$ is a maximal ideal of $A$, then $A/m$ is a finite field, and the intersection of all maximal ideals of $A$ is $(0)$, so studying "varieties" over $A$ can be reduced in principle to studying varieties over finite fields. Applications of this method to prove results in complex algebraic geometry by means of number-theoretic methods are many. For a few, relatively elementary but still striking, examples (including the theorem of Ax-Grothendieck and the existence of a fixed point for a $p$-group acting algebraically on $\mathbb{C}^n$), see e.g. this survey paper of Serre. For a more advanced example, consider the beautiful theorem of Batyrev, that birational Calabi-Yau $n$-folds have equal Betti numbers, which is proved by reducing the question to a question over finite field solved using Deligne's proof of the (last) Weil's conjecture, one of the jewel of modern algebraic number theory. - The study of mixing properties of commuting operators in Ergodic Theory leads to problems on unit equations. - 1 @Keith: Yes, S-unit equations. K. 
Schmidt, The dynamics of algebraic Z^d-actions. European Congress of Mathematics Barcelona 2000 – Felipe Voloch Mar 9 2012 at 19:06 show 1 more comment Number theory has been used to prove many interesting results on $SO(3)$ and related Lie groups, which in turn has attractive applications for the underlying symmetric spaces. A prime example is Drinfeld's solution of the Ruziewicz problem on invariant means of the sphere, or the related recent work of Bourgain-Gamburd on the spectral gap for finitely generated subgroups of $SU(2)$. Related is the Banach-Tarski paradox on doubling the ball, or the recent result of Kiss-Laczkovich that a ball can be decomposed into 22 (or more) congruent pieces. I also regard Gödel's incompleteness theorem as an application of number theory. Some variants of it, like Matiyasevich's theorem on diophantine equations is highly number theoretic both in its statement and proof. - I don't know if the following application has been mentioned in this thread. The last step in the solution of Hilbert's third problem is to prove that $\arccos(1/3)/\pi$ is an irrational number. This proves that the cube and the regular tetrahedron have different Dehn invariants, hence are not congruent. http://en.wikipedia.org/wiki/Hilbert%27s_third_problem - Bourgain has a nice paper, Pointwise ergodic theorems for arithmetic sets, (subsequently extended in various directions by other authors including my co-author, Máté Wierdl) on proving a version of the Birkhoff ergodic theorem where one averages along the sequence of square numbers, rather than the sequence of integers. That is: for a measure-preserving system $T\colon X\to X$ and a (square-integrable) function $f$, one considers convergence of the averages $$\frac{1}{N}\sum_{j=1}^N f(T^{j^2}x).$$ Bourgain proves using analytic number theory techniques involving exponential sums that there is convergence almost everywhere, just as in the regular Birkhoff ergodic theorem (although not to the integral as in the regular case and the convergence fails for a typical $L^1$ function, unlike the regular case). - 4 According to wikipedia (en.wikipedia.org/wiki/…), the univeral arbiter of truth, the Hardy-Littlewood method is a part of Analytic Number Theory – Anthony Quas Mar 11 2012 at 1:26 show 1 more comment I give two application from Mathematical Physics. Quantum chaos: The Selberg trace formula has inspired the Gutzwiller trace formula. The upper half plane is often used as a toy model to understand Quantum Chaos: www.maths.bris.ac.uk/~majm/bib/arithmetic.pdf. Quantum field theory: Also I have heard repeately that automorphic forms turn up in string theory and that there is a connection between toplogical and conformal quantum field theories with the geometric Langlands program. I am not an expert, so feel free to expand and edit. - 1 Goldman in his book The Queen of Mathematics alludes to an article by Weinberg where the partition function of number theory is related to the states of a vibrating string. – Tom Copeland Mar 12 2012 at 10:23 Here is a random example (in the sense that I just happened to come across it here: http://mathoverflow.net/questions/90772/order-of-vanishing-at-the-cusps-for-the-modular-theta-function/90884#90884). The abstract of Elkies' article (http://arxiv.org/abs/math/9906019) reads : We use theta series and modular forms to prove that $\mathbf{Z}^n$ is the only integral unimodular lattice of rank $n$ without characteristic vectors of norm $< n$... 
By the work of Kronheimer and others on the Seiberg-Witten equation this yields an alternative proof of a theorem of Donaldson on the geometry of $4$-manifolds. The paper has appeared in Math. Research Letters 2 (1995), 321--326. - Nonlinear PDEs were mentioned above, but there is also an old example from linear PDEs (I learned about this from Boris Paneah). In a 1939 paper (link http://www.ams.org/journals/bull/1939-45-12/S0002-9904-1939-07103-6/S0002-9904-1939-07103-6.pdf) Bourgin and Duffin study the Dirichlet problem for the wave equation on a rectangle with sides A and B. They show that the uniqueness of the problem depends on the ratio A/B being irrational. More strikingly, they show that the existence of a solution (with certain regularity) depends on how difficult it is to approximate A/B by rational numbers. - Number Theory is used in Algebraic Geometry when studying problems in characteristic $p$. But also in characteristic 0, some algebraic varieties arise from arithmetic constructions. Hilbert modular surfaces or Shimura varieties, for example. Also, the recent classification of fake projective planes by Prasad & Yeung (2007) uses Number Theory in a crucial way: in fact, this problem is equivalent to the enumeration of discrete cocompact subgroups of $PU(1,2)$. And the study of the Ring of Endomorphisms for abelian varieties needs the understanding of some Number Theory, especially number fields and quaternion algebras. I guess that there are countless examples like these. - 1) For your third interpretation I have at least one relevant experience of my own - "factor systems" (some) number theorists deal with when talking about central simple algebras over number fields did contribute to the development of homological algebra in general, and learning that bit of number theory somehow expanded my homological algebra horizons a bit. 2) Sometimes, it's hard to say where the border between number theory, commutative algebra, and algebraic geometry lies. Around that "triple point" there should be many potential answers. 3) It would be interesting to see examples in the same spirit as how the $n$-factorial conjecture ended up being proved using quite advanced algebraic geometry. Maybe a good candidate along those lines is this result of Kanel-Belov and Kontsevich that uses reduction to characteristic $p$: "The Jacobian Conjecture is stably equivalent to the Dixmier Conjecture", http://arxiv.org/abs/math/0512171 - One thing I completely forgot of was reminded to me by the reference to Feit-Thompson conjecture: applications of number theory/basic Galois theory to characters of finite groups and to structure theory of finite groups, e.g. Burnside's theorem stating that a group of order $p^nq^m$ with $p,q$ prime is solvable. - Julia Robinson proved that the theory of fields is undecidable by showing that the natural numbers form a subset of the rationals definable by a first-order formula in the language of fields. The construction of this formula involved an ingenious application of number theory. For an accessible account, see the article by Flath and Wagon: • Stan Wagon and Dan Flath, How to pick out the integers in the rationals: an application of number theory to logic, Amer. Math. Monthly 98 (1991), no. 9, 812–823. 
MR 1132996 (93b:03076) - This seems not quite in the spirit of the original question, but I couldn't resist mentioning some work which was presented at a recent PIMS (applied) maths colloquium here: On ringing effects near jump discontinuities for periodic solutions to dispersive partial differential equations. Kenneth D. T.-R. McLaughlin, Nigel J. E. Pitt. (arXiv 1107.1571) My limited understanding of the story, which should be taken with a big lump of salt, is as follows: For certain nonlinear PDEs with a periodic boundary condition, one can write down a Fourier series that represents a weak solution, and then faces the issue of determining in what sense this series converges to the solution. Bizarrely (to my eyes) the shape of the solution is different for rational and irrational times; and to understand what happens at irrational times, the authors have to deal with exponential sums of a form encountered in analytic number theory. Furthermore, Theorem 1.5 in this paper, which demonstrates a kind of "Gibbs phenomenon" for these solutions, is proved for irrational values of $t$ satisfying a complicated condition depending on the continued fraction expansion of $t$. (According to the speaker (McLaughlin), the collaboration started while he was sitting in number theory lectures given by the first author, reading a PDEs paper, and realizing that the sums on the page in front of him were awfully like the sums on the board.) - If Arakelov geometry counts as number theory, then, http://arxiv.org/pdf/math/0401029v1.pdf demonstrates the computation of the Analytic torsion (a purely analytic object involving the product of determinants of laplacians) using the Arithmetic Riemann-Roch theorem. - The uniqueness of the finite Ree groups of type $^2G_2$ was established by E. Bombieri using extremely tricky number theoretical methods (involving involved elimination methods). As Stephen D. Smith wrote in his review on MathSciNet: This result has considerable importance in the classification of finite simple groups. Ordinary mortals such as the present reviewer are overawed by the author's tour de force. (Bombieri, Enrico; Odlyzko, A.; Hunt, D. Thompson's problem (σ2=3). Appendices by A. Odlyzko and D. Hunt, Invent. Math. 58 (1980), no. 1, 77–100.) -
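(Editorial aside, not from the thread: the Feit-Thompson divisibility conjecture stated earlier in this thread, that $\frac{p^q-1}{p-1}$ never divides $\frac{q^p-1}{q-1}$ for distinct primes $p$ and $q$, is easy to probe numerically. The helper `rep` below is my own shorthand for $\frac{a^b-1}{a-1}$, and the search is only a sanity check for small primes, not evidence for the conjecture.)

```python
from sympy import primerange

def rep(a, b):
    """(a**b - 1) // (a - 1), i.e. 1 + a + a**2 + ... + a**(b-1)."""
    return (a**b - 1) // (a - 1)

primes = list(primerange(2, 80))
counterexamples = [(p, q) for p in primes for q in primes
                   if p != q and rep(q, p) % rep(p, q) == 0]
print(counterexamples)  # [] -- no violations among the primes below 80
```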
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 130, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9346944093704224, "perplexity_flag": "head"}
http://nrich.maths.org/2160/solution
# Inscribed in a Circle

##### Stage: 3 Challenge Level:

Imagine extending the radius so that you have a horizontal diameter. The hexagon is now split into two identical trapeziums (trapezia?). Taking the circle to have radius $1$, each trapezium has parallel sides of length $2$ and $1$ and height $\sqrt{0.75} = \frac{\sqrt{3}}{2}$.

Area of one trapezium $= {1\over 2}$ height $\times$ sum of parallel sides $$= {1\over 2}\ \times\ \sqrt{0.75}\ \times\ (2 + 1)$$ $$= {3\over 4}\ \times\ \sqrt{3}$$

The area of the hexagon is therefore: ${3\over2}\ \times\ \sqrt {3}$

The area of the triangle is half the area of the hexagon.

[Diagram: triangle drawn in hexagon by joining alternate vertices]

The area of the triangle is therefore ${3\over4}\ \times\ \sqrt {3}$
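As a quick numerical cross-check of both areas (an editorial addition, not part of the original NRICH solution), the short Python sketch below rebuilds the hexagon and the triangle from their vertices on a unit circle and applies the shoelace formula:

```python
import math

def polygon_area(points):
    """Shoelace formula for a simple polygon given as a list of (x, y) vertices."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x0, y0 = points[i]
        x1, y1 = points[(i + 1) % n]
        s += x0 * y1 - x1 * y0
    return abs(s) / 2.0

def regular_ngon_on_unit_circle(n):
    return [(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n))
            for k in range(n)]

hexagon = regular_ngon_on_unit_circle(6)
triangle = regular_ngon_on_unit_circle(3)
print(polygon_area(hexagon), 3 * math.sqrt(3) / 2)   # both ~2.598
print(polygon_area(triangle), 3 * math.sqrt(3) / 4)  # both ~1.299
```

Both printed pairs agree: roughly $2.598 \approx {3\over2}\sqrt{3}$ for the hexagon and $1.299 \approx {3\over4}\sqrt{3}$ for the triangle.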
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9162678122520447, "perplexity_flag": "middle"}
http://mathhelpforum.com/differential-equations/54386-second-order-inhomogeneous-differential-equation.html
# Thread: 1. ## Second Order Inhomogeneous Differential Equation Hey guys, I thought I had this question solved, wrote it out in final copy and everything and then I realised I'd made a big mistake. Was hoping someone here would be able to give me a few pointers. a) Solve the initial value problem: $x'' + x = 2cos(\omega t)$ So first I solved homogeneous general solution: $\lambda^2 + 1 = 0$ Therefore the solution looks like: $Ae^{it} + Be^{-it}$ But I only want real answers, so I use complex exponential to achieve: $C_1cos(t) + C_2sin(t)$ Where C1 and C2 are real arbitrary constants. Then I have to solve for inhomogeneous solution. My error was that the first time I did this I took the form to be $x = acos(\omega t) + bsin(\omega t)$ However it was pointed out to me that I can't use sine and cosine on their own because they are in the homogeneous general solution. So I think I have to change the form to: $x = atcos(\omega t) + btsin(\omega t)$ Then I differentiate twice to get $x'' = -2a\omega sin(\omega t) - at\omega^2 cos(\omega t) + 2b\omega cos(\omega t) - bt\omega^2 sin(\omega t)$ Subbing these in to $x'' + x = 2cos(\omega t)$ Gives me: $-2a\omega sin(\omega t) - at\omega^2 cos(\omega t) + 2b\omega cos(\omega t)$ $- bt\omega^2 sin(\omega t) + atcos(\omega t) + btsin(\omega t) = 2cos(\omega t)$ Then I equate the co-efficients and get: $sin(\omega t) : -2a\omega = 0$ $cos(\omega t) : 2b\omega = 2$ $tsin(\omega t) : b - b\omega^2 = 0$ $tcos(\omega t) : a - a\omega^2 = 0$ By here I think I've done something drastically wrong. If anyone is able to see if I'm even on the right track it would be greatly appreciated. This is quite urgent, but anything will help! Thanks a lot in advance, U-god 2. I'm beginning to think perhaps $x = atcos(\omega t) + btsin(\omega t)$ is the wrong form.. 3. Originally Posted by U-God Hey guys, I thought I had this question solved, wrote it out in final copy and everything and then I realised I'd made a big mistake. Was hoping someone here would be able to give me a few pointers. a) Solve the initial value problem: $x'' + x = 2cos(\omega t)$ So first I solved homogeneous general solution: $\lambda^2 + 1 = 0$ Therefore the solution looks like: $Ae^{it} + Be^{-it}$ But I only want real answers, so I use complex exponential to achieve: $C_1cos(t) + C_2sin(t)$ Where C1 and C2 are real arbitrary constants. Then I have to solve for inhomogeneous solution. My error was that the first time I did this I took the form to be $x = acos(\omega t) + bsin(\omega t)$ However it was pointed out to me that I can't use sine and cosine on their own because they are in the homogeneous general solution. So I think I have to change the form to: $x = atcos(\omega t) + btsin(\omega t)$ Then I differentiate twice to get $x'' = -2a\omega sin(\omega t) - at\omega^2 cos(\omega t) + 2b\omega cos(\omega t) - bt\omega^2 sin(\omega t)$ Subbing these in to $x'' + x = 2cos(\omega t)$ Gives me: $-2a\omega sin(\omega t) - at\omega^2 cos(\omega t) + 2b\omega cos(\omega t)$ $- bt\omega^2 sin(\omega t) + atcos(\omega t) + btsin(\omega t) = 2cos(\omega t)$ Then I equate the co-efficients and get: $sin(\omega t) : -2a\omega = 0$ $cos(\omega t) : 2b\omega = 2$ $tsin(\omega t) : b - b\omega^2 = 0$ $tcos(\omega t) : a - a\omega^2 = 0$ By here I think I've done something drastically wrong. If anyone is able to see if I'm even on the right track it would be greatly appreciated. This is quite urgent, but anything will help! 
Thanks a lot in advance, U-god Using $x = a \cos(\omega t) + b \sin(\omega t)$ as the form of the particular is OK provided $\omega \neq 1$. If $\omega = 1$ then the DE is $x'' + x = 2 \cos t$. In this case use $x = (t + ct^2)(a \cos t + b \sin t)$ as the form of the particular solution. 4. Ok thanks for that, However, I was taught that if a term is in the homogeneous general solution, you cannot use that term in your form for the inhomogeneous solution. And in the case of this I would have to multiply through by t. Why is it valid if $\omega$ does not equal 1, despite the fact that sines and cosines are still present in the general homogeneous solution? Also, if you wouldn't mind explaining where you got $x = (t + ct^2)(a \cos t + b \sin t)$ from it would be nice? Is it something that you just derived, or is it a rule of thumb? Cheers. 5. Hang on, I tried and was left with, $a-a{\omega}^2=2$ $b-b{\omega}^2=0$ What did i do wrong? 6. This is starting to confuse me :S I think $x = acos(\omega t) + bsin(\omega t)$ will work despite it goes against what I was initially taught. And I can sort of accept that, but with the second form you gave: $x = (t + ct^2)(a \cos t + b \sin t)$ why is there no omega in the sine and cosine functions? I derived it and put it into x'' + x and got: $(2bc - 2a)sin(t) + (2ac + 2b)cos(t) - 4actsin(t) + 4bctcos(t)$ Where would I go from here? Thanks 7. Originally Posted by jwade456 Hang on, I tried and was left with, $a-a{\omega}^2=2$ $b-b{\omega}^2=0$ What did i do wrong? I did that and got the same thing, and everything simplified very nicely. But someone whose opinion I respect pointed out that I cannot use that form if there are sines and cosines in the homogeneous general solution. So apparently it's incorrect. 8. Originally Posted by U-God $x = (t + ct^2)(a \cos t + b \sin t)$ why is there no omega in the sine and cosine functions? In this particular solution omega must equal one, that's why its not there 9. Originally Posted by jwade456 In this particular solution omega must equal one, that's why its not there Wow! I swear I am getting dumber haha!! Thanks tons for that jwade! Do you know why this is the form of the equation though? 10. Originally Posted by U-God Ok thanks for that, However, I was taught that if a term is in the homogeneous general solution, you cannot use that term in your form for the inhomogeneous solution. And in the case of this I would have to multiply through by t. Why is it valid if $\omega$ does not equal 1, despite the fact that sines and cosines are still present in the general homogeneous solution? Also, if you wouldn't mind explaining where you got $x = (t + ct^2)(a \cos t + b \sin t)$ from it would be nice? Is it something that you just derived, or is it a rule of thumb? Cheers. The sines and cosines in the general solution have a different period ( namely $\frac{2 \pi}{\omega}$ ) to that of $\cos t$ except when $\omega = 1$. You should treat $\omega \neq 1$ and $\omega = 1$ as two seperate cases. Re-read post #3. You've misunderstood the 'respected opinion', I think. But hopefully this thread has cleared things up.
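(Editorial aside, not part of the original posts: the two cases discussed in this thread can be checked mechanically by substituting the candidate particular solutions back into $x'' + x = 2\cos(\omega t)$. The SymPy sketch below assumes the non-resonant particular solution $x_p = \frac{2\cos(\omega t)}{1-\omega^2}$, which is what the ansatz $a\cos(\omega t) + b\sin(\omega t)$ produces when $\omega \neq 1$.)

```python
import sympy as sp

t, w = sp.symbols('t omega', positive=True)

# Non-resonant case (omega != 1): a plain cosine ansatz works.
xp = 2 * sp.cos(w * t) / (1 - w**2)
print(sp.simplify(sp.diff(xp, t, 2) + xp - 2 * sp.cos(w * t)))  # 0

# Resonant case (omega = 1): the extra factor of t is needed.
xr = t * sp.sin(t)
print(sp.simplify(sp.diff(xr, t, 2) + xr - 2 * sp.cos(t)))      # 0
```

Both prints give 0: the plain cosine ansatz works whenever $\omega \neq 1$, while at resonance ($\omega = 1$) the extra factor of $t$ is needed and $x_p = t\sin t$, i.e. $a = 0$, $b = 1$ in the ansatz $at\cos t + bt\sin t$.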
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 51, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9783006906509399, "perplexity_flag": "head"}
http://wiki.math.toronto.edu/DispersiveWiki/index.php/Schrodinger_equations
# Schrodinger equations

## Overview

There are many nonlinear Schrodinger equations in the literature, all of which are perturbations of one sort or another of the free Schrodinger equation. One general class of such equations takes the form $i \partial_t u + \Delta u = f (u, \overline{u}, Du, D \overline{u})$ where D denotes spatial differentiation. In such full generality, we refer to this equation as a derivative non-linear Schrodinger equation (D-NLS). If the non-linearity does not contain derivatives then we refer to this equation as a semilinear Schrodinger equation (NLS). These equations (particularly the cubic NLS) arise as model equations from several areas of physics.

One can generalize both the linear and nonlinear perturbations to these equations and consider the class of quasilinear Schrodinger equations or even fully nonlinear Schrodinger equations. Needless to say, these equations are significantly more difficult to analyse than the simpler model cases discussed above. One can combine these nonlinear perturbations with a linear perturbation, leading for instance to the NLS with potential and the NLS on manifolds and obstacles.

The perturbative theory of nonlinear Schrodinger equations (and the semilinear Schrodinger equations in particular) rests on a number of linear and nonlinear estimates for the free Schrodinger equation.

## Specific Schrodinger Equations

Monomial semilinear Schrodinger equations can be indexed by the degree of the nonlinearity, as follows.

### Quadratic NLS

NLS equations of the form $i \partial_t u + \Delta u = Q(u, \overline{u})$ with $Q(u, \overline{u})$ a quadratic function of its arguments are quadratic nonlinear Schrodinger equations. They are mass-critical in four dimensions.

### Cubic NLS

The cubic nonlinear Schrodinger equation is of the form $i \partial_t u + \Delta u = \pm |u|^2 u$. It is completely integrable in one dimension, mass-critical in two dimensions, and energy-critical in four dimensions.

### Quartic NLS

A nonlinear Schrodinger equation with nonlinearity of degree 4 is a quartic nonlinear Schrodinger equation.

### Quintic NLS

NLS equations of the form $i \partial_t u + \Delta u = \pm |u|^4 u$ are quintic nonlinear Schrodinger equations. They are mass-critical in one dimension and energy-critical in three dimensions.

### Septic NLS

NLS equations of the form $i \partial_t u + \Delta u = \pm |u|^6 u$ are septic nonlinear Schrodinger equations.

### L2-critical NLS

The nonlinear Schrodinger equation $i \partial_t u + \Delta u = \pm |u|^{\frac{4}{d}} u$ posed for $x \in \mathbb{R}^d$ is scaling invariant in $L^2_x$. This family of nonlinear Schrodinger equations is therefore called the mass-critical nonlinear Schrodinger equation.

### Higher order NLS

One can study higher-order NLS equations in which the Laplacian is replaced by a higher power. One class of such examples comes from the infinite hierarchy of commuting flows arising from the completely integrable cubic NLS on R. Another is the nonlinear Schrodinger-Airy system. A third class arises from the elliptic case of the Zakharov-Schulman system.

### Schrodinger maps

A geometric derivative non-linear Schrodinger equation that has been intensively studied is the Schrodinger map equation. This is the Schrodinger counterpart of the wave maps equation.
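### Scaling and criticality

This short note records the standard scaling computation behind the "mass-critical" and "energy-critical" labels used in the subsections above; it is a routine calculation added here for convenience rather than material from the original page. If $u$ solves $i \partial_t u + \Delta u = \pm |u|^{p-1} u$ on $\mathbb{R}^d$, then so does the rescaling

$$u_\lambda(t,x) = \lambda^{2/(p-1)} u(\lambda^2 t, \lambda x), \qquad \lambda > 0,$$

and the homogeneous Sobolev norm of the initial data transforms as $\|u_\lambda(0)\|_{\dot H^s} = \lambda^{s - s_c} \|u(0)\|_{\dot H^s}$ with critical index

$$s_c = \frac{d}{2} - \frac{2}{p-1}.$$

The equation is called mass-critical when $s_c = 0$, i.e. $p - 1 = 4/d$ (quadratic in $d = 4$, cubic in $d = 2$, quintic in $d = 1$), and energy-critical when $s_c = 1$, i.e. $p - 1 = 4/(d-2)$ (cubic in $d = 4$, quintic in $d = 3$), in agreement with the statements above.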
### Cubic DNLS on R

The derivative cubic nonlinear Schrodinger equation has nonlinearity of the form $i \partial_x (|u|^2 u).$

### Hartree Equation

The Hartree equation has a nonlocal nonlinearity given by convolution, as does the very similar Schrodinger-Poisson system, and certain cases of the Davey-Stewartson system.

### Maxwell-Schrodinger system

A Schrodinger-wave system closely related to the Maxwell-Klein-Gordon equation is the Maxwell-Schrodinger system.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 10, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8753005862236023, "perplexity_flag": "middle"}
http://physics.aps.org/articles/v1/4
# Viewpoint: How Casimir forces are shaping up

Steve K. Lamoreaux, Department of Physics, Yale University, New Haven, CT 06520-8120.

Published July 14, 2008  |  Physics 1, 4 (2008)  |  DOI: 10.1103/Physics.1.4

Modification of electromagnetic zero-point fluctuations by closely spaced conductors causes an interaction between them called the Casimir force. New experiments with nanostructured silicon substrates show that the geometry of the conducting surfaces has a large effect on this force.

#### Measurement of the Casimir Force between a Gold Sphere and a Silicon Surface with Nanoscale Trench Arrays

H. B. Chan, Y. Bao, J. Zou, R. A. Cirelli, F. Klemens, W. M. Mansfield, and C. S. Pai

Published July 14, 2008 | PDF (free)

The last great fundamental discovery in quantum mechanics was made in 1948 by Hendrik B. G. Casimir [1]. This discovery, the so-called Casimir effect, is the theoretical prediction that two closely spaced plane-parallel mirrors will be mutually attracted due to the modification of the electromagnetic mode structure between the mirrors. This attractive force comes about from the zero-point energy associated with the modes; the total energy decreases as the plates are brought together. As this force is due to zero-point energy, the force persists even at absolute zero temperature. This effect has been experimentally demonstrated many times, and the fundamental theory appears sound, at least for the simple systems that have been studied. Now, in a paper published in Physical Review Letters [2], Chan et al. at the University of Florida and Bell Laboratories go beyond plane-parallel surfaces and show how more complex surface geometry can influence the Casimir effect. This may bear directly on the extent to which Casimir forces contribute to the behavior of micro- and nanomechanical systems.

The original discovery of the Casimir effect is more fundamental than one might expect: Every quantum field has a zero-point energy, so any system that has accessible states governed by external boundary conditions will have a Casimir-like contribution to its energy. For example, there is a Casimir-like contribution to the energy of quarks bound in a nucleus, as described in the bag model, due to the boundary that is introduced in this model. These effects and others are well described in the book by Milton [3].

Casimir himself attempted to apply his namesake force to one of the simplest of the elementary particles, the electron. Casimir modeled the electron as a conducting ball of uniform charge that would contract due to the zero-point energy of the external electromagnetic modes. This contractive force would be balanced by the space-charge repulsion of the uniform charge density, when the conducting sphere of constant total charge was just the right diameter. The fine structure constant $α≈1/137$, which relates to the electron diameter, could then be determined from fundamental parameters along with a calculation of how the electromagnetic mode zero-point energy changes as the sphere contracts [4]. So compelling was this possibility of determining the fine structure constant that Boyer did the calculation of the spherical mode problem [5]. He found that the Casimir force, or stress, due to the conducting sphere modes, causes the sphere to expand. Thus Casimir's lovely model fails. Boyer's result was interesting enough that it led to the exploration of the effects of geometry on the Casimir force.
It has been shown, for example, that for rectangular bodies, the sign and magnitude of the stress depends on the aspect ratio of the rectangle. Until now, no significant or nontrivial corrections to the Casimir force due to boundary conditions have been observed experimentally. The form of boundary deformations so far considered, together with the accuracy and precision of experimental studies, have been adequately theoretically described by straightforward geometrical averaging. For the systems that had previously been considered, it is not clear that an experimental measurement of the external stress is even possible. Cutting a sphere in half clearly changes the boundary value problem; it is unlikely that the two halves of such a sliced sphere will be repelled with a force that is given by the external stress on the unsliced sphere. However, there are other possible ways to generate a geometrical influence on the Casimir force. A conceptually straightforward way is to contour the surfaces of the plates at a length scale comparable to the mode wavelengths that contribute most to the net Casimir force. For a plate separation $z$, the wavelengths that contribute most are $≈πz$. This means that a surface nanopatterned at 400 nm level should show significant geometrical effects for separations below 1 µm. Indeed, the work of Chan et al. has produced a convincing measurement of a nontrivial geometrical influence on the Casimir force [2]. These measurements, between a nanostructured silicon surface (upper left panel of Fig. 1) and a gold sphere, were made using a micromechanical torsional oscillator (upper right panel of Fig. 1). The change in resonant frequency of the oscillator, as a function of separation between the sphere and the surface, provided a measure of the gradient of the Casimir force. The gold sphere, actually a glass sphere of radius 50 µm coated with 400 nm of gold, was attached to one side of the oscillator that comprised a 3.5 µm thick, 500 µm square silicon plate suspended by two tiny torsion rods. The sphere–oscillator assembly was moved toward the nanostructured surface by use of a piezoelectric actuator. Two different nanostructured plates, compared with a smooth plate, were measured in this work. The geometry of the nanostructures—rectangular trenches etched in the surface of highly p-doped silicon—were chosen because the effects are expected to be large in such a geometry. Previously, Büscher and Emig had calculated the effective modification of the Casimir force due to such a geometry, but for the case of perfect conductors [6]. Even though the calculations were not for real materials, these theoretical results appeared as a reasonable starting point for a comparison with an experiment. Casimir’s calculation addressed perfectly conducting plates. The theory of the force was subsequently generalized by Lifshitz to real materials at finite temperature in his seminal 1956 paper [7]. Although much progress has recently been made toward a realistic and believable accuracy and precision with which the Casimir force can be calculated for real materials [8], problems associated with the well-known experimental variability of sputtered or evaporated films were avoided in the work of Chan et al. by comparing two different nanostructured plates with a smooth plate, all made from the same silicon substrate, and all using the same gold-sphere–oscillatory assembly. 
The geometric modification of the Casimir force was detected by measuring a deviation from that expected by use of the so-called proximity force approximation (PFA), or the pairwise additive approximation (PAA). Briefly, the PFA was introduced in relation to the Casimir force by Derjaguin in 1957 [9] to describe the force between curved surfaces, and this approximation is known to be extremely accurate when the curvature is much less than the separation between the surfaces. Indeed, the use of a sphere and a flat plate vastly simplifies the experiment because the system is fully mechanically defined in terms of the point of closest approach and the radius of curvature of the sphere. For two flat plates the system is specified by two tilt angles, the areas, long-scale smoothness, and a separation, which all need to be defined, measured, and controlled—a daunting problem, particularly when small deviations of the force are being measured. The success of the PFA is so good that it suggests a means of detecting a geometrical effect. Basically, the surface is divided into infinitesimal units, and it is assumed that the total force can be determined by adding the Casimir force, appropriately scaled by area, between surface unit pairs in opposite surfaces; this is the PAA. Thus, for the nanostructured surfaces used by Chan et al., roughly a 50% reduction in force would be expected by the PAA, because the very deep trenches (depth $t=2a≈1μm$), etched as a regular array, were designed to etch away about half of the surface, and this fraction was carefully measured. As mentioned, two different trench spacings, $λ$, were fabricated and measured, such that $λ/a=1.87$ (sample A) and 0.82 (sample B), and were compared to a smooth surface. The Casimir force between the gold sphere and the smooth plate, as calculated from the tabulated properties of gold and silicon, taking into account the conductivity due to the doping, agree with the experimental results to about 10% accuracy. For sample A, the force is 10% larger than expected by the PAA, as evaluated by use of the measured smooth-surface force, and for sample B, it is 20% larger, in the range $150<z<250$ nm. The deviation increases as $λ/a$ decreases, as expected. The theory of Büscher and Emig predicts deviations from the PAA twice as large as were observed. Nonetheless, the results of Chan et al. indicate a clear effect of geometry on the Casimir force in the clear deviation from the PAA. This deviation was detectable through the experimental trick of comparing different aspect ratio trenches to a smooth surface in otherwise identical materials. So even though ab initio calculation of the Casimir force for a real material using tabulated optical properties cannot be accurate to better than 10%, this problem was simply circumvented by the comparison technique. Much theoretical work remains to be done toward gaining a complete understanding of the experimental observations. The already difficult calculations are made more so by the finite conductivity effects of the plates, and the real smoothed shape of the trenches as opposed to ideal sharp features. However, the creativity of theorists on this subject appears limitless, as does the numerical computing power offered by even a modest cluster of computers for problems of this type. 
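To put rough numbers on the forces at play in the separation range quoted above (an editorial aside: the formulas below are the standard textbook ideal-conductor expressions, not ones taken from the article, and they ignore the finite-conductivity and geometry corrections that are precisely the article's subject), one can combine the parallel-plate Casimir pressure with the proximity force approximation for a sphere of radius $R$ above a plate:

```python
import math

hbar = 1.054571e-34   # J*s
c = 2.99792458e8      # m/s
R = 50e-6             # sphere radius used in the experiment, in metres

def parallel_plate_pressure(d):
    """Ideal-conductor Casimir pressure between parallel plates: pi^2*hbar*c/(240*d^4)."""
    return math.pi**2 * hbar * c / (240 * d**4)

def sphere_plate_force_pfa(d, R=R):
    """Proximity-force-approximation force for a sphere above a plate: pi^3*hbar*c*R/(360*d^3)."""
    return math.pi**3 * hbar * c * R / (360 * d**3)

for d in (150e-9, 200e-9, 250e-9):
    print(f"d = {d*1e9:.0f} nm: "
          f"plate pressure ~ {parallel_plate_pressure(d):.2f} Pa, "
          f"sphere-plate force ~ {sphere_plate_force_pfa(d)*1e12:.0f} pN")
```

The forces come out at the level of tens of piconewtons over this range. Within the pairwise-additive picture described above, etching away roughly half of the plate surface would cut these numbers roughly in half; the 10-20% excess over that estimate is the geometry effect reported by Chan et al.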
We can expect that in the near future the discrepancy between theory and experiment will be resolved; the excitement of further efforts lies in the possibility that our understanding of the Casimir force is incomplete in a significant way.

### References

1. H. B. G. Casimir, Proc. Kon. Ned. Akad. Wetenschap 51, 793 (1948).
2. H. B. Chan, Y. Bao, J. Zou, R. A. Cirelli, F. Klemens, W. M. Mansfield, and C. S. Pai, Phys. Rev. Lett. 101, 030401 (2008).
3. K. A. Milton, The Casimir Effect: Physical Manifestations of Zero-Point Energy (World Scientific, New Jersey, 2001).
4. P. W. Milonni, The Quantum Vacuum (Academic Press, San Diego, 1994), pp. 286-288.
5. T. H. Boyer, Phys. Rev. 174, 1764 (1968).
6. R. Büscher and T. Emig, Phys. Rev. A 69, 062101 (2004).
7. E. M. Lifshitz, Sov. Phys. JETP 2, 73 (1956).
8. V. B. Svetovoy, P. J. van Zwol, G. Palasantzas, and J. T. M. De Hosson, Phys. Rev. B 77, 035439 (2008).
9. B. V. Derjaguin and I. I. Abrikosova, Sov. Phys. JETP 3, 819 (1957).

### About the Author: Steve K. Lamoreaux

Steve K. Lamoreaux received his Ph.D. in atomic physics from the University of Washington in 1986, where he worked until 1996 at which time he moved to Los Alamos National Laboratory. In 2006, he assumed a faculty position in the Department of Physics at Yale. While at the University of Washington, he performed a high precision Casimir force experiment, the result of a series of undergraduate research projects, using a torsion pendulum.
http://math.stackexchange.com/questions/70231/how-to-prove-those-curious-identities?answertab=oldest
# How to prove those “curious identities”? How to prove $$\prod_{k=1}^{n-1} \sin\left(\frac{k\pi}{n}\right) = \frac{n}{2^{n-1}}$$ and $$\prod_{k=1}^{n-1} \cos\left(\frac{k\pi}{n}\right) = \frac{\sin(\pi n/2)}{2^{n-1}}$$ - 11 I would have thought that you might have learned from your previous experience here that it's a good idea to say something about where you came across these identities, as it might point the way toward an answer. – Gerry Myerson Oct 6 '11 at 2:25 1 +1 for a question that generated a variety of good answers! – lhf Oct 6 '11 at 2:57 FYI, if you like those kind of identities, those, and some other similar ones, are in "Challenging Mathematical Problems with Elementary Solutions" by Yaglom and Yaglom, volume 2, which is available in an inexpensive Dover edition. – tzs Oct 6 '11 at 4:42 1 Related: Similar reasoning as in some of the answers below (Euler's formulas + geometric series) proves the nice but no so widely known multiple-angle formula $$\sin nx = 2^{n-1} \prod_{k=0}^{n-1} \sin(x + \frac{k\pi}{n}).$$ Your first formula can be obtained as a special case after dividing both sides by $\sin x$ and taking the limit as $x\to 0$. – Hans Lundmark Oct 6 '11 at 7:50 ## 4 Answers For the first: $$\lim_{z=1}\frac{z^n-1}{z-1}=n\tag{1a}$$ $$\frac{z^n-1}{z-1}=\prod_{k=1}^{n-1}(z-e^{2\pi ik/n})\tag{1b}$$ $$|1-e^{i2k\pi/n}|=|2\sin(k\pi/n)|\tag{1c}$$ Combining $(1a)$, $(1b)$, and $(1c)$, we get $$2^{n-1}\prod_{k=1}^{n-1}\sin(k\pi/n)=n$$ since everything is positive. For the second: If $n$ is even, then $\cos(\frac{\pi}{2})=0$ appears in the product (when $k=n/2$) and $\sin(\frac{n\pi}{2})=0$. If $n$ is odd, then combining $$\lim_{z=1}\frac{z^n+1}{z+1}=1\tag{2a}$$ $$\frac{z^n+1}{z+1}=\prod_{k=1}^{n-1}(z+e^{2\pi ik/n})\tag{2b}$$ $$1+e^{i2k\pi/n}=2\cos(k\pi/n)e^{ik\pi/n}\tag{2c}$$ and noting that $\displaystyle\sum_{k=1}^{n-1}k=\frac{n(n-1)}{2}$ so that $\displaystyle\prod_{k=1}^{n-1}e^{ik\pi/n}=(-1)^{(n-1)/2}$ which matches the sign of $\sin(\pi n/2)$, yields $$2^{n-1}\prod_{k=1}^{n-1}\cos(k\pi/n)=(-1)^{(n-1)/2}=\sin(\pi n/2)$$ - Denote $w = e^{i \pi/n}$. We have $$\prod_{k = 1}^{n-1} \sin \left(\frac{k\pi}{n}\right)= \prod_{k = 1}^{n-1} \frac{w^k - w^{-k}}{2i} = \frac{1}{2^{n-1}} \prod_{k = 1}^{n-1} \frac{w^k}{i} (1-w^{-2k})$$ Since we have $$\sum_{k = 0}^{n-1} x^k = \prod_{k = 1}^{n-1} (x-w^{2k})$$ Setting $x=1$ yields $$\prod_{k = 1}^{n-1} (1-w^{2k}) = n$$ So we get $$\prod_{k = 1}^{n-1} \sin \left(\frac{k\pi}{n}\right)= \frac{n}{2^{n-1}} \frac{w^{n(n-1)/2}}{i^{n-1}} = \frac{i^{n-1}}{i^{n-1}} \frac{n}{2^{n-1}} = \frac{n}{2^{n-1}}$$ I guess (but did not check) that the same kind of reasoning gives the one with $\cos$. - 1 similar reasoning for odd $n$ (except you have to watch the sign), but very different, however easy, reasoning for even $n$. – robjohn♦ Oct 6 '11 at 4:03 Define $\zeta_n = e^{2 \pi i/n}$. Proposition For odd integer $n \geq 1$, \begin{align} \prod_{k = 1}^{n-1}(\zeta_n^{k} - \zeta_n^{-k}) = n. \end{align} and \begin{align} \prod_{k = 1}^{n-1} \sin( \tfrac{2 \pi k }{n} ) = \tfrac{n}{(2 i)^{n-1}}. \end{align} Proof: The claimed identities follow from the identity \begin{align} z^n - 1 = \prod_{ k =0}^{n-1} (z - \zeta_n^{k}) = \prod_{ k =0}^{n-1} (z - \zeta_n^{-2k}). \end{align} Writing $z = x/y$, we have \begin{align} x^n - y^n = \prod_{k = 0}^{n-1} ( \zeta_n^{k} x - \zeta_n^{-k} y). 
\end{align} Thus, \begin{align} n y^{n-1} = \lim_{x \to y} \frac{x^n - y^n}{x - y} = \lim_{x \to y} \ \ \prod_{k = 1}^{n-1} ( \zeta_n^{k} x - \zeta_n^{-k} y) = y^{n-1} \ \prod_{k = 1}^{n-1} ( \zeta_n^{k} - \zeta_n^{-k} ). \end{align} For the second identity, let $x =e^{\pi i z}$ and $y = e^{- \pi i z}$ and recall the complex exponential representation of the sine function. This yields \begin{align} n = \lim_{z \to 0} \frac{\sin n \pi z}{\sin z } = (2 i)^{n-1} \lim_{z \to 0} \ \ \prod_{k = 1}^{n-1} \sin( \pi z + \tfrac{2 \pi k }{n} ) = (2 i)^{n-1} \prod_{k = 1}^{n-1} \sin( \tfrac{2 \pi k }{n} ). \end{align} Similar reasoning works to prove the identities that you mention. - The second purported identity is equivalent to asking for the constant term of $\dfrac{U_{n-1}(x)}{2^{n-1}}$ (i.e., $\dfrac{U_{n-1}(0)}{2^{n-1}}$), where $U_n(x)$ is the Chebyshev polynomial of the second kind. Since $$\frac{U_{n-1}(x)}{2^{n-1}}=\frac{\sin(n \arccos\,x)}{2^{n-1}\sqrt{1-x^2}}$$ letting $x=0$ gives your identity. - 2 This is really clever. – Joel Cohen Oct 6 '11 at 3:13
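For readers who want a quick sanity check before working through the algebra in the answers above, here is a short numerical verification of both product identities using only the Python standard library; it is a spot check for small $n$, not a proof.

```python
# Numerical spot check of
#   prod_{k=1}^{n-1} sin(k*pi/n) = n / 2^(n-1)
#   prod_{k=1}^{n-1} cos(k*pi/n) = sin(pi*n/2) / 2^(n-1)
# for n = 2,...,12.  The cosine product vanishes for even n, matching sin(pi*n/2) = 0.
from math import sin, cos, pi, prod, isclose

for n in range(2, 13):
    sin_prod = prod(sin(k * pi / n) for k in range(1, n))
    cos_prod = prod(cos(k * pi / n) for k in range(1, n))
    assert isclose(sin_prod, n / 2 ** (n - 1), abs_tol=1e-9)
    assert isclose(cos_prod, sin(pi * n / 2) / 2 ** (n - 1), abs_tol=1e-9)

print("both identities check out numerically for n = 2,...,12")
```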
http://math.stackexchange.com/questions/203639/primes-of-the-form-ak-bk
# Primes of the form $a^k + b^k$ How many primes of the form $a^{k/2} + b^{k/2}$ are there, for positive integers $a$ and $b$? I am hoping there is only one. EDIT: $k > 1$ - If $k$ is odd and greater than $1$, there is none since $a+b$ divides $a^k + b^k$. And every prime of the form $4m+1$ can be expressed as a sum of two squares. – user17762 Sep 27 '12 at 21:28 and if k is even? – fosho Sep 27 '12 at 21:28 If $a=2$ and $b=1$ there are known to be multiple solutions, the so-called Fermat primes. If $a=2$ and $b=3$ then $k=1$, $k=2$ and $k=4$ all give solutions. – Steven Stadnicki Sep 27 '12 at 21:30 see edit please – fosho Sep 27 '12 at 21:31 1 @fosho Is your question "Given $a$ and $b$, how many primes are of the form $a^k + b^k$?" or "Given $k$, how many primes are of the form $a^k + b^k$?" or is it just "How many primes are of the form $a^k + b^k$"? – user17762 Sep 27 '12 at 21:34 ## 1 Answer Infinitely many. In fact, every prime $p \equiv 1 \pmod 4$ can be written as the sum of two squares, a result attributed to Fermat. And there are infinitely many such primes, according to Dirichlet's Theorem. -
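A small illustration of the accepted answer, in plain Python with a naive trial-division primality test (the helper names below are just illustrative): it lists the primes $p \equiv 1 \pmod 4$ below 200 together with a representation $p = a^2 + b^2$, i.e., primes of the form $a^k + b^k$ with $k = 2$.

```python
# List primes p = 4m + 1 below 200 and exhibit each as a sum of two squares.
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def two_square_decomposition(p):
    # Brute-force search; by Fermat's theorem this always succeeds for p = 1 (mod 4).
    a = 0
    while a * a <= p:
        b2 = p - a * a
        b = int(round(b2 ** 0.5))
        if b * b == b2:
            return a, b
        a += 1
    return None

for p in range(5, 200):
    if is_prime(p) and p % 4 == 1:
        a, b = two_square_decomposition(p)
        print(f"{p} = {a}^2 + {b}^2")
```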
http://math.stackexchange.com/questions/56146/solving-2nd-order-ode-with-non-constant-coefficient-of-form-1-x
# Solving 2nd order ODE with non-constant coefficient of form 1/x I have a differential equation of the form $$a y'' + b y/x = E y$$ (The origin is a 1D Schrödinger equation for a potential of the form $-1/x$). I am only interested in the ground state energy, i.e. the lowest order solution. Is there a good, systematic way to tackle this? I used a lot of hand waving: I said that for $x \rightarrow \infty$, the potential term is negligible and the equation is a simple homogeneous 2nd order ODE with constant coefficients, which has solution $e^{-kx}$ for some $k$. So as an overall ansatz I choose $$f(x)e^{-kx}$$, which yields $$a (f'' - 2k f' + k^2 f) + b f/x = E f$$. I then argue -- that is where the hand-waving occurs -- that the ground state would have a polynomial of the lowest possible order for $f$. A constant (order $0$) is not possible, since then nothing cancels the $1/x$ in the equation, so I try the ansatz $f(x) = x$. With that, I can indeed solve the equation and obtain conditions for $k$ and $E$: $$-2ka + b = 0$$ $$ak^2 = E$$ This allows me to solve for $k$ and $E$. But is there a better, more rigorous way? - There's always the Frobenius route... which you can use to derive the solutions in terms of confluent hypergeometric functions. – J. M. Aug 7 '11 at 17:10 Since I'm a physicist and not a mathematician, would you briefly outline that route? – Lagerbaer Aug 7 '11 at 17:12 This should be a quick review... it's also in Arfken and Weber. (FWIW, I ain't a mathematician either... :) ) – J. M. Aug 7 '11 at 17:26 Ah, okay. So what that method does is writing $f(x)$ as a power series in $x$, which generates recursive equations for the coefficients. If I set a cut-off for the degree of the polynomial, this should then reproduce my result. – Lagerbaer Aug 7 '11 at 19:59 ## 1 Answer Let's assume that $a=1$ for simplicity. You can then, either use a CAS to solve this differential equation, or notice that it is a differential equation for a confluent hypergeometric functions $_1F_1(x)$ and $U(x)$. Specifically the general solution to equation $y'' + \frac{b}{x} y = \mathcal{E}^2 y$ is $$y(x) = x e^{-x \mathcal{E}} \left( c_1 {}_1F_1(1 - \frac{b}{2\mathcal{E}}, 2, 2 x \mathcal{E}) + c_2 U( 1 - \frac{b}{2\mathcal{E}}, 2, 2 x \mathcal{E} ) \right)$$ Now, you could look up the asymptotic behavior of each independent solution (here and here) and choose indeterminates and the energy to satisfy needed boundary conditions. You will find that $c_1$ must vanish due to decay at infinity, while $c_2$ is arbitrary. Behavior at the origin demands that $1 - \frac{b}{2\mathcal{E}}$ be a non-positive integer, giving you the spectrum. In that case the Tricomi function would degenerate into a polynomial. -
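A quick symbolic check of the hand-waving argument in the question, assuming SymPy is available: with the ansatz $y = x\,e^{-kx}$, $k = b/(2a)$ and $E = ak^2$, the residual of $a y'' + b y/x - E y$ simplifies to zero.

```python
# Symbolic verification that y = x*exp(-k*x), with k = b/(2a) and E = a*k^2,
# solves a*y'' + b*y/x = E*y (the lowest-order ansatz from the question).
import sympy as sp

x, a, b = sp.symbols('x a b', positive=True)
k = b / (2 * a)
E = a * k**2
y = x * sp.exp(-k * x)

residual = sp.simplify(a * sp.diff(y, x, 2) + b * y / x - E * y)
print(residual)   # expected: 0
```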
http://math.stackexchange.com/questions/207061/mountain-number-probability/207279
# Mountain Number Probability I'm trying to calculate the probability that a five digit mountain number (i.e. a number in which the first three digits are in ascending order i.e. $a< b< c$, and the last three are in descending order i.e. $c> d> e$) does not contain any repeated digits. I've calculated that there are $2892$ mountain numbers, simply by looking at how many possibilities there are on each side with each peak (i.e. $36^2+28^2+...+1^2$). I wrote a small program that output the number of mountain numbers without repeating digits($1512$), but I'm not sure how I would get to that number with out the help of a computer. Could anybody help me out here? Thanks! - This is not probability, this is combinatorics. Also, please use the homework tag for any exercise like this. – Douglas Zare Oct 4 '12 at 4:08 ## 3 Answers A few hints: In how many ways can you choose the five digits to be used? When the five digits have been chosen, how many mountain numbers can you form with them? What corrections are needed to avoid a leading $0$? - Deal with mountain numbers that don't have zeroes first. Determine how many unique combinations you can select. To calculate the permutations that are mountain numbers, rather than thinking of how many of the digits can fill each position, think of how many possible positions each digit can be placed in. The highest number can only go in one position. The lowest number can only go in one of two places, et cetera. That will give you the number of mountain numbers you can create without zeroes. Now you'll need to add to that the number that do contain zeroes. Use the same procedure hinted at above. Now your lowest digit (zero) can only go in how many places? - Your initial answer is $$\sum_{n=2}^{9} {n \choose 2}^2$$ presumably because if the middle digit is $n$ then you need to choose two of the $n$ smaller digits (remember $0$) for the left hand side and similarly for the righthand side. For your second question a similar argument gives $$\sum_{n=4}^{9} {n \choose 4}{4 \choose 2}$$ as you need to choose four of the $n$ smaller digits and then put two of these four on one side and the others on the other side. Divide the latter by the former for a probability if all such strings are equally probable. Some versions of the question might not permit a leading $0$ on the left. Others might allow something like 45673 as a "mountain number". Each of these would change the answer. -
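Since the question mentions a small program, here is a minimal brute-force count in Python. Digits may repeat between the two sides, and a leading zero is allowed, which is what the sums $\sum \binom{n}{2}^2$ and $6\sum \binom{n}{4}$ in the answers above count; it reproduces $2892$ and $1512$.

```python
# Brute-force count of 5-digit "mountain" patterns a < b < c > d > e over digits 0-9
# (leading zeros allowed, matching the counts quoted in the question), and of
# those with all five digits distinct.
from itertools import product

total = 0
no_repeats = 0
for a, b, c, d, e in product(range(10), repeat=5):
    if a < b < c and c > d > e:
        total += 1
        if len({a, b, c, d, e}) == 5:
            no_repeats += 1

print(total, no_repeats)      # expected: 2892 1512
print(no_repeats / total)     # the probability asked for, roughly 0.52
```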
http://mathoverflow.net/questions/19490/doing-geometry-using-feynman-path-integral/19514
## Doing geometry using Feynman Path Integral? I have often heard in the folklore that the Feynman Path Integral can be used to compute geometric invariants of a space. Coming from a background of studying Quantum Field Theory from books like that of Weinberg, I have myself used Feynman Path Integrals to compute scattering of particles. Earlier I had done courses in Riemannian Geometry and these days I am also doing courses in Algebraic Topology, and hence I think it would be very instructive if I could see how exactly the calculations of topological invariants that one does here are related to Feynman's ideas. It would be helpful if someone could give me references which explain (hopefully starting with simple examples!) how one can use path integrals in geometry. - 1 You might want to look at the survey article written by Daniel Freed about Chern-Simons theory, even though this is a little closer to topological quantum field theories than just path integrals. You can find it in "Daniel S. Freed. Remarks on Chern-Simons theory. Bull. Amer. Math. Soc. 46 (2009) 221-254.". – Ulrich Pennig Mar 27 2010 at 7:41 I believe the first significant result of this type was Witten's "proof" of the Atiyah-Singer index theorem using the Dirac operator on the free loop space. Sorry, I don't have the reference to hand. – Bruce Westbury Mar 27 2010 at 7:45 3 @Bruce: you are conflating two separate things. Witten's proof of the Atiyah-Singer index theorem uses supersymmetric quantum mechanics and can be made rigorous (Getzler did that). The Dirac operator on the free loop space (the so-called Dirac-Ramond operator) was a separate development and was used for some computations of elliptic genera. – José Figueroa-O'Farrill Mar 27 2010 at 12:50 Here's a link to an answer of José's that is also relevant here: mathoverflow.net/questions/14714/… – Steve Huntsman Mar 27 2010 at 14:26 1 Thanks a lot for all the amazing references that have been poured in. I was wondering if one can start off with a simple example. Like, say, can one re-derive the well-known fundamental group of a circle by doing a path-integral quantization? (Like of a particle confined on a circle?) Or, more generally, is a homotopy group of a space computable by doing a path integral on that space? Something along these lines to start off with? – Anirbit Mar 29 2010 at 13:11 ## 5 Answers Try: Witten, Quantum field theory and the Jones polynomial Witten, The index of the Dirac operator in loop space I have found both of these papers quite difficult to understand. I don't know any easier references, and would greatly appreciate it if anybody could suggest some. Anyway, I guess the basic idea is very simple: Take a manifold, consider some space of "fields" on the manifold (for example a space of sections of a vector bundle), do "integrals" over this space of fields. The results should be invariants of your manifold --- this is not always true, but this is the idea or the hope, anyway. Edit: I want to also add that (T)QFT has applications not just to geometry/topology but also to representation theory. For example check out these nice notes of David Ben-Zvi. - 1 Have you looked at the Geometry and Physics of knots, by Atiyah? It's a mathematical exposition of the first Witten article you mention. – Joel Fine Mar 27 2010 at 14:24 1 This part of the answer... 
"Anyway, I guess the basic idea is very simple: Take a manifold, consider some space of 'fields' on the manifold, do 'integrals' over this space of fields. The results should be invariants of your manifold." is incorrect. If this were the case, then 3+1 QED would be a topological field theory. It is not. One can verify it is not by the fact that one can see the difference between a coffee cup and a donut. :-) The basic requirement for a TQFT is that the action not depend upon the metric. This is the case in Donaldson-Witten theory, Chern-Simons theory... – Kelly Davis Apr 6 2010 at 20:23 @Kelly: Yes, I know what I said was incorrect... – Kevin Lin Apr 6 2010 at 20:57 Asking that the action does not depend on the metric is too strong. It is not true in Donaldson-Witten theory, for example. In the Witten type or cohomological type of TQFT, you find a class of observables that are independent of the metric, where this class of observables is actually a set of cohomology classes associated to an odd scalar symmetry of the theory. Even here, however, you can run into problems -- the standard example is the usual $b_2^+$ problems/interesting results occurring with Donaldson theory. – Aaron Bergman Jul 21 2010 at 21:49 Try the book "The Feynman Integral and Feynman's Operational Calculus", by G.W. Johnson and M.L. Lapidus. Chapter 20 contains some discussion of Witten's knot invariants, the Atiyah-Singer index theorem and some more. – Anton Fetisov Jan 30 2012 at 21:04 Witten, Supersymmetry & Morse theory is probably the most accessible reference on "physical methods" in topology & geometry. Witten, Two Dimensional Gauge Theory Revisited -- contains a path integral construction of the intersection numbers of the moduli space of flat connections Witten, Topological Quantum Field Theory -- contains a path integral construction of the Donaldson invariants Witten, Topological sigma models, and Witten, Mirror Manifolds and topological field theories -- use path integrals to compute the intersection numbers of moduli spaces of holomorphic maps. Anyone seeing a pattern yet? - 1 I was just waiting for you to post a great answer to this question! – Kevin Lin Mar 27 2010 at 23:57 Thanks. There was a lot of bait on MO today! – userN Mar 28 2010 at 3:51 You might find Witten's lectures on the Dirac index on manifolds and loop spaces from the IAS course on quantum field theory useful. - Thanks for this reference! – Anirbit Mar 29 2010 at 12:28 "Feynman Path Integral can be used to compute geometric invariants of a space." There are several different approaches to doing this. Let me try to explain one of them, but remember it is not the only one. The point is that first you should omit the word "Feynman"! Just integrals are useful for computing geometric invariants - for example, the Gauss-Bonnet theorem expresses the Euler characteristic as an integral over the manifold. The word "Feynman" appears when we consider infinite-dimensional manifolds - so we need to "integrate" over infinite-dimensional spaces. However, we are NOT really interested in the geometry of infinite-dimensional manifolds - we are interested in finite-dimensional manifolds. It appears that in some situations infinite-dimensional manifolds are either contractible to finite-dimensional ones, or there are heuristics which relate invariants of infinite-dimensional manifolds to finite-dimensional ones.
For example, if you consider the loop space of $M$, the manifold itself is embedded into $\mathrm{loops}(M)$ as the subset of constant loops. If you consider the rotations of loops, then the constant loops are the fixed points of this action; so in this case the manifold is infinite-dimensional but the fixed-point set is finite-dimensional, and by doing equivariant calculations we can get results about the finite-dimensional manifold. So the common thread is the following: in the finite-dimensional case you integrate closed forms on the manifold and get an invariant; in the Feynman setup certain integrals resemble closed forms on some infinite-dimensional space (the loop space or whatever), so by integrating them you get an invariant. (In some situations "closed form" means closed with respect to the BRST differential.) The classical examples are related to the Mathai-Quillen formalism and its interpretation in terms of QFT. Let me suggest to look at M. Blau, The Mathai-Quillen Formalism and Topological Field Theory, http://arxiv.org/abs/hep-th/9203026 and cite the abstract: "These lecture notes give an introductory account of an approach to cohomological field theory due to Atiyah and Jeffrey which is based on the construction of Gaussian shaped Thom forms by Mathai and Quillen. Topics covered are: an explanation of the Mathai-Quillen formalism for finite dimensional vector bundles; the definition of regularized Euler numbers of infinite dimensional vector bundles; interpretation of supersymmetric quantum mechanics as the regularized Euler number of loop space; the Atiyah-Jeffrey interpretation of Donaldson theory; the construction of topological gauge theories from infinite dimensional vector bundles over spaces of connections." - @Alexander Chervov, your answer is always wonderful! – shu Jan 30 2012 at 21:00 @Shu Thanks so much for your kind words! – Alexander Chervov Jan 31 2012 at 7:17 If you read French, Henniart's survey Les inégalités de Morse. Séminaire Bourbaki, 26 (1983--1984), Exposé No. 617, 19 p. might be a good place to start. He explains Witten's analytic proof of the Morse inequalities and calls it natural and elegant. - 2 If you read Russian - that is also fine :) It was translated into Russian, in the series of translations of selected Bourbaki seminars published by "Mir". – Alexander Chervov Jan 30 2012 at 7:12
http://mathoverflow.net/revisions/52041/list
## Return to Answer 3 deleted 1 characters in body This is really a question for http://www.or-exchange.com (since the answer requires practical know-how rather than mathematical abillity). However, since linear programming questions are of some interest to some folks here, I'll make an attempt at an answer that is not completely useless. There are many ways of formulating an absolute function in an LP (some bad, some good). I'm going to discuss the bad ways, in case you are tempted to use them. The bad ways You can try these approaches, but bear in mind that they are not rigorous unless the absolute function is the sole term in the objective (I believe the $l_{1}$-norm LP in compressed sensing fulfills this criterion). As far as I am aware, there is no 100% reliable way of formulating an absolute function that appears in the constraint set in an LP and if there exists a competing objective $\Phi$. • The standard LP approach that is often used (but IMHO, is a poor method) is as follows: introduce a dummy variable $z$ (that is, $z=|C - D|$) and nonnegative slack variables $s_{0},s_{1}$. Write the LP as below: ```$$ \begin{align} &\min \Phi + z\\ s.t.\;\; & z = s_{0} + s_{1}\\ & C - D = s_{0} - s_{1}\\ & s_{0} \geq 0, s_{1} \geq 0\\ & A - B = C - D\\ &A + B = z \end{align} $$``` where $\Phi$ is the original objective. In theory, this seems like it will work, but in practice, depending on how the other constraints are posed, and the "downward pressure" of $z$ with respect to $\Phi$, this might not always give you the correct answer. For instance, if $\Phi$ makes the absolute function $z$ tend toward a non-minimum value, it will depend on the weighting between $z$ and $\Phi$ to determine which term "wins". • A similar (but equally flawed) approach is to use the fact that $|C - D| = \max(C-D,D-C)$, and to write this: ```$$ \begin{align} &\min \Phi + z\\ s.t.\;\; & z \geq D-C \\ & z \geq C-D\\ & A - B = C - D\\ &A + B = z \end{align} $$``` The good ways (but your problem will no longer remain an LP) The only reliable way to formulate an absolute function in the constraint set is to reformulate your LP as MIP (Mixed Integer Program). • If your solver supports indicator constraints, you can write the following: ```$$ \begin{align} &\min \Phi\\ s.t.\;\; & z \geq D - C\\ & z \geq C - D\\ & z \leq D - C\text{ or }z\leq C - D \\ & A - B = C - D\\ &A + B = z \end{align} $$``` where the "or" is handled as an indicator constraint. Most solvers will use a Big-M formulation to convert the problem into an MIP. • The best way is to use a mixed-integer (MIP) formulation. In order to do that, you need to know the upper bound for $C \in [0,C^{U}]$. (Since $D$ is a known, we'll assume it is constant.) If you have no idea what the upper bound is, choose an adequately large value for $C^U$, bearing in mind that very large values of $C^{U}$ can cause conditioning problems. Also, the larger the $C^{U}$, the poorer your LP-relaxation for branching will be, which in turn will adversely impact the performance of the solution process. So choose $C^{U}$ carefully. First, define an upper-bound $U$ as follows, $U = \max(D,C^{U})$. Then write the following constraints: ```$$ \begin{align} &\min \Phi\\ s.t.\;\; & 0 \leq C \leq C^{U}\\ & 0 \leq z - (C - D) \leq (2U)\delta_{1}\\ & 0 \leq z - (D - C) \leq (2U)\delta_{2}\\ & \delta_{1} + \delta_{2} = 1\\ & A - B = C - D\\ &A + B = z \end{align} $$``` where `$\delta_{1},\delta_{2} \in \{0,1\}$` (binary variables). 
2 added 132 characters in body This is really a question for http://www.or-exchange.com (since the answer requires practical know-how rather than mathematical abillity). However, since linear programming questions are of some interest to some folks here, I'll make an attempt at an answer that is not completely useless. There are many ways of formulating an absolute function in an LP (some bad, some good). I'm going to discuss the bad ways, in case you are tempted to use them. The bad ways You can try these approaches, but bear in mind that they are not rigorous unless the absolute function is the sole term in the objective (I believe the $l_{1}$-norm LP in compressed sensing fulfills this criterion). As far as I am aware, there is no 100% reliable way of formulating an absolute function that appears in the constraint set in an LP and there exists a competing objective $\Phi$. • The standard LP approach that is often used (but IMHO, is a poor method) is as follows: introduce a dummy variable $z$ (that is, $z=|C - D|$) and nonnegative slack variables $s_{0},s_{1}$. Write the LP as below: ```$$ \begin{align} &\min \Phi + z\\ s.t.\;\; & z = s_{0} + s_{1}\\ & C - D = s_{0} - s_{1}\\ & s_{0} \geq 0, s_{1} \geq 0 0\\ & A - B = C - D\\ &A + B = z \end{align} $$``` where $\Phi$ is the original objective. In theory, this seems like it will work, but in practice, depending on how the other constraints are posed, and the "downward pressure" of $z$ with respect to $\Phi$, this might not always give you the correct answer. For instance, if $\Phi$ makes the absolute function $z$ tend toward a non-minimum value, it will depend on the weighting between $z$ and $\Phi$ to determine which term "wins". • A similar (but equally flawed) approach is to use the fact that $|C - D| = \max(C-D,D-C)$, and to write this: ```$$ \begin{align} &\min \Phi + z\\ s.t.\;\; & z \geq D-C \\ & z \geq C-D C-D\\ & A - B = C - D\\ &A + B = z \end{align} $$``` The good ways (but your problem will no longer remain an LP) The only reliable way to formulate an absolute function in the constraint set is to reformulate your LP as MIP (Mixed Integer Program). • If your solver supports indicator constraints, you can write the following: ```$$ \begin{align} &\min \Phi\\ s.t.\;\; & z \geq D - C\\ & z \geq C - D\\ & z \leq D - C\text{ or }z\leq C - D \\ & A - B = C - D\\ &A + B = z \end{align} $$``` where the "or" is handled as an indicator constraint. Most solvers will use a Big-M formulation to convert the problem into an MIP. • The best way is to use a mixed-integer (MIP) formulation. In order to do that, you need to know the upper bound for $C \in [0,C^{U}]$. (Since $D$ is a known, we'll assume it is constant.) If you have no idea what the upper bound is, choose an adequately large value for $C^U$, bearing in mind that very large values of $C^{U}$ can cause conditioning problems. Also, the larger the $C^{U}$, the poorer your LP-relaxation for branching will be, which in turn will adversely impact the performance of the solution process. So choose $C^{U}$ carefully. First, define an upper-bound $U$ as follows, $U = \max(D,C^{U})$. Then write the following constraints: ```$$ \begin{align} &\min \Phi\\ s.t.\;\; & 0 \leq C \leq C^{U}\\ & 0 \leq z - (C - D) \leq (2U)\delta_{1}\\ & 0 \leq z - (D - C) \leq (2U)\delta_{2}\\ & \delta_{1} + \delta_{2} = 1 1\\ & A - B = C - D\\ &A + B = z \end{align} $$``` where `$\delta_{1},\delta_{2} \in \{0,1\}$` (binary variables). 
1 This is really a question for http://www.or-exchange.com (since the answer requires practical know-how rather than mathematical abillity). However, since linear programming questions are of some interest to some folks here, I'll make an attempt at an answer that is not completely useless. There are many ways of formulating an absolute function in an LP (some bad, some good). I'm going to discuss the bad ways, in case you are tempted to use them. The bad ways You can try these approaches, but bear in mind that they are not rigorous unless the absolute function is the sole term in the objective (I believe the $l_{1}$-norm LP in compressed sensing fulfills this criterion). As far as I am aware, there is no 100% reliable way of formulating an absolute function that appears in the constraint set in an LP and there exists a competing objective $\Phi$. • The standard LP approach that is often used (but IMHO, is a poor method) is as follows: introduce a dummy variable $z$ (that is, $z=|C - D|$) and nonnegative slack variables $s_{0},s_{1}$. Write the LP as below: ```$$ \begin{align} &\min \Phi + z\\ s.t.\;\; & z = s_{0} + s_{1}\\ & C - D = s_{0} - s_{1}\\ & s_{0} \geq 0, s_{1} \geq 0 \end{align} $$``` where $\Phi$ is the original objective. In theory, this seems like it will work, but in practice, depending on how the other constraints are posed, and the "downward pressure" of $z$ with respect to $\Phi$, this might not always give you the correct answer. For instance, if $\Phi$ makes the absolute function $z$ tend toward a non-minimum value, it will depend on the weighting between $z$ and $\Phi$ to determine which term "wins". • A similar (but equally flawed) approach is to use the fact that $|C - D| = \max(C-D,D-C)$, and to write this: ```$$ \begin{align} &\min \Phi + z\\ s.t.\;\; & z \geq D-C \\ & z \geq C-D \end{align} $$``` The good ways (but your problem will no longer remain an LP) The only reliable way to formulate an absolute function in the constraint set is to reformulate your LP as MIP (Mixed Integer Program). • If your solver supports indicator constraints, you can write the following: ```$$ \begin{align} &\min \Phi\\ s.t.\;\; & z \geq D - C\\ & z \geq C - D\\ & z \leq D - C\text{ or }z\leq C - D \end{align} $$``` where the "or" is handled as an indicator constraint. Most solvers will use a Big-M formulation to convert the problem into an MIP. • The best way is to use a mixed-integer (MIP) formulation. In order to do that, you need to know the upper bound for $C \in [0,C^{U}]$. (Since $D$ is a known, we'll assume it is constant.) If you have no idea what the upper bound is, choose an adequately large value for $C^U$, bearing in mind that very large values of $C^{U}$ can cause conditioning problems. Also, the larger the $C^{U}$, the poorer your LP-relaxation for branching will be, which in turn will adversely impact the performance of the solution process. So choose $C^{U}$ carefully. First, define an upper-bound $U$ as follows, $U = \max(D,C^{U})$. Then write the following constraints: ```$$ \begin{align} &\min \Phi\\ s.t.\;\; & 0 \leq C \leq C^{U}\\ & 0 \leq z - (C - D) \leq (2U)\delta_{1}\\ & 0 \leq z - (D - C) \leq (2U)\delta_{2}\\ & \delta_{1} + \delta_{2} = 1 \end{align} $$``` where `$\delta_{1},\delta_{2} \in \{0,1\}$` (binary variables).
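The mixed-integer formulation above can be sketched in a few lines with a modelling library. The snippet below assumes the PuLP package (an assumption on my part, not something used in the answer); $D$, the bound $C^U$, and the objective are illustrative values only. The objective rewards a large $z$, so the two inequalities $z \ge C - D$ and $z \ge D - C$ alone would not pin $z$ down, while the binary constraints force $z = |C - D|$.

```python
# Minimal sketch of the big-M / binary formulation for z = |C - D| (assumes the
# PuLP package: pip install pulp).  D, C_UPPER and the objective are made-up
# illustrative values.  The objective *rewards* a large z, so the naive
# formulation (just z >= C - D and z >= D - C) would not pin z down; the
# binary variables do.
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, value

D = 7.0
C_UPPER = 10.0
U = max(D, C_UPPER)

prob = LpProblem("abs_value_demo", LpMinimize)
C = LpVariable("C", lowBound=0, upBound=C_UPPER)
z = LpVariable("z", lowBound=0)
d1 = LpVariable("d1", cat=LpBinary)
d2 = LpVariable("d2", cat=LpBinary)

prob += -z                          # objective: minimize -z, i.e. push z as high as possible
prob += z - (C - D) >= 0
prob += z - (C - D) <= 2 * U * d1
prob += z - (D - C) >= 0
prob += z - (D - C) <= 2 * U * d2
prob += d1 + d2 == 1

prob.solve()
print("C =", value(C), " z =", value(z), " |C - D| =", abs(value(C) - D))
# expected: C = 0, z = 7 -- z equals |C - D| even though the objective would
# prefer z to be larger.
```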
http://mathoverflow.net/revisions/11098/list
## Return to Answer 6 Corrected claims about Hochschild (co)homology; updated a reference. The statement that $HF^{\ast}(X,X)$ is isomorphic to $QH^\ast(X)$ is a version of the Piunikhin-Salamon-Schwarz (PSS) isomorphism (proved, under certain assumptions, in McDuff-Salamon's book "J-holomorphic curves in symplectic topology"). PSS is a canonical ring isomorphism from $QH^{\ast}(X)$ to the Hamiltonian Floer cohomology of $X$, and the latter can be compared straightforwardly to the Lagrangian Floer cohomology of the diagonal. Now to Hochschild cohomology of the Fukaya category $F(X)$. There's a geometrically-defined map $QH^{\ast}(X) \to HH^{\ast}(F(X))$, due to Seidel in a slightly different setting (see his "Fukaya categories and deformations"), inspired by the slightly vague but prescient remarks of Kontsevich from 1994. One could define this map without too much trouble, say, for monotone manifolds. It's constructed via moduli spaces of pseudo-holomorphic polygons subject to Lagrangian boundary conditions, with an incidence condition of an interior marked point with chosen cycles in $X$. The question is whether this is an isomorphism. This statement is open, and will probably not be proven true in the near future, for a simple reason: $QH^*(X)$ is non-trivial, while we have no general construction of Floer-theoretically essential Lagrangians. Progress on the significance of this map is in the pipeline, by others, and I'm not going to steal their thunder by discussing it here... There are two positive things I can say. One is that Kontsevich's heuristics, which involve interpreting $HH^{\ast}$ as deformations of the identity functor, now have a natural setting in the quilted Floer theory of Mau-Wehrheim-Woodward (in progress). This says that the Fukaya category $F(X\times X)$ naturally embeds into the $A_\infty$-category of $A_\infty$-endofunctors of $F(X)$. The other is that for Weinstein manifolds (a class of exact symplectic manifolds with contact type boundary), there seems to be an analogous map from the symplectic cohomology $SH^{\ast}(X)$ (a version of Hamiltonian Floer cohomology on the conical completion of $X$) to $HH^{\ast}$ of the wrapped Fukaya category, which involves non-compact Lagrangians. (Edit August 2010: I was careless about homology versus cohomology. I should have said that $HH_{\ast}$ maps to $SH^{\ast}$.) Proving that this is an isomorphism is more feasible because one may be able to prove that Weinstein manifolds admit Lefschetz fibrations. The Lefschetz thimbles are then objects in the wrapped Fukaya category. One might then proceed as follows. The thimbles for a Lefschetz fibration should generate the triangulated envelope of the wrapped category (maybe I should split-close here; not sure) - this would be an enhancement of results from Seidel's book. Consequently, one should be able to compute $HH^{\ast}$ HH_{\ast}$just in terms of$HH^{\ast}$HH_{\ast}$ for the full subcategory generated by the thimbles. The latter should be related to $SH^{\ast}$ by ideas closely related to those in Seidel's paper "Symplectic homology as Hochschild homology". What could be simpler? ADDED: Kevin asks for evidence for or against $QH^{\ast}\to HH^{\ast}$ being an isomorphism. I don't know any evidence contra. Verifying it for a given $X$ would presumably go in two steps: (i) identify generators for the (triangulated envelope of) $F(X)$, and (ii) show that the map from $SH^{\ast}$ QH^{\ast}$to$HH^{\ast}$for the full subcategory that they generate is an isomorphism. 
There's been lots of progress on (i), less on (ii), though the case of toric Fanos has been studied by Fukaya-Oh-Ohta-Ono, and in this case mirror symmetry makes predictions for (i) which I expect will soon be proved. In simply connected disc-cotangent bundles, the zero-section generates, and both$HH^{\ast}$HH_{\ast}$ for the compact Fukaya category and $SH^{\ast}$ are isomorphic to loop-space homology, but I don't think it's known that the resulting isomorphism is Seidel's. Added August 2010: Abouzaid (1001.4593) has made major progress in this area. 5 added 807 characters in body The statement that $HF^{\ast}(X,X)$ is isomorphic to $QH^\ast(X)$ is a version of the Piunikhin-Salamon-Schwarz (PSS) isomorphism (proved, under certain assumptions, in McDuff-Salamon's book "J-holomorphic curves in symplectic topology"). The PSS is a canonical ring isomorphism says that from $QH^{\ast}(X)$ is canonically isomorphic to the Hamiltonian Floer cohomology of $X$, and the latter can be compared straightforwardly to the Lagrangian Floer cohomology of the diagonal. Now to Hochschild cohomology of the Fukaya category $F(X)$. There is There's a geometrically-defined map $QH^{\ast}(X) \to HH^{\ast}(F(X))$, due to Seidel in a slightly different setting (see his "Fukaya categories and deformations"), inspired by the slightly vague but prescient remarks of Kontsevich from 1994. One could define this map without too much trouble, say, for monotone manifolds. It's constructed via moduli spaces of pseudo-holomorphic polygons subject to Lagrangian boundary conditions, with an incidence condition of an interior marked point with chosen cycles in $X$. The question is whether this is an isomorphism. ADDED: Kevin asks for evidence for or against $QH^{\ast}\to HH^{\ast}$ being an isomorphism. I don't know any evidence contra. Verifying it for a given $X$ would presumably go in two steps: (i) identify generators for the (triangulated envelope of) $F(X)$, and (ii) show that the map from $SH^{\ast}$ to $HH^{\ast}$ for the full subcategory that they generate is an isomorphism. There's been lots of progress on (i), less on (ii), though the case of toric Fanos has been studied by Fukaya-Oh-Ohta-Ono, and in this case mirror symmetry makes predictions for (i) which I expect will soon be proved. In simply connected disc-cotangent bundles, the zero-section generates, and both $HH^{\ast}$ and $SH^{\ast}$ are isomorphic to loop-space homology, but I don't think it's known that the resulting isomorphism is Seidel's. 4 inserted asterisks in math The statement that $HF(X,X)$ HF^{\ast}(X,X)$is isomorphic to$QH(X)$QH^\ast(X)$ is a version of the Piunikhin-Salamon-Schwarz (PSS) isomorphism (proved, under certain assumptions, in McDuff-Salamon's book "J-holomorphic curves in symplectic topology"). The PSS isomorphism says that $QH^*(X)$ QH^{\ast}(X)$is canonically isomorphic to the Hamiltonian Floer cohomology of$X\$, and the latter can be compared straightforwardly to the Lagrangian Floer cohomology of the diagonal. Now to Hochschild cohomology of the Fukaya category $F(X)$. There is a geometrically-defined map $QH(X) QH^{\ast}(X) \to HH^*(F(X))$HH^{\ast}(F(X))$, due to Seidel in a slightly different setting (see his "Fukaya categories and deformations"), inspired by the slightly vague but prescient remarks of Kontsevich from 1994. One could define this map without too much trouble, say, for monotone manifolds. 
It's constructed via moduli spaces of pseudo-holomorphic polygons subject to Lagrangian boundary conditions, with an incidence condition of an interior marked point with chosen cycles in$X\$. The question is whether this is an isomorphism. This statement is open, and will probably not be proven true in the near future, for a simple reason: $QH^*(X)$ is non-trivial, while we have no general construction of Floer-theoretically essential Lagrangians. Progress on the significance of this map is in the pipeline, by others, and I'm not going to steal their thunder by discussing it here... There are two positive things I can say. One is that Kontsevich's heuristics, which involve interpreting $HH$ HH^{\ast}$as deformations of the identity functor, now have a natural setting in the quilted Floer theory of Mau-Wehrheim-Woodward (in progress). This says that the Fukaya category$F(X\times X)$naturally embeds into the$A_\infty$-category of$A_\infty$-endofunctors of$F(X)\$. The other is that for Weinstein manifolds (a class of exact symplectic manifolds with contact type boundary), there seems to be an analogous map from the symplectic cohomology $SH(X)$ SH^{\ast}(X)$(a version of Hamiltonian Floer cohomology on the conical completion of$X$) to$HH^*$HH^{\ast}$ of the wrapped Fukaya category, which involves non-compact Lagrangians. Proving that this is an isomorphism is more feasible because one may be able to prove that Weinstein manifolds admit Lefschetz fibrations. The Lefschetz thimbles are then objects in the wrapped Fukaya category. One might then proceed as follows. The thimbles for a Lefschetz fibration should generate the triangulated envelope of the wrapped category (maybe I should split-close here; not sure) - this would be an enhancement of results from Seidel's book. Consequently, one should be able to compute $HH$ HH^{\ast}$just in terms of$HH$HH^{\ast}$ for the full subcategory generated by the thimbles. The latter should be related to $SH$ SH^{\ast}\$ by ideas closely related to those in Seidel's paper "Symplectic homology as Hochschild homology". What could be simpler? 3 Removed superscripts ^* which were not displaying properly.; added 2 characters in body; deleted 4 characters in body The statement that $HF^*(X,X)$HF(X,X)$is isomorphic to$QH^*(X)$QH(X)$ is a version of the Piunikhin-Salamon-Schwarz (PSS) isomorphism (proved, under certain assumptions, in McDuff-Salamon's book "J-holomorphic curves in symplectic topology"). The PSS isomorphism says that $QH^*(X)$ is canonically isomorphic to the Hamiltonian Floer cohomology of $X$, and the latter can be compared straightforwardly to the Lagrangian Floer cohomology of the diagonal. Now to Hochschild cohomology of the Fukaya category $F(X)$. There is a geometrically-defined map $QH^*(X) QH(X) \to HH^*(F(X))$, due to Seidel in a slightly different setting (see his "Fukaya categories and deformations"), inspired by the slightly vague but prescient remarks of Kontsevich from 1994. One could define this map without too much trouble, say, for monotone manifolds. It's constructed via moduli spaces of pseudo-holomorphic polygons subject to Lagrangian boundary conditions, with an incidence condition of an interior marked point with chosen cycles in $X$. The question is whether this is an isomorphism. This statement is open, and will probably not be proven true in the near future, for a simple reason: $QH^*(X)$ is non-trivial, while we have no general construction of Floer-theoretically essential Lagrangians. 
Progress on the significance of this map is in the pipeline, by others, and I'm not going to steal their thunder by discussing it here... There are two positive things I can say. One is that Kontsevich's heuristics, which involve interpreting $HH^*$HH$as deformations of the identity functor, now have a natural setting in the quilted Floer theory of Mau-Wehrheim-Woodward (in progress). This says that the Fukaya category$F(X\times X)$naturally embeds into the$A_\infty$-category of$A_\infty$-endofunctors of$F(X)\$. The other is that for Weinstein manifolds (a class of exact symplectic manifolds with contact type boundary), there seems to be an analogous map from the symplectic cohomology $SH^*(X)$SH(X)$(a version of Hamiltonian Floer cohomology on the conical completion of$X$) to$HH^*\$ of the wrapped Fukaya category, which involves non-compact Lagrangians. Proving that this is an isomorphism is more feasible because one may be able to prove that Weinstein manifolds admit Lefschetz fibrations. The Lefschetz thimbles are then objects in the wrapped Fukaya category. One might then proceed as follows. The thimbles for a Lefschetz fibration should generate the triangulated envelope of the wrapped category (maybe I should split-close here; not sure) - this would be an enhancement of results from Seidel's book. Consequently, one should be able to compute $HH^*$HH$just in terms of$HH^*$HH$ for the full subcategory generated by the thimbles. The latter should be related to $SH^*$SH\$ by ideas closely related to those in Seidel's paper "Symplectic homology as Hochschild homology". What could be simpler? 2 Tricky Preview The statement that `$HF^(X,X)$ HF^*(X,X)$` is isomorphic to `$QH^``(X)$ QH^*(X)$` is a version of the Piunikhin-Salamon-Schwarz (PSS) isomorphism (proved, under certain assumptions, in McDuff-Salamon's book "J-holomorphic curves in symplectic topology"). The PSS isomorphism says that $QH^*(X)$ is canonically isomorphic to the Hamiltonian Floer cohomology of $X$, and the latter can be compared straightforwardly to the Lagrangian Floer cohomology of the diagonal. Now to Hochschild cohomology of the Fukaya category $F(X)$. There is a geometrically-defined map `$QH^(X) QH^*(X) \to HH^(F(X))$, HH^*(F(X))$`, due to Seidel in a slightly different setting (see his "Fukaya categories and deformations"), inspired by the slightly vague but prescient remarks of Kontsevich from 1994. One could define this map without too much trouble, say, for monotone manifolds. It's constructed via moduli spaces of pseudo-holomorphic polygons subject to Lagrangian boundary conditions, with an incidence condition of an interior marked point with chosen cycles in $X$. The question is whether this is an isomorphism. This statement is open, and will probably not be proven true in the near future, for a simple reason: $QH^*(X)$ is non-trivial, while we have no general construction of Floer-theoretically essential Lagrangians. Progress on the significance of this map is in the pipeline, by others, and I'm not going to steal their thunder by discussing it here... There are two positive things I can say. One is that Kontsevich's heuristics, which involve interpreting `$HH^*$` as deformations of the identity functor, now have a natural setting in the quilted Floer theory of Mau-Wehrheim-Woodward (in progress). This says that the Fukaya category $F(X\times X)$ naturally embeds into the $A_\infty$-category of $A_\infty$-endofunctors of $F(X)$. 
The other is that for Weinstein manifolds (a class of exact symplectic manifolds with contact type boundary), there seems to be an analogous map from the symplectic cohomology `$SH^(X)$ SH^*(X)$` (a version of Hamiltonian Floer cohomology on the conical completion of $X$) to \$HH^`$HH^*$` of the wrapped Fukaya category, which involves non-compact Lagrangians. Proving that this is an isomorphism is more feasible because one may be able to prove that Weinstein manifolds admit Lefschetz fibrations. The Lefschetz thimbles are then objects in the wrapped Fukaya category. One might then proceed as follows. The thimbles for a Lefschetz fibration should generate the triangulated envelope of the wrapped category (maybe I should split-close here; not sure) - this would be an enhancement of results from Seidel's book. Consequently, one should be able to compute \$HH^`$HH^*$` just in terms of `$HH^``$ HH^*$` for the full subcategory generated by the thimbles. The latter should be related to `$SH^*$` by ideas closely related to those in Seidel's paper "Symplectic homology as Hochschild homology". What could be simpler? 1 The statement that $HF^(X,X)$ is isomorphic to $QH^(X)$ is a version of the Piunikhin-Salamon-Schwarz (PSS) isomorphism (proved, under certain assumptions, in McDuff-Salamon's book "J-holomorphic curves in symplectic topology"). The PSS isomorphism says that $QH^*(X)$ is canonically isomorphic to the Hamiltonian Floer cohomology of $X$, and the latter can be compared straightforwardly to the Lagrangian Floer cohomology of the diagonal. Now to Hochschild cohomology of the Fukaya category $F(X)$. There is a geometrically-defined map $QH^(X) \to HH^(F(X))$, due to Seidel in a slightly different setting (see his "Fukaya categories and deformations"), inspired by the slightly vague but prescient remarks of Kontsevich from 1994. One could define this map without too much trouble, say, for monotone manifolds. It's constructed via moduli spaces of pseudo-holomorphic polygons subject to Lagrangian boundary conditions, with an incidence condition of an interior marked point with chosen cycles in $X$. The question is whether this is an isomorphism. This statement is open, and will probably not be proven true in the near future, for a simple reason: $QH^*(X)$ is non-trivial, while we have no general construction of Floer-theoretically essential Lagrangians. Progress on the significance of this map is in the pipeline, by others, and I'm not going to steal their thunder by discussing it here... There are two positive things I can say. One is that Kontsevich's heuristics, which involve interpreting $HH^*$ as deformations of the identity functor, now have a natural setting in the quilted Floer theory of Mau-Wehrheim-Woodward (in progress). This says that the Fukaya category $F(X\times X)$ naturally embeds into the $A_\infty$-category of $A_\infty$-endofunctors of $F(X)$. The other is that for Weinstein manifolds (a class of exact symplectic manifolds with contact type boundary), there seems to be an analogous map from the symplectic cohomology $SH^(X)$ (a version of Hamiltonian Floer cohomology on the conical completion of $X$) to $HH^$ of the wrapped Fukaya category, which involves non-compact Lagrangians. Proving that this is an isomorphism is more feasible because one may be able to prove that Weinstein manifolds admit Lefschetz fibrations. The Lefschetz thimbles are then objects in the wrapped Fukaya category. One might then proceed as follows. 
The thimbles for a Lefschetz fibration should generate the triangulated envelope of the wrapped category (maybe I should split-close here; not sure) - this would be an enhancement of results from Seidel's book. Consequently, one should be able to compute $HH^$ just in terms of $HH^$ for the full subcategory generated by the thimbles. The latter should be related to $SH^*$ by ideas closely related to those in Seidel's paper "Symplectic homology as Hochschild homology". What could be simpler?
http://math.stackexchange.com/questions/272193/short-proof-for-the-non-hamiltonicity-of-the-petersen-graph
Short proof for the non-Hamiltonicity of the Petersen Graph It is well known that the Petersen Graph is not Hamiltonian. I can show it by case distinction, which is not too long - but it is not very elegant either. Is there a simple (short) argument that the Petersen Graph does not contain a Hamiltonian cycle? - The case distinction can be reduced immensely if you are allowed to use the fact that the Petersen graph is 3-arc-transitive. – Jernej Jan 7 at 16:04 2 Answers If you can use the symmetry (as Jernej suggests), the case argument has a lot going for it. There is a proof using interlacing. Observe that if $P$ has a Hamilton cycle then its line graph $L(P)$ contains an induced copy of $C_{10}$. Eigenvalue interlacing then implies that $\theta_r(C_{10}) \le \theta_r(L(P))$. But $\theta_7(C_{10}) \approx -0.618$ and $\theta_7(L(P))=-1$. [I have forgotten who this argument is due to. There are a number of variants of it too.] - Motivated by the Wikipedia page, I will add an answer to the question myself. It still involves a little case distinction, but it is small. We know that the Petersen graph is 3-regular and has girth 5. Suppose it has a Hamiltonian cycle $H$, and draw the graph so that $H$ is drawn as a cycle. The edges that are not in $H$ are chords of $H$. If there were two chords that did not intersect, then these two chords would be part of two disjoint 5-cycles. But in that case the two chords and the two edges of $H$ not in the 5-cycles would form a 4-cycle. Hence all chords cross pairwise. The only possibility for this is that every chord joins a pair of antipodal vertices of $H$. But then two adjacent vertices of $H$, together with their antipodes, form a 4-cycle, and again we have a contradiction. -
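For readers who prefer to see the statement confirmed by exhaustive search, here is a short self-contained Python check: it builds the Petersen graph as the Kneser graph $K(5,2)$ (vertices are the 2-element subsets of $\{0,\dots,4\}$, joined when disjoint) and backtracks over all candidate Hamiltonian cycles.

```python
# Exhaustive check that the Petersen graph has no Hamiltonian cycle.
from itertools import combinations

verts = [frozenset(c) for c in combinations(range(5), 2)]        # 10 vertices
adj = {v: {w for w in verts if not (v & w)} for v in verts}      # disjoint pairs are adjacent

def has_hamiltonian_cycle():
    start = verts[0]
    path = [start]
    used = {start}

    def extend():
        if len(path) == len(verts):
            return start in adj[path[-1]]   # can we close the cycle?
        for w in adj[path[-1]]:
            if w not in used:
                used.add(w)
                path.append(w)
                if extend():
                    return True
                path.pop()
                used.remove(w)
        return False

    return extend()

print(has_hamiltonian_cycle())   # expected: False
```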
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9539483785629272, "perplexity_flag": "head"}
http://mathoverflow.net/questions/60101/density-of-irreducible-polynomials-in-mathbbzx/60114
## Density of Irreducible Polynomials in $\mathbb{Z}[x]$

Recently I was thinking about some questions concerning $\mathbb{Z}[x]$ and realized that they might be a bit easier if I knew the relative densities of reducible polynomials. Let $P_d$ denote the set of all elements of $\mathbb{Z}[x]$ of degree $\leq d$. My basic question is:

Question: What fraction of elements of $P_d$ factor in $\mathbb{Z}[x]$?

The fraction $f_d$ of elements of $P_d$ which factor satisfies $f_d\geq 1-\zeta(d+1)^{-1}$ (with $\zeta$ the Riemann Zeta Function) since this is the count of elements which factor as a constant times an element of $P_d$. When $d = 0,1$ one obviously has equality, but what is not clear to me is whether $f_d = 1-\zeta(d+1)^{-1}$ holds for all $d$.

I thought about the analogous problem in $\mathbb{F}_p[x]$ (where one ignores constant factors), but in this case as $d\rightarrow\infty$ one has $f_d\rightarrow 1$. If this behavior carries over to $\mathbb{Z}[x]$ then as $d$ grows large, almost every polynomial of degree $d$ factors, which seems absurd; therefore looking at this question in $\mathbb{F}_p[x]$ doesn't seem to help much. So, letting $f_d = 1-\zeta(d+1)^{-1}+\varepsilon(d)$, I am wondering if the correction term $\varepsilon$ equals zero or, if not, what is $\varepsilon(d)$ and how would one derive it?

- Perhaps I am missing something, but how do you define the 'fraction' (as P_d is infinite)? – quid Mar 30 2011 at 18:42 The conventional way to define the fraction is as the limit of the fraction of reducible elements in a ball centered at the origin as the radius of the ball tends to infinity. – ARupinski Mar 30 2011 at 18:45 1 Actually the conventional way would be to use the height of a polynomial. I.e. fix a bound for coefficients, allowing you to count how many there are; at least that is the naive form of height. Why not look at monic polynomials first? – Charles Matthews Mar 30 2011 at 18:57 I deleted a comment asking 'which norm' as CM's comment basically answers this. – quid Mar 30 2011 at 19:00 1 See also this question: mathoverflow.net/questions/58397/… – Xandi Tuni Mar 30 2011 at 21:03

## 3 Answers

I restore the following in clarified form as an over-sized comment; in a temporary (at least I hope it was only temporary) state of confusion I posted it as an answer, which it is not (I was not careful regarding the different notions of ir/reducibility, sorry about that).

The density of integral polynomials of fixed degree that are reducible as rational polynomials, i.e. that are the product of two non-constant integral polynomials, is $0$. (This contains the statement for monic ones as a special case.) More precisely, the order of the number of such polynomials of height at most $t$ is `$t^d$` for $d \ge 3$. That is, it is known that for $d \ge 3$ there exists a constant `$C_d$` such that if `$R_d(t)$` denotes the number of reducible polynomials with height at most $t$ and degree $d$ then `$$ t^d \le |R_d(t)| \le C_d t^d $$` For $d= 2$ one has an additional logarithmic factor; the order in this case is `$ t^2 \log t $`. This and related results are proved by G. Kuba in 'On the distribution of reducible polynomials, Math. Slovaca 59 (2009), no. 3, 349–356.' (The novelty of the paper is the quality of the estimate; the density result is much older, an upper bound of the form `$t^d (\log t)^2$` seems to be mentioned in Polya and Szego.)
- 1 Looks like exactly what I expected would be the case. Thanks. – ARupinski Mar 31 2011 at 4:23

Take a look at the book: An Introduction to Sieve Methods and Their Applications by Alina Cojocaru and Maruti Ram Murty. In section 4.3 the Turan Sieve is used to prove that the probability that a random polynomial with integer coefficients is irreducible is 1. It is available online at Google Books.

- Although that section only deals with monic polynomials with positive coefficients, it seems that the basic argument should be adaptable to arbitrary polynomials, and with a little work one ought to be able to verify that $\varepsilon(d) = 0$ for all $d$. I'll have to look at that a bit. – ARupinski Mar 30 2011 at 21:21

Actually, a lot more is true: a random polynomial has Galois group the full symmetric group (a result of van der Waerden), and this is true under various restrictions, and with various error terms. For a summary and some related results, see Igor Rivin, Walks on groups, counting reducible matrices, polynomials, and surface and free group automorphisms. Duke Math. J. 142 (2008), no. 2, 353–379.

- Thanks for that reference, I will have a look at it as well. – ARupinski Mar 31 2011 at 16:00
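The density claims above are easy to probe empirically. The following small experiment (mine, not from the thread) samples random integer polynomials of a given degree and bounded height and estimates the fraction that are irreducible over $\mathbb{Q}$; it assumes SymPy is available, and the parameter names (`degree`, `height`, `trials`) are just illustrative choices.

```python
import random
import sympy as sp

x = sp.symbols('x')

def estimate_irreducible_fraction(degree, height, trials=500, seed=0):
    """Rough Monte Carlo estimate of the fraction of degree-`degree` integer
    polynomials with coefficients in [-height, height] that are irreducible over Q."""
    rng = random.Random(seed)
    irreducible = 0
    for _ in range(trials):
        # nonzero leading coefficient, so the degree is exactly `degree`
        lead = rng.choice([c for c in range(-height, height + 1) if c != 0])
        coeffs = [lead] + [rng.randint(-height, height) for _ in range(degree)]
        p = sp.Poly(coeffs, x, domain='QQ')   # work over Q, matching the answer above
        if p.is_irreducible:
            irreducible += 1
    return irreducible / trials

if __name__ == "__main__":
    for d in (2, 3, 4):
        print(d, estimate_irreducible_fraction(d, height=50))
```

As the height grows, the estimated fractions should drift toward 1, consistent with the density-zero statement for reducible polynomials.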
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 34, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9452925324440002, "perplexity_flag": "head"}
http://outofthenormmaths.wordpress.com/2011/10/27/cut-the-rope/
October 27, 2011 · 8:57 am # Cut the rope! This is a real problem that was sent around my former maths department. The inquirer had a boat, and he wanted ropes of various lengths to knot together to moor his canal barge (I think each rope may have somehow used an eye splice for knotting). This is more or less the e-mail as I received it: “I have 5 pieces of rope of length: • 1 x 10 metres • 2 x 12 metres • 2 x 40 metres I want to be able to cut the ropes into pieces of different lengths and to be able to tie combinations of these together to make longer lengths. Is there a formula to obtain the optimum number and lengths of pieces ropes (i.e. the minimum number of pieces of ropes to give the most possible combinations of lengths of rope!). The minimum length of rope I need is 6m.” Being more or less a combinatorial problem, I doubted whether any nice formula existed (or that it would be any more useful than a specific solution!). So, wanting to help, I cheated slightly, and chatted to the guy to get a bit more info. After our discussion, he decided that he wanted ropes with at most two knots (ie. three pieces), and to be able to make lengths at intervals of one or two metres. He also was quite keen on having a 20m rope. I’ve given some additional assumptions and my solution below. But please have a go first: you may come up with a better way of doing it! After the discussions, I also decided that I should try to meet the following criteria, roughly in the order below, the most important being at the top: • Try to create ways of combining ropes to get most of the integer lengths between 6m and 20m, with no more than two knots; the maximum gap between attainable lengths should be two metres; • Ropes should be of integer length (not necessary, but makes the calculations easier, both initially by reducing the number of options to be explored, and while the ropes are actually being used on the water); • Minimise the number of knots used; • Allow for some longer combined rope lengths, minimising the gaps between those; • Minimise the number of ropes cut (to avoid work, weakening ropes and simplify the instructions). The guy’s initial thought was that, to have jumps of one or two metres, you would need a rope of one or two metres in length. This may seem like quite an obvious conclusion, but it isn’t actually the case: as long as you have a sufficient number of small ropes at various lengths, you can avoid having this (tiny ropes are perhaps annoying for knotting). My main consideration was to avoid ropes of identical lengths, including those lengths obtained by combining ropes. If you can make one length in two possible ways, you’ve essentially wasted a combination. I was fortunate, and my solution turned out to be quite neat: 1. Cut a 12m rope into: 3m, 4m, and 5m. 2. Cut a 40m rope into: 6m, 14m and 20m. 3. Leave the 10m, and the other 12m and 40m ropes as they are. This would leave us with 3m, 4m, 5m, 6m, 10m, 12m, 14m, 20m, and 40m ropes; with these you make any rope integer length from 3m to 20m with at most one knot, and quite a few more beyond that. Resorting to two knots gives an even better range. He seemed quite happy with it, which is, in my books, a mathematical success! Another more general way of approaching such problems was suggested by someone else in the department. 
It went a little like this: if you cut a rope in half, and then cut one of those halves in half again to make quarters, and so on, you can theoretically make any length of rope (up to the length of the rope you've cut up). As we're in the real world, and can't use infinitely small objects, instead of cutting up into precise halves, you can make one piece slightly longer and one piece slightly shorter, and stop much earlier in the process (if you were using only one piece of rope, and wanted at most $n$ knots, you would stop after $n$ cuts or iterations). I thought at the time this was slightly impracticable, but looking back, my more or less combinatorial guess can be viewed in those terms:

1. Cut a 12m rope into: 5m and 7m ($6 \pm 1$m).
2. Cut the new 7m rope into: 3m and 4m ($3.5 \pm 0.5$m).
3. Cut a 40m rope into: two 20m ropes.
4. Cut one of the new 20m ropes into: 6m and 14m ($10 \pm 4$m).

As in this example, in general we should vary the 'halving error': for example, if we cut ropes of length $2a, 2b$ into $a \pm \delta, b \pm \epsilon$, then we can now recombine them to make $a+b \pm (\delta +\epsilon)$ and $a+b \pm (\delta - \epsilon)$. However, if $\delta=\epsilon$, then we are again wasting an option by being able to construct $a+b$ in two different ways.

I was also happy with my intuitive solution, and I didn't think I could do much better. Having said this, perhaps there is an obvious slightly better solution? Or, if you started measuring the outcomes, perhaps you could start to sensibly try out and evaluate combinations of non-integer lengths. We might, for instance, aim to minimise the sum of the squares of the gaps between possible lengths up to 40m, mine giving a score of $3^2+17 \times 1 + 2^2 + 4 \times 1 + 4^2 +2 \times 2^2 + 6^2=94$ when using a maximum of one knot, as we can't make lengths of 1,2,21,27–29,31,33, or 35–39m. A score of 40 would be an absolute (though probably unattainable) lower bound for integer-lengthed rope. In my solution, I could wastefully make 9,10,14,15,16,17,18,20,24 and 26m in two ways, leading to the 8+28-10=26 possible lengths under 40m (8 is the number of individual lengths of rope under 40m; 28 (=8 choose 2) is the number of ways of knotting two of those 8 lengths of rope; and 10 being the number we can make in two ways). Perhaps you could profitably start by eliminating some of those redundant combinations?
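As a quick check of the claims above (my own script, not part of the original post), the following enumerates which integer lengths the chosen pieces can make with at most one knot, which lengths can be made in two ways, and which lengths below 40m are out of reach.

```python
from itertools import combinations
from collections import Counter

pieces = [3, 4, 5, 6, 10, 12, 14, 20, 40]   # the cut suggested in the post

# Lengths achievable with at most one knot: single pieces or sums of two pieces.
ways = Counter(pieces)
for a, b in combinations(pieces, 2):
    ways[a + b] += 1

print([n for n in range(3, 21) if n not in ways])        # [] -> every integer length 3..20 m works
print(sorted(n for n, k in ways.items() if k > 1))       # the ten lengths makeable in two ways
print(sorted(n for n in range(1, 41) if n not in ways))  # unreachable lengths below 40 m
```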
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 12, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9545934200286865, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/251756/integration-of-complex-trigonometric-function
# integration of complex trigonometric function

Compute $$\oint_{|z-\frac{\pi}{2}|=\pi+1}z\cdot \tan(z)dz$$ My solution: the integrand is a meromorphic function with simple poles at the points $\frac{\pi}{2}+n\pi$, with $n$ integer. Among these points, $\pm \frac{\pi}{2},\frac{3}{2}\pi$ lie inside the contour. I use the formula for simple poles: $$Res\left(\frac{f}{g},z_0\right)=\frac{f(z_0)}{g'(z_0)}$$ In my case ($f=z\sin z, g=\cos z$) I get: $$Res\left(z\cdot \tan(z),\pm\frac{\pi}{2}\right)=\mp\frac{\pi}{2}$$ and $$Res\left(z\cdot \tan(z),\frac{3\pi}{2}\right)=-\frac{3\pi}{2}$$ I apply the residue theorem to get $I=-3\pi^2i$. Could someone tell me if I made any mistakes?

- Fix your accept rate. :-) – Babak S. Dec 5 '12 at 19:36

## 1 Answer

There are three singularities inside the given path, $-\frac\pi2,\frac\pi2,\frac{3\pi}2$. For $z=-\frac\pi2$, let $z=w-\frac\pi2$ $$z\tan(z)=-\frac{\left(w-\frac\pi2\right)\cos(w)}{\sin(w)}\to\frac\pi2$$ For $z=\frac\pi2$, let $z=w+\frac\pi2$ $$z\tan(z)=-\frac{\left(w+\frac\pi2\right)\cos(w)}{\sin(w)}\to-\frac\pi2$$ For $z=\frac{3\pi}2$, let $z=w+\frac{3\pi}2$ $$z\tan(z)=-\frac{\left(w+\frac{3\pi}2\right)\cos(w)}{\sin(w)}\to-\frac{3\pi}2$$ Thus, the sum of the residues inside the contour is $-\dfrac{3\pi}2$. To get the integral, multiply by $2\pi i$ to get $$\oint_{|z-\frac{\pi}{2}|=\pi+1}z\tan(z)\mathrm{d}z=-3\pi^2i$$ It looks as if you are fine. -
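A quick numerical cross-check of the value $-3\pi^2 i$ (my own addition, assuming NumPy is available): parametrise the contour and apply the trapezoidal rule, which converges fast for a smooth periodic integrand.

```python
import numpy as np

# Contour |z - pi/2| = pi + 1, parametrised by z(t) = c + r*exp(i t), t in [0, 2*pi).
c, r = np.pi / 2, np.pi + 1
N = 200000
t = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
z = c + r * np.exp(1j * t)
dz_dt = 1j * r * np.exp(1j * t)

integral = np.sum(z * np.tan(z) * dz_dt) * (2.0 * np.pi / N)
print(integral)              # approximately -29.6088i
print(-3 * np.pi**2 * 1j)    # exact value from the residue theorem: -3*pi^2*i
```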
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8656215071678162, "perplexity_flag": "head"}
http://mathhelpforum.com/math-topics/20435-travelling-boat-problem.html
# Thread:

1. ## travelling boat problem

A boat is traveling relative to the water at a speed of 4.6 m/s due south. Relative to the boat, a passenger walks toward the back of the boat at a speed of 2.0 m/s. With this: 1, What is the magnitude and direction of the passenger's velocity relative to the water, in m/s? Also, is this north or south? 2, How long does it take for the passenger to walk a distance of 27 m on the boat, in seconds? And 3, How long does it take for the passenger to cover a distance of 27 m on the water, in seconds?

2. Hello, rcmango!

A boat is traveling relative to the water at a speed of 4.6 m/s due south. Relative to the boat, a passenger walks toward the back of the boat at a speed of 2.0 m/s.

Code: ``` * - - - * | | | ↑ | | o | | | * * \ / \ / \ / * ↓```

1) What is the magnitude and direction of the passenger's velocity relative to the water? Also is this north or south? The boat is moving 4.6 m/s due south. The man is moving 2.0 m/s due north. His speed relative to the water is: . $4.6 - 2.0 \:=\:2.6$ m/s due south.

2) How long does it take for the passenger to walk a distance of 27 m on the boat? At 2 m/s, it takes him $\frac{27}{2} \:=\:13.5$ seconds.

3) How long does it take for the passenger to cover a distance of 27 m on the water? Relative to the water, his speed is 2.6 m/s. It will take him $\frac{27}{2.6} \,\approx\,10.4$ seconds.
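The arithmetic above is easy to script; here is a tiny sketch (mine, with made-up variable names) that reproduces the three answers.

```python
boat_speed = 4.6      # m/s, due south (relative to the water)
walk_speed = 2.0      # m/s, toward the back of the boat (due north, relative to the boat)
distance = 27.0       # m

speed_rel_water = boat_speed - walk_speed          # 2.6 m/s, still due south
time_on_boat = distance / walk_speed               # 13.5 s
time_over_water = distance / speed_rel_water       # ~10.4 s

print(speed_rel_water, time_on_boat, round(time_over_water, 1))
```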
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9512363076210022, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/tagged/complex-numbers+schrodinger-equation
# Tagged Questions

1 answer, 371 views

### Solving the time independent Schrodinger equation: Does a complex solution make sense?

In my notes, I have the Time Independent Schrodinger equation for a free particle $$\frac{\partial^2 \psi}{\partial x^2}+\frac{p^2}{\hbar^2}\psi=0\tag1$$ The solution to this is given, in my notes, ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.855497419834137, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/61759/project-euler-problem-25
# Project Euler, Problem #25 Problem #25 from Project Euler asks: What is the first term in the Fibonacci sequence to contain 1000 digits? The brute force way of solving this is by simply telling the computer to generate Fibonacci numbers until it finds the first one that has 1000 digits. I wanted to look for a more mathematical solution that doesn't require that many computations. Initially, I expected that by plotting the Fibonacci numbers (mapping the number itself to the number of digits) I would get a logarithmic graph. I only plotted the first few ones and the sequence looked more like it had linear progression (I hope that's the correct mathematical term; I mean similar to $y=x$ functions.) I also noticed that, roughly, every five numbers the number of digits is increased by 1 — except the four digit numbers (there's only four of them) and the one digits numbers (if you count the first 1, there's six of them). $$1, 1, 2, 3, 5, 8$$ $$13, 21, 34, 55, 89$$ $$144, 233, 377, 610, 987$$ $$1597, 2584, 4181, 6765$$ $$10946, ...$$ Thus, the first number with seven digits would be $n = 5 \cdot (7 - 1)$, which turned out to be correct. However, starting with the tenth digit, the behavior of the sequence changes (or rather, it becomes clear that it's not linear). The first number with 10 digits is not $5 \cdot (10 -1)$, but $5 \cdot (10 - 1) - 1$. Plotting the sequence for even larger numbers (again, numbers mapped to their digit count), you get a very slight logarithmic curve. For example, the first number with 1000 digits is $5 \cdot (1000 - 43)$ (WolframAlpha). My question is, How can the Fibonacci sequence's behavior be abstracted such that the problem can be solved more elegantly, as opposed to using the brute force algorithm. Edit: Regarding efficiency The question, I asked mainly to satisfy my curiosity. Doing this by brute-force is probably the best way in a real world application (computers can do it fast and the code is clearer). But (again, just for entertainment) the more efficient solution can still be to use a mathematical formula to find the index of the number you're looking for; here's why: 1. Conditionals are expensive. Doing fewer of them is always good, and the brute-force approach requires you to do a check on every number you generate. 2. Some languages (like C++ and D) allow you to do tricks like generating part of the Fibonacci sequence at compile-time (see meta-programming). This means less work at run-time, though admittedly, this could also apply to the brute-force approach. Not doing checks on the numbers, as well as generating many of them at compile time, can significantly make the algorithm faster (but only in a theoretical sense; most computers nowadays are fast enough that no human will ever notice the difference unless you do the computations a couple of times a second). Besides, the process of counting the digits of a numbers — required by the brute-force approach — is itself pretty expensive, whether you use the $\lceil\log_{10}n \rceil$ algorithm or the $n\mod10$ algorithm. - 12 (+1) "This question shows research effort; it is useful and clear" Check. Check. Check. – The Chaz 2.0 Sep 4 '11 at 15:58 – Mike Spivey Sep 6 '11 at 3:23 Why are you using the log or mod to check the number of digits? It's probably a lot faster to check if the number is greater than 10^1000. Subtracting and checking if the result is negative is surely a lot faster than taking the log or the modulo of a 1000 digit number. 
– Hannesh Sep 6 '11 at 10:24 @Hannesh I know, but I was talking about a more general algorithm that counts the digits, not one that checks whether a number is within a certain range. – Paul Manta Sep 6 '11 at 10:30 ## 7 Answers The sequence $n \mapsto F_n$ grows exponentially. In fact, I suggest you look at this part of the wikipedia article on Fibonacci numbers, which gives exact and nearly exact formulas for $F_n$ as exponential functions. This should be very helpful in determining the smallest $n$ such that $F_n$ has $1000$ digits. - I didn't map $n \mapsto F_n$ but rather $F_n \mapsto d(F_n)$, where $d$ is a function that gives the number of digits of a number. The second mapping is logarithmic. – Paul Manta Sep 4 '11 at 8:21 2 @Paul: sorry, you said that in your question but I missed it. Right, so $n \mapsto F_n$ is exponential and thus $n \mapsto d(F_n) \approx \log_{10} F_n$ is linear. Anyway, the point of my answer is not just that $F_n$ is exponential but is given up to the nearest integer by a very simple exponential expression, such that you can set it equal to $10^{1000}$ and solve for $n$. Does this make sense? – Pete L. Clark Sep 4 '11 at 8:28 2 Addendum to the above: when I say "solve for $n$", I mean that first you solve some equation that gives you a value of $n$ which is very nearly correct but not necessarily even an integer, and then you check nearby integers to see which one you want. I really don't want to say more than this, except that in between typing the above answer and this comment I decided to try the procedure out for myself...it works! – Pete L. Clark Sep 4 '11 at 8:41 1 Also note that $10^{1000}$ has $1001$ digits (I confess I missed this the first time around!) so that one might speed things up a little bit more by trying $10^{999}$ instead. But if you have access to something which will spit out values of $F_n$ for $n$ in the thousands, either way is already very fast... – Pete L. Clark Sep 4 '11 at 8:55 2 @Pete, the Project Euler problems are posed as programming/algorithmic exercises -- the task is really to figure out how to produce the thing they ask for. Just asking Wolfram Alpha for the solution instead of actually solving the problem posed is not going to be any more educational than cheating on a test or bribing a teacher to pass you. It doesn't matter that not solving the problem "seems more efficient" than solving it -- otherwise, what would be the point in participating at all in the first place? – Henning Makholm Sep 4 '11 at 16:10 show 9 more comments The $n$-th Fibonacci number is given by the following formula: $$F_n = \left\lfloor \frac{\phi^n}{\sqrt 5} + \frac12 \right\rfloor,$$ where $\displaystyle\varphi = \frac{1+\sqrt 5}{2}$. Equivalently, $F_n$ is the integer closest to $\displaystyle\frac{\varphi^n}{\sqrt 5}$. Thus, $\log_{10}F_n$ is very, very close to $\displaystyle n \log_{10}\varphi - \frac12 \log_{10}5$. Since the number of digits in the integer $m$ is $\lceil\log_{10}m \rceil$, so $-$ ignoring that very small discrepancy, which shouldn’t matter with numbers of this size $-$ you want the smallest $n$ such that $$\displaystyle n \log_{10}\varphi - \frac12 \log_{10}5 > 999.5.$$ Added: I guessed wrong about the effect of the discrepancy. By actual calculation it turns out that the value estimated from the inequality above is a little too small. - I'm not quite sure why you have 999.5 rather than 1000. 
– Henry Sep 4 '11 at 8:48 @Henry: Because I wanted the ceiling of the number on the left-hand side to be at least $1000$, which means that the number itself must be more than $999.5$. – Brian M. Scott Sep 4 '11 at 9:05 I still do not see it. First ceilings do not work like that. Second you want the left hand side to be greater than $\log(10^{1000})$ so you do not need ceilings at this stage. Later you could have $n=\text{ceiling} \left[ \dfrac{1000 + \frac12 \log_{10}5 }{\log_{10}\varphi} \right]$. I think this works for two or more digits. – Henry Sep 4 '11 at 9:38 This answer (with @Henry's correction) is true as far as it goes, but does it actually help find the actual Fibonacci number asked for? Ordinary computer floating-point arithmetic is not precise enough to find all the 1000 digits of the number simply by taking a power of $\phi$, so one has to execute the exact recurrence with bignums anyway after computing how far to go -- which is just the same work as the OP's original algorithm, except for the very minor step of checking for each $F_n$ whether we've reached $10^{1000}$ or not. – Henning Makholm Sep 4 '11 at 15:37 1 – Henry Sep 4 '11 at 22:51

Here is a useful fact: for any positive number $n$, the number of digits of $n$ in base 10 is given by $$\lfloor\log_{10}(n)\rfloor + 1$$ The number of digits of the Fibonacci sequence grows linearly, as can be shown in the graph. - 10 I particularly like the plot of the imaginary part. – RoundTower Sep 4 '11 at 11:43

The other answers giving exact formulas for Fibonacci numbers are spot on, but if you don't have them handy, you can also solve this problem by shifting the computation to the logarithmic domain. Calculate the log, base 10, of each number, and add it to a double-precision sum; as soon as the value equals or exceeds 999 (i.e. the log, base 10, of the lowest number with 1000 digits), you've reached the right Fibonacci number. -

The answers you have got so far will help you find the index of the first Fibonacci number greater than $10^{999}$, but are not particularly useful for finding that Fibonacci number itself, with all its 1000 digits. For that, I think you have no choice but to compute the Fibonacci sequence far enough. If you know the index in advance you can use that to decide how far to go in the computation, but that does not seem to be a significant improvement over simply checking whether each successive Fibonacci number has reached 1000 digits -- you need to compute them all anyway. The point of the exercise must be that a naive implementation of the Fibonacci sequence (top-down, with two recursive calls in each step) is not going to complete for these sizes until well after the heat death of the universe. But if you compute the sequence from the bottom up, saving all of them in an array, and simply pull the two previous numbers out of the array at each step, it's all fast and straightforward. (You'll need an arbitrary-precision integer arithmetic library if your programming language doesn't already provide one, of course). -
Under the (perhaps not completely realistic) assumption that the library has a sub-quadratic multiplication algorithm, matrix exponentiation by squaring would get down to about $O(n\log^2(n))$, but it seems not to be worth the trouble. – Henning Makholm Sep 4 '11 at 15:58 1 The problem cited asks for the index, not the term itself. – Ross Millikan Sep 4 '11 at 19:22 2 @Henning Makholm: You are right that the quote is accurate, but the answer it accepted from me was the term number. The answer box won't hold 1000 digits (many problems ask for the last 8 digits of a big number). I believe there was discussion on the forum suggesting rewording, but it hasn't happened. – Ross Millikan Sep 4 '11 at 21:26 1 – Peter Taylor Sep 6 '11 at 8:48 show 5 more comments The brute force method IS the elegant method. Compare the gyrations and convolutions above to: find each Fibonnaci number in sequence, count the number of digits, stop at the first one with 1,000 digits (if any). - 1 You're right, that method is much better than it looks at first glance. – Charles Sep 4 '11 at 16:40 "If any"? Since $F_{n+1}$ is extremely close to $((1+\sqrt{5})/2) \cdot F_n$, it is clear that except for very small $d$ (in fact $d \geq 2$ suffices) there are either $4$ or $5$ Fibonacci numbers with $d$ digits. Maybe the gyrations are not so bad after all? – Pete L. Clark Sep 4 '11 at 16:45 Patrick: would you make the same statement in (say) 1950? I don't think brute force is at all elegant; it's just convenient and simple in a time where we have easy access to cheap powerful computing resources. – Fixee Sep 4 '11 at 17:02 1 @Fixee, even in 1950, if you were asked to compute the first Fibonacci number that had 1000 digits, that's what you had to do. Knowing the index of that number in the sequence is actually not going to simplify the computations for you. – Henning Makholm Sep 4 '11 at 21:31 – Paul Manta Sep 6 '11 at 5:36 Violating both the terms of the question and good taste, I spent 1.5 minutes and wrote a program to solve this. Being a computer scientist, it was an irresistible temptation. Python seems the logical choice for something quick: it has arbitrary precision integers built in. Solution deleted due to comment below - In python, `t = j; j += i; i = t` can be written as `i, j = j+i, i`. Also, rather than calculate the log of j in every loop, I thought it would be quicker to just see if it's greater than or equal to `10 ** 999`, but in tests, it made only a small difference. – rjmunro Sep 4 '11 at 23:34 @rjmunro: I had written `i,j = j,j+1` originally, but changed it before posting here, hoping to avoid too much python-specific syntax so that C programmers would understand the code. (Btw, I don't think the two code snippets in your comment are equivalent.) Also, you're right that `10 **999` is probably a bit faster than a `log`, however that depends on how exponentiation is computed in python... it might be just as bad. I usually don't try optimizing in Python much; the language is horrendously slow (even jitted), so I'd never use for anything where I care about performance. – Fixee Sep 5 '11 at 0:15 It's not that 10**999 is faster than log(), it's that you only have to compute it once at the start. You are computing `log(j)` on every loop. – rjmunro Sep 5 '11 at 11:21 @rjmunro: Yup true. So inside the loop, we have to ask if it's faster to compute the log of a huge number or to compare one huge number to another. I'd have to think that the latter is much faster, as you suggested. 
(Though once again, I think talking about optimized Python is akin to talking about high-performance tricycles.) – Fixee Sep 5 '11 at 16:46 1 I can understand the temptation to solve the problem (I gave into that myself) but not the temptation to post the solution here, contrary to the instructions at Project Euler. Haven't you just spoiled the puzzle for a lot of (presumably young, non-professional computer scientist) people? – Pete L. Clark Sep 5 '11 at 21:19 show 2 more comments
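To make the preceding discussion concrete, here is a minimal sketch (my own addition, not one of the thread's answers). It first estimates the index from $F_n \approx \varphi^n/\sqrt{5}$, so that $F_n$ has at least 1000 digits once $n\log_{10}\varphi - \tfrac12\log_{10}5 \ge 999$, and then confirms the estimate with the bottom-up big-integer iteration; only the standard library is used.

```python
import math

# Estimate the index from F_n ~ phi^n / sqrt(5): digits(F_n) >= 1000 once
# n*log10(phi) - 0.5*log10(5) >= 999.
phi = (1 + math.sqrt(5)) / 2
estimate = math.ceil((999 + 0.5 * math.log10(5)) / math.log10(phi))

# Confirm by the straightforward bottom-up iteration with Python's big integers.
target = 10 ** 999          # smallest integer with 1000 digits
a, b, n = 1, 1, 2           # F_1 = F_2 = 1
while b < target:
    a, b = b, a + b
    n += 1

print(estimate, n)          # the two values should agree
print(len(str(b)))          # 1000 digits
```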
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 55, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9470717310905457, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/4782/trying-to-model-pinball-physics-for-game-ai/4798
# Trying to model pinball physics for game AI I'm working on an AI for a pinball-related video game. The ultimate goal for the system is that the AI will be able to fire a flipper at the appropriate time to aim a pinball at a particular point on the table. We are using an engine that will handle the underlying physics interactions when the ball is hit (Unreal). I'm trying to quasi-realistically model the physics for the AI's "thinking". I have a reasonable understanding of Newtonian physics but I'm going to need some help making sure I'm correct about everything I may need to account for. For the purposes of simplification, I'm considering the table to be vertical on an XY axis with gravity just being a small fraction of actual gravity accelerating down on the Y axis (yes, this means I'm ignoring friction between the ball and table, for now). By my understanding, I've got the work with the following when a ball is rolling along a flipper. • The ball will have velocity along both X and Y axes as it rolls to the flipper. • The flipper will have a range of motion (minimum and maximum angles) and a Torque value. • When the flipper fires, it will apply force to the ball based on the flipper's Torque, the distance the ball is from the flipper's rotation point, and the length of the arc the contacted point on the flipper needs travel. • The applied force will accelerate the ball back up the table following a new trajectory. My question(s) are: Am I overlooking an important interactions in this system? Or am I misinterpreting anything? Does it seem like this is a reasonable modeling for video game? Et cetera? Thanks! - ## 2 Answers I would need to see - and understand - your code but it is not clear from the text whether you also remember and take care of the rotation of the ball - its angular frequency (and the internal angular momentum, which is proportional). I am convinced that this is totally necessary to get some realistic dynamics and it's where most of the pinball fun is all about. In typical situations, the ball is rolling on the table without any "creeping" but this condition still allows the spin around the axis transverse to the table. This ball obtains this spin when it's hit (or when it collides) from the side, and when it hits another object while it's (the ball is) spinning, it will reflect in a direction that is influenced by the spin. Tennis players etc. are using the spin to confuse the opponent all the time. Moreover, I am not sure whether the flipper gives the ball a fixed torque - as opposed to a fixed "power" in the normal direction (transverse to the flipper's tangent near the contact point with the ball) or some other function or a fixed "velocity" in the normal direction (doesn't it just achieve that the ball's normal velocity away from the flipper ends up being equal to the flipper's high velocity?) At any rate, I think that the degrees of freedom in your approximation are really simple. If the ball moves along the plane and never jumps, then there are 2 components of the velocity and 3 components of the angular momentum. Two components of the angular momentum are, however, determined by the condition that the ball is rolling not creeping. The y velocity is accelerating as you said and the spin is conserved unless the ball is in the contact in which case you have to calculate the right force and torque. - When the ball is hit with the flipper, two velocity components should be considered: parallel to the flipper $v_{\parallel}$ and perpendicular to it $v_{\perp}$. 
$v_{\parallel}$ will change if the ball is spinning - as Luboš said, this definitely should be included in your model. You can start with a crude approximation $\Delta v_{\parallel}\propto \omega$ for $\omega<\omega_0$ and $\Delta v_{\parallel}=const$ otherwise, where $\omega_0$ is the ball's angular speed at which it starts to slip. $v_{\perp}$ changes its sign (elastic collision) and gets an additional kick from the flipper: $v_{\perp}=-{\gamma}v_{\perp}+v_{flip}$, where $\gamma$ is an elasticity coefficient close to 1 and $v_{flip}=\omega_{flip}r$, with $r$ the distance from the flipper axis to the point where the ball hits it. -
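Below is a minimal, hedged sketch of the crude flipper-hit rule described in this answer, written as plain Python. All names (`gamma`, `k_spin`, `omega_slip`, and so on) and the numerical values are illustrative assumptions, not constants from any engine.

```python
import math

def flipper_hit(v_par, v_perp, omega_ball, omega_flip, r,
                gamma=0.95, k_spin=0.02, omega_slip=50.0):
    """One crude update of the ball's velocity components when the flipper strikes.

    v_par, v_perp : ball velocity parallel / perpendicular to the flipper (m/s)
    omega_ball    : ball spin about the axis transverse to the table (rad/s)
    omega_flip    : angular speed of the flipper at impact (rad/s)
    r             : distance from the flipper axis to the contact point (m)
    """
    # Spin-dependent change of the parallel component: proportional to the spin
    # up to a slipping threshold, then constant (the answer's approximation).
    dv_par = k_spin * math.copysign(min(abs(omega_ball), omega_slip), omega_ball)
    v_par_new = v_par + dv_par

    # Perpendicular component: nearly elastic bounce plus the kick from the flipper.
    v_flip = omega_flip * r
    v_perp_new = -gamma * v_perp + v_flip

    return v_par_new, v_perp_new

# Example: ball arriving at 2 m/s toward the flipper, hit 8 cm from the axis.
print(flipper_hit(v_par=0.5, v_perp=-2.0, omega_ball=30.0,
                  omega_flip=40.0, r=0.08))
```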
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9331161379814148, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/154929/degrees-of-freedom-vs-cardinality-of-tuples
# Degrees of freedom vs. cardinality of tuples Sometimes it is said that the number of DoF of a system means how many real numbers have to be used at least to describe the system. But we know from set theory that the cardinality of any tuple of reals is the same as the reals, so all the information that is in a 3-tuple of reals, can be represented in just one real number. Of course this representation may not be very useful as the properties of the system will be non-continuous functions of this variable, but still it is enough to describe the state of the system. So how could this confusing thing be resolved? - I think that [linear-algebra] fits, but I'm not sure anymore. Either way [set-theory] does not fit in here. – Asaf Karagila Jun 7 '12 at 6:33 – Rahul Narain Jun 7 '12 at 6:55 ## 3 Answers While $R^n$ and $R^m$ may have the same cardinality for all positive integer $n,m$ dimensions, they are homeomorphic topological spaces if and only if $n=m$, a result due to Brouwer. So one cannot say with assurance that a parameterization by $n$ real values is equivalent to one by $m$ real values, if the continuity of the parameterization plays a role (as it most often will). - Describing a system using a (sufficiently badly) discontinuous parameterization is useless for e.g. any scientific endeavor. Given that it is impossible to measure real-world parameters to infinite precision, all you ever have are approximations, and if you cannot relate the behavior of two similar states of your system using continuity how can you ever make any predictions about it? - Cardinality itself is a very bare notion of size. It disregards any structure given on the sets. Equivalently you could represent each tuple with a Borel set; a continuous function; etc. or any member of a collection of size continuum. Essentially everything can be represented with almost everything else. Representation is simply a way for us to think about one object in terms of another. When exchanging one object by another you need to figure out whether or not such representation is useful for anything. For example, is there coherence between a naturally defined operation on the tuples and naturally defined operations on the real numbers? How simple is the representation itself, does it just "exists out there" or can we describe it in a relatively definitive way? Doing "highly discontinuous" things has very little advantage, since continuity assures some degree of coherence between two structures (continuity is not the only thing which matters though). -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9389532208442688, "perplexity_flag": "head"}
http://mathhelpforum.com/trigonometry/43713-plz-solve-these-questions-trignometry-statement-questions.html
Thread: 1. Plz solve these questions.(Trignometry statement questions) 2. Originally Posted by prashantvrm you seem to be missing information from number 2), i think you meant to state the angle of elevation, but didn't. anyway, we can still do this problem in principle. the other problems are similar, try attacking them in the same way. that is, draw diagrams and figure out what trig ratio you need to solve for the unknown. i would hope you know what is meant by "angle of elevation" and "angle of depression." anyway, in this problem, let the angle of elevation be $x$ and the distance from the foot of the pole to the point be $d$ (see the diagram below). we can use the tangent trig ratio here. problem 2: $\tan x = \frac {\text{opposite}}{\text{adjacent}} = \frac {12}d$ $\Rightarrow d = \frac {12}{\tan x}$ now, if we know what x is, we can find the distance. otherwise, leave it like that. try the others, be sure to draw diagrams so you know how to think about the problem Attached Thumbnails 3. Hello, prashantvrm! 4. The angle of elevation to the top of a chimney from a point on the ground is 30°. After walking 50 m towards the chimney, the angle of elevation becomes 45°. Find the height of the chimney. Code: ``` * C * * | * * | * * | h * * | * * | * 30° * 45° | * - - - - - - * - - - - - - * A 50 B x D``` Let $CD = h$, the height of the chimney. When the observer stands at $A,\:\angle CAD = 30^o.$ At $B$ (where $AB = 50$), $\angle CBD = 45^o.$ Let $x = BD.$ In right triangle $CDA\!:\;\frac{h}{x+50} \:=\:\tan30^o \:=\:\frac{1}{\sqrt{3}} \quad\Rightarrow\quad \sqrt{3}h \:=\:x + 50\;\;{\color{blue}[1]}$ In right triangle $CDB\!:\;\frac{h}{x} \:=\:\tan45^o \:=\:1 \quad\Rightarrow\quad x \:=\:h\;\;{\color{blue}[2]}$ Substitite [2] into [1]: . $\sqrt{3}h \:=\:h + 50 \quad\Rightarrow\quad \sqrt{3}h - h \:=\:50$ Factor: . $(\sqrt{3}-1)h \:=\:50 \quad\Rightarrow\quad h \:=\:\frac{50}{\sqrt{3}-1} \;\approx\;\boxed{68.3\text{ m}}$ 4. Hello again, prashantvrm! 3. The top of a tree is broken and the broken part makes a 30° angle with the ground. The distance from the tip of the tree to the base of the tree is 40 m. Find the height of the tree. Code: ``` A * | * | * | * | 30° * C * - - - - - - - - - * B 40``` This is a 30-60 right triangle. We know the ratios of ths sides . . . . $AC : BC : AB \:=\:1 : \sqrt{3} : 2$ Hence, we have: . $AC : BC : AB \;=\;\frac{40\sqrt{3}}{3} : 40 : \frac{80\sqrt{3}}{3}$ The height of the tree is: . $AC + AB \;=\;\frac{40\sqrt{3}}{3} + \frac{80\sqrt{3}}{3} \;=\;\frac{120\sqrt{3}}{3} \;=\;\boxed{40\sqrt{3}\text{ m}}$ 5. hai just find the distance and u can find the answer ..... its just too easy.......
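For a quick numerical sanity check of the two worked solutions above (my own snippet, not part of the thread):

```python
import math

# Chimney: angles of elevation 30 deg, then 45 deg after walking 50 m toward it.
h_chimney = 50 / (math.sqrt(3) - 1)
print(h_chimney)                      # ~68.3 m, as in the solution

# Broken tree: standing part AC, broken part AB, with BC = 40 m and a 30 deg angle at B.
ac = 40 / math.sqrt(3)                # 40*sqrt(3)/3
ab = 80 / math.sqrt(3)                # 80*sqrt(3)/3
print(ac + ab, 40 * math.sqrt(3))     # both ~69.28 m, i.e. 40*sqrt(3)
```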
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 17, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8964027762413025, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Moment_generating_function
# Moment-generating function

In probability theory and statistics, the moment-generating function of a random variable is an alternative specification of its probability distribution. Thus, it provides the basis of an alternative route to analytical results compared with working directly with probability density functions or cumulative distribution functions. There are particularly simple results for the moment-generating functions of distributions defined by the weighted sums of random variables. Note, however, that not all random variables have moment-generating functions.

In addition to univariate distributions, moment-generating functions can be defined for vector- or matrix-valued random variables, and can even be extended to more general cases. The moment-generating function does not always exist even for real-valued arguments, unlike the characteristic function. There are relations between the behavior of the moment-generating function of a distribution and properties of the distribution, such as the existence of moments.

## Definition

In probability theory and statistics, the moment-generating function of a random variable $X$ is $M_X(t) := E\left[e^{tX}\right], \quad t \in \mathbb{R},$ wherever this expectation exists. $M_X(0)$ always exists and is equal to 1. A key problem with moment-generating functions is that moments and the moment-generating function may not exist, as the integrals need not converge absolutely. By contrast, the characteristic function always exists (because it is the integral of a bounded function on a space of finite measure), and thus may be used instead.

More generally, where $\mathbf X = ( X_1, \ldots, X_n)^{\mathrm T}$ is an $n$-dimensional random vector, one uses $\mathbf t \cdot \mathbf X = \mathbf t^\mathrm T\mathbf X$ instead of $tX$: $M_{\mathbf X}(\mathbf t) := E\left(e^{\mathbf t^\mathrm T\mathbf X}\right).$

The reason for defining this function is that it can be used to find all the moments of the distribution.[1] The series expansion of $e^{tX}$ is: $e^{tX} = 1 + tX + \frac{t^2X^2}{2!} + \frac{t^3X^3}{3!} + \cdots +\frac{t^nX^n}{n!} + \cdots.$ Hence: $M_X(t) = E(e^{tX}) = 1 + tm_1 + \frac{t^2m_2}{2!} + \frac{t^3m_3}{3!}+\cdots + \frac{t^nm_n}{n!}+\cdots,$ where $m_n$ is the $n$th moment. If we differentiate $M_X(t)$ $i$ times with respect to $t$ and then set $t = 0$, we shall therefore obtain the $i$th moment about the origin, $m_i$.

## Examples

Here are some examples of the moment-generating function and the characteristic function for comparison. It can be seen that the characteristic function is a Wick rotation of the moment-generating function $M_X(t)$ when the latter exists.
| Distribution | Moment-generating function $M_X(t)$ | Characteristic function $\varphi(t)$ |
| --- | --- | --- |
| Bernoulli $\, P(X=1)=p$ | $\, 1-p+pe^t$ | $\, 1-p+pe^{it}$ |
| Geometric $(1 - p)^{k-1}\,p\!$ | $\frac{pe^t}{1-(1-p) e^t}\!$, for $t<-\ln(1-p)\!$ | $\frac{pe^{it}}{1-(1-p)\,e^{it}}\!$ |
| Binomial B(n, p) | $\, (1-p+pe^t)^n$ | $\, (1-p+pe^{it})^n$ |
| Poisson Pois(λ) | $\, e^{\lambda(e^t-1)}$ | $\, e^{\lambda(e^{it}-1)}$ |
| Uniform (continuous) U(a, b) | $\, \frac{e^{tb} - e^{ta}}{t(b-a)}$ | $\, \frac{e^{itb} - e^{ita}}{it(b-a)}$ |
| Uniform (discrete) U(a, b) | $\, \frac{e^{at} - e^{(b+1)t}}{(b-a+1)(1-e^{t})}$ | $\, \frac{e^{ait} - e^{(b+1)it}}{(b-a+1)(1-e^{it})}$ |
| Normal N(μ, σ²) | $\, e^{t\mu + \frac{1}{2}\sigma^2t^2}$ | $\, e^{it\mu - \frac{1}{2}\sigma^2t^2}$ |
| Chi-squared $\chi^2_k$ | $\, (1 - 2t)^{-k/2}$ | $\, (1 - 2it)^{-k/2}$ |
| Gamma Γ(k, θ) | $\, (1 - t\theta)^{-k}$ | $\, (1 - it\theta)^{-k}$ |
| Exponential Exp(λ) | $\, (1-t\lambda^{-1})^{-1}$ | $\, (1 - it\lambda^{-1})^{-1}$ |
| Multivariate normal N(μ, Σ) | $\, e^{t^\mathrm{T} \mu + \frac{1}{2} t^\mathrm{T} \Sigma t}$ | $\, e^{i t^\mathrm{T} \mu - \frac{1}{2} t^\mathrm{T} \Sigma t}$ |
| Degenerate $\delta_a$ | $\, e^{ta}$ | $\, e^{ita}$ |
| Laplace L(μ, b) | $\, \frac{e^{t\mu}}{1 - b^2t^2}$ | $\, \frac{e^{it\mu}}{1 + b^2t^2}$ |
| Negative Binomial NB(r, p) | $\, \frac{((1-p)e^t)^r}{(1-pe^t)^r}$ | $\, \frac{((1-p)e^{it})^r}{(1-pe^{it})^r}$ |
| Cauchy Cauchy(μ, θ) | does not exist | $\, e^{it\mu -\theta\vert t\vert}$ |

## Calculation

The moment-generating function is given by the Riemann–Stieltjes integral $M_X(t) = \int_{-\infty}^\infty e^{tx}\,dF(x)$ where $F$ is the cumulative distribution function. If $X$ has a continuous probability density function $f(x)$, then $M_X(-t)$ is the two-sided Laplace transform of $f(x)$. $\begin{align} M_X(t) & = \int_{-\infty}^\infty e^{tx} f(x)\,dx \\ & = \int_{-\infty}^\infty \left( 1+ tx + \frac{t^2x^2}{2!} + \cdots + \frac{t^nx^n}{n!} + \cdots\right) f(x)\,dx \\ & = 1 + tm_1 + \frac{t^2m_2}{2!} +\cdots + \frac{t^nm_n}{n!} +\cdots, \end{align}$ where $m_n$ is the $n$th moment.

### Sum of independent random variables

If $X_1, X_2, \ldots, X_n$ is a sequence of independent (and not necessarily identically distributed) random variables, and $S_n = \sum_{i=1}^n a_i X_i,$ where the $a_i$ are constants, then the probability density function for $S_n$ is the convolution of the probability density functions of each of the $X_i$, and the moment-generating function for $S_n$ is given by $M_{S_n}(t)=M_{X_1}(a_1t)M_{X_2}(a_2t)\cdots M_{X_n}(a_nt) \, .$

### Vector-valued random variables

For vector-valued random variables $X$ with real components, the moment-generating function is given by $M_X(t) = E\left( e^{\langle t, X \rangle}\right)$ where $t$ is a vector and $\langle \cdot, \cdot \rangle$ is the dot product.

## Important properties

An important property of the moment-generating function is that if two distributions have the same moment-generating function, then they are identical at almost all points.[citation needed] That is, if for all values of $t$, $M_X(t) = M_Y(t),\,$ then $F_X(x) = F_Y(x) \,$ for all values of $x$ (or equivalently $X$ and $Y$ have the same distribution). This statement is not equivalent to "if two distributions have the same moments, then they are identical at all points", because in some cases the moments exist and yet the moment-generating function does not, because in some cases the limit $\lim_{n \rightarrow \infty} \sum_{i=0}^n \frac{t^im_i}{i!}$ does not exist. This happens for the lognormal distribution.
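As a small illustration (added here, not part of the original article), the normal-distribution row of the table above can be combined with the moment property from the definition section: differentiating the MGF at $t=0$ recovers the raw moments, a fact made precise in the next section. The sketch assumes SymPy is available.

```python
import sympy as sp

t, mu, sigma = sp.symbols('t mu sigma', real=True)

# MGF of N(mu, sigma^2), taken from the table above.
M = sp.exp(mu*t + sp.Rational(1, 2)*sigma**2*t**2)

# n-th raw moment = n-th derivative of M at t = 0.
moments = [sp.expand(sp.diff(M, t, n).subs(t, 0)) for n in range(1, 5)]
print(moments)
# [mu, mu**2 + sigma**2, mu**3 + 3*mu*sigma**2, mu**4 + 6*mu**2*sigma**2 + 3*sigma**4]
```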
### Calculations of moments

The moment-generating function is so called because if it exists on an open interval around $t = 0$, then it is the exponential generating function of the moments of the probability distribution: $E \left( X^n \right) = M_X^{(n)}(0) = \frac{d^n M_X}{dt^n}(0).$ Here $n$ should be a nonnegative integer.

## Other properties

Hoeffding's lemma provides a bound on the moment-generating function in the case of a zero-mean, bounded random variable.

## Relation to other functions

Related to the moment-generating function are a number of other transforms that are common in probability theory:

- Characteristic function: the characteristic function $\varphi_X(t)$ is related to the moment-generating function via $\varphi_X(t) = M_{iX}(t) = M_X(it):$ the characteristic function is the moment-generating function of $iX$ or the moment-generating function of $X$ evaluated on the imaginary axis. This function can also be viewed as the Fourier transform of the probability density function, which can therefore be deduced from it by inverse Fourier transform.
- Cumulant-generating function: the cumulant-generating function is defined as the logarithm of the moment-generating function; some instead define the cumulant-generating function as the logarithm of the characteristic function, while others call this latter the second cumulant-generating function.
- Probability-generating function: the probability-generating function is defined as $G(z) = E[z^X].\,$ This immediately implies that $G(e^t) = E[e^{tX}] = M_X(t).\,$

## References

1. Bulmer, M.G., Principles of Statistics, Dover, 1979, pp. 75–79.
- Casella, George; Berger, Roger. Statistical Inference (2nd ed.). pp. 59–68. ISBN 978-0-534-24312-8.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 53, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7810904383659363, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/70931/can-you-explain-why-plotting-cos-cos90-sqrtx-looks-like-this
# Can you explain why plotting $\cos(\cos(90 \sqrt{x}))$ looks like this? Can you explain why plotting $\cos(\cos(90 \sqrt{x}))$ looks like this: (from here) - 1 what is there to explain? – user12205 Oct 8 '11 at 21:02 Why it has compression , unlike normal cos , it also expands in the spaces between the peaks – xsari3x Oct 8 '11 at 21:06 It's like Longitudinal Waves but without compression – xsari3x Oct 8 '11 at 21:10 1 Zooming near $0$ should make it look more like our mind picture of what it should look like. – André Nicolas Oct 8 '11 at 21:10 2 @xsari3x: Well, the compression is because you precomposed the cosine with the square root. ;) – Rasmus Oct 8 '11 at 21:12 show 7 more comments ## 2 Answers You can't expect it to look like a Cos(x) for example because the argument $\cos(90 \sqrt{x})$ is not linear, and is itself cyclic. Here is the graph of $\cos(90 \sqrt{x})$ - As $x$ changes, $\sqrt{x}$ changes at a varying rate that approaches $\infty$ as $x$ approaches $0$. You can see that by realizing that $y = \pm\sqrt{x}$ is the same as $x=y^2$, and that's a parabola with a vertical tangent at $x=y=0$. Therefore it oscillates very fast when $x$ is near $0$. As $x$ moves away from $0$, then $\sqrt{x}$ changes more slowly as $x$ changes, so this oscillates more slowly. $\cos(\cos(90\sqrt{0}))= \cos 1 \approx 0.54,$ so it starts at $0.54$, and returns to $0.54$ whenever $90\sqrt{x}$ returns to something whose cosine is $1$. The function returns to $1$ whenever $90\sqrt{x}$ returns to something whose cosine is $0$. The infinite rate of change at $x=0$ means the graph will have a vertical tangent at $x=0$, but does not mean it oscillates infinitely many times between $0$ and any particular positive value. The reasons for this can be seen with a little thought. In other words, you should be able to actually count the oscillations between any positive argument and $0$. The number of such oscillations is large because of the "90". - 1 Dear Michael, I think "The infinite rate of change at $x=0$ means the graph will have a vertical tangent at $x=0$" is not true. The reason being that cosine has derivative $0$ at $0$. – Rasmus Oct 8 '11 at 21:36 You're right: I just computed the limit of the derivative at $0$. It's got a numerator and denominator that both approach $0$, and the limit is a finite positive number. So it's increasing pretty fast at $x=0$, but not infinitely fast. – Michael Hardy Oct 8 '11 at 21:50
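A short script (my addition, assuming NumPy and Matplotlib are available) reproduces the effect discussed above: the inner $\cos(90\sqrt{x})$ oscillates very quickly near $x=0$ and ever more slowly as $x$ grows, which is what makes the outer plot look compressed on the left. The constant 90 is treated as a plain number in radians, as a plotting tool would.

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 20, 20000)
y = np.cos(np.cos(90 * np.sqrt(x)))

fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(8, 6))
ax1.plot(x, y, linewidth=0.5)
ax1.set_title(r'$\cos(\cos(90\sqrt{x}))$ on $[0, 20]$')

# Zooming in near 0, as suggested in the comments, shows the fast oscillation clearly.
xz = np.linspace(0, 0.2, 20000)
ax2.plot(xz, np.cos(np.cos(90 * np.sqrt(xz))), linewidth=0.5)
ax2.set_title('zoom near $x = 0$')

plt.tight_layout()
plt.show()
```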
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 38, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9410967826843262, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/210810/solve-3-log-10x-15-left-frac14-rightx/210816
Solve $3\log_{10}(x-15) = \left(\frac{1}{4}\right)^x$ $$3\log_{10}(x-15) = \left(\frac{1}{4}\right)^x$$ I am completely lost on how to proceed. Could someone explain how to find any real solution to the above equation? - Is the log a natural log? Or base 10? – ncmathsadist Oct 11 '12 at 1:17 @ncmathsadist Sorry, I should have specified. Its base 10. – abc Oct 11 '12 at 1:18 Are you looking for real solutions or complex solutions? – S.B. Oct 11 '12 at 1:19 @S.B. I am looking for all real solutions. – abc Oct 11 '12 at 1:20 1 @abc: It can have at most one real solution, because the LHS is strictly increasing while the RHS is strictly decreasing (actually, it has exactly one solution). Obviously you must look at $x>15$. I'm not sure if you can find a solution explicitly, but you can solve it numerically. – S.B. Oct 11 '12 at 1:24 show 1 more comment 3 Answers Hint: Consider left and right sides at $x=16$ and $x=17$. You won't find an explicit "closed-form" solution, but you can prove that it exists. - Thanks Robert. The solution is approximately 16 (which I found through just substituting the values in). Sorry, but my mathematical skills are relatively elementary - what is a "closed form solution"? The Wikipedia article on it was very rigorous and hard to understand. – abc Oct 11 '12 at 1:29 1 @abc: "closed form" solution means a formula using only elementary operations (addition, substraction, multiplication, division) plus a handful of functions (exponentiation, roots and logarithms, trigonometric). – Javier Badia Oct 11 '12 at 1:54 2 "Closed-form" is a rather subjective term: basically you allow "well-known" functions, but people will differ on precisely which ones to include. – Robert Israel Oct 11 '12 at 4:59 Put \begin{equation*} f(x) = 3\log_{10}(x - 15) - \left(\dfrac{1}{4}\right)^x. \end{equation*} We have $f$ is a increasing function on $(15, +\infty)$. Another way, $f(16)>0$ and $f(17)>0$. Therefore the given equation has only solution belongs to $(16,17)$. - Thanks. What you did above seems to be just an approximation - isn't there a formal way to find an exact solution? – abc Oct 11 '12 at 1:50 Exact solution is too difficult to find. – minthao_2011 Oct 11 '12 at 2:17 4 At x=16 we have $log_{10}(16-15)=0$ so $f(16)<0$. I guess that's what you should have written in your answer anyway, since the statement $f(17)>0$ is right. You want a sign change to guarantee a zero in between. – coffeemath Oct 11 '12 at 2:25 Let $x=16+y$. After approximating $\log(1+y)$ with $y - \frac{y^2}{2}$, $(\frac{1}{4})^y = \exp(-\log(4) y)$ with $1 - \log(4) y$, $\frac{1}{1+\epsilon}$ with $1 - \epsilon$, get $$y \approx \frac{\log(10)}{3 \cdot 4^{16}}.$$ WIMS Function Calculator gives the exact solution, $1.7870412306 \cdot 10^{-10}$ compared to the approximation, $1.7870412309 \cdot 10^{-10}$. - Please check if my LaTeX-editing did not mess up your equations. In any case, (+1) for the nice approach. – TMM Dec 31 '12 at 0:42
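Since the left-hand side is strictly increasing and the right-hand side strictly decreasing for $x>15$, the unique root can be bracketed in $[16,17]$ (where $f$ changes sign, as noted in the answers and comments) and located by bisection. A minimal numerical sketch of my own, not code from the thread:

```python
import math

def f(x):
    # f(16) = -0.25**16 < 0, f(17) ~ 0.9 > 0, and f is increasing on (15, oo)
    return 3 * math.log10(x - 15) - 0.25 ** x

lo, hi = 16.0, 17.0
for _ in range(60):            # bisection halves the bracket each step
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid

print(mid - 16)   # ~1.787e-10, matching the approximation log(10) / (3 * 4**16)
```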
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9203177690505981, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/28737/do-lorentz-boosts-in-the-same-direction-form-a-group/28738
# Do Lorentz Boosts in the same direction form a group? I know that two consecutive Lorentz Boosts in different directions produce a rotation and therefore Lorentz Boosts don't form a group. But, my intuition tells me that, Lorentz Boosts in the same direction should form a group. Two boosts along the x axis should produce another boost along the x axis. Is that correct? - Boosts in different directions form a subgroup of the Lorentz group? If yes, how can I check it? – user11667 Aug 25 '12 at 19:59 No they dont, because their commutator is a rotation. Boost in x, then in y, then backward in x, and backward in y, and you get a rotation. You should delete this answer, as it doesn't answer the question. – Ron Maimon Aug 25 '12 at 21:28 ## 2 Answers Yes, it is a one-dimensional subgroup generated by exponentiating an infinitesimal boost. Every one dimensional exponentiation of a generator forms an abelian group, because $e^{aG} e^{bG} = e^{(a+b)G}$, there is nothing to not commute. This result is the addition of velocities, you can explicity check that this is associative (it is always manifestly commutative). - 1 – Luboš Motl May 21 '12 at 19:40 Thank you for these answers. I can't +1 or check this now because my browser isn't letting me register an account. It is interesting that noncommutativty produces a rotation via the Baker Campbell formula and the fact that the commutator of two Boost Generators produces a Rotation Generator. But the first term is still e^{(a+b)G} in such a case, so I suppose it is actually a boost and a rotation simultaneously. – MadScientist May 21 '12 at 20:03 Firstly, an excellent question. Never considered this before. As has already been said, combining boosts along difference directions clearly doesn't form a group as the closure axiom is not met. However, Lorentz boosts are nothing more than (hyperbolic) rotations in Minkowski space. So I think the set of boosts along the same axis should form a rotation group. I hope someone else can explain the reasoning without resorting to generators, as they're not my strong point. See this YouTube video for some basic derivation. - Don't be scared about generators--- the exponential parameter is just the (hyperbolic) angle of the (hyperbolic) rotation, as Lubos Motl commented on my answer. If you follow the link, you can see that the angles add up in relativity, just as they do in geometry, so that the group is the additive real numbers (although not modulo $2\pi$ like in geometry).. – Ron Maimon May 22 '12 at 9:14
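The rapidity picture in the answers is easy to verify numerically: an x-boost with rapidity $a$ acts on the $(t,x)$ plane as the matrix with entries $\cosh a$ on the diagonal and $-\sinh a$ off it, and composing two such boosts gives the boost with rapidity $a+b$. A small sketch of mine (not from the thread), with NumPy and arbitrarily chosen rapidity values:

```python
import numpy as np

def boost(a):
    """Lorentz boost along x with rapidity a, restricted to the (t, x) plane."""
    return np.array([[np.cosh(a), -np.sinh(a)],
                     [-np.sinh(a), np.cosh(a)]])

a, b = 0.7, 1.3
print(np.allclose(boost(a) @ boost(b), boost(a + b)))        # True: rapidities add
print(np.allclose(boost(a) @ boost(b), boost(b) @ boost(a)))  # True: the subgroup is abelian
```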
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9271604418754578, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/tagged/information-theory?sort=active&pagesize=15
Tagged Questions The science of compressing and communicating information. It is a branch of applied mathematics and electrical engineering. Though originally the focus was on digital communications and computing, it now finds wide use in biology, physics and other sciences. 2answers 49 views How can you use a (fair) coin to draw straws among 3 people? (Information Theory) The following nice riddle is a quote from the excellent, free-to-download book: Information Theory, Inference, and Learning Algorithms, written by David J.C. MacKay. How can you use a (fair) coin ... 3answers 119 views Another Information Theory Riddle The following nice riddle is a quote from the excellent, free-to-download book: Information Theory, Inference, and Learning Algorithms, written by David J.C. MacKay. In a magic trick, there are ... 3answers 275 views Is it wrong to use Binary Vector data in Cosine Similarity? I am doing Information Retrieval using Cosine Similarity. My data is binary vector. Since most of all reference I read is using non-binary vector (non-binary matrix) data, I am wondering if it is ... 0answers 35 views Intregral of exponential of Shannon Entropy Function Here I am going to ask a similar question as rde asked , that is what is the integral of exponential of entropy function. That is what is the value of $F[H(x)]=\int_{-1}^{+1} e^{ikH(f(x^2))} dx$ ... 1answer 70 views Information content associated with an outcome I have the following exam question for a multimedia exam in college: Assume that you roll a single ordinary six-sided die twice, and observe that the second number rolled is greater than the ... 4answers 90 views What exactly is a probability measure in simple words? Can someone explain probability measure in simple words? This term has been hunting me for my life. Today I came across Kullback-Leibler divergence. The KL divergence between probability measure P ... 3answers 130 views Measure of how much information is lost in an implication In an implication like $p \implies q$, is there some measure of how much information is lost in the implication? For example, consider the following implications, where $x \in \{0,1,\ldots,9\}$: ... 1answer 95 views How information works? I am really confused after reading wikipedia... What I don't get is how can something "bring" information, and in mathematics, how a mathematical object (like a set) can "have" information. For ... 0answers 11 views Hellinger distance between 3-parameter Weibull distributions I found Wikipedia to have listed Hellinger distance between pairs of 2-parameter Weibull distributions sharing the same shape parameter http://en.wikipedia.org/wiki/Hellinger_distance However, I ... 1answer 17 views What is being maximised in the channel capacity formula? The channel capacity formula is given as such: $$C=\max_{p(x)}I(X,Y)$$ Does this mean that it is the maximum probability multiplied by the mutual information, or is something else being maximised ... 1answer 26 views Entropy vs predictability vs encodability Imagine there's a guessing game where a series of binary symbols are presented and a human must decide quickly if the symbol is the same as the previous or different. There's a property of the ... 1answer 21 views The information of a Bernoulli random variable and surprisingness Consider a random variable $\mathbb{X}$ with: $f(x;p) = 2^{-n}$ if x = 1 and $f(x;p) = 1-2^{-n}$ Then the information gained from an experiment where x=1 is ... 
1answer 28 views Shannon inequalities I have some difficulties in showing the relationship between mutual information $I(X; Y |Z)$ and $I(X; Y)$? What is larger? 4answers 118 views Intuitive explanation of entropy? I have bumped many times into entropy, but it has never been clear for me why we use this formula: If $X$ is random variable then it's entropy is: $H(X) = -\displaystyle\sum_{x} p(x)\log\;p(x)$ Why ... 3answers 170 views What is necessary to exchange messages between aliens? [closed] Lets assume that two extreme intelligent species in the universe can exchange morse code messages for the first time. A can send messages to B and B to A, both have unlimited time, but they can not ... 0answers 28 views Computing Relative entropy? I am doing a project for my CS class and I was wondering if the following would work. I have 50 different people who have rated the same 50 books. The rating system is as follows: negative 5 = hate ... 1answer 165 views measure of information We know that $l_i=\log \frac{1}{p_i}$ is the solution to the Shannon's source compression problem: $\arg \min_{\{l_i\}} \sum p_i l_i$ where the minimization is over all possible code length ... 1answer 28 views Possible Mistake in Calculating Posterior of Distribution using Bayes Rule and Integration I have been struggling on a homework question where I have to compute the posterior density of a distribution. While I can compute the posterior, I believe I made a mistake because the area under the ... 0answers 30 views Why is K-L divergence defined as it is? Why is the K-L divergence defined this way: if $P$ and $Q$ are probability measures over a set $X$, and $P$ is absolutely continuous with respect to $Q$, then the Kullback–Leibler divergence from ... 2answers 63 views Given $\forall x \in \mathbb{R} \: h(p^t(x))=th(p(x))$, how to get $h(p(x)) \propto \ln p(x)$? The whole question is in the title. $p(x)$ is a probability distribution, and $h$ is continuous and monotonic in $p(x)$. The purpose is to motivate that the "degree of surpise", or the "amount of ... 0answers 71 views Which takes more energy: Shuffling a sorted deck or sorting a shuffled one? You have an array of length $n$ containing $n$ distinct elements. You have access to a comparator on the elements (a black-box function that takes $a$ and $b$ and returns true if $a < b$, false ... 0answers 29 views convexity of the product of two entropy-like functions Consider the functions $T_p(q)= \sum_i q_i^p$, where p>1 and q is a finite-dimensional vector satisfying $\sum_i q_i = 1, q_i >0$ (ie, a probability mass function). In information-theoretic terms, ... 1answer 29 views Entropy: Is $H(X_{1},X_{2}) = H(X_{1})$ true? Question: If $X_{1}, X_{2}$ are two discrete random variables. $X_{1}, X_{2}$ have the same probability distribution can we then deduce that: $H(X_{1}, X_{2}) = H(X_{1})$ is true? Remark: ... 1answer 27 views Easy bound involving logs and binomial coefficients I am currently working on an information theory problem where I have to bound the divergence between two distributions. The divergence can be simplified to: \sum_{k=0}^N \ {N\choose k} ... 2answers 151 views Are there simple examples of capacity-achieving block codes for discrete memoryless channels? The title pretty much says it all, but I am particularly interested in the case where the number of input and output symbols are equal and the transition matrix defining the DMC is nondegenerate. I am ... 1answer 31 views i.i.d binary random variable question Suppose there are i.i.d. 
binary random variables $X_i \sim X$ with distribution $P(X=1) = 0.75$ and $P(X=0) = 0.25$ i) For $n=5$ and $e=0.1$, which sequences fall in the typical set $A_e^n$? What is ... 1answer 33 views Help deciphering Levenshtein formula I am trying to completely understand the Levenschtein formula, and I have been reading the Wikipedia article on this. However, the description of the mathematical formula confuses me: ... 1answer 32 views Expanding information capacity of Gaussian Channel I'm currently try to understand a Gaussian Capacity Channel. I found litterature on internet, and some expand the information capacity of a Gaussian Channel as follow: I(X,Y)= h(Y) -h(Y\mid X) = ... 3answers 94 views Does any error correction code still work in such situation? I'm looking for a kind of error correction code or solution that can correct my codeword in this case: My message holds k bits, and 2*k bits codeword (rate is 1/2) is produced by the generator ... 1answer 32 views Amount of information a hidden state can convey (HMM) In this paper (Products of Hidden Markov Models, http://www.cs.toronto.edu/~hinton/absps/aistats_2001.pdf), the authors say that: The hidden state of a single HMM can only convey log K bits of ... 2answers 73 views Ask for a question about independence This is the question I met while reading Shannon's channel coding theorem. Assume a random variable $X$ is transmitted through a noisy channel with transition probability $p(y|x)$. At the receiver a ... 1answer 81 views Definition of entropy of an ergodic measure I'm reading a paper in which it is stated that The entropy of an ergodic measure is defined as $$\lim_{n \to \infty} -\frac{1}{n} \sum_{|w|=n} \mu[w] \log \mu[w].\tag{1} \label{eq:1}$$ Here ... 3answers 182 views What is the least amount of questions to find out the number that a person is thinking between 1 to 1000 when they are allowed to lie at most once A person is thinking of a number between 1 and 1000. What is the least number of yes/no questions that we can ask and know what that person's number is given that the person is allowed to lie on at ... 1answer 47 views Infinite Bias in a Maximum Likelihood Estimator I'm having some problems calculating the bias of a ML estimator in the following problem: Let $\mu$, $x$, $y$ be random variables such that: $y|x$ is distributed as $\exp(x)$ so that \$p(y|x) = ... 0answers 67 views Information-theoretic aspects of mathematical systems? It occured to me that when you perform division in some algebraic system, such as $\frac a b = c$ in $\mathbb R$, the division itself represents a relation of sorts between $a$ and $b$, and once you ... 2answers 231 views Theoretical basis for overfitting There are many examples in which making more "precise" predictions gives worse performance (e.g. Runge's phenomenon). My professor implied that there was a sound basis for choosing "simple" functions ... 1answer 55 views Expression for the size of type class, or multinomial coefficient. The notations follow those in Cover&Thomas, "Elements of Information Theory", 2ed. I saw from a paper that the size of type class $T(P)$ can be expressed as ... 0answers 45 views Rigorous formulation of Shannon-Hartley theorem The Shannon-Hartley theorem gives an expression for the capacity of a bandwidth and power limited channel. How would one formulate this theorem mathematically (rigorously)? I understand the formula ... 0answers 91 views Random variables identities - how to make a formal proof. Let $X, Y, Z$ be three random discrete variables. 
Consider the below random variables: $A = X\vert Y\vert Z$ ,$B= X\vert Y,Z$ Question: Can I conclude that $A$ and $B$ are the same ... 0answers 101 views Intuition for Fisher information metric In statistical maniolds $S=\{p_\theta\}$,$\theta=(\theta_1,\dots,\theta_n)$, the Riemaanian metric usually defined is the Fisher information metric g_{ij}(\partial_i,\partial_j)=\int \partial_i(\log ... 0answers 62 views Proof for the upper bound on entropy $H(S)$? I was trying to prove the upper bound on $H(S)$ using the inequalities $\ln(x)\le(x-1)$ and $\ln(1/x)\ge(1-x)$ for independent and memory less source symbols $s_1,\dots,s_q$ . I am trying to prove ... 1answer 36 views Decoding used in Algorithms Using a transposition matrix of size 4 by 6 (4 columns, 6 rows) and key ‘time’ decode the following message: RLAPET HWBUIE EIERSS TELSRT I am just looking for either a starting point or a step by ... 1answer 140 views A generalisation of a well known result in information theory It is well known that Entropy is additive, and that it is the only sensible choice for measuring uncertainty if we want additivity to hold, i.e. $H(XY) = H(X)+H(Y)$ or more explicitly, if we have ... 1answer 56 views About the differential entropies of well-known continuous distributions Assume that the continuous random variable $X$ has a distribution (in a closed form expression) with differential entropy $h(X)$. Q) Then, is it true for any continuous distribution that the ... 1answer 51 views mutual information problem Consider the following problem: What is $I(X;Y)$ where $X$ is the outcome of a roll of a fair 6-sided die and $Y$ is whether the outcome of THAT SAME ROLL was even or odd? Intuitively, I thought ... 1answer 56 views Entropy Problem: mutual information I have a problem about entropy and mutual information that I have attempted, but would like feedback on. 30% Boas 20% Anaconda 50% Cobra Half of the Cobras were medium sized, and the other half were ... 1answer 65 views One cannot know if a number could be written any shorter according to Gödel's incompleteness theorem I am reading Tor Nørretranders (cannot find the English version, sry) and he states that Gödel's incompleteness theorem implies that we cannot know if we can write a number any shorter (e.g. ... 1answer 28 views A question about independence of bivariate random variables Assume we have two bivariate random variables $(X_1,X_2)$ and $(Y_1, Y_2)$ and the distribution satisfies $p(y_1,y_2|x_1,x_2)=p(y_1|x_1)p(y_2|x_2)$. I can prove that if $X_1$ and $X_2$ are ... 0answers 37 views Multivariate Generalizations of the Mutual information I'm interested in Multivariate generalizations of the Mutual information. So I'm just wondering if anyone can point me to a list of all such generalizations currently proposed. I've heard about the ... 1answer 85 views How to prove the following entropy formula? Could anyone show me a proof or redirect to a source where the following entropy equation is proved? =) Thank you!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 77, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9141648411750793, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/730/an-elegant-description-for-graded-module-morphisms-with-non-zero-zero-component?answertab=votes
# An elegant description for graded-module morphisms with non-zero zero component In an example I have worked out for my work, I have constructed a category whose objects are graded $R$-modules (where $R$ is a graded ring), and with morphisms the usual morphisms quotient the following class of morphisms: $\Sigma=\left\lbrace f\in \hom_{\text{gr}R\text{-mod}}\left(A,B\right) \ | \ \ker\left(f\right)_0\neq 0, \ \mathrm{coker}\left(f\right)_0\neq 0\right\rbrace$ (by quotient I mean simply that this class of morphisms are isomorphisms, thus creating an equivalence relation) I am wondering if this category has a better (more canonical) description, or if I can show it is equivalent to some other interesting category. Thanks! - I don't understand your comment Grigory. – BBischof Jul 27 '10 at 12:41 Since $\Sigma$ isn't an ideal in the category of graded $R$-modules, I don't think that quotienting by it makes much sense. – Rasmus Aug 13 '10 at 10:38 Do you mean to make the maps in $\Sigma$ all the zero maps (which is quotienting) or do you mean to make the maps in $\Sigma$ isomorphisms (which is localizing)? As $\Sigma$ is not an ideal the first isn't well defined, and as $\Sigma$ can contain the zero map between modules the second doesn't seem to make much sense either. – Jim Feb 27 at 7:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9508971571922302, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2010/09/01/
# The Unapologetic Mathematician

## The Extremal Case of Hölder's Inequality

We will soon need to know that Hölder's inequality is in a sense the best we can do, at least for finite $p$. That is, not only do we know that for any $f\in L^p$ and $g\in L^q$ we have $\lVert fg\rVert_1\leq\lVert f\rVert_p\lVert g\rVert_q$, but for any $f\in L^p$ there is some $g\in L^q$ for which we actually have equality. We will actually prove that

$\displaystyle\lVert f\rVert_p=\max\left\{\left\lvert\int fg\,d\mu\right\rvert\Big\vert\lVert g\rVert_q\leq1\right\}$

That is, not only is the integral bounded above by $\lVert f\rVert_p\lVert g\rVert_q$ — and thus by $\lVert f\rVert_p$ — but there actually exists some $g$ in the unit ball which achieves this maximum.

Hölder's inequality tells us that

$\displaystyle\left\lvert\int fg\,d\mu\right\rvert\leq\int\lvert fg\rvert\,d\mu\leq\lVert f\rVert_p\lVert g\rVert_q\leq\lVert f\rVert_p$

so $\lVert f\rVert_p$ must be at least as big as every element of the given set.

If $\lVert f\rVert_p=0$, then it's clear that the asserted equality holds, since $f=0$ a.e., and so $0$ is the only element of the set on the right. Thus from here we can assume $\lVert f\rVert_p>0$.

We now define a function $g$. At every point $x$ where $f(x)=0$ we set $g(x)=0$ as well. At all other $x$ we define

$\displaystyle g(x)=\lVert f\rVert_p^{1-p}\frac{\lvert f(x)\rvert^p}{f(x)}$

In the case where $p=1$ we will verify that $\lVert g\rVert_\infty=1$. That is, the essential supremum of $g$ is $1$. And, indeed, we find that $g(x)=1$ at points where $f(x)>0$, and $g(x)=-1$ at points where $f(x)<0$.

If $1<p<\infty$, then we check

$\displaystyle\begin{aligned}\lVert g\rVert_q&=\left(\int\lvert g\rvert^q\,d\mu\right)^\frac{1}{q}\\&=\left(\int\lVert f\rVert_p^{-p}\lvert f\rvert^p\,d\mu\right)^\frac{1}{q}\\&=\left(\lVert f\rVert_p^{-p}\int\lvert f\rvert^p\,d\mu\right)^\frac{1}{q}\\&=\left(\lVert f\rVert_p^{-p}\lVert f\rVert_p^p\right)^\frac{1}{q}\\&=1\end{aligned}$

In either case, it's easy to see that

$\displaystyle\int fg\,d\mu=\lVert f\rVert_p$

as asserted.
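On a finite measure space with counting measure the construction of $g$ is easy to test numerically. The following is a small sketch of my own (not part of the post), with an arbitrary random vector standing in for $f$; it checks that the constructed $g$ satisfies $\lVert g\rVert_q=1$ and that $\int fg\,d\mu=\lVert f\rVert_p$.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 3.0
q = p / (p - 1)                          # conjugate exponent: 1/p + 1/q = 1

f = rng.normal(size=10)                  # a "function" on a 10-point space (entries nonzero a.s.)
norm_f = np.sum(np.abs(f) ** p) ** (1 / p)

# The extremal g from the post (and g = 0 wherever f = 0, which does not occur here).
g = norm_f ** (1 - p) * np.abs(f) ** p / f

norm_g = np.sum(np.abs(g) ** q) ** (1 / q)
print(np.isclose(norm_g, 1.0))               # True: g lies on the unit sphere of L^q
print(np.isclose(np.sum(f * g), norm_f))     # True: the integral attains ||f||_p
```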
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 33, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.92941814661026, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/68873/list
Edit: I misunderstood the question... The answer to the first question for closed orientable manifolds in dimension 4 is negative.

The Dold-Whitney theorem states that two oriented 4-plane bundles over the same 4-manifold $M$ are isomorphic if and only if they share the same second Stiefel-Whitney class $w_2$, the same first Pontryagin class $p_1$, and the same Euler class $e$. Each such characteristic class is determined by the homotopy type of $M$ (see below), and hence the cotangent bundle over $M$ is also determined, no matter what differentiable structure one assigns to $M$.

These characteristic classes are indeed easily determined by the homotopy type of $M$: $p_1$ is 3 times the signature $\sigma(M)$ of $M$ by Hirzebruch's formula, the Euler class is $\chi(M)$, and $w_2$ is determined by the intersection form thanks to Wu's formula, see for instance this question.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 39, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9155955910682678, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/32345-simplification-integral.html
# Thread:

1. ## Simplification of Integral

Hello! Does anyone of you guys see any chance to reduce/simplify this? The function f is generic/unknown.

$\int_t^T{f(\tau)e^{-D\tau}d\tau}$

Thanks!

2. Originally Posted by paolopiace
Hello! Does anyone of you guys see any chance to reduce/simplify this? The function f is generic/unknown. $\int_t^T{f(\tau)e^{-D\tau}d\tau}$ Thanks!

It's similar to a Laplace transform. I can't think of anything that can be done with it if f is arbitrary.

-Dan

3. Originally Posted by topsquark
It's similar to a Laplace transform. I can't think of anything that can be done with it if f is arbitrary. -Dan

In fact, depending on f, it might be impossible to do. Or integration by parts (perhaps even repeated) might be useful:

$\int_t^T{f(\tau)e^{-D\tau}d\tau} = \left[ -\frac{f(\tau) \, e^{-D\tau}}{D} \right]_{t}^{T} + \frac{1}{D}\int_t^T{f'(\tau)e^{-D\tau}d\tau}$.

The observation made by topsquark could also be useful, especially perhaps if f were periodic with period T.
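The integration-by-parts identity in the last post can be sanity-checked symbolically for any concrete f. A small sketch of mine (not from the thread) using SymPy, with the arbitrary choice $f(\tau)=\tau^2$:

```python
import sympy as sp

t, T, tau, D = sp.symbols('t T tau D', positive=True)
f = tau**2                                  # arbitrary concrete choice for f

lhs = sp.integrate(f * sp.exp(-D * tau), (tau, t, T))

boundary = -f * sp.exp(-D * tau) / D        # the [-f(tau) e^{-D tau} / D] term
rhs = (boundary.subs(tau, T) - boundary.subs(tau, t)
       + sp.integrate(sp.diff(f, tau) * sp.exp(-D * tau), (tau, t, T)) / D)

print(sp.simplify(lhs - rhs))               # 0: both sides agree
```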
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9637678861618042, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/106283/let-n-be-the-positive-integer-such-that-5n-and-2n-begin-with-same-digit?answertab=active
# Let $n$ be a positive integer such that $5^n$ and $2^n$ begin with the same digit. Which digit is that?

This is my first time posting here, so sorry if I've done something wrong; it is also my first time encountering a problem like this. Besides the trivial $n=0$, the only solution I found by simply writing down the powers of $2$ and $5$ in parallel is $n=5$ ($5^5=3125$, $2^5=32$). I couldn't find any kind of period. I've done problems about the last digit, but not about the first one. Hopefully I could get a hint. Thanks.

-

## 1 Answer

Hint: $5^n\times2^n=10^n$.

-

2 +1 This is just so... I'm gonna use this as an exercise given half a chance. – Jyrki Lahtonen Feb 6 '12 at 12:04
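The hint pins the digit down: if both $5^n$ and $2^n$ start with the digit $d$, multiplying the two leading-digit bounds gives $d^2 \le 10^k < (d+1)^2$ for some $k\ge 1$, which forces $d=3$ (as in $5^5=3125$ and $2^5=32$). A throwaway brute-force check of mine (not from the thread):

```python
def leading_digit(m):
    return int(str(m)[0])

common = {leading_digit(5 ** n) for n in range(1, 2000)
          if leading_digit(5 ** n) == leading_digit(2 ** n)}
print(common)   # {3} -- e.g. n = 5: 5**5 = 3125 and 2**5 = 32 both start with 3
```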
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9400873780250549, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/182576/how-long-does-it-take-to-distribute-a-file-among-n-computers?answertab=active
# How long does it take to distribute a file among n computers So i am at a LAN party where there are $n$ computers of which one $c_0$ computer has a file he wants everybody to have. Let's assume that transferring the file always takes $t$ time between any two computers. Initially only the one $c_0$ computer has the file. He then transfers the file to a computer $c_1$ which takes $t$ time. Now there are 2 computers in the LAN that have the file. They then transfer the file to one new computer each, $c_2$ and $c_3$, which takes $t$ time. So after $2t$ time 4 computers have the file. And so in $xt$ time $2^x$ computers have the file. How long does it take for a file that takes $t$ time to be transferred, to be distributed among all $n$ computers? edit: What if only one-to-one transfers are allowed? E.g. two computers cannot 'work together' to transfer the file to one computer in $t/2$. - ## 1 Answer $t*log_2(n)$ time. Each transfer between two computers takes the same $t$ time, so we can assume that all transfers at a "stage" of the process happen concurrently. After each stage, double the number of computers as the last stage have the file (base-2 exponential growth), so the number of stages of transfers that must occur is logarithmically bound, base-2, to the number of computers in total that must have the file. As $log(1) = 0$ regardless of base, the operation is bound to the entire number of computers, even though one of them already has the file (and thus it would take 0 time for the trivial case of a group consisting of the one computer that already has the file). In computer-science lingo, this operation is logarithmic-time, or O(logN). EDIT: From my comment, if $n$ is not an exact power of two, the logarithm will produce a decimal result. If the time $t$ required to transfer one file to one computer is a minimum bound, then the correct answer is the next higher integer value for the log, $t*ceil(log_2(n))$. However, if $t$ is the time it takes for one computer to transfer one file to one recipient, and it is possible in this system for $n$ computers to each transfer $1/n$ of the file to 1 recipient in $t/n$ time, then the decimal log value is correct, because for $n = 2^x+y < 2^{n+1}$, each of the $y$ computers can receive a fraction $y/2^x$ of the file from $2^x/y$ other machines, taking $ty/2^x$ additional time after $t*log(2^x)$, approaching the decimal value of $t*log_2(n)$. This is the ideal behavior of P2P systems like BitTorrent where every computer that has the file (or any portion of it) can re-distribute portions of it to any computer requesting the file, and thus the file's data can come from as many sources as are available, making the bound on transfer time the much larger download bandwidth of the average consumer internet connection and a fraction of $t$. - Thanks! But what about this scenario: there are 5 computers, after 2 steps 4 computers have the file, but now it takes a whole step to transfer the file to that one last computer, which gets to a total of 5 steps, instead of $log_2(5) = 2.32$. I guess i only want integers as answers? How can we take this into consideration? – Pärserk Aug 14 '12 at 20:36 1 Round up to the nearest integer. If there are $2^n+x$ computers where $x<2^n$, those x computers will take the full $t$ time to transfer, but only $x < 2^n$ computers that already have the file will be used during that stage. 
OR, in a system like BitTorrent, those last transfers could be accomplished by having all $2^n$ computers send a part of the file to the $x$ that don't, which may break the minimum bound of $t$ time to perform a transfer of the file to one computer. – KeithS Aug 14 '12 at 20:38 oh... it's that simple? I feel stupid now! Thanks though :) – Pärserk Aug 14 '12 at 20:43 But wait... Shouldn't an ideal behavior for a P2P system like BitTorrent be that distributing a file among $n$ computers takes only $t$ time? We don't have to wait for the whole file to be transferred before sending it on to other computers. If each computer has as much upload as download bandwidth, and transferring the file between any two computers 'normally' takes $t$ time, then using BitTorrent it should ideally take the same time $t$ to transfer the file to all other computers as well. No? – Pärserk Aug 14 '12 at 21:08 Depends on what is causing the bound of $t$ time to transfer the file. The typical consumer internet connection is asynchronous; less upload than download, because that's what we do more of. So, in that case $t$ is bound to the upload bandwidth of one computer. But, if two computers with the same upload speed were each sending you half of the file, assuming the download speed of the recipient is at least twice the upload speed, you'd get the file in $t/2$ time. By increasing the number of uploaders per downloader, transfer time becomes bound by the much larger download bandwidth. – KeithS Aug 14 '12 at 21:14 show 4 more comments
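The whole-file case from the answer above is easy to simulate: in each round of length t every computer that already has the file serves exactly one computer that does not, so the number of holders doubles each round (capped at n), and the total time is t times the round count. A small sketch of mine (not from the thread) comparing the simulation with the ceiling bound:

```python
import math

def rounds_needed(n):
    """Rounds of length t until all n computers have the file, one-to-one transfers only."""
    have, rounds = 1, 0
    while have < n:
        have = min(2 * have, n)   # every holder hands the file to one new computer
        rounds += 1
    return rounds

for n in [1, 2, 3, 4, 5, 8, 9, 1000]:
    print(n, rounds_needed(n), math.ceil(math.log2(n)))   # the last two columns agree
```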
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 48, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9593692421913147, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/58814/numerically-identifying-discontinuity
## Numerically Identifying Discontinuity

Hey, I need to numerically identify the discontinuity points of a function given by a general expression (formula). I am able to evaluate its values at any point. I need this to be fast but not necessarily accurate; the goal is to render functions correctly. With my naive algorithm, I get vertical lines at $x=0$ for $1/x$ and $\operatorname{sign}(x)$. The types of discontinuities I need to find are those of $\operatorname{sign}(x)$ and $1/x$: step-like jumps, and poles where the function flips from $-\infty$ to $+\infty$. I would like to avoid false positives such as $\sin(1/x)$, which may numerically look discontinuous as you approach $0$. Thank you!

- 1 I don't think that this is appropriate for this site. You might have more luck at maths.stackexchange.com – Andrew Stacey Mar 18 2011 at 8:38
- 1 I think this is appropriate for this site. – Paul Tupper Jul 18 2011 at 4:37

## 1 Answer

It's going to be hard to find a "fast" way of doing this, but there is an algorithm due to Jeff Tupper for reliably sketching discontinuous functions, which you should be able to adapt to your needs.
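The answer points to Tupper's graphing algorithm; as a much cruder illustration of the same refinement idea, here is a sketch of my own (it is not Tupper's method, and the tolerance, split ratio, grid and depth are arbitrary choices). It flags a plotting interval only if the jump across it refuses to shrink under repeated subdivision, which catches sign(x)- and 1/x-style discontinuities; it will still misfire on sin(1/x) in a small neighbourhood of 0, where no finite amount of sampling can tell the oscillation apart from a jump.

```python
import numpy as np

def has_jump(f, x0, x1, tol=0.5, depth=16):
    """True if the jump of f over [x0, x1] survives repeated refinement.
    A continuous function's jump dies out as the interval shrinks; a genuine
    jump or pole keeps a large jump all the way down to the depth limit."""
    if abs(f(x1) - f(x0)) < tol:
        return False
    if depth == 0:
        return True
    c = x0 + 0.382 * (x1 - x0)   # off-centre split, to avoid landing exactly on the bad point
    return has_jump(f, x0, c, tol, depth - 1) or has_jump(f, c, x1, tol, depth - 1)

xs = np.linspace(-1.0, 1.0, 400)   # even number of samples, so 0 itself is never evaluated
for f in (np.sign, lambda x: 1.0 / x):
    flagged = [(a, b) for a, b in zip(xs[:-1], xs[1:]) if has_jump(f, a, b)]
    print(flagged)                 # in both cases: only the sample interval straddling 0
```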
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9345839023590088, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Absolute_value
# Absolute value From Wikipedia, the free encyclopedia Jump to: navigation, search For other uses, see Absolute value (disambiguation). The absolute value of a number may be thought of as its distance from zero. In mathematics, the absolute value (or modulus) | x | of a real number x is the non-negative value of x without regard to its sign. Namely, | x | = x for a positive x, | x | = −x for a negative x, and | 0 | = 0. For example, the absolute value of 3 is 3, and the absolute value of −3 is also 3. The absolute value of a number may be thought of as its distance from zero. Generalisations of the absolute value for real numbers occur in a wide variety of mathematical settings. For example an absolute value is also defined for the complex numbers, the quaternions, ordered rings, fields and vector spaces. The absolute value is closely related to the notions of magnitude, distance, and norm in various mathematical and physical contexts. ## Terminology and notation Jean-Robert Argand introduced the term "module", meaning 'unit of measure' in French, in 1806 specifically for the complex absolute value[1][2] and it was borrowed into English in 1866 as the Latin equivalent "modulus".[1] The term "absolute value" has been used in this sense since at least 1806 in French[3] and 1857 in English.[4] The notation | x | was introduced by Karl Weierstrass in 1841.[5] Other names for absolute value include "the numerical value"[1] and "the magnitude".[1] The same notation is used with sets to denote cardinality; the meaning depends on context. ## Definition and properties ### Real numbers For any real number x the absolute value or modulus of x is denoted by | x | (a vertical bar on each side of the quantity) and is defined as[6] $|x| = \begin{cases} x, & \mbox{if } x \ge 0 \\ -x, & \mbox{if } x < 0. \end{cases}$ As can be seen from the above definition, the absolute value of x is always either positive or zero, but never negative. From an analytic geometry point of view, the absolute value of a real number is that number's distance from zero along the real number line, and more generally the absolute value of the difference of two real numbers is the distance between them. Indeed the notion of an abstract distance function in mathematics can be seen to be a generalisation of the absolute value of the difference (see "Distance" below). 
Since the square-root notation without sign represents the positive square root, it follows that $|a| = \sqrt{a^2}$ () which is sometimes used as a definition of absolute value.[7] The absolute value has the following four fundamental properties: | | | | |----------------------------------------------------------------------------|----|-----------------------| | $|a| \ge 0$ | () | Non-negativity | | $|a| = 0 \iff a = 0$ | () | Positive-definiteness | | $|ab| = |a||b|\,$ | () | Multiplicativeness | | $|a+b| \le |a| + |b|$ | () | Subadditivity | Other important properties of the absolute value include: | | | | |-----------------------------------------------------------------------------|----|------------------------------------------------------------------------------| | $||a|| = |a|\,$ | () | Idempotence (the absolute value of the absolute value is the absolute value) | | $|-a| = |a|\,$ | () | Symmetry | | $|a - b| = 0 \iff a = b$ | () | Identity of indiscernibles (equivalent to positive-definiteness) | | $|a - b| \le |a - c| +|c - b|$ | () | Triangle inequality (equivalent to subadditivity) | | $\left|\frac{a}{b}\right| = \frac{|a|}{|b|} \mbox{ (if } b \ne 0) \,$ | () | Preservation of division (equivalent to multiplicativeness) | | $|a-b| \ge ||a| - |b||$ | () | (equivalent to subadditivity) | Two other useful properties concerning inequalities are: $|a| \le b \iff -b \le a \le b$ $|a| \ge b \iff a \le -b \mbox{ or } b \le a$ These relations may be used to solve inequalities involving absolute values. For example: $|x-3| \le 9$ $\iff -9 \le x-3 \le 9$ $\iff -6 \le x \le 12$ Absolute value is used to define the absolute difference, the standard metric on the real numbers. ### Complex numbers The absolute value of a complex number z is the distance r from z to the origin. It is also seen in the picture that z and its complex conjugate z have the same absolute value. Since the complex numbers are not ordered, the definition given above for the real absolute value cannot be directly generalised for a complex number. However the geometric interpretation of the absolute value of a real number as its distance from 0 can be generalised. The absolute value of a complex number is defined as its distance in the complex plane from the origin using the Pythagorean theorem. More generally the absolute value of the difference of two complex numbers is equal to the distance between those two complex numbers. For any complex number $z = x + iy,\,$ where x and y are real numbers, the absolute value or modulus of z is denoted | z | and is given by $|z| = \sqrt{x^2 + y^2}.$ When the complex part y is zero this is the same as the absolute value of the real number x. When a complex number z is expressed in polar form as $z = r e^{i \theta}$ with r ≥ 0 and θ real, its absolute value is $|z| = r$. The absolute value of a complex number can be written in the complex analogue of equation (1) above as: $|z| = \sqrt{z \cdot \overline{z}}$ where $\overline z$ is the complex conjugate of z. The complex absolute value shares all the properties of the real absolute value given in equations (2)–(11) above. Since the positive reals form a subgroup of the complex numbers under multiplication, we may think of absolute value as an endomorphism of the multiplicative group of the complex numbers.[citation needed] ## Absolute value function The graph of the absolute value function for real numbers Composition of absolute value with a cubic function in different orders The real absolute value function is continuous everywhere. 
It is differentiable everywhere except for x = 0. It is monotonically decreasing on the interval (−∞,0] and monotonically increasing on the interval [0,+∞). Since a real number and its negative have the same absolute value, it is an even function, and is hence not invertible. Both the real and complex functions are idempotent. It is a piecewise linear, convex function. ### Relationship to the sign function The absolute value function of a real number returns its value irrespective of its sign, whereas the sign (or signum) function returns a number's sign irrespective of its value. The following equations show the relationship between these two functions: $|x| = x \sgn(x),$ and for x ≠ 0, $\sgn(x) = \frac{|x|}{x}.$ ### Derivative The real absolute value function has a derivative for every x ≠ 0, but is not differentiable at x = 0. Its derivative for x ≠ 0 is given by the step function[8][9] $\frac{d|x|}{dx} = \frac{x}{|x|} = \begin{cases} -1 & x<0 \\ 1 & x>0. \end{cases}$ The subdifferential of | x | at  is the interval [−1,1].[10] The complex absolute value function is continuous everywhere but complex differentiable nowhere because it violates the Cauchy–Riemann equations.[8] The second derivative of | x | with respect to x is zero everywhere except zero, where it does not exist. As a generalised function, the second derivative may be taken as two times the Dirac delta function. ### Antiderivative The antiderivative (indefinite integral) of the absolute value function is $\int|x|dx=\frac{x|x|}{2}+C,$ where C is an arbitrary constant of integration. ## Distance See also: Metric space The absolute value is closely related to the idea of distance. As noted above, the absolute value of a real or complex number is the distance from that number to the origin, along the real number line, for real numbers, or in the complex plane, for complex numbers, and more generally, the absolute value of the difference of two real or complex numbers is the distance between them. The standard Euclidean distance between two points $a = (a_1, a_2, \dots , a_n)$ and $b = (b_1, b_2, \dots , b_n)$ in Euclidean n-space is defined as: $\sqrt{\sum_{i=1}^n(a_i-b_i)^2}.$ This can be seen to be a generalisation of | a − b |, since if a and b are real, then by equation (1), $|a - b| = \sqrt{(a - b)^2}.$ While if $a = a_1 + i a_2 \,$ and $b = b_1 + i b_2 \,$ are complex numbers, then $|a - b| \,$ $= |(a_1 + i a_2) - (b_1 + i b_2)|\,$ $= |(a_1 - b_1) + i(a_2 - b_2)|\,$ $= \sqrt{(a_1 - b_1)^2 + (a_2 - b_2)^2}.$ The above shows that the "absolute value" distance for the real numbers or the complex numbers, agrees with the standard Euclidean distance they inherit as a result of considering them as the one and two-dimensional Euclidean spaces respectively. The properties of the absolute value of the difference of two real or complex numbers: non-negativity, identity of indiscernibles, symmetry and the triangle inequality given above, can be seen to motivate the more general notion of a distance function as follows: A real valued function d on a set X × X is called a metric (or a distance function) on X, if it satisfies the following four axioms:[11] $d(a, b) \ge 0$ Non-negativity $d(a, b) = 0 \iff a = b$ Identity of indiscernibles $d(a, b) = d(b, a) \,$ Symmetry $d(a, b) \le d(a, c) + d(c, b)$ Triangle inequality ## Generalisations ### Ordered rings The definition of absolute value given for real numbers above can be extended to any ordered ring. 
That is, if a is an element of an ordered ring R, then the absolute value of a, denoted by | a |, is defined to be:[12] $|a| = \begin{cases} a, & \mbox{if } a \ge 0 \\ -a, & \mbox{if } a \le 0 \end{cases} \;$ where −a is the additive inverse of a, and 0 is the additive identity element. ### Fields Main article: Absolute value (algebra) The fundamental properties of the absolute value for real numbers given in (2)–(5) above, can be used to generalise the notion of absolute value to an arbitrary field, as follows. A real-valued function v on a field F is called an absolute value (also a modulus, magnitude, value, or valuation)[13] if it satisfies the following four axioms: $v(a) \ge 0$ Non-negativity $v(a) = 0 \iff a = \mathbf{0}$ Positive-definiteness $v(ab) = v(a) v(b) \,$ Multiplicativeness $v(a+b) \le v(a) + v(b)$ Subadditivity or the triangle inequality Where 0 denotes the additive identity element of F. It follows from positive-definiteness and multiplicativeness that v(1) = 1, where 1 denotes the multiplicative identity element of F. The real and complex absolute values defined above are examples of absolute values for an arbitrary field. If v is an absolute value on F, then the function d on F × F, defined by d(a, b) = v(a − b), is a metric and the following are equivalent: • d satisfies the ultrametric inequality $d(x, y) \leq \max(d(x,z),d(y,z))$ for all x, y, z in F. • $\big\{ v\Big({\textstyle \sum_{k=1}^n } \mathbf{1}\Big) : n \in \mathbb{N} \big\}$ is bounded in R. • $v\Big({\textstyle \sum_{k=1}^n } \mathbf{1}\Big) \le 1 \text{ for every } n \in \mathbb{N}.$ • $v(a) \le 1 \Rightarrow v(1+a) \le 1 \text{ for all } a \in F.$ • $v(a + b) \le \mathrm{max}\{v(a), v(b)\} \text{ for all } a, b \in F.$ An absolute value which satisfies any (hence all) of the above conditions is said to be non-Archimedean, otherwise it is said to be Archimedean.[14] ### Vector spaces Main article: Norm (mathematics) Again the fundamental properties of the absolute value for real numbers can be used, with a slight modification, to generalise the notion to an arbitrary vector space. A real-valued function on a vector space V over a field F, represented as ‖V‖, is called an absolute value (or more usually a norm) if it satisfies the following axioms: For all a in F, and v, u in V, $\|\mathbf{v}\| \ge 0$ Non-negativity $\|\mathbf{v}\| = 0 \iff \mathbf{v} = 0$ Positive-definiteness $\|a \mathbf{v}\| = |a| \|\mathbf{v}\|$ Positive homogeneity or positive scalability $\|\mathbf{v} + \mathbf{u}\| \le \|\mathbf{v}\| + \|\mathbf{u}\|$ Subadditivity or the triangle inequality The norm of a vector is also called its length or magnitude. In the case of Euclidean space Rn, the function defined by $\|(x_1, x_2, \dots , x_n) \| = \sqrt{\sum_{i=1}^{n} x_i^2}$ is a norm called the Euclidean norm. When the real numbers R are considered as the one-dimensional vector space R1, the absolute value is a norm, and is the p-norm for any p. In fact the absolute value is the "only" norm on R1, in the sense that, for every norm ‖ ⋅ ‖ on R1, ‖x‖ = ‖1‖ ⋅ |x|. The complex absolute value is a special case of the norm in an inner product space. It is identical to the Euclidean norm, if the complex plane is identified with the Euclidean plane R2. ## Notes 1. ^ a b c d 2. Nahin, O'Connor and Robertson, and functions.Wolfram.com.; for the French sense, see Littré, 1877 3. Lazare Nicolas M. Carnot, Mémoire sur la relation qui existe entre les distances respectives de cinq point quelconques pris dans l'espace, p. 105 at Google Books 4. 
James Mill Peirce, A Text-book of Analytic Geometry at Google Books. The oldest citation in the 2nd edition of the Oxford English Dictionary is from 1907. The term "absolute value" is also used in contrast to "relative value". 5. Nicholas J. Higham, Handbook of writing for the mathematical sciences, SIAM. ISBN 0-89871-420-6, p. 25 6. Stewart, James B. (2001). Calculus: concepts and contexts. Australia: Brooks/Cole. ISBN 0-534-37718-1. , p. A5 7. ^ a b 8. Bartel and Sherbert, p. 163 9. Peter Wriggers, Panagiotis Panatiotopoulos, eds., New Developments in Contact Problems, 1999, ISBN 3-211-83154-1, p. 31–32 10. These axioms are not minimal; for instance, non-negativity can be derived from the other three: 0 = d(a, a) ≤ d(a, b) + d(b, a) = 2d(a, b). 11. Shechter, p. 260. This meaning of valuation is rare. Usually, a valuation is the logarithm of the inverse of an absolute value ## References • Bartle; Sherbert; Introduction to real analysis (4th ed.), John Wiley & Sons, 2011 ISBN 978-0-471-43331-6. • Nahin, Paul J.; An Imaginary Tale; Princeton University Press; (hardcover, 1998). ISBN 0-691-02795-1. • Mac Lane, Saunders, Garrett Birkhoff, Algebra, American Mathematical Soc., 1999. ISBN 978-0-8218-1646-2. • Mendelson, Elliott, Schaum's Outline of Beginning Calculus, McGraw-Hill Professional, 2008. ISBN 978-0-07-148754-2. • O'Connor, J.J. and Robertson, E.F.; "Jean Robert Argand". • Schechter, Eric; Handbook of Analysis and Its Foundations, pp. 259–263, "Absolute Values", Academic Press (1997) ISBN 0-12-622760-8.
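The axioms listed in the article above (non-negativity, positive-definiteness, multiplicativity, and the triangle inequality) are easy to spot-check numerically for the real and complex absolute value. A small illustrative sketch, not part of the article; the sample points and tolerances are arbitrary.

```python
import cmath

def check_absolute_value_axioms(samples):
    for a in samples:
        for b in samples:
            assert abs(a) >= 0                                # non-negativity
            assert (abs(a) == 0) == (a == 0)                  # positive-definiteness
            assert abs(abs(a * b) - abs(a) * abs(b)) < 1e-9   # multiplicativity |ab| = |a||b|
            assert abs(a + b) <= abs(a) + abs(b) + 1e-9       # triangle inequality

check_absolute_value_axioms([-3.5, -1.0, 0.0, 0.25, 2.0])                  # real absolute value
check_absolute_value_axioms([0j, 1 + 2j, -3j, 2 - 0.5j, cmath.exp(2.2j)])  # complex modulus
print("all axioms hold on the sample points")
```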
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 56, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7952615022659302, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2007/07/12/the-temperley-lieb-category/?like=1&source=post_flair&_wpnonce=9916d21231
# The Unapologetic Mathematician ## The Temperley-Lieb Category Okay, after last week’s shake-ups I’m ready to get back into the swing of things. I mentioned yesterday something called the “Temperley-Lieb Category”, and it just so happens we’re right on schedule to explain it properly. We’ve seen the category of braids and how the braided coherence theorem makes it the “free braided monoidal category on one object”. That is, it has exactly the structure needed for a braided monoidal category — no more, no less — and if we pick any object in another such category $\mathcal{C}$ we get a unique functor from $\mathcal{B}raid$ to $\mathcal{C}$. So of course we want the same sort of thing for monoidal categories with duals. We’ll even draw the same sorts of pictures. A point on a horizontal line will be our generating object, but we also need a dual object. So specifically we’ll think of the object as being “going up through the point” and the dual as “going down through the point”. Then we can draw cups and caps to connect an upward line and a downward line and interpret it as a duality map. Notice, though, that we can’t make any curves cross each other because we have no braiding! Here’s an example of such a Temperley-Lieb diagram: Again, we read this from bottom to top, and from left to right. On the bottom line we have a downward line followed by an upward line, which means we start at the object $X^*\otimes X$. Then we pass through a cap, which corresponds to the transformation $\epsilon_{X^*}$. Then we go through a cup ($\eta_X^*$) to get to $X\otimes X^*$, and another cup to get to $X\otimes X^*\otimes X\otimes X^*$. A cap in the middle ($\epsilon_{X^*}$) is followed by a cup ($\eta_X$), and then another pair of caps ($\epsilon_X\otimes\epsilon_X$). Then we have a cup $\eta_{X^*}$ and another $\eta_X$ to end up at $X\otimes X^*\otimes X\otimes X^*$. We could simplify this a bit by cancelling two cup/cap pairs using the equations we imposed on the natural transformations $\eta$ and $\epsilon$. In fact, this is probably a much easier way to remember what those equations mean. The equations tell us in algebraic terms that we can cancel off a neighboring cup and cap, while the topology of diagram says that we can straighten out a zig-zag. Incidentally, one feature that’s missing from this diagram is that it’s entirely possible to have an arc (pointing either way) start at the bottom of the diagram and leave at the top. Now if we have any category $\mathcal{C}$ with duals and an object $C$ we can build a unique functor from the category $\mathcal{OTL}$ of oriented Temperley-Lieb diagrams to $\mathcal{C}$ sending the upwards-oriented line to the object $C$. It sends the above diagram (for example) to the morphism $\left[(1_C\otimes\eta_C\otimes1_{C^*})\circ\eta_{C^*}\circ(\epsilon_C\otimes\epsilon_C)\circ(1_C\otimes\eta_C\otimes1_{C^*})\circ(1_C\otimes\epsilon_{C^*}\otimes1_{C^*})\circ(1_C\otimes1_C^*\otimes\eta_{C^*})\circ\eta_{C^*}\circ\epsilon_{C^*}\right]:$ $C^*\otimes C\rightarrow C\otimes C^*\otimes C\otimes C^*$. Another useful category is the free monoidal category with duals on a single self-dual object. This is the Temperley-Lieb category $\mathcal{TL}$ which looks just the same as $\mathcal{OTL}$ with one crucial difference: since the object $X$ is its own dual, we can’t tell the difference between the two different directions a line could go. Up and down are the same thing. 
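(A concrete aside before continuing, not part of the original post: an unoriented Temperley-Lieb diagram on $n$ strands can be modelled as a perfect matching of its $2n$ boundary points, with composition given by stacking one diagram on top of the other and counting the closed loops that appear. The encoding below — a dictionary sending each point to its partner, bottom points labelled $0,\dots,n-1$ and top points $n,\dots,2n-1$ — is just one possible convention, and planarity of the inputs is assumed rather than checked.)

```python
def compose(d1, d2, n):
    """Compose two Temperley-Lieb diagrams on n strands: d1 first, then d2 stacked on top.

    A diagram is a dict sending each boundary point to its partner, with points
    0..n-1 on the bottom and n..2n-1 on the top.  d1's top point n+i is glued to
    d2's bottom point i.  Returns (composite_diagram, number_of_closed_loops).
    """
    composite = {}
    visited_middle = set()

    def walk_from_d1(p):
        # follow a strand, alternating between d1 and d2 through the glued middle layer
        while True:
            p = d1[p]
            if p < n:                      # exited through the composite bottom
                return p
            i = p - n                      # middle strand index
            visited_middle.add(i)
            q = d2[i]
            if q >= n:                     # exited through the composite top
                return q
            visited_middle.add(q)
            p = n + q                      # re-enter d1 through its top boundary

    # strands that touch the composite bottom
    for s in range(n):
        if s not in composite:
            e = walk_from_d1(s)
            composite[s], composite[e] = e, s

    # remaining strands touch only the composite top
    for t in range(n, 2 * n):
        if t not in composite:
            q = d2[t]
            if q >= n:                     # an arc lying entirely inside d2
                e = q
            else:
                visited_middle.add(q)
                e = walk_from_d1(n + q)
            composite[t], composite[e] = e, t

    # closed loops never touch the outer boundary
    loops = 0
    for i in range(n):
        if i not in visited_middle:
            loops += 1
            j = i
            while True:
                visited_middle.add(j)
                j = d1[n + j] - n          # an arc of d1 joining two of its top points
                visited_middle.add(j)
                j = d2[j]                  # an arc of d2 joining two of its bottom points
                if j == i:
                    break
    return composite, loops


# the generator e of TL_2 (a cup over a cap); stacking e on e returns e together
# with one closed loop, matching the Temperley-Lieb relation e^2 = delta * e
e = {0: 1, 1: 0, 2: 3, 3: 2}
print(compose(e, e, 2))    # ({0: 1, 1: 0, 2: 3, 3: 2}, 1)
```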
In the algebra this might seem a little odd, but in the diagram all it means is we get to drop the little arrows that tell us which way to go. And now if we have any category $\mathcal{C}$ with duals and any self-dual object $C=C^*$ we have a unique functor from $\mathcal{TL}$ to $\mathcal{C}$ sending the strand to $\mathcal{C}$. This is how Temperley-Lieb diagrams are turned into (categorified) $\mathfrak{sl}_2$ representations in Khovanov homology. ### Like this: Posted by John Armstrong | Category theory, Knot theory ## 7 Comments » 1. [...] odd. These diagrams look an awful lot like Temperley-Lieb diagrams. And in fact they are! In fact, we get a functor from to that sends to . That is, a [...] Pingback by | July 26, 2007 | Reply 2. Nice post. Do you have a reference for this result? Thanks, Bas Comment by Bas | August 15, 2008 | Reply 3. Thanks, Bas. Unfortunately, I don’t really know a good reference offhand. I have a sneaking suspicion John Baez probably does, though. Comment by | August 15, 2008 | Reply 4. Which result? That oriented tangles in the plane form the free braided monoidal with duals on one object? I wrote about this result as a baby special case of the “tangle hypothesis” in a paper with James Dolan. See page 25, the paragraph beginning “Moving down the n = 1 column to k = 1…” I cite a paper by Freyd and Yetter for this result. However, if proper attribution matters to you, also check out Joyal and Street’s paper “The Geometry of Tensor Calculus I”, which contains closely related results. The never-finished Geometry of Tensor Calculus II is also worth a look. I have not carefully read the predessors to this post, so I’m not sure how John is handling this issue, but it’s worth noting that to get all the oriented tangles in the plane, we need the object X to have an object X* which is both a left dual and a right dual of X. In other words, we need X** to be isomorphic, or perhaps equal, to X. This seems to be implicit in what John write above. In a braided monoidal category, all this is automatic: we can use the braiding to show any left dual is a right dual. In a mere monoidal category, this is not the case. Joyal and Street handle this by introducing the notion of “pivotal” category, while I handle it by introducing duals not only for objects but also for morphisms. The latter solution seems to generalize nicely to more complicated situations. The papers I’m giving links to explain this issue in more detail. Comment by | August 17, 2008 | Reply 5. Okay, now I’ve gone back to John’s post on monoidal categories with duals to see how he handles the nuance I mentioned. I see he insists that in a monoidal category with duals, every object X is isomorphic to X**. One has to be a bit careful here: without a specified isomorphism between X and X**, there is no way to say which morphism corresponds to the closed loop in the above picture. The cheapest solution is to say that X** is equal to X. This works in the tangle example above, but of course it’s not true in the category of finite-dimensional vector spaces. So, at this point one starts wanting to read the definition of “pivotal” category given on page 12 here. Comment by | August 17, 2008 | Reply 6. Yeah, I talked about left and right duals, and I assumed before this point the isomorphism you mention. Comment by | August 17, 2008 | Reply 7. I wrote: Which result? That oriented tangles in the plane form the free braided monoidal with duals on one object? 
I meant “oriented tangles in the plane form the free monoidal category with duals on one object.” Comment by | August 18, 2008 | Reply ## About this weblog This is mainly an expository blath, with occasional high-level excursions, humorous observations, rants, and musings. The main-line exposition should be accessible to the “Generally Interested Lay Audience”, as long as you trace the links back towards the basics. Check the sidebar for specific topics (under “Categories”). I’m in the process of tweaking some aspects of the site to make it easier to refer back to older topics, so try to make the best of it for now.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 32, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9269696474075317, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/tagged/lie-groups
# Tagged Questions A Lie group is a group (in the sense of abstract algebra) that is also a differentiable manifold, such that the group operations (addition and inversion) are smooth, and so we can study them with differential calculus. They are a special type of topological group. Consider using with the ... 0answers 11 views ### Centralizers of connected linear group and its Lie algebra If we have that $G$ is a connected linear group and $H<G$ with $\mathfrak{h}$ the lie algebra of $H$ and we define the centralizers of the elements in the following way: \$Z(H):=\{a\in G| ... 0answers 21 views ### Is the truncated exponential series for matrices injective? If $k$ is a field of characteristic $p$, we can define a map $\exp:\mathfrak{gl}_n(k)\to GL_n(k)$ by: $$\exp(A)=\sum_{i=0}^{p-1}\frac{A^i}{i!}$$ In the answer to this question, we see that if ... 0answers 14 views ### Calculating the lie algebra of $SO(2,1)$ I am trying to calculate the Lie algebra of the group $SO(2,1)$ where this is defined as: $SO(2,1=\{X\in Mat_3(\mathbb{R})|X^t\eta X=\eta, \det(X)=1\}$ where $\eta$ is the matrix defined as: \left ... 1answer 46 views ### The dimension of $SU(n)$ $SU(n)$ denotes the special unitary group. I know its dimension should be $n^2-1$. However, I am trying to prove it and get a wrong result. I have no idea what is wrong with my proof. Therefore, I am ... 2answers 52 views ### Symplectic Form Preserved by Orthogonal Transformation I'm trying to prove that the symplectic form $$\omega = d(\cos\theta) \wedge d\phi$$ is preserved by the action of $SO(3)$ on $S^2$ where $\phi$ and $\theta$ are spherical polars. Now $SO(3)$ simply ... 0answers 29 views ### Action of a Lie group on a coset of its subgroup I am a physicist, so sorry for the lack of rigor. It is well known that a (say compact) Lie group $G$ acts naturally by left multiplication on the coset space $G/H$ where $H\subset G$ is its (Lie) ... 0answers 21 views ### Maximal compact subgroup of $GL_n(\mathbb C_p)$ It is known that the general linear group $GL_n(\mathbb Q_p)$ over the $p$-adic numbers has $GL_n(\mathbb Z_p)$ as a maximal compact subgroup and every other maximal compact subgroup of \$GL_n(\mathbb ... 1answer 20 views ### Proving that the Flag Variety $Fl(n;m_1,m_2)$ is connected. I wish to prove that the flag variety $Fl(n;m_1,m_2) = \{ W_1 \subset W_2 \subset V | dimW_i = m_i \}$, for $0 \le m_1 \le m_2 \le n$ where V is an n-dimensional vector space over $\mathbb{C}$ and ... 0answers 10 views ### Is a quotient of maximal torus maximal for Lie groups? I currently learning about Lie theory. Specifically, I am learning about maximal torus. However, I do not understand how these objects interact with quotients of subgroups. For instance if \$T\subseteq ... 1answer 32 views ### Truncated exponential map from $\mathfrak{gl}_n$ to $GL_n$ Let $k$ be a field of characteristic $p>0$. If $A$ is a nilpotent matrix in $\mathfrak{gl}_n(k)$, with $p>n$, then we can define the unipotent matrix: ... 0answers 18 views ### How to write down the maximal subgroups of $GL(9, \mathbb{C})$ I am wondering about the maximal subgroups of the group $GL(n^2, \mathbb{C})$. My motivation for wondering about these groups is a project (in its most general form) I am working on where I am trying ... 0answers 15 views ### are closed orbits of Lie group action embedded? Consider a smooth action $G\curvearrowright M$ of a Lie group on a manifold. Suppose that an orbit $G\cdot p$ is closed. Is the orbit an embedded submanifold. 
In general we know that the orbits are ... 1answer 25 views ### Lie subalgebra, Lie subgroup and membership Let $G$ be a Lie group with Lie algebra $\mathfrak{g}$ and let $H$ be a connected Lie subgroup with Lie algebra $\mathfrak{h}$. We have that $X \in \mathfrak{h}$ iff \$exp(tX) \in H \ \ \ \forall t ... 3answers 78 views ### Show that $\exp: \mathfrak{sl}(n,\mathbb R)\to \operatorname{SL}(n,\mathbb R)$ is not surjective It is well known that for $n=2$, this holds. The polar decomposition provides the topology of $\operatorname{SL}(n,\mathbb R)$ as the product of symmetric matrices and orthogonal matrices, which can ... 0answers 58 views ### The Symplectic group is connected Let $K = \mathbb{R}, \mathbb{C}$ be a field and consider the skew-symmetric matrix $$J = \left( \begin{matrix} 0 & I_n \\ -I_n & 0 \end{matrix} \right)$$ where $I_n$ is the unit matrix of ... 0answers 27 views ### Proof of Lie theorem on solvable Lie algebra I am reading a book of Helgason. As you know, solvable Lie algebra $g \subset V= {\bf C}^n$ have a nonzero $v$ such that $v$ is an eigenvector of any element of $g$. I can follow the proof in ... 0answers 28 views ### To what extent are formulas obtained in one Lie group valid in another Lie group with an isomorphic Lie algebra? In quantum optics, I am trying to explore the group generated by squeezing and rotation operators. These are closely related to area-preserving linear transforms, which they induce on the phase space, ... 1answer 73 views ### Show that an orthogonal group is a $\frac{n(n−1)]}2-$dim. $C^\infty$-Manifold and find its tangent space The orthogonal group is defined as (with group structure inherited from $n\times n$ matrices) $$O(n) := \{X\in \mathbb{R}^{n\times n} : X^\text{t}X=I_n\}.$$ (i) Show that $O(n)$ is an ... 1answer 45 views ### Analogues of $SU(2)$ and $SO(3)$ The groups $SU_2(\mathbb{C})$ and $SO_3(\mathbb{R})$ are interesting in geometry, and there is a $2$-to-$1$ map from $SU_2(\mathbb{C})$ to $SO_3(\mathbb{R})$. There are finitely many finite groups in ... 0answers 5 views ### Is the application $D(R_p\circ \imath)(e):\mathfrak{h}\rightarrow T_p\mathcal{L}_p$ an isomorphism? Let $G$ be a Lie group with Lie algebra $\mathfrak{g}$ and $H\subseteq G$ a Lie subgroup with Lie subalgebra $\mathfrak{h}$. Consider the right translation $R_p:G\rightarrow G$ given by $R_p(g)=gp$. ... 1answer 30 views ### Does the equality $[u, v]=[X, Y](e)$ holds? Let $G$ be a Lie group, $\mathfrak{g}$ its Lie algebra and $\mathfrak{h}\subseteq \mathfrak{g}$ a vector subspace. I defined two smooth vector fields $X, Y:G\rightarrow TG$ setting $X(g)=DR_g(e)u$ and ... 0answers 25 views ### Relationship between representations of $\mathfrak{sl}_{2n}\mathbb{C}$ and $\mathfrak{sp}_{2n}\mathbb{C}$ If $V=\mathbb{C}^{2n}$ denotes the standard representation of $\mathfrak{sl}_{2n}\mathbb{C}$, what can we say about $\wedge^kV$ in terms of the standard representation $W$ of ... 0answers 27 views ### The simply-connectedness of quotient space If $U$ is a Lie group with a closed subgroup $K$ such that both $U$ and $U/K$ are simply-connected, then can we conclude that $K$ is connected? 0answers 29 views ### Character of half-spin representation Let $S^\pm$ be the half-spin representations of $\mathfrak{so}_{2n}\mathbb{C}$. Fulton-Harris's Representation Theory says on page 378 that the character $D^\pm$ of $S^\pm$ is the sum \sum x_1^{\pm ... 
0answers 26 views ### Optimization of Möbius transformation Say I have a family of points $(w_i, z_i)$ for $i=1,2,...,n$, and I wish to find $a,b,c,d$ such that $\sum_i \left|\frac{a z_i -b}{c z_i - d} - w_i \right|^2$ is minimized. I realize there are things ... 1answer 164 views ### Structure constants of Lie algebra Let $(x^i)$ be a local coordinates system near identity of a Lie group $G$ such that $x(e)=0$. Suppose the multiplication has local form m(x_1,x_2)^k=x_1^k+x_2^k+\frac{1}{2}b_{ij}^k x^i_1 ... 1answer 38 views ### Complexification of the real lie algebra $\mathrm{sp}(m,n)$ I am unable to verify the fact that the complexification of the real lie algebra $\mathrm{sp}(m,n)$ is $\mathrm{sp}(2(m+n),\mathbf C)$, where $\mathrm{sp}(m,n)$ is the set of endomorphisms preserving ... 1answer 35 views ### Is this distribution involutive? For two days I've been trying to show the following: Let $G$ be a Lie group with Lie algebra $\mathfrak{g}$ and consider the smooth distribution $$F=\{F_p=DR_p(e)\mathfrak{h}; p\in G\},$$ where ... 0answers 54 views ### Differentiation in group space In a few physics papers (lattice gauge theory papers, to be more specific) I've seen the following definition for differentiation on group space \frac{\partial}{\partial U} f(U) = ... 1answer 58 views ### Question about lie bracket.. Let $G$ be a Lie group with Lie algebras $\mathfrak{g}$ and let $\mathfrak{h}\subseteq \mathfrak{g}$ be a Lie subalgebra. Write $F_p=DR_p(e)\mathfrak{h}$, $p\in G$, where $R_p:G\rightarrow G$ given by ... 1answer 53 views ### Equality involving Lie Brackets I have a question concerning Lie brackets: Consider the Lie bracket $$[, ]:\mathfrak{g}\times \mathfrak{g}\rightarrow \mathfrak{g},$$ where $\mathfrak{g}=T_eG$ is the Lie algebra of a Lie group $G$. ... 0answers 26 views ### The Lie algebra of the commutator subgroup If $G$ is a connected Lie group with Lie algebra $g$, then is its commutator subgroup $[G,G]$ a closed subgroup with Lie algebra $[g,g]$? 1answer 49 views +50 ### Every principal $G$-bundle over a surface is trivial if $G$ is compact and simply connected: reference? I'm looking for a reference for the following result: If $G$ is a compact and simply connected Lie group and $\Sigma$ is a compact orientable surface, then every principal $G$-bundle over $\Sigma$ ... 0answers 37 views ### Why these two groups are closed in two other? I have no strategy to show that following groups are closed in after group. $$K=\{g=(g_{i,j}\in U(n+1))\mid g_{2,1}=\ldots g_{n+1,1}\}\quad in \quad U(n+1)$$ U(n+1)\quad in \quad\{A = (a_{ij}) \in ... 1answer 24 views ### What is the number of non-compact generators of $\operatorname{so}(p, q)$ and $\operatorname{su}(p, q)$? Setting $n = p + q$, the total number of generators of $\operatorname{so}(p, q)$ or $\operatorname{su}(p, q)$ is respectively $n(n - 1) /2$ and $n^2 - 1$. But what is the number of non-compact ... 1answer 27 views ### The closed subgroup of Lie group $G$ is a connected Lie group with Lie algebra $g$ and $l$ is an abelian ideal of $g$. If $K$ is the connected Lie subgroup of $G$ with the Lie algebra $l$, then is $K$ necessarily closed in $G$? 1answer 55 views ### Tangent space at the identity element of a lie group Let G be a lie group . we know a Lie group is a group with a smooth manifold structure s.t both the multiplication map $m$ and group inversion map $i$ are smooth . Now by identifying ... 
1answer 37 views ### Isomorphisms of the Lorentz group and algebra I'm trying to read a few books on QFT and some seem to say the Lorentz algebra obeys $\mathfrak{so}(1,3)\otimes \mathbb{C} \cong \mathfrak{su}(2) \oplus \mathfrak{su}(2)$ while others say ... 0answers 16 views ### concerning coadjoint representation Let $\xi$ be the vector field on $\frak{g}^*$ (dual of Lie algebra) which correspond to element $X$ of the Lie algebra $\frak{g}$. Then why have we $\xi(F)=K_*(X)F$ where here $K=Ad^*(g)$ is ... 1answer 39 views ### Projective linear group - solvable Let $q\geq 5$ and let PGL(2,q) be the projective general linear group. Question Do there exists a $q$ such that PGL(2,q) is solvable? 0answers 26 views ### Orbits of the action of G/H Let $G \subset Iso(M)$ be a Lie group which acts on a (semiriemannian) manifold $M$ properly and smoothly. Let we know the orbits of the action. Suppose that $H$ is a discrete central subgroup of $G$ ... 0answers 25 views ### Classifying all rank 2 and 3 root systems I am working with the representation theory of complex simple Lie algebras, and have a question: It is intuitively clear that the root systems $A_1\times A_1$, $A_2$, $B_2$, and $G_2$ comprise all ... 0answers 29 views ### a question about G-Manifolds I am looking for a clear reason for following fact: Why a $G$-invariant differential form $\omega$ on a homogeneous $G$-manifold $M=G/H$ is uniquely determined by its value at the initial point $m_0$ ... 0answers 16 views ### Why is the dual space of Cartan subalgebra an irreducible representation of Weyl group it is proposition 14.31 in Fulton-Harris book. The proof goes like this. Let $\mathfrak{h}$ be a Cartan subalgebra of $\mathfrak{g}$, and assume $\mathfrak{z}\subseteq\mathfrak{h}^*$ were preserved ... 0answers 30 views ### Usage and determination of “rank” and “dimension” of groups & representations Physicist here. I seem to see conflicting statements about the rank of some groups I've come across lately. A paper I'm reading states that $SO(6)$ is rank 3 and therefore its Cartan subalgebra ... 0answers 32 views ### Finding the dimension of the symplectic group How do you find the dimension of the symplectic group $Sp(2n,\mathbb{R})$? $Sp(2n,\mathbb{R})\subset Gl(2n,\mathbb{R})$ is the group of invertible matrices $A$ such that $\omega = A^T\omega A$, where ... 1answer 83 views ### Proof that $U(n)$ is connected I'm trying to prove that $U(n)= \{ X\in Mat_n(\mathbb{C})|X^T\bar{X}=I\}$ is connected, but most of the proof comes down to proving that $SU(n)= \{ X\in Mat_n(\mathbb{C})|X^T\bar{X}=I$ and \$ ... 0answers 70 views ### How many discrete subgroups does the Heisenberg group have? Is there an easy way to describe an arbitrary discrete group in the Heisenberg group? I figured that at least the family \begin{pmatrix} 1 & x\mathbb Z & z\mathbb Z\\0&1&y\mathbb ... 1answer 19 views ### Is this a set of generators for the conformal group of Minkowski space? My physics textbook asserts that the group of maps $f: M \rightarrow M$ ($M$ is the Minkowski space, i. e. $\Bbb R^4$ with the pseudonorm $||x||=x_0^2-x_1^2-x_2^2-x_3^2$ and scalar product \$x\dot{} ... 1answer 30 views ### Elements of order 2 in a Weyl group. I would like to prove that any element of order 2 in a Weyl group is the product of commuting root reflections. I am told that the proof should be by induction on the dimension of the -1 eigenspace. ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 187, "mathjax_display_tex": 7, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9183077216148376, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/37855/where-does-this-equation-for-a-perturbed-metric-come-from/37862
Where does this equation for a perturbed metric come from? I'm reading an article which includes the following equation involving a perturbed metric: $$G_{AB} = \eta_{AB} + \overset{1}{\gamma}_{AB} + 2\overset{1}{\chi}_{(A,B)}\tag{4.1}$$ I don't understand how this equation was obtained; in particular, I don't understand how the third summand was obtained. Is there some literature explaining how this was obtained, or can you explain where it came from? - 1 Answer Given an arbitrary metric $g_{\mu\nu}$ you can introduce a reference (background) metric $\bar{g}_{\mu\nu}$ (in the paper's notation it is just the Minkowski metric $\eta_{\mu\nu}$) in such a way that $\delta{}g_{\mu\nu} = g_{\mu\nu} - \bar{g}_{\mu\nu}$ is small (in some sense). You can reintroduce the background metric in a way that keeps the perturbation small; this freedom is parametrized by a small diffeomorphism (you can see why in Mukhanov's review of perturbations: dx.doi.org/10.1016/0370-1573(92)90044-Z) generated by an arbitrary vector field $-\xi^\alpha$. Thus the background changes by $$\bar{g}_{\mu\nu}\rightarrow\bar{g}_{\mu\nu}-\mathcal{L}_\xi\bar{g}_{\mu\nu},$$ where $\mathcal{L}_\xi$ is the Lie derivative with respect to $\xi^\alpha$. Given this transformation, the perturbation transforms as $$\delta{}g_{\mu\nu}\rightarrow\delta{}g_{\mu\nu} + \mathcal{L}_\xi\bar{g}_{\mu\nu}.$$ If you choose a covariant derivative compatible with $\bar{g}_{\mu\nu}$, say $\bar{\nabla}_\alpha\bar{g}_{\mu\nu} = 0$, the Lie derivative can be written as $$\mathcal{L}_\xi\bar{g}_{\mu\nu} = 2\bar{\nabla}_{(\mu}\xi_{\nu)}.$$ In the paper the background metric is just the Minkowski metric, so in Cartesian coordinates $$\mathcal{L}_\xi\eta_{\mu\nu} = 2\partial_{(\mu}\xi_{\nu)} = 2\xi_{(\mu,\nu)},$$ where the comma represents the partial derivative. So in general you can write a metric in terms of the background, the perturbation and a gauge transformation as $$g_{\mu\nu} = \bar{g}_{\mu\nu} + \delta{}g_{\mu\nu} + 2\bar{\nabla}_{(\mu}\xi_{\nu)}.$$ -
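A quick symbolic check of the last two displayed formulas (an illustration only, not part of the answer; the component expression used for the Lie derivative of a rank-2 covariant tensor and the $(-,+,+,+)$ signature are my own inputs): for the Minkowski background, $(\mathcal{L}_\xi\eta)_{\mu\nu}=\xi^\alpha\partial_\alpha\eta_{\mu\nu}+\eta_{\alpha\nu}\partial_\mu\xi^\alpha+\eta_{\mu\alpha}\partial_\nu\xi^\alpha$ does reduce to $2\partial_{(\mu}\xi_{\nu)}$.

```python
import sympy as sp

coords = sp.symbols('t x y z')
eta = sp.diag(-1, 1, 1, 1)        # Minkowski background; (-,+,+,+) signature is an assumption

# an arbitrary vector field xi^mu(t, x, y, z)
xi_up = [sp.Function(f'xi{m}')(*coords) for m in range(4)]
xi_dn = [sum(eta[m, a] * xi_up[a] for a in range(4)) for m in range(4)]   # index lowered with eta

all_match = True
for mu in range(4):
    for nu in range(4):
        lie = (sum(xi_up[a] * sp.diff(eta[mu, nu], coords[a]) for a in range(4))
               + sum(eta[a, nu] * sp.diff(xi_up[a], coords[mu]) for a in range(4))
               + sum(eta[mu, a] * sp.diff(xi_up[a], coords[nu]) for a in range(4)))
        two_sym = sp.diff(xi_dn[nu], coords[mu]) + sp.diff(xi_dn[mu], coords[nu])  # 2 xi_(mu,nu)
        all_match = all_match and sp.simplify(lie - two_sym) == 0
print(all_match)   # True: L_xi eta equals the symmetrised partial derivatives
```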
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9371491074562073, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/106455/an-approximate-relationship-between-the-totient-function-and-sum-of-divisors
# An approximate relationship between the totient function and sum of divisors I was playing around with a few of the number theory functions in Mathematica when I found an interesting relationship between some of them. Below I have plotted points with coordinates $x=\dfrac{n\cdot\mu(n)}{\sigma(n)}$ and $y=\dfrac{n\cdot\mu(n)}{\phi(n)}$ for $n$ from 1 to 400,000, where $\sigma$ represents the sum of the divisors function, $\phi$ represents the totient function, and $\mu$ represents the Moebius function. I've also plotted a bit of the curve $y = \frac{9}{10x^{1.5}}$. It seems to me there is some kind of approximate relationship being shown here. Unfortunately, I know basically nothing about this subject, aside from the basic definitions of the functions I'm using. Can anyone provide any insight about what is going on in my example? More generally, how well explored are these kinds of relationships$-$is there a great deal of theory behind all of this? The code to make this diagram in Mathematica is: ````Show[ParallelMap[{(#/DivisorSigma[1, #])* MoebiusMu[#], (#/EulerPhi[#])*MoebiusMu[#]} &, Range[400000]] // ListPlot[#, PlotRange -> All,PlotStyle -> {Black, PointSize[0.005]}] &, Plot[0.9/x^1.5, {x, 0.1, 1}], {x, -1, 1}]] ```` - ## 1 Answer Theorem 329 of Hardy and Wright, An Introduction to the Theory of Numbers, says there is a positive constant $A$ such that $$A\lt{\sigma(n)\phi(n)\over n^2}\lt1$$ In a footnote, they show that $A=6/\pi^2$. - Thanks. I find these kinds of facts remarkable. Do you know what are the prerequisites for Hardy and Wright? – JOwen Feb 7 '12 at 23:10 1 "The book is written for mathematicians, but it does not demand any great mathematical knowledge or technique. In the first eighteen chapters we assume nothing that is not commonly taught in schools, and any intelligent university student should find them comparatively easy reading. The last six are more difficult, and in them we presuppose a little more, but nothing beyond the content of the simpler university courses." My opinion is that the main prerequisite is that elusive quality, "mathematical maturity". – Gerry Myerson Feb 8 '12 at 0:01
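A quick numerical illustration of the inequality quoted in the answer (not part of the thread; the range $2\le n\le 10000$ is an arbitrary choice): the ratio $\sigma(n)\phi(n)/n^2$ stays strictly between $6/\pi^2$ and $1$ on this sample.

```python
from math import pi
from sympy import totient, divisor_sigma

lower = 6 / pi**2                                             # the constant A from the footnote
ratios = [int(divisor_sigma(n) * totient(n)) / n**2 for n in range(2, 10001)]
print(min(ratios) > lower, max(ratios) < 1)                   # expect: True True
print(min(ratios), lower)                                     # the smallest ratio stays above 6/pi^2
```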
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9254720211029053, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/27770/a-resource-theory-of-quantum-discord/27771
# A resource theory of quantum discord? Local Operations and Classical Communication (LOCC) is the classic paradigm for studying entanglement. These are things that are `cheap' and unable to produce entanglement as a resource for a quantum information processing task. We can also describe equivalence classes of entangled states if elements of each class can be transformed to another in that class under LOCC. We can discuss entanglement distillation: going from M copies of a noisy state to N copies of a more entangled state by LOCC. Finally, if some states are undistillable (i.e. N=0 for all M), given a different state $\sigma$, the original noisy state $\rho$ can be activated (or catalysed, if you want $\sigma$ to be unchanged) to a more entangled state. Recently, a lot of discussion has centred around quantum discord. Quantum discord aims to capture nonclassicality in states, if not necessarily entanglement. Loosely speaking, a quantum state $\rho$ without discord (concordant) is one where there is a basis of product states (e.g. $|\psi_{1}\rangle|\psi_{j}\rangle...|\psi_{n}\rangle$ for $n$ parties) with respect to which $\rho$ is diagonal. Discord (but not entanglement) has been related to mixed state quantum computing as well as quantum state merging. Interestingly, given two (non-equal) concordant states, there exists a protocol that produces a distillable entangled state, as shown by M. Piani et al., with some similar results in A. Streltsov et al. I am curious as to how far this analogy between entanglement and `nonclassicalness' distillation can be drawn; in particular, can we construct a reasonable resource theory of discord? I doubt I am the first to think of this, so if anyone has any background on this, I'd really appreciate it. We can restrict to being able to produce concordant states and then operations that preserve classicalness. From a paper by B Eastin we know that unitaries that preserve classicalness amount to a permutation of eigenvalues with a change in product basis; we could go beyond the model of local operations. Has anyone produced any results on distillation of discord? If this is all trivial to some of you, my apologies. I am trying to understand what discord actually means from a useful resource-theoretic point of view. - ## 2 Answers By coincidence I was thinking about exactly this problem myself... actually thinking about why I think it is not a good example of a resource theory. The basic reason is that the set of states with zero discord is not convex, and so not closed under mixing! Take 2 zero-discord states which are diagonal in different bases, mix them, and you have a state with positive discord. Since classical mixing is an operation that is always available in the lab, it seems difficult to see how to make a resource theory. In fairness one must also note that the resource theory of non-Gaussian states is also non-convex, though often this is fixed up by thinking about a resource theory of continuous variable states where the resource is negativity of the Wigner function (this is a convex resource theory). All other developed resource theories I can think of have a convex structure! Bit of a short answer but that is my opinion on the matter! - Nice answer, Earl. My initial thinking was about restricting operations in entanglement resource theory. Since the CC lets us prepare arbitrary convex combinations of states, if we deny ourselves this, we can deny ourselves the ability to go from concordant to discordant.
Personally, I am not really convinced of discord in the way I am convinced of entanglement. However, I want to add some more substance to this ill-feeling. It seems contrived to deny ourselves classical communication though. – Matty Hoban Oct 27 '11 at 20:53 Another point worth mentioning is that quantum discord came to prominence, at least to me, in an analysis of the "Power of 1 qubit model". In this model you have 1 pure qubit, unlimited mixed qubits and any unitary operations and you compute something that looks interesting (the trace of a unitary). At some point in the middle of the computation the state has discord. I would argue that in this model the relevant resource is purity, not discord. – Earl Oct 28 '11 at 13:25 Discord is a manifestation of coherence so I can see your reasoning. I think I will accept your answer as it is a fair comment. I think my question was suitably open ended that I'd accept any worthwhile and informative answer. – Matty Hoban Oct 28 '11 at 20:37 I'm not sure non-convexity is a problem. Generally, a resource theory arises from restrictions on the permitted operations. Is there a corresponding characterization for discord, such as LOCC for entanglement? (I.e.: Discord cannot be created with restriction X, and it can be used to overcome the restriction?) Then I would think it should be possible to obtain a resource theory for discord. (Although the restriction, and thus the resource theory, might be less natural than one arising from LOCC restrictions.) – Norbert Schuch Oct 30 '11 at 19:56 I think the crucial point is the one Norbert has raised, and the question about the set of operations that do not increase discord has not been characterised. Of course, local unitaries don't change discord, but that is a trivial set. There are a couple of papers interpreting quantum discord as a resource in terms of quantum state merging, and more generally, in terms of the mother protocol of quantum information theory. One can also draw some thermodynamic connections, but they are not fully formalised yet. - Thanks for providing a nice answer, Animesh. I was hoping you'd turn up and provide some input. I agree that essentially any resource theory by definition emerges from a restricted set of operations. I was thinking that the set of operations could be the one that Eastin describes, but can we still not produce discord with a larger set of operations. If we have stochastic local operations but no classical communication, can we produce discord? This may seem a silly question with an obvious answer. – Matty Hoban Nov 2 '11 at 14:02 +1 from me. Welcome to the site Animesh. – Joe Fitzsimons Nov 2 '11 at 14:23
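To put a number on the convexity point in the first answer, here is a small sketch (illustrative only, not part of the thread). It uses the two-qubit geometric discord of Dakić, Vedral and Brukner — a computable stand-in for the entropic discord discussed above, quoted here from memory — and two classically correlated states of my own choosing: each has zero geometric discord, while their equal mixture does not.

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]

def geometric_discord(rho):
    """Two-qubit geometric discord (Dakic-Vedral-Brukner closed form), measured on A."""
    x = np.array([np.trace(rho @ np.kron(s, I2)).real for s in paulis])        # A's Bloch vector
    T = np.array([[np.trace(rho @ np.kron(si, sj)).real for sj in paulis] for si in paulis])
    K = np.outer(x, x) + T @ T.T
    return 0.25 * (x @ x + np.sum(T * T) - np.linalg.eigvalsh(K).max())

def proj(vec):
    v = np.array(vec, dtype=complex)
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

ket0, ket1 = [1, 0], [0, 1]
ketp, ketm = [1, 1], [1, -1]

rho1 = 0.5 * (np.kron(proj(ket0), proj(ket0)) + np.kron(proj(ket1), proj(ket1)))  # diagonal in the Z x Z product basis
rho2 = 0.5 * (np.kron(proj(ketp), proj(ketp)) + np.kron(proj(ketm), proj(ketm)))  # diagonal in the X x X product basis
mix = 0.5 * (rho1 + rho2)

print(geometric_discord(rho1), geometric_discord(rho2), geometric_discord(mix))
# expected: 0.0 (up to rounding), 0.0, and a strictly positive value (about 0.0625)
```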
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9572619795799255, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/24312/space-time-in-string-theory/24330
# Space-time in String Theory I would like to understand how Physicists think of space-time in the context of String Theory. I understand that there are $3$ large space dimensions, a time dimension, and $6$ or $7$ (or $22$) extra dimensions, and all these dimensions need to fit together in a way such that the extra dimensions are compactified (with a Calabi-Yau or $G_2$ structure). My question, however is not about the possible $10$, $11$ or $26$ dimensional manifolds that may be possible, but about whether string theorists consider space-time as somehow quantized (or discrete), or rather as a continuous manifold, or are both options possible? In other words, can strings move continuously through space, or are there a discrete set of locations where strings can be, and does string theory rule out one of the options? How about the same question in loop quantum gravity (LQG)? Should I think of the spin networks in LQG as describing a discrete space-time? Thanks for your insight, or any references you may be able to provide! - 4 I hope an expert replies to your question to the education of all. Off hand I know from wikipedia articles that string theory views space time as continuous but LQG quantizes it. – anna v Apr 24 '12 at 4:18 My understanding is that in ST spacetime is continuous. In LQG it is NOT discrete, it is quantized in the sense that geometric observables (like length area volume) have discrete spectrum. – MBN Apr 24 '12 at 8:43 Additionally, and a little more one the technical side, I'd like to know how the appearent "get rid of space" idea in pure S-Matrix theory works out with its realization in string theory, where afaik (i.e. in the textbook aproaches to string theory I know) the spacetime is not much different than in QFTs. Spacetime in LQG seems a little cooler (for someone like me who has a huge problem with time), because different possibilities of all-of-space configurations are packaged up in different states. – Nick Kidman Apr 24 '12 at 9:07 Possible duplicates: physics.stackexchange.com/q/817/2451 and physics.stackexchange.com/q/9720/2451 – Qmechanic♦ Apr 24 '12 at 10:01 Thank you all for the replies. @MBN, the geometric observables have a discrete spectrum... but the geometric observables of what? Does this mean that every possible observable length, area, volume (say length of a string, or area of a membrane) is a multiple of a fixed quantity? – Álvaro Lozano-Robledo Apr 24 '12 at 19:27 show 2 more comments ## 3 Answers I think Anna s comment is correct, in LQG spacetime consists of discrete atoms and in ST it is continuous. In addition, This article contains an interesting and quite accessible Nima talk related to the topic. Therein Nima explains why the present notions of spacetime are doomed and introduces the recent cutting edge ideas about how spacetime could emerge from a newly discovered and not yet fully explored structure called T-theory. - 1 Thank you for your answer, and thank you for the link. I'll watch the video soon. – Álvaro Lozano-Robledo Apr 24 '12 at 19:29 How is T-duality newly discovered? It's thirty years old already. – Ron Maimon Jun 22 '12 at 1:08 @RonMaimon Hm, if I remember and understood it correctly T-theory in this context should be something different from T-duality. Some kind of an underlying structure unifying string theory and twistor theory or something (?) ... – Dilaton Jun 23 '12 at 16:55 @Dilaton: I see, it's not T-duality. "T theory" is nothing at all. 
– Ron Maimon Jun 23 '12 at 18:45 1 @Dilaton: No, I just mean that it's twistors and perturbation theory (the stuff AH is working on now), which is great and interesting and useful, but it isn't a new physical theory, just a bag of (useful and important) calculation techniques. There might be some new idea coming out of this, but I don't see a reconstruction of space time in any of the recent literature on twistors and perturbation relations (although it is very active). The last progress on this question was in the AdS/CFT era. But to advertise this technically challenging work, somebody made up the term "T-theory" to sell it. – Ron Maimon Jun 23 '12 at 20:04 show 2 more comments This is a very good question, because no one knows the answer. In a recent talk I asked the very same question to Dr. Brian Greene. I also asked him why we don't see many papers dealing, for example, with the quantum dynamics of branes in M-Theory, rather than just low-energy solitonic semiclassical physics from some low-energy 11-D supergravity Lagrangian. Your questions about quantum spacetime are deeply related to the nature of physical quantum branes. His answer was straightforward; he said "you aren't missing anything, we just don't know". In string theory, in principle, spacetime can be a fully quantum membrane in some dimension with open string excitations, and the bulk probably another space-filling brane with closed quantum string degrees of freedom as well. But brane quantization is still not well understood. At the present state of string theory most calculations assume a spacetime continuum. But it's very difficult to reconcile this notion with spacetime producing closed string states, or a string state warping space. Maybe in the future it will be possible to make calculations or formulate string theory in quantized backgrounds that may be branes themselves. - Thanks for your answer! – Álvaro Lozano-Robledo Jun 20 '12 at 21:05 So far only a perturbative formulation of string theory is known, despite the fact that there are some hints for what a non-perturbative formulation should contain. As far as I understand, it is expected that the geometry of the background spacetime in which the string propagates in the perturbative formulation will ultimately be encoded in some other way in a non-perturbative formulation. Roughly you can think about it as follows: instead of quantizing General Relativity directly, which fails in a naive perturbative approach, perturbative string theory contains a field that would arise in a perturbative quantization of General Relativity, too. In string theory this field is the massless part of a whole tower of massive fields. This, together with the fact that a consistency condition gives you the vacuum equations $R_{\mu\nu} = 0$ of General Relativity (this is true at least in the bosonic sigma model, without $B$ field or Dilaton), gives two reasons to believe that perturbative string theory contains a perturbative quantization of gravity at least in 26 or 10 dimensions. Contrary to the naive quantization, it yields (some) finite results at loop level (for superstring theory, this is actually only known up to two loops I believe). In a sense that can be made somewhat precise: certain 2-dimensional QFTs should be thought of as generalized (semi-)Riemannian manifolds. Since there is actually no known non-perturbative formulation, one studies instead low-energy effective theories (supergravity theories), compactifications (here is where the Calabi-Yau manifolds come in), F-Theory and so on.
Always in the hope that they might give a clue to what a non-perturbative formulation should contain. In that way, the fact that there is an 11-dimensional supergravity theory that dimensionally reduces to the 10-dimensional theories leads to the idea that there should be an $M$-Theory, and the presence of $p$-form fields leads to the idea that there should be charged "branes". -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9312673211097717, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Flatness_(mathematics)
# Flatness (mathematics) In mathematics, the flatness (symbol: ⏥) of a surface is the degree to which it approximates a mathematical plane. The term is often generalized for higher-dimensional manifolds to describe the degree to which they approximate the Euclidean space of the same dimensionality. (See curvature.) Flatness in homological algebra and algebraic geometry means, of an object $A$ in an abelian category, that $- \otimes A$ is an exact functor. See flat module or, for more generality, flat morphism.
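A standard worked example (not part of the article above): over $\mathbb{Z}$, the module $\mathbb{Q}$ is flat, but $\mathbb{Z}/2\mathbb{Z}$ is not, because tensoring the injective map $2\colon\mathbb{Z}\to\mathbb{Z}$ with $\mathbb{Z}/2\mathbb{Z}$ yields the zero map $\mathbb{Z}/2\mathbb{Z}\to\mathbb{Z}/2\mathbb{Z}$, which is no longer injective; hence $-\otimes\mathbb{Z}/2\mathbb{Z}$ fails to be exact.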
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.928704559803009, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/187624-line-element-definition-differentiating-plane-polar-coordinates.html
# Thread: 1. ## Line element definition by differentiating plane polar coordinates Hi folks, hopefully this is in the right forum. My textbook tells me I can use the product rule to go from the functions x = r cos phi and y = r sin phi to find expressions for dx and dy. I'm not clear whether this is partial differentiation (I thought I understood that!), or what I am differentiating with respect to! I have attached a jpg of the textbook section which should be clearer than typing it all out. I think I can get to the final expression, it's finding the middle two I don't understand. Can anyone help? My jpg refers to equation 3.4, this is simply dl^2 = dx^2 + dy^2. 2. ## Re: Line element definition by differentiating plane polar coordinates taking the derivative w/r to $\phi$ ... $\frac{d}{d\phi}(x = r\cos{\phi})$ $\frac{dx}{d\phi} = -r\sin{\phi} + \cos{\phi} \cdot \frac{dr}{d\phi}$ $dx = \cos{\phi} \cdot dr - r\sin{\phi} \cdot d\phi$ same idea for $dy$ 3. ## Re: Line element definition by differentiating plane polar coordinates It's the "total differential". If f(x,y) is a function of two variables, and each of those variables is a function of the single variable t, x(t), y(t), (think of an object moving along some trajectory in the plane with t as time), then, by the chain rule $\frac{df}{dt}= \frac{\partial f}{\partial x}\frac{dx}{dt}+ \frac{\partial f}{\partial y}\frac{dy}{dt}$ Since that is now a function of a single variable, we can write its "differential", $df= \frac{df}{dt} dt$: $df= \frac{\partial f}{\partial x}\frac{dx}{dt} dt+ \frac{\partial f}{\partial y}\frac{dy}{dt} dt= \frac{\partial f}{\partial x}dx+ \frac{\partial f}{\partial y} dy$ (Which no longer has any dependence on t!) In particular, if $x= r \cos(\phi)$ and $y= r \sin(\phi)$, then $dx= \frac{\partial x}{\partial r}dr+ \frac{\partial x}{\partial \phi}d\phi= \cos(\phi)dr- r \sin(\phi)d\phi$. And if $y= r \sin(\phi)$, then $dy= \frac{\partial y}{\partial r}dr+ \frac{\partial y}{\partial \phi}d\phi= \sin(\phi)dr+ r \cos(\phi)d\phi$. Squaring those, $dx^2= \cos^2(\phi)dr^2- 2r\sin(\phi)\cos(\phi)\,dr\,d\phi+ r^2\sin^2(\phi)d\phi^2$ and $dy^2= \sin^2(\phi)dr^2+ 2r\sin(\phi)\cos(\phi)\,dr\,d\phi+ r^2 \cos^2(\phi)d\phi^2$ Adding, the "$dr\,d\phi$" terms cancel while $\sin^2(\phi)+ \cos^2(\phi)= 1$, so that $ds^2= dx^2+ dy^2= dr^2+ r^2 d\phi^2$ 4. ## Re: Line element definition by differentiating plane polar coordinates Thanks very much for taking the time to answer chaps, having quoted one of your responses I can see the effort that goes into a properly formatted answer! Skeeter: Your response was the same as I had seen in another textbook; whilst undoubtedly correct, unfortunately I am not able to make sense of it. I don't quite see why dx/dphi is not as simple as -r sin phi. Entirely my failing, not yours! HallsofIvy: I have been able to follow quite a lot of your answer, but I still have a couple of questions about it. If you have time to explain further it would be much appreciated, if not it has at least got me to a working solution! Originally Posted by HallsofIvy It's the "total differential". If f(x,y) is a function of two variables, In my case I have x(r,phi) as one function of two variables, and y(r,phi) as a second function of two variables... and each of those variables is a function of the single variable t, x(t), y(t), (think of an object moving along some trajectory in the plane with t as time), I can't quite see how this works in my case. Aren't r and phi two independent variables?
What is the analogy to your t variable in my example? then, by the chain rule $\frac{df}{dt}= \frac{\partial f}{\partial x}\frac{dx}{dt}+ \frac{\partial f}{\partial y}\frac{dy}{dt}$ I get this part! This is partial differentiation as I understand it. Since that is now a function of a single variable, we can write its "differential", $df= \frac{df}{dt} dt$: $df= \frac{\partial f}{\partial x}\frac{dx}{dt} dt+ \frac{\partial f}{\partial y}\frac{dy}{dt} dt= \frac{\partial f}{\partial x}dx+ \frac{\partial f}{\partial y} dy$ (Which no longer has any dependence on t!) Is this 'thing' df the "total differential"? And does the operation where you "multiply through" by dt have a name (I realise it is not as simple as multiplying through!) In particular, if $x= r \cos(\phi)$ and $y= r \sin(\phi)$, then $dx= \frac{\partial x}{\partial r}dr+ \frac{\partial x}{\partial \phi}d\phi= \cos(\phi)dr- r \sin(\phi)d\phi$. And if $y= r \sin(\phi)$, then $dy= \frac{\partial y}{\partial r}dr+ \frac{\partial y}{\partial \phi}d\phi= \sin(\phi)dr+ r \cos(\phi)d\phi$. Squaring those, $dx^2= \cos^2(\phi)dr^2- 2r\sin(\phi)\cos(\phi)\,dr\,d\phi+ r^2\sin^2(\phi)d\phi^2$ and $dy^2= \sin^2(\phi)dr^2+ 2r\sin(\phi)\cos(\phi)\,dr\,d\phi+ r^2 \cos^2(\phi)d\phi^2$ Adding, the "$dr\,d\phi$" terms cancel while $\sin^2(\phi)+ \cos^2(\phi)= 1$, so that $ds^2= dx^2+ dy^2= dr^2+ r^2 d\phi^2$ That last bit all makes sense to me, I have never seen this type of differential manipulation before, though, which is why I am struggling! 5. ## Re: Line element definition by differentiating plane polar coordinates Skeeter: Your response was the same as I had seen in another textbook; whilst undoubtedly correct, unfortunately I am not able to make sense of it. I don't quite see why dx/dphi is not as simple as -r sin phi. if $r$ were a constant, then $\frac{d}{d\phi}(r \cos{\phi}) = -r\sin{\phi}$ ... however, $r$ is not a constant, it is an implicit function of $\phi$.
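A quick symbolic check of the algebra quoted above (an illustration only, not part of the thread; it treats $dr$ and $d\phi$ as formal symbols):

```python
import sympy as sp

r, phi, dr, dphi = sp.symbols('r phi dr dphi')
x, y = r * sp.cos(phi), r * sp.sin(phi)

# total differentials built from the partial derivatives, as in post #3
dx = sp.diff(x, r) * dr + sp.diff(x, phi) * dphi
dy = sp.diff(y, r) * dr + sp.diff(y, phi) * dphi

print(sp.simplify(sp.expand(dx**2 + dy**2)))   # dphi**2*r**2 + dr**2
```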
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 35, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9564961194992065, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/162127/a-question-on-modules-over-noetherian-ring?answertab=oldest
# A question on modules over Noetherian ring If $G$ is a module over a non-trivial commutative Noetherian ring $R$, is it possible that for every maximal ideal $M$ of $R$ we have $MG=G$? I guess the answer is no. - I assume you also want the assumption that $G\neq 0$? – Ben Blum-Smith Jun 23 '12 at 19:50 ## 1 Answer Actually the answer is yes: $R=\mathbb Z$, $G=\mathbb Q$. - Thanks. I have posted the original question – pritam Jun 23 '12 at 20:14
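To spell out the example in the answer (a brief addendum, not part of the original thread): every maximal ideal of $\mathbb Z$ has the form $M=(p)$ for a prime $p$, and $p\,\mathbb Q=\mathbb Q$ because any rational $a/b$ can be written as $p\cdot\frac{a}{pb}$; hence $MG=G$ for every maximal ideal even though $G=\mathbb Q\neq 0$. There is no conflict with Nakayama's lemma, since $\mathbb Q$ is not finitely generated as a $\mathbb Z$-module.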
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8752262592315674, "perplexity_flag": "middle"}
http://stats.stackexchange.com/questions/20323/why-doesnt-the-exponential-smoothing-forecast-package-in-r-provide-confidence-i
# Why doesn't the exponential smoothing forecast package in R provide confidence intervals for the fitted values? The upper and lower prediction intervals for the forecast periods are provided by the forecast() function. However, neither prediction nor confidence intervals seem to be available for the fitted values within the range of the actual data. Why is this? - ## 1 Answer The fitted values are one-step forecasts, so prediction intervals can be obtained by adding/subtracting a suitable multiple of the standard deviation of the residuals. E.g., assuming normal errors, an approximate 95% prediction interval is given by $\hat{y}_t\pm 1.96\hat{\sigma}$ where $\hat\sigma^2$ is the variance of the residuals. Presumably you mean a confidence interval for the mean of the one-step forecast distributions. I'm not sure why you would want such a thing but you could obtain it by reformulating the model in state space form and using a Kalman filter. You ask why the `forecast()` function does not provide these. Simply, because they are hardly ever useful. - Yes, it makes sense that you really only care about the precision of your forecast. I am accustomed to the intervals returned by predict.lm. Thanks for taking the time to explain. – Scott Jan 3 '12 at 19:42
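A language-agnostic illustration of the arithmetic the answer describes (this is a hand-rolled simple exponential smoothing, not the R forecast package; the toy data and the smoothing parameter are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
y = 10 + np.cumsum(rng.normal(0, 1, 100))      # toy series (an arbitrary random walk)

alpha = 0.3                                    # assumed smoothing parameter, not estimated
fitted = np.empty_like(y)
level = y[0]
for t, obs in enumerate(y):
    fitted[t] = level                          # one-step-ahead forecast of y[t]
    level = alpha * obs + (1 - alpha) * level

resid = y - fitted
sigma = resid[1:].std(ddof=1)                  # drop the zero start-up residual

# approximate 95% intervals around the in-sample one-step forecasts,
# exactly the fitted +/- 1.96*sigma arithmetic from the answer above
lower, upper = fitted - 1.96 * sigma, fitted + 1.96 * sigma
print(round(sigma, 3), round(lower[-1], 2), round(y[-1], 2), round(upper[-1], 2))
```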
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9085607528686523, "perplexity_flag": "head"}
http://quant.stackexchange.com/questions/2344/why-is-there-a-price-difference-between-30-year-principal-and-interest-strips
# Why is there a price difference between 30 year principal and interest STRIPS?

Sorry if this is obvious, I am not a professional. I like to trade 30 year Treasury zeros. I have noticed that the price for a 30 year principal payment is never the same as a 30 year interest payment. The difference is small (~0.3%), and I haven't tracked it long enough to confirm whether one is always greater than the other. Can anyone tell me what is going on here? Is there a tax arbitrage at play? Is it that the principal is considered slightly more guaranteed than the interest payment? (Or vice versa?)

- So I looked today and I found that consistently (4 samples between maturity dates of 1/2040 and 2/2041) the principal payment was worth a full 1% more than the interest payment. I am inclined to believe this has something to do with repayment risk, as the margin seems to have widened from yesterday. – Pablitorun Nov 10 '11 at 18:59
- Hi Pablitorun, welcome to quant.SE. I doubt very much this is a function of repayment risk, as both payments seem equally at risk of default, and default risk in any case is so minuscule in the case of Treasuries. Not sure why you're seeing this, though. – Tal Fishman Nov 11 '11 at 14:24
- Thank you for the cleanup! Well, I did find one example where the interest payment was trading for more than the principal payment, so this all could be noise. (Although I thought noise like this was arb'ed out....) I should add that I am just looking at Schwab's online trading platform; it could just be a function of their trading desk, I suppose. It would be interesting if someone with more professional tools saw something similar. – Pablitorun Nov 11 '11 at 18:35

## 2 Answers

This really is an arbitrage. It is caused by differences in supply and demand between the interest cashflow and the principal cashflow, and by differences in the financing rates on the two STRIPS. As you noted, the price difference is small, and it would take 30 years to guarantee convergence.

In addition, the outstanding amount of the 30-year coupon strip (the interest payment) is quite small, since the only source of this cashflow is the 30yr bond itself and therefore the amount available is only half of the annual interest amount of the bond - if you're talking about the current 30yr, only \$250mm can be stripped compared to a \$16 billion principal amount. Therefore, it is currently not particularly attractive as arbs go.

Finally, if you were going to try to capture this arbitrage by shorting the principal strip and buying the coupon strip, you would need to borrow the principal strip and finance the coupon strip. A price difference of 1% is approximately equal to a yield differential on these two instruments of 3 basis points. As a result, if the interest you earned on your short proceeds was just 3bp less than the interest you paid to finance the purchase over the life of the trade, you would not make any money on the trade.

Generally speaking, the principal strips or "P"s usually trade rich to the coupon strips, but this is not a hard and fast rule. At times this arbitrage can become quite large, as much as a 3-5% price difference between two bonds.

- Great answer, I really appreciate this. I have two followups if you don't mind. 1) If interest rates go up, would you expect the arb to shrink as the available pool of securities would move closer to equal size? 2) As a (very small) individual investor, buying either would be equivalent as long as I held the position long enough for any swings in the arb to neutralize? (So I would always buy the cheaper bond on the off chance that it was held to maturity.) – Pablitorun Nov 18 '11 at 16:18

This may not apply in the US, but in some countries the interest income is taxed and the principal isn't, so in some markets the difference amounts to the implicit tax.

- Well, it is a global market, I guess, so there is probably some effect there. That was one of my first thoughts, i.e. principal payments would thus be more desired, but I don't think the difference is large enough for that to be the main factor. – Pablitorun Nov 23 '11 at 17:22
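The first answer's rule of thumb that 1% of price on a ~30-year zero corresponds to roughly 3 basis points of yield can be checked with basic zero-coupon bond arithmetic. The sketch below uses simple annual compounding and a hypothetical 3% yield; both are assumptions for illustration, not market conventions:

```python
def zero_price(y, T):
    """Price of a T-year zero-coupon bond per 1 of face, annual compounding."""
    return (1.0 + y) ** -T

T, y = 30, 0.03                      # hypothetical maturity and yield
p = zero_price(y, T)
target = 0.99 * p                    # a price 1% lower

# bisect for the yield that produces the 1% lower price
lo, hi = y, y + 0.01
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if zero_price(mid, T) > target:
        lo = mid
    else:
        hi = mid

print(f"1% of price ~ {(lo - y) * 1e4:.1f} bp of yield")   # about 3.4 bp
```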
http://math.stackexchange.com/questions/98294/converting-random-sequence
# Converting a random sequence

There is a stream (sequence) of uniform random integers $x_i$, each integer in the range $[0,N-1]$, where $N$ is not a power of 2. I need to convert it to a sequence of integers in $[0..255]$ (bytes), such that uniformity in the input numbers results in uniformity in the output numbers, and non-uniformity in the input numbers results in some non-uniformity in the output numbers. I am thinking about computing $L=\sum{x_i N^{i}}$ (using a multiple-precision library), then representing $L$ in base 256 (taking bytes from $L$ in binary). Is this good? Is there a better method?

- Have you considered just truncating the binary representation of $x_i$ and taking only the least significant byte? That's as close as you are going to get to a uniform distribution. No matter what transformation you use, the sequence of bytes will not have a uniform distribution since $N$ is not a power of $2$. – Dilip Sarwate Jan 11 '12 at 22:22

## 2 Answers

It may depend on what you mean by uniformity, but if you mean random and independent with a uniform distribution then you can probably move on to the next issue. There is a problem that powers of $N$ are not powers of 256 (unless this is an infinite stream), so with your suggestion you cannot be sure that all your bytes are equally likely. There are ways around this involving rejection sampling: e.g. if $N=258$ you might simply ignore every time $256$ or $257$ appears.

If it is an infinite stream of numbers then $L$ will be infinite and it might be better to look at $\sum{x_i N^{-i}}$ written in base $256$ instead. [This illustrates the uniformity issue: there are apparently numbers which may be normal in one base but not another.] Be aware that in any method, it is possible (though perhaps with probability zero) that you do not find out which way to round the calculation of some bytes. More realistically, there is a positive though small probability that you will have to wait a long time to find even a few bytes.

If you have a decent random number generator available, then just choose the least $K=2^k>N$ and do the following at each step: generate a random number from $0$ to $K-1$. If it is greater than $N-1$, accept it and write down its bits, just halting the stream for this step. If not, discard it, accept one number from the stream, and write down its bits. This may slow you down twice on the intake speed, but you gain a lot from getting rid of arbitrary-precision arithmetic. If there is no bias in the original sequence, the new sequence will have none either. If there was a bias in the original one, it can get diluted a bit but will still be present. Also, you can recover the original sequence from the new one (just read your numbers and drop the excessively high ones).
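A small sketch of the rejection idea from the first answer, in Python. It assumes $N \ge 256$ (so that whole 256-value blocks fit below the cutoff); the generator name and the simulated input stream are illustrative, not part of the original posts:

```python
import random

def uniform_bytes(stream, N):
    """Turn uniform draws on range(N) into uniform bytes by rejection.

    Values in the top partial block [cutoff, N) are discarded, so what
    remains is a whole number of 256-value blocks and x % 256 is exactly
    uniform. Requires N >= 256."""
    cutoff = (N // 256) * 256
    for x in stream:
        if x < cutoff:
            yield x % 256

# usage with a simulated source; N = 258 as in the answer's example,
# so values 256 and 257 are the ones discarded
N = 258
source = (random.randrange(N) for _ in range(10_000))
out = list(uniform_bytes(source, N))
```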
http://mathoverflow.net/questions/30373/does-the-deligne-mumford-space-module-s-n-action-have-a-fundamental-chain
## Does the Deligne-Mumford space modulo the $S_{n}$ action have a fundamental chain?

Does the Deligne-Mumford space (without ordering of the marked points) $\bar M_{g,n}/S_{n}$ have a fundamental chain in singular simplicial chains? (I ask because I read Costello's paper on the GW potential associated to a TCFT; as an orbifold, what is its fundamental chain?)

- Your question is not very well written. You should try to improve it. For example, you might want to explain what you mean by "fundamental chain" on $\overline{M}_{g,n}/S_n$. You might also want to explain the $S_n$ action on $\overline{M}_{g,n}$ -- presumably you mean the action which permutes the marked points. – Kevin Lin Jul 3 2010 at 5:50
- In any case, it sounds like you should read about moduli spaces of stable curves, and moduli spaces of stable maps, and virtual fundamental classes. The book "Mirror Symmetry and Algebraic Geometry" by Cox and Katz has a good overview of all of this material from both the algebraic geometry and the symplectic geometry perspectives. – Kevin Lin Jul 3 2010 at 6:01

## 2 Answers

A compact orbifold without boundary will have a fundamental chain in rational singular homology if and only if it is orientable. The fundamental chain will satisfy the same sorts of properties as for a manifold. In particular, collapsing the complement of a small disc will send the fundamental chain to the generator of $H_n(D^n,\partial D^n)$. The reason that closed oriented orbifolds have fundamental chains in rational homology is essentially because, to the eyes of rational homology, they look like manifolds.

So the first part of what you are asking amounts to the question of whether $\overline{\mathcal{M}}_{g,n}/S_n$ is orientable or not. Just so we're clear, $\overline{\mathcal{M}}_{g,n}$ is the moduli space of stable curves of genus $g$ with $n$ marked points labelled $1 \ldots n$. The action of the symmetric group $S_n$ is by permuting the labels of the marked points. Before taking the quotient by $S_n$, the space $\overline{\mathcal{M}}_{g,n}$ is a complex orbifold, so the complex structure induces a well-defined orientation just as for complex manifolds. So the question is now whether the $S_n$ action preserves the orientation or not. The answer is that it does indeed. The symmetric group action is in fact algebraic, so it preserves the complex structure and hence the orientation.

As an aside: There is a homotopy equivalence between the moduli space $M^{rib}_{g,n}$ of metric ribbon graphs (that thicken to a genus $g$ surface with $n$ labelled boundary components) and the uncompactified moduli space $\mathcal{M}_{g,n}$. The action of $S_n$ on $M^{rib}_{g,n}$ is not orientation preserving. This is because, via Strebel differentials (or Penner's lambda lengths), one can show that $M^{rib}_{g,n}$ is homeomorphic to $\mathcal{M}_{g,n} \times \mathbb{R}_+^n$, and under this homeomorphism the symmetric group action corresponds to the usual action on the first factor and the permutation action on the coordinates of the second factor. Thus it doesn't act in an orientation preserving way.
A pseudomanifold is a finite-dimensional topological space $X$ (say Hausdorff and locally compact) that admits a closed subspace $Y$ of dimension $\dim X-2$ such that $X-Y$ is a manifold (see e.g. Goresky-MacPherson, Intersection homology 1, Topology 19, 135-162). If in addition to that $X-Y$ is connected and orientable, then there is a fundamental (cellular) chain: the homology long exact sequence implies that $H_{\dim X}(X,\mathbf{Z})=\mathbf{Z}$; take any representative of any of the two generators and this will be a fundamental chain.

If $X$ is a CW-complex and $Y$ is a subcomplex, then a cellular fundamental chain can be constructed as follows: take the sum $[X]$ of all cells of the highest dimension with the orientation induced by some orientation of $X-Y$. This is a cycle since any cell of dimension one less will occur twice in $\partial [X]$, once with a plus and once with a minus.

The above conditions are satisfied if $X$ is an irreducible compact complex algebraic variety and $Y$ is a closed subvariety, since by Lojasiewicz's theorem one can triangulate $X$ so that $Y$ is a subpolyhedron. If $X$ is complex algebraic but not compact and $Y$ is still closed, then the above construction still works but the number of simplices will no longer be finite, so the fundamental class will live in the Borel-Moore homology.

If you have a finite group acting biregularly on an irreducible complex algebraic variety, then the quotient is not necessarily algebraic: it may happen that some orbits do not lie inside an affine open subset. But the fundamental class exists nonetheless: take the union of the singular locus and all points with nontrivial stabilizers. This is a subvariety whose real codimension is at least 2. Using the homology long exact sequence again, one can see that the highest homology group is $\mathbf{Z}$, so any representative will be a fundamental chain.

Remark: the argument in the above paragraph is not really necessary in the case you are interested in. The quotients of the Deligne-Mumford compactifications of the moduli spaces of curves by the symmetric groups are algebraic and are coarse moduli spaces for the functor "a family of smooth or nodal curves plus a set-valued section that does not intersect the nodes".
http://math.stackexchange.com/questions/48688/linear-algebra-alternated-multilinear-forms?answertab=active
# Linear algebra: Alternating multilinear forms

Let $A^{\#}:A_{n}(F)\rightarrow A_{n}(E)$ be defined by $A^{\#}f(v_{1},\cdots ,v_{n})=f(Av_{1},\cdots, Av_{n})$, where $A_{n}(F)$ is the space of alternating $n$-multilinear forms on $F$. Verify that $(\alpha A)^{\#}=\alpha (A^{\#})$, where $\alpha$ is a scalar and $A:E\rightarrow F$ is a linear transformation.

$(\alpha A)^{\#}f(v_{1},\cdots ,v_{n})=f(\alpha Av_{1},\cdots,\alpha Av_{n})=\alpha^{n}f(Av_{1},\cdots, Av_{n})\neq \alpha (A^{\#})f(v_{1},\cdots ,v_{n})$

So I showed that it's false. Am I misinterpreting something? A hint would be appreciated; thanks in advance.

- What is $\alpha$ supposed to be? A scalar? And $A$, is it supposed to be a linear transformation $A: E\to F$? – Willie Wong♦ Jun 30 '11 at 14:27
- Yes, $\alpha$ is a scalar. Edited it. – Ivan3.14 Jun 30 '11 at 14:28
- Yes, I forgot: $A$ is a linear transformation from $E$ to $F$. – Ivan3.14 Jun 30 '11 at 14:39
- Is that the exact wording of the question? (And not something like, verify that pullbacks are linear, etc.?) – Dactyl Jun 30 '11 at 15:32
- Unless the notation $\alpha(A^\sharp)$ means something special, instead of just scalar multiplication, I think what you did is correct. – Willie Wong♦ Jun 30 '11 at 18:46
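A concrete instance of the computation in the question (added here as an example; the choices $E=F=k^{2}$, $n=2$, $f=\det$ and $A=\mathrm{id}$ are just for illustration) confirms that the pullback scales by $\alpha^{n}$ rather than $\alpha$:

$$(\alpha\,\mathrm{id})^{\#}\det(v_1,v_2)=\det(\alpha v_1,\alpha v_2)=\alpha^{2}\det(v_1,v_2),\qquad\text{so in general }(\alpha A)^{\#}=\alpha^{n}A^{\#}.$$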
http://mathoverflow.net/revisions/28600/list
Zariski introduced an abstract notion of Riemann surface associated to, for example, a finitely generated field extension $K/k$. It's a topological space whose points are equivalence classes of valuations of $K$ that are trivial on $k$, or equivalently valuation rings satisfying $k\subset R_v\subset K$. If $A$ is a finitely generated $k$-algebra inside $K$ then those $R_v$ which contain $A$ form an open set.

In the case of a (finitely generated and) transcendence degree 1 extension all of these valuation rings are the familiar DVRs -- local Dedekind domains -- and they serve to identify the points in the unique complete nonsingular curve with this function field. (There is also the trivial valuation with $R_v=K$, which corresponds to the generic point of that curve.)

In higher dimensions there are lots of complete varieties to contend with -- you can keep blowing up. Also there are more possibilities for valuations. Most of the valuation rings are not Noetherian. A curve in a surface gives you a discrete valuation ring, consisting of those rational functions which can meaningfully be restricted to rational functions on the curve: those which do not have a pole there. A point on a curve on a surface gives you a valuation whose ring consists of those functions which do not have a pole all along the curve, and which when restricted to the curve do not have a pole at the given point. The value group is $\mathbb Z\times \mathbb Z$ lexicographically ordered. A point on a transcendental curve in a complex surface, or more generally a formal (power series) curve in a surface, gives you a valuation by looking at the order of vanishing; the value group is a subgroup of $\mathbb R$.

This space of valuations has something of the flavor of Zariski's space of prime ideals in a ring: it is compact but not Hausdorff, for example. It can be thought of as the inverse limit, over all complete surfaces $S$ with this function field, of the space (Zariski topology) of points $S$.
http://physics.stackexchange.com/questions/13571/home-experiments-to-measure-the-rpm-of-a-pedestal-fan-without-special-equipment/13608
# Home experiments to measure the RPM of a pedestal fan without special equipment?

Is it possible to determine, to an approximate degree, the revolutions per minute of a fan, for example a pedestal fan pictured below, without using some electronic/mechanical measuring device? One thing that comes to mind is the markings on an old record player's turntable - can that concept somehow be used to make marks on a fan blade to determine its RPM? Or can something called a "strobe tuner" help? Can I maybe make markings on a fan with a marker and figure out the RPM using nothing else but a stopwatch? Or maybe some other DIY technique that does not require the purchase of any measuring device?

P.S. I'm not entirely sure if this is a physics or engineering question, so please feel free to move it to the appropriate site (I checked all 58 stackexchange sites and only the Physics site seemed to fit the question).

- What you've described with the strobe tuner is a type of stroboscope (albeit working in reverse to the more familiar sort used for measuring the angular velocity of rotating machine parts). The disk rotates at a known velocity based on a tuning fork, and the electronic input modulates a flashing light. If the light is flashing at a multiple of the disk rotation rate, the stroboscopic effect makes the disk appear stationary. Conventional stroboscopes work the other way, where you adjust the speed of the flashing light until the fan appears to stop moving. Borrow one from a mechanic :D – Richard Terrett Aug 15 '11 at 11:43
- This seems fine as a physics question, since it's basically about measuring angular velocity. – David Zaslavsky♦ Aug 15 '11 at 21:07
- Could you use a camera where you can set its shutter speed, or is that too fancy? I would tape a piece of straw, or something, to the end of one of the blades -- to distinguish it from the rest. Then take a number of pictures at different shutter speeds. Hopefully you'd be able to see the degrees of change from the start of the exposure until the end. You might have to use a lamp directly behind the fan or in front of it to increase visibility of your marker. – Chris Aug 15 '11 at 23:28

## 1 Answer

Let me first list all of the possibilities I considered that I later rejected. This is far from exhaustive, and I'm looking forward to seeing other people's creativity.

Bad Ideas

• Sit on a tire swing with the fan pointing to the side. Point the fan up, measure the speed of rotation of the system on the tire swing.
• Get a laser or collimated flashlight. Point it into the fan at a single point. On a wall on the other side of the fan, you will have a dot of light that is flashing; measuring the rate of the flashing could be an easier problem.
• Attach springs to the tips of all the blades. Observe how far out the acceleration deforms them, apply Hooke's Law to find the acceleration, convert to angular speed.

Now, I am almost sure that the experimenter doesn't have the tools to execute any of these methods very well. Not even one. I'll try to break this down as to why. Firstly, do we know the moment of inertia of the fan? No. Do we know the moment of inertia of the tire swing with a person and a fan on it? No. Is it even constant? No. I'm not saying we can't figure these things out, but it's an absurdly inferior method that will get terrible data.

On to the laser method. How are we going to measure the flash rate of the laser? I thought endlessly about this problem. Generally, a reference would be good, or if you could use electronics you could nail down the speed almost exactly and very easily. But I don't think anything is available that will work.

Now, the spring idea... where to begin? The measurement of the deformation length is error-prone. The weight of the springs themselves will affect the speed of the fan. What spring do you have with appropriate characteristics anyway?

My best proposal

I'm hoping you can take off the cover. If you can't take off the cover, I hope you can take off some part of it, so that you can get a protruding shaft. I think the best way to make this measurement with household stuff would be to:

• Get a part of the shaft isolated
• Tape or attach the end of a thin string to it
• Turn the fan on with a stopwatch
• Time how long it takes for the fan to completely wrap the entire string around the shaft with a stopwatch
• Count the number of times the string has wrapped around the shaft

There you go, you have a number of turns per some amount of time. Ideally you would use a very light string that offered little to no resistance when pulled, as in, have it loosely laid out on the floor. Fishing wire could possibly be very good. You will want the acceleration time to be small compared to the entire measured time.

Some other (not terrible) possibilities

It occurred to me that acoustic methods might have some merit here. Get something that the blades can smack against, like when you take a pencil and stick it in the fan. Open up Windows sound recorder (accessories -> entertainment -> sound recorder), or a program like Audacity. Use some sound editing program and zoom in really tightly on the sound. See if you can identify a periodic shape that corresponds to a single hit. Count the peaks over a given time frame. Once again, you have a number of rotations (or 1/3rd rotations) per unit time. If you already have an educated guess as to the frequency, then identifying the acoustic pattern from individual hits might not be very bad; not to mention, there is a lot of design flexibility in this experiment and computers should have a sufficiently high sample rate.

I think the ideal would be some kind of visual timing mechanism like the OP suggests. I'd imagine that a mechanical reference could be of use. Like if you had another fan that you knew the speed of, you could place it in front of the unknown fan and adjust its speed until you saw some patterns that indicated they were in sync. Yes, I'm lacking a lot of what's required to do this effectively, but maybe someone else can offer better advice.

# The Experiment

Half of the papers on my desk are blown away. I'm getting complaints about the wretched sound of pen on fan blades, and people in my office are not too happy with me right now. But this is all in the name of physics! I am editing to present my experimental results. I used the acoustic method to determine the speed of my fan.

Firstly, my experimental apparatus is the Galaxy 20 inch model 4733 fan. It has 5 blades. I can't find any shopping results for you, but maybe someone else can. Here is a pretty good quality demo of the Galaxy 20" fan on youtube. And this video specifically states they have the 4733 model that I'm using. Why do people upload youtube videos of these things?! Do you have to "unbox" every single thing you buy?? Ok, moving on. I'm using the Audacity program and the microphone from a Microsoft Lifechat headset. The fan has 3 settings, plus 0 for off. Setting 1 is the slowest and setting 3 is the highest.
It produces quite a good breeze and has served me well. To start off, I'll share a waveform I recorded with it on setting 3 and setting 1 with me doing nothing else to it. As you can see, this is not too useful. It makes a sound, but there's no way to distinguish peaks. Maybe it has a frequency that reflects the speed of the fan, but I can't be sure (and I haven't had much luck with the spectrum visualizations). You can see how the sound it makes is different between the two, and the 3rd setting is obviously louder, and the frequency is obviously different, but we want actual numbers.

So I put a plastic pen in it (the butt of the pen). Now, you might not want to try this at home (like I just did), but I kind of had to play with the angle to get it to not miss blades. It's very easy for it to jump and miss one, which would mess up the count. I had to press kind of hard and it was rather loud. But I got results. Here are the waveforms for 0.5 seconds, and my markup in order to count the "hits". I also provide the actual count in the image. You can check my work for the count itself. I'm also happy to upload some mp3 files, but I'm not sure where I'll host them right now. The above image was made with the high-tech research software MS Paint.

I'll give answers denoted $rpm_i$, where $i$ is the number of the setting, and the number of hits above will be denoted $hits$. I'll take the error in each hit count to be $\pm 1$ hit. The formula and reported results are as follows. Remember, it has 5 blades.

$$rpm_i = \left( hits_i \pm 1 \right) \times \frac{\text{turn}}{5\ \text{hits}} \times \frac{1}{0.5\ \text{s}} \times \frac{60\ \text{s}}{\text{min}}$$

$$rpm_1 = 456 \pm 24\ \text{rpm} \qquad rpm_2 = 624 \pm 24\ \text{rpm} \qquad rpm_3 = 864 \pm 24\ \text{rpm}$$

Power Consumption (addendum)

I used my KILL A WATT device to record power consumption for all the different speed settings. I'll denote this with $P_i$, but I need to explain a little about the difficulty in making this measurement. I believe my KILL A WATT to be fairly reliable and it gives stable power measurements for devices with constant power consumption. This is not quite true for the fan. When I first turn on the fan it consumes more power than after I leave it running for some time. The largest swing I observed was $50.5\ \text{W}$ at max and $45.3\ \text{W}$ at minimum for setting 1. This gives you another possible source of error. Since the experiment was performed with the fan on for a good while, I'll report the lowest readings I have.

$$P_0 = 0\ \text{W} \qquad P_1 = 45.3\ \text{W} \qquad P_2 = 65.1\ \text{W} \qquad P_3 = 97.1\ \text{W}$$

Now, I want to take just one quick second to apply the physics concepts of friction here. Dynamic friction between two solid bodies is often taken as a constant, and fluid friction is often taken as a power law, as in $v^n$, where $n$ is most commonly from 1 (fully laminar) to 2. We can apply that here! I converted the previous speeds to rad/s and plotted the speed versus power consumption for all 3 "on" settings. Then, I guessed a certain offset, subtracted this value from the power consumption, and applied a power fit to what was left. I realized after I did this that a constant power consumption does not correlate to a constant force (that would be linear), so I'm really just assuming some base power consumption for the device, and this yields a more perfect power fit for what's left just due to the mechanics of the motor. I found that I could get a better power fit by making this subtraction. The power fit had $R^2=1.0000$ for a constant offset from $8\ \text{W}$ all the way to $13\ \text{W}$, so I took the middle ground of $10.5\ \text{W}$, which is to say, I made an educated guess that a constant power loss accounts for $10.5\ \text{W}$ out of the consumed power. The power fit follows a satisfying $1.5$ power law, which is about what I expect for fluid friction.

$$P_i = 10.5\ \text{W} + 0.1343\, \omega_i^{1.4374}\ \text{W}$$

Lastly, I want to report the general intensity of the sound in my mp3 files. I need to put a disclaimer with this that it might not be accurate in any physical sense. I would want to ask an audio engineer about this issue - I don't know if the dB of an audio file represents the physical dB of the sound at the point the measurement was made. My guess is that this will depend on the recording device. Anyway, I want to give a dB measurement (I'll denote it $A$) for the average peak for the pen hits on the fan blade, as per the audio file.

$$A_1 = -8\ \text{dB} \qquad A_2 = 0\ \text{dB} \qquad A_3 = -3\ \text{dB}$$

Did I have the microphone in different locations when I took these measurements? Yes, I did. If I had to guess, I would say that I had it about 3-4 inches away from the pen contact point when I did settings 1 & 2 and closer to 7-8 inches when I did the 3rd setting. Obviously the 3rd setting was louder to my ear, and I had to set the headset down when taking that measurement because holding both was getting difficult. I offer this data because with my guesstimates you could potentially calculate the energy released in the sound wave on a hit (assuming the dB measure is a 'real' measurement). Then with a conversion efficiency (from mechanical to sound), estimate the energy dissipated in a hit. You could also take some generic values for fan motor efficiency to relate power consumption to friction forces. You could then use a tailored mathematical form for power consumption (like above) for the friction losses, apply the energy dissipation rate from the pen hits, and estimate the speed loss due to the pen. It's just something good to keep in mind for future experiments, so that you can show that the process of measuring isn't affecting what is being measured too much. With my 20" fan I don't think it matters too much, but repeating the experiment on a smaller fan could benefit from these calculations, and in order to do so you should have the microphone located in a fixed position for all measurements (unlike what I did).

Comments

It's possible that the pen contact was slowing down the fan some. In fact, this is almost surely the case, but this is a rather large fan. It is also an old fan. I would expect these speeds to be less than someone with a newer one. I've taken power measurements that could be used for some other investigation if desired. One use would be to guesstimate the impact the application of the pen on the blades has on the fan speed. I have already taken a shot at putting together a picture of where the friction comes from and developed a formula for power consumption as a function of speed based on a breakdown between static and fluid friction.

- The string theory is interesting. Will try that right away. – Zabba Aug 15 '11 at 22:27
- @Zabba I'm actually on the verge of trying the acoustic method myself. The string method gets "ordinary" error, like what I would expect from a high school physics class practicum. The acoustic method or any of the optical methods could blow that away. – AlanSE Aug 15 '11 at 22:34
- Thank you very much for your experiment! Oh, and I hope you have a safer 4733 model ... See this too! – Zabba Aug 16 '11 at 2:50
- What a fantastic answer. Entertaining and informative. – Richard Terrett Aug 16 '11 at 8:40
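The arithmetic in the answer above can be reproduced in a few lines. The hit counts (19, 26, 36) are back-solved from the reported rpm values rather than read off the images, and the power-law coefficients come straight from the quoted fit, so this is only a consistency check, not new data:

```python
import math

# rpm = hits * (1 turn / 5 hits) * (1 / 0.5 s) * (60 s / min) = hits * 24
hits = {1: 19, 2: 26, 3: 36}
rpm = {i: h * 24 for i, h in hits.items()}        # 456, 624, 864
err = 1 * 24                                       # +/- 1 hit -> +/- 24 rpm

# compare the quoted power fit P = 10.5 + 0.1343 * w**1.4374 (w in rad/s)
# against the measured wattages
measured = {1: 45.3, 2: 65.1, 3: 97.1}
for i, r in rpm.items():
    w = r * 2 * math.pi / 60                       # rpm -> rad/s
    fit = 10.5 + 0.1343 * w ** 1.4374
    print(f"setting {i}: {r} +/- {err} rpm, fit {fit:.1f} W, measured {measured[i]} W")
```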
http://math.stackexchange.com/questions/239105/special-integrals
# Special integrals

There are special integrals such as the logarithmic integral and the exponential integral. I want to know if there are primitives for such integrals. If not, why not?

- Yes, there are primitives. No, they can't be expressed in terms of exponentials, logarithms, trig functions, etc. – Gerry Myerson Nov 17 '12 at 8:46
- @Gerry Myerson, so how are they expressed then? – Badshah Nov 17 '12 at 8:50
- @Badshah: as integrals. – Qiaochu Yuan Nov 17 '12 at 9:17

## 1 Answer

A simple starting point (as indicated by Qiaochu) is Liouville's theorem (or 'principle'), which is based on differential algebra and was extended with the Risch algorithm. This last link should clarify some of the ideas used:

1. the only new term appearing during an integration (i.e. that was not in the integrand) is a linear combination of logarithms (because logarithms alone may disappear during differentiation...)
2. exponentials $e^f$ had to be in the integrand first (since differentiation doesn't make them disappear) and will reappear as $h\,e^f$ (of course subtle points exist, like considering $\sqrt{x}=e^{\ln(x)/2}\cdots$)
3. differentiation of an algebraic function $\theta$ (i.e. there is a polynomial $P(\theta)=0$) will give a rational function $\frac {d(\theta)}{e(\theta)}$ with $\,d$ and $e\,$ two polynomials.

These 3 ideas will provide logarithmic, exponential and algebraic extensions to the differential algebra (starting for example with the field of rational functions over $\mathbb{Q}$) that will give all the elementary functions. Geddes, Czapor and Labahn's book "Algorithms for Computer Algebra" is very clear too.

Now let's use these ideas to study $\int\frac {e^x}x\,dx$. From a more precise version of 2., a primitive must be of the type $I(x)=h(x)e^x$ with $h(x)$ a rational function. Let's suppose this and differentiate:

$$(h'(x)+h(x))e^x=\frac {e^x}x$$

so that we need:

$$h'(x)+h(x)=\frac 1x$$

We supposed $h$ rational so that it may be decomposed into simple elements, but $h'(x)$ can't give $\frac 1x$, so $\frac 1x$ must be part of $h(x)$. In this case $h'(x)$ will create a term $-\frac 1{x^2}$ that must be compensated by a $\frac 1{x^2}$ term in $h(x)$, which will generate a $-\frac 2{x^3}$ term ... This process clearly doesn't end!

The same method could be used for the sine integral $\int \frac {\sin(x)}x\,dx$, simply by writing $\sin(x)=\frac{e^{ix}-e^{-ix}}{2i}$. (This method is from Matthew P Wiener, from an old post at sci.math: recommended reading too!)

Concerning the logarithmic integral, we have $\operatorname{li}(x)=\operatorname{Ei}(\ln(x))$, so that the non-elementary proof for the one should apply for the other as well.
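A quick way to see this in practice (an added aside, assuming the sympy library is available): a computer algebra system happily returns antiderivatives, but only in terms of the special functions $\operatorname{Ei}$ and $\operatorname{Si}$ rather than elementary ones.

```python
import sympy as sp

x = sp.symbols('x')
print(sp.integrate(sp.exp(x) / x, x))   # Ei(x)
print(sp.integrate(sp.sin(x) / x, x))   # Si(x)
```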
http://physics.stackexchange.com/questions/47503/shouldnt-the-change-in-kinetic-energy-be-more-in-a-moving-elevator-from-a-stati/47506
# Shouldn't the change in kinetic energy be more in a moving elevator from a stationary frame of reference?

Consider an elevator moving down with uniform velocity. A person standing inside watches an object fall from the ceiling of the elevator to the floor. Say the height of the elevator is $h$. Then the work done by gravity in that frame of reference should be $mgh$. But consider this same event being watched by someone else in the stationary frame of reference. In his reference frame, the object travels a larger distance as it falls from the ceiling to the floor of the elevator because the floor itself is moving downwards (one can calculate this extra distance covered to be $u \sqrt{\frac{2h}{g}}$) and hence the change in kinetic energy should be more in that frame than in the moving frame! I just can't seem to figure out where I'm going wrong here. I'm probably missing something very obvious. So I would be very grateful if anyone could explain this to me.

Edit: Okay, let's say the object is a clay ball and it collides with the floor inelastically such that its kinetic energy is converted into heat. In the moving frame of reference the heat would be simply equal to $\frac{1}{2}mv^2$, which is equal to $mgh$. In the stationary frame of reference it would be equal to $\frac{1}{2}mv^2-\frac{1}{2}mu^2$, since the ball after colliding is moving with speed $u$. This can be calculated to be equal to $mgu\sqrt{\frac{2h}{g}} + mgh$, which is clearly greater than the heat produced in the frame attached to the elevator, and this is a contradiction because the heat measured in any frame should be the same.

- The energy dissipated is not equal to $\frac{1}{2}mv^2 - \frac{1}{2}mu^2$. The correct expression is $\frac{1}{2}m(v-u)^2$. – Johannes Dec 24 '12 at 18:04
- @Johannes What makes you say that? I'm pretty sure it's the other way round. – Alraxite Dec 24 '12 at 18:18
- The energy dissipated in a fully inelastic collision is determined by the impact velocity in the center-of-mass frame. In any other frame one has to consider also the kinetic energy transferred. – Johannes Dec 24 '12 at 19:17

## 4 Answers

Boy, this was tricky, but the secret is in conservation of momentum. See, you are assuming that, after the collision, the velocity of the ball-elevator ensemble is $u$, but this is not fully true: it will be $u' = u + \frac{m}{m+M}\sqrt{2gh}$, $M$ being the mass of the elevator. Of course if $M \to \infty$ that reduces to $u' = u$, but when computing the KE, something funny happens:

$$\frac{1}{2}(m+M)u'^2 = \frac{1}{2}(m+M)u^2 + \frac{m^2}{m+M}gh + um\sqrt{2gh}$$

That last term, which does not depend on $M$, is the key here. Of course the first term, with the $(m+M)$, dominates the others, but it will be cancelled out by identical terms in the KE before the collision. But if you assume that because $M \to \infty$ you can take $u' = u$, you will be missing this last term, which exactly cancels out that extra energy. Doing the math for a finite elevator mass, and using conservation of momentum to compute the final velocity, you eventually get the energy lost in an inelastic collision to be $\frac{1}{2}\frac{mM}{m+M}(u-v)^2$, which for $M \to \infty$ reduces to $\frac{1}{2}m(u-v)^2$, as Johannes already pointed out.

- This is the right answer. – Nathaniel Dec 25 '12 at 1:33
- Thanks! Just one thing in reference to my question before the edit: so it's true that gravity does more work in the stationary frame than in the elevator frame of reference? I guess the answer is yes, but here's the counterintuitive thing: say I am in space and I apply a force $F$ on an object through a distance $s$. In my frame of reference, I lost potential energy equal to $F.s$. In a frame moving with speed $u$, the object moved an extra distance of $u\sqrt{2.\frac{m}{F}.s}$ and hence more food in my stomach got converted into mechanical energy in that frame. Isn't this a paradox? – Alraxite Dec 25 '12 at 14:06

There is no such thing as conservation of energy between inertial reference frames. (The kinetic energy of a car is larger in any inertial frame that is not its own rest frame.) Considering the observer inside the elevator, the free fall takes $t_f=\sqrt{\frac{2h}{g}}$, after which it has velocity $gt_f$, and thus kinetic energy $mgh$ (which is cheating, as this is what you used to calculate $t_f$ in the first place). However, the total energy of the particle itself is conserved within this frame, between two times. Now consider the external observer. It sees an increase in kinetic energy of

$$\Delta K = \frac{1}{2}m(u+\sqrt{2gh})^2-\frac{1}{2}mu^2$$

which simplifies to:

$$\Delta K = \frac{1}{2}m\left(u^2+2u\sqrt{2gh}+2gh\right)-\frac{1}{2}mu^2 = mu\sqrt{2gh}+mgh$$

where the first term is related precisely to the additional difference in height that you calculated!

- You may want to read my edit. – Alraxite Dec 24 '12 at 13:34
- @Alraxite The point here is that in-elasticity is not really well-defined. You have to take into account that in the second case, the clay ball hits a moving surface. In other words, you should look at it in terms of momentum and momentum loss rather than energy. – Bernhard Dec 24 '12 at 13:43
- Relative to the elevator, the ball approaches the floor with the same speed in both the reference frames. One could define an inelastic collision with a stationary object as a collision in which the moving object loses all its energy to heat. I don't see a problem... – Alraxite Dec 24 '12 at 14:23
- @Alraxite Hmm. Another example. Two cars colliding head on traveling 30 km/h each in opposite directions, or one standing still and the other traveling 60 km/h. These situations are different. Have to think about the relation to your problem. – Bernhard Dec 24 '12 at 14:38

An observer in the lift and an observer at rest in the building observe the same energy being transferred into heat: A clay ball with mass $m$ drops from the ceiling in an elevator and hits the floor. The elevator has height $h$ and moves downward with uniform speed $u$. An observer at the ground floor observes the clay ball to fall with speed $u+g t$. When it hits the floor, the clay ball has spent a time $\sqrt{\frac{2h}{g}}$ falling and moves with speed $v = u+\sqrt{2hg}$. The lift floor moves with speed $u$, and the energy dissipated in the inelastic collision corresponds to the velocity difference between the two: $\frac{1}{2} m (v-u)^2 = \frac{1}{2} m (2hg) = mgh$. The velocity $u$ has dropped from this equation, expressing the fact that the energy transferred is independent of the frame of reference chosen.

- Why do you square the difference of velocities and not take the difference of the squares? – Jaime Dec 24 '12 at 17:53
- The speed at which the clay ball and the lift floor collide is $v-u$. The energy dissipated corresponds to the square of this collision speed. – Johannes Dec 24 '12 at 17:58
- You are right in the collision energy depending on $(u-v)^2$, but that is a result far from obvious that comes from including conservation of momentum in the derivation, see my answer. – Jaime Dec 24 '12 at 19:28
- Hmm.. I think this is pretty obvious. In treating an inelastic collision in any frame other than the center-of-mass frame one has to consider the kinetic energy transferred. – Johannes Dec 24 '12 at 19:34
- It isn't obvious at all - that's the entire point of the question. – Nathaniel Dec 25 '12 at 1:29

I think the thing to realise here is that changes in kinetic energy aren't independent of the reference frame. To see this, consider a mass of $2\,\text{kg}$ that accelerates from rest to a speed of $1\,\text{ms}^{-1}$ over some period of time. Its kinetic energy has changed by $\frac{1}{2}m(1^2-0^2)=1\,\text{J}$. But now consider the same mass viewed by someone travelling at $10\,\text{ms}^{-1}$ in the opposite direction. Now the mass's kinetic energy has changed by $\frac{1}{2}m(11^2-10^2)=21\,\text{J}$. This is what's happening when you consider the two different reference frames regarding the object falling in the lift. It does indeed gain more kinetic energy in the moving frame, and I'm sure that if you calculated $\frac{1}{2}m(v_1^2-v_0^2)$ in the moving versus the stationary frames of reference, the difference would also come out to $u\sqrt{\frac{2h}{g}}$. This is counterintuitive, but it has to be that way in order for energy to be conserved.

- What's the downvote for? I admit this doesn't add much beyond the other answers, but it was written before they were posted, and also before the edit in the question. – Nathaniel Dec 25 '12 at 1:23
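A quick numeric check of the first answer's point, with a finite elevator mass so that momentum conservation can be applied in both frames. The masses, height and elevator speed below are made-up illustrative values; the three printed numbers should agree, matching the formula $\tfrac{1}{2}\frac{mM}{m+M}(v-u)^2$:

```python
import math

m, M = 1.0, 1.0e6            # clay ball and (large but finite) elevator mass, kg
g, h, u = 9.81, 2.5, 1.0     # gravity, elevator height, downward elevator speed

v_rel = math.sqrt(2 * g * h)             # impact speed relative to the elevator

# ground frame: ball hits at u + v_rel, floor moves at u; momentum conservation
# gives the common velocity after they stick together
v_ball, v_floor = u + v_rel, u
v_after = (m * v_ball + M * v_floor) / (m + M)
E_ground = 0.5 * m * v_ball**2 + 0.5 * M * v_floor**2 - 0.5 * (m + M) * v_after**2

# elevator (initial rest) frame: ball hits a stationary floor at v_rel
w_after = m * v_rel / (m + M)
E_lift = 0.5 * m * v_rel**2 - 0.5 * (m + M) * w_after**2

E_formula = 0.5 * m * M / (m + M) * v_rel**2
print(E_ground, E_lift, E_formula)       # all three agree (about 24.5 J here)
```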
http://mathhelpforum.com/calculus/191674-derivatives-inverse-trig-function.html
# Thread:

1. ## Derivatives of inverse trig function

I need help taking the derivative of y = arctan(x+1). This is what I do:

y = arctan(x+1)
x = tan(y+1)
cos^2(y+1) = y'
cos^2(arctan(x+1)+1) = y'
cos^2(((1-sec^2x)^0.5)+2)) = y'

I am supposed to get a polynomial but I cannot take the derivative of it to get one. I just get stuck at the last step. Any help?

2. ## Re: Derivatives of inverse trig function

Originally Posted by Barthayn

> I need help taking the derivative of y = arctan(x+1). This is what I do ... [the working quoted from the post above]

The tan and arctan functions don't work that way. You can only take the tangent of the whole expression:

$\tan(y) = \tan(\arctan(x+1))$
$\tan(y) = x+1$
$x = \tan(y) - 1$

When you differentiate wrt x you should get:

$1 = \sec^2(y)y'$
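To finish the calculation from the last line of the reply (this step is added here and is not part of the original thread): solving for $y'$ and using $\sec^2(y)=1+\tan^2(y)$ with $\tan(y)=x+1$ gives

$$y' = \frac{1}{\sec^2(y)} = \frac{1}{1+\tan^2(y)} = \frac{1}{1+(x+1)^2},$$

so the derivative is the reciprocal of a polynomial, which is presumably the expected form of the answer.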
http://math.stackexchange.com/questions/259576/estimating-pi-very-accurately/259595
# Estimating $\pi$ very accurately [duplicate]

Possible Duplicate: Do We Need the Digits of $\pi$?

I often hear about people who compute a lot of digits of $\pi$. Does estimating $\pi$ to a large degree of precision have any importance (or potential use) in mathematics? Thank you.

- No. – Graphth Dec 15 '12 at 23:14
- Do you have any idea why it's done? – Amr Dec 15 '12 at 23:14
- Because we're nerds. – vermiculus Dec 15 '12 at 23:16
- It is of great importance, but the reasoning involved is circular. – copper.hat Dec 15 '12 at 23:18
- Knowing many digits of pi has no practical significance, but developing the methods needed to compute many digits of pi in a reasonable amount of time is of use, as those methods have applications in other places. Also, while attempting to compute digits of pi, we have created some beautiful mathematics, for example the Bailey–Borwein–Plouffe formula. – Potato Dec 15 '12 at 23:25

## 1 Answer

In my opinion, the value in itself is of as much importance to mathematics as the value of any number, say $\sqrt{2}$. On the other hand, estimating the value of $\pi$ is of great importance due to the vast amount of techniques it generates. Think of all the cute power series and inverse trig relations of $\pi$.

1) Machin's formula: $$\frac{\pi}{4} = 4 \arctan\frac{1}{5} - \arctan\frac{1}{239}$$

2) Leibniz formula for $\pi$: $$\frac{1}{1} - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \frac{1}{9} - \cdots = \arctan{1} = \frac{\pi}{4}$$

3) Euler formula for $\pi$: $$\frac{\pi}{4} = \frac{3}{4} \times \frac{5}{4} \times \frac{7}{8} \times \frac{11}{12} \times \cdots$$

4) Bailey-Borwein-Plouffe formula: $$\pi = \sum_{i = 0}^{\infty}\left[ \frac{1}{16^i} \left( \frac{4}{8i + 1} - \frac{2}{8i + 4} - \frac{1}{8i + 5} - \frac{1}{8i + 6} \right) \right]$$

Apparently, the above formula can be used to extract the digits of $\pi$ from an arbitrary location!! I am reading about it now (see spigot algorithms) due to a suggestion from Potato and I am curious about its behavior.

Consider the memorable sharp integral bounds you generate: $$\frac{1}{1260} = \int_0^1\frac{x^4 (1-x)^4}{2}\,dx < \int_0^1\frac{x^4 (1-x)^4}{1+x^2}\,dx = \frac{22}{7} - \pi < \int_0^1\frac{x^4 (1-x)^4}{1}\,dx = {1 \over 630}$$

See Lucas for interesting extensions, especially the error margin of the $\frac{355}{113}$ approximation.

Think of all the sneaky ways $\pi$ can crop up surprisingly, like in Buffon's needle problem. We can use this experiment to empirically estimate the value of $\pi$. Euler apparently proved that if you pick two integers at random, the probability that they are co-prime is $\frac{6}{\pi^2}$. The first thing I asked myself the first time I saw it was 'how did $\pi$ appear?' With the normal distribution containing $\pi$ in its p.d.f. and the central limit theorem, I won't be surprised if there are many other ways of estimating $\pi$ empirically.

Estimating $\pi$ seems to be an interesting hobby that has given rise to some beautiful methods ranging from the Gauss-Legendre algorithm, continued fractions, empirical probabilistic techniques, complex numbers, geometry, integrals and infinite series. So I think the spirit of estimating a number is very important to mathematics, while the value of the number in itself may not have much importance.

- Great answer. One small suggestion: Add the Bailey-Borwein-Plouffe formula! – Potato Dec 15 '12 at 23:48
- Nice post, are there any other examples similar to Buffon's needle problem where $\pi$ shows up unexpectedly? – Michael Boratko Dec 15 '12 at 23:48
- @Potato: Woah!! The idea of the Bailey-Borwein-Plouffe formula is wonderful. Let me add it. – Isomorphism Dec 15 '12 at 23:55
- @Michael, loosely speaking, the probability that two randomly-chosen integers are relatively prime is $\frac{6}{\pi^2}$. – Dan Brumleve Dec 16 '12 at 0:03
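As a small illustration of how formula 1) turns into an actual digit computation (an added sketch; the series truncation and the number of guard digits are my own choices, not from the answer), here is a Machin-style calculation with plain integer arithmetic:

```python
def arctan_inv(x, prec):
    """arctan(1/x) scaled by 10**prec, via the Gregory series with integers."""
    term = total = 10 ** prec // x
    n, sign, x2 = 1, -1, x * x
    while term:
        term //= x2
        n += 2
        total += sign * (term // n)
        sign = -sign
    return total

def machin_pi(digits):
    prec = digits + 10                     # guard digits against truncation error
    scaled = 4 * (4 * arctan_inv(5, prec) - arctan_inv(239, prec))
    return scaled // 10 ** 10              # pi * 10**digits as an integer

print(machin_pi(50))   # 3141592653589793238462643383279502884197169399375...
                       # (pi to 50 decimal places, written without the decimal point)
```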
http://mathoverflow.net/questions/57802/what-properties-define-open-loci-in-families/57831
## What properties define open loci in families?

This question is somehow related to the question What properties define open loci in excellent schemes?. Let $f:X\to S$ be a proper (or even projective) morphism between schemes (of finite type over a field or over $\mathbb{Z}$). For $t\in S$, $X_t$ is the fiber of $f$ over $t$. Let $P$ be a property of schemes. We consider the locus: $$U_P = \{ t\in S : X_t \text{ has property } P \}.$$ For which properties $P$ is the set $U_P$ open if 1. $f$ is flat, 2. $f$ is smooth? Examples of such $P$'s I know or suspect to be open in flat families are "being geometrically reduced", "being geometrically smooth" or "being $S_n$". In smooth families, a nice example is that of "being Frobenius split" (we assume that $S$ has characteristic $p$). Copy-paste from the aforementioned thread: Question 1: Do you know other interesting classes of open properties? Question 2: Are there good heuristic reasons for why a certain property should be open? Phrased a bit more ambitiously, are there common techniques for proving openness for a certain class of properties? More specific questions: • how about properties $R_n$ and normality? • is being Frobenius split open in flat families? • in general, take a property of local rings $Q$ and consider $P =$ "all local rings of $X$ satisfy $Q$". Which of the properties $Q$ listed in the cited thread give $P$'s which are open in flat families? -

## 4 Answers

The recentish book of Görtz and Wedhorn (see http://www.algebraic-geometry.de/ ) has an Appendix E which gives a long list of properties of morphisms for which the corresponding set of the base is open or constructible (when only constructibility holds), together with references for the proofs (either to their book or to EGA/SGA). -

Piotr, here are some answers for a slightly different question (although nearly a special case of your question). I'm assuming that $S$ is regular and 1-dimensional (for example a curve), which one can often reduce to (especially if the original $S$ was a smooth variety). However, for what I'm about to say I don't need properness / projectiveness of the family (EDIT: although as pointed out in the comments, you do need something). In particular, I know some answers to the question of whether, if $R/f$ has $P$, then $R$ has property $P$ near the vanishing locus of $f$, assuming $f$ is a regular element. (EDIT: This is basically the affine case as mentioned in the comments.) Now, if $R$ has the property $P$, then Bertini-type theorems can often imply that the other (non-special) fibers have the desired property too (especially in characteristic zero). • Normality is open; this is in EGA somewhere (I don't remember where), but it's also in the paper of Heitmann I'm about to cite. • Seminormality is open, although this isn't trivial. See "Lifting Seminormality" by Ray Heitmann. I don't know if weak normality is open in this way. • Frobenius split is not open in the families I mention. If the total space is $\mathbb{Q}$-Gorenstein with index not divisible by the characteristic, it is open. See the paper "$F$-purity and rational singularity" by Richard Fedder, as well as some papers by Anurag Singh on "deformation". You can jazz these examples up to projective families without much work.
By working with cones, you can find a projective family of smooth varieties whose special fiber is $F$-split, but not the general fibers. • The Frobenius split answer is quite closely related to the question of whether log canonical singularities deform, which leads you to a series of papers on "inversion of adjunction", in particular culminating in a paper on that topic by Kawakita from about 5 years ago. • It is an open question whether Du Bois singularities deform in this way; it is known to be true in the Gorenstein case by Kawakita's inversion of adjunction. • In characteristic $p$, Du Bois singularities are related to $F$-injective singularities (a weakening of $F$-split). It is known that Cohen-Macaulay $F$-injective singularities deform in the way I describe, but it is an open question in general. • Rational singularities deform by a result of Elkik. I'm sure there are many things that I'm forgetting too. -

Thank you for the answer! I don't understand however how to get rid of any kind of properness assumption. We can take any $X$ with bad singularities, take the trivial family $X\times S\to S$ and remove the singular locus of $X\times \{0\}$. Then $X_0$ is smooth but $X_t$ is bad. How do you exclude this kind of cheating? – Piotr Achinger Mar 8 2011 at 15:53

1 @Piotr: one way to defend against that kind of cheating is to assume that $f$ is affine. For how this works, see these: mathoverflow.net/questions/57508/… and mathoverflow.net/questions/45347/… – Sándor Kovács Mar 8 2011 at 16:16

Piotr, you are absolutely right. I guess I've been thinking about those questions in the affine world. I've edited my answer to reflect that. Thanks. – Karl Schwede Mar 8 2011 at 16:40

You find plenty of theorems of this type in EGA IV/Part 3, see especially EGA IV.9. Let $f: X\to S$ be a morphism of finite presentation. (We do not have to make a flatness or properness assumption in what follows.) For example let $P$ be one of the following properties: • being geometrically irreducible • being geometrically connected • being geometrically regular • being geometrically normal • being geometrically reduced • having property $R_k$ geometrically Let $U$ be the set of points $s\in S$ such that $X_s/k(s)$ has property $P$. Then $U$ is at least locally constructible. (cf. IV.9.7.7 and IV.9.9.4) Hence: If $S$ is irreducible and noetherian and $U$ contains the generic point of $S$, then $U$ contains a nonempty open set. (But in EGA IV / Part 3 there are many more results of that flavour...) -

"U contains the generic point of S" seems to me a quite difficult condition to test, i.e. a condition which does not improve or explain "$U_P$ is open". I'd be glad to be wrong about this. – Qfwfq Mar 8 2011 at 16:30

With regard to the original question, and not my original answer, one place where $F$-splitting was studied in this context is: K. Shimomoto and Wenliang Zhang, On the localization theorem for $F$-pure rings. In a ring of characteristic $p > 0$ where the Frobenius map is a finite morphism (i.e., geometric contexts), $F$-pure is equivalent to $F$-split. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 59, "mathjax_display_tex": 1, "mathjax_asciimath": 2, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9396588206291199, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Support_(mathematics)
# Support (mathematics)

In mathematics, the support of a function is the set of points where the function is not zero-valued, or the closure of that set.[1]:678 This concept is used very widely in mathematical analysis. In the form of functions with support that is bounded, it also plays a major part in various types of mathematical duality theories.

## Formulation

A function f supported in a subset Y of its domain X must vanish on X \ Y. For instance, f with domain X is said to have finite support if f(x) = 0 for all but a finite number of x in X. Since any superset of a support is also a support, attention is given to properties of subsets of X that admit at least one support for f. When the support of f (written supp(f)) is mentioned, it may be the intersection of all supports, {x in X : f(x) ≠ 0} (the set-theoretic support), or the smallest support with some property of interest.

## Closed supports

The most common situation occurs when X is a topological space (such as the real line) and f : X→R is a continuous function. In this case, only closed supports of X are considered. So a (topological) support of f is a closed subset of X outside of which f vanishes. In this sense, supp(f) is the intersection of all closed supports, since the intersection of closed sets is closed. The topological supp(f) is the topological closure of the set-theoretic supp(f).

## Generalization

If M is an arbitrary set containing zero, the concept of support is immediately generalizable to functions f : X→M. M may also be any algebraic structure with identity (such as a group, monoid, or composition algebra), in which the identity element assumes the role of zero. For instance, the family Z^N of functions from the natural numbers to the integers is the uncountable set of integer sequences. The subfamily { f in Z^N : f has finite support } is the countable set of all integer sequences that have only finitely many nonzero entries.

## In probability and measure theory

For more details on this topic, see support (measure theory). In probability theory, the support of a probability distribution can be loosely thought of as the closure of the set of possible values of a random variable having that distribution. There are, however, some subtleties to consider when dealing with general distributions defined on a sigma algebra, rather than on a topological space. Note that the word support can refer to the logarithm of the likelihood of a probability density function.

## Compact support

Functions with compact support in X are those with support that is a compact subset of X. For example, if X is the real line, they are functions of bounded support and therefore vanish at infinity (and negative infinity). Real-valued compactly supported smooth functions on a Euclidean space are called bump functions. Mollifiers are an important special case of bump functions as they can be used in distribution theory to create sequences of smooth functions approximating nonsmooth (generalized) functions, via convolution. In good cases, functions with compact support are dense in the space of functions that vanish at infinity, but this property requires some technical work to justify in a given example. As an intuition for more complex examples, and in the language of limits, for any ε > 0, any function f on the real line R that vanishes at infinity can be approximated by choosing an appropriate compact subset C of R such that $|f(x) - I_C(x)f(x)| < \varepsilon$ for all x ∈ R, where $I_C$ is the indicator function of C.
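As a quick numerical illustration of the bump functions mentioned above, here is a small sketch (Python with NumPy is my assumption; it is not part of the article). The standard bump exp(-1/(1 - x^2)) is smooth, its set-theoretic support is the open interval (-1, 1), and its closure [-1, 1] is the compact (topological) support.

```python
import numpy as np

def bump(x):
    """Smooth bump: exp(-1/(1 - x**2)) for |x| < 1 and 0 elsewhere (compactly supported)."""
    out = np.zeros_like(x, dtype=float)
    inside = np.abs(x) < 1          # avoid evaluating the exponential outside (-1, 1)
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

x = np.linspace(-2.0, 2.0, 9)
print(bump(x))   # zero at the sample points with |x| >= 1, strictly positive inside (-1, 1)
```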
Every continuous function on a compact topological space has compact support, since every closed subset of a compact space is indeed compact.

## Support of a distribution

It is also possible to talk about the support of a distribution, such as the Dirac delta function δ(x) on the real line. In that example, we can consider test functions F, which are smooth functions with support not including the point 0. Since δ(F) (the distribution δ applied as linear functional to F) is 0 for such functions, we can say that the support of δ is {0} only. Since measures (including probability measures) on the real line are special cases of distributions, we can also speak of the support of a measure in the same way. Suppose that f is a distribution, and that U is an open set in Euclidean space such that, for all test functions $\phi$ such that the support of $\phi$ is contained in U, $f(\phi) = 0$. Then f is said to vanish on U. Now, if f vanishes on an arbitrary family $U_{\alpha}$ of open sets, then for any test function $\phi$ supported in $\bigcup U_{\alpha}$, a simple argument based on the compactness of the support of $\phi$ and a partition of unity shows that $f(\phi) = 0$ as well. Hence we can define the support of f as the complement of the largest open set on which f vanishes. For example, the support of the Dirac delta is $\{0\}$.

## Singular support

In Fourier analysis in particular, it is interesting to study the singular support of a distribution. This has the intuitive interpretation as the set of points at which a distribution fails to be a smooth function. For example, the Fourier transform of the Heaviside step function can, up to constant factors, be considered to be 1/x (a function) except at x = 0. While x = 0 is clearly a special point, it is more precise to say that the transform qua distribution has singular support {0}: it cannot accurately be expressed as a function in relation to test functions with support including 0. It can be expressed as an application of a Cauchy principal value improper integral. For distributions in several variables, singular supports allow one to define wave front sets and understand Huygens' principle in terms of mathematical analysis. Singular supports may also be used to understand phenomena special to distribution theory, such as attempts to 'multiply' distributions (squaring the Dirac delta function fails, essentially because the singular supports of the distributions to be multiplied should be disjoint).

## Family of supports

An abstract notion of family of supports on a topological space X, suitable for sheaf theory, was defined by Henri Cartan. In extending Poincaré duality to manifolds that are not compact, the 'compact support' idea enters naturally on one side of the duality; see for example Alexander-Spanier cohomology. Bredon, Sheaf Theory (2nd edition, 1997) gives these definitions. A family Φ of closed subsets of X is a family of supports if it is down-closed and closed under finite union. Its extent is the union over Φ. A paracompactifying family of supports satisfies the further conditions that any Y in Φ is, with the subspace topology, a paracompact space, and that each Y in Φ has some Z in Φ which is a neighbourhood of it. If X is a locally compact space, assumed Hausdorff, the family of all compact subsets satisfies the further conditions, making it paracompactifying.

## References

1. Pascucci, Andrea (2011). PDE and Martingale Methods in Option Pricing. Berlin: Springer-Verlag. doi:10.1007/978-88-470-1781-8. ISBN 978-88-470-1780-1.
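To illustrate the "Support of a distribution" section numerically, here is a hedged sketch (Python/NumPy assumed; it is not part of the article). It approximates the Dirac delta by narrow Gaussians (not compactly supported, but with the same limiting behaviour) and checks that their action on a test function F tends to F(0), consistent with supp δ = {0}.

```python
import numpy as np

def delta_action(F, eps, n=200001, half_width=10.0):
    """Approximate <delta, F> by integrating a Gaussian of width eps against F."""
    x = np.linspace(-half_width, half_width, n)
    delta_eps = np.exp(-x ** 2 / (2 * eps ** 2)) / (eps * np.sqrt(2 * np.pi))
    return np.trapz(delta_eps * F(x), x)

F = lambda x: np.cos(x) + x ** 3      # a smooth test function with F(0) = 1
for eps in (1.0, 0.1, 0.01):
    print(eps, delta_action(F, eps))  # approaches F(0) = 1 as eps shrinks
```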
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 11, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.938182532787323, "perplexity_flag": "head"}
http://en.wikipedia.org/wiki/Marginal_density
# Marginal distribution (Redirected from Marginal density) In probability theory and statistics, the marginal distribution of a subset of a collection of random variables is the probability distribution of the variables contained in the subset. It gives the probabilities of various values of the variables in the subset without reference to the values of the other variables. This contrasts with a conditional distribution, which gives the probabilities contingent upon the values of the other variables. The term marginal variable is used to refer to those variables in the subset of variables being retained. These terms are dubbed "marginal" because they used to be found by summing values in a table along rows or columns, and writing the sum in the margins of the table.[1] The distribution of the marginal variables (the marginal distribution) is obtained by marginalizing over the distribution of the variables being discarded, and the discarded variables are said to have been marginalized out. The context here is that the theoretical studies being undertaken, or the data analysis being done, involves a wider set of random variables but that attention is being limited to a reduced number of those variables. In many applications an analysis may start with a given collection of random variables, then first extend the set by defining new ones (such as the sum of the original random variables) and finally reduce the number by placing interest in the marginal distribution of a subset (such as the sum). Several different analyses may be done, each treating a different subset of variables as the marginal variables. ## Two-variable case Joint and marginal distributions of a pair of discrete, random variables X,Y having nonzero mutual information I(X; Y). The values of the joint distribution are in the 4×4 square, and the values of the marginal distributions are along the right and bottom margins. Given two random variables X and Y whose joint distribution is known, the marginal distribution of X is simply the probability distribution of X averaging over information about Y. It is the probability distribution of X when the value of Y is not known. This is typically calculated by summing or integrating the joint probability distribution over Y. For discrete random variables, the marginal probability mass function can be written as Pr(X = x). This is $\Pr(X=x) = \sum_{y} \Pr(X=x,Y=y) = \sum_{y} \Pr(X=x|Y=y) \Pr(Y=y),$ where Pr(X = x,Y = y) is the joint distribution of X and Y, while Pr(X = x|Y = y) is the conditional distribution of X given Y. In this case, the variable Y has been marginalized out. Bivariate marginal and joint probabilities for discrete random variables are often displayed as two-way tables. Similarly for continuous random variables, the marginal probability density function can be written as pX(x). This is $p_{X}(x) = \int_y p_{X,Y}(x,y) \, \operatorname{d}\!y = \int_y p_{X|Y}(x|y) \, p_Y(y) \, \operatorname{d}\!y ,$ where pX,Y(x,y) gives the joint distribution of X and Y, while pX|Y(x|y) gives the conditional distribution for X given Y. Again, the variable Y has been marginalized out. Note that a marginal probability can always be written as an expected value: $p_{X}(x) = \int_y p_{X|Y}(x|y) \, p_Y(y) \, \operatorname{d}\!y = \mathbb{E}_{Y} [p_{X|Y}(x|y)]$ Intuitively, the marginal probability of X is computed by examining the conditional probability of X given a particular value of Y, and then averaging this conditional probability over the distribution of all values of Y. 
This follows from the definition of expected value, i.e., in general $\mathbb{E}_Y [f(Y)] = \int_y f(y) p_Y(y) \, \operatorname{d}\!y$

## Real-world example

Suppose that the probability that a pedestrian will be hit by a car while crossing the road at a pedestrian crossing without paying attention to the traffic light is to be computed. Let H be a discrete random variable taking one value from {Hit, Not Hit}. Let L be a discrete random variable taking one value from {Red, Yellow, Green}. Realistically, H will be dependent on L. That is, P(H = Hit) and P(H = Not Hit) will take different values depending on whether L is red, yellow or green. A person is, for example, far more likely to be hit by a car when trying to cross while the lights for cross traffic are green than if they are red. In other words, for any given possible pair of values for H and L, one must consider the joint probability distribution of H and L to find the probability of that pair of events occurring together if the pedestrian ignores the state of the light.

However, in trying to calculate the marginal probability P(H = Hit), what we are asking for is the probability that H = Hit in the situation in which we don't actually know the particular value of L and in which the pedestrian ignores the state of the light. In general a pedestrian can be hit if the lights are red OR if the lights are yellow OR if the lights are green. So in this case the answer for the marginal probability can be found by summing P(H, L) for all possible values of L, with each value of L weighted by its probability of occurring.

Here is a table showing the conditional probabilities of being hit, depending on the state of the lights. (Note that the columns in this table must add up to 1 because the probability of being hit or not hit is 1 regardless of the state of the light.)

Conditional distribution: P(H|L)

| | L=Red | L=Yellow | L=Green |
|---|---|---|---|
| H=Not Hit | 0.99 | 0.9 | 0.2 |
| H=Hit | 0.01 | 0.1 | 0.8 |

To find the joint probability distribution, we need more data. Let's say that P(L = Red) = 0.2, P(L = Yellow) = 0.1, and P(L = Green) = 0.7. Multiplying each column in the conditional distribution by the probability of that column occurring, we find the joint probability distribution of H and L, given in the central 2×3 block of entries. (Note that the cells in this 2×3 block add up to 1.)

Joint distribution: P(H, L)

| | L=Red | L=Yellow | L=Green | Marginal probability |
|---|---|---|---|---|
| H=Not Hit | 0.198 | 0.09 | 0.14 | 0.428 |
| H=Hit | 0.002 | 0.01 | 0.56 | 0.572 |
| Total | 0.2 | 0.1 | 0.7 | 1 |

The marginal probability P(H = Hit) is the sum along the H=Hit row of this joint distribution table, as this is the probability of being hit when the lights are red OR yellow OR green. Similarly, the marginal probability P(H = Not Hit) is the sum of the H=Not Hit row.

## Continuous variables

Many samples from a bivariate normal distribution. The marginal distributions are shown in red and blue. The marginal distribution of X is also approximated by creating a histogram of the X coordinates without consideration of the Y coordinates.

For multivariate distributions, formulae similar to those above apply with the symbols X and/or Y being interpreted as vectors. In particular, each summation or integration would be over all variables except those contained in X.

## References

1. Trumpler and Weaver (1962), pp. 32–33.

## Bibliography

• Everitt, B. S. (2002). The Cambridge Dictionary of Statistics. Cambridge University Press. ISBN 0-521-81099-X.
• Trumpler, Robert J. and Harold F. Weaver (1962). Statistical Astronomy. Dover Publications.
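A short sketch of the pedestrian example above (Python assumed; the dictionary names are mine): build the joint distribution from the conditional distribution and the light probabilities, then marginalize out L by summing.

```python
light_prob = {"Red": 0.2, "Yellow": 0.1, "Green": 0.7}        # P(L)
p_hit_given = {"Red": 0.01, "Yellow": 0.1, "Green": 0.8}      # P(H=Hit | L)

# Joint distribution P(H=Hit, L) = P(H=Hit | L) * P(L)
joint_hit = {l: p_hit_given[l] * light_prob[l] for l in light_prob}
print({l: round(p, 3) for l, p in joint_hit.items()})   # {'Red': 0.002, 'Yellow': 0.01, 'Green': 0.56}

# Marginal probability P(H=Hit): marginalize out L by summing over its values
p_hit = sum(joint_hit.values())
print(round(p_hit, 3), round(1 - p_hit, 3))             # 0.572 0.428
```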
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8957970142364502, "perplexity_flag": "head"}
http://mathhelpforum.com/pre-calculus/149647-projectile-motion.html
# Thread:

1. ## projectile motion

Let's suppose you throw a ball straight up with an initial speed of 50 feet per second from a height of 6 feet. 1. Find the parametric equations that describe the motion of the ball as a function of time. 2. How long is the ball in the air? 3. Determine when the ball is at maximum height. Find its maximum height.

2. how far can you get?

3. I have all the answers correct until C. I can't figure out whether I need to factor or simplify. I also know that the maximum height is the y coordinate of the vertex.

4. There are two ways to get your maximum height:

Method 1 (requires calculus): I assume you have a parametric equation for the vertical motion like y = f(t). Differentiate and find the value of t where $\frac{dy}{dt} = 0$. This will give you the time at which the ball is at its maximum height. The height at that moment is then f(t).

Method 2: Consider the vertical motion of the ball only and use one of Newton's equations of motion: $v=u + at$ with v = 0 (at maximum height, it is no longer gaining altitude), u = ? (should have been given in the question), a = acceleration (you should know the value of this!), t = the time at which max height is reached (solve for this). Then use another equation to get the height at this moment, e.g. $v^2 = u^2 + 2as$

5. Originally Posted by kenzie103109: Let's suppose you throw a ball straight up with an initial speed of 50 feet per second from a height of 6 feet. 1. Find the parametric equations that describe the motion of the ball as a function of time. 2. How long is the ball in the air? 3. Determine when the ball is at maximum height. Find its maximum height.

$y = 6 + 50t - 16t^2$

set $y = 0$ and solve for $t$ to determine how long the ball is airborne. the graph of $y$ is parabolic ... use $t = \frac{-b}{2a}$ to find the vertex (time of max height)
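A small numerical check of the thread's answer (Python assumed; the variable names are mine). With y(t) = 6 + 50t - 16t^2 in feet and seconds, the landing time is the positive root of y = 0 and the maximum height sits at the vertex t = -b/(2a).

```python
import math

h0, v0, half_g = 6.0, 50.0, 16.0         # initial height (ft), initial speed (ft/s), (1/2)*32 ft/s^2

# 2. time in the air: positive root of -16 t^2 + 50 t + 6 = 0
t_land = (v0 + math.sqrt(v0 ** 2 + 4 * half_g * h0)) / (2 * half_g)

# 3. time and value of the maximum height (vertex of the parabola)
t_max = v0 / (2 * half_g)                # = -b / (2a)
y_max = h0 + v0 * t_max - half_g * t_max ** 2

print(round(t_land, 3), t_max, y_max)    # approximately 3.241 s, 1.5625 s, 45.0625 ft
```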
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9175041317939758, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/40763/list
## Return to Answer

The Picard-Lindelöf Theorem is not quite correctly stated in your question. Recall that it is usually referred to as the LOCAL existence and uniqueness theorem, and it only guarantees a solution on a certain maximal interval [0,T), and for as simple a system as $x'=x^2$ the maximal existence time T is finite. That said, it is correct (with caveats) that the Euler method approximating solutions will converge to this "true" solution at all points of this maximal interval. The detailed story, with all the error estimates, is too complicated to state here, but you can find a careful discussion of this not only for Euler's Method, but also for a number of other methods, in Chapter 5 (Numerical Methods) of the book Differential Equations, Mechanics, and Computation (which I wrote together with my son Bob). There is a website for the book at http://ode-math.com where you can download for free more than half the book. In particular clicking here: http://ode-math.com/PDF_Files/ChapterFirstPages/First38PagesOFChapter5.pdf will download the first 38 pages of chapter 5, where starting on page 144 you will find a careful discussion of the rate of convergence and stability properties etc. for Euler's method, starting from scratch.
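A minimal sketch of the point being made (Python assumed; not taken from the book): forward Euler for x' = x^2, x(0) = 1, whose exact solution 1/(1 - t) blows up at T = 1. On a compact piece [0, 0.9] of the maximal interval the approximations converge to the true value 10 as the step size shrinks; past T = 1 no such convergence is available.

```python
def euler(f, x0, t_end, n):
    """Forward Euler for x' = f(x) on [0, t_end] with n equal steps; returns the final value."""
    h = t_end / n
    x = x0
    for _ in range(n):
        x = x + h * f(x)
    return x

f = lambda x: x * x   # x' = x^2, x(0) = 1, exact solution x(t) = 1/(1 - t)
for n in (10, 100, 1000, 10000):
    print(n, euler(f, 1.0, 0.9, n))   # tends to 1/(1 - 0.9) = 10 as n grows
```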
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9072791337966919, "perplexity_flag": "head"}
http://mathforum.org/mathimages/index.php?title=Parametric_Equations&diff=6888&oldid=6423
# Parametric Equations

### From Math Images
## Revision as of 11:41, 2 July 2009

Butterfly Curve: The Butterfly Curve is one of many beautiful images generated using parametric equations. Field: Calculus. Created By: Direct Imaging.

# Basic Description

We often graph functions by letting one coordinate be dependent on another. For example, graphing the function $f(x) = y = x^2$ traces out y values that depend upon x values. However, it is very useful to consider functions where each coordinate is equal to an equation of an independent variable, known as a parameter. Changing the value of the parameter can change the value of any coordinate being used. We choose a range of values for the parameter, and the values that our function takes on as the parameter varies trace out a curve, known as a parametrized curve. Parametrization is the process of finding a parametrized version of a function.

### Parametrized Circle

One curve that can be easily parametrized is a circle of radius one. We use the variable t as our parameter, and x and y as our normal Cartesian coordinates. We now let $x = cos(t)$ and $y = sin(t)$, and let t take on all values from $0$ to $2\pi$. When $t=0$, the coordinate $(1,0)$ is hit. As t increases, a circle is traced out as x initially decreases, since it is equal to the cosine of t, and y initially increases, since it is equal to the sine of t. The circle continues to be traced until t reaches $2\pi$, which gives the coordinate $(1,0)$ once again. It is also useful to write parametrized curves in vector notation, using a coordinate vector: $\begin{bmatrix} x \\ y\\ \end{bmatrix}= \begin{bmatrix} cos(t) \\ sin(t)\\ \end{bmatrix}$

The butterfly curve in this page's main image uses a similar method, but with more complicated parametric equations.

# A More Mathematical Explanation

Note: understanding of this explanation requires: Linear Algebra
Sometimes curves which would be very difficult or even impossible to graph in terms of elementary functions of x and y can be graphed using a parameter. One example is the butterfly curve, as shown in this page's main image. This curve uses the following parametrization: $\begin{bmatrix} x \\ y\\ \end{bmatrix}= \begin{bmatrix} \sin(t) \left(e^{\cos(t)} - 2\cos(4t) - \sin^5\left({t \over 12}\right)\right) \\ \cos(t) \left(e^{\cos(t)} - 2\cos(4t) - \sin^5\left({t \over 12}\right)\right)\\ \end{bmatrix}$

Parametric construction of the butterfly curve

## Parametrized Surfaces and Manifolds

In the above cases only one independent variable was used, creating a parametrized curve. We can use more than one independent variable to create other graphs, including graphs of surfaces. For example, using parameters s and t, the surface of a sphere can be parametrized as follows: $\begin{bmatrix} x \\ y\\ z\\ \end{bmatrix}= \begin{bmatrix} sin(t)cos(s) \\ sin(t)sin(s) \\cos(t) \end{bmatrix}$

While two parameters are sufficient to parametrize a surface, objects of more than two dimensions will require more than two parameters. These objects, generally called manifolds, may live in higher than three dimensions and can have more than two parameters, so cannot always be visualized. Nevertheless they can be analyzed using the methods of vector calculus and differential geometry.
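As a supplement, a short sketch (Python with NumPy is my assumption; it is not part of the original page) that samples the butterfly parametrization above; handing x and y to any plotting library traces the curve.

```python
import numpy as np

t = np.linspace(0.0, 12.0 * np.pi, 4000)                 # the parameter range
r = np.exp(np.cos(t)) - 2.0 * np.cos(4.0 * t) - np.sin(t / 12.0) ** 5
x = np.sin(t) * r                                        # x(t) from the parametrization above
y = np.cos(t) * r                                        # y(t) likewise
# e.g. with matplotlib: plt.plot(x, y) draws the butterfly; plt.plot(np.cos(t), np.sin(t)) the unit circle
```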
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 12, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8376018404960632, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/221688/dm-cup-h-dm-joining-dm-amalg-dm-along-the-boundary-partial-dm?answertab=votes
# $D^m\cup_h D^m$, joining $D^m \amalg D^m$ along the boundary $\partial D^m$

Given an orientation-preserving diffeomorphism $h: \partial D^m \to \partial D^m$, we can glue two copies of the closed unit disk $D^m$ along the boundary by identifying $x \sim h(x)$ to form the quotient space $$\Sigma(h) := (D^m \amalg D^m)/\sim$$ Now we can give this quotient a smooth structure such that the obvious inclusions $D^m \hookrightarrow \Sigma(h)$ are smooth embeddings and in fact it turns out that for any two smooth structures, there exists a diffeomorphism between them. So $\Sigma(h)$ is a unique manifold up to diffeomorphism. So far, so good. Now, in Kosinski's 'Differential Manifolds', there is the following Lemma:

Lemma: $\Sigma(h)$ is diffeomorphic to $S^m$ if and only if $h$ extends over $D^m$. Moreover, $\Sigma(gh) = \Sigma(h)\# \Sigma(g)$.

Here $M\# N$ denotes the connected sum of two manifolds as usual. The proof of this is left as an exercise for the reader, but I'm unsure how one might construct an extension of $h$ to $D^m$, given that $\Sigma(h)$ is diffeomorphic to $S^m$. I know that in this case $h(\partial D^m)$ necessarily separates $\Sigma(h) = S^m$ into two components, and since $h(\partial D^m)$ is an embedded compact $(m-1)$-manifold (which is smooth), I can also prove that $h(\partial D^m)$ is the boundary of both connected components of its complement. But at this point I get lost. Is it clear that these two components are diffeomorphic to disks? Where might I find a proof of this? I'm fine with the other parts of this Lemma, but I just don't see how to extend $h$ given $\Sigma(h) = S^m$. If you could help me out, this would be very much appreciated. Thank you for your help! -

1 – john mangual Oct 29 '12 at 19:27

1 – john mangual Oct 29 '12 at 19:49

@johnmangual: Thank you for these links! Interesting stuff. – Sam Oct 29 '12 at 20:01

Finally, I think your components are m-disks by construction. Since $h(\partial \mathbb{D}^m) = \partial \mathbb{D}^m$ you have $S^m \backslash h(\partial \mathbb{D}^m) \simeq S^m \backslash \partial \mathbb{D}^m$. So it's pretty clear that it splits into two disks. I think it's possible to address all the parts of your question now. – john mangual Oct 29 '12 at 20:05

@john mangual: Sam's question is one of the key steps in setting up the bijective correspondence between homotopy-spheres (up to diffeomorphism) and the mapping class group of lower-dimensional (but standard) spheres. This is the beginning of the Smale-Milnor-Kervaire machine for computing the group of homotopy spheres in dimensions $n \geq 5$. – Ryan Budney Nov 3 '12 at 19:25

## 2 Answers

If your sphere $\Sigma(h)$ is diffeomorphic to a standard sphere, consider a diffeomorphism $\Sigma(h) \to S^m$. The two discs in $\Sigma(h)$ are smooth discs, so after applying the diffeomorphism, they're smooth discs in $S^m$. But smooth discs are tubular neighbourhoods of their centres. So they're unique up to embedded isotopy. In particular, this means that by the isotopy extension theorem you can isotope your diffeomorphism $\Sigma(h) \to S^m$ so that it sends the 'bottom' disc $D^m$ in $\Sigma(h)$ to the lower hemi-sphere of $S^m$; moreover, you can ensure your diffeo $\Sigma(h) \to S^m$ is a standard diffeomorphism between the lower $D^m$ and the lower hemi-sphere. But now the upper $D^m$ in $\Sigma(h)$ is identified (via a diffeomorphism) with the upper hemi-sphere in $S^m$. So compose that map with a standard diffeomorphism between the upper-hemisphere and a $D^m$.
Provided you choose it appropriately on the boundary, this is by design your extension of $h : \partial D^m \to \partial D^m$ to a diffeomorphism $\overline{h} : D^m \to D^m$. So above, when I talk about 'standard' diffeomorphisms between $D^m$ and the lower/upper hemi-spheres of $S^m$, what I mean is that $\partial D^m \times \{0\} = \partial H$ (set equality) where $H \subset S^m$ is either the upper or lower hemi-sphere in $S^m$. So to be standard I mean the diffeo must be the identity on the boundary in this sense. Cerf went further than this: his pseudo-isotopy theorem now says that $h$ is isotopic to the identity on $\partial D^m$, provided $m \geq 6$. -

This is a great explanation! Very clear. What I was missing was the key step of moving one of the two hemi-spheres by an isotopy to make it 'nice' again. Thank you ever so much for writing it up for me and explaining it so well! – Sam Nov 3 '12 at 22:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 64, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9538645148277283, "perplexity_flag": "head"}
http://mathoverflow.net/questions/84032?sort=oldest
## Short Course Suggestions For High School Students

I am planning to teach a course for talented high school students at a summer camp and I need suggestions for possible topics. The students usually have different backgrounds but most of them are familiar with single variable calculus and very basic linear algebra over the reals. The teaching format will be two hours per day, six days per week and two weeks in total. Suggestions for one-week courses are also welcome. There are two things I want from this course. First, it should have a direction and a final goal. So it shouldn't be based on isolated Olympiad-type problems. Second, it should introduce at least one new concept or object which is not a part of the high school curriculum. For instance, classification of frieze patterns is a good topic. There is a clear goal and one needs to introduce the concept of a group, which is new for high school students. Any other suggestions? -

2 In case you decide to go with something that involves ruler and compass constructability and/or the theory of equations (by which I mean what was a standard undergraduate course from the late 1800s until its disappearance in the 1950s), you may find the following manuscript of use: pballew.net/Constructable_17gon.pdf I wrote this for situations such as you find yourself in, more as a secondary reference than as a primary reference. – Dave L Renfro Dec 21 2011 at 20:30

Have you seen Etingof's notes on group theory? – B. Bischof Dec 22 2011 at 4:58

1 I have a series of lectures given to supplement a low-level freshman university course that cover RSA, P/NP, Turing machines, diagonalization (a la Cantor, Turing, and Godel), random walks, and the central limit theorem. If you want copies, email me (my contact info is on my MO profile). – Steve Huntsman Dec 22 2011 at 16:48

Include some homework exercises with your lectures! – KConrad Dec 23 2011 at 8:55

## 17 Answers

If I may be forgiven for self-promotion, you might examine How To Fold It: The Mathematics of Linkages, Origami, and Polyhedra (Cambridge University Press, 2011). All of its topics are accessible to high-school students, but all fall outside the high-school curriculum. See also `howtofoldit.org` for some (not yet well-organized) supplementary material. -

When I was at that stage, I really enjoyed some introductory lectures in set theory a la Cantor; in two weeks you can probably get to Schroeder-Bernstein or thereabouts... -

I believe (and I may have some of the details wrong) Mark Sapir had at one time a curriculum for fourth graders that involved combinatorics on infinite words. I do not know if he still has it, or how adaptable it is, but it introduced the Thue-Morse sequence and (I believe) had some applications, such as (non-FIDE) unending chess, analysis of certain dynamical systems, and so on. Since I may be mistaken as to the details and availability, I suggest asking Mark Sapir or rolling your own. -

My first suggestion would be a course on set theory. Starting with naive set theory, you examine the diagonal argument, paradoxes, and early developments. Then you use that to motivate axiomatic set theory (perhaps ZF), derive the Peano Postulates, prove Cantor-Schroeder-Bernstein, survey cardinal arithmetic.
If they've seen the computational side of calculus, another idea could be to do an introductory analysis course. Assuming little but rational numbers, you could construct the real numbers, show their uncountability; then do the calculus they were taught, proving everything on your way. Another suggestion with which, I feel, one cannot go wrong is elementary number theory. I would stress prime number theory, proving Bertrand's Postulate and stating the prime number theorem. Full disclosure: I'm currently a high school student. -

The fact that you are a high school student cannot be ignored here... it is good to hear what you think, and I agree that those are interesting and natural topics. – Alan Haynes Dec 21 2011 at 20:41

2 I can't help stating the fact that none of the three suggested courses would improve my interest in mathematics in the long run. Set theory fascinates people when they first hear of it (I too had such a phase during my school time), but at some point tends to lose them and leave nothing behind except for an impression that mathematics is about playing around with virtual infinities beyond any hope of real understanding or intuition. What, for instance, does Cantor-Schröder-Bernstein mean? I also don't consider it a good idea to give a more technical and axiomatic take on a subject ... – darij grinberg Dec 22 2011 at 1:39

3 ... already overstressed ad nauseam in school, such as analysis. In my circles (German IMO training 2004-2006) the predominant feeling towards (single-variable) calculus was annoyed contempt; differentiating and integrating was something to be left to lesser beings (applied mathematicians, schoolteachers), whereas topological stuff like continuity and the construction of the reals was considered reasonable but boring and technical. Only complex analysis evoked better feelings. This probably has to do with the fact that in school, the only analysis taught is trivial and boring analysis. ... – darij grinberg Dec 22 2011 at 1:42

1 ... If you want to change something about this, I think the better idea would be to introduce some of the more surprising and fresh material (complex analysis, $p$-adic analysis, nonstandard analysis - handle with care -, even some of the nicer numerical analysis), rather than just to do the standard 1-dimensional real calculus with more attention to proofs and details. – darij grinberg Dec 22 2011 at 1:44

2 As for elementary number theory, it is a good proposal, but the Prime Number Theorem is a problematic matter: just stating it doesn't help a lot; proving it might be too much for the course. I'd rather go with quadratic residues and some of the more interesting modular arithmetic. – darij grinberg Dec 22 2011 at 1:48

2 by 2 Markov chains? (Don't formally define eigenvectors etc. at the start; just introduce the idea of the matrix as an update rule for some kind of "dynamical system", get them to do some calculations and make some guesses, then do some ad hoc proof of convergence to equilibrium in the non-degenerate case.) -

Some elementary graph theory with the intent of solving traversal or traveling salesman type problems is pretty easy at that level. Introducing incidence matrices can also be a foothold for learning matrix multiplication. -

You have not told us where the students are coming from, but it is pretty safe to say that these days no high school students (with the exception of a few countries in the world) are exposed to any meaningful geometry course.
How about teaching them a real old-fashioned synthetic (Euclidean/Lobachevsky) geometry course based, let's say, on Kiselev's classic http://www.amazon.com/Kiselevs-Geometry-Book-I-Planimetry/dp/0977985202 with a possible excursion into projective geometry. There is no more natural place to introduce the concept of groups (actions on sets) than in geometry (composition of isometric transformations). There is no more natural place to introduce them to the concept of measure. It is very easy to involve hard combinatorial problems and many other things. Finally, set theory is all over geometry, and the axiomatic method rules. Many of the most challenging "Olympiad problems" are geometric in nature. Best, Predrag -

I realized recently that you can do something really cool with good students after they learn the standard forms for conic sections: you can compute the compactifications of their moduli spaces. I gave an undergraduate talk based on this, and I think it went really well. You have to wave your hands a bit and you might not want to use the word compactification. It is pretty obvious how to draw the uncompactified spaces of conic sections centered at the origin, but something really cool happens when you approach the boundary. I think this could be stretched out a bit longer than an hour and you could probably do several nice lectures on it, one for each different moduli space. Let me know if you come up with any new low-level examples for the moduli spaces. Mine were: triangles in the plane, circles centered at the origin, circles in the plane, ellipses centered at the origin, and hyperbolas centered at the origin. You could also look at how the discriminant is a function on the moduli space. -

In a similar situation I gave these courses: Groups and combinatorics (Polya theorem), and Semigroups and automata. -

Serge Lang's book "Math Talks for Undergraduates" (Springer, 1999) has quite a few topics which will work for anyone with some calculus. Topics include symmetric polynomials, approximation theorems in analysis, prime numbers, and the abc conjecture. -

If you can teach game theory, that could be good. It's bread and butter for mathematical economics and political science (even ecologists learn it now); I think the subject illustrates the point that math is not limited in application to situations which involve numbers. In addition to being useful, it's very elementary to solve games (although the fundamental fact that mixed strategy Nash equilibria exist requires topology to prove, the proof doesn't provide an algorithm for finding them; actually solving games is more combinatorial). Proving that sets of strategies are/are not Nash equilibria can introduce students to the concept of a formal mathematical proof in a setting which I think is straightforward. Unfortunately, I can't think of a textbook that would be good, but maybe someone else knows one. -

Maybe Game Theory and Strategy by Philip Straffin, Jr.? It is an MAA Textbook. A reviewer said, "The only mathematical background necessary is that found in the college-track high-school curriculum." – Joseph O'Rourke Dec 22 2011 at 2:51

Winning Ways is fantastic for combinatorial number theory, because everything is built up through games which anyone can play, and numbers only come in as a notation to keep track of who is ahead in the game. Plus the "numbers" include surreal numbers like stars and arrows.
I remember seeing this during a math summer camp when I was 15 and being shocked that something that wasn't a number (like "double up star") could be used to count things, and that those numbers could be combined (added) in a meaningful way. – Zack Wolske Dec 22 2011 at 6:07 Chip-firing, rotor-routing and cycle-popping can be understood by anyone with or without combinatorial background, and provide new insights on lots of old combinatorial problems (counting Eulerian cycles and spanning trees, for instance). Here are some more basic facts, and here are some newer results. While the papers linked are probably too concise and too scholarly to be understood by students directly, it shouldn't be that difficult to make the results accessible for school students by writing them down in a more expository manner. (Needless to say, this would actually add a lot of value.) There are some notions from algebra used (group, group action, monoid, determinant), but (except for some linear algebra, which also can be avoided if so desired) mostly just the language is being used, not any nontrivial theorems. Gröbner bases and elimination theory are another good field, but I don't have a good elementary reference for this. The question how to solve a system of polynomial equations in general is a natural one and a good student should have asked himself this question at least once. Unfortunately the answer is never given even in university lectures. Algebraic geometry is not an answer. Now that we are talking about solving equations, I remember Vladimir Arnold having written a school-level (well, something he considered school level, referring to Russian schools) treatment of a topological proof (or an almost-proof, up to some intuitively obvious technicalities that should be cleared up in an analysis course) of the unsolvability of the generic quintic in radicals. Unfortunately I remember neither the proof nor the source, and it might be just my imagination... EDIT: Here is the text (not by Arnold, but based on Arnold's lectures). It is much longer than what I had in my memory, although the price tag of over \$100 is questionable... You can get the Russian original for free, but then again with some rudimentary Russian you can just as well get the translated book in djvu... PS. I got from chip-firing to Gröbner bases through a curious and tremendously useful mathematical fact, the Newman lemma (often also called diamond lemma by algebraists, whereas computer scientists use "diamond lemma" for a much easier version of this fact), which is (sometimes) used in proving the basic facts of both of these fields. While it can be avoided in both chip-firing and Gröbner bases, I think it should at least be mentioned (the proof is a wonderful exercise on algorithmic thinking) for the sake of general education. - 1 Here is a link to an English translation of Arnold's book (or at least a substantial part of it): www.nairanalytics.com/abel.pdf – KConrad Dec 23 2011 at 8:54 It really depends on what is the objective of the summer camp, and what kind of students are recruited. Are the students selected based on their competency in mathematics, and the goal is to convince them to pursue mathematics in college? In that case, it doesn't help to have a course of more of the same, like a course in geometry, conic sections, or even set theory, as has been suggested. That would only reinforce the view in these students' minds that all the great math has already been done centuries ago. 
I see a lot of great suggestions here already, and I would add a recommendation of exploring the mathematics of origami. There have been many recent discoveries in that field. The obvious advantage is that it deals with the obviously beautiful. - I think a course about homogeneous linear recurrence relations with constant coefficients should be manageable. The simplest nontrivial example is probably the Fibonacci recurrence $$F_{n+2} = F_{n+1} + F_n.$$ A large supply of nontrivial accessible examples is given by counting walks on graphs or, roughly equivalently, words in regular languages (e.g. the language of all words not containing a particular word $w$). When the characteristic polynomial has distinct roots, the solutions are given by powers of the roots, and this is a very nice example of how using a non-obvious basis for a vector space (the vector space of all solutions) can clarify a situation, and also a fairly concrete example of how complex numbers can naturally occur in answers to real questions (if the characteristic polynomial has complex roots). The general case is somewhat difficult to explain directly, but can be described using any of the following approaches, roughly in increasing order of abstraction: • partial fraction decomposition of a generating function, • factorization of a polynomial in the shift operator $S(f_n) = f_{n+1}$, • Jordan normal form of a companion matrix. The second approach allows the clearest analogy to the case of homogeneous linear ODEs with constant coefficients if the students are familiar with those. Of course to cut down on the abstraction it's probably best to focus on examples, and I think the students will be pleasantly surprised at how many difficult-looking combinatorial questions reduce to the counting of words in regular languages, which turns out to be relatively easy. There is also a cute connection to Pisot numbers, e.g. it is not obvious why the powers of $2 + \sqrt{3}$ should rapidly approach integers until you realize that $$(2 + \sqrt{3})^n + (2 - \sqrt{3})^n$$ is a sequence of integers satisfying a linear recurrence with integer coefficients and that $|2 - \sqrt{3}| < 1$; moreover, this sequence counts the number of closed walks of length $n$ on the multigraph with adjacency matrix $\left[ \begin{array}{cc} 2 & 1 \\ 3 & 2 \end{array} \right]$ so has a direct combinatorial interpretation as well. The closest thing I know to a complete reference for this material is Chapter 4 of Stanley's Enumerative Combinatorics Vol. I; section A.I.4 of Flajolet and Sedgewick's Analytic Combinatorics may also be useful. - see also "Recursion sequences" by Markushevich; this book is written for kids – Anton Petrunin Jan 1 2012 at 1:36 I have successfully taught a course for gifted high school students (somewhat shorter than yours, about 9 hours) devoted to the probabilistic method (based, naturally, on Alon and Spencer + some other material). I managed to cover the basics, second moment method, some random graphs, games and derandomization. With a little more time I would have squeezed in the Lovasz Local Lemma. There was a lot of problem solving, but I was also able to show them some more advanced techniques. In general, combinatorics seems to be a good context to introduce some nontrivial probabilistic tools (say, Chernoff-type bounds). Another probability-based course in the similar format was "random walks and electrical networks". 
Very nice topic, quite elementary^1, lots of physical intuition - and at the same time it points at the more advanced math beneath (Markov chains, spectral graph theory). 1 - until the kids ask you "wait, how is this probability on the set of infinite trajectories defined?" ;) Luckily, I managed to avoid invoking the Kolmogorov extension theorem. - The Combinatorial Nullstellensatz is a great topic. You can combine it with Dvir's ideas and some other stuff where the key idea is to construct an impossible polynomial even though there is no mention of any polynomial in the original problem setup. Let me know if you want more details (I've got to run now) :). - 1 See Noga Alon's 1999 paper, "Combinatorial Nullstellensatz": citeseerx.ist.psu.edu/viewdoc/… – Joseph O'Rourke Dec 23 2011 at 2:12 You can do Monsky's theorem, that a square cannot be divided into an odd number of equal-area triangles. On the way you will have to do 1. p-adic numbers, 2. Sperner's lemma, 3. present $\mathbb{R}$ as a vector space over $\mathbb{Q}$. If you have more time, you could • use Sperner's lemma to prove Brouwer's fixed point theorem, • use (3) to do Dehn invariants, • and I am sure you can find what to do with p-adic numbers. -
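To complement the linear-recurrence answer above, here is a small Python sketch (my own addition, not from any of the answers). It checks numerically that $a_n = (2+\sqrt{3})^n + (2-\sqrt{3})^n$ is an integer sequence satisfying $a_{n+2} = 4a_{n+1} - a_n$ (since $2 \pm \sqrt{3}$ are the roots of $t^2 - 4t + 1$), that $a_n$ equals the number of closed walks of length $n$ on the multigraph with adjacency matrix $\left[\begin{smallmatrix}2 & 1\\ 3 & 2\end{smallmatrix}\right]$, and that powers of $2+\sqrt{3}$ rapidly approach integers. The range of $n$ is an arbitrary choice.

```python
import numpy as np

A = np.array([[2, 1], [3, 2]])          # adjacency matrix from the answer above

# a_n = (2 + sqrt(3))^n + (2 - sqrt(3))^n satisfies a_{n+2} = 4 a_{n+1} - a_n,
# because 2 +/- sqrt(3) are the roots of t^2 - 4t + 1 = 0; a_0 = 2, a_1 = 4.
a = [2, 4]
for _ in range(10):
    a.append(4 * a[-1] - a[-2])

# Closed walks of length n are counted by the trace of A^n.
for n in range(12):
    walks = np.trace(np.linalg.matrix_power(A, n))
    assert walks == a[n]

# The gap a_n - (2 + sqrt(3))^n equals (2 - sqrt(3))^n, which tends to 0,
# so the powers of 2 + sqrt(3) creep up on integers.
for n in range(1, 12):
    x = (2 + 3 ** 0.5) ** n
    print(f"n={n:2d}  a_n={a[n]:8d}  (2+sqrt3)^n={x:14.6f}  gap={a[n] - x:.6f}")
```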
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9497964978218079, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?p=3895265
Physics Forums

## Is this a valid argument about box topology?

Given the following sequence in the product space R^ω, such that the coordinates of x_n are 1/n, x1 = {1, 1, 1, ...} x2 = {1/2, 1/2, 1/2, ...} x3 = {1/3, 1/3, 1/3, ...} ... the basis in the box topology can be written as ∏(-1/n, 1/n). However, as n becomes infinitely large, the basis converges to ∏(0, 0), which is a single set and not open. Therefore the sequence is not convergent in box topology. Since there exists a basis that converges to ∏(x, x), for any element x of R^ω, a sequence in box topology does not converge to any element in R^ω.

Recognitions: Science Advisor

Quote by Pippi: However, as n becomes infinitely large, the basis converges to ∏(0, 0), which is a single set and not open. Therefore the sequence is not convergent in box topology.

To have a basis $B$ for $R^\omega$ don't you need to be able to represent any set in $R^\omega$ as a union of sets in $B$, not merely the sets that are near {0,0,...}? What kind of convergence are you talking about? Are you talking about a sequence of sets or a sequence of points? Under the usual definition of convergence, a sequence of points in a topological space that converges will converge to a point. There is no requirement that it converge to an open set.

I don't know the right terminology. x_n represents a point in R^ω that has an infinite number of coordinates. I want to use the sequence to show that if each of the coordinates converges as n grows large, then x_n converges to a point. I want to show the difference between the product topology and the box topology using the sequence x_n = {1/n, 1/n, 1/n, ..., 1/n, ...}. A textbook argument, if I read correctly, says that because 1/n eventually goes to 0, there is no open set in the box topology that contains (-δ, δ) in R, hence no function converges. Am I on the right track?

Recognitions: Science Advisor

To get a clear answer, you're going to have to state a clear question. You mention a sequence of points and then you talk about a function converging without explaining what function you mean. If there is a textbook argument, then quote the argument. Quote it, don't just give a mangled summary. (Perhaps the discipline of copying it will make it clearer to you.)

Thanks but no thank you. You are not being helpful at all.

Blog Entries: 8 Recognitions: Gold Member Science Advisor Staff Emeritus

Quote by Pippi: Thanks but no thank you. You are not being helpful at all.

That is because your question is a bit weird. What does it mean for a basis to converge?? The only things which can converge in topology are sequences, nets and filters. Things like bases can't converge. Except if you're talking about a filter basis, but even then the OP makes little sense.
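The thread never spells out the textbook argument, so here is a hedged sketch of it as an addendum (my own addition, not a quote from the thread or any textbook). Take the box-open set $U = \prod_k (-1/k,\, 1/k)$, the very product the original poster writes down; it contains the zero sequence, yet no $x_n = (1/n, 1/n, \dots)$ lies in $U$, because the $k = n$ coordinate of $x_n$ equals $1/n$, which is not strictly inside $(-1/n, 1/n)$. Hence $x_n$ does not converge to $0$ in the box topology, even though it does in the product topology. The finite check below only inspects the first $N$ coordinates, which is enough to exhibit the offending coordinate for each $n$; the value of $N$ is arbitrary.

```python
# Sketch: why x_n = (1/n, 1/n, ...) does not converge to 0 in the box topology.
# U = prod_k (-1/k, 1/k) is box-open and contains 0, but no x_n lies in U.
N = 20  # how many terms/coordinates we inspect (any N shows the same pattern)

def offending_coordinate(n, N):
    """First coordinate k <= N at which x_n = (1/n, 1/n, ...) leaves (-1/k, 1/k)."""
    for k in range(1, N + 1):
        if not (-1.0 / k < 1.0 / n < 1.0 / k):
            return k
    return None

for n in range(1, N + 1):
    k = offending_coordinate(n, N)
    print(f"x_{n}: coordinate k={k} has value 1/{n}, which is not inside (-1/{k}, 1/{k})")
```

For every n the offending coordinate is k = n, so no tail of the sequence ever enters U.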
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.923591673374176, "perplexity_flag": "middle"}
http://en.wikipedia.org/wiki/Ruled_surface
Ruled surface

A hyperboloid of one sheet is a doubly ruled surface: it can be generated by either of two families of straight lines.

In geometry, a surface S is ruled (also called a scroll) if through every point of S there is a straight line that lies on S. The most familiar examples are the plane and the curved surface of a cylinder or cone. Other examples are a conical surface with elliptical directrix, the right conoid, the helicoid, and the tangent developable of a smooth curve in space. A ruled surface can always be described (at least locally) as the set of points swept by a moving straight line. For example, a cone is formed by keeping one point of a line fixed whilst moving another point along a circle. A surface is doubly ruled if through every one of its points there are two distinct lines that lie on the surface. The hyperbolic paraboloid and the hyperboloid of one sheet are doubly ruled surfaces. The plane is the only surface which contains three distinct lines through each of its points.

The properties of being ruled or doubly ruled are preserved by projective maps, and therefore are concepts of projective geometry. In algebraic geometry ruled surfaces are sometimes considered to be surfaces in affine or projective space over a field, but they are also sometimes considered as abstract algebraic surfaces without an embedding into affine or projective space, in which case "straight line" is understood to mean an affine or projective line.

Ruled surfaces in differential geometry

Parametric representation

A ruled helicoid

The "moving line" view means that a ruled surface has a parametric representation of the form $S(t,u) = p(t) + u\,r(t)$ where $S(t,u)$ is the generic point on the surface, $p(t)$ is a point that traces a curve lying on the surface, and $r(t)$ is a unit-length vector that traces a curve on the unit sphere. Thus, for example, if one uses $\begin{align} p(t) &= (\cos(2t), \sin(2t), 0)\\ r(t) &= ( \cos t \cos 2 t , \cos t \sin 2 t, \sin t ) \end{align}$ one obtains a ruled surface that contains the Möbius strip. Alternatively, a ruled surface can be parametrized as $S(t,u) = (1-u) p(t) + u q(t)$, where $p$ and $q$ are two non-intersecting curves lying on the surface. In particular, when $p(t)$ and $q(t)$ move with constant speed along two skew lines, the surface is a hyperbolic paraboloid, or a piece of a hyperboloid of one sheet. (A short numerical sketch of the first parametrization is given after the references below.)

Developable surface

Main article: Developable surface

A developable surface is a surface that can be (locally) unrolled onto a flat plane without tearing or stretching it. If a developable surface lies in three-dimensional Euclidean space, and is complete, then it is necessarily ruled, but the converse is not always true. For instance, the cylinder and cone are developable, but the general hyperboloid of one sheet is not. More generally, any developable surface in three dimensions is part of a complete ruled surface, and so itself must be locally ruled. There are developable surfaces embedded in four dimensions which are however not ruled. (Hilbert & Cohn-Vossen 1952, pp. 341–342)

Ruled surfaces in algebraic geometry

A doubly ruled hyperbolic paraboloid with equation z = xy

In algebraic geometry, ruled surfaces were originally defined as projective surfaces in projective space containing a straight line through any given point.
This immediately implies that there is a projective line on the surface through any given point, and this condition is now often used as the definition of a ruled surface: ruled surfaces are defined to be abstract projective surfaces satisfying the condition that there is a projective line through any point. This is equivalent to saying that they are birational to the product of a curve and a projective line. Sometimes a ruled surface is defined to be one satisfying the stronger condition that it has a fibration over a curve with fibers that are projective lines. This excludes the projective plane, which has a projective line through every point but cannot be written as such a fibration.

Ruled surfaces appear in the Enriques classification of projective complex surfaces, because every algebraic surface of Kodaira dimension −∞ is a ruled surface (or a projective plane, if one uses the restrictive definition of ruled surface). Every minimal projective ruled surface other than the projective plane is the projective bundle of a 2-dimensional vector bundle over some curve. The ruled surfaces with base curve of genus 0 are the Hirzebruch surfaces.

Ruled surfaces in architecture

Doubly ruled surfaces are the inspiration for curved hyperboloid structures that can be built with a latticework of straight elements, namely:

• Hyperbolic paraboloids, such as saddle roofs.
• Hyperboloids of one sheet, such as cooling towers and some trash bins.
• The RM-81 Agena rocket engine employed straight cooling channels that were laid out in a ruled surface to form the throat of the nozzle section.
• The roof of the school at Sagrada Familia is a sinusoidally ruled surface.
• Hyperbolic cooling towers at Didcot Power Station, UK; the surface can be doubly ruled.
• A doubly ruled water tower with toroidal tank, by Jan Bogusławski in Ciechanów, Poland.
• The hyperboloid Kobe Port Tower, Kobe, Japan, with a double ruling.
• The gridshell of Shukhov Tower in Moscow, whose sections are doubly ruled.
• A ruled helicoid spiral staircase inside Cremona's Torrazzo.
• The village church in Selo, Slovenia: both the roof and the wall are ruled surfaces.
• The hyperbolic paraboloid roof of Warszawa Ochota railway station in Warsaw, Poland.
• A ruled conical hat.

See also

• Differential geometry of ruled surfaces
• Conoid
• Helicoid
• Rational normal scroll, a ruled surface built from two rational normal curves

References

• Barth, Wolf P.; Hulek, Klaus; Peters, Chris A.M.; Van de Ven, Antonius (2004), Compact Complex Surfaces, Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. 4, Springer-Verlag, Berlin, ISBN 978-3-540-00832-3, MR2030225
• Beauville, Arnaud (1996), Complex Algebraic Surfaces, London Mathematical Society Student Texts 34 (2nd ed.), Cambridge University Press, ISBN 978-0-521-49510-3 / 978-0-521-49842-5, MR1406314
• Sharp, John (2008), D-Forms, Tarquin. Models exploring ruled surfaces. Review: Journal of Mathematics and the Arts 3 (2009), 229–230. ISBN 978-1-899618-87-3
• Edge, W. L. (1931), The Theory of Ruled Surfaces, Cambridge University Press. Review: Bull. Amer. Math. Soc. 37 (1931), 791–793, doi:10.1090/S0002-9904-1931-05248-4
• Hilbert, David; Cohn-Vossen, Stephan (1952), Geometry and the Imagination (2nd ed.), New York: Chelsea, ISBN 978-0-8284-1087-8
• Iskovskikh, V.A. (2001), "Ruled surface", in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
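The following short numerical sketch is not part of the original article. It evaluates the parametrization $S(t,u) = p(t) + u\,r(t)$ from the "Parametric representation" section above with the stated $p(t)$ and $r(t)$, and checks that $r(t)$ does trace a curve on the unit sphere; the grid sizes and parameter ranges are arbitrary choices.

```python
import numpy as np

# Ruled-surface parametrization S(t, u) = p(t) + u * r(t) with the curves used
# in the "Parametric representation" section (the surface containing the Möbius strip).
t = np.linspace(0.0, np.pi, 200)          # parameter along the directrix p(t)
u = np.linspace(-0.5, 0.5, 50)            # parameter along the rulings
T, U = np.meshgrid(t, u)

p = np.stack([np.cos(2 * T), np.sin(2 * T), np.zeros_like(T)], axis=-1)
r = np.stack([np.cos(T) * np.cos(2 * T),
              np.cos(T) * np.sin(2 * T),
              np.sin(T)], axis=-1)

S = p + U[..., None] * r                  # grid of points on the ruled surface

# r(t) should be a unit vector for every t, i.e. it traces a curve on the unit sphere.
assert np.allclose(np.linalg.norm(r, axis=-1), 1.0)
print(S.shape)                            # (50, 200, 3): 50 rulings sampled at 200 t-values
```

For each fixed value of t, the slice S[:, i, :] is a straight segment of the ruling through p(t) in direction r(t), which is the "moving line" picture in the text.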
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 10, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9062449932098389, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-algebra/208163-isotropic-forms-field-p-adic-numbers-print.html
# Isotropic forms in the field of p-adic numbers

• November 22nd 2012, 04:41 AM AlexanderW
Isotropic forms in the field of p-adic numbers

Hi, for which primes $p$ are the quadratic forms $7x^2-y^2-5z^2$ and $w^2+x^2+3y^2+11z^2$ isotropic over the field of $p$-adic numbers $\mathbb Q_p$?

Abbreviated forms for the two quadratic forms above are $\left \langle 7,-1,-5 \right \rangle$ and $\left \langle 1,1,3,11 \right \rangle$. Define $\varepsilon (\left \langle a_1,...,a_n \right \rangle):=\prod_{1\leq i< j\leq n} (a_i,a_j) \in \left \{ \pm 1 \right \}$.

I can use the fact that a regular quadratic form $\phi$ over a field $F$ is isotropic if $\dim(\phi)=3$ and $\varepsilon(\phi)=(-1,-d)$. [Here $(\,.\,,.\,)$ denotes the Hilbert symbol and $d:=\operatorname{disc}(\phi)$ the discriminant of $\phi$.] Moreover, a regular quadratic form $\phi$ over a field $F$ is isotropic if $\dim(\phi)=4$ and either $d \neq 1$ or { $d=1$ and $\varepsilon(\phi)=(-1,-1)$ }. Here $d \neq 1$ actually means that the residue classes of $d$ and $-1$ are not equal modulo $F^{*2}$.

Let's come to the first form: $\varepsilon (\left \langle 7,-1,-5 \right \rangle)=(7,-1)(7,-5)(-1,-5)$ and $d=\operatorname{disc}(\left \langle 7,-1,-5 \right \rangle)=35$, therefore $-d=-35$. Now I have to check for which primes $p$ we have $(7,-1)(7,-5)(-1,-5) = (-1,-35)$ in $\mathbb Q_p$. But I don't know how to solve it. Thanks!

Regards
Alexander
• November 28th 2012, 08:48 AM AlexanderW
Re: Isotropic forms in the field of p-adic numbers

Hello, this problem is still unsolved. If you have any hints or ideas, it would be great if you would answer.

Bye, Alexander
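A computational check may help here (this is my own addition, not part of the thread). The sketch below implements the Hilbert symbol $(a,b)_p$ over $\mathbb{Q}_p$ using the explicit unit/valuation formulas from Serre's *A Course in Arithmetic* (Ch. III), and then compares $\varepsilon(\langle 7,-1,-5\rangle) = (7,-1)(7,-5)(-1,-5)$ with $(-1,-35)$ prime by prime, using the dimension-3 criterion quoted in the post. Since both arguments of a Hilbert symbol being $p$-adic units forces the symbol to be $1$ for odd $p$, only small primes (those dividing $2\cdot\mathrm{disc}$) can give a mismatch; treat the output as an illustration of the bookkeeping rather than a verified solution.

```python
def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p and a a unit mod p."""
    t = pow(a % p, (p - 1) // 2, p)
    return -1 if t == p - 1 else 1

def hilbert(a, b, p):
    """Hilbert symbol (a, b)_p, via the formulas in Serre, A Course in Arithmetic, Ch. III."""
    def split(n):                          # write n = p^v * u with u a p-adic unit
        v = 0
        while n % p == 0:
            n //= p
            v += 1
        return v, n
    alpha, u = split(a)
    beta, v = split(b)
    if p == 2:
        eps = lambda m: ((m - 1) // 2) % 2          # m odd
        omega = lambda m: ((m * m - 1) // 8) % 2
        e = eps(u) * eps(v) + alpha * omega(v) + beta * omega(u)
        return -1 if e % 2 else 1
    sign = (-1) ** (alpha * beta * ((p - 1) // 2))  # only the parity of the exponent matters
    return sign * legendre(u, p) ** beta * legendre(v, p) ** alpha

def eps_form(coeffs, p):
    """epsilon(<a_1, ..., a_n>) = prod_{i<j} (a_i, a_j)_p."""
    s = 1
    for i in range(len(coeffs)):
        for j in range(i + 1, len(coeffs)):
            s *= hilbert(coeffs[i], coeffs[j], p)
    return s

form = [7, -1, -5]
d = 7 * (-1) * (-5)                        # discriminant = 35
for p in [2, 3, 5, 7, 11, 13]:
    lhs = eps_form(form, p)
    rhs = hilbert(-1, -d, p)
    print(p, lhs, rhs, "isotropic" if lhs == rhs else "anisotropic")
```

The same `eps_form` call with `[1, 1, 3, 11]` can be combined with the dimension-4 criterion from the post (compare the square class of the discriminant first, then the invariant) for the second form.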
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 31, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8971686363220215, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/201823-tangent-proofs-y-x-3-a.html
# Thread: tangent proofs for y=x^3

1. ## tangent proofs for y=x^3

Hey everyone, I recently decided to reopen my old uni maths book and work it through from cover to cover, completing every single problem, proof etc. It is taking a while, but I am enjoying it. I am sure I will get stuck again after this at some point, so thanks in advance everyone! Actually I am supposed to be working right now, but I am spending an hour every morning on this stuff. Anyway, I am stuck on the following problem (R. A. Adams, Calculus: A Complete Course, 5th Edition, Ch 2, chapter review problem 15 if you want to know):

Let C be the graph of y = x^3
a) show that if a ≠ 0 then the tangent to C at x=a also intersects C at a second point x=b
b) show that the slope of C at x=b is four times its slope at x=a
c) Can any line be tangent to C at more than one point
d) Can any line be tangent to the graph of y = Ax^3 + Bx^2 + Cx + D

I have not gotten very far, but this is what I have at present: y' = 3x^2, so letting x = a, the tangent is y = 3a^2(x-a) + a^3 = 3a^2x - 2a^3. For a) I assume that C(b) means that y = b^3 for x = b. Then I get stuck...

2. ## Re: tangent proofs for y=x^3

Let C be the graph of y = x^3
a) show that if a ≠ 0 then the tangent to C at x=a also intersects C at a second point x=b
b) show that the slope of C at x=b is four times its slope at x=a
c) Can any line be tangent to C at more than one point
d) Can any line be tangent to the graph of y = Ax^3 + Bx^2 + Cx + D

(a) at $(a,a^3)$, the tangent slope is $3a^2$. If this tangent line intersects the curve again at $(b,b^3)$, $a \ne b \ne 0$, then
$3a^2 = \frac{b^3-a^3}{b-a} = b^2 + ab + a^2 \implies b^2 + ab - 2a^2 = 0$
$b^2 - a^2 + ab - a^2 = 0$
$(b+a)(b-a) + a(b-a) = 0$
since $a \ne b$ ...
$(b-a)(b + 2a) = 0 \implies b = -2a$

(b) $y'(a) = 3a^2$
$y'(b) = 3b^2 = 3(-2a)^2 = 12a^2 = 4 \cdot 3a^2 = 4 \cdot y'(a)$

(c) the answer to (b) answers this part.

(d) "any line" ??? I would say no ... how about the vertical line $x = k$? or any horizontal line where $3Ax^2 + 2Bx + C \ne 0$ ...

4. ## Re: tangent proofs for y=x^3

Thanks for the help, my other reply vanished somehow. For c) how does the answer to b) answer this? Would you mind giving me a little explanation?

5. ## Re: tangent proofs for y=x^3

I think I have some kind of proof for d), thoughts?

This curve will only have horizontal tangent lines if the equation 3Ax^2 + 2Bx + C = 0 has real solutions. Depending on A, B, C, its discriminant is Delta = 4B^2 - 12AC.

case 1, Delta < 0: no real solutions, so there are no critical points (only an inflection point), and the tangent at that point will only be tangent at the one point and will not be horizontal, since that is impossible; i.e. a derivative like x^2 + x + 1.
case 2, Delta > 0: two real solutions, therefore two critical points, and the tangent at each of those points is a horizontal line; i.e. a derivative like x^2 + x - 1.
case 3, Delta = 0: one real solution, i.e. the case y = x^3.

Therefore tangents where 3Ax^2 + 2Bx + C = 0 has no real solutions or one solution will only be tangent at one point.

Actually, at points of inflection is a tangent even possible? Thinking of limits, the left and right limits should be different? If this is the case then where Delta < 0 there would be no point where the line is a tangent, therefore there are tangent lines that have no solution (Delta < 0) or one solution (Delta = 0).

summary: no, some lines are not tangent to the graph y = Ax^3 + Bx^2 + Cx + D

6. ## Re: tangent proofs for y=x^3

Originally Posted by tamanous
For c) how does the answer to b) answer this?
Would you mind giving me a little explanation?

(b) established that the line tangent to any point on the curve intersects the curve at another point ... if a line is tangent to a curve twice, then it has to intersect the curve at both points and have the same slope, right?

7. ## Re: tangent proofs for y=x^3

Originally Posted by skeeter
(b) established that the line tangent to any point on the curve intersects the curve at another point ... if a line is tangent to a curve twice, then it has to intersect the curve at both points and have the same slope, right?

Ah, of course, thanks
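The algebra in post 2 can also be checked mechanically. Here is a short sympy sketch (my own addition, not from the thread) that recovers the second intersection point b = -2a for parts (a) and the factor-of-four slope ratio for part (b).

```python
import sympy as sp

a, x = sp.symbols('a x', real=True)
f = x**3
tangent = 3*a**2*(x - a) + a**3            # tangent line to y = x^3 at x = a

# (a) where does the tangent meet the curve again?  x^3 - tangent = (x - a)^2 (x + 2a)
intersections = sp.solve(sp.Eq(f, tangent), x)
print(intersections)                        # [-2*a, a]  (a is a double root)

# (b) ratio of slopes at b = -2a and at a
slope = sp.diff(f, x)
ratio = sp.simplify(slope.subs(x, -2*a) / slope.subs(x, a))
print(ratio)                                # 4 (assuming a != 0)
```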
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 13, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.909275472164154, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/28875/what-is-the-stress-energy-tensor?answertab=votes
# What is the stress energy tensor?

I'm trying to understand the Einstein Field equation equipped only with training in Riemannian geometry. My question is very simple although I can't extract the answer from the wikipedia page: Is the "stress-energy" something that can be derived from the pseudo-Riemannian metric (like the Ricci tensor, scalar curvature, and obviously the metric coefficients that appear in the equation) or is it some empirical physics thing like the "constants of nature" that appear in the equation? Or do you need some extra mathematical gadget to specify it? Thanks and apologies in advance if this is utterly nonsensical. Also, as a non-physicist I'm not sure how to tag this either so sorry for that as well. -

## 1 Answer

Good question! From a physical perspective, the stress-energy tensor is the source term for Einstein's equation, kind of like the electric charge and current are the source terms for Maxwell's equations. It represents the amounts of energy, momentum, pressure, and stress in the space. Roughly: $$T = \begin{pmatrix}u & p_x & p_y & p_z \\ p_x & P_{xx} & \sigma_{xy} & \sigma_{xz} \\ p_y & \sigma_{yx} & P_{yy} & \sigma_{yz} \\ p_z & \sigma_{zx} & \sigma_{zy} & P_{zz}\end{pmatrix}$$ Here $u$ is the energy density, the $p$'s are momentum densities, the $P$'s are pressures, and the $\sigma$'s are shear stresses.

In its most "natural" physical interpretation, Einstein's equation $G^{\mu\nu} = 8\pi T^{\mu\nu}$ (in appropriate units) represents the fact that the curvature of space is determined by the stuff in it. To put that into practice, you measure the amount of stuff in your space, which tells you the components of the stress-energy tensor. Then you try to find a solution for the metric $g_{\mu\nu}$ that gives the proper $G^{\mu\nu}$ such that the equation is satisfied. (The Einstein tensor $G$ is a function of the metric.) In other words, you're measuring $T$ and trying to solve the resulting equation for $G$.

But you can also in principle measure the curvature of space, which tells you $G$ (or you could pick some metric and get $G$ from that), and use that to determine $T$, which tells you how much stuff is in the space. This is what cosmologists do when they try to figure out how the density of the universe compares to the critical density, for example. It's worth noting that $T$ is a dynamical variable (like electric charge), not a constant (like the speed of light). -

1 – John Rennie May 24 '12 at 6:42

Yeah, I'd meant for that sort of thing to be included in my second case. I put in a clarification. – David Zaslavsky♦ May 24 '12 at 7:40

@DavidZaslavsky: Thanks so much and sorry for the delayed response, David and John. That's exactly what I was looking for. In case you feel like answering another naive question, here's one more (you've already been quite generous so feel free to tell me to go read a physics book): So when people speak of a "solution" to the equation do they most often mean a local solution (i.e. in some coordinate chart) or do they mean the exhibition of an entire manifold whose metric and stress-energy satisfy the equation globally? Does T range over the tangent bundle of the observable universe or some ... – rorypulvino May 25 '12 at 1:14
– rorypulvino May 25 '12 at 1:14 My question could be more succinctly stated as follows "do 'solutions' to the EFE exhibit a model of the universe in its entirety or rather some particular coordinate patch with its peculiar arrangement of energy and matter and other 'stuff'?" – rorypulvino May 25 '12 at 1:25 show 1 more comment
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9475820064544678, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/3327/optical-explanation-of-images-of-stars?answertab=votes
# Optical explanation of images of stars? Very often when viewing pictures of the cosmos taken by telescopes, one can observe that larger/brighter stars do not appear precisely as points/circles on the image. Indeed, the brighter the light from star, the more we see this effect of four perpendicular rays "shooting" out from the star. My question is: what are the optics responsible for this effect? I would suppose that both the hazy glow around the stars and the rays shooting outwards are optical effects created by the camera/imaging device, but really can't surmise any more than that. (The fact that the rays are all aligned supports this, for one.) A proper justification, in terms of (geometrical?) optics for both the glow and the rays would be much appreciated. Here are a few other examples of such images: - ## 4 Answers This is, as Lubos mentioned, an effect of the wave nature of light, and cannot be explained using geometrical optics. What you are seeing is called the Point Spread Function (PSF) of the imaging system. Because stars are so far away that they are effectively point sources of light (i.e. they are spatially coherent) their image will be the PSF of the imaging system. Up to a scale factor, the PSF is the Fourier transform of the pupil of the imaging system. For a lens system, the pupil is usually just a circle, so the PSF is the 2D Fourier transform of a circle: $$\frac{J_1(2 \pi \rho)} {2 \pi \rho}$$ Where $J_1$ is the order 1 bessel function of the first kind. However, most modern telescopes are built with reflective optics, and there are various obscurations in the pupil due to the structures that support the secondary mirror. This more complicated pupil shape can produce a variety of artifacts in the PSF. The starburst pattern in your example images could be due to a simple "plus" shaped structure supporting the secondary mirror, but the effect is so strong that I suspect it was emphasized for creative effect. I'm not sure how the Hubble PSF looks, off the top of my head. In general, an image can be represented by the convolution of the ideal image $g(x,y)$ with the PSF, usually denoted $h(x,y)$. In the case of a point source (so $g(x,y)$ is a delta function, $\delta(x,y)$) it is trivial that the image is a copy of the PSF: $$h(x,y)=\int_{-\infty}^{\infty} \delta(\xi,\eta) h(x-\xi, y-\eta) d\xi d\eta$$ But in the case of a more complicated object, the convolution by the PSF acts to smooth or blur the image. This is why an out of focus camera produces blurry images. Although aberrations also degrade the image under a geometrical approximation, this is more accurate. The geometrical case and the wave optics (diffraction) result will become closer as the aberrations become large. Sometimes this effect is produced intentionally. You can actually buy filters for commercial cameras that have a fine grid of wires to produce this starburst effect for creative purposes. NB: This answer ignores any discussion of phase effects in diffraction (because I'm short on time, I may update later). If you would like to learn about diffraction and the wave optics approach to imaging, the leading text on the planet is "Introduction to Fourier Optics" by J. Goodman. It is an absolutely spectacular book. - @Colin: Thanks for your detailed answer. I'll have a closer look at it shortly! :) – Noldorin Jan 19 '11 at 18:24 @Noldorin: Of course:) I do plan on adding a little bit to it though. There is a lot of interesting details that I completely ignored. 
It will have to wait until I get home from work though. – Colin K Jan 19 '11 at 18:27 Sounds great; I look forward to it. It would be interesting to simulate these optical effects computationally in fact (perhaps as an image processor), which I may well do. – Noldorin Jan 19 '11 at 18:30 1 @Noldorin: You're talking about things very close to what I do professionally! I will be expanding this answer, but if you have other questions on this or related topics, please ask. I'd be happy for the chance to get some more good answers on here :) – Colin K Jan 19 '11 at 18:39 @Colin K: Yes, I just noticed - you're studying/working as an optical engineer! :) Thanks for the openness. I'm definitely interested in these sorts of topics. (I've been doing some light research on methods and solutions to the rendering equation recently - photon mapping, ray tracing, etc. Interesting stuff.) – Noldorin Jan 19 '11 at 19:13

A great question. These crosses around the stars are called "diffraction spikes". They emerge because of the wave properties of light - interference around the rods that have to be inserted into a reflective telescope. See e.g. http://en.wikipedia.org/wiki/Diffraction_spike http://apod.nasa.gov/apod/ap010415.html All the best, Lubos - Many thanks. I suspected they may not be geometrical, but was not sure. Now that I have the pointers, the physics should be easy to understand. :) – Noldorin Jan 19 '11 at 18:23 That is one reason why planetary astronomers prefer Cassegrain reflectors (where a glass corrector plate holds the secondary mirror), or refractor telescopes. You avoid diffraction off of supporting structures. – Omega Centauri Jul 10 '11 at 1:44

These optical effects of course have nothing to do with the size of stars. Until recently the size of a star could not be imaged on a photoplate or CCD. An image can have diffraction effects from the aperture and other optical elements. However, these are not directly related to the size of a star and our estimates of stellar diameters. The first estimate of a star's size comes from the bolometric luminosity. The emission of energy is proportional to the fourth power of the effective absolute temperature $T$, and the area is proportional to the square of the diameter of the star, so the diameter $D$ is proportional to the square root of the luminosity $L$ divided by the square of the effective temperature $T$. If $D$, $L$ and $T~=~5800\,\mathrm{K}$ refer to the Sun, then for another star with temperature $T'$ and luminosity $L'$ $$D'~=~D\sqrt{L'/L}\,(T/T')^2.$$ This is a black-body or Stefan–Boltzmann approach to calculating the diameter of a star.

A more exact size of a star can be deduced optically for a telescope that is at its resolution limit. The physical effect is the Hanbury Brown and Twiss effect. An electromagnetic wave with phase $e^{i\omega t}$ will reach two detectors with a relative phase $e^{i\phi}$. The irradiances on the two detectors will then be $$I_1~=~E^2 e^{i\omega t},\qquad I_2~=~E^2 e^{i\phi}e^{i\omega t}.$$ The correlation function between them, $\langle I_1I_2\rangle$, over a measurement time $T$ is $T^{-1}\int_0^T I_1I_2\,dt$, which for sufficiently long times is given by $$\langle I_1I_2\rangle ~=~\lim_{T\rightarrow\infty}T^{-1}\int_0^T I_1I_2\,dt~=~\frac{E^4}{4}\Big(1~+~\frac{1}{2}\cos(2\phi)\Big).$$ The phase difference can exist for a large telescope. The Airy pattern is $$I(r)~=~I(0)\left(\frac{2J_1(z)}{z}\right)^2,\qquad z~=~2\pi\,\frac{r}{2\lambda f/d},$$ where $d$ is the diameter of the aperture.
If this is made small enough the HBT phase or correlation may be detected and used to measure the diameter of a star more directly. I included this to illustrate something related to this issue, and to clear up a common misconception about how images of stars are often thought to convey direct information about their sizes. - There is a really nice paper on how they fixed the Hubble Space Telescope. They shifted the CCD by small amounts and captured several different images of the point spread function (which was quite bad at the time). Then they propagated their measurement back into the plane of the mirror and were able to determine the phase error and locate the original manufacturing problem. This made it possible to fix the telescope by installing a correction mirror. Complicated paper: http://www.optics.rochester.edu/workgroups/fienup/PUBLICATIONS/AO93_PRComplicated.pdf Some nice animations of the work they do for the new telescope: http://www.optics.rochester.edu/workgroups/fienup/Tom/tom_research.html -
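Tying together the point-spread-function answer and the diffraction-spike explanation above, here is a rough numerical sketch (my own addition, with arbitrary grid sizes and an idealized plus-shaped spider). It computes a far-field PSF as the squared magnitude of the Fourier transform of a circular pupil with a cross-shaped obstruction; each straight vane throws a pair of faint spikes perpendicular to itself, so the two crossed vanes give the familiar four-pointed star.

```python
import numpy as np

N = 512
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
r = np.hypot(x, y)

pupil = (r < N // 4).astype(float)      # circular aperture
vane = 2                                 # half-width of the spider vanes, in pixels
pupil[np.abs(x) <= vane] = 0.0           # vertical vane of the secondary-mirror support
pupil[np.abs(y) <= vane] = 0.0           # horizontal vane

# Fraunhofer (far-field) PSF ~ |Fourier transform of the pupil|^2
field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
psf = np.abs(field) ** 2
psf /= psf.max()

# A log stretch brings out the faint spikes; uncomment to look at it.
# import matplotlib.pyplot as plt
# plt.imshow(np.log10(psf + 1e-9), cmap="gray"); plt.show()
print(psf.shape, psf.max())
```

Removing the two vane lines from the pupil leaves the plain Airy pattern, which is the spike-free PSF mentioned in the comment about Cassegrain and refractor designs.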
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9354342818260193, "perplexity_flag": "middle"}