http://en.wikipedia.org/wiki/Divisor
# Divisor "Divisible" redirects here. For divisibility of groups, see Divisible group. For the second operand of a division, see Division (mathematics). For divisors in algebraic geometry, see Divisor (algebraic geometry). For divisibility in the ring theory, see Divisibility (ring theory). Calculation results Addition (+) addend + addend = sum Subtraction (−) minuend − subtrahend = difference Multiplication (×) multiplicand × multiplier = product Division (÷) dividend ÷ divisor = quotient Exponentiation baseexponent = power nth root (√) degree √ = root Logarithm logbase(power) = exponent The divisors of 10 illustrated with Cuisenaire rods: 1, 2, 5, and 10 In mathematics, a divisor of an integer $n$, also called a factor of $n$, is an integer which divides $n$ without leaving a remainder. ## Terminology The name "divisor" comes from the arithmetic operation of division: if $\frac{a}{b} = c$ then $a$ is the dividend, $b$ the divisor, and $c$ the quotient. In general, for non-zero integers $m$ and $n$, it is said that $m$ divides $n$—and, dually, that $n$ is divisible by $m$—written: $m \mid n,$ if there exists an integer $k$ such that $n = km$.[1] Thus, divisors can be negative as well as positive, although sometimes the term is restricted to positive divisors. (For example, there are six divisors of four, 1, 2, 4, −1, −2, −4, but only the positive ones would usually be mentioned, i.e. 1, 2, and 4.) 1 and −1 divide (are divisors of) every integer, every integer (and its negation) is a divisor of itself, and every integer is a divisor of 0, except by convention 0 itself (see also division by zero). Numbers divisible by 2 are called even and numbers not divisible by 2 are called odd. 1, −1, n and −n are known as the trivial divisors of n. A divisor of n that is not a trivial divisor is known as a non-trivial divisor. A number with at least one non-trivial divisor is known as a composite number, while the units −1 and 1 and prime numbers have no non-trivial divisors. There are divisibility rules which allow one to recognize certain divisors of a number from the number's digits. The generalization can be said to be the concept of divisibility in any integral domain. ## Examples • 7 is a divisor of 42 because $42/7 = 6$, so we can say $7 \mid 42$. It can also be said that 42 is divisible by 7, 42 is a multiple of 7, 7 divides 42, or 7 is a factor of 42. • The non-trivial divisors of 6 are 2, −2, 3, −3. • The positive divisors of 42 are 1, 2, 3, 6, 7, 14, 21, 42. • The set of all positive divisors of 60, $A = \{ 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60 \}$, partially ordered by divisibility, has the Hasse diagram: ## Further notions and facts There are some elementary rules: • If $a \mid b$ and $b \mid c$, then $a \mid c$. This is the transitive relation. • If $a \mid b$ and $b \mid a$, then $a = b$ or $a = -b$. • If $a \mid b$ and $c \mid b$, then it is NOT always true that $(a + c) \mid b$ (e.g. $2\mid6$ and $3 \mid 6$ but 5 does not divide 6). However, when $a \mid b$ and $a \mid c$, then $a \mid (b + c)$ is true, as is $a \mid (b - c)$.[2] If $a \mid bc$, and gcd$(a, b) = 1$, then $a \mid c$. This is called Euclid's lemma. If $p$ is a prime number and $p \mid ab$ then $p \mid a$ or $p \mid b$. A positive divisor of $n$ which is different from $n$ is called a proper divisor or an aliquot part of $n$. A number that does not evenly divide $n$ but leaves a remainder is called an aliquant part of $n$. An integer $n > 1$ whose only proper divisor is 1 is called a prime number. 
Equivalently, a prime number is a positive integer which has exactly two positive factors: 1 and itself. Any positive divisor of $n$ is a product of prime divisors of $n$ raised to some power. This is a consequence of the fundamental theorem of arithmetic.

A number $n$ is said to be perfect if it equals the sum of its proper divisors, deficient if the sum of its proper divisors is less than $n$, and abundant if this sum exceeds $n$.

The total number of positive divisors of $n$ is a multiplicative function $d(n)$, meaning that when two numbers $m$ and $n$ are relatively prime, then $d(mn)=d(m)\times d(n)$. For instance, $d(42) = 8 = 2 \times 2 \times 2 = d(2) \times d(3) \times d(7)$; the eight divisors of 42 are 1, 2, 3, 6, 7, 14, 21 and 42. However, the number of positive divisors is not a totally multiplicative function: if the two numbers $m$ and $n$ share a common divisor, then it might not be true that $d(mn)=d(m)\times d(n)$. The sum of the positive divisors of $n$ is another multiplicative function $\sigma (n)$ (e.g. $\sigma (42) = 96 = 3 \times 4 \times 8 = \sigma (2) \times \sigma (3) \times \sigma (7) = 1+2+3+6+7+14+21+42$). Both of these functions are examples of divisor functions.

If the prime factorization of $n$ is given by $n = p_1^{\nu_1} \, p_2^{\nu_2} \cdots p_k^{\nu_k}$ then the number of positive divisors of $n$ is $d(n) = (\nu_1 + 1) (\nu_2 + 1) \cdots (\nu_k + 1),$ and each of the divisors has the form $p_1^{\mu_1} \, p_2^{\mu_2} \cdots p_k^{\mu_k}$ where $0 \le \mu_i \le \nu_i$ for each $1 \le i \le k.$

For every natural number $n$, $d(n) < 2 \sqrt{n}$. Also,[3] $d(1)+d(2)+ \cdots +d(n) = n \ln n + (2 \gamma -1) n + O(\sqrt{n}),$ where $\gamma$ is the Euler–Mascheroni constant. One interpretation of this result is that a randomly chosen positive integer $n$ has an expected number of divisors of about $\ln n$.

## In abstract algebra

The relation of divisibility turns the set $N$ of non-negative integers into a partially ordered set, in fact into a complete distributive lattice. The largest element of this lattice is 0 and the smallest is 1. The meet operation $\wedge$ is given by the greatest common divisor and the join operation $\vee$ by the least common multiple. This lattice is isomorphic to the dual of the lattice of subgroups of the infinite cyclic group $Z$.

## See also

• Arithmetic functions
• Divisibility rule
• Divisor function
• Euclid's algorithm
• Fraction (mathematics)
• Table of divisors — A table of prime and non-prime divisors for 1–1000
• Table of prime factors — A table of prime factors for 1–1000

## Notes

1. Durbin, John R. (1992). Modern Algebra: an Introduction (3rd ed.). New York: Wiley. p. 61. ISBN 0-471-51001-7. "An integer $m$ is divisible by an integer $n$ if there is an integer $q$ (for quotient) such that $m = n q$."
2. $a \mid b,\, a \mid c \Rightarrow b=ja,\, c=ka \Rightarrow b+c=(j+k)a \Rightarrow a \mid (b+c)$. Similarly, $a \mid b,\, a \mid c \Rightarrow b=ja,\, c=ka \Rightarrow b-c=(j-k)a \Rightarrow a \mid (b-c)$.
3. Hardy, G. H.; E. M. Wright (April 17, 1980). An Introduction to the Theory of Numbers. Oxford University Press. p. 264. ISBN 0-19-853171-0.

## References

• Richard K. Guy, Unsolved Problems in Number Theory (3rd ed.), Springer Verlag, 2004, ISBN 0-387-20860-7; section B.
• Øystein Ore, Number Theory and its History, McGraw–Hill, NY, 1944 (and Dover reprints).
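The divisor-function formulas above are easy to check numerically. The following short Python sketch is an editorial illustration, not part of the original article; the function names are mine. It computes $d(n)$ and $\sigma(n)$ from the prime factorization and compares them against a brute-force enumeration of divisors.

```python
def prime_factorization(n):
    """Return the prime factorization of n as a dict {prime: exponent}."""
    factors = {}
    p = 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def d_from_factorization(n):
    """d(n) = (v1 + 1)(v2 + 1)...(vk + 1), as in the formula above."""
    result = 1
    for exponent in prime_factorization(n).values():
        result *= exponent + 1
    return result

def sigma_from_factorization(n):
    """sigma(n) = product over primes p of (p^(v+1) - 1)/(p - 1)."""
    result = 1
    for p, v in prime_factorization(n).items():
        result *= (p ** (v + 1) - 1) // (p - 1)
    return result

def divisors(n):
    """All positive divisors of n by trial division (brute force)."""
    return [k for k in range(1, n + 1) if n % k == 0]

# Example from the text: 42 has d(42) = 8 divisors summing to sigma(42) = 96.
assert d_from_factorization(42) == len(divisors(42)) == 8
assert sigma_from_factorization(42) == sum(divisors(42)) == 96

# Multiplicativity for coprime arguments, e.g. d(6*7) = d(6)*d(7).
assert d_from_factorization(42) == d_from_factorization(6) * d_from_factorization(7)
```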
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 85, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.89836585521698, "perplexity_flag": "head"}
http://mathoverflow.net/questions/4764?sort=votes
## Does some version of U_q(gl(1|1)) have a basis like Lusztig’s basis for \dot{U(sl_2)}? There's a non-unital algebra $\dot{U}$ formed from $U_q (sl_2)$ by including a system of mutually orthogonal idempotents $1_n$, indexed by the weight lattice. You can think of this as a category with objects $\mathbb{Z}$ if you prefer. Lusztig's basis $\mathbb{\dot{B}}$ for $\dot{U}$ has nice positivity properties: structure coefficients are in $\mathbb{Z}[q,q^{-1}]$. Has anyone tried to write down a similar type of basis for the algebra associated to $U_q (gl_{1|1})$? - Hi Sammy (!!). Can you give a reference for the U(sl_2) idempotent algebra you are referring to in the question? – David Jordan Nov 9 2009 at 22:05 I've removed the \mathfrak's since they weren't rendering properly. I'll look into this problem. – Anton Geraschenko♦ Nov 9 2009 at 23:19 One nice description is by Aaron Lauda: arXiv:0803.3652 – Sammy Black Nov 9 2009 at 23:27 2 I don't feel sure enough to give an answer, but I'm not optimistic. Certainly Lusztig's original construction doesn't work, and I don't think crystal theory does either, and those are the usual "avatars" of the canonical basis. – Ben Webster♦ Nov 9 2009 at 23:43 Actually, the structure constants of the canonical basis are in $\mathbb{N}[v,v^{-1}]$ which is explained by the fact that there exists an abelian categorification. If one has a categorification of one half of gl(1,1) using triangulated categories (which seems likely after the recent paper of Khovanov linked in David Hill's answer below) then one would get a basis like the one you are asking for. (Moral: abelian implies $\mathbb{N}$, triangulated implies $\mathbb{Z}$!) – Geordie Williamson Sep 4 2010 at 7:24 ## 1 Answer Kashiwara has developed some crystal theoretic methods for the Lie superalgebra $\mathfrak{q}(n)$. However, I think you should look at Khovanov, to get an idea of what it should look like. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9412786364555359, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/280478/what-is-the-maximum-value?answertab=active
# What is the Maximum Value? If $a, b, c, d, e$ and $f$ are non-negative real numbers such that $a + b + c + d + e + f = 1$, then what is the maximum value of $ab + bc + cd + de + ef$? - I'm pretty sure this should be $\frac{1}{4}$, similarly to simply maximizing $ab$ subject to the same constraint. – gnometorule Jan 17 at 4:46 @gnometorule agreed. – Rustyn Yazdanpour Jan 17 at 4:48 Yes, the answer is 1/4... still I'm a bit confused – user58452 Jan 17 at 5:26 1 My Approach : a + c + e + b + d + f = 1 So maximum value of (a + c + e) . (b + d + f) is (1/2).(1/2) i.e. (ab + bc + cd + de + ef) + (ad + af + cf + be) = (1/4) and now i am stuck...!! – user58452 Jan 17 at 5:28 As you said, $(a+c+e)(b+d+f) \le \frac{1}{4},$ from which $(ab + bc + cd + de + ef) + (ad + af + cf + be) \le \frac{1}{4}.$ Since all of our variables are non-negative, the second expression is also non-negative, which means that $(ab + bc + cd + de + ef) \le \frac{1}{4}$ as well. The maximum occurs when the second expression is $0$ and when $a+c+e = b+d+f,$ which is easy to achieve. – lyj Jan 17 at 5:34 ## 3 Answers Here's the full solution in case the comments weren't enough. Note that $(a+c+e)(b+d+f) = (a+c+e)(1-(a+c+e)) \le \frac{1}{4},$ with equality iff $a + c + e = \frac{1}{2}.$ Expanding the first expression above gives $(ab+bc+cd+de+ef)+(ad+af+be+cf) \le \frac{1}{4}.$ Since all of our variables are non-negative, $ad + af + be + cf \ge 0,$ or $-(ad+af+be+cf) \le 0,$ which gives $ab+bc+cd+de+ef \le \frac{1}{4} - (ad+af+be+cf) \le \frac{1}{4}.$ To achieve equality, we need the following: 1) $a + c + e = \frac{1}{2},$ our original condition. 2) $\frac{1}{4} - (ad + af + be + cf) = \frac{1}{4},\, \textrm{ i.e. } ad + af + be + cf = 0.$ So we have shown that the upper bound is $\frac{1}{4},$ and that we can achieve this upper bound. For example, let $a = \frac{1}{2} = b$ and $c = d = e = f = 0.$ Another example is $a = 0,\, b = \frac{1}{3},\, c = \frac{1}{2},\, d = \frac{1}{6}, e = 0,\, f = 0.$ - Note that lyj's comment pretty much answers this question. On top of that, we can achieve this maximum by taking $(a, b, c, d, e, f) = (0, 0, 11/32, 1/2, 5/32, 0)$. I.e., we finish showing the following. 1. The quantity in question has an upper bound $1/4$ (by lyj's argument). 2. The upper bound $1/4$ can be achieved. - I'm pretty sure the answer is $1/4$, but I can't prove it rigorously... We want to multiply the two largest numbers we can together, so simply letting $a=1/2$ and $b=1/2$ will do the trick. In fact if we let any of $b, c, d, e$ equal $1/2$, and let the number before and after it add to $1/2$, we will also get $1/4$. So for example we can let $e=1/2$, and also let $d=1/8$ and $f=3/8$, we will get a maximum value of $1/4$. -
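For readers who want a quick numerical confirmation of the bound, here is a small Python sketch (my own addition, not part of the original thread; variable names are mine). It evaluates the expression at the maximizers given in the answers and runs a random search over the constraint simplex.

```python
import random

def objective(a, b, c, d, e, f):
    return a*b + b*c + c*d + d*e + e*f

def random_simplex_point(dim=6):
    """Uniform point with non-negative coordinates summing to 1 (stick-breaking)."""
    cuts = sorted(random.random() for _ in range(dim - 1))
    points = [0.0] + cuts + [1.0]
    return [points[i + 1] - points[i] for i in range(dim)]

best = 0.0
for _ in range(200_000):
    best = max(best, objective(*random_simplex_point()))
print(f"largest value found by random search: {best:.4f}  (proved bound: 0.25)")

# The maximizers mentioned in the answers hit the bound exactly:
print(objective(0.5, 0.5, 0, 0, 0, 0))          # 0.25
print(objective(0, 1/3, 1/2, 1/6, 0, 0))        # 0.25
print(objective(0, 0, 11/32, 1/2, 5/32, 0))     # 0.25
```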
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 37, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9141248464584351, "perplexity_flag": "head"}
http://mathhelpforum.com/differential-geometry/180912-covering-spaces-r-n-homomorphism.html
# Thread:

1. ## Covering spaces, R^n, homomorphism

Hello all,

Let $A$ be a subset of $R^n$. Let $h : (A, a_0) \rightarrow (Y, y_0)$ be a continuous map. Show that if $h$ is extendable to a continuous map of $R^n$ into $Y$, then $h_*$ is the zero homomorphism (the trivial homomorphism that maps everything to the identity element).

Thanks

2. Hint: Let $[f]\in \pi_1(A,a_0)$. Show that $h\circ f$ is nullhomotopic.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8575324416160583, "perplexity_flag": "head"}
http://mathoverflow.net/questions/73346/sequences-of-squares-with-all-square-differences
## Background The following question was first asked by Alex Rice, who was thinking about small subsets $A\subset [1,\ldots , N]$ with lots of square differences. Certainly for any set $A$ the maximum number of square differences is going to be $\binom{|A|}{2}$. From the point of view of someone working in additive combinatorics, an infinite set of positive integers can't get much less substantial than the squares, and so it's natural to wonder if there are arbitrarily large sets $A$ inside the squares, all of whose differences are squares [edit: I apparently misunderstood the original motivation, see Alex's answer/comment below]. This question was asked of a few others, including Adrian Brunyate, Jacob Hicks and Nathan Walters before it was asked of me by Adrian in this form: Definition: We say that a sequence $(a_1, \ldots, a_n) \in \mathbf{Z}^n_{\ge 1}$ is a Super-$n$ if for all $1 \le i \le n$, $a_i$ is an integer square and for all $1 \le i < j \le n$, $a_j - a_i > 0$ is also an integer square. Clearly a Super-2 defines a Pythagorean triple. Perhaps less clearly, a Super-3 defines an Euler Brick, and is strongly related to the question of whether there is a perfect rational cuboid. Question 1: For which positive integers $n$ does there exist a Super-$n$? If the answer is yes to the above question, we may also ask the following: Question 2: For which positive integers $n$ do there exist infinitely many Super-$n$'s? One may note that the following problems are related to some problems already asked on MO about rational polytopes and sequences of squares http://mathoverflow.net/questions/72040/how-many-sequences-of-rational-squares-are-there-all-of-whose-differences-are-al http://mathoverflow.net/questions/71949/totally-rational-polytopes ## What seems to be known already It has been known for millennia that there are infinitely many Pythagorean Triples. Euler discovered in 1772 that there are infinitely many Super-$3$'s, and in fact he gave a parametrized family of them. None of us have been able to find a Super-4 (although I haven't been searching myself). ## The connection to algebraic geometry Definition: The Super-$n$-variety is the intersection of the following $\binom{n}{2}$ quadratic polynomials in projective space over $\mathbf{Q}$. $$d_1^2 = c_2^2 - c_1^2$$ $$\vdots$$ $$d_{\binom{n}{2}}^2 = c_{n}^2 - c_{n-1}^2$$ Clearly the Super-2 variety is a copy of $\mathbb{P}^1_{\mathbf{Q}}$. In Section 8 of the link given above for Euler's family of "Euler Bricks" we see that the Super-3 variety is birational to a singular K3 surface of Mordell-Weil rank 2. In this setting, one could say that Euler found a rational curve on this variety. It is also noted in the article that Narumiya and Shiga found a different rational curve on this variety. Question 2': Could there be rational curves on the Super-$n$ variety for all $n$? But perhaps (probably) this is way too much to ask. More generally, I'd like to know: Question 3: Is there any interesting geometry to the Super-$n$ variety for $n\ge 4$? In general this seems like an interesting problem, and one that people may have studied before, but perhaps in some guise that I'm not familiar with, so any input is appreciated. - 1 Observation: The positive differences of a Super-3 also lead to a Pythagorean triple. Thus so does any 3-set of a Super-n. 
Whereas the constructions in question 72040 (about rational square sequences, thanks for the plug) will create some Pythagorean triples from differences of certain subtriples (and eventually terminate), there may be a way to "invert" the process to generate infinitely many triples from a Super-n. Put another way, maybe the known constructions can be reorganized to provide answers to your and my questions. Gerhard "Ask Me About System Design" Paseman, 2011.08.21 – Gerhard Paseman Aug 21 2011 at 19:23 There's basically no way I could not plug your question in asking this question! I, too, feel like perhaps there's some way to exploit the constructions given as answers to your question, but I'm not sure how far that will go. – stankewicz Aug 21 2011 at 21:14 1 Correction/clarification re "K3 surface of Mordell-Weil rank 2": in general the M-W rank depends not only on the surface but also on the choice of elliptic fibration, and typically K3 surfaces have more than one such fibration (but still finitely many up to isomorphism). So more properly what that paper finds is a choice of fibration for which the M-W rank is 2. Once the rank is positive there are infinitely many rational curves on the surface, one for each element of the Mordell-Weil group of the fibration (and usually also other rational curves not of this form). – Noam D. Elkies Aug 22 2011 at 5:01 I am tempted to ask a new question about non trivial rational points on the surfaces in my answer related to perfect cuboids. Is this question too elementary/wildly known and will be closed ? (Haven't seen such simple models) – joro Aug 22 2011 at 12:34 Is the relaxation of allowing one difference to not be a square clear? Suppose such relaxed super-4 exist. – joro Aug 23 2011 at 13:10 ## 3 Answers The "Super-$n$" variety, call it $V_n$, seems to be of general type once $n \geq 4$. It probably has no nontrivial rational curves (where "trivial" means that it lies on a hyperplane $c_1=0$ or $c_j=c_k$ some distinct $j,k$; over ${\bf C}$ one must also exclude $c_j=0$ for $j>1$). For $n$ large enough this should follow from the Bombieri-Lang conjectures for the "Super-4" variety $V_4$. In general if a smooth variety is the "complete intersection" in some projective space ${\bf P}^{N-1}$ of hypersurfaces $P_i=0$ of degrees $d_1,\ldots,d_r$ then it is of general type iff $\sum_i d_i > N$. Here we have $N=(n^2+n)/2$ and $r=(n^2-n)/2$, with each $d_i=2$, so we would get general type once $n\geq 4$; our variety is not quite smooth but the singularities look mild enough not to change the result. A (very plausible but extremely hard) conjecture of Bombieri and Lang asserts that all the rational points on a variety $X$ of general type lie on a finite union $X_0$ of subvarieties of lower dimension. (This would vastly generalize Faltings' theorems on curves of genus $>1$ [Mordell's conjecture] and subvarieties of abelian varieties.) For complete intersections in ${\bf P}^{N-1}$, the following naïve but suggestive heuristic points in the same way: try the $\sim H^N$ points $(x_1:x_2:...:x_N)$ with integers $x_m$ such that $H \leq \max_m |x_m| < 2H$; for any choice of such $x_m$ we have $|P_i(\vec x)| \ll H^{d_i}$ for each $i$, and if we imagine these $r$ numbers $P_i(\vec x)$ are more-or-less randomly and independently distributed among integers of those sizes then the expected number of $\vec x$ where they're all zero is of order $H^{N-\sum_i d_i}$. 
This means that the general-type case is precisely when the exponent is negative, and thus that (summing over $H=1,2,4,8,16,\ldots$) the total number of rational points is finite. This heuristic cannot account for non-random rational points due to polynomial identities, but those are precisely the subvarieties that the Bombieri-Lang conjecture allows. It seems reasonable to guess that already for $n=4$ there are no nontrivial rational curves, and that the nontrivial part of $X_0$ is finite or even empty. Unfortunately, even assuming the B-L conjecture there is no known way to determine $X_0$. Nevertheless it may be possible to deduce that some $V_n$ has no nontrivial points under the assumption that B-L holds for $V_4$. The reason is that for $n > 4$ there are many maps from $V_n$ to $V_4$, obtained by choosing any $4$ of the $n$ variables $c_1,\ldots,c_n$ in order, and the corresponding six $d$'s. If all nontrivial points of $V_4$ were known to lie on a union $X_0$ of proper subvarieties, then any nontrivial point of $V_n$ would have to lie on the intersection of preimages of $X_0$ under $n \choose 4$ different maps, and whatever $X_0$ turns out to be, such an intersection ought to be trivial if $n$ is large enough. NB it would likely require some nontrivial [sic] work to make a proof of this even assuming the B-L conjecture for $V_4$, but such an analysis was carried out in a similar context in the famous paper L.Caporaso, J.Harris, and B.Mazur: Uniformity of rational points, J. Amer. Math. Soc. 10 #1 (1997), 1-45 and something like that should be possible here. - What a wonderful answer. I don't think I could have hoped for much more! – stankewicz Aug 22 2011 at 5:40 Well, I'm still holding out for a proof/counterexample of no Super-4. I am thinking of searching for distinct positive integers a,b,c such that each of ab, ac, and bc is 4 times a triangular number. Is this more specific system known to be insoluble? Gerhard "Ask Me About System Design" Paseman, 2011.08.21 – Gerhard Paseman Aug 22 2011 at 6:37 (I'll actually need more relations than those above to hold between a,b, and c to build a Super-4, but let us see if the basic 3 relations hold first.) Gerhard "Ask Me About System Design" Paseman, 2011.08.21 – Gerhard Paseman Aug 22 2011 at 6:44 @stankiewicz: Thanks! (Still note that some steps need to be filled in to actually prove general type.) @G.Paseman: You don't have to search very far... $(2,3,5)$ is the first example. But yes, that's a different problem, and not only because you dropped some squareness conditions but also because you've added a condition of integrality. – Noam D. Elkies Aug 22 2011 at 12:10 @Paseman: I agree that a proof/counterexample of the existence of a Super-4 is perhaps really what we as mathematicians should hold out for, but that for me (grad student going on the market) and for now, "Possibly nontrivial algebro-geometric work + Bombieri-Lang for $V_4$ implies only finitely many $n$ for which there exists a Super-$n$" is a pretty good answer, particularly when coupled with what's basically a wikipedia-ready tutorial on applying Bombieri-Lang. Moreover, I think that once some of the details are filled in, it will be easier to profitably re-examine this question. – stankewicz Aug 22 2011 at 14:12 
To clarify, this question arose when my adviser Neil Lyall and I were attempting to provide simple upper bounds on the size of the LARGEST subset of $[1,2,...,N]$ with NO square differences, for the purposes of an introduction to a talk about generalizations of the Sárközy-Furstenberg Theorem, which states that this quantity is $o(N)$. In particular, if $S$ is the set of squares, $A \subset [1,2,...,N]$ with $(A-A) \cap S = \emptyset$, and $a_1 < a_2< ... < a_k$ is any collection of non-negative integers such that $a_i-a_j \in S$ whenever $i > j$, then the sets $A+a_1$, $A+a_2,...A+a_k$ are all pairwise disjoint. Therefore, $|A| \leq (N+a_k)/k$. If such a collection were possible for every $k$, then this would immediately provide a remarkably (perhaps disturbingly) elementary proof of the Sárközy-Furstenberg Theorem that would not require any of the harmonic analysis or ergodic theory tools utilized in other proofs. Notice that the statement that such a collection of non-negative integers is possible for every $k$ is equivalent to the statement that such a collection of positive squares is possible for every $k$, as you can just translate the smallest element to $0$ and get a set of positive squares with one fewer element satisfying the desired property. Given that nobody has found a set of more than 3 positive squares (and hence no set of more than 4 non-negative integers) with this property, the above method can currently only show that the size of the largest subset of $[1,2,...,N]$ with no square differences is (asymptotically) less than $N/4$. - I realize that this probably should've been a comment as opposed to an "answer". I'm a bit new to this... – Alex Rice Aug 21 2011 at 21:16 Comments are limited to 600 characters here, which would not have sufficed to accommodate your contribution. – Noam D. Elkies Aug 23 2011 at 22:53 This is more of a comment. If I understand correctly, a perfect cuboid will give a Super-4 according to On Perfect Cuboids Are there four squares all pairs of which have square differences? For a perfect cuboid we could take the squares of $y_3 z$, $y_2 y_3$, $x_1 z$ and $x_1 y_3$. While wasting my time with perfect cuboids, I found 2 surfaces on which there are nontrivial rational points. The first might be for all perfect cuboids, the second is not for all. 
The surfaces are: $$x^{4} y^{2} z^{4} + x^{2} y^{4} z^{4} - 2 x^{4} y^{2} z^{2} - 2 x^{2} y^{4} z^{2} - 4 x^{2} y^{2} z^{4} + x^{4} y^{2} + x^{2} y^{4} - 8 x^{2} y^{2} z^{2} + x^{2} z^{4} + y^{2} z^{4} - 4 x^{2} y^{2} - 2 x^{2} z^{2} - 2 y^{2} z^{2} + x^{2} + y^{2} = 0$$ and $$x^{4} y^{3} z^{3} - x^{4} y^{2} z^{4} + x^{2} y^{4} z^{4} + 2 x^{4} y^{3} z + 2 x^{4} y^{2} z^{2} - 2 x^{2} y^{4} z^{2} - 2 x^{4} y z^{3} + 4 x^{2} y^{3} z^{3} - x^{4} y^{2} + x^{2} y^{4} - 2 x^{4} y z + 4 x^{2} y^{3} z - 4 x^{2} y z^{3} + 2 y^{3} z^{3} + x^{2} z^{4} - y^{2} z^{4} - 4 x^{2} y z + 2 y^{3} z - 2 x^{2} z^{2} + 2 y^{2} z^{2} - 2 y z^{3} + x^{2} - y^{2} - 2 y z =0$$ In machine readable form: ````x^4*y^2*z^4 + x^2*y^4*z^4 - 2*x^4*y^2*z^2 - 2*x^2*y^4*z^2 - 4*x^2*y^2*z^4 + x^4*y^2 + x^2*y^4 - 8*x^2*y^2*z^2 + x^2*z^4 + y^2*z^4 - 4*x^2*y^2 - 2*x^2*z^2 - 2*y^2*z^2 + x^2 + y^2 = 0 ```` and ````2*x^4*y^3*z^3 - x^4*y^2*z^4 + x^2*y^4*z^4 + 2*x^4*y^3*z + 2*x^4*y^2*z^2 - 2*x^2*y^4*z^2 - 2*x^4*y*z^3 + 4*x^2*y^3*z^3 - x^4*y^2 + x^2*y^4 - 2*x^4*y*z + 4*x^2*y^3*z - 4*x^2*y*z^3 + 2*y^3*z^3 + x^2*z^4 - y^2*z^4 - 4*x^2*y*z + 2*y^3*z - 2*x^2*z^2 + 2*y^2*z^2 - 2*y*z^3 + x^2 - y^2 - 2*y*z = 0 ```` - Actually as I read it (page 7, but page 13 of the pdf) it seems that Leech claims that if there were a perfect cuboid there would be a Super-4 – stankewicz Aug 22 2011 at 12:05 That said, your contribution is much appreciated. van Luijk's document is a nice find. – stankewicz Aug 22 2011 at 12:16 @stankewicz you are right, my mistake, fixed it. Thanks. – joro Aug 23 2011 at 13:14
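As a computational footnote (an editorial sketch, not from the thread; the function names and search limits are mine): a naive brute-force search for Super-$n$ sequences as defined in the question fits in a few lines of Python. It finds Super-2's immediately, turns up a Super-3 among squares up to $1000^2$ — for example $(520^2, 533^2, 925^2) = (270400, 284089, 855625)$, whose pairwise differences are $117^2$, $756^2$ and $765^2$ — and finds no Super-4 in that range, consistent with the discussion above.

```python
from itertools import combinations

def super_sequences(n, limit):
    """Yield increasing n-tuples of squares <= limit**2 whose pairwise
    differences are all nonzero perfect squares (Super-n sequences)."""
    squares = [k * k for k in range(1, limit + 1)]
    square_set = set(squares)

    def extend(prefix, start):
        if len(prefix) == n:
            yield tuple(prefix)
            return
        for i in range(start, len(squares)):
            s = squares[i]
            # s must differ from every earlier element by a perfect square.
            if all((s - a) in square_set for a in prefix):
                yield from extend(prefix + [s], i + 1)

    yield from extend([], 0)

# Super-2's correspond to Pythagorean triples, e.g. (9, 25) from (3, 4, 5).
print(list(super_sequences(2, 30))[:5])

# Super-3's exist but are sparse; one shows up among squares up to 1000**2,
# e.g. (270400, 284089, 855625) = (520**2, 533**2, 925**2).
for seq in super_sequences(3, 1000):
    print("Super-3 found:", seq)
    break

# The same naive search finds no Super-4 in this range.
print("Super-4 found below 10**6:", any(True for _ in super_sequences(4, 1000)))
```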
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 103, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9392457604408264, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/208/is-the-distance-between-the-sun-and-the-earth-increasing/211
# Is the distance between the sun and the earth increasing? M = mass of the sun, m = mass of the earth, r = distance between the earth and the sun. The sun is converting mass into energy by nuclear fusion. $F = \frac{GMm}{r^2} = \frac{mv^2}{r} \rightarrow r = \frac{GM}{v^2}$ $\Delta E = \Delta M c^2 = (M_{t} - M_{t+\Delta t}) c^2 \rightarrow \Delta M = \Delta E / c^2$ $\rightarrow \frac{\Delta r}{\Delta t} = \frac{G}{v^2 c^2}\cdot\frac{\Delta E}{\Delta t}$ Sun radiates $3.9 \times 10^{26}\ \mathrm{W} = \Delta E/\Delta t$ Velocity of the earth $v = 29.8\ \mathrm{km/s}$ There is nothing that is stopping the earth from moving with the same velocity so for centripetal force to balance gravitational force $r$ must change. Is $r$ increasing? ($\Delta r/ \Delta t = 3.26070717 \times 10^{-10}\ \mathrm{m/s}$) - 2 The sun is converting mass into energy by nuclear fusion, not fission – Eran Galperin Nov 4 '10 at 9:42 Oops my biggest blunder! Thanks! @Eran Galperin – Pratik Deoghare Nov 4 '10 at 9:52 As written, the prediction is for Earth to get closer to the Sun because $\Delta E$ is negative, not positive. – Mark Eichenlaub Nov 4 '10 at 10:40 @Mark Eichenlaub I have written $M_{t} - M_{t+\Delta t}$ and not $M_{t+\Delta t} - M_{t}$. – Pratik Deoghare Nov 4 '10 at 11:11 Okay, but in that case the interpretation of $\Delta r/\Delta t$ is backwards, and a positive $\Delta r/\Delta t$ translates to a decreasing Earth-Sun distance. Just look at your proportionality relation. If $M$ is going down as time increases, so is $r$. – Mark Eichenlaub Nov 4 '10 at 11:46 ## 10 Answers I think the reasoning has an error. It assumes $v$ is constant, but instead we ought to assume the angular momentum is constant. By dimensional analysis that leads to $r \propto \frac{L^2}{GM}$ so as $M$ decreases, $r$ increases (the original post had $r \propto M$, not $r \propto 1/M$). On the other hand, assuming a circular orbit seems dubious. As the other commenters said, this effect is minute. A significant effect on the orbit of the moon around the earth is tidal evolution, which does actually push the moon further away. See http://en.wikipedia.org/wiki/Orbit_of_the_Moon#Tidal_evolution - $r = \frac{k(mvr)^2}{GM} \rightarrow r = \frac{GM}{m^2v^2k}$ ? – Pratik Deoghare Nov 4 '10 at 14:02 2 @TheMachineCharmer: but $v$ is also a function of $r$, so you can't conclude that $r \propto M$. You could write $rv^2 = GM/m^2k$, and since the right side is independent of $r$, that tells you that $rv^2 \propto M$ - but $rv^2 \sim \frac{1}{r}$ because angular momentum is conserved. – David Zaslavsky♦ Nov 4 '10 at 17:52 Such a small magnitude makes this process negligible among other factors; indeed this result literally means that you don't need to care about this process until you build an extremely accurate theory of the full solar system dynamics, to an extent that is probably unreachable due to deterministic chaos. - The Sun is also losing mass due to the solar wind. Again the fraction of mass lost is very small compared to the mass of the Sun, so the effect is very small. There are these relevant papers that I think you will find interesting: Orbital effects of Sun's mass loss and the Earth's fate Astrometric Solar-System Anomalies - That is basically correct; however, that change is not very significant. The orbit of the planets in the solar system is chaotic over long periods of time (2 - 230 million years according to this wikipedia entry), and this effect is relatively minor. 
Other causes for change of orbit include gravitational pull from other planets, collisions with asteroids, solar wind and other variables. - I think your reasoning is correct, but the values involved are very small. In one year r will increase by 1mm, so in 1 billion years it will have increased by 1000 km or by about 0.01% - 5 Agreed... Now just change "you're" to "your" please, it's really bugging me. ;) – Noldorin Nov 4 '10 at 14:43 Downvoted because the reasoning actually is not correct. – Mark Eichenlaub Dec 4 '10 at 20:28 @Noldorin: Yeah, I'm a "grammar grump" too. – Mike Dunlavey Oct 21 '11 at 14:13 The distance is increasing due to friction from the tides. Besides the moon, the sun has a tidal component for the earth's oceans, and when that crashes into continents the energy absorbed comes from the potential energy of the sun-earth system. I am not sure how this compares to the distance loss due to the mass and energy radiation of the sun you mentioned. - This is a different mechanism than the one the OP proposes (stellar mass loss through fusion). Anyone care to do a BOTE calculation of the magnitude of the tidal angular momentum transfer? – dmckee♦ Nov 18 '10 at 20:35 aghhhhhh ... (runs screaming away) – ja72 Nov 19 '10 at 7:00 This is an old question, but I thought it might be worth chewing on a bit. The loss of mass due to fusion in the sun is piffle. The Earth’s orbital radius will change more likely due to interactions with the other planets. The first order perturbations in the orbital elements of the Earth are its eccentricity and right ascension. The change in the orbital radius or the semi-major axis distance is higher order. However, that can occur and there is an overall orbital drift in planetary orbits which is chaotic in nature. The Earth is in a near 1/12 orbital resonance with Jupiter. The Earth may over the next billion years shift away from this and enter into a near 1/11 orbital resonance with Jupiter, where our orbital radius is about 1.06AU. This early Earth may have been at .83AU relative to today’s orbit very early on. This is an orbital resonance of about 16 with Jupiter. The sun had a power output of 70% of current power. If you factor these together you get a solar irradiance on the Earth comparable to today. If the Earth had the same orbital radius as today, even factoring in a $CO_2$ atmosphere, temperatures would be $30C$ cooler than today. Curiously if Earth does drift outwards this delays the solar death of the Earth. If Earth remains at the current radius temperatures will become intolerable for complex life in 500 million years. Some numerical analyses of this I have run. The interaction with Jupiter results in a periodic oscillation, and a computation over a longer period of time results in a drift which pushes the Earth outwards on average by about $4.2km/sec$. $\bf[addendum]$ This is in part due to alpha Centuri’s commets. One big uncertainty is with understanding the early Earth. I did some homework on this and at 1AU the warmest the Earth could have been is about -25C, with various estimates. Of course this is my interpretation of geo-modelling. The orbital dynamics is based on computer modeling. This is a general plot of 45,000 years. I should have posted this image. This illustrates the “signal” in these long runs, where the low frequency stuff has the largest amplitude. This is the main signal for an outwards drift. This does extend the future for life on Earth. 
If this planet stays at 1AU the prognosis becomes grim about 500 million years from now. The planet will start to reach temperatures 30C higher than today and complex life will begin to die out, and further in a billion years oceans will start to boil. That will really foul things up. However, with the outwards drift these time frames are almost doubled. The luminosity increase in the sun will accelerate faster in time and overtake this. The outwards range on this is 2.5 billion years before the oceans start boiling. Once the oceans start boiling this planet will transform into a 400C version of Venus. So I figure complex life on this planet, life which emerged with the Cambrian revolution 550 million years ago, might have a good 750 to maybe 1000 million years ahead of it. When I first read about the future time frame of life on Earth my mind instantly questioned what happened going back in time. It implies a very cold early Earth; one where it seems the development of life would have been far more difficult. - That's very interesting. How solid is the orbital mechanics for a significantly smaller early earth solar distance? The faint young sun has been a problem for planetary atmospheres/climate, as you say adding CO2, and methane isn't enough (although changing the planet's albedo could also have an effect). Currently the planet reflects roughly 30% of sunlight, but under early earth, with a radically different atmosphere, and very little land we don't have a good handle on the albedo. – Omega Centauri Feb 5 '11 at 15:53 Right now? No, it's decreasing, until we get to perihelion (91.4M miles, near January 3rd), and then it'll start increasing again 'til aphelion (94.5M miles, near July 4th). (As Mark Eichenlaub pointed out -- Earth's orbit is not circular) - Definitely Yes. The Earth is moving away from the Sun at a rate ($0.57H_{0}$) that cannot be explained by any of the current official models. Once again I point to a MODEL in which the data is consistent with the theory. This model has been public since 2002 (and my personal knowledge since 1982) in anticipation of the factual finding that the Earth was moving away from the Sun. It is very problematic to place the origin of life in a cold world. With this model, life started on an Earth full of energy. I think that someday this model will be studied with due attention. I am not the author of the model. I'm just a messenger (preaching in the desert?). by G. A. Krasinsky and V. A. Brumberg, 2004 Secular Increase of astronomical unit from analysis of the major planet motions, and Its Interpretation $\frac{dAU}{dt}=15\pm4\: m/cy$ at present there is no satisfactory explanation of the detected secular increase of AU WeiJia Zhang, ZhengBin Li and Yang Lei, 2010 Experimental measurement of growth patterns on fossil corals: Secular variation in ancient Earth-Sun distances both the modern and ancient leaving rates could be measured with high precision, and it was found that the Earth has been leaving the Sun over the past 0.53 billion years. The Earth’s semi-major axis was 146 million kilometers at the beginning of the Phanerozoic Eon, equating to 97.6% of its current value. Measured modern leaving rates are 5–14 m/cy, whereas the ancient rates were much higher. Experimental results indicate a special expansion with an average expansion coefficient of $0.57H_{0}$ -
Ideally you would have enough mass loss, so that a planets radius would increase at just that rate that kept the stars luminosity divided by r squared constant, i.e. if that were the case the planet would stay within the habitable zone, even as the star brightened due to stellar evolution. Alas the solar wind is much too weak to accomplish the task. But we should be moving further out in any case. - 1 Could someone edit this post ! It is illegible (one very long monospaced line) ? – Frédéric Grosshans Nov 17 '10 at 17:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 37, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9434338808059692, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/46867/list
## Return to Answer 4 edited body There is a general version of this question which is known as "the rumpled dollar problem". It was posed by V.I. Arnold at his seminar in 1956. It appears as the very first problem in "Arnold's Problems": Is it possible to increase the perimeter of a rectangle by a sequence of foldings and unfoldings? According to the same source (p. 182), Alexei Tarasov has shown that a rectangle admits a realizable folding with arbitrarily large perimeter. A realizable folding means that it could be realized in such a way as if the rectangle were made of infinitely thin but absolutely nontensile paper. Thus, a folding is a map $f:B\to\mathbb R^2$ which is isometric on every polygon of some subdivision of the rectangle $B$. Moreover, the folding $f$ is realizable as a piecewise isometric homotopy which, in turn, can be approximated by some isotopy of space (which corresponds to the impossibility of self-intersection of a paper sheet during the folding process). Have a look at • A. Tarasov, Solution of Arnold’s “folded rouble” problem. (in Russian) Chebyshevskii Sb. 5 (2004), 174–187. • I. Yashenko, Make your dollar bigger now!!! Math. Intelligencer 20 (1998), no. 2, 38–40. A history of the problem is also briefly discussed in Tabachnikov's review of "Arnold's Problems": It is interesting that the problem was solved by origami practitioners way before it was posed (at least, in 1797, in the Japanese origami book “Senbazuru Orikata”). 3 Reference added; [made Community Wiki] There is a general version of this question which is known as "the rumpled dollar problem". It was posed by V.I. Arnold at his seminar in 1956. It appears as the very first problem in "Arnold's Problems": Is it possible to increase the perimeter of a rectangle by a sequence of foldings and unfoldings? According to the same sourse (p. 182), Alexei Tarasov has shown that a rectangle admits a realizable folding with arbitrarily large perimeter. A realizable folding means that it could be realized in such a way as if the rectangle were made of infinitely thin but absolutely nontensile paper. Thus, a folding is a map $f:B\to\mathbb R^2$ which is isometric on every polygon of some subdivision of the rectangle $B$. Moreover, the folding $f$ is realizable as a piecewise isometric homotopy which, in turn, can be approximated by some isotopy of space (which corresponds to the impossibility of self-intersection of a paper sheet during the folding process). Have a look at • A. Tarasov, Solution of Arnold’s “folded rouble” problem. (in Russian) Chebyshevskii Sb. 5 (2004), 174–187. • I. Yashenko, Make your dollar bigger now!!! Math. Intelligencer 20 (1998), no. 2, 38–40. A history of the problem is also briefly discussed in Tabachnikov's review of "Arnold's Problems": It is interesting that the problem was solved by origami practitioners way before it was posed (at least, in 1797, in the Japanese origami book “Senbazuru Orikata”). 2 added 58 characters in body; deleted 5 characters in body There is a general version of this question which is known as "the rumpled dollar problem". It was posed by V.I. Arnold at his seminar in 1956 (see "Arnold's Problems", p.2, problem 1956-1). Is it possible to increase the perimeter of a rectangle by a sequence of foldings and unfoldings? According to the same sourse (p. 182), Alexei Tarasov has shown that a rectangle admits a realizable folding with arbitrarily large perimeter. 
A realizable folding means that it could be realized in such a way as if the rectangle were made of infinitely thin but absolutely nontensile paper. Thus, a folding is a map $f:B\to\mathbb R^2$ which is isometric on every polygon of some subdivision of the rectangle $B$. Moreover, the folding $f$ is realizable as a piecewise isometric homotopy which, in turn, can be approximated by some isotopy of space (which corresponds to the impossibility of self-intersection of a paper sheet during the folding process). 1 There is a general version of this question which is also known as "the rumpled dollar problem". It was asked by V.I. Arnold (see Arnold's Problems) Is it possible to increase the perimeter of a rectangle by a sequence of foldings and unfoldings? According to the same sourse, Alexei Tarasov has shown that a rectangle admits a realizable folding with arbitrarily large perimeter. A realizable folding means that it could be realized in such a way as if the rectangle were made of infinitely thin but absolutely nontensile paper. Thus, a folding is a map $f:B\to\mathbb R^2$ which is isometric on every polygon of some subdivision of the rectangle $B$. Moreover, the folding $f$ is realizable as a piecewise isometric homotopy which, in turn, can be approximated by some isotopy of space (which corresponds to the impossibility of self-intersection of a paper sheet during the folding process).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9607831239700317, "perplexity_flag": "middle"}
http://www.chemeurope.com/en/encyclopedia/Drag_%28physics%29.html
# Drag (physics)

In fluid dynamics, drag (sometimes called resistance) is the force that resists the movement of a solid object through a fluid (a liquid or gas). Drag is made up of friction forces, which act in a direction parallel to the object's surface (primarily along its sides, as friction forces at the front and back cancel themselves out), plus pressure forces, which act in a direction perpendicular to the object's surface. For a solid object moving through a fluid or gas, the drag is the sum of all the aerodynamic or hydrodynamic forces in the direction of the external fluid flow. (Forces perpendicular to this direction are considered lift). It therefore acts to oppose the motion of the object, and in a powered vehicle it is overcome by thrust. In astrodynamics, depending on the situation, atmospheric drag can be regarded as an inefficiency requiring the expense of additional energy during launch of the space object, or as a bonus simplifying return from orbit. Types of drag are generally divided into three categories: parasitic drag, lift-induced drag, and wave drag. Parasitic drag includes form drag, skin friction, and interference drag. Lift-induced drag is only relevant when wings or a lifting body are present, and is therefore usually discussed only in the aviation perspective of drag. Wave drag occurs when a solid object is moving through a fluid at or near the speed of sound in that fluid. The overall drag of an object is characterized by a dimensionless number called the drag coefficient, and is calculated using the drag equation. Assuming a constant drag coefficient, drag will vary as the square of velocity. Thus, the resultant power needed to overcome this drag will vary as the cube of velocity. The standard equation for drag is one half the coefficient of drag multiplied by the fluid density, the cross sectional area of the specified item, and the square of the velocity. Wind resistance or air resistance is a layman's term used to describe drag. Its use is often vague, and is usually used in a relative sense (e.g., a badminton shuttlecock has more wind resistance than a squash ball).

## Stokes's drag

The equation for viscous resistance or linear drag is appropriate for small objects or particles moving through a fluid at relatively slow speeds. In this case, the force of drag is approximately proportional to velocity, but opposite in direction. [1] The equation for viscous resistance is: $\mathbf{F}_d = - b \mathbf{v} \,$ where: b is a constant that depends on the properties of the fluid and the dimensions of the object, and v is the velocity of the object. When an object falls from rest, its velocity will be $v(t) = \frac{mg}{b}\left(1-e^{-bt/m}\right)$ which asymptotically approaches the terminal velocity $v_t = mg/b$. For a given b, heavier objects fall faster. For the special case of small spherical objects moving slowly through a viscous fluid (and thus at small Reynolds number), George Gabriel Stokes derived an expression for the drag constant, $b = 6 \pi \eta r\,$ where: r is the Stokes radius of the particle, and η is the fluid viscosity. For example, consider a small sphere with radius r = 0.5 micrometre (diameter = 1.0 µm) moving through water at a velocity v of 10 µm/s.
Using $10^{-3}$ Pa·s as the dynamic viscosity of water in SI units, we find a drag force of about 0.09 pN. This is about the drag force that a bacterium experiences as it swims through water.

## Drag at high velocity

The drag equation calculates the force experienced by an object moving through a fluid at relatively large velocity, also called quadratic drag. The equation is attributed to Lord Rayleigh, who originally used $L^2$ in place of $A$ (L being some length). The force on a moving object due to a fluid is: $\mathbf{F}_d= -{1 \over 2} \rho v^2 A C_d \mathbf{\hat v}$ (see derivation) where $F_d$ is the force of drag, ρ is the density of the fluid (Note that for the Earth's atmosphere, the density can be found using the barometric formula. It is 1.293 kg/m³ at 0°C and 1 atmosphere.), v is the speed of the object relative to the fluid, A is the reference area, $C_d$ is the drag coefficient (a dimensionless constant, e.g. 0.25 to 0.45 for a car), and $\mathbf{\hat v}$ is the unit vector indicating the direction of the velocity (the negative sign indicating the drag is opposite to that of velocity). The reference area A is related to, but not exactly equal to, the area of the projection of the object on a plane perpendicular to the direction of motion (i.e., cross sectional area). Sometimes different reference areas are given for the same object, in which case a drag coefficient corresponding to each of these different areas must be given. The reference for a wing would be the plane area rather than the frontal area.

### Power

The power required to overcome the aerodynamic drag is given by: $P_d = \mathbf{F}_d \cdot \mathbf{v} = {1 \over 2} \rho v^3 A C_d.$ Note that the power needed to push an object through a fluid increases as the cube of the velocity. A car cruising on a highway at 50 mph (80 km/h) may require only 10 horsepower (7.5 kW) to overcome air drag, but that same car at 100 mph (160 km/h) requires 80 hp (60 kW). With a doubling of speed the drag (force) quadruples per the formula. Exerting four times the force over a fixed distance produces four times as much work. At twice the speed the work (resulting in displacement over a fixed distance) is done twice as fast. Since power is the rate of doing work, four times the work done in half the time requires eight times the power. It should be emphasized here that the drag equation is an approximation, and does not necessarily give a close approximation in every instance. Thus one should be careful when making assumptions using these equations.

### Velocity of falling object

Main article: Terminal velocity

The velocity as a function of time for an object falling through a non-dense medium is roughly given by a function involving a hyperbolic tangent: $v(t) = \sqrt{ \frac{2mg}{\rho A C_d} } \tanh \left(t \sqrt{\frac{g \rho C_d A}{2 m}} \right) \,$ In other words, velocity asymptotically approaches a maximum value called the terminal velocity: $v_{t} = \sqrt{ \frac{2mg}{\rho A C_d} } \,$ With all else (gravitational acceleration, density, cross-sectional area, drag constant, etc.) being equal, heavier objects fall faster. For a potato-shaped object of average diameter d and of density $\rho_{obj}$, terminal velocity is about $v_{t} = \sqrt{ gd \frac{ \rho_{obj} }{\rho} } \,$ For objects of water-like density (raindrops, hail, live objects - animals, birds, insects, etc.)
falling in air near the surface of the Earth at sea level, terminal velocity is roughly equal to $v_{t} = 90 \sqrt{ d }$ (with d in metres and $v_t$ in m/s). For example, for a human body (d~0.6 m) $v_t$ ~70 m/s, for a small animal like a cat (d~0.2 m) $v_t$ ~40 m/s, for a small bird (d~0.05 m) $v_t$ ~20 m/s, for an insect (d~0.01 m) $v_t$ ~9 m/s, for a fog droplet (d~0.0001 m) $v_t$ ~0.9 m/s, for pollen or bacteria (d~0.00001 m) $v_t$ ~0.3 m/s and so on. Actual terminal velocity for very small objects (pollen, etc.) is even smaller due to the viscosity of air. Terminal velocity is higher for larger creatures, and falls are thus more deadly for them. A creature such as a mouse falling at its terminal velocity is much more likely to survive impact with the ground than a human falling at its terminal velocity. A small animal such as a cricket impacting at its terminal velocity will probably be unharmed.

## See also

• Ram pressure
• Parasitic drag
• Added mass
• Angle of attack
• Drag-resistant aerospike
• Gravity drag
• Stall (flight)
• Terminal velocity
• Boundary layer
• Coanda effect
• Drag coefficient
• Reynolds number
• Stokes' law

## References

• Serway, Raymond A.; Jewett, John W. (2004). Physics for Scientists and Engineers (6th ed.). Brooks/Cole. ISBN 0-534-40842-7.
• Tipler, Paul (2004). Physics for Scientists and Engineers: Mechanics, Oscillations and Waves, Thermodynamics (5th ed.). W. H. Freeman. ISBN 0-7167-0809-4.
• Huntley, H. E. (1967). Dimensional Analysis. Dover. LOC 67-17978.
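The worked examples above are easy to reproduce numerically. The following Python sketch is an editorial addition, not part of the encyclopedia entry; the air density and g are common round values chosen here (1.2 kg/m³ and 9.8 m/s²), not figures taken from the article.

```python
import math

# Stokes drag on the 1-micron sphere from the example: F = 6*pi*eta*r*v
eta = 1.0e-3      # Pa*s, dynamic viscosity of water
r   = 0.5e-6      # m, sphere radius
v   = 10e-6       # m/s, speed
F_stokes = 6 * math.pi * eta * r * v
print(f"Stokes drag: {F_stokes:.2e} N (~{F_stokes / 1e-12:.2f} pN)")   # ~0.09 pN

# Terminal velocity of a water-density "potato" of diameter d falling in air:
# v_t = sqrt(g * d * rho_obj / rho_air), which is roughly 90*sqrt(d).
g, rho_obj, rho_air = 9.8, 1000.0, 1.2
for label, d in [("human", 0.6), ("cat", 0.2), ("small bird", 0.05),
                 ("insect", 0.01), ("fog droplet", 1e-4)]:
    v_t = math.sqrt(g * d * rho_obj / rho_air)
    print(f"{label:12s} d = {d:g} m   v_t ~ {v_t:.1f} m/s")
```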
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 12, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.890863835811615, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?p=4240051
## electromagnetic hamiltonian factor of 1/c question

I often see the EM Hamiltonian written as $$H=\frac1{2m}\left(\vec p-\frac ec\vec A\right)^2+e\phi,$$ but this confuses me because it doesn't seem to have the right units. Shouldn't it just be $$H=\frac1{2m}\left(\vec p-e\vec A\right)^2+e\phi,$$ since the vector potential has units of momentum per unit charge? And if so, why do so many authors put in the factor of 1/c?

You're missing the Hamiltonian for the EM field. It's obvious the 1/c factor comes from the convention used to describe the 4-potential, either as $A^0=\phi/c$ or just $\phi$. If the second option is chosen, you get the 1/c inside the brackets.

Quote by dextercioby: You're missing the Hamiltonian for the EM field. It's obvious the 1/c factor comes from the convention used to describe the 4-potential, either as $A^0=\phi/c$ or just $\phi$. If the second option is chosen, you get the 1/c inside the brackets.

Wow, thanks for pointing out how obvious it was to you! But I'm not convinced you're correct. I didn't write the Hamiltonian in terms of the 4-potential, in which case I agree a factor of c would be required; I wrote it (and derived it) in terms of the ordinary magnetic vector potential and the scalar potential, just as it's written here. It's very easy to check that without the factor of c each term has units of energy. With the factor of c, the vector potential term has units of energy over velocity.

This is just a convenient choice of definition of ##\mathbf A##, introduced already in classical electromagnetic theory. In this definition, the electromagnetic force in the field ##\mathbf E,\ \mathbf B = \nabla \times \mathbf A## is given by $$\mathbf F = q\mathbf E + q \frac{\mathbf v}{c}\times \mathbf B.$$ Then E and B have the same units, and the velocity v always appears in the company of the speed of light ##c##, as the ratio ##\frac{\mathbf v}{c}##. This has practical advantages in relativistic theory. For example, it is very convenient when describing the motion of a particle in an external EM wave. In this convention, E and B usually have magnitudes of the same order, so the size of the magnetic force term is easily estimated from the value of v/c. Low-velocity approximations are also best formulated in terms of v/c.

Unfortunately, in electromagnetism there are still (at least) three systems of units in use. The oldest are the Gaussian units, where the Lagrangian reads $$\mathcal{L}=-\frac{1}{16 \pi} F_{\mu \nu} F^{\mu \nu} - \frac{1}{c} j_{\mu} A^{\mu},$$ where $(A^{\mu})=(c \Phi,\vec{A})$ is the four-vector potential of the electromagnetic field $F_{\mu \nu} =\partial_{\mu} A_{\nu}-\partial_{\nu} A_{\mu}$, and $j^{\mu}=(c \rho,\vec{j})$ the four-dimensional current density. I've used the west-coast convention $(\eta_{\mu \nu})=\text{diag}(1,-1,-1,-1)$ and the four-vector $(x^{\mu})=(c t,\vec{x})$, keeping all factors of $c$. Because of the "irrational" factor $1/(4 \pi)$ in front of the kinetic term of the gauge field, this system of units is called the irrational CGS system (CGS standing for centimeters, grams, seconds, which form the basic units in this system). Another CGS system just differs by this factor $1/(4 \pi)$.
This is the rationalized Heaviside-Lorentz system of units, usually used in relativistic quantum field theory and thus in theoretical high-energy physics. It has the advantage of putting the factors $1/(4 \pi)$ where they belong and of reflecting the physical dimensions of the quantities best. Of course, electromagnetics is relativistic, and thus this system of units is the most natural one. Here, the Lagrangian reads $$\mathcal{L}=-\frac{1}{4} F_{\mu \nu} F^{\mu \nu}-\frac{1}{c} j_{\mu} A^{\mu}.$$ From this Lagrangian it follows that the force on a point charge without magnetic moment is given by $$\vec{F}=q \left (\vec{E}+\frac{\vec{v}}{c} \times \vec{B} \right ).$$

The official system of units, called SI (for Système International), in use in experimental physics and engineering all over the world, is tailor-made for practical purposes and provides well-defined, accurate realizations of the units. In theoretical electromagnetics, on the other hand, it's a disease, if you ask me, because the beautiful Lorentz symmetry of the relativistic theory is spoiled. Of course, there is no problem in principle with using it in theory as well, but then you always get questions like: what are $\epsilon_0$ and $\mu_0$? The answer is that they are conversion factors to transform from the SI units to the more natural Gauß or Heaviside-Lorentz units. Also, the SI adds a fourth base unit to the three mechanical units (the SI, as far as mechanics is concerned, is an MKS system, using metre, kilogram, second), namely the ampere for the electric current. The Lorentz force in these units reads $$\vec{F}=q \left (\vec{E}+\vec{v} \times \vec{B} \right).$$ The only universal physical constant in electromagnetics is the velocity of light, $c$, and it's related to the conversion factors of the SI by $$c=\frac{1}{\sqrt{\mu_0 \epsilon_0}}.$$ The Lagrangian reads $$\mathcal{L}=-\frac{1}{4 \mu_0} F_{\mu \nu} F^{\mu \nu} -j_{\mu} A^{\mu}.$$
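The unit bookkeeping behind the original question can also be checked mechanically. The snippet below is a small, self-contained dimension tracker (a sketch written for this page, not any poster's code; the helper names are made up). It confirms that with the SI-style force law $F=q(E+v\times B)$ the combination $e\vec A$ already carries units of momentum, while with the Gaussian-style law $F=q(E+\tfrac{v}{c}\times B)$ it is $e\vec A/c$ that does, which is exactly why the $1/c$ appears inside the bracket in that convention.

```python
# Dimensions are dicts mapping base units {kg, m, s, C} to integer exponents.
def d_mul(*dims):
    out = {}
    for d in dims:
        for unit, power in d.items():
            out[unit] = out.get(unit, 0) + power
    return {unit: power for unit, power in out.items() if power != 0}

def d_inv(d):
    return {unit: -power for unit, power in d.items()}

KG, M, S, C = {"kg": 1}, {"m": 1}, {"s": 1}, {"C": 1}
velocity = d_mul(M, d_inv(S))
force    = d_mul(KG, M, d_inv(S), d_inv(S))
momentum = d_mul(KG, velocity)
energy   = d_mul(force, M)

# SI-style convention: F = q E + q v x B,  B = curl A
E_si = d_mul(force, d_inv(C))          # [E] = force / charge
B_si = d_mul(E_si, d_inv(velocity))    # [B] = [E] / [v]
A_si = d_mul(B_si, M)                  # [A] = [B] * length
print("[e A]   SI-style      :", d_mul(C, A_si),
      "momentum?", d_mul(C, A_si) == momentum)

# Gaussian-style convention: F = q E + q (v/c) x B, so [B] = [E]
B_g = d_mul(force, d_inv(C))
A_g = d_mul(B_g, M)
print("[e A]   Gaussian-style:", d_mul(C, A_g),
      "energy?  ", d_mul(C, A_g) == energy)
print("[e A/c] Gaussian-style:", d_mul(C, A_g, d_inv(velocity)),
      "momentum?", d_mul(C, A_g, d_inv(velocity)) == momentum)
```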
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9212197065353394, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-applied-math/25500-simple-harmonic-motion-print.html
# simple harmonic motion Printable View • January 3rd 2008, 07:37 AM franklina simple harmonic motion a bungee jumper falls a total distance of 48 metres, of which the first 20 metres are free fall under gravity alone before the string becomes taut. find the total time she takes to the lowest point of her jump, and the greatest speed she attains. • January 3rd 2008, 12:08 PM topsquark Quote: Originally Posted by franklina a bungee jumper falls a total distance of 48 metres, of which the first 20 metres are free fall under gravity alone before the string becomes taut. find the total time she takes to the lowest point of her jump, and the greatest speed she attains. She falls 20 m before the bungee starts to slow her down, so she takes $t = \sqrt{\frac{2d}{g}} = 2.02031~s$ to fall this distance. The bungee slows her down at this point, so the fastest she is moving is $v = \sqrt{2gd} = 19.799~m/s$. (I can show you how do derive these formulas if you like.) Now, she starts out at 20 m with her highest speed and ends at 48 m with a speed of 0. This is going to represent 1/4 of the cycle of a cosine function. This cosine function takes the form: $y = y_0 + A~cos(\omega t)$ where $y_0 = 20~m$, A is the amplitude of the "wave" which is 48 m - 20 m = 28 m, and $\omega$ is the angular frequency of the oscillation of her motion on the bungee. The amount of time it takes for one period of the motion is given by $\omega T = 2 \pi$, or $T = \frac{2 \pi}{\omega}$ Now, $\omega$ is given by the spring equation by $\omega = \sqrt{\frac{k}{m}}$ where k is the spring constant of the bungee cord and m is the mass of the diver. As we are given neither of these data we cannot finish the problem. We can, of course, give it in its form depending on k and m, though. The time it takes for the diver to fall from where the bungee starts to retard her motion to her lowest point is $t = \frac{T}{4} = \frac{\pi}{2}\sqrt{\frac{m}{k}}$ So the total time for the fall will be $2.02031 + \frac{\pi}{2}\sqrt{\frac{m}{k}}$ seconds. -Dan • January 3rd 2008, 02:29 PM franklina shm question thank you very much for attempting the question. when the string becomes taut, i think that this just means that its tension is greater than 0, i.e. there is still time while the resultant force is downward and thus the speed is increasing your answer to the second question is based on assumption that the equilibrium position is at 20m rather than somewhere between 20 and 48. • January 3rd 2008, 03:07 PM JaneBennet You don’t need to calculate the tension in the string! You just need to calculate the deceleration for the second part of her jump. The first part is straight forward. Use $s=ut+\frac{1}{2}\,at^2$ to find the time of her freefall: $20=0t+\frac{1}{2}\,(10)t^2\ \Rightarrow\ t=2$ (taking $g=10\ \textrm{m}\,\textrm{s}^{-2}$). Then use $v=u+at$ to find her speed at the end of the freefall (this will also be the greatest speed she attains): $v=0+(10)(2)=20\ \textrm{m}\,\textrm{s}^{-1}$. For the second part, first find her deceleration using $v^2=u^2+2as$: $0^2=20^2+2a(48-20)\ \Rightarrow\ a=-\frac{50}{7}$. Now use $s=ut+\frac{1}{2}\,at^2$ again to find the time for the second part of her descent: $28=20t+\frac{1}{2}\cdot\left(-\frac{50}{7}\right)t^2$. This can be arranged into the quadratic equation $25t^2-140t+196=0\ \Rightarrow\ (5t-14)^2=0\ \Rightarrow\ t=2.8$. Hence the total time of her fall is 2 + 2.8 = 4.8 seconds. 
• January 3rd 2008, 04:02 PM topsquark Quote: Originally Posted by JaneBennet For the second part, first find her deceleration using $v^2=u^2+2as$ Unfortunately this assumes that her deceleration is constant. A bungee acts like a spring in that the tension is proportional to the distance the bungee is stretched, so $a \propto x$ meaning acceleration is not constant. -Dan • January 3rd 2008, 04:27 PM topsquark Quote: Originally Posted by franklina thank you very much for attempting the question. when the string becomes taut, i think that this just means that its tension is greater than 0, i.e. there is still time while the resultant force is downward and thus the speed is increasing your answer to the second question is based on assumption that the equilibrium position is at 20m rather than somewhere between 20 and 48. The speed of the diver can't be increasing after the bungee starts to stretch because the force the bungee exerts on the diver is upward, whereas the velocity of the diver is downward. Yes, 20 m is the equilibrium height since this is the point on the oscillation that has the greatest speed. -Dan • January 4th 2008, 01:49 AM franklina unfortunately, again yu are wrong, though you are correct that there is a force (tension) acting in the opposite direction to the string, since tension is proportional to displacement from where the tension starts to act (in this case 20m) there must be a point at which the displacement is small enough that the tension will be smaller than the weight force, therefore, when the string becomes taut, for some length of time afterward, the jumper will still be accelerating downwards (albeit with a decreasing acceleration). • January 4th 2008, 04:40 AM topsquark Quote: Originally Posted by franklina unfortunately, again yu are wrong, though you are correct that there is a force (tension) acting in the opposite direction to the string, since tension is proportional to displacement from where the tension starts to act (in this case 20m) there must be a point at which the displacement is small enough that the tension will be smaller than the weight force, therefore, when the string becomes taut, for some length of time afterward, the jumper will still be accelerating downwards (albeit with a decreasing acceleration). You are correct. Unfortunately I can't finish my post as the baby is crying. I'll get back to you. -Dan • January 4th 2008, 05:36 AM franklina i think that the answer should be purely numerical, as, with other questions in the book, if the answers are algebraic the question says so. but dont worry too much about it. • January 4th 2008, 08:32 AM topsquark Quote: Originally Posted by franklina i think that the answer should be purely numerical, as, with other questions in the book, if the answers are algebraic the question says so. but dont worry too much about it. You are indeed right that this can be solved without explicit values for m and k. But to find the solution isn't pretty. My problem earlier was that I was trying to simplify it to shortcut the work. This is not always a good idea and gave me incorrect answers. So without further ado, here is the solution start to finish. (Be warned, it's long.) I am going to start by labeling a coordinate system. I'm going to put the origin as the "ground" and I'm going to label upward as positive. So the diver starts at y(0) = 48 m. The problem goes in two stages: freefall to a height of 28 m, then the bungee starts to affect the motion and the diver falls the additional 28 m. 
The freefall part of the motion is simple and gives a time of fall of $t = \sqrt{2 \cdot 20/g} \approx 2.02031~s$ and a velocity at the end of the 20 m fall of $v = -\sqrt{2 \cdot 20 \cdot g} \approx -19.799~m/s$

Now for the hard part. I'm going to let c = 28 m and d = 20 m. Since we don't know a value for m and k yet, I'm just going to leave these as m and k for now. Newton's 2nd Law says: $\sum F = ma = -mg + T = -mg + k(c - y)$

So we need to solve $m\frac{d^2y}{dt^2} + ky = kc - mg$ with the initial conditions $y(0) = c$ and $v(0) = -\sqrt{2dg}$. After a little work I find the solution to be $y(t) = -\sqrt{2dg \cdot \frac{m}{k} }~sin \left ( t \sqrt{\frac{k}{m}} \right ) + \left ( g \cdot \frac{m}{k} \right )~cos \left ( t \sqrt{\frac{k}{m}} \right ) + \left ( c - g \cdot \frac{m}{k} \right )$

To simplify this a bit I'm going to define $\omega = \sqrt{\frac{k}{m}}$, so $y(t) = -\sqrt{\frac{2dg}{\omega ^2}}~sin( \omega t) + \left ( \frac{g}{\omega ^2} \right )~cos( \omega t) + \left ( c - \frac{g}{\omega ^2} \right )$

So $v(t) = -\sqrt{2dg}~cos( \omega t) - \left ( \frac{g}{\omega } \right )~sin( \omega t)$ and $a(t) = \omega \sqrt{2dg}~sin( \omega t) - g~cos( \omega t)$

We want to first get the time of fall to the lowest point, which is 0 m. So we need to solve v(t) = 0 for t: $v(t) = -\sqrt{2dg}~cos( \omega t) - \left ( \frac{g}{\omega } \right )~sin( \omega t) = 0$

This gives $t = \frac{1}{\omega } tan^{-1} \left ( -\omega \sqrt{\frac{2d}{g}} \right )$

Now, if you plug the numbers in (assuming reasonable values for k and m for the moment) you will find that t is negative. This actually corresponds to a maximum of the y(t) function in a position that doesn't physically exist. To find the minimum we shift the result of the inverse tangent function by $\pi$. So we get $t = \frac{1}{\omega } tan^{-1} \left ( -\omega \sqrt{\frac{2d}{g}} \right ) + \frac{\pi}{\omega }$

We don't have any numbers yet to calculate t. But we do have the condition that y(t) = 0 m at this time for the minimum: $y_{min} = -\sqrt{\frac{2dg}{\omega ^2}}~sin \left ( \omega \left [ \frac{1}{\omega } tan^{-1} \left ( -\omega \sqrt{\frac{2d}{g}} \right ) + \frac{\pi}{\omega } \right ] \right ) + \left ( \frac{g}{\omega ^2} \right )~cos \left ( \omega \left [ \frac{1}{\omega } tan^{-1} \left ( -\omega \sqrt{\frac{2d}{g}} \right ) + \frac{\pi}{\omega } \right ] \right ) + \left ( c - \frac{g}{\omega ^2} \right ) = 0$

I get that $-\sqrt{\frac{2dg}{\omega ^2}} \left ( \frac{\omega \sqrt{2d}}{\sqrt{2d \omega ^2 + g}} \right ) - \left ( \frac{g}{\omega ^2} \right ) \left ( \frac{\sqrt{g}}{\sqrt{2d \omega ^2}} \right ) + \left ( c - \frac{g}{\omega ^2} \right ) = 0$

This can be solved numerically for $\omega$ and I get that $\omega \approx 1.19482~Hz$. Now we can find the time of fall from the point where the freefall ends: $t = \frac{1}{\omega } tan^{-1} \left ( -\omega \sqrt{\frac{2d}{g}} \right ) + \frac{\pi}{\omega } \approx 1.64337~s$ so the total time of fall is $t = 2.02031~s + 1.64337~s = 3.66368~s$.

To find the maximum speed we use a similar process. We need to find the minimum of v(t). (Recall that v(t) is negative for the time period we are talking about because the diver is falling. So a minimum velocity will correspond to the maximum speed.)
$a(t) = \omega \sqrt{2dg}~sin( \omega t) - g~cos( \omega t) = 0$

I get $t = \frac{1}{\omega} tan^{-1} \left ( \frac{g}{\omega \sqrt{2dg}} \right ) \approx 0.328705~s$

So the maximum speed will be $v_{max} = \sqrt{2dg}~cos( \omega t) + \left ( \frac{g}{\omega } \right )~sin( \omega t) \approx 21.4307~m/s$

Did your teacher really expect you to go through all that?

-Dan

• January 4th 2008, 12:09 PM franklina I really am very grateful for the effort you put into this! Thanks again!
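For anyone who wants to cross-check this kind of two-stage calculation without re-deriving the algebra, a direct numerical integration is a useful sanity check. The sketch below is a rough, independent version (assuming g = 9.8 m/s², 20 m of free fall, and treating the taut cord as an ideal spring whose stiffness is fixed by requiring the lowest point to lie 28 m below the point where the cord becomes taut); comparing its output against hand-derived numbers is a good way to catch algebra slips.

```python
import math

g = 9.8          # m/s^2 (assumed)
d_free = 20.0    # metres of free fall before the cord is taut
d_max = 28.0     # further metres down to the lowest point

# Stage 1: free fall
t1 = math.sqrt(2 * d_free / g)
v1 = math.sqrt(2 * g * d_free)

# Stage 2: x measured downward from the taut point, x'' = g - w^2 x.
# Energy fixes w from "v = 0 when x = d_max":  0.5 v1^2 + g d_max = 0.5 w^2 d_max^2
w = math.sqrt((v1**2 + 2 * g * d_max) / d_max**2)

# Integrate stage 2 with a small time step until the jumper momentarily stops.
dt, t2, x, v, v_max = 1e-4, 0.0, 0.0, v1, v1
while v > 0:
    v += (g - w**2 * x) * dt     # semi-implicit Euler step
    x += v * dt
    t2 += dt
    v_max = max(v_max, v)

print(f"free fall : {t1:.3f} s, speed when the cord goes taut {v1:.3f} m/s")
print(f"cord taut : omega = {w:.4f} rad/s, further time to lowest point {t2:.3f} s")
print(f"total time to lowest point: {t1 + t2:.3f} s, greatest speed: {v_max:.3f} m/s")
```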
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 48, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9479171633720398, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/186566-should-i-worried-if-im-having-hard-time-first-assignment-print.html
# Should I be worried if I'm having a hard time with this first assignment?

• August 22nd 2011, 10:05 PM StudentMCCS

Should I be worried if I'm having a hard time with this first assignment?

Hi. I'm taking calculus 1 this semester. I haven't taken a math class in 8 years, and the last class I took was trig. I studied pretty hard and placed into calculus through the assessment test. The administrator said she had never seen anyone score as high as I did on the placement test. But this first assignment, which is a review of algebra and trig, seems considerably harder than the placement test, and I am struggling on many of the problems. Could someone look at the review sheet and give an opinion on whether a person struggling with this assignment should take calculus 1, or whether these problems are intentionally on the difficult side to force us to do a serious review? Thanks.

• August 22nd 2011, 10:45 PM abhishekkgp Re: Should I be worried if I'm having a hard time with this first assignment?

Quote: Originally Posted by StudentMCCS Hi. I'm taking calculus 1 this semester. I haven't taken a math class in 8 years, and the last class I took was trig. I studied pretty hard and placed into calculus through the assessment test. The administrator said she had never seen anyone score as high as I did on the placement test. But this first assignment, which is a review of algebra and trig, seems considerably harder than the placement test, and I am struggling on many of the problems. Could someone look at the review sheet and give an opinion on whether a person struggling with this assignment should take calculus 1, or whether these problems are intentionally on the difficult side to force us to do a serious review? Thanks.

It seemed pretty easy to me. Which questions exactly bother you?

• August 22nd 2011, 11:53 PM StudentMCCS Re: Should I be worried if I'm having a hard time with this first assignment?

Quote: Originally Posted by abhishekkgp It seemed pretty easy to me. Which questions exactly bother you?

Question number 4 is bothering me a little. Looking at my list of identities, and thinking about an approach, I'm not seeing the light yet. Probably something simple I fail to recognize.

• August 23rd 2011, 12:08 AM abhishekkgp Re: Should I be worried if I'm having a hard time with this first assignment?

Quote: Originally Posted by StudentMCCS Question number 4 is bothering me a little. Looking at my list of identities, and thinking about an approach, I'm not seeing the light yet. Probably something simple I fail to recognize.

I don't know what is meant by "rational expression", so I can't help. But if this is the only question you couldn't do, then why are you worried?

• August 23rd 2011, 06:16 AM StudentMCCS Re: Should I be worried if I'm having a hard time with this first assignment?

Quote: Originally Posted by abhishekkgp I don't know what is meant by "rational expression", so I can't help. But if this is the only question you couldn't do, then why are you worried?

I'm also having trouble with number 6. (((A+rA-x)r-x)r-x)r-x=0, solve for x. Simplified: (AR^3)-(2AXR^2)+(ARX^2)+(AR^4)-(AXR^3)+AX^2(R^2)-(XR^3)+(2X^2)(R^2)-(XR^3)-x=0. Obviously this is leading me nowhere. Maybe start by saying (((A+RA+X)R-x)R-X)R=x.

• August 23rd 2011, 07:03 AM abhishekkgp Re: Should I be worried if I'm having a hard time with this first assignment?

Quote: Originally Posted by StudentMCCS I'm also having trouble with number 6. (((A+rA-x)r-x)r-x)r-x=0, solve for x.
Simplified: (AR^3)-(2AXR^2)+(ARX^2)+(AR^4)-(AXR^3)+AX^2(R^2)-(XR^3)+(2X^2)(R^2)-(XR^3)-x=0. Obviously this is leading me nowhere. Maybe start by saying (((A+RA+X)R-x)R-X)R=x.

$(((A+rA-x)r-x)r-x)r-x=0 \Rightarrow (((A(r+1)-x)r-x)r-x)r-x=0 \Rightarrow ((Ar(r+1)-rx-x)r-x)r-x=0 \Rightarrow ((Ar(r+1)-x(r+1))r-x)r-x=0 \Rightarrow (((Ar-x)(r+1))r-x)r-x=0 \Rightarrow r^2 (Ar-x)(r+1)-xr-x=0 \Rightarrow r^2 (Ar-x)(r+1)-(r+1)x=0 \Rightarrow (r+1)[r^2 (Ar-x)-x]=0$

• August 23rd 2011, 12:22 PM Siron Re: Should I be worried if I'm having a hard time with this first assignment?

For exercise 4, I just think a rational expression is an expression of the form $\frac{P}{Q}$, so one which has a numerator and a denominator. In this case you can just use some well-known trig formulas:
$\sec(x)\cdot \tan(x)-\sqrt{2}\sec^2(x)=\sec(x)\cdot[\tan(x)-\sqrt{2}\sec(x)]$ $=\frac{1}{\cos(x)}\cdot \left[\frac{\sin(x)}{\cos(x)}-\frac{\sqrt{2}}{\cos(x)}\right]=\frac{\sin(x)-\sqrt{2}}{\cos^2(x)}$
which is, in my opinion, a rational expression.
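Both answers above can be checked symbolically in a few lines. This is a rough sketch using SymPy (the symbol names are arbitrary, not part of the thread):

```python
import sympy as sp

A, r, x, t = sp.symbols('A r x t')

# Exercise 6: (((A + rA - x)r - x)r - x)r - x = 0, solved for x.
expr = (((A + r*A - x)*r - x)*r - x)*r - x
print(sp.solve(sp.Eq(expr, 0), x))                        # -> [A*r**3/(r**2 + 1)]
# Consistent with the factored form (r+1)*(r**2*(A*r - x) - x):
print(sp.expand((r + 1)*(r**2*(A*r - x) - x) - expr))     # -> 0

# Exercise 4: sec(t)tan(t) - sqrt(2)sec(t)^2 == (sin(t) - sqrt(2))/cos(t)^2
lhs = sp.sec(t)*sp.tan(t) - sp.sqrt(2)*sp.sec(t)**2
rhs = (sp.sin(t) - sp.sqrt(2))/sp.cos(t)**2
print(sp.simplify(lhs - rhs))                             # -> 0
```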
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9585193395614624, "perplexity_flag": "middle"}
http://mathhelpforum.com/pre-calculus/181142-solving-x-logs.html
# Thread: 1. ## solving for X with logs Hi could someone please let me know if this is correct? Log4_X + log4_(x-3)=1 Log4_X + log4_X - Log4_3=1 Change base ln Log4_X= lnX/ln4 Log4_3=ln3/ln4 1=lne^1=lne Now lnX/ln4 + lnX/ln4 - ln3/ln4 = lne As all ln the cancel all and solve with algebra X/4 +X/4 - 3/4 = 1 (2X-3)/4 = 1 2X-3 = 4 2X = 4+3 X = 7/2 Thanks in advance 2. You should ALWAYS check your answer by plugging it back into the original equation. In this case, it doesn't fit. I would rather solve your problem this way: $\log_{4}(x)+\log_{4}(x-3)=1$ $\log_{4}(x(x-3))=\log_{4}(4^{1}).$ Can you continue from here? 3. I think so. Produces the quadratic x^2-3X-4=0? 4. Yep. Looks good to me. You should definitely check both answers against the original equation to make sure you're not doing something weird like taking the logarithm of a negative number or some such verboten action. 6. x^2-3X-4=0 (x-4)(x+1) X=4 or -1 s.t x=4 into Log4_X + log4_(x-3)=1 Log4_(4) + log4_(4-3)=Log4_4 Log4_(4)=1 log4_(1)= log1/log4=0 1+0=1 7. log4_X + log4_(x-3)=1 From here, I would just do this for x = 4 checking: $\log_{4}(4)+\log_{4}(4-3)=1+\log_{4}(1)=1+0=1,$ as required. It looks like that's essentially what you've done. What happens when you substitute x = -1? 8. Log4_(-1) + log4_(-1-3)=1 Log4_(-1) + log4_(-4)=1=(- log(1)/log4) + (- log(4)/log4)=0+(-1)=-1 9. Neither -1 nor -4 are in the domain of the log4 function. That is, log4(-1) and log4(-4) are undefined. What does that tell you? 10. They are vertical asymptotes or poles where x=-1, -4? 11. Neither. The technical explanation is that there is a branch cut discontinuity in the complex plane. That is, you have two surfaces representing the function, and they don't meet on the negative real axis or at zero. It's a lot like a parking garage: you're heading clockwise on one floor, and your friend is heading counter-clockwise on the next floor up. You're not going to meet, are you? That's what this is like. But that's more than you probably need to know right now. What you need to know right now is that, for real-valued logarithm functions with a positive base, the domain is the positive real numbers. You can't plug zero or a negative number into the logarithm functions you're dealing with.
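A quick numerical check of the thread's solution (a small sketch of my own; it just solves the quadratic and enforces the domain requirement discussed above):

```python
import math

# log_4(x) + log_4(x - 3) = 1  <=>  x(x - 3) = 4,  with x > 0 and x - 3 > 0
a, b, c = 1.0, -3.0, -4.0                       # x^2 - 3x - 4 = 0
disc = math.sqrt(b * b - 4 * a * c)
roots = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]

for x in roots:
    if x > 0 and x - 3 > 0:                     # both logarithms must be defined
        value = math.log(x, 4) + math.log(x - 3, 4)
        print(f"x = {x:g}: log_4(x) + log_4(x-3) = {value:.6f}")   # -> 1.000000
    else:
        print(f"x = {x:g}: rejected, outside the domain of log_4")
```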
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9386350512504578, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/32288?sort=newest
## Why pi-systems and Dynkin/lambda systems? On the relative merits of approaches in measure theory.

What is the point of $\pi$-systems and $\mathcal{D}$ / Dynkin / $\lambda$-systems? I am an analyst in the process of consolidating my measure theory knowledge before moving on to harder/newer things, having been first introduced to measure theory in a course with a probability as opposed to an analysis viewpoint. So far, everything that I've needed from elementary measure theory for analysis can be done (and is done in all of my analysis textbooks) without mention of the $\pi$-systems and $\mathcal{D}$-systems which were used in my first course. Do these set systems belong strictly to probability and not analysis? Heuristically, are they useful or important in any way? Why?

These are incredibly useful in the strangest of places. On my measure theory midterm last year, there was a problem that was harder than our professor intended it to be. The problem came down to some nasty analysis where you had to come up with n separate bounds bounded by another bound. However, a friend of mine told us later that he came up with a beautiful concise proof using Dynkin systems (which was confirmed when he received full marks). – Harry Gindi Jul 17 2010 at 15:44

I also am puzzled, Spencer. When I teach Real Analysis, I use the $\pi$-$\lambda$ theorem even though the book we use does not mention it. – Bill Johnson Jul 17 2010 at 16:53

I think this is nothing but a historical accident. I believe that Dynkin - a probabilist - proved the $\pi-\lambda$ theorem after the modern foundations of measure theory had been basically set up. Probabilists have, since then, used it in writing their textbooks, which are usually (at least presented as) books on probability as opposed to analysis. Analysis textbooks, on the other hand, are mostly written by functional analysts, harmonic analysts, PDEists, etc., who may never have opened a probability textbook, and so $\pi-\lambda$ systems have been slow to cross the divide. – Mark Meckes Jul 20 2010 at 17:57

I seem to recall, in fact, hearing someone say he'd taken measure theory from Dynkin as an undergraduate, then took it again without the $\pi-\lambda$ in grad school, and was surprised at how much more complicated it seemed the second time. – Mark Meckes Jul 20 2010 at 17:59

## 2 Answers

I'm not sure that $\pi$-systems and $\lambda$-systems are important objects in their own right, not in the same way that $\sigma$-algebras are. I think they're convenient names attached to two sets of technical conditions that appear in Dynkin's theorem. The theorem itself, though, is a huge convenience. It's properly a theorem of measure theory (measurable theory, if you want to be pedantic, since it doesn't have any measures in its statement), and so it belongs to both probability and analysis. It does seem to be more widely used in probability, most likely because Dynkin himself was a probabilist, and some popular books from the Cornell probability school use it, such as Durrett and Resnick. But it's also very useful in analysis, especially in the functional form cited by Peter (hi!). For instance, lots of approximation theorems about things being dense in $L^p$ spaces can be obtained from it.
My guess is that they are more useful in probability than in analysis. Many people have the impression that probability is just analysis on spaces of measure 1. However, this is not exactly true. One way to tell analysts and probabilists apart: ask them if they care about independence of their functions.

Suppose that $\mathcal{F}_1,\mathcal{F}_2,...,\mathcal{F}_n$ are families of subsets of some space $\Omega$. Suppose further that given any $A_i\in \mathcal{F}_i$ we know that $P(A_1\cap A_2 \cap ...\cap A_n)=P(A_1)P(A_2)...P(A_n)$. Does it follow that the $\sigma(\mathcal{F}_i)$ are independent? No. But if the $\mathcal{F}_i$ are $\pi$-systems, then the answer is yes.

When proving the uniqueness of the product measure for $\sigma$-finite measure spaces, one can use the $\pi$-$\lambda$ lemma, though I think there is a way to avoid it (I believe Bartle avoids it, for instance). However, do you know of a text which avoids using the monotone class theorem for Fubini's theorem? This, to me, has a similar feel to the $\pi$-$\lambda$ lemma. Stein and Shakarchi might avoid it, but as I recall their proof was fairly arduous.

Here is a direct consequence of the $\pi$-$\lambda$ lemma when you work on probability spaces: Let a linear space H of bounded functions contain 1 and be closed under bounded convergence. If H contains a multiplicative family Q, then it contains all bounded functions measurable with respect to the $\sigma$-algebra generated by Q.

Why is this useful? Suppose that I want to check that some property P holds for all bounded, measurable functions. Then I only need to check three things:

1. If P holds for f and g, then P holds for f+g.
2. If P holds for a bounded, convergent sequence $f_n$ then P holds for $\lim f_n$.
3. P holds for characteristic functions of measurable sets.

This theorem completely automates many annoying "bootstrapping from characteristic functions" arguments, e.g. proving Fubini's theorem.
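To spell out the independence claim made at the start of the second answer (a standard sketch, included here for convenience): suppose $\mathcal F_1$ and $\mathcal F_2$ are $\pi$-systems with $P(A_1\cap A_2)=P(A_1)P(A_2)$ for all $A_i\in\mathcal F_i$. Fix $A_1\in\mathcal F_1$ and set $\mathcal D=\{B\subseteq\Omega : P(A_1\cap B)=P(A_1)P(B)\}$. Then $\Omega\in\mathcal D$, and $\mathcal D$ is closed under proper differences and under increasing limits, so $\mathcal D$ is a $\lambda$-system containing the $\pi$-system $\mathcal F_2$; Dynkin's $\pi$-$\lambda$ theorem gives $\sigma(\mathcal F_2)\subseteq\mathcal D$. Repeating the argument with a fixed $B\in\sigma(\mathcal F_2)$ against the $\pi$-system $\mathcal F_1$ yields independence of $\sigma(\mathcal F_1)$ and $\sigma(\mathcal F_2)$. Without the $\pi$-system hypothesis the argument breaks at the first step: there is no reason for the generating family to be stable under intersections, and Dynkin's theorem no longer applies.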
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 31, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9693413376808167, "perplexity_flag": "middle"}
http://crypto.stackexchange.com/questions/2669/attacks-of-the-mac-construction-mathcalhmk-for-common-hashes-mathcal
# Attacks of the MAC construction $\mathcal{H}(m||k)$ for common hashes $\mathcal{H}$?

Consider a common practically-collision-resistant Merkle–Damgård hash function $\mathcal{H}$ (e.g. SHA-1, RIPEMD-160, SHA-256, SHA-512). We define a Message Authentication Code $\mathcal{C}$ $$(k,m) \mapsto \mathcal{C}(k,m)=\mathcal{H}(m||k)$$ where $||$ denotes concatenation, $k$ is a secret key (constant, or at least of fixed size), and $m$ is a message (possibly of variable length). Assume that an adversary can (iteratively) submit queries with $m_j$ and obtain $\mathcal{C}(k,m_j)$, and wants to obtain $k$ or otherwise compute $\mathcal{C}(k,m)$ for some $m\ne m_j$.

That MAC $\mathcal{C}$ is not trivially bad. In particular, if $\mathcal{H}$ was indistinguishable from a random function in the Random Oracle Model, $\mathcal{C}$ would be secure. And even though $\mathcal{H}$ has the length-extension property, it does not turn into a devastating attack on $\mathcal{C}$.

The least impractical generic attack that I see is that if a collision was known for $\mathcal{H}$ with the colliding messages of moderate identical length, countless collisions for $\mathcal{C}$ could be deduced from it. Hence security is demonstrably not better than collision-resistance of $\mathcal{H}$ (for identical-length messages). We could assume that $k$ is half the size of the result of $\mathcal{H}$, and hope that the security is about $2^{69}$ (or is it $2^{57}$, or even $2^{52}$), $2^{80}$, $2^{128}$, $2^{256}$ hash rounds for SHA-1, RIPEMD-160, SHA-256, SHA-512.

What are the known attacks against $\mathcal{C}$ (better than the above), and their cost, for each of these common hashes? Is there hope for an argument that an attack against $\mathcal{C}$ would turn into an attack of similar cost against $\mathcal{H}$, or a hint of the contrary?

Update: this answer to a similar question is of interest, but I fail to find that it really answers the present question.

Update 2: I am aware that the construction considered is weaker than HMAC, and in particular is vulnerable to collisions on $\mathcal H$; I stated that, and that it is thus pointless to have the key wider than half the hash's size. I'm asking exactly what cryptanalytic attacks better than finding a collision on $\mathcal H$ there are. There is room for such an attack only by exploiting a weakness in the structure or/and the round function of a concrete $\mathcal H$.

## 2 Answers

One issue with this construction is described in section 6 of the original HMAC paper, "Keying hash functions for message authentication" by Bellare, Canetti and Krawczyk, where they note that finding a collision on $\mathcal H$, i.e. two inputs $x \ne x'$ such that $\mathcal H(x) = \mathcal H(x')$, directly yields a collision on $\mathcal C$ such that $\mathcal C(k,x) = \mathcal C(k,x')$ regardless of $k$. (Technically, this only works if the collision is internal, in the sense that $\mathcal H(x \| s) = \mathcal H(x' \| s)$ for any suffix $s$, but that's true for pretty much all known M-D hash collision attacks anyway.)

Of course, this issue is mostly irrelevant if $\mathcal H$ is assumed to be collision resistant. (Although it should be noted that, even for a perfect $n$-bit hash, a birthday attack can find a collision with only about $2^{n/2}$ evaluations, and that this collision can then be used to break $\mathcal C$ for any $k$.)
However, given how hard achieving complete collision resistance seems to be compared to most other security properties asked of hash functions, immunity to collision attacks (which the HMAC construction provides, as long as the other security properties it depends on aren't compromised) is nothing to sneer at. - Yes; that's the "generic attack" that I mention. – fgrieu May 26 '12 at 20:06 The MAC you created is what's commonly called a keyed hash function. The way you have done it has a couple of issues. One is that you're hashing the message and then the key, but it's better to do the key and then the message. The reason for that is that if someone finds a collision with your message, then they are going to end up with the same MAC. It is better to have the known-different data at the front of the construction, where it makes the most difference. The other is the length extension attack. It's just a generalization of the above -- you want to reduce the chance that two messages of different lengths will end up making a collision. If you assume a hash function that is immune to a length extension attack, then a keyed hash (with the key at the front) is as good as an HMAC. Skein has this property, and also combines with it the fact that it's built on a tweak able cipher with the tweak carrying deltas. That's Skein's one-pass MAC, and there's a description of it and the security proofs in the Skein papers (see www.skein-hash.info). - The length extension attack applies to $\mathcal C(k,m)=\mathcal H(k||m)$, not $\mathcal C(k,m)=\mathcal H(m||k)$. Yes I'm aware that the later is not more secure than half the hash size, this is sated in the question. – fgrieu May 29 '12 at 20:18 I'm not buying that with $\mathcal H$ immune to a length extension attack, $\mathcal H(k||m)$ is as good as HMAC; my understanding is that part of HMAC's revised security argument relies on having $k$ processed at both ends. – fgrieu May 29 '12 at 20:49 1 @fgrieu Skein uses a scheme similar to $H(k||m)$ as MAC, and I believe the paper contains some security proofs for this mode. You could compare that with the proof for HMAC, and check if they made any additional assumptions. I think most proofs in the Skein paper assume certain properties of the underlying block cipher. – CodesInChaos May 30 '12 at 10:30
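For readers who want to experiment with the construction under discussion, here is a tiny illustration (a sketch written for this page, using SHA-256 from Python's standard library; the helper names are invented). It merely shows $\mathcal C(k,m)=\mathcal H(m\|k)$ next to standard HMAC for contrast; it does not, of course, exhibit any attack.

```python
import hashlib
import hmac

def mac_suffix_key(key: bytes, msg: bytes) -> bytes:
    """C(k, m) = H(m || k): the 'secret suffix' construction discussed above."""
    return hashlib.sha256(msg + key).digest()

def mac_hmac(key: bytes, msg: bytes) -> bytes:
    """Standard HMAC-SHA256, for comparison."""
    return hmac.new(key, msg, hashlib.sha256).digest()

key = b"0123456789abcdef"      # 128-bit key: half the SHA-256 output size
msg = b"example message"

print("H(m||k) :", mac_suffix_key(key, msg).hex())
print("HMAC    :", mac_hmac(key, msg).hex())

# The collision remark made in the first answer, restated on this toy code:
# any pair x != x' of equal length with an *internal* SHA-256 collision (equal
# chaining value after processing x and x') would give
#     mac_suffix_key(k, x) == mac_suffix_key(k, x')   for every key k.
# No such pair is publicly known for SHA-256, so this stays a comment.
```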
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 47, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9522358179092407, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/37946/why-is-my-speed-imaginary
# Why is my speed imaginary?

A particle is released at a distance R from a planet of radius r that has no dense atmosphere (R >> r). Find the speed on impact.

$$F = \frac{GmM}{(r + R)^2} = m \frac{dv}{dt} = mv\frac{dv}{dR}$$ $$GM\int^{R_f}_{R_0}(r + R)^{-2} dr = \int^{V_f}_{V_o}vdv$$ But $R_f$ = 0 when the object strikes the planet $$-GM[r^{-1} - (r +R_0)^{-1}] = 1/2[v_f^2 - v_0^2]$$ $$-2GM[\frac{R_0}{r(r + R_0)}] + V_0^2 = V_f^2$$ Why is it imaginary?

Why don't you just try the conservation of energy? $m\,dv/dt = mv\,dv/dR$ is false. – Shaktyai Sep 21 '12 at 16:35

In the first line, shouldn't the force point in the direction of decreasing R? Shouldn't the force have a negative sign? – Alfred Centauri Sep 21 '12 at 16:37

A free body diagram would have helped here. I think $v$ is measured in the opposite sense as $R$ causing a problem. – ja72 Sep 21 '12 at 17:52

## 1 Answer

First, I think the essential problem is that the gravitational force points in the direction of decreasing distance so the force formula should have a negative sign. Also, your notation is mixed up. You should be integrating with respect to the radial coordinate $R$, not the constant $r$. But it would be more conventional to denote the constant radius of the planet with $R$ and the radial coordinate with $r$. Assume that convention in the following:

$F = -\dfrac{GmM}{r^2} = m \dfrac{dv}{dt} = m \dfrac{dv}{dr} \dfrac{dr}{dt} = m v \dfrac{dv}{dr}$

Integrating both sides with respect to the radial coordinate:

$-GM \int^{R}_{r_0}r^{-2} dr = \int^{v_R}_{v_0}vdv$

$GM[\dfrac{1}{R} - \dfrac{1}{r_0}] = 1/2[v_R^2 - v_0^2]$

$2GM[\dfrac{1}{R} - \dfrac{1}{r_0}] + v_0^2 = v_R^2$

For $r_0 = \infty$ and $v_0 = 0$, we recover the escape velocity formula $v_e = \sqrt{\dfrac{2GM}{R}}$

Awesome, you are the best! – Cactus BAMF Sep 21 '12 at 17:51

@CactusBAMF, thanks and one final note: the distance from the surface of the planet is $r_0-R$ in my notation so if your problem gives the initial distance, make sure to add the planet's radius to it to get $r_0$ in my formula. – Alfred Centauri Sep 21 '12 at 18:56
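A quick numerical check of the corrected formula (an illustrative sketch with Earth-like numbers chosen for the example, starting from rest):

```python
import math

G  = 6.674e-11   # m^3 kg^-1 s^-2
M  = 5.97e24     # kg, Earth-like planet
R  = 6.371e6     # m, planet radius
r0 = R + 4.0e8   # m, initial distance from the centre (v0 = 0)

v_impact = math.sqrt(2 * G * M * (1 / R - 1 / r0))
v_escape = math.sqrt(2 * G * M / R)

print(f"impact speed: {v_impact / 1000:.2f} km/s")   # about 11.1 km/s
print(f"escape speed: {v_escape / 1000:.2f} km/s")   # about 11.2 km/s
# As r0 grows, the impact speed approaches the escape speed, as expected
# from the v_e formula at the end of the answer.
```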
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9142104387283325, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/116896/probability-that-given-a-set-of-uniform-random-variables-the-difference-betwee
# Probability that, given a set of uniform random variables, the difference between the two smallest values is greater than a certain value Let $\{X_i\}$ be $n$ iid uniform(0, 1) random variables. How do I compute the probability that the difference between the second smallest value and the smallest value is at least $c$? I've messed around with this numerically and have arrived at the conjecture that the answer is $(1-c)^n$, but I haven't been able to derive this. I see that $(1-c)^n$ is the probability that all the values would be at least $c$, so perhaps this is related? - 1 Thanks for the accept, but it would be a good idea to take it back and let the question be listed as "still open" for a day or so, to see if someone can find a more intuitive explanation. – Henning Makholm Mar 6 '12 at 1:40 ## 1 Answer There's probably an elegant conceptual way to see this, but here is a brute-force approach. Let our variables be $X_1$ through $X_n$, and consider the probability $P_1$ that $X_1$ is smallest and all the other variables are at least $c$ above it. The first part of this follows automatically from the last, so we must have $$P_1 = \int_0^{1-c}(1-c-t)^{n-1} dt$$ where the integration variable $t$ represents the value of $X_1$ and $(1-c-t)$ is the probability that $X_2$ etc satisfies the condition. Since the situation is symmetric in the various variables, and two variables cannot be the least one at the same time, the total probability is simply $nP_1$, and we can calculate $$n\int_0^{1-c}(1-c-t)^{n-1} dt = n\int_0^{1-c} u^{n-1} du = n\left[\frac1n u^n \right]_0^{1-c} = (1-c)^n$$ - 1 More generally, if $X_{(j)}$ are the order statistics, the joint density of $X_{(i)}$ and $X_{(i+1)}$ is $f(u,v) = \frac{n!}{(i-1)!(n-i-1)!} u^{i-1} (1-v)^{n-i-1}$, so $$P(X_{(i+1)}-X_{(i)}>c) = \int_0^{1-c} du\ \int_{u+c}^1 dv \ f(u,v) = (1-c)^n$$ – Robert Israel Mar 6 '12 at 2:12
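The $(1-c)^n$ answer is also easy to confirm empirically; here is a small Monte Carlo sketch (with an arbitrary choice of $n$ and $c$):

```python
import random

def prob_min_gap_exceeds(n, c, trials=200_000):
    """Estimate P(X_(2) - X_(1) >= c) for n iid Uniform(0,1) samples."""
    hits = 0
    for _ in range(trials):
        xs = sorted(random.random() for _ in range(n))
        if xs[1] - xs[0] >= c:
            hits += 1
    return hits / trials

n, c = 5, 0.1
print("simulated  :", prob_min_gap_exceeds(n, c))
print("(1 - c)**n :", (1 - c) ** n)     # 0.59049
```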
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9406673908233643, "perplexity_flag": "head"}
http://mathoverflow.net/questions/42569/examples-of-zfc-theorems-proved-via-forcing/57895
## Examples of ZFC theorems proved via forcing ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) This is an old suggestion of Joel David Hamkins at the end of his answer to this question: http://mathoverflow.net/questions/29945/forcing-as-a-tool-to-prove-theorems I just noticed it while trying to understand his answer. But indeed it would be nice to have a big list of $ZFC$ theorems that were proven first by forcing. A very well known example is Silver's Theorem about the fact that the $GCH$ can't fail first at a singular cardinal of uncountable cofinality (say for instance $\aleph_{\omega_1}$), I had read somewhere (Jech, maybe) that Silver proved it first using forcing. Also if anyone knows theorems of pcf theory that were first proven using forcing, please post them. - Community wiki, right? – Andres Caicedo Oct 18 2010 at 2:42 Yes. I did not know what community wiki was actually, just read the FAQ :) – alephomega Oct 18 2010 at 3:06 meta.mathoverflow.net/discussion/933/… – Andres Caicedo Jan 31 2011 at 16:23 ## 6 Answers The Baumgartner-Hajnal theorem, from "A proof (involving Martin’s Axiom) of a partition relation". Fund. Math., 78(3):193–203, 1973. Actually, there is a very interesting mathematical story here, and several problems. The question was first asked about uncountable sets of reals and $\omega_1$. Quickly, it was recognized to be a problem about what we now call non-special orders. $L$ is non-special iff $L\to(\omega)^1_\omega$, meaning that if $L$ is split into countably many pieces, at least one is not reverse-well-ordered, i.e., it contains a strictly increasing sequence. Baumgartner and Hajnal proved that $L\to(\alpha)^2_n$ for any countable ordinal $\alpha$ and $n<\omega$. (In human: If L is non-special, and to each subset of $L$ of size 2 we assign a color, there being only finitely many colors to begin with, then for any countable ordinals $\alpha$ there is a subset of $L$ order isomorphic to $\alpha$, all of whose 2-sized subsets are assigned the same color.) Their original proof uses Martin's axiom, as it depends on a kind of diagonalization over certain functions $f:\omega\to\omega$ and one needs that if there are not "too many" of them, then there is one dominating all. This is to my mind the key use of MA in their paper, although there is another one. Then one argues that being special is preserved by ccc forcing and that the conclusion is absolute. Galvin later found a very nice combinatorial argument that avoids forcing. Clinton Conley recently found a similar proof. It rests on a kind of abstract Fubini theorem, the point being that the special linear sub-orders of a non-special $L$ form a proper $\sigma$-complete ideal. Galvin noticed that the result should hold in a more general setting, and conjectured that that's the case. The conjecture was later proved by Stevo Todorcevic: $P\to(\alpha)^2_n$ holds if $P$ is non-special, but it suffices that $P$ is a partial order, rather than a linear order. Stevo's beautiful argument proceeds by three stages: 1. To each $P$ we can associate a certain tree; if $P$ is non-special, so is the tree (in the usual sense of non-special, hence the name), and the result holds for $P$ iff it does for the tree. This is a direct combinatorial argument, but it is very general (not just for colorings of pairs). For example, it simplifies the proof that $P\to(\omega)^1_\omega$ implies $P\to(\alpha)^1_\omega$ for any $\alpha<\omega_1$. 
We get a nice combinatorial theory of non-special trees: For example, an appropriate version of Fodor's lemma holds. 2. The result holds for non-special trees of size less than the pseudo-intersection number ${\mathfrak p}$. (This is one of the cardinal invariants of the continuum.) Again, the proof does not use forcing. 3. Finally, a forcing argument shows that ${\mathfrak p}$ can be made as large as one wants while preserving being non-special, and by absoluteness we get the full theorem. The argument here shows in particular, that one does not need preservation of being non-special under ccc forcing, simpler particular classes of forcing notions suffice. Stevo's paper is "Partition relations for partially ordered sets". Acta Math., 155(1-2):1–25, 1985. As far as I know, there is no forcing-free proof of 3., that the result holds for all non-special trees $T$, even if $|T|\ge{\mathfrak p}$. It cannot be a direct argument, as Stevo found examples of non-special trees all of whose subtrees of small size are special. Albin Jones indicated a while ago that he had an argument, but I never saw it and his webpage and contact information vanished since. In my mind, this remains open. A few years ago, Rene Schipperus proved a "topological" version of Baumgartner-Hajnal, namely that if $L$ is an uncountable subset of ${\mathbb R}$, or $\omega_1$, then for any $\alpha<\omega_1$ and any coloring of the 2-sized subsets of $L$ with finitely many colors, we can find monochromatic sets of type $\alpha+1$ that, moreover, are closed in the natural topology of ${\mathbb R}$ or $\omega_1$. Rene uses an argument that builds on the original approach, and in particular uses MA. I don't know how to prove his theorem without using forcing. Finally: The corresponding result in dimension 3 should be that if $P$ is a non-special partial order, then $P\to(\alpha,n)^3$, i.e., that if the 3-sized subsets of $P$ are colored with 2 colors, then either for the first color for each $\alpha<\omega_1$ there are homogeneous sets of type $\alpha$, or else for the second color there are linearly ordered homogeneous sets of any finite size. This is open, and several people have worked hard on it for years. - ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. One example is Solovay's theorem that if the axiom of determinacy holds, then each subset of $\omega_1$ is constructible from a real. The proof breaks into cases. Case One is when $\omega^{L[t]}_1=\omega_1$ holds for some real $t$. Case Two is when this does not hold. Case One has a direct proof, and then Case Two is reduced to Case One via forcing. The punchline is that Case One never holds! - The Gitik-Shelah theorem is also perhaps an example, first proved with forcing by its discoverers, and then without by Anastasis Kamburelis and David Fremlin independently: Moti Gitik, Saharon Shelah, Forcing with ideals and simple forcing notions, Israel J. Math., 68 (1989), 129-160. And the same authors have more in: More on simple forcing notions and forcing with ideals, APAL, 59 (1993), 219-238. - meta.mathoverflow.net/discussion/933/… – Andres Caicedo Feb 4 2011 at 19:40 This is a nice example! – Andres Caicedo Feb 4 2011 at 19:41 My favorite example of this is Stevo Todorcevic's paper "Compact subsets of the first Baire class" (JAMS, 1999). Fix a Polish space $X$ (for us it will be no loss of generality to take $X = \mathbb{N}^\mathbb{N}$). 
The Baire class 1 functions on $X$ are those functions which are the limit of a pointwise convergent sequence of continuous functions. A compact space which embeddable into the Baire class 1 functions with the pointwise topology is said to be Rosenthal compact. A typical example of a Rosenthal compacta is the set $\mathbb{H}$ of monotone increasing functions from $[0,1]$ to $[0,1]$. Two others are the split interval'' (which consists of those elements of $\mathbb{H}$ whose range is contained in $\{0,1\}$) and the one point compactification of a discrete set of cardinality at most continuum. The class of Rosenthal compacta is closed under countable products and closed subspaces. Todorcevic proved several ZFC results about Rosenthal compacta using forcing. Probably the best example in the paper (in terms of the use of forcing machinery) is the proof that any Rosenthal compacta contains a dense metrizable subspace. Before this it was an open problem whether there was a c.c.c. non-separable Rosenthal compacta. Todorcevic also proves in this paper that a Rosenthal compacta which does not contain an uncountable discrete subspace must map at most two-to-one into a metric space. Furthermore if such a space is non-metrizable, it must contain a homeomorphic copy of the split interval. Finally, he showed that any non $G_\delta$-point in a separable Rosenthal compacta is the unique accumulation point of a discrete subspace of cardinality continuum. One of the key lemmas of the paper is that the property of being a Rosenthal compacta is preserved when one appropriately reinterprets the compacta in any generic extension. - Another example is Solovays result to partition any stationary subset $S \subset \kappa$ into $\kappa$ many staionary subsets of $S$. It is mentioned in Jechs book that Solovay first used a generic ultrapower construction to prove this. Later a more elementary proof was found, using no 'metamathematical' concepts as forcing or ultrapowers of the universe, similar to the history of the proofs of Silvers theorem. - I think a (different?) forcing proof for Solovay's theorem is given in this paper: J. E. Baumgartner, A Ha̧jņal, A. Mate: Weak saturation properties of ideals, in: Infinite and finite sets (Colloq., Keszthely, 1973; dedicated to P. Erdős on his 60th birthday), Vol. I, Colloq. Math. Soc. Janos Bolyai, Vol. 10, North-Holland, Amsterdam, 1975, 137-158. – Péter Komjáth Oct 18 2010 at 10:33 In descriptive set theory, there is a significant number of results that have been established using forcing; typically, dichotomy theorems such as Silver's (a $\Pi^1_1$ equivalence relation has only countably many classes, or else there is a perfect set of pairwise inequivalent points). Harrington used what is now called the Gandy-Harrington topology to eliminate the use of forcing from these arguments, replacing it instead with an appeal to effective'' techniques. So, for example, Silver's result can actually be stated as: If $E$ is a $\Pi^1_1$-in-the-parameter-$a$ equivalence relation, then either every $E$-class is itself $\Pi^1_1(a)$ (so, in particular, there are only countably many), or else there are perfectly many inequivalent classes. For a long while, we actually thought these uses of forcing or effective descriptive set theory were essential to the theory. Benjamin Miller recently transformed the field by showing how derivative'' arguments can eliminate just about all these uses. 
The latest twist is that Richard Ketchersid and I have been studying structural properties of models of determinacy, and have shown that the descriptive set theoretic dichotomies hold in this context. This is more general than what Miller's technique can establish. Once again, our arguments make essential use of forcing (and ultrapower constructions). For example, we have shown that the $G_0$-dichotomy of Kechris-Solecki-Todorcevic holds in models of ${\sf AD}^+$ for arbitrary graphs on reals: Any such graph either can be colored by ordinals (so that points connected by an edge receive different colors) or else, there is a continuous homomorphism of the graph $G_0$ into $G$. (See for example these slides from a recent talk for details and complete definitions). Ben has shown how Baire category arguments allow one to deduce most other dichotomies from appropriate versions of the $G_0$-dichotomy. Using this, Richard and I have deduced some interesting global dichotomies in these models (meaning, they hold of all sets, not just sets of reals). For example, in the presence of large cardinals, $L({\mathbb R})$ is a model of determinacy, and for any $X\in L({\mathbb R})$, either $X$ can be well-ordered inside $L({\mathbb R})$, or else, there is in $L({\mathbb R})$ an injection of ${\mathbb R}$ into $X$. In short: containing a copy of the reals is the only obstacle to being well-orderable. (This is a strong version of the statement that one cannot well-order the reals "definably".) --- Actually, Richard and I first established this directly, via a forcing argument, but it can now be deduced from our version of the $G_0$-dichotomy. (There is another, subtle use of forcing in the context of determinacy, via the theory of generic codings.) -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 77, "mathjax_display_tex": 0, "mathjax_asciimath": 3, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9259180426597595, "perplexity_flag": "head"}
http://mathoverflow.net/questions/60609?sort=oldest
## Background/Motivation

Let $R$ be a commutative ring with unit. If $G$ is a finite (or in general, discrete) group, let $R[G]$ be the group $R$-algebra associated to $G$. The isomorphism problem for group rings asks for conditions on groups $G$ and $H$ such that $R[G]\simeq R[H]$. Group rings are not a complete invariant, even of finite groups; in a 2001 Annals paper, Hertweck discovered two finite groups $G$ and $H$ with $\mathbb{Z}[G]\simeq \mathbb{Z}[H]$ and $G\not\simeq H$. In general, group algebras over a field are even weaker invariants; for example, if $G$ and $H$ are any two finite abelian groups of order $n$, then $\mathbb{C}[G]\simeq \mathbb{C}[H]\simeq \mathbb{C}^n$, by e.g. the Chinese Remainder Theorem or Artin-Wedderburn.

I am curious about a slight strengthening of the isomorphism problem for group rings. Namely, if $(S, +, \cdot)$ is a (not necessarily commutative) ring with unit, let the opposite ring $S^{op}=(S, +, \times)$ be the ring whose underlying Abelian group under addition is the same as that of $S$, but with the multiplicative structure reversed, i.e. $a\times b=b\cdot a$; the formation of the opposite is clearly functorial. Note that if $R[G]$ is a group ring, it is naturally isomorphic to its opposite through the map $\phi_G: g\mapsto g^{-1}$.

## The Problem

Now if $G, H$ are groups and $\psi: R[G]\to R[H]$ is an isomorphism of group rings, we may ask if it is compatible with the formation of the opposite ring---that is, does $\phi_H\circ \psi=\psi^{op}\circ \phi_G$? Say that $G, H$ have strongly isomorphic group rings if such a $\psi$ exists.

What is known about groups with strongly isomorphic group rings over commutative rings $R$? Are there non-isomorphic finite groups $G, H$ with $\mathbb{Z}[G]$ strongly isomorphic to $\mathbb{Z}[H]$, for example? More weakly, when is $\mathbb{C}[G]\simeq \mathbb{C}[H]$?

## Strong Isomorphism is Strong

Just to convince you that strong group ring isomorphism is in fact a stronger condition than group ring isomorphism, note that $\mathbb{C}[\mathbb{Z}/4\mathbb{Z}]$ is isomorphic to $\mathbb{C}[\mathbb{Z}/2\mathbb{Z}\times \mathbb{Z}/2\mathbb{Z}]$ but not strongly isomorphic. This is because $\phi_{\mathbb{Z}/4\mathbb{Z}}$ is not the identity, but $\phi_{\mathbb{Z}/2\mathbb{Z}\times \mathbb{Z}/2\mathbb{Z}}$ is the identity on the underlying set of $\mathbb{C}[\mathbb{Z}/2\mathbb{Z}\times \mathbb{Z}/2\mathbb{Z}]$.

## Addendum (4/6/2011)

Andreas Thom points out in his excellent answer that the case of finite abelian groups over $\mathbb{C}$ is not much harder than the case of usual group ring isomorphisms. Unfortunately the question over e.g. $\mathbb{Z}$ is likely to be extremely difficult, since the usual isomorphism problem over $\mathbb{Z}$ is apparently quite hard---I don't yet understand Hertweck's construction well enough, for example, to tell if the groups he constructs have strongly isomorphic group rings. In any case, I would accept as an answer a summary of the current state of the art for strong isomorphism over $\mathbb{Z}$ (for example, does Hertweck's construction admit a strong isomorphism?), or any relatively recent reference addressing the more general question (as Qiaochu Yuan points out in a comment, the question is equivalent to asking when the group rings of $G, H$ are isomorphic as $*$-algebras, which suggests to me that the question must have been studied by someone).
- 2 I think this is equivalent to asking that the two be isomorphic as *-algebras (over $R$ considered as a *-ring with trivial involution). – Qiaochu Yuan Apr 5 2011 at 6:54 1 If you would even demand that RG and RH be isomorphic as Hopf rings, and if R is an integral domain, then G = group-like-elts(RG) = group-like-elts(RH) = H. So your question is in between the original isomorphism problem and this Hopf-ring observation. – Matthias Künzer Apr 5 2011 at 14:11 @Qiaochu: Yes, that seems right. – Daniel Litt Apr 5 2011 at 15:43

## 1 Answer

If $G$ is a finite abelian group, then $\mathbb C[G] = \lbrace f \colon \hat G \to \mathbb C \rbrace$, where $\hat G$ is the Pontrjagin dual of $G$. The isomorphism $g \mapsto g^{-1}$ translates into the same map on the Pontrjagin dual (basically multiplication by $-1$ on $\hat G$), but now it is a bit easier to analyze. Note also, that there is a non-canonical isomorphism $G \cong \hat G$. Hence, in order to find two non-isomorphic abelian groups which have strongly isomorphic complex group rings, we just have to analyze (in addition to the cardinality of the group) the orbit structure of multiplication by $-1$ on the group. Indeed, we are now just talking about an algebra of functions on a set with some $\mathbb Z/2 \mathbb Z$-action.

Example: In $\mathbb Z/8\mathbb Z \times \mathbb Z/2\mathbb Z$, there are precisely $4$ elements which are fixed under multiplication by $-1$, namely $(0,0),(4,0),(0,1)$ and $(4,1)$. The same is true for $\mathbb Z/4\mathbb Z \times \mathbb Z/4\mathbb Z$. Here, one has $(0,0),(2,0),(0,2)$ and $(2,2)$. Hence, there exists an isomorphism between $\mathbb C[\mathbb Z/8\mathbb Z \times \mathbb Z/2\mathbb Z]$ and $\mathbb C[\mathbb Z/4\mathbb Z \times \mathbb Z/4\mathbb Z]$ which respects the isomorphism induced by $g \mapsto g^{-1}$.

I do not know about an example with coefficients in $\mathbb Z$. - This is very nice! Unfortunately, $\mathbb{Z}$ coefficients are likely to be much harder, since the isomorphism problem itself was open til 2001. – Daniel Litt Apr 5 2011 at 4:45
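A quick computational check of the orbit-structure criterion in this answer (my own sketch in Python, not part of the original thread; the encoding of the groups as tuples of residues is an illustrative choice): both $\mathbb Z/8\mathbb Z \times \mathbb Z/2\mathbb Z$ and $\mathbb Z/4\mathbb Z \times \mathbb Z/4\mathbb Z$ have 16 elements, 4 of which are fixed by $g\mapsto -g$, and 6 orbits of size 2, so the two function algebras with their induced $\mathbb Z/2\mathbb Z$-actions can indeed be matched up.

```python
from collections import Counter
from itertools import product

def negation_orbit_sizes(moduli):
    """Sizes of the orbits of g -> -g on Z/m1 x Z/m2 x ..."""
    seen, sizes = set(), []
    for g in product(*(range(m) for m in moduli)):
        if g in seen:
            continue
        neg = tuple((-x) % m for x, m in zip(g, moduli))
        orbit = {g, neg}          # size 1 if g is fixed by negation, size 2 otherwise
        seen |= orbit
        sizes.append(len(orbit))
    return Counter(sizes)

# Both groups: 4 fixed points and 6 two-element orbits.
print(negation_orbit_sizes([8, 2]))   # Counter({2: 6, 1: 4})
print(negation_orbit_sizes([4, 4]))   # Counter({2: 6, 1: 4})
```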
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 67, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9416989088058472, "perplexity_flag": "head"}
http://mathematica.stackexchange.com/questions/18295/can-the-general-term-my-recurrence-equations-be-written-with-floor-or-mod/18344
Can the general term of my recurrence equations be written with Floor or Mod? I want to know the formula for the general term of the following recurrence system. I guess it can be written with `Floor` or `Mod`. How can I find it?

````RSolve[{a[n + 5] == a[n] + 6, a[1] == 1, a[2] == 3, a[3] == 2, a[4] == 1, a[5] == 1}, a[n], n]

DiscretePlot[-2 + (6 n)/5 + 2/5 Sqrt[28 + 53/Sqrt[5]] Sin[(2 n \[Pi])/5 - ArcTan[15/Sqrt[85 + 38 Sqrt[5]]]] + 2/5 Sqrt[28 - 53/Sqrt[5]] Sin[(4 n \[Pi])/5 - ArcTan[3 Sqrt[5 (85 + 38 Sqrt[5])]]], {n, 1, 20}]
````

Updated I would like to have the formula for the general term without using recursion just as

````RecurrenceTable[{y[n + 5] == y[n] + 1, Sequence @@ Table[y[i] == i, {i, 5}]}, y, {n, 1, 30}]
````

can be written

````n - 4 Floor[(n - 1)/5] /. n -> Range@30
````

- Not sure how to obtain this programmatically, but it is `b[n_] := 6*(n - Mod[n, 5, 1])/5 + {1, 3, 2, 1, 1}.Map[DiscreteDelta, Mod[n, 5, 1] - Range[5]]` – Daniel Lichtblau Jan 23 at 15:52 1 @Daniel Please use backticks to offset code in comments. Thanks. – Mr.Wizard♦ Jan 23 at 17:39 I am curious about the reason for the edit, because although it suggests you are getting recursive formulas in the answers, none of the (three) replies so far use recursion. – whuber Jan 23 at 18:22

4 Answers

The argument of `DiscretePlot` is the sum of a linear function and explicitly periodic functions (`Sin`). The periodicity can be expressed by the relationship $$\sin(x) = \sin(\text{mod}(x, 2\pi)).$$ Whence, because $n$ is multiplied by $2\pi/5$ where it appears within the arguments of $\sin$, you can reduce the evaluation of the argument to values of $n$ between (say) $1$ and $5$ via the replacement

````g = f /. Sin[x_] -> Sin[Mod[x, 2 \[Pi]]]
````

where `f` represents the function. Noting that $n$ always appears divided by $5$, we can further simplify it by expressing $n$ in the form $5m + j$, $j=0, 1, 2, 3, 4$:

````FullSimplify[g /. n -> 5 m + # , Assumptions -> m \[Element] Integers] & /@ Range[0, 4]
````

{-5 + 6 m, 1 + 6 m, 3 + 6 m, 2 + 6 m, 1 + 6 m}

Because $m = \lfloor (n-1)/5 \rfloor$ and the remainder is given by `Mod`, we can now proceed to write a formula in terms of `Floor` and `Mod`. However, none of this manipulation is necessary. The first line in the question exhibits `a` as a sum of a linear function (with slope $6/5$) and a periodic function of period $5$ defined by the values of $n$ from $1$ through $5$. Both `Mod` and `Floor` naturally work with periods starting at $0$, whence we need to (a) offset `Mod` by $1$ and (b) subtract $1$ from $n$ before dividing by $5$ and applying `Floor`. This immediately leads to the solution

````a[1] = 1; a[2] = 3; a[3] = 2; a[4] = 1; a[5] = 1;
a[n_] /; n > 5 := a[Mod[n, 5, 1]] + 6 Floor[(n - 1)/5];
````

Looking at `DiscretePlot[a[n], {n, 1, 20}]` shows this to be exactly the same as the trigonometric formula. -

Pardon me if this is tautological, but perhaps there will be some value to be found here. Your sequence can be described as a linear recurrence:

````LinearRecurrence[{1, 0, 0, 0, 1, -1}, {1, 3, 2, 1, 1, 7}, 50] // ListPlot[#, Filling -> Bottom] &
````

Seeding this with symbolic values, a simple pattern is apparent:

````LinearRecurrence[{1, 0, 0, 0, 1, -1}, {a, b, c, d, e, f}, 31];
% ~Drop~ 6 ~Partition~ 5 // Column
````

````{a, b, c, d, e, f} = {1, 3, 2, 1, 1, 7};

Table[
 With[{x = Quotient[n, 5]},
  {b, c, d, e, f}[[1 + Mod[n, 5]]] - a x + f x],
 {n, -1, 48}
] // ListPlot[#, Filling -> Bottom] &
````

The sequence is the same, though the index is offset by two.
Correcting that offset and converting this to a function:

````fn[n_Integer] := {-5, 1, 3, 2, 1}[[1 + Mod[n, 5]]] + 6 Quotient[n, 5]

Array[fn, 50]
````

````{1, 3, 2, 1, 1, 7, 9, 8, 7, 7, 13, 15, 14, 13, 13, 19, 21, 20, 19, 19, 25, 27, 26, 25, 25, 31, 33, 32, 31, 31, 37, 39, 38, 37, 37, 43, 45, 44, 43, 43, 49, 51, 50, 49, 49, 55, 57, 56, 55, 55}
````

Or:

````fn[n_Integer] := {-5, 1, 3, 2, 1}[[1 + #2]] + 6 # & @@ QuotientRemainder[n, 5]
````

-

Also:

````y[n_] := Ceiling[6/5 n] + Switch[Mod[n, 5], 0, -5, 1, -1, 2, 0, 3, -2, 4, -4];
````

-

Also:

````fn[n_] := 6 Floor[(n - 1)/5] + Mod[2 n^4 + 3 n^3 + 3 n^2 + 2 n + 1, 5]

Array[fn, 30]

DiscretePlot[fn[n], {n, 30}]
````

-
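For readers following along without Mathematica, here is an equivalent cross-check of the Floor/Mod closed form in Python (my own addition, not part of the thread); it simply re-verifies the initial values and the recurrence $a(n+5)=a(n)+6$ from the question:

```python
def a_closed(n):
    """Closed form: a(n) = 6*floor((n-1)/5) + (1, 3, 2, 1, 1)[(n-1) mod 5], for n >= 1."""
    return 6 * ((n - 1) // 5) + (1, 3, 2, 1, 1)[(n - 1) % 5]

# Check the initial values and the recurrence a(n+5) == a(n) + 6 from the question.
assert [a_closed(n) for n in range(1, 6)] == [1, 3, 2, 1, 1]
assert all(a_closed(n + 5) == a_closed(n) + 6 for n in range(1, 100))

print([a_closed(n) for n in range(1, 21)])
# [1, 3, 2, 1, 1, 7, 9, 8, 7, 7, 13, 15, 14, 13, 13, 19, 21, 20, 19, 19]
```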
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 22, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8149681687355042, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/10400/comaximal-ideals-in-a-commutative-ring
# Comaximal ideals in a commutative ring

Let $R$ be a commutative ring and $I_1, \cdots, I_r$ pairwise comaximal ideals in $R$, i.e. $I_i + I_j = R$ for $i \neq j$. Why are the ideals $I_1^{n_1}, ... , I_r^{n_r}$ (for any $n_1,...,n_r \in\mathbb N$) also comaximal? - 5 This is a standard exercise. The standard hint is to take powers of the relation I_i + I_j = R. – Qiaochu Yuan Nov 15 '10 at 14:27

## 4 Answers

It is sufficient to prove this for the case of two comaximal ideals, say $I,J$. We need to show that $I^m+J^n=R$ for any positive integers $m,n$. Now, $R=R^{m+n-1}=(I+J)^{m+n-1}\subseteq I^m+J^n$ since in the binomial expansion of $(I+J)^{m+n-1}$, in every term, either the power of $I$ is at least $m$ or the power of $J$ is at least $n$ by the pigeonhole principle. -

If you are familiar with ideal radicals then, as I mentioned on sci.math, the proof is a one-liner: $$\rm rad\ (I^m +\: \cdots\: + J^n) \ \supset\ I +\:\cdots\:+ J\: =\: 1\ \ \Rightarrow\ \ I^m +\:\cdots\: + J^n\: =\: 1$$ Alternatively, and much more generally, it may be viewed as an immediate consequence of the Freshman's Dream binomial theorem $\rm\ (A + B)^n = A^n + B^n\$. This theorem is true for both arithmetic of GCDs and (invertible) ideals simply because, in both cases, multiplication is cancellative and addition is idempotent, i.e. $\rm\ A + A = A\$ for ideals and $\rm\ (A,A) = A\$ for GCDs. Combining this with the associative, commutative, distributive laws of addition and multiplication we obtain the following very elementary high-school-level proof of the Freshman's Dream:

$\rm\qquad\quad (A + B)^4 \ =\ A^4 + A^3 B + A^2 B^2 + AB^3 + B^4$

$\rm\qquad\quad\phantom{(A + B)^4 }\ =\ A^2\ (A^2 + AB + B^2) + (A^2 + AB + B^2)\ B^2$

$\rm\qquad\quad\phantom{(A + B)^4 }\ =\ (A^2 + B^2)\ \:(A + B)^2$

So $\rm\quad\ {(A + B)^2 }\ =\ \ A^2 + B^2\$ if $\rm\ A+B\$ is cancellative, e.g. if $\rm A+B = 1$

The same proof works generally since, as above

$\rm\qquad\quad (A + B)^{2n}\ =\ A^n\ (A^n + \:\cdots\: + B^n) + (A^n +\:\cdots\: + B^n)\ B^n$

$\rm\qquad\quad\phantom{(A + B)^{2n}}\ =\ (A^n + B^n)\ (A + B)^n$

In the GCD case $\rm\ A+B\ := (A,B) = \gcd(A,B)\$ for $\rm\:A,B\:$ in a GCD-domain, i.e. a domain where $\rm\: \gcd(A,B)\:$ exists for all $\rm\:A,B \ne 0,0\:$. So the Dream is true since $\rm\:(A,B)\:$ is cancellable, being nonzero in a domain. In a domain, nonzero principal ideals are cancellable, so the Dream is true for ideals in a PID (e.g. $\mathbb Z\:$), or f.g. (finitely generated) ideals in a Bezout domain. More generally, the Dream also holds true in any Dedekind domain (e.g. any number ring) since nonzero ideals are invertible hence cancellable. In fact this "Freshman's Dream" is true for all f.g. ideals in a domain $\rm\:D\:$ iff every nonzero f.g. ideal is invertible. Such domains are known as Prufer domains. They're non-Noetherian generalizations of Dedekind domains. Moreover they form an important class of domains because they may also be equivalently characterized by a large number of other important properties, e.g. they are precisely the domains satisfying CRT (Chinese Remainder Theorem); $\$ Gauss's Lemma: the content ideal $\rm\ \ c(fg) = c(f)\ c(g)\:$;$\$ nonzero f.g. ideals are cancellable; $\$ f.g. ideals satisfy contains $\Rightarrow$ divides; $\:$ etc. It's been estimated that there are close to a hundred such characterizations known. See my post here for about thirty such characterizations.
- As it suffices to prove that the claim holds modulo any given maximal ideal, we are reduced to the easy case where our ring is a field. (See this MO answer of Georges Elencwajg and the accompanying comments.) - +1 I agree that it's worth explicitly mentioning this local view of the standard proof, i.e. that in MZ's answer (and in one of my posts in said sci.math thread Oct 6, 2008). Thanks much for the link to the interesting Mathoverflow discussion. – Gone Sep 11 '11 at 18:16

A slight variation on the radical one-liner: Note that two ideals $A$ and $B$ are comaximal (i.e. $A+B=R$) if and only if the ideal $A+B$ is not contained in any maximal ideal of the ring. Now take the ideal $A^{m} + B^{n}$. Claim: $A^{m} + B^{n} = R$. For if not, then $A^{m} + B^{n}$ must be contained in a maximal ideal $M$. But as $A^{m}$, $B^{n}$ are contained in $A^{m} + B^{n}$, we have $A^{m}$, $B^{n}$ contained in $M$. Since $M$ is prime, we get $A$, $B$ contained in $M$, a contradiction. Similar reasoning will show that $A^{m}$, $B^{n}$ comaximal implies that $A$, $B$ are comaximal. Muhammad -
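As a concrete instance of these arguments in the simplest setting, here is a small Python check (my own illustration, not from the thread): in $\mathbb Z$ the principal ideals $(a)$ and $(b)$ are comaximal exactly when $\gcd(a,b)=1$, and comaximality of the powers then amounts to $\gcd(a^m,b^n)=1$.

```python
from math import gcd
from itertools import combinations

# In Z, (a) + (b) = Z exactly when gcd(a, b) = 1, so comaximality of the powers
# (a^m) + (b^n) = Z amounts to gcd(a^m, b^n) = 1.
pairwise_comaximal = [6, 35, 143, 323]   # pairwise coprime integers (illustrative choice)

for a, b in combinations(pairwise_comaximal, 2):
    assert gcd(a, b) == 1                          # hypothesis: (a) + (b) = Z
    for m in range(1, 4):
        for n in range(1, 4):
            assert gcd(a**m, b**n) == 1            # conclusion: (a^m) + (b^n) = Z

print("all powers of the pairwise comaximal ideals are still pairwise comaximal")
```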
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 62, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9150294065475464, "perplexity_flag": "head"}
http://mathoverflow.net/questions/72807/sigma-n-version-of-hod/72809
## $\Sigma_n$ version of HOD ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) Fix a natural number, $n \geq 1$. Consider the class, M, of all sets hereditarily ordinal-definable using some $\Sigma_n$ formula. Since there is a universal $\Sigma_n$ formula, M is definable. Is M necessarily a model of ZF? It seems to me that it is closed under Godel operations and almost universal for the same reasons that HOD is, and therefore a model of ZF. But I feel like I'm missing something, since I've never heard anything about this model. If it is a model of ZF, where can I learn more about it? Has anybody done any research about it? How does it relate to HOD? Note: I require $n \geq 1$ because the formula witnessing that HOD is almost universal is $\Sigma_1$. - ## 1 Answer It follows from the Reflection Principle that every ordinal definable set is ordinal definable by a $\Sigma_2$-formula in the language of set theory. Indeed, if $A = \{x : \phi(x,\bar\alpha)\}$, then $$\exists\gamma\,(\gamma \in \mathrm{Ord} \land \bar\alpha \in \gamma \land \forall x\,(x \in A \leftrightarrow x \in V_\gamma \land V_\gamma \vDash \phi(x,\bar\alpha))).$$ Therefore, there is some $\gamma \in \mathrm{Ord}$ such that $$x \in A \leftrightarrow \exists U\,(U = V_\gamma \land x,\bar\alpha \in U \land U \vDash \phi(x,\bar\alpha)),$$ which is $\Sigma_2$ since $U = V_\gamma$ is $\Pi_1$. - I agree about `$\Sigma_2$` but not about the reason. The quantifier $\exists\gamma$ isn't relevant, since you're allowed an ordinal parameter (like $\gamma$) anyway. I think the relevant formula defining $x\in A$ is `$\exists z\,(z=V_\gamma \land z\models\phi(x,\alpha)$`. The clause $z=V_\gamma$ here is `$\Pi_1$`, so the whole formula is `$\Sigma_2$`. – Andreas Blass Aug 13 2011 at 3:46 In my preceding comment, add a final parenthesis to the "relevant formula", so that it's a formula. – Andreas Blass Aug 13 2011 at 3:48 Well, of course! Thanks for the correction. – François G. Dorais♦ Aug 13 2011 at 3:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9315144419670105, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/87466/list
## Return to Answer

The function has no real roots, so all its roots are complex numbers, in conjugate pairs. Thus we can factor it into terms of the form $((x-a_i)^2+b_i)$. First consider the product of all the $(x-a_i)^2$. This will be strictly smaller than the original polynomial. The difference will have degree $2d-2$. Our challenge is to replace each $a_i$ with a nearby rational number $d_i$ without making the difference negative anywhere. Since we can choose $d_i$ arbitrarily close to $a_i$, we can make the difference arbitrarily small, so our only concern is its rate of growth. This will be determined by the coefficient of $x^{2d-1}$, which will be $\sum 2(a_i-d_i)$. Since $\sum a_i$ is a rational, we can choose very nearby rationals satisfying $\sum d_i=\sum a_i$, and these will satisfy your inequality.

More formally, the coefficients of $P-\prod (x-d_i)^2$ are continuous functions of $d_i$, and where the $x^{2d-1}$ coefficient is $0$, the minimum value of $(P-\prod(x-d_i)^2)/(1+x^{2d-2})$ is a continuous function of the coefficients. (Adding or subtracting $\epsilon x^k$ changes the result by no more than $\epsilon \max |x^k/(1+x^{2d-2})|.$) So there must be some open ball in the hyperplane where $\sum a_i=\sum d_i$ where it is still positive. Choose a rational point in that open ball.
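A quick numerical illustration of this argument (my own sketch, not part of the answer; the example polynomial $x^4+1$ and the rational approximations $\pm 707/1000$ are choices made purely for illustration, and I am assuming NumPy's polynomial helpers behave as used below):

```python
import numpy as np

# P(x) = x^4 + 1 is positive on all of R and has rational coefficients.
# Its roots are the primitive 8th roots of unity, so the real parts of the two
# conjugate pairs are a_1 = sqrt(2)/2 and a_2 = -sqrt(2)/2, with a_1 + a_2 = 0.
P = np.polynomial.Polynomial([1, 0, 0, 0, 1])          # coefficients, lowest degree first

# Rational approximations d_i of the a_i, chosen so that d_1 + d_2 = a_1 + a_2 = 0.
d1, d2 = 707 / 1000, -707 / 1000

# Q = P - (x - d1)^2 (x - d2)^2 should have degree 2d - 2 = 2 and stay positive.
Q = P - np.polynomial.Polynomial.fromroots([d1, d1, d2, d2])
print(Q)                       # here Q(x) = (1 - d^4) + 2 d^2 x^2 with d = 0.707

xs = np.linspace(-100, 100, 200001)
print(Q(xs).min() > 0)         # True: the difference stays positive on a wide grid
```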
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 30, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9386196136474609, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/tagged/computational-complexity+elementary-number-theory
# Tagged Questions 1answer 31 views ### Computing the period of a fraction polynomial in the number of digits So I have a fraction a/b that is known to be repeating. How do I compute the period of the repeating decimal in polynomial-time in the number of digits of A and B? 1answer 98 views ### How to show that Eratosthenes sieve algorithm has a complexity of $O(n\log n)$ I know this is a loose upper bound, but I am in an entry level CS course that is just trying to get us used to evaluating algorithms. Any pointers on how to move forward on this problem? 0answers 54 views ### Calculate time complexity of modular arithmetic I wanted to calculate $a \mod n$. What is the time complexity of this? I'm using a Java program? Can anyone provide an explanation of the time complexity for the following calculations? c=a \cdot ... 1answer 35 views ### Why does it take maximum of $n/\log n$ digits to represent the number $2^n - 1$ in base of $n$? Given the number $n$. Why does it take maximum of $\frac{n}{\log n}$ digits to represent the number $2^n - 1$ in base of $n$? 1answer 58 views ### Number of solutions $x_1x_2\dots x_k = n, x_i, n \in \mathbb{N}$ Here's a question I've been asked: Let $n\in \mathbb{N}$ and let $d_k(n)$ be the number of solutions of $$x_1\dots x_k = n, \hspace{5mm}x_i\in \mathbb{N}$$ I need to show d_k(n) = ... 0answers 367 views ### The Average Running Time Of Euclid Algorithm? What is the average running time of Euclid Algorithm with respect to all possible input pairs $(m,n)$ such that $\gcd(m,n) = d$? It seems very hard to deduce from the recurrence \$T(m,n) = T(n, m ... 2answers 231 views ### An interesting way of producing positive integers If we define $$\cal N _1 := \{ 1\}$$ and by induction $$\cal N_{n+1}:=\{x\in \mathbb N | \exists a,b \in\cal N_n : x= a+b \text{ or }x=ab \text{ or }x=a^b \}$$ it's easy to prove that, for every \$m ... 2answers 403 views ### Factoring n, where n=pq and p and q are consecutive primes So in RSA, there is a modulus n which is the product of two primes. My question is regarding when p and q are consecutive primes, what would the time complexity be? So, n=pq and p and q are ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9285392761230469, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/55948/i-dont-understand-what-we-really-mean-by-voltage-drop/56198
# I don't understand what we really mean by voltage drop

This post is my best effort to seek assistance on a topic which is quite vague to me, so that I am struggling to formulate my questions. I hope that someone will be able to figure out what it is I'm trying to articulate.

If we have a circuit with a resistor, we speak of the voltage drop across the resistor. I understand all of the calculations involved in voltage drop (ohm's law, parallel and series, etc.). But what I seek is to understand on a conceptual level what voltage drop is. Specifically: what is the nature of the change that has taken place between a point just before the resistor and a point just after the resistor, as the electrons travel from a negatively to a positively charged terminal.

Now as I understand it, "voltage" is the force caused by the imbalance of charge which causes pressure for electrons to travel from a negatively charged terminal to a positively charged terminal, and "resistance" is a force caused by a material which, due to its atomic makeup, causes electrons to collide with its atoms, thus opposing that flow of electrons, or "current". So I think I somewhat understand voltage and resistance on a conceptual level. But what is "voltage drop"? Here's what I have so far:

• Voltage drop has nothing to do with number of electrons, meaning that the number of electrons in the atoms just before entering the resistor equals the number of electrons in the atoms just after

• Voltage drop also has nothing to do with the speed of the electrons: that speed is constant throughout the circuit

• Voltage drop has to do with the release of energy caused by the resistor.

Maybe someone can help me understand what voltage drop is by explaining what measurable difference there is between points before the resistor and points after the resistor.

Here's something that may be contributing to my confusion regarding voltage drop: if voltage is the difference in electrons between the positive terminal and the negative terminal, then shouldn't the voltage be constant at every single point between the positive terminal and the negative terminal? Obviously this is not true, but I'd like to get clarification as to why.

Perhaps I can clarify what I'm trying to get at with the famous waterwheel analogy: we have a pond below, a reservoir above, a pump pumping water up from the pond to the reservoir, and on the way down from the reservoir, the water passes through a waterwheel, the waterwheel being analogous to the resistor. So if I were to stick my hand in the water on its way down from the reservoir, would I feel anything different, depending on whether I stuck my hand above or below the waterwheel? I hope that this question clarifies what it is I'm trying to understand about voltage drop.

EDIT: I have read and thought about the issue more, so I'm adding what I've since learned: It seems that the energy which is caused by the voltage difference between the positive and negative terminals is used up as the electrons travel through the resistor, so apparently, it is this expenditure of energy which is referred to as the voltage drop. So it would help if someone could clarify in what tangible, empirical way we could see or measure that there has been an expenditure of energy by comparing a point on the circuit before the resistor and a point on the circuit after the resistor.

EDIT # 2: I think at this point what's throwing me the most is the very term "voltage drop".
I'm going to repeat the part of my question which seems to be still bothering me the most: "Here's something that may be contributing to my confusion regarding voltage drop: if voltage is the difference in electrons between the positive terminal and the negative terminal, then shouldn't the voltage be constant at every single point between the positive terminal and the negative terminal? Obviously this is not true, but I'd like to get clarification as to why."

In other words, whatever takes place across the resistor, how can we call this a "voltage drop" when the voltage is a function of the difference in number of electrons between the positive terminal and negative terminal? Now I've been understanding the word drop all along as "reduction", and so I've been interpreting "voltage drop" as "reduction in voltage". Is this what the phrase means? Since I've read that voltage in all cases is a measurement between two points, then a reduction in voltage would necessarily require four different points: two points to delineate the voltage prior to the drop and two points to delineate the voltage after the drop, so which 4 points are we referring to? Perhaps a more accurate term would have been "drop in the potential energy caused by the voltage" as opposed to a drop in the voltage?

EDIT # 3: I think that I've identified another point which has been a major (perhaps the major) contribution to the confusion I've been having all along, and that is what I regard as a bit of a contradiction between two essential definitions of voltage. When we speak of a 1.5V battery, even before it is hooked up to any wiring / switches / load / resistors / whatever, we are speaking of voltage as a function of nothing other than the difference in electric charge between the positive and negative terminals, i.e. the difference in excess electrons between the two terminals. Since there is a difference in number of electrons only in reference to the terminals, I therefore have been finding it confusing to discuss voltage between any other two points along the circuit -- how could this be a meaningful issue, since the only points on the circuit where there is a difference in the number of electrons is at the terminals -- so how can we discuss voltage at any other points?

But there is another definition of voltage, which does make perfect sense in the context of any two points along a circuit. Here we are speaking of voltage in the context of Ohm's law: current times resistance. Of course, in this sense, voltage makes sense at any two points, and since resistance can vary at various points along the circuit, voltage can clearly vary at different points along the circuit. But, unlike the first sense of voltage, where the voltage is a result of the difference in electrons between the terminals, when we speak of voltage between two points along the circuit, say, between a point just before a resistor and a point just after the resistor, we are not saying that there is any difference in number of electrons between these two points. I believe that it is this precise point which has been the main source of my confusion all along, and that's what I've been trying to get at all along.
And this is what I've been struggling to ask all along: okay, in a battery, you can tell me that there is a voltage difference between the two terminals, meaning that you can show me, tangibly and empirically, that the atoms at the positive terminal have a deficit of electrons, and the atoms at the negative terminal have a surplus of electrons, and this is what we mean by the voltage between the two, then I can understand that. But in contrast, I accept that there is voltage (I·R) between a point just before a resistor and just after a resistor -- but can you take those two points, the one before the resistor and the one after the resistor, and show me any measurable qualitative difference between the two? Certainly there is no difference between the number of electrons in the atoms of those two points. In point of fact, I believe that there is no measurable difference between the two points.

Ah, now you'll tell me that you can show me the difference between the two points: you'll hook up a voltmeter to the two points, and that shows the voltage between them! Sure, the voltmeter is telling us that something has happened between the two points. But the voltmeter does not tell us anything inherent in the points themselves -- unlike the two terminals of a battery, where there is an inherent difference between the two points: one has more excess electrons than the other -- that is a very inherent, concrete difference. I guess what we can say is that the electrons travelling at a point just before the resistor are travelling with more energy than the electrons travelling at a point just after the resistor. But is there any way of observing the difference in energy other than a device that simply tells us that the amount of energy has dropped between the two points?

Let me try another way: we could also hook up a voltmeter to the two battery terminals, and the reading would indicate that there is voltage between the two terminals. And if I would ask you yes, but what is it about those two points that is causing that voltage, you could then say, sure: look at the difference in electrons between the two points -- that is the cause for the reading of the voltmeter. In contrast, when we hook up the voltmeter to the points just before and after the resistor, the reading indicates a voltage between the two points. But in this case if I would now ask you the same question: yes, but what is it about those two points that is causing the voltage, I'm not sure if you'd have an answer. I think this crucially fundamental difference between the two senses of voltage is generally lost in such discussions. -
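(A minimal numerical sketch of the kind of circuit being discussed, added here for reference; the 9 V source and the two resistor values are arbitrary illustrative choices, and ideal wires are assumed. It tabulates what a voltmeter referenced to the negative terminal would read at each node: the current is identical everywhere in the loop, while the potential steps down across each resistor -- that step is the "voltage drop".)

```python
# A series loop: ideal 9 V battery, then R1, then R2, and back to the battery.
# With ideal wires, every point on a given wire segment sits at one common potential,
# and the same current flows through every component (charge does not pile up anywhere).
V_source = 9.0            # volts (illustrative value)
R1, R2 = 330.0, 670.0     # ohms  (illustrative values)

I = V_source / (R1 + R2)  # Ohm's law for the whole loop: one current everywhere

# Potentials measured relative to the battery's negative terminal:
nodes = {
    "battery + terminal, wire before R1": V_source,
    "wire between R1 and R2":             V_source - I * R1,
    "wire after R2, battery - terminal":  V_source - I * R1 - I * R2,
}
for name, v in nodes.items():
    print(f"{name:36s} {v:6.2f} V   (current here: {I * 1000:.1f} mA)")

# The "voltage drop across R1" is the difference between the first two node potentials,
# i.e. the energy given up per coulomb of charge passing through R1:
print("drop across R1:", I * R1, "V")
```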
– Nathan Reed Mar 9 at 22:04 "unlike the two terminals of a battery, where there is an inherent difference between the two points: one has more excess electrons than the other -- that is a very inherent, concrete difference." That is not really correct. When the battery is drained it is no longer true. – dmckee♦ Mar 9 at 22:48 Nathan: when we speak of a 1.5V battery or a 12V battery, if these varying voltages are not a result of the difference in number of electrons at the terminals, what then are the voltages a function of? – oyvey Mar 11 at 21:35 ## 9 Answers Perhaps I can clarify what I'm trying to get at with the famous waterwheel analogy 99 years ago, Nehemiah Hawkins published what I think is a marginally better analogy: Fig. 38. — Hydrostatic analogy of fall of potential in an electrical circuit. Explanation of above diagram • In this diagram, a pump at bottom centre is pumping water from right to left. • The water circulates back to the start through the upper horizontal pipe marked a-b • The height of water in the vertical columns C,m',n',o',D indicates pressure at points a,m,n,o,b • The pressure drops from a to b due to the resistance of the narrow return path • The pressure difference between a and b is proportional to the height difference between C and D Analogy • Pump = Battery • Water = Electric charge carriers • Pressure = Voltage • Vertical Pipes = Voltmeters • pipe a-b = Resistor (or series of four resistors) Note • A "particle" of water at a has a higher potential energy than it has when it reaches b. There is a pressure drop across a "resistive" tube. Voltage (electric potential) is roughly analogous to water pressure (hydrostatic potential). If you could open a small hole at points a,m,n,o,b in the tube and hold your finger against the hole, you would be able to feel the pressure at those points is different. The potential at some point is the amount of potential energy of a "particle" at that point. it would help if someone could clarify in what tangible, empirical way could we see or measure that there has been an expenditure of energy by comparing a point on the circuit before the resistor and a point on the circuit after the resistor. 1. Purchase a 330 ohm 1/4 watt resistor and a 9V PP3 battery 2. Place the resistor across the battery terminals 3. Place your finger on the resistor. 4. Wait. - Unfortunately, I don't really follow this diagram. Perhaps if you explained how it works... – oyvey Mar 6 at 20:36 @oyvey: See updated answer. – RedGrittyBrick Mar 7 at 11:23 Sorry, even with your explanation, I find the water pump analogy difficult both to follow on its own and to relate to electricity. – oyvey Mar 7 at 21:19 "Place your finger on the resistor". But read the part of my post that you cited just prior: "comparing a point on the circuit before the resistor and a point on the circuit after the resistor." – oyvey Mar 7 at 21:20 @oyvey: My problem was that the only way I know to measure flow of energy into, and out of the resistor involves using voltmeters and ammeters which is somewhat empirical but involves a somewhat circular argument as more resistors are introduced. Viz measure V & I at each end, multiply V.I to get Joules/sec at each end, subtract to get "expenditure of energy". I worried you would find this equally unsatisfying. – RedGrittyBrick Mar 7 at 21:53 The "drop" comes from the analogy of current being the flow of water and each difference in height that makes the water flow is a drop = a voltage difference. 
So voltage drop is just a difference in voltage across a component that makes a current flow. - Sorry, but this does not help me at all. – oyvey Mar 7 at 9:35

Consider a circuit with just a battery and a loop of wire. The wire has ideally zero resistance, which means the voltage between any two points on the wire will be zero! $V=IR= I\times0=0$ $^1$

Now consider the circuit you mention (i.e. with a resistor). The resistor will cause a decrease in current flow - following Ohm's law which you know so well. What does that mean? Well, the voltage across the resistor must be equal to that of the source. I.e. $|V_{resistor}|=IR=|V_{Battery}|$. Note that ideally, no voltage drop is present if you probe the voltage across any segment of wire (just as before).

Why is it called a voltage drop? The sum of all voltages in a loop must yield zero. So technically we would have $V_{battery} +V_{resistor}=0$. (See Kirchhoff's circuit laws, specifically Kirchhoff's Voltage Law)

Speaking of 'voltage drops' is just an easy way to say that power from the source battery is going somewhere - as you pointed out, a "release of energy". So in more complicated circuits you may keep track of things in terms of voltage drops across the various components.

Edit: Further example. Consider now a circuit with two resistors in series. The voltage drop across both in total must be as we discussed before, but the voltage drop across each individual resistor will be weighted by its value. $V_{battery}=I(R_1+R_2)$ Voltage drops: $V_1=IR_1$ $V_2=IR_2$

$^1$ In real circuits, wires have a really low resistance. -

The poster said that he/she understood the simple calculations that go along with voltages in circuits "(ohm's law, parallel and series, etc.)". Since your answer just restates the simplest of these with the typical black-box-don't-worry-about-the-physics type of answer, this is not answering the question asked at all. I believe the OP would like an explanation of the actual physics behind differences in voltages when current passes through simple circuit components. Such as the energy lost to collisions with the material of the resistor when electrons pass through and so on. – user44430 Mar 5 at 21:49 user444320: absolutely, this post simply restates the formulae I'm already familiar with from all of the usual basic electricity texts, and doesn't address the issue I asked at all. – oyvey Mar 6 at 20:34 I take it back. I re-read this post (actually, I've been re-reading all of them several times) and it was somewhat helpful. My apologies. Fire: can you please read my remarks in edit # 2? – oyvey Mar 9 at 12:00 I might need to think about this more but I've read Edit 2 a few times. Certainly a battery works by a separation of charge, but when in a circuit I think one must be careful about thinking about things in terms of "difference in number of electrons". It's all about flow (current). The Voltage is the potential difference which allows flow to occur (think gravitational potential difference allowing an object to fall from high to low). What if something impedes this motion? In our classical mechanics example we could introduce a viscous liquid or pinball maze to divert flow. – Fire Mar 9 at 21:37 (Part II) Energetically we could talk about an effective potential which now takes into account this new resistive element. It mightn't be a bad idea to think about voltage like this. I can't get around it - but the basis of this concept is $V=IR$ which is why my answer focused on it.
Note it is the relationship between a flow, impedance and potential difference. One needs to think about the entire circuit at once, macroscopically - I don't know what happens to individual electrons. Once the battery is connected, practically instantly the circuit will reach a steady state of flow. – Fire Mar 9 at 21:39

I trust that you've read the Wikipedia pages on voltage and voltage drops. Anyhow, this isn't particularly rigorous but it helps with the intuition. A resistor, as the name implies, tries to resist current flow through it. What that really means is that there are fewer "free" electrons in the material to help with the flow of current. If the electrons are tightly bound to the atom, they tend to not want to move, so there's more resistance to current flow. A voltage difference is the electric potential difference between two points on the circuit, and the current flows in a direction in which the potential difference can be minimized.

So when current is flowing through a resistor (Note: In a circuit, the wires are usually assumed to be of zero resistance), it finds it hard(er) to flow across the resistor, but there's still an energy flowing "into" the resistor. And we all know that energy has to be conserved at all costs. So what effectively happens is that some of this energy is lost when the current flows across the resistor, either because it spent energy in trying to get those tightly bound electrons to leave their atoms or in the form of heat. This means that on the other side of the resistor, there's been some energy lost, which really means that the voltage at that point is lower than at the point before the resistor. So there's less "push" for the electron to get to the side with lower potential, because of the loss of energy. - Kitchi, can you please see what I wrote in "Edit # 2" and see if it clarifies what I'm trying to get at? – oyvey Mar 9 at 5:41 @oyvey - Does this help clarify this to some extent for you? Or do you need me to elaborate more on why there is a potential difference between the two points? The analogy with the battery can be extended to this case, if modified slightly. There is a difference in voltage between the two ends. But in your question it seems like you think that the difference in voltage in a battery is significant, but here it isn't... I'm a little confused about your position on this question. – Kitchi Mar 9 at 18:10

> "voltage" is the force caused by the imbalance of charge which causes pressure for electrons to travel from a negatively charged terminal to a positively charged terminal

Nope, voltage is not a force. Voltage is a difference in potential energy per unit charge. More precisely: electric potential is the potential energy per unit charge (just like $gh$ is the gravitational potential energy per unit mass), and a voltage (a.k.a. voltage difference a.k.a. voltage drop) is a difference in electrical potential between two points. The actual value of electric potential at any point has no physical meaning; only its difference relative to the electrical potential at some other point, i.e. the voltage, is meaningful or measurable. This means the whole idea of voltage is inherently bound to a choice of two points. There's no measurement you can make at a single point only that will tell you anything about voltage or electric potential. However, if you have two points, you can determine the voltage between them by pushing a unit charge from one point to the other and measuring how much work it takes (or gives).
This is how we can establish voltages in a circuit with resistive elements: move a charge through the circuit from one point to another and see how much energy needs to be put in to get it there. The reason it takes energy is fundamentally complicated, having to do with quantum mechanical effects, but as a rough classical model, you could say that the electrons lose energy from colliding with the atoms and molecules of the resistive material, and you need to put in enough energy to make up for those losses. - "Nope, voltage is not a force." From the website allaboutcircuits.com: 'The force motivating electrons to "flow" in a circuit is called voltage.' – oyvey Mar 7 at 21:01 1 @oyvey that website is wrong. Voltage is absolutely not a force. – David Zaslavsky♦ Mar 7 at 22:36 I think that the website and I are using "force" in a much looser way than you are. – oyvey Mar 8 at 11:47 3 Perhaps so. In that case I would suggest that, at least for purposes of this question, you use "force" to mean what it actually means in physics, rather than whatever looser definition you may have in mind. – David Zaslavsky♦ Mar 8 at 19:18 @DavidZaslavsky, you're right. But if you subject the electron to an electric field (the field being caused by a potential difference), the electron will feel a force according to the strength and sign of the field, and that force is what causes the electron to move. Which I think is in the same spirit as what oyvey is trying to say. I just think that the terminology, though important, can be given some leeway in this case, because it's not what is being asked about? – markovchain Mar 10 at 4:43

Voltage is an electrical potential (relative to some arbitrary value called "ground"). What it means is that if I take an electron from ground and move it to a point with voltage $V$ it requires that I do work $W = V e$ (here $e$ is the magnitude of the charge on the electron) because there was a force $\vec{F} = q\vec{E}$ due to the electric field $\vec{E}$ along the path and $W = \int \vec{F} \cdot d\vec{s}$; conversely, moving the electron from a point of potential $V$ to a point at the potential of ground gets back the same amount of energy. This is exactly like the gravitational potential energy of mass $m$ relative to its position on the floor. $V$ is analogous to $gh$ in introductory mechanics (where the work of lifting the mass is $W = mgh$ and the force is $F_g = -mg\hat{z}$).

The thing to note is that $V$ is a property of a particular position at a particular time, and if you look at two points along a circuit and find that the one "further along" has lower potential then you can say that the potential "dropped" by $V_1 - V_2$ between points 1 and 2. That is the whole meaning of "voltage drop", but it does not explain the microscopic physics that are responsible for the change. Like many other things in physics it is easier to get a handle on the physics if you will just agree to accept the meaning of the symbols and vocabulary before you begin. When your understanding matures it will be clear that the definitions are internally consistent and useful for performing calculations. -

I think what you're getting at is a question about the energy associated with the electron shell states of a material before a resistor and after a resistor. So in this sense one can think in terms of relative ionization in two different materials. Good thought experiments in this regard are to consider how a battery works, and then consider how a transistor works.
In a battery (as you probably already know), the potential energy is stored chemically, and it is the reaction of two materials when there is an ion bridge that allows for the reaction to occur and give off energy. The important piece of course being the bridge that completes the circuit and allows for a flow of ions. So in any case, there are higher entropy states (more energetically favorable) available to the system, and flow of electricity can be seen as a response to achieve the more energetically favorable conditions. So a voltage drop can be understood to not only involve a change of energy, but also a change in entropy as well.

In the second example, the transistor, one of the layers is doped so that there is a natural bias in the distribution of the electrons. This bias can be used as a resistance to current flow, and in most cases transistors are used as switches and changing the voltage in the "gate" allows one to control current flow. Again, this is a change in the distribution of the electrons in the "shells" of the relevant ions. A voltage drop then is viewed as a change in charge, or ionization, across a resistor. It is a direct control of the ionization that allows for flow.

This brings up the point that one of the surest ways to determine if a circuit design is faulty is to look for instances in the diagrams where there are missing grounds. Current will not flow if there is no ground which would cause a more energetically favorable condition in the circuit.

If this is helpful, I can expand, but I think several of us are struggling with understanding what you are after.

Update: The question now appears to be whether one can tell a difference between two points across the resistor other than using a voltmeter? What can you tell me about the difference between those two points? It helps to understand a little about Energy, Power, Voltage, Current and Charge. In its simplest mathematical definition, Energy is the product of Power and time. $$E=Pt$$ where time is actually an interval so that $$E=P(t_2 - t_1)$$ Power is most simply defined as: $$P = \dfrac{QV}{(t_2 - t_1)}$$ where $Q$ is charge and $V$ is voltage.

Charge is actually very well defined:

> More abstractly, a charge is any generator of a continuous symmetry of the physical system under study. When a physical system has a symmetry of some sort, Noether's theorem implies the existence of a conserved current. The thing that "flows" in the current is the "charge", the charge is the generator of the (local) symmetry group. This charge is sometimes called the Noether charge. Thus, for example, the electric charge is the generator of the U(1) symmetry of electromagnetism. The conserved current is the electric current. In the case of local, dynamical symmetries, associated with every charge is a gauge field; when quantized, the gauge field becomes a gauge boson. The charges of the theory "radiate" the gauge field. Thus, for example, the gauge field of electromagnetism is the electromagnetic field; and the gauge boson is the photon. Sometimes, the word "charge" is used as a synonym for "generator" in referring to the generator of the symmetry. More precisely, when the symmetry group is a Lie group, then the charges are understood to correspond to the root system of the Lie group; the discreteness of the root system accounting for the quantization of the charge.
It should be added that charge is quantized in the real world, and electrons carry a fundamental unit of charge; however, it is not necessarily electrons that are flowing when we talk about a circuit, rather it is the more abstract conserved quantity of charge that is flowing. The electrons may move in the circuit, but the physical distance they are moving will be very small. The charge is moving, and since charge by itself is effectively massless, it can move very rapidly in the circuit; however, it is still fundamentally quantized.

Looking back to the Power equation: $$P = \dfrac{QV}{(t_2 - t_1)}$$ One can see that since energy is the product of power and time, one can derive energy as the simple product of charge and voltage. $$E = QV$$ So the difference of energy between two points is $$E_2 - E_1 = (QV)_2 - (QV)_1$$ However, charge is conserved, which means that the number of fundamental charges associated with a current flowing between two points must be the same at the beginning and end of the flow. Therefore we can assume $Q$ is constant and we have: $$E_2 - E_1 = Q(V_2 - V_1)$$ This tells us that the change in energy between two points is directly proportional to the change of voltage (or drop in voltage) between the two points.

Here we see the similarity of this equation to the equation for the potential energy in a gravitational field. If we define weight as: $$\text{Weight} = F_g = mg$$ where $m$ is mass and $g$ is the acceleration due to gravity, then the change of energy between two points in a gravitational field is: $$E_2 - E_1 = F_g(h_2 - h_1)$$ where $h$ is height. Here the similarity of the two equations should be evident.

Classically the units of charge (coulomb, C) are given as the product of the unit of capacitance (farad, F) and the unit of voltage (V). $$1C = 1F \times 1V$$ where capacitance is simply a proportionality constant relating charge and voltage, which is more clear in the time varying equation: $$i(t) = C \dfrac{dv(t)}{dt} = Cv'(t)$$ The energy (or equivalently work) emitted by a resistor over time is: $$W = E = \int_{t_1}^{t_2} i(t)v(t) dt$$ Substituting, we have $$W = E = \int_{t_1}^{t_2} Cv'(t)v(t) dt$$ As we have shown above the voltage is analogous to height (or position), so $v'(t)$ would be analogous to velocity (time varying position). If we look at the integral for mechanical work (where force and velocity $s'(t)$ are in one dimension): $$W = E = \int_{t_1}^{t_2} F(t) s'(t) dt$$ One could regroup the electrical version as: $$W = E = \int_{t_1}^{t_2} Cv(t)v'(t) dt$$ This suggests that the quantity $Cv(t)$ is analogous to force (but it is not force, since force is a mechanical term and the units here are different; the relationship is shown here so that an analogy can be understood). Charge can also be written as $$Q(t) = Cv(t)$$ So our integral becomes $$W = E = \int_{t_1}^{t_2} Q(t)v'(t) dt$$ Charge again can be considered a constant since it must be conserved, so we can write: $$W = E = Q\int_{t_1}^{t_2} v'(t) dt$$ and since $$\int_{t_1}^{t_2} v'(t)\, dt = v(t_2) - v(t_1)$$ we can write $$W = E = Q[v(t_2) - v(t_1)]$$ So again we find that work or energy is proportional to a change in voltage with respect to some variable, where the proportionality constant is the charge.
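A quick numeric instance of the relation just derived (my own sketch, not part of the answer; the 9 V and 330 Ω figures simply echo the resistor experiment suggested in an earlier answer): the charge that passes through the resistor in one second, times the voltage drop across it, reproduces the familiar power formula $P = V^2/R$.

```python
V_drop = 9.0    # voltage drop across the resistor, in volts (illustrative value)
R = 330.0       # resistance in ohms (illustrative value)
t = 1.0         # time interval in seconds

I = V_drop / R              # current through the resistor, from Ohm's law
Q = I * t                   # charge that flows through during the interval t
E = Q * V_drop              # energy released, E = Q * (V2 - V1), as derived above

print(f"I = {I * 1000:.1f} mA, Q = {Q:.4f} C, E = {E:.3f} J in {t} s")
print(f"check against P*t = V^2/R * t: {V_drop**2 / R * t:.3f} J")   # same number, ~0.245 J
```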
This relation between energy, charge and voltage is enshrined in the definition of the electronvolt as a unit of energy, where

Historically, the electron volt was devised as a standard unit of measure through its usefulness in electrostatic particle accelerator sciences, because a particle with charge q has an energy E=qV after passing through the potential V; if q is quoted in integer units of the elementary charge and the terminal bias in volts, one gets an energy in eV.

The point of all the above is to illustrate the following:

1. Voltage is not a force, it is a potential analogous to height in a gravitational field.
2. By means of analogy, if voltage is a potential, then charge would be the analogy to force, but it is not "force" as used in physics, since that is a mechanically defined term and is equivalent to mass times acceleration.
3. Charge is quantized, and it is also a conserved quantity.

So we can readily relate a change in voltage to a change in energy. So when there is a voltage drop across a resistor, there is a change in energy. For the resistor this energy is usually given off as heat. If there is no change in voltage, then there will be no change in energy. So in a circuit, the energy contained in the wire before the resistor is greater than the energy contained in the wire after the resistor. Again, this is reflected in a change in the configurations accessible to the electrons in the wire. I hope this is helping. Let me know if we need to expand more. -

Hal, can you please see what I wrote in "Edit # 2" and see if it clarifies what I'm trying to get at? – oyvey Mar 9 at 5:40

@oyvey I think this will help. – Hal Swyers Mar 9 at 19:08

I'm not a physicist, rather a simple engineering student, and I believe your confusion has confused me, too. That's great, because it made me look for the answers myself. This is just my personal guess (though grounded in some fact), but it makes a lot of sense to me, and I hope it makes sense to you, too. Bear with me if this is a long answer.

First off, when we speak of the difference in the number of electrons just before and just after a resistor, you're right: there is fundamentally no difference in the number of electrons. We know this because the number of electrons (current) that comes in must be equal to the number of electrons (current) that comes out. This is one of Kirchhoff's laws. Moreover, we know that matter doesn't just disappear and it isn't being converted into energy, either. Of course, the second of Kirchhoff's laws tells us that the sum of all the voltages in a circuit must be zero (so, in a simple circuit, the initial voltage from the battery minus all the voltage drops from all the resistors is zero). That tells us that something must already be happening within those resistors that has nothing to do with the battery. So that's a good start.

Then, we should also understand that current and voltage are not the same thing. That is, if you already have a flowing current, that current does not have its own voltage. You might need voltage to jumpstart the flow of current, but once it's already flowing, the current exists as the jumping of one electron across ions, and each electron is only a single point -- it doesn't have voltage because it's only one point. More importantly, current flows uniformly, so there is no voltage that is being generated as it does flow. Whether it is just before or just after the resistor, the current is the same there, and it produces no voltage. Here's a drawing I made for you: I drew it as it really is, against the convention.
Notice that, despite there being a pre-existing voltage difference between the two batteries, the distribution of electrons is uniform. So, if you tried to take a voltage difference between the first electron and the fourth electron, you get a voltage difference of zero, even if they are flowing, because their distribution will continue being uniform. This, I think, is the equivalent of measuring the voltage across a wire with no resistor.

Because I think it will be useful in analyzing what happens across the resistor, I'd like to point out that the battery is the initial source of voltage. It's made so that one end is much more electronegative than the other, and that one pole has a large supply of electrons whereas the other has a large deficit. So, if you link the two ends together, the electrons flow from the less electronegative end (with more electrons) to the more electronegative end (with fewer electrons). In the process, the difference in electrons from pole to pole equalizes. Moreover, the chemical reactions happening inside the battery make it so that both poles (anode for negative, cathode for positive) degrade -- unless it's a rechargeable battery, in which case this reaction is reversible.

But that doesn't quite spell out why there is a voltage drop across the load. If the voltage difference exists between the poles, then shouldn't the same voltage difference exist all throughout the circuit? The answer seems to be yes... but only if there is no load. So, we have to go back and ask, what is a voltage drop, and where does it come from?

I think your question really is, what happens to the energy at a molecular level? It doesn't just disappear. When the voltage drops, what does it mean? If the same number of electrons enter the resistor and then leave it, doesn't their energy stay the same? First, we know that if their energies are the same, then the voltmeter will tell us there is no voltage. We also know that, if their energies are the same, there is no difference between the two points. And finally, if their energies are the same, we know that no energy was lost, and therefore, no work was taken out of the system. But we do know that work was taken out of the system, and we can see the voltmeter giving us a reading. So if the number of electrons is the same across the resistor, the energy must be lost some other way. But where did that energy come from?

We go back to what voltage is -- it is the amount of energy per unit charge. One unit of charge is the electron. So voltage is just the measure of how much energy each electron carries. There are two separate energies in the electron -- its rest energy and its momentum energy. Its rest energy is constant, but its momentum energy is definitely not. An electron carries energy by its orbital and shell. That is, the more energy an electron has, the higher up it goes into its shell. I think this is because it has more kinetic energy, so it tends to move faster. Moving faster causes it to occupy a larger volume, and so for the nucleus to keep the electron, there must be more pull to keep it in place. I'm imagining it's like the orbit of a planet around the sun, where wider orbits signify more total energy in the system. The orbitals are (I'm guessing, but fairly certain) caused by the discrete nature of energy -- photons are the most irreducible form of energy, so an electron can only jump between orbitals by discrete amounts.
So, inside the battery, the electrons feel a potential difference between the different poles of the battery. This potential difference imparts energy into the electron, causing it to jump to a higher orbital. It then travels through the atoms in the wire until it reaches the resistor. Inside the resistor, energy is taken out of the electron. The electron jumps to a smaller orbital, and in the process it releases a photon. This release of a photon means it has given off energy. The photon, having energy, is converted to work and waste. The electrons, now occupying a smaller orbital, exit the resistor with less energy. Because their energy is smaller, their voltage drops. And this, I think, is the reason why it is called a voltage drop.

This answer raises the question, though -- how does a voltmeter detect this voltage drop, if it occurs in the orbital shells of an atom? And the answer is interesting: it doesn't. At least, not directly. From what I can read, what voltmeters do is measure the deflection of a pointer against a spring. The pointer, in turn, is moved by the repulsive forces of the electrons on a pivoting wheel inside the voltmeter. And the repulsive forces are, of course, directly proportional to the amount of energy inside the electrons. Because we know the equations of energy, all this is calibrated. What the voltmeter actually measures is not the voltage itself, but rather the force that the voltage difference exerts on the pointer of the voltmeter. I can't say much more on how the voltmeter works. I'm frankly not so sure if that's true, but I know that the moving current with a certain voltage must work in some way like I just said.

When the electron comes back home to the cathode (because we're thinking in terms of actual electrons moving, rather than the conventional electron holes moving), its voltage will not be zero, but rather will be equal to the energy per electron in the cathode. Kirchhoff's law tells us that at every load, the voltage difference decreases due to the voltage drop. Because the electrons still have energy inside the cathode, this doesn't mean the voltage is zero there (causing electrons to have no energy, which intuitively makes them have no momentum, which can't be true). It just means that the energy when the electron comes back is now the same as the energy at the cathode, so there is no more potential difference and the journey of the electron stops.

If I've made any glaring errors here (and I'm bound to have, as most of this is me guessing and trying to make sense of what I know), please feel free to edit me. And if I mis-explained something, please point out where in a comment, so I and everybody else can see where it's wrong and why. Thank you!

EDIT: I realize I used voltage in two ways here: the difference in potential between two points, and the energy per unit charge. One definition requires two points to be defined, whereas one requires only one point. I think we can reconcile this by imagining that there is the same voltage everywhere. Each electron has exactly the same amount of energy. In such a case, there is no voltage difference. There is no potential difference, meaning there is no gradient which will cause the electrons to move. So I guess I'm saying, voltage can be a point quantity, whereas the potential difference, or voltage difference, requires two points to be defined.
Certainly, this is consistent with the definitions of voltage (and as I remember in class, voltage by itself is a scalar, so it only requires itself to be defined, not another point; the gradient or difference is what requires another point). -

"One definition requires two points to be defined, whereas one requires only one point." There is an implicit definition of some other set of points which mark $V=0$ (AKA "ground"). They really are the same usage. – dmckee♦ Mar 10 at 6:57

Thanks. I realize that now (the voltage at that one point is just the work required to pull it from infinity, right?), but in conceptual usage it sounded like only one point. – markovchain Mar 10 at 10:00

Voltage is just a way of talking about energy per unit charge. What a battery guarantees is that charge entering from the negative terminal will be raised in potential energy (through an electrochemical process) to a specified higher energy at the positive terminal. What does it mean to give charges energy? It's the same as saying that a bowling ball has more energy at rest at a height of 10 feet than it does at rest on the ground. A battery pushes the charge that comes in from the negative terminal against the electric force, thus giving those charges energy. The energy given is proportional to the amount of charge. Talking about voltage, then, is just a convenient way of talking about a battery in terms of one quantity: how much energy it bestows per unit charge. The reason a battery can do this is the chemical reaction going on within. In a battery, the electric field points opposite the direction of positive current. If there were only this E-field alone, movement of current from the negative to positive terminals would not be possible, but the chemical reaction allows it. Conversely, a resistor is an object whose electric field points along the direction of current, so that currents moving through the resistor lose electrical energy.

Here's an analogy for you: a battery is an escalator. Put a ball at the base of an escalator, and the ball will go to a higher gravitational potential due to the escalator's influence, even though gravity always points down. Note that the escalator is able to do this because it provides energy from some other source--in this case, it is powered--rather than from the gravitational field. This is how it is able to do work against gravity. A resistor, on the other hand, is like a path from the top floor back down to the base of the escalator.

Batteries do not produce any difference in the number of charge-carriers at the two terminals. Similarly, you can feed a constant rate of objects to an escalator and it produces no difference in the number of objects between the top and ground floors. The current of such objects is taken to be steady. For this reason, there is no fundamental difference between the voltage measured across a battery and the voltage measured across a resistor. They both measure electrical energy per unit charge flowing through the meter.

Yes, charges flow through a voltmeter. A voltmeter does not measure potential across two points as much as the change in energy per unit charge from the input to the output of the voltmeter. In this sense, adding a voltmeter to a circuit fundamentally changes the circuit--it does not strictly tell you about the original circuit--but you can quantify the error it induces. In general, you can only use a voltmeter reliably when it has much greater resistance than what you're trying to measure across. -
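To put a number on that last point, here is a small sketch (my own, with made-up component values, not from the answer above) of how a voltmeter's finite internal resistance perturbs the reading it takes across one resistor of a simple divider; the loading error becomes negligible only when the meter resistance is much larger than the resistance being measured across.

```python
# Minimal sketch (assumed component values): the reading a voltmeter takes
# across the lower resistor of a divider, as a function of the meter's own
# internal resistance.
def divider_reading(Vs, R1, R2, R_meter):
    # While measuring, the meter resistance sits in parallel with R2.
    R2_loaded = R2 * R_meter / (R2 + R_meter)
    return Vs * R2_loaded / (R1 + R2_loaded)

Vs, R1, R2 = 9.0, 10e3, 10e3            # 9 V source, 10 k / 10 k divider
true_V2 = Vs * R2 / (R1 + R2)            # 4.5 V with an ideal (infinite-R) meter

for R_meter in (10e3, 100e3, 10e6):      # meter resistances to compare
    print(R_meter, divider_reading(Vs, R1, R2, R_meter))
# The reading approaches the true 4.5 V only when R_meter >> R2.
```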
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 33, "mathjax_display_tex": 20, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9582750201225281, "perplexity_flag": "head"}
http://quant.stackexchange.com/questions/44/what-methods-do-you-use-to-improve-expected-return-estimates-when-constructing-a/73
# What methods do you use to improve expected return estimates when constructing a portfolio in a mean-variance framework?

One of the main problems when trying to apply mean-variance portfolio optimization in practice is its high input sensitivity. As can be seen in (Chopra, 1993), using historical values to estimate returns expected in the future is a no-go, as the whole process tends to become error maximization rather than portfolio optimization.

The primary emphasis should be on obtaining superior estimates of means, followed by good estimates of variances.

In that case, what techniques do you use to improve those estimates? Numerous methods can be found in the literature, but I'm interested in what's more widely adopted from a practical standpoint. Are there some popular approaches being used in the industry other than the Black-Litterman model?

Reference: Chopra, V. K. & Ziemba, W. T. The Effect of Errors in Means, Variances, and Covariances on Optimal Portfolio Choice. Journal of Portfolio Management, 19: 6-11, 1993. -

## 3 Answers

Short of having a 'reasonable' predictive model for expected returns and the covariance matrix, there are a couple of lines of attack.

1. Shrinkage estimators (via Bayesian inference or Stein-class estimators)
2. Robust portfolio optimization
3. Michaud's Resampled Efficient Frontier
4. Imposing norm constraints on portfolio weights

Naively, shrinkage methods 'shrink' (of course, no?) your estimates, arrived at using historical data, toward some global mean or some target. Within the mean-variance framework, you can use shrinkage estimators for both the expected returns vector and the covariance matrix. Jorion introduced the application of a 'Bayes-Stein estimator' to portfolio analysis. Bradley Efron and Carl Morris have a paper on the James-Stein estimator. Alternatively, you can stick to the global minimum variance portfolio, which is less susceptible to estimation errors (in expected returns), and use either the sample covariance matrix or a shrunk estimate.

Robust portfolio optimization seems to be another way 'nicer' portfolios can be constructed. I haven't studied this in any detail, but there's a paper by Goldfarb & Iyengar.

Michaud's Resampled Efficient Frontier is an application of Monte Carlo and bootstrap methods to addressing the uncertainty in the estimates. It is a way of 'averaging' out the frontier, and it is perhaps best to read Michaud's book or paper to know what they really have to say.

Finally, there might be a way to directly impose constraints on the norm of the portfolio weight vector, which would be equivalent to regularization in the statistical sense.

Having said all that, having a good predictive model for E[r] and Sigma is perhaps worth the effort.

References:

Jorion, Philippe, "Bayes-Stein Estimation for Portfolio Analysis", Journal of Financial and Quantitative Analysis, Vol. 21, No. 3, September 1986, pp. 279-292.

Jorion, Philippe, "Bayesian and CAPM estimators of the means: Implications for portfolio selection", Journal of Banking & Finance, Volume 15, Issue 3, June 1991.

Grauer, Robert R. and Hakansson, Nils H., "Stein and CAPM estimators of the means in asset allocation", International Review of Financial Analysis, Volume 4, Issue 1, 1995, pp. 35-66.

Goldfarb, Donald and Iyengar, Garud, "Robust Portfolio Selection Problems", Math. Oper. Res. 28(1): 1-38 (2003).

Michaud, R. (1998). Efficient Asset Management: A Practical Guide to Stock Portfolio Optimization, Oxford University Press. -

Thanks for a detailed answer.
Haven't heard of Michaud's approach at all, so I will have to get into this paper in some spare time. And indeed - good return estimates are a value in themselves. Heck, I wouldn't need fancy theories when I could absolutely trust my return forecasts. ;-) – Karol Piczak Feb 1 '11 at 22:04

Michaud's re-sampling approach has perverse behavior (namely, it assigns higher weights to assets that have high volatility in some circumstances). See Bernd Scherer's "Robust Portfolio Optimization", or Martin/Scherer's text for the details. There is also no theoretical basis for re-sampling. For this reason I would avoid re-sampling. – Quant Guy Aug 1 '11 at 23:51

Both answers from Shane and Vishal Belsare make sense and detail different models. In my experience, I have never been satisfied by a single model, since the majority of papers out there can be split into two categories:

1. Those that predict the mean component of the problem.
2. Those that predict the variance component of the problem.

The ideal (read "practical") model would be the one that allows you to incorporate your own views in both the expectation of returns and the variance. On the expected returns, Black-Litterman seems interesting since it enables you to take a relative point of view on the expectations, which is far more stable and less risky than absolute expected returns.

On the variance side, you can use two variance matrices. Theoretically, this would be done using a Markov regime-switching regression or a 2-state regression. There is enough literature on the Markov switching model that you can read; the latter model is simpler and easier to use. It consists in considering the returns of your assets as a mixture of two normal distributions, one that explains the returns in the quiet state of the market and the other the hectic state of the market. The result of such a regression would be a variance matrix conditioned on the state of the market. (You can then use the VIX as a proxy for the state of the market in order to choose between the two.) I have tried different models in the past, but, in my opinion, this framework seems to be ahead of the theoretical ones.

I'll add some references that may be of interest:

Kim, J. and Finger, C., A Stress Test to Incorporate Correlation Breakdown, Journal of Risk, Spring 2000.

McLachlan, G. and Basford, K., Mixture Models: Inference and Applications to Clustering, Marcel Dekker Inc., 1988. -

You raise a very important point, which unfortunately doesn't have a simple answer. Black-Litterman addresses the allocation problem by allowing you to provide a prior within a Bayesian framework. It doesn't really tell you how to produce the prior itself. But more importantly, it doesn't address the fundamental problem: it's difficult to accurately predict expected returns. So, you can improve this by having a better model to predict the expected returns besides assuming a static, simple linear model ("this was the mean return over the last $n$ years"). But improving it is the big challenge in finance in general. And standard textbook models haven't done too much to improve the situation; the most success in time series modeling has been around volatility prediction (e.g. with some of the GARCH models), which addresses the variance part of the problem. But ARIMA and other time series models have had mixed success when trying to predict returns for financial assets. -
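As a rough illustration of the regime-conditional covariance idea described above (my own sketch, not from any of the answers; the data, the VIX threshold and the dimensions are all placeholders), one can estimate separate covariance matrices on quiet and hectic days and then select between them with a VIX proxy:

```python
import numpy as np

# Sketch only: condition the covariance estimate on a quiet/hectic market
# state, using a VIX threshold as the regime proxy.  All inputs are fake.
rng = np.random.default_rng(0)
returns = rng.normal(size=(500, 4))      # daily returns, 4 assets (placeholder)
vix = rng.uniform(10, 40, size=500)      # matching VIX levels (placeholder)

threshold = 25.0                          # assumed quiet/hectic cut-off
quiet  = returns[vix <  threshold]
hectic = returns[vix >= threshold]

cov_quiet  = np.cov(quiet,  rowvar=False)
cov_hectic = np.cov(hectic, rowvar=False)

def current_cov(vix_today):
    """Pick the regime-conditional covariance for today's VIX reading."""
    return cov_hectic if vix_today >= threshold else cov_quiet

print(current_cov(30.0).shape)            # (4, 4)
```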
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.908348798751831, "perplexity_flag": "middle"}
http://nrich.maths.org/6951/solution
# Exploring Simple Mappings

##### Stage: 3 Challenge Level:

Ed from St Peters College noticed that:

If the numbers aren't changed ($\times2$ and $+3$), the final answer will always be odd. When the values of the $2$ and $3$ are changed, from an even and an odd, to an odd and an even, the answer will be the same type as the number placed in the input box at the start: if the input is even, the answer will be even; if the input is odd, the answer will be odd.

Thomas from Wilson's School noticed that:

The higher the numbers you are multiplying by, the steeper the gradient of the graph. The addition or subtraction determines where the graph cuts the y-axis.

Rajeev from Fair Field Junior School in Radlett summarised his findings clearly here. He started to consider the equations of perpendicular lines. You can go to Perpendicular Lines if you would like to explore this further.
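For a quick check of Ed's parity observation (this snippet is mine, not part of the submitted solutions), tabulating the two mappings for a few inputs shows that $2n+3$ is always odd while $3n+2$ keeps the parity of the input:

```python
# Tabulate the "times 2, add 3" mapping and the "times 3, add 2" mapping.
for n in range(6):
    print(n, 2 * n + 3, 3 * n + 2)
# 2n+3 -> 3, 5, 7, 9, 11, 13 (all odd)
# 3n+2 -> 2, 5, 8, 11, 14, 17 (even when n is even, odd when n is odd)
```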
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 4, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9216568470001221, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/84266/on-robins-criterion-for-rh/84285
## On Robin’s criterion for RH [closed]

\begin{equation} \sigma(n) < e^\gamma n \log \log n \end{equation}

In 1984 Guy Robin proved that the inequality is true for all n ≥ 5,041 if and only if the Riemann hypothesis is true (Robin 1984). I could not get a hold of his paper. I am trying to understand how he derived the inequality. Can anyone outline the steps by which Robin derived this criterion? -

1 It turns out that the OP asked the same question on math.SE, where someone supplied the following link: mpim-bonn.mpg.de/preblob/2960 Double posting (especially without mention) is not cool, so I am voting to close now. – Igor Rivin Dec 25 2011 at 18:14

When I click on Igor's link, I get a screen full of gobbledegook. To make it work, save the link (e.g. by right clicking on it). It's a PDF file, which you can then view in the normal way. – Tom Leinster Dec 25 2011 at 18:34

1 If you have access to a math library you should be able to find Robin's original paper. It's in French but fairly readable. If I recall, the main idea of the paper was to sharpen some existing bounds on the sum-of-divisors function for highly composite numbers. I'm curious if there's a different derivation. – Alex R. Dec 25 2011 at 18:46

@Tom: sorry about that, I just cut and pasted the url :( – Igor Rivin Dec 25 2011 at 21:01

@Igor: Yes, I posted a similar question on math.SE. But I realized that this question is better suited here. – Roupam Ghosh Dec 25 2011 at 23:12

## 3 Answers

I have requested a pdf of Robin 1984 from the campus scanning service. One highlight of the article that really should be mentioned is this: for $n \geq 13,$ we have $$\sigma(n) < e^\gamma n \log \log n + \frac{0.64821364942\ldots \; n}{\log \log n},$$ with the constant in the numerator giving equality for $n=12.$ See: http://mathoverflow.net/questions/79927/which-n-maximize-gn-frac-sigmann-log-log-n/79987#79987

That, at least, rests on effective bounds of Rosser and Schoenfeld (1962), which can be downloaded from ROSSER. Well, maybe not so directly. R+S do the unconditional bound for $n/\phi(n)$ in Theorem 15, pages 71-72, formulas (3.41) and (3.42). The treatment for $\sigma(n)$ is quite similar in spirit; maybe Robin was the first to write it down. The analogue of the primorials PRIMO and $n^{1-\delta}/\phi(n)$ is the colossally abundant (CA) numbers and $\sigma(n)/ n^{1 + \delta}.$

Well, I am not sure where it is written down, but it is easy enough to show that the maximum value, for some $0 < \delta \leq 1,$ of $$\frac{ n^{1-\delta}}{\phi(n)}$$ occurs when the prime factor $p$ of $n$ has exponent $$v_p(n) = \left \lfloor \frac{p^{1-\delta}}{p-1} \right \rfloor.$$ Since, for a fixed $\delta,$ this expression is either 0 or 1 and nonincreasing in $p,$ it turns out that the optima occur at the primorials, the products of the consecutive primes from 2 to something...

From Alaoglu and Erdos, the maximum value, for some $0 < \delta \leq 1,$ of $$\frac{\sigma(n)}{ n^{1+\delta}}$$ occurs when the prime factor $p$ of $n$ has exponent $$v_p(n) = \left\lfloor \frac{\log (p^{1 + \delta} - 1) - \log(p^\delta - 1)}{\log p} \right\rfloor \; - \; 1.$$ This is Theorem 10 on page 455. The results of this construction are the colossally abundant numbers. The construction is originally due to Ramanujan, but the part of his manuscript that dealt with CA numbers was not printed owing to paper shortages at the time.
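To make the Alaoglu-Erdos exponent formula concrete, here is a small sketch (mine, not from the answer) that evaluates it for the first few primes; with $\delta = 0.1$ it returns exponents 2, 1, 1, 0, 0, 0, i.e. the colossally abundant number $2^2\cdot 3\cdot 5 = 60$.

```python
from math import log, floor

# Evaluate the exponent formula quoted above: for a fixed delta, the exponent
# of each prime p in the corresponding colossally abundant number.
def ca_exponent(p, delta):
    return floor((log(p**(1 + delta) - 1) - log(p**delta - 1)) / log(p)) - 1

delta = 0.1
print([(p, ca_exponent(p, delta)) for p in (2, 3, 5, 7, 11, 13)])
# -> [(2, 2), (3, 1), (5, 1), (7, 0), (11, 0), (13, 0)], i.e. n = 2^2*3*5 = 60,
#    which is indeed one of the colossally abundant numbers.
```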
Hardy and Wright use $d(n)$ for the number of divisors of $n.$ This is in the original paper by Ramanujan. For some $0 < \delta \leq 1,$ the maximum of $$\frac{d(n)}{ n^{\delta}}$$ occurs when the prime factor $p$ of $n$ has exponent $$v_p(n) = \left\lfloor \frac{1}{p^\delta - 1} \right\rfloor.$$ The results are called the superior highly composite (SHC) numbers. So, taking all three with $\delta = 1/2,$ we get the lemmas $$\phi(n) \geq \sqrt{\frac{n}{2}}, \; \; d(n) \leq \sqrt{3n}, \; \; \sigma(n) \leq 3 \left( \frac{n}{2} \right)^{3/2}.$$ In all three cases, if $\delta$ is such that more than one number $n$ achieves the maximum value of the ratio specified, we are choosing the largest of these $n$'s. -

Thanks for the links! :) – Roupam Ghosh Dec 25 2011 at 23:14

In view of OP's comment on Igor Rivin's answer it seems that the 'actual' question could be something else. The inequality under RH, $$\sigma(n) < e^{\gamma} n \log \log n$$ for sufficiently large $n$, is not due to Robin, but due to Ramanujan. And still before that Grönwall (1913) showed (unconditionally) $$\limsup_{n\to \infty}\frac{\sigma(n)}{n \log \log n} = e^{\gamma}.$$

As to why questions like this are linked to RH at all: for example, recall that if one defines $\sigma_y (n) = \sum_{d|n}d^y$ then for the associated Dirichlet series one has $$\sum_{n=1}^{\infty} \frac{\sigma_y(n)}{n^s} = \zeta(s)\zeta(s-y)$$ so $$\sum_{n=1}^{\infty} \frac{\sigma (n)}{n^s} = \zeta(s)\zeta(s-1).$$

Without having followed up on the precise historical development, it seems rather like so: one studies the growth of $\sigma$ as for plenty of other arithmetical functions. Somebody (Grönwall) shows a nice result, somebody else (Ramanujan) shows something more precise under RH. Then somebody (Robin) decides to investigate whether this is in fact equivalent to RH (as for some other results known under RH, most notably the asymptotic count of prime numbers). This seems like a quite natural development to me. -

This is certainly quite a reasonable view, but the statements (of Robin, etc) are still quite surprising... – Igor Rivin Dec 25 2011 at 18:06

Yes, of course, the results are surprising and nice. All I meant to say is that this fits fairly naturally into some general development, and did not come out of nowhere. And, the inequality is not Robin's so the question 'how did he derive it' seems in some sense ill-posed; as he was not the (first) one to derive it. – quid Dec 25 2011 at 18:17

Thanks quid. Thats a pretty nice historical overview :) – Roupam Ghosh Dec 25 2011 at 23:18

quid, I got the Robin pdf today. If you would like a copy email me, you could create an address such as quid@gmail.com for this type of purpose. I do not see the explicit Dirichlet series you show above, but he has a long bibliography. In particular, he is a student of J. L. Nicolas, who has a later survey article on this area in a book called Ramanujan Revisited. – Will Jagy Jan 3 2012 at 21:34

@Will Jagy, in case you still read this: thank you for the kind offer and sorry for not following up earlier; my activity was a bit sporadic lately. While not at the time of writing, in general it would be reasonably easy for me to get the paper, so thank you but it is not needed.
Also, thank you regarding the clarification with the series; I did not want to imply (but perhaps did not make this clear enough) that this is what is used (I did not know). The intention was merely to say something that shows that it is not totally unexpected that properties of zeta could play a role. – quid Jan 9 2012 at 13:10

I cannot find Robin's paper either (thank you, Elsevier), but a stronger theorem was proved by Jeff Lagarias in 2002:

J. C. Lagarias, An elementary problem equivalent to the Riemann hypothesis, Amer. Math. Monthly 109 (2002), 534–543.

Lagarias' statement is: The RH is true if and only if $\sigma(n) < H_n + \exp(H_n) \log(H_n),$ where $H_n$ are the usual harmonic numbers. -

I have gone through Lagarias' paper already. It seems to me that he has just treated Robin's inequality to give a better bound. But I never quite got how Robin managed to relate RH and sigma and got this inequality. – Roupam Ghosh Dec 25 2011 at 16:54

5 Since it's an if and only if statement, isn't neither statement technically stronger? – Will Sawin Dec 25 2011 at 17:57

@Will: very true... – Igor Rivin Dec 25 2011 at 18:05
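Since both criteria are stated as inequalities over all $n$, a brute-force check is easy to run (my own sketch, purely illustrative; it obviously proves nothing about RH): verify Robin's inequality for $5041 \le n \le N$ and Lagarias' form for $n \le N$ using a divisor-sum sieve.

```python
from math import exp, log

# Brute-force check of Robin's inequality sigma(n) < e^gamma * n * log(log(n))
# for 5041 <= n <= N, and of Lagarias' form sigma(n) <= H_n + exp(H_n)*log(H_n)
# for 1 <= n <= N.  N is an arbitrary small bound.
N = 100_000
gamma = 0.5772156649015329            # Euler-Mascheroni constant

sigma = [0] * (N + 1)                 # sigma[n] = sum of divisors of n
for d in range(1, N + 1):
    for m in range(d, N + 1, d):
        sigma[m] += d

H = 0.0                               # running harmonic number H_n
robin_ok = lagarias_ok = True
for n in range(1, N + 1):
    H += 1.0 / n
    if n >= 5041 and not sigma[n] < exp(gamma) * n * log(log(n)):
        robin_ok = False
    if not sigma[n] <= H + exp(H) * log(H):
        lagarias_ok = False

print(robin_ok, lagarias_ok)          # both True up to N
```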
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 27, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.946384608745575, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/tagged/functional-analysis+banach-algebras
# Tagged Questions 1answer 39 views ### A problem on bounded invertible linear operator in Banach space Let $X$ be a Banach space. Let $T : X \to X$ be a invertible linear operator and $M > 0$ be such that $\|T^{-k}\| \le M$ for all $k \ge 1$. Prove that $\inf_ {n\ge1} \|T^n(x)\| > 0$ for all \$x ... 1answer 20 views ### Multiplicative functionals on Banach algebra closed in weak-* topology Let $\mathcal A$ be commutative unital Banach algebra denote by $M(A)$ the space of all non-zero multiplicative functionals on $\mathcal A$. I want to show that $M(A)$ is closed in the weak-* ... 1answer 30 views ### Gelfand transform and spectrum Let $\mathcal A$ commutative, unital Banach algebra and denote by $\mathcal M(\mathcal A)$ the space of multiplicative functionals on $\mathcal A$. The Gelfand transform is defined by \Gamma: ... 1answer 32 views ### Derivative of norm on Banach algebra Let $\mathcal A$ be a unital Banach algebra. I want to consider $f(z):= \vert \vert e^{-zA}Be^{zA} \vert \vert, z\in \mathbb C$ and $A,B \in \mathcal A$. How can I properly define the derivative of ... 0answers 56 views ### Maximal ideals in the algebra of continuously differentiable functions on [0,1] This is an exercise in Rudin's Functional Analysis, in the chapter on commutative Banach algebras. My (uneducated) guess was that every homomorphism on $C^{1}[0,1]$ is an evaluation at some point of ... 1answer 32 views ### Norm product inequality The following is about a proof in Bratteli Robinson vol 1. Let $\mathcal{A}$ be some C*-algebra. Show that $$\mathcal{B}=\{(A,\alpha)~|~A\in\mathcal{A}, \alpha\in\mathbb{C}\}$$ together with the norm ... 1answer 27 views ### Banach algebra: norm distance of non-invertible elements to unit element Let $\mathcal A$ be a commutative, unital Banach algebra. Take $A \in \mathcal A$ such that $A$ is non-scalar, i.e. $A\neq \alpha \mathbb I$, where $\mathbb I$ is the unit element. Denote the ... 0answers 48 views ### Unitary equivalent In general, if two irreducible representations of a $C^*$-algebra have the same kernel we can say this two representations are approximately unitarily equivalent. When our $C^*$-algebra is GCR, how to ... 1answer 47 views ### Continuous functional calculus Let $\mathscr H$ be a Hilbert space, and $\mathscr B(\mathscr H)$ is a $C^*$-algebra, $T\in \mathscr B(\mathscr H)$ is a normal operator. Let $C^*(T)$ denote the $C^*$- subalgebra generated by $T$ ... 1answer 127 views ### Maximal abelian subalgebra of Banach algebra is closed and contains the unity I'm studying Murphy's book: C*-Algebras and Operator Theory, and got stuck in exercise 8 from chapter 1: "Show that if $B$ is a maximal abelian subalgebra of a unital Banach algebra $A$, then $B$ is ... 1answer 57 views ### Algebra (Not *)-Isomorphisms of von Neumann algebras Let $A$ and $B$ be any two infinite-dimensional von Neumann algebras, they are operator algebras with operator composition as the multiplication and as infinite dimensional vector spaces they're ... 0answers 55 views ### Proving properties of exponential map on a Banach algebra $$\exp(a) := \sum\frac {a^k}{k!}$$ Can you help me prove that: $\exp$ is well defined (i.e. converges for all $a$ in $A$) $\exp$ is continuous $\exp(A)$ is a subset of $A_0$ (where $A_0$ is the ... 2answers 59 views ### Banach algebra problem? Let $A$ be a Banach algebra and let $$A_1=\{(x,\alpha)\;;\;:x∈A, \alpha\in\mathbb{C}\}$$ with the following operations: (x_1,\alpha_1 )+(x_2,\alpha_2 )=(x_1+x_2 ,\alpha_1+\alpha_2 )\qquad ... 
0answers 59 views ### In relation with the set of polynomially Fredholm perturbation elements Let $A$ and $B$ be two unital Banach algebras and $T\colon A\to B$ an homomorphism of Banach algebras. Let denote the set of polynomially Fredholm perturbation elements in $A$, i.e. ... 0answers 62 views ### Open map in Banach algebra I'm having trouble showing a certian function is open and can be extended. Let $\Omega$ be a completely regular topological space and $A=C_b(\Omega)$ the space of all complex-values bounded ... 1answer 74 views ### Are nilpotent Lie groups unimodular? The continuous homomorphism $\Delta:G \rightarrow \mathbb{R}^{\times}_+$ is defined by \begin{equation*} \int_G f(xy)dx = \Delta(y)\int_Gf(x)dx \end{equation*} where $dx$ is a left Haar measure on ... 1answer 67 views ### Properties of $\text{Exp}(A)$, where $A$ is a Banach algebra. $\newcommand{\Exp}{\operatorname{Exp}}$ Let $A$ be a unital Banach algebra. For $a \in A$, consider \Exp(A) \stackrel{\text{def}}{=} \{ e^{a_{1}} e^{a_{2}} \cdots e^{a_{n}} ~|~ n \in ... 1answer 26 views ### Why locally compact in the Gelfand representation? I'm missing something in the Gelfand representation. Let's just say $\mathfrak{A}$ is a Banach algebra. Then it's a Banach space, and so we have $\mathfrak{A}^\ast$. The multiplicative linear ... 1answer 85 views ### Polar decomposition of invertible elements in a unital C$^{*}$-algebra. If $A$ is a unital C$^{*}$-algebra and $a$ is invertible, then $a = u|a|$ for a unique unitary element $u$ of $A$. If $\| a \| = \| a^{-1} \| = 1$, what can you say about $|a|$? I ... 1answer 111 views ### On the spectrum of the sum of two commuting elements in a Banach algebra Original: Soit A une algèbre de Banach unitaire et a et b deux éléments tels que a*b=b*a. Pourquoi σ (a+b) с σ(a)+σ(b) Et qu’elle est la relation entre σ (a*b) et σ(a) et σ(b)? Translation: Let ... 1answer 99 views ### Spectral radius in Banach Algebra Let $A$ be a unital Banach algebra and $a\in A$ and $\lambda \in \rho(a)$. I want to prove that $$r(R(a,\lambda))=\frac{1}{d(\lambda,\sigma(a))}.$$ where $R(a,\lambda)=(\lambda 1-a)^{-1}$ and $r(.)$ ... 2answers 94 views ### Banach-algebra homeomorphism. Let $A$ be a commutative unital Banach algebra that is generated by a set $Y \subseteq A$. I want to show that $\Phi(A)$ is homeomorphic to a closed subset of the Cartesian product \$ ... 0answers 35 views ### Biduals generated by projections This question is motivated by a similar question recently posed at MO: http://mathoverflow.net/questions/122091/masas-in-second-duals-of-banach-algebras In this setting, let $B$ be a Banach algebra ... 2answers 141 views ### Turning Banach space into Banach algebra Given a Banach space, how can we determine if we can turn it into a Banach algebra or not? 1answer 67 views ### Prove that $L^1$ is a Banach algebra with multiplication defined by convolution To be more specific, prove that $L^1(\mathbb{R}^n)$ with multiplication defined by convolution: $$(f\cdot g)(x)=\int_\mathbb{R^n}f(x-y)g(y)dy$$ is a Banach algebra. All the properties of Banach ... 1answer 67 views ### The exponential function of Banach algebra I am wondering how to prove the following question: In any unital Banach algebra, we have $\exp(x+y)=\exp(x)\exp(y)$, if $xy=yx$, where $$\exp(x)=\sum_{n=0}^{\infty}\frac{x^n}{n!}.$$ 2answers 115 views ### Left topological zero-divisors in Banach algebras. Let $A$ be a unital Banach algebra. 
Define $\zeta: A \longrightarrow [0,\infty)$ by $$\forall a \in A: \quad \zeta(a) \stackrel{\text{def}}{=} \inf_{b \in \mathbb{S}(A)} \| ab \|,$$ where \$ ... 2answers 69 views ### ‎‎If $A$ contains ‎an ‎idempotent $e‎$ (‎‎$‎e‎\neq ‎‎0,1‎‎$‎) , then $‎\Omega(A)‎$ ‎is ‎disconnected If $A$‎ ‎be a‎ ‎unital ‎abelian ‎Banach ‎algebra ‎and ‎contains ‎an ‎idempotent $e$‎ ‎(that ‎is ‎‎$‎e=‎e‎^{‎2‎}‎‎$‎) ‎other ‎than $0$‎ ‎and $1$‎ ,‎ ‎then help me to show that ‎‎$‎\Omega(A)‎$ ‎is ... 0answers 84 views ### invariant subspace of a Hardy space Let $T$ be the unit circle and $H^1=\{f\in L^1(T): \int_0^{2\pi} f(e^{it})\chi_n(e^{it})dt=0 \text{ for } n>0\}$ where $\chi_n(e^{it})=e^{int}$. Let $M$ be a closed subspace of $H^1$. Then ... 1answer 128 views ### Fourier transform as a Gelfand transform One question came to my mind while looking at the proof of Gelfand-Naimark theorem. Is Fourier transform a kind of Gelfand transform? Are there any other well-known transforms which are so? 2answers 120 views ### Stone-Čech via $C_b(X)\cong C(\beta X)$ I am having some trouble constructing the Stone-Čech compactification of a locally compact Hausdorff space $X$ using theory of $C^*$-algebras. I did some search but could not find a good answer on ... 2answers 63 views ### Characterization of small Banach subalgebras Let $A$ be a unital Banach algebra and $x \in A$ nonzero. We can consider the subalgebra $B$ of $A$ generated by $\{1,x\}$. This is the norm closure of the subspace of polynomials in $x$. So for any ... 2answers 95 views ### $(\lambda-a)^{-1}$ as limits of 'polynomials' For a unital $C^*$-algebra $\mathcal{A}$ the spectral permanence gives \begin{equation} \sigma_{\mathcal{B}}(a)=\sigma_{\mathcal{A}}(a) \end{equation} for any unital $C^*$-subalgebra $\mathcal{B}$. ... 1answer 73 views ### Spectrum of elements in $C^*$-subalgebras Assume $\mathcal{A}$ is a $C^*$-algebra with unit $1$ and $\mathcal{B}\subset\mathcal{A}$ is a $C^*$-subalgebra (i.e. a closed $*$-subalgebra) such that $1\in\mathcal{B}$. It is said that under these ... 1answer 148 views ### $\sigma(x)$ has no hole in the algebra of polynomials Let $A$ be a unital banach algebra generated by two elements $1$ and $x$. Then it seems $\sigma(x)$ cannot have holes. At least this is true in the case for disk algebras. This amounts to prove that ... 2answers 174 views ### $Conv(Ex((C(X))_1))$ is dense in $(C(X))_1$? Let $X$ be compact Hausdorff, and $C(X)$ the space of continuous functions over $X$. Denote the closed unit ball in $C(X)$ by $(C(X))_1$, then it can be shown $f$ is an extreme point of $(C(X))_1$ if ... 0answers 89 views ### Density of operators I am interested in operators on non-reflexive Banach space. Let $X$ be a Banach space and let $L(X)$ be the algebra of operators acting on $X$. We may embed $L(X)$ into $L(X^{**})$ by ... 2answers 55 views ### Multiplication operators Consider a commutative Banach algebra $A$ and the Banach algebra of bounded operators $B(A)$ on $A$. Associate to each $a\in A$ the multiplication operator $T_ax =ax$ ($x\in A$). Is always the mapping ... 0answers 91 views ### Decomposing $\mathcal{B}(H)$ Let $H$ be an infinite-dimensional Hilbert space and let $\mathcal{B}(H)$ be the (C*/W*-)algebra of bounded operators on it. Actually, you may forget about the involution in $\mathcal{B}(H)$ because I ... 1answer 169 views ### spectral radius. I am stuck in a problem of Conway's A course in a Functional Analysis. Can anyone give me a hint to solve the problem? 
The question is "If $A$ is a Banach Algebra, then show that the function \$r:A\to ... 1answer 344 views ### Some examples in C* algebras and Banach * algebras I would like an example of the following things. A Banach * algebra that is not a C* algebra for which there exists a positive linear functional (it takes $x^*x$ to numbers $\geq 0$) that is not ... 1answer 93 views ### Universal separable Banach algebras The well-known Banach-Mazur theorem says that $C([0, 1])$ is a universal separable Banach space, in the sense that if $X$ is any separable Banach space then there is a map $f : X \to C([0, 1])$ which ... 0answers 97 views ### When is a Banach Algebra stellar? I know that if there are enough Hermitian elements in a Banach algebra, then the Banach algebra is stellar. In particular, I'm interested in the two spaces $B(L^1(S^1,\Sigma,\mu))$ the space of ... 1answer 112 views ### Schwarz inequality for unital completely positive maps I came across the following form of Schwarz inequality for completely positive maps in Arveson's paper: Let $\delta:\mathcal{A}\to\mathcal{B}$ be a unital completely positive linear map between ... 2answers 137 views ### Why are compact operators 'small'? I have been hearing different people saying this in different contexts for quite some time but I still don't quite get it. I know that compact operators map bounded sets to totally bounded ones, that ... 0answers 45 views ### Topology of $(\mathcal{A},*)$ determined by $\mathcal{A}_{sa}$? Let $(\mathcal{A},*)$ be a $*$-algebra, we have the following observation: Let $\|\cdot\|_1$ and $\|\cdot\|_2$ be two norms on $\mathcal{A}$ such that the involution is an isometry with respect to ... 2answers 80 views ### What does $(B+I)/I\sim B/(B\cap I)$ tell us? Let $A$ be a $C^*$-algebra in which $B$ is a $C^*$-subalgebra and $I$ is a closed ideal. In several books on $C^*$-algebras I have encountered the following: $(B+I)/I$ is $*$-isomorphic to ... 1answer 79 views ### Every ideal has an approximate identity? Averson's 1970 paper on extensions of $C^*$-algebras seems to assume that every ideal has an approximate identity. However, I am a little bit suspicious here, since he does not assume the closeness ... 2answers 127 views ### Linear functionals can be decomposed as linear combinations of positive ones? I am reading Arveson's Notes on Extensions of $C^*$-algebras. In proving theorem 1, he needs to establish some results concerning bounded linear functionals. However, he said it suffices to prove for ... 2answers 85 views ### If $a\ge 0$ and $b\ge 0$, then $\sigma(ab)\subset\mathbb{R}^+$. This is an exercise in Murphy's book: Let $A$ be a unital $C^*$-algebra and $a,b$ are positive elements in $A$. Then $\sigma(ab)\subset\mathbb{R}^+$. The problem would be trivial if the algebra ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 187, "mathjax_display_tex": 7, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.881466805934906, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/tagged/foundations?sort=faq&pagesize=15
# Tagged Questions The foundations tag has no wiki summary. 4answers 565 views ### Reason for the discreteness arising in quantum mechanics? What is the most essential reason that actually leads to the quantization. I am reading the book on quantum mechanics by Griffiths. The quanta in the infinite potential well for e.g. arise due to the ... 11answers 971 views ### Why quantum mechanics? Imagine you're teaching a first course on quantum mechanics in which your students are well-versed in classical mechanics, but have never seen any quantum before. How would you motivate the subject ... 6answers 549 views ### Is the density operator a mathematical convenience or a 'fundamental' aspect of quantum mechanics? In quantum mechanics, one makes the distinction between mixed states and pure states. A classic example of a mixed state is a beam of photons in which 50% have spin in the positive $z$-direction and ... 3answers 382 views ### Is the classical world an illusion? In the paper Zeh, H. D. The Wave Function: It or Bit? In Science and Ultimate Reality, eds. J.D. Barrow, P.C.W. Davies, and C.L. Harper Jr. (Cambridge University Press, 2004), pp. 103-120. ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8882626891136169, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/267916/categorical-interpretation-of-the-group-generated-by-two-subgoups
# categorical interpretation of the group generated by two subgroups

Let $H,K$ be two subgroups of a group $G$ and let $A=H\cap K$ and $B$ be the subgroup of $G$ generated by $H$ and $K$. We know that $$\begin{array}{rcl} A & \rightarrow & H \\ \downarrow & & \downarrow \\ K & \rightarrow & G \end{array}$$ is a pull-back square (morphisms are inclusions). Does there exist a similar interpretation for $B$? -

## 2 Answers

The most blunt way of describing $\langle H \cup K \rangle \le G$ is as the image of the canonical homomorphism $H * K \to G$ (determined by the inclusions $H \hookrightarrow G, K \hookrightarrow G$), where $H * K$ denotes the coproduct (free product) of $H$ and $K$ as abstract groups. This generalises readily to any cocomplete regular category and any small family of subobjects. (Exercise.) -

$H$ and $K$ are both objects in the category of subobjects of $G$. The category of subobjects is a preorder (exercise), so it behaves like a poset except that some objects are isomorphic. The categorical product in this category, which agrees with the pullback, is the intersection $H \cap K$, and the categorical coproduct in this category is the subgroup of $G$ generated by $H$ and $K$. The nLab asserts that in a topos, the coproduct of subobjects is their pushout along their product, but this is false for groups. -
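As a concrete sanity check of the first answer (the specific group and generators here are my own choice, not from the question): in $S_3$, take $H = \langle (1\,2) \rangle$ and $K = \langle (1\,3) \rangle$. The pullback $H \cap K$ is trivial, while the subgroup they generate, i.e. the image of the canonical map $\mathbb{Z}/2 * \mathbb{Z}/2 \to S_3$, is all of $S_3$. A small script can verify this by closing $H \cup K$ under composition.

```python
from itertools import permutations

# Permutations of {0,1,2} written as tuples of images; compose(p, q) is p∘q.
def compose(p, q):
    return tuple(p[q[i]] for i in range(3))

e = (0, 1, 2)
a = (1, 0, 2)        # the transposition (1 2)
b = (2, 1, 0)        # the transposition (1 3)

H = {e, a}
K = {e, b}
print(H & K)          # {(0, 1, 2)}: the pullback/intersection is trivial

# Closure of H ∪ K under composition = subgroup generated by H and K.
gen = set(H | K)
while True:
    new = {compose(x, y) for x in gen for y in gen} - gen
    if not new:
        break
    gen |= new
print(len(gen), len(set(permutations(range(3)))))   # 6 6, i.e. all of S_3
```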
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9186279773712158, "perplexity_flag": "head"}
http://quant.stackexchange.com/questions/598/how-do-i-incorporate-time-variability-in-a-pair-trading-framework/600
# How do I incorporate time-variability in a pair trading framework?

Recently I have been looking at pair trading strategies from a cointegration perspective, as described in chapter 5 of Carol Alexander's Market Risk Analysis volume 2. As in most quantitative finance texts, the science is well explained, but the description of applications is a bit on the light side.

Theoretically it's pretty straightforward and easy to run the tests for a given time period to see whether a certain pair is likely to be cointegrated or not. To apply the theory to an actual pairs trade, however, I would like to add the time dimension to my parameters. If I simply model the spread as $residuals = y - \alpha - \beta x$, the first thing I'd like to know is the mean and variance of the residuals. In order to set up my bid/ask limits I must have a mean and some measure of the variance. In an ideal case the two would be stable, but how can I quantify this and incorporate the information into the model? Also, how can I make this analysis structural, rather than having to depend on subjective eye-balling of data?

Could anyone point me in the right direction here? My guess is that I should take a look at regime switching models. But since that topic is unknown to me, I'd appreciate it very much if someone could give some pointers so I can avoid the worst pitfalls.

EDIT: Perhaps I wasn't clear enough, but my question is not how to do the tests, it's how I can quantify the stability of the parameters, i.e., the mean and variance of the spread. -

## 2 Answers

If you're using R and would like a few examples, here's Paul Teetor's website:
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9506276845932007, "perplexity_flag": "head"}
http://mathematica.stackexchange.com/questions/tagged/list-manipulation?page=1&sort=faq&pagesize=15
# Tagged Questions Questions on the manipulation of List objects in Mathematica, and the functions used for these manipulations. 6answers 4k views ### Elegant operations on matrix rows and columns Question The Mathematica tutorial has a section 'Basic Matrix Operations', describing operations like transpose, inverse and determinant. These operations all work on entire matrices. I am missing a ... 3answers 343 views ### Using pure functions in Table I need a table with the elements made of pure functions and list elements. This is a simplified example: I need a list as: ... 3answers 1k views ### Delete duplicate elements from a list If a list contains duplicate elements, for example list = {a, 1, 5, 3, 5, x^2, x^2}, how can the duplicate elements be removed? The result would be ... 4answers 650 views ### Partitioning with varying partition size How can I partition a list into partitions whose sizes vary? The length of the $k$'th partition is a function $f(k)$. For example: if $l = \{1, 2, 3, 4, 5, 6\}$ and $f(k) = k$. Then the partitioning ... 6answers 996 views ### How to select minimal subsets? I am a newbie, so please point me in the right direction if you feel this question has been answered somewhere else before. Here goes: Suppose I have a list like this: ... 5answers 721 views ### Partition a set into subsets of size $k$ Given a set $\{a_1,a_2,\dots,a_{lk}\}$ and a positive integer $l$, how can I find all the partitions which includes subsets of size $l$ in Mathematica? For instance, given ... 2answers 464 views ### Transpose uneven lists Is there a quick method to transpose uneven lists without conditionals? With: Drop[Table[q, {10}], #] & /@ Range[10] Thus the first list would have the ... 12answers 812 views ### Map a function across a list conditionally It seems that this is a really basic question, and I feel that the answer should be obvious to me. However, I am not seeing. Can you please help me? Thanks. Suppose I have a list of data ... 6answers 811 views ### Mathematica Destructuring Context I'm writing a function that look something like: ... 7answers 481 views ### How to Set parts of indexed lists? I would like to assign a list to an indexed variable and then change it using Part and Set like this: ... 3answers 890 views ### Local max/min of Mathematica data sets Is there a way in Mathematica to find the local maxima of a set of points? Suppose you have ... 3answers 406 views ### Efficient way to combine SparseArray objects? I have several SparseArray objects, say sa11, sa12, sa21, sa22, which I would like to combine into the equivalent of {{sa11, sa12}, {sa21, sa22}}. As an example, I ... 3answers 367 views ### How do I use Map for a function with two arguments? I'm a newbie who tries to be a good boy, and use Map instead of writing out a list of functions. I have a table I want to Map ... 2answers 414 views ### Finding all partitions of a set I'm looking for straightforward way to find all the partitions of a set. IntegerPartitions seems to provide a useful start. But then things get a bit complicated. ... 6answers 2k views ### Does Mathematica have advanced indexing? I have two $M \times K$ arrays $L, T$ where I would like to set all the elements in $L$ to zero whenever the corresponding element of $T$ is greater than 15. The ... 2answers 590 views ### Tiling a square I wondered if there was a way to automate the process of finding a way to tile a tile into a square. The idea is to represent the tile with a matrix of 0s for blank space and 1s for filled spaces ... 
6answers 1k views ### How to visualize/edit a big matrix as a table? Is it possible to visualize/edit a big matrix as a table ? I often end up exporting/copying big tables to Excel for seeing them, but I would prefer to stay in Mathematica and have a similar view as in ... 5answers 954 views ### What is the most efficient way to add rows and columns to a matrix? Say I have a matrix m and a vector v. ... 6answers 891 views ### Find zero crossing in a list I'm looking for a function that finds the index of the zero-crossing points of a list. Before I go making my own subroutine to do this, I was wondering if anyone knows of any built-in Mathematica ... 6answers 847 views ### Insert $+$, $-$, $\times$, $/$, $($, $)$ into $123456789$ to make it equal to $100$ Looks like a question for pupils, right? In fact if the available math symbol is limited to $+$, $-$, $\times$, $/$ then it's easy to solve: ... 7answers 975 views ### How to apply or map a list of functions to a list of data? Say I have a group of functions: f1[a_] := a * -1; f2[a_] := a * 100; f3[a_] := a / 10.0; and some data in a list: ... 7answers 684 views ### Selectively Mapping over elements in a List I am using the following code to easily generate a row of images of all eight planets of our Solar System: ... 2answers 770 views ### Simple algorithm to find cycles in edge list I have the edge list of an undirected graph which consists of disjoint "cycles" only. Example: {{1, 2}, {2, 3}, {3, 4}, {4, 1}, {5, 6}, {6, 7}, {7, 5}} Each ... 2answers 688 views ### Extracting values from nested rules in JSON data I have been using Mathematica to analyse some data from the StackExchange API. It is conveniently available in JSON form, which Mathematica interprets as replacement rules. However, some of the rules ... 8answers 536 views ### How do you check if there are any equal arguments(even sublist) in a list? I would like to set up a function which has to return True if at least two arguments of a given List are equal. So if I give {1,4,6,2} to the function it has to ... 4answers 325 views ### How can I remove B -> A from a list if A -> B is in the list? I have a list of transformations like this: list = {"A" -> "B", "B" -> "A", "C" -> "D"} As this is used to plot an undirected graph with ... 7answers 287 views ### How do I obtain an intersection of two or more list of lists conditioned on the first element of each sub-list? Given two lists like list1 = {{1, 1}, {2, 4}, {3, 9}, {4, 16}}; list2 = {{2, 6}, {3, 9}, {4, 12}, {5, 15}}; I would like to produce an output like ... 6answers 294 views ### List-operations only when restrictions are fulfilled (Part 1) Consider the following: ... 5answers 248 views ### Computing the equivalence classes of the symmetric transitive closure of a relation I have a list of pairs, for example: ... 3answers 545 views ### Emulating R data frame getters with UpValues What's the best way to emulate R's data frames functionality? This includes the ability to select rows and columns in a 2-dimensional table by the string identifiers positioned typically in the first ... 11answers 639 views ### Generating an ordered list of pairs of elements from ordered lists I have a pair of ordered lists. I want to generate a new ordered list (using the same ordering) of length n by applying a binary operator to pairs of elements, one from each list, along with the index ... 5answers 957 views ### Finding all elements within a certain range in a sorted list Suppose we have a sorted list of values. 
Let's use list = Sort@RandomReal[1, 1000000]; for this example. I need a fast function ... 7answers 809 views ### How to Derive Tuples Without Replacement Given a couple of lists like a={1,2,3,4,6} and b={2,3,4,6,9} I can use the built-in Mathematica symbol ... 4answers 383 views ### Interlacing a single number into a long list This seems like it should be a simple question, but I am running into some difficulty in doing this with Mathematica. Right now, I have a list like this: ... 3answers 203 views ### How to “ignore” an element of Map or MapIndexed Say I have some function that I'm applying every element in a list to... if that element matches some criteria: ... 2answers 505 views ### Finding a subsequence in a list I have a list and I want to find (in this particular case the first) appearance of a any of some subsequences, of possible different lengths. None of the subsequences is a subsequence of each other. ... 6answers 397 views ### How to Map a subset of list elements to a function? How would you, given a list {1, 2, 3, 4}, apply a function f to 1 and ... 8answers 422 views ### Applying And to lists of Booleans I'd like to take {True,True,False} and {True,False,False} and apply And to get ... 7answers 604 views ### Selecting a sublist based on Length If you have a simple list of lists as follows: test = {{1, 2}, {4, 5, 6, 7}, {5, 4, 3}} How do you ask Mathematica to return the sublist of greatest length? ... 4answers 361 views ### Best way to extract values from a list of rules? Mathematica has a lot of list manipulation functions, and, also because I don't work with lists often, at times I'm a bit lost. I'll find a way, but I'm sure it's not the most efficient. Case in ... 5answers 747 views ### How to find rows that have maximum value? Suppose if I have following list { {10,b,30}, {100,a,40}, {1000,b,10}, {1000,b,70}, {100,b,20}, {10,b,70} } How to find rows that have max value in ... 6answers 679 views ### extract values from replacement list Solve returns a list of replacement rules In: Solve[x + y == 3 && x - y == 6, {x, y}] Out: {{x -> 9/2, y -> -(3/2)}} I am only interested in the ... 1answer 177 views ### DeleteCases messing with my mind I'm losing my mind. Please tell me my laptop is doing weird things and not me. Or do I need to get committed someplace? ... 2answers 207 views ### From a list of dates get a list of the last date available in a each month I wondered if anyone has another or even a more direct way of finding the last dates of each month available from a list of successive dates? I currently do the following (note: nothing special about ... 5answers 264 views ### Assigning a particular value to array elements I have an array of 10000 elements. I want to randomly assign energy to these 10000 elements using Gaussian or Exponential distribution, such that each time a particular element is selected its energy ... 3answers 211 views ### How to create functions of arbitrary number of variables? In the following code what would be the simplest way to generalize it to say some $N_f$ number of $z$ instead of just $z_1$ and $z_2$? ... 3answers 336 views ### Determining all possible traversals of a tree I have a list: B={423, {{53, {39, 65, 423}}, {66, {67, 81, 423}}, {424, {25, 40, 423}}}}; This list can be visualized as a tree using ... 3answers 2k views ### K-means clustering In MATLAB, there is a command kmeans() that divides an array into $k$ clusters and calculates the centroid of each cluster. Is there any command in Mathematica to ... 
3answers 343 views ### How to efficiently find positions of duplicates? Is there an efficient way to find the positions of the duplicates in a list? I would like the positions grouped according to duplicated elements. For instance, given ... 4answers 365 views ### Implementing a function which generalizes the merging step in merge sort One of the key steps in merge sort is the merging step. Given two sorted lists sorted1={2,6,10,13,16,17,19}; sorted2={1,3,4,5,7,8,9,11,12,14,15,18,20}; of ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 29, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8857048749923706, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/25085/the-riemann-correspondence-for-riemann-surfaces-made-explicit-and-its-generalizat
## The Riemann correspondence for Riemann surfaces made explicit and its generalizations Riemann showed (but did not prove rigorously) that there is a correspondence between compact Riemann surfaces and algebraic function fields in one variable (does anyone know the year?). To construct the algebraic function field of a compact Riemann surface, take its field of meromorphic functions M(X). Main question: Seen as an antiequivalence between categories, what is the inverse functor? Or explicitly, how does one construct a Riemann surface X from its function field M(X), preferably as a polynomial in one variable with coefficients depending on another variable? If there are singularities on the recovered surface, is there a systematic way to remove them? A simpler but useful question is: how many generators does M(X) have, and how does the genus g(X) depend on M(X)? (Notice that I am not restricting X to sit in any particular space.) Here is the part I am not so good at, but it shows a possible solution: I think the function field K(X) can always be written as K(x)[y]/< P(x,y) > where P is an irreducible polynomial. This would be fine if any function field of X has uniquely this form. Then the equation I am looking for should be the extracted P(x,y)=0. Did this make the problem any simpler? And still, what is the functor inverse to M()? If this is really impossible, maybe a parametric representation of X from M(X) is possible. Reading another post on MathOverflow, I think I am asking for higher-genus Weierstrass Pe-functions related to the generators, which should be constructed in terms of Riemann theta-functions. As I understand it, for X in the g(X)=1 case K=C(Pe,Pe') (for a given lattice in C), and X is parametrized by the Weierstrass Pe- and Pe'-functions. But that didn't help me since there were no formulas or references for higher genera. Here is one reference for something that looks like higher-genus Pe-functions. Google books: Symmetries and integrability of difference equations p68-70 Aside: since Riemann showed there is this correspondence, birational equivalence should be the same as biholomorphic equivalence for compact Riemann surfaces. I have never even seen a theorem about it. How is the situation for general Riemann surfaces? Note: I will from now on write trdeg(F) for the transcendence degree of a field extension F over K. A generalization to higher dimensions (background: inferred from Wikipedia). Is this true (even for singular algebraic varieties, or only smooth algebraic varieties)? Every algebraic variety X over a field K has a function field K(X) that is the field of rational functions on X, is a field extension of K that is finitely generated, and has trdeg(K(X))=dim(X) (both over K). And, all such field extensions of K with finite trdeg(K(X)) are the function field of some algebraic variety X over K. And, question: is there a categorical antiequivalence between the category of algebraic function fields of trdeg(K(X)) variables that are extensions of K, with ring homomorphisms as morphisms, and the category of algebraic varieties over K of dimension trdeg(K(X)) with rational maps between them as morphisms? Finally, does this correspondence hold locally for schemes over a field? - There is no unique equation P(x,y) = 0. Simple reason: you can always replace x with x + 1.
It's like in algebra, where a field extension is an intrinsic thing but it can be generated by many particular elements. – KConrad May 18 2010 at 3:33 1 As far as the genus, see mathoverflow.net/questions/152/… . – Qiaochu Yuan May 18 2010 at 3:37 1 Corona, this is specific to dim 1 and trdeg 1. Learn alg. curves over an alg. closed field (with singularities, and without, via normalization). Then passage in reverse direction is easier to grock. Yes, Riemann et al. worked in days before algebraic geometry. But honestly, it is easier to pass from algebraic data of function field to algebro-geometric data of plane curve with singularities on to smooth possibly non-planar proj. alg. curve, which can be "analytified" than to follow the historical route and rigorously "desingularize" an "analytic singularity" by elementary analytic tools. – BCnrd May 18 2010 at 4:29 If it is correct to attribute this correspondence to Riemann (which I'm not completely sure about) then the year would be 1857, in his paper Theorie der Abel'schen Functionen, Journal für die reine und angewandte Mathematik, vol. 54 (1857), pp. 101-155. – John Stillwell May 18 2010 at 8:22 ## 3 Answers In his paper cited above on Abelian functions, and appealing to his earlier thesis results, Riemann sketches a functor from the category of irreducible plane algebraic curves with rational maps and rational functions, to compact connected complex one manifolds with holomorphic maps and meromorphic functions. [One compactifies the curve as a projective curve, and then desingularizes it as a manifold. Rational maps become holomorphic on the desingularization because of the Riemann extension theorem. Much of this occurs in the book of Miranda.] Riemann then proves that the field of rational functions on the plane curve equals the field of meromorphic functions on the manifold. Indeed, he shows that if there is a single non constant meromorphic function on the manifold say of degree n, such as one of the plane variables gives, then the entire field of meromorphic functions on the manifold is algebraic of degree at most n over the field obtained by adjoining this one function to the constants. It follows that all holomorphic maps of the manifolds arise from rational maps of the curves and that in particular holomorphic equivalence of the manifolds is the same as birational equivalence of the curves. This implies that your construction of a (non unique) plane curve from a function field, always yields birationally equivalent curves, hence isomorphic Riemann surfaces. Riemann himself considers the problem of birational equivalence in his paper and determines the lowest degree of a plane polynomial representation for a given Riemann surface in terms of the lowest degree of a map from that surface to the Riemann sphere. All this is actually proved essentially rigorously in his paper, appealing only to his extension theorem for holomorphic functions. What has been criticized as to rigor is the inverse correspondence that all compact connected complex one manifolds arise from plane curves. The method was to produce harmonic functions in plane regions by the Dirichlet principle, which method was justified by Hilbert and others later, as recorded in the books of Weyl and Siegel and Springer. More modern approaches occur in Gunning, and the article by Cornalba in his Trieste lectures. As noted above, for higher dimension there exist compact complex manifolds not arising from algebraic varieties, and there exist such examples in Shafarevich, e.g. 
of compact complex tori which do not have meromorphic function field of the correct transcendence degree. Manifolds which do have such meromorphic function fields, and hence could be algebraic, are called Moishezon manifolds, and he showed they can always be blown up to become algebraic, if I recall correctly. In that famous Abelsche Functionen paper, Riemann goes on to deduce rigorously his famous inequality, by estimating the rank of a period matrix, assuming only the existence of sufficient meromorphic one forms of 1st and 2nd kinds, i.e. either holomorphic, or having zero residues at every pole. Although his proof of the existence of these forms in the manifold setting relies on his disputed use of the Dirichlet principle, he remarks in section 9 of the paper that one can simply write them down in the case of plane curves, and he actually does so for the holomorphic ones, using the "Poincare" residue principle. He says he could write down the others as well, but will not stop to do so. Such explicit expressions are given in the book on Plane Algebraic Curves of Brieskorn, by way of showing how to represent all cohomology classes on a curve by meromorphic forms of 1st and 2nd kinds. E.g. on the cubic curve y^2 = x(x-1)(x-t), the form x(x-1)dx/y^3 is an elementary form of 2nd kind with one double pole (at (t,0)) but zero residue. If one grants that Riemann knew how to do this, as he said, then the foundation for his proof of the Riemann inequality is completely provided, and at least for plane curves, there is no need for Hilbert's analytic foundations to bolster Riemann's argument in the complex algebraic case. The 1865 paper of Roch, in which he completes Riemann's argument, rests solely on Green's theorem to compute Riemann's period matrix as a residue integral, hence is completely solid. 17 years later, Brill and Noether, using the same matrix computed by Roch, apparently showed that one can exploit the duality between divisors of form D and K-D to actually give the full proof using only the existence of the integrals of 1st kind. Since that paper was so influential, Roch's residue matrix (occurring in the middle of the second page of his paper) is now usually known as the Brill Noether matrix. In addition to the functor from curves C to one manifolds X, Riemann also considered two more functors, the symmetric products X^(d) and the Jacobian variety J(X), as well as a natural transformation between them X^(d)--->J(X), called the Abel map. The fibers of this map are the linear series |D|, ("Abel's theorem"), and the derivative of this map is the Roch ("B-N") matrix, (by the fundamental theorem of calculus). Hence the Riemann Roch theorem becomes the assertion that the fibers of the Abel map are non singular as schemes. I.e. the fiber dimension dim |D|, equals the dimension of the kernel of the derivative, d-g+h^0(K-D). This is the formulation of Mattuck and Mayer. - I have heard the non singularity of the fibers of the Abel map attributed to Riemann as well, but have not found it. If this were true, then in conjunction with the full Abel theorem, as observed above, this would already imply the full RRT. Notice also that the full Abel theorem already implies the Riemann inequality, since the fibers of a map from X^(d) to J(X) must have dimension ≥ d-g. – roy smith Apr 23 2011 at 21:02
Though I don't have it in front of me for the details, check out chapter I of Hartshorne...it's much less terrifying than the rest of the book is for non-algebraic geometry people. Given a function field of transcendence degree one, you can take all the DVRs whose field of fractions is that field, and they'll form the points of a Riemann surface (actually, a smooth projective algebraic curve over an algebraically closed field). As for polynomials, yes, any such field is the field of fractions of a ring of the form $k[x,y]/(f)$, but not uniquely, as KConrad pointed out, for rather simple reasons. But also in Hartshorne chapter I, it's proved that every variety is birational to a hypersurface in affine space, and function fields are the same as birational classes of varieties. As for the singularities, which will occur generically, because most curves of large genus don't embed into the plane smoothly, you can just compute the normalization of the curve. As Qiaochu said, there's this question which talks about the function field and the genus, and as for generators, the fact that things are birational to singular plane curves will tell you that you can always choose two generators to make things work out. Now, every variety has a function field; as for every function field having a variety, I believe it's true, but I don't have a proof off the top of my head (though that might be because it's 1:30 am and it will be obvious when I'm awake). As for the proposed anti-equivalence, you'll need to make it dominant rational maps, and then I believe it's true. Dominant means that the image is dense; if it fails, then the image might be in the exceptional locus of the next morphism, and composition doesn't work out, because then you don't have a rational map, you have the empty map. I think I covered most of your questions...but really, chapter I of Hartshorne would be a good read for you. - 2 @Charles: The aspect of Ch. I which is painful is exactly the step of making the smooth curve model...because there Hartshorne introduces a rather wacky (to a beginner) notion of "abstract curve", and doesn't have tools at that point to discuss normalization in a meaningful way. Getting rid of the singularities is (I think) the most serious part of the leap from the function field back to the Riemann surface. It's also subtle in a purely analytic approach if one avoids the use of algebraic methods (and doesn't have available a theory of complex-analytic spaces...). – BCnrd May 18 2010 at 5:54 You could take a look at Lectures on Riemann Surfaces by Otto Forster (Springer, Graduate Texts in Mathematics 81). The book starts at a moderate level (you just need to know basic complex analysis and the Lebesgue integral), but covers quite a few topics, like the correspondence between Galois groups of field extensions and covering transformations, as well as how to remove the removable singularities. I don't have the book available now, but from my memory (of the German original) there should be enough in it to answer at least your first three questions. PS: You can google for the two terms "forster" and "field of meromorphic functions" to get a link to Google Books for a first impression of the book. -
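Not part of the original answers, but for the "how does the genus depend on M(X)" sub-question it may help to record the classical plane-model formula (stated here as a supplement; the restriction to ordinary singularities is mine, to keep the statement simple). Once the function field is presented by an irreducible plane curve of degree $d$ whose singular points are all ordinary, the genus of the desingularized curve (equivalently, of the compact Riemann surface) is

$$g \;=\; \frac{(d-1)(d-2)}{2} \;-\; \sum_{P\ \mathrm{singular}} \frac{m_P(m_P-1)}{2},$$

where $m_P$ is the multiplicity of the singular point $P$. For the smooth plane cubic $y^2 = x(x-1)(x-t)$ mentioned in the first answer (with $t \neq 0,1$) there are no singular points, so $g = (3-1)(3-2)/2 = 1$.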
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9452608823776245, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/tagged/symplectic-geometry?sort=active&pagesize=15
# Tagged Questions Symplectic geometry is a branch of differential geometry and differential topology which studies symplectic manifolds; that is, differentiable manifolds equipped with a closed, nondegenerate 2-form. 2answers 52 views ### Symplectic Form Preserved by Orthogonal Transformation I'm trying to prove that the symplectic form $$\omega = d(\cos\theta) \wedge d\phi$$ is preserved by the action of $SO(3)$ on $S^2$ where $\phi$ and $\theta$ are spherical polars. Now $SO(3)$ simply ... 0answers 24 views ### Trivialization of a path of tamed almost complex structures I am wondering if the following result is true: Let $(V,\omega)$ be a symplectic vector space and $\{J_t\}_{0\leq t\leq 1}$ a smooth path of almost complex structures on $V$ which are tamed by ... 1answer 20 views ### Prove that the $2$ form defines a symplectic structure Prove that the $2$ form $$\omega = -2[(1+x_2^2)dx_1 \wedge dx_2 + dx_1 \wedge dx_3 + dx_3 \wedge dx_4]$$ defines a symplectic structure on $\mathbb{R}_x^4$. My definition of as ... 2answers 34 views ### Question about symplectic tranformations Suppose I know that two vectors $\vec{a}$ and $\vec{b}$ are perpendicular in a given basis spanned by basis vectors $\vec{x}$. Now suppose I transform to another basis $\vec{x'}$ using a symplectic ... 2answers 46 views ### Non-degenerate solutions to constant Hamiltonian flow As I'm trying to work my way through Dietmar Salamon's "Notes on Floer Homology", I'm having trouble with the very first exercise. Let $(M, \omega)$ be a compact symplectic manifold. Let $H$ be a ... 1answer 32 views ### Lagrangian subspaces Let $\Lambda_{n}$ be the set of all Lagrangian subspaces of $C^{n}$, and $P\in \Lambda_{n}$. Put $U_{P} = \{Q\in \Lambda_{n} : Q\cap (iP)=0\}$. There is an assertion that the set $U_{P}$ is ... 0answers 47 views ### Computation of a pullback of a two form If we have a Lagrangian immersion from $C^{2}$ to $C^{4}$ defined like this \begin{align} \notag \phi : (x,y,u,v) \to (x, y, u, v, \frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}, ... 0answers 24 views ### Hamiltonian Isotopy in Symplectic geometry In any standard symplectic geometry/topology textbook, the concept of Hamiltonian isotopy was introduced: $(M, \omega)$ is a sympplectic manifold. Given a symplectic isotopy \$\phi_t : M \rightarrow ... 1answer 43 views ### Symplectic Forms Let $(M, \omega)$ be a symplectic manifold, so that $\omega$ is a non-degenerate 2-form. If $\dim M = 2n$ why does $\omega$ being non-degenerate imply that \$\underbrace{\omega \wedge \ldots \wedge ... 1answer 52 views ### Symplectic geometry as a prequisite for Heegaard Floer homology I would like to study Heegaard Floer homology in the future in the connection to knot theory. I read a wikipedia article and it seems that I need to first learn a symplectic geometry (topology?). I ... 0answers 32 views ### Torsion-free $G$-Structures I have the following question. Let $G \subset SO(n)$ be a Lie Group and $M$ be a smooth manifold of dimension $n$. Furthermore let $P$ be a $G$-structure on $M$ i.e. $P$ is a principal subbundle of ... 0answers 25 views ### Area of flux homomorphism in symplectic topology Let $(M,\omega)$ be a symplectic manifold. Let $f:M \to \mathbb R$ be a smooth function. We have vector fields $X_f$ defined by $\omega(X_f,)=df$. Let $\phi_t$ be the flow of $X_f$ and let \$\gamma: ... 
2answers 335 views ### symplectic lie algebra is simple The symplectic lie algebra defined by $sp\left(n\right)=\left\{ X\in gl_{2n}\,|\, X^{t}J+JX=0\right\}$ when $J=\begin{pmatrix}0 & I\\ -I & 0\end{pmatrix}$. So $X\in sp\left(n\right)$ is of ... 1answer 35 views ### what are the holomorphic curves in $T^{*}S^3$ with boundary on the zero section? How would I characterize such things? Is the minimal spanning (real) surface of a (real) curve in $S^3$ contained entirely in that $S^3$? 4answers 552 views ### Why does symplectic geometry have many applications in mathematics It is not quite intuitive , at least from its origin. Could any one can give me an intuitive explanation?Thank you! 1answer 130 views ### Darboux's theorem in the symplectic geometry From the Darboux's theorem in the symplectic geometry, we know that symplectic manifolds with the same dimension is locally "equivalence". I have a little puzzle with the meaning of "equivalence". ... 0answers 44 views ### Normal Bundle of Twistor lines I am reading the paper "hyperkaehler metrics and supersymmetry" by Hitchin etc.. Here is the link: ... 0answers 51 views ### Index of a Symmetric Matrix In Hansjorg Geiges' introductory textbook on Symplectic Geometry, is defined a projective conic given by $q^tAq=0$ where $A$ is a symmetric matrix of rank 3 and index 2. What does "index 2" mean? I ... 0answers 166 views ### Checking my understanding of $T^*M$ as a symplectic manifold and the links between the classical description of Lagrangians vs this invariant way. I am working through a book titled "An introduction to mechanics and symmetry" by Marsden and Ratiu. I have written up a brief summary trying to solidify my understanding of the general principles. ... 2answers 90 views ### Symplectic 2-Torus Consider the 2-torus $T=S^1\times S^1$ with symplectic form $\omega=d\theta\wedge d\varphi$ and the vector field $X=\partial_\theta$. I wonder if $X$ is hamiltonian. In other words, is $\iota_X\omega$ ... 1answer 59 views ### Visualizing diffeomorphisms This is probably a really basic question (hence my asking it here as opposed to MO). In a comment to a question on mathoverflow ... 1answer 59 views ### Special Kaehler manifolds If we have complex vector space $V=T^{*}C^{m}$ with standard complex symplectic form $\Omega =\sum_{i=1}^{m}dz^{i}\wedge dw^{i}$, and if $\tau : V\to V$ is standard real structure of $V$ with set of ... 1answer 68 views ### Why is the moduli space of flat connections a symplectic orbifold? In her Lectures on Symplectic Geometry on page 159, Ana Cannas da Silva writes "It turns out that $\mathcal{M}$ is a finite-dimensional symplectic orbifold." Can somebody give me a reference for ... 1answer 73 views ### Symplectic 2-Sphere Consider the sphere $S^2\subset\mathbb{R}^3$ in cylindrical coordinates $(\theta, z)$ (away from poles $z=\pm 1$) with symplectic structure $\omega=d\theta\wedge dz$. I want to show that the vector ... 1answer 65 views ### Why should a symplectic form be closed? Thanks for reading my question. I'm wonder why a symplectic form should be closed. I found many different answers in the internet, but it sounds like a technical requirement (if we omit this ... 1answer 67 views ### $Sp(V)$ acts transitively on $V^*-\{0\}$ where $\Omega$ here is symplectic 2 form Let $\dim(V)=6$. Show that $Sp(V,\Omega)$ acts transitively on $V^*-\{0\}$, where $\Omega$ here is a symplectic 2 form on $V$. 
($V^*$ here is algebraic dual of $V$) 0answers 71 views ### transformation of symplectic structure by a matrix Suppose that in canonical symplectic basis $e_1,e_2,f_1,f_2$ we have $$\Omega=pf_1^*\wedge f_2^*+qe_1^*\wedge e_2^*+r(e_1^*\wedge f_2^*+e_2^*\wedge f_1^*)+s(e_1^*\wedge f_1^*-e_2^*\wedge f_2^*)$$ Let ... 2answers 128 views ### When does the $\mathfrak g$-invariance of the symplectic form imply $G$-invariance? Let $G$ be a Lie group with Lie algebra $\mathfrak g$, and let $M$ be a smooth manifold. Suppose $G$ acts on $M$, $G \to \text{Diff}(M)$. This naturally induces an action $\mathfrak g \times M \to M$ ... 0answers 60 views ### A proof of simply connectedness of a symplectic quotient Let $\rho$ be a unitary representation of a torus $G$ on $\mathbb{C}^n$. The action of $\rho$ is Hamiltonian with a moment map $\mu:\mathbb{C}^n \to \mathfrak{g}^*$. Here $\mathfrak{g}^*$ is the dual ... 1answer 109 views ### Tautological 1-form on the cotangent bundle I'm trying to understand a little bit about symplectic geometry, in particular the tautological 1-form on the cotangent bundle. I'm following Ana Canas Da Silva's notes. On page 10 she describes the ... 0answers 184 views ### basis free volume form for a symplectic vector space It's easy to show, using a symplectic basis, that if $\omega$ is a symplectic form on a $2n$-dimensional vector space $V$, then $\omega^n \neq 0$. I'd like to be able to prove it without choosing a ... 0answers 94 views ### Geodesic Flow is the flow of the Hamiltonian Vector field of $|\xi|$ Let $M$ be a complete Riemannian manifold with metric $g$. The geodesic flow is the one-parameter family of diffeomorphisms $\phi_t$ on $T^*M$ defined as follows. If $\xi \in T^* M$ is a vector based ... 1answer 61 views ### Will this set of functions form coordinates on a symplectic manifold? Consider a symplectic manifold $(M, \omega)$. Let us define a concept of a complete set of observables: A set of functions $f_i : M \to \mathbb R$ form a complete set of observables if any ... 1answer 84 views ### $1$-form on a symplectic manifold. If $\omega$ is a $1$-form on a symplectic manifold, will it be closed? It seems to be trivial that if $\sigma$ is symplectic structure on a manifold $M$, then the induced map \sigma^\vee: TM\to ... 1answer 102 views ### Symplectic Chart I was reading the article "Symplectic structures on Banach manifolds" by Alan Weinstein. In this article there is one theorem, which is as following: If $B$ is a zero neighborhood in Banach space. ... 1answer 108 views ### What is an “invariant form” of a group? I have often seen this phrase used in at least two frequent contexts, One uses the notation of $\omega_{AB}$ (the matrix $\{ [0 , I],[-I,0]\}$) to denote the symplectic form for $USp$ group. One ... 1answer 167 views ### Symplectic reduction: involutive and non-involutive first integrals Suppose I have a Hamiltonian $H$ with the phase space $\mathcal{M}$, a symplectic manifold with a symplectic 2-form $\omega.$ Now assume that the Hamiltonian system has two first integrals $C_1,C_2$. ... 1answer 130 views ### Moment map of the action of $\operatorname{SO}(3)$ on the sphere The moment map of the action of $\operatorname{SO}(3)$ on the sphere can be thought of as inclusion from $S^2$ into $\mathbb R^3$ by identifying $\mathfrak{so}(3)$ (the Lie algebra of ... 1answer 56 views ### Symplectic submanifolds Suppose I have the symplectic manifold $(M, \omega)$. Now consider a function $C: M \rightarrow \mathbb{R}$ whose differential is non-zero. 
Then restricting to the submanifold of $M$ given by $C=0$ ... 2answers 148 views ### Regarding Legendre transform from tangent bundle to cotangent bundle (I'm a complete beginner at differential geometry) I'm studying about constrained systems, in which we "map a Lagrangian system from a tangent to a cotangent bundle. Hamiltonian dynamics then appears ... 0answers 87 views ### A vector field on a symplectic submanifold intersecting the symplectic complement Consider a 4-dim symplectic vector field $X$ on the symplectic manifold $(M, \omega)$ in $\mathbb{R}^4$ with $\omega= \sum_{i=1}^2 dy_i \wedge dx_i$. Moreover, the linear terms of $X$ are given by ... 1answer 52 views ### Open sets of symplectic manifolds Suppose I have a symplectic manifold $(\mathcal{M}, \omega)$. Does it hold that any open subset of $(\mathcal{M}, \omega)$ is a symplectic submanifold? The statement trivially holds for smooth ... 1answer 178 views ### is the geodesic flow on Hyperbolic Plane completely integrable? I'm looking for examples of completely integrable systems and specifically geodesic flows. We remember that when we have a symplectic manifold $(M,\omega)$ (with $M$ of dimension $2n$) and ... 1answer 155 views ### Any intuitive examples of symplectic vector space? Recently I come into symplectic vector space and its properties in my linear algebra class. However, this interesting thing is so different from the usual inner-product spaces I've met before, and I ... 0answers 125 views ### Energy displacement of a cylinder is at most $\pi r^2$. I want to show that the energy displacement of $Z^{2n}(r)$ is at most $\pi r^2$. In the textbook of Mcduff and Salamon they write that I should identify the two dimensional ball (with radius $r$) ... 2answers 81 views ### Proving that a particular submanifold of the cotangent space is Lagrangian I have the following problem in my differential topology class: Let $M$ be an $n$-dimensional manifold and let $\omega$ denote the standard symplectic form on the cotangent space $T^*M$. Let \$f \in ... 1answer 37 views ### Given an integral symplectic matrix and a primitive vector, is their product also primitive? Given a matrix $A \in Sp(k,\mathbb{Z})$, and a column k-vector $g$ that is primitive ( $g \neq kr$ for any integer k and any column k-vector $r$), why does it follow that $Ag$ is also primitive? Can ... 1answer 110 views ### Symplectic positive definite matrix. I want to prove that any symmetric positive definite symplectic matrix, $A$, and any real number $\alpha >0$, also $A^{\alpha} \in \operatorname{Sp}(2n)$. I was given a hint to decompose ... 2answers 200 views ### Showing that some symplectomorphism isn't Hamiltonian I have the next symplectomorphism $(x,\xi)\mapsto (x,\xi+1)$ of $T^* S^1$, and I am asked if it's Hamiltonian symplectomorphism, i believe that it's not, though I am not sure how to show it. I know ... 1answer 201 views ### $4$-form $\omega \wedge \omega$ vanishes on $S^4$ If $\omega$ is a closed $2$-form on $S^4$, how can I show the $4$-form $\omega \wedge \omega$ vanishes somewhere on $S^4$? I am guessing that the fact we're talking about the $2$-form being ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 165, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9102824926376343, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?t=545824&page=2
Recognitions: Gold Member ## Light year, explanation required? One thing you might want to compare the lightyear to is the kilowatt-hour (kWh), the unit of energy the electric company bills you by. A kilowatt is a unit of power. It is equal to 1000 joules of energy per second. A 1 kilowatt motor, for example, is a motor that uses 1000 joules of energy every second. A kilowatt-hour is that rate of energy use multiplied by 1 hour or 3600 seconds. So 1 kilowatt-hour is 1,000 joules per second * 3600 seconds = 3,600,000 joules of energy. A lightyear is the same idea. Light travels at 300,000,000 meters per second. A lightyear is this rate multiplied by the 31,536,000 seconds in a year. This is $9.46 \times 10^{15}$ meters. So a kilowatt is a rate of change of energy, and a speed is a rate of change of position. The kWh and the lightyear are just ways of dealing with very huge numbers in more manageable ways while having some important physical connection in mind. Recognitions: Homework Help No worries - some questions turn out to be very big. Remember, lots of weighty books have been written on these topics. To get the most out of these forums, refine your question. The more specific you can be, the better the quality of your answer will be. It helps us to help you. Dear Brothers, Thanks to all for your support and patience in answering my question. @Simon Bridge: Can you guide me to some good books about cosmology (particularly about stars) which explain things in simple terminology with examples - in other words, books that a man like me can understand? @Cosmic Eye: Yes, I am now clear on what a light year is. @Pengwuino: Thanks for your effort to help me understand the logic. Recognitions: Homework Help Almost anything on paper will be fine at your level - browse a bookstore's science section and you'll find dozens. But I can save you money: At the level you are asking questions, you want a FAQ like these: http://www.astro.ucla.edu/~wright/cosmology_faq.html http://preposterousuniverse.com/writ...rimer/faq.html http://supernova.lbl.gov/~evlinder/umass/faq.html ... these allow you to get a quick overview of the concepts and refine your questions. (I've tried to rank them as to difficulty, so start at the top - don't worry if you don't understand something, make a note of it and push on: chances are it will become clear in a bit.) http://www.phys.vt.edu/~jhs/faq/stars.html ... same sort of thing but for stars. A lot of video sources, like you'll find on TV, look good but tend to emphasize the sensational aspects at the expense of the science, which is a bit sad. The TED talks are pretty good though - for example: http://www.ted.com/talks/lucianne_wa...her_stars.html Notice that some of these sites are from quite good universities, and the TED talks are by experts in their fields. One of the beauties of the internet is this sort of access - the trick is to find it! Thus: practice your search skills. After a while you'll get good at sorting the wheat from the chaff. Khanacademy.org is a tremendous resource that will answer a lot of your questions. It has a huge collection of short videos (10-15 minutes) on a wide range of subjects including cosmology and physics. They range from very basic to intermediate complexity and build on each other. It's like a whole series of mini lectures where you can skip the classes about things you already know, jump to the interesting bits, and then jump back if you find you need to firm up your foundations. Mentor Quote by Simon Bridge Quote by optical mouse 1.
If the sun has such enormous magnetic power to hold all the 9 planets and make them revolve around the sun, then why is the moon revolving around the earth instead of the sun? 1. the force holding the planets in is gravity not magnetism 2. the moon is orbiting the Sun along with the Earth 3. the moon is more strongly attracted to the earth than the sun because it is much closer. Correction: 1. Correct. 2. Correct. 3. Incorrect! Do the math. The gravitational force of the Sun on the Moon is more than twice that of the Earth on the Moon. So why do we say that the Moon is orbiting the Earth rather than the Sun? First of all, this begs the question regarding the meaning of "orbiting." It assumes that it is an exclusive relationship. The Moon orbits the Earth. And the Sun. And the Milky Way. And the Local Group. And the Virgo Supercluster. "Is in orbit about" is not an exclusive relationship. That doesn't explain why we can say that the Moon is orbiting the Earth. The answer is that force is the wrong metric. A better metric is that the Moon is gravitationally bound to the Earth. That isn't quite good enough due to perturbations from the Sun. An even better metric than being gravitationally bound is the concept of a gravitational sphere of influence. One such is the Hill sphere. There are others, but none is "perfect." It's a bit hard to be "perfect" in the N-body problem. Recognitions: Homework Help Quote by D H A better metric is that the Moon is gravitationally bound to the Earth. This is quite correct - the Moon and the Earth form a gravitationally bound system that, in turn, is bound to the Sun as part of the Solar System, which in turn... In terms of potential, the moon has a hump to get over before it can be claimed by the Sun. Plot the gravitational potential of the earth-sun-moon system and you can clearly see the moon is well inside the dimple around the Earth; this is what we mean by "bound". Bottom line: we say the moon orbits the earth because it is a useful approximation. It is good enough to get spacecraft to land on it or to bounce a laser beam off a mirror. Recognitions: Gold Member To bounce off what DH said, both the Moon and Earth are in "free fall" around the Sun. In addition, the Moon is also in free fall around the Earth. The Moon's orbital velocity around the Sun varies between 28-30 km/s, and it has an average orbital speed of 1 km/s around the Earth. If it helps, think of the Moon and Earth as one system that orbits the Sun. Quote by optical mouse Dear Brothers, Thanks to all for your support and patience in answering my question. @Simon Bridge: Can you guide me to some good books about cosmology (particularly about stars) which explain things in simple terminology with examples - in other words, books that a man like me can understand? @Cosmic Eye: Yes, I am now clear on what a light year is. @Pengwuino: Thanks for your effort to help me understand the logic. Go watch the PBS TV series from the 1970s called "Cosmos" hosted by Carl Sagan. Informative AND fun to watch. Recognitions: Homework Help Sagan is a good showman but tends to emphasize the "mystery" of the cosmos. I don't like to suggest anything that draws its appeal partly from ignorance. Of course - with cosmology that's a bit of a big ask. One of the signs you may be a scientist (IMO :) ) is if you don't lose the sense of wonder when you know how something works. Also I think cosmology has already moved on in some areas since the 80s.
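To back up the "do the math" remark in D H's post above, here is a quick back-of-the-envelope check (mine, not from the thread) of the claim that the Sun pulls on the Moon more than twice as hard as the Earth does. The gravitational constant cancels in the ratio, and the mass and distance figures are rounded textbook values:

```python
# Ratio of the Sun's gravitational force on the Moon to the Earth's force on the Moon.
# F = G*M*m/d^2, so the ratio is (M_sun/M_earth) * (d_earth_moon/d_sun_moon)^2.
M_sun = 1.989e30        # kg
M_earth = 5.972e24      # kg
d_earth_moon = 3.844e8  # m, mean Earth-Moon distance
d_sun_moon = 1.496e11   # m, roughly 1 AU

ratio = (M_sun / M_earth) * (d_earth_moon / d_sun_moon) ** 2
print(f"F_sun / F_earth acting on the Moon: {ratio:.2f}")  # about 2.2
```

which agrees with the "more than twice" figure quoted in the thread.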
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9387843012809753, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/106425/difference-between-npsat-intersection-conpsat-and-np-intersection-conpsat
Difference between NP^SAT intersection CoNP^SAT and (NP intersection CoNP)^SAT ? Hi, Is there any difference between the classes $NP^{SAT} \cap CoNP^{SAT}$ (i.e. $\Sigma_2^p \cap \Pi_2^p$) and $(NP \cap CoNP)^{SAT}$? If so, what is the structural difference between the two? Searching around, I found this paper which shows a relativized world where $NP_{poly} \cap CoNP_{poly}$ is not contained in $(NP \cap CoNP)_{poly}$. Is there any similar result for the question I've asked? - 4 $\let\M\mathrm$How do you define $(\M{NP}\cap\M{coNP})^{SAT}$, other than $\M{NP}^{SAT}\cap\M{coNP}^{SAT}$? Note that there is no general way of assigning $C^A$ to any given complexity class $C$. There is no concept of “$\M{NP}\cap\M{coNP}$ Turing machines” that you could just relativize. (This is in contrast to nonuniformity, which you mention in the second paragraph: $C/\M{poly}$ is well defined for any class $C$. The result you quote is actually that there is an oracle $A$ such that $\M{NP}^A/\M{poly}\cap\M{coNP}^A/\M{poly}\nsubseteq(\M{NP}^A\cap\M{coNP}^A)/\M{poly}$.) – Emil Jeřábek Sep 5 at 13:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9300917983055115, "perplexity_flag": "head"}
http://mathhelpforum.com/pre-calculus/167091-can-parabolas-have-tangent-lines.html
# Thread: 1. ## can parabolas have tangent lines? Prove that the two tangents to a parabola from any point on the directrix are perpendicular. I'm not sure about this question. I was drawing it out and wondering: can you have a line that's tangent to a parabola, or won't they all eventually cross it? I think if yes... then wouldn't there be more than 2 tangent lines? Anyway, can anyone talk about this problem and maybe point me in the right direction? 2. A parabola is the set of all points equidistant from a point (the focus) and a line (the directrix). In this problem, you're asked to pick an arbitrary point on the directrix, find the equations for the two tangent lines to the parabola that go through the arbitrary point on the directrix, and show that the slopes of those two lines are negative reciprocals of each other. That is, if $P$ is a point on the directrix, and line $1$ is tangent to the parabola at some point and also contains $P$ and has slope $m_{1},$ and if it's also true that line $2$ is tangent to the parabola at some point and also contains point $P$ and has slope $m_{2},$ then you're asked to show that $m_{1}=-1/m_{2}.$ Does that make sense? 3. Originally Posted by emakarov A picture is worth a thousand words.
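Not part of the original thread, but the claim can be checked symbolically. For the standard parabola $x^2 = 4py$ (focus $(0,p)$, directrix $y = -p$), the tangent at the point $(a, a^2/4p)$ has slope $a/(2p)$; requiring that tangent to pass through a directrix point $(d,-p)$ gives a quadratic in $a$ whose two roots have product $-4p^2$, so the two slopes multiply to $-1$. A small SymPy sketch of that computation (my own choice of coordinates and variable names):

```python
import sympy as sp

a, d = sp.symbols('a d', real=True)   # a: tangency parameter, d: x-coordinate on the directrix
p = sp.symbols('p', positive=True)    # parabola x^2 = 4*p*y, directrix y = -p

# The tangent at (a, a^2/(4p)) has slope a/(2p); require it to pass through (d, -p):
touch = sp.Eq(-p, (a/(2*p))*(d - a) + a**2/(4*p))
roots = sp.solve(touch, a)            # the two tangency parameters

slopes = [r/(2*p) for r in roots]
print(sp.simplify(slopes[0]*slopes[1]))   # prints -1: the two tangents are perpendicular
```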
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9658465385437012, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/108678/alternate-expression-for-the-following-function?answertab=votes
# Alternate expression for the following function So if the following function is evaluated with floating-point arithmetic, we get poor results for a certain range of values of $x$. Therefore, I need to provide an alternate function that can be used for those values of $x$. The function is: $f(x)= \sqrt{1+x}-\sqrt{1-x}$ I have found the range for this and it is: $-\sqrt{2} \le y \le \sqrt{2}$ So how would I make an alternate expression or function for this? Do I just multiply by its conjugate? - Multiplying by its conjugate is a good idea. – lhf Feb 12 '12 at 23:37 On a random sample of $10^7$ numbers in $[0,1)$, I got a maximum error of $3.33\times 10^{-16}$ between $f(x)$ as given and after multiplying by its conjugate. So I guess the two expressions have the same performance in floating-point arithmetic. I used double precision. – lhf Feb 12 '12 at 23:48 @lhf: Did you use uniformly distributed samples? The stability problems with the naive expression ought to show up for very small $x$, so it's not likely for $10^7$ uniform samples to contain any small enough points. In fact a typical pseudorandom generator that produces uniformly distributed numbers in $[0,1)$ will return only numbers that can be subtracted from $1$ without loss of precision. – Henning Makholm Feb 12 '12 at 23:57 2 Just wondering, what do you mean by saying "we get poor results"? – Emmad Kareem Feb 13 '12 at 0:03 2 @Emmad: if $x$ is tiny, the expression given is exceedingly prone to subtractive cancellation. – J. M. Feb 13 '12 at 3:33 ## 2 Answers The problem with this function is that for small $|x|$ the value $f(x)$ is the difference of two almost equal numbers, as has been noted by other readers. Very often this poses serious problems (as, e.g., in numerical differentiation), but here is an easy way out (you have hinted at it yourself): Use the formula $$a-b ={a^2-b^2\over a+b}$$ in order to obtain an alternative expression for $f$: $$f(x)\ =\ {2x\over \sqrt{1+x}+\sqrt{1-x}}\ .$$ -
The natural thing to try in order to make them go away is to square the entire thing (and worry about signs later): $$f(x)^2 = (1+x)+(1-x)-2\sqrt{(1+x)(1-x)} = 2 - 2\sqrt{1-x^2}$$ That doesn't immediately seem to help, but the square root that remains looks distinctly trigonometrical. Let's see if it helps to define $x=\sin \theta$. Then we can compute $$f(x)^2 = 2 - 2\sqrt{1-\sin^2\theta} = 2 - 2\cos\theta = 4\sin^2\left(\frac12\theta\right)$$ So we get (checking now that the signs are right): $$f(x) = 2\sin\left(\frac12\arcsin x\right)$$ A self-respecting floating-point implementation ought to be able to compute sines and arcsines of small arguments without significant loss of precision. Closer to $x=\pm 1$ you probably want to switch over to the naive formula, though. - For $x$ close to zero perhaps we can just use the series expansion: $x + \frac{x^3}{8} + \frac{7x^5}{128} + \mathcal{O}(x^7)$? – Aryabhata Feb 13 '12 at 0:01 Im not sure I follow...so I shouldn't multiply by its conjugate....instead i should just solve it using the Taylor series...??? – A_1_615 Feb 13 '12 at 2:35 1 @A_1: My suggestion is to square the function (multiply it by itself, not its conjugate). Aryabhata has a different (unrelated) suggestion. – Henning Makholm Feb 13 '12 at 2:48 1 @Aryabhata: or a Padé approximant, for that matter.... – J. M. Feb 15 '12 at 1:04 @J.M.: I am not too familiar with them. Thanks for mentioning those! – Aryabhata Feb 15 '12 at 1:15
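Not from the original answers, but here is a small double-precision check of the three forms discussed in this thread: the naive difference, the conjugate form $2x/(\sqrt{1+x}+\sqrt{1-x})$ from the first answer, and the $2\sin(\tfrac12\arcsin x)$ form from the second. The reference value is computed with Python's `decimal` module at 50 digits; this is my own illustration, not code from the answers:

```python
import math
from decimal import Decimal, getcontext

def naive(x):
    return math.sqrt(1 + x) - math.sqrt(1 - x)

def conjugate(x):
    # 2x / (sqrt(1+x) + sqrt(1-x)), algebraically equal to the naive form
    return 2 * x / (math.sqrt(1 + x) + math.sqrt(1 - x))

def trig(x):
    # 2 * sin(arcsin(x) / 2), also algebraically equal to the naive form
    return 2 * math.sin(math.asin(x) / 2)

getcontext().prec = 50
def reference(x):
    # high-precision reference value for the same double input x
    d = Decimal(x)
    return float((1 + d).sqrt() - (1 - d).sqrt())

for x in (1e-8, 1e-12, 1e-16):
    r = reference(x)
    print(x, naive(x) - r, conjugate(x) - r, trig(x) - r)
```

For the smallest inputs the naive form loses most (or all) of its significant digits, while the other two stay accurate to roundoff, which is exactly the cancellation effect described in the comments above.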
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 31, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9382737278938293, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/104580-tangent-normal.html
# Thread: 1. ## Tangent and normal to the curve, x^2 - sqrt(3) xy + 2y^2 = 5 I got the slope to equal 0. Can someone help me out please? 2. Originally Posted by Morgan82 to the curve, x^2 - sqrt(3) xy + 2y^2 = 5 I got the slope to equal 0. Can someone help me out please? What exactly are you trying to do? 3. The equation for the tangent plane to a level surface $f(x,y,z)=c$ at a point $(x_{0},y_{0},z_{0})$ is: $f_{x}\left(x_{0},y_{0},z_{0}\right)(x-x_{0})+f_{y}\left(x_{0},y_{0},z_{0}\right)(y-y_{0})+f_{z}\left(x_{0},y_{0},z_{0}\right)(z-z_{0})=0$ Find your partial derivatives, plug everything in, and simplify. To find the normal line to this tangent plane, you then want to define x, y, and z parametrically. They will all have the form: $x\left(t \right)=x_{0}+f_{x}\left(x_{0},y_{0},z_{0} \right)t$ except with the appropriate variable in place of x. You then solve each equation for t and set all 3 equations equal (since t=t=t, this must be true).
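The reply above is about tangent planes to a surface of three variables, but the curve in the thread title involves only $x$ and $y$, so the slope the original poster mentions comes from implicit differentiation: for $F(x,y)=0$, $dy/dx = -F_x/F_y$. Here is a small SymPy sketch of that calculation (my own, not from the thread; the sample value $x_0=\sqrt{3}$ is just a convenient choice that lies on the curve for two values of $y$):

```python
import sympy as sp

x, y = sp.symbols('x y')
F = x**2 - sp.sqrt(3)*x*y + 2*y**2 - 5     # the curve from the thread title

dydx = -sp.diff(F, x) / sp.diff(F, y)      # implicit differentiation: dy/dx = -F_x/F_y

x0 = sp.sqrt(3)
for y0 in sp.solve(F.subs(x, x0), y):      # the two points on the curve with x = sqrt(3)
    m = sp.simplify(dydx.subs({x: x0, y: y0}))
    print(f"point ({x0}, {y0}): tangent slope = {m}")
```

At $(\sqrt3, 2)$ the slope comes out as $0$ (a horizontal tangent, with a vertical normal $x=\sqrt3$), which may be the "slope equal to 0" the original poster found; at the other point the slope is $\sqrt3/2$ and the normal's slope is its negative reciprocal.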
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8856669664382935, "perplexity_flag": "middle"}
http://quant.stackexchange.com/questions/121/how-do-you-evaluate-a-covariance-forecast/123
# How do you evaluate a covariance forecast? Suppose you have two sources of covariance forecasts on a fixed set of $n$ assets, method A and method B (you can think of them as black box forecasts, from two vendors, say), which are known to be based on data available at a given point in time. Suppose you also observe the returns on those $n$ assets for a following period (a year's worth, say). What metrics would you use to evaluate the quality of these two covariance forecasts? What statistical tests? For background, the use of the covariances would be in a vanilla mean-variance optimization framework, but one can assume little is known about the source of alpha. edit: forecasting a covariance matrix is a bit different, I think, than other forecasting tasks. There are some applications where getting a good forecast of the eigenvectors of the covariance would be helpful, but the eigenvalues are not as important. (I am thinking of the case where one's portfolio is $\Sigma^{-1}\mu$, rescaled, where $\Sigma$ is the forecast covariance, and $\mu$ is the forecast returns.) In that case, the metric for forecasting quality should be invariant with respect to scale of the forecast. For some cases, it seems like forecasting the first eigenvector is more important (using it like beta), etc. This is why I was looking for methods specifically for covariance forecasting for use in quant finance. - ## 4 Answers You are correct: evaluating volatility forecasts is quite different from evaluating forecasts in general, and it is a very active area of research. Methods can be classified in several ways. One criterion is to consider evaluation methods for single forecasts (e.g., for the time series of returns of a specific portfolio) vs multiple simultaneous forecasts (e.g., for an investable universe). Another criterion is to separate direct evaluation methodsfrom indirect evaluation methods (more on this later). Focusing on single-asset methods: historically the most commonly used approach by practitioners, and the one advocated by Barra is the "bias" statistics. If you have a forecast return process $r_t$ and a forecast $h_t$, then under the null hypothesis that the forecast is correct, $r_t/h_t$ has unit variance. The Bias statistics is defined as $T^{-1} \sum_{t=1}^T (r_t/h_t)^2$, which is asymptotically normally distributed with unit mean and st.dev. $1/\sqrt{T}$, which can be used for hypothesis testing. An alternative is Mincer-Zarnowitz regression, in which one runs a regression between realized variance (say, 20-trading day estimate between $t$ and $t+20$) and the forecast: $$\hat\sigma^2_t =\alpha +\beta h^2_t + \epsilon_t$$ Under the null one tests the joint hypothesis $\alpha=0, \beta=1$. Patton and Sheppard also recommend the regression, which yields a more powerful test: $$(\hat\sigma_t/h_t)^2 =\alpha/h^2_t +\beta + \epsilon_t$$ Both these tests can be (non-rigorously) extended to multiple forecasts by simulating random portfolios and generating statistics for each portfolio, or by assuming an identical relationship between forecasts and realizations across asset pairs: $$vech(\Sigma_t) = \alpha + \beta vech ( H_t) + \epsilon_t$$ in which $vech$ is the "stacking" operator on a matrix (in this case, the forecast and realized variance matrices). As for indirect tests, a popular approach is the minimum variance portfolio for risk model comparison. One finds the minimum variance portfolio under a unit budget constraint using two or more asset covariance matrices. 
One can prove that the true covariance matrix would result in the portfolio with the lowest realized variance. In other words, better models hedge better. The advantage of this approach is that it does take into account the quality of the forecast of $\Sigma^{-1}$, which is used in actual optimization problems; and it doesn't require providing alphas, so that the test is not a joint test of risk model and manager skill. - You probably want to take it back to how one evaluates forecast models in general: using some metrics over one- or many-step forecasts, see e.g. here for a Wikipedia discussion. But instead of forecasting first moments, it would now be second moments. This can still use (root) mean squared error, or mean absolute percentage error, or related measures; see e.g. this paper by Rob Hyndman on comparisons of methods. - I think a good approach is to compare your two covariance matrices on a set of random portfolios (see for instance http://www.portfolioprobe.com/about/applications-of-random-portfolios/assess-risk-models/). What you want is a high correlation (across the portfolios) between the predicted and realized portfolio volatility. We're never going to estimate the level of volatility especially well. But if you get the right ranking across portfolios, then that is as much as you can ask. It would be best to generate random portfolios that look like the ones you will actually have, but even naively generated portfolios may be good enough. - In mean-variance portfolio work, the elements of the covariance matrices are highly volatile and infused with error, so how to obtain forecasts that are usable ? A simple idea is to use a Stein-equal covar shrinkage estimator which, in practice, is easy to calculate and produces superior portfolios when evaluated on out-of-sample data ( see Continuous Time Mean Variance Portfolios, Zhou, 2000) . So to evaluate a proposed covar matrix,: 1) calculate the equal covar matrix, covar.eq, where covar.eq[i,i] = covar[i,i]; and covar.eq[i,j] = mean(covar[i,j]); i not = j. 2) del = Sum(i,j)[ Abs(covar.eq[i,j]-covar[i,j] )], or if you prefer delsq = sum(i,j)[ (covar.eq[i,j]-covar[i,j])^2 ] and pick the covar with the smallest del or delsq. The selected covar matrix will be closest to the stein-shrinkage matrix which (see above ) produces superior portfolio's. Paul H. Lasky B & P Investments -
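To make the minimum-variance comparison concrete, here is a small NumPy sketch (my own construction; `sigma_a`, `sigma_b`, and the return sample are synthetic placeholders, not real vendor forecasts):

```
import numpy as np

def min_var_weights(sigma):
    # fully-invested minimum-variance portfolio: w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)
    ones = np.ones(sigma.shape[0])
    w = np.linalg.solve(sigma, ones)
    return w / w.sum()

def realized_vol(weights, returns):
    # annualized standard deviation of the realized portfolio return series
    return np.std(returns @ weights, ddof=1) * np.sqrt(252)

rng = np.random.default_rng(0)
n = 5
returns = rng.normal(scale=0.01, size=(252, n))    # placeholder daily returns
sigma_a = np.cov(returns.T) * 1.1                  # placeholder forecast A
sigma_b = np.cov(returns.T) + 1e-4 * np.eye(n)     # placeholder forecast B

for name, sigma in (("A", sigma_a), ("B", sigma_b)):
    print(name, realized_vol(min_var_weights(sigma), returns))
# in the sense of the answer above, the forecast whose minimum-variance
# portfolio shows the lower realized volatility "hedges better"
```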
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9219499826431274, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?p=3383686
Physics Forums ## Can the Earth have an asteroid stuck in permanent orbit like our moon? Good morning, I've searched about this case in the internet but just found things about asteriods sharing the same orbit as earth over the sun, but I wonder if an asteroid can get stuck in earth's orbit like the moon. Thanks! I am not a scientist of any sort, just a huge fan of the universe that likes to read about it. I have just finished a book called Cosmos by Giles Sparrow and I have learned that large gas giants such as Jupiter and Saturn have irregularly shaped moons which are believed to be asteroids who have become trapped by the tremendous gravity of the large planets. With that being said I could not say for certain that an asteroid could become trapped in an orbit around Earth. I do not know the specific formulas to calculate gravitational effects on orbital patterns, however I imagine if an asteroid with a specific size and velocity came within a specific distance to the Earth it could be pulled into orbit. Probably very elliptical at first and over time would orbit in a more circular pattern. Anyone with actual knowledge of this please correct anything I said that is misleading or wrong. Mentor Blog Entries: 1 It's entirely plausible that an asteroid could fall into orbit around the world, given the right altitude and speed. ## Can the Earth have an asteroid stuck in permanent orbit like our moon? Quote by ryan_m_b It's entirely plausible that an asteroid could fall into orbit around the world, given the right altitude and speed. Thanks for the fast answer! Not wanting too much to ask. There is any software or math formula to calculate this? I wonder what is the maximum size of this asteroid. Quote by TitanRZ Thanks for the fast answer! Not wanting too much to ask. There is any software or math formula to calculate this? I wonder what is the maximum size of this asteroid. Maximum size would be one that's large enough to affect the orbits of Earth and Moon. Note that all the planets outside Earth's orbit have two or more moons. There's little reason why Earth couldn't. An asteroid could happily orbit way out beyond the Moon. It would have to be far out, really. The Moon is so large relative to Earth that they're effectively a binary system. Any asteroid would have to be far enough away so as to see the Earth-Moon system as essentially one gravitational point, otherwise the orbit is unstable. Recognitions: Gold Member The conditions has to be just right and it will only happen with some help from the Moon and perhaps also the atmosphere. In terms of everyday likelihoods, it would be an exceedingly rare event to have a significantly sized asteroid captured into Earth orbit. And then to have such an orbit being a stable orbit in the Earth-Moon system would be an even more rare event, as most orbits that gets anywhere near the Moon sooner or later will result in the object either impacting the Moon or Earth, or being ejected from the Earth-Moon system. Without doing the "math", I would venture a guess that you are most likely to live your whole life without such a stable capture of an asteroid ever occurring. See also http://earthsky.org/space/asteroids-accretion Recognitions: Gold Member Science Advisor Quote by TitanRZ ...Not wanting too much to ask. There is any software or math formula to calculate this? I wonder what is the maximum size of this asteroid. 
If you have a Windows computer you can use Gravity Simulator, which is a program I wrote, to try out different capture scenerios: www.gravitysimulator.com Asteroids can get captured through the L1 or L2 regions. This happened in 2006. Asteroid 2006 RH120 (aka 6R10DB9) was captured into Earth orbit in the L1 region. It orbited Earth for 15 months before escaping Earth orbit through the L2 region. This object is beleived to be 3-6 meters in diameter. For every object 3-6 meters in diameter, there are probably thousands of objects 3-6 cm in diameter. So I think it is highly likely that Earth has some meteoroids orbiting it now. Asteroids captured in this manner don't stay very long. They need to lose energy so the can't climb back out to the L1 or L2 regions. Here's a link to a Gravity Simulator simulation I did with this object. You can download the simulation and run it on your own computer: http://www.orbitsimulator.com/cgi-bi...1182030550/0#0 The Moon can help capture an asteroid with a gravitational assist that robs the asteroid of energy. But such an asteroid would be in a Moon-crossing orbit, and would probably only complete a few orbits before the Moon ejects it. An asteroid can also graze Earth's atmosphere, robbing it of energy and capturing it into Earth orbit. But such an asteroid would always have its perigee inside Earth's atmosphere, so it would sprial down before crashing into Earth. There was some speculation that this happened (10-20 years ago I think), when a group of observers saw a bright fireball that escaped back into space. Hours later, and hundreds of miles away another group of observers saw a bright fireball. This caused some to speculate that it was the same object. The first pass through the atmosphere captured it into an elliptical orbit. The second pass through destroyed it. One possible way of capturing an asteroid into a stable orbit is to have a double asteroid pass through the Earth/Moon system. As it gets close to Earth, the pair of asteroids become unbound from each other, one gaining energy and one losing energy, with the one losing energy being left in a stable orbit. Some think that this is how Neptune captured Triton. Stable prograde orbits can not exist beyond the Moon's orbit. The Moon would destabalize them. But stable retrograde orbits can exist beyond the Moon's orbit out to about 800,000 km. Once asteroids get as large as Ceres, we stop calling them asteroids, and start calling them dwarf planets. Ceres, at 1/77 the mass of the Moon, is not massive enough to disrupt the Earth / Moon system. So the answer to "how big" is as big as you want. Thank you for the answers they were very enlightening! I hope you all have a good night. see 3753 Cruithne http://en.wikipedia.org/wiki/3753_Cruithne You get very complicated orbital dynamics between the earth, moon, and sun. Recognitions: Gold Member Science Advisor A small asteroid could get stuck in a Lagrange point. Recognitions: Gold Member Quote by Chronos A small asteroid could get stuck in a Lagrange point. But for an asteroid to enter the Earth-Moon system and come "to rest" at a Lagrange point it must somehow loose some of its excess hyperbolic speed and it must do so very close to the point. In addition, all Lagrange points in the Earth-Moon system are unstable so the asteroid would eventually move away from the point again. 
Recognitions: Gold Member Science Advisor Quote by Filip Larsen But for an asteroid to enter the Earth-Moon system and come "to rest" at a Lagrange point it must somehow loose some of its excess hyperbolic speed and it must do so very close to the point. In addition, all Lagrange points in the Earth-Moon system are unstable so the asteroid would eventually move away from the point again. L4 & L5 can be stable for millions of years. But you're right, coming to rest in these regions would be unusual. Mentor I'm not sure Cruithne is a very good example. Cruithne is a body in an orbit around the sun, more elliptical than the earth's, and with a similar period, so it appears to circle the earth in an earth-centered coordinate system. However, it never is gravitationally bound to the earth. A better example, although not one without its own problems is J002E3. This is an object which periodically is captured by the earth, orbits a few times, and then is ejected by gravitational perturbations from the sun and moon. It then orbits the sun until it falls back in to earth orbit. A close lunar approach could put it in permanent earth orbit - it would need to be a "reverse slingshot", where it loses velocity rather than gains it. The problem I alluded to is that J002E3 is almost certainly artificial, and the most likely candidate is the third stage of Apollo 12. The reason I say that it is artificial is because its surface is covered in paint. Recognitions: Gold Member Science Advisor Quote by Vanadium 50 ...A close lunar approach could put it in permanent earth orbit - it would need to be a "reverse slingshot", where it loses velocity rather than gains it... But a close lunar approach would leave it in a lunar-crossing orbit. Interior to the Moon, it needs something to cause it to lose more energy so its apogee was well out of the Moon's grasp. Otherwise it would make a few orbits before the Moon ejected it. Recognitions: Gold Member Homework Help Science Advisor Quote by TitanRZ Thanks for the fast answer! Not wanting too much to ask. There is any software or math formula to calculate this? I wonder what is the maximum size of this asteroid. The size won't matter much. It actually isn't that hard to calculate whether or not an asteroid would be captured by the Earth's gravity. You look at the object's relative velocity (relative to the Earth) and its distance from Earth and calculate it's specific energy (relative to Earth). If that specific energy is negative, then it will be captured by Earth's gravity.
If that specific energy is 0 or greater, then the object won't be captured. $\epsilon = \frac{v^2}{2} - \frac{\mu}{r}$ $\epsilon$ is the specific energy per unit of mass relative to Earth $\mu$ is the geocentric gravitational constant (the universal gravitational constant times the Earth's mass) v is object's speed relative to Earth r is the object's distance from Earth Essentially, you have an object orbiting the Sun in a fairly similar trajectory to the Earth's (otherwise, the relative velocity would surely be too large). The object is still orbiting the Sun (just as the Moon is), but with periodic perturbations that cause it to circle the Earth. In practice, from an Earth reference frame, the object is orbiting the Earth, but looking at it from outside the system might help to visualize just how the capture would happen and the limitations for it to occur (the fact that it has to be close to the same distance from the Sun as the Earth with a velocity close to the same as the Earth's). As others mentioned, the object would also be affected by other objects besides the Earth - most significantly by the Moon. The chances of getting a stable orbit would be small, mainly because of the difficulty of getting a trajectory similar enough to Earth's to be captured in a fairly circular, stable orbit not disrupted by the Moon; but not in a trajectory so similar that Earth puts the object into a solar orbit similar to Cruithne's.
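As a quick numerical illustration of that capture criterion (my own numbers, purely illustrative, not from the thread):

```
# specific orbital energy relative to Earth: eps = v^2/2 - mu/r
# eps < 0  -> gravitationally bound, eps >= 0 -> not captured
MU_EARTH = 3.986004418e14   # geocentric gravitational constant, m^3/s^2

def specific_energy(v, r):
    return 0.5 * v**2 - MU_EARTH / r

# illustrative object 1 million km from Earth moving 0.5 km/s relative to it
print(specific_energy(500.0, 1.0e9))    # negative -> bound
# same distance but 1.5 km/s relative speed
print(specific_energy(1500.0, 1.0e9))   # positive -> escapes
```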
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9556042551994324, "perplexity_flag": "middle"}
http://mathhelpforum.com/statistics/158473-calculating-number-possible-combinations.html
# Thread: 1. ## Calculating number of possible combinations Hi, I'm sure there is an easy answer to my question, but it's been a while since maths class and I can't seem to work it out in my head! Imagine I have two objects, let's call them 'A' and 'B', what I want to do is work out a formula for the total number of possible combinations of the objects, with repeats not allowed. so... for two letters the possibilities are: 1)A 2)B 3)A+B 3 possible combinations (B+A isn't valid as it's the same as A+B as far as I'm concerned). and for three letters: 1)A 2)B 3)C 4)A+B 5)A+C 6)B+C 7)A+B+C (7 combinations) what I would ideally like would be a formula for N objects! I understand the use of factorials (n!) in similar problems, but can't quite find the way to relate it to this... Can anyone help me? Thanks! 2. Originally Posted by chowner Imagine I have two objects, let's call them 'A' and 'B', what I want to do is work out a formula for the total number of possible combinations of the objects, what I would ideally like would be a formula for N objects! You want the total number of non-empty subsets of a set of N elements. That is $2^N-1$ If $N=3$ we get $2^3-1=8-1=7$ 3. *brain ticks for a few seconds* Yep that's it! Thanks Plato! Still need to wrap my head around why that formula works, but at least that's not going to be bugging me all afternoon 4. Originally Posted by chowner Still need to wrap my head around why that formula works. It works because $2^N = \sum\limits_{k = 0}^N \binom{N}{k}$ is the total number of subsets. Subtract 1 for the empty set. 5. Originally Posted by chowner *brain ticks for a few seconds* Yep that's it! Thanks Plato! Still need to wrap my head around why that formula works, but at least that's not going to be bugging me all afternoon Imagine you were also allowed to select none of the objects. For 1 object, you have 2 choices {} {A} For 2 objects, double the number of choices {} {A} {B} {AB} For 3 objects, double the number of choices {} {A} {B} {AB} {C} {AC} {BC} {ABC} Keep doubling, so the number of choices is a power of 2. Then reduce it by 1 choice.
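A brute-force check of the $2^N-1$ formula (a quick Python sketch, not part of the thread):

```
from itertools import combinations

def nonempty_selections(objects):
    # every way to pick at least one object, order ignored, no repeats
    out = []
    for k in range(1, len(objects) + 1):
        out.extend(combinations(objects, k))
    return out

for n in range(1, 6):
    objs = [chr(ord('A') + i) for i in range(n)]
    count = len(nonempty_selections(objs))
    print(n, count, 2**n - 1)   # the two counts agree for every n
```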
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9426922798156738, "perplexity_flag": "middle"}
http://en.wikiversity.org/wiki/Introduction_to_Limits
# Introduction to Limits From Wikiversity This lesson assumes you have a working knowledge of the topics presented in the following lessons: ## Introduction to Limits Limit processes are the basis of calculus. As opposed to algebra, where a variable is considered to have a fixed value (think of the solution of word problems, where there are one or more discrete answers), we allow a variable to change continuously and study how a function's value changes. ## Outline This article addresses limits of functions of a single variable. It starts with an informal definition, discusses the basic properties of the limit operation, and progresses to the precise definition of limit. This rigorous definition is used to prove the earlier results, which were stated without proof. A number of examples of applying the definition are given, which helps develop facility with inequalities. ## An Informal Definition Consider the function $y = 2x$. As x increases, y increases. As we crank x up towards a number, say 100, y gets closer to the number 200. Limits are concerned with what the value of a function (in this case, y) approaches as a variable it is based on (in this case, x) approaches a number (in this case, 100), not the actual value of the function when the variable equals the number. This is useful because not all functions are continuous and you may get a different result in the limit of y as you crank x down instead of up to that number. ## Continuous Functions Look at these two graphs. Notice that the top one is a single, unbroken curve, whereas the bottom one has many "jumps." Now the top graph is continuous and the bottom one is not. This is the basic idea of continuity for a function; a function that is not continuous will have jumps. Points themselves can also be continuous and not continuous. For example, the point where x equals -1/2 in the second graph is continuous because there are no jumps in that specific section. On the other hand, where x equals two, there is a jump, so the point is not continuous. Not a very formal definition, this jump thing! Because the jump thing is so informal, they invented a new way to phrase it using limits ### Definition of a Continuous Function Say we want to prove that the top function is continuous for all values between -3 and 3. (This is a closed interval, or a section of a function that includes the two endpoints. Likewise, an open interval is a section of a function that does not include the two endpoints. The closed interval between -3 and 3 includes -3 and 3; the open interval does not.) The first definition is that of continuity in an interval Definition of continuity in a closed interval A continuous closed interval is continuous at every point, including the left and right endpoints. Not too complicated, eh? Now on to the definition of continuity at a point that is in a open interval; that is, not including the endpoints Definition of continuity at a point in an open interval in a function Let c be the x-value of the coordinate we want to prove is continuous. The function f is said to be continuous at the point c if the following two conditions obtain: • $\lim_{x\to c} f(x)$ exists, and • $\lim_{x\to c} f(x)=f(c)$ Now, when can the limit as x aproaches c not exist? Just look at the second graph! If you try to find the limit at two from numbers larger than two, you get a result that does not equal the results from using numbers smaller than two! There is a way to write this mathematically, so we define new types of limits. 
• The mathematical way to write "The limit of f(x) as x approaches c created by using numbers larger than c" is $\lim_{x\to c^+}f(x)$ • The mathematical way to write "The limit of f(x) as x approaches c created by using numbers smaller than c" is $\lim_{x\to c^-}f(x)$ ## Basic Theorems for Limit Operations The limit operator satisfies linearity. That is, "the limit of sum, is the sum of the limits". There are also basic rules for doing arithmetic with limits. They can be found in the calculus textbook for reference. Follow this link and study the limits. Convince yourself that these rules are intuitive. Also note that if f(a) is defined, and if f is a continuous function, then $\lim_{x\to a} f(x)=f(a)$. ## Proofs What does $\lim_{x\to 5} (x^2)$ equal? Find the value of this expression for values close to 5: | | | | | | | | | | | | |----------------------------------------------------------------------------|----|-------|---------|--------|---------|---------|--------|---------|-------|----| | $x$ | 4 | 4.9 | 4.99 | 4.999 | 4.9999 | 5.0001 | 5.001 | 5.01 | 5.1 | 6 | | $x^2$ | 16 | 24.01 | 24.9001 | 24.99 | 24.999 | 25.001 | 25.01 | 25.1001 | 26.01 | 36 | Notice how, as we get closer to 5 from both sides, the value of the function, x2 approaches 25. This may seem obvious, since 5 squared actually equals 25. In fact, the limit of f(x) as x approaches c in a continuous open interval is equal to f(c) if it is defined. However, this problem is merely to help you get acquainted to limits, before we go into limits about a very special quotient, 0/0 and other indeterminate forms, or expressions that cannot be determined by substituting c for x in a limit where x goes to c. NOTE: Wikipedia is a very good reference on these limits; however, avoid going into the section on evaluating indeterminate limits, for this uses a statement called l'Hôpital's rule. This is a useful rule but is confusingly unnecessary until you know what a derivative is. Notice that 0/0 by itself is meaningless; also notice that $\lim_{x\to 0}(x/x)$ equals one, since for all values near 0, x/x=1. ### The main purpose of limits You can find otherwise undefined expressions with limits. They allow you to use algebraic rules, even at values when the rules are false! For example, look at $\lim_{x\to 0} (x^2/x)$. What does this equal? Let's use an algebraic rule that is true at all values of x besides zero. The rule states that $x^2/x$ equals x for all numbers beside 0. When we apply this rule to our old limit, we see that the limit is equivalent to $\lim_{x\to 0} x$, which is easily seen to be equal to 0. For more on finding limits, see the calculus wikibook. Exercise 1 More about the definition of limits and basic limit operations ## A More Precise Definition of Limit For almost all purposes, the informal definition of a limit works very well; however, because of its vague wording, it is very dificult to use it in any sort of proof about limits. For proofs, the formal definition of a limit, from Wikibooks, is used instead. ## Applying the Definition ### Limits involving Polynomials Finding the limit of a polynomial is a simple process. The easiest method of taking the limit of a polynomial is substitution. The value of a polynomial as x approaches a is equal to f(a). Consider the following example: $\lim_{x\rightarrow 3} x^3 + 2x^2 + 7 = (3)^3 + 2(3)^2 + 7 = 52$ ### Limits involving Trigonometric Functions This is a unit circle. It has a radius of one unit, and its angles are measured in radians. 
Using this circle, we can prove that $\lim_{\theta\to 0} \,\ \frac{\sin(\theta)}{\theta}=1$ Notice that this makes sense, since as theta approaches 0, arc DA becomes very close to being congruent to arc DC. This doesn't prove anything, though! To prove this, we need to look at areas. First notice that $ODC \le ODA \le OBA$, since each area contains the last area, plus another layer. Now, we can find the area of these three areas. The first area, ODC, is a triangle. This triangle has a base of $\,\ \cos(\theta)$, and it has a height of $\,\ \sin(\theta)$. Using the area formula for a triangle, we find $ODC=\frac{1}{2} \cos(\theta) \sin(\theta)$ Now, we go on to the next area, ODA. It is a sector of a circle. The formula to find the area of the sector of a circle is $\frac {\theta r^2} {2}$ This makes sense, since when you think that a complete circle would have two pi radians, the formula turns into the formula for a circle. In our unit circle, the area is simply $\frac {\theta} {2}$, since the radius is one. The third area is also a triangle, with height of $\,\ \tan(\theta)$, or $\frac {\sin(\theta)}{\cos(\theta)}$. With a base of one, its area turns into $\frac {1}{2} \frac {\sin(\theta)}{\cos(\theta)}$ Now from the original expression of $ODC \le ODA \le OBA$, we have $\frac{1}{2} \cos(\theta) \sin(\theta)\le \frac {\theta}{2} \le \frac {1}{2} \frac {\sin(\theta)}{\cos(\theta)}$ Multiply this whole inequality by two $\cos(\theta) \sin(\theta) \le \theta \le \frac {\sin(\theta)}{\cos(\theta)}$ Now divide the whole thing by $\,\ \sin(\theta)$ $\cos(\theta) \le \frac {\theta} {\sin(\theta)} \le \frac {1}{\cos(\theta)}$ Now we apply the limit! $\lim_{\theta \to 0} \,\ \cos(\theta) \le \lim_{\theta \to 0} \,\ \frac {\theta} {\sin(\theta)} \le \lim_{\theta \to 0 } \,\ \frac {1}{\cos(\theta)} \equiv 1 \le \lim_{\theta \to 0} \,\ \frac {\theta} {\sin(\theta)} \le \frac {1}{1}, \,\ or \,\ 1$ ## Further Study More information on this topic may be found in other lessons that list this one as a prerequisite. ## Tools L'Hôpital's Rule Consider the limit $\lim_{x\to a} \frac{f(x)}{g(x)}$. If both the numerator and the denominator are finite at $a$ and $g(a)\neq 0$, then $\lim_{x\to a} \frac{f(x)}{g(x)}=\frac{f(a)}{g(a)}$. Example: $\lim_{x\to 3} \frac{x+2}{x^2+1}=\frac{5}{10}=\frac{1}{2}$. But what happens if both the numerator and the denominator tend to 0? It is not clear what the limit is. In fact, depending on what functions f(x) and g(x) are, the limit can be anything at all! Example: $\lim_{x\to 0} \frac{x^3}{x^2}=\lim_{x\to 0} x=0$, $\lim_{x\to 0} \frac{x}{x^2}=\lim_{x\to 0} \frac{1}{x}=\infty$, $\lim_{x\to 0} \frac{-x}{x^3}=\lim_{x\to 0} \frac{-1}{x^2}=-\infty$, $\lim_{x\to 0} \frac{kx}{x}=\lim_{x\to 0} k=k$. These limits are examples of indeterminate forms of type $\frac{0}{0}$. L'Hôpital's Rule provides a method for evaluating such limits. We will denote $\lim_{x\to a}$, $\lim_{x\to a^+}$, $\lim_{x\to a^-}$, $\lim_{x\to\infty}$ and $\lim_{x\to-\infty}$ generically by $\lim$ in what follows. L'Hôpital's Rule for $\frac{0}{0}$: Suppose $\lim f(x)=\lim g(x)=0$. Then 1. If $\lim \frac{f'(x)}{g'(x)}=L$, then $\lim \frac{f(x)}{g(x)}=\lim \frac{f'(x)}{g'(x)}=L$. 2. If $\lim \frac{f'(x)}{g'(x)}$ tends to $+\infty$ or $-\infty$ in the limit, then so does $\frac{f(x)}{g(x)}$. Example: $\lim_{x\to 0} \frac{x}{\sin x}=\lim_{x\to 0} \frac{\frac{d}{dx}(x)}{\frac{d}{dx}(\sin x)}=\lim_{x\to 0} \frac{1}{\cos x}=1$; $\lim_{x\to 1} \frac{2\ln x}{x-1}=\lim_{x\to 1} \frac{\frac{d}{dx}(2\ln x)}{\frac{d}{dx}(x-1)}=\lim_{x\to 1} \frac{2/x}{1}=2$; $\lim_{x\to 0} \frac{x^2}{e^x-1}=\lim_{x\to 0} \frac{\frac{d}{dx}(x^2)}{\frac{d}{dx}(e^x-1)}=\lim_{x\to 0} \frac{2x}{e^x}=0$. If the numerator and the denominator both tend to $\infty$ or $-\infty$, L'Hôpital's Rule still applies. L'Hôpital's Rule for $\frac{\infty}{\infty}$: Suppose $\lim f(x)$ and $\lim g(x)$ are both infinite. Then 1. If $\lim \frac{f'(x)}{g'(x)}=L$, then $\lim \frac{f(x)}{g(x)}=\lim \frac{f'(x)}{g'(x)}=L$. 2. If $\lim \frac{f'(x)}{g'(x)}$ tends to $+\infty$ or $-\infty$ in the limit, then so does $\frac{f(x)}{g(x)}$. The proof of this form of L'Hôpital's Rule requires more advanced analysis. Here are some examples of indeterminate forms of type $\frac{\infty}{\infty}$.
Example: $\lim_{x\to\infty} \frac{x}{e^x}=\lim_{x\to\infty} \frac{1}{e^x}=0$. Sometimes it is necessary to use L'Hôpital's Rule several times in the same problem: Example: $\lim_{x\to 0} \frac{x^2}{1-\cos x}=\lim_{x\to 0} \frac{2x}{\sin x}=\lim_{x\to 0} \frac{2}{\cos x}=2$. Occasionally, a limit can be re-written in order to apply L'Hôpital's Rule: Example: $\lim_{x\to 0^+} x\ln x=\lim_{x\to 0^+} \frac{\ln x}{1/x}=\lim_{x\to 0^+} \frac{1/x}{-1/x^2}=\lim_{x\to 0^+} (-x)=0$. We can use other tricks to apply L'Hôpital's Rule. In the next example, we use L'Hôpital's Rule to evaluate an indeterminate form of type $0^0$: Example: To evaluate $\lim_{x\to 0^+} x^x$, we will first evaluate $\lim_{x\to 0^+} \ln(x^x)$. $\lim_{x\to 0^+} \ln(x^x)=\lim_{x\to 0^+} x\ln(x)=0$ by the previous example. Then since $\ln(x^x)\to 0$ as $x\to 0^+$ and $\ln(u)=0$ if and only if $u=1$, $x^x\to 1$ as $x\to 0^+$. Thus, $\lim_{x\to 0^+} x^x=1$. Notice that L'Hôpital's Rule only applies to indeterminate forms. For the limit in the first example of this tutorial, L'Hôpital's Rule does not apply and would give an incorrect result of $\frac{1}{6}$. L'Hôpital's Rule is powerful and remarkably easy to use to evaluate indeterminate forms of type $\frac{0}{0}$ and $\frac{\infty}{\infty}$. Key Concepts L'Hôpital's Rule for $\frac{0}{0}$: Suppose $\lim f(x)=\lim g(x)=0$. Then 1. If $\lim \frac{f'(x)}{g'(x)}=L$, then $\lim \frac{f(x)}{g(x)}=\lim \frac{f'(x)}{g'(x)}=L$. 2. If $\lim \frac{f'(x)}{g'(x)}$ tends to $+\infty$ or $-\infty$ in the limit, then so does $\frac{f(x)}{g(x)}$. L'Hôpital's Rule for $\frac{\infty}{\infty}$: Suppose $\lim f(x)$ and $\lim g(x)$ are both infinite. Then 1. If $\lim \frac{f'(x)}{g'(x)}=L$, then $\lim \frac{f(x)}{g(x)}=\lim \frac{f'(x)}{g'(x)}=L$. 2. If $\lim \frac{f'(x)}{g'(x)}$ tends to $+\infty$ or $-\infty$ in the limit, then so does $\frac{f(x)}{g(x)}$.
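The worked limits above are easy to sanity-check with a computer algebra system. A short SymPy sketch (mine, not part of the original lesson):

```
import sympy as sp

x, theta = sp.symbols('x theta')

print(sp.limit(sp.sin(theta) / theta, theta, 0))   # 1, the unit-circle result
print(sp.limit(x / sp.sin(x), x, 0))               # 1
print(sp.limit(x**2 / (1 - sp.cos(x)), x, 0))      # 2, needs l'Hopital twice by hand
print(sp.limit(x * sp.log(x), x, 0, dir='+'))      # 0
print(sp.limit(x**x, x, 0, dir='+'))               # 1, the 0^0 example
```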
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 30, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9177175164222717, "perplexity_flag": "head"}
http://en.wikibooks.org/wiki/The_Science_of_Programming/Auld_Lang_Sine
# The Science of Programming/Auld Lang Sine ## Contents You have, in all likelihood, previously made acquaintance with sines and cosines. Sinusoids (which include sines and cosines) are periodic functions. Periodic functions have the following property: ``` $f\,(x) = f\,(x + p) = f\,(x + 2p) = ...$ ``` for all x and certain values of p. The smallest value of p for which the above equation holds is known as the period of the function. If you not familiar with sines and cosine functions, this is what a sine wave looks like: Note that for sine waves without phase and frequency shifts (as above), when x is zero, the amplitude (y-value) of the sine wave is zero (with a rising slope). The wave reaches zero again (with a rising slope) x = 2π. Contrast this with the cosine wave: Unlike the sine wave, which has zero-crossings at multiples of π, the cosine wave has peaks and troughs at multiples of π. As with the sine wave, the period, or peak to peak length, is 2π. ## The general sine wave If you look closely at the two waves, you'll see that the cosine is just the sine wave shifted $\frac{\pi}{2}$ units to the left. So, we can define cosine in terms of sine: ``` $\cos(x) = \sin(x + \frac{\pi}{2})$ ``` Note that mathematicians often abbreviate sine to sin and cosine to cos. The above illustrates one of the common ways to modify a sine wave: shifting the phase of the wave, for which the symbol $\theta$ is often used: ``` $y = \sin(x + \theta);$ ``` Another modification is to change the amplitude of a sine wave or, on other words, to change how high and how low the peaks range. The variable a is used to indicate amplitude, so the formula for a sine wave that allows amplitude modification is: ``` $y = a \sin(x + \theta);$ ``` Finally, it is common to change the frequency of sine waves, or how many peaks (or troughs) are in a given region. The variable $\omega$ is often used for this task: ``` $y = a \sin(\omega x + \theta);$ ``` If we set $\omega$ to 2, we will get twice as many peaks (or troughs) within a given region. The inverse of frequency is period, so setting $\omega$ to 2 will shorten the peak to peak distance by half. In the particular case of frequency 2, the period of the sine wave would be π. Here's a neat trick. If you want to find the location of the 'first' zero crossing, of a general sign wave of this form, change the sign of both the phase shift and the operator combining it with x and then factor out $\omega$. For cosine, with unit amplitude and frequency doubled, we have: ``` $\cos(x) = \sin(2x + \frac{\pi}{2})$ $\cos(x) = \sin(2x - (-\frac{\pi}{2}))$ $\cos(x) = \sin(2[x - (-\frac{\frac{\pi}{2}}{2})])$ $\cos(x) = \sin(2[x - (-\frac{\pi}{4})])$ ``` This version of cosine has its first zero crossing at $x = -\frac{\pi}{4}$. [1] In other words, to find the first zero crossing, divide the phase shift by the frequency and negate it (assuming that the phase shift is being added in). ## Implementing the general sine wave Sway has the functions sin and cos built-in, but these functions assume amplitude = 1, frequency = 1, and phase shift = 0. [2] To implement sine and cosine, we will take our usual object approach: ``` function sine(amp,freq,shift) { function value(x) { amp * sin(freq * x + shift); } this; } function cosine(amp,freq,shift) { sine(amp,freq,shift + (pi() / 2)); } ``` Note how we make cosine a wrapper for sine as cosine is just sine with a phase shift. 
Let's take that little trick we learned in the previous section for finding the 'first' zero crossing and implement it: ``` function sine(amp,freq,shift) { function value(x) { ... } function firstZero() { // phase shift is added in so just divide and negate -(real(shift) / freq); } this; } ``` Let's see if it works for cosine with frequency 2: ``` var w = cosine(1,2,0); sway> -(pi() / 4); REAL_NUMBER: -0.7853981634 sway> w . firstZero(); REAL_NUMBER: -0.7853981634 ``` Good. Let's also check that the value of the sine wave is indeed zero at that point: ``` sway> var fz = w . firstZero(); REAL_NUMBER: -0.7853981634 sway> w . value(fz); w . value(fz) is 0.000000e+00 ``` Bingo! ## Differentiating sine and cosine As SPT points out in Chapter XV in CMT, the derivative of sine is cosine and the derivative of cosine is the negative of sine. Using the first rule, we can add a diff function to the sine constructor. Of course, in keeping with our unassuming differentiation system, we need to pass in independent and with-respect-to variables, as appropriate: ``` function sine(amp,freq,shift) { function value(x) { ... } function firstZero() { ... } function diff() { cosine(amp,freq,shift); } this; } ``` Of course, this implementation assumes that the independent variable and the with-respect-to variable are one and the same. It also assumes that the independent variable is just a symbol. As such, we could not construct a sine wave of the form: ``` $y = 3 \sin(2 x^2 + pi)$ ``` To do so, we will have to take the same approach as for terms, by allowing the independent variable to be a differentiable object. Recall the term constructor: ``` function term(a,iv,n) { function value(x) { ... } function toString() { ... } function diff(wrtv) { if (n == 0) { constant(0); } else { term(a * n,iv,n - 1) times iv . diff(wrtv); } } if (iv is :SYMBOL) { iv = variable(iv); } this; } ``` Recall also how the diff function implements the chain rule. We will need to follow the same strategy for our sine constructor: ``` function sine(amp,freq,iv,shift) { function value(x) { ... } function firstZero() { ... } function diff(wrtv) { cosine(amp,freq,iv,shift) times iv . diff(wrtv); } if (iv is :SYMBOL,iv = variable(iv)); this; } ``` All that's left for us to do is implement our visualization for sine (and, of course, test): ``` function toString() { "" + amp + " sin(" + freq + "(" + iv . toString() + ")" + shift + ")"; } ``` ## What is -sin? According to CMT, the derivative of sine is cosine and the derivative of cosine is -sin. Therefore, the second derivative of sine is -sine. ## Questions 1. What is the difference between the function $\sin(x + \frac{\pi}{2})$ and the function $\sin(x) + \frac{\pi}{2}$? 2. Why can't we name sine and cosine constructors sin and cos? 3. Add the following simplification to the sine constructor. If the phase shift is equal to or greater than 2 π, subtract off 2 π. 4. Explain why the previous simplification is mathematically valid. 5. Simplify the construction of sine objects so that a zero object is generated if the amplitude is zero. 6. Simplify the construction of sine objects so that a constant term object is generated if the frequency is zero. 7. Simplify the visualization of sine objects so that the amplitude, frequency, and phase shift are omitted if they are 1, 1, and 0, respectively. 8. Find and add other visualization mods. Hint: what should your visualization look like if the frequency is != 1 but the independent variable is a term with a coefficient != 1? ## Footnotes 1.
Contrast this with the cosine wave with unit amplitude and frequency (shown at the beginning of this chapter), which has a zero crossing at $x = -\frac{\pi}{2}$. 2. There is also a generalized cosine function: $y = a \cos(\omega x + \theta)$. Of course, this is equal to $a \sin(\omega x + \frac{\pi}{2} + \theta)$.
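The chapter's code is written in Sway; for readers who want to experiment elsewhere, here is a rough Python analogue of the sine object (my own transcription, with the derivative taken directly with respect to x rather than through the chapter's chain-rule objects):

```
import math

class Sine:
    """y = amp * sin(freq * x + shift)"""
    def __init__(self, amp, freq, shift):
        self.amp, self.freq, self.shift = amp, freq, shift

    def value(self, x):
        return self.amp * math.sin(self.freq * x + self.shift)

    def first_zero(self):
        # the trick from the chapter: divide the phase shift by the frequency and negate
        return -self.shift / self.freq

    def diff(self):
        # d/dx [a*sin(w*x + s)] = a*w*cos(w*x + s)
        return Cosine(self.amp * self.freq, self.freq, self.shift)

def Cosine(amp, freq, shift):
    # cosine is just sine shifted left by pi/2
    return Sine(amp, freq, shift + math.pi / 2)

w = Cosine(1, 2, 0)
print(w.first_zero())                       # -pi/4, matching the Sway transcript
print(abs(w.value(w.first_zero())) < 1e-12) # True: the wave really is zero there
```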
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 22, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9145091772079468, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/141009-how-integrate-sin-x-y-dx.html
# Thread: 1. ## how to integrate sin(x/y) dx can anyone assist with evaluating the integral if sin (x/y) with respect to x & y, i have tried using integration by parts but cant seem to get an answer. i know by using the online calculators the answer is -ycos(x/y) 2. Originally Posted by john1985 can anyone assist with evaluating the integral if sin (x/y) with respect to x & y, i have tried using integration by parts but cant seem to get an answer. i know by using the online calculators the answer is -ycos(x/y) When you integrate with respect to x, treat y as a constant. When you integrate with respect to y, treat x as a constant 3. ## suggestion Originally Posted by john1985 can anyone assist with evaluating the integral if sin (x/y) with respect to x & y, i have tried using integration by parts but cant seem to get an answer. i know by using the online calculators the answer is -ycos(x/y) t=x/y. 4. would some be able to post some of the initial steps i get you have to use the substitution method but am having difficultiy getting the answer out original expression integrate with respect to x = sin(y/x) step 2 using substitution = sin (t) dt step 3 -cos (t) t ??? 5. Is y a constant? If so: let x = yt, so dx = y dt. Then $\int \sin (\frac xy)\;\mathrm{d}x = \int \sin (t) \cdot y\;\mathrm{d}t = y \int \sin (t) \;\mathrm{d}t = -y \cos t + c = -y\cos(\frac xy) + c$. 6. Originally Posted by john1985 can anyone assist with evaluating the integral if sin (x/y) with respect to x & y, i have tried using integration by parts but cant seem to get an answer. i know by using the online calculators the answer is -ycos(x/y) You might not think that the region of integration is important, but it is. Post the whole question please if you hope to get effective help. 7. $\int sin \bigg( \frac{x}{y} \bigg)dx=y\int sin \bigg(\frac{x}{y} \bigg)\bigg(\frac{1}{y}dx\bigg)=-ycos\bigg(\frac{x}{y}\bigg)+C$ Try doing dy now. 8. Originally Posted by dwsmith $\int sin \bigg( \frac{x}{y} \bigg)dx=y\int sin \bigg(\frac{x}{y} \bigg)\bigg(\frac{1}{y}dx\bigg)=-ycos\bigg(\frac{x}{y}\bigg)+C$ Try doing dy now. I doubt this is what the question has asked the OP to do. See the boldface in the quote below: Originally Posted by john1985 can anyone assist with evaluating the integral if sin (x/y) with respect to x & y, i have tried using integration by parts but cant seem to get an answer. i know by using the online calculators the answer is -ycos(x/y) Until what I've boldfaced is explained, the real question cannot be reliably answered. 9. Originally Posted by mr fantastic I doubt this is what the question has asked the OP to do. See the boldface in the quote below: Until what I've boldfaced is explained, the real question cannot be reliably answered. I am under the impression that he wants to integrate with dx and then a separate integral of dy. 10. Originally Posted by dwsmith I am under the impression that he wants to integrate with dx and then a separate integral of dy. Doubtful, since integrating again w.r.t. y will require non-elementary functions... 11. Originally Posted by dwsmith I am under the impression that he wants to integrate with dx and then a separate integral of dy. I'll bet dollars to doughnuts that the original question gives a region that has to be integrated over. But we will never know unless the OP replies. Further posts are useless until then. 12. evaluate the double integral sin(x/y)dA where R is the region bounded by the y-axis , y=pi and x=y^2 13. 
Originally Posted by sr917 evaluate the double integral sin(x/y)dA where R is the region bounded by the y-axis, y=pi and x=y^2 Draw the region of integration. It is then clear that the required integral is $\int_{y = 0}^{y = \pi} \int_{x = 0}^{x = y^2} \sin \left(\frac{x}{y}\right) \, dx \, dy$ $= \int_0^\pi \left[-y \cos \left( \frac{x}{y}\right) \right]_0^{y^2} \, dy$ $= \int_0^\pi \left( y - y \cos (y) \right) \, dy = ....$ (note that the lower limit $x = 0$ contributes the $+y$ term). (If only the complete question had been asked in the first place a lot of time and energy would not have been wasted).
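A quick symbolic check of the iterated integral (a SymPy sketch, not part of the thread):

```
import sympy as sp

x, y = sp.symbols('x y', positive=True)

inner = sp.integrate(sp.sin(x / y), (x, 0, y**2))   # expect y - y*cos(y)
outer = sp.integrate(inner, (y, 0, sp.pi))
print(sp.simplify(inner))
print(sp.simplify(outer))    # expect 2 + pi**2/2, keeping the lower-limit term
```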
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9354878067970276, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/51228/why-arent-two-systems-in-thermal-equilibrium-the-same-as-one-system
# Why aren't two systems in thermal equilibrium the same as one system? I am reading Molecular Driving Forces, 2nd ed., by Dill & Bromberg. On page 53, example 3.9, we consider why energy exchanges between two systems from the point of view of the 2nd law. We consider two separate systems. Each has ten particles, and each particle has two possible energy states. System A has total energy $U_{a}$ = 2, and $U_{b}$ = 4. Thus binomial statistics predicts the multiplicities of these systems: $W(U_{a})$ = $\frac{10!}{8!2!} = 45$ $W(U_{b})$ = $\frac{10!}{6!4!} = 210$ Now the confusing part, to me, is the math when these two systems come into thermal contact. Then the author asserts that the initial multiplicity is $W(U_\text{total})$ = $\frac{10!}{8!2!}\frac{10!}{6!4!}$ And that maximum multiplicity is found at $W(U_\text{total})$ = $\frac{10!}{7!3!}\frac{10!}{7!3!}$ = 14,400 But why consider the systems in this way, as opposed to thinking of a new system, with 20 particles, having $U_{a}$+$U_{b}$ = 6? Then we get $W(U_{a+b})$ = $\frac{20!}{14!6!}$ = 38760 ≠ $W(U_\text{total})$ I'm trying to develop a sense of the difference, I suppose, between two systems in thermal contact and one system. After they've equilibrated, how are they not treatable as one system? They clearly aren't, because if they were, then the total multiplicity of that one system must = the total multiplicity of the two systems A and B. - Put a different way, why couldn't I take any system and arbitrarily partition it up and consider it as several systems in thermal contact with each other? What we've just seen here is that the math is different between "one system" and "two systems in thermal contact". – masonk Jan 15 at 0:23 ## 1 Answer In this example both the systems are of the same type of particles (with two energy states) and same number of particles. Therefore thermal equilibrium is defined when energy is equally shared between the systems, but the particles are still not allowed to be exchanged. The particles, although of the same kind, are distinguished as being in system A or B. If you allowed particles to be exchanged then you are allowing swapping of particles between the two systems effectively raising the possibility to 20 particles with 6 energy units. - By introducing rigid partitions you are "distinguishing" particles between each partition. While the complete system doesn't distinguish between the particles. If you allow leaky partitions, i.e., both matter and energy can move between partition you get back to the same answer for the whole system – Sankaran Jan 15 at 0:36 Yep, this is it! Thanks. – masonk Jan 15 at 3:17
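The multiplicities in the question are easy to reproduce directly (a small Python check of my own, not from the original post):

```
from math import comb

# two separate 10-particle systems with 2 and 4 excited particles
W_a, W_b = comb(10, 2), comb(10, 4)
print(W_a, W_b, W_a * W_b)          # 45, 210, 9450  (initial joint multiplicity)

# maximum multiplicity once energy can flow but particles stay put: 3 + 3
print(comb(10, 3) * comb(10, 3))    # 14400

# one merged 20-particle system sharing all 6 units of energy
print(comb(20, 6))                  # 38760, larger because particles may also swap
```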
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9441766142845154, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/270687/irreducibility-of-multivariable-polynomials
# Irreducibility of Multivariable Polynomials Question: Let $k$ be a field and $p,q\in k[x]$ two relatively prime polynomials. Prove $p(x)y-q(x)$ is irreducible in $k[x,y]$. How does one show this? More generally, how does one show that multivariable polynomials are irreducible? In one variable we have access to tools like Gauss's lemma and Eisenstein's criteria, but I do not know any methods applicable to the multivariable case. - ## 2 Answers Define $f(x,y) = p(x) y - q(x)$. Suppose $f(x,y) = g(x,y) h(x,y)$. In the ring $k(x)[y]$, $f(x,y)$ is clearly irreducible; therefore one of $g(x,y)$ and $h(x,y)$ is a unit. WLOG, assume $g(x,y)$ is the unit. The only units in $k(x)[y]$ are the nonzero elements of $k(x)$, therefore $g(x,y) = e(x)$ for some $e$. If $f(x,y) = e(x) h(x,y)$, then $e(x)$ must be a common divisor of $p(x)$ and $q(x)$; therefore $e(x)$ is a unit of $k[x]$, and thus $g(x,y)$ is a unit of $k[x,y]$. - All multivariate polynomials are also one-variable polynomials. This polynomial is a polynomial in $y$ with coefficients in $k[x]$, which is a UFD, so Gauss's lemma and Eisenstein's criterion continue to apply (but are unnecessary because here the polynomial is linear...). -
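A concrete instance of the statement, checked by factoring (a SymPy sketch; the particular p and q are my own choices, not from the question):

```
import sympy as sp

x, y = sp.symbols('x y')

# p = x^2 + 1 and q = x are relatively prime, so p(x)*y - q(x) should not factor
print(sp.factor((x**2 + 1)*y - x))   # factor() finds no nontrivial factorization

# drop the coprimality hypothesis: p = x^2 and q = x share the factor x
print(sp.factor(x**2*y - x))         # x*(x*y - 1)
```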
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9214564561843872, "perplexity_flag": "head"}
http://mathhelpforum.com/statistics/83843-trouble-calculating-confidence-interval.html
# Thread: 1. ## Trouble calculating a confidence interval Your company plans to buy disks that are supposed to come in containers of exactly 100 disks. A random sample of 49 containers is taken and the mean is found to be 102 disks with a standard deviation of 3 disks. (a) Using a .05 significance level, should the disk-packing machinery be adjusted? (b) Construct a two-sided 95% confidence interval about the mean using the above data. 2. ## hi hi $\left[\mu^{*} -\lambda_{0.05} \cdot \sigma^{*} , \mu^{*} +\lambda_{0.05} \cdot \sigma^{*} \right]$ - Asymptotic normality where $\mu^{*} \mbox{ is the estimated mean, and } \sigma^{*}$ is the estimated standard deviation of the mean (i.e. $s/\sqrt{n}$). If the number of observations n is not large enough, use $\left[ \mu^{*} -t_{\frac{\alpha}{2}}(n-1) \frac{s_{n-1}}{\sqrt{n}}, \mu^{*} +t_{\frac{\alpha}{2}}(n-1) \frac{s_{n-1}}{\sqrt{n}} \right]$ The latter uses the so-called Student's t-distribution, with $f=n-1$ degrees of freedom. For this you need the t-distribution table.
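Putting the numbers from this problem into the t-based interval and the corresponding two-sided test, here is a sketch using SciPy (my own; an exam answer would normally use table values instead):

```
from math import sqrt
from scipy import stats

n, xbar, s, mu0 = 49, 102.0, 3.0, 100.0

# (a) two-sided one-sample t test at the 5% level
t_stat = (xbar - mu0) / (s / sqrt(n))         # about 4.67
t_crit = stats.t.ppf(0.975, df=n - 1)         # about 2.01
print(t_stat, t_crit, abs(t_stat) > t_crit)   # True -> adjust the machinery

# (b) two-sided 95% confidence interval for the mean
half_width = t_crit * s / sqrt(n)
print(xbar - half_width, xbar + half_width)   # roughly (101.14, 102.86)
```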
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.888562798500061, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/132559/two-linearly-independent-eigenvectors-with-eigenvalue-zero?answertab=oldest
# Two linearly independent eigenvectors with eigenvalue zero What is the only $2\times 2$ matrix that only has eigenvalue zero but does have two linearly independent eigenvectors? I know there is only one such matrix, but I'm not sure how to find it. - ## 3 Answers Answer is the zero matrix obviously. EDIT, here is a simple reason: let the matrix be $(c_1\ c_2)$, where $c_1$ and $c_2$ are both $2\times1$ column vectors. For any eigenvector $(a_1 \ a_2)^T$ with eigenvalue $0$, $a_1c_1 + a_2c_2 = 0$. Similarly, for another eigenvector $(b_1 \ b_2)^T$, $b_1c_1 + b_2c_2 = 0$. Eliminating $c_2$ gives $(a_1b_2 - a_2b_1)c_1 = 0$, and $a_1b_2 - a_2b_1 \neq 0$ because the eigenvectors are linearly independent, therefore $c_1=0$. From this, $c_2=0$ also. - Another way to look at the problem. Consider the geometric multiplicity of the matrix. It has two linearly independent eigenvectors corresponding to zero and so the geometric multiplicity is equal to its algebraic multiplicity. Therefore the matrix is diagonalizable. But the diagonal form is the zero matrix and any matrix similar to the zero matrix is still just the zero matrix. - Let $A$ be any such matrix. Let $\beta=[\mathbf{v}_1,\mathbf{v}_2]$ be a basis made up of eigenvectors of $A$. If $P$ is the matrix that has $\beta$ in the columns, then $P^{-1}AP$ is diagonal, with the eigenvalues of $A$ in the diagonals. But such a matrix is $$\left(\begin{array}{cc} 0&0\\0&0\end{array}\right).$$ So $P^{-1}AP=0$. Multiplying on the left by $P$ and on the right by $P^{-1}$, we get $A = P0P^{-1} = 0$. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9332067370414734, "perplexity_flag": "head"}
http://mathhelpforum.com/advanced-statistics/50375-constructing-p-m-f-print.html
# Constructing a p.m.f.

• September 23rd 2008, 08:39 PM
Chris L T521

Constructing a p.m.f.

I'm still having a bit of trouble trying to understand this stuff.

Quote:

Let a chip be taken at random from a bowl that contains six white chips, three red chips, and one blue chip. Let the random variable $X=1$ if the outcome is a white chip; let $X=5$ if the outcome is a red chip; and let $X=10$ if the outcome is a blue chip. (a) Find the p.m.f. of $X$ [snip]

Source: Probability and Statistical Inference, 7E, by Hogg and Tanis

I understand that $P(X=1)=\frac{6}{10}$, $P(X=5)=\frac{3}{10}$, and $P(X=10)=\frac{1}{10}$. My issue here is determining a proper value for the numerator of my $f(x)=P(X=x)$. My stab at this would be to say that the p.m.f. has the form of $f(x)=\frac{u}{10}$, where $u$ is the part I can't figure out. I see a pattern though:

$X=1:~~~~~6$
$X=5:~~~~~3$
$X=10:~~~~\!1$

The difference between the first two terms is 3, and between the last two terms is 2. Other than that, I'm at a standstill. I'd appreciate any input!

--Chris

w00t!!! my 9(Sun)(Sun)th post!! :D

• September 23rd 2008, 11:00 PM
CaptainBlack

Quote:

Originally Posted by Chris L T521
[snip]

$$f(x) = \begin{cases} \frac{6}{10}, & x\in \{1\},\\ \frac{3}{10}, & x\in \{5\},\\ \frac{1}{10}, & x\in \{10\},\\ 0, & x\in \mathbb{R}\backslash \{1,\ 5,\ 10\}.\end{cases}$$

RonL
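A small numerical sketch of the p.m.f. above (my own, in plain Python; the names are illustrative):

```python
# Sketch: the p.m.f. from CaptainBlack's answer as a lookup table, plus two
# quick sanity checks (probabilities sum to 1, and the mean E[X]).
pmf = {1: 6/10, 5: 3/10, 10: 1/10}

def f(x):
    """p.m.f. of X: returns P(X = x), and 0 for any other real x."""
    return pmf.get(x, 0.0)

assert abs(sum(pmf.values()) - 1.0) < 1e-12
mean = sum(x * p for x, p in pmf.items())
print(mean)   # E[X] = 0.6*1 + 0.3*5 + 0.1*10 = 3.1
```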
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 23, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.942798912525177, "perplexity_flag": "middle"}
http://mathematica.stackexchange.com/questions/13960/how-to-solve-this-equation-with-positive-integers-as-a-solutions?answertab=oldest
# How to solve this equation with positive integers as solutions?

This is a problem from the United Kingdom Mathematical Olympiad. Find all triples $(x,y,z)$ of positive integers such that $$\biggl(1+\dfrac{1}{x}\biggr)\cdot \biggl(1+\dfrac{1}{y}\biggr)\cdot \biggl(1+\dfrac{1}{z}\biggr)=2.$$

I tried

````
Reduce[ (1 + 1/x)(1 + 1/y)(1 + 1/z) == 2 && x > 0 && y > 0 && z > 0, {x, y, z}, Integers]
````

And I get

````
(x | y | z) ∈ Integers && x >= 2 && y > (1 + x)/(-1 + x) && z == (1 + x + y + x y)/(-1 - x - y + x y)
````

How do I tell Mathematica to enumerate the actual triples? O.K.,

````
Reduce[ (1 + 1/x)(1 + 1/y)(1 + 1/z) == 2 && x > 0 && y > 0 && z > 0 && x >= y && y >= z, {x, y, z}, Integers]
````

- 1 It already did it for you... – rm -rf♦ Nov 1 '12 at 2:51
- 3 – 0xFE Nov 1 '12 at 2:56
- Thank you very much. Without loss of generality, we may assume $x \geqslant y \geqslant z$. And I tried Reduce[(1 + 1/x)*(1 + 1/y)*(1 + 1/z) == 2 && x > 0 && y > 0 && z > 0 && x >= y && y >= z, {x, y, z}, Integers] – minthao_2011 Nov 1 '12 at 2:56

## 1 Answer

The Backsubstitution option will help here.

````
Reduce[ 1 + x + y + x y + z + x z + y z - x y z == 0 && x >= y >= z >= 1, {x, y, z}, Integers, Backsubstitution -> True]
(* (x == 5 && y == 4 && z == 3) || (x == 7 && y == 6 && z == 2) || (x == 8 && y == 3 && z == 3) || (x == 9 && y == 5 && z == 2) || (x == 15 && y == 4 && z == 2) *)
````
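As a cross-check on the Reduce output above, a brute-force search (my own sketch in Python, not part of the thread) finds the same five ordered triples with x >= y >= z:

```python
# Brute-force sketch: search x >= y >= z over a range that safely contains
# all solutions, using exact rational arithmetic to avoid float error.
from fractions import Fraction

solutions = []
for z in range(1, 5):            # (1+1/z)^3 >= 2 forces z <= 3
    for y in range(z, 60):
        for x in range(y, 60):
            prod = (1 + Fraction(1, x)) * (1 + Fraction(1, y)) * (1 + Fraction(1, z))
            if prod == 2:
                solutions.append((x, y, z))
print(solutions)
# [(15, 4, 2), (9, 5, 2), (7, 6, 2), (8, 3, 3), (5, 4, 3)] -- the same five triples
```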
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.5273635983467102, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/236489/tangent-vector-on-a-complex-manifold
# tangent vector on a complex manifold

Let $x_1,x_2, \ldots, x_n$ be local coordinates on a manifold $M$. One can interpret $\frac{\partial}{\partial x_i}(p)$ as a tangent vector to a curve along which the $x_j$ (for $j \neq i$) are constant. What is the interpretation of $\frac{\partial}{\partial z_i}(p)$ on a complex manifold in local coordinates? I'll be glad for any references.
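For reference (my own addition, not part of the original post): the standard convention in local holomorphic coordinates $z_i = x_i + i y_i$ is the Wirtinger operator
$$\frac{\partial}{\partial z_i} = \frac{1}{2}\left(\frac{\partial}{\partial x_i} - i\,\frac{\partial}{\partial y_i}\right), \qquad \frac{\partial}{\partial \bar z_i} = \frac{1}{2}\left(\frac{\partial}{\partial x_i} + i\,\frac{\partial}{\partial y_i}\right),$$
chosen so that $\frac{\partial z_j}{\partial z_i} = \delta_{ij}$ and $\frac{\partial \bar z_j}{\partial z_i} = 0$. So $\frac{\partial}{\partial z_i}(p)$ is not literally the velocity of a real curve; it is an element of the complexified tangent space $T_pM \otimes \mathbb{C}$, and the $\frac{\partial}{\partial z_i}(p)$ span the holomorphic subspace $T_p^{1,0}M$.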
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.841242790222168, "perplexity_flag": "head"}
http://mathoverflow.net/questions/119118/totally-geodesic-submanifolds
## Totally Geodesic Submanifolds

Suppose that $N$ is a totally geodesic submanifold of a complete Riemannian manifold $(M,g)$. Is it the case that a geodesic segment that minimizes length in the submanifold $N$ also minimizes length in the ambient manifold $M$?

- 1 The answer is no. Consider a surface of revolution that looks like a cylinder with a spherical cap. The complete geodesic through the origin is a totally geodesic submanifold but it is not distance minimizing. – Igor Belegradek Jan 16 at 23:28
- So it's not true in general that the cut locus of a point $p$ w.r.t. $N$ is the intersection of the cut locus of $p$ w.r.t. $M$ intersected with $N$? That is, $C_p(N)=C_p(M)\cap N$? – Oliver Jones Jan 17 at 0:13
- 1 No, there is no such relation between the cut loci. Take $p$ to be the origin in the surface of revolution described above; then $p$ is a pole, so its cut locus is empty, while the cut locus of $N$ is nonempty and complicated. – Igor Belegradek Jan 17 at 1:20
- @Igor: I think you mean $M$, not $N$. In your example, $C_p(N)=\phi$ and so $C_p(N)\subseteq C_p(M)\cap N$ trivially. However, this is also too much to hope for in general. For example, a geodesic segment in $N$ joining $p$ to a cut point $q$ may hit a cut point earlier than $q$ when considered as a segment in $M$. – Oliver Jones Jan 17 at 3:25

## 3 Answers

Let $M$ be the flat cylinder $R\times S^1\subset R\times C$ and $N=\{(t,e^{it})\,\vert\,t\in R\}$, which is a geodesic (hence a complete totally geodesic submanifold of $M$) minimizing between any two points of $N$ (among the geodesics of $N$). But the minimizing geodesic in $M$ between the points $(0,1)$ and $(2\pi,1)$ is the segment $\{(s,1)\,\vert\,s\in[0,2\pi]\}$.

- Nice example; thanks! – Oliver Jones Jan 17 at 3:26
- Hi, Carlo! – Pietro Majer Jan 17 at 7:29
- Ciao Pietro! – Carlo Mantegazza Jan 17 at 12:33

As for an example where $N$ is complete: Slice a 2-sphere just above and below a great circle. Keep the piece containing the great circle. Glue flat disks along the resulting boundaries and smooth the surface near the boundaries.

- Thanks for the example. – Oliver Jones Jan 17 at 0:16

What about $M$ a Euclidean sphere, and $N$ a great circle minus a point?

- I'll add the condition that $N$ is geodesically complete. – Oliver Jones Jan 17 at 0:10
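To make the first answer's cylinder counterexample quantitative (my own computation; the flat product metric $ds^2 = dt^2 + d\theta^2$ on $R\times S^1$ is assumed): the helix segment of $N$ joining $(0,1)$ and $(2\pi,1)$ has length
$$L_N = \int_0^{2\pi}\sqrt{\dot t^{\,2} + \dot\theta^{\,2}}\;dt = \int_0^{2\pi}\sqrt{1+1}\;dt = 2\sqrt{2}\,\pi \approx 8.89,$$
while the straight segment $\{(s,1)\}$ in $M$ has length $2\pi \approx 6.28$. So the segment of $N$ minimizes length among curves in $N$ (it is the only geodesic there), yet it is not minimizing in $M$.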
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 37, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9055869579315186, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/tagged/taylor-expansion+power-series
# Tagged Questions 4answers 128 views ### Are Taylor series and power series the same “thing”? I was just wondering in the lingo of Mathematics, are these two "ideas" the same? I know we have Taylor series, and their specialisation the Maclaurin series, but are power series a more general ... 2answers 58 views ### How does one get the Bernoulli numbers via the generating function? Here is the definition: Bernoulli numbers arise in Taylor series in the expansion $$\frac{x}{e^x-1}=\sum_{k=0}^\infty B_k \frac{x^k}{k!}$$ I've tried to naively expand $\frac{x}{e^x-1}$ around ... 1answer 33 views ### Points around which one expands and the radiuses of convergence I'm trying to make sense of the following passage: Let $f(x)=\frac{1}{x+1}$ and $R_0$ the radius of convergence of the Taylor series of $f$ around $x_0=0$, analogously: $R_1$ — around ... 2answers 45 views ### Analytic functions of a real variable which do not extend to an open complex neighborhood Do such functions exist? If not, is it appropriate to think of real analytic functions as "slices" of holomorphic functions? 1answer 55 views ### Why do power series converge to a function symmetrically? Why does the taylor series of $\ln (1 + x)$ only approximate it for $-1<x \le 1$? The selected answer to the above question says that for a a power series, the interval of convergence for the ... 3answers 32 views ### Demonstrate that $\frac{1}{e^e} - 1 + e - \frac{e^2}{2} + \frac{e^3}{6}\ge 0$ How do I prove the inequality? $$\frac{1}{e^e} - 1 + e - \frac{e^2}{2} + \frac{e^3}{6} \geq 0$$ I can see that \$e^e = \sum_{k=0}^{\infty} \frac{e^k}{k!} = 1 + e + \frac{e^2}{2} + \frac{e^3}{6}+\dots ... 2answers 46 views ### Taylor series of $f(x^2)$ If you know the taylor series for $f(x)$ can you find the taylor series for $f(x^2)$ by letting $x = x^2$? The taylor series in question is $\cos(x^2)$ I know the taylor series for $\cos(x)$ is ... 1answer 48 views ### Why do Maclaurin series approximate a function for negative domain values? A common analogy used as an intuitive explanation for a Maclaurin series is that of a car. If you know the position, velocity, acceleration, jerk etc. of a car at time zero, you are able to predict ... 0answers 142 views ### power series of arcsin(x) centered at x = 0 I am trying to prove that the Taylor expansion of $\arcsin(x) = \sum\limits_{n=0}^\infty \cfrac{(2n!)x^{2n+1}}{(2^nn!)^2(2n+1)}$. Sorry about the notation, I'm not sure what syntax to use. S stands ... 1answer 64 views ### Adding two Power series or Maclaurin sums together and their radius of convergence Say you have two power series. One of them has ROC of 2, and the other one has an ROC of 4. If you add the two series together is the ROC ALWAYS the lesser ROC? It seems to be a trend I've noticed, ... 0answers 40 views ### Taylors Inequality to evaluate $f(x) = x\sin(x)$ when $a = 0$ and $-1\le x\le1$ Trying to calculate the error of this function when you use a Taylor expansion to degree 4. I keep getting $.039$ when the answer in the back of the book is $.042$. I take the fifth derivative of ... 2answers 32 views ### Lagrange remainder to approximate $3^{2.1}$ less than 0.1 How do I solve this problem: Use the appropriate Taylor polynomial $P_n(x,c)$ to estimate $3^{2.1}$ with error less than $0.1$, given $\ln 3$ is about $1.099$. I understand that the remainder ... 1answer 41 views ### Find taylor polynomial that approximates e^x with accuracy at least 1. 
Find Taylor polynomial at $x=0$ which approximates $e^x$ with accuracy at least $1$ for each $x \in [-2,2]$. I dont undestand these questions that involve the $n^{th}$ remainder. I know I need to ... 2answers 113 views ### Confused by Laurent series A typical problem related to Laurent series is this: For the function $\frac 1{(z-1)(z-2)}$, find the Laurent series expansion in the following regions: \$\\(a) |z|<1, \\ (b) 1<|z|<2, ... 1answer 50 views ### Confused over analytic functions, point convergence of power series It is well-known that a power series sums to a function that is analytic at every point inside its circle of convergence and that conversely, if a function is analytic on an open disc then its Taylor ... 2answers 102 views ### How do I obtain the Laurent series for $f(z)=\frac 1{\cos(z^4)-1}$ about $0$? I know that $$\cos(z^4)-1=-\frac{z^8}{2!}+\frac{z^{16}}{4!}+...$$ but how do I take the reciprocal of this series (please do not use little-o notation)? Or are there better methods to obtain the ... 2answers 44 views ### Solving limit by substituting a power series I dont understand why I am getting 2 and the textbook says it is -2. $$\lim_{x\to 0} \frac{1-e^x}{\sqrt{1+x}-1}$$ I subbed the power series for $e^x$ and $(1+x)^{1/2}$ then got rid of the $1$ on top ... 3answers 46 views ### Precise differences in meaning of Power Series, Taylor Series Being an physicist/artist, not a real mathematician, I often toss around the terms "Taylor Series" and "Power Series" without any concern. Are these terms be considered interchangeable by ... 2answers 88 views ### Finding the Maclaurin series Find the Maclaurin series for $f(x)=(x^2+4)e^{2x}$ and use it to calculate the 1000th derivative of $f(x)$ at $x=0$. Is it possible to just find the Maclaurin series for $e^{2x}$ and then multiply it ... 1answer 63 views ### Exponential as power series Is there a function that does not depend on $a$ such that $\sum_{x=1}^\infty \frac{a^x}{x!}f(x) = \mathrm e^{-a}$? Just to be clear, the summation starting from 1 is intentional, otherwise the ... 2answers 144 views ### Proof of the “Radius of Convergence Theorem” I can't figure out how it is valid to invoke the Absolute Convergence Theorem, whose hypothesis is "Let the power series have radius of convergence R", to establish case c of the Radius of Convergence ... 1answer 48 views ### Taylor series representation of a function. I'm working on expressing the function $f(x)=\frac{6}{x}$ as a taylor series about $-4$. I've got the general idea, but I'm not quite there yet. I've come up with the equation ... 1answer 137 views ### Laurent Series and Taylor Series I am trying to find the Laurent series of $\dfrac{1}{(1+x)^3}$; would this be the same as finding the Maclaurin series for the same function? 1answer 109 views ### Taylor / Maclaurin series expansion origin. [closed] Soo we all know Taylor series expansion formula for expansion around expansion point $A(a,f(a))$: f(x) \approx \underbrace{f(a)}_{1st~term} + \underbrace{\frac{f'(a)\, (x-a)}{1!}}_{2nd~term}+ ... 1answer 102 views ### Laurent series of $$g(z)=\frac{z^n+z^{-n}}{z^2-(a+\frac{1}{a})z+1}=?$$ How to find Laurent series of g(z) ? $$g(z)=\frac{z^n+z^{-n}}{z^2-(a+\frac{1}{a})z+1} \hspace{10mm} \begin{cases} n \in N \\ 0<a<1 \end{cases}$$ answer is : ... 1answer 91 views ### Differentiating power series Consider the power series $$\sum_{n=0}^\infty{\frac{x^{2n}}{(2n)!}}$$ From this, it follows that its sum defines an infinitely differentiable function $f$, given by ... 
2answers 393 views ### Find the Taylor Series for $f(x)$ centered at a given value $a$ $$f(x) = \frac{6}{x}\,\, \mathrm{at}\,\, a = -4 .$$ Assume that $f$ has a power series expansion. Do not show that $R_n(x) -> 0$ I took the derivatives of f(x): $$f(x) = 6/x$$ $$f'(x) = -6/x^2$$ ... 0answers 148 views ### domain of convergence of a multivariable taylor series consider the rational function : $$f(x,z)=\frac{z}{x^{z}-1}$$ $x\in \mathbb{R}^{+}/[0,1]\;\;$, $z\in \mathbb{C}\;\;$ .We wish to find an expansion in z that is valid for all x and z. a Bernoulli-type ... 2answers 142 views ### Why do we need Taylor polynomials? This question doubles as "Is my understanding of what a Taylor polynomial is for, correct?" but In order to write out a Taylor polynomial for a function, which we will use to approximate said function ... 2answers 244 views ### Why is Taylor series expansion for $1/(1-x)$ valid only for $x \in (-1, 1)$? After finding an expansion of $$\frac{1}{1-x} = 1 + x + x^2 + x^3 + \ldots$$ a quick test of various values for $x$ reveals that this expansion is not valid for $\forall x \in \mathbb{R}-\{1\}$. ... 1answer 114 views ### Asymptotic of Taylor series Let $f(x)$ and $g(x)$ be two Taylor series such that: $$f(x)= \sum_{n=0}^{\infty}(-1)^{n} a(n) x^{n}$$ and $$g(x)= \sum_{n=0}^{\infty} b(n) x^{n}$$, for $a(n) >0$ and $b(n) > 0$. My ... 2answers 94 views ### Problem regarding infinite sum of remainders. Before here @math.SE there was a question regarding a problem on a maths magazine. I decided to look at the link provided, and one problem proposed was (if I'm not recalling this wrongly): Find ... 1answer 195 views ### Power Series Definition What does it mean for a series to be centered around a number? I'm taking complex analysis and am suddenly very confused. I didn't have this explanation, or proof of taylor and power series in ... 4answers 150 views ### Formula for calculating $\sum_{n=0}^{m}nr^n$ I want to know the general formula for $\sum_{n=0}^{m}nr^n$ for some constant r and how it is derived. For example, when r = 2, the formula is given by: $\sum_{n=0}^{m}n2^n = 2(m2^m - 2^m +1)$ ... 4answers 127 views ### Infinite series expansion of $e^{-x}\cos(x)$ Establish an infinite series expansion for the function $y=e^{-x}\cos(x)$ from just the known series expansions of $e^x$ and $\cos(x)$. Include terms up to the sixth power. I know that the ... 3answers 294 views ### A deceiving Taylor series When we try to expand \begin{align} f:&\mathbb R \to \mathbb R\\ &x \mapsto \begin{cases} \mathrm e^{-\large\frac 1{x^2}} &\Leftarrow x\neq 0\\ 0 &\Leftarrow x=0 ... 2answers 374 views ### Using Taylor series expansion as a bound I have a function $f(x)$ that has convergent Taylor series expansion around $x=0$ in the following form: ... 3answers 176 views ### Taylor series for different points… how do they look? I can't understand what it means to do the Taylor series at the point $a$. The best way would be showing me how it looks for different $a$ on a graph. Do I find those graphs on the Internet? 1answer 148 views ### Maclaurin series of $\frac{1}{1+x^2}$ I'm stumped here. I''m supposed to find the Maclaurin series of $\frac1{1+x^2}$, but I'm not sure what to do. I know the general idea: find $\displaystyle\sum_{n=0}^\infty \frac{f^{(n)}(0)}{n!}x^n$. ... 1answer 157 views ### Really basic question about the Taylor expansion of a CDF I am sorry for such a basic question... but I want to try to do a Taylor expansion on my function, which is a CDF defined over 0-1. 
However, when I expand around 0, which is what I read is typical, ... 1answer 123 views ### A question about the product of two series Given two power series, $$f(x)=\sum_{n=0}^{\infty}a_{n}x^{n}$$ and $$g(x)=\sum_{n=0}^{\infty}b_{n}x^{n}.$$ It is easy to form their product $$f(x)g(x)=\sum_{n=0}^{\infty}c_{n}x^{n}$$ where ... 2answers 959 views ### How to use the Lagrange's remainder to prove that log(1+x) = sum(…)? Using Lagrange's remainder, I have to prove that: $\log(1+x) = \sum\limits_{n=1}^\infty (-1)^{n+1} \cdot \frac{x^n}{n}, \; \forall |x| < 1$ I am not quite sure how to do this. I started with the ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 81, "mathjax_display_tex": 17, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9284188747406006, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/86119/exact-definition-of-convergence
# Exact definition of convergence

Let us consider a sequence $x_n$ and let it converge to a limit $L$. Which one of the following is the correct definition of convergence?

1. A sequence $x_n$ is said to be convergent to a limit $L$ if given any integer $n$ there exists a positive real number $\epsilon$ such that for all $M\gt n$, $|x_M-L|\lt\epsilon$.
2. A sequence $x_n$ is said to be convergent to a limit $L$ if given any real positive number $\epsilon$ there exists an integer $n$ such that for all $M\gt n$, $|x_M-L|\lt\epsilon$.

If the two definitions are equivalent, then how does one prove it?

- You overuse commas a little bit... – Arturo Magidin Nov 27 '11 at 18:43

## 1 Answer

The second definition is correct; the first definition is incorrect. For example, the sequence $x_n = (-1)^n$ satisfies your first definition with both $L=1$ and $L=-1$: given any $n\gt 0$, let $\epsilon=3$. Then for every $M\gt n$ we have $|x_M-1|\lt 3$ and $|x_M+1|\lt 3$. In fact, your first definition is satisfied by any bounded sequence (in particular, any convergent sequence) with any value of $L$. If you suspect that convergent sequences should have only one limit, that should be a tip-off that the first definition is incorrect.

- Thank you, sir. – user16186 Nov 28 '11 at 3:06
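A tiny numerical illustration of the answer's point (my own sketch, not from the thread): because definition 1 lets you pick $\epsilon$ after $n$, even the divergent sequence $x_n = (-1)^n$ passes it for any claimed limit $L$.

```python
# Sketch: the bounded, divergent sequence x_n = (-1)^n satisfies definition 1
# for *any* L, since epsilon may be chosen after n (and after seeing the tail).
def satisfies_definition_1(x, L, n_max=500):
    """For each n, try to exhibit epsilon > 0 with |x(M) - L| < epsilon for all M > n."""
    for n in range(n_max - 1):
        tail = [abs(x(M) - L) for M in range(n + 1, n_max)]
        epsilon = max(tail) + 1.0           # epsilon chosen after the fact
        if not all(t < epsilon for t in tail):
            return False
    return True

x = lambda n: (-1) ** n
print([satisfies_definition_1(x, L) for L in (-1.0, 0.0, 1.0, 42.0)])   # [True, True, True, True]
```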
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8840101957321167, "perplexity_flag": "head"}
http://www.physicsforums.com/showpost.php?p=4160740&postcount=6
Quote by Philip Wood: You've acknowledged my post with the thanks. P, V and T will be the only variables (for a given number of moles). You should try out the equation I gave you on (a) an ideal gas (b) a van der Waals gas. Then you'll get a better understanding of how to use it.

So every time I get an answer I think is correct, I should acknowledge it by saying thanks? Is that all?

$\left(\frac{\partial U}{\partial V}\right)_T = T\left(\frac{\partial P}{\partial T}\right)_{V} - P.$

For the ideal gas I get $\left(\frac{\partial u}{\partial v}\right)_T = \frac{RT}{v-b} - P$. Do you mean this?
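For reference, here is the quoted identity applied to the two equations of state Philip Wood mentions (my own worked example, not part of the post). For an ideal gas, $P = RT/v$, so
$$\left(\frac{\partial U}{\partial V}\right)_T = T\,\frac{R}{v} - P = 0,$$
while for a van der Waals gas, $P = \frac{RT}{v-b} - \frac{a}{v^2}$, so
$$\left(\frac{\partial U}{\partial V}\right)_T = \frac{RT}{v-b} - P = \frac{a}{v^2}.$$
The $RT/(v-b)$ term in the post is therefore what the van der Waals equation gives; for the ideal gas the internal energy does not depend on volume at constant temperature.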
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9432255625724792, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/82075-limits-series-question.html
# Thread:

1. ## Limits / Series question

hi

Let P = [(2^3-1)/(2^3+1)][(3^3-1)/(3^3+1)]........[(n^3-1)/(n^3+1)], n=2,3,4,......

Find the limit of P as n tends to infinity.

2. Originally Posted by champrock
[snip]

Factorise $\frac{n^3-1}{n^3+1}$ as $\frac{(n-1)(n^2+n+1)}{(n+1)(n^2-n+1)}$, and notice that $n^2+n+1 = (n+1)^2 - (n+1) + 1$. You will then find that you have a telescoping product.

3. What should I do with n^2 - n + 1? I was able to cancel the (n-1)/(n+1) terms, but I don't know what to do with n^2 - n + 1 and n^2 + n + 1.

4. You easily understand if you write out the sequence of terms... $P= \prod_{n=2}^{\infty} \frac{(n-1)\cdot (n^{2} + n + 1)}{(n+1)\cdot (n^{2} - n + 1)}= \frac {1\cdot 7}{3\cdot 3}\cdot \frac {2\cdot 13}{4\cdot 7}\cdot \frac {3\cdot 21}{5\cdot 13}\cdot \frac {4\cdot 31}{6\cdot 21}\cdot \dots$ ... and it is evident you can simplify both in numerator and denominator [3 with 3, 4 with 4, ..., 7 with 7, 13 with 13, 21 with 21, ...]. Finally one finds $P=\frac{2}{3}$. Very nice!... Kind regards $\chi$ $\sigma$

5. Originally Posted by champrock
[snip]

Use the hint I gave before: $n^2+n+1 = (n+1)^2 - (n+1) + 1$. That tells you that the term $n^2+n+1$ in the numerator of each fraction cancels with the term $(n+1)^2 - (n+1) + 1$ in the denominator of the next one.

6. I think only 1/3 is left. All the other terms cancel out. How are you getting 2/3?

7. The term of order n is... $a_{n}= \frac{(n-1)\cdot (n^{2} +n + 1)}{(n+1)\cdot (n^{2} - n + 1)}$ For $n=3$ this is... $a_{3}= \frac{2\cdot 13 }{4\cdot 7}$ Kind regards $\chi$ $\sigma$
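A quick numerical sketch of the telescoping limit (my own, not from the thread), using exact fractions and the closed form for the partial product:

```python
# Sketch: partial products of prod_{n=2}^{N} (n^3 - 1)/(n^3 + 1) approach 2/3,
# matching post 4; the closed form 2(N^2+N+1)/(3N(N+1)) comes from telescoping.
from fractions import Fraction

N = 1000
P = Fraction(1)
for n in range(2, N + 1):
    P *= Fraction(n**3 - 1, n**3 + 1)

print(float(P))                                           # 0.666667... -> 2/3
print(P == Fraction(2 * (N*N + N + 1), 3 * N * (N + 1)))  # closed-form check: True
```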
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9002881646156311, "perplexity_flag": "middle"}
http://stats.stackexchange.com/questions/tagged/regression
# Tagged Questions Techniques for analyzing the relationship between one (or more) "dependent" variables and "independent" variables. 0answers 21 views ### How can i find code( matlab, vb, c++) for running fuzzy regression? I am new in fuzzy regression, i knew about its theory but have problems in running it in matlab or other programs. I will be so helpful if someone helps me in finding codes for fuzzy regression. 2answers 17 views ### Calculate the tendency of a set of samples I develop an application in which i constantly get samples of heart pulse. I defined an interval of t seconds. In each t seconds I have n samples. In every interval, I want to calculate the ... 1answer 82 views ### How to compare two regression slopes for one predictor on two different outcomes? I need to compare two regression slopes where: y ~ a + b1 x y ~ a + b2 x How can I compare b1 and b2? Or in the language of my specific example in rodents, I ... 0answers 21 views ### Breakpoint for bivariate data The breakpoint(s) estimation approach implemented in the strucchange package (Zeilei & al) seems to work very well (based on my little experience with this package on real case studies). Is ... 1answer 36 views ### First difference or log first difference? I am evaluating the effect of covariances between series on returns. That is I run the following regression: $$r_t = \beta_0 + \beta_1\text{Cov}(Y_t,r_t) + ...$$ I have conducted my analysis with ... 0answers 18 views ### R module for creating plots of prototypical individuals from fitted models? In order to help interpret fitted models — especially those with interaction terms and non-linear components — I've found it useful to plot predicted values of a dependent variables for what we might ... 0answers 13 views ### Addressing multicollinearity with key driver analysis I am trying to determine the key drivers from a series of 30 Independent Variables (IVs) (attributes rated on 10 pt scale) on 3 Dependent Variables (DVs) (i.e. purchase intent). The 30 IVs are pretty ... 1answer 26 views ### Conditional expected value from a regression model using ordinary least squares I have a query regarding part (a) of the following question. I cannot figure out how to calculate the conditional expected value of collections for the month of Easter. Is it not possible to calculate ... 1answer 45 views ### Can I use a regression model with ANOVA significance greater than 0.05? I have a multiple regression using SPSS. The significance of my model in the ANOVA table is p=0.174 which is >0.05. what does this mean for my model? Can I still use it and proceed in the ... 2answers 70 views ### How to interpret the output of the summary method for an lm object in R? [duplicate] I am using sample algae data to understand data mining a bit more. I have used the following commands: ... 1answer 28 views ### Do you always measure reliability through the Cronbach's alpha coefficient? Or is there another way? (multiple linear regression) I started doing a quantitative research for my thesis without any previous SPSS or proper statistic experience, and the first thing my professor told us in the spss workshop is that you start by ... 1answer 22 views ### Heckman's two stage: multiple selection models If a sample is likely to be self-selected on multiple selection criteria, does it make sense to include Inverse Mill's ratios for multiple selection models in the same second stage OLS model? 
1answer 43 views ### How to calculate confidence intervals of $1/\sqrt{x}$-transformed data after running a mixed linear regression in stata? I have run a series of mixed linear regressions in Stata, some with inverse-square-root ($1/\sqrt{x}$) transformations and others with square root ($\sqrt{x}$) transformations. How do I calculate ... 0answers 26 views ### Performing Linear Regression on Rolling Averaged Data I have multiple years of daily aggregated data for which I want to conduct a multi-variable linear regression. Auto correlation is high with one particular variable included and there is a strong ... 1answer 46 views ### Linear Regression with a Dependent Variable that is a Ratio I'm doing linear regressions where the dependent variable is a ratio that can range from 0.01 to 100. Is it ok to take the log of the dependent variable and the regression on that? I'm matching the ... 0answers 23 views ### Regret in the linear regression setting I've seen the concept of regret apply mostly to online learning problems, but while going through the definition it does seem that is not bounded to this setting. I'm trying to come with a simple ... 1answer 96 views ### Random forest assumptions I am kind of new to random forest so I am still struggling with some basic concepts. In linear regression, we assume independent observations, constant variance… What are the basic ... 1answer 50 views ### Logistic regression: controlling variables not significant, what should I conclude/further test? [closed] I ran annual logisitic regression on time-series datas. The most important independant variable have coefficient that are significant in a lot of years, that's a relief. But the "controlling ... 0answers 33 views ### How to build a model where variance depends on covariate? I have what I believe is a very simple problem for anyone used to modelling with unequal variances (which I am unfortunately not). I have a dependent variable "totrich" which I want to model as a ... 0answers 8 views ### Error function of noisy input and target variables Why is the sum of squares error function of noisy input and noisy target variables very similar to the error function for only noisy input? This is the relevant part in Bishop's book: Another ... 2answers 185 views ### How are regression, the t-test, and the ANOVA all versions of the general linear model? How are they all versions of the same basic statistical method? 0answers 25 views ### Appropriate method for supervised learning of small data set with few variables What method exc. for regression can be used in order to get y=f(x1,x2) on a training set of 800 to 2000 samples? y is a whole number <0,15>, x1,x2 are real <0,40>? I'm interested in prediction ... 1answer 33 views ### Weighted multiple regression in R with prespecified weights I would like to run a regression of the following form: Y ~ B1*predictor1 + B2*predictor2 + B3*predictor3 I would like to specify ... 1answer 39 views ### Basic questions concerning the interpretation of results from summary(lm(…~…)) in R [duplicate] set.seed(11) a = runif (12) b = rep(c(1,2,3),4) summary(lm(a~b))$coeff summary(lm(a~b-1))$coeff What does a p.value for the intercept means ? What differences ... 2answers 70 views ### Does the intercept count as a parameter for the n/parameters sample size rule for multiple regression? When estimating parameters, I know the general rule of thumb is n/parameters should be >10. Does the intercept in a model count as one of the estimated parameters in this "rule"? 
For example, if I ... 1answer 48 views ### R Forward and backward Selection I have a data set with large number of attributes some are not relevant and some are relevant for the regression model. My approach was to do forward and backward selection to identify a starting ... 0answers 19 views ### Changing prior for a regression model I have a regression model trained on a particular output distribution (for example N(0, 1)). I now have to do a prediction on a test set, with a caveat that I know that the distribution of the test ... 0answers 19 views ### Strategy for building best fit multiple regression model with time lagged variables I am building a multiple regression model - wrapped in a function - with one dependent variable and a dozen independent variables. The reason why I am building a function is that I need to do this ... 0answers 33 views ### OLS standard error log log regression I am estimating the following Power Law relationship: $$\ln(\text{Rank}) = \text{constant} + \alpha \ln(\text{Size})$$ where $\text{Rank}$ is $1,~2,~3,~...,~n$, and $\text{Size}$ is the raw value. ... 1answer 47 views ### How many observations are enough to perform linear regression with fixed effects I am new to econometrics. I am studying earnings management in banks during the financial subprime crisis. I manage to collect data from 23 banks from 2005 to 2010. Few years of them are missing but ... 1answer 89 views +50 ### What does the residual higher level variance tell me? I have a multilevel logistic regression model predicting the probability of item nonresponse, where the random intercept variance at country level takes on the following distribution for the different ... 3answers 65 views ### Which glm algorithm to use when predictors are numerical as well as categorical? I just need a direction on which regression algorithm (preferably glm or similar) algorithm to use when the predictor variables are a mix of numerical and categorical variables. The output is ... 1answer 73 views ### Multiple Choice on Linear Regression 1. Which one is NOT a linear regression models? Please give a 1-2 sentences brief explanation to your choice. (a) $y_i = β_0 +\exp(β_1x_i)+E_i, i = 1, 2, \ldots, n$ (b) \$y_i = β_0 + β_1x_i + β_2 ... 0answers 66 views ### Universal Approximation Theorem — Neural Networks I have posted this question elsewhere--MSE-Meta, MSE, TCS, MetaOptimize. Previously, no one had given a solution. But now, here is a really excellent and comprehensive answer. Universal approximation ... 1answer 52 views ### Model Building: Missing Data or Large Gap between data points I am currently trying to build a model using a data set that has large gap between data points. When I look for the correlation I clearly see a negative regression line. But I am worried about the gap ... 1answer 66 views ### normality distribution I have a problem with normality test. In order to make sure that I can use parametric test, I need to make sure that my residual distribution is normal. However, when I refer to the value of skewness ... 1answer 45 views ### Hold-one-out linear regression : a shortcut? For a series of observations $(\vec{x}_i, y_i), i = 1 \cdots N$ from the linear model $Y = \beta^T X + \epsilon$, the least squares estimate of $\beta$ is: \$\hat{\beta} = (\mathbf{X}^T ... 0answers 31 views ### Brant test in R In testing the parallell regression assumption in ordinal logistic regression I find there are several approaches. I've used both the graphical approach (as detailed in Harrell´s book) and the ... 
1answer 39 views ### Polynomial regression using scikit-learn I am trying to use scikit-learn for polynomial regression. From what I read polynomial regression is a special case of linear regression. I was hopping that maybe one of scikit's generalized linear ... 1answer 67 views ### Is $R^2$ value valid for insignificant OLS regression model? I am interested in stating that ___ % of the variance in Y is explained uniquely by $X_1$ and ___ % is explained uniquely by $X_2$. Is there some way to obtain this from a multiple regression ... 1answer 82 views ### How to handle Regression data thats not linear I'm new to stats and am using Python 2.7 to fit a regression model (Random Forest). When I plot the percentile plot of the prices before and after a log ... 2answers 99 views ### Estimating $b_1 x_1+b_2 x_2$ instead of $b_1 x_1+b_2 x_2+b_3x_3$ I have a theoretical economic model which is as follows, $$y = a + b_1x_1 + b_2x_2 + b_3x_3 + u$$ So theory says that there are $x_1$, $x_2$ and $x_3$ factors to estimate $y$. Now I have the real ... 0answers 20 views ### Testing for structural break on included variables under heteroscedasticity I am doing an analysis on how energy ratings affects the prices in the housing market. My data series ranges from 2003 to 2013, and to account for fluctuations in the sales price over time, I use ... 2answers 67 views ### Help with Anova of categorical and continuous variable in R and SPSS output I am having some trouble running an Anova on categorical variables in R and matching SPSS output. What I need to do is run an anova on the dataset below (its a made up data set). But, I need to know ... 0answers 44 views ### When to Log/Exp your Variables when performing Linear Regression? I'm doing regression using Random Forests for predicting prices based on several attributes. Code is written in Python using Scikit-learn. How do you decide whether you should transform your ... 0answers 15 views ### Non-integer dependent variable in negative binomial models I have non-nested count data that I've interpolated from one area to another based on the proportion of the area that lays in each. This is ZIP codes to counties, so most nest cleanly, with a few ... 0answers 16 views ### Proc GENMOD estimate statement with 2 continuous variables? I'm testing an ordinal scale measuring dyskinesia (range=0-4) via proc genmod, with the independent variables in the model being Drug1 and Drug2. Both of these variables are continuous. I want to ... 0answers 31 views ### Ratios in Regression, aka Questions on Kronmal Recently, randomly browsing questions triggered a memory of on off-hand comment from one of my professors a few years back warning about the usage of ratios in regression models. So I started reading ... 1answer 62 views ### Calculating the linear model with R I need to calculate the linear model in R, i did the following: summary(model) But what if I wanted to calculate only the first point? A bit stuck with this one... Many thanks! Here is the code ... 1answer 45 views ### Mean squared error definition I'm currently working through (part of) a textbook on non-parametric regression techniques. Regarding the choice of smoothing parameter the book starts out explaining the MSE which is defined as: ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 20, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8969458341598511, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/332?sort=votes
## Does the space of n x n, positive-definite, self-adjoint, real matrices have a better name?

This is also the space of real, symmetric bilinear forms in R^n.

- 1 The bilinear form should still be positive definite. – S. Carnahan♦ Oct 12 2009 at 2:54
- Better in what sense? – Felix Goldberg May 25 at 19:23

## 6 Answers

Two possible answers:

• Standard jargon is SPD (for "symmetric positive-definite").
• This isn't exactly a "name," but the n x n symmetric positive-definite matrices are exactly those matrices A such that the bilinear function $(x, y) \mapsto y^T A x$ defines an inner product on $\mathbb{R}^n$. Conversely, every bilinear function is of that form for some A, so with some abuse of terminology, you could equate the set of those matrices with the set of inner products on $\mathbb{R}^n$.

There are many other ways to characterize SPD matrices, but that's the only one I can think of at the moment that can be summarized as a single noun phrase.

Note that this space is not a vector space, but is a convex cone in the vector space of nxn matrices (it is closed under addition and multiplication by positive scalars). Hence people sometimes refer to the "positive semidefinite cone".

- Yes, I was going to post that as well. – Ilya Nikokoshev Oct 23 2009 at 19:18

This is the symmetric space of GL_n(R).

How about ? I have seen or used to denote the set of positive linear transformations in a set of linear transformations on an inner product space, but this was in the context of operator algebras.

For starters, since they're real I'd say symmetric instead of self-adjoint.

It is often useful to know that this set can be identified with the set of non-singular covariance matrices of random vectors with values in $\mathbb{R}^n$.
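A small numerical illustration of the SPD / inner-product correspondence (my own sketch with NumPy; the example matrix and size are arbitrary):

```python
# Sketch: build an SPD matrix, confirm it is symmetric positive-definite, and
# use it as an inner product <x, y>_A = y^T A x (NumPy assumed).
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
A = M @ M.T + 3 * np.eye(3)          # Gram matrix + shift -> symmetric positive-definite

print(np.allclose(A, A.T))                      # True: symmetric
print(np.all(np.linalg.eigvalsh(A) > 0))        # True: all eigenvalues positive
np.linalg.cholesky(A)                           # succeeds only for positive-definite A

inner = lambda x, y: y @ A @ x                  # the inner product induced by A
x = rng.standard_normal(3)
print(inner(x, x) > 0)                          # True: positive-definiteness in action
```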
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9425185918807983, "perplexity_flag": "head"}
http://physics.aps.org/articles/print/v2/39
# Viewpoint: A little entanglement helps

Physics Department and Arnold Sommerfeld Center for Theoretical Physics, Ludwig-Maximilians-Universität München, D-80333 München, Germany

Published May 11, 2009  |  Physics 2, 39 (2009)  |  DOI: 10.1103/Physics.2.39

A new algorithm allows for the extremely efficient calculation of thermally averaged quantities in one dimension, in conjunction with the density matrix renormalization group method. The key is the judicious selection of a few representative states.

Over the past decades, the numerical simulation of quantum many-body systems has evolved into a major field of condensed matter physics. Strongly interacting Hamiltonians on lattices, such as the Hubbard and Heisenberg models, are of particular interest as they are relevant for a variety of physical systems, including low-dimensional magnets, high-$T_c$ superconductors, and ultracold atomic gases in optical lattices. Writing in Physical Review Letters [1], Steve White at the University of California, Irvine, provides, at least in one dimension, a new algorithm (see Fig. 1) that for a given quantum system allows for a highly efficient calculation of static quantities (e.g., energy or magnetization of a chain of spins) at arbitrary finite temperature and holds promise for dynamical quantities. The underlying theme of this success is the peculiar behavior of entanglement in many-body systems.

Why are simulations of strongly interacting Hamiltonians difficult? The principal challenge is that the number of states needed to describe a quantum system increases exponentially with its size. For $N$ classical two-valued spins versus $N$ quantum spins of $1/2$, a point in state space is characterized by $2N$ versus $2^N$ variables. Numerical methods have to adapt to this exponential growth: exact diagonalization techniques analyze the full state space, while Monte Carlo techniques explore it stochastically. But does one have to work with the entirety of the Hilbert space? A wealth of other techniques try to find and work with much smaller, hopefully physically relevant, subspaces. Examples include all renormalization group and variational techniques, and among them is the density matrix renormalization group (DMRG) pioneered by White [2, 3] in 1992. DMRG is an already highly successful method that currently generates a lot of excitement because it is profoundly connected to quantum information theory through the idea of entanglement. Expanding on this well-established connection to ground states at zero temperature, White shows that one can use entanglement even at finite temperature, where it is understood poorly, to design a highly efficient simulation method based on DMRG.

DMRG is a method which, for a given Hamiltonian, variationally optimizes over a particular set of states—the so-called matrix product states (MPS). These are states where the scalar coefficients of the wave function expansion are derived from a product of $D\times D$ dimensional matrices, depending only on local lattice site states. $D$ is the key control parameter, determining both accuracy and computation time: compared to the exponential number of wave-function coefficients for the full Hilbert space, there is only a polynomial number of parameters in an MPS. Verstraete and Cirac [4] recently proved that MPS approximate ground-state physics of generic local Hamiltonians in one dimension, even for small $D$, to almost exponential accuracy—an observation that had intrigued practitioners of DMRG for a long time.
In ground-state physics, therefore, the enormously large Hilbert space is, in some sense, only an illusion. The nice feature exploited by White is that the efficiency of MPS and DMRG can be motivated (if we abandon rigor and focus on “physically” realistic Hamiltonians) by the existence of area laws for quantum mechanical entanglement in ground states. Let us partition a lattice into parts A and B, and measure pure state entanglement as the von Neumann entropy of part A, $S=-\mathrm{Tr}\,\rho_A \log_2 \rho_A$, with the reduced density operator defined as $\rho_A=\mathrm{Tr}_B\,|\psi\rangle\langle\psi|$ by explicit summation over the states in part B. $S$ will be extensive for a random state from Hilbert space (e.g., it will depend on the number of lattice sites comprising part A). However, ground states turn out to be highly atypical: for gapped systems, entanglement scales merely as the surface of A (e.g., in one dimension it is a single lattice site), with possible logarithmic corrections at criticality. For a MPS, reduced density operators have dimension $D$, and the maximum entanglement the state can carry is $S=\log_2 D$. Conversely, we will need at least $D=2^S$ as the dimension of a MPS that is an accurate description of a state with entanglement $S$. DMRG therefore succeeds or fails depending on the amount of entanglement present! In one dimension, where $S$ is roughly a constant in the system size $L$ (or logarithmic at criticality), $D$ does not grow substantially, i.e., at most polynomially with $L$. In two dimensions, for systems of size $L^2$, $S\propto L$ entails exponential growth of $D$, and DMRG fails, at least for larger systems.

At finite temperature, however, the special nature of ground states will not help in DMRG, and therefore it seems a natural expectation that setting up a DMRG procedure would be a real challenge, if not impossible. But the apparent complexity of the thermal density operator $e^{-\beta H}=\sum_i e^{-\beta E_i}|E_i\rangle\langle E_i|$ as an ensemble of exponentially many pure states is again, in some sense, only an illusion! Indeed, we can interpret any mixed state of some physical system A as the reduced density operator $\rho_A$ for some pure state $|\psi\rangle$ living on system AB, where B is just a copy of system A: a mixed state on a spin chain corresponds to a pure state on a spin ladder. This trick has been used recently for thermal mixed states $e^{-\beta H}$ to develop a finite-temperature DMRG algorithm. One starts with a pure state with maximal entanglement between A and B (e.g., a spin singlet on each bond for the spin ladder) [5], which leads to a maximally mixed state on A, i.e., the infinite temperature $(\beta=0)$ density operator. This state is then subjected to an imaginary time evolution using time-dependent DMRG [6] to the desired temperature $\beta$. This works well, but as White points out, one can avoid purification entirely. This is particularly relevant at low temperatures, where the mixed state of A evolves towards the pure ground state. But then A is not entangled with B anymore, and DMRG simulates a product of two pure states. That amounts to describing a ground state with $D^2$ states where only $D$ would have been enough. As a result, low-temperature simulations, where most relevant quantum effects occur, become overly costly. What White proposes instead is to move away from the focus on the energy representation of $\rho$—as already pointed out by Schrödinger many decades ago, although mathematically correct, such a representation is unphysical since real systems at finite temperature will usually not be in energy eigenstates.
Equilibration would be exponentially slow, and eigenstates are highly fragile. White rather exploits unitary freedom in the representation of $\rho$ and introduces “typical” states by doing imaginary time evolutions on any complete orthonormal set of states and constructing $\rho$ from those. The intriguing part of White’s work is that he considers a special set of such typical states: as his initial set he simply takes the “classical” product states, which have no entanglement. Subsequently, the imaginary time evolution introduces entanglement due to the action of the Hamiltonian, but it is a reasonable expectation that the final entanglement will be lower than for similar evolutions of already entangled states. Hence he calls the typical states he obtains “minimally entangled typical thermal states” (METTS). The computational cost is low: dimensions will not blow up as in the purification approach, and low entanglement means that the DMRG computing cost will be low. Still, this would not be useful if it had to be done for all classical product states. However, White formulates the procedure by analogy to the updates in Monte Carlo steps: the last METTS is used to produce the next classical state (and from there the next METTS) by a quantum measurement of all spins in the current METTS (see Fig. 1). It turns out that, after discarding the first few METTS to eliminate effects of the initial choice, averaging quantities over only a hundred or so states allows calculation of local static quantities (magnetizations, bond energies) with high accuracy and extremely low computational cost compared to previous approaches. This is surprising and exciting.

Intriguing questions concerning both the potential and the foundation of the algorithm remain. How well will it perform for correlation functions? Dynamical quantities can be accessed easily, as the time-evolution of the weakly entangled METTS is not costly, but will the efficiency of averaging over only a few “typical” states continue to hold? This relates to the fundamental question: Why are so few METTS sufficient? My conjecture is that the choice of classical initial states is not only convenient for entanglement reasons; they also have large variance in energy and overlap with many eigenstates, such that sequences of imaginary time evolution and quantum measurement should mimic a thermalized ensemble very quickly (the most inefficient approach would be to start from the eigenstates themselves). In any case, we seem to get a tantalizing hint that for physical manifestations, only small parts of the Hilbert space really matter, and that we can find them systematically.

### References

1. S. R. White, Phys. Rev. Lett. 102, 190601 (2009).
2. S. R. White, Phys. Rev. Lett. 69, 2863 (1992).
3. U. Schollwöck, Rev. Mod. Phys. 77, 259 (2005).
4. F. Verstraete and J. I. Cirac, Phys. Rev. B 73, 094423 (2006).
5. F. Verstraete, J. J. García-Ripoll, and J. I. Cirac, Phys. Rev. Lett. 93, 207204 (2004); G. Vidal, Phys. Rev. Lett. 91, 147902 (2003).
6. S. R. White and A. E. Feiguin, Phys. Rev. Lett. 93, 076401 (2004); A. J. Daley, C. Kollath, U. Schollwöck, and G. Vidal, J. Stat. Mech.: Theor. Exp. P04005 (2004).

### Highlighted article

#### Minimally Entangled Typical Quantum States at Finite Temperature

Steven R. White
Published May 11, 2009
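To make the METTS sampling loop described in the Viewpoint concrete, here is a toy dense-matrix sketch (my own illustration with NumPy/SciPy; it uses exact imaginary-time evolution on a tiny chain instead of the MPS/DMRG machinery of the paper, and all names and parameter values are arbitrary choices):

```python
# Toy METTS loop: exact dense matrices on a 4-site spin-1/2 Heisenberg chain.
# No MPS compression; this only illustrates the sampling scheme, not the method's scaling.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
eye = np.eye(2)

def site_op(op, i, L):
    """Embed a single-site operator at site i of an L-site chain."""
    mats = [eye] * L
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

L, beta, n_samples = 4, 1.0, 2000
H = sum(site_op(s, i, L) @ site_op(s, i + 1, L)
        for i in range(L - 1) for s in (sx, sy, sz))

half = expm(-0.5 * beta * H)           # imaginary-time evolution e^{-beta H / 2}

state_index = rng.integers(2**L)       # start from a random classical (S^z) product state
energies = []
for step in range(n_samples):
    cps = np.zeros(2**L, dtype=complex); cps[state_index] = 1.0
    phi = half @ cps
    phi /= np.linalg.norm(phi)         # the METTS |phi(i)> = e^{-beta H/2}|i> / norm
    energies.append(np.real(phi.conj() @ H @ phi))
    # "Measure all spins": collapse onto a new classical product state
    probs = np.abs(phi) ** 2
    state_index = rng.choice(2**L, p=probs / probs.sum())

burn = 20                              # discard the first few METTS, as in the article
E_metts = np.mean(energies[burn:])
E_exact = np.trace(expm(-beta * H) @ H).real / np.trace(expm(-beta * H)).real
print(E_metts, E_exact)                # the two values should agree within sampling error
```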
http://physics.stackexchange.com/questions/12208/how-come-an-anti-reflective-coating-makes-glass-more-transparent/12220
# How come an anti-reflective coating makes glass *more* transparent? The book I'm reading about optics says that an anti-reflective film applied on glass* makes the glass more transparent, because the air→film and film→glass reflected waves (originated from a paraxial incoming wave) interfere destructively with each other, resulting on virtually no reflected light; therefore the "extra" light that would normally get reflected, gets transmitted instead (to honor the principle of conservation of energy, I suppose?). However, this answer states that "Superposition is the principle that the amplitudes due to two waves incident on the same point in space at the same time can be naively added together, but the waves do not affect each other." So, how does this fit into this picture? If the reflected waves actually continue happily travelling back, where does the extra transmitted light come from? * the film is described as (1) having an intermediate index of refraction between those of air and glass, so that both the air-film and film-glass reflections are "hard", i.e., produce a 180º inversion in the phase of the incoming wave, and (2) having a depth of 1/4 of the wavelength of the wave in the film, so that the film-glass reflection travels half its wavelength back and meets the air-film reflection in the opposite phase, thus cancelling it. - ## 6 Answers The thickness of the AR coating is chosen such that the reflections from the two interfaces cancel out (at the wavelength for which the AR coating was designed): See Anti-reflective coating in Wikipedia. As endolith points out in the comments, to explain how the transmission is enhanced, you have to draw a few more rays in the diagram. Here's another illustration, from the Wikipedia article for Fabry–Pérot interferometer, which shows a few higher-order reflections: For the anti-reflective coating, you choose the thickness such that R1 and R2 cancel while T1 and T2 constructively interfere. Note that this is dependent on the wavelength, the angle of incidence, and the index of refraction of whatever is being coated. With other thicknesses, you can make a high-reflectivity coating, or a coating of whatever reflectivity you want. - 1 That's a very nice diagram of the situation. – Colin K Jul 13 '11 at 12:00 However, this still doesn't intuitively explain how T becomes larger. I loses energy when it gets reflected, but R1 and R2 canceling doesn't explain how the energy gets back to the ns side. Could it also be said that R2 is reflecting again off the n0/nl surface and constructively interfering with T? – endolith Jul 14 '11 at 14:37 1 @endolith: Yes. In general there will be an infinite sum over reflections to get the final result. – nibot Jul 14 '11 at 17:09 1 I added another illustration which shows this. – nibot Jul 14 '11 at 17:26 Sorry for taking a long time to give you feedback. I believe this answers my question. Thanks nibot and @endolith :) – Waldir Jan 7 '12 at 15:12 I think that to really understand this, you have to abandon the idea that there are individual, separate electromagnetic waves. In reality, there's just one global electromagnetic field, $\bigl(\mathbf{E}(\mathbf{x}),\mathbf{B}(\mathbf{x})\bigr)$. It evolves in space and time in a manner determined by Maxwell's equations. For certain configurations of the EM field - specifically, those with 2D translational symmetry - the evolution described by Maxwell's equations results in the shape of the field propagating in one direction. 
It's much like the way waves on the ocean (normally) propagate across its surface in one direction, without changing their shape. For this reason, we call these configurations of the EM field "plane waves." This is the sort of wave most people usually think of when they imagine a light wave. The key point, though, is that the idea of propagating plane waves really only arises in one particular case: when you have an isolated, 2D-symmetric EM field configuration. In general, the way the field evolves in time and space is more complicated than simple directional propagation, so in general, you can't always think of the field evolution as a wave. In the case of reflection specifically, even just reflection from a single surface, what this means is that the model of the incident wave reflecting off the boundary to produce a separate reflected wave is too simplistic. A more realistic description would be that the EM field has to satisfy specific conditions at the boundary between the surfaces, and that the only way to do this is for the field on the side of the incident wave to take a different value than it would have based on the incident wave alone. The difference between the actual field and the field that would be produced by the incident wave alone is called the reflected wave, because if you shut off the incident wave and wait a long time, you'll wind up with a simple plane wave propagating backwards away from the boundary. The same holds true (i.e. the wave description is too simplistic) for double reflection with thin film interference; in fact, even more so, because it's a more complicated system. In this case, if you have a particular relationship between the distances and frequencies involved, you can arrange it so that the boundary conditions are satisfied by the incident wave alone, so there's no "extra" contribution to the field to be considered a reflected wave. Or in other words, if you shut off the incident wave and wait a long time, you will wind up with nothing propagating backwards, and thus we say that there is no reflected wave. - Perhaps it will help to recall that energy is a nonlinear function of the electromagnetic field. The superposition principle applies to the electromagnetic field, not the energy or power. So if two waves are superimposed out of phase, 1 - 1 = 0, we can say they are both happily traveling "independent" of each other (from the point of view of the EM field), but from the point of view of the energy they contain they are not independent. - This is what I was going to say. There is a superposition principle for amplitude, but not for power. The power can have interference, which is the case here. – Keenan Pepper Jul 25 '11 at 18:37 The wave reflected from the air-film interface continues happily traveling back, as you say, but so does the wave reflected from the film-glass interface. Since they are the same frequency but in antiphase, they interfere destructively as long as they keep going. Since the superposition principle states that they do not affect each other in any way, they keep on going as long as they like. However, there is no energy transported backwards in the reflected waves, because the energy is proportional to the square of the total electric field. It is the energy that is conserved, not the electric field, so all the energy (if not absorbed) is transmitted. 
- ok, this sounds logical, but on the other hand it's rather counter-intuitive :/ are these reflected waves real (in a physical sense) or just an abstract construct due to the theoretical description of this phenomenon? I mean, they don't seem to be anything measurable... I don't know, this concept just seems rather strange to me. – Waldir Jul 12 '11 at 18:37 Waldir. I think you just have to solve the wave equation for the problem in question. What must be conserved is the net energy flux, upwards minus downwards. We know since less is going up, more must be going down, but the details depend upon the amplitude and phases of the various waves. There will be a total of five waves to match up: down and up in air, down and up in film, and down in glass. – Omega Centauri Jul 12 '11 at 19:49 1 I think you could say the idea of multiple separate waves canceling out is an abstract construct. That's basically the view I try to explain in my answer, although it winds up being rather confusing unless you have some intuition about how solutions to the wave equation behave. – David Zaslavsky♦ Jul 12 '11 at 23:17 1 I think the gist of the contradiction of intuition, is that the sign of the reflected wave depends upon whether one is entering a denser/less dense (higher index of refraction) medium. So the wave that is reflected back down from the surface of the film (this was the primary reflection off the glass), constructively interferes with the downgoing refracted wave. So the amplitude within the film can actually be greater than expected by simply thinking about intensities rather than phases. – Omega Centauri Jul 13 '11 at 3:45 In the WP-link we can obtain a substantial explanation. Read about the Fresnel coefficients WP-here Play with multilayers at «thinfilm» (it uses the WP-transfer-matrix method). The transmission is increased due to the forward and backward reflections in the coated film (medium 2) that are forward propagated to the medium 3 as seen in this image from pag 13.7 of B.O Sernelius lectures What happens to the reflected rays in the layer 1 ? They simply cancel (vanish,do not exist) because they are in phase opposition in relation to the incident wave and almost all of the energy in the layer 2 will be forward propagated to 3. Lets analyse the following experiment : The image is a rework of a picture from the first WP-link, that can be misleading because it figures two reflected rays R1 and R2 that probably do not exist. If those rays are really there, and we can not measure them because they are in phase opposition, then the observer in the bottom of the image will see the light restored to a nice level because the second coated window do the opposite action of the first window (beam splitter). The particle nature of light may favour this outcome. If the observer can not see light then we must conclude that there is no field, energy, photons in the regions ?!?!?! of the experiment. The wave nature of light favour this outcome. The experiment is very easy to perform. Can someone post the outcome? - 2 To the easy downvoters: Can you make an effort to explain why you have considered this one a wrong, or off target, answer? (I took a lot of time to construct this answer). Most welcome are your reasoning about the expected outcome of the proposed experiment. – Helder Velez Jul 13 '11 at 15:43 1 Why do you think that the 2nd window will restore the light? It will act just like the first, reducing the reflection to the observer still further. – user2963 Jul 14 '11 at 15:18 Thank you Mr. 
@zephyr, you're right, and I will have to reformulate the answer. – Helder Velez Jul 15 '11 at 17:52 Let's talk about where the extra transmitted energy comes from. The energy quantum collapses into the position where it is found when measured. Where the energy is found is a fundamentally random thing. So, sometimes all the energy is found in the area where waves cancel. Where cancellation approaches perfect cancellation, the probability of finding energy approaches zero. So I'm saying that you will find photons with the normal photon energy in the cancellation area, and sometimes you find more photons in the cancellation area than in the area of constructive interference. Let's consider a very short pulse of light going through a window pane with anti-reflective coating. When a photon is detected behind the window, energy comes from where the reflected pulse is, and from where the transmitted pulse is. When the absence of a photon is detected behind the window, energy goes from where the transmitted pulse is to where the reflected pulse is. - 1 Dealing with these phenomena in terms of photons is masochism. – Georg Jul 14 '11 at 17:52
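As a numerical complement to the answers above (my own addition; the indices and wavelength below are illustrative values, not taken from the thread), the standard single-layer characteristic-matrix formula shows the quarter-wave film cancelling the reflection at its design wavelength:

```python
import numpy as np

def reflectance(n0, n1, ns, d, lam):
    """Normal-incidence reflectance of a single film (index n1, thickness d)
    on a substrate of index ns, via the standard thin-film characteristic matrix."""
    delta = 2 * np.pi * n1 * d / lam
    M = np.array([[np.cos(delta), 1j * np.sin(delta) / n1],
                  [1j * n1 * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1, ns])
    r = (n0 * B - C) / (n0 * B + C)
    return abs(r) ** 2

n0, ns, lam = 1.0, 1.52, 550e-9           # air, glass, green light (illustrative values)
print("bare glass:", reflectance(n0, 1.0, ns, 0, lam))                        # about 4% reflected
n1 = np.sqrt(n0 * ns)                     # ideal AR index, intermediate between air and glass
print("quarter-wave AR film:", reflectance(n0, n1, ns, lam / (4 * n1), lam))  # essentially 0
```

Since the film is assumed lossless, the transmitted fraction is 1 − R, which is where the "extra" light discussed in the question goes.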
http://mathhelpforum.com/geometry/154687-give-geometrical-construction-root-2-etc-etc.html
# Thread: 1. ## Give the geometrical construction of root 2 etc. etc. I did this and the following root(2+root2) and root(2+root(2+root2)) by drawing a circle of radius one and then working from there, and it came about eventually. Is there any other easier way? How many ways are there? What is meant, exactly, by "geometrical construction"? Compass and ruler? ------------------------------------ PS Is there a textbook which explains the classical geometry which, if completed, would endow one with a mastery over the classical geometry methods? ________________________ Thanks. (sorry this post is in the wrong place) 2. Originally Posted by berachia I did this and the following root(2+root2) and root(2+root(2+root2)) by drawing a circle of radius one and then working from there, and it came about eventually. Is there any other easier way? How many ways are there? What is meant, exactly, by "geometrical construction"? Compass and ruler? ------------------------------------ PS Is there a textbook which explains the classical geometry which, if completed, would endow one with a mastery over the classical geometry methods? ________________________ Thanks. (sorry this post is in the wrong place) construct a square of side length 1 ... the length of its diagonal is $\sqrt{2}$ http://whistleralley.com/construction/reference.htm 3. ## construct rad 2 Hi berachia, You are in the right forum if you simply want to construct a line segment equal to rad 2 in some given units. If the units are inches, draw a circle with compass of radius 1 inch. Then draw a diameter and construct its perpendicular bisector. A chord joining the endpoints of the two diameters across one of the quarter circles is equal to rad 2, or 1.4142 inches, theoretically. bjh
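As a numerical sanity check of the unit-circle approach mentioned in the thread (my own addition, and only one way such a circle construction can be organized, not necessarily the poster's): on a unit circle the chord subtending a central angle θ has length 2·sin(θ/2), so the chords for θ = 90°, 135°, 157.5° (each new angle halves the remaining arc up to 180°) are exactly √2, √(2+√2), and √(2+√(2+√2)).

```python
import numpy as np

# Chord of central angle theta on a unit circle: length = 2*sin(theta/2).
def chord(theta_deg):
    return 2 * np.sin(np.radians(theta_deg) / 2)

targets = [np.sqrt(2),
           np.sqrt(2 + np.sqrt(2)),
           np.sqrt(2 + np.sqrt(2 + np.sqrt(2)))]

for theta, target in zip([90, 135, 157.5], targets):
    print(theta, chord(theta), target)   # each pair of numbers matches
```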
http://physics.stackexchange.com/questions/18882/can-conservation-of-energy-be-applied-if-trajectory-is-not-smooth?answertab=oldest
# Can conservation of energy be applied if trajectory is not smooth? In this video http://www.khanacademy.org/video/conservation-of-energy?playlist=Physics Khan Academy explains conservation of energy for a falling object. He looks at an object falling perpendicularly from height h and computes its velocity at height zero. So far so good. But then he draws a curve that has several variations in height such that the object needs to climb back to a certain height before falling back. At the end he computes again the velocity of the object and finds the same value as for an object falling along the perpendicular. Intuitively this seems wrong to me. To simplify his drawing I drew this picture: I reason like this: The object falls to C but then it needs to climb back up to D, and at D its vertical velocity is zero (because it changes direction, but I'm not sure if this is correct). So at B the velocity will be as if the object fell from D, not from E. Is this correct? What is the best way of thinking about this problem? - 1 Conservation of energy doesn't care... the velocity you end up with is dependent only on the initial velocity and the height difference (assuming no friction). You can apply it for E-B or C-B or D-B, but each case will have a different initial velocity and height. – Chris Gerig Dec 30 '11 at 0:32 Is there a formula I can use for this case to find velocities at C and D? – Zeynel Dec 30 '11 at 13:24 1 Yes, the definition of conservation of energy, $KE+PE=\frac{1}{2}mv^2+mgh=\text{constant}$. – Chris Gerig Dec 30 '11 at 13:39 ## 2 Answers Assuming your diagram here is something like a ball rolling down a hill or a bead sliding on a wire (ignoring friction of course), you are right to say that the vertical velocity at D is 0, but this is irrelevant for the energy. What is relevant is the total velocity, and at D the total velocity will be entirely horizontal, and given by the height difference $h_E - h_D$. For a simplifying example, consider a ball rolling down a quarter-pipe onto a flat surface. When the ball reaches the bottom, it has no vertical velocity, but only horizontal. The horizontal velocity will have the same magnitude as the vertical velocity of a ball dropped straight down (with no intervening surface) from the same height. - I am not disputing what happens in a quarter-pipe, or an inclined plane situation where the ball is moving smoothly along an inclined plane and it is losing height at every moment. In this question, the ball falls to C and then climbs back up to D. Your example is not a "simplifying example"; it is a different problem. The question is about how the body climbs up from C to D. On an inclined plane the ball does not climb up. – Zeynel Dec 30 '11 at 13:26 Again, intuitively, it appears to me that in this case we cannot assume no friction. How does the object change direction at C without friction? Assuming that it did and it started to move upward to D. What would make the object change direction at D instead of continuing to move upward in the direction of CD above D? In this case, the assumption of no friction is equivalent to saying "assume that ECDB is an inclined plane", but ECDB is not an inclined plane. – Zeynel Dec 30 '11 at 13:26 Friction has nothing to do with it. The surface can exert a normal force while being frictionless. The shape of the path doesn't really matter, as long as the starting point is the highest point in the path, and the curvature constraint that Ron mentions is not violated. 
– Colin K Dec 30 '11 at 16:47 If the radius of curvature R at point D satisfies $$v^2/R > g$$ (gravity is insufficient centripetal force), where v is the velocity computed from the conservation of energy, the sliding object will leave the ramp. This is why your intuition is upset: if the object is moving fast enough, and the ramp is not sufficiently slowly curving, gravity will not keep the sliding object on the ramp. In the picture, the point D has a pretty small-looking R. ### Answering the title question The title question is much more interesting than the example: can energy be conserved when the constraints are non-differentiable? The answer is no, and a simple example is a cylinder that hits a step bump, and rises up. Conservation of angular momentum at the contact point requires that if the cylinder doesn't bounce off, it loses energy at the bump. This is also true in other constrained systems with non-differentiable constraints, and the amount of energy loss is readily calculable from conservation principles alone, just from the form of the non-differentiability. This is a common Olympiad-style exercise in physics. - I really doubt that the OP meant "smooth" in the sense that you (correctly) use it. This would be a good answer if it weren't almost certain to confuse the OP. – Colin K Dec 30 '11 at 16:44
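To make the formula from Chris Gerig's comments concrete, here is a short worked example (the heights are made-up numbers for illustration only, and it assumes the frictionless case in which the object stays on the track): with $\tfrac12 m v^2 + mgh$ constant, the speed at any point depends only on the drop from the starting height E.

```python
import math

g = 9.81                                          # m/s^2
h = {"E": 10.0, "C": 2.0, "D": 6.0, "B": 0.0}     # made-up heights in metres

def speed(start, point, v0=0.0):
    """Speed at `point` for an object released at `start` with speed v0 (no friction)."""
    return math.sqrt(v0**2 + 2 * g * (h[start] - h[point]))

print("speed at C:", speed("E", "C"))   # largest drop so far, so the fastest point so far
print("speed at D:", speed("E", "D"))   # slower again, but not zero: the motion there is horizontal
print("speed at B:", speed("E", "B"))   # same as a straight drop from E, sqrt(2 g h_E)
```

This is the point the answers make: the speed at D equals $\sqrt{2g(h_E-h_D)}$, which is not zero even though the vertical component vanishes there, and the speed at B is the same $\sqrt{2gh_E}$ as for a straight drop.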
http://mathoverflow.net/questions/9037/how-is-it-that-you-can-guess-if-one-of-a-pair-of-random-numbers-is-larger-with-p/9045
## How is it that you can guess if one of a pair of random numbers is larger with probability > 1/2? ### Remember to vote up questions/answers you find interesting or helpful (requires 15 reputation points) My apologies if this is too elementary, but it's been years since I heard of this paradox and I've never heard a satisfactory explanation. I've already tried it on my fair share of math Ph.D.'s, and some of them postulate that something deep is going on. The Problem: You are on a game show. The host has chosen two (integral and distinct) numbers and has hidden them behind doors A and B. He allows you to open one of the doors, thus revealing one of the numbers. Then, he asks you: is the number behind the other door bigger or smaller than the number you have revealed? Your task is to answer this question correctly with probability strictly greater than one half. The Solution: Before opening any doors, you choose a number $r$ at random using any continuous probability distribution of your choice. To simplify the analysis, you repeat until $r$ is non-integral. Then you open either door (choosing uniformly at random) to reveal a number $x$. If $r < x$, then you guess that the hidden number $y$ is also smaller than $x$; otherwise you guess that $y$ is greater than $x$. Why is this a winning strategy? There are three cases: 1) $r$ is less than $x$ and $y$. In this case, you guess "smaller" and win the game if $x > y$. Because variables $x$ and $y$ were assigned to the hidden numbers uniformly at random, $P(x > y) = 1/2$. Thus, in this case you win with probability one half. 2) $r$ is greater than $x$ and $y$. By a symmetric argument to (1), you guess "larger" and win with probability one half. 3) $r$ is between $x$ and $y$. In this case, you guess "larger" if $x < y$ and "smaller" if $x > y$ -- that is, you always win the game. Case 3 occurs with a finite non-zero probability $\epsilon$, equivalent to the integral of your probability distribution between $x$ and $y$. Averaging over all the cases, your chance of winning is $(1+\epsilon)/2$, which is strictly greater than half. The Paradox: Given that the original numbers were chosen "arbitrarily" (i.e., without using any given distribution), it seems impossible to know anything about the relation between one number and the other. Yet, the proof seems sound. I have some thoughts as to the culprit, but nothing completely satisfying. - 3 Meta-answer: there's no such thing as a paradox, just a failure of your intuition. – Scott Morrison♦ Dec 15 2009 at 21:45 19 Disagreement by definition: a paradox is a failure of your intuition ;) – Andrew Critch Dec 15 2009 at 21:48 ## 9 Answers After Bill's latest clarifications in the commentary on Critch's answer, I think the question is interesting again. My take: One thing that always seemed to fall through the cracks when I learned about probability theory is that probability is intricately tied to information, and probabilities are only defined in the context of information. Probabilities aren't absolute; two people who have different information about an event may well disagree on its probability, even if both are perfectly rational. Similarly, if you get new information relevant to a certain event, then you should probably reevaluate what you think is the probability that it will occur. Your particular problem is interesting because the new information you get isn't enough for you to revise that probability by purely mathematical considerations, but I'll get to that in good time. 
With the previous paragraph in mind, let's compare two games: G1. You are given two closed doors, A and B, with two numbers behind them, and your goal is to choose the door with the higher number. You are given no information about the doors or numbers. G2. You are given two closed doors, A and B, with two numbers behind them, and your goal is to choose the door with the higher number. You are allowed to look behind one of the doors and then make your choice. For the first game, by symmetry, you clearly can't do better than choosing a door randomly, which gives you a success probability of exactly 1/2. However, the second game has a chance of being better. You are playing for the same goal with strictly more information, so you might expect to be able to do somewhat better. [I had originally said that it was obviously better, but now I'm not so sure that it's obvious.] The tricky thing is quantifying how much better, since it's not clear how to reason about the relationship between two numbers if you know one of the numbers and have no information about the other one. Indeed, it isn't even possible to quantify it mathematically. "But how can that be?" you may ask. "This is a mathematical problem, so how can the solution not be mathematically definable?" There's the rub: part of the issue is that the problem isn't formulated in a mathematically rigorous way. That can be fixed in multiple ways, and any way we choose will make the paradox evaporate. The problem is that we're asked to reason about "the probability of answering the question correctly," but it's not clear what context that probability should be computed in. (Remember: probabilities aren't absolute.) In common probability theory problems and puzzles, this isn't an issue because there is usually an unambiguous "most general applicable context": we should obviously assume exactly what's given in the problem and nothing else. We can't do that here because the most general context, in which we assume nothing about how the numbers $x$ and $y$ are chosen, does not define a probability space at all and thus the "probability of answering the question correctly" is not a meaningful concept. Here's a simpler ostensible probability question that exhibits the same fallacy: "what's the probability that a positive integer is greater than 1,000,000?" In order to answer that, we have to pick a probability distribution on the positive integers; the question is meaningless without specifying that. As I said, there are multiple ways to fix this. Here are a couple: I1. (Tyler's interpretation.) We really want the probability of answering the question correctly given a particular $x$ and $y$ to be greater than 1/2. (The exact probability will of course depend on the two numbers.) I2. (Critch's interpretation.) More generally, we want the probability of answering correctly given a particular probability distribution for $(x,y)$ to be greater than 1/2. (The exact probability will of course depend on the distribution.) (Those two are actually equivalent mathematically.) Clearly, if we knew what that distribution was, we could cook up strategies to get a success probability strictly above 1/2. That's pretty much obvious. It is not nearly as obvious that a single strategy (such as the one in the statement of the question) can work for all distributions of $(x,y)$, but it's true, as Bill's proof shows. It's an interesting fact, but hardly paradoxical now. 
Let me summarize by giving proper mathematical interpretations of the informal statement "there is a strategy that answers the question correctly with probability strictly greater than 1/2," with quantifiers in place: (1a) $\exists \text{ strategy } S: \forall x, y: \exists \delta > 0$: $S$ answers correctly on $x$, $y$ with probability at least $1/2 + \delta$. (1b) $\exists \text{ strategy } S: \forall \text{ probability distributions } P \text{ on } \mathbb{N}^2: \exists \delta > 0$: $S$ answers correctly, when $x$, $y$ are chosen according to $P$, with probability at least $1/2 + \delta$. I think with the proper quantifiers and the dependence on $x$ and $y$ explicit, it becomes a cool mathematical result rather than a paradox. Actually, based on my arguments at the beginning, it's not even that surprising: we should expect to do better than random guessing, since we are given information. However, simply knowing one number doesn't seem very useful in determining whether the other number is bigger, and that's reflected in the fact that we can't improve our probability by any fixed positive amount without more context. Edit: It occurred to me that the last part of my discussion above has a nonstandard-analytical flavor. In fact, using the first version of the formula for simplicity (the two being equivalent), and the principle of idealisation, I think we immediately obtain: (2) $\exists \text{ strategy } S: \exists \delta > 0: \forall \text{ standard }x, y:$ $S$ answers correctly on $x$, $y$ with probability at least $1/2 + \delta$. (Please correct me if I'm wrong.) The number $\delta$ is not necessarily standard, and a basic argument shows that it must actually be smaller than all standard positive reals, i. e., infinitesimal. Thus, we can say that being able to look behind one door gives us an unquantifiably small, infinitesimal edge over random guessing. That actually meshes pretty well with my intuition! (It might still a nontrivial observation that the strategy $S$ can be taken standard; I'm not sure about that...) - Thank you for your thoughtful answer, and for clarifying the questions under debate! Your first formula (call it (1)) is the one that I intended to inquire about, and to me remains the strongest statement (in any instance of the game, there is a quantifiable advantage to looking under a door). I agree that this implies your second statement (2), though because delta is standard in (1) but non-standard in (2), I'm not sure that (2) implies (1). – Bill Thies Dec 17 2009 at 18:37 I wasn't sure which you were referring to, so I numbered my formulas. Delta in (1a) and (1b) is of course standard (because you don't need nonstandard analysis to prove them), while delta in (2) is nonstandard. I think the three formulas are all equivalent (I'm not very good at nonstandard analysis), but in any case, (1a) and (1b) are equivalent, and they imply (2). – Darsh Ranjan Dec 17 2009 at 20:05 @Bill, I agree with Darsh's assessments here, which are of game G2, and that the conflation of games G1 and G2 could be a significant source of confusion, depending on the confusee of course (as all paradoxes do). @Darsh, I edited our game/interpretation labels to make them easier to refer back to... hope you don't mind. – Andrew Critch Dec 18 2009 at 2:21 Great, so we're all in agremeent about the formulation of the problem! Perhaps the fact that there is information revealed in opening a door will be the best intuition we can have for the result. 
I'll leave this thread open for a few days, but I expect Darsh has had the final word! – Bill Thies Dec 18 2009 at 3:49 Great response. I would vote it up more if I could, and Bill should accept it and be done. (Bill: MO is not a great place for a debate, methinks, so I hope "questions under debate" is meant rhetorically.) I'm never much impressed with "paradoxes" except when they are used to illustrate a failure of well-formedness or of intuition; I think your answer exactly describes how a "paradox" can best be used to extend mathematical understanding. – Theo Johnson-Freyd Dec 18 2009 at 4:54 show 6 more comments ### You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you. Thanks for writing out the proof in detail, so that answers can easily refer back to it! The math is fairly straightforward, but as a paradox I find this very interesting. Spoiler alert: Paradoxes are awesome! If you like paradoxes, don't read below unless you've thought about it yourself first, or just don't care :) EDIT (Dec. 17, after change in question statement): This answer interprets the question with the values { $x,y$ } varying over possible games. Mathematically speaking this is only an integral more general than the case where { $x,y$ } is fixed (which is the special case where the hosts's choice distribution $Q$ is supported uniformly on two fixed integers), but a new level of potentially paradoxical issues arise. The answer below is intended presuming the fixed case is understood, and addresses issues that arise in passing to the variable case. For a discussion of the fixed case, I recommend Darsh's excellent answer. Logical/mathematical remarks: (0) An easier method than "repeating until non-integral" is to directly fix a probability distribution $P$ on numbers of the form $n+0.5$ and choose one randomly via $P$. (1) The proof for this strategy is valid provided that you know the host is choosing his number according to SOME fixed, well-defined probability distribution $Q$ on the integers (but you don't have to know what $Q$ is). In this sense, you know that his choice is actually not completely arbitrary... you know he's following some system, even if you don't know which one. For example (1.1) Specifically, consider the step "Averaging over all the cases" (after Case 3 in the question). The number $\epsilon$ in Case 3 depends on $x$, $y$, and your distribution $P$. "Average over all the cases" means that you integrate the function $(1+\epsilon)/2$ over the space of all possible pairs $(x,y)$. This requires a probability distribution on the pairs $(x,y)$, which is where $Q$ comes in. Since any $Q$ will give a result greater than $1/2$, it is tempting to think $Q$ is irrelevant to the conclusion. This is a fallacy! I cannot stress enough that treating an unknown as a random variable is a non-vacuous assumption that has real consequences, this scenario itself being an example. Without this assumption, the step "Averaging over all the cases" is meaningless and invalid. (2) Although the result of your strategy is that you "know something" about the relation between the two numbers, your confidence in this knowledge is arbitrary. That is, you have no estimate, not even a probabilistic one, on how much bigger than $0.5$ your chances of winning are. 
Resolving the paradox: (isolating what causes the "weird feeling" here) In short, I'd say mixing "random" and "arbitrary" in the same scenario makes for a patchy idealization of reality. Here, one unknown, the choice of ordered pair $(x,y)$, is being treated via a probability distribution $Q$, whereas a related unkown, the choice of probability distribution $Q$, is being treated as "arbitrary". This is a weird metaphysical mix of assumptions to make. Why? For example, in science and in everyday life, an estimate of "what's likely" can be accompanied, consciously or unconsciously, with an estimate of how accurate the first estimate is, and in principal one could have estimates of those accuracies as well, and so on. This capacity for self reflection is a big part of being sentient (or at least the "illusion of sentience", whatever that means). In this scenario, such self-reflective estimates are in principle impossible (because we don't assume that the host's distribution $Q$ was chosen according to any distribution on distributions), which makes it a very unfamiliar situation. This is just one reason why mixing "random" and "arbitrary" can make for a weird-seeming model of reality, and I blame this mixture for allowing the scenario to appear paradoxical. - Thank you for considering! In response to your points: (1) I'm not sure why the host has to follow any fixed distribution, unless we define his behavior over (unbounded) time to be that distribution. But I agree that the notion of "arbitrary" is what makes this weird! (2) This might be fixable. Say that you reveal your probability distribution to the host ahead of time, and he promises to choose two numbers that lead to an epsilon of at least a given size. I don't think this changes the proof, or the paradox. Yet it does provide a level of confidence regarding your knowledge in the game. – Bill Thies Dec 15 2009 at 21:26 I would add that I agree the assumptions are unusual, and perhaps that is to blame. But is there a better way to model this situation? If you played the game for unbounded time, could you beat the house? If yes, then that seems like a paradox to me. – Bill Thies Dec 15 2009 at 21:32 (1) If you don't assume a probability distribution on the hosts outputs, then talking about probabilities of his outputs doesn't make sense! This is just a mathematical prerequisite for probabilistic reasoning. (2) How is the scenario paradoxical at all of the host is cooperating with you? – Andrew Critch Dec 15 2009 at 21:39 (1) Let's say that we define the host's probability distribution as his behavior over time. I think that suffices? In other words, ANY host would draw from some distribution, as defined in this way. It is not that we are imposing an additional assumption on the scenario. (2) Even though the host is cooperating in this way, this does not reveal any information about which number is bigger, which is the source of the paradox. – Bill Thies Dec 15 2009 at 21:51 1 (2) I don't see what you could be wondering about here. If the host conspires with the guest to ensure that the guest is right more than half the time, there is no surprise if the guest succeeds. If you mean to assume the guest also knows the host is conspiring, then this extra information will allow the guest to unsurprisingly predict for himself that he will succeed. If you mean something else, I don't know what it could be. 
– Andrew Critch Dec 16 2009 at 0:41 show 9 more comments It might be useful to the intuition to consider the following simpler strategy: If the revealed number is positive, guess that it is the larger of the two. If the revealed number is negative, guess that it is the smaller of the two. This simple strategy already guarantees you a chance of winning that is always at least 50% and sometimes greater. In particular, your chance of winning is 50% if the numbers are either both positive or both negative, and 100% if they have opposite signs. That's not quite a solution to the original problem, but it works --- or comes close to working --- for exactly the same reason that the actual solution works, and it's easy to understand in an instant. - The problem is that there's no way to define a random sample over all the real numbers, where every real number has equal probability of being chosen. See this related problem and solution by Randall Munroe, the creator of XKCD. - 2 Yes, I agree with this. It is the most simple answer, but the correct one. You say 'Given that the original numbers were chosen "arbitrarily" (i.e., without using any given distribution)'. This is an inconsistent assumption, which leads to the paradox. You have to choose a distribution. In case of a die, you can choose the same probability for each side. In case of infinite set, you get troubles and you should take care that you don't assume something impossible. This is also the cause of the envelope problem paradox. – Lucas K. Jul 4 2010 at 21:00 I always advice for these kinds of problems to simulate it (and put real money on it!!!). Then the wrong thinking becomes clear. You will see that it is rather hard to take an arbitrary number, especially when you have a computer with finite memory. – Lucas K. Jul 4 2010 at 21:04 I think you mean Randall Munroe. – Daniel Asimov Jul 4 2010 at 22:59 I don't think this is the only issue. As Darsh puts it, the surprise, mathematically speaking, is that the same strategy works regardless of the probability distribution on x and y. This is a nontrivial fact which is not accounted for by the observation that we should specify such a distribution. – Qiaochu Yuan Jul 5 2010 at 5:41 I don't agree with this either, and moreover I think the xkcd (continuous) version adds nothing and just muddies things more. Among other things the xkcd answer isn't a "strategy" since there is no finite way to compare two arbitrary real numbers (not even two computable ones). If they are compared to only finite precision, I think there is a way to defeat the strategy. – Daniel Mehkeri Jul 5 2010 at 17:20 show 1 more comment I don't think there is any need to reason about your opponent's probability distribution (and therefore the above explanations of the paradox seem spurious). For concreteness, say that you draw your guess from the Laplace distribution. You can now claim: "For every pair of integers (a,b), the probability of success is strictly greater than 1/2". There's no distribution here -- we can just make this claim about the set of all integer pairs, and so we no longer have to reason about how the opponent comes up with them. - There is a related "paradox" known as the two envelopes problem which has a nice article on wikipedia. - 8 Actually, I find it to be one of the worst written Wikipedia articles on mathematics I've seen and it badly needs some attention from a probabilist. 
For example, it starts trying to resolve the problem by saying "You cannot denote an unknown amount chosen by chance by a variable" which rules out any argument that makes use of random variables. And having claimed that you can't write a random variable in this way, it proceeds to use it anyway. – Dan Piponi Dec 15 2009 at 22:48 I'm finding most of the above either wrong or unnecessarily obtuse. My take is that the paradox arises in two parts. The first is a simple trick. The game looks like it reduces to: (Incorrect:) The game show host puts the prize behind one of two doors. He tells you an arbitrary unrelated integer m. You pick a door. But the integer m is actually related. The game really reduces to: The host puts the prize behind one of two doors, and picks an integer n. You pick a door. He tells you m, computed as follows: if you picked the right door, m=n, if not, m=n+1 (or greater). You then get an option to switch doors before he opens them. Now, for the sake of brevity, let's just take it for granted that you should pick a random door initially, and privately choose an integer r. On hearing m, you should switch doors if r<m. Let's also take the worst case that m=n+1 when you initially guess the wrong door. The whole thing reduces to the following: The host picks an integer n. You guess what the integer is. If you guess right, he gives you the prize. If not, he flips a coin to decide if you win. Let me emphasise that up to this point, the reduction involves only integers and makes no assumptions on probability distributions for guessing arbitrary integers. The second part of the paradox is just this: Proposition: You have a strictly positive probability of correctly guessing an arbitrary integer. Here comes in all the talk of distributions, interpretations, random versus arbitrary, etc. - As previously explained above, the success of the proposed contestant strategy is entirely dependent upon the assumption made by the contestant about the host's method of choosing x and y. An example where a fixed contestant strategy almost always fails. Suppose the host always chooses an "unexpectedly" large negative number for x, say x = -n, where n has >1000 digits, and r is more "ordinary", say having <100 digits. The method depends on the contestant betting that x < r < y. If the host's uniform strategy has always chosen y = x-1, then the contestant will almost certainly lose round after round of play. (If, after numerous rounds of this, the contestant then realizes this strategy of the host and tries to change his/her betting strategy, the host can adapt as well, choosing y = -x = n for later rounds) - Consider the following simplified problem: I will pick two numbers from the non-zero integers with probability proportional to $\frac{1}{|k|}$. So: $$p(|k|) \propto \frac{1}{|k|}$$ And we play the same game as above, I reveal one and ask you whether the other is smaller or not. Now, what is p(5)? This is just $\frac{1}{R} \frac{1}{|5|}$, where $R$ is the renormalization constant. So: $$R = 2 \sum_{k=1}^{\infty} \frac{1}{k}$$ whoops! So, now, what does this mean? is $p(5) = 0$? is $p(k) = 0$ for all k? Does this problem make any more sense if I took the probability proportional to $\frac{1}{|k|}^\alpha$ as $\alpha \to 0$? What does it mean to pick a number whose probability is 0? How would I even write down these numbers to compare them? They're obviously much larger than the number of atoms in the universe...Do I have an algorithm to spit out the bits and try to compare them that way? 
This problem is a syntax error. The premise that you can 'pick a random number' from all integers is invalid. You can't do this. The reason why your (faulty) logic appears to work is because you're implicitly capping the distribution from $[-M,M]$ and then waving your hands by taking $M \to \infty$, which can't be done. - 2 Sorry, but that's just wrong. That's not what this paradox is about at all. There is no problem choosing a probability distribution on the integers so that each integer gets positive probability, or in taking a continuous distribution on the reals so that each nonempty interval has positive probability. – Douglas Zare Feb 19 2010 at 6:23
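A quick simulation of the strategy from the question is easy to write and makes the quantifier discussion above concrete (this is my own sketch; the particular hidden pairs and the choice of a normal distribution for $r$ are arbitrary): for any fixed pair $(x, y)$ the empirical win rate sits above 1/2, but the excess shrinks as the pair moves away from where the chosen distribution puts its mass.

```python
import random

def play(x, y, trials=200_000):
    """Empirical win rate of the 'compare to a random threshold r' strategy
    for one fixed pair of distinct integers hidden behind the doors."""
    wins = 0
    for _ in range(trials):
        r = random.gauss(0, 10)          # continuous threshold distribution (arbitrary choice)
        revealed, hidden = (x, y) if random.random() < 0.5 else (y, x)
        guess_hidden_is_smaller = r < revealed
        if guess_hidden_is_smaller == (hidden < revealed):
            wins += 1
    return wins / trials

for x, y in [(3, 7), (-20, 50), (1000, 1001), (10**6, 10**6 + 1)]:
    print((x, y), play(x, y))
```

For the last two pairs the advantage is still strictly positive in principle, but far too small to see in any finite simulation, which is exactly the "no uniform lower bound" point made in several of the answers.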
http://mathhelpforum.com/trigonometry/40367-inverse-tangent-equation.html
# Thread: 1. ## Inverse Tangent equation Hi, Got a bit of a problem. I have an equation of a curve y=c*atan(d(x+e))-x and I want to find the values of x where y equals zero. Any ideas? I differentiated my equation so I could find the peak, if that helps anyone. dy/dx = (c*d/(1+(d(x+e))^2))-1 Thanks in advance, Andrew 2. Originally Posted by ajwillshire Hi, Got a bit of a problem. I have an equation of a curve y=c*atan(d(x+e))-x and I want to find the values of x where y equals zero. Any ideas? I differentiated my equation so I could find the peak, if that helps anyone. dy/dx = (c*d/(1+(d(x+e))^2))-1 Thanks in advance, Andrew Solving $0 = c\,\tan^{-1}(d(x + e)) - x$ for x has nothing to do with the derivative (unless you are trying something like Newton's method to approximate it). This equation cannot be solved exactly as it stands. (There may be some specific values of c, d, and e such that it can be, but I can't think of any at the moment.) -Dan
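Since the zeros generally have to be found numerically, here is a small sketch of the Newton's method approach Dan alludes to, using the derivative Andrew already computed (the values of c, d, and e below are made-up examples, not from the thread):

```python
import math

# f(x) = c*atan(d*(x+e)) - x  and its derivative f'(x) = c*d/(1+(d*(x+e))**2) - 1
def f(x, c, d, e):
    return c * math.atan(d * (x + e)) - x

def fprime(x, c, d, e):
    return c * d / (1 + (d * (x + e))**2) - 1

def newton(x0, c, d, e, tol=1e-12, max_iter=100):
    """Newton's method for a root of f near x0."""
    x = x0
    for _ in range(max_iter):
        step = f(x, c, d, e) / fprime(x, c, d, e)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example parameters (purely illustrative)
c, d, e = 2.0, 1.5, -0.3
for x0 in (-5.0, 0.1, 5.0):          # different starting points can find different roots
    r = newton(x0, c, d, e)
    print(r, f(r, c, d, e))          # residual should be ~0
```

Depending on c, d, and e, the curve can cross zero one or several times, so it helps to sketch or sample f first and start Newton's iteration near each sign change.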
http://mathoverflow.net/questions/120100?sort=votes
## In your opinion, what are the relative advantages of n-fold categories and n-categories? For example 2-categories seem simpler at first compared to double categories because the latter is a "wider environment" (cf Bertozzini), however when doing calculations, many people prefer to use double categories (cf Brown) and when we describe them in the 2-arrows-only language it seems that double categories have fewer axioms. (Recall the theorem of Brown-Mosa-Spencer that the category of 2-categories is equivalent to the category of edge symmetric double categories with connection and the theorem by Ehresmann that all double categories can be embedded in an edge symmetric double category.) It seems that more work has been done on n-categories though, why is that? Is it easier to start with n-categories? - I cannot give an answer, but usually what is studied is what appears "in nature": maybe n-cats just appear more often in nature than n-fold cats?... – Qfwfq Jan 28 at 11:53 There can be different (technical) correct answers depending on people's experiences. – Rachel Jan 28 at 13:24 1 There is quite a lot of mathematics related to your question. In the future, please try to look for other titles --- "in your opinion" is a poor way to frame MathOverflow discussions. – Theo Johnson-Freyd Jan 28 at 15:54 OK thanks, I think I'm starting to get the hang of it... if I had deleted the first 3 words and just asked "What are the relative advantages..." then it would have been OK, and following with, "Why is it that since n-fold categories are deemed to be more efficient, more work seems to have been done on n-categories?" – Rachel Jan 29 at 14:39 1 @Rachel: Yes, I think the title without the first three words would be mildly better. I also think the question has a few places that approach (but by no means surpass) the "argumentative/subjective" threshold. In any case, don't take my comment too strongly --- aesthetics is certainly one of the driving forces in n-category theory, and perhaps in all of mathematics. – Theo Johnson-Freyd Jan 30 at 5:21 ## 2 Answers I think there are good technical reasons for preferring one particular mode in certain situations, depending on how easily certain concepts are expressed. For me, the main intuition since 1965 was based on the diagram and the idea that the big square should be the composition of all the little squares. This I termed "algebraic inverses to subdivision". Subdivision is an important tool in mathematics for local-to-global problems, which are themselves an important range of problems in mathematics and its applications. I found that Ehresmann's notion of double category, or groupoid, was very suited to express this notion, and was easy to generalise to higher dimensions. This led to proofs of what we now call Higher Homotopy Seifert-van Kampen Theorems, and for which the globular notions were not of any help. The notion of strict higher cubical category or groupoid is also useful for formulating and proving monoidal closed structures, due to the rule $I^m \times I^n \cong I^{m+n}$, see the final section of this paper in Advances in Mathematics, 170 (2002) 71--118. The paper Ellis, G.~J. and Steiner, R., Higher-dimensional crossed modules and the homotopy groups of $(n+1)$-ads. J. Pure Appl. Algebra 46 (1987) 117--136, relates certain $(n+1)$-fold groupoids, i.e. 
those in which one structure is a group, to a fascinating structure called a crossed $n$-cube of groups, and which is closely related to classical ideas in the homotopy theory of $n$-ads, see particularly Theorems 3.7, 3.8, which have not been obtained by other methods. On the other hand, to discuss the notion of commuting cube in a strict cubical category with connections, the relation with the globular case was crucial, see this paper by Higgins. The notions of the globular, simplicial, or cubical sites have been well studied. I am not sure that globular sets are very convenient. What seems not to have been well studied, or even studied, is the underlying geometric site for $n$-fold categories, since it is the geometry of cubes in which all the directions are distinct, so the direction $i$ faces of a cube are distinct from the direction $j$ faces if $i \ne j$. Also weak cubical categories do not seem much studied, though the classical example is the cubical singular complex of a space. - Ronnie, the link to the Advances paper returns a "not found" error. – Vidit Nanda Jan 28 at 16:26 Thanks Vel. Corrected. (It was on my computer but had not been copied over.) – Ronnie Brown Jan 28 at 17:12 1 Adding to Ronnie's comment on crossed n-cubes of groups. These are very very easy to define and give models for homotopy n+1-types. The corresponding n+1-groupoid will have a lot of seemingly extra structure whose details are not that clear once you get to n = 4. The reason, intuitively, is that the crossed n-cube lays things out for you, but at the cost of a lot of repetition of the information, while the n-category folds it all into a small space, so naturally things interact in an apparently more complex way. – Tim Porter Jan 29 at 8:46 I believe it is more a matter of taste; personally I find n-fold categories easier and simpler than n-categories. For me n-fold categories are more natural and so are easier, for different reasons. For a start, one interesting thing is that the various sources and targets of the composition are given just by $(k-1)$-cells (faces), whereas for $n$-categories the sources are given by $i$-cells for each $i < k$. This gives an intuitive representation of $k$-cells as $k$-dimensional cubes, with an orientation for each pair of opposite faces, and a representation of composition as pasting cubes along the faces coherently with these orientations. To do something similar with $n$-categories you have to work with $k$-cells as $k$-globes and see compositions as a sort of pasting of globes which also involves deformations of such globes, and so (at least for me) it is a little more difficult to picture. On the other hand, this cubical approach has proven to make computations easier to write down: consider the case of the fundamental group, where for computations it is usually preferred to use maps from the cubical interval rather than maps from spheres. Another point in favor of n-fold categories is that every n-category can be seen as an n-fold category in which all cells have collapsed faces. If something else comes to my mind, I reserve the right to add something later. :) - One of the matters of taste is to have first definitions which are symmetric and without choices, and this applies to $n$-fold categories. 
This aesthetic advantage, and others, apply to the use of the homotopy double groupoid of a pair of spaces, rather than the second relative homotopy group as a crossed module. Of course for special reasons, such as I mentioned above, and also for computation, one often needs to move to an asymmetric situation. – Ronnie Brown Jan 28 at 21:01
http://mathematica.stackexchange.com/questions/18743/calculating-volume/18753
# Calculating volume [closed] I'm new in mathematica and I'm stucked on how I can get the volume of a solid created by the inequation: ````(radFld[Grp, "Bx", {x, y, z}] - Btotal)/Btotal <= Value ```` where radFld is a function from the plug-in Radia and it does calculs given an x and z to find a magnetic field of the configuration of magnetics given by Grp. In other words: `radFld[Grp, "Bx", {x, y, z}] - Btotal)/Btotal = f(x,y,z)` Also, "vmax" is a constant. To get the volume, I tried this: ````Integrate[Boole[(radFld[Grp, "Bx", {x, y, z}] - Btotal)/Btotal <= vmax], {x, -3, 3}, {y, -3, 3}, {z, -10, 10}] ```` but my results seems not to be correct, as it goes from 0 to 540 and does not assume any other value between this range, whatever is "vmax". Also, I tried this to make sure the region has been modified with "vmax": ````RegionPlot3D[(radFld[Grp, "Bx", {x, y, z}] - Btotal)/Btotal] <= vmax , {x, -3, 3}, {y, -3, 3}, {z, -10, 10}] ```` and I got different regions for different "vmax". Is there any other suggestion to get the volume or of what I am doing wrong? Thanks in advance - Hi Felipe, welcome to Mathematica SE. – Murta Jan 30 at 11:10 1 Hi Luiz! Welcome to this site. I think an explicit form of your inequality (i.e. `(radFld[Grp, "Bx", {x, y, z}] - Btotal)/Btotal <= Value`) should be helpful for others to understand what happens here. – Silvia Jan 30 at 11:18 – Luiz Felipe Santos Jan 30 at 12:09 1 Have you tried `NIntegrate`? – Michael E2 Jan 30 at 13:47 2 It's almost certain that your problem has to do with the particulars of your integrand. If you can't provide `radFld`, it would be good to provide at least a minimal example function that shows the problem. – Jens Mar 23 at 16:41 show 6 more comments ## closed as too localized by J. M.♦Apr 21 at 15:15 This question is unlikely to help any future visitors; it is only relevant to a small geographic area, a specific moment in time, or an extraordinarily narrow situation that is not generally applicable to the worldwide audience of the internet. For help making this question more broadly applicable, see the FAQ. ## 1 Answer This is too long for a comment, but I can remove this answer later. I don't really understand your sentence but my results seems not to be correct, as it goes from 0 to 540 and does not assume any other value between this range, whatever is "vmax". Maybe you can try to rephrase this, so that it will become crystal clear, what you mean ;-) Additionally, the approach you already mentioned should work. Let's assume a simple example: a sphere. We can define a sphere in the same way you defined your function ````f[v_] := Sqrt[v.v] RegionPlot3D[f[{x, y, z}] < 1, {x, -2, 2}, {y, -2, 2}, {z, -2, 2}] ```` Now you can use `NIntegrate` in the same you already showed with `Integrate` ````NIntegrate[Boole[f[{x, y, z}] < 1], {x, -2, 2}, {y, -2, 2}, {z, -2, 2}] ```` and you get `4.18879` which happens to be $$\frac{4}{3}\pi$$ which is of course the correct solution. You can try other radii and you'll see that the result is correct. - lang-mma
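The `radFld` call above can't be reproduced without Radia, but the Boole-style volume integral can be sanity-checked against a stand-in region. Below is a rough Monte Carlo sketch in Python; my stand-in `f` is the sphere from the answer above, not the Radia field, and the bounding box is assumed. The point is only that integrating an indicator function over a box does recover the region's volume.

```python
import numpy as np

# Stand-in for the (unavailable) Radia field expression: the sphere |v| < 1
# used in the answer above.  Replace f with the real field ratio if available.
def f(x, y, z):
    return np.sqrt(x**2 + y**2 + z**2)

rng = np.random.default_rng(0)
n = 1_000_000
pts = rng.uniform(-2.0, 2.0, size=(n, 3))          # sample the bounding box [-2, 2]^3
inside = f(pts[:, 0], pts[:, 1], pts[:, 2]) < 1.0   # the Boole[...] indicator
volume = 4.0**3 * inside.mean()                     # box volume times hit fraction
print(volume)   # about 4.19, i.e. 4*pi/3
```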
http://mathhelpforum.com/calculus/39846-fourier-integral-representation.html
# Thread:

1. ## Fourier integral representation

Having trouble with the following: finding the Fourier integral representation for

f(t) = te^-|t|

I've got an answer but I need verification.

2. Originally Posted by taltas
Having trouble with the following: finding the Fourier integral representation for f(t) = te^-|t|. I've got an answer but I need verification.

Showing your working and answer might expedite the process of helping you. You did realise that f(t) is odd, yes? That simplifies the calculation .....

3. I've been working on a similar problem myself. I've plotted the graph in MATLAB and see that it is an odd function, equating all A terms to 0. I'm having trouble doing the integral of t*exp(-abs(t))sin(wt) though. Any help would be greatly appreciated!!

4. Originally Posted by taltas
Having trouble with the following: finding the Fourier integral representation for f(t) = te^-|t|. I've got an answer but I need verification.

You will have to sort out the constants and a few other things yourself, but we are interested in:

$F(\omega)=\int_{-\infty}^{\infty} t e^{-|t|}e^{i \omega t} ~dt$

Split the integral into two parts:

$F(\omega)=\int_{0}^{\infty} t e^{-t}e^{i \omega t} ~dt +\int_{-\infty}^{0} t e^{t}e^{i \omega t} ~dt$

In the second integral change the variable $\tau=-t$:

$F(\omega)=\int_{0}^{\infty} t e^{-t}e^{i \omega t} ~dt +\int_{\infty}^{0} \tau e^{-\tau}e^{-i \omega \tau} ~d \tau$

Now the second integral is minus the complex conjugate of the first, so we can write:

$F(\omega)=2 i \,{\rm{Im}} \left[ \int_{0}^{\infty} t e^{-t}e^{i \omega t} ~dt\right]= 2 i \,{\rm{Im}} \left[ \int_{0}^{\infty} t e^{-(1-i \omega) t} ~dt\right]$

and the integral inside the rightmost brackets can be done by parts.

RonL
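For completeness, the last integral can be evaluated explicitly (this step is my own addition and is not in the thread):

```latex
\int_{0}^{\infty} t\, e^{-(1-i\omega)t}\,dt = \frac{1}{(1-i\omega)^{2}}
  = \frac{(1+i\omega)^{2}}{(1+\omega^{2})^{2}},
\qquad
F(\omega) = 2i\,\operatorname{Im}\!\left[\frac{(1+i\omega)^{2}}{(1+\omega^{2})^{2}}\right]
  = \frac{4i\omega}{(1+\omega^{2})^{2}}.
```

The result is purely imaginary, as expected for an odd $f(t)$; in particular $\int_{0}^{\infty} t e^{-t}\sin(\omega t)\,dt = 2\omega/(1+\omega^{2})^{2}$, which is the building block for the integral post 3 was stuck on (up to interval and normalization conventions).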
http://math.stackexchange.com/questions/240364/the-squares-of-an-8-8-chessboard-are-colored-black-or-white
# The squares of an $8 × 8$ chessboard are colored black or white. Prove that no matter how we color the chess board, there must be two L-regions that are colored identically. Explanation: An L-region is a collection of $5$ squares in the shape of a capital L. Such a region includes a square (the corner of the L) together with the two squares above and the two squares to the right. Related Topics: Pigeonhole principle - 1 Can the two L's partly overlap? If so, there are $36$, then use Pigeonhole. – André Nicolas Nov 19 '12 at 4:37 i don't think it matters whether they overlap or not as long as we show such two L's exist. how do you get 36? – UH1 Nov 19 '12 at 4:48 Consider the location of the "middle" square. It can be on any of the little squares in the bottom left $6\times 6$ subsquare of the original chessboard. It has to start the third row down, and can occupy $6$ positions in that row. Same for next row down, and so on all the way to the bottom. – André Nicolas Nov 19 '12 at 4:52 So basically the question can be answered like this: Since there are 2^5 = 32 ways to color an L figure and there are 36 L's possible on the board, by the pigeonhole principle, there are always two L's that are covered identically. Is that complete and correct in your view? – UH1 Nov 19 '12 at 4:53 The question can be answered as in the post by Amr, except that the $64$ (and $\gt 64$) has to be replaced by $36$. But that's big enough, though not by much. I just saw your edited comment. It is coloured identically, not covered. And probably you need to explain why $36$. – André Nicolas Nov 19 '12 at 4:56 show 3 more comments ## 1 Answer Let A be the set of L-regions and S be the set lower left 6x6 squares. Define a function $f:A→ S$ such that f sends each L-region to the square at its corner. Clearly, f is onto. Therefore $|A|>=|f(A)|>=|S|=36$. Number of ways to color an L region is $2^5=32$ Thus, by the pigeonhole principle we know that there are two L-regions with the same color - OH. I just read his question again and I noticed that he specified directions ( 2 right and 2 above). I thought that 2 below 2 left is possible as well – Amr Nov 19 '12 at 4:46 Actually, if orientation does not matter. We can deduce that we have three with the same color – Amr Nov 19 '12 at 4:47 I will edit my answer, but I have to leave now. Thank you – Amr Nov 19 '12 at 4:47 can you please edit your answer quickly? thanks, much appreciated! – UH1 Nov 19 '12 at 4:49 Thank you Andre Nicolas – Amr Nov 19 '12 at 6:12
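A quick computational check of the counting in the answer (coordinates are my own choice: rows and columns indexed 0–7, with the corner square needing two squares above and two to the right on the board):

```python
import random
from itertools import product

N = 8
# An L-region: its corner square (r, c) plus the two squares above and the two to the right.
def l_region(r, c):
    return [(r, c), (r + 1, c), (r + 2, c), (r, c + 1), (r, c + 2)]

corners = [(r, c) for r, c in product(range(N), repeat=2) if r + 2 < N and c + 2 < N]
print(len(corners))   # 36 L-regions fit on the board
print(2 ** 5)         # only 32 possible black/white colorings of 5 squares

# Pigeonhole in action on a random coloring: two L-regions must be colored identically.
random.seed(1)
board = {(r, c): random.choice("BW") for r, c in product(range(N), repeat=2)}
patterns = {}
for r, c in corners:
    key = tuple(board[sq] for sq in l_region(r, c))
    patterns.setdefault(key, []).append((r, c))
print(any(len(v) >= 2 for v in patterns.values()))   # True
```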
http://en.wikipedia.org/wiki/Decision_rule
# Decision rule In decision theory, a decision rule is a function which maps an observation to an appropriate action. Decision rules play an important role in the theory of statistics and economics, and are closely related to the concept of a strategy in game theory. In order to evaluate the usefulness of a decision rule, it is necessary to have a loss function detailing the outcome of each action under different states. ## Formal definition Given an observable random variable X over the probability space $\scriptstyle (\mathcal{X},\Sigma, P_\theta)$, determined by a parameter θ ∈ Θ, and a set A of possible actions, a (deterministic) decision rule is a function δ : $\scriptstyle\mathcal{X}$→ A. ## Examples of decision rules • An estimator is a decision rule used for estimating a parameter. In this case the set of actions is the parameter space, and a loss function details the cost of the discrepancy between the true value of the parameter and the estimated value. • Out of sample prediction in regression and classification models.
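A minimal sketch of the definition in code (my own toy example, not from the article): the observation is an i.i.d. sample, the action space is the parameter space, the decision rule is the sample mean used as an estimator, and a squared-error loss scores the chosen action against the true state.

```python
import numpy as np

def delta(x):
    """Decision rule: map an observation (a sample) to an action (a parameter estimate)."""
    return float(np.mean(x))

def loss(action, theta):
    """Squared-error loss of taking `action` when the true parameter is `theta`."""
    return (action - theta) ** 2

rng = np.random.default_rng(0)
theta = 2.5                                      # unknown true state, fixed for the demo
x = rng.normal(loc=theta, scale=1.0, size=100)   # the observable random variable X
action = delta(x)
print(action, loss(action, theta))
```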
http://mathhelpforum.com/advanced-math-topics/206751-proving-union-intersection-indexed-family-sets.html
# Thread: 1. ## Proving union and intersection of indexed family of sets I have to prove that the union for n an element of the natural numbers of the indexed set D=(-n,1/n) is equal to (-infinity,1). And that the intersection for n an element of the natural numbers of the indexed set D=(-n,1/n) is equal to (-1,0]. I've asked the professor twice now for help and he hasn't been able to explain it at all. 2. ## Re: Proving union and intersection of indexed family of sets show inclusions in both directions. for the intersection :what x is between -n and 1/n for all n? 3. ## Re: Proving union and intersection of indexed family of sets Originally Posted by skippenmydesign I have to prove that the union for n an element of the natural numbers of the indexed set D=(-n,1/n) is equal to (-infinity,1). And that the intersection for n an element of the natural numbers of the indexed set D=(-n,1/n) is equal to (-1,0]. I think you are confused by the notation. Let $D_n = \left( { - n,\frac{1}{n}} \right)$. Now consider two examples: $D_3 \cup D_1 = \left( { - 3,1} \right)\;\& \;D_3 \cap D_1 = \left( { - 1,\frac{1}{3}} \right)$ Now let's do an indexed example: $\bigcup\limits_{k = 1}^{100} {D_k } = \left( { - 100,1} \right)\;\& \;\bigcap\limits_{k = 1}^{100} {D_k } = \left( { - 1,\frac{1}{{100}}} \right)$ Note how the right and left endpoints are limits. Do you see how it works? 4. ## Re: Proving union and intersection of indexed family of sets Ok, I'm not familiar with latex so I can't get this into symbolic form but I'm going to attach a picture of what the question is. I have to prove it using the definition that for two things to be equal they each have to be a subset of the other. Attached Thumbnails 5. ## Re: Proving union and intersection of indexed family of sets Originally Posted by skippenmydesign Ok, I'm not familiar with latex so I can't get this into symbolic form but I'm going to attach a picture of what the question is. I have to prove it using the definition that for two things to be equal they each have to be a subset of the other. Frankly I have no idea why you posted exactly what you posted exactly what I had already posted. $\lim _{n \to \infty } \bigcup\limits_{k = 1}^n {D_k } = \left( { - \infty ,1} \right)\;\& \,\;\;\lim _{n \to \infty } \bigcap\limits_{k = 1}^n {D_k } = \left( { - 1,0} \right]$ BTW: Why not learn to use LaTeX? 6. ## Re: Proving union and intersection of indexed family of sets because I need a proof for the answer? not the answer itself? I know the answer but not the proof for it. 7. ## Re: Proving union and intersection of indexed family of sets Also like I said I know from talking to the professor that the proof needs to use the definition of for things to be equal they have to be subsets of each other. I hope that is clear. Also I might learn latex sometime but not tonight! 8. ## Re: Proving union and intersection of indexed family of sets try proving this "in-between" step: for the first problem- show that if k < m, that (Dk)U(Dm) = (-m,1/k). what is the smallest k can be, and what is the largest m can be? for the second problem- show that for k < m that (Dk)∩(Dm) = (-k,1/m). again: how small can k be, and how large can m be? (the answer for k in both cases should be "easy". answering for m might take a little thought). try drawing a picture with k = 3, and m = 4. draw another one with k = 1, and m = 10. what do you notice?
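A small numerical check of the two claimed identities (my own illustration, truncating "all $n$" to $n \le N$ for a large $N$):

```python
from fractions import Fraction

N = 10_000  # finite stand-in for "all natural numbers n"

def in_D(x, n):
    return -n < x < Fraction(1, n)          # D_n = (-n, 1/n)

def in_union(x):
    return any(in_D(x, n) for n in range(1, N + 1))

def in_intersection(x):
    return all(in_D(x, n) for n in range(1, N + 1))

for x in [Fraction(-5000), Fraction(-1), Fraction(-1, 2), Fraction(0),
          Fraction(1, 100), Fraction(99, 100), Fraction(1)]:
    print(float(x), in_union(x), in_intersection(x))
# The union catches every point below 1 (even -5000) but not 1 itself, matching
# (-infinity, 1); the finite intersection keeps exactly (-1, 1/N), which shrinks
# to (-1, 0] as N grows.
```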
http://physics.stackexchange.com/questions/14802/trouble-with-constrained-quantization-dirac-bracket/14807
# Trouble with constrained quantization (Dirac bracket) Consider the following peculiar Lagrangian with two degrees of freedom $q_1$ and $q_2$ $$L = \dot q_1 q_2 + q_1\dot q_2 -\frac12(q_1^2 + q_2^2)$$ and the goal is to properly quantize it, following Dirac's constrained quantization procedure. (This is a toy example related to Luttinger liquids and the fractional quantum Hall effect. The degrees of freedom $q_1$ and $q_2$ correspond to two bosonic modes $a_k$ and $b_k$.) First, note that the equations of motion are $$\dot q_1 = -q_2 ,\quad \dot q_2 = -q_1 ,$$ which shows that this model is not complete nonsense as it carries interesting dynamics. (EDIT: Many thanks to the answerers for pointing out my silly mistake: these equations are wrong, the correct ones would be $q_1=q_2=0$.) However, the question is How to quantize the above Lagrangian in a systematic fashion? (I'm actually trying to quantize a different model, but with similar difficulties, hence the emphasis on "systematic"). The usual procedure of imposing canonical commutation relations does not work because the velocities cannot be expressed in terms of the conjugate momenta. According to Dirac, we have to interpret the equations for the canonical momenta as constraints $$\phi_1 = p_1 - \frac{\partial L}{\partial \dot q_1} = p_1 - q_2 \approx 0$$ $$\phi_2 = p_2 - \frac{\partial L}{\partial \dot q_2} = p_2 - q_1 \approx 0$$ The hamiltonian is $$H = \frac12 (q_1^2 + q_2^2)$$ Unfortunately, the constraints have poisson brackets $\lbrace\phi_1,\phi_2\rbrace = 0$ and the secondary constraints read $$\lbrace \phi_1 , H \rbrace = q_1 \approx 0$$ $$\lbrace \phi_2 , H \rbrace = q_2 \approx 0$$ Clearly, these weirdo constraints no longer have any dynamics and no useful quantization will come out of them. Is there a systematic method to quantize this theory, for example BRST quantization? Or did I simply make a mistake while trying to apply Dirac's constrained quantization procedure? - 2 Somehow I could not get (classical) EOM from your Lagrangian. For instance dL/dq1 = q2' - q1. dL/dq1' = q2. Using dL/dq1 = d[dL/dq1']/dt I get q2' - q1 = q2', which does not look like a valid EOM. – valdo Sep 18 '11 at 15:26 @Greg,if you change the relative sign between the derivative terms, you would have got an interesting problem, where one can exercise the treatment of Dirac's first class constraints, or equivalently symplectic reduction or the projection onto the lowest landau level. – David Bar Moshe Sep 19 '11 at 9:28 ## 2 Answers The classical equations of motion are not affected by changing the Lagrangian $$L \qquad \longrightarrow \qquad L' = L+ \frac{dF}{dt}$$ by a total time derivative. Put $F= -q_1 q_2$. Then $$L' = -\frac{1}{2}(q_1^2 + q_2^2).$$ This Lagrangian $L'$ does not contain time derivatives, and thus there are no dynamics. The classical equations of motion are $$q_1=0 \qquad \mathrm{and}\qquad q_2=0,$$ in conflict with what is said in the original question formulation (v1). - I am so dumb. Thanks. :-) – Greg Graviton Sep 18 '11 at 17:06 The problem is that this Lagrangian is directly integrable with respect to time, and so the time-dependence is determined by a boundary condition, and there are no locally defined conjugage momenta. 
Consider just the portion of this action that has explicit time derivatives $$S=\int dt \int d^{3}x \,\,q_1 {\dot q_{2}} +q_{2} {\dot q_{1}}$$ Now, perform the transformation: $$\begin{align*} q_{1}&=a Q_{1} + b Q_{2}\\ q_{2}&=c Q_{1} + d Q_{2} \end{align*}$$ If $bc+ad=0$, then this part of the action becomes: $$\begin{align*} S&=\int d^{3}x\int dt\, A Q_{1}{\dot Q_{1}} + B Q_{2} {\dot Q_{2}}\\ &=\int d^{3}x\,\left[\frac{A}{2} Q_{1}^{2} + \frac{B}{2} Q_{2}^{2}\right]_{t_{0}}^{t^{f}} \end{align*}$$ Where $A$ and $B$ are constants that depend on the choice of $a,b,c,d$ under the constraint that $bc+ad=0$. Now, there are no explicit time derivatives in the action at all (and we would be free to transform back to the $q_{1},q_{2}$ coordinates if we wished, and formally, the conjugate momenta are zero. This is why you must be able to solve for the velocities when doing the Legendre transform. Otherwise, the conjugate momenta will be ill-defined and the Lagrange transformation will fail. This action is topological and doesn't have a well-defined local Hamiltonian dynamics. - Thanks! While a topological action would be interesting, I finally found the mistake I made in my original problem. – Greg Graviton Sep 19 '11 at 11:26 @Greg: and yeah, Qmechanic's solution to this is much cleaner than mine. :) – Jerry Schirmer Sep 19 '11 at 11:55
http://mathoverflow.net/questions/86042?sort=oldest
## How many vertices/edges/faces at most for a convex polyhedron that tiles space?

I wonder if this problem has already been examined before: Consider a convex polyhedron that tiles $\mathbb R^3$. What is the maximum number of vertices/edges/faces that such a polyhedron can have?

Intuitively, it seems that the truncated octahedron is best possible for edges (36) as well as for vertices (24). Its packing is also known as the "bitruncated cubic honeycomb". For faces, we can do better than 14, as there is a polyhedron with 16 faces that can be obtained as follows: take a truncated tetrahedron and add on each triangular face a small pyramid that is a quarter of a tetrahedron. The tessellation by it is known as the quarter cubic honeycomb, with each small tetrahedron "distributed" among its four neighbors.

Questions: Are these best possible? What about the corresponding problem in higher dimensions?

In $\mathbb R^4$, it looks like the polytope yielded by the equivalent of the "quarter cubic honeycomb" tiles it. This one, based on the truncated 5-cell, has 25 cells, 60 faces, 60 edges, and 25 vertices, and so for cells and vertices it does again (slightly) better than the 24-cell with its 96 faces, 96 edges, and 24 vertices.

-

## 2 Answers

In $\mathbb{R}^3$ this is a famous problem. See this nice reference (Danzer, Grünbaum, Shephard -- Does every type of polyhedron tile three-space). The best example at the time of the writing had 38 faces (an example of Engel). For a lattice (periodic) tiling, the problem was solved (I think) by Delone (AKA Delaunay).

- Thank you very much. So it seems quite hopeless to look for something better in $\mathbb R^4$ in spite of knowing that there are probably solutions with at least 50 cells... – spanferkel Jan 19 2012 at 17:32 Nothing is hopeless, but serious thought is called for... – Igor Rivin Jan 22 2012 at 15:16

To supplement Igor's citation of Engel's polyhedron, here it is: the figure (not reproduced here) is from the paper by Branko Grünbaum and G. C. Shephard, "Tilings with congruent tiles." Bull. Amer. Math. Soc. (N.S.), Volume 3, Number 3 (1980), 951-973. Engel's paper appeared in the journal Kristallographie in 1980. It is certainly remarkable that this polyhedron tiles space!

- Thank you. Nice article! I see on p.961 that others have had the same idea as me a century before... – spanferkel Jan 19 2012 at 17:41 it's only a pity that they didn't ask their computer to provide an illustration how to tile space with it. Hopefully their program is trustworthy! – spanferkel Jan 19 2012 at 21:29
http://math.stackexchange.com/questions/112245/why-can-i-write-a-singular-cardinal-as-the-limit-of-an-increasing-continuous-se
Why can I write a singular cardinal as the limit of an increasing, continuous sequence of cardinals. Let $\kappa$ be a singular cardinal, such that $\operatorname{cf}\kappa = \lambda < \kappa$. Now because $\operatorname{cf}\kappa = \lambda$, then I can write down an increasing sequence of ordinals $\langle \alpha_{\xi} \mid \xi < \lambda \rangle$ such that $\displaystyle\lim_{\xi \rightarrow \lambda}\alpha_{\xi}=\kappa$. But why is it possible to construct an increasing, continuous sequence of cardinals $\langle \beta_{\xi} \mid \xi < \lambda \rangle$? I think that replacing each $\alpha_{\xi}$ with $| \alpha_{\xi}|^+$ yields an increasing sequence of cardinals (since $\kappa$ is not a successor cardinal), but how do I guarantee continuity? Do we require that $\lambda > \omega ?$ Any help would be appreciated. - 1 Answer Suppose $\kappa$ is a limit cardinal, whose cofinality is $\lambda<\kappa$. This means that there exists a strictly increasing sequence $\langle\kappa_\xi\mid\xi<\lambda\rangle$ of cardinals such that $\sup\kappa_\xi=\kappa$. When is a sequence like that is continuous? Exactly if the following condition holds: If $\delta<\lambda$ is a limit ordinal, then $\kappa_\delta=\sup\{\kappa_\beta\mid\beta<\delta\}$. Define a new sequence $\kappa^\prime_\xi$ as: $$\kappa^\prime_\xi=\begin{cases}\kappa_\xi & \xi=\alpha+1\\\sup\{\kappa_\beta\mid\beta<\xi\} &\xi\text{ is a limit ordinal}\end{cases}$$ To see that this sequence is continuous note that whenever $\delta$ is a limit ordinal then $\kappa^\prime_\delta$ is defined to be the correct cardinal (recall that the limit of cardinals is a cardinal). We need to see that $\sup\kappa^\prime_\xi=\kappa$, but since $\kappa^\prime_{\xi+1}=\kappa_{\xi+1}$ their $\sup$ is also the same. - What I mean is, how can I construct the sequence of cardinals with limit $\kappa$. How can I be sure that $\displaystyle\lim_{\alpha \rightarrow \lambda} \aleph_{\alpha} = \kappa$? Is $\kappa$ the $\gamma$-th cardinal? – Paul Slevin Feb 23 '12 at 13:02 @Paul: I don't understand your comment. – Asaf Karagila Feb 23 '12 at 13:18 My goal is to construct a continuous, increasing sequence of cardinals of length $\lambda$ with supremum $\kappa$, given that $\kappa$ is singular and $\operatorname{cf}\kappa = \lambda$. I can only construct an increasing sequence of ordinals which might not be continuous. – Paul Slevin Feb 23 '12 at 13:28 1 @Paul: Since you already have $\langle\aleph_\beta\mid\beta<\lambda\rangle$ which is an increasing sequence approaching $\kappa$, I claim that it has at most $\lambda$ many limit points. Add those and you will have a sequence of length $\lambda$ which is continuous. – Asaf Karagila Feb 23 '12 at 14:03 1 @Paul: No, these are ordinals approaching $\gamma$. Read my comments (and the answer) again more closely. Recall that the cofinality of $\aleph_\alpha$ is $\aleph_\alpha$ if $\alpha$ is not a limit ordinal and it is the cofinality of $\alpha$ if it is a limit ordinal. – Asaf Karagila Feb 23 '12 at 14:55 show 6 more comments
http://nrich.maths.org/257
### Exhaustion

Find the positive integer solutions of the equation (1+1/a)(1+1/b)(1+1/c) = 2

### Code to Zero

Find all 3 digit numbers such that by adding the first digit, the square of the second and the cube of the third you get the original number, for example 1 + 3^2 + 5^3 = 135.

### After Thought

Which is larger cos(sin x) or sin(cos x) ? Does this depend on x ?

# Shades of Fermat's Last Theorem

##### Stage: 5

There are exactly three solutions of the equation $$(x - 1)^n + x^n = (x + 1)^n$$ where $x$ is an integer and $n= 2, 3, 4$ or $5$. Prove this statement and find the solutions.
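The last problem invites a quick computer search before attempting the proof. A brute-force sketch of my own (over an arbitrarily chosen range of $x$) turns up exactly three pairs $(n, x)$, consistent with the statement:

```python
solutions = []
for n in range(2, 6):                      # n = 2, 3, 4, 5
    for x in range(-1000, 1001):           # search window, chosen arbitrarily
        if (x - 1) ** n + x ** n == (x + 1) ** n:
            solutions.append((n, x))
print(solutions)   # three pairs appear; proving there are no others is the exercise
```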
http://physics.stackexchange.com/questions/41990/basic-question-on-experimental-plots
# Basic question on experimental plots On the following Higgs $\rightarrow$ Tau Tau plot, since we are plotting the ratio of $\frac{\sigma}{\sigma_{SM}}$ on the y axis, shouldn't the expected for this be 1? i.e., shouldn't the expected 68% and 95% be centered at a dotted line at 1? Anything else seems to imply we are expecting something other than the Standard Model... - 2 – David Zaslavsky♦ Oct 29 '12 at 16:27 ## 2 Answers In addition to referring you to the previous question and answer that David linked, I will try once more to postulate my interpretation of these graphs, called "Brazil bands". In my opinion they are the phenomenologist's attempt to extract limits out of very few events. Once there are enough events this type of plots and their yoga positions ( catching your right ear behind your back with the left hand) are abandoned, as the Higgs mass plot of CMS shows . The use of these Brazil plots is to concentrate the attention to regions which are not excluded even by scarce data, and thus give a hope of finding a desired higgs there. Now that we have it they are useless. Where the Higgs is the value should be 1 if it is a standard model Higgs. We see in the plot you give above that the measured crossection over the crossection calculated for the standard model Higgs the value is 1 at the value of 125 GeV within errors. Thus it is consistent with the real Higgs seen when the statistics improved. The confusion arises because there are two Monte Carlo simulations entering the "expected" plot. The reason is that one is necessary to get the theoretical value , since it cannot be found analytically, to large enough accuracy so that statistical errors would be irrelevant. The expected curves are curves that in the numerator immitate the data, i.e. if the data has 10 events a monte carlo is generated with 10 events and passed through all the limitations of the experimental setup, and the denominator the pure theory monte carlo. This ratio is distorted : The fewer statistics as the mass increases in collusion with the detector limitations and errors from the ideal Higgs mass at each point create the distorted from 1 ratio seen in your plot. When one has adequate statistics, the one and only Higgs would appear as 1 in the observed ratio and all the rest of the x axis would be depressed bellow 1, since the computed crossection would be much larger for the putative higgs mass over what the data has at that value , since the Higgs is at 125 GeV and only then will the ratio be 1. The expected over observed would be at 1 all the way through, as you observed. As I said when one has enough statistics this type of plots are useless. - No, it shouldn't be one. The dashed line encodes the expected upper bound on the cross section that may be extracted from the same amount of collisions and this expected upper bound isn't one. For values of the new particle's mass where the LHC experiment isn't sufficiently sensitive, the upper bound one may impose may be larger or much larger than the actual Standard Model cross section. Let me explain what is done. You have computers that may "simulate" the LHC according to the laws of the Standard Model without the Higgs boson included. Well, this assumption is mostly true. You run the "simulation" many times and you get a certain number of events of a given type, for example the $\tau^+\tau^-$ final states discussed in this chart. 
None of these events is really caused by the new particle – in this case the Higgs – because the simulation assumes that there is no new particle (and Higgs is considered new at this stage). From this number of collisions with a specified outcome, you determine what the cross section $\sigma$ for the Higgs production is. It's only positive if there is a statistical upward fluke in the number of these final states – over the known non-Higgs, old physics events that are known as the "background". You get this cross section for the Higgs production with some error margin etc., more precisely with some distribution. Now, using this distribution for the Higgs cross section (imagine a Gaussian one but the CERN folks actually calculate the exact shape which is not quite Gaussian) you will be able to say that the Higgs cross section is almost certainly not too high because if it were too high, you would have found many more $\tau^+\tau^-$ events. So by running statistical arguments, you determine the upper bound – the maximum Higgs cross section so that you're 95% certain (95% confidence level is "two sigma") that the Higgs cross section can't be higher than this "upper bound". For different runs of the same simulation, this calculated upper bound will be different. If you happen to randomly experience an upward fluke, too many $\tau^+\tau^-$ events, you will only be able to impose a mild upper bound (a high number). If you get a deficit, you will be able to impose a strict upper bound (a small number). From running many simulations of this kind, virtual LHC runs, you may determine the whole distribution of "expected upper bounds". The average or median value is drawn on the graph as the dashed black curve and the green and yellow "Brazil" (named after the flag) bands around it indicate the 1-sigma and 2-sigma intervals. So for every value of $m_H$, you may read the intervals: 68% of the simulation runs were able to deduce that the Higgs cross section is smaller, at 95% certainty, than a point in the green band; 95% of the simulation runs were able to determine that the Higgs cross section must be, at 95% certainty, smaller than a point on the $y$-axis in the green or yellow band. Now, you run the real experiment, the LHC. If the LHC works according to the Standard Model - in this case, we mean the Standard Model without the Higgs contributions because we consider the Higgs boson to be "new physics", not yet a part of the "null hypothesis" – then the real LHC run will behave exactly as one of the random simulation runs. So the chance should be 68% that the upper bound you will be able to impose from the real-world LHC collisions belongs to the green band, and 95% that it belongs either to the green or yellow bands. That's what you depict by the full black curve. So it's expected that the full black curve is probably inside the bands, 95% of the time. If it's outside the green and yellow band, it's unlikely. If it's well above the band, then you have a clear excess. On the other hand, you may exclude the Standard Model Higgs boson if the real full black curve is below the "red line". But the red line is completely independent from the expectations. For example, look at your graph near 145 GeV. The expected upper bound on the Higgs cross section was more than 2 Standard Model cross sections, see the dashed black line. 
It means that one expects that if one calculates the statistical distribution from the real-world observed $\tau\tau$ events at the LHC and deduces what's the maximum Higgs cross section from that, making sure that there's at most 5% risk that this inequality is wrong, he will be able to derive that the Higgs cross section is smaller than 2 times the Standard Model. In reality, the full black line near 145 GeV, you see that we got about 3 times the Standard Model. That means that there was an excess of these events so we could only say, based on the real LHC collisions, that the cross section isn't greater then 3 times the Standard Model (with the 145 Higgs). So this is milder, less informative inequality than expected. Either way, essentially because the signal-to-noise ratio is poor over there (the noise is the "background" while the signal is the hypothetical "Higgs contribution") it's not enough to decide whether there is a 145 GeV Higgs or not. The non-Higgs "null hypothesis" without a 145 GeV Higgs implies that the Higgs cross section should be 0. The 145 GeV Higgs non-null hypothesis predicts that the Higgs cross section should be $1\sigma_{SM}$, at the red line. But the data are inconclusive, they only say that the right number is below 2 (expected) or 3 (observed) which means it may be both 0 or 1. On the other hand, the expected and observed upper bounds may get closer to the red line or below it. It means that for those parameters (and/or for those colliders, channels, and/or datasets), the LHC experiment in this particular channel becomes sensitive (the signal-to-noise ratio becomes good enough) and able to decide whether the null hypothesis is viable or whether new physics has to be added. In particular, when the full black observed line gets beneath the red line, it may exclude the non-null hypothesis that the new particle – the Higgs bosons of a given mass, in this case, exists. You see that it hasn't happened in your chart: the full black line is never beneath the red line level. Paradoxically enough, they get very close for the 125 GeV Higgs mass, so for this value of the mass, this experiment looking at the $\tau\tau$ channel is able to exclude the 125 GeV Higgs (the Higgs boson we know to exist from other channels!) at nearly 95% level, it could be over 90%. Unless this is a sign of some new physics (the 126 Higgs doesn't interact with the taus as much as expected by the Standard Model), and the evidence for this new physics is so far very weak, the "near exclusion" near 125 GeV is just due to a downward statistical fluctuation in the particular collisions that were used, and it will go away when more collisions are collected. -
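The pseudo-experiment logic described above can be mimicked with a toy counting experiment. The sketch below is my own simplification (a single Poisson-counting channel with a Garwood-style upper limit), not the CLs machinery the LHC experiments actually use, and `b` and `sigma_sm` are made-up numbers. It only illustrates where the dashed "expected" line and the green/yellow bands come from, and why an observed upward fluctuation lifts the solid curve above them.

```python
import numpy as np
from scipy.stats import chi2

def s_upper_limit(n_obs, b, cl=0.95):
    """95% CL upper limit on the signal mean s in a Poisson count with known background b."""
    mu_up = 0.5 * chi2.ppf(cl, 2 * (n_obs + 1))   # upper limit on the total mean s + b
    return max(mu_up - b, 0.0)

rng = np.random.default_rng(0)
b = 50.0          # assumed expected background events
sigma_sm = 10.0   # assumed events added by a Standard-Model-strength signal

# Background-only pseudo-experiments: the "simulated LHC runs without the Higgs".
limits = np.array([s_upper_limit(rng.poisson(b), b) for _ in range(20_000)]) / sigma_sm

print(np.median(limits))                    # the dashed "expected" curve at this mass point
print(np.percentile(limits, [16, 84]))      # green (1 sigma) band
print(np.percentile(limits, [2.5, 97.5]))   # yellow (2 sigma) band

n_obs = 62                                  # one "real" observation with an upward fluke
print(s_upper_limit(n_obs, b) / sigma_sm)   # the solid "observed" curve
```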
http://math.stackexchange.com/questions/250102/does-a-function-sequence-decreasing-monotonically-to-0-converge-uniformly
# Does a function sequence decreasing monotonically to 0 converge uniformly? Suppose $\{f_n\}$ be a sequence of continuous function$f_n:S\to \mathbb{R}$ where $S\subset \mathbb{R}$ and $S$ is compact. Suppose for $\{f_n(x)\}$ monotonic decreasing to zero for any $x\in S$. Is $\{f_n\}$ uniformly converge to $0$? I know all the definition of convergence and uniformly convergence and compact but still not sure how to start or prove it - – user51627 Dec 3 '12 at 17:30 Hmm. I didn't see that when I searched for Dini's theorem prior to answering. Though I now realize that I would have gotten more relevant hits if I had used quotes. – Harald Hanche-Olsen Dec 3 '12 at 17:33 Anyhow, to improve the chances of someone finding this one, I edited the title of the question. – Harald Hanche-Olsen Dec 3 '12 at 17:37 ## 1 Answer Yes. This is known as Dini's theorem. -
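A quick numeric illustration of the theorem's conclusion (my own example: $f_n(x) = x^n$ on the compact set $[0, 0.9]$, which decreases pointwise to $0$) — the sup norm also goes to $0$, i.e. the convergence is uniform:

```python
import numpy as np

x = np.linspace(0.0, 0.9, 1001)
for n in [1, 5, 10, 50, 100]:
    print(n, (x ** n).max())   # 0.9, 0.59, 0.35, 0.005, 3e-5 -> sup norm shrinks to 0
```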
http://mathhelpforum.com/discrete-math/159205-combinatorics-sum-series.html
Thread: 1. Combinatorics - Sum of series Sum the series $1^2+2^2+ \cdots+n^2$ by observing that $m^2 =2 \dbinom{m}{2} + \dbinom{m}{1}$ and using the identity $\dbinom{0}{k}+ \dbinom{1}{k} + \cdots + \dbinom{n}{k}= \dbinom{n+1}{k+1}$. 2. Originally Posted by tarheelborn Sum the series $1^2+2^2+ \cdots+n^2$ by observing that $m^2 =2 \dbinom{m}{2} + \dbinom{m}{1}$ and using the identity $\dbinom{0}{k}+ \dbinom{1}{k} + \cdots + \dbinom{n}{k}= \dbinom{n+1}{k+1}$. lets see what you've tried. 3. I am really not sure where to start with this. I did verify that $m^2 = 2* \dbinom{m}{2}+ \dbinom{m}{1}$. So $1^2 = 2* \dbinom{1}{2} + \dbinom{1}{1} = (2*0)+1 = 1$, $2^2=2* \dbinom{2}{2}+ \dbinom{2}{1}=2(1)+2=4$ and $n^2=2* \dbinom{n}{2} + \dbinom{n}{1} = 2*\frac{n!}{2!(n-2)!} + \frac{n!}{1*(n-1)!}$. 4. Originally Posted by tarheelborn I am really not sure where to start with this. I did verify that $m^2 = 2* \dbinom{m}{2}+ \dbinom{m}{1}$. So $1^2 = 2* \dbinom{1}{2} + \dbinom{1}{1} = (2*0)+1 = 1$, $2^2=2* \dbinom{2}{2}+ \dbinom{2}{1}=2(1)+2=4$ and $n^2=2* \dbinom{n}{2} + \dbinom{n}{1} = 2*\frac{n!}{2!(n-2)!} + \frac{n!}{1*(n-1)!}$. you're simplifying prematurely. slow down. just plug in the expressions first: $\displaystyle 1^2 + 2^2 + \cdots + n^2 = 2 {1 \choose 2} + {1 \choose 1} + 2 {2 \choose 2} + {2 \choose 1} + \cdots + 2 {n \choose 2} + {n \choose 1}$ Now what? (you can write out the expression for a few more terms if you don't see a pattern or what to do) 5. Combinatorics - sum of series I really just don't see it. It looks like it would be $\dbinom{n}{n+1} + \dbinom{n}{n}$ but that doesn't make sense.
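For reference, carrying the hint through to the end (this completion is mine, not from the thread):

```latex
\sum_{m=1}^{n} m^{2}
  = 2\sum_{m=1}^{n}\binom{m}{2} + \sum_{m=1}^{n}\binom{m}{1}
  = 2\binom{n+1}{3} + \binom{n+1}{2}
  = \frac{(n+1)n(n-1)}{3} + \frac{(n+1)n}{2}
  = \frac{n(n+1)(2n+1)}{6}.
```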
http://math.stackexchange.com/questions/tagged/fibonacci-numbers+prime-numbers
# Tagged Questions 0answers 86 views ### Which starting conditions for the Fibonacci sequence, gives most primes I found the following question (at http://aperiodical.com/2012/05/matt-parkers-twitter-puzzle-25-may/): If you start the Fibonacci sequence 2,1 instead of 1,1 do you get more or fewer primes? ... 1answer 232 views ### What is the next “Tribonacci-like” pseudoprime? Given the three roots of $x^3=x^2+x+1$. Then we get the tribonacci-like sequence, $B_n = x_1^n+x_2^n+x_3^n = 3, 1, 3, 7, 11, 21, 39, 71, 131,\dots$ where $B_n = B_{n-1}+B_{n-2}+B_{n-3}$, and the ... 1answer 212 views ### Prime power divisors of the fibonacci numbers I came across a result that if $p^n \mid f_m$ for some $n\geq1$ then $p^{n+1} \mid f_{pm}$. I was wondering if this is true. 4answers 238 views ### Prime Appearances in Fibonacci Number Factorizations Okay, THIS one is considerably more analytical... :P (Used my post here as a basis.) When successive Fibonacci numbers are factored, the primes appear in a specific order, which goes \$2, 3, 5, 13, 7, ... 1answer 100 views ### Prove that If $f_n$ where $n>3$ is prime, then $n$ is prime for a Fibonacci series where $f_1$=$f_2$=1 This problem came up in my conversation with a friend—not sure how basic it is, but it seems quite interesting: Prove that if $f_n$ where $n>3$ is prime, then $n$ is prime for a Fibonacci sequence ...
http://math.stackexchange.com/questions/52716/whats-an-example-of-a-group-with-equivalent-uniform-structures-where-multiplica/52734
# What's an example of a group with equivalent uniform structures where multiplication is not uniformly continuous? Say we have a topological group $G$. It's easy to see that if $\cdot: G \times G \rightarrow G$ is uniformly continuous (with respect to either the right or left uniformity), then $G$ must have equivalent uniform structures. I figure the converse is probably false, on the basis that otherwise I would have seen this mentioned somewhere. But I can't think of an example, because I don't know many examples of topological groups, and all the examples I can think of with equivalent uniformities are built out of compact, abelian, or discrete groups, all of which do have uniformly continuous multiplication! Can anyone give me a counterexample? Or are these indeed equivalent? - 2 Exercise 1.8.c on p.79 of Arhangel'skii-Tkachenko, asks the reader to prove that uniform continuity of the multiplication mapping is equivalent to the group being balanced (having equivalent left and right uniformities). – t.b. Jul 20 '11 at 19:23 ...huh. That's a surprise. You should post that as an answer so I can accept it and consider this closed. – Harry Altman Jul 20 '11 at 19:42 ## 1 Answer Disclaimer: I haven't done the exercise carefully myself (and I'm not really in the mood to), but on Harry's request I'm posting it as an answer. According to Exercise 1.8.c on page 79 of Arhangel'skii-Tkachenko, Topological groups and related structures, the following are equivalent for a topological group (uniformly continuous means uniformly continuous with respect to both the left and the right uniform structures): 1. The multiplication map $G^{2} \to G$ is uniformly continuous. 2. The multiplication map $G^{n} \to G$ is uniformly continuous for all $n \geq 2$. 3. The group is balanced in the sense that the left and right uniformities are equivalent. Since you're interested in $3 \implies 1$ that's good enough (provided that it is true). - I would say that there is not much more to it than the fact that inversion is uniformly continuous on a balanced group. – t.b. Jul 20 '11 at 20:04 Yeah, I feel silly - this is actually easy, but I miscalculated when trying to prove it and figured it probably wasn't true. – Harry Altman Jul 20 '11 at 20:13 @Harry: Well, this happens to all of us :) Once we're convinced of the wrong track, we often need a nudge from outside... Glad I could help in that respect. – t.b. Jul 20 '11 at 20:19
http://www.physicsforums.com/showthread.php?p=4152984
## How to solve two coupled pde's

I'm new, hi all. I have two coupled equations, one of which is continuity. Basically, my problem comes down to the following system:

(1) $u=f(v)$ (equivalently, $v=g(u)$). Here, $u$ and $v$ are the components of a vector field, i.e. $u=u(x,y)$ and $v=v(x,y)$.

(2) Continuity: $\nabla \cdot \textbf{u} = 0$, or $u_{x}+v_{y}=0$.

From here, I can find the following expressions:

$u_{x} = -g_{y} \left( u \right)$

$v_{y} = -f_{x} \left( v \right)$

which I think leaves equations of the form $G \left( u,u_{x},u_{y} \right)=0$ and $F \left( v,v_{x},v_{y} \right)=0$.

It seems to me my original problem has two variables and I have two equations. I think this should be solvable, but I don't know how. Any help please? Thanks in advance!

--edit-- p.s. I'm looking for a numerical (discrete) solution.

Quote by keyns (the system above)

If u = f(v) and v = g(u), don't you just have a (possibly nonlinear) system of equations to solve for v and u? What is the continuity condition (2) for? Do you get more than one solution by solving (1) and you need (2) to choose a solution of interest?

Quote by Mute (the reply above)

Actually I have only one relation for $u$ and $v$, which I can write as $u(v)$ or $v(u)$. Sorry for the confusion. Otherwise you would be right. To clarify my equations:

(1) A relation for $u$ and $v$ (if I have $u$, I have $v$ and vice versa)

(2) A relation for $u_{x}$ and $v_{y}$ (if I have $u_{x}$, I have $v_{y}$ and vice versa)

--edit-- Which, after some rewriting, leads to the two equations $G$ and $F$ stated before. I just don't know how to solve those.
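One way to make the question concrete (my own reading of it, since only the relation $u = f(v)$ and continuity are given): substituting $u = f(v)$ into $u_x + v_y = 0$ gives the quasilinear first-order PDE $f'(v)\,v_x + v_y = 0$, so $v$ is constant along the straight characteristics $x(y) = x_0 + f'(v_0(x_0))\,y$ once data $v(x,0) = v_0(x)$ is prescribed. A discrete solution can then be built by tracing characteristics. The sketch below assumes a particular $f$ and $v_0$ purely for illustration, and it ignores the possibility of crossing characteristics (shocks), where this simple picture breaks down.

```python
import numpy as np

def f(v):            # assumed constitutive relation u = f(v)
    return v + 0.1 * v**3

def fprime(v):
    return 1.0 + 0.3 * v**2

def v0(x):           # assumed data on the line y = 0
    return np.sin(x)

x0 = np.linspace(-np.pi, np.pi, 400)    # feet of the characteristics
y = np.linspace(0.0, 1.0, 200)

# v keeps its initial value along each characteristic x(y) = x0 + f'(v0(x0)) * y.
X = x0[None, :] + fprime(v0(x0))[None, :] * y[:, None]
V = np.broadcast_to(v0(x0), X.shape)
U = f(V)

# (X[i, j], y[i]) now carries v = V[i, j] and u = U[i, j]; interpolate back onto a
# regular grid (e.g. with scipy.interpolate.griddata) if gridded output is needed.
print(X.shape, V.shape, U.shape)
```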
http://math.stackexchange.com/questions/267017/why-cant-a-monotone-function-have-a-removable-discontinuity?answertab=active
# Why can't a monotone function have a removable discontinuity? Using the definition of removable discontinuity from Wikipedia, why can't a monotone function have this type of discontinuity? In other words, if $x_0\in D(f)$ is a point where the monotone function $f$ is discontinuous, and if $$\lim_{x\to x_{0^-}}f(x)=L^-$$ and $$\lim_{x\to x_{0^+}}f(x)=L^+$$ why cannot be $$L^+=L^-$$ I've been baffled by this for far too long now, thanks for any help! - 2 If $L^+ = L^-$, then $f$ is continuous (at $x_0$). – Daniel Marahrens Dec 29 '12 at 12:39 Only if $f(x_0)=L^+=L^-$. – Eckhard Dec 29 '12 at 13:07 ## 2 Answers An increasing function can't have a removable discontinuity at points in its domain. Indeed observe that for $\epsilon>0$, $$\exists \delta>0:0<\left|x-a\right|<\delta\implies \left|f(x)-L\right|<\epsilon\iff L-\epsilon<f(x)<L+\epsilon$$ For $a-\delta<x<a$, $f(x)<f(a)$ and so $$L-\epsilon<f(x)<f(a)$$ Similarly for $a<x<a+\delta$, $f(x)>f(a)$ and $$f(a)<f(x)< L+\epsilon$$ Therefore, $$\left|f(a)-L\right|<\epsilon$$ for arbitrary $\epsilon>0$ and so $f(a)=L$ - Thanks for the answer. I follow the logic, I am just not sure how does this prove the statement, would appreciate elaboration, thanks! – Dahn Jahn Dec 29 '12 at 18:26 @DahnJahn Sure. I supposed $L^{+}=L^{-}=L$ and proved that $f(a)=L$. This means $f$ is continuous at $a$ – Nameless Dec 29 '12 at 18:29 Thanks, now I understand it completely. – Dahn Jahn Dec 29 '12 at 18:50 Take $f$ and increasing function. Let $\lim\limits_{x\to x_0^-}f(x) = a$ $f(x_0) = b$ $\lim\limits_{x\to x_0^+}f(x) = c$ Since $f$ is not continuous at $x_0$, you have $a\not=b$ or $b\not=c$. Since it is increasing, you have $a\le b \le c$ so $a<b$ or $b<c$. And you can easily conclude that $a<c$ so $a \not= c$, ie $\lim\limits_{x\to x_0^-}f(x) \not= \lim\limits_{x\to x_0^+}f(x)$ And for a decreasing function, you just use that property for $-f$. - Right, what I missed is that, of course(!), $x_0$ can't simply be undefined. Stupid mistake from me, thanks for clearing that up! – Dahn Jahn Dec 29 '12 at 12:44 @Dahn Jahn: You probably meant $f(x_0)$ - And you're welcome :) – xavierm02 Dec 29 '12 at 12:46 1 This does not answer the question. The question is why can't ANY monotone function have a removable discontinuity. You merely showed an example of a monotone function which has a discontinuity that is not removable. – Calvin Lin Dec 29 '12 at 13:39 @CalvinLin Welcome to Math S.E. Calvin! I agree with you wholeheartedly, this does not answer the question. – Nameless Dec 29 '12 at 14:20 I don't get it. You take any monotone function. It's either increasing or decreasing. So we have $a \le b \le c$ or $c\le b\le a$. From the fact it's not continuous, we have at least two of those that aren't equal so $a<c$ or $c>a$ so $c\not=a$ so you can't remove the discontinuity. I don't get why it's wrong... – xavierm02 Dec 29 '12 at 15:26 show 2 more comments
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 39, "mathjax_display_tex": 7, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9415839314460754, "perplexity_flag": "middle"}
http://matthewkahle.wordpress.com/2010/01/08/a-conjecture-concerning-random-cubical-complexes/?like=1&source=post_flair&_wpnonce=0dd3db7641
# A conjecture concerning random cubical complexes Nati Linial and Roy Meshulam defined a certain kind of random two-dimensional simplicial complex, and found the threshold for vanishing of homology. Their theorem is in some sense a perfect homological analogue of the classical Erdős–Rényi characterization of the threshold for connectivity of the random graph. Linial and Meshulam’s definition was as follows. $Y(n,p)$ is a complete graph on $n$ vertices, with each of the ${n \choose 3}$ triangular faces inserted independently with probability $p$, which may depend on $n$. We say that $Y(n,p)$ asymptotically almost surely (a.a.s.) has property $\mathcal{P}$ if the probability that $Y(n,p) \in \mathcal{P}$ tends to one as $n \to \infty$. Nati Linial and Roy Meshulam showed that if $\omega$ is any function that tends to infinity with $n$ and if $p = (\log{n} + \omega) / n$ then a.a.s. $H_1( Y(n,p) , \mathbb{Z} / 2) =0$, and if $p = (\log{n} - \omega) / n$ then a.a.s. $H_1( Y(n,p) , \mathbb{Z} / 2) \neq 0$. (This result was later extended to arbitrary finite field coefficients and arbitrary dimension by Meshulam and Wallach. It may also be worth noting for the topologically inclined reader that their argument is actually a cohomological one, but in this setting universal coefficients gives us that homology and cohomology are isomorphic vector spaces.) Eric Babson, Chris Hoffman, and I found the threshold for vanishing of the fundamental group $\pi_1(Y(n,p))$ to be quite different. In particular, we showed that if $\epsilon > 0$ is any constant and $p \le n^{-1/2 -\epsilon}$ then a.a.s. $\pi_1 ( Y(n,p) ) \neq 0$ and if $p \ge n^{ -1/2 + \epsilon}$ then a.a.s. $\pi_1 ( Y(n,p) ) = 0$. The harder direction is to show that on the left side of the threshold the fundamental group is nontrivial, and this uses Gromov’s ideas of negative curvature. In particular, to show that $\pi_1$ is nontrivial we first have to show that it is a hyperbolic group. [I want to advertise one of my favorite open problems in this area: as far as I know, nothing is known about the threshold for $H_1( Y(n,p) , \mathbb{Z})$, other than what is implied by the above results.] I was thinking recently about a cubical analogue of the Linial-Meshulam setup. Define $Z(n,p)$ to be the one-skeleton of the $n$-dimensional cube with each square two-dimensional face inserted independently with probability $p$. This seems to be the natural cubical analogue of the Linial-Meshulam model. So what are the thresholds for the vanishing of $H_1 ( Z(n,p) , \mathbb{Z} / 2)$ and $\pi_1 ( Z(n,p) )$? I just did some “back of the envelope” calculations which surprised me. It looks like $p$ must be much larger (in particular bounded away from zero) before either homology or homotopy is killed. Here is what I think probably happens. For the sake of simplicity assume here that $p$ is constant, although in reality there are $o(1)$ terms that I am suppressing. (1) If $p < \log{2}$ then a.a.s. $H_1 ( Z(n,p) , \mathbb{Z} /2 ) \neq 0$, and if $p > \log{2}$ then a.a.s. $H_1 ( Z(n,p) , \mathbb{Z} /2 ) = 0$. (2) If $p < (\log{2})^{1/4}$ then a.a.s. $\pi_1 ( Z(n,p) ) \neq 0$, and if $p > (\log{2})^{1/4}$ then a.a.s. $\pi_1 ( Z(n,p) ) = 0$. Perhaps in a future post I can explain where the numbers $\log{2} \approx 0.69315$ and $(\log{2})^{1/4} \approx 0.91244$ come from. Or in the meantime, I would be grateful for any corroborating computations or counterexamples.
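For anyone who wants to experiment with small cases, here is a rough exploratory sketch (Python) that samples $Z(n,p)$ and computes $\dim H_1(Z(n,p), \mathbb{Z}/2)$ as the dimension of the cycle space of the hypercube graph minus the rank over $\mathbb{F}_2$ of the boundary map on the retained squares. Small $n$ is far from the asymptotic regime and the suppressed $o(1)$ terms matter, so such experiments are suggestive at best.

```python
import random
from math import log

def h1_dim_mod2(n, p, rng):
    """Sample Z(n,p) and return dim H_1(Z(n,p); Z/2) for that sample."""
    V = 1 << n                       # vertices of the n-cube, encoded as bitmasks
    edge_index = {}                  # edge {v, v + e_i}, keyed by (v, i) with bit i of v equal to 0
    for v in range(V):
        for i in range(n):
            if not (v >> i) & 1:
                edge_index[(v, i)] = len(edge_index)
    E = len(edge_index)              # = n * 2**(n - 1)

    # boundary of each retained square, encoded as a bitmask over edge indices
    rows = []
    for v in range(V):
        for i in range(n):
            for j in range(i + 1, n):
                if (v >> i) & 1 or (v >> j) & 1:
                    continue
                if rng.random() < p:
                    m = (1 << edge_index[(v, i)]) ^ (1 << edge_index[(v, j)]) \
                        ^ (1 << edge_index[(v | (1 << j), i)]) \
                        ^ (1 << edge_index[(v | (1 << i), j)])
                    rows.append(m)

    # rank of the boundary map from squares to edges, by elimination over GF(2)
    basis = {}                       # leading bit -> basis vector
    for m in rows:
        while m:
            lead = m.bit_length() - 1
            if lead in basis:
                m ^= basis[lead]
            else:
                basis[lead] = m
                break
    rank2 = len(basis)

    cycle_dim = E - (V - 1)          # the 1-skeleton is the connected hypercube graph
    return cycle_dim - rank2

rng = random.Random(0)
print("log 2 =", log(2), " (log 2)^(1/4) =", log(2) ** 0.25)
for p in (0.5, 0.7, 0.8, 0.95):
    print("n = 7, p = %.2f, dim H_1 =" % p, [h1_dim_mod2(7, p, rng) for _ in range(3)])
```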
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 41, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.932617723941803, "perplexity_flag": "head"}
http://mathoverflow.net/questions/70635?sort=votes
## Manifold with all geodesics of Morse index zero but no negatively curved metric? A closed oriented Riemannian manifold with negative sectional curvatures has the property that all its geodesics have Morse index zero. Is there a known counterexample to the "converse": if (M,g) is a closed oriented Riemannian manifold (Edit: assumed to be nondegenerate) all of whose geodesics have Morse index zero, then M admits a (possibly different) metric g' with negative sectional curvatures? Edit: Motivation for asking this (admittedly naive) question is that Viterbo/Eliashberg have proved that a manifold with a negatively curved metric cannot be embedded as a Lagrangian submanifold of a uniruled symplectic manifold. Actually their proof only seems to use the existence of a nondegenerate metric all of whose geodesics have Morse index zero. I wondered if that was known to be strictly weaker. - ## 2 Answers As mentioned by Rbega, the question should be amended to ask whether it's true that a closed manifold $M$ without conjugate points admits a metric of non-positive (rather than negative) curvature (otherwise a torus is an obvious counterexample). In that form this is a well-known open problem. The exponential map at any point is a universal covering of $M$ and the geodesics in $\tilde M$ are unique. This does show that $M$ is aspherical, but that is a long way from admitting a metric of nonpositive curvature. There are some partial results suggesting that fundamental groups of manifolds without conjugate points share some properties of fundamental groups of nonpositively curved manifolds. In particular, there is a result of Croke and Schroeder that if the metric is analytic then any abelian subgroup of $\pi_1(M)$ is embedded quasi-isometrically. By the following observation of Bruce Kleiner the analyticity condition can be removed: Croke and Schroeder show that even without assuming analyticity, for any $\gamma\in\pi_1(M)$ its minimal displacement $d_\gamma$ satisfies $d_{\gamma^n}=nd_\gamma$ for any $n\ge1$. This then implies that $d_\gamma=\lim_{n\to\infty} d(\gamma^nx,x)/n$ for any $x\in\tilde M$. This in turn implies that the restriction of $d$ to an abelian subgroup $H \simeq \mathbb Z^n$ extends to a norm on $\mathbb R^n$. This implies that $H$ is quasi-isometrically embedded. This result implies, for example, that nonflat nilmanifolds cannot admit metrics without conjugate points, and more generally that every solvable subgroup of the fundamental group of a manifold without conjugate points is virtually abelian. But it's unlikely that every such manifold admits a metric of non-positive curvature. It is more probable that its fundamental group must satisfy some weaker condition such as semi-hyperbolicity, but even that is completely unclear. The natural bicombing on $\tilde M$ given by geodesics need not satisfy the fellow traveler property (at least there is no obvious reason why it should). So it might be worth trying to look for counterexamples, and the first place I would look is among groups that are semi-hyperbolic but not $CAT(0)$. Specifically, any $CAT(0)$ group has the property that centralizers of non-torsion elements virtually split. This need not hold in a semi-hyperbolic group, with the simplest example given by any nontrivial circle bundle over a closed surface of genus $>1$. To be even more specific one can take the unit tangent bundle $T^1(S_g)$ to a hyperbolic surface.
Note however that it's known that a closed homogeneous manifold without conjugate points is flat, so if there is a metric without conjugate points on $T^1(S_g)$ it cannot be homogeneous. *Edit: Actually, this last remark is irrelevant as $T^1(S_g)$ cannot admit any homogeneous metrics at all.* - Your last comment implies that the answer is true in dimension 3 by geometrization and results of Leeb characterizing non-positively curved metrics. So the smallest example would have to be in dimension 4. – Agol Jan 29 2012 at 20:06 @Agol sorry, I don't follow. How does this imply that a unit tangent bundle to a higher genus surface can not admit a metric without conjugate points? I don't think that follows from any known results. – Vitali Kapovitch Jan 29 2012 at 20:14 Oh, I misread what you wrote. So I guess the only open 3-dimensional case is for $\tilde{SL}_2R$ metrics? – Agol Jan 29 2012 at 20:34 I'm not sure - I think there are manifolds consisting of several geometric pieces that can not be ruled out. Specifically, I believe there are other examples of 3-manifolds with semi-hyperbolic groups, like some graph manifolds, which are known not to admit non-positively curved metrics but can not be ruled out from admitting metrics without conjugate points. – Vitali Kapovitch Jan 29 2012 at 21:01 Sorry, I forgot what Leeb's results say. There are many graph manifolds which do not admit non-positively curved metrics. So I suppose these are candidates for metrics which do not admit conjugate points. – Agol Jan 29 2012 at 22:12 (This should be a comment.) What about the flat torus $\mathbb{S}^1\times \mathbb{S}^1$? I think you need to amend the question to ask for non-positive sectional curvature. [Added after a little thought] I should add that by infinite dimensional Morse theory (for the energy functional on the loop space of $M$, which satisfies the Palais-Smale condition) you should (in principle) be able to conclude that each component of the loop space is contractible. In other words, the higher homotopy groups vanish, that is, $\pi_k(M)=0$ for all $k>1$. I'm not sure if that is enough to ensure the existence of a non-positively curved metric on $M$, but it is certainly suggestive... - Good point. What I tacitly had in mind was that g was nondegenerate (=> all geodesics isolated). I'll edit the question. – Jonny Evans Jul 19 2011 at 6:44
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 29, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.933661937713623, "perplexity_flag": "head"}
http://mathoverflow.net/questions/52864/colimit-of-locally-finitely-presented-quasi-coherent-modules
## Colimit of locally finitely presented quasi-coherent modules Let $X$ be a quasicompact quasiseparated scheme. Consider the full subcategory $\text{Qcoh}_{fp}(X)$ of $\text{Qcoh}(X)$ which consists of the quasi-coherent modules which are locally of finite presentation. Question: Is every quasi-coherent module $M$ the colimit of the homomorphisms $N \to M$, where $N$ runs through $\text{Qcoh}_{fp}(X)$? Note that this makes sense since $\text{Qcoh}_{fp}(X)$ is essentially small. The result is well-known if we allow quasi-coherent modules which are of finite type (in particular, everything is OK if $X$ is noetherian). If $X$ is affine, then the result is trivial. - ## 2 Answers The answer is yes, at least if you believe Thomason-Trobaugh, Higher algebraic $K$-theory of schemes and of derived categories, which David Ben-Zvi already mentioned. I quote from Appendix B.3 (p. 409f): B.3. If $X$ is a quasi-compact and quasi-separated scheme, every sheaf in $\text{Qcoh}(X)$ is a direct colimit of its sub-$\mathcal{O}_{X}$-modules of finite type. Also, every sheaf in $\text{Qcoh}(X)$ is a filtering colimit of finitely presented $\mathcal{O}_{X}$-modules. ([EGA] I 6.9.9, 6.9.12.) In this case, the set of finitely presented $\mathcal{O}_{X}$-modules forms a set of generators for $\text{Qcoh}(X)$, which is then a Grothendieck abelian category and has enough injectives (cf. B.12.). - Thank you. I knew this appendix and should have remembered it. The "new" EGA I covers more material on such things than the "old" EGA I. – Martin Brandenburg Jan 22 2011 at 23:59 I don't know the answer, but take it as a good excuse to mention the wonderful theorem of Thomason-Trobaugh in the Grothendieck Festschrift that the analogous statement is true on the derived level --- namely for a quasicompact quasiseparated scheme, the quasicoherent derived category is compactly generated by perfect complexes. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9091296195983887, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/tagged/function-fields?sort=votes&pagesize=15
# Tagged Questions The function-fields tag has no wiki summary. 1answer 172 views ### Is a field perfect iff the primitive element theorem holds for all extensions, and what about function fields Let $L/K$ be a finite separable extension of fields. Then we have the primitive element theorem, i.e., there exists an $x$ in $L$ such that $L=K(x)$. In particular, the primitive element theorem ... 1answer 154 views ### Another Tangent on Tangents This question asked yesterday got me thinking. While the derivatives of the tangent function span an infinite dimensional vector space over $\mathbb{C},$ the transcendence degree of the field ... 2answers 136 views ### Metric completion of field of fractions The integers have as a field of fractions the rational numbers which have a metric completion as the real numbers. The reals can be represented by infinite decimal expansions which can be approximated ... 1answer 55 views ### Levels of Rings and Fields, -1 as a sum of squares Definition: Let $R$ be a commutative ring. The level of $R$, denoted $s(R)$, is the least positive integer $s$ such that $-1$ can be written as the sum of $s$ many squares in $R$. Set $s(R)=\infty$ if ... 1answer 72 views ### what is the constant field of irreducible components a divisor? Let $D$ be a divisor on an algebraic variety over a field $k$, that is $$D=\sum n_i D_i$$ where $D_i$ are the irreducible components. I came across the expression "the constant field of $D_i$" and ... 1answer 94 views ### A field isomorphism related to polynomial rings and their field of fractions There are 2 ways to approach function fields: the algebraic approach, i.e. looking at finite extensions of $K(s)$, where $s$ is transcendental. The other is geometric, i.e. considering functions over ... 1answer 112 views ### On Intermediate Fields of $\mathbb{C}(x_1,\dots,x_n)$ I am recently reading some Galois Theory, and a question occurred to me: What are the intermediate fields of $K$ of $\mathbb C(x_1,\dots,x_n)$, where $n$ is an arbitrary integer? I am aware of a ... 1answer 73 views ### Why do number rings have no endomorphisms This question is about the analogy between number fields and function fields. It's a soft question and the title misrepresents the question. Consider the projective line over a field. This has many ... 1answer 124 views ### Completion of a rational function field w.r.t. place at infinity When taking the completion a rational function field, say $k(t)$, with respect to the place at infinity, most books refer to this using the notation $k((1/t))$. Since $k((t)) = k((1/t))$ (EDIT: this ... 1answer 124 views ### Two notions of uniformizer Let $X$ be a projective algebraic curve and consider a `uniformizing' map $h:X \rightarrow \mathbb{P}^1$. Is there any connection between this notion of uniformizer and a uniformizer of the maximal ... 1answer 144 views ### Hyper-elliptic curves in positive characteristic I have been looking at hyperelliptic curves in characteristic two, in particular using Algebraic Geometry and Arithmetic Curves by Qing Liu, which gives a description in all characteristics. For the ... 2answers 181 views ### Existence of morphism of curves such that field extension degree > any possible ramification? Throughout I would like to work over an algebraically closed field of characteristic 0 (so no separability issues), say $k$. My question is the following: Do there exist two curves $X$ and $Y$ and a ...
1answer 68 views ### Linear disjointness of two “explicit” field extensions Let $k$ be a characteristic zero field and let $L/k$ be a quadratic extension. Write $L = k(\sqrt{p})$. Let $q$ be a non-square in $k^\star$ and let $r \in k^\star$ be any constant. Consider the ... 0answers 40 views ### Algebraic Curves similar to Hyper-Elliptic Curves Throughout, $F_q$ will denote a finite field of $q$ elements with characteristic $p \neq 2$. It is well-known that the equation $y^2 = f(x)$ (for square-free $f \in F_q[X]$) defines an hyper-elliptic ... 1answer 56 views ### Residue map of a place The following definition for a residue map is given in "Algebraic Curves over a Finite Field" by Hirschfeld, Korchmáros and Torres (page 265): "Let $\Sigma$ be a field of transcendence degree 1 over ... 1answer 87 views ### extension of algebraic function field Let $K$ be a field, $t$ a transcendetal element over $K$ and $F'|K(t)$ an infinite Galois extension. Hence I have a tower of extension $K\subset K(t) \subset F'$. Does exist a subextension $F$ of ... 1answer 122 views ### Separability of compositum of fields Let $E/F$ be a finite separable extension, and let $K$ be a function field with constant field $F$. Is the compositum $KE$ of $K$ and $E$ a separable extension over $E$? 0answers 185 views ### Artin-Schreier extensions over characteristic two fields I have been looking at hyperelliptic curves over an algebraically closed field $k$ of characteristic two, with a view towards finding the basis for the vector space of holomorphic differentials. To do ... 0answers 85 views ### Calculating degree of a finite Kummer Extension Assume I have a field $K$ containing all $n$-th roots of unity. You may even assume that $K$ contains an algebraically closed field $\Bbbk$. Assume furthermore that there are $x_1,\ldots,x_k\in K$ and ... 0answers 32 views ### places of function field and closed point of a scheme Given an integral scheme $X$, let $K(X)=\mathrm{Frac}(R)$ be its function field, where $\mathrm{Spec}(R)$ is some non-empty open affine subscheme of $X$. Take the maximal ideal $P$ of some DVR of ...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 67, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9045725464820862, "perplexity_flag": "head"}
http://mathoverflow.net/questions/17142?sort=votes
## Collecting various theories on toy examples: Projective space I am looking for textbooks/notes/papers/documents playing with toy examples: projective space, in particular $P^{1}$, because I think this is really a cute example. Although algebraic geometry on $P^{1}$ is comparatively simple, it has inspired treatments of much more general situations. Precisely, I am looking for something including the following topics (but you can add whatever you want): 1. algebraic geometry of $P^{n}$, say, $Coh(P^{n})$, $D^{b}(Coh(P^{n}))$, say, exceptional collections, semi-orthogonal decompositions, stability conditions; 2. relation to representation theory, say, the Hall algebra of $Coh(P^{n})$ and $D^{b}(Coh(P^{n}))$ and its relation to the affine quantum group $U_{q}(\hat{sl_{n+1}})$, representation theory of the Kronecker quiver, tilting theory, weighted projective lines, and so on; 3. $D$-modules on the flag variety of $U(sl_{2})$ (or $P^{1}$), and so on; 4. add whatever you like; 5. add whatever you like. - 2 I approve of this question :) – B. Bischof Mar 5 2010 at 0:56 various things ?? Please talk mathematics instead. – Zoran Škoda Mar 5 2010 at 14:40 2 I mean various mathematical subjects, but I do not know how to express this precisely – Shizhuo Zhang Mar 5 2010 at 14:55 ## 2 Answers Maybe I can answer this question by myself now. I did some literature research and found some papers and notes that use $P^{1}$ to illustrate various theories. Lectures on Hall algebras: the author talks about the Hall algebra of coherent sheaves on $P^{1}$ and its relation with the representation theory of $U_{q}(\hat{sl_{2}})$; The Hall algebra of the category of coherent sheaves on the projective line covers similar facts. Twisted rings of differential operators on the projective line and the Beilinson-Bernstein theorem is a master's thesis by Koushik Panda; he established the $P^{1}$ (flag variety of $sl_{2}$) version of Beilinson-Bernstein localization, and his treatment is very detailed. t-stabilities and t-structures on triangulated categories illustrates classifications of t-structures on $D^{b}(Coh(P^{1}))$. Introduction to coherent sheaves on weighted projective lines by Xiao-Wu Chen and Henning Krause is a very expository set of notes on coherent sheaves, tilting theory, and the derived category of $P^{1}$. - I'm far from an expert, but I think it's unlikely you're going to find much literature that focuses only on the projective line. But, for example, almost every algebraic geometry book I know uses the projective line and higher dimensional projective spaces as a basic example for everything. And, if a paper or book does not discuss these examples explicitly, then I think that is an opportunity for a student to work it out on his or her own. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9134085774421692, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/9628/finitely-presented-sub-groups-of-gln-c/9832
## Finitely presented sub-groups of GL(n,C) Here are two questions about finitely generated and finitely presented groups (FP): 1) Is there an example of an FP group that does not admit a homomorphism to $GL(n,C)$ with trivial kernel for any n? The second question has been modified according to the suggestion of Greg below. 2) For which $n$, given two subgroups of $GL(n,C)$ generated by explicit lists of matrices, together with finite lists of relations and the promise that they are sufficient, is there an algorithm to determine if they are isomorphic as groups? In both cases we don't impose any condition on the group (apart from being FP); in particular it need not be discrete in $GL(n,C)$. - ## 8 Answers Here is a more complete picture to go with David's and Richard's answers. It is a theorem of Malcev that a finitely presented group $G$ is residually linear if and only if it is residually finite. The proof is very intuitive: the equations for a matrix representation of $G$ are algebraic, so there is an algebraic solution if there is any solution. Then you can reduce the field of the solution to a finite field, as long as you avoid all primes that occur in the denominators of the matrices. The same proof shows that $G$ has no non-trivial linear representations if and only if it has no subgroups of finite index. So Higman's group has this property. A refined question is to find a finitely presented group which is residually finite, but nonetheless isn't "linear" in the sense of having a single faithful finite-dimensional representation. It seems that the automorphism group of a finitely generated free group, $\text{Aut}(F_n)$, is an example. Nielsen found a finite presentation for this group, and it is also known to be residually finite, yet Formanek and Procesi showed that it is not linear when $n \ge 3$. More recently, Drutu and Sapir found an example with two generators and one relator. - Greg, thank you for the answer and for the comment on the second part of my question! I guess I should modify it. I had the following in mind: it is said that 3-dimensional manifolds are algorithmically recognisable, while 4-manifolds are not, because for FPFG groups there is no algorithm to recognise them. So maybe I should reformulate the question like this: Is there an algorithm that recognises linear FPFG groups (i.e. decides for any two such groups if they are isomorphic)? Will this question make more sense? If yes, I will reformulate it like this. – Dmitri Dec 23 2009 at 20:11 2 Provided you are given the faithful linear representations, not just a promise that they exist, the question could be okay. I.e., "Given two subgroups of GL(n,Q) generated by explicit lists of matrices, together with lists of relations and the promise that they are sufficient, is there an algorithm to determine if they are isomorphic as groups?" Note that for closed hyperbolic 3-manifolds, Mostow rigidity is available, which gives much more information than just linearity. – Greg Kuperberg Dec 23 2009 at 20:19 Greg, may I ask you one more thing? Is there any good exposition that describes the result "$G$ is residually linear if and only if it is residually finite"? I like the idea of the proof that you have described a lot, but I also want to see it written in more detail.
– Dmitri Dec 23 2009 at 23:00 1 A proof of Malcev's theorem is in a book by Merzlyakov "Рациональные группы" (in Russian; the title translates as "Rational groups"). I do not think there is an English translation, so probably Malcev's original paper is the most accessible reference. – Igor Belegradek Dec 24 2009 at 4:43 1 Malcev's theorem is explained (nicely, in the context of subsequent developments) in at least two places: B.A.F. Wehrfritz, Infinite Linear Groups, Springer, 1973 and A.E. Zalesskii, Linear groups, Russ. Math. Surv. 36 (1981), N5, 63-128. – Pasha Zusmanovich Dec 28 2009 at 12:59 The following counterexample is due to Higman; I learned about it from Terry Tao's blog. Consider the group with generators $a$, $b$, $c$ and $d$, and relations $ab=b^2a$, $bc=c^2 b$, $cd=d^2c$ and $da=a^2d$. This group is infinite (in fact, the subgroup generated by $a$ and $c$ is free), but it has no nontrivial map to $GL_{n}(\mathbb{C})$. See Terry's post, especially Remark 2, for a very nice exposition of this fact. - David, thanks a lot! – Dmitri Dec 23 2009 at 18:33 Another nice example is the Baumslag-Solitar group $\langle a,b \ | \ ab^2a^{-1} = b^3 \rangle$, which isn't hopfian, and so isn't residually finite, and so can't be linear. - You can combine the Bridson--Miller paper that Agol mentions with recent work of Haglund and Wise to show that the algorithmic problem in part (2) of the question is not always solvable. Haglund and Wise's version of the Rips Construction takes as input any finitely presented group Q and outputs a short exact sequence 1 -> K -> G -> Q -> 1 where K is finitely generated (and infinite) and G is a torsion-free, word-hyperbolic subgroup of GL(n,Z). Taking Q to be a non-abelian free group, the resulting G will work as input for the Bridson--Miller result. So you don't need to prove that mapping class groups are linear! Remarks • It's not clear how small you can take n to be. It will be fairly large in this construction. • In forthcoming work of Bridson and yours truly in a similar vein, we show that the hypotheses of part (2) are actually quite difficult to achieve. We produce a sequence of finite sets of integer matrices that each generate a finitely presentable group, but such that there is no algorithm to compute a presentation for these groups. - Henry, great answer, huge thanks! – Dmitri Dec 27 2009 at 9:56 This paper shows that the answer to 2) is negative in the category of finitely presented residually finite groups. As Greg points out, this is different from the category of finitely presented linear groups though. Addendum: In a paper of Bridson and Miller (which I found from Igor's link to Miller's survey), they show that the isomorphism problem for subgroups of $\Gamma\times\Gamma\times F$ is undecidable, where $\Gamma$ is a particular hyperbolic group (which is free-by-finitely generated) and $F$ is free. As mentioned in the paper, Mosher constructed free-by-surface hyperbolic groups, which therefore could work as $\Gamma$. These groups embed in the mapping class group of the once-punctured surface, so if mapping class groups of once-punctured surfaces are linear, this would answer 2). However, the only mapping class groups known to be linear are the punctured sphere/braid groups and the genus 2 mapping class group.
- I can't say that I made a conscious distinction. In any case your reference is good. – Greg Kuperberg Dec 24 2009 at 2:53 Very interesting! This summer in CRIM I heard from somebody (maybe Benson Farb) that while it is unknown whether the mapping class group is linear, all the possible corollaries that hold for linear groups also hold for this group. So your answer shows that proving that the mapping class group is linear will have some applications :)! – Dmitri Dec 24 2009 at 12:39 1 There are some highly non-trivial consequences of linearity that are "only just" known for mapping class groups. I'm thinking of the "equationally Noetherian" property, which says that every variety over a group G can be defined using only finitely many equations. This is an immediate consequence of Hilbert's Basis Theorem for linear groups; for mapping class groups, it's a consequence of highly non-trivial forthcoming work of Daniel Groves. – HW Dec 26 2009 at 23:45 Mosher constructed hyperbolic surface-by-free groups, and I think the existence of a hyperbolic free-by-surface group is open. (This could be a matter of the two of us having differing terminology.) – Richard Kent Sep 28 2010 at 1:19 This survey by Chuck Miller discusses, among other things, the isomorphism problem for linear groups (and for other classes of groups as well). The flow chart on page 31 states that for finitely generated linear groups the isomorphism problem is generally unsolvable, while it seems that for finitely presented linear groups the isomorphism problem is open. Incidentally, finitely presented groups are by definition finitely generated (they have a finite presentation), so I think FGFP is just FP. - Igor, thanks a lot for the reference!! – Dmitri Dec 24 2009 at 11:42 I just wanted to make a comment on Mal'cev's theorem (if I could leave this as a comment, I would). Mal'cev's paper is a great exposition of the theorem, as well as a lot of other related material, all written in a basic yet enlightening style. Also, if you know a little commutative algebra (as in the Nullstellensatz, the one given in Eisenbud pg. 132), there is a quick and easy proof of Mal'cev's theorem. I could sketch it if necessary, but I am right now in the process of LaTeX-ing it, so I'll probably just come back and post a link. Steve EDIT - a sketch of the argument: Mal'cev's theorem says a finitely generated linear group is residually finite. So let $X\subset GL(n,F)$ be a finite subset of the general linear group over some field $F$, and $G=\langle X \rangle$. First, make $X$ symmetric, so that if $x\in X$ then also $x^{-1}\in X$. Each $x\in X$ is an $n\times n$ matrix, so we can assemble all entries from all elements of $X$, getting a finite subset of $F$. Let $R$ denote the subring of $F$ generated by this subset (along with $1$). Then $R$ is a Jacobson ring, and since it is a subring of $F$, its Jacobson radical is $0$. Now $G$ is a subgroup of $GL(n,R)$; let $g\in G$ be a non-identity element, so that $g-I_n\neq 0$, where $I_n$ is the identity matrix. Thus $g-I_n$ has a non-zero entry, and thus there is some maximal ideal $m\subset R$ not containing this non-zero entry. The matrix ring homomorphism $M_n(R)\rightarrow M_n(R/m)$ (reducing everything mod $m$) induces a group homomorphism $G\rightarrow GL(n,R/m)$, where $g$ is not in the kernel. But $R/m$ is finite (by the Nullstellensatz), so $GL(n,R/m)$ is a finite group. - Steve, thanks for the answer. Sure, I will be very curious to see the proof, so once you have it, please do leave a link!
– Dmitri Feb 5 2010 at 9:30 Excuse me guys, but I think it is true that the permutation groups $S_m$ will not admit a faithful representation in dimension $n$ if $m\gg n$? I can certainly see this for $m>2n$ at least. So this will give countably many examples of finitely presented groups not admitting an injective homomorphism to $GL(n,C)$, as Dmitri wanted in his question (1). My claim can be seen either by an elementary combinatorial calculation on the involution on the respective spaces or by the classification of irreducible representations of $S_n$ (that they are either the identity, the sign, or the standard ones). We don't have to invoke any high-powered theorem to do this IMHO. Cheers! - Maharana, thanks for your answer! Well, when I formulated the question, what I meant was "FGFP groups that don't admit an injective homomorphism to $GL(n,C)$ for ANY n". In this case, to construct an example you surely need infinite groups. I will now modify the question, so that there is no ambiguity. – Dmitri Dec 25 2009 at 12:28
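To make the reduction-mod-$p$ step of the sketch above concrete, here is a small illustrative computation (Python with numpy, using the standard pair of integer matrices that generate a free subgroup of $GL(2,\mathbb{Z})$): a nontrivial element of the subgroup survives in the finite quotient $GL(2,\mathbb{Z}/5)$ because $5$ fails to divide some nonzero entry of $g - I$.

```python
import numpy as np

# Two integer matrices generating a free subgroup of GL(2, Z) (Sanov's subgroup).
A = np.array([[1, 2], [0, 1]]); A_inv = np.array([[1, -2], [0, 1]])
B = np.array([[1, 0], [2, 1]]); B_inv = np.array([[1, 0], [-2, 1]])

# A nontrivial element of the subgroup: the commutator [A, B].
g = A @ B @ A_inv @ B_inv
print(g)                                 # [[21 -8] [ 8 -3]], not the identity

# Malcev-style detection: choose a prime p that does not divide every nonzero
# entry of g - I; then g maps to a non-identity element of the finite group GL(2, Z/p).
p = 5
print((g - np.eye(2, dtype=int)) % p)    # a nonzero matrix mod 5
print(g % p)                             # [[1 2] [3 2]], not the identity in GL(2, Z/5)
```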
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 62, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9431424736976624, "perplexity_flag": "head"}
http://www.physicsforums.com/showpost.php?p=4218016&postcount=1
## Induction with waveguide walls This is a question I've been trying to figure out. I'll try my best to formulate it, so apologies if it's a bit ill defined! Suppose you construct a finite parallel plate waveguide of PEC (perfect electrical conductors) and PMC (perfect magnetic conductors) so that the top/bottom plates are PEC, and the left/right are PMC. For instance, something like this but instead of air on the sides, it's a PMC so H = 0. 1) Then what happens if you propagate a TE wave? More specifically, will there be any sort of induction with the walls? 2) If not, could you construct some internal structure in the waveguide which couples to the walls of it? This is what I have for 1) so far: there can't be any induction with the PMC walls, because along the surface of the PMC, the magnetic field is 0 in all directions. We then have the boundary conditions $\vec{E}(x=0) = \vec{E}(x=d) = \vec{0}$, and then from Maxwell's equations, the induced surface current is $J_s = \pm \hat{x} \times \vec{H} = \pm (\hat{y} H_z - \hat{z} H_y)$. I'm a bit rusty on how to calculate any sort of behaviors here via induction onwards. I'm guessing that since this is TE, H only has y and z components. Therefore, there is no induced electric field since the magnetic flux from one plate to another is 0. That would therefore imply that, to produce inductive effects with some internal structure (2), we could construct a wire loop slanted at 45°
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9301865100860596, "perplexity_flag": "middle"}
http://blog.brilliant.org/2013/01/23/proof-that-1-1-and-1-3/
# Proof that 1 = -1 and 1 = 3 This image was sourced and then altered from the NASA image archive. The chimpanzee pictured was named Ham the Astrochimp. He was a brave space pioneer and proved to NASA that astronauts would be able to survive space travel. This post requires familiarity with complex numbers, in particular $i = \sqrt{-1}$, and $i^2 = -1$. This is taken from a Brilliant discussion, with slight edits. $\begin{aligned} &\mbox{We know that} && 1 \times 1 = (-1) \times (-1) & &(1) \\ &\mbox{Dividing across, we get} && \frac {1}{-1} = \frac {-1}{1} & &(2)\\ &\mbox{Taking square roots, we get} && \frac {\sqrt{1} }{\sqrt{-1}} = \frac {\sqrt{-1}}{\sqrt{1}} &&(3) \\ &\mbox{Hence,} && \frac {1}{i} = \frac {i}{1} && (4)\\ &\mbox{Multiplying out denominators, we get} && 1 \times 1 = i \times i && (5)\\ &\mbox{Hence,} && 1 = 1 \times 1 = i \times i = -1 & &(6)\\ \end{aligned}$ What went wrong? At which step did we go wrong? You can review the discussion to understand the subtleties of the proof. Related to this is the following ‘proof’ that $1 = 3$. $\begin{aligned} & \mbox{We know that} & & (-1)^1 = -1 = (-1)^3 & & (1) \\ & \mbox{Taking logarithms, we get} & & 1 \times \log (-1) = 3 \times \log (-1) & & (2)\\ & \mbox{Since }\log(-1) \neq 0, \mbox{ we can divide by} \log(-1) & & 1 = 3 & & (3) \\ \end{aligned}$ We will be moving the discussion for this post to this Blog post discussion board, which will allow you to respond to comments made by other students, and also vote on the explanation that you think is correct. From → Algebra, Level 3, Level 4, Math 49 Comments 1. [Comment moved to discussion board - Calvin] 2. Paras [Comment moved to discussion board - Calvin] 3. Anonymous log undefined for -ve nos 4. Sri rajamurugan In that proof square root of one is treated as one but actually it’s plus or minus one 5. HIREN square root of 1 is |1| means mod(1) = +/- 1… hear taken only +1. 6. ali raza log(-1) is not defined so our first step to take log of -1 is wrong. • Bear in mind that $e^{i \pi} = -1$, so we can define $\log (-1)$. • Kokkimidis Patatoukos Yeah but we have to put an absolute value when we put the logarithms so absolute value of e^iπ is one and log(1) is 0 isn’t it? • What do you mean by put absolute value? $\log -1 \neq \log 1$. 7. You took the root of a negative 1 not possible. If you want that you have bring and i in so you will have i*sqrt(1) • We took square root of negative 1, and had the value of $i$ in both places. I do not see how your argument is valid. 8. Nikhil Mahajan Using the logic of the second proof, 1^1 = 1^2 = 1^3 = …. could be used as a proof for 1 = 2 = 3 = … This is invalid because for (-1)^1 = (-1)^3 to imply 1 = 3, you need to show that every power of -1 must yield a unique value. You don’t even have to go as far as worrying about logarithms of negative numbers to find a flaw. • I disagree with your second paragraph. We only need the odd powers to yield a negative value. And they do, since $(-1)^{2k+1} = {[(-1)^2]}^k\times (-1) = -1$. 9. Nikhil Mahajan The first proof is wrong at line (3), where the square root of both sides is taken but the assumption that sqrt(-1/1) = sqrt(-1)/sqrt(1) is made. This is not true for complex numbers. 10. kerokeropong i thought log(-1) does not exist? • Bear in mind that $e^{i \pi} = -1$. 11. Imène Me Log is defined from 0 to +oo, the square as well. • Bear in mind that $e^{i \pi} = -1$. While you may have only seen logarithms defined for positive reals, it is possible to extend that definition. 12.
You can’t say that i=/-1 because this value doesn’t exist. It means that can not be carried forward. • Do you mean that “You can’t say $i = \sqrt{-1}$ because this value doesn’t exist”? What do you mean by it doesn’t exist? It exists as a complex number. Why can’t we do addition and multiplication with it? If we can’t, does that mean that all the complex numbers that we learnt are all useless? 13. prince awesome 14. prince there was a confusion with log(-1) . Otherwise its really surprising ! 15. we know that the natural logarithm is inverse function of the exponential, but a function does have an inverse only if it was bijective for instance : e^(i pi) = e^(3 i pi), the exponential function is not bijective over all the complex numbers. • Yes, you’re getting close to the reason why the proof breaks down. The logarithm function is actually a multivalued function, and one of the values of $\log -1$ is $i \pi$, while the other possible value is $3 i \pi$, and the latter is thrice of the former. 16. Anonymous I think the problem is at line (6) because here you have done “i*i=-1”. But i don’t agree with this step as according to the rule of multiplication of complex numbers, a complex no. is always multiplied with it’s complex conjucate. So, it has to be multiplied with “-i” and “i*(-i)=+1”. and this proves that 1 is not equal to -1 but it equals to +1. And to prove “i*i can’t be equal to -1”…we know, exp((i*pi)/2)=i. and if we multiply exp((i*pi)/2) by exp((i*pi)/2) using the method of multiplication of line (6) and we don’t use the complex conjugate of this then it would give “cos²(π/2) + i²sin²(π/2)”. here if we put “i*i=-1” according to the multiplication of line (6) then it will give cos²(π/2) + sin²(π/2) = -1 (as, sin(π/2)=1) which is not possible because we know, “cos²(θ) + i²sin²(θ) = 1”. so this is satisfied only when we follow the rule of multiplication of complex number and it gives “i*(-i) = -i² = +1” and it proves “1=1”. 17. SUCHETA S. I think the problem is at line (6) because here you have done “i*i=-1”. But i don’t agree with this step as according to the rule of multiplication a complex no. is always multiplied with it’s complex conjucate. So, i has to be multiplied with “-i” and “i*(-i)=+1”. and this proves that 1 is not equal to -1 but it equals to +1. And to prove “i*i can’t be equal to -1”…we know, exp((i*pi)/2)=i. and if we multiply exp((i*pi)/2) by exp((i*pi)/2) using the method of multiplication of line (6) and we don’t use the complex conjugate of this then it would give “cos²(π/2) + i²sin²(π/2)”. here if we put “i*i=-1” according to the multiplication of line (6) then it will give cos²(π/2) + sin²(π/2) = -1 (as, sin(π/2)=1) which is not possible because we know, “cos²(θ) + i²sin²(θ) = 1”. so this is satisfied only when we follow the rule of multiplication of complex number and it gives “i*(-i) = -i² = +1” and it proves “1=1”. • I don’t understand why we can only multiply a complex number with its complex conjugate. We certainly have $(1+2i) \times (1 + 3i) = 1 + 2i + 3i + 6i^2 = 5i - 5$. Multiplying by the complex conjugate is done to obtain the norm of the complex number. $i^2 = -1$ is the definition of the imaginary unit, and taken as a fact / axiom. Also, in your 2nd last line, $\cos^2 \theta + i^2 \sin ^2 \theta \neq 1$. Instead, the trigonometric identity that you're thinking of is $\cos^2 \theta + \sin^2 \theta = 1$. 18. Kokkimidis Patatoukos Yeah log(-1)=iπ right?
But i think the identity is {log(a)}^k=k*log(|a|) That is what I mean • I don’t think that is the identity. Be very careful, especially if you have not seen logs of negative (or even complex) numbers before. 19. Shourya Pandey well sqrt(1/-1) is not the same as sqrt(1)/sqrt(-1). 20. Shourya Pandey (-1)^1 = (-1)^3 so antilog(1*log-1) = antilog(3*log-1) but this doesn’t mean 1*log-1 = 3*log-1, because antilog is not a one-one function. • Yup! Though it could depend on what you call ‘antilog’, and what the equality means to you. But you have to right ideas. 21. AJINKYA I COULDN’T UNDERSTAND YOUR 6 TH STEP, HOW YOU CONSIDER i*i=-1 • That is the definition of the imaginary unit, specifically $i^2 = -1$. 22. SUCHETA S. In line (3) a square root of 1 and -1 has been taken but square root gives both + and -ve value but here only + ve has been considered for next all the steps but it can be considered only when we are taking a mod value which always gives a positive quantity. So in line (5) Mod(±√1)=+1 and Mod(±√-1)=Mod(±i)=i that’s why Mod(i).Mod(i)= (Mod(i)) ² = (i*).(i)= (-i).(i)= – i²=+1.So, in line (6) it gives,, 1=Mod(i).Mod(i)= (Mod(i)) ² = (i*).(i)= – i²=+1. So, 1=+1. And I’m Sorry! because i said “according to the rule of multiplication of complex nos” it is actually the rule of finding the mod value of a complex no. for example, (Mod(a+ib))²=(a+ib)*.(a+ib)=(a-ib).(a+ib)=a²-(ib)²=a²-i²b²=a²+b². So, Mod(a+ib)=√(a²+b²). • I’m not sure what kind of modulus you are working with, esp to say that $Mod(\pm i) = i$. As far as I know, modulus always returns a positive quantity (like you mentioned), that is real valued. What would you say is $Mod(1 - 2i)$? I believe that you are on the right track, but you should crystallize your thoughts, and be certain that what you’re writing is exactly what you’re thinking. P.S. In future, please reply to your previous comments, so that it is easier to keep track of what you have been saying. I.e. this is not a new comment, but a continuation. 23. Srijit Ganguly. The 4th line is wrong because root over 1 has two solutions one is 1 and other is -1. 24. Anonymous (root of -1) * (root of -1) wil b equal to -1 nt 1 so proof is surely wrong 25. I think the mistake is in third step root(a/b) is equal to root(a)/root(b) only if both a and b are positive or negative. 26. Rehan Aslam to prove 1=-1 we have to select only real numbers not the imaginary numbers 27. Rehan Aslam if x+2=3 on taking x=1 equation is satisfied but if 1=3 and on putting 3 in the equation the equation is not satisfied its mean 1 is not equal to 3. 28. Anonymous Log of a number less than zero is not possible. Any real number to any power is always greater than zero, if only by a negligible amount. (e.g. Two to the power of minus infinity). I’m thinking it’s step 3 in the 1=-1 part. (it breaks between stage 2 and 4.) • Why is it not possible? Because you haven’t seen it before? Is $\sqrt{-1}$ possible? It is if you allow for complex numbers. Likewise, we have $e^{\pi i} = -1$. So, can’t we take logarithms on both sides? 29. Good introduction to high school and complex math for a 14 year old? I feel confused by half of the things on this site but feel as if I learn some higher concept every time I visit. I actually understand some of this post! Keep them coming, Calvin. 30. Anonymous there is no log-1 log is only ++++ numpers no 0 no – numpers Lol • By Euler’s Theorem, $e^{i \pi} = -1$. This suggests that we can take the logarithm of -1. 31. 
Shubham Mishra That’s good but proof (1) can be done without using i=√(-1) in the following way: We know, 1 X 1 = (-1) X (-1) Or, 1^2 = (-1)^2 Or, 1 = (-1) [Taking out square-root] Isn’t it? 32. $\log(m^n) = n \log (m)$ does not hold for $m < 0$. 33. Anonymous a^2-a^2=a^2-a^2 => (a+a)(a-a)=a(a-a) => a+a=a => 2a=a => 2=1 ….(1) now 3=1+2 => 3=1+1 (from (1)) => 3=2 => 3=1 (from (1)) hence any no. can be equated to any no.
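For readers who want to poke at both fallacious proofs on a computer, here is a short sketch using Python's cmath (which works with principal values): it exhibits the two failure points, namely that $\sqrt{a/b} \neq \sqrt{a}/\sqrt{b}$ once a negative number is involved, and that the complex logarithm is multivalued, so cancelling $\log(-1)$ silently picks one branch.

```python
import cmath

# Step (2) -> (3) of the first proof: sqrt(x/y) = sqrt(x)/sqrt(y) fails for negatives.
lhs = cmath.sqrt(1 / -1)              # sqrt(-1) = 1j
rhs = cmath.sqrt(1) / cmath.sqrt(-1)  # 1 / i    = -1j
print(lhs, rhs, lhs == rhs)           # 1j and -1j: they are not equal

# Second proof: i*pi and 3*i*pi are both logarithms of -1, so "dividing by log(-1)"
# is not a legitimate step; cmath.log only ever reports the principal branch.
w = cmath.log(-1)                     # pi * 1j
print(w, 3 * w)
print(cmath.exp(w), cmath.exp(3 * w)) # both are (approximately) -1
```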
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 25, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9139073491096497, "perplexity_flag": "middle"}
http://pediaview.com/openpedia/Lambert's_cosine_law
# Lambert's cosine law See also: Lambertian reflectance In optics, Lambert's cosine law says that the radiant intensity or luminous intensity observed from an ideal diffusely reflecting surface or ideal diffuse radiator is directly proportional to the cosine of the angle θ between the observer's line of sight and the surface normal.[1][2] The law is also known as the cosine emission law[3] or Lambert's emission law. It is named after Johann Heinrich Lambert, from his Photometria, published in 1760.[4] A surface which obeys Lambert's law is said to be Lambertian, and exhibits Lambertian reflectance. Such a surface has the same radiance when viewed from any angle. This means, for example, that to the human eye it has the same apparent brightness (or luminance). It has the same radiance because, although the emitted power from a given area element is reduced by the cosine of the emission angle, the apparent size (solid angle) of the observed area, as seen by a viewer, is decreased by a corresponding amount. Therefore, its radiance (power per unit solid angle per unit projected source area) is the same. ## Lambertian scatterers and radiators When an area element is radiating as a result of being illuminated by an external source, the irradiance (energy or photons/time/area) landing on that area element will be proportional to the cosine of the angle between the illuminating source and the normal. A Lambertian scatterer will then scatter this light according to the same cosine law as a Lambertian emitter. This means that although the radiance of the surface depends on the angle from the normal to the illuminating source, it will not depend on the angle from the normal to the observer. For example, if the moon were a Lambertian scatterer, one would expect to see its scattered brightness appreciably diminish towards the terminator due to the increased angle at which sunlight hit the surface. The fact that it does not diminish illustrates that the moon is not a Lambertian scatterer, and in fact tends to scatter more light into the oblique angles than would a Lambertian scatterer. The emission of a Lambertian radiator does not depend upon the amount of incident radiation, but rather from radiation originating in the emitting body itself. For example, if the sun were a Lambertian radiator, one would expect to see a constant brightness across the entire solar disc. The fact that the sun exhibits limb darkening in the visible region illustrates that it is not a Lambertian radiator. A black body is an example of a Lambertian radiator. ## Details of equal brightness effect Figure 1: Emission rate (photons/s) in a normal and off-normal direction. The number of photons/sec directed into any wedge is proportional to the area of the wedge. Figure 2: Observed intensity (photons/(s·cm2·sr)) for a normal and off-normal observer; dA0 is the area of the observing aperture and dΩ is the solid angle subtended by the aperture from the viewpoint of the emitting area element. The situation for a Lambertian surface (emitting or scattering) is illustrated in Figures 1 and 2. For conceptual clarity we will think in terms of photons rather than energy or luminous energy. The wedges in the circle each represent an equal angle dΩ, and for a Lambertian surface, the number of photons per second emitted into each wedge is proportional to the area of the wedge. It can be seen that the length of each wedge is the product of the diameter of the circle and cos(θ). 
It can also be seen that the maximum rate of photon emission per unit solid angle is along the normal and diminishes to zero for θ = 90°. In mathematical terms, the radiance along the normal is I photons/(s·cm²·sr) and the number of photons per second emitted into the vertical wedge is I dΩ dA. The number of photons per second emitted into the wedge at angle θ is I cos(θ) dΩ dA. Figure 2 represents what an observer sees. The observer directly above the area element will be seeing the scene through an aperture of area dA0 and the area element dA will subtend a (solid) angle of dΩ0. We can assume without loss of generality that the aperture happens to subtend solid angle dΩ when "viewed" from the emitting area element. This normal observer will then be recording I dΩ dA photons per second and so will be measuring a radiance of $I_0=\frac{I\, d\Omega\, dA}{d\Omega_0\, dA_0}$ photons/(s·cm²·sr). The observer at angle θ to the normal will be seeing the scene through the same aperture of area dA0, and the area element dA will subtend a (solid) angle of dΩ0 cos(θ). This observer will be recording I cos(θ) dΩ dA photons per second, and so will be measuring a radiance of $I_0=\frac{I \cos(\theta)\, d\Omega\, dA}{d\Omega_0\, \cos(\theta)\, dA_0} =\frac{I\, d\Omega\, dA}{d\Omega_0\, dA_0}$ photons/(s·cm²·sr), which is the same as the normal observer. ## Relating peak luminous intensity and luminous flux In general, the luminous intensity of a point on a surface varies by direction; for a Lambertian surface, that distribution is defined by the cosine law, with peak luminous intensity in the normal direction. Thus when the Lambertian assumption holds, we can calculate the total luminous flux, $F_{tot}$, from the peak luminous intensity, $I_{max}$, by integrating the cosine law: $$F_{tot} = \int_0^{\pi/2}\int_0^{2\pi}\cos(\theta)\,I_{max}\,\sin(\theta)\,\operatorname{d}\phi\,\operatorname{d}\theta = 2\pi\, I_{max}\int_0^{\pi/2}\cos(\theta)\sin(\theta)\,\operatorname{d}\theta = 2\pi\, I_{max}\int_0^{\pi/2}\frac{\sin(2\theta)}{2}\,\operatorname{d}\theta,$$ and so $F_{tot}=\pi\,\mathrm{sr}\cdot I_{max}$, where $\sin(\theta)$ is the determinant of the Jacobian matrix for the unit sphere, and where we use the fact that $I_{max}$ is the luminous flux per steradian.[5] Similarly, the peak intensity will be $1/(\pi\,\mathrm{sr})$ of the total radiated luminous flux. For Lambertian surfaces, the same factor of $\pi\,\mathrm{sr}$ relates luminance to luminous emittance, radiant intensity to radiant flux, and radiance to radiant emittance. Radians and steradians are, of course, dimensionless, and so "rad" and "sr" are included only for clarity. Example: A surface with a luminance of, say, 100 cd/m² (= 100 nits, a typical PC monitor) will, if it is a perfect Lambertian emitter, have a luminous emittance of 314 lm/m². If its area is 0.1 m² (~19" monitor) then the total light emitted, or luminous flux, would thus be 31.4 lm. ## Uses Lambert's cosine law in its reversed form (Lambertian reflection) implies that the apparent brightness of a Lambertian surface is proportional to the cosine of the angle between the surface normal and the direction of the incident light. This phenomenon is, among others, used when creating moldings, which are a means of applying light- and dark-shaded stripes to a structure or object without having to change the material or apply pigment. The contrast of dark and light areas gives definition to the object.
Moldings are strips of material with various cross-sections used to cover transitions between surfaces or for decoration.

## References

1. RCA Electro-Optics Handbook, p. 18 ff.
2. Warren J. Smith, Modern Optical Engineering, McGraw-Hill, pp. 228, 256.
3. Pedrotti & Pedrotti (1993). Introduction to Optics. Prentice Hall. ISBN 0135015456.
4. Johann Heinrich Lambert, Photometria, 1760.
5. Incropera and DeWitt, Fundamentals of Heat and Mass Transfer, 5th ed., p. 710.
http://ams.org/bookstore?fn=20&arg1=gsmseries&ikey=GSM-29
# Fourier Analysis

Javier Duoandikoetxea, Universidad del País Vasco/Euskal Herriko Unibertsitatea, Bilbao, Spain

Graduate Studies in Mathematics, Volume 29; 2001; 222 pp; hardcover. ISBN-10: 0-8218-2172-5. ISBN-13: 978-0-8218-2172-5. List Price: US$41. Member Price: US$32.80. Order Code: GSM/29.

Fourier analysis encompasses a variety of perspectives and techniques. This volume presents the real variable methods of Fourier analysis introduced by Calderón and Zygmund. The text was born from a graduate course taught at the Universidad Autónoma de Madrid and incorporates lecture notes from a course taught by José Luis Rubio de Francia at the same university. Motivated by the study of Fourier series and integrals, classical topics are introduced, such as the Hardy-Littlewood maximal function and the Hilbert transform. The remaining portions of the text are devoted to the study of singular integral operators and multipliers. Both classical aspects of the theory and more recent developments, such as weighted inequalities, $$H^1$$, $$BMO$$ spaces, and the $$T1$$ theorem, are discussed.

Chapter 1 presents a review of Fourier series and integrals; Chapters 2 and 3 introduce two operators that are basic to the field: the Hardy-Littlewood maximal function and the Hilbert transform. Chapters 4 and 5 discuss singular integrals, including modern generalizations. Chapter 6 studies the relationship between $$H^1$$, $$BMO$$, and singular integrals; Chapter 7 presents the elementary theory of weighted norm inequalities. Chapter 8 discusses Littlewood-Paley theory, which had developments that resulted in a number of applications. The final chapter concludes with an important result, the $$T1$$ theorem, which has been of crucial importance in the field.

This volume has been updated and translated from the Spanish edition that was published in 1995. Minor changes have been made to the core of the book; however, the sections "Notes and Further Results" have been considerably expanded and incorporate new topics, results, and references. It is geared toward graduate students seeking a concise introduction to the main aspects of the classical theory of singular operators and multipliers. Prerequisites include basic knowledge in Lebesgue integrals and functional analysis.

Readership: Graduate students and research mathematicians interested in Fourier analysis.

Reviews

"This is a great introductory book to Fourier analysis on Euclidean spaces and can serve as a textbook in an introductory graduate course on the subject ... The chapters on the Hardy-Littlewood maximal function and the Hilbert transform are extremely well written ... this is a great book and is highly recommended as an introductory textbook to Fourier analysis. The students will have a lot to benefit from in the simple and quick presentation of the book."
-- Mathematical Reviews

• Fourier series and integrals
• The Hardy-Littlewood maximal function
• The Hilbert transform
• Singular integrals (I)
• Singular integrals (II)
• $$H^1$$ and $$BMO$$
• Weighted inequalities
• Littlewood-Paley theory and multipliers
• The $$T1$$ theorem
• Bibliography
• Index
http://physics.stackexchange.com/questions/33184/why-lambda-phi4-theory-where-lambda0-is-not-bounded-from-below/33186
# Why is $\lambda\phi^4$ theory, where $\lambda>0$, not bounded from below?

Why does the following interaction, in QFT, $$\displaystyle{\cal L}_{\rm int} ~=~\frac{\lambda}{4!}\phi^4$$ where $\lambda$ is positive, represent a theory that is unstable (or unbounded from below, as it is usually said in textbooks)? How can one show that explicitly?

- See e.g. A. Zee, QFT in a Nutshell, p.174. – Qmechanic♦ Jul 31 '12 at 10:04
- @Qmechanic I do not understand what Zee means exactly. He simply says one sign means repulsion, an opposite sign means attraction. Do we consider repulsion an unstable solution? Why so? – Revo Jul 31 '12 at 21:12
- Please note that the coupling constant $\lambda$ in OP's question (v2) is opposite of what is used in Zee's book and drake's answer (v2). – Qmechanic♦ Jul 31 '12 at 21:30
- Revo: No, it is attraction that corresponds to an unstable situation. – Qmechanic♦ Jul 31 '12 at 21:33
- @Qmechanic OK, thanks. So my question is why the unstable solution is problematic (regardless of whether it means repulsion or attraction, since repulsion and attraction are each physical. What is wrong with either one? Aren't both equally physical?) – Revo Jul 31 '12 at 21:36

## 2 Answers

$\cal H \sim \frac{\lambda}{4!} \phi ^4$ (note that this term goes with the opposite sign in the Lagrangian). $\lambda$ has to be real because of unitarity and has to be positive because of vacuum stability or, equivalently, because the Hamiltonian must be bounded from below. If $\lambda$ were negative, the larger the value of $\phi$, the more negative the Hamiltonian, and therefore no vacuum state (or ground state) could exist.

Now, at the quantum level, even if $\lambda$ is positive at the classical level, the vacuum can be unstable or metastable. This can happen if the renormalization group drives the classical positive value to a negative one. To know if this is the case you have to know the complete theory. For instance, the measured value of the quartic self-coupling of the Higgs field leads to a non-stable vacuum at high enough energies. See this: Measured Higgs mass and vacuum stability

- 3 In pure $\phi^4$ theory, the RG flow can't cross zero; at this point the theory becomes noninteracting, and the RG stops. In the standard model, there are other interactions to keep the Higgs coupling flowing. The wrong-sign theory, the unstable one, is asymptotically free (this was discovered by Symanzik around 1970 and motivated him to use this theory as a model for deep-inelastic scattering). – Ron Maimon Jul 31 '12 at 7:12
- @RonMaimon "The statements of Gross, Wilczek, and Politzer (which are reflected in the reasoning of [6, 7] and unfortunately shared by a great majority of contemporary scientists due to the way theoretical physics is presently taught in textbooks), which were interestingly written after the publication of Symanzik’s manuscript [23], made use of the here not applicable assumption of an underlying Hermitian quantum field theory and were obviously more guided by intuition rather than a rigorous proof." – Revo Jul 31 '12 at 21:06
- @Revo: The instability is real, but hard to demonstrate rigorously because of the difficulty with wavefunctions of quantum fields. I will provide a proof. Symanzik was doing perturbation theory, and was always aware that the model was unstable, but he knew something like this was responsible for deep-inelastic, so he studied it anyway.
– Ron Maimon Aug 4 '12 at 4:33

The reason attractive $\lambda \phi^4$ is unphysical is that a sufficient density of the $\phi$ particles has a self-interaction which compensates for their mass-energy, so it costs less energy to make a condensate of particles with a large density than to leave the vacuum alone. This means that the vacuum will spontaneously decay by a monstrous explosion in a bubble into a state where the field is rolling off to plus or minus infinity.

To see this, you can consider the energy of the classical field state $$\phi = C$$ which is $$a C^2 - \lambda C^4$$ and is unbounded below. The problem with making this rigorous (although it is completely persuasive) is that it is hard to write down wave-functions for quantum fields which have finite energy density. You usually use the path integral to define this. While it is physically obvious that a wavefunction for the field $\phi$ which is peaked at a large constant value will have arbitrarily negative energy, constructing such a thing is a nightmare, because you need to control the short-distance correlations in the wavefunction to make sure that they don't have infinite energy in the ultraviolet, which is a pain.

But there is a simple way around this, which is how everyone analyzes vacuum stability today, after Coleman. Use the path integral to show that there is an instanton which leads to vacuum decay. In this case, the Euclidean action is $$|\partial_t \phi|^2 + |\nabla \phi|^2 + a \phi^2 - \lambda \phi^4$$ You then note that this can be viewed as a classical system with a potential $$V(\phi) = -|\nabla\phi|^2 - a \phi^2 + \lambda \phi^4$$ and the classical equations of motion for this thing have a closed zero-energy solution where $\phi$ oscillates to a big value in a region, until it hits the $\lambda\phi^4$ wall, and comes back. The contribution of the instanton is to give a rate for nucleation out of the $\phi=0$ vacuum, in a way calculated in detail in Coleman's "Aspects of Symmetry". The essential point is that fluctuations around the zero-energy solution have one negative eigenvalue, so a negative determinant, so that the square root of the determinant has an imaginary part, leading to slow oscillatory behavior in imaginary time, which is a decay rate in real time.

But I will use it in a much easier way to argue that the theory doesn't have a stable vacuum. Supposing the vacuum is stable, then the vacuum wavefunction is the probability of finding a given $\phi$ configuration in any constant time-slice in imaginary time, using the imaginary-time action. This is a well-known path-integral relation. But the imaginary-time probability distribution for field values is of the form $e^{- S}$ where S is unbounded below! So the field $\phi$ has no ground state wavefunction which is normalized in the usual sense.

I gave this answer, even though Drake's was sufficient, because you don't seem convinced.

- I think it is in general better to argue from increasing of entropy rather than from decreasing of energy, since energy is conserved because of time translational invariance. I know this is a small thing but sometimes brings confusion. – drake Aug 4 '12 at 6:04
- @drake: I agree, perhaps that wasn't clear in the first parts of the answer. But the instanton conserves energy, dumping more and more heat into a bubble-wall moving out, so you are right.
The argument I gave is from non-normalizability of the Euclidean probability distribution, which contradicts the well-definedness of the Euclidean theory, and this doesn't involve dynamics, it just says "no vacuum here". – Ron Maimon Aug 4 '12 at 6:30
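As a purely illustrative companion to the answers above: evaluating the classical energy density for a constant field configuration φ = C with the wrong-sign quartic term, in the notation a C² − λ C⁴ used in the second answer, shows the unboundedness directly. The coefficients below are arbitrary example values, not taken from the thread.

```python
# Classical energy density of a constant field phi = C with an attractive
# (wrong-sign) quartic term, as in the answer above: E(C) = a*C**2 - lam*C**4.
a, lam = 1.0, 0.1          # illustrative coefficients only

def energy_density(C):
    return a * C**2 - lam * C**4

for C in [0.0, 1.0, 2.0, 3.0, 5.0, 10.0, 100.0]:
    print(C, energy_density(C))
# Once C exceeds sqrt(a/lam) the quartic term dominates and E(C) falls without
# bound, so there is no minimum-energy configuration: "unbounded from below".
```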
http://mathhelpforum.com/calculus/187288-finding-unit-vectors-perpendicular-points.html
# Thread: 1. ## Finding unit vectors perpendicular to points. Find unit vectors perpendicular to a = (1, 3, -1) and b = (2, 0, 1). I understand that the cross product of two vectors will give a vector that is perpendicular to them. So I did the cross product and I have 3i - 3j - 6k. How do I find more vectors that are perpendicular? 2. ## Re: Finding unit vectors perpendicular to points. Originally Posted by deezy Find unit vectors perpendicular to a = (1, 3, -1) and b = (2, 0, 1). I understand that the cross product of two vectors will give a vector that is perpendicular to them. So I did the cross product and I have 3i - 3j - 6k. Here are the two unit vectors: $\pm\frac{3i-3j-6k}{\|3i-3j-6k\|}$ 3. ## Re: Finding unit vectors perpendicular to points. How do you know that those vectors are perpendicular, and does 3i - 3j - 6k count? 4. ## Re: Finding unit vectors perpendicular to points. Originally Posted by deezy How do you know that those vectors are perpendicular, and does 3i - 3j - 6k count? Read the question. Find unit vectors perpendicular to a & b.
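A quick numerical restatement of the approach in the replies above (this sketch is an illustration, not part of the original thread): compute the cross product of a and b, then normalize it; the two unit vectors perpendicular to both a and b are ±n/‖n‖.

```python
import numpy as np

a = np.array([1.0, 3.0, -1.0])
b = np.array([2.0, 0.0, 1.0])

n = np.cross(a, b)                 # -> [ 3. -3. -6.], i.e. 3i - 3j - 6k
unit = n / np.linalg.norm(n)       # one unit normal; -unit is the other

print(n, unit, -unit)
print(np.dot(unit, a), np.dot(unit, b))   # both ~0: perpendicular to a and b
```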
http://math.stackexchange.com/questions/82828/elements-of-k-which-are-separable-over-f-form-a-subfield-of-k?answertab=active
# Elements of $K$ which are separable over $F$ form a subfield of $K$? I'm trying to prove the following statement: If $K$ is an extension of $F$ prove that the set of elements in $K$ which are separable over $F$ forms a subfield of $K$. I have a proof for the set of algebraic elements in $K$ forming a subfield, but I'm stuck with the separable elements. I'm assuming I should be splitting this into cases of characteristic = 0 and characteristic $\neq$ 0. Any help? I'm at a loss. - – Zev Chonoles♦ Dec 17 '11 at 7:00 ## 1 Answer Characteristic $0$ is immediate, since all elements are separable (irreducible polynomial has no repeated roots). For characteristic $p$, let $\alpha$ and $\beta$ be separable. The characteristic polynomial of $\beta$ over $F(\alpha)$ divides the characteristic polynomial of $\beta$ over $F$, hence has no repeated roots; so $\beta$ is separable over $F(\alpha)$. Now consider the tower $F(\alpha)\subseteq F(\alpha+\beta)\subseteq F(\alpha,\beta)$. Looking at the separable degrees, we have: $$[F(\alpha,\beta):F]_s = [F(\alpha,\beta),F(\alpha+\beta)]_s[F(\alpha+\beta):F]_s.$$ But $$[F(\alpha,\beta):F]_s = [F(\alpha,\beta):F(\alpha)]_s[F(\alpha):F]_s = [F(\alpha,\beta):F(\alpha)][F(\alpha):F] = [F(\alpha,\beta):F],$$ and $[F(\alpha,\beta):F(\alpha+\beta)]_s\leq [F(\alpha,\beta):F(\alpha+\beta)]$, $[F(\alpha+\beta):F]_s \leq [F(\alpha+\beta):F]$. So in order to get $$\begin{align*} [F(\alpha,\beta):F]&=[F(\alpha,\beta):F]_s \\ &= [F(\alpha,\beta),F(\alpha+\beta)]_s[F(\alpha+\beta):F]_s\\ &\leq [F(\alpha,\beta):F(\alpha+\beta)][F(\alpha+\beta):F]\\ &= [F(\alpha,\beta):F],\end{align*}$$ we must have equality throughout, hence $\alpha+\beta$ is separable over $F$. The same argument holds for $\alpha\beta$. - Thanks very much. I'm also wondering if there's a way to argue without using degree-arguments. This answer is very helpful though. – peter Nov 16 '11 at 22:45 Dear Arturo, I think there is some problem in your proof because $F(\alpha)\subseteq F(\alpha+\beta)$ doesn't hold in general (take $\beta=-\alpha$). – QiL'8 Nov 16 '11 at 22:56 @QiL: Oops; that should be $F$ itself. Thanks! – Arturo Magidin Nov 17 '11 at 5:25 1 @peter: First, there was a mistake, as noted by QiL; it's been fixed. As to your question: what precisely is your definition of separable? there are many equivalent ways of defining it. You could show that $\alpha+\beta$ is separable over $F(\alpha)$ (its irreducible polynomial is a translate of the irreducible of $\beta$) and then use the same constructive argument used to prove algebraic extension of algebraic extension is algebraic to show $\alpha+\beta$ satisfies a polynomial with no repeated roots and coefficients in $F$. Similarly with $\alpha\beta$. – Arturo Magidin Nov 17 '11 at 5:33
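As a small, purely illustrative check of the notion used above — an element is separable when its irreducible polynomial has no repeated roots, which can be tested with the standard derivative criterion gcd(p, p′) = 1 — one can compute minimal polynomials for a concrete characteristic-zero example with SymPy. The choice α = √2, β = √3 is just a convenient sample and is not from the original thread.

```python
from sympy import sqrt, symbols, minimal_polynomial, gcd, diff

x = symbols('x')
alpha, beta = sqrt(2), sqrt(3)      # sample separable elements over Q

for elem in (alpha, beta, alpha + beta, alpha * beta):
    p = minimal_polynomial(elem, x)
    # Derivative test: p has no repeated roots iff gcd(p, p') is constant.
    print(p, gcd(p, diff(p, x)) == 1)
# alpha + beta has minimal polynomial x**4 - 10*x**2 + 1; every gcd above is 1,
# so each minimal polynomial has distinct roots, as the theorem guarantees for
# sums and products of separable elements.
```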
http://physics.stackexchange.com/tags/rotational-dynamics/hot?filter=month
# Tag Info

## Hot answers tagged rotational-dynamics

5 ### What do people actually mean by “rolling without slipping”? You can always decompose a motion like this into two parts: (1) rolling without slipping and (2) slipping without rolling. What is slipping without rolling? It means the object moves uniformly in one direction along the surface, with no angular velocity about the object's own center of mass. For instance, a box that is pushed along the ground can easily ...

2 ### Confusions about rotational dynamics and centripetal force There are many things wrong in your concepts. Let's attend to them one by one. A body without a force acting on it can never rotate, as its velocity is changing at each instant, hence its momentum is changing at each instant. But to maintain constant velocity, the force must be such that it never does any work, so as to maintain the constancy of kinetic ...

2 ### Angular momentum conservation while internal frictional torque is present How can we apply angular momentum conservation when friction is present? Why not? If we have a closed system, momentum and angular momentum are conserved. In this case, the full system is disk A and disk B, and there are no external forces, so the system is closed. There are internal forces, namely in this case, friction, but that doesn't matter. You ...

1 ### Physics of the point of contact for a spinning top It depends on the friction of the contact. With a frictionless plane the top would precess around its center of gravity and the contact point will prescribe a circle. Add friction, and the friction force translates the center of gravity the same way tire traction translates a car. Here you have the cases of a) pure rolling, or b) rolling with slipping. ...

1 ### What do people actually mean by “rolling without slipping”? If the wheel is rolling without slipping, what's the velocity of the point at the base of the wheel?? It is... zero! Convince yourself that the velocity must be zero. Since if it wasn't zero, the wheel wouldn't be rolling without slipping. So far the explanation is correct. "No slipping" refers really to some non-zero interval of time, and to the state of ...

1 ### What do people actually mean by “rolling without slipping”? Basically, it means at each instant the bottom-most point has $0$ velocity; it doesn't mean that the point has no acceleration. But at an instant it has $0$ velocity. And because of that, at each instant $v_{cm}=\omega r$ for the bottom-most point, and if this doesn't happen, then static friction acts to make it $0$. It's like suppose you are walking ...

1 ### What do people actually mean by “rolling without slipping”? The relative speed of the point of contact of the rolling body w.r.t. the surface on which it rolls is zero. If the surface is at rest then the velocity of the point of contact of rolling body and surface is zero. Mathematically: $$v_1 -\omega R=v_2$$ Also we can get the relation in accelerations ..... Differentiate the above eq. ...

1 ### Calculating the moment of inertia for a circle with a point mass on its perimeter You don't need to apply Steiner's theorem to the point mass. The point mass finds itself at a distance (apparently) $R$ from the x-axis. Since the moment of inertia is an extensive quantity, you can simply add all moments of inertia. There's the moment of inertia of the solid disk with respect to its diameter. You have to 'Steiner' that away from a distance ...
1 ### Rotational Dynamics Firstly, the definition of torque is $\vec{r}\times \vec{F}$ and of angular momentum $\vec{r}\times \vec{p}$. And now, w.r.t. your frame, $\vec{F}$, $\vec{p}$ and $\vec{r}$ are all relative, but Newton's second law of rotation holds for all frames. Because all points are just frames, and to maintain the distances in a frame, you've to move with that frame, ...

1 ### Newton's Second Law Equivalent in rotational dynamics I'll expand my comment into an answer. I would take $\mathbf{T}=d\mathbf{L}/dt$ as the definition of torque, but it sounds like the OP takes $\mathbf{T}=\mathbf{r}\times\mathbf{F}$ as the definition. Either way, we need to prove that the two expressions are equivalent for a system of particles. The total angular momentum is $\mathbf{L}_{tot}=\sum\dots$
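As a worked illustration of the angular-momentum-conservation answer excerpted above (the numbers, and the assumption that the two disks end up rotating together, are illustrative and not from the original posts):

```python
# Two disks on a common axle; friction between them is an internal torque, so
# the total angular momentum I_A*w_A + I_B*w_B is conserved even though
# kinetic energy is not. Assumes the disks end up co-rotating.
I_A, w_A = 2.0, 10.0        # kg*m^2, rad/s (illustrative values)
I_B, w_B = 1.0, -4.0

L_total = I_A * w_A + I_B * w_B
w_final = L_total / (I_A + I_B)

ke = lambda I, w: 0.5 * I * w * w
print(w_final)                                   # common final angular velocity
print(L_total, (I_A + I_B) * w_final)            # angular momentum unchanged
print(ke(I_A, w_A) + ke(I_B, w_B),               # energy before...
      ke(I_A + I_B, w_final))                    # ...exceeds energy after (lost to friction)
```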
http://jdh.hamkins.org/gapforcing/
Gap forcing Posted on September 25, 2011 by • J. D. Hamkins, “Gap forcing,” Israel J. Math., vol. 125, pp. 237-252, 2001. ````@article{Hamkins2001:GapForcing, AUTHOR = {Hamkins, Joel David}, TITLE = {Gap forcing}, JOURNAL = {Israel J. Math.}, FJOURNAL = {Israel Journal of Mathematics}, VOLUME = {125}, YEAR = {2001}, PAGES = {237--252}, ISSN = {0021-2172}, CODEN = {ISJMAP}, MRCLASS = {03E40 (03E55)}, MRNUMBER = {1853813 (2002h:03111)}, MRREVIEWER = {Renling Jin}, DOI = {10.1007/BF02773382}, URL = {http://dx.doi.org/10.1007/BF02773382}, eprint = {math/9808011} }```` Many of the most common reverse Easton iterations found in the large cardinal context, such as the Laver preparation, admit a gap at some small delta in the sense that they factor as $P*Q$, where $P$ has size less than $\delta$ and $Q$ is forced to be $\delta$-strategically closed. In this paper, generalizing the Levy-Solovay theorem, I show that after such forcing, every embedding $j:V[G]\to M[j(G)]$ in the extension which satisfies a mild closure condition is the lift of an embedding $j:V\to M$ in the ground model. In particular, every ultrapower embedding in the extension lifts an embedding from the ground model and every measure in the extension which concentrates on a set in the ground model extends a measure in the ground model. It follows that gap forcing cannot create new weakly compact cardinals, measurable cardinals, strong cardinals, Woodin cardinals, strongly compact cardinals, supercompact cardinals, almost huge cardinals, huge cardinals, and so on. This entry was posted in Publications by Joel David Hamkins. Bookmark the permalink.
http://www.sciforums.com/showthread.php?113616-If-space-time-is-modeled-as-the-torsor-of-translations-rotations-and-changes-of-the&p=2936030
# Thread:

1. ## If space-time is modeled as the torsor of translations, rotations and changes of the

Thinking back to this old theme of mine: http://www.sciforums.com/showthread....56#post2039656 I was trying to get back into the fundamental mathematical framework of relativity (Galilean and Special). -- If space-time is modeled as the torsor of translations, rotations and changes of the standard of rest that preserve the law of inertia, what corresponds to the Lorentz transform? The question is a little ill-formed. Questions/comments welcome. If translations is the Lie group $R^4$ then space-time is 4-dimensional and every pair of events has an associated element that translates between them. The law of inertia requires existence of inertial paths, "straight" lines through space-time where $\vec{u}$ is constant in $\forall k \in \mathbb{R} \; k ( \Delta \vec{x} ) = k ( \Delta t ) \vec{u}$. So for any unit time, $t_0$, we have the event separation $( t_0, t_0 \vec{u} )$ that is naturally associated with an element of the translations. For any two velocities, there should be an element of the rotations (say SO(3)) that makes them parallel. What is needed to complete the picture is a way to "translate" between velocities that are parallel. Some choices are SO(1,1), and Galilean boosts. Questions. Is SO(3,1) a product of SO(3) and SO(1,1)? Is this effort doomed due to misunderstanding?

2. Originally Posted by rpenner Thinking back to this old theme of mine: http://www.sciforums.com/showthread....56#post2039656 I was trying to get back into the fundamental mathematical framework of relativity (Galilean and Special). IMHO you need to go more fundamental than that, rpenner. The mathematical framework is modelling reality, but in itself it isn't fundamental. Originally Posted by rpenner If space-time is modeled as the torsor of translations, rotations and changes of the standard of rest that preserve the law of inertia, what corresponds to the Lorentz transform? The question is a little ill-formed. Questions/comments welcome. I'll assume we're talking about a simple boost in the x direction for now: it's a trigonometric tilt in a mathematical space. You can call it a rotation, but that doesn't get to the bottom of it. A Lorentz transform occurs when you change your state of motion, which alters the results of your use of motion to measure space and time. To appreciate this, set the mathematics aside for a moment, and get down to the fundamentals of light moving through space. Remember that clocks "clock up" local motion and show you some cumulative display you call the time. Now take a look at the well-worn expression for a spacetime interval in flat Minkowski spacetime: $ds^2 = -dt^2 + dx^2 + dy^2 + dz^2$ It's related to Pythagoras' theorem, used in the Simple inference of time dilation due to relative velocity. We've got two parallel-mirror light clocks, one in front of me, the other which we've sent with you on an out-and-back trip. I see the light moving like this ǁ in my local clock and like this /\ in the moving clock.
Treat one side of the angled path as a right-angled triangle, and the hypotenuse is the lightpath where c=1 in natural units, the base is the speed v as a fraction of c, and the height gives the Lorentz factor γ = 1/√(1-v²/c²). Notice there's no literal time flowing in these clocks, merely light moving at a uniform rate through the space of our SR universe. From which we plot straight worldlines through the abstract mathematical space we call Minkowski spacetime. Also note that the underlying reality behind the invariant spacetime interval between the start and end events is that the two light-path lengths are the same. Macroscopic motion comes at the cost of a reduced local rate of motion, because the total rate of motion is c. Hence the minus in front of the t. It really is that simple. The Lorentz transform occurs when you change x from zero to some non-zero positive value. Your worldline is initially vertical, it ends up tilted. But it is restricted, it can only tilt so far. That's when the hypotenuse flatlines and your worldline is at 45 degrees. Originally Posted by rpenner If translations is the Lie group $R^4$ then space-time is 4-dimensional and every pair of events has an associated element that translates between them. It's a light-path length. The light can take many paths, but all the path lengths are the same. It doesn't matter how you move through space, our departure is event 1 and our subsequent meeting is event 2, and the total light path length between your parallel mirrors is the same as between mine. Think of events as collisions and you can't go far wrong. Originally Posted by rpenner The law of inertia requires existence of inertial paths, "straight" lines through space-time where $\vec{u}$ is constant in $\forall k \in \mathbb{R} \; k ( \Delta \vec{x} ) = k ( \Delta t ) \vec{u}$. So for any unit time, $t_0$, we have the event separation $( t_0, t_0 \vec{u} )$ that is naturally associated with an element of the translations. Forget about the law of inertia. I'll tell you about inertia another day. There's a symmetry to it that for some weird reason just isn't in the textbooks, as if E=mc² never happened. Bizarre. Originally Posted by rpenner For any two velocities, there should be an element of the rotations (say SO(3)) that makes them parallel. What is needed to complete the picture is a way to "translate" between velocities that are parallel. Some choices are SO(1,1), and Galilean boosts. Questions. Is SO(3,1) a product of SO(3) and SO(1,1) ? Is this effort doomed due to misunderstanding? No, you just have to start from the bottom, and make sure you understand the x-boost situation and tilting your worldline before you start swivelling it too. Understanding the mathematics isn't enough. You have to understand the why of it. 3. Clearly you haven't understood the intent or the language. When I say rotations, I am talking about rotations, as in Euclidean rotations in 3 spatial dimensions. The law of inertia becomes the law of straight lines in the model. This part, I think I understand is that for a given line, an element of the Lie group of translations transforms the line into itself. 4. Originally Posted by Farsight IMHO you need to go more fundamental than that, rpenner. The mathematical framework is modelling reality, but in itself it isn't fundamental. Watch out, Farsight is going to dazzle us with his grasp of the aforementioned mathematical framework! Originally Posted by Farsight Forget about the law of inertia. I'll tell you about inertia another day. 
Still have all the answers but can't reply to any questions, aye Farsight?

Originally Posted by Farsight There's a symmetry to it that for some weird reason just isn't in the textbooks, as if E=mc² never happened. Bizarre.

Because you're so familiar with all those textbooks, right? Come on Farsight, do you really think anyone is going to buy the "I'm well read in the physics literature!" front you're trying to put up?

Originally Posted by Farsight Understanding the mathematics isn't enough. You have to understand the why of it.

But you don't understand the maths and your 'whys' are just baseless assertions based on personal preference and, quite frankly, ignorance.

5. If $\mathcal{M}$ is the model of space-time, and $L$ is the set of all straight lines in it, and we accept as axiomatic that all straight lines in $\mathcal{M}$ admit at least two distinct points as members, that any two points uniquely identify an element of the translation group, T, and that all points along a straight line may be obtained as images of a point by a suitable translation in the same direction, then it follows that any line is associated with a subgroup of T for which the line is invariant. This is a generalization of "straight", for T could be SO(3), $\mathcal{M}$ could be an ordinary 2-sphere, and so L would be the set of great circles. Almost certainly we want T to be a real Lie group. But right now, I seem to be in a definition muddle. If $\mathcal{M}$ is the model of space-time, does it follow that any point is the image of a unique translation of another point? There are assumptions that a line can be extended indefinitely in both directions, and that a straight line in the model corresponds to uniform motion in space-time. Should I nail T down to be a connected real Lie group, or will it be sufficient for my purposes to suppose merely that it is a group? I think I need to sketch out a map of where I am going with this and in my next post attempt to start from scratch again.

6. rpenner: make sure you read Light is heavy by van der Mark and 't Hooft. (That's not the 't Hooft by the way). The paper will help you to understand inertia. Then read The Other Meaning of Special Relativity by Robert Close. This will help you to understand the underlying physics. It's rather trivial actually, schoolboy stuff. Once you have this understanding, you won't be "lost in maths" any more.

7. Originally Posted by Farsight This will help you to understand the underlying physics.

Because it's helped you so much.

Originally Posted by Farsight Once you have this understanding, you won't be "lost in maths" any more.

Except that it isn't just getting 'lost in maths': the sorts of mathematical things Rpenner is mentioning are central constructs within much of mathematical physics and provide a precise and concrete formal description of structures seen in nature. For example, Lie groups underlie symmetries and thus allow us to compute conserved quantities. The same groups describe how particles should interact with one another and themselves, and not just in a "Gluons interact with gluons but photons don't interact with photons" sense. Instead you can compute actual behaviours of particles to a very accurate level which can then be tested. And making precise predictions is an important part of physics, as you keep whining about when it comes to string theory.
The irony of you saying such things, along with throwing out comments like "I'll explain inertia to you one day", while simultaneously being unable to provide a single quantitative prediction is not lost on other people. But feel free to show I'm mistaken in my assessment of your claims. If you can 'explain inertia' and whatnot, perhaps you could provide everyone with a working model of some phenomena pertaining to inertia? Perhaps you'd like to derive conservation of linear momentum from your 'explanation'? After all, the use of Poincare invariance in the mathematical formalism Rpenner is talking about, which you consider to be 'getting lost in maths', provides such a prediction from first principles. So before you start giving advice to others perhaps you could show you're at least on their level.

8. Originally Posted by rpenner If space-time is modeled as the torsor of translations, rotations and changes of the standard of rest that preserve the law of inertia, what corresponds to the Lorentz transform? The question is a little ill-formed.

One of the ways it is ill-formed is that it starts to give you an appreciation for the physical difference between a Lie group and a Lie algebra. I'm sorry I haven't written up my thoughts in a coherent form, and it looks like I won't have time to do it today, either.

9. Actually, I was hoping to be able to answer the question myself, but I'm scraping the limits of my understanding and will probably have to grow in knowledge to complete it. My algebra and physics books are weak on Lie algebras and Lie groups. Right now, I'm favoring: "If space-time is modeled as the torsor of the group generated by the Lie algebra of Euclidean translations, Euclidean rotations and changes of the standard of rest that preserve the law of inertia and generally preserve the straightness of lines, what corresponds to the Lorentz transform?" What I think you wind up with is a test theory of a local Galilean or Lorentzian manifold, and then you bracket it with experimental results to show that a local Lorentzian manifold is the only physical result. However,

• If I can prove the straightness of lines follows from the law of inertia, I can drop that assumption.
• I need to get a better handle on Lie algebra notation, so that I can talk about the products of the Lie algebras corresponding to R, R^3, SO(3) and the "change of the standard of rest"
• I think if I drop Euclidean from translations and rotations, the preservation of straight lines (which is defined in terms of translations) might not be strong enough. But there's a chance that dropping Euclidean will be interesting on its own

10. If $\mathcal{M}$ is a model of space-time, and $\mathcal{L} \subset \mathcal{P}( \mathcal{M})$ is the set of all straight lines in that model, such that if $t_i$ is an element of an indexed set of generators for the algebra of translations, that there exists some non-zero linear combination of generators $t_L = \sum \alpha_i t_i$ associated with the line $L \in \mathcal{L}$ such that $\forall x \in L \quad t_L \cdot x \in L$ and $\forall t_i \quad ( ! \exists k \; k t_i + t_L = 0 ) \Rightarrow t_i \cdot x \not\in L$.
Sorry for the notation, but I am still working with elements of a basis and don't have a way to express some of my thoughts without subscripts at this time. 12. .... OK, you know you are doing it wrong when you prove $\mathfrak{so}(3)$ isn't a Lie algebra. ... grumble. 13. Feel free to start another thread Farsight. I'm looking forward to learning a bit about curvature now. #### Posting Permissions • You may not post new threads • You may not post replies • You may not post attachments • You may not edit your posts • • BB code is On • Smilies are On • [IMG] code is On • [VIDEO] code is On • HTML code is Off Forum Rules All times are GMT -5. The time now is 01:51 PM. sciforums.com Powered by vBulletin® Copyright © 2012 vBulletin Solutions, Inc. All rights reserved. Copyrights reserved by SciForums 1996-2012
http://physics.stackexchange.com/questions/tagged/photon?sort=faq&pagesize=15
# Tagged Questions The photon tag has no wiki summary. 3answers 1k views ### Scattering of light by light: experimental status Scattering of light by light does not occur in the solutions of Maxwell's equations (since they are linear and EM waves obey superposition), but it is a prediction of QED (the most significant Feynman ... 2answers 585 views ### Does a photon exert a gravitational pull? I know a photon has zero rest mass, but it does have plenty of energy. Since energy and mass are equivalent does this mean that a photon (or more practically, a light beam) exerts a gravitational pull ... 4answers 346 views ### Does $p=mc$ hold for photons? Known that $E=hf$, $p=hf/c=h/\lambda$, then if $p=mc$, where $m$ is the (relativistic) mass, then $E=mc^2$ follows directly as an algebraic fact. Is this the case? 3answers 274 views ### If electromagnetic fields give charge to particles, do photons carry charge? As I understand these two statements: An electromagnetic field gives particles charge A photon is a quantum of electromagnetic field It must mean that a photon carries charge. But I guess it isn't ... 1answer 162 views ### Expression for the (relativistic) mass of the photon [closed] I started learning a bit ahead from an old physics book, and they were discussing the photoelectric effect and after that Planck's hypotheses and energy quantas. The book said that the mass of a ... 3answers 711 views ### Amplitude of an electromagnetic wave containing a single photon Given a light pulse in vacuum containing a single photon with an energy $E=h\nu$, what is the peak value of the electric / magnetic field? 1answer 161 views ### Will photon's energy be exactly same after million years? If photon will travel for million years without collisions, what subtle effects can be accumulated ? Gravity fields affect trajectory, but is energy completely intact after fly by ? Photon has its ... 4answers 271 views ### Why does a photon colliding with an atomic nucleus cause pair production? I understand that the photon needs to have enough energy to produce a lepton and it's antimatter partner, and that all of the properties are conserved, but why does the photon do this in the first ... 0answers 86 views ### Does the passage of time effect a photons entanglement with another? I recently read an article about "Delayed-choice entanglement swapping". Here is an excerpt from the article: Delayed-choice entanglement swapping consists of the following steps. (I use the ... 0answers 42 views ### If photons don't have mass then why do we calculate their momenum? [duplicate] As much I know photon do not have any mass. But while studying my course book saw a topic which included calculation of momentum of photons. I was wondering why was that. Please clear my confusion? 0answers 100 views ### Why angular momentum applies to emitted photons, and how it affects the emitting atom's quantized system From what I've read, photons have spin of 1 (I guess possible by their relativistic mass), and when a photon is emitted from an atom, the production of this spin affects the balance of the atom's ...
http://mathoverflow.net/questions/39078/matrix-multiplication
## Matrix multiplication

Let I(n) and U(n) be the number of steps needed to invert an $n\times n$ matrix and an $n\times n$ upper triangular matrix respectively. Can we prove I(n) <= cU(n), where c is some constant?

## 1 Answer

The answer is probably No, if you wish a constant independent of $n$. On the one hand, the naive method gives the optimal result $U(n)=n^2$. On the other hand, it is known that the complexity of inversion and that of matrix multiplication are the same (see for instance the second edition, to appear soon, of my book Matrices: Theory and Applications, GTM 216, Springer-Verlag, 2010). If the answer to your question is positive, this implies therefore that matrix multiplication can be done in $O(n^2)$ operations. This is highly unlikely. The state of the art tells us that it can be done in $O(n^{2.376})$ operations. Optimists believe that it could be done in $O(n^{2+\epsilon})$ for every $\epsilon>0$, but not in $O(n^2)$.

- I never would have guessed that inversion and multiplication have the same complexity. Is it a Fourier-ish trick, or something else entirely? – Matt Noonan Sep 17 2010 at 12:16
- 1 Matt: Inversion can be done by multiplying with so-called "Gauss transforms", which is nothing more than the matrix formalism of the steps done in row reduction/Gaussian elimination. Thus, for any "fast" way of multiplying matrices, there would be a corresponding "fast" way of inverting them. – J. M. Sep 17 2010 at 12:42
- 3 @Matt. Conversely, inversion of $3n\times 3n$ matrices can be used to multiply $n\times n$ matrices with the same complexity, up to a universal constant. If $A$ and $B$ are given, just invert the block triangular matrix whose diagonal is ($I_n$ $I_n$ $I_n$) and is bordered by the diagonal ($A$ $B$); the other blocks are $0_n$'s. – Denis Serre Sep 17 2010 at 13:37
- Denis, I'm confused. To me, the naive method of inverting an upper triangular matrix has complexity $n^3$. Moreover, your answer seems to show that the answer is YES: You have just shown that $U(3n)$ bounds $n \times n$ matrix multiplication. – David Speyer Jul 11 at 14:14
- @David. You're right. I confused solving $Ux=b$ with inverting $U$. – Denis Serre Jul 11 at 14:40
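The block-triangular trick mentioned in the comments is easy to check numerically. The sketch below is illustrative only (the helper name is mine, not from the thread): it builds the 3n×3n block matrix with identity blocks on the diagonal and A, B on the superdiagonal, inverts it once, and reads the product A·B out of the upper-right block of the inverse.

```python
import numpy as np

def multiply_via_inversion(A, B):
    """Recover A @ B from a single inversion of the block matrix
       [[I, A, 0], [0, I, B], [0, 0, I]],
       whose inverse is [[I, -A, A@B], [0, I, -B], [0, 0, I]]."""
    n = A.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    M = np.block([[I, A, Z],
                  [Z, I, B],
                  [Z, Z, I]])
    return np.linalg.inv(M)[:n, 2 * n:]   # upper-right block equals A @ B

rng = np.random.default_rng(0)
A, B = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
print(np.allclose(multiply_via_inversion(A, B), A @ B))   # True
```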