http://mathhelpforum.com/statistics/158976-conditional-probability-family-blue-eyes.html
# Thread:

1. ## Conditional probability - family with blue eyes

Hi all, I'm looking for an answer to the following exercise. I've worked on it some but I'm stuck. There is a family with 5 children, and the probability that any child has blue eyes is 1/4. If you know that at least 1 has blue eyes, what is the probability that 3 or more have blue eyes? (I think I got this part, below.) The next part is: if you know that the youngest child in the family has blue eyes, what is the probability that 3 or more have blue eyes? Supposedly these answers are different.

I think for the first part you can say that the probability that $n$ children have blue eyes is: $P_n = {5 \choose n}\left(\frac{1}{4}\right)^n\left(\frac{3}{4}\right)^{5-n}$

So the answer to the first part is, I think, a conditional probability: ${P_3 + P_4 + P_5 \over P_1+P_2+P_3 + P_4 + P_5} = \frac{106}{781}$

The textbook has the answer for the second part as .25, but I don't see it. It seems to me you could just ignore the youngest child, treat it now like a family of four, and get the probability of 2, 3, or 4 children having blue eyes. But that's not .25, unless I'm calculating wrong (I get 67/256). Thanks for any help, take care, Rob

2. You got the first part correct. For the second one: at least 3 of the 5 have blue eyes if and only if at least 2 of the other 4 have blue eyes. You can add the probabilities of these outcomes: $6 \left(\frac{1}{4}\right)^{2} \left(\frac{3}{4}\right)^{2}+4\left(\frac{1}{4}\right)^{3} \frac{3}{4}+\left(\frac{1}{4}\right)^4$

3. Your result is also 67/256. Perhaps this textbook (DeGroot, Probability and Statistics, 3rd ed.) is wrong. The back of the book says .25.

4. As far as I know, it should be 0.26.
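(Added note, not part of the thread: a quick numeric cross-check of both answers, using only Python's standard library.)

```python
from math import comb

p = 1 / 4  # probability that a given child has blue eyes

# P[n]: probability that exactly n of the 5 children have blue eyes
P = [comb(5, n) * p**n * (1 - p)**(5 - n) for n in range(6)]

# Part 1: P(at least 3 | at least 1)
print(sum(P[3:]) / (1 - P[0]), 106 / 781)   # both 0.13572...

# Part 2: the youngest has blue eyes, so condition on the other 4;
# we need at least 2 of those 4 to have blue eyes.
part2 = sum(comb(4, n) * p**n * (1 - p)**(4 - n) for n in range(2, 5))
print(part2, 67 / 256)   # both 0.26171875, i.e. ~0.26 rather than 0.25
```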
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9681609869003296, "perplexity_flag": "head"}
http://nrich.maths.org/347/index?nomenu=1
I keep three circular medallions in a rectangular box in which they just fit with each one touching the other two. The smallest one has radius $4$ cm and touches one side of the box, the middle sized one has radius $9$ cm and touches two sides of the box and the largest one touches three sides of the box. What is the radius of the largest one? [This problem by R B J T Allenby appeared in the Sunday Times on 12.11.00.]
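(Added hint, not part of the original problem statement: a standard lemma that reduces configurations like this one to algebra.) If two circles of radii $a$ and $b$ are externally tangent to each other and both tangent to the same line, the distance between their points of tangency with that line is $2\sqrt{ab}$: the centers sit at heights $a$ and $b$, the center-to-center distance is $a+b$, so the horizontal gap $d$ satisfies

$$d^2 + (a-b)^2 = (a+b)^2 \quad\Longrightarrow\quad d = 2\sqrt{ab}.$$

Applying this to each tangent pair, with the sides of the box as the common tangent lines, turns the problem into a short equation in the unknown radius.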
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9441181421279907, "perplexity_flag": "head"}
http://mathoverflow.net/questions/68293?sort=votes
## Nerves of simplicial objects in categories/Waldhausen's S-construction

Is there a good nerve-like functor from simplicial objects in categories to simplicial sets which takes level-wise equivalences of categories to weak equivalences?

To give this some context, I'd like to extract a simplicial set from the Waldhausen S-construction applied to a category with cofibrations, and I realized that my standard way of taking a nerve is for simplicial categories (i.e. simplicial objects in categories for which the objects form a constant simplicial set), and this doesn't clearly apply to the S-construction.

- What about taking the nerve in each simplicial degree and then taking the diagonal of the resulting bisimplicial set? – K.J. Moi Jun 20 2011 at 16:21
- I think that does something slightly different, but it does helpfully rephrase the question: taking the levelwise nerve puts us in bisimplicial sets, and takes levelwise equivalences to levelwise homotopy equivalences, i.e. to weak equivalences (w.e.) in the Bousfield-Kan or Reedy structures. Taking the diagonal of a bisimplicial set preserves w.e. in the Moerdijk structure (by definition), but not necessarily in the BK or Reedy structures. So, I'm looking for a functor from bisimplicial sets to simplicial sets which preserves BK or Reedy w.e. – Jesse Wolfson Jun 20 2011 at 16:48
- If we consider a bisimplicial set as a simplicial object in simplicial sets then a levelwise weak equivalence induces a weak equivalence on diagonals, right? Sorry if I'm missing the point here. – K.J. Moi Jun 20 2011 at 17:39
- All simplicial sets are cofibrant (in the Quillen model structure). Do you mean with the Joyal model structure? – David Carchedi Jun 20 2011 at 17:51
- @KJ, I don't think that a levelwise w.e. necessarily induces a w.e. on diagonals. That was my point about the different model structures on bisimplicial sets. @David, thanks, you're absolutely right. – Jesse Wolfson Jun 20 2011 at 18:38

## 2 Answers

Here is an idea: try the homotopy coherent nerve. This was originally introduced, sort of, by Boardman and Vogt in a topological context, and was formulated for simplicially enriched categories (please do not use 'simplicial category', as it is ambiguous!) by Cordier in 1980. The homotopy coherent nerve is related to the bisimplicial nerve by using the codiagonal of Artin and Mazur (which has been mentioned in several of my answers!). Some details of the homotopy coherent nerve are discussed in the nLab entry on it, and there are links to further material there. A chatty discussion can be found in Kamps and Porter (again, that has been mentioned before :-))! Hope this helps.

I should mention the papers

M. Bullejos and A. Cegarra, On the Geometry of 2-Categories and their Classifying Spaces, K-Theory, 29, (2003), 211–229.

M. Bullejos and A. M. Cegarra, Classifying Spaces for Monoidal Categories Through Geometric Nerves, Canadian Mathematical Bulletin, 47, (2004), 321–331.

A. Cegarra and J. Remedios, The relationship between the diagonal and the bar constructions on a bisimplicial set, Topology and its Applications, 153, (2005), 21–51.

A. Cegarra and J. Remedios, The behaviour of the $\overline{W}$-construction on the homotopy theory of bisimplicial sets, Manuscripta Math., 124, (2007), 427–457, ISSN 0025-2611.

some of which may help; there is also related discussion in the Menagerie and in Lurie's HTT.
I don't know if it will be a full answer to your question, but there is a (diagonal) model structure on the category of simplicial objects in $\mathbf{Cat}$ (denoted by $\mathbf{sCat}$) such that a map $F_{\bullet}:C_{\bullet}\rightarrow D_{\bullet}$ between two objects in $\mathbf{sCat}$ is a fibration (weak equivalence) iff the diagonal of the level-wise nerve of the corresponding level-wise groupoids is a fibration (weak equivalence) of simplicial sets, i.e. $\mathrm{diag}~\mathrm{N}_{\bullet}\mathbf{iso}~F$ is a fibration (a weak equivalence) of simplicial sets.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9046974778175354, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/101270/lattices-in-sln-mathbb-r/101278
## Lattices in $SL(n,\mathbb R)$

If $\Gamma\subseteq SL(n,\mathbb{R})$ is a lattice (i.e. discrete and of finite covolume), does $\Gamma$ necessarily contain some $\mathbb{R}$-diagonalizable copy of $\mathbb{Z}^{n-1}$?

I know that the answer is yes if the lattice is cocompact, and that the answer is also yes in the case $\Gamma=SL(n,\mathbb Z)$. So I wonder if every lattice satisfies this property.

- What exactly does your second sentence mean??? – Igor Rivin Jul 3 at 23:46
- I've edited the question to make it clearer. Hopefully I didn't change what the OP is asking. – unknown (google) Jul 4 at 0:16
- Ah, much better! – Igor Rivin Jul 4 at 0:37
- That is what I meant, thanks for the correction. – ALB Jul 4 at 3:24
- @ALB: I am curious as to why the proof is easier for uniform lattices?! – Igor Rivin Jul 4 at 4:38

## 2 Answers

The answer is yes. It is Theorem 2.13 of the following paper of Prasad and Raghunathan:

Prasad, Gopal; Raghunathan, M. S. Cartan subgroups and lattices in semi-simple groups. Ann. of Math. (2) 96 (1972), 296–317.

There is also a lot of information in this paper: http://www.math.bgu.ac.il/~barakw/papers/clorbit.pdf

Note that diagonalizable copies of $\mathbb{Z}^{n-1}$ in $\Gamma$ correspond to closed orbits for the action of the full diagonal subgroup of $SL(n,\mathbb{R})$ on $SL(n,\mathbb{R})/\Gamma$. This is related to the Margulis conjecture, which (with some caveats) states that the closure of any orbit of the full diagonal on $SL(n,\mathbb{R})/\Gamma$ is algebraic, i.e. is itself the closed orbit of some subgroup. This conjecture is the biggest open problem in homogeneous dynamics (and in particular implies the Littlewood conjecture in number theory).

This is a theorem of G. Prasad and M. S. Raghunathan. See Theorem 7.2 in this paper of Steve Hurder's (rigidity of Anosov actions) -- the original reference is a bit less friendly.

- @Alex Eskin and Igor Rivin: OK, thank you for your precise answers and additional information. – ALB Jul 4 at 16:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9196394681930542, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/94017/why-is-the-the-classifying-space-of-the-natural-numbers-homotopy-equivalent-to-th/94018
## Why is the classifying space of the natural numbers homotopy equivalent to the circle?

Is there a direct way of seeing that $B{\mathbb{N}}\simeq S^1$, i.e. that the classifying space of the monoid of natural numbers is homotopy equivalent to the circle?

Here, since the natural numbers ${\mathbb{N}}$ do not form a group, some care is needed to define the classifying space $B{\mathbb{N}}$ properly. One way to do this is to consider ${\mathbb{N}}$ as a discrete simplicial monoid, then set $B{\mathbb{N}}:=|N{\mathbb{N}}|$ to be the geometric realization of its nerve.

This fact is a special case of (yet surprisingly, equivalent to) a larger theorem of James, namely that James' construction $J[X]$ on a pointed simplicial set $X$ is weakly equivalent to $\Omega\Sigma |X|$. Here $J[X]$ is the free simplicial monoid on $X$ modulo the basepoint $*$. Taking $X=S^0$ gives $|N{\mathbb{N}}|\cong |NJ[S^0]|\simeq B\Omega\Sigma S^0\simeq |\Sigma S^0|\simeq S^1$.

## 4 Answers

Method I: Symmetric products. Contained inside the simplicial set $N\mathbb{N}$ is a copy of the simplicial circle $S^1$, generated by the zero-simplex and the 1-simplex $[1]$. This consists of all simplices of the form $e_i = (0,\ldots,0,1,0,\ldots,0)$, together with the basepoint $(0,\cdots,0)$, in the simplicial object. Moreover, $N\mathbb{N}$ is, levelwise, a commutative monoid, and the face and degeneracy maps are maps of commutative monoids. In fact, $N\mathbb{N}$ visibly is, in level $p$, the free commutative monoid on $e_1, \ldots, e_p$, or the infinite symmetric product of the based set $(S^1)_p \subset (N\mathbb{N})_p$. As a simplicial set, then, $N\mathbb{N}$ is the infinite symmetric product of the based simplicial set $S^1$. Geometric realization preserves finite products and quotients by group actions (hence symmetric products), as well as colimits, so on geometric realizations the inclusion becomes the map $S^1 \to Sym^\infty S^1$ of topological spaces. On homotopy groups, by the Dold-Thom theorem, this is the map $\pi_* S^1 \to H_* S^1$, which is known to be an isomorphism.

Method II: Covering spaces. Consider the auxiliary simplicial set $E$, which is the nerve of the poset $\mathbb{Z}$ under $\leq$. $E$ is contractible, for example because the functions $f(x) \equiv 0$ and $g(x) = \max(x,0)$ satisfy $f(x) \leq g(x) \geq \mathrm{id}(x)$; these inequalities give rise to natural transformations of categories and thus a two-stage homotopy from the identity to a trivial map. The group $\mathbb{Z}$ acts on $E$ freely (and properly discontinuously on the geometric realization) by translation. I claim that the quotient is isomorphic to $N\mathbb{N}$. The $p$-simplices of $E$ are all of the form $$z \leq (z + n_1) \leq \cdots \leq (z + n_1 + \cdots + n_p)$$ and so the quotient can be identified with the collection of tuples $(n_1,\ldots,n_p)$. Composition adds adjacent $n_i$ and inserting an identity inserts $0$, so this really is the simplicial set $N\mathbb{N}$. Since geometric realization preserves quotients by group actions, this makes $B\mathbb{N}$ into a $K(\mathbb{Z},1)$, and hence homotopy equivalent to $S^1$.

- Your first method is interesting. Most other answers show that $N{\mathbb{N}}\to N{\mathbb{Z}}$ is a weak equivalence. However, you show that a weak equivalence can be found in the other direction.
That is, you identify an embedding of the simplicial circle $S^1\hookrightarrow N{\mathbb{N}}$ that is a weak equivalence. Thank you! – Gao 2Man Apr 21 2012 at 4:34

Indirect method: Monoids are categories with one object. A simple calculation shows that the inclusion of categories $\mathbb{N}$ into $\mathbb{Z}$ has contractible homotopy fiber (which is the nerve of the category whose objects are the arrows of the one-object category $\mathbb{Z}$ and whose arrows are commutative triangles of $\mathbb{Z}$ mediated by the elements of $\mathbb{N}$). Thus Quillen's Theorem A yields a homotopy equivalence of the corresponding nerves arising from the inclusion of underlying categories.

Direct method: Consult this paper by Ken Brown: "The Geometry of Rewriting Systems". You need only the simplest version of his method. With it one can show that the nerve of $\mathbb{N}$ and the nerve of $\mathbb{Z}$ have cellular models which differ only by collapses of simplices and thus have the same (simple) homotopy type.

Applying the functor "free $\mathbb Z$-module", the abelian monoid becomes a commutative ring $\mathbb Z[T]$, a polynomial ring in one variable. The bar construction, a simplicial set, becomes a simplicial abelian group: the bar construction for the augmented $\mathbb Z$-algebra. It follows that the homology of the classifying space is $\operatorname{Tor}^{\mathbb Z[T]}(\mathbb Z,\mathbb Z)$, which is the homology of the circle. That, plus the fact that the fundamental group is what it should be, plus the fact that the classifying space inherits its own commutative monoid structure, gives the result.

I am not sure if this is the type of direct proof that you are looking for, but here it goes. I will start with a more general theorem: let $M$ be a CANCELLABLE monoid, and let $K$ be the left adjoint to the forgetful functor $U:\mathbf{Group}\rightarrow\mathbf{Monoid}$. Then $BM$ is homotopy equivalent to $BK(M)$. The way I like to see this is to think of monoids (and therefore groups) as categories. By $B$, I mean the nerve of the category, which turns the category into a simplicial set. Now I find these sorts of nerve theorems are much easier to see in the world of simplicial sets. To see this particular theorem, it suffices to try to build a minimal fibration (that is, a fibrant replacement). In the case of a monoid, all of the inner horns are filled, and we must only find out how to fill the outer horns. But these will just be adding in the inverses that are not yet in the monoid. Further, the minimal fibration condition will ensure that horn fillers are unique. Essentially, what you are doing is performing a geometric version of the $K$ functor in the category of simplicial sets. As an interesting exercise, it would be good to take the minimal fibration associated to $B\mathbb{N}$ and see that you get the simplicial set $B\mathbb{Z}$.

Now for the James construction that you mention (even though this is not part of your question, it is worth mentioning): there is a simplicial set version of this called the Milnor FK construction. You start with a reduced simplicial set $X$ (a simplicial set with one vertex). We then define a simplicial group called $FK(X)$ whose $n$-th group is the free group on the elements of $X_n$ modulo the image of the iterated degeneracy $s_0^n(pnt)$, where $pnt$ is the vertex.
- There is a conjecture of Moore: if $M$ is a simplicial monoid such that $\pi_0(|M|)$ is a group, then $BM$ is homotopy equivalent to $BK(M)$. This conjecture is not always true; see arxiv.org/pdf/math/0202260.pdf You seem to be saying that if $M$ is discrete, then Moore's conjecture is true. – Gao 2Man Apr 14 2012 at 7:11
- You are right, the monoid has to be cancellable. See www.math.leidenuniv.nl/scripties/LenzMaster.pdf Page 32 – Spice the Bird Apr 14 2012 at 7:39
- A conjecture which is not always true? I thought that, when a conjecture is proven or disproven, it stops being a conjecture. – Fernando Muro Apr 14 2012 at 11:49
- Seems it was not a cancellable conjecture. – Zack Wolske Apr 14 2012 at 18:02
- I think that if $M$ is a cancellative monoid that has a group of left fractions then you can just use Quillen's Theorem A on the inclusion. – Benjamin Steinberg Apr 14 2012 at 21:45
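(Added sketch, not part of the original page: spelling out the $\operatorname{Tor}$ computation in the "free $\mathbb{Z}$-module" answer above.) The monoid ring of $\mathbb{N}$ is $\mathbb{Z}[T]$, the augmentation $\mathbb{Z}[T]\to\mathbb{Z}$ sends $T\mapsto 1$, and its kernel is the ideal $(T-1)$, which is free of rank one. This gives a two-term free resolution

$$0 \longrightarrow \mathbb{Z}[T] \xrightarrow{\;\cdot\,(T-1)\;} \mathbb{Z}[T] \longrightarrow \mathbb{Z} \longrightarrow 0.$$

Tensoring with $\mathbb{Z}$ over $\mathbb{Z}[T]$ turns multiplication by $T-1$ into the zero map, so

$$\operatorname{Tor}_0^{\mathbb{Z}[T]}(\mathbb{Z},\mathbb{Z}) \cong \mathbb{Z}, \qquad \operatorname{Tor}_1^{\mathbb{Z}[T]}(\mathbb{Z},\mathbb{Z}) \cong \mathbb{Z}, \qquad \operatorname{Tor}_i^{\mathbb{Z}[T]}(\mathbb{Z},\mathbb{Z}) = 0 \ \text{for } i \ge 2,$$

which is exactly $H_*(S^1)$.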
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 63, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9157159328460693, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?p=4156099
## a question on flux, and on field integration

Hey all, I've recently started studying electrostatics, and I have a couple of questions about things that I did not fully understand, and would very much appreciate if someone could set me straight.

1) How can a cube with a single charge in the middle of it have a flux? Don't the field lines cancel each other, thus achieving 0 electric field for each plane? I mean, you have a field vector going up on the Y axis, and down on the same axis (and so on for the others), so how come the flux $\Phi=A\overline{E}$ isn't zero?

2) If I have a flat disk on a plane, with uniform charge density $\sigma$, why, when I integrate, do I do so for small rings, and not for very small circles? Why is it $E=\int\frac{k\sigma 2\pi r\,dr}{(...)}$ instead of $E=\int\frac{k\sigma 2\pi\,dr}{(...)}$?

Thank you very much for your help.

Quote by Keyser: [the two questions above]

For your first question, I would say that flux is a scalar and a unit outward normal is defined before calculating it. For example, if the field line is along the +Y axis and the normal is outward, then the flux is positive. For -Y, the normal is again outward (pointing down there), so the flux is again positive because of the two minus signs.

For the second one, you have to cover the whole area of the disk in the integration, and a circle does not have any thickness, so you must select a ring. Also note that the integrand should have units of area, which is the case only in the first expression, not the second.

So what you are saying (in regards to question no. 1) is that when I get E=0, it's because I'm adding vectors for which the total sum is zero (opposite directions), while the flux does not "consider" that case, and is a "per case" thing - each face has its own flux, even though the total E is zero?

And another one: how can I find the field exerted by an infinite plane on a point along the Z axis, but without using Gauss's law? Do I just say that a $dq$ part of the charge is equal to $k\sigma\,dx\,dy$ and then $\overline{E}=\int\int k\sigma\,dx\,dy\cdot\frac{z}{\sqrt{x^{2}+y^{2}}}$? Thank you!

Mentor:

Quote by Keyser: [the "per case" question above]

It doesn't make any sense to talk about the "total E" in this situation. You can only add E's that are located at the same point, produced by different charges or other sources.
In your example, there's no meaningful way to add the E at the top of the cube to the E at the bottom of the cube.

Quote by Keyser: how can I find the field exerted by an infinite plane on a point along the Z axis, but without using Gauss's law? Do I just say that a $dq$ part of the charge is equal to $k\sigma\,dx\,dy$ and then $\overline{E}=\int\int k\sigma\,dx\,dy\cdot\frac{z}{\sqrt{x^{2}+y^{2}}}$?

Not quite. E is a vector, so when you add the contributions to the total E at a given point, you have to do each component separately, in general: $$E_z = \int {\int {dE_z}}$$ and similarly for $E_x$ and $E_y$. In this example, you can argue from symmetry that the total $E_x$ and $E_y$ are both zero. For $E_z$ you have to take into account the angle between the z-direction and the line that defines $r$. And remember, Coulomb's law has an $r^2$ in the denominator: $$E_z = \int {\int {\frac {k \sigma\, dx\, dy} {r^2} \cos \theta}}$$

But doesn't the E that goes in the +Y direction "cancel" the one going in the -Y direction?

Mentor: Yes, that's what I mean by "argue from symmetry..."

So, if I want to treat E as a vector, I must relate it to a certain point? I thought the field was a vector anyhow, so I could add it however I wanted. I'm adding a picture that might make things clear. The green arrows are E exerted by a single charge at (0,0,0). So, from symmetry, you argue that the E's on that axis cancel each other, but can't you say as well that if you were to add them up, (0,y,0)+(0,-y,0), you'd get the same answer? Note that I refer to E outside the box, as I understand that inside (from Gauss's law) the field is zero in any case. http://img842.imageshack.us/img842/3222/cubeq.jpg Thank you!

Edit: I think I'm getting it: I can treat E in that manner if, like you say, I have a point on which E is exerted, but if I'm talking about a sphere / cube / other shape, there is no point for which I can add the vectors? Thanks for your patience; when I get things slowly, I understand them fast :)

What book are you using? Your questions have simple answers in most texts.

Quote by Keyser: [the "per case" question above]

Flux arises from a scalar product, so in a situation where E is nonzero the flux can still be zero because of a cosine term which may cancel all the flux. Think of an electric field passing through a closed region with no charge inside of it.

For a closed surface around a charge, all the E-field lines are "poking" in or out, so there IS a net flux. If you replace the charge with a bar magnet, some of the field lines poke out and some poke in, resulting in no net flux. Gauss's laws.

Quote by Meir Achuz: What book are you using?

I have Principles of Physics, 9th edition, by Halliday & Resnick.

@greswd: if I put a bar magnet inside, wouldn't I have 0 E at the top and bottom?

And another question, please: I understand Gauss's law in regards to planes (thanks to you), but how does it work when dealing with a sphere?
I know how to integrate the whole thing, but I'm trying to get it in a more intuitive manner. When I calculate the flux, I'm looking at an infinitesimal area dA, whose vector is perpendicular to the surface. So far, so good. But when I'm dealing with spheres, I integrate over infinitesimal spheres - or, more accurately, over their surface area, which is 3D. What I think is that because the flux goes through the entire sphere - say we have a 1C charge inside - and is different between these little spheres (the flux is proportional to the electric field lines), you need to sum the flux through all these little spheres in order to get the total flux, and that's why dA is the entire 3D surface area of an infinitesimal sphere. Am I correct? If not, what do these dA look like? If they are 2D ($r^2$), then how can I get just the flux for one such dA? Don't laugh if my question seems stupid, but I always thought of area as a 2D thing.
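(Added note, not part of the thread: a minimal numeric sketch checking two of the claims discussed above; the charge, cube size, and disk parameters are arbitrary choices. It verifies that one face of a cube centered on a point charge carries flux $q/(6\epsilon_0)$, and that integrating over rings reproduces the standard on-axis field of a charged disk.)

```python
import numpy as np
from scipy import integrate

eps0 = 8.854e-12            # vacuum permittivity (SI units)
k = 1 / (4 * np.pi * eps0)  # Coulomb constant
q = 1e-9                    # 1 nC point charge at the center of the cube
a = 1.0                     # cube side length

# Flux through the top face z = a/2: integrate E_z over the face.
face_flux, _ = integrate.dblquad(
    lambda y, x: k * q * (a / 2) / (x**2 + y**2 + (a / 2)**2) ** 1.5,
    -a / 2, a / 2, -a / 2, a / 2)
print(face_flux, q / (6 * eps0))   # equal: each of the 6 faces carries q/(6 eps0)

# On-axis field of a uniformly charged disk, summed over rings of width dr:
sigma, R, z = 1e-6, 0.1, 0.05      # surface charge density, disk radius, axial distance
ring_sum, _ = integrate.quad(
    lambda r: k * sigma * 2 * np.pi * r * z / (z**2 + r**2) ** 1.5, 0, R)
exact = sigma / (2 * eps0) * (1 - z / np.sqrt(z**2 + R**2))
print(ring_sum, exact)             # agree
```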
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9578799605369568, "perplexity_flag": "middle"}
http://mathhelpforum.com/differential-geometry/78673-translation-invariance-integral.html
Thread:

1. Translation Invariance of Integral

Theorem. Let $f$ be a real-valued function that is Riemann integrable on $[a,b]$. For a fixed $k \in \mathbb{R}$, let $g_k$ be the function $g_{k}(x) = f(x+k)$, where the domain of $g_k$ is appropriately chosen so that $x$ is in the domain of $g_k$ if and only if $x+k$ is in the domain of $f$. Then $g_k$ is Riemann integrable on $[a-k, b-k]$ and $\smallint_{a-k}^{b-k} g_k = \smallint_{a}^{b} f$.

Proof. Let $\epsilon >0$. Choose $\delta >0$ such that $\delta < \frac{\epsilon}{2}$. So basically we consider $\mathcal{R}(g_k, P) = \sum_{i=1}^{n} g_{k}(x_{i}^{*})(x_{i}-x_{i-1})$? And then show that $|\mathcal{R}(g_k,P)-\mathcal{R}(f,P)| < \epsilon$?

2. Let $I = \smallint_a^b f$. We know that for any $\epsilon > 0$ there exists $\delta > 0$ so that $|\mathcal{R}(f,P) - I| < \epsilon$, where $P$ is any partition with $||P||<\delta$ and $\mathcal{R}(f,P)$ is any associated Riemann sum. Let $P_k$ be any partition of $[a-k,b-k]$ with $||P_k|| < \delta$. If $\sum_{j=1}^n g_k(t_j)(x_j-x_{j-1})$ is some Riemann sum, then it is equal to $\sum_{j=1}^n f(t_j+k) [(x_j+k)-(x_{j-1}+k)]$, but this is a Riemann sum associated with $f$ where the partition norm is less than $\delta$. Thus, $\left| \sum_{j=1}^n g_k(t_j)(x_j-x_{j-1}) - I \right| < \epsilon$, so we see that $g_k$ is integrable with the same value.

3. Would it still prove it if we said $\left| \sum_{j=1}^n g_k(t_j)(x_j-x_{j-1}) - \mathcal{R}(f,P) \right| < \epsilon$ instead of $I$?
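(Added note, not part of the thread: a quick numeric illustration of the theorem; the choices of $f$, $[a,b]$, $k$, and the midpoint rule are arbitrary.)

```python
import numpy as np

# The Riemann sum for g_k(x) = f(x+k) over [a-k, b-k] matches the Riemann
# sum for f over [a, b] term by term, mirroring the substitution in post 2.
f = lambda x: np.sin(x) + x**2        # any Riemann-integrable f
a, b, k, n = 0.0, 2.0, 0.7, 1_000_000

dx = (b - a) / n
t = a + (np.arange(n) + 0.5) * dx     # midpoint samples for f on [a, b]
I = np.sum(f(t)) * dx

s = (a - k) + (np.arange(n) + 0.5) * dx   # midpoint samples on [a-k, b-k]
J = np.sum(f(s + k)) * dx                 # Riemann sum for g_k
print(I, J)                               # identical, as the proof predicts
```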
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 65, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.92732173204422, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/176062/does-a-disjoint-set-forest-have-multiple-distinct-upwards-closed-partitions?answertab=active
# Does a disjoint set forest have multiple distinct “upwards closed” partitions?

The following is an excerpt from a powerpoint on the role of the inverse Ackermann function in determining the complexity of path compression.

• Dissection of a disjoint set forest $F$ with node set $X$
• Partition of $X$ into “top part” $X_t$ and “bottom part” $X_b$ so that the top part $X_t$ is “upwards closed”
• i.e. $x \in X_t \Rightarrow$ every ancestor of $x$ is in $X_t$ also

Using the definition of an "upwards closed" partition mentioned above, aren't there several distinct choices of $X_t$ that meet this criterion? Consider the following top parts, which all appear to meet the aforementioned definition:

• The first consists of only the root.
• The second consists of the root and its children.
• The third consists of the root, its children, and its grandchildren.
• The remaining ones consist of the root and up to the $n^{th}$ generation of children.

Do there exist multiple distinct "upwards closed" partitions in a disjoint set forest?

## 1 Answer

Yes, of course. This is just the definition of a "dissection"; there are several partitions that satisfy it. The upper part of a dissection need not contain all nodes at a given level; for example, it could consist of the root, just one of its children, and all of the descendants of that child.

The next few slides basically argue that you can analyze the effect of a series of path compressions on a forest $F$ by analyzing a corresponding sequence of path compressions in an arbitrary dissection of $F$. Then later, the slides show that by choosing a particular dissection (and arguing recursively), one obtains the inverse-Ackermann amortized time bound.

- Thank you for the help. – user26649 Jul 28 '12 at 4:20
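(Added sketch, not from the slides: the parent-map representation below is my own assumption. It tests whether a candidate top part is upwards closed; the third example echoes the answer's point that a top part need not contain a whole level.)

```python
def is_upwards_closed(parent, top):
    """parent maps each node to its parent (None for a root).
    top (a set) is upwards closed iff every member's parent is also a
    member; by induction, all ancestors of a member then lie in top."""
    return all(parent[x] is None or parent[x] in top for x in top)

# A forest with root 0, children 1 and 2, and grandchild 3 under 1:
parent = {0: None, 1: 0, 2: 0, 3: 1}
print(is_upwards_closed(parent, {0}))        # True  (root only)
print(is_upwards_closed(parent, {0, 1}))     # True  (root + one child)
print(is_upwards_closed(parent, {0, 1, 3}))  # True  (one child and its descendants)
print(is_upwards_closed(parent, {3}))        # False (ancestors missing)
```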
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9271268844604492, "perplexity_flag": "middle"}
http://mathhelpforum.com/algebra/43100-help-simple-math-problem-needed.html
# Thread:

1. ## Help with Simple Math Problem Needed

This is a real-life problem, but I hope it is OK that I posted it here. I have an inaccurate postal scale. I compiled a list of weights given by the incorrect scale versus the correct weights that it should have given. If I plot them on a graph, they are both straight lines. The incorrect scale weights are lower than the correct weights. The margin of error slowly increases as the amount of weight increases. It seems that I should be able to derive a formula to calculate the correct weight from the incorrect weight using this data, since there is a definite, consistent relationship between the two. In an abstract sense, if I have 2 straight lines plotted on a graph, starting at 0 and gradually growing apart, how do I figure out the formula which describes the relationship between them? I need to figure this out for tomorrow, for extremely boring and intricate reasons. Any help would be greatly appreciated, as my few years of math studies are far in the past. Max

2. Originally Posted by max6166 [the problem above]

These values are being plotted against the actual weight? So if I understand you correctly, you have a line, which we may consider as a function $f$ of $x$, which gives the measured weight $f(x)$ in terms of the actual weight $x$? In that case you don't need both lines. If you have the measured weight, the actual weight will be given by $f^{-1}(x)$. In other words, if your scale produces a reading $y$ when measuring an object of weight $x$ such that $y = ax + b$ (since you said there was a linear relationship), then given a measured value $y$, you can find the actual weight to be $x = \frac{y - b}a.$ It is as simple as that, unless I misinterpreted your problem.

3. Originally Posted by Reckoner [the reply above]

Hi Reckoner, Unfortunately, I do not really understand your explanation very well.
From the little I can understand, however, I think my initial description may have been imprecise, as I do not think I have a function plotted. It might be better if I explained it this way. Say I have ten 500 gram weights. I added the 500 gram weights to the scale one at a time, and recorded the weights the incorrect scale gave. This results in a series of measurements for the values 0 - 5 kg. Plotting these values on a graph (with the number of weights on the y axis and the weight on the x axis) results in a straight line. The correct values for those weights also form a straight line. The incorrect values are less than the correct values, and the distance between the 2 lines grows wider as the weight increases. I apologize for any confusion. Max

4. Originally Posted by max6166 [the clarification above]

That is exactly how I interpreted your original post, and my result still stands. You have a line plotted (which means you have a function), and you have a clear relationship between the two variables, correct? It should be quite simple to solve for one variable in terms of the other. The line can be represented by an equation of the form $y = ax + b$, for real numbers $a$ and $b$ (have you found this equation? If not, use least-squares regression or some similar method to get it; ask if you need help). From there, $y$ represents the measured value, and $x$ the actual value. So you simply solve for $x$, as I demonstrated before.

Edit: Let me put it to you this way: You have some known weights which you measure with your scale to produce inaccurate weights. These values are all known, so you can consider each weight and measurement as the coordinates of a point on a curve: the $x$-coordinate being the actual weight, and the $y$-coordinate being the measured weight. You have done this, and you have a line (or something that resembles a line reasonably closely). This means that you can find the slope of the line, and its $y$-intercept, and using these you can represent the line as an equation that relates $x$ to $y$. Thus, the solution to your problem comes down to this: if you have an object of known weight, you are able to simply locate the point on the line with that $x$-coordinate, and the $y$-coordinate will be the measured weight (which would be the reading you would get off of your scale). But your problem is the reverse: you have a measured weight, and you wish to find the actual weight. Thus you find the point on the line with the particular $y$-coordinate, and the $x$-coordinate of that point will be the actual weight. So how do you put this into a convenient formula where you can just "plug in" the measured value? Simple: as I said, you already know the measured value in terms of the actual value ($y = ax + b$), so you just solve for the actual value in terms of the measured value $\left(x = \frac{y - b}a\right)$.
So if your scale gives a measurement of $m$, the actual weight would be $w = \frac{m - b}a$. Does that make sense?

5. Originally Posted by Reckoner [the reply above]

Thanks Reckoner. I *think* I am following you now. I was thrown because I didn't know what the variables $a$ and $b$ represented (I still don't, really) in the equation $y = ax + b$. I just skimmed through the Wikipedia page on least squares regression. I am afraid it is far beyond my rudimentary math skills. This stuff is on a whole different level than anything I have dealt with previously. I don't understand even a single symbol on that page... At the very least, this doesn't seem like something I am going to be able to figure out in time. I had mistakenly assumed there was a much simpler solution. I might still try it anyway, but I have no idea where to start. Where can I learn what those symbols mean so that I can decipher the least squares regression equations?

6. Originally Posted by Reckoner [the edit above]

Thanks again. The edit was very helpful. I was not thinking of the problem that way. I was able to follow your explanation up until the point where it is converted into an equation. The sticking point is that I don't understand what the variables $a$ and $b$ represent. The rest makes sense to me. I must go to bed now, but will try and work it through again in the morning. Thanks again for your time and trouble, Max

7.
Originally Posted by max6166 I might still try it anyway, but I have no idea where to start. Where can I learn what those symbols mean so that I can decipher the least squares regression equations?

Ah, okay. So you haven't actually found an equation for this line of yours yet. Here is a simple method: plot your points, draw the line. Pick two points on the line, $(x_0,\;y_0)$ and $(x_1,\;y_1)$. The "slope" of the line will be $m = \frac{y_1 - y_0}{x_1 - x_0}.$ This value is the $a$ in the equation I gave you. $b$ is the $y$-intercept (I imagine the scale is calibrated to give a reading of 0 when no weight is on it, so the $y$-intercept should then be 0). Once you determine those, you have your equation. Now, the equation you get depends on which points you pick to find the slope (ideally they are all in a perfectly straight line, but that is almost never the case when actual measurements are involved). So, you may wish to use a regression method if you want to find the line that best fits the data. However, if you plot the points and draw your line, and all the points lie pretty close to the line, or if high accuracy isn't needed, then the above method should work fine. If your points only very loosely form a line, though, you may wish to consider a method for finding a more optimal model (which doesn't necessarily have to be a line). Edit: Sorry I didn't catch you before bed! Good luck with your work!

8. Wow, Reckoner! It only took a few minutes to figure out with that explanation. So now I can just divide any weight on that scale by the slope of my data to get the correct weight. And the slope is just 0.9, which makes the whole thing fairly trivial. It seems to work. Fantastic! This got me thinking of a last step. The scale is a typical postal scale with a round clock-like face. The existing 360 degree dial goes from 0 to 20 kg. Since I know the real weight is that amount divided by 0.9, I should be able to print out a little replacement dial to go in the centre of the faceplate. I think I can work through this in a manual grunt fashion. I could use a protractor to record the degrees at which the major kg markings lie and divide them by 0.9. I could then divide the spaces in between by 10. I could then scan that into a graphics program, clean it up, and print it out. I am pretty sure a program exists that I could use that would just chart this all for me, but I have no idea what it would be. Any suggestions or pointers would be appreciated, of course. Thanks again for all your time and help. I can't believe I actually understand how it works. Great explanations! Take care, Mark
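(Added note, not part of the thread: since least-squares regression came up, here is a minimal sketch of doing the fit numerically with NumPy; the sample data are invented to mimic a scale that reads 10% low.)

```python
import numpy as np

actual   = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0])  # kg
measured = 0.9 * actual + np.random.normal(0, 0.005, actual.size)  # fake readings

# Least-squares fit of measured = a * actual + b
a, b = np.polyfit(actual, measured, 1)
print(a, b)                    # a ~ 0.9, b ~ 0

reading = 4.5                  # a value read off the scale
print((reading - b) / a)       # recovered actual weight, ~5 kg
```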
We can then express the measurement reported on the dial as a function of the angle between the needle and the axis (we'll use the zero point as the axis): $m = \frac{20\,\theta}{360^\circ} = \frac{20\,\theta}{2\pi} = \frac{10\,\theta}\pi$ Now we already know that if a measurement $m$ is reported, the actual weight will be $w = \frac m{0.9} = \frac{10m}9 = \frac{10(10\,\theta / \pi)}9 = \frac{100\,\theta}{9\pi}$ Solving for $\theta$, $\theta = \frac{9\pi w}{100}$ So what does this mean? It means that if we want to know where the marking for a weight of $w$ should go on the dial, we substitute it into the above equation to get the angle. Now, suppose we want to make a marking for a particular weight (for example, you will probably want markings for 1, 2, 3, ... kilograms). Call the weight $w$. For simplicity, let us allow the marking to be a single point that lies some given distance $t$ away from the origin (the center of the dial). Then we will have something that looks like this: Code: ``` 0 | | x M -+----------o | /| | / | y | t / | |θ / | | / | -o----------+---``` Looking at this triangle, we know from simple trigonometry that the coordinates of our marking $M$ will be $M = (t\sin\theta,\;t\cos\theta)$ $= \left(t\sin\left(\frac{9\pi w}{100}\right),\;t\cos\left(\frac{9\pi w}{100}\right)\right).$ Letting $t$ vary between two positive values will give us a line segment. This line segment can be used as a marking for that particular weight. For example, if you were constructing the new face to have markings 5 cm from the center of the dial (let's say), you wanted each marking to be 1 cm in length, and you wanted to make markings at each whole number weight in kilograms, you could simply graph the parametric equations $x = t\sin\left(\frac{9\pi w}{100}\right),\;4.5\le t\le5.5$ $y = t\cos\left(\frac{9\pi w}{100}\right),\;4.5\le t\le5.5$ for each positive integer $w$ between 0 and 20 or so. The process is similar for other measurements. Below is an example that I made using the plotting capabilities of the open source program Maxima, by plotting 220 lines to give 22 whole value markings each with 10 subdivisions (I used different thicknesses to make the markings properly readable; the numbers were added separately in Paint Shop Pro). Attached Thumbnails 10. Thanks so much, Reckoner. You really didn't have to go to all the trouble of making an image and even numbering it. I am really blown away by how helpful you have been. After reading your walkthrough a few times, I am now able to follow the general logic of the approach you gave. There is some knowledge assumed within each step which I don't possess (I didn't even know what the theta symbol meant until now, for example), but I have deciphered enough to understand the purpose and logic of each step at least. I am essentially a high school drop out, so some things, like least squares regression, assume more advanced math skills than I possess. I have developed some interest in math recently though. I think that is partially why I wanted to "figure out" the scale rather than just throw it out. I even purchased a few used high school texts with the intent of working through them. I found them very hard though because there was very little explanation. Are there any texts you are aware of which might be better suited to someone like me? Anyway, thank you again Reckoner. This has been really incredible. I thought I was just looking for a "divide it by 0.9" answer, but instead I got a peek into a whole other world. 11. 
Originally Posted by max6166 I am essentially a high school drop out, so some things, like least squares regression, assume more advanced math skills than I possess. I have developed some interest in math recently though. I think that is partially why I wanted to "figure out" the scale rather than just throw it out. Well, I wasn't aware of what sort of knowledge you had and I didn't want to seem patronizing by assuming you wouldn't be able to understand me. But by all means, feel free to ask if you need further clarification. I enjoy helping people with "real-life" projects, partly because such people usually have a bit more motivation than the usual high school student just looking for some homework help, and also partly because it offers others a chance to see some of the cool things they can do with the mathematics that they learned in school but maybe neglected a little. Do not be too worried about your current lack of knowledge though. Whatever your reasons for not finishing school, there is still plenty of time to learn the things you missed. You will just need to be a bit more disciplined since you will be teaching yourself mostly. And keep in mind that a lot of this stuff can look more difficult than it is when you aren't familiar with the symbols, notations, and terminology used. Mathematics requires us to use very precise language, so we use symbols to help express our ideas in a sufficiently precise manner, but it is not hard to pick up. Originally Posted by max6166 I even purchased a few used high school texts with the intent of working through them. I found them very hard though because there was very little explanation. Are there any texts you are aware of which might be better suited to someone like me? There are many great texts, and there are many abysmal ones. The important thing to note, however, is that many of the things you may want to learn in mathematics depend on your mastery of more elementary ideas--for example, you are probably easily intelligent enough to learn calculus, but you'll get nowhere if you don't first get an appropriate foundation in algebra and trigonometry. This can be a problem when choosing books, because even though a book may be high school level, it still may require you to know a bit of prerequisite material. While I could give some good recommendations on texts for more advanced material, I'm afraid I can't help you much with the basic stuff. I do, however, have one recommendation: Mathematics: From the Birth of Numbers by Jan Gullberg - This is a wonderful book; it covers an incredible amount of material but doesn't require much prior knowledge other than basic arithmetic. I have read it and I love it. The book covers everything from a basic introduction to numbers, up to differential and integral calculus, also covering many other things along the way, including algebra, geometry, trigonometry, logic and set theory, combinatorics, basic matrix algebra, differential equations and a little taste of more advanced topics like topology and abstract algebra. It's a little over a thousand pages, which may seem long, but that really is pretty short considering all of the topics it treats. It is an easy read, the writing is fairly informal with a good use of humor, and there are some nice illustrations throughout. For me, the best part about it is all of the history that it goes into: with each subject or topic introduced, Gullberg gives a pretty thorough treatment of the history and development of the ideas. 
Reading the book from start to finish sort of feels like you're sitting in a time machine, watching mathematics develop from its early roots in ancient societies to its modern structure. I do caution you, however, to use this book as a supplement to your learning, rather than relying on it alone. This is because the book can go through topics pretty quickly and it doesn't have a lot of exercises for you to do; I suggest that as you read this, you also go through some normal textbooks and do a number of exercises in each section to reinforce the material. Now, if you want some more book ideas, this site has some good recommendations (though the comments for each book are brief). Take a look around there. Originally Posted by max6166 Anyway, thank you again Reckoner. This has been really incredible. I thought I was just looking for a "divide it by 0.9" answer, but instead I got a peek into a whole other world. Glad to help. Good luck!
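Tying the thread's math back together: the dial-marking construction derived above is easy to reproduce with any plotting tool. Here is a minimal sketch of it in Python with Matplotlib rather than the Maxima plot Reckoner described; the radii, tick lengths, and line widths are my own arbitrary choices mirroring his description (markings about 5 cm from the center, 22 whole-kilogram ticks with 10 subdivisions each):

```
import numpy as np
import matplotlib.pyplot as plt

# theta = 9*pi*w/100, from the derivation above (true weight w on a scale reading 10% low)
def marking(w, r_inner, r_outer):
    theta = 9 * np.pi * w / 100
    t = np.array([r_inner, r_outer])
    return t * np.sin(theta), t * np.cos(theta)   # (x, y) endpoints of the tick mark

fig, ax = plt.subplots(figsize=(5, 5))
for w in range(0, 23):                  # whole-kilogram ticks, longer and thicker
    x, y = marking(w, 4.5, 5.5)
    ax.plot(x, y, 'k-', linewidth=2)
for w in np.arange(0, 22, 0.1):         # tenth-of-a-kilogram subdivisions (220 lines)
    x, y = marking(w, 5.0, 5.5)
    ax.plot(x, y, 'k-', linewidth=0.5)
ax.set_aspect('equal')
plt.show()
```

Each tick is just the line segment traced by letting $t$ run between the two radii at the fixed angle $\theta = 9\pi w/100$, exactly as in the parametric equations above.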
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 80, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9665778279304504, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?t=611409&page=2
Physics Forums

## Time Dilation and The Twin Paradox?

Ah, I see. Someone has since added captions to the image since I last saw it on Wikipedia. I did not use that diagram, myself, because the fourth image shows non-orthogonal space and time axes, which struck me as deeply misleading. Now someone (Loedel?) has put a caption under it, which clears up at least what was intended, if not what was actually conveyed. The time axis should, of course, be vertical in every picture. Thanks, bobc for the note on my user-page.

Quote by JDoolin: Ah, I see. Someone has since added captions to the image since I last saw it on Wikipedia. [...]

Yes, I touched up the graphics from the Wiki sketches (boxes and captions). I thought the Loedel diagram for the twin paradox was interesting. But you are right, they should have put in the rest frame time axis for the Loedel diagram to make it a little more clear. Here are more graphics to help those who may not be familiar with Loedel diagrams. Again, the Loedel diagram is constructed by finding a rest system (our black perpendicular coordinates below) in which two observers (blue and red twins) are moving in opposite directions at the same speed. The thing that makes it nice is that line distances on the screen have the same actual distance (or time) scaling for the two Lorentz boosted frames (the blue and red frames for the twins). Thus, in the rest frame corresponding to the black perpendicular reference coordinates (again, the vertical axis should have been shown), both of the twins travel the same distance (time) through the 4-D universe for the outgoing trip of the "traveling twin" (of course both twins are traveling with respect to the black perpendicular coordinates). To compare the distances between the twins for the return trip (the blue twin is doing the turn-around and return), you need to use hyperbolic calibration curves. We are using the black rest frame for making distance comparisons. I have added the sketches below to help visualize the space-time scaling that results in the "traveling twin" taking a shorter path on the return trip. The hyperbolic curves needed to see the scaling of space-time distances are (hopefully) developed.

The sketch on the left is a Loedel diagram. Two observers (the twins) are moving in opposite directions with the same speed with respect to the rest frame represented by the perpendicular coordinates. Thus, the actual time and distance scaling on the diagram are the same for each twin during the out-going trip of the "traveling twin." Again, for the return trip we must use hyperbolic calibration curves for distance (time) comparisons. But, the Loedel diagram allows us to use the Pythagorean theorem directly as shown, from which the metric and Lorentz transformations result. This result also gives us the hyperbolic curves that must be used for calibrating distances (times) in the black rest frame (perpendicular coordinate system). The upper right sketch below illustrates how a hyperbolic curve is used to calibrate points in the black frame located ten years from the origin. The locus of points, all at ten years distance, forms a hyperbolic curve in accordance with the derived metric equation. The lower right sketch shows a collection of calibration curves for locating space-time distances (times). Note that a photon worldline always bisects the time and space coordinate axes for all Lorentz coordinate systems (green lines rotated at a 45 degree angle in the rest frame of the black perpendicular coordinates).

Here is an example of using the hyperbolic calibration curves. We have perpendicular black coordinates for the rest frame. The stay-at-home twin's worldline is along the black vertical time axis--the path is 13 years long (measured by stay-at-home's clock). The traveling twin's path through space-time is shown with the blue lines--the path is 10 years long (measured by traveling twin's clock).

The term "hyperbolic calibration curves" is apt. I kind of think of them being concentric. Just like concentric circles could be made as $$x^2+y^2=r^2$$ as r={0,1,2,3,4,...} etc., all representing circles; the locus of positions equidistant from the origin. You can have "concentric" hyperbolic curves, going $$x^2-c^2 t^2=s^2$$ as s={0,1,2,3,4...}, all representing "concentric" space-like hyperbolas, the locus of events which, in some reference frame, are simultaneous with the origin event, all the same distance to the left or right of the origin in those respective frames, and $$c^2 t^2-x^2=\tau^2$$ as τ={0,1,2,3,4...} for the "concentric" time-like hyperbolas, the locus of events which, in some reference frame, are all in the same position as the origin, all the same time since or before the origin in those respective frames. In any case, whether you call it a "hyperbolic calibration curve" or a "concentric hyperbola" it's a helpful mental construct.

I especially like that you re-centered your hyperbolic calibration curve at the turn-around event. The analogy with circles works so long as you re-center the circle with each measurement. You can't measure the path distance just by looking at the distance from the center to the end-point. You have to move the center of the circle every time the path turns. I'm probably just explaining something really obvious in a complicated way, but I remember myself thinking about it for a few days before it occurred to me, how to perfect the analogy; measuring distances with concentric circles vs measuring space-time intervals with "hyperbolic calibration curves."

Quote by JDoolin: The term "hyperbolic calibration curves" is apt. [...]

Excellent points, JDoolin. I'll adapt your "concentric hyperbola" terminology from now on.
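The 13-year versus 10-year comparison discussed above is just the Minkowski interval summed along each worldline, which is exactly what the calibration hyperbolas encode. Here is a minimal sketch of that bookkeeping in Python; the per-leg coordinate values are my own reconstruction of the example, assuming a symmetric out-and-back trip with c = 1 and units of years and light-years:

```
import math

def proper_time(segments):
    # Sum sqrt(dt^2 - dx^2) over straight worldline segments (c = 1).
    return sum(math.sqrt(dt**2 - dx**2) for dt, dx in segments)

# Each leg takes 6.5 years of coordinate time; choosing dx so the leg's
# proper time is 5 years reproduces the 13-year / 10-year example.
dx = math.sqrt(6.5**2 - 5.0**2)   # about 4.15 light-years per leg

home     = proper_time([(13.0, 0.0)])             # stays put for 13 years
traveler = proper_time([(6.5, dx), (6.5, -dx)])   # out and back

print(home, traveler)  # 13.0 years vs 10.0 years
```

Each leg contributes sqrt(6.5² − 4.15²) = 5 years of proper time, so the traveler ages 10 years while the stay-at-home twin ages 13, in line with the diagram.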
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9277909398078918, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?p=3786053
Physics Forums

## power series; 2

Um I don't know, that if it's less than 1 it converges and if greater than 1 it diverges

Mentor Quote by arl146 Um I don't know, that if it's less than 1 it converges and if greater than 1 it diverges Be specific. If what is less than 1. Don't use "it". Write mathematics equations/inequalities.

|x-4|<1 converge |x-4|>1 diverge

Mentor Quote by arl146 |x-4|<1 converge |x-4|>1 diverge OK, what interval are we talking about for convergence. What about if |x - 4| = 1? The Ratio test doesn't cover this situation, so you have to investigate it as a special case.

interval? .... i dont know O_O

Mentor Quote by arl146 interval? .... i dont know O_O Solve |x - 4| < 1.

oh, x < 5 ... so just (-infinity, 5) ?

Mentor Quote by arl146 oh, x < 5 ... so just (-infinity, 5) ? No. You should get a finite length interval. According to you, -300 would satisfy |x - 4| < 1. Does it? You should review working with absolute values in equations and inequalities.

ohhhhhhhh, i dont know why I KEEP just looking over the absolute values. sooo .... (4, 5) ?

Mentor What about 3.5? Doesn't it satisfy |x - 4| < 1?

ohh right .... so (3, 5) because since it's not a square bracket it doesn't include the 3 right but anything after 3 and before 4 would really never actually equal 1 which satisfies the less than sign

Mentor Quote by arl146 ohh right .... so (3, 5) because since it's not a square bracket it doesn't include the 3 right but anything after 3 and before 4 would really never actually equal 1 which satisfies the less than sign Between 3 and 5, not 4. OK, this establishes that the radius of convergence is 1. In the interval (3, 5), the series converges absolutely. If you aren't clear on what this means, look up the definition of absolute convergence in your book. To finish the problem you need to check the two endpoints: x = 3 and x = 5. Substitute these numbers in your formula for the series and see what you get. Obviously (??) you won't be able to use the Ratio Test, but you should be able to use what other facts you know about series to say whether the series converges (either absolutely or conditionally) or diverges, at each of these values. If you aren't clear on conditional convergence, look up that term as well.

I know that, I was explaining how I understood why it is (3,5) instead of (4,5). I think at x=5 it absolutely converges. It seems to get closer and closer to one and they're all positive values so looking for absolute convergence with the absolute values doesn't change that. For x=3 I think it converges just conditionally. Not absolute because the values don't seem to get closer to a specific value when you add the absolute value signs... And I don't think it diverges because the values are -,+,-,+ and so on and they seem to end up going towards a specific number. Although, I've never really understood when something converges or diverges

Mentor You need to be able to prove your conclusions. You can't just look at a few of the partial sums and guess that it's going to converge or not. What series do you get when you set x=5?

i got summation [n(1)^n] / [n^3 + 1] for x=5. and the same for x=3 just with the negative, [n(-1)^n] / [n^3 + 1] and that's when n=1 to infinity

Mentor Quote by arl146 i got summation [n(1)^n] / [n^3 + 1] for x=5. and the same for x=3 just with the negative, [n(-1)^n] / [n^3 + 1] and that's when n=1 to infinity So when x = 5, the series is $$\sum_{n = 1}^{\infty}\frac{n\cdot 1^n}{n^3 + 1}$$ Does that series converge or diverge?
(You should simplify it first.) Why? What about when x = 3? Same questions.

no, not n^3 + 3, it's n^3 + 1 ... right i said that it absolutely converges. but the other person said i need to prove it. i dont know how i do that
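For the record, the comparison the mentor is hinting at settles both endpoints: since $\frac{n}{n^3+1}\le\frac{n}{n^3}=\frac{1}{n^2}$, both $\sum\frac{n}{n^3+1}$ and $\sum\frac{n(-1)^n}{n^3+1}$ converge absolutely by comparison with the convergent $p$-series with $p=2$, so the interval of convergence is $[3,5]$. A quick numerical sanity check of that (a sketch in Python, not a proof, and not part of the original thread):

```
def term(n, x):
    # General term of the series: n*(x-4)^n / (n^3 + 1)
    return n * (x - 4) ** n / (n ** 3 + 1)

for x in (5, 3):
    for cutoff in (10, 100, 1000, 10000):
        partial = sum(term(n, x) for n in range(1, cutoff + 1))
        print(x, cutoff, partial)
    # The partial sums stabilize at both endpoints, as the comparison test predicts.
```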
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9286465048789978, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2009/04/24/the-parallelogram-law/?like=1&source=post_flair&_wpnonce=7b62a7b6bf
# The Unapologetic Mathematician

## The Parallelogram Law

There’s an interesting little identity that holds for norms — translation-invariant metrics on vector spaces over $\mathbb{R}$ or $\mathbb{C}$ — that come from inner products. Even more interestingly, it actually characterizes such norms. Geometrically, if we have a parallelogram whose two sides from the same point are given by the vectors $v$ and $w$, then we can construct the two diagonals $v+w$ and $v-w$. It then turns out that the sum of the squares on all four sides is equal to the sum of the squares on the diagonals. We write this formally by saying $\displaystyle\lVert v+w\rVert^2+\lVert v-w\rVert^2=2\lVert v\rVert^2+2\lVert w\rVert^2$ where we’ve used the fact that opposite sides of a parallelogram have the same length. Verifying this identity is straightforward, using the definition of the norm-squared: $\displaystyle\begin{aligned}\lVert v+w\rVert^2+\lVert v-w\rVert^2&=\langle v+w,v+w\rangle+\langle v-w,v-w\rangle\\&=\langle v,v\rangle+\langle v,w\rangle+\langle w,v\rangle+\langle w,w\rangle\\&+\langle v,v\rangle-\langle v,w\rangle-\langle w,v\rangle+\langle w,w\rangle\\&=2\langle v,v\rangle+2\langle w,w\rangle\\&=2\lVert v\rVert^2+2\lVert w\rVert^2\end{aligned}$ On the other hand, what if we have a norm that satisfies this parallelogram law? Then we can use the polarization identities to define a unique inner product. $\displaystyle\langle v,w\rangle=\frac{\lVert v+w\rVert^2-\lVert v-w\rVert^2}{4}+i\frac{\lVert v-iw\rVert^2-\lVert v+iw\rVert^2}{4}$ where we ignore the second term when working over real vector spaces. However, if we have a norm that does not satisfy the parallelogram law and try to use it in these formulas, then the resulting form must fail to be an inner product. If we did get an inner product, then the norm would satisfy the parallelogram law, which it doesn’t. Now, I haven’t given any examples of norms on vector spaces which don’t satisfy the parallelogram law, but they show up all the time in functional analysis. For now I just want to point out that such things do, in fact, exist.

Posted by John Armstrong | Algebra, Linear Algebra

## 4 Comments »

1. i always thought one starts with an inner product and then the norms are “induced” by the inner product. never thought we start with a norm and derive an inner product from it …

Comment by awhan | April 26, 2009 | Reply

2. An inner product is basically a positive definite quadratic form (on finite dimensional spaces) and a positive definite quadratic form gives you a norm.

Comment by Zygmund | April 28, 2009 | Reply

3. Yes, Zygmund, that’s basically what I’ve said. awhan was commenting that he hadn’t seen before the idea of starting from a norm instead of ending with one.

Comment by | April 28, 2009 | Reply

4. There are discontinuous additive real functions on the real line. That is, there are everywhere discontinuous functions $f:\mathbb{R}\to\mathbb{R}$ such that $f(x+y)=f(x)+f(y)$. The existence of such functions is intimately tied to the axiom of choice, and to the existence of non-measurable sets. One can arrange for $f(x)=0$ iff $x=0$. Using any such discontinuous $f$, define $q(x) = f(x)f(x)$. The function $q$ is non-negative, and obeys the parallelogram law. However, $q(x)^{1/2}$ is not a norm on $\mathbb{R}$ because $q(ax) \neq a^{2}q(x)$ for all scalars $a$. If you start with a function $q : X \to \mathbb{C}$ satisfying the parallelogram law, then you can show that $b(x,y) = \frac{1}{4}\sum_{n=0}^{3} i^{n} q(x+i^{n}y)$ is additive in both coordinates. Therefore, for fixed $x,y$, the function $t \mapsto b(tx,y)$ is additive. The only remaining issue of linearity – the issue of whether or not $b(ax,y)=ab(x,y)$ for all scalars $a$ – reduces to the real case, provided $q(ix)=q(x)$, where $i$ is the imaginary constant. This is not a straightforward matter, even for positive forms, as you can see. However, if $q(x) \ge 0$, and $q(ax)=|a|^{2}q(x)$ for all scalars $a$, then that is enough to guarantee the continuity of $t \mapsto b(tx,y)$ and, hence, also the sesquilinear nature of $b(x,y)$.

An additive real function $f(x)$ is linear iff it is Lebesgue measurable. It is not hard to show that $f(qx) = q f(x)$ for all rational numbers $q$; this follows directly from the additivity of $f$. Therefore, $f$ is linear over the rational numbers $\mathbb{Q}$. We end up with a natural decomposition of $\mathbb{R}$ into equivalence classes $[x]$ containing all $y$ in $\mathbb{R}$ such that $x-y$ is rational. This is, in turn, related to the classical construction of a non-measurable set, where one picks a single element from each distinct equivalence class $[x]$. Very interesting stuff which is not easily summarized.

The upshot: a positive definite function $q$ on a complex vector space satisfying $q(ix)=q(x)$ and $q(x+y)+q(x-y)=2q(x)+2q(y)$ does not necessarily produce an inner product over $X$. However, if $q(x)^{1/2}$ is a norm (which comes down to $q(ax)=|a|^{2}q(x)$), then a positive-definite $q$ is generated by an inner-product. Or, equivalently, if $t \mapsto q(x+ty)$ is measurable for all $x,y$ in $X$, then $q$ is generated by an inner product.

Comment by Trent | June 15, 2012 | Reply
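As a concrete footnote to the post's closing remark that norms violating the parallelogram law do exist: here is a minimal numerical sketch (my own illustration, using NumPy, and not part of the original post) showing that the Euclidean norm passes the test while the taxicab norm fails it, so no inner product can induce the latter.

```
import numpy as np

def parallelogram_gap(norm, v, w):
    # ||v+w||^2 + ||v-w||^2 - 2||v||^2 - 2||w||^2; zero exactly when the law holds
    return norm(v + w)**2 + norm(v - w)**2 - 2*norm(v)**2 - 2*norm(w)**2

v = np.array([1.0, 0.0])
w = np.array([0.0, 1.0])

l2 = lambda x: np.linalg.norm(x, 2)   # Euclidean norm: comes from an inner product
l1 = lambda x: np.linalg.norm(x, 1)   # taxicab (1-)norm: does not

print(parallelogram_gap(l2, v, w))    # 0.0 -- the law holds
print(parallelogram_gap(l1, v, w))    # 4.0 -- the law fails
```

Since the gap is nonzero for the 1-norm, the polarization formula applied to it cannot yield an inner product, which is exactly the dichotomy the post describes.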
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 9, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9048195481300354, "perplexity_flag": "head"}
http://mathoverflow.net/revisions/46430/list
I can't answer in general, but I will speak to some examples. For the problem on finite unions of subspaces, I typed finite union subspaces vector into Google, and the top of the resulting list was

On the representation of vector spaces as a finite union of subspaces by J Luh - 1972 FINITE UNION OF SUBSPACES. By. J. LUH (Raleigh). It has been proved in [1] that if a vector space V over a field F is a union of n (finite) proper subspaces ... www.springerlink.com/index/GN711127680W310R.pdf

For the ${\bf Q}(\sqrt2,\sqrt7)$ problem, I typed in class number biquadratic and got a number of results that look like they might be helpful. Typing in class number computation biquadratic, the top result was

A Computation of Some Bi-Quadratic Class Numbers by H Cohn - 1958 2 99191. A Computation of Some Bi-Quadratic Class Numbers. By Harvey Cohn. A fascinating chapter in computational number theory began when Lagrange ... www.jstor.org/stable/2002024

which looks like it might be useful, as do some of the other returns.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8786418437957764, "perplexity_flag": "middle"}
http://www.nag.com/numeric/cl/nagdoc_cl23/html/F11/f11dac.html
# NAG Library Function Document nag_sparse_nsym_fac (f11dac)

## 1  Purpose

nag_sparse_nsym_fac (f11dac) computes an incomplete $LU$ factorization of a real sparse nonsymmetric matrix, represented in coordinate storage format. This factorization may be used as a preconditioner in combination with nag_sparse_nsym_fac_sol (f11dcc).

## 2  Specification

#include <nag.h> #include <nagf11.h> void nag_sparse_nsym_fac (Integer n, Integer nnz, double *a[], Integer *la, Integer *irow[], Integer *icol[], Integer lfill, double dtol, Nag_SparseNsym_Piv pstrat, Nag_SparseNsym_Fact milu, Integer ipivp[], Integer ipivq[], Integer istr[], Integer idiag[], Integer *nnzc, Integer *npivm, NagError *fail)

## 3  Description

nag_sparse_nsym_fac (f11dac) computes an incomplete $LU$ factorization (Meijerink and Van der Vorst (1977) and Meijerink and Van der Vorst (1981)) of a real sparse nonsymmetric $n$ by $n$ matrix $A$. The factorization is intended primarily for use as a preconditioner with the iterative solver nag_sparse_nsym_fac_sol (f11dcc). The decomposition is written in the form $A = M + R$ where $M = PLDUQ$ and $L$ is lower triangular with unit diagonal elements, $D$ is diagonal, $U$ is upper triangular with unit diagonals, $P$ and $Q$ are permutation matrices, and $R$ is a remainder matrix. The amount of fill-in occurring in the factorization can vary from zero to complete fill, and can be controlled by specifying either the maximum level of fill lfill, or the drop tolerance dtol. The argument pstrat defines the pivoting strategy to be used. The options currently available are no pivoting, user-defined pivoting, partial pivoting by columns for stability, and complete pivoting by rows for sparsity and by columns for stability. The factorization may optionally be modified to preserve the row-sums of the original matrix. The sparse matrix $A$ is represented in coordinate storage (CS) format (see Section 2.1.2 in the f11 Chapter Introduction). The array a stores all the nonzero elements of the matrix $A$, while arrays irow and icol store the corresponding row and column indices respectively. Multiple nonzero elements may not be specified for the same row and column index. The preconditioning matrix $M$ is returned in terms of the CS representation of the matrix $C = L + {D}^{-1} + U - 2I.$ Further algorithmic details are given in Section 8.3.

## 4  References

Meijerink J and Van der Vorst H (1977) An iterative solution method for linear systems of which the coefficient matrix is a symmetric M-matrix Math. Comput. 31 148–162

Meijerink J and Van der Vorst H (1981) Guidelines for the usage of incomplete decompositions in solving sets of linear equations as they occur in practical problems J. Comput. Phys. 44 134–155

Salvini S A and Shaw G J (1996) An evaluation of new NAG Library solvers for large sparse unsymmetric linear systems NAG Technical Report TR2/96

## 5  Arguments

1:     n – IntegerInput On entry: the order of the matrix $A$. Constraint: ${\mathbf{n}}\ge 1$.

2:     nnz – IntegerInput On entry: the number of nonzero elements in the matrix $A$. Constraint: $1\le {\mathbf{nnz}}\le {{\mathbf{n}}}^{2}$.

3:     a[la] – double *Input/Output On entry: the nonzero elements in the matrix $A$, ordered by increasing row index, and by increasing column index within each row. Multiple entries for the same row and column indices are not permitted. The function nag_sparse_nsym_sort (f11zac) may be used to order the elements in this way.
On exit: the first nnz entries of a contain the nonzero elements of $A$ and the next nnzc entries contain the elements of the matrix $C$. Matrix elements are ordered by increasing row index, and by increasing column index within each row.

4:     la – Integer *Input/Output On entry: These arrays must be of sufficient size to store both $A$ (nnz elements) and $C$ (nnzc elements); for this reason the length of the arrays may be changed internally by calls to realloc. It is therefore imperative that these arrays are allocated using malloc and not declared as automatic arrays. On exit: if internal allocation has taken place then la is set to ${\mathbf{nnz}}+{\mathbf{nnzc}}$, otherwise it remains unchanged. Constraint: ${\mathbf{la}}\ge 2×{\mathbf{nnz}}$.

5:     irow[la] – Integer *Input/Output 6:     icol[la] – Integer *Input/Output On entry: the row and column indices of the nonzero elements supplied in a. Constraints: • irow and icol must satisfy the following constraints (which may be imposed by a call to nag_sparse_nsym_sort (f11zac)): • $1\le {\mathbf{irow}}\left[\mathit{i}\right]\le {\mathbf{n}}$ and $1\le {\mathbf{icol}}\left[\mathit{i}\right]\le {\mathbf{n}}$, for $\mathit{i}=0,1,\dots ,{\mathbf{nnz}}-1$; • ${\mathbf{irow}}\left[\mathit{i}-1\right]<{\mathbf{irow}}\left[\mathit{i}\right]$ or ${\mathbf{irow}}\left[\mathit{i}-1\right]={\mathbf{irow}}\left[\mathit{i}\right]$ and ${\mathbf{icol}}\left[\mathit{i}-1\right]<{\mathbf{icol}}\left[\mathit{i}\right]$, for $\mathit{i}=1,2,\dots ,{\mathbf{nnz}}-1$. On exit: the row and column indices of the nonzero elements returned in a.

7:     lfill – IntegerInput On entry: if ${\mathbf{lfill}}\ge 0$ its value is the maximum level of fill allowed in the decomposition (see Section 8.2). A negative value of lfill indicates that dtol will be used to control the fill instead.

8:     dtol – doubleInput On entry: if ${\mathbf{lfill}}<0$ then dtol is used as a drop tolerance to control the fill-in (see Section 8.2); otherwise dtol is not referenced. Constraint: if ${\mathbf{lfill}}<0$, ${\mathbf{dtol}}\ge 0.0$.

9:     pstrat – Nag_SparseNsym_PivInput On entry: specifies the pivoting strategy to be adopted as follows: • if ${\mathbf{pstrat}}=\mathrm{Nag_SparseNsym_NoPiv}$, no pivoting is carried out; • if ${\mathbf{pstrat}}=\mathrm{Nag_SparseNsym_UserPiv}$, pivoting is carried out according to the user-defined input value of ipivp and ipivq; • if ${\mathbf{pstrat}}=\mathrm{Nag_SparseNsym_PartialPiv}$, partial pivoting by columns for stability is carried out; • if ${\mathbf{pstrat}}=\mathrm{Nag_SparseNsym_CompletePiv}$, complete pivoting by rows for sparsity, and by columns for stability, is carried out. Suggested value: ${\mathbf{pstrat}}=\mathrm{Nag_SparseNsym_CompletePiv}$. Constraint: ${\mathbf{pstrat}}=\mathrm{Nag_SparseNsym_NoPiv}$, $\mathrm{Nag_SparseNsym_UserPiv}$, $\mathrm{Nag_SparseNsym_PartialPiv}$ or $\mathrm{Nag_SparseNsym_CompletePiv}$.

10:   milu – Nag_SparseNsym_FactInput On entry: indicates whether or not the factorization should be modified to preserve row sums (see Section 8.4): • if ${\mathbf{milu}}=\mathrm{Nag_SparseNsym_ModFact}$, the factorization is modified (milu); • if ${\mathbf{milu}}=\mathrm{Nag_SparseNsym_UnModFact}$, the factorization is not modified. Constraint: ${\mathbf{milu}}=\mathrm{Nag_SparseNsym_ModFact}$ or $\mathrm{Nag_SparseNsym_UnModFact}$.
11:   ipivp[n] – IntegerInput/Output 12:   ipivq[n] – IntegerInput/Output On entry: if ${\mathbf{pstrat}}=\mathrm{Nag_SparseNsym_UserPiv}$, ${\mathbf{ipivp}}\left[k-1\right]$ and ${\mathbf{ipivq}}\left[k-1\right]$ must specify the row and column indices of the element used as a pivot at elimination stage $k$. Otherwise ipivp and ipivq need not be initialized. Constraint: if ${\mathbf{pstrat}}=\mathrm{Nag_SparseNsym_UserPiv}$, ipivp and ipivq must both hold valid permutations of the integers on $\left[1,{\mathbf{n}}\right]$. On exit: the pivot indices. If ${\mathbf{ipivp}}\left[k-1\right]=i$ and ${\mathbf{ipivq}}\left[k-1\right]=j$ then the element in row $i$ and column $j$ was used as the pivot at elimination stage $k$. 13:   istr[${\mathbf{n}}+1$] – IntegerOutput On exit: ${\mathbf{istr}}\left[\mathit{i}-1\right]-1$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$ is the index of arrays a, irow and icol where row $i$ of the matrix $C$ starts. ${\mathbf{istr}}\left[{\mathbf{n}}\right]-1$ is the address of the last nonzero element in $C$ plus one. 14:   idiag[n] – IntegerOutput On exit: ${\mathbf{idiag}}\left[\mathit{i}-1\right]$, for $\mathit{i}=1,2,\dots ,{\mathbf{n}}$ holds the index in the arrays a, irow and icol which holds the diagonal element in row $\mathit{i}$ of the matrix $C$. 15:   nnzc – Integer *Output On exit: the number of nonzero elements in the matrix $C$. 16:   npivm – Integer *Output On exit: if ${\mathbf{npivm}}>0$ it gives the number of pivots which were modified during the factorization to ensure that $M$ exists. If ${\mathbf{npivm}}=-1$ no pivot modifications were required, but a local restart occurred (Section 8.4). The quality of the preconditioner will generally depend on the returned value of npivm. If npivm is large the preconditioner may not be satisfactory. In this case it may be advantageous to call nag_sparse_nsym_fac (f11dac) again with an increased value of lfill, a reduced value of dtol, or ${\mathbf{pstrat}}=\mathrm{Nag_SparseNsym_CompletePiv}$. 17:   fail – NagError *Input/Output The NAG error argument (see Section 3.6 in the Essential Introduction). ## 6  Error Indicators and Warnings NE_2_INT_ARG_LT On entry, ${\mathbf{la}}=〈\mathit{\text{value}}〉$ while ${\mathbf{nnz}}=〈\mathit{\text{value}}〉$. These arguments must satisfy ${\mathbf{la}}\ge 2×{\mathbf{nnz}}$. NE_ALLOC_FAIL Dynamic memory allocation failed. NE_BAD_PARAM On entry, argument milu had an illegal value. On entry, argument pstrat had an illegal value. NE_INT_2 On entry, ${\mathbf{nnz}}=〈\mathit{\text{value}}〉$, ${\mathbf{n}}=〈\mathit{\text{value}}〉$. Constraint: $1\le {\mathbf{nnz}}\le {{\mathbf{n}}}^{2}\text{.}$. NE_INT_ARG_LT On entry, ${\mathbf{n}}=〈\mathit{\text{value}}〉$. Constraint: ${\mathbf{n}}\ge 1$. NE_INTERNAL_ERROR An internal error has occurred in this function. Check the function call and any array sizes. If the call is correct then please contact NAG for assistance. NE_INVALID_ROWCOL_PIVOT On entry, ${\mathbf{pstrat}}=\mathrm{Nag_SparseNsym_UserPiv}$, but one or both of the arrays ipivp and ipivq does not represent a valid permutation of the integers in $\left[1,{\mathbf{n}}\right]$. An input value of ipivp or ipivq is either out of range or repeated. 
NE_NONSYMM_MATRIX_DUP A nonzero matrix element has been supplied which does not lie within the matrix $A$, is out of order or has duplicate row and column indices, i.e., one or more of the following constraints has been violated: $1\le {\mathbf{irow}}\left[\mathit{i}\right]\le {\mathbf{n}}$, $1\le {\mathbf{icol}}\left[\mathit{i}\right]\le {\mathbf{n}}$, for $\mathit{i}=0,1,\dots ,{\mathbf{nnz}}-1$. ${\mathbf{irow}}\left[\mathit{i}-1\right]<{\mathbf{irow}}\left[\mathit{i}\right]$, or ${\mathbf{irow}}\left[\mathit{i}-1\right]={\mathbf{irow}}\left[\mathit{i}\right]$ and ${\mathbf{icol}}\left[\mathit{i}-1\right]<{\mathbf{icol}}\left[\mathit{i}\right]$, for $\mathit{i}=1,2,\dots ,{\mathbf{nnz}}-1$. Call nag_sparse_nsym_sort (f11zac) to reorder and sum or remove duplicates.

NE_REAL_INT_ARG_CONS On entry, ${\mathbf{dtol}}=〈\mathit{\text{value}}〉$ and ${\mathbf{lfill}}=〈\mathit{\text{value}}〉$. These arguments must satisfy ${\mathbf{dtol}}\ge 0.0$ if ${\mathbf{lfill}}<0$.

## 7  Accuracy

The accuracy of the factorization will be determined by the size of the elements that are dropped and the size of any modifications made to the pivot elements. If these sizes are small then the computed factors will correspond to a matrix close to $A$. The factorization can generally be made more accurate by increasing lfill, or by reducing dtol with ${\mathbf{lfill}}<0$. If nag_sparse_nsym_fac (f11dac) is used in combination with nag_sparse_nsym_fac_sol (f11dcc), the more accurate the factorization the fewer iterations will be required. However, the cost of the decomposition will also generally increase.

## 8  Further Comments

### 8.1  Timing

The time taken for a call to nag_sparse_nsym_fac (f11dac) is roughly proportional to ${{\mathbf{nnzc}}}^{2}/{\mathbf{n}}$.

### 8.2  Control of Fill-in

If ${\mathbf{lfill}}\ge 0$ the amount of fill-in occurring in the incomplete factorization is controlled by limiting the maximum level of fill-in to lfill. The original nonzero elements of $A$ are defined to be of level 0. The fill level of a new nonzero location occurring during the factorization is defined as: $k = \max \left({k}_{e},{k}_{c}+1\right),$ where ${k}_{e}$ is the level of fill of the element being eliminated, and ${k}_{c}$ is the level of fill of the element causing the fill-in. If ${\mathbf{lfill}}<0$ the fill-in is controlled by means of the drop tolerance dtol. A potential fill-in element ${a}_{ij}$ occurring in row $i$ and column $j$ will not be included if: $\left|{a}_{ij}\right| < {\mathbf{dtol}}\times \alpha ,$ where $\alpha$ is the maximum absolute value element in the matrix $A$. For either method of control, any elements which are not included are discarded unless ${\mathbf{milu}}=\mathrm{Nag_SparseNsym_ModFact}$, in which case their contributions are subtracted from the pivot element in the relevant elimination row, to preserve the row-sums of the original matrix. Should the factorization process break down a local restart process is implemented as described in Section 8.4. This will affect the amount of fill present in the final factorization.

### 8.3  Algorithmic Details

The factorization is constructed row by row. At each elimination stage a row index is chosen. In the case of complete pivoting this index is chosen in order to reduce fill-in. Otherwise the rows are treated in the order given, or some user-defined order. The chosen row is copied from the original matrix $A$ and modified according to those previous elimination stages which affect it.
During this process any fill-in elements are either dropped or kept according to the values of lfill or dtol. In the case of a modified factorization (${\mathbf{milu}}=\mathrm{Nag_SparseNsym_ModFact}$) the sum of the dropped terms for the given row is stored. Finally the pivot element for the row is chosen and the multipliers are computed for this elimination stage. For partial or complete pivoting the pivot element is chosen in the interests of stability as the element of largest absolute value in the row. Otherwise the pivot element is chosen in the order given, or some user-defined order. If the factorization breaks down because the chosen pivot element is zero, or there is no nonzero pivot available, a local restart recovery process is implemented. The modification of the given pivot row according to previous elimination stages is repeated, but this time keeping all fill. Note that in this case the final factorization will include more fill than originally specified by the user-supplied value of lfill or dtol. The local restart usually results in a suitable nonzero pivot arising. The original criterion for dropping fill-in elements is then resumed for the next elimination stage (hence the local nature of the restart process). Should this restart process also fail to produce a nonzero pivot element an arbitrary unit pivot is introduced in an arbitrarily chosen column. nag_sparse_nsym_fac (f11dac) returns an integer argument npivm which gives the number of these arbitrary unit pivots introduced. If no pivots were modified but local restarts occurred ${\mathbf{npivm}}=-1$ is returned.

### 8.4  Choice of Arguments

There is unfortunately no choice of the various algorithmic arguments which is optimal for all types of matrix, and some experimentation will generally be required for each new type of matrix encountered. If the matrix $A$ is not known to have any particular special properties the following strategy is recommended. Start with ${\mathbf{lfill}}=0$ and ${\mathbf{pstrat}}=\mathrm{Nag_SparseNsym_CompletePiv}$. If the value returned for npivm is significantly larger than zero, i.e., a large number of pivot modifications were required to ensure that $M$ existed, the preconditioner is not likely to be satisfactory. In this case increase lfill until npivm falls to a value close to zero. If $A$ has non-positive off-diagonal elements, is non-singular, and has only non-negative elements in its inverse, it is called an ‘M-matrix’. It can be shown that no pivot modifications are required in the incomplete $LU$ factorization of an M-matrix (Meijerink and Van der Vorst (1977)). In this case a good preconditioner can generally be expected by setting ${\mathbf{lfill}}=0$, ${\mathbf{pstrat}}=\mathrm{Nag_SparseNsym_NoPiv}$ and ${\mathbf{milu}}=\mathrm{Nag_SparseNsym_ModFact}$. Some illustrations of the application of nag_sparse_nsym_fac (f11dac) to linear systems arising from the discretization of two-dimensional elliptic partial differential equations, and to random-valued randomly structured linear systems, can be found in Salvini and Shaw (1996).

### 8.5  Direct Solution of Sparse Linear Systems

Although it is not their primary purpose nag_sparse_nsym_fac (f11dac) and nag_sparse_nsym_precon_ilu_solve (f11dbc) may be used together to obtain a direct solution to a nonsingular sparse linear system.
To achieve this the call to nag_sparse_nsym_precon_ilu_solve (f11dbc) should be preceded by a complete $LU$ factorization $A = PLDUQ = M .$ A complete factorization is obtained from a call to nag_sparse_nsym_fac (f11dac) with ${\mathbf{lfill}}<0$ and ${\mathbf{dtol}}=0.0$, provided ${\mathbf{npivm}}\le 0$ on exit. A positive value of npivm indicates that $A$ is singular, or ill-conditioned. A factorization with positive npivm may serve as a preconditioner, but will not result in a direct solution. It is therefore essential to check the output value of npivm if a direct solution is required. The use of nag_sparse_nsym_fac (f11dac) and nag_sparse_nsym_precon_ilu_solve (f11dbc) as a direct method is illustrated in Section 9 in nag_sparse_nsym_precon_ilu_solve (f11dbc). ## 9  Example This example program reads in a sparse matrix $A$ and calls nag_sparse_nsym_fac (f11dac) to compute an incomplete $LU$ factorization. It then outputs the nonzero elements of both $A$ and $C=L+{D}^{-1}+U-2I$. The call to nag_sparse_nsym_fac (f11dac) has ${\mathbf{lfill}}=0$, and ${\mathbf{pstrat}}=\mathrm{Nag_SparseNsym_CompletePiv}$, giving an unmodified zero-fill $LU$ factorization, with row pivoting for sparsity and column pivoting for stability. ### 9.1  Program Text Program Text (f11dace.c) ### 9.2  Program Data Program Data (f11dace.d) ### 9.3  Program Results Program Results (f11dace.r)
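For readers without the NAG Library, the same incomplete-factorization preconditioning idea is available in open-source form. Below is a minimal sketch using SciPy's SuperLU-based ILU. Note this is only an analogy, not the NAG routine: the parameter names differ, drop_tol corresponds only loosely to dtol, SuperLU exposes no level-of-fill argument like lfill, and the matrix values here are arbitrary illustration data.

```
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import spilu

# A small nonsymmetric sparse system (arbitrary example values).
A = csc_matrix(np.array([[4.0, 1.0, 0.0],
                         [1.0, 3.0, 2.0],
                         [0.0, 2.0, 5.0]]))

# drop_tol plays a role similar to dtol; a larger fill_factor permits more fill,
# loosely analogous to raising lfill.
M = spilu(A, drop_tol=1e-4, fill_factor=10)

b = np.array([1.0, 2.0, 3.0])
print(M.solve(b))                        # apply the preconditioner: M^{-1} b
print(np.linalg.solve(A.toarray(), b))   # compare with the exact solve
```

With a tight drop tolerance the incomplete factors approach a complete factorization and the two printed vectors agree closely, mirroring the "direct solution" use described in Section 8.5.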
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 142, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7630833983421326, "perplexity_flag": "middle"}
http://mathhelpforum.com/calculus/156566-area-integral-polar-co-ords-surface.html
# Thread:

1. ## Area integral in polar co-ords of a surface

Express as an area integral in polar coordinates the area of the surface z= y^2 +2xy -x^2 +2 lying over the annulus defined by 8/3 ≤ x^2 + y^2 ≤ 1. Hence evaluate this area in terms of π. Thanks

2. Have you made any attempt at this at all? The area you are to integrate over, in polar coordinates, is just $8/3\le r^2\le 1$ which, since r must be positive is exactly $\sqrt{8/3}\le r\le 1$. That makes the limits of integration easy. And don't forget that the "differential of area" in polar coordinates is $rdrd\theta$. Write $z= y^2+ 2xy- x^2+ 2$ in polar coordinates and integrate!

Ouch! I just realized this is a surface integral, not a volume integral. But the "polar coordinates" part is the same. To get the surface area integrand you can think of the surface $z= y^2+ 2xy- x^2+ 2$ as given by the vector function $\vec{r}= x\vec{i}+ y\vec{j}+ z\vec{k}= x\vec{i}+ y\vec{j}+ (y^2+ 2xy- x^2+ 2)\vec{k}$ with the x and y coordinates as parameters. The derivative vectors $\vec{r}_x= \vec{i}+ (2y- 2x)\vec{k}$ and $\vec{r}_y= \vec{j}+ (2y+ 2x)\vec{k}$ are tangent to the surface and their cross product, $\left|\begin{array}{ccc}\vec{i} & \vec{j} & \vec{k} \\ 1 & 0 & 2y- 2x \\ 0 & 1 & 2y+ 2x\end{array}\right|= -(2y- 2x)\vec{i}- (2y+ 2x)\vec{j}+ \vec{k}$ is perpendicular to the surface and gives the "vector differential of surface area". Its length, $\sqrt{1+ 8x^2+ 8y^2}$, gives the "scalar differential of surface area", $\sqrt{1+ 8x^2+ 8y^2}dx dy$. I think you can see that will be much easier to integrate in polar coordinates!
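To finish the evaluation (my own completion, writing $a$ for the inner radius of the annulus): substituting $x^2+y^2=r^2$ and using the substitution $u = 1+8r^2$, $du = 16r\,dr$, the surface area is $\int_0^{2\pi}\int_a^1 \sqrt{1+8r^2}\,r\,dr\,d\theta = 2\pi\left[\frac{(1+8r^2)^{3/2}}{24}\right]_a^1 = \frac{\pi}{12}\left(27-(1+8a^2)^{3/2}\right)$. Note that the bounds as printed, $8/3\le x^2+y^2\le 1$, describe an empty region, so the inner bound is presumably a typo; if the intended annulus was $3/8\le x^2+y^2\le 1$ (a plausible transposition), then $1+8a^2=4$ and the area is $\frac{\pi}{12}(27-8)=\frac{19\pi}{12}$, which is indeed a clean answer "in terms of $\pi$".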
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9265061020851135, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/159958/easy-proof-of-trivial-fusion-implies-normal-p-complement
# Easy proof of trivial fusion implies normal p-complement

Theorem: Suppose G is a finite group with Sylow p-subgroup P. Then the following are equivalent:

• The set K of elements of G of order relatively prime to p (the p′-elements) forms a subgroup
• If $A$ and $A^g$ are both subsets of $P$, then there is some $x$ in $C_G(A)$ such that $xg$ is in $P$

That the first implies the second is a silly trick: $(a^{-1} a^g)^{-1} = g^{-1} g^a \in P \cap K = 1$ for any $a \in P$ and $g \in K$ such that $a^g \in P$.

That the second implies the first is not too hard (Frobenius normal p-complement theorem), but I'm trying to use this as a first example, and so don't want to have any prerequisites outside a very gentle undergraduate group theory course. Most of the rest of the talk is just using Sylow's theorem. Is there a very low-tech, short, few-preliminaries proof that "absolutely no fusion" implies a normal p-complement?

I would be ok with assuming P is abelian, so that we get:

• If $A$ and $A^g$ are both subsets of $P$, then $g$ is in $C_G(A)$.

I'm also fine with assuming p = 2 so that "relatively prime to p" shortens to "odd". I don't think using the transfer is appropriate, as it won't be used again, and the whole point of this proof is to motivate learning something else. For a "no" answer to my question, it would be sufficient to convince me that transfer is needed (and nice if you can suggest a special case where it wouldn't be needed, other than P of order 2).

- You can do it with characters, but I don't suppose you would accept that as a "few preliminaries" proof. – Geoff Robinson Jun 18 '12 at 17:08 – Jack Schmidt Jun 18 '12 at 17:39 I am confused. Isn't $A_4$ a counterexample, with $P=V_4$, and $A$ being any subgroup of order 2? The set of odd order elements is certainly a group. Moreover, any $A^g$ is in $P$, since $P$ is normal. But $C_G(A)=1$, so there is no $x$ such that $xg\in P$ if $g$ is an element of order 3. – Alex B. Jun 18 '12 at 18:17 @Alex: (1,2,3) and (2,3,4) have a product of order 2, so the odd order elements are not a group. A4 satisfies both hypotheses with p=3, but neither with p=2. – Jack Schmidt Jun 18 '12 at 18:21 2 It's not hard to see that what you are asking for in the case $P$ abelian is equivalent to Burnside's Transfer Theorem. (If $A,A^g \subseteq P$ then $P$ and $P^{g^{-1}}$ are conjugate in $C_G(A)$, so $P^{hg}=P$ with $h \in C_G(A)$ and then the hypothesis of BTT gives $hg \in C_G(P)$, so $g \in C_G(A)$ and we have your hypothesis.) So you are really just asking for a transfer-free proof of BTT. – Derek Holt Jun 18 '12 at 18:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9574026465415955, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/24735/why-do-you-get-electric-field-of-a-light-wave
# Why do you get electric field of a light wave?

Why do you get the electric field of a light wave in the following form: $E(x,t)=A \cos(kx-\omega t- \theta)$? (look at: https://public.me.com/ricktrebino -> OpticsI-02-Waves-Fields.ppt, p. 18)

- Far from the source, the propagating wave has a spherical form and propagates along the radius. So its local amplitude is a harmonic function of time. As to the coordinate dependence, it is something like $E=\frac{E_0}{R}\cdot \cos(\vec{k}\vec{R}-\omega t - \theta)$. If R changes within $\delta R \approx 1/k \ll R$, the R-dependence of the amplitude is weak (nearly constant), so locally you have a plane wave in direction of $\vec{R}$. – Vladimir Kalitvianski May 2 '12 at 16:41

## 2 Answers

As it happens I've just been reading "17 Equations That Changed the World" by Ian Stewart and he gives the derivation. I strongly recommend the book, but if you just want the derivation you can find it on Wikipedia. Since we're not supposed to just give links I'll copy the stuff from Wikipedia here:

$$\nabla \cdot \boldsymbol{E} = 0$$ $$\nabla \times \boldsymbol{E} = -\frac{\partial \boldsymbol{B}}{\partial t}$$ $$\nabla \cdot \boldsymbol{B} = 0$$ $$\nabla \times \boldsymbol{B} = \mu_0\epsilon_0\frac{\partial \boldsymbol{E}}{\partial t}$$

Taking the curl of the curl equation for $\boldsymbol{E}$ gives: $$\nabla \times (\nabla \times \boldsymbol{E}) = -\frac{\partial}{\partial t} \nabla \times \boldsymbol{B} = -\mu_0\epsilon_0\frac{\partial^2 \boldsymbol{E}}{\partial t^2}$$ But for any vector field $\boldsymbol{V}$ there is an identity: $$\nabla \times (\nabla \times \boldsymbol{V}) = \nabla(\nabla \cdot \boldsymbol{V}) - \nabla^2 \boldsymbol{V}$$ so $$\nabla \times (\nabla \times \boldsymbol{E}) = \nabla(\nabla \cdot \boldsymbol{E}) - \nabla^2 \boldsymbol{E} = - \nabla^2 \boldsymbol{E}$$ because $\nabla \cdot \boldsymbol{E} = 0$ so: $$\nabla^2 \boldsymbol{E} = \mu_0\epsilon_0\frac{\partial^2 \boldsymbol{E}}{\partial t^2}$$ and this is just the wave equation: $$c^2\nabla^2 \boldsymbol{E} = \frac{\partial^2 \boldsymbol{E}}{\partial t^2}$$ where $\mu_0\epsilon_0$ is equal to $c^{-2}$, $c$ is the speed of light. You wanted to know why $$\boldsymbol{E}(x,t)=\boldsymbol{A} \cos(kx-\omega t- \theta)$$ is one possible equation for a light wave, well substitute it into the equation above and you get: $$-c^2 \boldsymbol{A} k^2 \cos(kx-\omega t- \theta) = -\boldsymbol{A} \omega^2 \cos(kx-\omega t- \theta)$$ and obviously this satisfies the wave equation if $c = \omega/k$. This is one solution of the wave equation, but of course there are lots of others.

- Hmm, but doesn't the amplitude factor out and get cancelled here? – Manishearth♦ May 2 '12 at 15:46

@Manishearth: I spotted an error and was editing my post while you commented. Does your comment still apply? – John Rennie May 2 '12 at 15:54

unfortunately, yes, since it still doesn't determine $A$. Unless I've misinterpreted the question--which I now think I have :/ . I thought the OP wanted the relation between a given light wave and the equation. +1 then :) – Manishearth♦ May 2 '12 at 15:58

$A$ is just an arbitrary constant isn't it? It's the amplitude of the light wave, but the wave can have any amplitude. – John Rennie May 2 '12 at 16:25

The wave amplitude $A$ is determined either with the boundary conditions or with the source in the field equation. In your case (no source) it is the boundary conditions that "supply" the wave amplitude.
– Vladimir Kalitvianski May 2 '12 at 16:26

If the light wave has frequency $\nu$, wavelength $\lambda$, then it becomes: $$E=E_0\sin(\frac{2\pi}{\lambda}x-2\pi\nu t+\phi)$$ $\phi$ is arbitrary. I'm not sure of this, but you can only calculate $E_0$ if you have the amplitude of the magnetic field and the energy density of the wave ($U$): $$U=\frac12\epsilon_0E_0^2+\frac1{2\mu_0}B_0^2$$

- shouldn't $\nu$ be $T$ (period)? – Nic May 2 '12 at 17:17

@Nic yep, thanks :) – Manishearth♦ May 2 '12 at 17:19
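A quick symbolic check of the dispersion condition derived above (my own sketch using the SymPy library; not part of the original thread):

```
import sympy as sp

x, t = sp.symbols('x t')
A, k, omega, theta, c = sp.symbols('A k omega theta c', positive=True)

E = A * sp.cos(k*x - omega*t - theta)

# Residual of the 1-D wave equation: c^2 * d2E/dx2 - d2E/dt2
residual = c**2 * sp.diff(E, x, 2) - sp.diff(E, t, 2)

print(sp.simplify(residual))                   # proportional to (omega**2 - c**2*k**2)
print(sp.simplify(residual.subs(c, omega/k)))  # 0: E solves the equation when c = omega/k
```

As in the answer above, the amplitude $A$ factors out of the residual, so the wave equation constrains only the relation $c=\omega/k$, not the amplitude.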
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 13, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9228377938270569, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2008/07/23/the-euler-characteristic-of-an-exact-sequence-vanishes/?like=1&source=post_flair&_wpnonce=9a71b83933
# The Unapologetic Mathematician ## The Euler Characteristic of an Exact Sequence Vanishes Naturally, one kind of linear map we’re really interested in is an isomorphism. Such a map has no kernel and no cokernel, and so its index is definitely zero. If it weren’t clear enough already, this shows that isomorphic vector spaces have the same dimension! But remember that in abelian categories we’ve got diagrams to chase and exact sequences to play with. And these have something to say about the matters at hand. First, remember that a linear map whose kernel vanishes looks like this in terms of exact sequences: $\mathbf{0}\rightarrow V\xrightarrow{f}W$ And one whose cokernel vanishes looks like this: $V\xrightarrow{f}W\rightarrow\mathbf{0}$ So an isomorphism is just an exact sequence like this: $\mathbf{0}\rightarrow V\xrightarrow{f}W\rightarrow\mathbf{0}$ And then we have the equation $-\dim(V)+\dim(W)=0$ Yes, I’m writing the negative of the index here, but there’s a good reason for it. Now what if we have a segment of an exact sequence: $...\rightarrow U\xrightarrow{f}V\xrightarrow{g}W\rightarrow...$ Considering the map $f$ allows us to break up $V$ as $\mathrm{Im}(f)\oplus\mathrm{Cok}(f)$ (since short exact sequences split). On the other hand, considering the map $g$ allows us to break up $V$ as $\mathrm{Ker}(g)\oplus\mathrm{Coim}(g)$. Exactness tells us that $\mathrm{Im}(f)=\mathrm{Ker}(g)$, which gives us the isomorphism $V\cong\mathrm{Im}(f)\oplus\mathrm{Coim}(g)$. Now the rank-nullity theorem says that $\mathrm{Im}(f)\cong\mathrm{Coim}(f)$, and similarly for all other linear maps. So we get $V\cong\mathrm{Coim}(f)\oplus\mathrm{Im}(g)$ — which expresses $V$ as the direct sum of one subspace of $U$ and one of $W$. And each of those vector spaces has another part to hand off to the vector space on its other side, and so on! What does this mean? It says that if we look at every other term of an exact sequence and take their direct sum, the result is the same whether we look at the odd or the even terms. More explicitly, let’s say we have a long exact sequence $\mathbf{0}\rightarrow V_n\xrightarrow{f_n}V_{n-1}\xrightarrow{f_{n-1}}...\xrightarrow{f_2}V_1\xrightarrow{f_1}V_0\rightarrow\mathbf{0}$ Then we can decompose each term as either $V_k\cong\mathrm{Im}(f_{k+1})\oplus\mathrm{Coim}(f_k)$ or $V_k\cong\mathrm{Coim}(f_{k+1})\oplus\mathrm{Im}(f_k)$ — one for the even terms and the other for the odd terms. Then direct-summing everything up we have an isomorphism $\displaystyle\bigoplus\limits_{\substack{0\leq k\leq n\\k\mathrm{~even}}}V_k\cong\bigoplus\limits_{\substack{0\leq k\leq n\\k\mathrm{~odd}}}V_k$ which tells us that $\displaystyle\dim\left(\bigoplus\limits_{\substack{0\leq k\leq n\\k\mathrm{~even}}}V_k\right)-\dim\left(\bigoplus\limits_{\substack{0\leq k\leq n\\k\mathrm{~odd}}}V_k\right)=0$ But since direct sums add dimensions this means $\displaystyle\sum\limits_{\substack{0\leq k\leq n\\k\mathrm{~even}}}\dim\left(V_k\right)-\sum\limits_{\substack{0\leq k\leq n\\k\mathrm{~odd}}}\dim\left(V_k\right)=0$ And now we can just combine these sums: $\displaystyle\sum\limits_{k=0}^n(-1)^k\dim\left(V_k\right)=0$ Which generalizes the formula we wrote above in the case of an isomorphism. This alternating sum we call the “Euler characteristic” of a sequence, and we’ll be seeing a lot more of that sort of thing in the future. But here the major result is that for exact sequences we always get the value zero. In fact, this amounts to a far-reaching generalization of the rank-nullity theorem. 
And that theorem, of course, is essential to the proof. Yet again we see this pattern of “bootstrapping” our way from a special case to a larger theorem. Despite some mathematicians being enamored of reductio ad absurdum, this induction from special to general has to be one of the most useful tools we keep running across. Posted by John Armstrong | Algebra, Linear Algebra ## 2 Comments » 1. [...] the Euler characteristic we find [...] Pingback by | July 24, 2008 | Reply 2. [...] sum satisfies inclusion-exclusion because of the Mayer-Vietoris sequence (and the fact that the Euler characteristic of an exact sequence vanishes), and it obviously satisfies the cardinality axiom, so by homotopy invariance it must be the Euler [...] Pingback by | June 10, 2011 | Reply
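As a closing illustration of the post above, here is a small numerical sanity check (my own sketch, not part of the original post or its comments; the matrix and library choices are mine): any real matrix $A$ sits in the exact sequence $\mathbf{0}\rightarrow\mathrm{Ker}(A)\rightarrow\mathbb{R}^n\xrightarrow{A}\mathbb{R}^m\rightarrow\mathrm{Cok}(A)\rightarrow\mathbf{0}$, so the alternating sum of the four dimensions should come out to zero.

```python
import numpy as np

# Build a random linear map A: R^n -> R^m and read off the dimensions in the
# exact sequence  0 -> Ker(A) -> R^n --A--> R^m -> Cok(A) -> 0.
rng = np.random.default_rng(0)
n, m = 7, 5
A = rng.integers(-3, 4, size=(m, n)).astype(float)

r = np.linalg.matrix_rank(A)
dims = [n - r, n, m, m - r]   # dim Ker(A), dim R^n, dim R^m, dim Cok(A)

euler = sum((-1) ** k * d for k, d in enumerate(dims))
print(dims, "alternating sum =", euler)   # the alternating sum is always 0
```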
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 25, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9145171046257019, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/29840/infinite-sum-of-random-numbers
# Infinite sum of random numbers Let $x_n$, with $n=1,2,\ldots$, be uniformly distributed random variables in $(0,1)$. What is the expected value and probability distribution of the sum $$\sum_{n=1}^\infty x_n^{n^n}$$ - 1 Can you provide some motivation for this question? – Shai Covo Mar 30 '11 at 0:54 ## 2 Answers Elaborating on Steven's answer. Since $\sum\nolimits_{n = 1}^\infty {{\rm E}[X_n^{n^n } ]} < \infty$ and $\sum\nolimits_{n = 1}^\infty {{\rm Var}[X_n^{n^n } ]} < \infty$, the series $\sum\nolimits_{n = 1}^\infty {X_n^{n^n } }$, assuming that the $X_n$ are independent, converges with probability $1$ (see Theorem 1 on page 130 of the book Probability theory: collection of problems). Call the limit $X$. Then, by the monotone convergence theorem, $${\rm E}[X] = \mathop {\lim }\limits_{n \to \infty } {\rm E}\bigg[\sum\limits_{k = 1}^n {X_k^{k^k } } \bigg] = \mathop {\lim }\limits_{n \to \infty } \sum\limits_{k = 1}^n {{\rm E}[X_k^{k^k } ]} = \mathop {\lim }\limits_{n \to \infty } \sum\limits_{k = 1}^n {\frac{1}{{k^k + 1}}} = \sum\limits_{n = 1}^\infty {\frac{1}{{n^n + 1}}} .$$ EDIT: The fact that the series $\sum\nolimits_{n = 1}^\infty {X_n^{n^n } }$ converges with probability $1$ follows from the monotone convergence theorem (MCT), and we don't have to assume that the $X_n$ are independent (it suffices that they are positive). Indeed, by MCT, the limit $X:=\sum\nolimits_{n = 1}^\infty {X_n^{n^n } }$ must be almost surely finite, since the integral $\int_\Omega {XdP} \,(= \sum\nolimits_{n = 1}^\infty {\frac{1}{{n^n + 1}}})$ is finite. - – Ross Millikan Mar 30 '11 at 3:11 @Ross: Thanks for this information. – Shai Covo Mar 30 '11 at 3:15 Your answer almost certainly has no closed form, but it's easy enough to write the EV as a sum; just use the linearity of expectation, and use the defining integral to calculate the expectation of each of the individual terms. -
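As a numerical cross-check of the expectation computed above, here is a hedged Monte Carlo sketch (mine, not part of either answer; truncating at $n=6$ is an assumption, justified by the tiny tail of $\sum_n 1/(n^n+1)$):

```python
import numpy as np

# Estimate E[sum_n X_n^{n^n}] by simulation and compare with sum_n 1/(n^n + 1).
# The n = 7 term already contributes only about 1/823544, so truncation at
# n = 6 is safe at this precision.
rng = np.random.default_rng(1)
N, nmax = 200_000, 6

u = rng.random((N, nmax))                       # rows: samples of (x_1, ..., x_6)
exponents = np.array([n ** n for n in range(1, nmax + 1)], dtype=float)
samples = (u ** exponents).sum(axis=1)

theoretical = sum(1.0 / (n ** n + 1) for n in range(1, nmax + 1))
print("Monte Carlo mean:", samples.mean())      # both should be ~0.7399
print("sum 1/(n^n + 1) :", theoretical)
```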
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9176880717277527, "perplexity_flag": "head"}
http://www.mathplanet.com/education/algebra-2/how-to-graph-functions-and-linear-equations/graph-inequalities
# Graph inequalities In order to graph an inequality we work in 3 steps: First we graph our boundary; we dash the line if the values on the line are not included in the solution, and we draw a solid line if they are included. Second we test a point in each region. If one point on one side of the line satisfies our inequality, the coordinates of all the points on the same side of the line will satisfy the same inequality. Last we shade the region whose coordinates satisfy our inequality. Example Graph $y>\mid x\mid +1$ Our absolute value function has two cases to consider: when $x<0$ the inequality reads $y>-x+1$, and when $x\geq 0$ it reads $y>x+1$. As the first step we graph our boundaries; the lines will be dashed since the values on the lines are not included in the solution. Second we test one point; we choose to test $(0,2)$: $y>\mid x\mid +1$ becomes $2>\mid 0\mid +1$, that is $2>0+1$, so $2>1$, which is true. Since our point satisfies our inequality, the coordinates of all the points on the same side of the lines will satisfy the same inequality. As a last step we shade this region, as in the sketch below. Videolesson: Graph $f(x)<\mid x-1\mid -7$
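Worked as code, the same three steps look like this (a sketch of my own; Matplotlib is an assumed tool, not part of the lesson):

```python
import numpy as np
import matplotlib.pyplot as plt

# Step 1: dashed boundary, since points on y = |x| + 1 are not solutions.
x = np.linspace(-5, 5, 400)
boundary = np.abs(x) + 1
plt.plot(x, boundary, "k--", label="y = |x| + 1 (dashed boundary)")

# Step 2: test the point (0, 2); 2 > |0| + 1 is true, so its side gets shaded.
plt.plot(0, 2, "ro", label="test point (0, 2)")

# Step 3: shade the region whose coordinates satisfy y > |x| + 1.
plt.fill_between(x, boundary, 8, alpha=0.3)

plt.ylim(-1, 8)
plt.legend()
plt.show()
```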
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 3, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8598673343658447, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/120067?sort=votes
## What do theta functions have to do with quadratic reciprocity? The theta function is the analytic function $\theta:U\to\mathbb{C}$ defined on the (open) right half-plane $U\subset\mathbb{C}$ by $\theta(\tau)=\sum_{n\in\mathbb{Z}}e^{-\pi n^2 \tau}$. It has the following important transformation property. Theta reciprocity: $\theta(\tau)=\frac{1}{\sqrt{\tau}}\theta\left(\frac{1}{\tau}\right)$. This theorem, while fundamentally analytic—the proof is just Poisson summation coupled with the fact that a Gaussian is its own Fourier transform—has serious arithmetic significance. • It is the key ingredient in the proof of the functional equation of the Riemann zeta function. • It expresses the modularity of the theta function (of course, $\theta$ is not literally a modular form, since it is not even defined on all of the upper half-plane, but a simple change of variables can fix that). Theta reciprocity also provides an analytic proof (actually, the only proof, as far as I know) of the Landsberg-Schaar relation $$\frac{1}{\sqrt{p}}\sum_{n=0}^{p-1}\exp\left(\frac{2\pi i n^2 q}{p}\right)=\frac{e^{\pi i/4}}{\sqrt{2q}}\sum_{n=0}^{2q-1}\exp\left(-\frac{\pi i n^2 p}{2q}\right)$$ where $p$ and $q$ are arbitrary positive integers. To prove it, apply theta reciprocity to $\tau=2iq/p+\epsilon$, $\epsilon>0$, and then let $\epsilon\to 0$. This reduces to the formula for the quadratic Gauss sum when $q=1$: $$\sum_{n=0}^{p-1} e^{2 \pi i n^2 / p} = \begin{cases} \sqrt{p} & \textrm{if } \; p\equiv 1\mod 4 \\ i\sqrt{p} & \textrm{if } \; p\equiv 3\mod 4 \end{cases}$$ (where $p$ is an odd prime). From this, it's not hard to deduce Gauss's "golden theorem". Quadratic reciprocity: $\left(\frac{p}{q}\right)\left(\frac{q}{p}\right)=(-1)^{(p-1)(q-1)/4}$ for odd primes $p$ and $q$. For reference, this is worked out in detail in the paper "Applications of heat kernels on abelian groups: $\zeta(2n)$, quadratic reciprocity, Bessel integrals" by Anders Karlsson. I feel like there is some deep mathematics going on behind the scenes here, but I don't know what. Why should we expect theta reciprocity to be related to quadratic reciprocity? Is there a high-concept explanation of this phenomenon? If there is, can it be generalized to other reciprocity laws (like Artin reciprocity)? Hopefully some wise number theorist can shed some light on this! - Reminds me of how the modular transformation law for the Dedekind $\eta$ function gives rise to Dedekind sums which have their own reciprocity law from which quadratic reciprocity can be derived. – Dan Piponi Jan 28 at 1:36 Interesting! I didn't know about Dedekind sums. I'll have to read up on this some time soon. Yet another piece of the grand puzzle... – Aleksandar Bahat Jan 28 at 21:12 ## 2 Answers Going in the direction of more generality: With $\theta(\tau)=\sum_n\exp(\pi i n^2 \tau)$, theta reciprocity describes how the function behaves under the linear fractional transformation $[\begin{smallmatrix} 0&1 \\ -1&0\end{smallmatrix}]$. From this one can show it's an automorphic form (of half integral weight, on a congruence subgroup). Automorphic forms and more generally automorphic representations are linked by the Langlands program to a very general approach to a non-abelian class field theory. Your "Why should we expect ..." question is dead-on. This is very deep and surprising stuff. In the direction of more specificity, the connection to the heat kernel is fascinating.
(In this context, Serge Lang was a great promoter of 'the ubiquitous heat kernel.') The theta function proof is also discussed in Dym and McKean's 1972 book "Fourier Series and Integrals" and in Richard Bellman's 1961 book "A Brief Introduction to Theta Functions." Bellman points out that theta reciprocity is a remarkable consequence of the fact that when the theta function is extended to two variables, both sides of the reciprocity law are solutions to the heat equation. One is, for $t\to 0$, what physicists call a 'similarity solution', while the other is, for $t\to \infty$, the separation-of-variables solution. By the uniqueness theorem for solutions to PDEs, the two sides must be equal! A special case of quadratic reciprocity is that an odd prime $p$ is a sum of two squares if and only if $p\equiv 1\bmod 4$. This can be done via the theta function and is in fact given in Jacobi's original 1829 book "Fundamenta nova theoriae functionum ellipticarum." - Re your first para: here is something I've never understood. $\theta$ has half-integral weight, so is an automorphic form not on $GL(2)$ or $GL(1)$ but on some metaplectic group. My understanding is that it is not clear what the link is between the representation theory of the metaplectic group, and Galois representations (because the metaplectic group is not an algebraic group so the general yoga doesn't apply), so although your first para sounds appealing from some "general overview" point of view, I am always very confused about the actual details of what is going on--can you supply them? – wccanard Jan 28 at 22:01 I suspected there was something Langlands-y about this. That's probably the best "big picture" explanation. Unfortunately, I don't know enough about Langlands to get very far with this. I like what you're getting at in the last paragraph; Jacobi's proof is "natural" (in the sense that generating functions are natural), and so I guess it's not a big leap to try to generalize that. – Aleksandar Bahat Jan 31 at 15:50 Hecke generalized the argument that you mention to prove quadratic reciprocity relative to any given number field $K$ (see, e.g. his Lectures on the Theory of Algebraic Numbers). In The Fourier-Analytic Proof of Quadratic Reciprocity Michael C. Berg describes the subsequent development of this line of research. Quoting from the book's summary: The relative quadratic case was first settled by Hecke in 1923, then recast by Weil in 1964 into the language of unitary group representations. The analytic proof of the general n-th order case is still an open problem today, going back to the end of Hecke's famous treatise of 1923. - I'll add Hecke's quote on p.201 (in my English translation): "precise knowledge of the behavior of an analytic function in the neighborhood of its singular points is a source of number-theoretic theorems." – Matt Young Jan 28 at 1:31 I've heard of Hecke's generalization before, but I still feel my "Why should we expect..." question is unresolved. Although the usefulness of analytic functions in number theory is no longer surprising to me, I'd like to understand specifically why somebody would see quadratic reciprocity and think, "Hmm, $\theta(z)=z^{-1/2}\theta(1/z)$ is relevant."
Why this piece of analysis in particular? – Aleksandar Bahat Jan 31 at 14:18
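For readers who want to see the numbers line up, here is a small numerical check of both theta reciprocity and the Gauss sum quoted in the question (my own sketch; the use of mpmath and the truncation of the theta series are assumptions, not part of the thread):

```python
import mpmath as mp

# Truncated theta series: theta(tau) = sum over n in Z of exp(-pi n^2 tau),
# valid for Re(tau) > 0; 60 terms is far more than needed here.
def theta(tau, terms=60):
    return sum(mp.exp(-mp.pi * n ** 2 * tau) for n in range(-terms, terms + 1))

tau = mp.mpf("0.37")
print(theta(tau))                      # these two printed values should agree,
print(theta(1 / tau) / mp.sqrt(tau))   # illustrating theta reciprocity

# Quadratic Gauss sum for the odd prime p = 7; since 7 = 3 mod 4, expect i*sqrt(7).
p = 7
g = sum(mp.exp(2j * mp.pi * n ** 2 / p) for n in range(p))
print(g, 1j * mp.sqrt(p))
```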
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 27, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.919128954410553, "perplexity_flag": "head"}
http://nrich.maths.org/1351/index
# Continued Fractions I ##### Stage: 4 and 5 Article by Alan and Toni Beardon Continued fractions are written as fractions within fractions which are added up in a special way, and which may go on for ever. Every number can be written as a continued fraction and the finite continued fractions are sometimes used to give approximations to numbers like $\sqrt 2$ and $\pi$. To see how to work out a continued fraction let $$X = {1\over\displaystyle 2\;+\; {\strut 3\over \displaystyle 4 }}.$$ Adding the fractions in the denominator of $X$ we see the denominator is $11/4$. So $X= 1/(11/4)=4/11$. Let us work out the slightly longer continued fraction $$Y = {1\over\displaystyle 2\;+\; {\strut 3\over \displaystyle 4\;+\; {\strut 5\over \displaystyle 6 }}}.$$ We can calculate $Y$ as follows: $$Y = {1\over\displaystyle 2\;+\; {\strut 3\over \displaystyle 4\;+\; {\strut 5\over \displaystyle 6 }}} = {1\over\displaystyle 2\;+\; {\strut 3\over \displaystyle 29/6}} = {1\over\displaystyle 2\;+\; {\strut 18\over \displaystyle 29}} = {1\over\displaystyle 76/29}={29\over 76}.$$ Can you show that $${1\over\displaystyle 2\;+\; {\strut 2\over \displaystyle 2\;+\; {\strut 2\over \displaystyle 2\;+\; {\strut 2\over \displaystyle 3 }}}}\quad = \quad {11\over 30}\quad ?$$ Now we have got the idea we are in for some surprises! Watch out for some patterns in the numbers that come up. Work out the values of the five continued fractions: $$1,\quad {1\over 1+1},\quad {1\over\displaystyle 1\;+\; {\strut 1\over \displaystyle 1\;+\; {\strut 1 }}},\quad {1\over\displaystyle 1\;+\; {\strut 1\over \displaystyle 1\;+\; {\strut 1\over \displaystyle 1\;+\; {\strut 1 }}}},\quad {1\over\displaystyle 1\;+\; {\strut 1\over \displaystyle 1\;+\; {\strut 1\over \displaystyle 1\;+\; {\strut 1\over \displaystyle 1\;+\; {\strut 1 }}}}}.$$ Did you find the easiest way to calculate these? For example, you should be able to see that the last one is $${1\over\displaystyle 1\;+\; {\strut 3\over \displaystyle 5 }}\quad = \quad {5\over 8}.$$ In this sequence of continued fractions you can always calculate one quickly by using the previous answer. The next fraction in this sequence is $${1\over\displaystyle 1\;+\; {\strut 5\over \displaystyle 8 }}\quad = \quad {8\over 13}.$$ The numbers we get in order are $1, 2, 3, 5, 8, 13$. What do you think the next number is? Yes, these are the Fibonacci numbers. What do you think the next continued fraction in the sequence is? Now let us find out what happens if the continued fraction goes on for ever. We write this as $$f = {1\over\displaystyle 1+ {\strut 1\over \displaystyle 1\;+\; {\strut 1\over \displaystyle 1\;+\; \cdots }}}.$$ Can you see why we have $$f = {1\over 1+f}\quad ?$$ This gives the quadratic equation $f^2 + f -1 = 0$. Because $f$ is positive we get the one solution $$f = {\sqrt{5}-1\over 2},$$ the ratio of the shorter to the longer side of the Golden Rectangle!
Now investigate the continued fraction $${6\over\displaystyle 1\;+\; {\strut 6\over \displaystyle 1\;+\; {\strut 6\over \displaystyle 1\;+\; {\strut 6\over \displaystyle 1\;+\; \cdots }}}}.$$ The answer is a small whole number (which is obviously less than 6). You may like to try a problem on continued fractions and its solution or the problem Fibonacci numbers and its solution. You can find out more about the many contexts in which Fibonacci numbers appear by exploring a website which is devoted entirely to them. This site has practical activities for work on Fibonacci numbers and references to books giving many more. The next part of the series is here. See also the article Approximations, Euclid's Algorithm and Continued Fractions to find out how continued fractions are used to give very quickly better and better rational approximations to numbers like pi.
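A quick computational companion to the article (my own sketch, not part of the NRICH text): exact arithmetic makes the Fibonacci pattern visible, and simple iteration settles the closing investigation.

```python
from fractions import Fraction

# Evaluate the finite fractions 1, 1/(1+1), 1/(1+1/(1+1)), ... from the inside out.
def convergent(depth):
    value = Fraction(1)          # the innermost "1"
    for _ in range(depth):
        value = 1 / (1 + value)  # wrap one more layer of the fraction
    return value

print([str(convergent(d)) for d in range(6)])     # 1, 1/2, 2/3, 3/5, 5/8, 8/13
print(float(convergent(40)), (5 ** 0.5 - 1) / 2)  # approaches (sqrt(5) - 1)/2

# The closing puzzle: iterate x -> 6/(1+x) and watch it settle on a whole number.
x = 1.0
for _ in range(60):
    x = 6 / (1 + x)
print(x)
```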
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 11, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9528043866157532, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/tagged/soft-question?page=3&sort=newest&pagesize=30
# Tagged Questions Questions that ask about some aspect of physics research or study which doesn't involve the actual physics. In general, soft questions can be answered without using physical reasoning. 1answer 190 views ### Texts on field theory in classical physics I need a very good text on field theory and it should provide good understanding of why this concept cant be ignored?I only need that text which will tell me how field theory is an integral part of ... 2answers 385 views ### How should a theoretical physicist study maths? [duplicate] Possible Duplicate: How should a physics student study mathematics? If some-one wants to do research in string theory for example, Would the Nakahara Topology, geometry and physics book and ... 1answer 279 views ### How to express $ds$?(when we know expression for $ds^{2}$) [closed] We know that $$ds^2 = g_{\mu\nu}dx^{\mu}dx^{\nu},$$ Can you say how to calculate $ds$? 0answers 91 views ### What pre-requisite knowledge is required to understand the Alcubierre equation? Similar to here. I have decided to start the long journey to understanding higher levels of physics. My first goal is understanding the Alcubierre equation at a mathematical level. I have a CS ... 2answers 393 views ### The end of theoretical physics? [closed] About the meta questions 1193 and 2609, I've heard parallelly, that the complete branch theoretical physics is already done and that there isn't any thing else to do in this field, how true is it? 0answers 77 views ### What are some fields where physics is leading mathematics? [closed] I know developments in physics and mathematics often nourish each other, and at different points in history, physics and math have taken turns leading each other. An example of a physicist "inventing" ... 7answers 383 views ### Physical intuition for higher order derivatives Could somebody give me an intuitive physical interpretation of higher order derivatives (from 2 and so on), that is not related to position - velocity - acceleration - jerk - etc? 0answers 51 views ### Searching for a collaborator for a physics simulation of multi-party elections [closed] I have completed a series of physics simulations on Matlab to find the equilibrium positions of parties in 2-D or 3-D voting space and compared them to the optimal positions that would provide the ... 2answers 320 views ### How can some-one independently do research in particle physics? I'm not affiliated with a physics department and I want to do independent research. I'm working my way through Peskin et. al. QFT now. Let's say that I've finished Peskin et. al. and Weinberg QFT ... 1answer 131 views ### What is the importance of electrodynamics and magnetism in physics as a whole? [closed] At my university the second half of a year long sequence in basic calculus based physics focuses on electrodynamics and magnetism. I am wondering what is the significance of these topics to physics in ... 1answer 124 views ### Probability in Quantum Mechanics Do you need to take a probability/statistics course for Quantum Mechanics, or is the probability in quantum mechanics so rudimentary that you can just learn it along the way? I'm in doubt as to ... 4answers 782 views ### Help an aspiring physicists what to self-study [closed] This is probably not the kind of question you'll often encounter on this forum, but I think a bit of background is needed for this question to make sense and not seem like a duplicate: 2012 has been ... 
1answer 119 views ### Who are some prominent groups or individuals pursuing realist physics? [closed] I'm interested to know of any well-known physics schools or individuals attempting to advance fundamental physics or reinterpret it from a realist standpoint. Presumably most physicists by contrast ... 2answers 194 views ### Electromagnetism for Mathematician I am trying to find a book on electromagnetism for mathematician (so it has to be rigorous). Preferably a book that extensively uses Stoke's theorem for Maxwell's equations (unlike other books that on ... 0answers 2k views ### Where can I get Young & Freedman University Physics 13th Edition Instructor's Solution Manual? [closed] I am a Chinese mainland student learning physics on my own,I bought Young & Freedman University Physics 13th Edition International Edition,could someone please tell me how can I get the ... 4answers 221 views ### Physics background for Quantum Mechanics Very often on this site people ask what background in math is needed to be able to understand quantum mechanics (based on a short search of this site). So that question is answered. However, I want to ... 1answer 117 views ### Meaning of the word “Moment”? This question is more of a question about the origin of a physical term used in many contexts. My question is about the linguistic or historical meaning of the word "moment". Please don't provide ... 0answers 398 views ### Are there any list of String Theory Equations? [closed] I Google it but did not find anything. here is a list of QM Formuals and a list of relativistic equations Are there any list of String Theory Equations? 4answers 231 views ### Statistics in physics What are the uses of statistics in physics? I am about to embark upon a study of statistics and I would like to know what the particular benefits I gain in physics. 2answers 242 views ### Applications of QFT in theoretical physics I would like to know which fields in physics have seen growth or benefited by applying QFT? I know that approaches to quantum gravity such as string theory use QFT, HEP and also some branches of ... 2answers 128 views ### Computational Science involve programming? [closed] I read what is computational science in Wikipedia but the explanation and understanding are not very clear. So, I could you please give a simple example computational science project and what all ... 5answers 418 views ### Is physics rigorous in the mathematical sense? I am a student studying Mathematics with no prior knowledge of Physics whatsoever except for very simple equations. I would like to ask, due to my experience with Mathematics: Is there a set of ... 1answer 129 views ### Where did Karl Schwarzschild derived his solution? Does anyone know more about circumstances of Karl Schwarzschild at the Russian front in 1915 where he allegedly derived his famous solution of the Einstein equations (describing a black hole)? Sources ... 1answer 244 views ### Download only physics and maths wikipedia pages for offline use [closed] There are tools for downloading the entire wikipedia database (above 8 Gb without pictures), but I would like to download only physics and maths pages, to view them offline. Wikipedia pages have a ... 1answer 47 views ### Recommendations for a physics-related book for a child? [closed] So it's that time of year again, where one must buy gifts for all children in sight. I want to buy a book, related to physics, ideally, for my girlfriend's sister. What books are educational and ... 
0answers 132 views ### Theoretical physics quiz book [closed] I'm an android developer who recently published a quiz game called "The Big Bang Freak"! The quiz contains a section dedicated to the famous sitcom "The Big Bang Theory", but it ... 3answers 205 views ### What's the first physics textbook for undergraduate self-learner? [closed] I am kind of studying physics on my own now. I choose University Physics (13th Edition) for myself, is it fine? I am also studying Calculus using Thomas' textbook. ... 0answers 132 views ### How to publish scientific papers on the internet for free? [closed] I have written some scientific papers and I want to publish them for free so peoples can take benefit from it and I also want to make it safe so no one can steal it. PS: I am very young and I don't ... 3answers 369 views ### Physics book recommendations for transition to PhD study Now in the end of my MSc in physics I've been contemplating in enrolling in a Phd in theoretical physics. Before that I think I would do best to re-study some subjects. My question is then, what books ... 2answers 252 views ### Quantum physics project for a high schooler [duplicate] Possible Duplicate: Study Quantum Physics I am a high schooler who is interested in physics and mathematics, and I have a kind of 'high-school thesis' coming up in a year and a half or so. ... 1answer 631 views ### What came first, Rice Crispy or "Snap," "Crackle," and "Pop"? [closed] The fourth, fifth, and sixth derivatives of position are called "Snap" "Crackle" and "Pop". What came first, the rice crispy characters, or the physics units? 2answers 124 views ### Will a one year undergraduate course of Linear Algebra be enough for QM? [duplicate] Possible Duplicate: Linear Algebra for Quantum Physics Can you get all/most of the knowledge you need of Linear Algebra for QM in a one year course? I know for certain my course also ... 2answers 227 views ### Mathematically challenging areas in Quantum information theory and quantum cryptography I am a physics undergrad and thinking of exploring quantum information theory. I had a look at some books in my college library. What area in QIT, is the most mathematically challenging and rigorous? ... 2answers 428 views ### What math is needed to understand the Schrödinger equation? If I now see the Schrödinger equation, I just see a bunch of weird symbols, but I want to know what it actually means. So I'm taking a course of Linear Algebra and I'm planning on starting with PDE's ... 2answers 404 views ### Differences between classical, analytical, rational and theoretical mechanics Can you explain me what are the differences between the four following subjects? analytical mechanics rational mechanics classical mechanics theoretical mechanics 3answers 331 views ### How can I find a very old paper by W.L. Bragg from 1913? I'm often looking for old physics papers that had a big impact on science (Nobel prize, for example). But I can't seem to find a lot of them. Is there a reason why some papers are not digitally ... 0answers 397 views ### Conceptual questions in Randall D. Knight- Physics for scientist and engineers [closed] Where can one get answers for conceptual questions in book Physics for scientist and engineers by Randall D. Knight? 0answers 164 views ### Integrals given by Landau [closed] Discussion about Landau's "Theoretical Minimum" has already been posted here. Unfortunately I couldn't find much about some examples of questions he gave to students. There are three questions in the ...
0answers 88 views ### Road to understanding Quantum Mechanics [closed] First of all, I would like to apologize in advance for this mundane question which you have been probably asked a thousand times on this forum. But I would like to understand what (rudimentary) QM is ... 5answers 439 views ### Linear Algebra for Quantum Physics A week ago I asked people on this forum what mathematical background was needed for understanding Quantum Physics, and most of you mentioned Linear Algebra, so I decided to conduct a self-study of ... 4answers 472 views ### Study Quantum Physics I'm an aspiring physicist who wants to self study some Quantum Physics. My thirst for knowledge is unquenchable and I can not wait 2 more years until I get my first quantum physics class in ... 1answer 89 views ### Why the world is so deep and dark? We live in a dark world, and the light is results of big bang. The world is really made mostly of dark matter and dark energy? Why the world is so deep and dark? 4answers 208 views ### Which part of making an atomic bomb is the hard part and how many people know how to do that? [closed] I mean which part of the process of making an Atomic bomb is the hard and how many people know how to do that on Earth. Is it in hundreds or less i.e. 20 - 50 people or even less? Don't get me wrong ... 0answers 103 views ### What would physics be like today if Einstein hadn't existed? [closed] If the many worlds interpretation is correct then there is some parallel universe in which Einstein didn't exist. How has the study of physics developed in that world without Einstein around ? 2answers 69 views ### Is an electric lamp a transducer? [closed] Silly thought. A transducer, by definition, is a device that converts variations in one form of energy to another. An electric lamp converts electricity into visible light - the brightness may vary ... 1answer 64 views ### What is the pause called at the apex of an object's trajectory? My apologies for such a basic question--I am a musician, not a physicist. But I cannot anywhere find the word, if one exists, that describes that elegant pause of an object such as a ball, thrown ... 1answer 71 views ### Didactics question (“teams and times”) [closed] In sports it is commonplace to distinguish a "team" (as characterized by the players who took part in a match, playing together against another team), from the "score" (such as the final score of ... 0answers 105 views ### MIT OpenCourseware vs Real MIT course [closed] Is MIT course much different from MIT OpenCourseWare? I am curious, because as a high schooler, I have some intent to study from MIT OpenCourseWare. Will this allow me to be more comfortable if I am ... 2answers 98 views ### How can we claim something violates some physical law, when so many physical laws have been postulated? For example, Einstein postulated that the speed of light, c, is constant in all inertial frames of reference. Bohr postulated that electrons go around the atom in ... 1answer 460 views ### Walter Lewin Lectures in HD I like the lectures by Walter Lewin 8.0x. However the quality of the videos is pretty bad. Is there any way (DVD, web,...) to get the lecture videos in a good quality, best in HD?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.946281909942627, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2007/04/17/more-on-tensor-products-and-direct-sums/
# The Unapologetic Mathematician ## More on tensor products and direct sums We’ve defined the tensor product and the direct sum of two abelian groups. It turns out they interact very nicely. One thing we need is another fact about the tensor product of abelian groups. If we take three abelian groups $A$, $B$, and $C$, we can form the tensor product $A\otimes B$, and then use that to make $(A\otimes B)\otimes C$. On the other hand, we could have started with $B\otimes C$ and then built $A\otimes(B\otimes C)$. If we look at the construction we used to show that tensor products actually exist we see that these two groups are not the same. However, they are isomorphic. To see this, let’s make a bilinear function from $(A\otimes B)\times C$ to $A\otimes(B\otimes C)$. By our construction, any element of $A\otimes B$ can be represented as a sum $\sum\limits_i a_i\otimes b_i$, so linearity says we just need to consider elements of the form $a\otimes b$. Define $f(a\otimes b,c)=a\otimes(b\otimes c)$. This induces a unique linear function given by $\bar{f}((a\otimes b)\otimes c)=a\otimes(b\otimes c)$ and extending to sums of such elements. Similarly we get a linear function $\bar{f}^{-1}(a\otimes(b\otimes c))=(a\otimes b)\otimes c$, so we have an isomorphism of abelian groups. We can thus (somewhat) unambiguously talk about “the” tensor product $A\otimes B\otimes C$. Now let’s take a collection of abelian groups $A_i$ with $i$ running over an index set $\mathcal{I}$ (take $\mathcal{I}$ finite for the argument as written; as the comments below explain, the infinite case needs a different construction of the map $\beta$), and let $B$ be any other abelian group. We want to consider the tensor product $\left(\bigoplus\limits_{i\in\mathcal{I}}A_i\right)\otimes B$ Since the direct sum is a direct product of groups, it comes with projections $\pi_k:\bigoplus_i A_i\rightarrow A_k$. Since the free product is in general a subgroup of the direct sum (a proper subgroup when the index set is infinite), we also have injections $\iota_k:A_k\rightarrow\bigoplus_i A_i$ coming from the free product. We can use these to build homomorphisms $\iota_i\otimes1_B:A_i\otimes B\rightarrow\left(\bigoplus\limits_{i\in\mathcal{I}}A_i\right)\otimes B$ applying $\iota_i$ to $A_i$ and the identity to $B$. By the universal property of direct sums (the one it gets from free products of groups) this gives us a homomorphism $\alpha:\bigoplus\limits_{i\in\mathcal{I}}(A_i\otimes B)\rightarrow\left(\bigoplus\limits_{i\in\mathcal{I}}A_i\right)\otimes B$ On the other hand, for each $k$ we have a bilinear function sending $(a,b)$ in $\left(\bigoplus_i A_i\right)\times B$ to $\pi_k(a)\otimes b$ in $A_k\otimes B$. By the universal properties of tensor products this gives a linear function $\left(\bigoplus_i A_i\right)\otimes B\rightarrow A_k\otimes B$. The universal property of direct sums (the one it gets from direct products of groups) gives us a linear function $\beta:\left(\bigoplus\limits_{i\in\mathcal{I}}A_i\right)\otimes B\rightarrow\bigoplus\limits_{i\in\mathcal{I}}(A_i\otimes B)$ Now there’s a lot of juggling of functions and injections and projections here that I really don’t think is very illuminating. The upshot is that $\alpha$ and $\beta$ are inverses of each other, giving us an isomorphism of the two abelian groups. There’s nothing really special about the left side of the tensor product either. A similar result holds if the direct sum is the right tensorand.
We can even put them together to get the really nice isomorphism: $\left(\bigoplus\limits_{i\in\mathcal{I}}A_i\right)\otimes\left(\bigoplus\limits_{j\in\mathcal{J}}B_j\right)\cong\bigoplus\limits_{(i,j)\in\mathcal{I}\times\mathcal{J}}(A_i\otimes B_j)$ Neat! Posted by John Armstrong | Abelian Groups, Algebra, Group theory ## 9 Comments » 1. “More on tensor products and direct sums”: The direct sum is a co-product, not a direct product. It therefore comes with injections, not with projections. It seems to me that the proof given here might be incorrect. Shlomi Comment by Shlomi | July 31, 2010 | Reply 2. The direct sum is both a product and a coproduct, and it comes with both injections and projections. As I say in the second link in the post, “When we restrict our attention to abelian groups, direct products and free products are the same thing”. See also here, keeping in mind that an Abelian group is a $\mathbb{Z}$-module. Comment by | July 31, 2010 | Reply 3. I also find this proof troublesome. I believe you are confusing the terms “direct sum” and “direct product”, and unnecessarily introducing the term “free product” (I assume you mean “free product of abelian groups”, in which case this would by definition be identical to the direct sum). As you are aware, the direct sum is a subgroup of the direct product, and only equal when $\mathcal{I}$ is finite. Thus, your construction of the map $\beta$ by arguing that a collection of maps to each $A_i\otimes B$ induces a map to $\bigoplus_{i\in\mathcal{I}}(A_i\otimes B)$ is incorrect when $\mathcal{I}$ is infinite. I’m not disputing the theorem itself – the tensor product *does* distribute over arbitrary direct sums – but I believe your proof is flawed. For an example of a correct proof, see Theorem 5.4 on p.22 of http://www.math.uconn.edu/~kconrad/blurbs/linmultialg/tensorprod.pdf I also wanted to say I have used your site many times to understand things better, and you have an impressive command of an extremely broad collection of areas in math. I don’t intend to sound like I’m claiming you don’t know what you’re doing; I just think you made an honest mistake here. Many thanks for your continued blathing, Zev Comment by Zev Chonoles | February 13, 2011 | Reply • The direct sum of Abelian groups plays two roles that come from group theory, each of which gives different properties. I refer to those names (which were defined earlier and are available through tracing backlinks) to emphasize and use those properties. Comment by | February 13, 2011 | Reply • But the direct sum only plays those two roles, both product and coproduct, in the case of finitely many factors. Shlomi’s comment is not quite right; a direct sum $A=\bigoplus A_i$ (even one with infinitely many factors) does, of course, come with projections $\pi_i$ to each factor $A_i$. However, these projections do not serve to make $A$ the product of the $A_i$ if there are infinitely many $A_i$. In other words, given an infinite collection of (non-trivial) abelian groups $A_i$, their direct sum $A$ *does not* have the property that any collection of maps of abelian groups $f_i:X\rightarrow A_i$ factors uniquely through a map $f:X\rightarrow A$. For example, let $I$ be any infinite set, let each $A_i=\mathbb{Z}$, let $X=\mathbb{Z}$, and let all the maps $f_i:X\rightarrow A_i$ be the identity map of $\mathbb{Z}$. But there is no map $f:X\rightarrow A$ such that $f_i=\pi_i \circ f$, because where could $f$ send $1\in X$ to?
Not $(1,1,\ldots)$, because that is not an element of the direct sum. Comment by Zev Chonoles | February 13, 2011 | Reply • I would point to the Wikipedia page on biproducts, http://en.wikipedia.org/wiki/Biproduct, but it is, unhelpfully, poorly worded about this very issue. It is not made explicitly clear that when they say “In the category of abelian groups, biproducts always exist and are given by the direct sum.”, they are only referring to biproducts of finitely many factors. However, this is alluded to in other parts of the page. Comment by Zev Chonoles | February 13, 2011 | Reply • Let me clarify the first sentence of my antepenultimate post. I should say, “The direct sum always plays the role of the coproduct; it only additionally plays the role of the product in the case of finitely many factors.” Comment by Zev Chonoles | February 13, 2011 | Reply 4. Zev has decided to passive-aggressively demand that I pay attention to him by complaining to me on Formspring. So yes, Zev, go through and mentally add a finiteness condition. If you want to fill in the gap in the infinite case, start a weblog of your own and write up the proof. I’m sorry that I don’t instantly bend to your whims, and that I have other projects and a day job that keep me from doing so. If you want more responsiveness, pay for my time. Comment by | February 27, 2011 | Reply • I truly do apologize if I seemed passive-aggressive; I don’t remember my phrasing, but I’m sure it could have been much better if I made you angry. However, your response to my (quite kindly worded) initial post indicated that you thought I was in error. Indeed, you were rather dismissive of both me and Shlomi. Perhaps I am suffering from a bad case of http://xkcd.com/386/, but all I wanted to do by contacting you on Formspring (since you have no email listed) was to follow up on this post – privately(!) – and see if you still felt there was no problem. I don’t feel that was too unreasonable, although I again apologize for my phrasing. Respectfully, Zev Comment by Zev Chonoles | February 27, 2011 | Reply
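As a concrete footnote to the post and the discussion above, here is a small worked example of the distributivity isomorphism in the (uncontroversial) finite case, using the standard fact $\mathbb{Z}/m\otimes\mathbb{Z}/n\cong\mathbb{Z}/\gcd(m,n)$ (the example is mine, not the post's):

```latex
% Distributing a tensor product over a finite direct sum:
\[
(\mathbb{Z}\oplus\mathbb{Z}/2)\otimes\mathbb{Z}/4
  \;\cong\; (\mathbb{Z}\otimes\mathbb{Z}/4)\oplus(\mathbb{Z}/2\otimes\mathbb{Z}/4)
  \;\cong\; \mathbb{Z}/4\oplus\mathbb{Z}/2 .
\]
```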
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 39, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9473112225532532, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/2189/aspherical-lenses?answertab=active
# Aspherical lenses It's known that a single spherical lens cannot focus a parallel beam of monochromatic light to a single point. Could you suggest what an aspherical lens should look like in order to focus to a single point (again, the light is monochromatic)? Do we need a conic factor, or higher-order terms? Could you suggest any decent books on aspherical optics & its manufacturing? - 5 its a parabola; f(x)=a*x^2+bx+c – Holowitz Dec 23 '10 at 14:43 Hmm.. You deadly right, thanks :-) I wonder how do they manufacture them... Question about books is still actual... – BarsMonster Dec 23 '10 at 14:51 1 point-to-point focusing in the geometric optics limit would be a hyperbola. – Mark Eichenlaub Dec 23 '10 at 21:22 1 The ideal form for lens and a mirror is different. After my new years buzz wears off, I'll post a nice answer :) – Colin K Jan 1 '11 at 5:47 ## 3 Answers The last part of your question is the easiest to answer, so I'll get to that first. The best book on the fundamentals of optical design is "Modern Optical Engineering" by Warren J. Smith. It is not specific to aspheric optics, but does cover them in addition to the rest of geometrical optics and lens design. It is probably the single most common reference book among optical engineers. Now, the rest of your question is a bit complicated, and needs a little bit of background, so bear with me for a moment. As has been mentioned, even an ideal lens will produce a focal spot of some minimum size, determined by the ratio of the lens focal length to its aperture (this quantity is called the "f-number", or $f/\#$) and the wavelength of the light. This is what optical engineers call the diffraction limited spot size. For a circular aperture, the diameter of the diffraction limited spot size will be $$2.44 \times \lambda \times f/\#$$ where $\lambda$ is the wavelength. So as the $f/\#$ decreases (as the lens gets "faster") the diffraction limited spot will become smaller. However, any aberrations in the lens will also become more significant! This means that a very slow lens (one with a long focal length, relative to its aperture) can produce a diffraction limited spot even though it may have some aberration relative to an ideal lens, while a very fast lens will need to have a slightly aspheric shape to achieve diffraction limited performance. This is important to understand because it means that, in some cases, a spherical lens can indeed focus light as close to a point as is physically possible, even though a sphere isn't the ideal shape. So what is that ideal shape? Well again, it depends on a few things. For both lenses and mirrors, the ideal shape will change depending on the distance from the object plane to the lens, and from the lens to the image plane. In the case you've asked about, where the incoming light is collimated, optical engineers would say that the object plane is at infinity. In this case, as some other people have pointed out, the ideal shape for a mirror is indeed a parabola. However, for a lens this is not the case. As it turns out, the ideal shape for a lens to focus a collimated beam of light to a point is to have the first surface of the lens (the one the light hits first) be elliptical, and the back surface be hyperbolic.
Lens designers usually specify the shape of a lens surface with the following equation: $$Z = \frac{C r^2}{1 + \sqrt{1-(1+\kappa) C^2 r^2}}$$ where $Z$ is the "sag" of the lens surface, or its departure from a plane tangent to the lens surface at the center of the lens, $r$ is the radial distance from the center of the lens, $C$ is the curvature of the lens (the reciprocal of its radius of curvature) and $\kappa$ is called the "conic constant." It is the value of $\kappa$ which determines what sort of conic section describes the surface: • $\kappa > 0$ Oblate ellipse • $\kappa = 0$ Sphere • $0 > \kappa > -1$ Prolate Ellipse • $\kappa = -1$ Parabola • $-1 > \kappa$ Hyperbola On a related note, it is more than just the conic constant that can be adjusted to control aberrations. Even with purely spherical surfaces, the relative curvature of the front and back lens surface can be varied, while keeping the effective focal length constant. Adjusting this is more common than adding aspheric surfaces to a lens, because aspheric surfaces are expensive to manufacture. Many optical supply companies even offer off-the-shelf optics with an ideal bending ratio for a given application. These are often sold as "best form" lenses. - I see, that was an awesome answer :-) So I guess ring illumination is something which should produce minimal possible spot? – BarsMonster Jan 11 '11 at 0:18 @Bars: ring illumination is something typically seen in a phase contrast microscope. Is that where you heard of it? The reason for ring illumination would actually make a very good question on its own, but it's completely unrelated to the focusing ability of a lens. In fact it can't even be explained by purely geometrical optics. – Colin K Jan 11 '11 at 0:58 That's from optical lithography. Doesn't it offer reduced aberrations as we use only small piece of lens AND gives good diffraction limit? – BarsMonster Jan 11 '11 at 2:31 @bars: Not really. The illumination pattern doesn't change the aberrations introduced by the lens, and the diffraction limited spot size can't be reduced by using ring illumination. But I'm not really an expert on the state of the art in lithography these days. – Colin K Jan 11 '11 at 14:40 BTW I've found out how expensive it is to make a custom aspheric lens - ~2500 euro for d=20mm, so I can make 200 custom spherical lenses for the cost of 1 custom aspherical one :-) – BarsMonster Jul 7 '11 at 10:02 The problem you are talking about is called spherical aberration. Spherical lenses are much easier to make, while from a geometrical point of view the ideal focusing surface is a parabola. Since the light in optical instruments goes close to the optical axis, one uses the paraxial approximation, where sphere and parabola are the same up to the quadratic term. There is a large variety of aspheric lenses, with parabolic lenses among them. But it is usually simpler to use combinations of lenses to deal with spherical and other aberrations. - Thorlabs has a little information on aspheric lens design on their product page (scroll down below the product pictures and click on "Lens Formula"). Note that no traditional lens can focus light down to a single point; the minimum size is subject to the diffraction limit and is in the order of the wavelength. - Surely. wavelength & na. – BarsMonster Dec 23 '10 at 16:00 1 "the minimum size is subject to the diffraction limit and is in the order of the wavelength."
- unless you're working with meta-materials (negative index of refraction materials) and the traditional criteria of optics don't apply anymore. – user346 Jan 9 '11 at 6:58 1 You are of course absolutely right, but I don't think the OP will find any books on aspherical metamaterials and their manufacturing. ;-) – ptomato Jan 9 '11 at 16:30
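The two formulas in the accepted answer are easy to play with directly; here is a hedged sketch (function and variable names are mine, not from any optics library):

```python
import numpy as np

# Surface sag Z(r) for curvature C = 1/R and conic constant kappa,
# exactly as in the lens-design equation quoted above.
def sag(r, C, kappa):
    return C * r ** 2 / (1 + np.sqrt(1 - (1 + kappa) * C ** 2 * r ** 2))

# Diffraction-limited spot diameter, 2.44 * lambda * f/#.
def spot_diameter(wavelength, f_number):
    return 2.44 * wavelength * f_number

r = np.linspace(0, 10, 5)                 # radial positions in mm
print(sag(r, C=1 / 50.0, kappa=0.0))      # sphere, R = 50 mm
print(sag(r, C=1 / 50.0, kappa=-1.0))     # parabola: reduces to r^2 / (2R)

print(spot_diameter(550e-9, 2.0), "m")    # 550 nm light at f/2: ~2.7 microns
```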
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9483742713928223, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/276231/how-to-graph-trigonometric-functions
# How to graph trigonometric functions I am trying to complete some homework for my physics course and I have come to realise that I do not understand how parameters inside a trigonometric function affect the function and therefore a graph of the function. I have the following question where I need to graph some functions on a disturbance vs time (t) graph where T is the period of the function. For example, the function $y=\sin{\left(\frac{2\pi t}{T}\right)}$ is one that I have to put on this graph. My first idea was, as the sine function has a period of $2\pi$ and $T$ is the given period of $y$, that I could cancel the two periods leaving just $\sin{(t)}$, but after more thought I do not really think this is correct as it is not given that $2\pi=T$. After this I am not really sure how to further work the problem out. I would like to know how to visualise or work out how parameters inside a trig function affect the behaviour of the function. EDIT: I do not understand how to calculate the period of the function given above. I am confused because I understand the period of the function to be given by the coefficient of $t$, but in this case $t$ is multiplied by both $2\pi$ and $\frac{1}{T}$, and I am not sure whether I would use one of them to calculate the period, or both. I am mostly confused that the function presented above seems to include a variable for its period as a parameter, which would then be used to calculate its period, and I don't see how this can happen. I hope that makes sense; at the moment, basically, when I look at the function and try to calculate its period I just see an infinite-loop scenario. I hope that is clear and if it isn't let me know so I can clarify further. Thanks! EDIT 2: Along with my last edit, the only real stab I could make at this would be the following: the period $T$ is given by $T=\frac{2\pi}{2\pi}=1$, so in this case the function $y=\sin\left(\frac{2\pi t}{T}\right)$ would just have a period $T=1$ and therefore be a normal sine wave? - ## 1 Answer The period of the trigonometric function $\sin(x)$ is, as you said, $2\pi$. The period of $\sin(ax)$ is $T=\frac{2\pi}{a}$. You can then use $0, \frac{T}{4},2\frac{T}{4},3\frac{T}{4}$ and $T$. You'll see that nice angles will appear and computations will be easy. Take a look at the following images; I think they'll help you clear things up a bit. I used a freeware program to draw them (GeoGebra, which is easy to use; you can download it if you like and experiment a bit on your own). If the constant is negative you'll have something like that, since $\sin(-x)=-\sin(x)$. (Of course the plotted function labelled $\sin(2x)$ is actually $\sin(2x)+2$ etc., but I left that out; it's easier, I believe, to visualize it like this.) - Thanks, this is a nice answer but on reflection I am hoping you can explain one thing further to me, about calculating the period of the function. In the example presented in my initial question the function has as one of its parameters $T$ which is given to be its period. I am confused as to how a function can have a period which is dependent upon its period..if that makes sense? – Aesir Jan 12 at 15:53 I have added an additional part to the question and would be grateful if your answer could be extended to cover that part. – Aesir Jan 12 at 16:00 @Aesir I believe constant $a$ in this case is $a=\frac{2\pi}{T}$ so the period of the function is (let's say $T'$) $T'=\frac{2\pi}{\frac{2\pi}{T}}=\frac{2\pi T}{2\pi}=T$. – epsilon Jan 12 at 16:24
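To see the accepted reasoning in action, here is a short plotting sketch (mine, not the answerer's GeoGebra figures): the coefficient of $t$ is $a=2\pi/T$, so the period is $2\pi/a=T$, whatever value $T$ takes.

```python
import numpy as np
import matplotlib.pyplot as plt

# Plot y = sin(2*pi*t/T) for several T; each curve repeats after exactly T.
t = np.linspace(0, 4, 1000)
for T in (1, 2, 4):
    plt.plot(t, np.sin(2 * np.pi * t / T), label=f"T = {T}")

plt.axhline(0, color="gray", lw=0.5)
plt.xlabel("t")
plt.legend()
plt.show()
```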
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 27, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9654402136802673, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/34130/how-to-make-a-monopole-magnet/34168
# How to make a monopole magnet? I want to create a monopole magnet. Is this practically / theoretically possible? - 2 – Emilio Pisanty Aug 14 '12 at 4:09 thanks for the advice! – Pranit Bauva Aug 17 '12 at 13:22 – user12345 Apr 12 at 11:08 ## 6 Answers As Mark M says in his answer, you cannot have a monopole magnet. You can simulate one. After all, when you are at the north pole of the earth, to all intents and purposes that is a monopole for magnets in the area. You do this by spreading the magnetic lines of one of the poles over a large area and concentrating the other in a very small one. Look at the images here. If you take a bunch of long supple permanent magnets and open one side out over a large area, effectively you will have one strong pole in the small area. If you make an electromagnet, make the turns wider and wider on one side so that the field is dispersed over a large area. - Magnetic monopoles can be created according to numerous Grand Unified Theories (GUT). The idea is that at sufficiently high energies you can reach an energy range where three of the four fundamental forces (strong nuclear, weak nuclear, and electromagnetism) couple to one another and are the same force. Such a state existed in the universe a tiny fraction of a second after the Big Bang. As the universe cooled, it underwent a phase transition where this highly symmetric state was lost (symmetry breaking). Depending on the topology of the group defining the GUT, this can result in a number of different types of cosmic defects, such as cosmic strings, domain walls, textures, and... magnetic monopoles! The framework for understanding their creation is often called the "Kibble mechanism", where the essential idea is that different parts of the universe undergo the phase transition at slightly different times and the topological defects emerge based on which symmetry breaks (discrete, cylindrical, etc.). In the case of magnetic monopoles, one needs to break spherical symmetry. Sounds so easy, right? Just break some spherical symmetry and you get your monopoles... Except that in order to create this highly symmetric state, you need such absurdly large amounts of energy (and probably in some non-traditional geometry) that it is probably firmly out of the range of any current experiments or cosmic processes (it is estimated that a magnetic monopole would have a mass of about $10^{15}$ GeV, compared to the LHC's $10^3$ GeV range). Also, it could be that the Universe admits a particular GUT that doesn't have the correct symmetries, so that when it is broken to the standard model it won't create monopoles. However, this hasn't stopped a team at the LHC from trying to create some monopoles. In everyday energy ranges, magnetic monopole production is impossible due to the divergenceless property of magnetic fields. - 2 "magnetic monopole production is impossible due to the divergenceless property of magnetic fields." ... You have to explain why, because this property is a priori empirical. – Chris Gerig Aug 14 '12 at 18:38 No, it isn't possible. Gauss' Law for magnetism states that $$\nabla \cdot B = 0$$ This means that the divergence of the magnetic field is zero - which translates into the fact that there are no magnetic charges, that both poles always come together. So, no magnetic monopoles. Or at least, none that would be available for you to build. The issue is a bit more complicated in QFT. - 3 Remark: This is only true if there are no actual monopoles in the vicinity. – Chris Gerig Aug 14 '12 at 5:12 2 The zero in Gauss's law is empirical. It's in there because we haven't seen any magnetic monopoles, but it is not required to be that way by more fundamental physics, and you can't use it to justify the non-existence of the things. – dmckee♦ Jan 8 at 22:37 Actually, I'm not sure we can rule such things out! The divergenceless property of the magnetic field is empirical, because we haven't seen any monopoles. That being said, it is at least practically impossible, because of edge effects of the material which destroy any true radial field lines. The true reasoning for it being "theoretically impossible" is this: if there are no physical source monopoles in the vicinity, then any configuration will be made up of dipoles (or possibly higher-order multipoles). But any collection of dipoles cannot mathematically equal a monopole. - 2 what do you mean by "edge effects"? And yes, it should be specially emphasized that you can't construct a true monopole from dipoles; that's what the OP probably meant. That is, without some kind of new elementary particles it is impossible. – Yrogirg Aug 14 '12 at 7:55 "BUT it does NOT say you cannot build your own configuration!" It does say that you can't build your own. You either find magnetic monopoles or you are stuck with induced dipoles. Those are your only choices. – dmckee♦ Aug 14 '12 at 16:50 No, that was before my update. The point is, people simply say "it can't happen because $\nabla\cdot B=0$", but some further explanation is required... that condition is definitely violated when monopoles are in the area, but when they aren't, WHY can't it be simulated? And the reason is that dipoles cannot represent monopoles mathematically. – Chris Gerig Aug 14 '12 at 17:35 Practically, at present: Maxwell's equations of electromagnetism relate the electric and magnetic fields to each other and to the motions of electric charges. The standard equations provide for electric charges, but they posit no magnetic charges. Except for this difference, the equations are symmetric under the interchange of the electric and magnetic fields. Symmetric Maxwell's equations can be written when all charges (and hence electric currents) are zero, and this is how the electromagnetic wave equation is derived. Fully symmetric Maxwell's equations can also be written if one allows for the possibility of "magnetic charges" analogous to electric charges. With the inclusion of a variable for the density of these magnetic charges, say $\rho_m$, there will also be a "magnetic current density" variable in the equations, $j_m$. If magnetic charges do not exist - or if they do exist but are not present in a region of space - then the new terms in Maxwell's equations are all zero, and the extended equations reduce to the conventional equations of electromagnetism such as $\nabla\cdot B = 0$ (where $\nabla\cdot$ is the divergence and $B$ is the magnetic B field). At the moment, magnetic monopoles cannot be created. Theoretically, and perhaps futuristically: when thinking theoretically it is slightly different. When you take into account Grand Unified Theories (GUT), magnetic monopoles can be predicted. At energies that are currently not possible to reach, you can reach a point where the strong force, weak force and electromagnetism combine to become essentially the same force. The GUT can, in theory, result in magnetic monopoles, so long as spherical symmetry is broken. I call this "futuristically" because it will be a hell of a long time before we can reach the necessary energy levels (roughly $10^{16}$ GeV). - See these publications reporting the observation of magnetic-monopole-like quasiparticles in spin-ice materials. The first two were published in Science: Morris, J. et al. Science advanced online publication doi:10.1126/science.1178868 (2009). Fennell, T. et al. Science advance online publication doi:10.1126/science.1177582 (2009). Kadowaki, H. et al. preprint at http://arXiv.org/abs/0908.3568v2 (2009). Bramwell, S. T. et al. preprint at http://arxiv.org/abs/0907.0956 (2009). -
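For concreteness, the duality-symmetric ("extended") Maxwell equations alluded to in the answer above can be written as follows. This is an editorial sketch in Gaussian units, a convention I am assuming since the thread does not fix one; setting the magnetic charge and current densities $\rho_m$ and $\vec{j}_m$ to zero recovers the standard equations, including $\nabla\cdot\vec{B}=0$:

```latex
% Editorial sketch: duality-symmetric Maxwell equations (Gaussian units).
% Setting rho_m = j_m = 0 recovers the usual equations, including div B = 0.
\begin{align*}
  \nabla \cdot \vec{E} &= 4\pi \rho_e, &
  \nabla \times \vec{E} &= -\frac{1}{c}\frac{\partial \vec{B}}{\partial t}
                           - \frac{4\pi}{c}\,\vec{j}_m, \\
  \nabla \cdot \vec{B} &= 4\pi \rho_m, &
  \nabla \times \vec{B} &= \phantom{-}\frac{1}{c}\frac{\partial \vec{E}}{\partial t}
                           + \frac{4\pi}{c}\,\vec{j}_e.
\end{align*}
```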
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9145392775535583, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/152905/extending-a-function-to-become-odd-or-even?answertab=votes
# Extending a function to become odd or even? "Suppose we have a function defined on an interval [0,K], then we extend it as an even or odd function of period K so as to produce a Fourier cosine or sine series." (1): What exactly is extending a function? (2): How do you extend a function to become odd or even? - What you write is not possible. You need period $2K$ (at least) in order to have sufficient freedom to make the function even or odd as well as periodic. And even then, making it odd only works if already $f(0)=0$. – Marc van Leeuwen Jun 2 '12 at 15:50 ## 1 Answer The idea is to force the function to be even or odd on the interval $[-K, K]$. E.g. if you want to extend it as an odd function define $g$ on $[-K, K]$ by $g(x) = -f(-x)$ for $-K \leq x < 0$ and $g(x) = f(x)$ for $0 \leq x \leq K$. This function is then odd as $g(-x) = -g(x)$. Similarly you can extend it to an even function, i.e. $g(-x) = g(x)$ for $x\in [-K, K]$. edit: this is of period $2K$ which is what I assume you meant. -
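As a small numerical illustration of the construction in the answer (an editorial sketch, not from the original thread; NumPy and the sample $f$ are my own assumptions):

```python
# Editorial sketch: odd and even extensions of f from [0, K] to [-K, K],
# following the construction in the answer above.
import numpy as np

K = 2.0
f = lambda x: x * (K - x)            # sample function on [0, K] (note f(0) = 0)

def odd_ext(x):                      # g(-x) = -g(x)
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, f(x), -f(-x))

def even_ext(x):                     # g(-x) = g(x)
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, f(x), f(-x))

xs = np.linspace(0.0, K, 50)
assert np.allclose(odd_ext(-xs), -odd_ext(xs))   # odd symmetry
assert np.allclose(even_ext(-xs), even_ext(xs))  # even symmetry
```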
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9254747629165649, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/71407/elementary-question-in-partial-differentiation?answertab=votes
# Elementary question in partial differentiation Let's say we have a function of the form $f(x+vt)$ where $v$ is a constant and $x,t$ are independent variables. How is $\frac{\partial f}{\partial x} = \frac{1}{v}\frac{\partial f}{\partial t}$ equal to $f$? If I let $u=x+vt$ then $\frac{\partial f}{\partial x} = \frac{\partial f}{\partial u}\frac{\partial u}{\partial x} = \frac{\partial f/\partial t}{\partial u/\partial t}\frac{\partial u}{\partial x}=\frac{1}{v}\frac{\partial f}{\partial t}$ but I cannot infer that $\frac{1}{v}\frac{\partial f}{\partial t} = f$ unless I assume the form of D'Alembert's Solution to be the harmonic (exponential). For the general solution I do not know how this was arrived at. Edit: I still don't get it, as the context does not help. But I assume since it is a physics text, $f$ can be written as a Fourier series/integral of exponentials. Assuming that, the above holds. - You are correct. Where did you see it written that you can make this inference? – Chris Taylor Oct 10 '11 at 10:33 There is a function $f:\ s\mapsto f(s)$ of one variable and a function $u:\ (x,t)\mapsto f(x+vt)$ of two variables. All you can say is that $${\partial u\over\partial x}={1\over v}{\partial u\over\partial t}=f'(x+vt)\qquad \forall x, \ \forall t\ .$$ – Christian Blatter Oct 10 '11 at 12:11 Are you by any chance of south asian descent? Your username "kuch nahi" means "nothing" in some sense in urdu/hindi – Tyler Hilton Oct 14 '11 at 2:13 @Tyler Yes (hindi). As I wrote in chat an hour ago, it is a close approximation of my progress in mathematics. – kuch nahi Oct 14 '11 at 2:41 ## 1 Answer You are correct - you can't infer that $f'(x) = f(x)$ unless $f$ is exponential, i.e. if $f(x)=A\exp(x)$. -
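A quick symbolic check of the chain-rule computation in the question (an editorial sketch assuming SymPy; the symbol names are my own):

```python
# Editorial sketch: symbolic check that u(x, t) = f(x + v t) satisfies
# u_x = (1/v) u_t for an arbitrary differentiable f.
import sympy as sp

x, t = sp.symbols('x t', real=True)
v = sp.symbols('v', real=True, nonzero=True)
f = sp.Function('f')
u = f(x + v * t)

ux = sp.diff(u, x)               # f'(x + v t)
ut = sp.diff(u, t)               # v * f'(x + v t)
print(sp.simplify(ux - ut / v))  # prints 0
```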
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 14, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9262173175811768, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/86192/how-shall-i-prove-this-stochastic-integral-equation?answertab=active
# How shall I prove this Stochastic integral equation? I want to prove $$\int_0^T B_t^2 dB_t = \frac{B_T^3}{3} - \int_0^T B_t dt$$ by the definition of the Ito integral. I have tried this so far. Given a partition $0=t_0 < t_1 < ... < t_n=T$, I want to have $$\sum_i B_{t_i}^2 (B_{t_{i+1}} - B_{t_i}) - \sum_i \frac{B_{t_{i+1}}^3 - B_{t_i}^3}{3} + \sum_i B_{t_i} (t_{i+1} - t_i) \to 0$$ as the partition becomes finer and finer. But I am stuck here. How shall I proceed? Thanks a lot! - 3 It looks like it follows from a direct application of the Ito-Doeblin formula $$f(B_T)-f(B_0) = \int_0^T f'(B_t) dB_t + (1/2) \int_0^T f''(B_t) dt$$ with $$f(x)=x^3/3$$. – Flounderer Nov 28 '11 at 1:12 @Flounderer: Thanks! I hope to prove it by definition. – steveO Nov 28 '11 at 1:26 Do you mean that you should prove it by definition? Otherwise, it is a very simple example of the application of the Ito formula, as @Flounderer told you. – Ilya Nov 28 '11 at 9:21 @Ilya: Yes, I should. Thanks for any idea. – steveO Nov 28 '11 at 14:23 Could you calculate the expectation and variance of the expression you've obtained? – Ilya Nov 28 '11 at 14:43 ## 1 Answer Use the following identity: $3\cdot B^{2}_{t_{i}}(B_{t_{i+1}}-B_{t_{i}})=B^{3}_{t_{i+1}}-B^{3}_{t_{i}}-(B_{t_{i+1}}-B_{t_{i}})^{3}-3\cdot B_{t_{i}}(B_{t_{i+1}}-B_{t_{i}})^{2}$. Summing over $i$, the cubes telescope to $B_T^3$, the $\sum_i(B_{t_{i+1}}-B_{t_i})^3$ term vanishes in the limit, and $\sum_i B_{t_i}(B_{t_{i+1}}-B_{t_i})^2 \to \int_0^T B_t\,dt$ since the quadratic variation of $B$ on $[t_i,t_{i+1}]$ is $t_{i+1}-t_i$. -
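As an editorial sanity check (not part of the original thread), a Monte Carlo simulation with left-point Riemann sums illustrates the identity numerically; the step count and number of paths are arbitrary choices of mine:

```python
# Editorial sketch: Monte Carlo check of
#   int_0^T B_t^2 dB_t = B_T^3 / 3 - int_0^T B_t dt
# via left-point (Ito) Riemann sums on simulated Brownian paths.
import numpy as np

rng = np.random.default_rng(0)
T, n, n_paths = 1.0, 20_000, 200
dt = T / n
errors = []
for _ in range(n_paths):
    dB = rng.normal(0.0, np.sqrt(dt), size=n)
    B = np.concatenate(([0.0], np.cumsum(dB)))       # B_0 = 0
    lhs = np.sum(B[:-1] ** 2 * dB)                   # sum B_{t_i}^2 (B_{t_{i+1}} - B_{t_i})
    rhs = B[-1] ** 3 / 3.0 - np.sum(B[:-1]) * dt     # B_T^3 / 3 - int_0^T B_t dt
    errors.append(lhs - rhs)
print("mean error:", np.mean(errors), " std:", np.std(errors))  # both near 0
```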
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9173352718353271, "perplexity_flag": "middle"}
http://divisbyzero.com/2009/09/11/cardinality1-html/?like=1&_wpnonce=5e79d40c83
# Division by Zero A blog about math, puzzles, teaching, and academic technology Posted by: Dave Richeson | September 11, 2009 ## Cardinality of infinite sets, part 1: four nonstandard proofs of countability The study of cardinalities of infinite sets is one of the most intriguing areas of mathematics that an undergraduate mathematics major will encounter. It never fails to bring crooked smiles of joy, disbelief, confusion and wonder to their faces. The results are beautiful, deep, and unexpected. Recall that two sets have the same cardinality if they can be put in a 1-1 correspondence. For example, the fingers on my hands can be put in a 1-1 correspondence with the set {1,2,3,4,5,6,7,8,9,10}, thus we say that I have ten fingers. Things become more interesting when we turn to infinite sets. For example, the positive integers have the same cardinality as the integers. They can be paired up as follows: $\begin{array}{ccc}1&\to& 0\\2&\to& -1\\3&\to& 1\\4&\to &-2\\5&\to& 2\\6&\to &-3\\7&\to& 3\\\vdots&&\vdots\end{array}$ This fact usually wows the students (some refuse to accept it at first), but we're just getting started. We say that a set is countable if it is finite or if it can be put into a 1-1 correspondence with the positive integers (in this latter case we often say that the set is countably infinite). If a set is not countable, it is uncountable. Essentially, a set is countable if the elements can be listed sequentially: $x_1, x_2, x_3,\ldots$ Examples of countably infinite sets include the integers, the even integers, and the prime numbers. Finally, we turn to the rational numbers (the set of numbers that can be written as a fraction). We know that there are "many" of them. Not only are they infinite in number, they form a dense subset of the real number line; they are not discrete like the sets mentioned above. Between every two real numbers, regardless of how close together, you can find infinitely many rational numbers. Shockingly, despite their seeming abundance, they are countable, just like the integers! The usual proof (which I believe was Cantor's 1873 proof) is to put the positive rationals in a rectangular grid. In each row the numerators are the same and the denominators increase by 1, and in each column the denominators are the same and the numerators increase by 1. Then we list the numbers by zig-zagging through the grid along 45 degree lines, skipping over fractions that have already appeared in the list (for example, if we had 1/2, then we'd skip over 2/4, 3/6, etc.), but hitting every entry. This shows that the positive rationals are countable. It is not hard to see that this implies that the rationals are countable also. This is a fine, but somewhat inelegant proof. (See this website for details.) I'll now present four different, less well-known proofs of the countability of the rationals. Proof I. In their 2000 paper "Recounting the Rationals" Neil Calkin and Herbert Wilf gave a new and extremely elegant proof that the positive rational numbers are countable. First, construct a binary tree with 1/1 at the top. Under 1/1 put 1/2 and 2/1. Continue down the tree as follows. Below each rational number $a/b$, place the two rational numbers $a/(a+b)$ and $(a+b)/b$. Part of the tree is shown below. 1. Every positive rational number appears somewhere in this tree. 2. No rational number appears twice in this tree. 3. All the entries are in reduced form. Now create a list of the rational numbers by proceeding through the tree breadth first. That is, list the first row, then the second row, then the third row, etc. The first 15 terms are: 1/1, 1/2, 2/1, 1/3, 3/2, 2/3, 3/1, 1/4, 4/3, 3/5, 5/2, 2/5, 5/3, 3/4, 4/1,… This shows that the positive rational numbers are countable. (A short code sketch that generates this sequence appears after the comments at the end of this post.) Incidentally, although the sequence of rationals may appear unordered, it has some interesting properties. 1. The denominator of one fraction is the numerator of the next one. 2. The $n$th denominator is the number of ways to write $n$ as powers of two in which each power of two is allowed at most twice. (For example, when $n=6$ we have a denominator of 3 because $6=4+2=4+1+1=2+2+1+1$.) The proofs of all of these facts are not too difficult. For more information see Calkin and Wilf's original paper or this nice 5-part blog post at The Math Less Traveled. Proof II. The next two proofs use the fact that a union of countably many finite sets is countable. This is easy to see. If the sets $A_1, A_2, A_3,\ldots$ are finite, then we can list $\bigcup_{n=1}^\infty A_n$ by listing the elements of $A_1$, followed by the elements of $A_2$, $A_3$, $A_4$, and so on (removing duplicates, if any). Let $A_n=\{\pm p/q: p/q\text{ is reduced, and }p+q=n\}$. For example, $A_1=\{0/1\}$, $A_2=\{1/1, -1/1\}$, $A_3=\{1/2,-1/2, 2/1, -2/1\}$, etc. Clearly each such set is finite. Moreover $\bigcup_{n=1}^\infty A_n$ is precisely the set of rational numbers. Thus the set of rational numbers is countable. (I'm currently teaching real analysis, and this is the proof found in our textbook, Stephen Abbott's Understanding Analysis.) Proof III. The third proof is actually much more versatile than the others. It is found in Rob Kantrowitz's paper "A Principle of Countability" (Mathematics Magazine, Vol. 73, No. 1 (Feb., 2000), pp. 40-42). He proves that the set of all possible words that can be written with a finite alphabet is countable. The justification is easy. Let $A_n$ be the set of words of length $n$. Each $A_n$ is finite (if there are $m$ letters in the alphabet, then $A_n$ has $m^n$ words). Reasoning as before, $\bigcup_{n=1}^\infty A_n$ is countable. Of course, we may not be interested in all words (all possible concatenations of letters), but only some words (ones that make sense in the context). Clearly a subset of a countable set is also countable. Here's our one sentence proof that the rational numbers are countable (as a corollary of the theorem above). Every rational number is a word written with letters in the following finite alphabet $\{-,/,0,1,2,3,4,5,6,7,8,9\}$. For example, we have rational numbers $5, 1/2, -22/7$. We can use the theorem to prove that many other sets are countable too. The set of all surds is countable. By surds we mean any number that can be obtained from the integers using addition, subtraction, multiplication, division, powers, and roots. Any such value can be written using the following alphabet $\{+,-,/,(,),\wedge,0,1,2,3,4,5,6,7,8,9\}$. For example, $\displaystyle\sqrt[3]{5-4\sqrt[5]{\frac{2}{3}}}$ can be written as $(5-4(2/3)\wedge(1/5))\wedge(1/3)$. This final shocking example is not in Kantrowitz's paper, but can be proved using his method: the set of describable numbers is countable. That is, the collection of all numbers that could possibly be described by anyone in any fashion, using any symbols in any language, must be countable. Examples of describable numbers are 5, $\pi$, $e$, $\int_1^\infty e^{-x^2}\,dx$, and "the smallest positive root of the function $x^3-x\sin(x^4-x+\tan(x))+\pi-\frac{\sqrt{2}}{x}$." Why is this set countable? The describable numbers can only be described using some finite alphabet. This alphabet could be large: our 26 letters (capital and lower case), the Greek alphabet (capital and lower case), binary operations, a (space), the integral symbol, punctuation, etc., etc. As we will see in the next posting, the real numbers are uncountable. This means that the vast majority of the real numbers (uncountably many) are not describable! I find this technique in Kantrowitz's paper to be extremely satisfying. It seems very intuitive and easy to apply in other contexts. Proof IV. Here is yet another proof of the countability of the positive rational numbers. It can be found in Yoram Sagher's 2-paragraph note, "Counting the Rationals" (The American Mathematical Monthly, Vol. 96, No. 9 (Nov., 1989), p. 823). Each positive rational number can be written as $m/n$ where $m$ and $n$ are relatively prime. Suppose they have prime factorizations $m=p_1^{a_1}p_2^{a_2}\cdots p_r^{a_r}$ and $n=q_1^{b_1}q_2^{b_2}\cdots q_s^{b_s}$. Note that since the fraction is reduced, $p_i\ne q_j$ for all $i$ and $j$. Create a 1-1 correspondence with the positive integers as follows: $m/n$ is paired up with the integer $p_1^{2a_1}p_2^{2a_2}\cdots p_r^{2a_r}q_1^{2b_1-1}q_2^{2b_2-1}\cdots q_s^{2b_s-1}$. For example $22/7$ is paired with the value $11^2 2^2 7^1=3388$. Likewise, we can go backward: each positive integer is paired with one rational number. For example, the number 360 has prime factorization $3^2 2^3 5$. Break this up into factors with even exponents and odd exponents ($3^2$ and $2^3 5$). This implies that the numerator is $3$ and the denominator is $2^2 5=20$. So the rational number associated to $360$ is $3/20$. I am planning to write a follow-up post that showcases less well-known proofs that the real numbers are uncountable. Posted in Math | Tags: Cantor, cardinality, countable, rational, real analysis, real numbers, uncountable ## Responses 1. Check out a children's book called The Cat in Numberland, where the Hilberts are the innkeepers at the Hotel Infinity, and the cat wonders how it happens. By: Sue VanHattum on September 14, 2009 at 12:05 am • I've never heard of that book. Thank you for letting me know. I remember learning about Hilbert's Hotel when I was in high school or junior high school from Martin Gardner's "Aha Gotcha!" book. At the time it seemed like he was cheating, but I couldn't put my finger on the problem. I must admit that I'm a little disappointed to have been scooped on this idea. I've been saying to my friends for the last few years that I wanted to write a children's book about infinity (although I think it would be highly unlikely that I'd ever follow through on the idea). By: Dave Richeson on September 14, 2009 at 9:19 am 2. I've often thought that the process of learning about countability, uncountability, and transfinite numbers is a lot like the Zen story of the bowl and tea: "Before enlightenment, the bowl is a bowl and tea is tea. In the process of achieving enlightenment, one discovers that the bowl is not a bowl, and tea is not tea. After enlightenment, the bowl is again a bowl, and tea is again tea." When you first start thinking about numbers like this, confusion sets in - the familiar numbers that you've known since childhood are suddenly changed. Personally, I didn't find countability to be too much of a problem, but what I did struggle with was how we construct numbers out of sets and cuts and equivalence classes etc. Recently I've been reading Conway's ONAG (http://books.google.com/books?id=tXiVo8qA5PQC), and bowls are not bowls again for me right now. What do you think of using Conway's approach in teaching undergrads? Is it ever done? By: Dan M on September 14, 2009 at 9:21 am • That's a great analogy. In our particular course we aren't going to construct the real numbers. Instead, we just assume the axiom of completeness. Abbott's book does have the construction (using Dedekind cuts), but sticks it at the very end of the book. We're not going to get there. I have actually never read about Conway's surreal numbers. It is on my to-do list. I know that they are an extension of the reals to include infinity and infinitesimals, but I don't know much more than that. By: Dave Richeson on September 14, 2009 at 12:46 pm • If for no other reason than to further whet your appetite, the surreal numbers can actually be used to model types of combinatorial games (i.e. games with finitely many moves and perfect information). Then "surreal arithmetic" can be used to solve several game theory problems... and vice versa! By: Travis on September 14, 2009 at 3:05 pm 3. [...] appearing once and only once, in reduced form. I originally saw it mentioned on Division by Zero here (along with a bunch of other great non-standard proofs), Calkin and Wilf's original published [...] By: Hyperbinary numbers and latex plugins « An Extremely Satisfactory Totalitarianism on October 24, 2009 at 4:19 am 4. [...] and its connection to the rational numbers before, but to recap: I first came across it on Division by Zero (still waiting for the rest of that series) and the original paper by Calkin and Wilf is [...] By: Web Worker Example: The Rationals « Extremely Satisfactory Totalitarianism on February 7, 2010 at 5:15 am 5. [...] The rational numbers are countable [...] By: Mathematical surprises « Division by Zero on August 25, 2010 at 1:57 pm 6. [...] be showing my Discrete Math class how to "count" the positive rational numbers. (See this old blog post for more information about countable sets.) I used TikZ to create the picture [...] By: Countability of the rationals drawn using TikZ | Division by Zero on April 16, 2013 at 9:46 pm 7. Some infinities are bigger than other infinities. By: Jake Diaz on May 5, 2013 at 9:23 pm
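Editorial addition, referenced in Proof I above: a minimal Python sketch of the breadth-first traversal of the Calkin-Wilf tree. It assumes nothing beyond the standard library; the function name is my own.

```python
# Editorial sketch: breadth-first traversal of the Calkin-Wilf tree (Proof I),
# which lists every positive rational exactly once, in reduced form.
from collections import deque
from fractions import Fraction

def calkin_wilf():
    queue = deque([Fraction(1, 1)])
    while True:
        q = queue.popleft()
        yield q
        a, b = q.numerator, q.denominator
        queue.append(Fraction(a, a + b))    # left child  a/(a+b)
        queue.append(Fraction(a + b, b))    # right child (a+b)/b

gen = calkin_wilf()
first15 = [next(gen) for _ in range(15)]
print(" ".join(str(q) for q in first15))
# 1 1/2 2 1/3 3/2 2/3 3 1/4 4/3 3/5 5/2 2/5 5/3 3/4 4
# property 1 from the post: each denominator is the next numerator
assert all(first15[i].denominator == first15[i + 1].numerator for i in range(14))
```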
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 55, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9326199889183044, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/219888/irreducible-polynomial-fracxnxm-2x-gcdn-m-1-over-mathbbq
# Irreducible polynomial $\frac{x^{n}+x^{m}-2}{x^{\gcd(n,m)}-1}$ over $\mathbb{Q}$. I have to show that the polynomial $$f(x)=\frac{x^{n}+x^{m}-2}{x^{\gcd(n,m)}-1}$$ is irreducible over $\mathbb{Q}$, for all $n,m \in \mathbb{N}$. Any idea as to how I can show this? - 1 – Julian Kuelshammer Oct 24 '12 at 5:54 1 Here is a generalization: mathproblems123.wordpress.com/2009/11/02/… And here is a useful lemma: mathproblems123.wordpress.com/2009/11/09/position-of-roots – Beni Bogosel Oct 24 '12 at 7:00 @JulianKuelshammer: Since this is an old Miklos Schweitzer problem, I don't think it was given as homework. Although, the mention of the OP's work would make the question more valuable. – Beni Bogosel Oct 24 '12 at 10:43 ## 1 Answer Hint: Say that $\gcd(n,m)=d$, and write $f(x) = \dfrac{x^n+x^m-2}{x^d-1}$ as $$f(x) = x^{(c-1)d}+x^{(c-2)d} + \cdots + x^{bd}+2x^{(b-1)d}+ \cdots +2x^d +2$$ where $n = cd$, $m=bd$. Consider the polynomial $g(x) = x^{c-1} + \cdots + x^b +2x^{b-1}+ \cdots +2x + 2$. Show that this polynomial only has roots of absolute value greater than 1. Use this to show that the roots of $f$ satisfy a similar property, and derive a contradiction. -
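Editorial addition: a small SymPy sketch that spot-checks the claim for a few sample pairs $(n,m)$; the chosen pairs are arbitrary, and this is a sanity check rather than a proof.

```python
# Editorial sketch: spot-check the irreducibility claim for sample (n, m).
from math import gcd
import sympy as sp

x = sp.symbols('x')
for n, m in [(2, 1), (4, 2), (6, 4), (9, 6)]:
    d = gcd(n, m)
    f = sp.cancel((x**n + x**m - 2) / (x**d - 1))   # exact polynomial quotient
    poly = sp.Poly(f, x, domain='QQ')
    print((n, m), poly.as_expr(), "irreducible:", poly.is_irreducible)
```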
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8894869685173035, "perplexity_flag": "head"}
http://mathhelpforum.com/trigonometry/33573-trigonometry-stationary-points-second-derivative-print.html
Trigonometry, Stationary Points and the Second Derivative • April 7th 2008, 03:34 PM Flay Trigonometry, Stationary Points and the Second Derivative I'm having trouble getting the answer to this particular question out. Given that $\frac{d^2y}{dx^2}$ is equal to $9\sin3x$: 1. Find $y$ if there is a stationary point at $(\frac{\pi}{2}, 1)$ 2. Show that $\frac{d^2y}{dx^2} + 9y = 0$ • April 7th 2008, 04:00 PM Mathstud28 What you have here Quote: Originally Posted by Flay I'm having trouble getting the answer to this particular question out. Given that $\frac{d^2y}{dx^2}$ is equal to $9\sin3x$: 1. Find $y$ if there is a stationary point at $(\frac{\pi}{2}, 1)$ 2. Show that $\frac{d^2y}{dx^2} + 9y = 0$ is a differential equation... here is what you do... since $\frac{d^2y}{dx^2}=9\sin(3x)$, separate to get $d(y')=9\sin(3x)\,dx$, then integrate to get $y'=-3\cos(3x)+C$... now stop, before we get to $y$ we must find that $C$... we know there is a stationary point at $\frac{\pi}{2}$, which means the slope is 0 there, i.e. $y'=0$... so therefore $0=-3\cos\bigg(3\cdot\frac{\pi}{2}\bigg)+C$; solve and you see $C=0$... next integrate again to get $y$ and you see that $y=-\sin(3x)+C_1$... now using that condition again we know that $1=-\sin\bigg(3\cdot\frac{\pi}{2}\bigg)+C_1$; solving we get $C_1=0$... therefore $y=-\sin(3x)$. For part two just add nine times what we just found to what $\frac{d^2y}{dx^2}$ is to verify it's zero. • April 7th 2008, 08:11 PM Flay Got it. I was making yet another stupid mistake.
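Editorial addition: a short SymPy verification of the thread's solution (assumes SymPy; not part of the original forum post):

```python
# Editorial sketch: SymPy verification of the thread's solution y = -sin(3x).
import sympy as sp

x = sp.symbols('x')
y = -sp.sin(3 * x)

assert sp.simplify(sp.diff(y, x, 2) - 9 * sp.sin(3 * x)) == 0  # y'' = 9 sin(3x)
assert sp.diff(y, x).subs(x, sp.pi / 2) == 0                   # stationary at x = pi/2
assert y.subs(x, sp.pi / 2) == 1                               # passes through (pi/2, 1)
assert sp.simplify(sp.diff(y, x, 2) + 9 * y) == 0              # part 2: y'' + 9y = 0
print("all checks pass")
```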
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 17, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9461238384246826, "perplexity_flag": "middle"}
http://quant.stackexchange.com/questions/1329/do-people-use-unbounded-interest-rate-models-and-what-alternatives-exist/1952
# Do people use unbounded interest rate models, and what alternatives exist? A simple interest rate model in discrete time is the autoregressive model, $$I_{n+1} = \alpha I_n+w_n$$ where $\alpha\in [0,1)$ and $w_n\geq 0$ are i.i.d. random variables. When working with ruin probabilities in a model which incorporates this interest rate model, I've found that $I$ can reach any positive value with a positive probability. Hence I'd like to know: 1. Are there any interest rate models in discrete time which assume bounded interest? 2. Why do people use models with unbounded interest (which is unrealistic) at all? - 1 – Tal Fishman Sep 15 '11 at 20:47 @Tal Fishman: Nice observation, though most of the models seem to be developed for a kind of "equilibrium" state of the economy (like in a mode of growth), so the example with extremely high interest rates won't be illuminated in those models anyway. Btw, thanks for the bounty, it worked as appeared. – Ilya Sep 19 '11 at 8:34 Hello @Tal Fishman and Gortaur, has such an autoregressive model forecasted the presently observed negative interest rate bonds, without taking into consideration any influence of the credit risk on this market risk factor (governmental downgradings), taking into account that France and the USA have lost their triple-A and Germany is about to lose it? – user7056 Aug 27 '12 at 16:17 ## 1 Answer There are certainly (short-rate) models which assume bounded interest rates. I suppose I should clarify: the design of the model prohibits negative interest rates. Further, some models asymptotically approach some target, or mean, rate, which is considered mean reversion; the most famous is perhaps the Vasicek.

Short rate models where rates cannot go negative:

- Cox-Ingersoll-Ross
- Black-Derman-Toy
- Black-Karasinski
- Exponential Vasicek
- Hull-White

Short rate models where rates can go negative:

- Ho-Lee
- Vasicek

These are all stochastic models that can be solved in discrete time. Of course, each of these models has its own shortcomings. For example, the mean reversion in the Black-Derman-Toy model is dependent on volatility decay, which only happens in practice if the modeled volatility is fit to traded securities whose volatility diminishes over time. People may use an interest rate model which has the possibility of negative interest rates if they are valuing derivatives that might have a payoff of 0 when rates get low, or below some low but positive value. In other words, the interesting stuff happens at some high positive interest rate. For a simple example, think of a put option on a bond. This contract only gets interesting when the price of the bond goes down (with the corresponding rates going up), so it really doesn't matter if the model has negative rates in it because we're only interested in the payoffs when rates are high. - Cool, I will take a look at them – Ilya Sep 19 '11 at 8:35 1 It seems that all these models still allow the interest rate to be unbounded. – Ilya Sep 19 '11 at 13:46 @Gortaur I think that is because all the modelers behind these seminal models made the determination that a zero lower bound and no upper bound are realistic assumptions. The put option on a bond is a good example of where all the action is specifically in the very high interest rate scenarios, and it would make little sense to artificially cap interest rates within a model. Since some models even allow rates to go negative, perhaps that makes a zero lower bound seem less extreme to you? – Tal Fishman Sep 19 '11 at 15:05 @TalFishman: I would say that an interest rate is something which allows you to receive a small but positive profit - otherwise you just do not invest there, but maybe I am missing something. – Ilya Sep 19 '11 at 15:27 @Gortaur there are extreme situations where even nominal rates can be negative, so positive yet very low rates seem positively plausible (sorry for the pun). – Tal Fishman Sep 19 '11 at 15:31
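Editorial addition: a minimal simulation of the AR(1) model from the question next to an ad hoc capped variant. The exponential shock distribution and the cap level $M$ are illustrative assumptions of mine, not from the thread.

```python
# Editorial sketch: the AR(1) model I_{n+1} = alpha*I_n + w_n from the question,
# next to an ad hoc capped variant. Exponential shocks and the cap M are
# illustrative assumptions, not from the thread.
import numpy as np

rng = np.random.default_rng(42)
alpha, n_steps, M = 0.8, 10_000, 0.25

w = rng.exponential(scale=0.01, size=n_steps)   # nonnegative i.i.d. shocks
I_free = np.empty(n_steps)
I_cap = np.empty(n_steps)
I_free[0] = I_cap[0] = 0.05
for n in range(n_steps - 1):
    I_free[n + 1] = alpha * I_free[n] + w[n]
    I_cap[n + 1] = min(alpha * I_cap[n] + w[n], M)  # bounded-rate variant

print("max unbounded rate:", I_free.max())
print("max capped rate:   ", I_cap.max())
```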
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9506221413612366, "perplexity_flag": "middle"}
http://www.aimath.org/textbooks/beezer/Warchetype.html
Archetype W Archetype W Summary: Domain is polynomials, codomain is polynomials. Domain and codomain both have dimension 3. Injective, surjective, invertible, 3 distinct eigenvalues, diagonalizable. ◊ A linear transformation: (Definition LT) \begin{equation*} \ltdefn{T}{P_2}{P_2},    \lt{T}{a+bx+cx^2}= \left(19a+6b-4c\right)+ \left(-24a-7b+4c\right)x+ \left(36a+12b-9c\right)x^2 \end{equation*} ◊ A basis for the null space of the linear transformation: (Definition KLT) \begin{equation*} \set{\ } \end{equation*} ◊ Injective: Yes. (Definition ILT) Since the kernel is trivial Theorem KILT tells us that the linear transformation is injective. ◊ A basis for the range of the linear transformation: (Definition RLT) Evaluate the linear transformation on a standard basis to get a spanning set for the range (Theorem SSRLT): \begin{equation*} \set{ 19-24x+36x^2,\, 6-7x+12x^2,\, -4+4x-9x^2 } \end{equation*} If the linear transformation is injective, then the set above is guaranteed to be linearly independent (Theorem ILTLI). This spanning set may be converted to a ``nice'' basis, by making the vectors the rows of a matrix (perhaps after using a vector representation), row-reducing, and retaining the nonzero rows (Theorem BRS), and perhaps un-coordinatizing. A basis for the range is: \begin{equation*} \set{1,\,x,\,x^2} \end{equation*} ◊ Surjective: Yes. (Definition SLT) A basis for the range is the standard basis of $P_2$, so $\rng{T}=P_2$ and Theorem RSLT tells us $T$ is surjective. Or, the dimension of the range is 3, and the codomain ($P_2$) has dimension 3. So the transformation is surjective. ◊ Subspace dimensions associated with the linear transformation. Examine parallels with earlier results for matrices. Verify Theorem RPNDD. \begin{align*} \text{Domain dimension: }3&& \text{Rank: }3&& \text{Nullity: }0 \end{align*} ◊ Invertible: Yes. Both injective and surjective (Theorem ILTIS). Notice that since the domain and codomain have the same dimension, either the transformation is both injective and surjective (making it invertible, as in this case) or else it is both not injective and not surjective. ◊ Matrix representation (Definition MR): \begin{align*} B&=\set{1,\,x,\,x^2}\\ &\\ C&=\set{1,\,x,\,x^2}\\ &\\ \matrixrep{T}{B}{C}&= \begin{bmatrix} 19 & 6 & -4 \\ -24 & -7 & 4 \\ 36 & 12 & -9 \end{bmatrix} \end{align*} ◊ Since invertible, the inverse is also a linear transformation. (Definition IVLT): \begin{equation*} \ltdefn{\ltinverse{T}}{P_2}{P_2},    \lt{\ltinverse{T}}{a+bx+cx^2} = (-5a-2b+\frac{4}{3}c)+ (24a+9b-\frac{20}{3}c)x + (12a+4b-\frac{11}{3}c)x^2 \end{equation*} ◊ Eigenvalues and eigenvectors (Definition EELT, Theorem EER): \begin{align*} \eigensystem{T}{-1}{2x+3x^2}\\ \eigensystem{T}{1}{-1+3x}\\ \eigensystem{T}{3}{1-2x+x^2} \end{align*} ◊ A diagonal matrix representation relative to a basis of eigenvectors, $B$. \begin{align*} B&= \set{2x+3x^2,\,-1+3x,\,1-2x+x^2} \\ &\\ \matrixrep{T}{B}{B}&=\begin{bmatrix} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 3 \end{bmatrix} \end{align*}
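Editorial addition: a short NumPy check of the archetype's matrix representation, stated inverse, and eigenvalues (the numeric entries are copied from the sheet above):

```python
# Editorial sketch: numerical check of Archetype W's matrix representation,
# its stated inverse, and its eigenvalues.
import numpy as np

M = np.array([[ 19.0,  6.0, -4.0],
              [-24.0, -7.0,  4.0],
              [ 36.0, 12.0, -9.0]])

M_inv_claimed = np.array([[ -5.0, -2.0,   4.0 / 3.0],
                          [ 24.0,  9.0, -20.0 / 3.0],
                          [ 12.0,  4.0, -11.0 / 3.0]])

assert np.allclose(M @ M_inv_claimed, np.eye(3))   # the stated inverse is correct
assert np.allclose(sorted(np.linalg.eigvals(M).real), [-1.0, 1.0, 3.0])
print("Archetype W checks pass")
```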
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.5675323605537415, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2011/01/05/standard-tableaux/?like=1&source=post_flair&_wpnonce=420e8d64d7
# The Unapologetic Mathematician ## Standard Tableaux So we've described the Specht modules, and we've shown that they give us a complete set of irreducible representations for the symmetric groups. But we haven't described them very explicitly, and we certainly can't say much about them. There's still work to be done. We say that a Young tableau $t$ is "standard" if its rows and columns are all increasing sequences. In this case, we also say that the Young tabloid $\{t\}$ and the polytabloid $e_t=\kappa_t\{t\}$ are standard. Recall that we had a canonical Young tableau for each shape $\lambda$ that listed the numbers from $1$ to $n$ in each row from top to bottom, as in $\displaystyle\begin{array}{ccc}1&2&3\\4&5&\\6&&\end{array}$ It should be clear that this canonical tableau is standard, so there is always at least one standard tableau for each shape. There may be more, of course. For example: $\displaystyle\begin{array}{ccc}1&3&6\\2&5&\\4&&\end{array}$ Clearly, any two distinct standard tableaux $s^\lambda$ and $t^\lambda$ give rise to distinct tabloids $\{s\}$ and $\{t\}$. Indeed, if $\{s\}=\{t\}$, then $s$ and $t$ would have to be row-equivalent. But only one Young tableau in any row-equivalence class has increasing rows, and only that one even has a chance to be standard. Thus if $s$ and $t$ are row-equivalent standard tableaux, they must be equal. What's not immediately clear is that the standard polytabloids $e_s$ and $e_t$ are distinct. Further, it turns out that the collection of standard polytabloids $e_t$ of shape $\lambda$ is actually independent, and furnishes a basis for the Specht module $S^\lambda$. This is our next major goal. ## 10 Comments » 1. [...] it should be made clear that this is not a standard tableau, despite the fact that the rows and columns increase. The usual line is that we imagine the tableau [...] Pingback by | January 6, 2011 | Reply 2. Dear John, As you seem to be heading towards stating it, I was wondering if you were going to prove the hook length formula. If so, have you seen Jason Bandlow's elementary proof of it? It is by far the most accessible proof I've seen, requiring only the Fundamental Theorem of Algebra. Best, Andy Comment by Andrew Poulton | January 10, 2011 | Reply • No, I haven't seen it, though I'm going for a much more overarching treatment at the moment. Do you have a link? Comment by | January 10, 2011 | Reply • Here you go: http://www.math.upenn.edu/~jbandlow/papers/hookFormula.pdf Comment by Andrew Poulton | January 11, 2011 | Reply 3. [...] Maximality of Standard Tableaux Standard tableaux have a certain maximality property with respect to the dominance order on tabloids. Specifically, [...] Pingback by | January 13, 2011 | Reply 4. [...] are Independent Now we're all set to show that the polytabloids that come from standard tableaux are linearly independent. This is half of showing that they form a basis of our Specht modules. [...] Pingback by | January 13, 2011 | Reply 5. [...] by polytabloids of shape $\lambda$. But these polytabloids are not independent. We've seen that standard polytabloids are independent, and it turns out that they also span. That is, they provide an explicit basis for [...] Pingback by | January 21, 2011 | Reply 6. [...] that we have a canonical basis for our Specht modules composed of standard polytabloids it gives us a matrix representation of for each . We really only need to come up with matrices for [...] Pingback by | January 26, 2011 | Reply 7. [...] as a quick use of this concept, think about how to fill a Ferrers diagram to make a standard Young tableau. It should be clear that since $n$ is the largest entry in the tableau, it must be in [...] Pingback by | January 26, 2011 | Reply 8. [...] means that we can eliminate some intertwinors from consideration by only working with things like standard tableaux. We say that a generalized tableau is semistandard if its columns strictly increase (as for [...] Pingback by | February 8, 2011 | Reply
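Editorial addition: since the comments above discuss the hook length formula (via Bandlow's elementary proof), here is a minimal sketch that counts standard Young tableaux with it; the function name and sample shapes are my own choices, and the formula itself is only referenced, not proved, in the post.

```python
# Editorial sketch: number of standard Young tableaux of a shape via the
# hook length formula  f^lambda = n! / (product of hook lengths).
from math import factorial

def num_standard_tableaux(shape):
    """shape: partition as a list of row lengths, e.g. [3, 2, 1]."""
    n = sum(shape)
    hook_product = 1
    for i, row_len in enumerate(shape):
        for j in range(row_len):
            arm = row_len - j - 1                              # cells to the right
            leg = sum(1 for r in shape[i + 1:] if r > j)       # cells below
            hook_product *= arm + leg + 1
    return factorial(n) // hook_product

print(num_standard_tableaux([3, 2, 1]))  # 16, for the shape used in the examples above
print(num_standard_tableaux([2, 2]))     # 2
```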
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 22, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9252427220344543, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/101146/proving-if-a-is-a-null-set-and-b-is-a-countable-set-then-ab-is-null?answertab=active
# Proving if $A$ is a null set and $B$ is a countable set then $A+B$ is null If $A$ is a null set and $B$ is a countable set, both in $\mathbb{R}$, can anyone help me show that $A+B$ is null? So I'm slightly unsure of where to start here, but how about assuming that $A+B$ is not null; therefore there must exist an interval of real numbers in $A+B$, but that would be uncountable, contradiction. Am I along the right lines here? - 1 Hint: If $B= \{ x_i \}_{i \in \mathbb{N}}$, then $A+B= \bigcup_{i \in \mathbb{N}} (A+x_i)$. What can you say about a countable union of null sets? – N. S. Jan 21 '12 at 22:29 Hint: write $A + B$ as a countable union of null sets in a natural way. – Zarrax Jan 21 '12 at 22:29 Hi again, N.S ;) – Zarrax Jan 21 '12 at 22:30 Hi again, LOL.. – N. S. Jan 21 '12 at 22:31 5 A simple example of a non-null set that does not contain any intervals would also be the set of all irrational numbers. – Dejan Govc Jan 21 '12 at 22:33 ## 2 Answers Note that $A + B = \{ a + b \mid a \in A , b \in B \} = \bigcup_{b \in B} (b + A)$. Writing $B = \{b_1, b_2, \ldots\}$, you then have: $$\mu (A + B) \leq \sum_{i = 1}^\infty \mu(b_i + A) = \sum_{i = 1}^\infty \mu (A) = 0$$ where the inequality follows from the countable subadditivity of $\mu$, and you have $\mu(b_i + A) = \mu(A)$ because the Lebesgue measure is translation invariant. - 2 Thank you! both you and Arturo have given me perfect answers, but i'll give you the accept as you have less rep :) – Freeman Jan 21 '12 at 22:40 @LHS That's very kind of you, thank you. Glad I could help. : ) – Matt N. Jan 21 '12 at 22:41 To prove the statement, note that $A+b$ is null and measurable for each $b\in B$. Since $$A+B = \bigcup_{b\in B}(A+b)$$ and there are only countably many elements in $B$, it follows that $$\mu(A+B) = \mu\left(\bigcup_{b\in B}(A+b)\right) \leq \sum_{b\in B}\mu(A+b).$$ Since each $\mu(A+b)=\mu(A)=0$ by translation invariance, the sum is $0$, and hence $\mu(A+B)=0$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 18, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9320930242538452, "perplexity_flag": "head"}
http://en.wikisource.org/wiki/Arithmetic_and_Reality:_A_Development_of_Popper's_Ideas
# Arithmetic and Reality: A Development of Popper's Ideas From Wikisource. Arithmetic and Reality: A Development of Popper's Ideas (1996) by Frank Hutson Gregory. Publishing history: This paper was first published as Working Paper Series No. WP96/01, June 1996. Dept. of Information Systems, City University of Hong Kong. Editor: Dr. Matthew Lee. It was republished in Philosophy of Mathematics Education Journal No. 26 (December 2011). ## INTRODUCTION FOR THE INFORMATION SYSTEMS READER Most computerized information systems operate by means of rules that are incorrigible within the system. They have the same status as necessary, or logical, truths. There is a problem here that dates back to the beginning of British Empiricism. According to David Hume "there is no necessity in the object". In other words the rules that govern the behaviour of the physical world are not necessary but contingent truths. We are, therefore, faced with the problem of explaining how a system of necessary truths can tell us anything about, or be useful in dealing with, a contingent world. The problem is not unique to computer systems. Prima facie it seems that mathematical formulae are logically true. The question of how, given this, they can apply to reality has been the subject of lengthy debate in the philosophy of mathematics. The present paper recounts how the problem has been structured and offers a new solution. The downstream relevance to information system design should be obvious. Whatever principles underlie the application of arithmetic to reality will also need to underlie the design of any information system intended to be informative about the real world. ## INTRODUCTION There are two basic questions that can be asked in respect of mathematical propositions. One is "what are they about?", the other is "how are they justified?". Korner [1968] makes a distinction between, what he calls, pure and applied mathematics. A pure mathematical proposition is of the form "1 + 1 = 2" while a proposition of the form "one apple and one apple makes two apples" is a proposition of applied mathematics. This distinction opens the door to the possibility that there are two different types of mathematical proposition and that these are about different things. Prima facie the propositions of applied mathematics appear to be about objects and events in the real world while those of pure mathematics do not. If it were true that propositions of applied mathematics are about real world objects then this would suggest that they are justified empirically. Here we can identify two broad schools of thought. Following Tymoczko, accounts of the nature of arithmetic and mathematics can be described as realist or constructivist. "Realism assumes the reality of a mathematical universe which is independent of mathematicians who discover truths about this reality. Constructivism insists that any mathematical reality is conditioned by the actual and potential constructions of mathematicians who invent mathematics." [p xiv, 1985] Mathematics is a large subject and it is not obvious that it is completely homogeneous. The idea that parts of mathematics are invented and parts discovered should not be discounted out of hand. However, if the inquiry is limited to basic propositions of arithmetic, i.e. the addition or subtraction of finite numbers, as will be the case in this paper, then the realist and constructivist accounts have the appearance of contradictories.
It might be assumed that arguments against one would count in favour of the other. However, if the realist/constructivist distinction is combined with the pure/applied distinction then there are four permutations, and in two of these realism and constructivism are not even contraries, let alone contradictories. These are:

First: A realist account of pure and applied arithmetic.
Second: A constructivist account of pure and applied arithmetic.
Third: A constructivist account of pure arithmetic and a realist account of applied arithmetic.
Fourth: A realist account of pure mathematics and a constructivist account of applied mathematics.

All four permutations are open to immediate difficulties. The first needs to explain why arithmetic propositions do not appear to be falsifiable by experience. The second needs to explain why a mental construct, such as arithmetic, can be informative about reality while other analogous mental constructs, such as chess, are not. The third has similar difficulties; it needs to explain how a mental construct relates to a real world discovery. The fourth appears to combine the worst aspects of the other three permutations: it needs to explain why pure arithmetical propositions are falsifiable while the apparently contingent applied arithmetical propositions are not. Popper put forward a version of the third permutation. However, he did not regard arithmetic as comprising two distinct types of statement, statements of pure arithmetic and statements of applied arithmetic. Rather his idea was that a number statement such as "2 apples + 2 apples = 4 apples" can be taken in two senses. In one sense it is irrefutable and logically true; in the second sense it is factually true and falsifiable. Another way of putting this is to say that a single number statement can express two propositions, one of which can be explained on constructivist lines, the other on realist lines. Popper's argument is not tenable as it stands. This is because it functions at a psychological level rather than at a logical level. However, a similar but tenable logical argument can be formulated. This is undertaken in Part I of the present paper. Here it will be argued that there cannot be a meaningful system that consists only of logically true universals and factual particulars. Factual universals must be introduced into the system to make it workable. Part I argues the case for a realist element in any number system. Part II makes the much stronger claim that there cannot be a meaningful system that consists only of factual universals and factual particulars. Here logical universals must be introduced into the system to make it workable. Part II argues for a constructivist element in any number system. The main thrust of the paper is to develop a tenable version of the third permutation. Somewhat surprisingly, the consequences of the Part II arguments show that the fourth permutation, while not necessarily a practical perspective, is also logically tenable. ## PART I ### Popper's account The question "Why are the calculi of logic and arithmetic applicable to reality?" was the subject of a symposium at which Gilbert Ryle, Karl Popper, and C. Lewy presented papers. Both Ryle [1946] and Lewy [1946] limited their papers to a discussion of logic, but Popper directly addressed the issue of how arithmetic applies to reality. Ryle contended that the rules of logic are rules of procedure and therefore do not apply to reality at all.
In the earlier sections of his paper Popper [1946] agreed with Ryle that the rules of logic (or of inference) are rules of procedure and as such they are not meant to fit the facts of the world. Thus the problem disappears. But Popper felt that there was an underlying problem that had not been solved. This was the question of how the rules of logic can be useful in dealing with the world: "Why are the rules of logic good, or useful, or helpful rules of procedure?" Popper thought that this could be answered rather easily. A man will find "the procedure useful because he finds that, whenever he observes the rules of logic, whether consciously or intuitively, the conclusion will be true, provided the premises were true". Here we would expect the argument to move into a discussion of theories of truth, but Popper does not do this. Instead he says "... a "good" or "valid" rule of inference is useful because no counter example can be found," and continues

... since we can say of a true description that it fits the facts ... we can say that rules of inference apply to facts in so far as every observance of them which starts with a fitting description can be relied on to lead to a description which likewise fits the facts. [Popper, 1946, p 48]

The key point here is what counts as a counter example. Popper could be making the point that a rule of inference will only be valid if its use in an axiomatic system will not lead that system into inconsistency. That is, the use of a rule of inference will not lead to the production of any theorem and its contradictory. On this interpretation the theorem and its contradictory would be the counter example. But it seems unlikely that this is what Popper had in mind, as this would not go far towards solving the usefulness problem. There are many consistent systems that have no relation to and no use in the real world.

A more likely candidate is that he was saying that rules of inference are open to falsification by facts. I.e., that if "All men are mortal" is a description that fits the world and "Socrates is a man" is a description that fits the world, but "Socrates is mortal" is a description that does not fit the world, then it would be shown that modus ponens is not valid. In this case it must be at least logically possible for modus ponens to be false; therefore, modus ponens is contingent. This is effectively an inductive account of deduction. However, this was not Popper's position either. This becomes clear when he extends his ideas on logic to arithmetic:

In so far as a calculus is applied to reality, it loses its character as a logical calculus and becomes a descriptive theory which may be empirically refutable; and in so far as it is treated as irrefutable, i.e., as a system of logically true formulae, rather than a descriptive scientific theory, it is not applied to reality. [Popper, 1946, p 54]

So, it would appear, a calculus is only useful when it becomes a descriptive theory and therefore falsifiable. Two questions now need to be answered: firstly, how does a calculus become a descriptive theory, and, secondly, which calculi can become descriptive theories? (It is not clear that all calculi can become descriptive theories; some calculi have been developed merely to explore the properties of formal systems, for example the MIU-system, a Post production system, in Hofstadter [Hofstadter, 1980].)
Popper attempts to answer the second question as follows:

...if we consider a proposition such as "2 + 2 = 4", then it may be applied - for example to apples - in two different senses... In the first of these senses, the statement "2 apples + 2 apples = 4 apples" is taken to be irrefutable and logically true. But it does not describe any fact involving apples - any more than "All apples are apples" does. ...it is based ... on certain definitions of the signs "2", "4", "+" and "=". More important is the application in the second sense. In this sense, "2 + 2 = 4" may be taken to mean that, if somebody has put two apples in a basket, and then again two, and has not taken any apples out of the basket, there will be four in it. In this interpretation "2 + 2 = 4" helps us to calculate, i.e., to describe certain physical facts, and the symbol "+" stands for a physical manipulation - for physically adding certain things to other things. ...But in this interpretation "2 + 2 = 4" becomes a physical theory, rather than a logical one; and as a consequence, we cannot be sure whether it remains universally true. As a matter of fact, it does not. ...It may hold for apples, but it hardly holds for rabbits. If you put 2 + 2 rabbits in a basket you may soon find 7 or 8 in it. [Popper, 1946, p 55]

The key question here is when "2 + 2 = 4" is operative in the logical sense and when it is operative in the physical, factual and contingent sense. Popper seems to be giving a psychological account here. He could be saying that people do, as a matter of fact, interpret "2 + 2 = 4" in two ways. That, as a matter of fact, there is an oscillation of "2 + 2 = 4" between being a logical truth and a physical truth in every person's thinking. As a psychological account it has a lot to commend it. It can help to explain why the problem is such an intractable problem and why it has a now you see it, now you don't quality. "2 + 2 = 4" taken as purely logical throughout a system or narrative will not be a problem; nor will it be a problem if it is taken as purely physical throughout a system or narrative. The errors that undoubtedly occur in this area arise when a given instance of "2 + 2 = 4" is taken to be both logical and physical in the same system or narrative. We then have the situation where people claim that there must, as a matter of logic, be four rabbits in a basket; and the opposite error where people claim that arithmetic is a branch of physics.

The problem is how to deal with "2 + 2 = 4" in such a way that it has logical and physical implications in the same system or narrative. A psychological account will not solve this problem because we require a logical account of when, where and how logical systems apply to reality. If Popper's account is taken as purely psychological then he will not have explained how and why "2 + 2 = 4", taken as logical and as part of a calculus, can determine, or help to determine, what the physical state of affairs is with regard to apples. The psychological account says only that arithmetic is logical, that it can work in the real world, and that people have learned to use it. It does not explain which calculi can become descriptive theories. It does not say why arithmetic can work in the real world; therefore it cannot explain how people have learned that it can work in the real world. Briefly, arithmetic can work in the real world but we don't know how, and people have learned that it can work in the real world but we don't know how they have done that either; however, we do know that they have learned to use it.
But this says no more than that people have learned to use arithmetic and this, I think, we knew already.

### A logical reformulation

The apples example can be reformulated as an experiment. Take a basket that contains a pair of apples and nothing else. Take a bucket that contains a pair of apples and nothing else. Empty the entire contents of the basket into the bucket taking care to make sure that everything that is in the basket goes into the bucket. Now how can we determine how many apples are in the bucket?

One way is to use the calculus of arithmetic. We can take the contents of the basket as an instantiation of the arithmetical notion "2". We can take the contents of the bucket as another instantiation of the arithmetical notion "2". We can take the act of emptying the entire contents of the basket into the bucket as an instantiation of the arithmetical notion of "+". Given this we can describe our experiment arithmetically as "2 + 2". We can use the calculus of arithmetic to show "2 + 2 = 4" and from this we can conclude that there are four apples in the bucket. Let us call this the "calculation method".

There is another way to determine the number of apples in the bucket and this is by counting them. We can take an apple out of the bucket and say "one", then we can take another apple out and say "two" and so forth. When there are no more apples left in the bucket we know we have counted them all. Let us call this the "counting method". The contention that arithmetic, understood in the constructivist sense, applies to reality is the contention that the calculation and counting methods will always give the same results.

Popper's mistake was to take "2 + 2 = 4" as being at one time (the time depending on psychological factors) logically true and at another factually, and therefore contingently, true. A better account is that "2 + 2 = 4" is always logically true. What is only contingently true is that objects and events in the world are instantiations of its components: "2", "+", "=", "4". If two apples are taken as being a contingent instantiation of the arithmetic "2", four apples as being a contingent instantiation of the arithmetic "4", and emptying the contents of a basket into a bucket as a contingent instantiation of the arithmetic "+", then the problem is on the way to being solved. We can say that it is true as a matter of logic that any instantiation of "2" combined with an instantiation of "+" and another instantiation of "2" is an instantiation of "4", while it remains contingent whether apples are an instantiation. This can be set up as follows:

Apple System 1

(1) Apples when counted as two are an instantiation of "2 apples". (factual hypothesis)
(2) The apples in the basket have been counted as "2 apples". (factual particular)
(3) The apples in the bucket have been counted as "2 apples". (factual particular)
(4) Emptying a basket into a bucket is an instantiation of "+" for the things in the bucket. (factual hypothesis)
(5) Any instantiation of "2x" combined with an instantiation of "+" and another instantiation of "2x" is an instantiation of "4x". (definition)
(6) An instantiation of "4 apples" when counted will be counted as four apples. (factual hypothesis)

Suppose we count the apples in the basket as two, count the apples in the bucket as two, empty the basket into the bucket and then count the apples in the bucket. Further suppose that the count results in three apples. Then we could assume that the count has gone wrong somewhere.
But we could repeat the count using other methods of counting. If we are satisfied that our counting is correct then we might think that (4) is false or we might think that (1) or (6) is false. Whatever the circumstances we would never have to conclude that (5) was false. This gives us necessity and falsifiability in all the places where we want it.

In fact (4) is false as it stands; as Popper points out, two rabbits plus two more rabbits may produce seven or eight rabbits. In order to avoid completely abandoning (4) the universe of discourse will need to exclude rabbits; we could perhaps limit it to inanimate objects. But this limitation placed on the universe of discourse only affects (4), (1) and (2); it has no effect whatsoever on (5). We do not need to posit a limited universe of discourse for arithmetic. It can be understood as a set of logical truths that apply to any universe of discourse.

### Realist objections

Apple System 1 shows how non-falsifiable statements such as (5) can play a role in our calculation of quantities in the real world. Unfortunately it does not show that such statements are necessary for our calculation of real world quantities. This is because all six statements in Apple System 1 could be replaced by a single factual hypothesis:

When the apples in a basket are counted as "2" and the apples in a bucket are counted as "2" and the contents of the basket are emptied into the bucket then the contents of the bucket will be counted as "4".

At first glance it might be thought that a non-falsifiable system of arithmetic is necessary in order to extrapolate. Inductively one would not be able to say that 67 apples in the basket and 95 apples in the bucket would result in 162 unless one had observed these quantities being put together before. To make the extrapolation requires the abstract, i.e. definitional, notion of arithmetic. But this argument does not stand up to a more subtle version of realism. We could adopt a strategy similar to that adopted by Field [p 274, 1989] in his version of logicism: "What ... is the value of the search for modal translations (or any other sort of translations of mathematics into acceptable nominalistic terms)? Why not instead adopt the easier course of simply trying to translate each of the applications of mathematics?"

In order to give the empirical/realist account we need not say that every statement of arithmetic is induced from observations of real world quantities. Nor need we say that the system of arithmetic is open to falsification but is in fact never falsified by the observation of real world quantities. All we need to say is that in any system for the calculation of real world quantities that employs logically true and non-falsifiable statements of arithmetic, these statements can be replaced by a statement or statements that are not non-falsifiable statements of arithmetic. This opens the door for the contention that non-falsifiable arithmetic is just a useful but non-essential tool, rather like a typist's shorthand, or that it is a useful fiction. This is a position that is counter to our intuition and today few would advocate it. Ayer, in what Lakatos [p 30, 1985] described as logical empiricist orthodoxy, came close to it when he claimed that truths of mathematics are analytic and a priori, that there can be no a priori knowledge of reality, and that if a proposition is true a priori it is a tautology.
For Ayer "tautologies [such as the propositions of mathematics], though they may serve to guide us in our empirical search for knowledge, do not in themselves contain any information about any matter of fact." [1946, p 87]. Gaskin produced an argument that counts against this sort of realist account. Gaskin argues that an arithmetic formula such as "7 + 5 = 12" cannot mean the same as an empirical proposition such as would be obtained from counting groups of objects. He argues that in order to explain mistakes in counting we need to invoke the notion of counting correctly. But Gaskin argues the meaning of correct counting is dependent on logically true propositions of arithmetic. Therefore, empirical propositions based on counting do not have equivalent meaning, nor can they be used as equivalent substitutions for, arithmetical propositions. "... what is the criterion for correctness in counting? ... "Correctness" has no meaning in this context, independent of the mathematical proposition. So our suggested analysis of the meaning of "7 + 5 = 12" runs when suitably expanded: "7 + 5 = 12" means "If you count objects correctly (i.e. in such a way as to get 12 on adding 7 and 5) you will, on adding 7 to 5, get 12.""[Gaskin, 1940] If Gaskin's argument were correct this paper could be rapidly brought to a close because it would show the necessity of logically true statements, such as (5) in Apple System 1, in every system of applied arithmetic. It would show that the substitution of a single factual hypothesis, such as the one suggested at the beginning of this section, was inadequate. It would, along with earlier arguments, establish the main point of the present essay which is that every system, that is informative about reality, must contain factual particulars, factual universals and logically true universals. However, Gaskin's argument is not, as it stands, sufficient to prove the point. ### Mistakes in counting Gaskin's idea that there must some form of logical truth underlying our notion of "correctness" in the assignment of number, is as I shall argue, quite right. However, he says that the notion of incorrect counting would be meaningless without mathematical propositions such as "7 + 5 = 12". This suggests that mistakes in counting cannot be identified without an arithmetic calculus and this is plainly not true. Four types of mistakes in counting can occur: Case C1. A child counting apples in a bucket says "one apple, two apples, four apples, five apples" and concludes that there are five apples in the bucket when there are in fact only four. In this case it is clear that the child has not learned how to count. The mistake can be identified and corrected by a parent or teacher. Case C2. A person who has learned to count correctly makes a mistake through inattention. This mistake can be identified and corrected by subsequent counts by the same person or by other people. If a second, third and fourth count all agree then we will conclude that the first count was incorrect. Case C3. Most people counting by means of saying aloud or in silently soliloqy "one, two, three" etc. will make mistakes when counting large numbers. These mistakes can be identified and corrected by other methods of counting. There are many other ways of counting apples: i) writing the count down by taking an apple out and writing down "1" taking out another and writing down "2" etc. 
ii) using a tally board and crossing off "1" then "2" then "3" as the apples are removed, iii) using a machine; banks have bank note counting machines and, no doubt, somewhere in some packing factory or cannery there is a machine that counts apples.

In none of these three different ways of identifying mistakes is there any need to use the calculus of arithmetic. Gaskin's contention that the notion of correct counting depends on the arithmetic calculus is, on the basis of the arguments so far considered, rather implausible. The situation is made worse when we consider that people make mistakes in arithmetic and these mistakes in arithmetic can be identified and corrected by counting. A person might use addition to determine the sum of a bucket containing seven apples and a basket containing five. He might come up with the answer "eleven". This mistake could be identified and corrected by a continuous count of the total. We could reverse Gaskin's argument and argue that the notion of correctness in arithmetic is meaningless without propositions resulting from counting. Establishing that some form of logical truth underlies our notion of "correctness" in the assignment of number requires a more powerful and more general argument than Gaskin's simple counting example.

## PART II

### The need for logical truth

A comprehensive account of the distinction between logical and factual truth would involve a discussion of the terms: a priori, a posteriori, empirical, analytic, synthetic, necessary and contingent. Such a massive digression into philosophical logic can, for present purposes, be circumvented if the distinction between logical and factual truth is based on the key terms used to describe the difference between realism and constructivism. That is, logically true statements are those that are invented and what follows from them; factually true statements are those that are discovered to be true and what follows from them. Following Popper we can say that all factually true universals are open to falsification. Therefore they are contingent. Logically true statements by contrast are not open to falsification; they are necessarily true.

The relations between the two types of statement can be seen in axiomatic systems. The axioms, definitions and rules of production are inventions of the person or persons developing the system and are, therefore, logically true. Any theorems that follow from the axioms and definitions by means of the rules of production will also be logically true. Factually true premises can be introduced into an axiomatic system, and theorems that follow from axioms and factual premises by means of the rules of production will inherit the contingency of the premises and be factually true. The problem is to determine why we need the logical truths. Axioms and definitions could be replaced by factual premises and factual theorems generated by the rules of production, and, as was suggested above, the rules of production could themselves be open to falsification and therefore be factual.

However, a comprehensive system that comprises only factual statements, that is a system that is not underpinned by any logical statement, is not possible. The later Wittgenstein argued that all languages are rule based. Rules may change but they are not falsifiable. As they are not falsifiable they have a very similar status to logically true statements. If an informative system consisted entirely of falsifiable statements, then in the case that two statements contradicted each other we would not know which had been falsified.
Suppose we take "all swans are white" to be a factual statement. Then this can be falsified by "Donald is a swan and Donald is white". However, in order to know that Donald is a swan we must have a criterion for including Donald in the class of swans that is independent of Donald's colour. This criterion might be "being a water-fowl with a long neck". However, if we are going to say that, on the basis of Donald being a black water-fowl with a long neck, that "all swans are white" is false then we have taken "being a water-fowl with a long neck" as being a defining criterion for swans. That is we will have taken it to be logically true. A fixed pivotal point is needed if we are going to operate the lever of falsification. With Donald, the newly discovered black water-fowl with a long neck. The crucial point is that before you can say "Donald is a swan" or "Donald is not a swan" you must have decided if white is a logical or a contingent identifying criterion for swans. The need for both factually true and logically true statements in any informative system can be seen clearly when we consider how the two forms of definition, intensive and extensive, can be useful. ### Intensive and extensive definition We can use the notions of "conjunction" and "disjunction" to make a distinction between intensive and extensive definition. An extensive definition, where it consists of more than one term, will be characterized by the disjunction of the terms. Extensive definition gives the reference (denotation) of the definiendum. An extension specifies members of a class, in the case of extensive definition we need to specify all the members of the class i.e. all the extensions. Where a class F has three members, G, H, I, we can express its extension as $(\forall x) (Fx \rightarrow (Gx \lor Hx \lor Ix))$. If this class has, as a matter of logic, only these three members we can formulate an extensive definition: $L (\forall x) (Fx \leftrightarrow (Gx \lor Hx \lor Ix))$ Extensive definitions can be useful. Given that we have fixed members of a class we can generate factual hypotheses about them. Suppose we define "cat" in terms of its member species. E.g. every cat is a lion or a tiger or a leopard or a puma etc. On the basis of this we might formulate various factual hypotheses, i.e. that only cats have claws and that all cats have sharp teeth. These could lead to other factual universals e.g. that anything that has claws also has sharp teeth. Thus, extensive definitions can be instrumental in formulation of factual universals. An intensive definition, where it consists of more than one term, will be characterized by the conjunction of the defining terms. Intensive definitions give the sense (connotation) of the definiendum. An intension will give a criterion for class inclusion, in the case of intensive definition we need to specify all the criteria. Where a member of a class J must meet three criteria, K, L, M, we can express these as $(\forall x) ((Kx \And Lx \And Mx)\rightarrow Jx)$. If as a matter of logic there only three criteria, we can formulate an intensive definition: $L (\forall x) ((Kx \And Lx \And Mx)\leftrightarrow Jx)$ Intensive definitions can be useful. A fixed criteria for class membership will enable us to identify members of the class. If we define a tiger as a cat with stripes then we can say that if X is a cat and X has stripes then X is a Tiger. It might also be true as a matter of fact that all wild Tigers live in Bengal or Assam. 
In this case if we find an animal that is a cat and has stripes and lives in Africa we will know that it is not wild. Thus, intensive definitions can be useful in the formulation of particular factual conclusions.

In these examples there has been a logical extension with a factual intension or a logical intension with a factual extension. Now let us consider the case where a term has a logical extension and a logical intension. Surely such a formulation is useless. If the extension is fixed then the intension can play no part in helping us identify members of the class, nor is it factual. In this case, therefore, the intension is useless. The situation is hardly better where a term has a factual intension and a factual extension. As neither is fixed, both are open to revision. But if one is to be revised it must surely be revised in the light of the other. We can discover that a putative intension is false based upon the extension. Or we can discover that a putative extension is false based upon the intension. But we cannot make any discoveries about one without taking the other as fixed. If we are to determine that something is a member of a class there must be some criterion for class inclusion that we use to make the determination. Alternatively, if we are to determine a criterion for class inclusion then that criterion must be true of all members of the class; therefore, in order to make the determination we must have identified the members of the class. It can be concluded that any useful term or class that has a logical extension must have a factual and contingent intension; and any term or class that has a logical intension must have a factual and contingent extension.

An example will make this clear. The intension of "a snake" could be "any reptile that does not have legs and does not have eyelids"; an extension could be "any member of the viper family or cobra family or boa family or colubrid family or hydrophida family". Suppose we adopt this extension as a definition of "snake", and suppose we give "viper" the intensive definition of "any reptile with retractable fangs". Then, if we find a reptile that has retractable fangs and eyelids, we will have discovered that some snakes have eyelids. We will have discovered that the putative intension of "snake" as "any reptile that does not have legs and does not have eyelids" is false. Alternatively we could take the intension of snake as definitional. In this case our discovery of the reptile with retractable fangs and eyelids would be the discovery of a viper (because of the intensive definition of "viper") but it would also be a discovery that not all vipers are snakes. The putative extension of "snake" that included all members of the viper family would have been discovered to be false.

An important point here is that as things currently stand in the world both the intension and the extension given above are sufficient for the identification of snakes. This means that for the practical purpose of identifying a snake we do not have to know, or do not have to decide, whether it is the intension or the extension that is definitional. It is only when something like the reptile with retractable fangs and eyelids is discovered that we have to make a decision. These are situations which offer no precedent, situations where the existing rules of language will not provide a decision procedure. They require that a new rule be made, but this will not necessarily be the product of existing rules.
There may be a host of psychological factors that go into the decision but logically it will be arbitrary. In these situations a stipulation is required in order to proceed. We need to make a stipulative definition. Take the following:

i) An animal is a snake if and only if it is a reptile that does not have legs and does not have eyelids.
ii) An animal is a snake if and only if it is a member of the viper family or cobra family or boa family or colubrid family or hydrophida family.

The players in a language game might assent to both statements without having decided which is a definition. Both are sufficient identification criteria. They therefore inhabit a logical limbo which will not be resolved until a particular fact forces the issue. Let us imagine an animal called "Olga" and the following:

iii) Olga is a reptile without eyelids or legs.
iv) Olga is a snake. From i) and iii) by modus ponens.

As iii) is factual, iv) is bound to be factual whether or not i) is factual. However, the truth of iv) may depend on whether i) is factual or not.

v) By definition a viper is any reptile with retractable fangs.
vi) Karl is a reptile with retractable fangs and eyelids.

This forces a decision about whether i) or ii) is false. And this decision is not factual; it is solely about whether the language game players choose to take one or other as a definition. If i) is taken to be definitional then ii) will be false - it will not be true that all vipers are snakes. However, if ii) is taken as definitional then i) will be false and we will have insufficient grounds for asserting that Olga is a snake. The truth value of particulars is therefore dependent on definitions, though the definitions may not have been accepted as definitions as yet. One might say that the truth value of particulars is dependent on definitions present or future.

### Two systems of making a tally

A case can now be constructed to show that the relation between the calculus of arithmetic and systems of counting is of the same order as that between intensions and extensions. However, the word "counting" will be dropped because this is sometimes, and sometimes not, used, like "knowledge", as a success word. It could be argued, one way or the other, that a correspondence with the arithmetic calculus is built into the concept of counting. The word "tally" will be used in its place. The word "tally" will imply nothing more than a system or ritual for producing totals.

Tally System 1

There is a tribe of goat-herds who live in an enclosed valley from which no goat can escape. Each member of the tribe has a tally stick onto which beads are threaded. When a tribe member is given a goat, or when one of his goats gives birth, a new bead is threaded on to the owner's tally stick. When one of his goats dies a bead is taken off the owner's tally stick. We can imagine that in the tribe social prestige and privilege is determined by the number of goats that a person owns. Given this the tally system will be useful. It can be determined who has the most goats by placing different owners' tally sticks side by side.

Tally System 2

In a second tribe beads are added as follows:

first goat: 0
second goat: 00
third goat: 0000
fourth goat: 00000000

This form of tally system differentiates the social ranking more clearly than Tally System 1; therefore, one might argue, it is more useful. However, in this system when a goat dies only one bead is removed from the tally stick.
Therefore, we can assume that the number of beads on the tally stick will not normally correspond to the number of goats that a goat herd owns. The number of beads on the tally stick will not normally even correspond to the number of goats that a goat herd has owned. A goat herd who has had four goats and four have died and a goat herd who has had three goats and none have died will both have 0000 on their tally sticks. But we need not assume that this system is any less useful than System 1. Perhaps goats require skill to breed but die largely by accident. It is, we might imagine, quite right that a man who has had four goats, but been unlucky and lost them all, should be given the same respect as a man who has only ever had three.

We need not assume that the goat-herds using either Tally System 1 or Tally System 2 have any knowledge of arithmetic. Nor need we assume that they can count in any way independently of their tally sticks. Beads are threaded and taken off as part of a public semi-religious ritual. Everybody in the tribe can agree when this ritual is properly performed. Children are taught the ritual along with various occult rituals.

### The definition of number

The way in which number is defined can now be considered. One possibility is to define number in terms of arithmetic formulae. An intensive definition of the number "three" can be as follows: "(1 + 1 + 1) & (1 + 2) & (2 + 1)". Given this logical intension the contingent extension would be the total returned by a system or ritual that produced a corresponding result in appropriate circumstances, that is "a total from System 1 or a total from System 2 or a total from System 3 etc.". Any system that produced a total would be a candidate for inclusion. Tally System 1 and Tally System 2 would both be candidates and, as a matter of fact, totals from Tally System 1 would be part of the extension but those from Tally System 2 would not.

It is fortuitous that the bead threading ritual of the tribe using Tally System 1 corresponds to numbers defined by arithmetic. Tally System 1 is meaningful quite independently of arithmetic formulae. However, as such correspondence does exist we are entitled to call it a system of counting. It is not that a logically true arithmetic is required in order to determine what correct counting is, as Gaskin suggested, but that a logically true arithmetic will enable us to determine what systems are to count as counting systems. Given arithmetic formulae as the logical intension of number, it will be a matter of empirical inquiry and discovery which systems and rituals can be part of the extension, i.e. which systems and rituals are counting systems where counting is defined in terms of the calculus of arithmetic.

As Tally System 1 is contingent with regard to number there is no logical problem in explaining how it applies to reality (our problem was explaining how logically true number systems could apply to reality). It can now be explained, which Popper failed to do, why a logically true system of arithmetic can be useful. Given that tally systems are contingent with regard to number, the logically true system of arithmetic is not only useful but essential. This might not be immediately apparent because it might seem that tally systems can be identified as systems of counting by the fact that the totals they produce stand in a one-to-one relation with objects in the real world. But it is difficult to see how "one" can be given meaning independently of some stipulative and logically true definition.
It is only because arithmetic provides such a definition, i.e. "(3 - 2) & (4 - 3) & (5 - 4)", that one can identify the "one" in a one-to-one correspondence. A second point is: why should we have to say that a counting system must produce one-to-one totals unless we have stipulated this by defining number in terms of arithmetic? Tally System 2, which by the present account is not a system of counting, is just as dependent on real world objects as Tally System 1, which is. Tally System 2 is also in some mathematical relation to real world objects. If number were defined in some other way it would open up the possibility of a system of "counting" in which the totals were not in a one-to-one relation with real world objects.

This answers the question of "why, if arithmetic is logically true, is it necessary in an account of real world quantities?". Neither Popper nor Gaskin offered an adequate answer to this question. However, the arguments used above can also be used to challenge the contention that the formulae of arithmetic are logically true. This possibility must now be briefly examined.

It is logically tenable to present a case for a definition of number opposite to that given in the previous section. Number and counting could be defined in terms of Tally System 1. Numbers could be determined by placing tally sticks upright next to each other. A "one" tally stick is higher than an empty tally stick but shorter than a "two" tally stick. A "two" tally stick is higher than a "one" tally stick but shorter than a "three" tally stick. Counting is the act of assigning the number to a tally stick. Given that numbers are defined in this way it will be contingent and a matter of empirical discovery that arithmetic formulae correspond to them. As an account of the historical development of numbers it seems probable that tally systems existed before arithmetic. In this case it must originally have been that arithmetic was in fact contingent and a discovery.

## CONCLUSION

There are two tenable accounts of number. One is to regard numbers as defined by the propositions of arithmetic, in which case these propositions are incorrigible and logically true, while the propositions based on tally systems are falsifiable and factually true. The other is to regard number as defined by a tally system, in which case propositions based on the system will be incorrigible and logically true while the propositions of arithmetic will be falsifiable and factually true. The most plausible historical account is that arithmetic was a series of discoveries based on tally systems used in different cultures. As arithmetic knowledge spread and the notation for expressing it became increasingly uniform, more people began to regard it as logically true. This trend has continued to the present day, when most people would take any tally system, and possibly every tally system, as falsifiable rather than regard arithmetic as falsifiable.

Most philosophers of mathematics have assumed that there is only one tenable account of formulae such as "2 + 2 = 4"; they have assumed that they are either logically true or contingent in an absolute sense. They have assumed that some arguments would be produced that would show conclusively that they are one or the other. Popper is, I think, the only one to have come up with the idea that "2 + 2 = 4" can at one time be logically true and at another be factually true, that the formula can have two different senses.
Popper's mistake was to think that it was the formula "2 + 2 = 4" rather than the number "4" that could be taken in two senses. As we have seen, "4" is in one sense the product of an arithmetic formula; in another sense it is the product of a tally system. Popper also regarded arithmetic as being logically true. This might be true of most people as a matter of fact, but it is not true as a matter of logic. As has been shown, there need not be any self-contradiction involved in taking arithmetic as factually true provided the products of a given tally system are taken as logically true.

Although there are two logically tenable accounts of number, it is a legitimate question to ask which is more practical. There could be logical problems in defining numbers in terms of more than one tally system. As things currently stand it would be quite impractical to define numbers in terms of a single tally system. It would be an enterprise similar to basing linear measurement on the standard meter in Paris. This was possible for Napoleon but would be difficult to arrange today.

## REFERENCES

AYER, A.J. [1946]: Language, Truth and Logic. Second Edition. London: Gollancz.
FIELD, H. [1989]: Realism, Mathematics and Modality. Oxford: Basil Blackwell.
GASKIN, D.A.T. [1940]: Mathematics and the World. The Australian Journal of Philosophy, 18, no. 2, pp 97-116.
HOFSTADTER, D.R. [1980]: Gödel, Escher, Bach: an Eternal Golden Braid. London: Penguin Books.
KORNER, S. [1968]: The Philosophy of Mathematics. New York: Dover Publications.
LAKATOS, I. [1985]: A Renaissance of Empiricism in the Recent Philosophy of Mathematics? in TYMOCZKO, T. New Directions in the Philosophy of Mathematics. Boston: Birkhauser.
LEWY, C. [1946]: Aristotelian Society Supplementary Volume XX.
POPPER, K.R. [1946]: Aristotelian Society Supplementary Volume XX.
RYLE, G. [1946]: Aristotelian Society Supplementary Volume XX.
TYMOCZKO, T. (Ed) [1985]: Introduction, New Directions in the Philosophy of Mathematics. Boston: Birkhauser.

This work is released under the Creative Commons Attribution-ShareAlike 3.0 Unported license, which allows free use, distribution, and creation of derivatives, so long as the license is unchanged and clearly noted, and the original author is attributed.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9563652873039246, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/41337/nodal-analysis-of-an-electrical-circuit
# Nodal Analysis of an electrical circuit

I have several doubts about solving circuits.

1. Can any circuit be solved using Nodal Analysis?
2. If some circuit can be solved using Nodal Analysis, can it be solved using Mesh Analysis too?
3. Why do we need these techniques to solve circuits?

- There may be some complex networks which cannot be solved by equivalence methods, current division, or voltage division rules. In such cases these methods are useful. – harshitha Jan 30 at 11:42

## 2 Answers

Yes, we can solve any question using nodal analysis, and it is also possible to solve that question using mesh analysis.

- Hi Roshani, welcome to Physics.SE. Perhaps consider explaining more in your answer? It would be a lot better if there was a justification for your answer. – Kitchi Feb 2 at 15:57

It is prudent to remain somewhat dubious about methods of solving circuits at higher frequencies. Solving simple and complex circuits takes into account the "Lumped Matter Discipline" (LMD), which defines the properties of some element (e.g. resistor, capacitor, etc.) in terms of the voltage across its terminals, $V(t)$, and the current through the element, $I(t)$.

Why do we have the LMD? It was proposed as a way to make our lives much simpler in that every time we need to solve a simple or complex circuit, we do not have to use Maxwell's equations, which can be quite tedious.

Lumped Matter Discipline Assumptions

• The rate of change of magnetic flux linked with any closed loop outside an element must be zero for all time: $\frac{\partial \phi_{B}}{\partial t} = 0$
• There is no time-varying charge within the element for all time: $\frac{\partial q}{\partial t} = 0$, where $q$ is the total charge within the element.
• The LMD applies only when the signal timescales of interest are large relative to the propagation delay of EM waves across the lumped elements.

As engineers are beginning to find out, we are pushing #3 to its limit because we are approaching higher GHz frequencies. This, in turn, affects the other assumptions of the LMD. It is for this reason that in circuit simulation, such as LTSpice (which uses nodal analysis), you have to be very careful about the analyses of higher-frequency circuits.

It is worth noting that electronic components are by no means "ideal". A capacitor, for example, would have a marginal Equivalent Series Resistance (ESR) that affects its performance depending on the frequency taken into account. If you were to pick and choose a manufacturer's capacitor to use in your intended product, simulating it would require that you include a resistor to give a more accurate representation of real life. Furthermore, you would also have to include some inductance and even capacitance.
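To make the first answer concrete: within the LMD, nodal analysis reduces a linear resistive circuit to a system of Kirchhoff's current law (KCL) equations, $G v = i$, at the non-ground nodes. Below is a minimal sketch in Python; the two-node circuit, the component values and the variable names are all hypothetical examples, not taken from the question.

```python
# Minimal nodal-analysis sketch for a hypothetical two-node resistive circuit:
#   1 A current source into node 1; R1 = 2 ohm from node 1 to ground;
#   R2 = 4 ohm between nodes 1 and 2; R3 = 8 ohm from node 2 to ground.
# KCL at each non-ground node gives G @ v = i, where G is the conductance
# matrix, v the unknown node voltages, and i the injected source currents.
import numpy as np

R1, R2, R3 = 2.0, 4.0, 8.0
G = np.array([
    [1/R1 + 1/R2, -1/R2],          # KCL at node 1
    [-1/R2,       1/R2 + 1/R3],    # KCL at node 2
])
i = np.array([1.0, 0.0])           # source currents injected at each node (A)

v = np.linalg.solve(G, i)          # node voltages relative to ground (V)
print(v)                           # approx. [1.714, 1.143]
```

Mesh analysis would set up an analogous linear system in loop currents instead of node voltages; for planar circuits either formulation works, which is the point of the first answer.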
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 5, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9515272378921509, "perplexity_flag": "middle"}
http://www.nag.com/numeric/CL/nagdoc_cl23/html/D06/d06intro.html
# NAG Library Chapter Introduction: d06 – Mesh Generation

## 1  Scope of the Chapter

This chapter is concerned with automatic mesh generation

• with line segments, over the boundary of a closed two-dimensional connected polygonal domain;
• with triangles, over a given two-dimensional region using only its boundary mesh.

## 2  Background to the Problems

An important area of scientific computing in engineering is the solution of partial differential equations of various types (for solid mechanics, fluid mechanics, thermal modelling, ...) by means of the finite element method. In essence, the finite element method is a numerical technique which solves the governing equations of a complicated system through a discretization process. You may wish to consult Cheung et al. (1996) to see an application of the finite element method to solid mechanics and field problems.

A key requirement of the Finite Element method is a mesh, which subdivides the region on which the partial differential equations are defined. Note that such meshes are also essential to other discretization processes, such as the Finite Volume method. However, for the purposes of this description we focus (without loss of generality) on the Finite Element method. Thus, meshing algorithms are of crucial importance in every numerical simulation based on the finite element method. In particular, the accuracy and even the validity of a solution is strongly tied to the properties of the underlying mesh of the domain under consideration.

In this chapter, the Delaunay constrained 2D triangulation (see George and Borouchaki (1998) or Chapter 7 of Cheung et al. (1996)) is considered and functions are provided to triangulate a closed polygonal domain of $\mathbb{R}^2$, given a mesh of its boundary (in a later Mark of the Library, software for the 3D case will be available). A domain in $\mathbb{R}^2$ is given via a discretization of its boundary. The boundary is described as a list of segments, with given end point coordinates. Then an incremental method is used to generate the set of interior vertices.

Let $\Omega$ be a closed bounded domain in $\mathbb{R}^2$ or $\mathbb{R}^3$. The question is how to construct a triangulation (mesh) of this domain suitable for a finite element framework. Following the definition in George and Borouchaki (1998):

• $\mathcal{T}$ is a mesh of $\Omega$ if
  • $\Omega = \bigcup_{K \in \mathcal{T}} K$.
  • Every element $K$ in $\mathcal{T}$ is non-empty.
  • The intersection of the interiors of any two elements is empty.
  • The intersection of any two elements in $\mathcal{T}$ is either
    • the empty set,
    • a vertex,
    • an edge,
    • a face (in $\mathbb{R}^3$).

In the finite element method, the meshes are in general denoted $\mathcal{T}$ or $\mathcal{T}_h$, where the index $h$ refers to a measure of the diameter (length of the longest edge) of the elements in the mesh. A triangulation is a set of entities described in a suitable manner by picking an adequate data structure. The algorithm for triangulation construction creates a table of elements in the triangulation as well as the neighbourhood relationships between the elements. Those elements are meant to satisfy the so-called 'empty sphere criterion', which means that the open ball associated with the element (the circumcircle of the triangle in 2D, and the circumsphere of the tetrahedron in 3D) does not contain any vertices (while the closed ball contains the vertices of the element in consideration only). This criterion is a characterisation of the Delaunay triangulation.
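As an illustration of the 'empty sphere criterion' in 2D, the test can be written with the standard in-circle determinant predicate. The sketch below is not NAG Library code; the function name and the test points are illustrative only.

```python
# 2D empty-circumcircle test: for a counter-clockwise triangle (a, b, c),
# point d lies strictly inside the circumcircle iff this determinant is > 0.
import numpy as np

def in_circumcircle(a, b, c, d):
    """Return True if d is strictly inside the circumcircle of (a, b, c)."""
    ax, ay = a; bx, by = b; cx, cy = c; dx, dy = d
    m = np.array([
        [ax - dx, ay - dy, (ax - dx)**2 + (ay - dy)**2],
        [bx - dx, by - dy, (bx - dx)**2 + (by - dy)**2],
        [cx - dx, cy - dy, (cx - dx)**2 + (cy - dy)**2],
    ])
    # Sign convention assumes (a, b, c) are listed counter-clockwise.
    return np.linalg.det(m) > 0.0

# The circumcircle of the unit right triangle is centred at (0.5, 0.5):
print(in_circumcircle((0, 0), (1, 0), (0, 1), (0.5, 0.5)))  # True
print(in_circumcircle((0, 0), (1, 0), (0, 1), (2.0, 2.0)))  # False
```

A Delaunay triangulation is exactly one for which this predicate is false for every element and every vertex not belonging to that element.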
Given $\mathcal{T}_i$ the Delaunay triangulation of the convex hull of the first $i$ points, the purpose of the incremental method (which is the main method to generate nodes and elements inside the domain) is to obtain $\mathcal{T}_{i+1}$, the Delaunay triangulation which includes an $(i+1)$th point $P$ as an element vertex. To this end, one can introduce a procedure referred to as the 'Delaunay kernel' construction. This kernel is

$\mathcal{T}_{i+1} = \mathcal{T}_i - \mathcal{C}_P + \mathcal{B}_P,$

where $\mathcal{B}_P$ is the ball associated with $P$ and $\mathcal{C}_P$ is the associated cavity. The ball associated with a given point $P$ is the set of elements in the triangulation including $P$ as a vertex, while the cavity is the set of elements whose circumcircles or circumballs enclose the point $P$. One can prove that, given $\mathcal{T}_i$ a Delaunay triangulation of the convex hull of the first $i$ points, $\mathcal{T}_{i+1}$ is a Delaunay triangulation of the hull that includes $P$ as the $(i+1)$th vertex. The completion of a Delaunay triangulation relies on applying the Delaunay kernel procedure to every point. The problems here are

• to choose the input data $\mathcal{T}_0$ of the incremental method, and
• to generate at each iteration this $(i+1)$th point, such that $\mathcal{T}_{i+1}$ is still a Delaunay triangulation of the convex hull of the $(i+1)$ points.

For a finite element application, it is required to construct a mesh of the domain $\Omega$ whose elements are as close to equilateral as possible. The mesh generation methods include an initial creation stage resulting in a mesh $\mathcal{T}_0$, without internal points, except for any specified interior points (see George and Borouchaki (1998) for more details). Such a mesh is referred to as the 'empty mesh'. This mesh consists of a box which includes the whole geometry plus some vertices on the edge of that box. From here the methods differ in how the required internal points are created. The general principle of interior mesh generation is either to create a point and insert it immediately by means of the Delaunay method (the so-called Delaunay kernel), repeating the process as long as points can be created, or to generate a series of points, insert this series and iterate the process as long as a non-empty series is created.

At this stage it is quite useful to define the notion of a control space to govern the internal point creation. The 'ideal' control is the input of a function defined analytically at any point of $\mathbb{R}^2$ which specifies the size and the direction features that must be conformed to anywhere in the space. To construct such a function, one can consider several approaches. For our purpose in this chapter, this control function computes, from data, the local step sizes (the desired distance between two points) related to the given points. A generalized interpolation then enables us to obtain the function everywhere. This process is purely geometric in the sense that it relies only on the geometric data properties: boundary edge lengths, and so on. You are advised to consult George and Borouchaki (1998) for more details about this strategy, especially about the other approaches which can be considered to construct the control function.

## 3  Recommendations on Choice and Use of Available Functions

### 3.1  Boundary Mesh Generation

The first step to mesh any domain of $\mathbb{R}^2$ or $\mathbb{R}^3$ is to generate a mesh of the domain boundary.
In this chapter, since only the 2D case is considered, the relevant function is nag_mesh2d_bound (d06bac). This function meshes with segments a boundary of a closed connected polygonal domain of $\mathbb{R}^2$, given a set of characteristic points and characteristic lines which define the shape of the frontier. The boundary has to be partitioned into geometrically simple lines. Each line segment may be a straight line, a curve defined by an equation of the type $f(x,y) = 0$, or simply a polygonal curve, delimited by characteristic points (end points of the lines). Then, you can assemble those lines into connected components of the domain boundary.

### 3.2  Interior Mesh Generation

In this chapter three functions are provided to mesh a domain given a discretization of its boundary with optionally specified interior points.

• nag_mesh2d_delaunay (d06abc) uses an internal point construction method along the internal edges. Using the control function, a small number of points are generated along each edge.
• nag_mesh2d_front (d06acc) uses a point creation method based on an advancing front point placement strategy, starting from the 'empty mesh'.
• nag_mesh2d_inc (d06aac) uses a simple incremental method based on a control function given analytically via the argument power.

Any point construction method results in a set of points. These points are then inserted by means of the Delaunay kernel. The point insertion process is completed by successive waves. The first wave results from the empty mesh edge analysis (edge method) or from the empty mesh front analysis (advancing front method). Subsequent waves correspond to the analysis of the edges of the previous mesh. For the advancing front strategy, the waves follow the analysis of the front associated with the current mesh. One can propose a general scheme for a mesh generation method. Seven steps can be identified as follows.

• Preparation step.
  • Data input: point coordinates, boundary edges and internal edges (if any),
  • construction of the bounding box,
  • meshing of this box by means of a few triangles.
• Construction of the box mesh.
  • Insertion of the given points in the box mesh using the Delaunay kernel.
• Construction of the empty mesh.
  • Search for the missing specified edges,
  • enforcement of these edges,
  • definition of the connected components of the domain.
• Internal point creation and point insertion.
  • Control space definition,
  • (1) internal edge analysis, point creation along these edges,
  • point insertion via the Delaunay kernel and return to (1).
• Domain definition.
  • Removal of the elements exterior to the domain,
  • classification of the elements with respect to the connected components.
• Optimization.
  • edge swapping,
  • point relocation, ...
• File output.

When using the advancing front approach described earlier, one has to replace the step denoted by (1) of the general scheme. The analysis of the edges of the current mesh is then replaced by the front analysis. Because the particular mesh generated by nag_mesh2d_inc (d06aac), nag_mesh2d_delaunay (d06abc) and nag_mesh2d_front (d06acc) may be sensitive to the platform being used, there may be differences between generated nodal coordinates and connectivities. However, all meshes generated should be expected to satisfy the 'empty sphere criterion'.

### 3.3  Mesh Management and Utility Routines

In addition to meshing functions, management and utility functions are also available in this chapter.
A mesh smoother function, nag_mesh2d_smooth (d06cac), is provided to improve mesh triangle quality. Since the Finite Element framework includes a requirement to solve matrices based on meshes, the function nag_mesh2d_sparse (d06cbc) generates the sparsity pattern of such a matrix. Because the numbering of unknowns in a linear system can be crucial in terms of storage and performance, a vertex renumbering function nag_mesh2d_renum (d06ccc) is provided. This function also returns the new sparsity pattern based on the renumbered mesh.

To mesh a complicated geometry, it is sometimes better to partition the whole geometry into a set of geometrically simpler ones. Some geometry could also be deduced from another geometry by an affine transformation, and nag_mesh2d_trans (d06dac) could be used for that purpose. nag_mesh2d_join (d06dbc) is provided to join all the simple geometry meshes. This function can also handle the joining of two adjacent as well as overlapping meshes, which may be useful in a domain decomposition framework.

## 4  Example of Use in the Solution of a Partial Differential Equation

The use of Chapter d06 mesh generation functions, together with sparse solver functions from Chapter f11, to solve partial differential equations with the finite element method is described in a NAG Technical Report (see Bouhamou (2001)). This report, and accompanying source code, is available from the NAG web site, or by contacting one of the NAG Response Centres.

## 5  Functionality Index

Boundary mesh generation,
  2D boundary mesh generation: nag_mesh2d_bound (d06bac)
Interior mesh generation,
  2D mesh generation using advancing front method: nag_mesh2d_front (d06acc)
  2D mesh generation using a simple incremental method: nag_mesh2d_inc (d06aac)
  2D mesh generation using Delaunay–Voronoi method: nag_mesh2d_delaunay (d06abc)
Mesh Management and Utility function,
  2D mesh smoother using a barycentering technique: nag_mesh2d_smooth (d06cac)
  2D mesh transformer by an affine transformation: nag_mesh2d_trans (d06dac)
  2D mesh vertex renumbering: nag_mesh2d_renum (d06ccc)
  finite element matrix sparsity pattern generation: nag_mesh2d_sparse (d06cbc)
  joins together two given adjacent (possibly overlapping) meshes: nag_mesh2d_join (d06dbc)

None.

## 7  References

Bouhamou N (2001) The use of NAG mesh generation and sparse solver routines for solving partial differential equations. NAG Technical Report TR 1/01, NAG Ltd, Oxford.
Cheung Y K, Lo S H and Leung A Y T (1996) Finite Element Implementation. Blackwell Science.
George P L and Borouchaki H (1998) Delaunay Triangulation and Meshing: Application to Finite Elements. Editions HERMES, Paris.
Quarteroni A and Valli A (1997) Numerical approximation of partial differential equations. Comp. Maths. 23.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 53, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8410103917121887, "perplexity_flag": "middle"}
http://www.reference.com/browse/Zone+Plate
# Zone plate

A zone plate is a device used to focus light. Unlike lenses, however, zone plates use diffraction instead of refraction. Created by Augustin-Jean Fresnel [freɪ'nel], they are sometimes called Fresnel zone plates in his honor. The zone plate's focusing ability is an extension of the Arago spot phenomenon caused by diffraction from an opaque disc.

A zone plate consists of a set of radially symmetric rings, known as Fresnel zones, which alternate between opaque and transparent. Light hitting the zone plate will diffract around the opaque zones. The zones can be spaced so that the diffracted light constructively interferes at the desired focus, creating an image there. Zone plates produce equivalent diffraction patterns no matter whether the central disk is opaque or transparent, as long as the zones alternate in opacity.

## Design and manufacture

To get constructive interference at the focus, the zones should switch from opaque to transparent at radii where

$$r_n = \sqrt{n \lambda f + \frac{n^2 \lambda^2}{4}}$$

where n is an integer, λ is the wavelength of the light the zone plate is meant to focus and f is the distance from the center of the zone plate to the focus. When the zone plate is small compared to the focal length, this can be approximated as

$$r_n \simeq \sqrt{n f \lambda}.$$

For plates with many zones, you can calculate the distance to the focus if you only know the radius of the outermost zone, $r_N$, and its width, $\Delta r_N$:

$$f = \frac{2 r_N \, \Delta r_N}{\lambda}$$

In order to get complete constructive interference at the focus, the amplitude of the diffracted light waves from each zone in the zone plate must be the same. This means that for an evenly illuminated zone plate, the area of each zone is equal. Because the area of each zone is equal, the width of the zones must decrease farther from the center.

The maximum possible resolution of a zone plate depends on the smallest zone width:

$$\frac{\Delta l}{\Delta r_N} = 1.22$$

Because of this, the smallest size object you can image, $\Delta l$, is limited by how small you can reliably make your zones. Zone plates are frequently manufactured using lithography. As lithography technology improves and the size of features that can be manufactured decreases, the possible resolution of zone plates manufactured with this technique can improve.

Unlike a standard lens, a binary zone plate produces subsidiary intensity maxima along the axis of the plate at odd fractions (f/3, f/5, f/7, etc.), though these are less intense than the principal focus. However, if the zone plate is constructed so that the opacity varies in a gradual, sinusoidal manner, the resulting diffraction causes only a single focal point to be formed. This type of zone plate pattern is the equivalent of a transmission hologram of a converging lens.

For a smooth zone plate, the opacity (or transparency) at a point is given by

$$\frac{1 \pm \cos(kr^2)}{2},$$

while a binary zone plate depends only on the sign:

$$\frac{1 \pm \operatorname{sgn}(\cos(kr^2))}{2},$$

where r is the distance from the plate center and k determines the plate's scale.
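As a quick numerical check of these design formulas, here is a short sketch; the wavelength and focal length are illustrative assumptions (a visible-light design), not values from the article:

```python
import math

wavelength = 633e-9   # assumed design wavelength (m), red He-Ne line
f = 10e-3             # assumed design focal length (m)
N = 100               # number of zone boundaries

# Exact zone-edge radii: r_n = sqrt(n*lambda*f + (n*lambda/2)**2)
radii = [math.sqrt(n * wavelength * f + (n * wavelength / 2) ** 2)
         for n in range(1, N + 1)]

r_N = radii[-1]
dr_N = radii[-1] - radii[-2]          # width of the outermost zone
f_recovered = 2 * r_N * dr_N / wavelength

print(f"outermost radius r_N   = {r_N * 1e6:.1f} um")
print(f"outer zone width dr_N  = {dr_N * 1e6:.2f} um")
print(f"f from 2*r_N*dr_N/lam  = {f_recovered * 1e3:.2f} mm")   # ~10 mm back
print(f"resolution ~ 1.22*dr_N = {1.22 * dr_N * 1e6:.2f} um")
```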
## Applications

### Physics

There are many wavelengths of light outside the visible region of the electromagnetic spectrum for which traditional lens materials like glass are not transparent, so lenses are more difficult to manufacture. Likewise, there are many wavelengths for which there are no materials with a refractive index significantly larger than one. X-rays, for example, are only weakly refracted by glass or other materials, and so require a different technique for focusing. Zone plates eliminate the need to find transparent, refractive, easy-to-manufacture materials for every region of the spectrum. The same zone plate will focus light of many wavelengths to different foci, which means it can also be used to filter out unwanted wavelengths while focusing the light of interest.

### Photography

Zone plates are also used in photography in place of a lens or pinhole for a glowing, soft-focus image. One advantage over pinholes (aside from the unique, fuzzy look achieved with zone plates) is that the transparent area is larger than that of a comparable pinhole. The result is that the effective f-number of a zone plate is lower than for the corresponding pinhole, and the exposure time can be decreased.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 6, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9250705242156982, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/34024/special-relativity-second-postulate?answertab=oldest
# Special Relativity Second Postulate

That the speed of light is constant for all inertial frames is the second postulate of special relativity, but this by itself does not mean that nothing can travel faster than light.

• so is it possible that the point that nothing can travel faster than light is wrong? -

1 That the speed of light is constant for all inertial frames is the second postulate of special relativity. – Mark M Aug 12 '12 at 17:06

## 5 Answers

It says, "The speed of light in vacuum is constant in all inertial frames of reference (i.e. for all inertial observers)". To explain in a simple manner: the speed of light is not measured relative to any object and is the same in all inertial frames. In other words, light travels at the maximum velocity allowed by nature. If something approaches $c$, its measured time, length and even mass change. No, there is no real possibility that the second postulate of special relativity is wrong; for this reason it was accepted and has stood for nearly a century. This restriction increased the interest of physicists in tachyons -

From a purely theoretical point of view, special relativity (SR) is based on the space-time metric $$\eta=\begin{bmatrix}+&0&0&0\\0&-&0&0\\0&0&-&0\\0&0&0&-\end{bmatrix}$$ The most general transformation preserving the metric $\eta$ is the global Poincaré group, which is the limit of the de Sitter group as the sphere radius $R\rightarrow \infty$. There is another type of de Sitter transformation, with finite $R$, which also leads to a special relativity theory. Basically one plays with $c_{\text{photon}}$ and $c$. But keep in mind that, while it is possible that SR is a de Sitter theory with finite but large $R$, this has not been experimentally confirmed, and as far as we know we can use Einstein's special relativity. -

The fact that the speed of light is a maximum speed is a derived conclusion from the postulates of special relativity; it is not one of the axioms themselves. You can demonstrate this in a large variety of ways; the most convincing one is the fact that the energy required to create a particle and accelerate it to a speed $v$ is given by $$E=\frac{mc^{2}}{\sqrt{1-\left(\frac{v}{c}\right)^{2}}}$$ which approaches infinity as $v\rightarrow c$. -

1 This does not preclude the existence of "tachyons", i.e., particles born moving faster than light. – C.R. Aug 13 '12 at 4:39
@KarsusRen: such things would have imaginary energies. And, like I said, there are other ways of getting at this result. A better argument against tachyons is that you can always boost to a reference frame where a tachyon is not travelling through time at all, or is travelling into the past. – Jerry Schirmer Aug 13 '12 at 12:40
1 No, such things will not have imaginary energies; instead their rest mass (and rest energy) are imaginary. Because they can never be at rest, the imaginary nature of rest mass is not a problem. – C.R. Aug 14 '12 at 6:06
@KarsusRen: like I said, there is a reference frame where they will sit at constant time, and ''evolve'' in space. There's no sense to be made of a dynamic particle that behaves like that. – Jerry Schirmer Aug 14 '12 at 12:36
show 1 more comment

There are various ways to formulate special relativity. The different approaches illustrate different aspects of the theory, so one of the tricks is to choose the formulation best suited to the question you're asking. My own favourite approach is based on the invariance of the proper time, and in fact this answers your question rather neatly.
If you think back to learning about Pythagoras' theorem, this tells you that the distance from the origin to the point in space (x, y, z) is: $$d^2 = x^2 + y^2 + z^2$$ Special relativity extends this idea and defines a quantity called proper time, $\tau$, defined by: $$\tau^2 = c^2t^2 - x^2 - y^2 - z^2$$ where $c$ is a constant that will turn out to be the speed of light. The key thing about special relativity is that it states that the proper time is an invariant, that is, all observers will calculate the same value for it. All the weird effects in SR like length contraction and time dilation come from the fact that $\tau$ is an invariant. So what about that constant $c$? Well, the quantity $\tau^2$ can't be negative, otherwise you can't take the square root - well, you can, but it would give you an imaginary number, and this is unphysical. So suppose we let $\tau^2$ get as low as it can, i.e. zero; then: $$0 = c^2t^2 - x^2 - y^2 - z^2$$ and rearranging this gives: $$c^2 = \frac {x^2 + y^2 + z^2}{t^2}$$ but $x^2 + y^2 + z^2$ is just the distance (squared) as calculated by Pythagoras, so the right hand side is distance divided by time (squared), so it's a velocity, $v^2$, that is: $$c^2 = v^2$$ or obviously $$c = v$$ So that constant $c$ is actually a velocity, and what's more it's the fastest velocity that anything can travel, because if $v > c$ the proper time becomes imaginary. That's why in special relativity there is a maximum velocity for anything to move. Although it's customary to call this the speed of light, in fact it's the speed that any massless particle will move at. It just so happens that light is massless. -

So is it possible that the point that nothing can travel faster than light is wrong?

No. The "nothing can travel faster than light" restriction logically follows from the two postulates of special relativity. I'll try to briefly show you how to get to the conclusion.

1. First you have to convince yourself that the two postulates imply the phenomenon called the relativity of simultaneity. That is the first thing discussed in every textbook on special relativity, so I'm not getting into it.
2. Now we use the following consequence of p.1: suppose one could get from event A to event B only by moving with faster-than-light speed (the events are spacelike separated). Then we can change the time order of the events A and B just by changing our reference frame. We can make A and B simultaneous, make A precede B, or make B precede A -- all that just by moving to a different reference frame.
3. Now we can start a proof by contradiction. Suppose that we have some way to transmit faster-than-light signals. It then immediately follows from p.2 that we can transmit instantaneous signals (by making emission and reception events simultaneous) and even signals that are received before they are transmitted (by swapping the order of emission and reception events).
4. Imagine that we have two guys $\alpha$ and $\beta$, equipped with such a spectacular communication channel. Then $\alpha$ could send a signal to $\beta$ "back in time", and then $\beta$ will return the signal to $\alpha$ instantaneously. Which means that $\alpha$ will receive his own signal from the future. Such an ability instantly leads to lots of self-contradictory situations. Hence our assumption was false. -
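A small numerical sketch (made-up event coordinates, in units with $c=1$) illustrates the points above: the invariance of $\tau^2$ under boosts, the reversal of time order for spacelike-separated events, and the divergence of the energy formula as $v \to c$:

```python
import math

def boost(t, x, v):
    """Lorentz boost along x with speed v (units where c = 1)."""
    g = 1.0 / math.sqrt(1.0 - v * v)
    return g * (t - v * x), g * (x - v * t)

def tau2(t, x):
    """Interval tau^2 = t^2 - x^2 (y and z suppressed)."""
    return t * t - x * x

# A spacelike separation: |x| > |t|, so tau^2 < 0.
t, x = 1.0, 2.0
for v in (0.0, 0.3, 0.6, 0.9):
    tb, xb = boost(t, x, v)
    print(f"v={v:.1f}: t'={tb:+.3f}, x'={xb:+.3f}, tau^2={tau2(tb, xb):+.3f}")
# tau^2 is the same in every frame, while the sign of t' flips once
# v > t/x = 0.5 -- the time-order reversal used in the causality argument.

# The diverging energy E = m c^2 / sqrt(1 - v^2) as v -> c:
for v in (0.9, 0.99, 0.999):
    print(f"v={v}: E/mc^2 = {1.0 / math.sqrt(1.0 - v * v):.1f}")
```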
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 8, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9533883333206177, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/2597/energy-conservation-in-general-relativity/2609
# Energy conservation in General Relativity

I understand that energy conservation is not a rule in general relativity, but I'd like to know under what circumstances it can still be possible. In other words, when is it possible to associate a potential energy to the gravitational field, so that the energy is constant in the evolution of the system? Here are some examples; is there a convenient way to define energy in these scenarios?

• Just a system of gravitational waves.
• A point mass moving in a static (but otherwise arbitrary) space-time. Equivalent (if I'm not mistaken) to a test mass moving in the field of a second, much larger mass; the larger mass wouldn't move.
• Two rotating bodies of similar mass.

Overall, I'm trying to understand what keeps us from associating a potential energy to the metric. When we break the time translation symmetry of a system by introducing an electromagnetic field, we can still conserve energy by defining an electromagnetic potential energy. Why can't we do the same when we break TT symmetry by making space-time curved? -

– Marek Jan 7 '11 at 13:43
I am not sure, but I think it has something to do with the global properties of the metric, especially how it behaves asymptotically. – Sklivvz♦ Jan 7 '11 at 16:34
@Skli: I think so too. It must be some property of the metric that keeps us from defining a potential energy. And it probably comes from Einstein's field equations. But I have no idea what it could be. – Bruce Connor Jan 7 '11 at 16:54
@Sklivvz: it actually comes directly from the metric. The problem is that GR has the metric serving a dual role of being a measuring stick for objects AND also being the thing that defines what the gravitational field is. A dynamical field means that your measuring stick is changing, and then what do you use to measure THAT. – Jerry Schirmer Jan 7 '11 at 19:56

## 3 Answers

There are a few different ways of answering this one. For brevity, I'm going to be a bit hand-wavey. There is actually still some research going on with this.

Certain spacetimes will always have a conserved energy. These are the spacetimes that have what is called a global timelike (or, if you're wanting to be super careful and pedantic, perhaps null) Killing vector. Math types will define this as a vector whose lowered form satisfies the Killing equation: $\nabla_{a}\xi_{b} + \nabla_{b} \xi_{a} = 0$. Physicists will just say that $\xi^{a}$ is a vector that generates time (or null) translations of the spacetime, and that Killing's equation just tells us that these translations are symmetries of the spacetime's geometry. If this is true, it is pretty easy to show that all geodesics will have a conserved quantity associated with the time component of their translation, which we can interpret as the gravitational potential energy of the observer (though there are some new relativistic effects -- for instance, in the case of objects orbiting a star, you see a coupling between the mass of the star and the orbiting object's angular momentum that does not show up classically). The fact that you can define a conserved energy here is strongly associated with the fact that you can assign a conserved energy in any Hamiltonian system in which the time does not explicitly appear in the Hamiltonian: time translation being a symmetry of the Hamiltonian means that there is a conserved energy associated with that symmetry. If time translation is a symmetry of the spacetime, you get a conserved energy in exactly the same way.
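As a minimal symbolic illustration of Killing's equation, in the equivalent form $\mathcal{L}_{\xi}\, g = 0$ mentioned in the comments below, one can check directly that $\xi = \partial_t$ is a Killing vector of the Schwarzschild metric. This is a sketch using sympy with the standard metric and conventions; computing the Lie derivative avoids the Christoffel symbols entirely:

```python
import sympy as sp

t, r, th, ph = sp.symbols('t r theta phi')
M = sp.Symbol('M', positive=True)
x = [t, r, th, ph]

# Schwarzschild metric in units G = c = 1, signature (+,-,-,-).
f = 1 - 2 * M / r
g = sp.diag(f, -1 / f, -r**2, -r**2 * sp.sin(th)**2)

# Candidate Killing vector xi = d/dt, components in the coordinate basis.
xi = [sp.Integer(1), 0, 0, 0]

# (L_xi g)_{ab} = xi^c d_c g_{ab} + g_{cb} d_a xi^c + g_{ac} d_b xi^c
def lie_g(a, b):
    expr = sum(xi[c] * sp.diff(g[a, b], x[c]) for c in range(4))
    expr += sum(g[c, b] * sp.diff(xi[c], x[a]) for c in range(4))
    expr += sum(g[a, c] * sp.diff(xi[c], x[b]) for c in range(4))
    return sp.simplify(expr)

# All components vanish: the metric is t-independent, so d/dt is Killing.
print(all(lie_g(a, b) == 0 for a in range(4) for b in range(4)))  # True
```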
Secondly, you can have a surface in the spacetime (but not necessarily the whole spacetime) that has a conserved Killing tangent vector. Then the argument from above still follows, but that energy is a charge living on that surface. Since integrals over a surface can be converted to integrals over a bulk by Gauss's theorem, we can, in analogy with Gauss's Law, interpret these energies as the energy of the mass and energy inside the surface. If the surface is conformal spacelike infinity of an asymptotically flat spacetime, this is the ADM energy. If it is conformal null infinity of an asymptotically flat spacetime, it is the Bondi energy. You can associate similar charges with isolated horizons as well, as they have null Killing vectors associated with them, and this is the basis of the quasi-local energies worked out by York and Brown amongst others.

What you can't have is a globally defined tensor quantity that one can easily associate with the 'energy density' of the gravitational field, or a way to define one of these energies for a general spacetime. The reason for this is that one needs a time with which to associate a conserved quantity conjugate to time. But if there is no unique way of specifying time, and especially no way to specify time in such a way that it generates some sort of symmetry, then there is no way to move forward with this procedure. For this reason, a great many general spacetimes have quite pathological features. Only a very small proportion of known exact solutions to Einstein's equation are believed to have much to do with physics. -

I should point out that the essential difference with E&M here is that, although the electromagnetic field is dynamic, the Hamiltonian of the E&M field still does not explicitly include time -- it only includes time in the dependence of $\vec E$ and $\vec B$ on time. This makes time translation a symmetry of its Hamiltonian. For a spacetime with no timelike or null Killing vector, there is no vector that generates time translations in this way. And with no such symmetry, there is no energy defined. – Jerry Schirmer Jan 7 '11 at 19:59
2 Sorry I am nit-picky, but math-types would definitely define the Killing vector as ${\mathcal L}_{\xi}\, g = 0$ :-P – Marek Jan 7 '11 at 20:30
Heh. Fair enough. But they're equivalent if you have a metric-compatible connexion, and I didn't want to get into what the Lie Derivative was. – Jerry Schirmer Jan 7 '11 at 20:37
1 @jerry, added "small" in "only a very small proportion" ;-) Hopefully that's what you meant (maybe it was "tiny") – Sklivvz♦ Jan 7 '11 at 21:06
@Jerry: Thanks. I knew it had to be about breaking time translation, but I didn't know what broke it. – Bruce Connor Jan 8 '11 at 23:46
show 1 more comment

Energy conservation does work perfectly in general relativity. The overall Lagrangian is invariant under time translations, and Noether's theorem can be used to derive a non-trivial and exact conserved current for energy. The only thing that makes general relativity a little different from electromagnetism is that the time translation symmetry is part of a larger gauge symmetry, so time is not absolute and can be chosen in many ways. However, there is no problem with the derivation of conserved energy with respect to any given choice of time translation.

There is a long and interesting history to this problem. Einstein gave a valid formula for the energy in the gravitational field shortly after publishing general relativity.
The mathematicians Hilbert and Klein did not like the coordinate dependence in Einstein's formulation and claimed it reduced to a trivial identity. They enlisted Noether to work out a general formalism for conservation laws and claimed that her work supported their view. The debate continued for many years, especially in the context of gravitational waves, which some people claimed did not exist. They thought that the linearised solutions for gravitational waves were equivalent to flat space via co-ordinate transformations and that they carried no energy. At one point even Einstein doubted his own formalism, but later he returned to his original view that energy conservation holds up. The issue was finally resolved when exact non-linear gravitational wave solutions were found and it was shown that they do carry energy. Since then this has even been verified empirically to very high precision with the observation of the slowing down of binary pulsars, in exact agreement with the predicted radiation of gravitational energy from the system.

The formula for energy in general relativity is usually given in terms of pseudotensors such as those proposed by Landau & Lifshitz, Dirac, Weinberg or Einstein himself. Wikipedia has a good article on these and how they confirm energy conservation. Although pseudotensors are mathematically rigorous objects which can be understood as sections of jet bundles, some people don't like their apparent co-ordinate dependence. There are other covariant approaches, such as the Komar superpotential, or a more general formula of mine which gives the energy current in terms of the time translation vector $k^{\mu}$ as

$$J^{\mu}_G = \frac{1}{16\pi G} \left(k^{\mu}R - 2k^{\mu}\Lambda - 2{k^{\alpha}}_{;\alpha}{}^{;\mu} + {k^{\alpha;\mu}}_{;\alpha} + {k^{\mu;\alpha}}_{;\alpha}\right)$$

Despite these general formulations of energy conservation in general relativity, there are some cosmologists who still take the view that energy conservation is only approximate, or that it only works in special cases, or that it reduces to a trivial identity. In each case these claims can be refuted either by studying the formulations I have referenced or by comparing the arguments given by these cosmologists with analogous situations in other gauge theories where conservation laws are accepted and follow analogous rules. One area of particular contention is energy conservation in a homogeneous cosmology with cosmic radiation and a cosmological constant. Despite all the contrary claims, a valid formula for energy conservation in this case can be derived from the general methods and is given by this equation:

$$E = Mc^2 + \frac{\Gamma}{a} + \frac{\Lambda c^2}{\kappa}a^3 - \frac{3}{\kappa}\dot{a}^2a - Ka = 0$$

$a(t)$ is the universal expansion factor as a function of time, normalised to 1 at the current epoch.
$E$ is the total energy in an expanding region of volume $a(t)^3$. This always comes to zero in a perfectly homogeneous cosmology.
$M$ is the total mass of matter in the region.
$c$ is the speed of light.
$\Gamma$ is the density of cosmic radiation normalised to the current epoch.
$\Lambda$ is the cosmological constant, thought to be positive.
$\kappa$ is the gravitational coupling constant.
$K$ is a constant that is positive for spherical closed space, negative for hyperbolic space and zero for flat space.

The first two terms describe the energy in matter and radiation, with the matter energy not changing and the radiation decreasing as the universe expands. Both are positive.
The third term is "dark energy", which is currently thought to be positive and contributing about 75% of the non-gravitational energy, though this fraction increases with time. The final two terms represent the gravitational energy, which is negative to balance the other terms. This equation holds as a consequence of the well-known Friedmann cosmological equations, which come from the Einstein field equations, so it is in no sense trivial, as some people have claimed it must be. -

Seems like a very thorough work to me. – Robert Filter Jan 20 '11 at 10:56
Ok, so you can always define a gravitational energy so that the overall energy of the system is conserved. But is this definition of gravitational energy always the same predefined function of the metric, or do you need a new way to define it whenever you work with a new gravitational field? For instance, there's a single definition I can use for electromagnetic potential energy (in terms of the electric and magnetic field density), and this definition will guarantee conservation of energy for any system with any electromagnetic field. Is there a similar definition for gravitational energy? – Bruce Connor Jan 20 '11 at 13:19
Even for the electromagnetic field the energy depends on the reference frame. In general relativity there is a greater choice of valid reference frames which define different quantities for the energy. This is not a problem either in practice or in principle. For example, in the case of the cosmological solutions the choice of reference frame is usually taken to be comoving co-ordinates, which leads to the formula above. Other choices would also be valid. – Philip Gibbs Jan 20 '11 at 13:33
I gave a general expression above for the energy current 4-vector that works for any gravitational field. Perhaps I should have clarified that getting the energy in a region of space from this current just requires integrating the time component of the current over the region. – Philip Gibbs Jan 20 '11 at 13:36
Nah, it's clear enough. I just had to read your answer a couple more times. :-) – Bruce Connor Jan 20 '11 at 13:40
show 1 more comment

To carry on with Jerry Schirmer: the Killing vector defines isometries on a manifold. If there exists a Killing vector $K_t~=~\partial/\partial t$, this means the momentum $K_t\cdot P~=~$ constant. This then is a statement which can be interpreted as the constancy of an observable labeled energy. As a rule of thumb, if a metric component involves time in an explicit way, then for example $K_t~\propto~\sqrt{g_{tt}(t)}$ is not proper, and the action of this vector is not an isometry. This happens with the FLRW equations of cosmology. In the de Sitter form we have $$ds^2~=~dt^2~-~e^{2\sqrt{\Lambda/3}\,t}(dr^2~+~r^2d\Omega^2),$$ which has a time dependency. So we are not able to derive a conservation of energy from first principles. The Ricci curvature is $R_{\mu\nu}~=~\Lambda g_{\mu\nu}$, and for $k~=~0$ the spatial curvature is zero. The cosmological constant is dependent on the vacuum energy density, plus pressure terms. With the equation of state $p~=~-\rho$, which approximates observational data pretty well, one can do some detailed balance work to show the universe is a net nothing and remains so. Does this connect with something deeper than just detailed balance? It might, and I suspect that Philip's analysis connects with this. The de Sitter metric is a time-dependent conformal rescaling of a flat metric.
For $g'~=~\Omega^2g$ the line element for $g'$ is $$ds^2~=~\Omega^2(du^2~-~d\sigma_{space}^2).$$ However, for the time variable $du^2~=~\Omega^{-2}dt^2$; since $\Omega$ is time dependent, $$ds^2~=~dt^2~-~\Omega^2(t)d\sigma_{space}^2.$$ This recovers the de Sitter metric for $\Omega^2(t)~=~\exp(2\sqrt{\Lambda/3}\,t)$. The de Sitter spacetime is then conformally equivalent to a flat spacetime, which trivially has $K_t~=~\partial/\partial t$. So the spacetime we observe with the equation of state $p~=~-\rho$ belongs to a class of spacetimes which are conformal to flat spacetime and which also preserve $E~=~$ constant. I think that Philip's work on this matter projects out this special case of conformal spacetimes. -
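Philip Gibbs's cosmological energy balance can be checked numerically. The sketch below is my own illustration, with made-up parameter values, in units where $c = \kappa = 1$ and $K = 0$: it integrates the second-order (acceleration) Friedmann equation, which does not contain the matter potential term $Mc^2$ at all, and verifies that the total energy $E$ nevertheless stays at zero along the evolution:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Units with c = kappa = 1; parameter values are made up for illustration.
M, Gam, Lam, K = 1.0, 0.5, 0.7, 0.0

def energy(a, adot):
    """Total energy of a comoving region (Gibbs's formula; should be 0)."""
    return M + Gam / a + Lam * a**3 - 3 * adot**2 * a - K * a

def rhs(t, y):
    a, adot = y
    # Acceleration equation: equivalent to the standard second Friedmann
    # equation for matter + radiation + Lambda; note it is independent of M.
    addot = (3 * Lam * a**2 - Gam / a**2 - 3 * adot**2 - K) / (6 * a)
    return [adot, addot]

a0 = 1.0
adot0 = np.sqrt((M + Gam / a0 + Lam * a0**3 - K * a0) / (3 * a0))  # E(0) = 0
sol = solve_ivp(rhs, (0.0, 5.0), [a0, adot0], rtol=1e-10, atol=1e-12,
                dense_output=True)
for t in np.linspace(0.0, 5.0, 6):
    a, adot = sol.sol(t)
    print(f"t={t:.1f}: a={a:.3f}, E={energy(a, adot):+.2e}")
# E stays ~0 to integrator tolerance while the matter, radiation, dark-energy
# and (negative) gravitational terms all evolve individually.
```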
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 31, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.942771315574646, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?p=3799243
Physics Forums

## Time scale of the atom

Hi

I am reading about the force of a coherent EM beam acting upon an atom, and I have a question in this regard. It concerns the explanation on page 150 of this book, starting from "The geometric approximation of atom optics is valid when": http://books.google.dk/books?id=SUBH...20atom&f=false. It is only the first part of that page.

As far as I understand, what they are trying to tell us is that in order to treat the atom as a classical particle, the time it takes for the internal state to change (1/Gamma) has to be very short compared to the time it takes for the external dynamics to change. That is at least what the inequality says. Physically I don't see why this condition must be satisfied. Does it simply mean that the atom has to be in equilibrium at all times?

Best regards, Niles.

I can't see the page of the book, but your understanding is correct. Another way to look at this is to consider the energy scale of things: $E = \hbar \omega = \frac{h}{T}$. If the time scale of the atom is much faster than the external perturbation, then the perturbation will be on a much smaller energy scale than what's going on in the atom. Hence the atom will not be affected significantly. This is of course not an exact relation; it just gives you a rule of thumb for what is relevant and what can be neglected.

Thanks for replying. That relation makes good sense. Maybe this link works better (it is page 150 of the book): http://books.google.dk/books?id=SUBH...page&q&f=false The authors mention that this corresponds to the internal atomic dynamics following the center-of-mass motion of the atom adiabatically. When I hear "adiabatic", I would believe that it means that the atom stays in the same internal state during its center-of-mass motion. The above inequality states that the internal dynamics are very fast, so isn't it wrong to say that there is adiabatic following? It is the opposite of adiabatic following, since the internal state changes very fast during the external motion (?). Thanks in advance. Best, Niles.

Adiabatic in this context means without transfer of energy. Note the formula they give, $\omega_{\mathrm{rec}} = \frac{\hbar k^2}{2m} \ll \Gamma$. If you multiply both sides (all 3 sides :-) ) by $\hbar$, the formula compares 3 energies: $\frac{\hbar^2 k^2}{2m}$ is a kinetic energy; $\hbar \omega$ is the energy of an oscillator or wave (written as such to derive a characteristic time scale from the energy); $\hbar \Gamma$ is the width of an emission line (for example), which is finite because of the finite lifetime of the initial state.

Quote by M Quack: Note the formula they give, $\omega_{\mathrm{rec}} = \frac{\hbar k^2}{2m} \ll \Gamma$ …

Ah, thanks for making that clear.

Quote by M Quack: adiabatic in this context means without transfer of energy.

I have to admit I still don't fully understand "adiabatic" in this context.
So the internal dynamics follow the external COM motion without any energy transfer(?). I'm not even sure I know what that means.

Yes. Below, I try to give a more intuitive example.

Quote by Niles: Does it refer to the fact that the spontaneously emitted photons (characterized by $\Gamma^{-1}$) are emitted *much* faster than the atom moves, so their effect is zero on average since they are emitted so often?

It's more like the external perturbations are so weak that they do not excite any transitions in the atom. Whenever there is an emission line with a characteristic frequency or energy, there is a corresponding absorption or excitation process. What goes up must come down, so if the atom gets excited by some external stimulus it will eventually emit a photon and drop back into its ground state. That is very non-classical behavior, so the classical approximation is only good if you avoid that happening.

The concept of adiabatic changes is not limited to the quantum world. Think of the characteristic frequency of the atom as a resonance. A weight on a spring, for example, has a natural frequency. Hold the spring in your hand and let the weight bounce up and down. If you move your hand rapidly, the oscillation amplitude will increase. If you move your hand very slowly (on a time scale much slower than the oscillation period), the amplitude will not change noticeably.

To use M Quack's example, imagine that you have a spring in your hand with a weight attached. The spring has some damping, and you can move your hand around in a spatially varying gravitational field. You know that the equilibrium position of the spring depends on the local gravitational field. You also know that, because of the damping, the spring will reach this equilibrium position in a time that is roughly 1/(decay rate), provided you hold your hand still. But suppose your hand does move slowly. A question you could ask is: how slowly should your hand move so that the spring is always in local equilibrium? It's reasonable to suppose that you would want a large decay rate compared to the timescale of hand motion, so that you are effectively sitting in one place for much longer than it takes to reach equilibrium.

However, I'm also not sure this is precisely what this book is talking about. Looking at page 150 just above section 6.3, the book states that $\omega_{rec} \ll \Gamma$, which implies that $1/\Gamma \ll 1/\omega_{rec}$. However, just below that they state the internal timescale $1/\Gamma$ should be MUCH SLOWER than the external timescale $1/\omega_{rec}$. I would interpret much slower to mean the internal time scale is longer than the external timescale, which gives the opposite inequality. Perhaps I misread or misunderstood, or perhaps they meant much shorter?

Quote by M Quack: It's more like the external perturbations are so weak that they do not excite any transitions in the atom. Whenever there is an emission line with a characteristic frequency or energy, there is a corresponding absorption or excitation process.
What goes up must come down, so if the atom gets excited by some external stimulus it will eventually emit a photon and drop back into its ground state. That is very non-classical behavior, so the classical approximation is only good if you avoid that happening.

I'm sorry, but I have to admit that I don't agree with the bolded part. In order to exert a force on the atom, the photons have to induce transitions in the atom.

Quote by Physics Monkey: However, I'm also not sure this is precisely what this book is talking about. Looking at page 150 just above section 6.3, the book states that $\omega_{rec} \ll \Gamma$, which implies that $1/\Gamma \ll 1/\omega_{rec}$. However, just below that they state the internal timescale $1/\Gamma$ should be MUCH SLOWER than the external timescale $1/\omega_{rec}$. … Perhaps I misread or misunderstood, or perhaps they meant much shorter?

Thanks for that. I also paused when I read "much slower" for the first time. It must be a mistake by them, because the inequality is correct. I believe they also make another error when saying (page 150, top): "... ensures that the internal quasi-stationary state of the atom follows the center-of-mass dynamics adiabatically". If by "quasi-stationary" they mean "equilibrium", then I agree. Thanks for both your explanations of the mass-spring-hand system. It is a good analogy of the system. Best wishes, Niles.

Quote by Niles: I'm sorry, but I have to admit that I don't agree with the bolded part. In order to exert a force on the atom, the photons have to induce transitions in the atom.

No, you can perfectly well accelerate a Na+ ion in a weak electric field without exciting any transitions. Mass spectrometers and residual gas analyzers do that all over the place every day.

Quote by Niles: Thanks for that. I also paused when I read "much slower" for the first time. It must be a mistake by them, because the inequality is correct. …

I also think it is a mistake in the book. 1/Gamma is not a time scale, it is an energy scale. As for the mass-spring example, just try it out. Btw, it also works with a pendulum. If you wait much longer than the damping time, everything will be in the ground state, no matter where you started. But if you move slowly compared to the oscillation period, the state of the oscillator will not change significantly, even if it is already in motion. Just found this here: http://en.wikipedia.org/wiki/Adiabatic_theorem Obviously, in a classical system there is no energy gap.

Quote by M Quack: No, you can perfectly well accelerate a Na+ ion in a weak electric field without exciting any transitions. …

I see, I was specifically talking about slowing atoms with light.

Quote by M Quack: I also think it is a mistake in the book. 1/Gamma is not a time scale, it is an energy scale.

I would say 1/Gamma is a time scale, if Gamma is the inverse of the lifetime of the excited state. I believe that is how Gamma is usually defined.
I think it is correct to say that in our case, the adiabatic approximation ultimately means that the external perturbation does not perturb the resonance condition of the atom. Best, Niles.

Quote by Niles: I would say 1/Gamma is a time scale, if Gamma is the inverse of the lifetime of the excited state. I believe that is how Gamma is usually defined.

Oops, you are right. Too much back and forth between Gamma and 1/Gamma.

Thanks to both of you, I learned a lot from this. Best, Niles.

My pleasure. It is always nice to see a qualified question that is not just a homework problem :-)
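To put rough numbers on the inequality $\omega_{\mathrm{rec}} \ll \Gamma$ discussed in this thread, here is a short sketch using textbook values for the sodium D2 line; the numbers are quoted from memory and are approximate:

```python
import math

hbar = 1.054571817e-34       # J s
amu = 1.66053907e-27         # kg

# Sodium D2 line (approximate textbook values).
lam = 589e-9                 # wavelength, m
m = 23 * amu                 # atomic mass of Na, kg
Gamma = 2 * math.pi * 9.8e6  # natural linewidth, rad/s

k = 2 * math.pi / lam
omega_rec = hbar * k**2 / (2 * m)    # recoil frequency, rad/s

print(f"omega_rec / 2pi = {omega_rec / (2 * math.pi) / 1e3:.0f} kHz")
print(f"Gamma     / 2pi = {Gamma / (2 * math.pi) / 1e6:.1f} MHz")
print(f"omega_rec / Gamma = {omega_rec / Gamma:.1e}")
# ~2.5e-3 << 1: the internal state relaxes roughly 400 times faster than the
# centre-of-mass recoils, so the geometric (classical-particle) picture holds.
```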
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 21, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9494714140892029, "perplexity_flag": "head"}
http://www.euro-math-soc.eu/node/2805
# The European Mathematical Society

Hosted by:

## PhD Thesis proposal in Math & Computer Science

June 24, 2012 - 12:28 — Anonymous

Organization: University of Franche-Comté, France
Email: raphael.couturier@univ-fcomte.fr

Job Description:

Title: Semi-decentralized distributed control

Goal: The goal of this PhD is to study the design and the implementation of generic control algorithms for systems composed of distributed MEMS in a semi-decentralized context (i.e. only direct neighbors can communicate). Experiments will concern the "active skin" developed by the Mechanics department in the context of the Labex project ACTION (see icb.u-bourgogne.fr/labex/index.html). First of all, this platform will be adapted by a Master's student. Currently, this system performs distributed active control of noise propagating in a nozzle. The control is handled by microphones and speakers placed along the nozzle boundary. It is computed in real time by a network of independent microcontrollers, each of them associated with a sensor-actuator pair. The challenge is therefore to introduce the possibility of communication between neighboring microcontrollers, in order to implement control laws based on communication between neighbors.

The thesis will consist of a theoretical study of distributed control laws suited to the platform, and of their approximation for an implementation on the microcontroller network. Three controllers should be established, simulated and then implemented on the platform: two passive ones, based on the advection equation and on the square root of the wave operator (i.e. the d'Alembertian), and a dynamic one governed by an optimal control law such as LQG or H-infinity control. They will be tested on a simulation that will be built for the occasion. The work will be based on the theoretical frameworks introduced in [1] and [2]. The three steps regarding modeling, control law design and its approximation will have to be optimized as far as computation and communication constraints are concerned. The proposed models and the resulting algorithms should take into account the limited computing capacity of microcontrollers and the fact that communication is only available between direct neighbors. The key element is certainly the algorithmic complexity, in order to be able to obtain real-time control. In this context, communications play a major role. Finally, taking into account disturbances of communications and faults of computing units, sensors and actuators will make the system more robust.

Required skills: The candidate must have strong skills in Mathematics, especially in Partial Differential Equations, and in Computer Science.

Conditions: PhD Thesis at FEMTO-ST. Duration: 3 years. Salary: about 1400€/month. Start of the PhD Thesis: September 2012. Location: Belfort, France. Application deadline: 9 July 2012.

Please email a CV and a letter outlining your areas of research interest, along with names and contact details of two referees who can comment on your academic suitability, to: raphael.couturier@univ-fcomte.fr, michel.lenczner@utbm.fr

[1] M. Lenczner, G. Montseny, Y. Yakoubi, Diffusive Realizations for Solutions of Some Operator Equations: the One-Dimensional Case. Math. of Comp. 81 (2012), 319-344.

[2] M. Lenczner, Y. Yakoubi, Semi-decentralized Approximation of Optimal Control for Partial Differential Equations in Bounded Domains. Comptes Rendus de l'Académie des Sciences - Mécanique 337 (4) (2009), 245-250.

Job Categories: Research institutes
Deadline for Application: Jul 9 2012
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8867372274398804, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/109395/is-there-a-geometric-intuition-underlying-the-notion-of-normal-varieties/117333
## Is there a “geometric” intuition underlying the notion of normal varieties?

I first became conscious of the notion of normal varieties around 3 years ago and, despite the fact that by now I can manipulate it a bit, this notion still puzzles me a lot. One thing that strikes me is that the definition of normality is so entirely algebraic. To my common-sense understanding, the notion of normal varieties restricts the class of spaces that we consider to more or less reasonable ones. It looks to me that this definition is analogous to the definition of a pseudomanifold. At least the obvious similarity is that in both cases the set of non-singular points is connected.

Normality pops up everywhere and its definition is very short. But it is hard for me to imagine that a differential topologist or differential geometer could come up with such a definition. Why is the notion of normality so omnipresent? What is the "geometric" meaning of normality?

Maybe a more concrete question would be like this. Suppose $X$ is an irreducible algebraic subvariety in $\mathbb C^n$ with singularities in codimension $2$. Can one, by somehow looking at the singularities, their stratification and the way $X$ lies in $\mathbb C^n$, say whether it is normal or not?

Added. Who was the person who invented this notion?

I would like to thank everybody for useful comments and links. -

3 I don't know how helpful this is for your purposes, but there is the Serre criterion for normality: that $X$ satisfies your condition (that is, regular in codimension 1, usually written $R_1$) and the more technical condition $S_2$, which can be difficult to verify. See also Hailong Dao's answer to this question: mathoverflow.net/questions/60097/… – J.C. Ottem Oct 11 at 17:48
3 Maybe you want to check out Sandor Kovacs's answer to this question. mathoverflow.net/questions/35736/… – Thomas Kahle Oct 11 at 18:54
13 mumford emphasizes that normal implies locally unibranch. i like the criterion that a variety is normal iff any surjective finite birational map onto it is an isomorphism. – roy smith Oct 11 at 23:19
3 This might be a silly comment, but it seems to me that normality is a way to get an algebraic version of Hartogs' principle. – DamienC Oct 12 at 19:01
5 normal varieties were introduced by Zariski, in a paper in the Amer. Journal of Math, vol. 61, 1939, p.249ff, but announced in an earlier paper in 1937. – roy smith Oct 12 at 20:11
show 11 more comments

## 5 Answers

This is basically the same as roy smith's excellent comment, but I'd like to put a slightly different spin on it. A normal variety is a variety that has no undue gluing of subvarieties or tangent spaces. Let me explain what I mean by gluing. Given a variety $X$, a closed sub-scheme $Y \subseteq X$ and a finite (even surjective) map $Y \to Z$, you can glue $X$ and $Z$ along $Y$ (identifying points and tangent information). This is the pushout of the diagram $X \leftarrow Y \rightarrow Z$. You might not always get a scheme (although you do in the affine case) but you always get an algebraic space. In the affine case, this just corresponds to the pullback in the category of rings.

Example 1: $X = \mathbb{A}^1$ glued to $Z = \bullet$ (one point) along $Y = \bullet, \bullet$ (two points) is a nodal curve.

Example 2: $X = \mathbb{A}^1$ glued to $Z = \bullet$ (one point) along $Y = \star = \text{Spec } k[x]/x^2$ (a fuzzy point) gives you a cuspidal curve (a worked integral-closure computation for this example appears at the end of the thread).
Example 3: $X = \mathbb{A}^2$ glued to $Z = \mathbb{A}^1$ along one of the axes $Y = \mathbb{A}^1$ via the map $Y \to Z$ corresponding to $k[t^2] \subseteq k[t]$ gives you the pinch point / Whitney's umbrella $= \text{Spec } k[x^2, xy, y]$.

If I recall correctly, all non-normal varieties $W$ come about this way for some appropriate choice of normal $X$ (the normalization of $W$) and $Y$ and $Z$ (NOT UNIQUE). Roughly speaking, if you are given $W$ and want to construct $X, Y, Z$, do the following: let $X$ be the normalization, let $Z$ be some sufficiently deep thickening of the non-normal locus of $W$, and let $Y$ be an appropriate pre-image scheme of $Z$ in $X$.

Assuming this is true, you can see that all non-normal things are non-normal because they either have some points identified (as in 1 or 3) or some tangent space information killed / collapsed (as in example 2), or some combination of the two. -

Karl, thank you for the answer! In example 3, did you not mean to take $X=\mathbb A^2$? – aglearner Oct 12 at 21:01
very nice job. much more pregnant and also more clear and precise than my comment. this puts flesh on it and cries out for exploration. – roy smith Oct 13 at 0:03
@aglearner, you are right, that should be a 2. I fixed it. – Karl Schwede Oct 13 at 2:37
Here is an example I find hard to visualize fully geometrically by the method I called "geometric": take a smooth rational quartic curve in P^3 and let X be the cone over it in P^4. This seems to be a standard example of a non-normal surface satisfying R1 but not S2. One can use geometry to give a finite birational map to X (I hope) (by projecting the cone in P^5 over a rational quartic curve in P^4 onto X), but is there a "geometric" way to see this map is nontrivial? My feeling now is that S2, i.e. depth, is hard to make fully geometric. – roy smith Oct 24 at 2:50
1 So, the other way to break non-S2-ness, besides gluing points, is to kill tangent information at a point. That is what's going on with the first example (cone over the quartic rational curve in $\mathbb{P}^3$), although I must admit I don't have a good way to visualize that example. But generally, the easiest way to make a non-normal graded ring is to kill some low-degree terms in a normal graded ring. This will make something that is not S2 (unless the graded ring has dimension $\leq 1$). – Karl Schwede Oct 24 at 11:33
show 3 more comments

As J. C. Ottem has pointed out, Serre's criterion gives a clue. Normality is equivalent to $R_1 + S_2$. The interpretation of $R_1$ is easy: regular in codimension 1, i.e. the singular locus has codimension at least two; it is small, e.g. for a surface this means that the singular points are isolated. The interpretation of $S_2$ is the "extension property": every algebraic function defined on an open set whose complement is of codimension at least two extends to the whole variety. The proof of this fact is essentially contained in EGA IV$_2$ $\S$5.10. -

Leo, I looked up the reference on Numdam. Unfortunately this language is for the moment beyond what I can understand. But thank you for the answer and the reference!
– aglearner Oct 11 at 23:29
2 See an explanation here: mathoverflow.net/questions/45347/… – temp Oct 12 at 1:24

At least in the case of complex algebraic varieties one can give a nice topological interpretation of the normality condition. Let us consider a complex algebraic variety $V$; then its set of complex points $V(\mathbb{C})$ has the structure of a stratified pseudomanifold. Let me recall that a stratified pseudomanifold $X$ is a filtered topological space $$X_0\subset\ldots \subset X_n$$ such that each stratum, i.e. a connected component of $X_i-X_{i-1}$, is a manifold of dimension $i$, such that $X_{n-1}=X_{n-2}$, and such that the regular part $X_n-X_{n-2}$ is dense in $X$, together with a local condition: the existence of conical charts. Thus $V(\mathbb{C})$ comes equipped with such a geometric structure.

In the setting of stratified pseudomanifolds one has a notion of normal pseudomanifold, and normalization is a fundamental concept in intersection homology. A pseudomanifold $X$ of dimension $n$ is said to be normal if for every point $x\in X$ the local homology group $H_n(X,X-x;\mathbb{Z})$ is isomorphic to $\mathbb{Z}$. Notice that a homological manifold is normal. Using Zariski's Main Theorem, one can prove that a normal complex algebraic variety is a normal pseudomanifold.

If you consider a triangulation $T$ of $X$ ($\dim(X)=n$), then you can also prove that $X$ is normal if and only if the link of each simplex in the $(n-2)$-skeleton of $T$ is connected. This is proved in Goresky, MacPherson, "Intersection Homology Theory" (Topology Vol. 19 (1980)). In this paper the authors also explain how to build the normalization topologically and how the topological normalization satisfies a universal property. In the case of $V(\mathbb{C})$, its topological normalization in the sense of Goresky-MacPherson is homeomorphic to $V'(\mathbb{C})$, where $V'$ is the algebraic normalization of $V$.

Thus, topologically, normality corresponds to the connectivity of the links; the link of a point in an $n$-dimensional manifold being an $(n-1)$-sphere, we see that topological normalization is the very first step towards desingularization of stratified pseudomanifolds. Here are two examples:

1) The pinched torus is not normal. It is a complex projective curve $C$ of equation $x^3+y^3=xyz$ in homogeneous coordinates $[x:y:z]$. It has a unique singular point $[0:0:1]$, and the link of this point $p$ is homeomorphic to two circles (we have $H_2(C,C-p;\mathbb{Z})\cong \mathbb{Z}\oplus \mathbb{Z}$).

2) The quadric cone is normal. It is an algebraic surface $S$ of equation $x^2+y^2+z^2=0$ in $\mathbb{P}^3(\mathbb{C})$ in homogeneous coordinates $[x:y:z:w]$; it has a unique singular point $[0:0:0:1]$. We notice that this space is homeomorphic to the Thom space of the tangent bundle of the $2$-sphere $S^2$. This remark gives a homeomorphism between the link of the singular point and the unit sphere bundle of the tangent bundle of $S^2$, which is connected (so we get that $S$ is topologically normal).

Historically these two examples were important for our understanding of the failure of Poincaré duality for singular spaces; they appear in Zeeman's thesis:

E. C. Zeeman, "Dihomology III. A generalization of the Poincaré duality for manifolds", Proc. London Math. Soc. (3), 13 (1963), 155-183.

and also in McCrory's thesis:

C. McCrory, "Poincaré duality in spaces with singularities", Ph.D. thesis (Brandeis University, 1972). -

I'm confused about the case of a cuspidal curve. Isn't the link of the singularity of $y^2=x^3$ the trefoil knot?
This is not normal, but the link seems to be connected. What am I missing? – Jim Bryan Dec 28 at 16:56 I should have said that normal in the topological sense does not imply normal in the algebraic sense. The only thing we can say is that the cuspidal curve is topologically normal, thus homeomorphic to its algebraic normalization. – David C Dec 28 at 17:18 Dear David, thank you for this answer! – aglearner Dec 30 at 12:21 An excellent non-algebraic meaning (using analysis) of normality is found in Kollar's article in the Bulletin of the AMS (1987). Restrict to irreducible varieties $X$ so we can talk of function fields. A point $x_0\in X$ is considered normal when the following holds: whenever a rational function exhibits decent behaviour in a neighbourhood of $x_0$, it finds a place in the local ring of $X$ at $x_0$. Decent behaviour here is: if $f\in K(X)$ and if $|f(x)|$ remains a bounded function as $x$ approaches $x_0$ by paths lying in $X$, then $x_0$ should be good enough to admit $f$ in its local ring. This survey article of Kollar is about Mori's Fields-medal-winning work on 3-folds. But it starts from scratch, defining what an algebraic variety is. It is a great source to learn the meanings of fundamental objects of algebraic geometry (for example, Kollar explains why we have to deal with line bundles when we study projective varieties). - Dear P Vanchinathan, thank you very much for recommending this article! – aglearner Dec 30 at 12:11 Regarding the question "Who was the person who invented this notion?", a paper of H. T. Muhly provides interesting background (as well as a geometric interpretation) for projectively normal: In the terminology of the Italian School an algebraic variety is called "normal" if its system of hyperplane sections is complete. O. Zariski applies the term "normal" to an algebraic variety whose associated ring of homogeneous coordinates is integrally closed. The two concepts are not equivalent. Zariski refers to a variety which satisfies the former condition as "normal in the geometric sense" and to one which satisfies the latter condition as "normal in the arithmetic sense". (...) The object of this note is to characterize geometrically those algebraic varieties which are normal in the arithmetic sense. To this end we propose the following theorem: A necessary and sufficient condition that the $r$-dimensional algebraic variety $V_r$ be normal in its ambient projective space $P_n$ is that for every integer $m$ the linear system cut out on $V_r$ by the hypersurfaces of order $m$ in $P_n$ be complete. - Dear Francois, I think the last quoted paragraph is about characterizing projectively normal varieties. Regards, Matthew – Emerton Dec 27 at 21:38 2 Matthew -- Thanks; you are quite right and I have edited accordingly. (To me the main interest here was the statement that normality has origins earlier than Zariski.) – Francois Ziegler Dec 27 at 22:22 Dear Francois, it is indeed very interesting to know that Italians were already thinking of normality :) – aglearner Dec 30 at 12:12
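To make Karl Schwede's Example 3 above concrete, here is a minimal sympy sketch (my own, not from the thread) checking that the coordinate ring $k[x^2, xy, y]$ of the pinch point satisfies the Whitney umbrella relation $b^2 = ac^2$, and that the normalization map glues pairs of points along an axis:

```python
import sympy as sp

x, y, a, b, c = sp.symbols('x y a b c')

# the normalization map A^2 -> W sends (x, y) to (a, b, c) = (x^2, x*y, y)
param = {a: x**2, b: x*y, c: y}

# the image satisfies the Whitney umbrella equation b^2 - a*c^2 = 0:
print(sp.expand((b**2 - a*c**2).subs(param)))        # -> 0

# the map identifies (x, 0) and (-x, 0), which is the "points identified"
# source of non-normality in this example:
image = lambda u, w: tuple(expr.subs({x: u, y: w}) for expr in (x**2, x*y, y))
print(image(3, 0), image(-3, 0))                     # both (9, 0, 0)
```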
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 109, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9300470352172852, "perplexity_flag": "head"}
http://mathoverflow.net/questions/118310/base-change-of-semi-stable-curve-still-semi-stable
## Base change of semi-stable curve still semi-stable Let $S$ be a scheme. A smooth curve over $S$ is a smooth projective $S$-scheme of relative dimension $1$ with geometrically connected fibres. A semi-stable curve over $S$ is a flat projective $S$-scheme of relative dimension $1$ whose geometric fibres are connected, reduced and have only ordinary double singularities. Let $T\to S$ be a morphism. Then, if $X\to S$ is a smooth curve over $S$, the base change $X_T\to T$ is a smooth curve over $T$. Suppose that $X\to S$ is a semi-stable curve over $S$. Why is the base change $X_T\to T$ a semi-stable curve over $T$? This should be true, but I can't seem to prove it easily. The problem is that I fear the singularities might become worse after a base change. Let me emphasize that I do not assume the total space $X$ to be non-singular in these definitions. Neither do I assume anything on $S$, besides maybe some finiteness conditions if you'd like. - What do you mean by a semi-stable curve? – Sasha Jan 7 at 22:01 5 Flatness is preserved by base-change, as is projectivity and relative dimension (if $T/S$ is locally of finite type). So the only condition left to check is on the geometric fibers. But the geometric fibers of $X_T\to T$ are literally the same as the geometric fibers of $X\to S$. More specifically, if $f: T\to S$ is the morphism and $t$ is a geometric point of $T$, $X_t=X_{f(t)}$ canonically. As $X\to S$ is semi-stable, the geometric fibers satisfy the required conditions so we're done. – Daniel Litt Jan 7 at 22:19 3 @Daniel Litt: You invoke a fact not obvious to a beginner (as Masse is likely to be) that underlies your use of the phrase "geometric point": if $K/k$ is an extension of algebraically closed fields and $X$ is a $k$-scheme of finite type with pure dimension 1 then $X$ is reduced with $O_{X,x}^{\wedge} = k[[u,v]]/(uv)$ for all non-smooth $x\in X(k)$ if and only if $X_K$ is reduced with $O_{X_K,x}^{\wedge} = K[[u,v]]/(uv)$ for all non-smooth $x\in X_K(K)$. Strictly speaking you use the easier "only if" direction, but one needs "iff" to have a robust notion and proving "if" uses serious technique. – kreck Jan 7 at 23:31 4 @Masse: It is absolutely not true that $X_T$ is normal as a scheme; consider the case when $T$ is a point! Rethink whatever "exercises in commutative algebra" made you think it is normal (e.g., perhaps you assumed $T$ is normal noetherian and the generic fibers are smooth?). – kreck Jan 7 at 23:33 1 @Daniel Litt and kreck. Thank you very much. This answers my question fully. – Masse Jan 8 at 7:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 38, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9306076765060425, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/14751/how-did-l-h-thomas-derive-his-1927-expressions-for-an-electron-with-an-axis
# How did L.H. Thomas derive his 1927 expressions for an electron with an axis? I'm looking at the 1927 paper of Thomas, The Kinematics of an Electron with an Axis, where he shows that the instantaneous co-moving frame of an accelerating electron rotates and moves with some infinitesimal velocity. He states: At $t=t_0$ let the electron have position $\mathbf{r}_0$ and velocity $\mathbf{v}_0$, with $\beta_0=(1-{\mathbf{v}_0}^2/c^2)^{-\frac 1 2}$, in $(\mathbf{r}, t)$. Then, by (2.1), that definite system of coordinates $(\mathbf{R}_0, T_0)$ in which the electron is instantaneously at rest at the origin and which is obtained from $(\mathbf{r},t)$ by a translation and a Lorentz transformation without rotation is given by $$\begin{align*}\mathbf{R}_0 &= \mathbf{r} - \mathbf{r}_0 + (\beta_0 - 1)\frac{ (\mathbf{r}-\mathbf{r}_0)\cdot \mathbf{v}_0}{{\mathbf{v}_0}^2}\mathbf{v}_0-\beta_0 \mathbf{v}_0(t-t_0)\tag{3.1a}\\ T_0 &= \beta_0\left( t - t_0 - \frac {(\mathbf{r} - \mathbf{r}_0)\cdot \mathbf{v}_0}{c^2}\right)\tag{3.1b}\end{align*}$$ By eliminating $(\mathbf{r},t)$ from equations (3.1) and the similar equations for $(\mathbf{R}_1, T_1)$,$$\begin{align*}\mathbf{R}_1 &= \mathbf{R}_0 + \frac{(\beta_0 - 1)}{{\mathbf{v}_0}^2}(\mathbf{R}_0\times (\mathbf{v}_0\times \mathbf{dv}_0)) - \beta_0 T_0\left(\mathbf{dv}_0 + (\beta_0 - 1)\frac{(\mathbf{v}_0\cdot \mathbf{dv}_0)}{{\mathbf{v}_0}^2}\mathbf{v}_0\right)\tag{3.3a}\\ T_1 &= T_0 - \frac{\beta_0}{c^2}\left(\mathbf{R}_0\cdot\left(\mathbf{dv}_0 + (\beta_0 - 1)\frac{(\mathbf{v}_0\cdot \mathbf{dv}_0)}{{\mathbf{v}_0}^2}\mathbf{v}_0\right)\right) - d\tau_0\tag{3.3b} \end{align*}$$ What are the steps to get from (3.1) to (3.3)? - ## 2 Answers I will reference the original paper ( http://www.clifford.org/drbill/csueb/4250/topics/thomas_papers/Thomas1927.pdf ). One key to the Thomas precession is understanding infinitesimal Lorentz transformations, since one needs to understand the effects of transforming from the lab frame into the "rotating" rest frame of the electron and vice versa. First, you need to understand how to get from 2.1 to 2.2. Equation 2.1 is the Lorentz transformation for the motion of a frame with an arbitrary direction. Look here for a good discussion of arbitrary Lorentz transformations and Thomas precession in a modern context, using only vector notation at first, starting in section 15.2; the discussion continues on your topic through section 17. To derive 2.2, we need to understand how successive Lorentz transformations behave. I think the approach in Thomas is pretty good. What you need to do is compute the transformation with velocity -v, followed by a transformation with v+dv. It is more clearly seen if you do this for motion along one axis at first; the algebra is easier to follow. Then you will see that equation 2.2 collapses to the result for 1D motion, and this is also done explicitly in the above reference. Equation 3.1 is the full NON-rotational Lorentz transformation, but therein lies the problem. We need to transform from a rotating frame back into the lab frame. Let's look at the time transformation in equation 3.23. Tau1 is T1 and Tau0 is T0. Substitute dt0/beta0 for ds0. Thomas states that n0 is the linear acceleration, which is equal to dv0/dt0. Substitute and cancel dt0 (a scalar). The transformation in 3.23 assumes the electron is stationary but rotating, so rho0 is R0; essentially the angular position indicated by the Eulerian angles can be decoupled from the relative position.
After substitution, note that the time equation in 3.23 has the same form as the time equation in 3.1, except we now have a dv0 term instead of v0. Thomas states "infinitesimal...must be of the form" 3.23, which makes sense and comes from classical mechanics. But what is dv0? It is the infinitesimal velocity given in 2.2. Substitute this infinitesimal velocity into 3.23 and the time transformation in 3.3 follows. The R0 term in 3.1 transforms in the same way, except the algebra is a bit messier. The form for the infinitesimal transformation for R0 given by 3.23 is a consequence of classical mechanics (cf. the Coriolis effect). A position vector in a non-rotating frame can be expressed in a rotating frame (for instance the electron's rest frame) by adding the vector's time rate of change in direction due to the rotation of the frame. This is also discussed in the reference. - I love your first paragraph which is very useful, but you didn't answer the question I asked apart from "The R0 term in 3.1 transforms in the same way, except the algebra is a bit messier", which isn't useful. – Larry Harson Sep 17 '11 at 12:43 Thomas is using suboptimal language in his paper. I will answer the specific question you asked, but it is best to do it in a more modern way, which has more feeling for the geometry of Lorentz space. I will digress to do this first.

### Geometrical precession

To understand the effect, it is best to start with the analogous effect in geometry. Suppose I have a curve parametrized by arclength x(s), where x is a vector in 3-space. Consider sliding a frame of three vectors along the curve, so that the z-axis is always parallel to the tangent of the curve, and at any time, you go forward along the curve by tilting the frame so that the tilting does no rotation of the x-y plane at any instant. Suppose the curve starts and ends parallel to the z-axis. What is the total net x-y rotation of the frame beginning to end? The answer is not zero. As you go around the curve, the tangent vector is making a loop on the two-sphere. It starts out at the north pole, wanders around, then comes back to the north pole. As this is happening, the x-y plane is parallel transported to stay tangent to the sphere, and the total turn angle you get is equal to the curvature of the sphere times the area of the loop. That this is true is easy to see (it's one of Gauss's theorems)--- the turning angle is additive over loops end to end, and for an infinitesimal square, it is the definition of the intrinsic curvature. This gives the rotation angle for a loop. This corresponds to a curve which ends with the same tangential direction as where it started. You can't define a frame everywhere on the tangent plane of the sphere at once and ask what angle you make with respect to this frame, because you can't comb the hair on a sphere. Things are only this simple in 3d, because the 2d rotation group is abelian.

### Thomas Precession in 2+1 dimensions

This is just as simple. Now you have a curve in space-time x(s) parametrized by relativistic arc-length. The tangent vector to the curve is making a path on the unit hyperbola in Minkowski space, $t^2 -x^2 - y^2 = 1$. This surface has constant curvature inside Minkowski space, because any point can be moved to the origin by a boost. The curvature of this space is one, as can be seen from the commutator of two infinitesimal Lorentz transformations.
So if you start the electron at rest, and you return at rest, the angle of rotation is equal to the area cut out by the path of the tangent vector on the unit hyperbola. This solves the problem, except for the area of the hyperbola. This area can be worked out by noting that for any two vectors v and w, the area they bound is: $$|A| = \int_{(x,y)\in A} {1\over 1+x^2 + y^2} dx dy$$ If the electron doesn't wind up at rest, you can still work out the angle relative to a frame at every point of the hyperbola, because you can comb the hair on a hyperbola. The way you do this is to boost from the origin to velocity v, and define the result of boosting the x,y,z,t vectors to be unrotated. This is what Thomas does at each point of the velocity hyperbola. This then allows him to define the precession amount when you go from any vector to a nearby one.

### Precession in 3+1 dimensions

For this, all you need to note is that at any time, the velocity, the acceleration and time make a 2+1 dimensional space. At any one time, the amount of precession is just given by the 2 dimensional precession from above.

### Thomas's paper

Thomas first writes down the Lorentz transformations to boost the time axis to have slope v. I set c to 1, capitalized (r,t) to (R,T) because these are frame variables and this really should be consistent, and got rid of the bolding on the vectors: $$R_0 = R - r_0 + (\beta_0 - 1){ (R-r_0)\cdot v_0 \over v_0^2} v_0- \beta_0 v_0(T-t_0)$$ $$T_0 = \beta_0 ( T - t_0 - (R - r_0)\cdot v_0)$$ That is copied from your question, to get the notation straight. All the capital R's and T's are frame variables--- as they vary, they describe the whole space, and their only purpose is place-holders for describing the Lorentz transformation involved. He then writes down the Lorentz transformation for another time by copy-paste, changing 0 to 1: $$R_1 = R - r_1 + (\beta_1 - 1){ (R-r_1)\cdot v_1 \over v_1^2} v_1- \beta_1 v_1(T-t_1)$$ $$T_1 = \beta_1 ( T - t_1 - (R - r_1)\cdot v_1)$$ Then he notes that $v_1$ is only infinitesimally different from $v_0$, $v_1=v_0 + dv_0$ (you should have said that in the question), and expands the above to first order in dv. Next he solves for $(R,T)$ in terms of $(R_0,T_0)$, getting that they are related by a boost of -v. He then substitutes the R,T solution into the equation for $R_1,T_1$ to determine what Lorentz transformation has occurred between frame $R_1,T_1$ and $R_0,T_0$. He gets: $$R_1 = R_0 + {(\beta_0 - 1)\over v_0^2}(R_0\times (v_0\times dv_0)) - \beta_0 T_0\left(dv_0 + (\beta_0 - 1){(v_0\cdot dv_0)\over v_0^2}v_0\right)$$ $$T_1 = T_0 - \beta_0\left(R_0\cdot \left(dv_0 + (\beta_0 - 1){(v_0\cdot dv_0)\over v_0^2}v_0\right)\right) - d\tau_0$$ This is just an inane way to compose Lorentz transformations. The factor of $dv_0 + (\beta_0 - 1){(v_0\cdot dv_0)\over v_0^2}v_0$ is the infinitesimal velocity of the particle at 2 when viewed in frame 1. The result is a translation, an infinitesimal boost, and an infinitesimal rotation. That the rotation is significant is because it is relative to the combing of the hyperbola defined before.

### Checking Thomas's work

Thomas almost certainly didn't do the steps above in his personal notebook--- that's just what he wrote in the published paper. Nobody should check using the steps he gives in his paper, it would be silly. The way you check his work is to ignore the translations, just look at Lorentz boosts between the frames.
Then you choose your x axis in the direction of $v_0$, which I will call v below, and you choose the y-axis so the acceleration dv is in the x-y plane. To boost from zero velocity to velocity "v" in the x-direction in three dimensions (which is the general case) you do: $$x_0 = \beta(x - v t) = x+ (\beta-1)x - \beta vt$$ $$y_0 = y$$ $$t_0 = \beta(t - vx)$$ The reason for separating out $\beta-1$ is to write the Lorentz transformation in vector form. From the above, you conclude that the general Lorentz boost is: $$r_0 = r + (\beta -1) {v\cdot r\over v^2} v - \beta vt$$ $$t_0 = \beta (t - v\cdot r )$$ Where the dot product times v over v^2 is the vector way of selecting a component in the direction of v. This is correct, because it is in vector language, so it is rotationally invariant, and reduces to the equations above it when you choose the x-axis along v. This is what Thomas means by a "Lorentz boost without rotation", a Lorentz boost of this type. These boosts are not a group; if you compose them, you get rotations. That's all that Thomas is doing. Going back to the case of v in the x direction, write the reverse transformation, which tells you x,y,t in terms of $x_0,y_0,t_0$: $$x = \beta( x_0 + vt_0)$$ $$y = y_0$$ $$t = \beta(t_0 + vx_0)$$ This is what Thomas means by "solve for $(r,t)$ in terms of $(R_0,T_0)$". Next you need a boost which takes zero velocity to a velocity v in the x direction and dv in the y direction (I am assuming that the acceleration dv is all in the y direction; this turns out to be the general case, since the x component doesn't do any rotation). This doesn't change $\beta$ at all to leading order in dv. Using the vectorial form for Lorentz transformations $$x_1 = x + (\beta-1)(x + y{dv \over v}) - \beta vt = \beta(x-vt) + (\beta-1) {dv\over v}y$$ $$y_1 = y + (\beta-1) {dv\over v} x - \beta dv t$$ $$t_1 = \beta (t - v x) - \beta dv y$$ Now you plug in the formulas for $x,y,t$ in terms of $x_0,y_0,t_0$ to get $$x_1 = x_0 + (\beta-1){dv \over v} y_0$$ $$y_1 = y_0 - (\beta-1){dv \over v} x_0 - \beta dv t_0$$ $$t_1 = t_0 - \beta dv y_0$$ This is a superposition of an infinitesimal rotation in the x-y plane of magnitude $(\beta_0 -1) {dv\over v}$ and an infinitesimal boost in the y direction of some magnitude you don't care about. The answer is linear in dv, because it is to linear order. If $dv$ were in the x direction, the answer would have been just a boost, because the 1-d Lorentz boosts form a group just by themselves, and who cares. From the above, you conclude that the amount of rotation is given by the component of dv perpendicular to v, times $(\beta-1)$, times the magnitude of v, divided by v^2, so that the $\omega$ vector is $$\omega = (\beta-1){v\times dv \over v^2}$$ This verifies that the rotation part of Thomas's paper is accurate. When reading old papers which use vector notation, it is important to read between the lines like this, because this is almost certainly what Thomas did privately before publishing. When reproducing the work, you can't just fill in the intermediate steps. - I think Thomas fixes some arbitrary point $(R,t)$ in the lab frame, and subtracts its position in $(R_1, T_1)$ from its position in $(R_0, T_0)$ to give the change in orientation of the co-moving frame of the electron. This is why showing your working in detail would help in proving what you're doing gives the right answer. – John McVirgo Sep 18 '11 at 0:37 @John: He doesn't do any subtraction explicitly.
He is just writing the inverse-boost R,T as a function of R_0,T_0, then substituting this in the formula for R_1,T_1. I will write more. The answer is obviously a rotation, which is given by the area of the 0-v-v+dv triangle on the hyperbola, plus a boost by an amount you don't care about, dv times something, which I didn't bother to check, which is what he wrote down. I would add, this is all best done abstractly--- this is the wrong way to compose Lorentz transformations – Ron Maimon Sep 18 '11 at 2:37 "Nobody should check using the steps he gives in his paper, it would be silly." But this is what the question asks! Maybe the questioner doesn't possess your level of mathematical sophistication. – John McVirgo Sep 18 '11 at 9:20 I did check the steps! I just didn't do it the exact way it is described in the paper, but by picking a coordinate system with the x-axis along v and the y-axis along dv. This is the opposite of sophistication--- the algebra is more elementary. I insist that nobody, including Thomas and the referee, ever checked it by doing the substitution and expansion that Thomas suggests (although it is clear that they would work). – Ron Maimon Sep 18 '11 at 9:45 Where does he say "solve for $(r,t)$ in terms of $(R_0,T_0)$"? I can see where he says "By eliminating (r,t) from equations (3.1) and the similar equations for $(R_1, T_1)$", which isn't what you've done, is it? – John McVirgo Sep 18 '11 at 10:34
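For readers who want to check the boost-composition algebra above without grinding through it by hand, here is a small sympy sketch (my own script, with c = 1 and the x-axis along v, following the coordinates used in the last answer) that expands the composition to first order in dv:

```python
import sympy as sp

v, dv, x0, y0, t0 = sp.symbols('v dv x0 y0 t0', real=True)
b = 1 / sp.sqrt(1 - v**2)            # beta = (1 - v^2)^(-1/2), with c = 1

# lab-frame coordinates in terms of the rest-frame (frame 0) coordinates,
# i.e. the inverse of the boost with velocity v along x:
x = b * (x0 + v * t0)
y = y0
t = b * (t0 + v * x0)

# boost with velocity (v, dv); gamma is still b to first order in dv
udotr = v * x + dv * y
x1 = x + (b - 1) * udotr / v**2 * v - b * v * t
y1 = y + (b - 1) * udotr / v**2 * dv - b * dv * t
t1 = b * (t - udotr)

def first_order(expr):
    """Drop all O(dv^2) terms."""
    return sp.simplify(expr.subs(dv, 0) + sp.diff(expr, dv).subs(dv, 0) * dv)

# expected (with b written out as 1/sqrt(1 - v**2)):
#   x1 = x0 + (b-1)*(dv/v)*y0            <- the infinitesimal rotation
#   y1 = y0 - (b-1)*(dv/v)*x0 - b*dv*t0
#   t1 = t0 - b*dv*y0                    <- the infinitesimal boost
for name, expr in [('x1', x1), ('y1', y1), ('t1', t1)]:
    print(name, '=', first_order(expr))
```

The antisymmetric $\pm(\beta-1)(dv/v)$ cross terms between $x$ and $y$ are the Thomas precession, and the $\beta\,dv$ terms are the accompanying infinitesimal boost, matching Thomas's (3.3) with c = 1.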
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 35, "mathjax_display_tex": 25, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9411793947219849, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/190294-derivative-using-product-rule.html
# Thread: 1. ## Derivative by using product rule I have been doing a problem but I am currently lost. I want to know how to do these types of problems since they will be on a test. The problem is as follows: x^2(2/x - 1/(x+1)) Please help and show and explain each step. I have looked everywhere for help but nobody wants to or knows how to. Please help. 2. ## Re: Derivative by using product rule Originally Posted by jordanchamp07 I have been doing a problem but I am currently lost. I want to know how to do these types of problems since they will be on a test. The problem is as follows: x^2(2/x - 1/(x+1)) Please help and show and explain each step. I have looked everywhere for help but nobody wants to or knows how to. Please help. No wonder, since that's an awful way to write it: $x^2 \left(\dfrac{\frac{2}{x-1}}{x+1}\right) = x^2 \left(\dfrac{2}{(x-1)(x+1)}\right)$ OR $x^2 \left(\dfrac{2}{\frac{x-1}{x+1}}\right) = x^2(x+1) \left(\dfrac{2}{x-1}\right)$ 3. ## Re: Derivative by using product rule Well, it's x^2((2/x) - (1/(x+1))). 4. ## Re: Derivative by using product rule Originally Posted by jordanchamp07 Well, it's x^2((2/x) - (1/(x+1))). $x^2 \left(\dfrac{2}{x} - \dfrac{1}{x+1}\right) = \dfrac{2x^2}{x} - \dfrac{x^2}{x+1} = 2x - \dfrac{x^2}{x+1}$ The first term is simple enough to do. For the second, write it as $u = x^2$ and $v = (x+1)^{-1}$ and use the product rule. 5. ## Re: Derivative by using product rule Man, I'm still lost. Sorry, I am not very bright. 6. ## Re: Derivative by using product rule Originally Posted by jordanchamp07 Man, I'm still lost. Sorry, I am not very bright. Then you need to say exactly what part or parts of post #4 you don't understand. It may well be that you need to go back and review easier questions before attempting this question. You might also need to go back and review your algebra - most difficulties in calculus are caused by a lack of algebraic competency (competency in algebra will be assumed by your instructor). Since the name of your post is "use the product rule", I'd suggest you start by noting that if you have $x^2 \left( \frac{2}{x} - \frac{1}{x+1}\right)$ then $u = x^2$ and $v = \frac{2}{x} - \frac{1}{x+1}$. Now go and look up the product rule to see what to do. Now go to your notes and review how to differentiate standard forms. 7. ## Re: Derivative by using product rule Well, I really don't have much time to go back and review (if I had time I would go back). They teach me to do fs' + sf', but I do not know how to find the derivative of the second term. Is there a rewrite for it? My notes don't have this type of problem. 8. ## Re: Derivative by using product rule What "second term" are you talking about? If you mean the $(x+1)^{-1}$ factor, just use the power rule, $(x^n)'= nx^{n-1}$, together with the chain rule: $[(x+1)^{-1}]'= (-1)(x+1)^{-2}$ times the derivative of x + 1. Do you know how to differentiate x + 1?
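For anyone checking their work on this thread's problem, here is a quick sympy sketch (my own, not from the thread) that applies the product rule with $u = x^2$, $v = \frac{2}{x} - \frac{1}{x+1}$ and confirms it against direct differentiation:

```python
import sympy as sp

x = sp.symbols('x')
u = x**2
v = 2/x - 1/(x + 1)

# product rule: (u*v)' = u'*v + u*v'
by_product_rule = sp.diff(u, x)*v + u*sp.diff(v, x)
direct = sp.diff(u*v, x)

print(sp.simplify(by_product_rule - direct))   # -> 0, the two agree
print(sp.simplify(direct))                     # equivalent to (x**2 + 2*x + 2)/(x + 1)**2
```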
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.964823305606842, "perplexity_flag": "middle"}
http://stats.stackexchange.com/questions/44261/how-to-determine-the-quality-of-a-multiclass-classifier/44522
# How to determine the quality of a multiclass classifier

Given

• a dataset with instances $x_i$ together with $N$ classes, where every instance $x_i$ belongs to exactly one class $y_i$
• a multiclass classifier

After the training and testing I basically have a table with the true class $y_i$ and the predicted class $a_i$ for every instance $x_i$ in the test set. So for every instance I have either a match ($y_i= a_i$) or a miss ($y_i\neq a_i$). How can I evaluate the quality of the match? The issue is that some classes can have many members, i.e. many instances belong to them. Obviously if 50% of all data points belong to one class and my final classifier is 50% correct overall, I have gained nothing. I could have just as well made a trivial classifier which outputs that biggest class no matter what the input is. Is there a standard method to estimate the quality of a classifier based on the known testing set results of matches and misses for each class? Maybe it's even important to distinguish matching rates for each particular class? The simplest approach I can think of is to exclude the correct matches of the biggest class. What else? - – steffen Nov 27 '12 at 9:27 I think this is the source of my confusion: In the first paragraph you state ..where yi is the real classes and...: Do you mean that an instance $x_i$ can belong to / has more than one class? Or does every $x_i$ belong to / have exactly one class? Can you please clarify? – steffen Nov 27 '12 at 9:30 @steffen: I've seen the confusion matrix. In my particular case I have 4 classes. So I'm not sure which derived measures can be used and would make sense. Each $x_i$ belongs to only one class. However there are more than two possible classes overall, $i\in [1,\cdots,N]$. – Gerenuk Nov 27 '12 at 13:24 @steffen Those derived measures are primarily applicable to binary classification, whereas this question explicitly is dealing with more than two classes. This then requires a modified understanding of terms like "true positive." – Michael McGowan Nov 27 '12 at 14:36 @MichaelMcGowan I have asked the OP for clarification and afterwards performed an edit to explicitly reflect the multiclass issue, which was not obvious before the edit (IMHO). – steffen Nov 27 '12 at 14:47 ## 2 Answers Like binary classification, you can use the empirical error rate for estimating the quality of your classifier. Let $g$ be a classifier, and $x_i$ and $y_i$ be respectively an example in your database and its class. $$err(g) = \frac{1}{n} \sum_{i \leq n} \mathbb{1}_{g(x_i) \neq y_i}$$ As you said, when the classes are unbalanced, the baseline is not 50% but the proportion of the bigger class. You could add a weight on each class to balance the error. Let $W_y$ be the weight of class $y$. Set the weights such that $\frac{1}{W_y} \sim \frac{1}{n}\sum_{i \leq n} \mathbb{1}_{y_i = y}$ and define the weighted empirical error $$err_W(g) = \frac{1}{n} \sum_{i \leq n} W_{y_i} \mathbb{1}_{g(x_i) \neq y_i}$$ As Steffen said, the confusion matrix could be a good way to estimate the quality of a classifier. In the binary case, you can derive some measures from this matrix, such as sensitivity and specificity, estimating the capability of a classifier to detect a particular class. The errors of a classifier might also be of a particular kind. For example, a classifier could be too confident when predicting a 1, but never be wrong when predicting a 0.
Many classifiers can be parametrized to control this rate (false positives vs false negatives), and you are then interested in the quality of the whole family of classifiers, not just one. From this you can plot the ROC curve, and measuring the area under the ROC curve gives you the quality of those classifiers. ROC curves can be extended for your multiclass problem. I suggest you read the answer of this thread. - To evaluate multi-way text classification systems, I use micro- and macro-averaged F1 (F-measure). The F-measure is essentially a weighted combination of precision and recall. For binary classification, the micro and macro approaches are the same, but, for the multi-way case, I think they might help you out. You can think of Micro F1 as a weighted combination of precision and recall that gives equal weight to every document, while Macro F1 gives equal weight to every class. For each, the F-measure equation is the same, but you calculate precision and recall differently: $$F = \frac{(\beta^{2} + 1)PR}{\beta^{2}P+R},$$ where $\beta$ is typically set to 1. Then, $$P_{micro}=\frac{\sum^{|C|}_{i=1}TP_{i}}{\sum^{|C|}_{i=1}(TP_{i}+FP_{i})}, \quad R_{micro}=\frac{\sum^{|C|}_{i=1}TP_{i}}{\sum^{|C|}_{i=1}(TP_{i}+FN_{i})}$$ $$P_{macro}=\frac{1}{|C|}\sum^{|C|}_{i=1}\frac{TP_{i}}{TP_{i}+FP_{i}}, \quad R_{macro}=\frac{1}{|C|}\sum^{|C|}_{i=1}\frac{TP_{i}}{TP_{i}+FN_{i}}$$ where $TP_i$ is the number of true positives for class $i$, $FP_i$ the false positives, $FN_i$ the false negatives, and $C$ is the set of classes. -
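As an illustration of these formulas, here is a small sketch (my own, not from the thread; the confusion-matrix values are made up) that computes micro- and macro-averaged F1 directly from a confusion matrix. One design note: for single-label multiclass problems, the micro sums in the denominators both equal the total number of instances, so micro precision, micro recall, and hence micro F1 all reduce to plain accuracy.

```python
import numpy as np

def micro_macro_f1(conf):
    """conf[i, j] = number of class-i instances predicted as class j."""
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp        # predicted as i, actually something else
    fn = conf.sum(axis=1) - tp        # actually i, predicted as something else

    p_micro = tp.sum() / (tp + fp).sum()
    r_micro = tp.sum() / (tp + fn).sum()
    p_macro = (tp / (tp + fp)).mean() # assumes every class is predicted at least once
    r_macro = (tp / (tp + fn)).mean() # assumes every class occurs in the test set

    f1 = lambda p, r: 2 * p * r / (p + r)
    return f1(p_micro, r_micro), f1(p_macro, r_macro)

conf = np.array([[50,  2,  3],        # made-up 3-class confusion matrix
                 [10, 20,  5],
                 [ 4,  1,  5]])
print(micro_macro_f1(conf))
```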
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9255089163780212, "perplexity_flag": "head"}
http://mathhelpforum.com/geometry/187758-problem-calculating-points-form-appropriate-shape-computer-implementation.html
# Thread: 1. ## Problem calculating points to form appropriate shape (for a computer implementation) I am building mapping software, and I have a problem in calculating specific points for building the roads. The way I build them is that I have the start and end point (all information is in x and y coordinates) and the width of the road, and I can calculate the slope. Drawing the middle (dashed) line of the road is simple, as it runs from the starting to the ending point, but I am having problems finding the starting and ending points of the road boundaries (which lie at a distance of width/2 above and below the middle line). Now if the road is completely horizontal, I can simply add width/2 to the y coordinates of the upper boundary's starting and ending points, and subtract width/2 from the y coordinates of the starting and ending points to calculate the starting and ending points of the lower boundary. But I am stuck at how to find these points when the road is not horizontal or vertical, but tilted or diagonal (when I'll need to add something to both x and y coordinates). I have the slope of the road, and the slope of the perpendicular (on which the start points of both the upper and lower boundary will lie), and I have the width, but how do I continue from here? I have attached an image to make the point clear. Thanks in advance for the replies! 2. ## Re: Problem calculating points to form appropriate shape (for a computer implementation) If "m" is the slope of the middle line (I am assuming you mean the slope in the "Cartesian geometry" sense and not how the road slopes going up a hill!) then the slope of the line perpendicular to it is -1/m. If the coordinates of the point you are given are $(x_0, y_0)$, then the line perpendicular to the "road" is given by $y= -(1/m)(x- x_0)+ y_0$. The two desired points lie at width/2 from $(x_0,y_0)$ and so lie on a circle with center at $(x_0, y_0)$ and radius width/2. Such a circle has equation $(x- x_0)^2+ (y- y_0)^2= \frac{width^2}{4}$. Replace y in that equation with $-(1/m)(x- x_0)+ y_0$ from the first equation and you have $(x- x_0)^2+ (1/m^2)(x- x_0)^2= (1+ 1/m^2)(x- x_0)^2= \frac{m^2+ 1}{m^2}(x- x_0)^2= \frac{width^2}{4}$ $(x- x_0)^2= \frac{m^2\, width^2}{4(m^2+ 1)}$ $x- x_0= \pm\frac{m\, width}{2\sqrt{m^2+ 1}}$ $x= x_0\pm\frac{m\, width}{2\sqrt{m^2+ 1}}$ Take + for one point and - for the other. Of course, y for each point is given by $y= -(1/m)(x- x_0)+ y_0$. 3. ## Re: Problem calculating points to form appropriate shape (for a computer implementation) Thanks for the reply. The roads are not uphill or downhill; it is the bird's-eye view from the top, so the slope I was referring to is indeed the slope in the Cartesian coordinate system. There is one thing I forgot to mention: the design environment I am using has (0,0) at the top left corner of the drawing area, and the x coordinate increases towards the right as usual, but the y coordinate increases downwards.
I respect the effort that went into the last post to answer my question, and I know I should have mentioned it earlier, but here most of the time we tell the computer what to do, and never go into calculations like this because they are done by algorithms written by someone else. But my application requires efficiency and speed, so for the first time I have to give direct coordinates instead of high-level instructions that are slow, and because I never had to do it before, I completely forgot about the difference between the Cartesian coordinate system and the system that the computer implements to calculate location. How much does this change things? 4. ## Re: Problem calculating points to form appropriate shape (for a computer implementation) Ok, I have implemented the solution, being careful that it is width, not the square root of width, that appears in the equation for x, and despite the fact that the Y coordinate works in reverse, it is working perfectly. The only difference is that to draw the upper boundary (given that the y coordinate increases downward) I had to add to (instead of subtract from) the original y, and to draw the lower boundary, I had to subtract from the original y. At a later stage, I'll have to calculate every point on these lines, in order to do hit testing to make sure that nothing is placed on top of the road, all of which will be done by calculations, so I need to know what is happening, as there'll be a lot of math later on. I request HallsofIvy to explain why the solution worked despite the fact that the y coordinate works in reverse (and so the slope of a line that is tilted upward is negative)?
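For completeness, here is a short Python sketch (mine, not from the thread) of the perpendicular-offset computation. Working with a unit perpendicular vector instead of the slope m avoids the division by zero for vertical roads, and it also suggests why the screen convention of y growing downward does not break anything: flipping y just swaps which offset is the "upper" boundary.

```python
import math

def road_corners(p_start, p_end, width):
    """Return the four corner points of a road of the given width,
    as [upper_start, upper_end, lower_end, lower_start]."""
    (x0, y0), (x1, y1) = p_start, p_end
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy)
    # unit perpendicular: rotate the direction (dx, dy) by 90 degrees
    px, py = -dy / length, dx / length
    h = width / 2.0
    return [(x0 + h*px, y0 + h*py), (x1 + h*px, y1 + h*py),
            (x1 - h*px, y1 - h*py), (x0 - h*px, y0 - h*py)]

print(road_corners((0, 0), (10, 5), 2))
```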
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9455332159996033, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/51401/how-do-the-sets-emptyset-times-b-a-times-emptyset-emptyset-times-em/51403
# What do the sets $\emptyset\times B,\ A\times \emptyset,\ \emptyset \times \emptyset$ look like? If we have a function $f:A \rightarrow B$, then one way to give meaning to this function, I think, in terms of set theory, is to say that $f$ is actually a binary relation $f=(A,B,G_f)$, where $G_f \subseteq A \times B$ is the graph of the function. Now my question is: what is $f$ if

$\bullet \ A=\emptyset, \ B\neq\emptyset$?
$\bullet \ B=\emptyset, \ A\neq\emptyset$?
$\bullet \ B=\emptyset, \ A=\emptyset$?

(Another way to formulate this, I think, would be: What do the sets $\emptyset\times B,\ A\times \emptyset,\ \emptyset \times \emptyset$ look like? Are they all $\emptyset$?) - ## 1 Answer Yes, they're all the empty set. For example, $\emptyset \times A$ consists of all pairs of the form $(o,a)$ with $o \in \emptyset, a \in A$. But the empty set has no elements, hence $\emptyset \times A$ has no elements, hence $\emptyset \times A$ is the empty set. A similar argument works for the other two sets. Here is how this problem can be interpreted in terms of cardinalities. For any sets $A,B$ the cardinality of $A \times B$ is the product of the cardinalities of $A$ and $B$. Hence the cardinality of $\emptyset \times A$ is just $0 \cdot |A| = 0$, so $\emptyset \times A$ has $0$ elements, and hence $\emptyset \times A = \emptyset$. And a similar argument will work in the other two cases. - Although I understand well what you mean, isn't writing $o\in \emptyset$ a bit weird, since the empty set should not contain any other set? – temo Jul 14 '11 at 9:07 Yes, you are correct. I just used the definition of the Cartesian product to write down what an element of $\emptyset \times A$ "looks like". And since $\emptyset$ has no elements, the condition $o \in \emptyset$ is never satisfied, hence $\emptyset \times A$ is empty. Basically, I was trying to say "the empty set has no elements, hence $\emptyset \times A$ has no elements" rigorously. – algebra_fan Jul 14 '11 at 9:15 The first answer is spot on, but I don't see a way of formalizing the second that doesn't pass through the first, especially if $B$ is infinite (but even if $B$ is finite, there's something to show). – ccc Jul 14 '11 at 10:01 – algebra_fan Jul 14 '11 at 10:18 It's not that I think it's overkill. I think it's begging the question to argue that $\emptyset \times B$ is empty since $0 \times \kappa = 0$ for all $\kappa$. Like I said, I think the first answer is spot on, and I'm sorry if this seems pedantic, but I think it's misleading to suggest that changing the language of the question to an equivalent statement in cardinal arithmetic (and implicitly appealing to an unjustified "$0$ times anything is $0$" intuition) is a suitable replacement for the actual argument. – ccc Jul 14 '11 at 10:37
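A quick computational illustration of the accepted answer (mine, using Python's itertools): the Cartesian product with an empty factor has no elements.

```python
from itertools import product

A = {1, 2, 3}
empty = set()

# all three products are empty, since no pair (o, a) can be formed
print(list(product(empty, A)))       # []
print(list(product(A, empty)))       # []
print(list(product(empty, empty)))   # []
```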
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 38, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9566211104393005, "perplexity_flag": "head"}
http://mathhelpforum.com/math-topics/41278-what-weight-beam.html
# Thread: 1. ## What is the weight of the beam? Question: A long uniform beam is pivoted at one end. A force of 300N is applied to hold the beam horizontally. What is the weight of the beam? Attempt: Need help... I don't know how to start. 2. I can't really help you exactly... though doesn't it have something to do with the strength of gravity? You need a force equal to gravity to hold the beam in the air. This is just a gamble... so you best wait 'till someone proves me wrong (which'll probably be the case, but anyway...) The strength of a gravity field: $g= 9.81\ N/kg$ As they apply 300N to hold the beam in the air... $300/9.81 = 30.58\ldots\ kg$ I could be terribly wrong though... Edit: and 'topsquark' has proven me wrong, so ignore me ^^ 3. Originally Posted by looi76 Question: A long uniform beam is pivoted at one end. A force of 300N is applied to hold the beam horizontally. What is the weight of the beam? Attempt: Need help... I don't know how to start. I don't have time to fully analyze this, but the CM of the beam will be 1.25 m from the pivot, so that's where the weight acts from. If we take the sum of the torques using the pivot as our axis of rotation we will get an equation for W. (The net torque is, of course, 0 Nm.) -Dan 4. Originally Posted by looi76 Question: A long uniform beam is pivoted at one end. A force of 300N is applied to hold the beam horizontally. What is the weight of the beam? Attempt: Need help... I don't know how to start. Since the beam is uniform, the center of gravity C is 1.25 m from the pivot. The weight W of the beam creates a right-turning moment $m_r = 1.25\ m \cdot W$ while the force F = 300 N creates a left-turning moment of $m_l = 2.0\ m \cdot 300\ N = 600\ Nm$. Both moments must be equal, which means the beam doesn't turn in either direction: $1.25\ m \cdot W = 600\ Nm~\implies~ \boxed{W = 480\ N}$ 5. Originally Posted by shinhidora I can't really help you exactly... though doesn't it have something to do with the strength of gravity? You need a force equal to gravity to hold the beam in the air. This is just a gamble... so you best wait 'till someone proves me wrong (which'll probably be the case, but anyway...) The strength of a gravity field: $g= 9.81\ N/kg$ As they apply 300N to hold the beam in the air... $300/9.81 = 30.58\ldots\ kg$ I could be terribly wrong though... Edit: and 'topsquark' has proven me wrong, so ignore me ^^ I want to quickly mention why this doesn't work. There is a reaction force at the pivot point and we don't know what size that force is. The reason this isn't a difficulty using the torque method is that by taking the axis of revolution as the pivot, the torque due to this reaction force is 0 Nm because it has a 0 moment-arm. -Dan 6. Originally Posted by topsquark I want to quickly mention why this doesn't work. There is a reaction force at the pivot point and we don't know what size that force is. The reason this isn't a difficulty using the torque method is that by taking the axis of revolution as the pivot, the torque due to this reaction force is 0 Nm because it has a 0 moment-arm. -Dan Just to make sure: Torque = the force required to rotate something around a certain axis? And could you define pivot please? English not being my mother tongue makes this a bit hard sometimes ^^ 7. Originally Posted by shinhidora Just to make sure: Torque = the force required to rotate something around a certain axis? And could you define pivot please?
English not being my mother tongue makes this a bit hard sometimes ^^ If you speak French then the meaning should be clear: in both languages (English and French) the word pivot means the same. If you speak Flemish, then a pivot is an ashals. 8. Originally Posted by shinhidora Just to make sure: Torque = the force required to rotate something around a certain axis? And could you define pivot please? English not being my mother tongue makes this a bit hard sometimes ^^ Torque is the rotational version of force: $\vec{\tau} = \vec{r} \times \vec{F}$. A "pivot" is a point around which a beam (or other object) would rotate, since it is attached to that point. (This differs slightly from "axis of rotation" because an axis of rotation could be anywhere on the object. A pivot is a point that is attached to the object and forces the object to rotate around it.) -Dan
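A one-line numerical check of the moment balance in post #4 (lever arms of 1.25 m and 2.0 m taken from the figure referenced there):

```python
# sum of moments about the pivot must vanish: W * 1.25 m = 300 N * 2.0 m
F, arm_F, arm_W = 300.0, 2.0, 1.25
W = F * arm_F / arm_W
print(W)   # -> 480.0 (newtons), matching the boxed answer
```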
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9435782432556152, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/111077/line-projection-on-top-of-a-plane/111086
# Line projection on top of a plane If I have a horizontal line (a 3d point and a 3d vector with zero z component) and another plane (which could be oblique or horizontal; I have the normal vector of the plane), then how do we get the direction (3d) of the 3d line which lies on top of the plane? For that, I wish to project the above horizontal line onto the given plane. (I have made the original post clearer.) - ## 1 Answer I'm not sure what a 2d vector is. I'm assuming you're specifying the line by the span of some vector $v$ translated by the 3d point $p$: $L = p + vt.$ You can specify a plane by two vectors and a point, or by a point and a vector. For the first, call the two vectors $v_1$ and $v_2$, and the point $q$. The plane is $rv_1 + sv_2 + q$. If $p + vt$ does not intersect the plane, the projection can be written as a translation. If it does intersect the plane, pick $p$ and $q$ so that they coincide with the intersection of the line and the plane. Change coordinates so that $p=q=0$. Now all you do is project the vector $v$ onto $v_1$ and $v_2$. Assuming $v_1$ and $v_2$ are orthonormal, the projection map is $$tv\mapsto t\langle v,v_1\rangle v_1 + t\langle v,v_2\rangle v_2.$$ If you want to work with a point $q$ and a single unit vector $w$ which specifies the plane by $\{x\ |\ \langle x,w\rangle = 0\} + q$, again translate coordinates to the intersection point $p=q=0$. Then project onto the span of $w$, and subtract that new line from the old line: $$vt\mapsto vt - t\langle v,w\rangle w.$$ - I think that, with regards to the 2D vector, the OP is using the fact that the line is horizontal (i.e. it's really a 3D vector but with the third coordinate zero). – Lopsy Feb 19 '12 at 22:26 @Neal: sorry, I didn't get you clearly. I know the normal vector of the plane and the horizontal line (a point and the direction of the line). Wouldn't it be obtained by taking cross products of some vectors? I cannot follow your previous explanation. – lenin Feb 19 '12 at 23:42 Yes, a normal vector to a plane can be gotten by taking the cross product of two spanning vectors. – Neal Feb 19 '12 at 23:51 @lopsy That makes sense. I think of vectors and 1-forms as "1D", planes and 2-forms as "2D", and so forth, so referring to a vector by the dimension of its ambient space confused me. – Neal Feb 19 '12 at 23:53 @Neal: No, I want to know how I can get the direction of my required line (which lies on the plane) by taking the cross product of normal vectors. Is it possible? If so, please tell me. – lenin Feb 19 '12 at 23:56
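Since the asker only has a point, a direction d, and the plane normal n, here is a minimal numpy sketch (my own; the sample values are made up) of the standard recipe: subtract from d its component along the unit normal. It also answers the cross-product question from the comments, since $\hat n \times (d \times \hat n) = d - (d\cdot\hat n)\hat n$.

```python
import numpy as np

def project_direction_onto_plane(d, n):
    """Direction of the projection of a line with direction d
    onto a plane with normal n."""
    n_hat = n / np.linalg.norm(n)
    return d - np.dot(d, n_hat) * n_hat
    # equivalently, via cross products: np.cross(n_hat, np.cross(d, n_hat))

d = np.array([1.0, 2.0, 0.0])   # horizontal line direction (zero z component)
n = np.array([0.0, 1.0, 1.0])   # plane normal (made-up values)
print(project_direction_onto_plane(d, n))   # -> [ 1.  1. -1.], lies in the plane
```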
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9394411444664001, "perplexity_flag": "head"}
http://mathhelpforum.com/differential-geometry/188708-simply-ordered-set.html
# Thread: 1. ## Simply ordered set Every simply ordered set is a Hausdorff space in the order topology. Since it is simply ordered, the relation is reflexive, anti-symmetric, and transitive. How can this, coupled with the Hausdorff condition, help show that it is the order topology? 2. ## Re: Simply ordered set Originally Posted by dwsmith Every simply ordered set is a Hausdorff space in the order topology. Since it is simply ordered, the relation is reflexive, anti-symmetric, and transitive. How can this, coupled with the Hausdorff condition, help show that it is the order topology? You are misinterpreting the question. It's saying that if $X$ is a space where you have given it the order topology for some total ordering, then the resulting topological space is Hausdorff. 3. ## Re: Simply ordered set Originally Posted by Drexel28 You are misinterpreting the question. It's saying that if $X$ is a space where you have given it the order topology for some total ordering, then the resulting topological space is Hausdorff. Ok. However, I am not sure how to do that one either. 4. ## Re: Simply ordered set Originally Posted by dwsmith Ok. However, I am not sure how to do that one either. Let $x,y\in X$ be distinct. Since $<$ is a total ordering on $X$ we may assume without loss of generality that $x<y$. We have two choices: either there exists $z$ with $x<z<y$, in which case take $U=(-\infty,z)$ and $V=(z,\infty)$, or else $(x,y)$ is empty and we take $U=(-\infty,y)$ and $V=(x,\infty)$. Regardless, $U$ and $V$ are disjoint neighborhoods of $x,y$ respectively. The conclusion follows.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 16, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9223240613937378, "perplexity_flag": "head"}
http://mathoverflow.net/questions/75567?sort=votes
## Spanning trees in 3-regular graphs Suppose I have a 3-regular graph, and I cut enough edges to get a spanning tree. The leaves (which we often call "half edges") are identified in pairs, and what we are interested in is the length of the paths in the graph joining these half edges. The background is that what we are really looking at is Riemann surfaces, and the graph corresponds to a triangulation, and the tree corresponds to a fundamental domain, where the "half edges" are identified sides - think Belyi and Grothendieck. As an example, consider the usual two-holed torus, which can be given by identifying the edges of an octagon in pairs. Triangulate the octagon, put a vertex in every triangle, and connect neighboring vertices with an edge. Include vertices which have a paired external side, so you get a 3-regular graph. Now imagine reversing direction, so you start with a random 3-regular graph, and you cut edges until you have a tree, put a triangle at each vertex, and you get an octagon with paired sides, thus a two-holed torus (depending on the pairings, you might of course get a torus or a sphere). Now imagine a much bigger graph, but the same idea. Start with a random graph, cut edges, and generate a fundamental domain (note again that it is unclear what the genus of the related surface will actually be, but ignore that for now). The hope would be that one could get the "usual" domain $aba^{-1}b^{-1}cdc^{-1}d^{-1}\ldots$ with sides identified in alternating pairs, but of course this is not usually possible while respecting the triangulation. So the question is: can one somehow get something with all the paired leaves roughly the same distance apart, or with one set far apart and the others close? The question is not exactly well formed, but I think you can think of two basic possibilities - 1) a spanning tree with one long path and many short ones 2) a spanning tree with most paths roughly the same length. Per @jc's suggestion, I moved the clarifications up here, duh :-) - 1 What do "clip" and "half edges" mean? – Alon Amit Sep 16 2011 at 1:57 1 Sorry, I'm using the language I'm used to using with my colleague Eran Makover. What I mean by "clip" is to cut the edge; of course when you cut enough edges you get a tree. If you then embed the tree in the plane, each edge which was cut shows up in two places, and those we consider identified. The background is that what we are really looking at is Riemann surfaces, and the graph corresponds to a triangulation, and the tree corresponds to a fundamental domain, where the "half edges" are identified sides - think Belyi and Grothendieck. Does that make sense? – Jeff McGowan Sep 16 2011 at 2:13 So are the half edges just the leaves of the tree? – Daniel Mansfield Sep 16 2011 at 2:27 Also, I'm sure you can construct an example of 3-regular graphs which has many "long" paths and many "short" ones. It would look something like a big Y. Perhaps more detail about what is considered long and short would be helpful. Or, if you didn't like the big "Y", here's another example t1.gstatic.com/… – Daniel Mansfield Sep 16 2011 at 2:37 @Jeff McGowan, it may help if you describe an explicit example of a cubic graph and the process of "clipping" it to a spanning tree in your question. You might also be able to explain more what you're looking for in 1) and 2) with an example at hand. – jc Sep 16 2011 at 2:56 ## 2 Answers I cannot see a clear question here, and so this is certainly not an answer. But perhaps you could clarify the question via an explicit example. As I understand it, if you restricted attention to triangulated surfaces homeomorphic to a sphere (which I know is not your interest), your cutting would produce what is called a net, or an unfolding. You are looking at the dual of the net, and want either bushy trees or Hamiltonian paths. There are exactly 43,380 distinct nets for the icosahedron. Left below is an unfolding with a bushy dual tree; on the right an unfolding whose dual is a Hamiltonian path. The only points I want to make with this example are: (a) there are many spanning trees (exponential in the number of triangles), and (b) among them you can probably find spanning trees of any desired shape. -
– jc Sep 16 2011 at 2:56

## 2 Answers

I cannot see a clear question here, and so this is certainly not an answer. But perhaps you could clarify the question via an explicit example.

As I understand it, if you restricted attention to triangulated surfaces homeomorphic to a sphere (which I know is not your interest), your cutting would produce what is called a net, or an unfolding. You are looking at the dual of the net, and want either bushy trees or Hamiltonian paths. There are exactly 43,380 distinct nets for the icosahedron. Left below is an unfolding with a bushy dual tree; on the right an unfolding whose dual is a Hamiltonian path. [figures omitted]

The only points I want to make with this example are: (a) There are many spanning trees (exponential in the number of triangles), and (b) among them you can probably find spanning trees of any desired shape.

-

I hope that I understand your question well. (Say that you have leaves $a^+$ and $a^-$ obtained by cutting an edge $a$, and similarly $b^+$ and $b^-$ by cutting an edge $b$. My understanding is that you consider only the $a^+a^-$ path and the $b^+b^-$ path, but not, e.g., the $a^+b^+$ path.)

If the graph is given (say by an "enemy") you may not succeed with any of your goals. First, I will show a construction where you cannot have one long path and many short ones.

1. Start with a vertex of degree 3 attached to three leaves.

2. Replace every leaf with another vertex of degree three (now it is attached to two leaves and one vertex of the original graph).

3. Repeat this step until you obtain a tree $T$ with $3\cdot2^k$ leaves and $3\cdot 2^k - 2$ vertices of degree 3.

4. Replace every leaf of $T$ with the following graph: $$V(H) = \{1,2,3,4,5\};$$ $$E(H) = \{12, 23, 34, 45, 51, 24, 35\}.$$ More precisely, identify the vertex number 1 with the leaf.

Thus you obtain a 3-regular graph. Now if you want to cut the edges of the resulting graph in order to obtain a spanning tree, you can only cut edges inside copies of $H$. Thus all paths are very short and you do not achieve goal 1).

In addition you may play a bit with steps 1., 2., 3. in the construction; depending on your choice, you may more or less force the lengths of paths. For instance, start with a cycle on $j$ vertices. Attach a leaf to every vertex and then proceed with step 4. Then you have to have one path of length $j$ and many short paths, weakly disallowing case 2). However, you can also start with many cycles connected with edges in some tree-like structure. Then you even disallow 1) and 2) at once (depending on what "most" in your question means).

Perhaps some assumptions should be put on the graphs.

-

Martin, yes, you would definitely need assumptions to get either thing to always be true; we are not expecting that. Our arguments are probabilistic, in that we look at what happens as the number of vertices goes to infinity, in which case the probability of certain things happening goes to 1. For example, as you probably know, the probability that there is a Hamiltonian cycle in a 3-regular graph goes to 1 as the size goes to infinity. Your construction is very nice, it's giving me some ideas about how to imagine taking a given spanning tree and rearranging it into another tree... Thanks! – Jeff McGowan Sep 19 2011 at 14:04
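To make Martin's gadget concrete, here is a small sanity check that in his construction every edge outside the copies of $H$ is a bridge, so a spanning tree can only cut gadget-internal edges. This is my own illustration, not from the thread; the helper names and the use of networkx are my own choices.

```python
import networkx as nx

def gadget(tag):
    # one copy of H; vertex (tag, 1) is the attachment point (degree 2 in H)
    edges = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1), (2, 4), (3, 5)]
    H = nx.Graph()
    H.add_edges_from(((tag, a), (tag, b)) for a, b in edges)
    return H

# smallest case of the construction: one degree-3 vertex whose three
# leaves are each replaced by a copy of H
G = nx.Graph()
for i in range(3):
    G.update(gadget(i))
    G.add_edge("root", (i, 1))

assert all(d == 3 for _, d in G.degree())        # cubic, as claimed
bridges = {frozenset(e) for e in nx.bridges(G)}
assert all(frozenset(("root", (i, 1))) in bridges for i in range(3))
# every tree edge is a bridge, hence kept by every spanning tree;
# only the seven edges inside each copy of H can ever be cut
```

Since all cycles live inside the gadgets, every path between identified half edges is short, exactly as the answer argues.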
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 17, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9567103981971741, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/156859-partial-fractions.html
# Thread:

1. ## partial fractions

I'm a little rusty with my partial fractions so I just need a refresher. The expression is $1/[(x+1)(x+2)(x+4)]$. I have $A/(x+1) + B/(x+2) + C/(x+4)$. Then common denominators: $(x+1)(x+2)(x+4)= A(x+2)(x+4) + B(x+1)(x+4) + C(x+1)(x+2)$

After this, can I just plug in values, x=-2 for example? Cause then I will get 0=-2B. And then keep plugging in values for x? Or is there another way which I completely forgot about?

2. Yes, there is another way. For starters, you should actually have gotten

$\frac{A(x + 2)(x + 4) + B(x + 1)(x + 4) + C(x + 1)(x + 2)}{(x + 1)(x + 2)(x + 4)} = \frac{1}{(x + 1)(x + 2)(x + 4)}$

$A(x + 2)(x + 4) + B(x + 1)(x + 4) + C(x + 1)(x + 2)=1$

$A(x^2 + 6x + 8) + B(x^2 + 5x + 4) + C(x^2 + 3x + 2) = 1$

$Ax^2 + 6Ax + 8A + Bx^2 + 5Bx + 4B + Cx^2 + 3Cx + 2C = 1$

$(A + B + C)x^2 + (6A + 5B + 3C)x + 8A + 4B + 2C = 0x^2 + 0x + 1$.

This means

$A + B + C = 0$

$6A + 5B + 3C = 0$

$8A + 4B + 2C = 1$.

Solve these equations simultaneously to find $A, B, C$.

3. thanks. Forgot about solving with coefficients

4. so I have the series from 1 to infinity of $1/3(x+1) -1/5(x+2) +1/6(x+4)$. How do I solve the sum from there? I tried telescoping, and it doesn't work

5. When you get to $1 \equiv a(x+2)(x+4)+b(x+1)(x+4)+c(x+1)(x+2)$, you can put $x = -1, -2, -4$ to respectively find $a, b, c$ (instead of solving the system from the coefficient relations).

6. Originally Posted by TheCoffeeMachine
Ok. When you get to $1 \equiv a(x+2)(x+4)+b(x+1)(x+4)+c(x+1)(x+2)$, you can put $x = -1, -2, -4$ to respectively find $a, b, c$ (instead of solving the system from the coefficient relations).

Will telescoping series work from there to find the sum?

7. Originally Posted by guyonfire89
Will telescoping series work from there to find the sum?

IF the series is meant to be the sum of what you posted in post #1 (which I corrected because of the typos), then yes, it is a telescoping series. To realise this you need to:

1. Find the correct partial fraction decomposition.

2. Write out the first few terms (I suggest at least the first four terms) and start cancelling in the usual way.

If you take greater care with the details you will probably have better luck in solving questions like this one.

8. Hello, guyonfire89! An error in your third statement.

$\text{The fraction is: }\;\dfrac{1}{(x+1)(x+2)(x+4)}$

$\text{I have: }\;\dfrac{A}{x+1} + \dfrac{B}{x+2} + \dfrac{C}{x+4}$

We have: $\displaystyle \frac{1}{(x+1)(x+2)(x+4)} \;=\;\frac{A}{x+1} + \frac{B}{x+2} + \frac{C}{x+4}$

Multiply through by the LCD: $\displaystyle 1 \;=\;A(x+2)(x+4) + B(x+1)(x+4) + C(x+1)(x+2)$

$\begin{array}{cccccccc} \text{Let }x = \text{-}1\!: & 1 &=& A(1)(3) + B(0) + C(0) & \Rightarrow & A &=& \frac{1}{3} \\ \text{Let }x = \text{-}2\!: & 1 &=& A(0) + B(\text{-}1)(2) + C(0) & \Rightarrow & B &=& \text{-}\frac{1}{2} \\ \text{Let }x = \text{-}4\!: & 1 &=& A(0) + B(0) + C(\text{-}3)(\text{-}2) & \Rightarrow & C &=& \frac{1}{6} \end{array}$

Therefore: $\displaystyle \frac{1}{(x+1)(x+2)(x+4)} \;=\;\frac{\frac{1}{3}}{x+1} + \frac{\text{-}\frac{1}{2}}{x+2} + \frac{\frac{1}{6}}{x+4} \;=\;\frac{1}{6}\left(\frac{2}{x+1} - \frac{3}{x+2} + \frac{1}{x+4}\right)$

9. thanks guys, but when I try to prove it's a telescoping series, only the first two terms cancel out. I've tried out the first 4 terms
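As a quick modern cross-check of the decomposition (a minimal SymPy sketch of my own, not part of the original thread), both the plug-in trick from post #5 and the final coefficients can be verified in a few lines:

```python
from sympy import symbols, apart

x, A, B, C = symbols('x A B C')

# the cover-up / plug-in trick from post #5
lhs = A*(x + 2)*(x + 4) + B*(x + 1)*(x + 4) + C*(x + 1)*(x + 2)
print(lhs.subs(x, -1))   # 3*A  -> A = 1/3
print(lhs.subs(x, -2))   # -2*B -> B = -1/2
print(lhs.subs(x, -4))   # 6*C  -> C = 1/6

# and the whole decomposition at once (same three terms, up to ordering)
print(apart(1/((x + 1)*(x + 2)*(x + 4)), x))
```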
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 26, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9244306683540344, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/134651/the-nambu-bracket
# The Nambu bracket

Does anybody know how to show the Jacobi identity for the Nambu bracket in $\mathbb{R}^3$? The Nambu bracket with respect to $c \in \mathcal{F}(\mathbb{R}^3)$ is defined as $$\{F,G\}_c = \langle\nabla c , \nabla F \times\nabla G\rangle$$ where $F,G \in \mathcal{F}(\mathbb{R}^3).$

I don't know if this will help, but if one shows that the homomorphism $$F \rightarrow \{F, \bullet\}_c = \langle\nabla \bullet , \nabla c \times \nabla F\rangle,$$ between $\mathcal{F}(\mathbb{R}^3)$ and the divergenceless vector fields $\nabla c \times \nabla F,$ preserves the Lie algebra structure, then it is done...

Thank you very much for the help!

-

I think that your observation "the map $F\to\nabla c\times\nabla F$ translates the Nambu bracket into the Lie bracket" is indeed a necessary and sufficient condition for the Nambu bracket to satisfy the Jacobi identity. – Giuseppe Apr 21 '12 at 17:14

1 It may be useful to give the bibliographic reference: Geometric Mechanics, by Darryl Holm, part II. – matgaio Apr 21 '12 at 18:28

Oh yes! Sorry about that. I think that this is also an exercise in Marsden and Ratiu's Introduction to Mechanics and Symmetry. – Rppacheco Apr 21 '12 at 19:06
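For what it's worth, the identity can also be checked by brute force: expanding the cyclic sum for arbitrary smooth $c, F, G, H$ makes all the second-derivative terms cancel in pairs. A minimal SymPy sketch of that check (my own code and helper names, not from the question):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
c, F, G, H = (sp.Function(name)(x, y, z) for name in 'cFGH')

def grad(f):
    return sp.Matrix([sp.diff(f, v) for v in (x, y, z)])

def bracket(f, g):
    # {f, g}_c = <grad c, grad f x grad g>
    return grad(c).dot(grad(f).cross(grad(g)))

jacobi = (bracket(F, bracket(G, H))
          + bracket(G, bracket(H, F))
          + bracket(H, bracket(F, G)))
print(sp.expand(jacobi))   # should print 0: the cyclic sum vanishes
```

The cancellation uses only equality of mixed partials, which SymPy's canonically ordered `Derivative` objects build in, so `expand` should already reduce the sum to zero.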
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9099068641662598, "perplexity_flag": "head"}
http://mathhelpforum.com/algebra/128570-logarithm-issues.html
# Thread:

1. ## Logarithm issues.

I have a question that asks me to find the value of the expression $(.65)^5$ in logarithmic form. Am I correct in assuming that I should be looking for the logarithm that is five times greater than that of .65? Or is the answer: $\log (.68)^5=5\log(.68)=5\log (6.8\times 10^{-1})$? Therefore the answer is $5\times.8129$ or $4.0645+(-1)=3.0645$?

2. Originally Posted by MathBlaster47
I have a question that asks me to find the value of the expression $(.65)^5$ in logarithmic form. Am I correct in assuming that I should be looking for the logarithm that is five times greater than that of .65?

e^(i*pi): Yep, due to the power law

Or is the answer: $\log (.68)^5=5\log(.68)=5\log (6.8\times 10^{-1})$?

e^(i*pi): Indeed

Therefore the answer is $5\times.8129$ or $4.0645+(-1)=3.0645$?

e^(i*pi): No, you've made a decimal approximation and the question doesn't ask for this

$\log (.68)^5=5\log(.68)=5\log (6.8 \cdot 10^{-1}) = 5\log (6.8) - 5$

3. I think I should have been a little more careful how I phrased my question; the question asks me to use a table of logarithms to derive my answer. I just wanted to do my due diligence and try to do the working myself. Follow up question: Is my decimal approximation satisfactory as an answer, given that I was asked to use a table?

4. Originally Posted by MathBlaster47
I think I should have been a little more careful how I phrased my question; the question asks me to use a table of logarithms to derive my answer. I just wanted to do my due diligence and try to do the working myself. Follow up question: Is my decimal approximation satisfactory as an answer, given that I was asked to use a table?

Hi Mathblaster47, you made one error, apart from writing 0.68 instead of 0.65 ...

$5[\log(6.5\times10^{-1})]=5[\log(6.5)+\log\left(10^{-1}\right)]=5[\log(6.5)-1]=5\log(6.5)+5(-1)$

5. Originally Posted by Archie Meade
Hi Mathblaster47, you made one error, apart from writing 0.68 instead of 0.65 ...

$5[\log(6.5\times10^{-1})]=5[\log(6.5)+\log\left(10^{-1}\right)]=5[\log(6.5)-1]=5\log(6.5)+5(-1)$

Hmmm....gotta stop typing so fast....Silly typos! Ok, so the final answer is $5(.8129)+5(-1)=4.0645+(-5)$?

6. Yes Mathblaster47, the answer is negative, so check that $(0.65)^5=10^{\text{answer}}$ or $5\log(0.65)=(\text{answer})[\log(10)]$
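A quick numeric check of the corrected computation (my own sketch; the four-place value 0.8129 is the table mantissa used in the thread):

```python
from math import log10

mantissa = log10(6.5)                 # table value: ~0.8129
answer = 5 * mantissa + 5 * (-1)      # 5*log10(0.65) ~ 4.0646 - 5
print(round(mantissa, 4), round(answer, 4))   # 0.8129 -0.9354
print(0.65 ** 5, 10 ** answer)        # both ~0.116029, so the check passes
```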
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9552749395370483, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-algebra/132125-minimal-polynomial.html
# Thread:

1. ## Minimal Polynomial

Without finding the minimal polynomial for $r=i\cdot 2^{\frac{1}{3}}+i$, show the degree of said polynomial must be $6$.

I know this is true because I was able to derive $r\text{'s}$ minimal polynomial ( $f(x)=x^6+3x^4-9x^2+9$ ), but in doing so I used the assumption that it was of degree $6$ as opposed to $2$ or $3$.

2. Originally Posted by chiph588@
Without finding the minimal polynomial for $r=i\cdot 2^{\frac{1}{3}}+i$, show the degree of said polynomial must be $6$. I know this is true because I was able to derive $r\text{'s}$ minimal polynomial ( $f(x)=x^6+3x^4-9x^2+9$ ), but in doing so I used the assumption that it was of degree $6$ as opposed to $2$ or $3$.

Show that both $i$ and $\sqrt[3]{2}$ are in $\mathbb{Q}(i\sqrt[3]{2} + i)$, and thus $\mathbb{Q}(i\sqrt[3]{2} + i)=\mathbb{Q}(\sqrt[3]{2},i)$. We also have $[\mathbb{Q}(\sqrt[3]{2},i):\mathbb{Q}]=[\mathbb{Q}(\sqrt[3]{2},i):\mathbb{Q}(\sqrt[3]{2})] \times [\mathbb{Q}(\sqrt[3]{2}): \mathbb{Q}]=2 \times 3 = 6.$

3. Originally Posted by NonCommAlg
Show that both $i$ and $\sqrt[3]{2}$ are in $\mathbb{Q}(i\sqrt[3]{2} + i)$, and thus $\mathbb{Q}(i\sqrt[3]{2} + i)=\mathbb{Q}(\sqrt[3]{2},i)$. We also have $[\mathbb{Q}(\sqrt[3]{2},i):\mathbb{Q}]=[\mathbb{Q}(\sqrt[3]{2},i):\mathbb{Q}(\sqrt[3]{2})] \times [\mathbb{Q}(\sqrt[3]{2}): \mathbb{Q}]=2 \times 3 = 6.$

Let $\alpha=i\sqrt[3]{2}+i$. It turns out $\sqrt[3]{2} = -\frac{1}{6}\alpha^4-\alpha^2+\frac{1}{2}$ and $i = \frac{1}{6}\alpha^5+\frac{2}{3}\alpha^3-\frac{1}{2}\alpha$, so indeed we know $\mathbb{Q}(\sqrt[3]{2},i) \subseteq \mathbb{Q}(\alpha)$. But is there an easier way to show $\sqrt[3]{2},i\in \mathbb{Q}(\alpha)$?

4. Originally Posted by chiph588@
Let $\alpha=i\sqrt[3]{2}+i$. It turns out $\sqrt[3]{2} = -\frac{1}{6}\alpha^4-\alpha^2+\frac{1}{2}$ and $i = \frac{1}{6}\alpha^5+\frac{2}{3}\alpha^3-\frac{1}{2}\alpha$, so indeed we know $\mathbb{Q}(\sqrt[3]{2},i) \subseteq \mathbb{Q}(\alpha)$. But is there an easier way to show $\sqrt[3]{2},i\in \mathbb{Q}(\alpha)$?

well, this is how i did it: $\alpha^3=-3i(1+\sqrt[3]{2} + \sqrt[3]{4})=-3 \alpha - 3i \sqrt[3]{4}$ and so $\beta=i \sqrt[3]{4} \in \mathbb{Q}(\alpha)$ and we're done because $\sqrt[3]{2}=\frac{-\beta^2}{2}$ and $i=\frac{-\beta^3}{4}.$
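Both computations in the thread are easy to confirm symbolically; a minimal SymPy sketch (my own, not from the thread):

```python
from sympy import I, cbrt, expand, minimal_polynomial, symbols

x = symbols('x')
a = I * cbrt(2) + I

# degree 6, with exactly the polynomial found in the original post
print(minimal_polynomial(a, x))          # x**6 + 3*x**4 - 9*x**2 + 9

# NonCommAlg's identity: alpha^3 = -3*alpha - 3*i*cbrt(4)
print(expand(a**3 + 3*a + 3*I*cbrt(4)))  # 0
```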
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 38, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9733110666275024, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/1989/what-is-terminal-velocity
# What is terminal velocity?

What is terminal velocity? I've heard the term especially when the Discovery Channel is covering something about skydiving. Also, it is commonly known that HALO (High-Altitude, Low-Opening) infantry reach terminal velocity before their chutes open.

Can the terminal velocity be different for one individual weighing 180 pounds versus an individual weighing 250 pounds?

-

6 – mbq♦ Dec 16 '10 at 14:34

## 3 Answers

Terminal velocity is the maximum velocity that you can reach during free fall. If you imagine yourself falling in gravity, and ignore air resistance, you would fall with acceleration $g$, and your velocity would grow unbounded (well, until special relativity takes over). This effect is independent of your mass, since $F = ma = mg \Rightarrow a = g$.

Where terminal velocity arises is that air resistance is a velocity-dependent force acting against your free fall. If we had, for example, a drag force of $F_D=KAv^2$ ($K$ is just a constant to make all the units work out, and depends on the properties of the fluid you're falling through, and $A$ is the cross-sectional area you present to the flow), then the terminal velocity is the velocity at which the forces cancel (i.e., no more acceleration, so the velocity becomes constant):

$F = 0 = mg - KAv_t^2 \Rightarrow v_t=\sqrt{mg/KA}$

So we see that a more massive object can in fact have a larger terminal velocity.

-

1 I will note that terminal velocity is not necessarily the maximum velocity you reach, since you can start out faster than $v_t$. – David Zaslavsky♦ Dec 16 '10 at 21:43

And one example of what David is talking about is rolling a ball: it will eventually stop with terminal velocity of $v_t = 0$ :-) In any case, it's worth pointing out that the concept of terminal velocity is just a special case of systems trying to reach stable equilibrium using the second law of thermodynamics (that is, friction). – Marek Dec 16 '10 at 22:45

Ah! Extremely good point. I was supposing a system falling from rest, but now I'm supposing I wasn't explicit on that. – wsc Dec 16 '10 at 23:09

1 Actually, the real terminal velocity on Earth is $0$. Smack!... – Raskolnikov Dec 17 '10 at 10:57

You can find a good article here: http://en.wikipedia.org/wiki/Terminal_velocity

In the context you provide, terminal velocity is the maximum speed that an object in free fall reaches in the atmosphere. When an object is falling, or in free fall, there are two forces that determine whether it will accelerate downwards or not:

• gravity (trying to accelerate the body downwards)

• air friction (trying to push the body upwards)

Initially, as the body is not moving, there is no air drag, and the object starts falling due to gravity. Now, as the object speeds up, the gravity contribution remains constant, whereas the drag increases with the speed of the object. Finally a point is reached where the drag is so much that the object does not accelerate anymore. Velocity stays constant, and it is called terminal velocity.

The value for it is proportional to $\sqrt{m}$, so clearly objects of different weights have, in general, different terminal velocities (heavier objects having higher values), but there are also other factors to account for, like how aerodynamic the object is. A sphere has a higher terminal velocity than a sheet of metal of the same mass.

-

If the falling body is non-spherical, then the drag will depend upon the body's orientation.
Skydivers exploit this to fly (fall) in formation, assuming a higher-drag configuration to slow down, or a lower-drag configuration to speed up.

-
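To put rough numbers on the original 180 lb vs. 250 lb question, here is a back-of-the-envelope sketch of my own; the drag constant and frontal area below are assumed values for a belly-down skydiver, not measured data:

```python
from math import sqrt

g = 9.81      # m/s^2
K = 0.6       # ~ (1/2) * rho_air * C_d, in kg/m^3 (assumed)
A = 0.7       # frontal area in m^2 (assumed)

for pounds in (180, 250):
    m = pounds * 0.4536               # pounds -> kilograms
    v_t = sqrt(m * g / (K * A))       # v_t = sqrt(mg / KA) from the answer
    print(f"{pounds} lb -> v_t ~ {v_t:.0f} m/s")
# the heavier diver falls faster, as the accepted answer predicts
```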
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.917299211025238, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/253854/defining-the-normalizer-showing-its-a-subgroup-and-h-gn-gh/253859
# Defining the normalizer, showing it's a subgroup, and that the number of conjugates of $H$ is $|G:N_G(H)|$

"Let $G$ be a finite group. For a subgroup $H \subset G$ define the normalizer $N_G(H) \subset G$. Show that the normalizer is a subgroup, that $H \unlhd N_G(H)$, and that the number of subgroups $H'$ conjugate to $H$ in $G$ is equal to the index $|G:N_G(H)|$ of the normalizer."

For the normalizer, I have the definition as the biggest subgroup containing $H$ in which $H$ is normal: $H \unlhd N_G(H)$. What does this exactly mean though? Is it basically the biggest normal subgroup in $G$? Also, I don't understand how I would show the other stuff.

-

There are many groups which have subgroups that are not normal. The normalizer of a subgroup $\,H\,$ is a subgroup (i) containing $\,H\,$ and (ii) in which $\,H\,$ is normal, and this normalizer subgroup is the maximal one wrt these two properties. Note that the normalizer itself is NOT, in general, normal in the big group. – DonAntonio Dec 8 '12 at 16:39

So $G$ has a subgroup $G_1$. Within this subgroup, there is another subgroup $H$ which is normal in $G_1$. Therefore $G_1$ is the normalizer of $H$? – Kaish Dec 8 '12 at 16:57

If it is the maximal such one, yes. – DonAntonio Dec 8 '12 at 17:03

What do you mean by "maximal" one? The one with the most elements? – Kaish Dec 8 '12 at 17:10

Maximal wrt set inclusion: for any $\,H\leq N\leq G\,$ s.t. $\,H\triangleleft N\,$, then $\,N\leq N_G(H)\,$ – DonAntonio Dec 8 '12 at 17:15

## 2 Answers

Make the group $\,G\,$ act on the set $\,X:=\{K\;\;;\;\ K\leq G\}\,$ by conjugation. Thus, by the orbit-stabilizer theorem: $$|\mathcal Orb(H)|=[G:Stab(H)]$$ but $\,\mathcal Orb(H)\,$ is just the set of all subgroups of $\,G\,$ conjugate to $\,H\,$, and $\,Stab(H)\,$ is just $\,N_G(H)\,$, so...

-

Given a subgroup $H$, there are possibly a bunch of intermediate subgroups $K$ lying in between $H$ and $G$. $H$ has to be normal in at least one of these, since it's normal in itself. So we can keep going up the chain of subgroups until we arrive at a "largest" subgroup that $H$ is normal inside. Now, that subgroup need not be normal in $G$. It's also not always the largest normal subgroup of $G$, because the largest normal subgroup of $G$ is just $G$ itself!

Now, in order to show the number of conjugates is equal to the index of the normalizer, I would start by writing out the conjugates of $H$: $\{H, g_1Hg_1^{-1}, \ldots, g_nHg_n^{-1}\}$. Now try and show that $\{N(H), g_1N(H), \ldots, g_nN(H)\}$ are precisely the left cosets of $N(H)$. That is, you need to show no two cosets in that list are equal, and that every coset appears on that list. Now you just use the fact that $|G : N(H)|$ is by definition the number of cosets.

-

The "going up the chain of subgroups" can prove to be a rather difficult task if the cardinality of all subgroups "between" $\,H\,$ and $\,G\,$ is, say, more than $\,\aleph_0\,$. Kurosh, in his very interesting and important book, talks of this stuff, but it is far from being obvious or trivial. – DonAntonio Dec 8 '12 at 16:54

I wasn't trying to give a precise explanation, but just the idea. Since the group is finite, we can certainly pass up the chain of subgroups in this way. Perhaps the tricky part with this approach is showing that a "maximal" subgroup in the chain actually exists. There's no reason a priori that I must get a unique normal subgroup of largest order using this method. – Zach L. Dec 8 '12 at 17:04
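Here is a tiny brute-force confirmation of the orbit-stabilizer count in $G = S_4$ with $H$ the subgroup generated by a transposition (my own illustration, not from the question; permutations are encoded as tuples):

```python
from itertools import permutations

def compose(p, q):           # (p*q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    r = [0] * len(p)
    for i, v in enumerate(p):
        r[v] = i
    return tuple(r)

G = list(permutations(range(4)))
H = {(0, 1, 2, 3), (1, 0, 2, 3)}          # H = <(0 1)>, order 2

conjugates = {frozenset(compose(compose(g, h), inverse(g)) for h in H)
              for g in G}
normalizer = [g for g in G
              if {compose(compose(g, h), inverse(g)) for h in H} == H]

print(len(conjugates), len(G) // len(normalizer))   # 6 6: they agree
```

Here $N_G(H)$ has order 4 (the permutations preserving $\{0,1\}$ as a set), so the index $24/4 = 6$ matches the six conjugate subgroups, one per transposition.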
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 45, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9546825885772705, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2007/09/14/ab-categories/?like=1&_wpnonce=68d26d17fa
# The Unapologetic Mathematician

## Ab-Categories

Now that we've done a whole lot about enriched categories in the abstract, let's look at the very useful special case of categories enriched over $\mathbf{Ab}$ — the category of abelian groups.

We know that $\mathbf{Ab}$ is a monoidal category, with the tensor product of abelian groups as its monoidal structure and the free abelian group $\mathbb{Z}$ as the monoidal identity. Even better, it's symmetric, and even closed. That is, for any two abelian groups $A$ and $B$ we have an isomorphism $A\otimes B\cong B\otimes A$, and there is a natural abelian group structure on the set of homomorphisms $B^A=\hom_\mathbf{Ab}(A,B)$ satisfying the adjunction $\hom_\mathbf{Ab}(A\otimes B,C)\cong\hom_\mathbf{Ab}(A,C^B)$. Further, $\mathbf{Ab}$ is complete and cocomplete. All together, this means it's a great candidate as a base category on which to build enriched categories. Of course, these will be called $\mathbf{Ab}$-categories.

So let's read the definitions. An $\mathbf{Ab}$-category $\mathcal{C}$ has a collection of objects, and between objects $A$ and $B$ there is an abelian hom-group $\hom_\mathcal{C}(A,B)$. For each object $C$ we have a homomorphism of abelian groups $\mathbb{Z}\rightarrow\hom_\mathcal{C}(C,C)$ which picks out the "identity morphism" from $C$ to itself at the level of the underlying sets. Remember that we're no longer thinking of an abelian group as having elements; only its underlying set has elements anymore, and the underlying set of an abelian group $X$ is the set of abelian group homomorphisms $\mathbb{Z}\rightarrow X$.

Given three objects $A,B,C\in\mathcal{C}$ we have a "composition" arrow in $\mathbf{Ab}$: $\circ:\hom_\mathcal{C}(B,C)\otimes\hom_\mathcal{C}(A,B)\rightarrow\hom_\mathcal{C}(A,C)$. This is associative, and the identity morphism acts as an identity, in the sense that the appropriate diagrams commute. Of course, since the composition arrows are morphisms in $\mathbf{Ab}$ they are linear functions in each input.

An $\mathbf{Ab}$-functor $F$ between $\mathbf{Ab}$-categories $\mathcal{C}$ and $\mathcal{D}$ is defined by a function $F$ from the objects of $\mathcal{C}$ to the objects of $\mathcal{D}$, and for each pair of objects $C,C'\in\mathcal{C}$ a homomorphism of abelian groups $F_{C,C'}:\hom_\mathcal{C}(C,C')\rightarrow\hom_\mathcal{D}(F(C),F(C'))$. Two diagrams are required to commute, saying that these linear functions preserve the composition and identity functions.

An $\mathbf{Ab}$-natural transformation comes in one of two forms. In one we're given two $\mathbf{Ab}$-functors $F$ and $G$. Then a natural transformation $\eta:F\rightarrow G$ is a collection of linear functions $\eta_C:\mathbb{Z}\rightarrow\hom_\mathcal{D}(F(C),G(C))$ making one diagram commute. In the other we're given an object $K\in\mathcal{D}$ and a bifunctor $T:\mathcal{C}^\mathrm{op}\otimes\mathcal{C}\rightarrow\mathcal{D}$. Then $\eta:K\rightarrow T$ is a collection of linear functions $\eta:K\rightarrow T(C,C)$ making another diagram commute.

Together, $\mathbf{Ab}$-categories, $\mathbf{Ab}$-functors between them, and $\mathbf{Ab}$-natural transformations (of the first kind) form a 2-category. We can pair off $\mathbf{Ab}$-categories $\mathcal{C}$ and $\mathcal{D}$ to get the product category $\mathcal{C}\otimes\mathcal{D}$ (in fact we already did once above) and we can take the opposite category $\mathcal{C}^\mathrm{op}$. Thus $\mathbf{Ab}$-categories form a symmetric monoidal 2-category with a duality involution.
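(For concreteness, "linear in each input" above unpacks, in my own paraphrase, to $\mathbb{Z}$-bilinearity of composition, which is exactly what a morphism out of the tensor product encodes:)

```latex
% Z-bilinearity of composition, for f, f' in hom(B,C) and g, g' in hom(A,B):
(f + f') \circ g = f \circ g + f' \circ g,
\qquad
f \circ (g + g') = f \circ g + f \circ g'
```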
There’s a whole lot of structure here, but ultimately it boils down to “the hom-sets all have the structure of abelian groups, and everything in sight is $\mathbb{Z}$-linear”. And that’s the usual definition given, that I decided to forgo back when I started in on enriched categories.

Posted by John Armstrong | Category theory

## 1 Comment »

1. [...] of Ab-Categories There are a number of things we can say right off about the -categories we defined last time. As is common practice, we’ll blur the distinction between an abelian group and its [...]

Pingback by | September 17, 2007 | Reply
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 54, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9094445705413818, "perplexity_flag": "head"}
http://mathhelpforum.com/calculus/21840-need-help-limit-proof-print.html
need help with limit proof

• November 2nd 2007, 07:55 AM MKLyon

need help with limit proof

I'm having trouble thinking this out. I have this problem: Suppose that for all $x$ in $(-c, c)$ the identity $a_0 + a_1x + \cdots + a_{n-1}x^{n-1} + (a_n + A(x))x^n = b_0 + b_1x + \cdots + b_{n-1}x^{n-1} + (b_n + B(x))x^n$ holds, where $\lim_{x\to 0} A(x) = \lim_{x\to 0} B(x) = 0$. How would I show that $a_0 = b_0$, $a_1 = b_1$, ..., $a_n = b_n$?

It seems to make sense, but I'm not quite sure how to show it. How would you do it? Thanks for any help on this.

• November 2nd 2007, 07:59 AM ThePerfectHacker

Quote: Originally Posted by MKLyon
I'm having trouble thinking this out. I have this problem: Suppose that for all $x$ in $(-c, c)$ the identity $a_0 + a_1x + \cdots + a_{n-1}x^{n-1} + (a_n + A(x))x^n = b_0 + b_1x + \cdots + b_{n-1}x^{n-1} + (b_n + B(x))x^n$ holds, where $\lim_{x\to 0} A(x) = \lim_{x\to 0} B(x) = 0$. How would I show that $a_0 = b_0$, $a_1 = b_1$, ..., $a_n = b_n$?

$a_0 + a_1x+...+ (a_n +A(x))x^n = b_0+b_1x+...+(b_n+B(x))x^n$

Take the limit $x\to 0$ of both sides, and we get $a_0=b_0$. Subtract these from both sides to get

$a_1x+...+(a_n+A(x))x^n = b_1x+...+(b_n+B(x))x^n$

Divide by $x\not = 0$ to get

$a_1+...+(a_n+A(x))x^{n-1}=b_1+...+(b_n+B(x))x^{n-1}$.

Take the limit again: $a_1 = b_1$. Keep on repeating this argument.

• November 2nd 2007, 08:13 AM MKLyon

That's really clever. Thank you. One other question: If the polynomial $p(x) = \sum_{k=0}^{n} a_kx^k$ for all $x$ in $(-c,c)$, what would be the coefficients $a_k$? Thanks again for the help.

• November 2nd 2007, 10:36 AM ThePerfectHacker

Quote: Originally Posted by MKLyon
That's really clever. Thank you. One other question: If the polynomial $p(x) = \sum_{k=0}^{n} a_kx^k$ for all $x$ in $(-c,c)$, what would be the coefficients $a_k$? Thanks again for the help.

For any polynomial $p(x)$ we can write $p(x) = \sum_{n=0}^{\infty} \frac{p^{(n)}(0)}{n!}x^n$*.

*) This is actually a finite sum because eventually the derivative is zero. Take for example $p(x) = 1+x+x^2$; then $p'''(x) = 0$, so $p^{(n)}(x) = 0$ for all $n\geq 3$.

• November 2nd 2007, 11:24 AM MKLyon

If the polynomial (I'll write out the sum in long form) is $a_0 + a_1x + \cdots + a_nx^n$ and it equals zero, doesn't it mean all the coefficients $a_k$ must be zero? Is this the answer to my second question?

• November 2nd 2007, 11:32 AM ThePerfectHacker

Quote: Originally Posted by MKLyon
If the polynomial (I'll write out the sum in long form) is $a_0 + a_1x + \cdots + a_nx^n$ and it equals zero, doesn't it mean all the coefficients $a_k$ must be zero? Is this the answer to my second question?

A polynomial $f(x)$ which is non-zero has at most $\deg f(x)$ zeros. So if $f(x)$ is always zero on an interval then it must be the zero polynomial, because it can only have a finite number of zeros.
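ThePerfectHacker's coefficient formula is easy to spot-check on his own example, $p(x) = 1 + x + x^2$; a minimal SymPy sketch (mine, not from the thread):

```python
from sympy import symbols, diff, factorial, expand

x = symbols('x')
p = 1 + x + x**2
rebuilt = sum(diff(p, x, n).subs(x, 0) / factorial(n) * x**n
              for n in range(4))        # terms with n >= 3 vanish
print(expand(rebuilt))                   # x**2 + x + 1, i.e. p itself
```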
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 16, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9387727379798889, "perplexity_flag": "middle"}
http://physics.stackexchange.com/questions/41856/does-lambda-nu-c-hold-for-all-the-waves-in-the-universe
# Does $\lambda\nu = c$ hold for all the waves in the universe?

Are all waves in the universe the same as electromagnetic waves? Basically, my question arises from an equation I found in my chemistry textbook: $$\lambda \nu ~=~ c.$$ This states that the wavelength (distance from crest to crest) times the frequency (the number of times the wave passes a fixed point per unit time) equals the speed of light. Now, I know this applies in a vacuum, and that the speed of light changes based on the medium. However, does this apply to all waves? If so, does this apply to a wave in water?

It seems as though it should. You have a frequency of light that is incredibly high compared to water, but you have an incredibly small wavelength, which causes that value to be lower. In the case of water, if you considered only the wavelength, the value would be too high, but if you also consider that the frequency is going to be orders of magnitude lower than light, you can see where I might arrive at this conclusion.

If we measure the crest of a wave of water and the frequency of that wave (I assume from the surface of the body of water) and consider the density of the water in our calculation (as some other variable that is normally used when calculating this), would the result also be the speed of light? In addition, if water was within a vacuum and we were to create a wave, how would this react? If we could create a wave in water in this vacuum, would our values reflect $c$ or a variation of $c$ based on the density of the water?

If $\lambda \nu ~=~ c$ is valid for all waves, what other attributes must be supplied within the equation to make the math work out to give the correct answer of $c$?

-

This equation is not derived, it's straightforward; in a sense it is something like $a\frac{b}{c}\frac{c}{b}=a$, and there is really no hidden physics inside. To understand this better I suggest you see how one constructs it. – TMS Oct 28 '12 at 20:10

## 1 Answer

Yes, this equation applies to all waves... with the caveat that you replace $c$ by the speed of the wave you're studying! In a water wave, the product of the wavelength and the frequency will be the speed of the water wave, not of light. For sound waves in air, it will be the speed of sound, etc. Because of this, the general form of the equation you provided is: $$\lambda \nu ~=~ v_{wave}.$$

Another interesting thing is that the speed of the wave need not be constant. The equation is always valid, but it might be possible that the wavelength depends on the frequency, in which case the speed will also depend on the frequency. This happens with water waves; you can notice not all waves travel at the same speed on the ocean. It also happens with light; light of different colours travels at different speeds through glass, which is what allows a prism to disperse white light into a rainbow. When the speed depends on the frequency, we call it "dispersion". If it doesn't, then the wave speed is the same for all waves of that type.

-

Why is it $v_{wave}$ instead of $c$? Is it because the wave has mass or because some attribute of the wave is not accounted for in the equation (such as its density)? – Jonathan Hickman Oct 28 '12 at 12:34

@JonathanHickman The speed of the wave depends on the nature of the medium carrying the disturbance.
For electromagnetic waves the disturbance is carried along in the electric and magnetic fields obeying Maxwell's equations, and the wave speed is determined by the vacuum permittivity and vacuum permeability that appear as physical constants in those equations. But waves show up in other media, too. For a wave on a string, the speed will be determined by the tension of the string and its mass/length, and this speed will be very different from the speed of light $c$. – kleingordon Oct 28 '12 at 23:48
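A few illustrative numbers for $\lambda \nu = v_{wave}$ (rough textbook-style values that I am assuming for the sake of the example, not figures from the thread):

```python
waves = [
    ("green light in vacuum",  520e-9, 5.77e14),  # metres, hertz -> ~3.0e8 m/s
    ("sound in air (A440)",    0.78,   440.0),    # -> ~343 m/s
    ("deep-water ocean swell", 100.0,  0.125),    # -> ~12.5 m/s
]
for name, wavelength, frequency in waves:
    print(f"{name}: v = {wavelength * frequency:.3g} m/s")
```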
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9571959376335144, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/243761/prove-that-if-a-n-a-n-1-frac12n1-and-a-0-frac12-then-a/243766
# Prove that if $|a_n-a_{n-1}| < \frac{1}{2^{n+1}}$ and $a_0=\frac12$, then $\{a_n\}$ converges to $0<a<1$

I am trying to solve this question but I don't know how. Given $a_0 = \frac12$ and, for each $n\geq 1$, $$|a_n-a_{n-1}| < \frac{1}{2^{n+1}},$$ show that $\{a_n\}$ converges and the limit is $a$ such that $0<a<1$.

Update (Edited): I showed by the Cauchy criterion that, for $m > n$, $|a_m-a_n| = |a_m-a_{m-1}+a_{m-1}-\cdots+a_{n+1}-a_n| \le |a_m-a_{m-1}|+\cdots+|a_{n+1}-a_n| < \frac{1}{2^{m+1}} + \frac{1}{2^{m}}+\cdots+\frac{1}{2^{n+2}}$. By the sum of a geometric series (ratio $2$, first term $\frac{1}{2^{m+1}}$), $$\frac{1}{2^{m+1}} + \frac{1}{2^{m}}+\cdots+\frac{1}{2^{n+2}} = \frac{1}{2^{m+1}}\left[\frac{2^{m-n}-1}{2-1}\right] = \frac{1}{2^{n+1}}-\frac{1}{2^{m+1}}\leq\frac{1}{2^{n+1}}$$ so the sequence converges!

Can someone help me please to show that the limit is $a$ with $0<a<1$? Thank you!

-

2 You can prove that $(a_n)$ is Cauchy. – sos440 Nov 24 '12 at 15:13

but what is $a_n$? – Alon Shmiel Nov 24 '12 at 15:15

By series what do you mean? – sos440 Nov 24 '12 at 15:17

4 Try to estimate $|a_n-a_m| = |a_n-a_{n-1}+a_{n-1}-...-a_m|$. – copper.hat Nov 24 '12 at 15:17

1 Thanks, Alon. Let me know if the edit/update is correct. – amWhy Nov 25 '12 at 2:46

## 3 Answers

Hint: Fix any $m$, and then use the triangle inequality and induction to show that $$|a_n-a_m|<\frac1{2^{m+1}}-\frac1{2^{n+1}}$$ for all $n>m$. It follows from this that $\{a_n\}$ is Cauchy, and so converges, say to $a$. In particular, setting $m=0$, we have $$|a_n-a_0|<\frac12-\frac1{2^{n+1}}$$ for all $n$, and since $a_0=\frac12$, we have $0<a_n<1$ for all $n>0$. Thus, $0\leq a\leq 1$.

It remains only to show that $\{a_n\}$ cannot converge to $0$ or to $1$. Note that if we set $b_n=1-a_n$, then $\{b_n\}$ has all the same characteristics as $\{a_n\}$. If we can show that $a\neq 0$, then identical arguments show $b\neq 0$, and so $a=1-b\neq 1$, completing the proof. Thus, we need only show that $a>0$. I recommend noting that $a_1=\frac14+c$ for some $c>0$, and using the work above to conclude that $a\geq c$.

Addendum: Don't waste your time trying to determine the precise values of all the $a_n$s, nor of $a$; there simply isn't enough information given for us to determine this. Fortunately, we don't need that much information to prove the desired results.

-

Hint: $\vert a_n - a_0 \vert = \vert a_n - a_{n-1} + a_{n-1} - \dots - a_1 + a_1 - a_0 \vert \leq \sum_{k=1}^n \vert a_k - a_{k-1} \vert$

-

You can also prove this generalization the same way: if $|a_{n+1}-a_n| < c_n$ where $c_n$ is decreasing and $\sum_{n=1}^{\infty} c_n$ converges, then $\lim_{n \to \infty} a_n$ exists and is less than $a_1+\sum_{n=1}^{\infty} c_n$.

-

thank you Marty, but what about $0 < a < 1$? Does it show me that? – Alon Shmiel Nov 25 '12 at 3:39
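A small numerical illustration of the bound in the accepted hint (my own sketch, not part of the thread): no matter how the signs fall, the partial sums can never leave $(0,1)$, since the total possible drift from $a_0 = \frac12$ is strictly less than $\frac12$.

```python
import random

random.seed(0)
a = 0.5
for n in range(1, 40):
    # any increment with |increment| < 1/2**(n+1) is allowed
    a += random.choice((-1, 1)) * 0.999 / 2 ** (n + 1)
    assert 0 < a < 1      # |a_n - a_0| < 1/2 - 1/2**(n+1), as in the hint
print(a)                  # one possible limit, strictly inside (0, 1)
```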
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 48, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9276646375656128, "perplexity_flag": "head"}
http://mathoverflow.net/questions/12154?sort=newest
## Local view of setting $pn$ out of $n$ bits to 1

For $p$ a constant in $(0,1)$ and $n$ going to infinity such that $pn$ is an integer, consider the distribution on $n$ bits that selects a random subset of $pn$ bits, sets those to 1, and sets the others to 0. What is the largest $k = k(n,p)$ so that the induced distribution on any $k$ bits is $1/10$-close in total variation distance (a.k.a. statistical distance) to the distribution that sets each bit to 1 independently with probability $p$?

For every $p$ I would like to know $k$ up to a sublinear (i.e. $o(n)$) additive term. (For starters, $p = 1/8$ is good too.) Does anybody know of a place where this is worked out?

Thanks! Emanuele

-

## 2 Answers

You want $\frac15 = \sum_t |P_1(\text{count}=t) - P_2(\text{count}=t)|$, where $P_1$ has a binomial distribution and $P_2$ is hypergeometric. The difference between these distributions is shown in this Mathematica demonstration.

I believe both are reasonably well approximated by normal distributions. Both have mean $pk$. The variance for the binomial distribution is $kp(1-p)$, while it is $\frac{n-k}{n-1}\,kp(1-p)$ for the hypergeometric distribution. So the value of $k$ should be such that the normal distributions $N(0,1)$ and $N(0,\sqrt{\frac{n-k}{n-1}})$ have total variation distance $\frac1{10}$. That should be at about $k=(1-c)n$ where $N(0,1)$ and $N(0,\sqrt{c})$ are $\frac1{10}$ apart. Numerically, it seems that $c$ should be about 0.6605, so $\sqrt{c}$ should be about 0.8127. $k = 0.3395n$. It appears this is not sensitive to the value of $p$.

-

Thanks, does the normal approximation to the hypergeometric hold with negligible error even for such large $k$? – Emanuele Viola Jan 18 2010 at 0:41

1 Yes: dartmouth.edu/~chance/teaching_aids/… The difficulty in using a normal approximation occurs when $p$ is near 0 or 1 or $k$ is near $n$. This has also been studied stat.tamu.edu/~cha/sub-gaussian-jspi-07.pdf but the first reference is the relevant one. – Douglas Zare Jan 18 2010 at 1:59

Thanks for the references! But the error bound is not clear to me even at a high level: already in the Berry-Esseen bound the pointwise error is about $1/\sqrt{n}$, which won't allow us to sum over $k = \Theta(n)$ points. Perhaps one can combine this with a tail bound, but it seems it won't be easy to get an estimate on $k$ up to an additive $o(n)$, right? – Emanuele Viola Jan 18 2010 at 17:03

1 As the second reference indicates, the Berry-Esseen error estimate for the normal approximation is about as good for the hypergeometric as for the binomial distribution. You don't need a pointwise error bound. You can use Berry-Esseen on the three intervals, where the middle one is where $P_2 \gt P_1$, or in fact, you can just use it on the middle interval. The total variation distance is double the sum of $P_2-P_1$ over that interval. – Douglas Zare Jan 18 2010 at 17:56

Hi Emanuele. Short answer: take a look at Theorem 3.2 in this paper by Diaconis and Holmes: http://www-stat.stanford.edu/~susan/papers/steinbirthdeath.pdf as well as its reference to Diaconis and Freedman (1981). It seems the optimal $k$ is known to be $\Theta(n)$, independent of $p$.
I have some questions for you though: Your choice of 1/10 seems a bit "arbitrary", which makes me curious to know whether you really want the correct value of $k$ up to $o(n)$... I think changing 1/10 to 1/20, say, would change $k$ by a linear amount. So if the answer is $k = cn$, you really want to know how $c$ depends on 1/10?

Another question: Perhaps another way to attack this problem is to identify the event $A$ on which the hypergeometric and binomial random variables have the most differing probabilities. Is it possible to compute this exactly, or at least decide whether it is an event of the form $A = \{u : a \leq u \leq b\}$?

-

Thanks a lot for the reference! Indeed that's very relevant. In fact I was hoping to get a tight bound on $k$ as a function of both $p$ and the error (now set to 1/10). The paper by Diaconis and Freedman gets close enough by giving reasonable constants. Maybe doing better than that is painful, so I'll accept your answer. Regarding your other question, I agree that's a possible approach but it doesn't look easy to me. – Emanuele Viola Jan 18 2010 at 22:26

@Emanuele: So, you don't actually want $k$ up to $o(n)$, which was what my answer gave? – Douglas Zare Jan 19 2010 at 11:08

@Douglas: Indeed I'd prefer to have $k$ up to $o(n)$, but I was slow at understanding the details of your answer. So you are saying the points where the hypergeometric is larger than the binomial always form an interval (seems like what Ryan was asking too), so we just apply the bound in your second reference to that interval. That makes sense. Is it easy to verify this interval property? – Emanuele Viola Jan 19 2010 at 19:08

Yes, expand the explicit formulas for the binomial and hypergeometric probabilities. Look at the ratio $(P_1(t)/P_2(t))/(P_1(t+1)/P_2(t+1))$. Cancel almost everything to get a simple quotient of polynomials. Use this near where the ratio $P_1(t)/P_2(t)$ is close to 1. – Douglas Zare Jan 20 2010 at 17:27

I haven't verified if this works (partially because the statement of the Berry-Esseen bound for the hypergeometric distribution looks a bit scary to me) but I am accepting Douglas' answer because the combination of the observations in the comments should indeed answer my original question. Thanks! – Emanuele Viola Jan 22 2010 at 23:37
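For anyone wanting to reproduce the numbers, here is a direct computation of the total variation distance (my own sketch using SciPy; note that `hypergeom.pmf(t, M, n, N)` takes population size, number of ones, and number of draws in that order):

```python
import numpy as np
from scipy.stats import binom, hypergeom

n, p = 20000, 0.125                    # p = 1/8 as in the question
for frac in (0.30, 0.3395, 0.40):
    k = int(frac * n)
    t = np.arange(k + 1)
    tv = 0.5 * np.abs(binom.pmf(t, k, p)
                      - hypergeom.pmf(t, n, int(p * n), k)).sum()
    print(f"k = {frac:.4f} n: TV ~ {tv:.3f}")
# TV crosses 1/10 near k ~ 0.34 n, matching the estimate in the answer
```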
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 24, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9432949423789978, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/37191/list
# Integration involving the complete elliptic integral of the first kind K(k)?

Is there any reference showing how to do definite integrals involving the complete elliptic integral of the first kind $K(k)$? Something like

1. $\int_0^1 K(k)\, dk$

2. $\int_0^1 k^nK(k)\, dk$

3. $\int_0^1 \frac{K(k)}{1+k}\, dk$

etc. Thanks a lot.
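Not a reference, but the values are easy to obtain numerically (my own sketch; be careful that SciPy's `ellipk` takes the parameter $m = k^2$, not the modulus $k$):

```python
from scipy.integrate import quad
from scipy.special import ellipk

integrands = [
    ("int_0^1 K(k) dk",       lambda k: ellipk(k**2)),
    ("int_0^1 k*K(k) dk",     lambda k: k * ellipk(k**2)),
    ("int_0^1 K(k)/(1+k) dk", lambda k: ellipk(k**2) / (1 + k)),
]
for label, f in integrands:
    value, _ = quad(f, 0, 1)   # the log singularity at k=1 is integrable
    print(label, "=", round(value, 6))
# the first comes out ~1.831931, matching 2G with G Catalan's constant
```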
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.7322914600372314, "perplexity_flag": "middle"}
http://unapologetic.wordpress.com/2007/12/27/the-chain-rule/?like=1&source=post_flair&_wpnonce=e94a57fbb5
# The Unapologetic Mathematician

## The Chain Rule

Today we get another rule for manipulating derivatives. Along the way we’ll see another way of viewing the definition of the derivative which will come in handy in the future.

Okay, we defined the derivative of the function $f$ at the point $x$ as the limit of the difference quotient:

$\displaystyle f'(x)=\lim\limits_{\Delta x\rightarrow0}\frac{f(x+\Delta x)-f(x)}{\Delta x}$

The point of the derivative-as-limit-of-difference-quotient is that if we adjust our input by $\Delta x$, we adjust our output “to first order” by $f'(x)\Delta x$. That is, the change in output is roughly the change in input times the derivative, and we have a good idea of how to control the error:

$\displaystyle\left(f(x+\Delta x)-f(x)\right)-f'(x)\Delta x=\epsilon(\Delta x)\Delta x$

where $\epsilon$ is a function of $\Delta x$ satisfying $\lim\limits_{\Delta x\rightarrow0}\epsilon(\Delta x)=0$. This means the difference between the actual change in output and the change predicted by the derivative not only goes to zero as we look closer and closer to $x$, but it goes to zero fast enough that we can divide it by $\Delta x$ and still it goes to zero. (Does that make sense?)

Okay, so now we can use this viewpoint on the derivative to look at what happens when we follow one function by another. We want to consider the composite function $f\circ g$ at the point $x_0$ where $f$ is differentiable. We’re also going to assume that $g$ is differentiable at the point $f(x_0)$. The differentiability of $f$ at $x_0$ tells us that

$\displaystyle\left(f(x_0+\Delta x)-f(x_0)\right)=f'(x_0)\Delta x+\epsilon(\Delta x)\Delta x$

and the differentiability of $g$ at $y_0$ tells us that

$\displaystyle\left(g(y_0+\Delta y)-g(y_0)\right)=g'(y_0)\Delta y+\eta(\Delta y)\Delta y$

where $\lim\limits_{\Delta x\rightarrow0}\epsilon(\Delta x)=0$, and similarly for $\eta$.

Now when we compose the functions $f$ and $g$ we set $y_0=f(x_0)$, and $\Delta y$ is exactly the value described in the first line! That is,

$\displaystyle \left[f\circ g\right](x_0+\Delta x)-\left[f\circ g\right](x_0)=g(f(x_0)+f'(x_0)\Delta x+\epsilon(\Delta x)\Delta x)-g(f(x_0))=$

$\displaystyle g'(f(x_0))\left(f'(x_0)\Delta x+\epsilon(\Delta x)\Delta x\right)+\eta(\Delta y)\left(f'(x_0)\Delta x+\epsilon(\Delta x)\Delta x\right)=$

$\displaystyle g'(f(x_0))f'(x_0)\Delta x+\left(g'(f(x_0))\epsilon(\Delta x)+\eta(\Delta y)\left(f'(x_0)+\epsilon(\Delta x)\right)\right)\Delta x$

The last quantity in parentheses which we multiply by $\Delta x$ goes to zero as $\Delta x$ does. First, $\epsilon(\Delta x)$ does by assumption. Then as $\Delta x$ goes to zero, so does $\Delta y$, since $f$ must be continuous. Thus $\eta(\Delta y)$ must go to zero, and the whole quantity is then zero in the limit.

This establishes that not only is $f\circ g$ differentiable at $x_0$, but that its derivative there is

$\displaystyle\left[f\circ g\right]'(x_0)=\frac{d}{dx}g(f(x))\bigg|_{x=x_0}=g'(f(x_0))f'(x_0)$

This means that since “to first order” we get the change in the output of $f$ by multiplying the change in its input by $f'(x_0)$, and “to first order” we get the change in the output of $g$ by multiplying the change in its input by $g'(y_0)$, we get the change in the output of their composite by multiplying first by $f'(x_0)$ and then by $g'(y_0)=g'(f(x_0))$.

Another way we often write the chain rule is by setting $y=f(x)$ and $z=g(y)$. Then the derivative $f'(x)$ is written $\frac{dy}{dx}$, while $g'(y)$ is written $\frac{dz}{dy}$.
The chain rule then says:

$\displaystyle \frac{dz}{dx}=\frac{dz}{dy}\frac{dy}{dx}$

This is nice since it looks like we’re multiplying fractions. The drawback is that we have to remember in our heads where to evaluate each derivative.

Now we can take this rule and use it to find the derivative of the inverse of an invertible function $f$. More specifically, if a function $f$ is one-to-one in some neighborhood of a point $x_0$, we can find another function $f^{-1}$ whose domain is the set of values $f$ takes — the range of $f$ — and so that $f(f^{-1}(x))=x=f^{-1}(f(x))$. Then if the function is differentiable at $x_0$ and the derivative $f'(x_0)$ is not zero, the inverse function will be differentiable, with a derivative we will calculate.

First we set $y=f(x)$ and $x=f^{-1}(y)$. Then we take the derivative of the defining equation of the inverse to get $\frac{df^{-1}}{dy}\frac{df}{dx}=1$, which we could write even more suggestively as $\frac{dx}{dy}\frac{dy}{dx}=1$. That is, the derivative of the composition inverse of our function is the multiplicative inverse of the derivative. But as we noted above, we have to remember where to evaluate everything. So let’s do it again in the other notation. Since $f^{-1}(f(x))=x$, we differentiate to find $\left[f^{-1}\right]'(f(x))f'(x)=1$. Then we substitute $x=f^{-1}(y)$ and juggle some algebra to write

$\displaystyle\left[f^{-1}\right]'(y)=\frac{1}{f'(f^{-1}(y))}$

Posted by John Armstrong | Analysis, Calculus

## 43 Comments »

1. [...] push away to the point . There we find a difference of . But we saw this already in the lead-up to the chain rule! This is the function , where . That is, not only does the difference go to zero — the line [...]

Pingback by | December 28, 2007 | Reply

2. I will admit that I couldn’t understand much of it.

Comment by Barimalch | December 28, 2007 | Reply

3. Well, the important bit is the formula in the middle that tells how to find the derivative of the composite. Above that is just making sure we have the right answer, and below that we’re just spinning off a side result about inverse functions.

Comment by | December 28, 2007 | Reply

4. It might help to say a little, too, on what John’s argument is intended to improve upon. There’s a somewhat fallacious line of reasoning that would say that the derivative of the composite function which sends x to f(g(x)) can be computed as $\lim\limits_{y \to x} \frac{f(g(y)) - f(g(x))}{y - x} = \lim\limits_{y \to x} \frac{f(g(y)) - f(g(x))}{g(y) - g(x)} \frac{g(y) - g(x)}{y - x}$, and noting that as y approaches x, g(y) approaches g(x), so that the limit of the first factor is the derivative of f evaluated at g(x), and the limit of the second factor is the derivative of g at x.

One trouble with this informal argument though is that g(y) might equal g(x) infinitely often as y approaches x, so that the first factor may involve illegal division by 0 infinitely often, as y moves toward x. From one point of view, John’s argument is a clever way around that objection.

But from a deeper point of view, John’s argument is emphasizing the role of the derivative f’(x) in giving the best linear approximation to a function f at a point x: the so-called “slope” m = f’(x) gives, to a first order of approximation, the expansion or dilation factor involved in passing from a small directed line segment $\left[x, x+\Delta x\right]$ to the corresponding segment from $f(x)$ to $f(x + \Delta x)$.
Posted by John Armstrong | Analysis, Calculus

## 41 Comments »

1. […] push away to the point . There we find a difference of . But we saw this already in the lead-up to the chain rule! This is the function , where . That is, not only does the difference go to zero — the line […]

Pingback by | December 28, 2007 | Reply

2. I will admit that I couldn't understand much of it.

Comment by Barimalch | December 28, 2007 | Reply

3. Well, the important bit is the formula in the middle that tells how to find the derivative of the composite. Above that is just making sure we have the right answer, and below that we're just spinning off a side result about inverse functions.

Comment by | December 28, 2007 | Reply

4. It might help to say a little, too, on what John's argument is intended to improve upon. There's a somewhat fallacious line of reasoning that would say that the derivative of the composite function which sends x to f(g(x)) can be computed as $\lim\limits_{y \to x} \frac{f(g(y)) - f(g(x))}{y - x} = \lim\limits_{y \to x} \frac{f(g(y)) - f(g(x))}{g(y) - g(x)} \frac{g(y) - g(x)}{y - x}$, and noting that as y approaches x, g(y) approaches g(x), so that the limit of the first factor is the derivative of f evaluated at g(x), and the limit of the second factor is the derivative of g at x. One trouble with this informal argument, though, is that g(y) might equal g(x) infinitely often as y approaches x, so that the first factor may involve illegal division by 0 infinitely often as y moves toward x. From one point of view, John's argument is a clever way around that objection. But from a deeper point of view, John's argument is emphasizing the role of the derivative f'(x) in giving the best linear approximation to a function f at a point x: the so-called "slope" m = f'(x) gives, to a first order of approximation, the expansion or dilation factor involved in passing from a small directed line segment $\left[x, x+\Delta x\right]$ to the corresponding segment from $f(x)$ to $f(x + \Delta x)$. The point of the chain rule is that when you first apply g and then apply f, you apply one rate of expansion and then another: that means the expansion rates get multiplied. The only thing you have to be careful about is to specify the points where the expansion rates are computed. If you start off at x and apply g, you first apply the expansion rate g'(x). If you then pick up where you left off, g(x), the expansion rate of f there is f'(g(x)). So the expansion rate at x of the composite "g followed by f" is g'(x) times f'(g(x)). That's the chain rule. In multivariate calculus, the multidimensional analogue of "expansion rate" is not just a number but a matrix, or better yet, a "linear transformation" which describes, to a first order of approximation, what f does to a small parallelepiped based at x (as opposed to just a small line segment based at x). The chain rule holds in higher dimensions; the slogan is "matrices multiply" (just as in the ordinary chain rule).

Comment by Todd Trimble | December 28, 2007 | Reply

5. Exactly right, Todd (given a few LaTeX tweaks). The post was a bit long to get into the tempting-but-wrong approach. Naturally I'll be doing the matrices and emphasizing the operator nature of the derivative (more explicitly) when I get to multiple variables. And of course I'll get to whip out the old f-word.

Comment by | December 28, 2007 | Reply

6. If you look at differentiation as division in a certain class of functions (say, continuous ones), the chain rule becomes obvious. See my web page at http://www.mathfoolery.org and especially a short summary of treating calculus via algebra and uniform estimates at http://www.mathfoolery.org/talk-2004.pdf I like this web site and would like to put some of my stuff on it. It looks like it's easy to include LaTeX-formatted material here.

Comment by | January 5, 2008 | Reply

7. Michael, that works to give an idea of how to think about some of the differentiation rules, but the rubric of quotients doesn't work at all once we move to multi-variable functions. The chain rule, in particular, looks a lot less like multiplying fractions. As for the site as a whole: do you mean WordPress, or my own weblog in particular? If it's the latter you mean, I'm sorry, but I'm not looking to co-host. It's easy enough to get a free WordPress-hosted weblog, though, as many mathematicians and students already have.

Comment by | January 6, 2008 | Reply

8. I guess I meant WordPress, although your blog is not bad either. As for the chain rule, in many variables we can look at differentiation as linear approximation, i.e., the estimate $|f(x+h)-f(x)-f'(x)h| \le |h|m(|h|)$ where $m$ is some modulus of continuity. That will work.

Comment by | January 15, 2008 | Reply

9. Yes, that's another definition, but it's still not really a "division" as you asserted.

Comment by | January 15, 2008 | Reply

10. I initially asserted nothing for many variables; it was your idea. Division works great for 1 variable, you must admit. But if you insist on division in many variables, you can do it too, say, for partial or directional derivatives, by saying that $f$ is differentiable in direction $h$ if $t$ divides $f(x+th)-f(x)$ in a reasonable function ring (or module). In a sense, differentiable functions are those that take their values in a neat way, as polynomials do.
One more example, with a 2nd-order partial: $f(x,y)-f(x,0)-f(0,y)+f(0,0)$ vanishes for $x=0$ and $y=0$, so, if $f$ is nice, it must be divisible by $xy$, and the value of the fraction $(f(x,y)-f(x,0)-f(0,y)+f(0,0))/xy$ at $(0,0)$ is just $f_{xy}(0,0)$. When we work with the ring of continuous functions we recapture the classical analysis. So, differentiation can be reduced to division.

Comment by | January 15, 2008 | Reply

11. Division works as a mnemonic, but in the long run thinking of differentiation as division is harmful to a student because that viewpoint very much does not generalize.

Comment by | January 15, 2008 | Reply

12. Yes it does. Want to differentiate a distribution? Just divide $f(x)-f(a)$ by $x-a$ and evaluate at $x=a$. Besides being generalizable and rigorous, division is totally elementary for polynomials and is a nice stepping stone to classical analysis if the student wants to study it. There are already plenty of definitions of differentiation; viewing it algebraically is a nice unifying idea that simplifies the whole area. When you learn something new, it is better to start with simple examples to get going and motivated; the abstractions and generalities can wait. Also, once you derive the differentiation rules for polynomials, you don't have to unlearn them, since they extend to continuously differentiable functions by continuity, because any continuous function can be approximated by polynomials. The approach via algebra and uniform estimates is even better for future math majors, since it is closer to modern mathematical ideas. It's a bit unusual, but I'm sure you will like it once you try it. It allows us to present most of the subject in a bunch of problem sets that students can handle and enjoy.

Comment by | January 15, 2008 | Reply

13. Michael, I'm sorry but I just disagree here.

Comment by | January 15, 2008 | Reply

14. Hey, that's O.K., at least you have an opinion.

Comment by | January 16, 2008 | Reply

15. But I still hope you will take a look at http://www.mathfoolery.org/talk-2004.pdf and http://www.ams.org/bull/2007-44-04/S0273-0979-07-01174-3/home.html and see how much simpler everything becomes before you make up your mind. There is also a book by a Chinese mathematician on the way: http://www.fetchbook.info/fwd_description/search_9789812704597.html

Comment by | January 17, 2008 | Reply

16. Again, the notion of "division" provides a useful temporary way of thinking about differentiation, but a derivative is not a quotient, and I see students forced to unlearn exactly that crutch all the time. In the end, I think it does more harm than good because the average calculus student simply doesn't understand the subtlety of exactly where it's a valid analogy and where it isn't.

Comment by | January 17, 2008 | Reply

17. But if you deal with continuous functions, classical differentiation IS division. See, for example, page 4 of Weyl's Classical Groups, their Invariants and Representations, or page 236 of Analysis by Its History (Carathéodory formulation). It is not an analogy, it's a fully legitimate approach, can't you see?

Comment by | January 17, 2008 | Reply

18. No, it's not division, but it plays division on TV. Mathematicians spent a century working out the concepts of topology and limit precisely to make rigorous sense of this intuitive concept. Why did they have to do all that work? Because it fails at the edges. Seriously, I'm getting very tired of discussing this.

Comment by | January 17, 2008 | Reply
19. And if you are so much against division, you can use uniform differentiability right away, like Hermann Karcher from Bonn University did and Peter Lax did in his calculus book, http://www.fetchbook.info/fwd_description/search_9780387901794.html Arguing about what a derivative IS is rather naive; we should think instead about which approach to differentiation is simpler and easier to understand, and in my opinion algebra and uniform estimates win hands down, especially if we are concerned with the practical and applied parts of the subject.

Comment by | January 17, 2008 | Reply

20. So go ahead and teach it that way. I disagree, and I've said that over and over and over. What, precisely, do you want from me?

Comment by | January 17, 2008 | Reply

21. But you are not even discussing it, you just resist having your dogmas questioned, acting like a true believer, I'm sorry to say.

Comment by | January 17, 2008 | Reply

22. I don't want anything from you, I just wanted to share some ideas with you, hoping that you would find them interesting. Sorry for annoying you.

Comment by | January 17, 2008 | Reply

23. I'm not discussing it because I've had this discussion. And I've had it over and over and over again. And I've nursemaided student after student after student past the pitfalls where the intuition breaks down. I'm sick and tired of it. You have your own reasons for loving your approach, but I simply disagree. In my experience my own pedagogical approach communicates the concepts well enough. I tell students that thinking of division can help as a mnemonic, but can mislead you if taken too seriously, and I try to keep them ready to understand gradients and other more advanced derivatives when they come up later. Besides which, I've even been rather explicit in this weblog itself that I consider, for example, $\frac{x}{x}$ and $1$ to be different functions because the former is undefined at $x=0$. You seem to say that the two are the same function, while I have my reasons for keeping them separate, if closely related.

Comment by | January 17, 2008 | Reply

24. But $x/x=1$ in the ring of polynomials and in the ring of continuous functions; that's exactly the point!

Comment by | January 17, 2008 | Reply

25. Look, instead of prohibiting the students from using algebra, i.e. simplify-and-plug-in, and forcing them into using limits that are usually not well explained and difficult to understand, aren't we better off explaining to them why their simple-minded approach works? When we divide functions, we usually divide their expressions, not the individual values. There is no contradiction here: when we take a limit, we simply extend $(f(x+h)-f(x))/h$ to $h=0$ by continuity. It's just a special case of division, and we can use some other class of functions as well, such as polynomials, Lipschitz, etc.

Comment by | January 17, 2008 | Reply

26. No, they're not the same in the ring of continuous functions, because they have different domains. You're thinking of some fuzzily-defined "ring", where I'm looking at the sheaf of rings. The function $\frac{x}{x}$ is just not defined at $x=0$, though we have standard ways of talking around that problem. I even went through this in the link I gave. What you're advocating is exactly what mathematicians have been fighting against for decades. It's called "algebrizing the calculus", and the debate's been done to death. Again, you're free to advocate whatever position you want to your own students and in your own expositions, but I've looked at this before.
I'm not interested in discussing it because I've looked at it so many times. You're not telling me anything I haven't already seen, and then you complain that I don't take you seriously. Go, spread your gospel to someone who wants to hear it. I'm not buying.

Comment by | January 17, 2008 | Reply

27. Come on: polynomials, continuous functions, and Lipschitz functions are rings, without quotation marks, by the way, and not at all fuzzy, and $x/x=1$ in these rings. What is wrong with you? Why are you offended? Because I question your understanding of the subject you assume you know all about? As for buying, you don't have to, the product is free.

Comment by | January 17, 2008 | Reply

28. Let me make it clear: when I say that $f/g=h$ I mean $f=gh$; no need for sheaves here, rings of functions are enough. Of course, the catch is that we deal only with functions of a certain class, say continuous. If you look at it this way, calculus becomes an algebraic theory. I don't understand why you find it so offensive; calculus is just a part of the algebra of functions.

Comment by | January 17, 2008 | Reply

29. Sorry, the king has no clothes: calculus is an algebraic theory. I hope you will get over it.

Comment by | January 17, 2008 | Reply

30. Functions on what domain? You keep leaving that bit out. You can't divide by $x$ in the ring of continuous functions on $(-1,1)$. You sweep topological and analytic considerations into the word "continuous" and then declare that they don't exist, leaving calculus as some purely algebraic theory, when it simply isn't. There are large swaths of it which can be dealt with algebraically, but there are parts that can't. What about a function for which we have no formula? A black box whose insides we can't tinker with? We can't algebraically manipulate it, but we can still take limits, and we can still do calculus with it. I'm not offended. I just disagree with your opinion on whether this is a valuable pedagogical viewpoint or not. And I've said that over and over again, and you keep screaming back. What do you want?

Comment by | January 17, 2008 | Reply

31. Sorry to butt in, but can I offer some points of view? It sounds like Michael is arguing that for many of the typical functions f encountered in (1-dimensional) calculus, it's more or less harmless to speak of a globally defined quotient (f(x)-f(a))/(x-a), insofar as a singularity at x = a is obviously removable. For example, for polynomials f, it's obvious (e.g., by finite geometric series) that x-a divides f(x)-f(a). In part, John is arguing back that to make the notion of removable singularity rigorous, one needs at least some analysis. I can see some merit in each of these points, and I'd be surprised if John and Michael didn't. (I think there are larger points in the background, but I'll get to those.) For example, I think it's reasonable to argue that when teaching calculus to beginners, it can definitely be a distraction to harp on limits too much — for one thing, the definition is not easy to fully grasp at first. It sounds like Michael is saying that when teaching the derivative of, say, $f(x) = \sqrt{x}$ from first principles, there really is a sense in which the singularity at x = a of (f(x)-f(a))/(x-a) is obviously removable after a short algebraic manipulation, so why make heavy weather over limits here?
In more sophisticated applications, one may have to calculate some estimates and establish some Lipschitz constants before it becomes "obvious" (in the same sense) that a singularity is removable, but in practice this hope is often fulfilled. So, to put it jokily, in practice limits are frequently "removable singularities" — and maybe there's some good pedagogy there. But, to do the mathematics honestly and in greater generality (thinking here especially of situations in which one can't establish suitable uniform bounds or Lipschitz constants), John would be right that in order to make the notion of removable singularity fully rigorous, one has to deal with limits (I am ignoring for now something like nonstandard analysis, which is a whole other kettle of fish). Insofar as John is trying to tell an honest story on his blath — a story with a long and distinguished history behind it, as we know — I think his reaction is quite understandable. So (sorry, Michael): I think it's overshooting to baldly assert that calculus is an algebraic theory, although I think we can all agree [and John did say] that certainly huge chunks of it are algebraizable, and usefully so (for example, one can [and wants to] do differential calculus in algebraic geometry), and that there are at least some advantages to taking that POV. Regarding the derivative as a quotient: it is definitely arguable that that should be downplayed when teaching multidimensional calculus. There is such a thing as the derivative of a function f: R^m –> R^n, which returns a linear transformation at each point where it is defined, and I can remember being confused about this when I was younger, thinking that it should involve division of vectors. Of course, one doesn't divide by vectors. Perhaps one can take the POV that such "derivatives" involve matrices of partial derivatives, which brings us back to the 1-dimensional case discussed above, but this is a kind of reductionism which for one thing doesn't do justice to a coordinate-free approach. I don't think there's any really useful way to think of such higher-dimensional derivatives as difference quotients.

Comment by Todd Trimble | January 17, 2008 | Reply
32. Let me address John's objections first. Let's take a look at division by $x$ in the ring of polynomials on $(-1,1)$. Only the polynomials that vanish at 0 are divisible by $x$. But for any polynomial $p$, $p(x)-p(0)$ is divisible by $x$ (long division), and we can write $p(x)-p(0)=xq(x)$ and define $p'(0)=q(0)$. Likewise, not all Lipschitz functions on $(-1,1)$ are divisible by $x$, but some are. If $x$ divides $f(x)-f(0)$ we can write $f(x)-f(0)=xq(x)$ and define $f'(0)=q(0)$. See the analogy? Substitute "continuous" for "Lipschitz", and you will get classical differentiability. Instead of Lipschitz we can use any modulus of continuity (and any continuous function has one), for example, $x^\alpha$. So, by using this algebraic approach, we are not missing any of the classical theory. Differentiability of $f$ at $a$ is the same as divisibility of $f(x)-f(a)$ by $x-a$; in this respect differentiable functions behave like polynomials. Now, all polynomials are Lipschitz (long division again), so the Lipschitz theory is simply a generalization of the polynomial theory. See how it flows, from special to general, from examples to definitions, from observations and calculations to theorems, all the way using the tools familiar to the students?

Isn't it better than clobbering them with abstractions they are not ready for (for every epsilon there is a delta such that for every x… and what "is" is, as Bill Clinton put it aptly?) or invoking "intuition" to describe the general notion of continuity that is so terribly remote from intuition?

But onward with the Lipschitz theory! Take a look at the factoring: $f(x)-f(a)=(x-a)q_a(x)$. Here $q_a(x)$ is Lipschitz; in particular, $|q_a(x)-q_a(a)| \leq K_a|x-a|$ with some constant $K_a$. Let's plug this inequality in and use our definition $f'(a)=q_a(a)$. We get the estimate $|f(x)-f(a)-f'(a)(x-a)| \leq K_a(x-a)^2$. Now, the dependence of $K_a$ on $a$ may be rather nasty in general, but in the most favorable (and practical) situation, when $f'$ is Lipschitz, $K_a$ will be bounded, and the estimate will be uniform: $|f(x)-f(a)-f'(a)(x-a)| \leq K(x-a)^2$.
We can take this estimate as the definition of uniform Lipschitz differentiability (ULD), and see immediately (by dividing the inequality by $|x-a|$, switching $x$ and $a$, and comparing these inequalities) that $|f'(x)-f'(a)| \leq 2K|x-a|$, i.e., $f'$ is Lipschitz. See the analogy with the polynomials? The derivatives of polynomials are polynomials; the derivatives of ULD functions are Lipschitz. The inequality defining ULD can be arrived at when we examine polynomials and try to understand why a tangent looks like a tangent and why polynomials with positive derivatives are increasing. This monotonicity theorem takes center stage in the uniform approach to calculus. Again, the flow of material is from examples to definitions, from special to general, from problems to theorems. It reflects better how mathematical notions are discovered and evolve naturally.

Now, instead of Lipschitz we can use any modulus of continuity, and recapture uniform differentiability in the classical sense; see the article by Mark Bridger at http://www.math.neu.edu/~bridger/LBC/lbcswp.pdf for a nice exposition. As he says in this article, lots of theorem proofs in analysis start with "the function is continuous, therefore it's uniformly continuous," so why not start with uniform continuity and differentiability right off the bat?

Now, John, for your question "on what domain?": when we work with uniform estimates the question of domain becomes immaterial, since any uniformly continuous, or Lipschitz, function automatically extends to the completion of its original domain. We can start with rational numbers, or algebraic numbers, or any other dense subset of the interval; it doesn't matter, the results will be the same. And frankly, we don't even need the reals in full generality in this approach; the subtle notions of compactness and completeness play almost no role in it. About the only thing that we need is the Archimedes axiom: any number that is greater than any negative number is either zero or positive.

Now, about topological and analytic considerations. I never claimed that they don't exist, but I claim that in introductory calculus such considerations are better handled by explicit uniform estimates. I think that dragging in the generalities and clobbering students with terminology at this stage is counterproductive. Continuity and limits can be introduced later (or in a different course of introductory analysis), and will look more natural to those who have seen the uniform estimates. What about a function that has no formula? It's fine; it has some other description (it may be a solution to a differential equation) or properties (it may be Lipschitz, or integrable, etc.). The box is not usually totally black, otherwise we can't say anything. How can you take a limit of a totally black box function? You can just dream about it, that's about it. We usually have some estimates or other properties to work with. Surely, neither the naive algebraic approach nor the uniform estimates approach works universally, but the classical approach breaks down too when we move to measures and other distributions, and there the algebraic and abstract point of view shows us the way forward. There is nothing sacred about our definitions; they just codify the situations that we encounter when we try to solve this problem or that. The idea that there is THE DEFINITION for everything is a fallacy. The only measure of the value of a definition is how useful it is in solving problems.

Now about pedagogy.
When you teach physics, do you start with quantum field theory or general relativity? Surely not; you start with some simple mechanics, conceptual parts of heat, electricity, etc. You treat elementary problems by elementary means. You don't drag in the advanced notions when the simple notions do the job. Why not do the same with introductory calculus? Why use complicated explanations and advanced tools when simple tools do the job and simple explanations are available? Just compare the complexity of the new approach that I (and other people) are suggesting with the complexity of classical analysis and see the difference. You can actually explain the new approach in all its details to a student who has mastered high school algebra and geometry, with very little overhead. Can you do the same for the classical approach? I seriously doubt it.

See? I'm not screaming. I wanted you to take a look and you didn't want to see; it was you who kept screaming. What do I want? I want to shake you out of your complacency. I want you to stop hiding behind the memorized abstractions and take a fresh look at the subject you are teaching. I want you to recognize a promising idea when you see it, even if it's an unusual one and your knee-jerk reaction is just to brush it away.

Now for you, Todd, and I'm glad you have joined our discussion and brought a conciliatory note into our somewhat belligerent conversation. I hope I have made my case on viewing differentiation in 1 variable as division in a ring of functions, and I hope you understand that it doesn't involve any fudging or hand-waving. To the contrary, it's more earnest than limits (and simpler at that :-). To me, doing mathematics honestly doesn't mean doing it in greater generality; it means using the appropriate tools to solve the problem at hand. To me it's good mathematical taste to solve elementary problems by elementary means. That's the bone I have to pick with the standard approach to elementary calculus: it uses power tools where the simple tools suffice. Using the power tools and generalities indiscriminately more often obfuscates rather than clarifies the situation. One of the most egregious examples is the popular use of the mean value theorem to prove the fundamental theorem of calculus, while it's clear that one has nothing to do with the other, and a more straightforward approach (based on positivity of the integral) works just fine.

You are wrong that one needs limits to deal with a removable singularity. Again, "fully rigorous" doesn't mean "excessively general"; the explicit estimates plus the Archimedes principle suffice when we demand explicit estimates (like Lipschitz, or any modulus of continuity) from the quotient. Speaking about non-standard analysis, it's not at all "a whole other kettle of fish," as you put it; it's just a sleek way to sweep the limits under the rug by using the ultrafilter technology, and it's not difficult to see through these tricks. Ultrafilters are very removed from reality: nobody ever constructed an explicit example of a nontrivial ultrafilter, and nobody ever will; it's unconstructive and unconstructible, a sheer mathematical fantasy. By the way, the feature of the nonstandard approach to calculus that its advocates boast about the most is the automatic continuity of a function differentiable on a hyperreal interval. Well, why then not use the uniform differentiability that gives the same result (for a real or a rational or whatever interval) much more cheaply and save all the sweat of learning hyperreals? You can still learn them if you love them, but they don't provide the simplest approach to elementary calculus by a long shot. About multivariable derivatives: the uniform estimates work fine; just replace $f'(a)$ with a linear map and the absolute value with a norm of vectors. It's coordinate-free.

Comment by | January 18, 2008 | Reply
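To make the polynomial case in the comment above concrete: for a polynomial $p$, the quotient $(p(x)-p(a))/(x-a)$ really is another polynomial, and evaluating it at $x=a$ recovers $p'(a)$. The following check is an editorial illustration rather than part of the thread; it assumes the sympy library is available, and the particular polynomial is an arbitrary choice.

```python
import sympy as sp

x, a = sp.symbols('x a')
p = x**3 - 2*x + 5                          # an arbitrary sample polynomial

# x - a divides p(x) - p(a) exactly, so cancel() leaves a polynomial quotient.
q = sp.cancel((p - p.subs(x, a)) / (x - a))
print(q)                                    # x**2 + a*x + a**2 - 2

# Evaluating the quotient at x = a recovers the derivative p'(a).
print(q.subs(x, a))                         # 3*a**2 - 2
print(sp.diff(p, x).subs(x, a))             # 3*a**2 - 2, matching
```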
33. Yes, I see the analogy. I've acknowledged the analogy from the beginning, but it's an analogy, and one which, I feel, does students more harm than good in the long run if not handled delicately. You say I didn't look, and I keep assuring you that I have looked. You're welcome to your own opinion, but I just don't share it. Somehow that doesn't seem to satisfy you.

Comment by | January 18, 2008 | Reply

34. I didn't say you didn't look, I said you didn't want to see. Maybe you wanted to see, but could not. Most likely you just have a different mind-set and prefer to start with lofty definitions and mighty theorems instead of special examples and problems that give you a glimpse of a general theory, and that's how most new mathematics originates. It's a pity that you see it as just an analogy and don't see it as a legitimate way of looking at the subject. Even if it's just an analogy, you are probably aware of how important analogies are in discovery. Taking some decisive property (like the inequality characterizing ULD) that you use in proving the monotonicity theorem for polynomials and promoting it to a definition is a true example of how mathematics is done. Let's not overestimate rigor: it comes last, while the guess comes first. As George Polya puts it, logic is the lady at the exit of the supermarket who checks the price of the items in the basket whose contents she did not choose.

Comment by | January 18, 2008 | Reply

35. I didn't say it wasn't legitimate. I said that I find it to cause problems further down the line. Stop putting words in my mouth.

Comment by | January 18, 2008 | Reply

36. I'm sorry, I didn't mean to put words into your mouth. Speaking of problems, Hermann Karcher from Bonn University taught calculus via uniform estimates to science and engineering students, and had some anecdotal evidence that they had less trouble with more advanced subjects that use calculus, such as numerical analysis. So the evidence is somewhat contrary to what you say. Of course this evidence is not very strong, but still… Unless we try new things we will never know if they work better. My hunch is that the new approach will be better for people who take a more active approach to learning mathematics, who enjoy serious problem solving.

Comment by | January 18, 2008 | Reply

37. Michael, if I were to summarize what I think you were saying earlier, it's simply that a continuous function f has a derivative at a if there is a continuous function g such that f(x) – f(a) = g(x)(x – a) [cf. comments 25, 28]. But how is that very different from using limits? You agree, don't you, that the notions of continuity and limit are closely linked?

Comment by Todd Trimble | January 18, 2008 | Reply

38. Todd, I agree that limits and continuity of functions are related, and in fact reducible to each other in the following sense: $L=\lim_{x \rightarrow a}f(x)$ if and only if we can make $f$ continuous at $a$ by setting $f(a)=L$.
So we can define continuity first and use it to define limits; in fact, E. Čech (ever heard of the Stone–Čech compactification or Čech cohomology?) did just that when he taught introductory analysis at the University of Chicago, and it worked fine. Of that duo, continuity was introduced first, is more important, and is easier to understand, since we don't have to worry about what $\lim_{x \rightarrow a}f(x)$ is; we want it to be $f(a)$, so the definition of f being continuous at a becomes "for every e>0 there is d>0 such that |f(x)-f(a)|<e as soon as |x-a|<d". In topology continuity is defined in terms of open sets (see John's blog), and limits are hardly ever mentioned. I'd say that of the duo, continuity is one of the "bread-and-butter" notions, while the limit of a function is just a convenient technicality.

From the practical perspective, these definitions, as well as the definition of the limit of a sequence, are problematic. They don't answer the question "how small a d should we take for a given e?" or, for sequences, "how big should N be?", and these questions are important when we turn from mathematical theorizing to practical calculations. That's where the modulus of continuity comes in. Also, it is usually of interest whether we can take a d that will be good for e and any a in a certain range of values (uniform continuity). In the classical approach they say that f is continuous if it is continuous everywhere it is defined, and then they prove the theorem that says that a function f, continuous on a closed interval [A,B], is uniformly continuous. It is a very subtle and difficult theorem, and the proof is usually relegated to an introductory analysis course. But this theorem is very important for calculus; it is used in constructing the Riemann-like integral for continuous functions. This leaves a wide gap in understanding for poor calculus students, and it's not the only mystery that they have to deal with. To alleviate this problem, some people dispense with the pointwise notions of continuity and differentiability altogether and use the uniform notions instead (see the links in my comments #19 and #32). Some go even further and deal mostly with Lipschitz functions, at least at the beginning; see a text in German (with an English summary and an essay in the last 14 pages) by Hermann Karcher at http://www.math.uni-bonn.de/people/karcher/MatheI_WS/ShellSkript.pdf

Now, you implied that viewing differentiability of f at a as divisibility of f(x)-f(a) by x-a in the class of continuous functions is not that different from the standard definition. That is true, of course; it is another way of looking at it. But you should not say "just another way": different ways of looking at the same thing suggest different generalizations and/or modifications, and they also contribute to a deeper understanding of the subject and put more tools in our hands. Consider the chain rule, for example. The division point of view makes it almost obvious and makes its proof simple, while the proof that uses limits is more cumbersome. Finally, I suspect that John is already tired of us (especially me) squatting on his blog and wants us to take our conversation elsewhere, right, John?

Comment by | January 18, 2008 | Reply

39. […] we do have a higher-dimensional analogue for the chain rule. If we have a function defined on some open region and another function defined on a region […]

Pingback by | October 7, 2009 | Reply

40. […]
The Radon-Nikodym Chain Rule Today we take the Radon-Nikodym derivative and prove that it satisfies an analogue of the chain rule. […]

Pingback by | July 12, 2010 | Reply

41. […] should note, here, how this recalls the Newtonian notation for the chain rule, where we wrote . Of course, multiplication is changed into composition of linear maps, but that […]

Pingback by | April 7, 2011 | Reply
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 182, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9425968527793884, "perplexity_flag": "middle"}
http://catalog.flatworldknowledge.com/bookhub/reader/128?e=fwk-redden-ch07_s06
# Elementary Algebra, v. 1.0 by John Redden

## 7.6 Applications of Rational Equations

### Learning Objectives

1. Solve applications involving relationships between real numbers.
2. Solve applications involving uniform motion (distance problems).
3. Solve work-rate applications.

## Number Problems

Recall that the reciprocal of a nonzero number n is 1/n. For example, the reciprocal of 5 is 1/5 and 5 ⋅ 1/5 = 1. In this section, the applications will often involve the key word "reciprocal." When this is the case, we will see that the algebraic setup results in a rational equation.

Example 1: A positive integer is 4 less than another. The sum of the reciprocals of the two positive integers is 10/21. Find the two integers.

Solution: Begin by assigning variables to the unknowns: let n represent the larger integer; then n − 4 represents the smaller. Next, use the reciprocals $\frac{1}{n}$ and $\frac{1}{n-4}$ to translate the sentences into an algebraic equation:

$\frac{1}{n}+\frac{1}{n-4}=\frac{10}{21}$

We can solve this rational equation by multiplying both sides of the equation by the least common denominator (LCD). In this case, the LCD is $21n(n-4)$. Solve the resulting quadratic equation, $5n^{2}-41n+42=0$, which factors as $(5n-6)(n-7)=0$. The question calls for integers, and the only integer solution is $n=7$; hence disregard 6/5. Use the expression $n-4$ to find the smaller integer.

Answer: The two positive integers are 3 and 7. The check is left to the reader.

Example 2: A positive integer is 4 less than another. If the reciprocal of the smaller integer is subtracted from twice the reciprocal of the larger, then the result is 1/30. Find the two integers.

Solution: Set up an algebraic equation, again letting n represent the larger integer:

$\frac{2}{n}-\frac{1}{n-4}=\frac{1}{30}$

Solve this rational equation by multiplying both sides by the LCD. The LCD is $30n(n-4)$. Here we have two viable possibilities for the larger integer, $n=10$ and $n=24$. For this reason, we have two solutions to this problem. As a check, perform the operations indicated in the problem.

Answer: Two sets of positive integers solve this problem: {6, 10} and {20, 24}.

Try this! The difference between the reciprocals of two consecutive positive odd integers is 2/15. Find the integers.

Answer: The integers are 3 and 5.
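Answers like these are easy to confirm by brute force. The following script is a supplementary sketch, not part of the text; it searches small integer pairs for Example 1 using exact rational arithmetic to avoid floating-point issues.

```python
from fractions import Fraction

# Example 1: the larger integer is n, the smaller is n - 4, and the
# reciprocals must sum to 10/21.
target = Fraction(10, 21)
for n in range(5, 100):
    if Fraction(1, n) + Fraction(1, n - 4) == target:
        print(n - 4, n)   # prints: 3 7
```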
## Uniform Motion Problems

Uniform motion problems, also referred to as distance problems, involve the formula

$D=rt$

where the distance, D, is given as the product of the average rate, r, and the time, t, traveled at that rate. If we divide both sides by the average rate, r, then we obtain the formula

$t=\frac{D}{r}$

For this reason, when the unknown quantity is time, the algebraic setup for distance problems often results in a rational equation. Similarly, when the unknown quantity is the rate, the setup also may result in a rational equation. We begin any uniform motion problem by first organizing our data with a chart. Use this information to set up an algebraic equation that models the application.

Example 5: Mary spent the first 120 miles of her road trip in traffic. When the traffic cleared, she was able to drive twice as fast for the remaining 300 miles. If the total trip took 9 hours, then how fast was she moving in traffic?

Solution: First, identify the unknown quantity and organize the data: let x represent Mary's average speed in traffic; then 2x represents her speed once the traffic cleared. To avoid introducing two more variables for the time column, use the formula $t=\frac{D}{r}$. Here the time for each leg of the trip is calculated as follows: $\frac{120}{x}$ hours in traffic and $\frac{300}{2x}$ hours after the traffic cleared. Use these expressions to complete the chart. The algebraic setup is defined by the time column. Add the times for each leg of the trip to obtain a total of 9 hours:

$\frac{120}{x}+\frac{300}{2x}=9$

We begin solving this equation by first multiplying both sides by the LCD, 2x.

Answer: Mary averaged 30 miles per hour in traffic.

Example 6: A passenger train can travel, on average, 20 miles per hour faster than a freight train. If the passenger train covers 390 miles in the same time it takes the freight train to cover 270 miles, then how fast is each train?

Solution: First, identify the unknown quantities and organize the data: let x represent the speed of the freight train; then x + 20 represents the speed of the passenger train. Next, organize the given data in a chart. Use the formula $t=\frac{D}{r}$ to fill in the time column for each train. Because the trains travel the same amount of time, finish the algebraic setup by equating the expressions that represent the times:

$\frac{390}{x+20}=\frac{270}{x}$

Solve this equation by first multiplying both sides by the LCD, $x(x+20)$. Use x + 20 to find the speed of the passenger train.

Answer: The speed of the passenger train is 65 miles per hour and the speed of the freight train is 45 miles per hour.

Example 7: Brett lives on the river 8 miles upstream from town. When the current is 2 miles per hour, he can row his boat downstream to town for supplies and back in 3 hours. What is his average rowing speed in still water?

Solution: Let x represent his average rowing speed in still water. Rowing downstream, the current increases his speed, and his rate is x + 2 miles per hour. Rowing upstream, the current decreases his speed, and his rate is x − 2 miles per hour. Begin by organizing the data in a chart, using the formula $t=\frac{D}{r}$ to fill in the time column for each leg of the trip. The algebraic setup is defined by the time column. Add the times for each leg of the trip to obtain a total of 3 hours:

$\frac{8}{x+2}+\frac{8}{x-2}=3$

Solve this equation by first multiplying both sides by the LCD, $(x+2)(x-2)$. Next, solve the resulting quadratic equation, $3x^{2}-16x-12=0$, which factors as $(3x+2)(x-6)=0$. Use only the positive solution, $x=6$ miles per hour.

Answer: His rowing speed is 6 miles per hour.

Try this! Dwayne drove 18 miles to the airport to pick up his father and then returned home. On the return trip he was able to drive an average of 15 miles per hour faster than he did on the trip there. If the total driving time was 1 hour, then what was his average speed driving to the airport?

Answer: His average speed driving to the airport was 30 miles per hour.
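For readers who want to verify the quadratic in Example 7 symbolically, here is a small sketch (assuming the sympy library; not part of the original text). Declaring the speed positive lets the solver discard the extraneous negative root automatically.

```python
import sympy as sp

x = sp.symbols('x', positive=True)   # rowing speed in still water, mph

# Example 7: 8 miles each way; downstream rate x + 2, upstream rate x - 2,
# and the two legs total 3 hours.
equation = sp.Eq(8/(x + 2) + 8/(x - 2), 3)
print(sp.solve(equation, x))         # [6] -- the root -2/3 is excluded
```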
## Work-Rate Problems

The rate at which a task can be performed is called a work rate. For example, if a painter can paint a room in 8 hours, then the task is to paint the room, and we can write his work rate as 1 task per 8 hours. In other words, the painter can complete $\frac{1}{8}$ of the task per hour. If he works for less than 8 hours, then he will perform a fraction of the task. For example, working for 2 hours he completes $\frac{1}{8}\cdot2=\frac{1}{4}$ of the task. Obtain the amount of the task completed by multiplying the work rate by the amount of time the painter works.

Typically, work-rate problems involve people working together to complete tasks. When this is the case, we can organize the data in a chart, just as we have done with distance problems. Suppose an apprentice painter can paint the same room by himself in 10 hours. Then we say that he can complete $\frac{1}{10}$ of the task per hour. Let t represent the time it takes both of the painters, working together, to paint the room. To complete the chart, multiply the work rate by the time for each person. The portion of the room each can paint adds to a total of 1 task completed. This is represented by the equation obtained from the first column of the chart:

$\frac{1}{8}t+\frac{1}{10}t=1$

This setup results in a rational equation that can be solved for t by multiplying both sides by the LCD, 40: $5t+4t=40$, so $t=\frac{40}{9}$. Therefore, the two painters, working together, complete the task in $\frac{40}{9}=4\frac{4}{9}$ hours.

In general, we have the following work-rate formula:

$\frac{1}{t_1}\cdot t+\frac{1}{t_2}\cdot t=1$

Here $\frac{1}{t_1}$ and $\frac{1}{t_2}$ are the individual work rates and t is the time it takes to complete one task working together. If we factor out the time, t, and then divide both sides by t, we obtain an equivalent work-rate formula:

$\frac{1}{t_1}+\frac{1}{t_2}=\frac{1}{t}$

In summary, we have the following equivalent work-rate formulas:

$\frac{1}{t_1}\cdot t+\frac{1}{t_2}\cdot t=1\qquad\text{and}\qquad\frac{1}{t_1}+\frac{1}{t_2}=\frac{1}{t}$

Example 3: Working alone, Billy's dad can complete the yard work in 3 hours. If Billy helps his dad, then the yard work takes 2 hours. How long would it take Billy working alone to complete the yard work?

Solution: The given information tells us that Billy's dad has an individual work rate of $\frac{1}{3}$ task per hour. If we let x represent the time it takes Billy working alone to complete the yard work, then Billy's individual work rate is $\frac{1}{x}$. Working together, they can complete the task in 2 hours. Multiply the individual work rates by 2 hours to fill in the chart. The amount of the task each completes will total 1 completed task:

$\frac{2}{3}+\frac{2}{x}=1$

To solve for x, we first multiply both sides by the LCD, 3x.

Answer: It takes Billy 6 hours to complete the yard work alone.

Of course, the unit of time for the work rate need not always be in hours.

Example 4: Working together, two construction crews can build a shed in 5 days. Working separately, the less experienced crew takes twice as long to build a shed as the more experienced crew. Working separately, how long does it take each crew to build a shed?

Solution: Let x represent the number of days it takes the more experienced crew to build a shed; then 2x represents the number of days the less experienced crew takes. Working together, the job is completed in 5 days. The first column in the chart gives us an algebraic equation that models the problem:

$\frac{5}{x}+\frac{5}{2x}=1$

Solve the equation by multiplying both sides by 2x. To determine the time it takes the less experienced crew, we use 2x.

Answer: Working separately, the experienced crew takes 7½ days to build a shed, and the less experienced crew takes 15 days to build a shed.
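The combined work-rate formula translates directly into a tiny helper function. This sketch is a supplementary illustration, not part of the text; it checks the two painters above and the hose exercise that follows.

```python
from fractions import Fraction

def time_together(*individual_times):
    """Time to finish one task jointly: reciprocal of the sum of work rates."""
    total_rate = sum(Fraction(1, t) for t in individual_times)
    return 1 / total_rate

print(time_together(8, 10))   # 40/9 hours -- the two painters
print(time_together(12, 15))  # 20/3 hours -- the pool-filling exercise below
```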
Try this! Joe's garden hose fills the pool in 12 hours. His neighbor has a thinner hose that fills the pool in 15 hours. How long will it take to fill the pool using both hoses?

Answer: It will take both hoses $\frac{20}{3}=6\frac{2}{3}$ hours to fill the pool.

### Key Takeaways

• In this section, all of the steps outlined for solving general word problems apply. Look for the new key word "reciprocal," which indicates that you should write the quantity in the denominator of a fraction with numerator 1.
• When solving distance problems where the time element is unknown, use the equivalent form of the uniform motion formula, $t=\frac{D}{r}$, to avoid introducing more variables.
• When solving work-rate problems, multiply the individual work rate by the time to obtain the portion of the task completed. The sum of the portions of the task results in the total amount of work completed.

### Topic Exercises

Part A: Number Problems

Use algebra to solve the following applications.

1. A positive integer is twice another. The sum of the reciprocals of the two positive integers is 3/10. Find the two integers.

2. A positive integer is twice another. The sum of the reciprocals of the two positive integers is 3/12. Find the two integers.

3. A positive integer is twice another. The difference of the reciprocals of the two positive integers is 1/8. Find the two integers.

4. A positive integer is twice another. The difference of the reciprocals of the two positive integers is 1/18. Find the two integers.

5. A positive integer is 2 less than another. If the sum of the reciprocal of the smaller and twice the reciprocal of the larger is 5/12, then find the two integers.

6. A positive integer is 2 more than another. If the sum of the reciprocal of the smaller and twice the reciprocal of the larger is 17/35, then find the two integers.

7. The sum of the reciprocals of two consecutive positive even integers is 11/60. Find the two even integers.

8. The sum of the reciprocals of two consecutive positive odd integers is 16/63. Find the integers.

9. The difference of the reciprocals of two consecutive positive even integers is 1/24. Find the two even integers.

10. The difference of the reciprocals of two consecutive positive odd integers is 2/99. Find the integers.

11. If 3 times the reciprocal of the larger of two consecutive integers is subtracted from 2 times the reciprocal of the smaller, then the result is 1/2. Find the two integers.

12. If 3 times the reciprocal of the smaller of two consecutive integers is subtracted from 7 times the reciprocal of the larger, then the result is 1/2. Find the two integers.

13. A positive integer is 5 less than another. If the reciprocal of the smaller integer is subtracted from 3 times the reciprocal of the larger, then the result is 1/12. Find the two integers.

14. A positive integer is 6 less than another. If the reciprocal of the smaller integer is subtracted from 10 times the reciprocal of the larger, then the result is 3/7. Find the two integers.

Part B: Uniform Motion Problems

Use algebra to solve the following applications.

15. James can jog twice as fast as he can walk. He was able to jog the first 9 miles to his grandmother's house, but then he tired and walked the remaining 1.5 miles. If the total trip took 2 hours, then what was his average jogging speed?

16. On a business trip, an executive traveled 720 miles by jet aircraft and then another 80 miles by helicopter. If the jet averaged 3 times the speed of the helicopter and the total trip took 4 hours, then what was the average speed of the jet?

17. Sally was able to drive an average of 20 miles per hour faster in her car after the traffic cleared.
She drove 23 miles in traffic before it cleared and then drove another 99 miles. If the total trip took 2 hours, then what was her average speed in traffic?
18. Harry traveled 15 miles on the bus and then another 72 miles on a train. If the train was 18 miles per hour faster than the bus and the total trip took 2 hours, then what was the average speed of the train?
19. A bus averages 6 miles per hour faster than a trolley. If the bus travels 90 miles in the same time it takes the trolley to travel 75 miles, then what is the speed of each?
20. A passenger car averages 16 miles per hour faster than the bus. If the bus travels 56 miles in the same time it takes the passenger car to travel 84 miles, then what is the speed of each?
21. A light aircraft travels 2 miles per hour less than twice as fast as a passenger car. If the passenger car can travel 231 miles in the same time it takes the aircraft to travel 455 miles, then what is the average speed of each?
22. Mary can run 1 mile per hour more than twice as fast as Bill can walk. If Bill can walk 3 miles in the same time it takes Mary to run 7.2 miles, then what is Bill’s average walking speed?
23. An airplane traveling with a 20-mile-per-hour tailwind covers 270 miles. On the return trip against the wind, it covers 190 miles in the same amount of time. What is the speed of the airplane in still air?
24. A jet airliner traveling with a 30-mile-per-hour tailwind covers 525 miles in the same amount of time it is able to travel 495 miles after the tailwind eases to 10 miles per hour. What is the speed of the airliner in still air?
25. A boat averages 16 miles per hour in still water. With the current, the boat can travel 95 miles in the same time it travels 65 miles against it. What is the speed of the current?
26. A river tour boat averages 7 miles per hour in still water. If the total 24-mile tour downriver and 24 miles back takes 7 hours, then how fast is the river current?
27. If the river current flows at an average 3 miles per hour, then a tour boat makes the 9-mile tour downstream with the current and back the 9 miles against the current in 4 hours. What is the average speed of the boat in still water?
28. Jane rowed her canoe against a 1-mile-per-hour current upstream 12 miles and then returned the 12 miles back downstream. If the total trip took 5 hours, then at what speed can Jane row in still water?
29. Jose drove 15 miles to pick up his sister and then returned home. On the return trip, he was able to average 15 miles per hour faster than he did on the trip to pick her up. If the total trip took 1 hour, then what was Jose’s average speed on the return trip?
30. Barry drove the 24 miles to town and then back in 1 hour. On the return trip, he was able to average 14 miles per hour faster than he averaged on the trip to town. What was his average speed on the trip to town?
31. Jerry paddled his kayak upstream against a 1-mile-per-hour current for 12 miles. The return trip downstream with the 1-mile-per-hour current took 1 hour less time. How fast can Jerry paddle the kayak in still water?
32. It takes a light aircraft 1 hour more time to fly 360 miles against a 30-mile-per-hour headwind than it does to fly the same distance with it. What is the speed of the aircraft in calm air?

Part C: Work-Rate Problems

Use algebra to solve the following applications.

33. James can paint the office by himself in 7 hours. Manny paints the office in 10 hours. How long will it take them to paint the office working together?
34. Barry can lay a brick driveway by himself in 12 hours. Robert does the same job in 10 hours. How long will it take them to lay the brick driveway working together?
35. Jerry can detail a car by himself in 50 minutes. Sally does the same job in 1 hour. How long will it take them to detail a car working together?
36. Jose can build a small shed by himself in 26 hours. Alex builds the same small shed in 2 days. How long would it take them to build the shed working together?
37. Allison can complete a sales route by herself in 6 hours. Working with an associate, she completes the route in 4 hours. How long would it take her associate to complete the route by herself?
38. James can prepare and paint a house by himself in 5 days. Working with his brother, Bryan, they can do it in 3 days. How long would it take Bryan to prepare and paint the house by himself?
39. Joe can assemble a computer by himself in 1 hour. Working with an assistant, he can assemble a computer in 40 minutes. How long would it take his assistant to assemble a computer working alone?
40. The teacher’s assistant can grade class homework assignments by herself in 1 hour. If the teacher helps, then the grading can be completed in 20 minutes. How long would it take the teacher to grade the papers working alone?
41. A larger pipe fills a water tank twice as fast as a smaller pipe. When both pipes are used, they fill the tank in 5 hours. If the larger pipe is left off, then how long would it take the smaller pipe to fill the tank?
42. A newer printer can print twice as fast as an older printer. If both printers working together can print a batch of flyers in 45 minutes, then how long would it take the newer printer to print the batch working alone?
43. Working alone, Henry takes 9 hours longer than Mary to clean the carpets in the entire office. Working together, they clean the carpets in 6 hours. How long would it take Mary to clean the office carpets if Henry were not there to help?
44. Working alone, Monique takes 4 hours longer than Audrey to record the inventory of the entire shop. Working together, they take inventory in 1.5 hours. How long would it take Audrey to record the inventory working alone?
45. Jerry can lay a tile floor in 3 hours less time than Jake. If they work together, the floor takes 2 hours. How long would it take Jerry to lay the floor by himself?
46. Jeremy can build a model airplane in 5 hours less time than his brother. Working together, they need 6 hours to build the plane. How long would it take Jeremy to build the model airplane working alone?
47. Harry can paint a shed by himself in 6 hours. Jeremy can paint the same shed by himself in 8 hours. How long will it take them to paint two sheds working together?
48. Joe assembles a computer by himself in 1 hour. Working with an assistant, he can assemble 10 computers in 6 hours. How long would it take his assistant to assemble 1 computer working alone?
49. Jerry can lay a tile floor in 3 hours, and his assistant can do the same job in 4 hours. If Jerry starts the job and his assistant joins him 1 hour later, then how long will it take to lay the floor?
50. Working alone, Monique takes 6 hours to record the inventory of the entire shop, while it takes Audrey only 4 hours to do the same job. How long will it take them working together if Monique leaves 2 hours early?
### Answers

1: {5, 10}
3: {4, 8}
5: {6, 8}
7: {10, 12}
9: {6, 8}
11: {1, 2} or {−4, −3}
13: {4, 9} or {15, 20}
15: 6 miles per hour
17: 46 miles per hour
19: Trolley: 30 miles per hour; bus: 36 miles per hour
21: Passenger car: 66 miles per hour; aircraft: 130 miles per hour
23: 115 miles per hour
25: 3 miles per hour
27: 6 miles per hour
29: 40 miles per hour
31: 5 miles per hour
33: $4\tfrac{2}{17}$ hours
35: $27\tfrac{3}{11}$ minutes
37: 12 hours
39: 2 hours
41: 15 hours
43: 9 hours
45: 3 hours
47: $6\tfrac{6}{7}$ hours
49: $2\tfrac{1}{7}$ hours
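The combined-time answers above all come from the work-rate formula $\frac{1}{t_1}+\frac{1}{t_2}=\frac{1}{t}$. As a quick sanity check, here is a small Python sketch (not part of the original text; the helper name `combined_time` is ours) that evaluates $t = 1/(1/t_1 + 1/t_2)$ with exact fractions:

```python
from fractions import Fraction

def combined_time(t1, t2):
    """Time to finish one task together, from 1/t1 + 1/t2 = 1/t."""
    t1, t2 = Fraction(t1), Fraction(t2)
    return 1 / (1 / t1 + 1 / t2)

print(combined_time(8, 10))   # painters in the text:  40/9  = 4 4/9 hours
print(combined_time(7, 10))   # exercise 33:           70/17 = 4 2/17 hours
print(combined_time(50, 60))  # exercise 35:           300/11 = 27 3/11 minutes
```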
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 29, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9374517798423767, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/91472/invariants-of-symmetric-group/91478
## Invariants of Symmetric group

The celebrated Chevalley–Shephard–Todd theorem says that $\mathbb C[V]^{S_n}$ is a polynomial algebra and gives the generators of this algebra, where $V$ is the standard (or natural) representation of the symmetric group $S_n$. I am just curious to know for what other representations of $S_n$ the generators of this algebra are known. When is this algebra a polynomial algebra?

-

3 It is a polynomial algebra exactly for the groups generated by pseudoreflections (this in particular tells you the precise representation). This is the full content of the C-S-T theorem. – Mariano Suárez-Alvarez Mar 17 2012 at 16:12

(See mathoverflow.net/questions/52457/…, which is relevant here) – Mariano Suárez-Alvarez Mar 17 2012 at 16:12

3 I know the theorem you state as the fundamental theorem of symmetric functions. Chevalley-Shephard-Todd is a more general theorem. – Qiaochu Yuan Mar 17 2012 at 16:44

5 @Mariano: This doesn't quite cover the question of whether $S_n$ acts as a reflection group in some other representation though, and in fact the realisation of $S_6$ as a reflection group in a non-standard way in $5$ dimensions shows that there is something to check. – Geoff Robinson Mar 17 2012 at 21:50

1 @Mariano: It's what was intended by the bracketed "this in particular tells you the precise representation" that was unclear to me. – Geoff Robinson Mar 18 2012 at 22:52

show 3 more comments

## 2 Answers

Let $n \ge 7$. If $V$ is an irreducible representation of $S_n$ such that $\mathbb{C}[V]^{S_n}$ is a polynomial algebra then either $V$ is the trivial representation, the sign representation or the $(n-1)$-dimensional standard representation.

Outline Proof: Let $\rho : S_n \rightarrow \mathrm{GL}(V)$ be an irreducible representation affording the irreducible character $\chi^\lambda$ where $\lambda$ is a partition of $n$. Suppose that $V$ is $d$-dimensional. By the Chevalley-Shephard-Todd theorem, $\mathbb{C}[V]^{S_n}$ is a polynomial algebra if and only if $\rho(S_n)$ is generated by pseudo-reflections. If $\rho(g)$ is a pseudo-reflection then $\rho(g)$ is similar to a diagonal matrix $\mathrm{diag}(1,1,\ldots,1,\zeta)$ where $\zeta$ is a root of unity. Hence $\chi(g) = d-1 + \zeta$. However, the irreducible characters of symmetric groups are real valued, so if $g \not= 1_{S_n}$ then $g$ is an involution and $\chi^\lambda(g) = d-2$.

It therefore suffices to show that if $n \ge 7$ and $g \in S_n$ is an involution such that $\chi^\lambda(g) = \chi^\lambda(1)-2$ then $g$ is a transposition and either $\lambda = (1^n)$ or $\lambda = (n-1,1)$. This follows by induction using the Murnaghan-Nakayama rule. (The details are fiddly but routine.)

Edit: Geoff Robinson shows in his answer (posted at the same time as mine) that $g$ is a product of at most $3$ transpositions. This leads to a quick inductive proof: suppose that $\lambda$ has two removable boxes, whose removal gives partitions $\mu$ and $\nu$ of $n-1$. If $\mu \not= (n-1)$ and $\nu \not= (n-1)$ then, by induction, we have $\chi^\lambda(g) \le \chi^\lambda(1)-4$. Hence $\lambda = (n-1,1)$. In the remaining case $\lambda$ is a rectangular partition, and provided $n\ge 8$, we can repeat this argument after removing two boxes from $\lambda$ in two different ways. For smaller $n$ there are two exceptional cases, corresponding to the partitions $(2,2)$ of $4$ and $(2,2,2)$ of $6$.
The former representation is $2$-dimensional and is obtained from the standard representation of $S_3$ via the quotient map $S_4 \rightarrow S_3$. The latter is $5$-dimensional, and can be obtained by applying the outer automorphism of $S_6$ to the standard $5$-dimensional representation of $S_6$; the relevant character value is $\chi^{(2,2,2)}(g) = 3$ where $g = (12)(34)(56) \in S_6$.

-

I only concern myself with faithful representations of $S_n$ and for $n >4$. The only way to get a polynomial algebra of invariants is to represent $S_n$ as a complex reflection group (so generated by pseudo-reflections, that is elements with a fixed-point space of codimension $1$). A complex reflection group is easily checked to always be a direct product of irreducible complex reflection groups, so from now on I consider only irreducible representations. The question then becomes: is there an irreducible representation of $S_n$ other than the usual $(n-1)$-dimensional irreducible constituent of the natural permutation character, where $S_n$ is represented as a complex reflection group? Note that tensoring the stated $(n-1)$-dimensional representation with the sign representation does not produce a representation of $S_n$ as a complex reflection group. There is still some content to this question. It can probably be easily addressed by the Murnaghan-Nakayama rule, or from a more detailed knowledge of the character table of $S_n,$ but I attempt a direct argument here.

Since the characters of $S_n$ are rational-valued, any element represented as a pseudo-reflection in the given representation must be represented as a genuine (or "real") reflection, since its trace must be rational. The generating reflections for $S_n$ in the given representation act with determinant $-1,$ so lie outside the derived group $A_n.$ Hence they are odd permutations, and expressible as a product of an odd number of (disjoint) transpositions, since they have order $2$. Since $A_n$ is simple (as $n \geq 5$), any single conjugacy class of these reflections generates the whole of $S_n,$ so we assume that all the generating reflections are conjugate. Note that we are not yet entitled to assume that the generating reflections are transpositions.

Now we note that the product of two generating reflections has order $1,2,3,4$ or $6.$ For such a product has a fixed point space of codimension $0$ or $2,$ has determinant $1$, and has a rational trace. Now if $k >1$ is odd, a product of $k$ transpositions inverts a $2k+1$-cycle in $S_{2k+1}.$ Hence if $k > 3,$ and $n \geq 2k+1,$ then $S_{n}$ contains a pair of conjugate permutations, each a product of $k$ disjoint transpositions, whose product has order $2k+1.$ When $n = 2k$ we may express a $2k$-cycle as a product of two permutations, one a product of $k$ disjoint transpositions, the other a product of $k-1$ disjoint transpositions. Since $k > 3,$ if we adjoin a transposition interchanging the fixed points of the second permutation, we get a product of two permutations, each a product of $k$ disjoint transpositions, where the product is a product of two disjoint cycles of lengths (allowing a $1$-cycle) whose sum adds to $n.$ Since $n \geq 10,$ such a product never has order $1,2,3,4,5$ or $6.$ Hence we can assume that our generating reflections are products of at most three transpositions.
Furthermore, if $n \geq 8,$ we can express a $5$-cycle in $S_n$ as a product of permutations, each a product of three disjoint transpositions. For we may express a $5$-cycle in $S_5$ as a product of two permutations, each a product of two disjoint two-cycles. Affix a transposition commuting with the $5$-cycle to each of them, and this does not change the product. Hence if our reflections are products of $3$ disjoint $2$-cycles, we are left with the case $n =6.$ In fact, this case does occur. "Twist" the natural (non-unimodular) irreducible $5$-dimensional representation of $S_6$ by the exceptional outer automorphism of $S_6,$ and we obtain a reflection representation of $S_6$ in which products of $3$ disjoint $2$-cycles act as reflections.

It remains to deal with reflection representations of $S_n$ in which transpositions act as reflections. I presume that the Murnaghan-Nakayama rule or the character table of $S_n$ then shows that only the expected representation arises, but I leave that issue open here. (Later Edit: This now seems to be covered by Mark Wildon's answer.)

-

Thanks Robinson and Wildon for the beautiful answers. Now to come to the first part of the question: is there any reference for the generators of the ring of invariants of the reflection representation of $S_6$? Any references for the generators and relations for other representations of $S_n$? – mark Mar 18 2012 at 16:10

3 @mark: The ring of invariants of the non-standard $5$-dimensional reflection representation of $S_6$ is the same as the ring of invariants for the standard one. The matrices which are acting are the same; it's just that because of the action of the outer automorphism, in the twisted version they are associated to different group elements. If you like, the outer automorphism of $S_6$ induces an automorphism of the ring of invariants. There may be a question of knowing explicitly what the action of that automorphism is. Other question: try DJ Benson's book. – Geoff Robinson Mar 19 2012 at 8:22

@mark: Thanks for the accept, but I think Mark Wildon's answer is more complete and definitive. – Geoff Robinson Mar 19 2012 at 10:20
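Both answers reduce the question to character values computed with the Murnaghan-Nakayama rule. For readers who want to check the quoted values, here is a short Python sketch of the rule (our illustration, not part of the thread; the function name `mn_character` and the beta-number encoding are our choices). The beta-number form encodes a partition by its first-column hook lengths; removing a $k$-rim-hook corresponds to lowering one beta-number by $k$, with sign given by the number of beta-numbers it passes:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def mn_character(lam, rho):
    """chi^lam(rho) for S_n via the Murnaghan-Nakayama rule, in beta-number form.

    lam: partition labelling the irreducible character (weakly decreasing tuple)
    rho: cycle type of the conjugacy class (tuple of cycle lengths)
    """
    if not rho:
        return 1                      # base case: empty partition, empty cycle type
    k, rest = rho[0], rho[1:]
    m = len(lam)
    beta = [lam[i] + (m - 1 - i) for i in range(m)]   # strictly decreasing, distinct
    bset = set(beta)
    total = 0
    for b in beta:
        if b >= k and (b - k) not in bset:
            # removing a k-rim-hook <-> lowering one beta-number by k; the hook's
            # leg length equals the number of beta-numbers it jumps over
            height = sum(1 for b2 in beta if b - k < b2 < b)
            new_beta = sorted((bset - {b}) | {b - k}, reverse=True)
            new_lam = tuple(nb - (len(new_beta) - 1 - i)
                            for i, nb in enumerate(new_beta))
            total += (-1) ** height * mn_character(
                tuple(p for p in new_lam if p > 0), rest)
    return total

# Character values quoted in the answers (the identity class gives the dimension):
assert mn_character((5, 1), (1,) * 6) == 5          # standard rep of S_6 has d = 5
assert mn_character((5, 1), (2, 1, 1, 1, 1)) == 3   # d - 2 at a transposition
assert mn_character((2, 2, 2), (1,) * 6) == 5
assert mn_character((2, 2, 2), (2, 2, 2)) == 3      # Wildon's chi^{(2,2,2)}(g) = 3
```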
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 127, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9279365539550781, "perplexity_flag": "head"}
http://math.stackexchange.com/questions/277038/primitive-solutions-to-a2-4b2-c2
# Primitive solutions to $a^2 + 4b^2 = c^2$ I am trying to generate primitive solutions (GCD is 1 for $a, b, c$) to the equation $a^2+4b^2=c^2$. I attempted to do this by modifying the usual Pythagorean triplet $(m^2-n^2)^2 + (2mn)^2 = (m^2+n^2)^2$ but was unable to get anywhere with that approach. - ## 1 Answer $$a^2+4b^2=z^2\iff a^2+(2b)^2=z^2\iff \\ a=m^2-n^2, \ b=mn , \ z=m^2+n^2 , \ \ (m,n)=1 , m-n>0.$$ - This was actually the approach I tried – WhatsInAName Jan 13 at 17:38 What about that approach do you think fails, @WhatsInAName? – Thomas Andrews Jan 13 at 17:41 I think maybe I went in the wrong direction with what I was trying to compensate for with the multiplication (for some reason I had 4mn... feeling a little silly right about now) – WhatsInAName Jan 13 at 17:42 Oh, I remember what I was trying to do now: I was getting negative values when I tried this hours back. I am trying to generate solutions for which a,b,c are positive. Sorry if that changes anything – WhatsInAName Jan 13 at 17:45 @WhatsInAName: To generate positive values $m$ must be larger than $n$. – P.. Jan 13 at 17:47
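For readers who want to list solutions, here is a small Python sketch based on the parametrization in the answer (the sketch and the helper name `primitive_solutions` are ours, not from the thread; beyond $(m,n)=1$ we also filter explicitly on $\gcd(a,b,c)=1$ to be safe):

```python
from math import gcd

def primitive_solutions(limit):
    """Yield (a, b, c) with a^2 + 4b^2 = c^2, gcd(a, b, c) = 1 and c <= limit."""
    m = 2
    while m * m + 1 <= limit:          # n = 1 gives the smallest c for this m
        for n in range(1, m):
            if gcd(m, n) == 1:
                a, b, c = m * m - n * n, m * n, m * m + n * n
                if c <= limit and gcd(gcd(a, b), c) == 1:
                    yield a, b, c
        m += 1

for a, b, c in primitive_solutions(50):
    assert a * a + 4 * b * b == c * c
    print(a, b, c)
```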
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9565314054489136, "perplexity_flag": "middle"}
http://mathhelpforum.com/advanced-algebra/113366-linear-transformations-polynomials.html
# Thread:

1. ## Linear Transformations and Polynomials

Consider the transformation of ax^2 + bx + c in P2 to the absolute value of a in P0. Show that it does not correspond to a linear transformation by showing that there is no matrix that maps (a,b,c) in R^3 to the absolute value of a in R.

Any help is appreciated

2. Originally Posted by Shapeshift
Consider the transformation of ax^2 + bx + c in P2 to the absolute value of a in P0. Show that it does not correspond to a linear transformation by showing that there is no matrix that maps (a,b,c) in R^3 to the absolute value of a in R.

Any help is appreciated

Actually, I would think that the best way to show this is not a linear transformation would be to show that $L(\alpha v)$ is not always equal to $\alpha L(v)$. And do that by taking $\alpha= -1$.

However, since you are asked to do this by showing that there is no matrix representing this linear transformation, take $v_1= 1$, $v_2= x$, and $v_3= x^2$ as a basis for P2 and 1 as a basis for P0. Then $ax^2+ bx+ c= av_3+ bv_2+ cv_1$. A standard way to construct a matrix for a linear transformation, given a basis for the domain and range spaces, is to take the transformation of each of the basis vectors of the domain, in turn, and write the result as a linear combination of the basis vectors in the range. The coefficients are the columns of the matrix.

Here, $L(x^2)= L(1x^2+ 0x+ 0)= 1$, $L(x)= L(0x^2+ 1x+ 0)= 0$ and $L(1)= L(0x^2+ 0x+ 1)= 0$. That is, if this were a linear transformation, its matrix in the standard bases would be the row matrix [1 0 0]. But $\begin{bmatrix}1 & 0 & 0\end{bmatrix}\begin{bmatrix}-1 \\ 0 \\ 0\end{bmatrix}= -1$ while $L(-x^2)= |-1|= 1$, so no single matrix can represent the transformation, and it is not linear.
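To see the failure numerically, here is a small Python/NumPy sketch (ours, not from the thread): representing $ax^2+bx+c$ by the vector $(a,b,c)$, the map violates $T(-v) = -T(v)$, so the single candidate matrix built from the basis images cannot reproduce it:

```python
import numpy as np

def T(v):
    """The thread's map P2 -> P0: the polynomial ax^2 + bx + c, stored as
    the vector (a, b, c), is sent to |a|."""
    a, b, c = v
    return abs(a)

v = np.array([1.0, 0.0, 0.0])      # the polynomial x^2
print(T(-v), -T(v))                # 1.0 vs -1.0, so T(-v) != -T(v): not linear

M = np.array([[1.0, 0.0, 0.0]])    # the only candidate matrix, from T on the basis
print(M @ (-v))                    # [-1.], disagreeing with T(-x^2) = |-1| = 1
```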
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9461491703987122, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?p=3963748
Physics Forums

## Stumped on Definite Integral Question

1. The problem statement, all variables and given/known data

Definite integral, from 0 to 3, of the square root of 1+t^3.

2. Relevant equations

Tried substitution

3. The attempt at a solution

There should be a simple way to do this but I can't seem to figure it out. Tried the substitution and whatnot but couldn't reach an answer.

Quote by limelightdevo
1. The problem statement, all variables and given/known data
Definite integral, from 0 to 3, of the square root of 1+t^3.
2. Relevant equations
Tried substitution
3. The attempt at a solution
There should be a simple way to do this but I can't seem to figure it out. Tried the substitution and whatnot but couldn't reach an answer.

u-sub won't work because the derivative of the inside is not outside. trig-sub won't work because it should be t^2. You might try u^2=1+t^3, though I haven't tried it so no idea.

I would say maybe some extra special clever application of integration by parts, but I wouldn't bet on it. How did the problem arise? Are you sure this is the problem you are supposed to do? Is it possible that using a computer is expected?

MAPLE output for both the definite and indefinite integral looks ridiculous with all kinds of special functions.

Same with Wolfram. This integral is not expressible in elementary terms (verified by Risch's algorithm). It involves the elliptic integral function. If that is acceptable, you can do the problem; but it will still be ridiculously long.

Quote by NewtonianAlch
MAPLE output for both the definite and indefinite integral looks ridiculous with all kinds of special functions.

As much as I have a warm place in my heart for MAPLE (created at UW?), I would just plug it in online at wolfram. It mentions something about the hypergeometric series, and about 7.3.

Quote by algebrat
As much as I have a warm place in my heart for MAPLE (created at UW?), I would just plug it in online at wolfram. It mentions something about the hypergeometric series, and about 7.3.

Haha yea, I use both to be honest, certain things are easier for me to find on MAPLE and in this instance I had it open already.

Well, the actual question is different from the question I asked. I didn't want to post the actual one because I partially solved it. Here it is though if there is in fact a different approach. I did find the derivative and was trying to get f^-1(0) by saying f^-1(0) = x so f(x) = 0, but couldn't solve the integral to be able to equal it to 0 to get x.

If f(x) = integral, from 3 to x, of square root of (1+t^3) dt, find (f^-1)'(0).

By the way, I would like to have the simplest and cleanest approach to this to be efficient.

Thanks

Recognitions: Homework Help

Quote by limelightdevo
Well, the actual question is different from the question I asked. I didn't want to post the actual one because I partially solved it. Here it is though if there is in fact a different approach. I did find the derivative and was trying to get f^-1(0) by saying f^-1(0) = x so f(x) = 0, but couldn't solve the integral to be able to equal it to 0 to get x.
If f(x) = integral, from 3 to x, of square root of (1+t^3) dt, find (f^-1)'(0).
By the way, I would like to have the simplest and cleanest approach to this to be efficient.
Thanks

Let $y = f(x) = \int_3^x \sqrt{1+t^3}\,dt$. Use the Fundamental Theorem of Calculus to figure out $\frac{dy}{dx} = f'(x)$.

Now, if $y = f(x)$, then $f^{-1}(y) = x$ and ${(f^{-1})}'(y) = \frac{dx}{dy} = \frac{1}{\frac{dy}{dx}} = \frac{1}{f'(x)}$.

You're asked to determine ${(f^{-1})}'(0)$, so first figure out what (obvious) value of $x$ would make $y$ zero. The rest is trivial.
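Following the helper's method, the numbers work out as below; here is a small sympy sketch (ours, added for illustration) confirming that $(f^{-1})'(0) = 1/f'(3) = 1/\sqrt{28}$:

```python
import sympy as sp

x = sp.symbols('x')
fprime = sp.sqrt(1 + x**3)        # f'(x), by the Fundamental Theorem of Calculus
# f(3) = 0 since the integral runs from 3 to 3, so f^{-1}(0) = 3 and
# (f^{-1})'(0) = 1/f'(3)
print(sp.radsimp(1 / fprime.subs(x, 3)))   # sqrt(7)/14, i.e. 1/(2*sqrt(7))
```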
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9473329186439514, "perplexity_flag": "middle"}
http://nrich.maths.org/6426
# Brimful

##### Stage: 5

Constants $A, B, C, D$ are chosen so that the following $4$ curves pass through the point $(2.5, 10)$:

$$y = Ax\quad y = Bx^2 \quad y = Cx^3\quad y = Dx^4+x$$

What values must the constants take? Can you identify each curve in the accurately drawn chart that accompanies the problem?

These curves are now used to design some mathematical vessels of height $10$ by rotating the curves about the $y$ axis. Assuming that $x$ and $y$ are measured in centimetres, what are the volumes of the vessels?

Water is poured slowly at a rate of 1 cm$^3$ per minute into these vessels. At what depth of water, to the nearest mm, will each of them be half full? Do your results make sense from the diagram?
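For the first two questions, a small sympy sketch (ours, not part of the NRICH page) finds each constant from the point $(2.5, 10)$ and computes the volume of revolution about the $y$-axis as $V = \pi\int x^2\,dy = \pi\int_0^{2.5} x^2\,y'(x)\,dx$, which is valid here because each curve increases from $(0,0)$ to $(2.5,10)$:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
A, B, C, D = sp.symbols('A B C D')
curves = {A: A * x, B: B * x**2, C: C * x**3, D: D * x**4 + x}

for const, expr in curves.items():
    # pin the constant so the curve passes through (2.5, 10)
    val = sp.solve(sp.Eq(expr.subs(x, sp.Rational(5, 2)), 10), const)[0]
    y = expr.subs(const, val)
    # volume of revolution about the y-axis: V = pi * int x^2 dy = pi * int x^2 y'(x) dx
    V = sp.pi * sp.integrate(x**2 * sp.diff(y, x), (x, 0, sp.Rational(5, 2)))
    print(const, '=', val, '   V =', V, '=', float(V), 'cm^3')
```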
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9356597661972046, "perplexity_flag": "middle"}
http://mathhelpforum.com/math-topics/117629-natural-logs.html
# Thread:

1. ## Natural Logs

Can someone tell me: to get e to the other side of the equation, do you multiply it by ln, as well as the other side? What happens to '-mt' in this instance?

((Ti-T0)/(T1-T0))=e^-mt

Thanks in advance

2. Originally Posted by jimmychoo
Can someone tell me: to get e to the other side of the equation, do you multiply it by ln, as well as the other side? What happens to '-mt' in this instance?
((Ti-T0)/(T1-T0))=e^-mt
Thanks in advance

I don't quite understand your post, but the natural log isn't a number that you multiply by. You can take the natural log of an expression, and when you have an equation you can take the ln of both sides and maintain equality, but it isn't a multiplication process. As for the right-hand side:

$\ln(e^{-mt})=(-mt)\ln(e)=(-mt)(1)=-mt$

This is applying one of the 3 common log rules to move terms between the outside and the inside of the log.

3. That's what I was supposed to mean: take the natural log of e, not multiply. Thanks
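To make the rearrangement concrete (this check is ours, not part of the thread): taking ln of both sides gives $\ln\frac{T_i-T_0}{T_1-T_0} = -mt$, so $t = -\frac{1}{m}\ln\frac{T_i-T_0}{T_1-T_0}$. A tiny sympy sketch verifies the algebra:

```python
import sympy as sp

m, t = sp.symbols('m t', positive=True)
Ti, T0, T1 = sp.symbols('T_i T_0 T_1')
r = (Ti - T0) / (T1 - T0)            # left-hand side of the cooling-style equation
t_expr = -sp.log(r) / m              # from taking ln of both sides: ln(r) = -m*t
print(sp.simplify(sp.exp(-m * t_expr) - r))   # 0, confirming the rearrangement
```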
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9575055241584778, "perplexity_flag": "head"}
http://mathoverflow.net/questions/41310/any-sum-of-2-dice-with-equal-probability
## Any sum of 2 dice with equal probability

The question is the following: Can one create two nonidentical loaded 6-sided dice such that when one throws with both dice and sums their values, the probability of any sum (from 2 to 12) is the same? I said nonidentical because it's easy to verify that with identical loaded dice it's not possible.

Formally: Let's say that $q_{i}$ is the probability that we throw $i$ on the first die and $p_{i}$ is the same for the second die. $p_{i},q_{i} \in [0,1]$ for all $i \in 1\ldots 6$. The question is: with these constraints, are there $q_{i}$s and $p_{i}$s that satisfy the following equations?

$q_{1} \cdot p_{1} = \frac{1}{11}$

$q_{1} \cdot p_{2} + q_{2} \cdot p_{1} = \frac{1}{11}$

$q_{1} \cdot p_{3} + q_{2} \cdot p_{2} + q_{3} \cdot p_{1} = \frac{1}{11}$

$q_{1} \cdot p_{4} + q_{2} \cdot p_{3} + q_{3} \cdot p_{2} + q_{4} \cdot p_{1} = \frac{1}{11}$

$q_{1} \cdot p_{5} + q_{2} \cdot p_{4} + q_{3} \cdot p_{3} + q_{4} \cdot p_{2} + q_{5} \cdot p_{1} = \frac{1}{11}$

$q_{1} \cdot p_{6} + q_{2} \cdot p_{5} + q_{3} \cdot p_{4} + q_{4} \cdot p_{3} + q_{5} \cdot p_{2} + q_{6} \cdot p_{1} = \frac{1}{11}$

$q_{2} \cdot p_{6} + q_{3} \cdot p_{5} + q_{4} \cdot p_{4} + q_{5} \cdot p_{3} + q_{6} \cdot p_{2} = \frac{1}{11}$

$q_{3} \cdot p_{6} + q_{4} \cdot p_{5} + q_{5} \cdot p_{4} + q_{6} \cdot p_{3} = \frac{1}{11}$

$q_{4} \cdot p_{6} + q_{5} \cdot p_{5} + q_{6} \cdot p_{4} = \frac{1}{11}$

$q_{5} \cdot p_{6} + q_{6} \cdot p_{5} = \frac{1}{11}$

$q_{6} \cdot p_{6} = \frac{1}{11}$

I don't really know how to start with this. Any suggestions are welcome.

-

Just a side note: A famous example of the generating function techniques described in the answers is the derivation of the "Sicherman dice", two unequal dice with the same distribution for the sum as a pair of ordinary six-sided dice. See en.wikipedia.org/wiki/Sicherman_dice. – Hans Lundmark Oct 6 2010 at 20:31

3 I just want to note that I am one of the people voting to close. My concern is that this is a very standard exercise on how to use generating functions to work with probability; I was assigned it as an undergrad and I'm sure I will assign it in turn. – David Speyer Oct 6 2010 at 21:25

I hadn't seen it, which is why I answered it. (I almost voted to close, actually, until I realized this was not the Sicherman dice problem.) – Michael Lugo Oct 6 2010 at 21:44

Fair enough, and your answer is well written. This sort of thing is always going to be a grey area, but I didn't like that there were three votes to close with no explanation. – David Speyer Oct 6 2010 at 21:59

## 5 Answers

You can write this as a single polynomial equation $$p(x)q(x)=\frac1{11}(x^2+x^3+\cdots+x^{12})$$ where $p(x)=p_1x+p_2x^2+\cdots+p_6x^6$ and similarly for $q(x)$. So this reduces to the question of factorizing $(x^2+\cdots+x^{12})/11$ where the factors satisfy some extra conditions (coefficients positive, $p(1)=1$ etc.). This is a standard method (generating functions).

-

Ah thanks. Generating functions really seem to be the right choice here. – jakab922 Oct 6 2010 at 21:19
You can write a polynomial that encodes the probabilities for each die: $$P(x) = p_1 x^1 + p_2 x^2 + p_3 x^3 + p_4 x^4 + p_5 x^5 + p_6 x^6$$ and similarly $$Q(x) = q_1 x^1 + q_2 x^2 + q_3 x^3 + q_4 x^4 + q_5 x^5 + q_6 x^6.$$ Then the coefficient of $x^n$ in $P(x) Q(x)$ is exactly the probability that the sum of your two dice is $n$.

As Robin Chapman points out, you want to know if it's possible to have $$P(x) Q(x) = (x^2 + \cdots + x^{12})/11$$ where $P$ and $Q$ are both sixth-degree polynomials with positive coefficients and zero constant term. For simplicity, I'll let $p(x) = P(x)/x, q(x) = Q(x)/x$. Then we want $$p(x) q(x) = (1 + \cdots + x^{10})/11$$ where $p$ and $q$ are now fifth-degree polynomials. We can rewrite the right-hand side to get $$p(x) q(x) = {(x^{11}-1) \over 11(x-1)}$$ or $$11 (x-1) p(x) q(x) = x^{11} - 1.$$

The roots of the right-hand side are the eleventh roots of unity. Therefore the roots of $p$ must be five of the eleventh roots of unity which aren't equal to one, and the roots of $q$ must be the other five. But the coefficients of $p$ and $q$ are real, which means that their roots must occur in complex conjugate pairs. So $p$ and $q$ must have even degree! Since five is not even, this is impossible.

(This proof would work if you replace six-sided dice with any even-sided dice. I suspect that what you want is impossible for odd-sided dice, as well, but this particular proof doesn't work.)

-

You forgot the curly brackets in $x^{11}$ in one place. – Hans Lundmark Oct 6 2010 at 19:18

Great solution. Thanks. – jakab922 Oct 6 2010 at 21:24

Hans, thanks for pointing that out! It's fixed now. – Michael Lugo Oct 6 2010 at 21:43

If I am not mistaken, you can make this solution work if some of the faces of the dice have the same numbers (i.e. 1,1,2,3,4,5). – Nick S Oct 6 2010 at 22:54

You can't even solve this with two-sided dice. Consider two dice with probabilities p and q of rolling 1, and probabilities (1-p) and (1-q) of rolling 2. The probability of rolling a sum of 2 is pq, and the probability of rolling a sum of 4 is (1-p)(1-q). These are equal only if p=(1-q). Hence they are equal to one third only if p satisfies the quadratic equation p(1-p) = 1/3. Since this has no real roots, it cannot be done. This logic extends to multisided dice.

-

Here is an alternate solution, which I ran across while looking through Jim Pitman's undergraduate probability text. (It's problem 3.1.19.) Let $S$ be the sum of numbers obtained by rolling two dice, and assume $P(S=2)=P(S=12) = 1/11$. Then $P(S=7) \ge p_1 q_6 + p_6 q_1 = P(S=2) {q_6 \over q_1} + P(S=12) {q_1 \over q_6}$ and so $P(S=7) \ge 1/11 (q_1/q_6 + q_6/q_1)$. The second factor here is at least two, so $P(S=7) \ge 2/11$.

-

Tricky solution. Thanks. – jakab922 Oct 12 2010 at 5:42

sorry misread as identical to standard case. Delete this if you know how. We want $p(x)q(x)$ to be the same as $r(x)^2$ where $r(x)$ encodes a standard die. Put in that $r(x)$ and factor.

-
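A quick numerical illustration of the roots-of-unity argument (our sketch, not from the thread): after dividing out $x^2$, the target polynomial is $(1+x+\cdots+x^{10})/11$, whose roots are the ten non-real 11th roots of unity. Since non-real roots of a real polynomial come in conjugate pairs, every real factor has even degree, so no quintic $p(x)$ with real coefficients exists:

```python
import numpy as np

coeffs = np.ones(11)          # 1 + x + ... + x^10 (the factor 1/11 is irrelevant)
roots = np.roots(coeffs)      # the ten 11th roots of unity other than 1
print(np.sort_complex(roots))
assert not np.any(np.isclose(roots.imag, 0.0))   # none of them is real
```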
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 46, "mathjax_display_tex": 7, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9486388564109802, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/192063/are-there-any-famous-number-system-competely-independence-from-the-real-number-s
# Are there any famous number systems completely independent of the real number system that have shown their significance in math history?

I know that both the binary number system and the complex number system depend on the real number system and share some of its conditions and operation properties. My question is: are there any famous number systems completely independent of the real number system that have shown their significance in math history?

-

What does independent mean? The integers and rational numbers are real numbers, but they can be defined without using the real numbers. – William Sep 6 '12 at 20:19

@William - Completely independent means that no operation properties or definitions could be shared... – Victor Sep 6 '12 at 20:21

Then how about $\mathbb{Z} / n \mathbb{Z}$ and the $p$-adic numbers $\mathbb{Q}_p$. – William Sep 6 '12 at 20:24

@William - Explain what that is in your answer, thanks in advance! – Victor Sep 6 '12 at 20:27

– user2468 Sep 7 '12 at 13:40

## 1 Answer

To expand on William's comment: ${\bf Z}/n{\bf Z}$ is the integers modulo $n$. You probably know about modular arithmetic, and if you don't you can find tons of information about it on the web and in texts about Number Theory and/or Discrete Mathematics. It doesn't contain the reals and it's not contained in the reals, and the operations are not the operations in the reals.

Now pick a prime number $p$. Every non-zero rational number $a/b$ can be written as $(r/s)p^t$ where $r,s,t$ are integers and $p$ divides neither $r$ nor $s$. Define a sort of absolute value on the rationals by $|a/b|_p=p^{-t}$. This extends to a distance on the rationals by defining the distance $d(x,y)$ between $x$ and $y$ to be $|x-y|_p$. Now if you know how to get the reals from the rationals by putting in all the limits of convergent sequences, you can do the same thing but using $|\ |_p$ instead of the usual absolute value, and what you get is the $p$-adic numbers. Again, it's not a subset of the reals, nor does it contain the reals, and its distance structure is very different from that on the reals. Again, tons of info on the web and in (somewhat more advanced) Number Theory texts.

-
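The $p$-adic absolute value in the answer is easy to compute directly. Here is a small Python sketch (ours, for illustration; the helper name `padic_abs` is made up) that extracts $t$ from $a/b = (r/s)\,p^t$ and returns $|a/b|_p = p^{-t}$:

```python
from fractions import Fraction

def padic_abs(q, p):
    """|q|_p: write q = (r/s) * p^t with p dividing neither r nor s; return p^(-t)."""
    q = Fraction(q)
    if q == 0:
        return Fraction(0)
    t, num, den = 0, abs(q.numerator), q.denominator
    while num % p == 0:
        num //= p
        t += 1
    while den % p == 0:
        den //= p
        t -= 1
    return Fraction(1, p) ** t

print(padic_abs(Fraction(50, 3), 5))   # 50/3 = (2/3)*5^2, so |50/3|_5 = 1/25
print(padic_abs(Fraction(9, 4), 2))    # 9/4  = 9 * 2^(-2), so |9/4|_2  = 4
```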
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9315428733825684, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/5025/space-as-flat-plane
# Space as “flat” plane

I was watching the documentary Carl Sagan did about gravity (I believe it's quite old though) and wondered about space being "flat" and mass creating dents in this plane, as shown at about 3 minutes in this clip (http://www.youtube.com/watch?v=Y-db4iC0aHw).

Is this simply a metaphor or is it something more than that? Does gravity follow this, just with the dent "rotated" in all directions instead of lying in a flat plane?

-

## 3 Answers

The picture by Sagan is somewhat of a simplification of the true geometry, so in a sense that picture is a metaphor. The interpretation given there of Einstein's equations is indeed that of curvature of 4-dimensional Space-Time (not just 3-dimensional space). However, in these higher dimensional geometries there is more than one notion of curvature: Scalar Curvature $R$; Ricci curvature; Riemann Curvature and Weyl Curvature.

The Scalar curvature exists in all dimensions and is simply a function giving the curvature at each point. For an embedded (2-dim) sphere it is $R = 2/r^2$ (twice the Gaussian curvature). So for a sphere it is non-zero everywhere on its surface. Higher dimensional spaces use these other Curvature objects (built like generalised matrices and called Tensors) to represent their curvature more precisely than a single number. In higher dimensions the Scalar represents a kind of "average curvature" (at each point). Corresponding to these different notions of curvature, there are different notions of "flatness", as we will see below.

Now the Einstein equations directly equate the Ricci curvature to something, and in the example shown in the Sagan excerpt it was equated to zero. $\mathrm{Ricci} = 0$ is the Einstein Vacuum equation alluded to in other answers, and is appropriate because outside a star there is a vacuum. This has an immediate mathematical consequence that $R = 0$, i.e. the scalar is zero as well! So in this sense the vacuum is flat (called Ricci-flat). However, there is still curvature around in that space, so it is not Minkowski (i.e. Euclidean) flat. The experimental demonstration of this was the bending of light rays near the Sun (assuming, as one does in Einstein's theory, that light rays measure the "straight lines").

So where does the curvature come from if it is Scalar and Ricci flat? The answer is that it is not Riemann flat: the tensor $\mathrm{Riemann} \neq 0$. However, this does not quite explain the origin of curvature here. Expressed very loosely we have the following equation: Weyl = Riemann - Ricci - R. So the real source of curvature in the Sagan excerpt is the Weyl component of the Riemann tensor: everything else is zero.

Now we come to the representation problem that Sagan had: the Weyl tensor is always zero in two and three dimensions. In other words, the kind of curvature it represents does not exist in two and three dimensions: only four and higher dimensions have this kind of curvature. So it cannot be directly represented on a 2- or 3-dimensional picture. Instead, what Sagan appears to have represented here is the gravitational potential (like in Newton's theory) but expressed as space curvature. It is not completely wrong perhaps, but it is not quite correct and so is just a metaphor.

-

Nicely put..+1 – Gordon Feb 12 '11 at 18:02

Let's look at Einstein's field equations: $G_{\mu\nu}=8\pi T_{\mu\nu}$ where $G_{\mu\nu}=R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R$. The left side is the curvature of space (the metric). The right side is the stress-energy-momentum tensor, which is the totality of what is producing the curvature.
$g_{\mu\nu}$ is the metric tensor, $R$ the scalar curvature, and $R_{\mu\nu}$ the Ricci curvature tensor, but the terminology doesn't matter here. Simply, the equation is saying that the curvature of space (the metric) is produced by what is in the space (pressure, energy, etc.). John Wheeler, who always used colorful language, said, "Matter tells space how to curve. Space tells matter how to move."

-

In the context of general relativity theory the following things happen:

In the absence of matter, spacetime remains flat. The relevant space is a 4-dimensional Minkowski space. It has some similarities with Euclidean flat geometry, which you learned in school. There are also important differences. In a Euclidean space the symmetry group is the Euclidean group, whereas in Minkowski space it is the Poincaré group. The former space has only space-like dimensions; the latter has 3 space-like and 1 time-like dimensions. However, the 4-dimensional Minkowski space is also like a table top.

But if there is some mass-energy, the geometry of the surrounding spacetime no longer remains Minkowskian; it changes to a more general geometry (although locally, within an infinitesimal region, it remains Minkowskian). The geometry of Minkowski space corresponds to the geometry of a plane surface, and the latter more general geometry corresponds to the geometry of a curved surface. In this curved geometry the sum of the three angles of a triangle may be more than or less than $\pi$, depending on the curvature. In the static case this curvature does not itself change; a particle simply follows the straightest possible path. Since the spacetime is itself curved, the particle appears to be moving in a curved path, as if by a force.

-

You conflate space-time and space in this answer. Some of those "Euclidean" should in fact be "Minkowski". Also, what space you get depends on the observer because different space-like slices of curved space-time can generally differ (contrary to the Minkowski space-time where you'll always get Euclidean space as a slice). – Marek Feb 12 '11 at 7:42

1 @Marek: You are right. In fact I made a deliberate attempt to avoid terms like "Minkowski space" and present the affairs in a simple manner, assuming (perhaps unjustifiably) the questioner does not already know about special relativity. Secondly, I thought putting things in this way might appeal to those members who are interested in learning relativity but never studied even SR. Maybe I have made oversimplifications. – user1355 Feb 12 '11 at 8:07

Technically, we don't really know if the massless space-time is flat or curved, because of $\Lambda$, which can be thought to be nonzero even in empty space. – Sklivvz♦ Feb 12 '11 at 9:10

@Sklivvz: There is something called Einstein's vacuum equation, where we simply put $R_{\mu\nu} = 0$, i.e. the Ricci curvature vanishes. – user1355 Feb 12 '11 at 10:49
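For reference (this remark is ours, not part of the thread), the loose relation "Weyl = Riemann - Ricci - R" in the first answer has a precise form, the standard Ricci decomposition in $n$ dimensions:

$$R_{abcd} = C_{abcd} + \frac{2}{n-2}\left(g_{a[c}R_{d]b} - g_{b[c}R_{d]a}\right) - \frac{2}{(n-1)(n-2)}\,R\,g_{a[c}g_{d]b}$$

In a Ricci-flat vacuum ($R_{ab}=0$, hence $R=0$) everything but the Weyl tensor $C_{abcd}$ drops out, which is exactly why the curvature outside the star is pure Weyl, and why nothing survives in two or three dimensions, where $C_{abcd}$ vanishes identically.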
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 11, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9373016357421875, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/127918/the-decomposition-of-inner-product-space?answertab=votes
# The decomposition of inner product space

If $V$, $W$ are two inner product spaces and $L:V\to W$ is a linear map with adjoint $L^\star$, then is there a decomposition $W = \ker(L^\star) \oplus \operatorname{im}(L)$? (It is easy to see that the conclusion holds if $V$ and $W$ are finite-dimensional.)

-

What if $\operatorname{im}{L}$ isn't closed? – t.b. Apr 4 '12 at 9:57

Which duals do you consider? Algebraic or continuous? – Norbert Apr 4 '12 at 12:13

@Norbert algebraic, since we do not assume that $L$ is continuous. – Hezudao Apr 4 '12 at 13:17

@t.b. I see. If the decomposition does exist, then im($L$) should be closed. But what if we assume first that im($L$) is closed? – Hezudao Apr 4 '12 at 13:42

I think $L^*$ is not well defined, so you should give your definition. – Norbert Apr 6 '12 at 5:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 15, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9123654365539551, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?t=380027
Physics Forums

## Theory Question about Electric Potential Energy

I'm having a little bit of trouble understanding the concept that the electric potential energy between two opposite charges is negative while between like charges it is positive. Can someone please explain in detail why this is so? Thanks in advance.

I think you need to reformulate your question as it doesn't tie in with the facts. The electrical potential energy (of a unit positive charge, which is how PE is defined here) between two like charges is positive if the charges are positive, and negative if the charges are negative. Between two opposite charges it can be either positive or negative depending on where you are in the field in relation to the two charges. The definition of electrical potential energy at a point is in terms of the work done bringing a unit positive charge from infinity to that point. In general, the work expended against a force field is positive. The work expended towards a force field is negative. With this arbitrary definition we obtain the result you have stated.

So if you have a charge $$q_1>0$$ at the origin and a charge $$q_2<0$$ at $$r_2$$ then the work you must expend on $$q_2$$ to pull the charge from infinity to $$r_2$$ is

$$W = V(r_2) = - q_2 \, \int \limits_{\infty}^{r_2} \mathrm{d} \vec r ~ \vec E_1(r) = q_2 \,\int \limits_{\infty}^{r_2} \mathrm{d} \vec r ~ \vec \nabla \phi_1(r) = q_2 \Bigl[\phi_1(r_2) - \phi_1(\infty) \Bigr] = q_2 q_1 \frac{1}{4\pi \varepsilon_0 r_2}$$

The fact that $$W<0$$ (with the above definition of $$q_2, q_1$$) shows that you have to expend the work towards the force field to bring the charge $$q_2$$ from infinity to $$r_2$$ (the force between opposite charges is attractive).

That's it! I hope I could help you!

Quote by Stonebridge
The electrical potential energy (of a unit positive charge, which is how PE is defined here) between two like charges is positive if the charges are positive, and negative if the charges are negative.

I don't agree with you! Compare electric potential with electric potential energy. Because two like charges are always repelling each other, the electric potential energy between them is always positive (because you have to expend work against the force field to get one charge from infinity to any position).

Quote by saunderson
I don't agree with you! Compare electric potential with electric potential energy. Because two like charges are always repelling each other, the electric potential energy between them is always positive (because you have to expend work against the force field to get one charge from infinity to any position).

Not if the charges are negative. Electrical potential refers to the potential energy of a unit POSITIVE charge.

zerobladex asked for Electric Potential Energy and not for Electrical potential!

$$V(r) = E - T ~ \ne ~ q_{+} \, \phi(r) \qquad \mbox{with} ~ q_{+} ~ \mbox{as unit POSITIVE charge}$$

where $$T$$ is the kinetic energy of the particle and $$E$$ the total energy.

I interpreted the words "between two positive charges" to be referring to a point between the two charges, and the question to be asking about the potential energy of some charge at that point. Apologies to all.
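As a check of the work integral above, here is a small sympy sketch (ours, added for illustration) that reproduces $W = \frac{q_1 q_2}{4\pi\varepsilon_0 r_2}$, which is negative precisely when the charges have opposite signs:

```python
import sympy as sp

r, r2, eps0 = sp.symbols('r r_2 varepsilon_0', positive=True)
q1, q2 = sp.symbols('q_1 q_2', real=True)
E1 = q1 / (4 * sp.pi * eps0 * r**2)            # radial Coulomb field of q_1
W = -q2 * sp.integrate(E1, (r, sp.oo, r2))     # work pulling q_2 in from infinity
print(sp.simplify(W))                          # q_1*q_2/(4*pi*varepsilon_0*r_2)
```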
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 13, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9454806447029114, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/6550/what-is-the-stress-energy-distribution-of-a-string-in-target-space/6565
# What is the stress-energy distribution of a string in target space?

If $| \psi \rangle$ is a string mode, how do you compute $\langle \psi | \hat{T}^{\mu\nu}(\vec{x}) | \psi \rangle$ where $\vec{x}$ is a point in target space? This information would tell us the energy distribution of a string. In string theory, the size of a string grows logarithmically with the worldsheet regulator scale. In the limit of zero regulator size, all strings are infinite in size, and this ought to show up in the stress-energy distribution.

What about multi-string configurations? The vacuum has virtual string pairs. Only the spatial average annihilates the vacuum: $\int d^9x\, \hat{T}^{00}(\vec{x})\, |0\rangle = 0$. We have $\hat{T}^{00}(0) | 0 \rangle \neq 0$ even though $\langle 0 | \hat{T}^{00}(0) | 0 \rangle = 0$. Here, $|0\rangle$ is the string vacuum. The stress-energy operator can create a pair of strings; it doesn't preserve the total number of strings.

-

String theory isn't field theory --- the string is all there is. You compute S-matrix elements, not off-shell things, and local T is a field theory concept. – Ron Maimon May 14 '12 at 7:02

## 1 Answer

There's no stress-energy tensor in string theory. If there were, the Weinberg-Witten argument could be applied to the graviton string modes, and that would lead to a contradiction.

-

This is a really good answer +1, but it should be mentioned that the stress-energy issues are directly analogous to those of General Relativity, and that the Weinberg-Witten argument is based on the Case and Gasiorowicz 1962 argument. – Ron Maimon May 14 '12 at 7:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 7, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.870613157749176, "perplexity_flag": "middle"}
http://www.physicsforums.com/showthread.php?t=268707
Physics Forums

Fourier Analysis

1. The problem statement, all variables and given/known data

I need to find the Fourier coefficients for a function f(t)=1 projected onto trigonometric polynomials of infinite order.

2. Relevant equations

Equation finding the first coefficient, the constant term:

3. The attempt at a solution

So I feel quite stupid because this should be a very simple integral. The integral of a monotonic function over an interval that is symmetric about the origin SHOULD be equal to zero. It seems my book has just integrated from 0 to pi for the first Fourier coefficient. I'm here pulling my hair out trying to figure out why/how this is correct, because the two integrals certainly are not equivalent. Can someone help me out here?

Shouldn't that equation be: $$a_0=\frac{1}{\pi} \int_{-\pi}^{\pi} f(t)dt$$ which gives $a_0=0$? ...Or were you trying to say that the answer key gives $a_0=\frac{1}{\sqrt{2}}$? ...Are you sure that you are looking for the first term $a_0$ and not the first non-zero term?

See, that's what I think it should be (and yes, I'm looking for the first term $a_0$). But the answer key gives the answer I posted above. I'm beginning to think it's wrong. Although, going by what we think the answer is, the transform of the function f(t)=1 would just be 0, which doesn't make sense.

## Fourier Analysis

I can post the entire answer key answer if you would like to see it.

Are you looking for the Fourier Transform of f(t), or the Fourier Series representation? Assuming you are looking for the latter: just because the first term in the series is zero doesn't mean the entire series is zero.

I'm looking for the transform, not the series.

Well, the Fourier transform is $$\hat{f}(\omega)= \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{\infty} f(t) e^{-i\omega t} dt$$ which is not an infinite series with coefficients; this is why I assumed you were looking for the Fourier series. Your original question makes no sense to me if you are looking for the transform and not the series! I'm pretty sure you're looking for the series, not the transform... Are you sure f(t)=1? If I were a betting man, I'd bet dollars to dimes that the question actually has f(t)=1(t), where 1(t) is the unit (Heaviside) step function, namely:

$$1(t)= \left\{ \begin{array}{rl} 0, & t<0 \\ 1, & t \geq 0 \end{array} \right.$$

In which case, the equation $$a_0=\frac{1}{\sqrt{2} \pi} \int_{-\pi}^{\pi} f(t)dt$$ should give you the correct series coefficient.
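The last post's guess is easy to test numerically: with f(t) = 1(t), the quoted formula does reproduce an answer key value of $\frac{1}{\sqrt{2}}$. A small Python sketch (assuming that normalization; the grid size is an arbitrary choice):

```python
# Numerically evaluate a0 = 1/(sqrt(2)*pi) * integral of the Heaviside
# step 1(t) over [-pi, pi]; the integral is pi, so a0 = 1/sqrt(2).
import numpy as np

t = np.linspace(-np.pi, np.pi, 200001)
step = (t >= 0).astype(float)   # the unit step 1(t)

a0 = np.trapz(step, t) / (np.sqrt(2) * np.pi)
print(a0)  # ~0.7071, i.e. 1/sqrt(2), matching the answer key
```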
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9432171583175659, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/18403/p-split-hecke-characters/18407
## p-split Hecke characters

Let $K$ be a quadratic imaginary field, $\bf n$ an ideal in the ring of integers $\mathcal{O}_K$ and $\xi$ an algebraic Hecke character of type $(A_0)$ for the modulus $\bf n$. One knows (from Weil) that there exists a number field $E=E_\xi\supseteq K$ with the property that $\xi$ takes values in $E^\times$. Let $p$ be a prime that splits in $K$. Consider the following condition: there exists an unramified place $v\mid p$ in $E$ with residue field $k_v={\Bbb F}_p$ such that $\xi$ takes values in the group of $v$-units in $E$. The condition implies the existence of a $p$-adic avatar of $\xi$ with values in ${\Bbb Z}_p^\times$. I would like to know:

1) to the best of your knowledge, has this condition been considered somewhere? does it have a "name"?

2) I'm tempted to say that $\xi$ is $p$-split if the condition is satisfied (and that $v$ splits $\xi$). Would this name conflict with other situations that I should be aware of?

-

## 2 Answers

I think your $\xi$ had better be algebraic, but perhaps this is implicit somehow in your terminology. If $v$ is any finite place of $E$, there is a $p$-adic avatar of $\xi$ with values in $E_v^\times$ (whether or not $p$ splits in $K$). This construction is as far as I know due to Weil. You seem to be highlighting this construction in the special case $E_v=\mathbf{Q}_p$. I guess $E$ is called the coefficient field of $\xi$ and you're just asking that $E$ contains a prime above $p$ which is unramified of degree 1. What am I saying? I'm saying that your condition above seems to me to have nothing to do with $\xi$; it's simply asking for a name for primes of a number field whose completion is $\mathbf{Q}_p$. I don't see why $\xi$ should enter into the terminology at all. Let me know if I have misunderstood!

- @Kevin: yes, character of type $(A_0)$. I'll edit the question accordingly. About your other comment(s): the field of coefficients $E$ does depend on $\xi$. Hence, what $p$'s may work depends on $\xi$ as well. Also, if you fix $p$ in the first place then some $\xi$'s will satisfy the condition, some won't. Thus I feel that $\xi$ should enter into the terminology. – Andrea Mori Mar 16 2010 at 17:51

I think my main point is that whether or not p splits in K, or what the degree of K is, is irrelevant. Your condition is purely a condition on E, as you clearly know. I don't have anything to say about the terminology I'm afraid---perhaps this should have been a comment rather than an answer! The reason I answered was that I wanted to make sure that you knew that even if E_v wasn't Q_p, everything you said was true, and the avatar takes values in the units of E_v. – Kevin Buzzard Mar 16 2010 at 18:57

Just to second Kevin's comment: you are considering the coefficient field $E$ of $\xi$, or perhaps you could also call it the field of definition. It happens that you are considering $p$ which split in $E$. I would call them "primes that split in the coefficient field of $\xi$", or just "primes that split in $E$" (as Kevin suggests), if $E$ has already been introduced. Anything else is a little ambiguous, and non-standard, I think.
(It is not uncommon in this context, and in other arguments involving coefficients of motives, to consider primes with various splitting properties in the field of coefficients, and I think it is common to just use the usual algebraic number theoretic terminology with regard to this field, as Kevin suggests.) [This would have been a comment on Kevin's answer, but the comment box is too small!] -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 53, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9678762555122375, "perplexity_flag": "head"}
http://unapologetic.wordpress.com/2009/08/05/normal-transformations/?like=1&_wpnonce=81a70c627a
# The Unapologetic Mathematician

## Normal Transformations

All the transformations in our analogy — self-adjoint and unitary (or orthogonal), and even anti-self-adjoint (antisymmetric and "skew-Hermitian") transformations satisfying $T^*=-T$ — all satisfy one slightly subtle but very interesting property: they all commute with their adjoints. Self-adjoint and anti-self-adjoint transformations do because any transformation commutes with itself, and also with its negative, since negation is just scalar multiplication. Orthogonal and unitary transformations do because every transformation commutes with its own inverse. Now in general most pairs of transformations do not commute, so there's no reason to expect this to happen commonly. Still, if we have a transformation $N$ so that $N^*N=NN^*$, we call it a "normal" transformation.

Let's bang out an equivalent characterization of normal operators while we're at it, so we can get an idea of what they look like geometrically. Take any vector $\lvert v\rangle$, hit it with $N$, and calculate its squared-length (I'm not specifying real or complex, since the notation is the same either way). We get

$\displaystyle\lVert\lvert N(v)\rangle\rVert^2=\langle N(v)\vert N(v)\rangle=\langle v\rvert N^*N\lvert v\rangle$

On the other hand, we could do the same thing but using $N^*$ instead of $N$.

$\displaystyle\lVert\lvert N^*(v)\rangle\rVert^2=\langle N^*(v)\vert N^*(v)\rangle=\langle v\rvert NN^*\lvert v\rangle$

But if $N$ is normal, then $N^*N$ and $NN^*$ are the same, and thus $\lVert\lvert N(v)\rangle\rVert^2=\lVert\lvert N^*(v)\rangle\rVert^2$ for all vectors $\lvert v\rangle$.

Conversely, if $\lVert\lvert N(v)\rangle\rVert^2=\lVert\lvert N^*(v)\rangle\rVert^2$ for all vectors $\lvert v\rangle$, then we can use the polarization identities to conclude that $N^*N=NN^*$. So normal transformations are exactly those for which the length of a vector is the same whether we use the transformation or its adjoint. For self-adjoint and anti-self-adjoint transformations this is pretty obvious, since they're (almost) the same thing anyway. For orthogonal and unitary transformations, they don't change the lengths of vectors at all, so this makes sense.

Just to be clear, though, there are matrices that are normal, but which aren't any of the special kinds we've talked about so far. For example, the transformation represented by the matrix

$\displaystyle\begin{pmatrix}1&1&0\\0&1&1\\1&0&1\end{pmatrix}$

has its adjoint represented by

$\displaystyle\begin{pmatrix}1&0&1\\1&1&0\\0&1&1\end{pmatrix}$

which is neither the original transformation nor its negative, so it's neither self-adjoint nor anti-self-adjoint. We can calculate their product in either order to get

$\displaystyle\begin{pmatrix}2&1&1\\1&2&1\\1&1&2\end{pmatrix}$

Since we get the same answer, the transformation is normal, but it's clearly not unitary, because if it were we'd get the identity matrix here.

Posted by John Armstrong | Algebra, Linear Algebra
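The worked example is easy to verify numerically. A quick sketch (NumPy assumed; for a real matrix the adjoint is just the transpose):

```python
# Check that the example matrix is normal but neither (anti-)self-adjoint
# nor orthogonal.
import numpy as np

N = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]])
Nstar = N.T  # adjoint of a real matrix = its transpose

print(np.array_equal(N @ Nstar, Nstar @ N))  # True: N commutes with its adjoint
print(np.array_equal(N @ Nstar, np.eye(3)))  # False: N is not orthogonal
print(np.array_equal(Nstar, N), np.array_equal(Nstar, -N))  # False False
```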
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 20, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8841027021408081, "perplexity_flag": "head"}
http://mathhelpforum.com/pre-calculus/110412-exponential-growth.html
# Thread:

1. ## Exponential Growth

So, a scientist is growing a colony of bacteria. After 2 hours, there are 40 bacteria. After 3 hours, there are 120 bacteria. If the population of bacteria grows exponentially, then how many bacteria will there be after 4 hours?

We have (2, 40), (3, 120), and we need to find (4, ?). I've done the following:

1) $40=ae^{2k}$ and $120=ae^{3k}$

2) $a=40/e^{2k}$

3) $120=(40/e^{2k})e^{3k}$

4) $3=e^{3k}/e^{2k}$

That's where I am. In most of the examples I've seen, in step 3, the equation is usually being multiplied by something simple, like $e^{4k}/e^{2k}$, but for this one, I guess I have to divide $e^{3k}/e^{2k}$, which leaves me confused in step 4.

2. Originally Posted by BeSweeet: So, a scientist is growing a colony of bacteria. After 2 hours, there are 40 bacteria. After 3 hours, there are 120 bacteria. If the population of bacteria grows exponentially, then how many bacteria will there be after 4 hours?

Exponential functions can have any base, not just e. In this case, the colony of bacteria is tripling every hour. So the base is 3. Say the colony started with $a$ bacteria, i.e. $a \cdot 3^0$. After one hour it had $a \cdot 3^1$. After two hours it had $a \cdot 3^2$. After three hours it had $a \cdot 3^3$. After four hours it will have $a \cdot 3^4$, or the number after three hours times 3. Notice what is changing in this sequence - the exponent. That makes this an exponential (when the variable is in the exponent).

If you want to see this using the $P = ke^{rt}$ formula, then follow this:

$40 = k e^{2r}$

$120 = k e^{3r}$ or $40 = \frac{k e^{3r}}{3}$

Since $40 = 40$ you can say $k e^{2r}=\frac{k e^{3r}}{3}$

Multiply both sides by $\frac{3}{k}$ to get $3e^{2r}=e^{3r}$

Then $3=e^r$. Plugging this back into $P = ke^{rt}$ you get $P = k\cdot 3^t$, which is the same as what is in the series above.
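For a quick numeric check of the algebra above, here is a sketch in the base-e form (variable names mirror the thread):

```python
# From 40 = a*e^(2k) and 120 = a*e^(3k): dividing gives e^k = 3.
import math

k = math.log(120 / 40)       # k = ln 3
a = 40 / math.exp(2 * k)     # a = 40/9

print(math.exp(k))           # 3.0 -> the colony triples each hour
print(a * math.exp(4 * k))   # 360.0 -> bacteria after 4 hours
print(a * 3 ** 4)            # 360.0 -> same answer in the P = k*3^t form
```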
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 23, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9417381286621094, "perplexity_flag": "middle"}
http://mathhelpforum.com/statistics/118050-additive-counting-principle.html
# Thread:

1. ## additive counting principle

Our teacher assigned some pre-req questions, but I have no idea how to do this one:

How many 13-card bridge hands include either seven hearts or eight diamonds?

I was thinking you would do 52C13 to figure out how many different hands you could have altogether... but then that doesn't really seem to make sense, and I wouldn't know where to go from there. Any help would be great, thanks.

2. Originally Posted by sophiaroth: How many 13-card bridge hands include either seven hearts or eight diamonds?

Note that no hand can contain seven hearts and eight diamonds (that would take 15 cards). So these events are disjoint. Therefore we can just add:

$\binom{13}{7}\cdot\binom{39}{6}+\binom{13}{8}\cdot\binom{39}{5}$

3. Thanks for the reply, but I'm not really sure where you got those numbers from, or the notation you're using with the numbers on top of each other.

4. Originally Posted by sophiaroth: Thanks for the reply, but I'm not really sure where you got those numbers from, or the notation you're using with the numbers on top of each other.

$\binom{N}{k}={}_N \mathcal{C}_k =\frac{N!}{k!(N-k)!}$

That is standard notation for combinations. It's called a binomial coefficient. Choose seven hearts and six others, or choose eight diamonds and five others.
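To evaluate Plato's expression, a one-line sketch (Python 3.8+ for math.comb):

```python
from math import comb

# exactly seven hearts (6 other cards from the 39 non-hearts), plus
# exactly eight diamonds (5 other cards from the 39 non-diamonds)
hands = comb(13, 7) * comb(39, 6) + comb(13, 8) * comb(39, 5)
print(hands)  # 6339660327
```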
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9628552794456482, "perplexity_flag": "middle"}
http://mathoverflow.net/revisions/31623/list
# An advanced exposition of Galois theory

My knowledge of Galois theory is woefully inadequate. Thus, I'd be interested in an exposition that assumes little knowledge of Galois theory, but is advanced in other respects. For instance, it would be nice if it were to include remarks like the following:

A finite field extension $K / k$ is separable iff the geometric fiber of $\operatorname{Spec} K \to \operatorname{Spec} k$ is a finite union of reduced points.

[I was never able to remember what "separable" meant until I saw this equivalence while studying unramified morphisms. The proof is by the Chinese Remainder Theorem. Also note: this definition is incomplete, in the sense that it does not specify when a non-finite extension is separable.]

Is there any such exposition?
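For a concrete handle on the "reduced points" remark: writing $K = k[x]/(f)$, the geometric fiber is $\bar k[x]/(f)$, a product of reduced points exactly when $f$ is squarefree over $\bar k$, i.e. when $\gcd(f, f') = 1$. A small SymPy sketch of that derivative test (the polynomials are my own illustrative choices; genuinely inseparable extensions need characteristic $p$ with a transcendental constant, such as $x^p - t$ over $\mathbb{F}_p(t)$, which is not coded here):

```python
# Derivative criterion: f is squarefree over the algebraic closure
# iff gcd(f, f') = 1, so the geometric fiber of Spec k[x]/(f) is a
# union of reduced points exactly in that case.
from sympy import symbols, Poly, gcd

x = symbols('x')

f = Poly(x**3 - 2, x)       # minimal polynomial of 2^(1/3) over Q: separable
print(gcd(f, f.diff(x)))    # 1 -> three distinct geometric points

h = Poly((x - 1)**2, x)     # a repeated root (not irreducible; just shows the test)
print(gcd(h, h.diff(x)))    # x - 1 -> the fiber has a non-reduced point
```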
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9335993528366089, "perplexity_flag": "middle"}
http://mathoverflow.net/questions/57515/one-point-in-the-post-of-terence-tao-on-ax-grothendieck-theorem
## One point in the post of Terence Tao on Ax-Grothendieck theorem

I was trying to understand completely the post of Terence Tao on the Ax-Grothendieck theorem. http://terrytao.wordpress.com/2009/03/07/infinite-fields-finite-fields-and-the-ax-grothendieck-theorem/

This is very cute. Using finite fields you prove that every injective polynomial map $\mathbb C^n\to \mathbb C^n$ is bijective. It seems to me that the only point in the proof presented in the post that is not explained completely is the following lemma:

Lemma. Take any finitely generated ring over $\mathbb Z$ and quotient it by a maximal ideal. Then the quotient is a finite field.

Is there some comprehensible reference for the proof of this lemma?

In slightly different wording, the question is the following: assuming the Nullstellensatz, can one really give a complete proof of the Ax-Grothendieck theorem in two pages, so that it can be completely explained in one (2-hour) lecture of an undergraduate course on algebraic geometry?

- This was previously answered: mathoverflow.net/questions/30599/… – fherzig Mar 14 2011 at 22:19

## 4 Answers

To prove the Nullstellensatz over $\mathbb{Z}$: as the morphism $f: \mathrm{Spec}(R)\to\mathrm{Spec}(\mathbb Z)$ is of finite type, a theorem of Chevalley says that the image of any constructible subset is constructible. So the image under $f$ of any closed point is a one-point constructible subset. This cannot be the generic point of $\mathrm{Spec}(\mathbb Z)$, so it must be a closed point.

Note that this does not hold in general. For example, over the ring of $p$-adic integers, the ideal $(pX-1)\mathbb{Z}_p[X]$ is maximal, but its preimage in $\mathbb{Z}_p$ is $0$, which is not maximal.

[EDIT] Another proof using Noether's normalization lemma: http://mathoverflow.net/questions/42276: if a maximal ideal $\mathfrak m$ of $R$ is such that $\mathfrak m\cap \mathbb Z=0$, then $R/\mathfrak m$ is of finite type over (and contains) $\mathbb Z$. So there exists a non-zero $f\in\mathbb Z$ and a finite injective homomorphism $\mathbb Z_f[X_1,\dots, X_d]\hookrightarrow R/\mathfrak m$. But then $\mathbb Z_f[X_1,\dots, X_d]$ must be a field. This is impossible because the units of this ring are $\pm f^k$ with $k$ an integer.

- This first argument is quite nice! – Daniel Litt Mar 6 2011 at 2:49

2 The theorem of Chevalley can be proved by induction, using the second argument :). – Qing Liu Mar 6 2011 at 10:17

@Qing Liu, thanks for the answer. I am lost when you say in lines 2-3 of the Edit: "So there exists a non-zero $f\in \mathbb Z$ and a finite injective homomorphism ...". Why is there such an $f$? – aglearner Mar 7 2011 at 0:16

Sorry, my punctuation was not good. I should have written "If a maximal ideal ..., then there exists $f$". The existence of such an $f$ is a form of Noether's normalization lemma that you can find at mathoverflow.net/questions/42276 – Qing Liu Mar 7 2011 at 8:14

After a year and a half I guess I understand the logic of both answers. Could you please say if there is a pedagogic explanation of Chevalley's theorem somewhere on the web? – aglearner Oct 14 at 13:34

Let $R$ be a finitely generated $\mathbb{Z}$-algebra, and $\mathfrak{m}\subset R$ a maximal ideal. We wish to show $R/\mathfrak{m}$ is a finite field.
Let $i: \mathbb{Z}\to R$ be the unique ring map; then $i^{-1}(\mathfrak{m})$ is a maximal ideal in $\mathbb{Z}$ (as $R$ is finitely generated over $\mathbb{Z}$), and thus $\mathbb{Z}/i^{-1}(\mathfrak{m})$ is a finite field $\mathbb{F}_p$ for some prime $p$. As $R$ is finitely generated over $\mathbb{Z}$, $R/\mathfrak{m}$ is finitely generated over $\mathbb{F}_p$. But a field that is finitely generated as an algebra over $\mathbb{F}_p$ is a finite extension of $\mathbb{F}_p$, completing the proof.

- Daniel, thanks for the proof of the lemma! – aglearner Mar 6 2011 at 0:16

14 Perhaps it's worth emphasizing that what makes this argument work is the non-trivial fact (proved in Qing Liu's answer) that if $f:\mathbb{Z}\rightarrow R$ is of finite type, then the pre-image of a maximal ideal of $R$ under $f$ is non-zero. This is a particular case of the general Nullstellensatz on Jacobson rings (found in, e.g., Eisenbud's book). A proof of the statement about fields finitely generated as rings using the usual Nullstellensatz is outlined in Exercise 6 of the Noetherian rings chapter of Atiyah and Macdonald. – Keenan Kidwell Mar 6 2011 at 1:07

@Keenan Kidwell: Of course; thanks for making that explicit. – Daniel Litt Mar 6 2011 at 1:13

1 An ignorant question: "$R/\mathfrak{m}$ is finitely generated over $\mathbb F_p$. But a field that is finitely generated as an algebra over $\mathbb F_p$ is a finite extension, completing the proof". I'm missing something here. How did finitely generated extension get to be the same as finite field extension? (in particular isn't $\mathbb F_p(x)$ a finitely generated extension but not a finite extension)? – Anthony Quas Mar 6 2011 at 21:44

1 $\mathbb{F}_p(x)$ is not finitely generated as an $\mathbb{F}_p$-algebra; in particular, all the irreducibles need to be inverted. See e.g. mathreference.com/ag,fgaf.html – Daniel Litt Mar 6 2011 at 21:54

One can give a more elementary proof of the fact that $\mathfrak{m} \cap \mathbb{Z} \neq 0$ (by "more elementary" I mean a proof that only uses the Nullstellensatz over $\mathbb{Q}$). Notice that it is enough to verify the claim for $R=\mathbb{Z}[x_1,\dots,x_n]$ and $\mathfrak{m} \in \operatorname{Max}(R)$. Suppose there is $\mathfrak{m} \in \operatorname{Max}(R)$ such that $\mathfrak{m} \cap \mathbb{Z} =0$. Then we may assume that $\mathbb{Z} \subseteq F :=\mathbb{Z}[x_1,\dots,x_n]/\mathfrak{m}$. If we write $\alpha_{i}=x_i+\mathfrak{m}$, we have $F=\mathbb{Z}[\alpha_1,\dots,\alpha_n]$. Since $F$ is a field we conclude that $\mathbb{Z}[\alpha_1,\dots,\alpha_n]=\mathbb{Q}(\alpha_1,\dots,\alpha_n)$.

Claim: $F/\mathbb{Q}$ is an algebraic extension.

Proof: $F/\mathbb{Q}$ is a finitely generated field extension (generated as an algebra); in particular $F$ is of the form $\mathbb{Q}[y_1,\dots,y_m]/M$ for some maximal ideal $M$ of $\mathbb{Q}[y_1,\dots,y_m]$. By the Nullstellensatz $M$ has a zero $(\beta_1,\dots,\beta_m)$ where each $\beta_i$ is algebraic over $\mathbb{Q}$, so $F=\mathbb{Q}(\beta_1,\dots,\beta_m)$ is algebraic over $\mathbb{Q}$.

Since each $\alpha_{i}$ is algebraic, there are integers $q_i$ such that $q_{i}\alpha_{i}$ is integral over $\mathbb{Z}$ for all $i$. In particular $F=\mathbb{Z}[\alpha_1,\dots,\alpha_n]$ is an integral extension of $\mathbb{Z}[\frac{1}{q_1},\dots,\frac{1}{q_n}]$. Since $F$ is a field, we have that $\mathbb{Z}[\frac{1}{q_1},\dots,\frac{1}{q_n}]$ is a field, which is a contradiction ($p$ is not invertible for any prime $p$ not dividing $q_{1}\cdots q_{n}$).
- Unfortunately, I cannot understand when you write "By the Nullstellensatz we have that each $\alpha_i$ is algebraic over $\mathbb Q$". Could you please explain this point? Are you using the Nullstellensatz over $\mathbb Q$ here? To which ring are you applying it? – aglearner Mar 7 2011 at 0:12

@aglearner: I've added an explanation of what you are wondering. The point is that one version of the Nullstellensatz, which I learned by the name algebraic Nullstellensatz, is the following: a finitely generated extension of fields $F/K$ (generated as an algebra) is algebraic. – Guillermo Mantilla Mar 7 2011 at 1:21

This is not an answer to your question, but let me point out that the Ax-Grothendieck theorem is now easy to prove using E-polynomials (Hodge-Deligne polynomials). If $f:X \to X$ is an injective endomorphism of a complex algebraic variety, then $E(X) = E(f(X))=E(X)-E(X\setminus f(X))$. So $E(X\setminus f(X))=0$ and $X\setminus f(X) = \emptyset$, because the degree of the E-polynomial of a non-empty constructible set is twice its dimension. Since it presupposes mixed Hodge theory, this proof is not trivial at all. But at least for me, this looks more natural.

- I hadn't heard about this, and I don't know anything about E-polynomials. Do you know if the proof you describe uses anything in characteristic p in the background? Any "spreading" over Z? – Marty Mar 6 2011 at 3:17

This proof uses only mixed Hodge theory, and one does not need to switch the base field or ring. On the other hand, both the E-polynomial proof and the finite-field proof have a "motivic" nature: namely, the E-polynomial and the number of rational points are generalized Euler characteristics, that is, they are additive and multiplicative. – Takehiko Yasuda Mar 6 2011 at 9:39

But I wonder, since much of mixed Hodge theory (after Deligne) lifts characteristic p results to get characteristic zero results. And perhaps even transcendental proofs in Hodge theory (like those of Saito) might hide some characteristic p aspects. Do you know if characteristic p is hidden in the background for the results on the E-polynomial? – Marty Mar 10 2011 at 22:05
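The finite-field mechanism behind the theorem (injective implies surjective by pigeonhole, once everything is reduced to a finite field) can be seen directly in a brute-force computation. A sketch; the sample maps below are my own choices:

```python
# Over a finite field, a map of a finite set to itself is injective
# iff it is surjective; this is the pigeonhole step of the proof.
from itertools import product

p = 7
points = list(product(range(p), repeat=2))  # all of (F_7)^2

def check(f):
    image = {f(x, y) for (x, y) in points}
    return len(image) == len(points), len(image) == p * p  # injective, surjective

# a triangular polynomial map, invertible over any field:
print(check(lambda x, y: (y % p, (x + y**3) % p)))   # (True, True)
# a non-injective polynomial map, for contrast:
print(check(lambda x, y: ((x * x) % p, y % p)))      # (False, False)
```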
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 91, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9145935773849487, "perplexity_flag": "head"}
http://mathoverflow.net/questions/40431/maps-that-admit-local-sections-through-each-point-in-the-domain
## Maps that admit local sections through each 'point' in the domain

In a recent MO question I asked about the relation between surjective submersions (in the category of smooth or other manifolds) and maps that admit local sections. The latter, it turns out, are more general than the former, as surjective submersions $f:X\to Y$ admit local sections through each point of the domain. This means that for each $y\in Y$ there is an $x\in X$ such that $f(x)=y$, and there is an open neighbourhood $U\ni y$ and a map $\sigma:U\to X$ such that $f\circ \sigma$ is the inclusion of $U$ into $Y$ and $x=\sigma(y)$.

Now there is a class of local-section-admitting maps in $Top$ which have a similar property: every point in the domain has a local section through it; these are called (surjective) topological submersions. In some respects these are better than just maps which have local sections, because a very complicated map could have very few local sections, none of which pass through regions of interest. Having many local sections seems to force the map to be rather nice. For example, a topological submersion with discrete fibres is a local homeomorphism, whereas a map with local sections with discrete fibres can be pretty wild. Ditto submersions in Diff: such a thing with discrete fibres is a local diffeomorphism.

Now for a general Grothendieck pretopology $J$ on a concrete category $C$ one could ask formally for this sort of map. Define a surjective $C$-submersion to be a map $f:X\to Y$ in $C$ such that for each point $x$ of $X$ there is a covering family $(U_i \to Y)_{i\in I}$ from $J$ such that $x$ is in the image of some family of maps $\sigma_i:U_i \to X$ which are local sections of $f$ (or equivalently, some map $U_i \to X$ for some $i$ which is a local section of $f$). I can imagine an extension beyond concrete categories, and so here's the question(s) (and I hope it justifies the alg.geom. tag):

(1) Is this sort of setup seen in other categories? (most notably in Schemes (or some subcategory) or a topos)

(2) Since in Schemes (or a subcategory) one has $R$-points for arbitrary (nice enough) rings, one can consider how the collection of local sections changes under extension/restriction of scalars. Is this sort of thing considered?

(3) Do we recover some sort of characterisation of etale maps (or similar) by analogy with the result for Diff- and Top-submersions with discrete fibres?

- Consider the affine line $X$ over $Y = {\rm{Spec}}(k)$ for an imperfect field $k$. What do you propose to do for closed points $x \in X$ such that $k(x)/k$ is not separable? Demanding sections through all points of the source seems much too strong if one wishes to work with the etale topology. (The analogy with manifolds misses some aspects.) In practice it is very often used that smooth maps admit etale-local sections, and fppf maps admit quasi-finite flat sections, but neither can be expected to pass through an arbitrary point in a fiber in general. – BCnrd Sep 29 2010 at 8:57

I'm a complete novice when it comes to alg.geom., so this is just the sort of information I am after. A short (but unsatisfactory) answer would then be 'no, this doesn't generalise from manifolds to schemes', but I presume that there are nice situations when something like this idea has merit?
– David Roberts Sep 29 2010 at 9:28

## 1 Answer

I think that the property you want is the criterion for smoothness in terms of lifting maps over nilpotent thickenings. This is a way of expressing in algebraic terms the idea in manifold theory of having local sections. The statement is that if $f:X \to S$ is a morphism of schemes, locally of finite presentation, then $f$ is smooth precisely when the following is satisfied: given a map Spec $A \to X$, a square-zero thickening Spec $B$ of Spec $A$ (so $B$ surjects onto $A$, and the kernel $I$ satisfies $I^2 = 0$), and a map Spec $B \to S$ making the obvious diagram commute, we can lift the latter map to a map Spec $B \to X$.

If you look at Ex. II.8.6 in Hartshorne, it discusses this somewhat obliquely. The book Néron Models discusses it more clearly. You will probably have to think quite a bit yourself to really understand the analogy.

-
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 43, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9395509362220764, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/52300/factors-of-c-in-the-hamiltonian-for-a-charged-particle-in-electromagnetic-fiel/52301
# Factors of $c$ in the Hamiltonian for a charged particle in an electromagnetic field

I've been looking for the Hamiltonian of a charged particle in an electromagnetic field, and I've found two slightly different expressions, which are as follows: $$H=\frac{1}{2m}(\vec{p}-q \vec{A})^2 + q\phi$$ and also $$H=\frac{1}{2m}(\vec{p}-\frac{q}{c}\vec{A})^2 + q\phi$$ with $\vec{p}$ the momentum, $q$ the electric charge, $\vec{A}$ the vector potential, $\phi$ the scalar potential and $c$ the speed of light.

So basically the difference is in the term $1/c$ multiplying $\vec{A}$, present in the second form (which I use in my lectures) but not in the first one (used by Griffiths in Introduction to Quantum Mechanics to treat the Aharonov-Bohm effect). Why does this difference exist and what does it mean? And how does the term $1/c$ affect the dimensional analysis (the units) of the problem?

- 3 The first is the expression in SI units (which most modern books are written in). The second is the expression in Gaussian units (which most scientific literature and old books still use). – Fabian Jan 27 at 18:39

Thank you! Now everything makes sense! I'll look for the details of how to pass from one system to another. Thanks!!! – Ajayu Jan 27 at 20:27

Ajayu, you should probably accept the answer you like best by clicking on the gray check symbol next to the answer. – Rafael Reiter Jan 28 at 12:56

## 3 Answers

The missing $1/c$ in your first expression is simply a consequence of the units used. The second expression is in Gaussian units while the first one is in either SI units or natural units. In the latter system of units (natural units) certain constants like $\hbar$ and $c$ have a numerical value of 1, so they can be left out of the equations.$^1$ This is common practice in physics and it doesn't change anything about the dimensional analysis of the problem, as long as you keep in mind that you're working with those natural units. The same goes for any other system as well. Every system of units $A$ is consistent with any other system of units $B$ as long as you yourself are consistent in their usage and correctly transform everything between $A$ and $B$ when desired. So there is no fundamental difference between a dimensional analysis in SI, Gaussian or natural units, as long as you keep in mind what units you're working with. The units themselves will (obviously) vary between systems, but dimensional analysis in one system will be entirely consistent with dimensional analysis in another.$^2$

$^1$ Note that this is not the case for SI units. As is rather well known, the numerical value of $c$ in SI units is about $3\times10^8\ \mathrm{m/s}$. The reason for the absence of $1/c$ in SI units is a conventional difference. Wikipedia has a comparison between Gaussian and SI units explaining the major differences.

$^2$ Perhaps one important note concerning Gaussian and SI units here is that due to the different conventions, it can be more difficult to transform between them. E.g. making an equation dimensionless in SI units might yield a non-dimensionless equation when transformed into Gaussian units. One example is when we consider Gauss's law in Gaussian units divided by the free charge density: $(1/\rho)\vec{\nabla}\cdot\vec{E} = 4\pi$. The quantity on the left-hand side is dimensionless in Gaussian units, but not in SI units, where it is $(1/\rho)\vec{\nabla}\cdot\vec{E} = 1/\epsilon_0$. So you have to watch out for that when transforming your equations.
Dimensional analysis may therefore also yield seemingly different results in SI or Gaussian units, but there is no problem if you remember the conventional differences and, again, stay consistent.

- Thank you for the answer! I thought about it at first, but it still gave me some problems (this is just a part of an exercise and I had more problems with the units later), so I don't think it's the natural units, and the book I took it from explicitly says it's in SI units. It seems to me that it's like Fabian says in another comment: the second expression is in Gaussian units. However, thanks for your help! – Ajayu Jan 27 at 20:30

@Ajayu Indeed, if you know the first expression is in SI units, that's your answer. I edited my answer to incorporate that possibility. I left the other possibility of natural units in there because that's something you come across often as well. – Wouter Jan 28 at 17:06

Maybe clarify that only natural units have $c = 1$, because now it sounds like both SI and natural units set $c$ and $\hbar$ as $1$. – Kitchi Jan 28 at 19:30

1 @Kitchi I've added some clarification. In the mean time I've also elaborated on the differences between SI and Gaussian units, for the sake of completeness. And I think I've hammered the point of consistency to death. I hope it's all correct and clear. – Wouter Jan 28 at 21:12

As in any problem in physics, you can use whatever units you like the most. In electromagnetism, factors like $\frac{1}{4\pi \epsilon_{0}}$ or $\frac{1}{c}$ appear a lot. So you can define a new system of units in which $c=1$, such as natural units or Planck units. See http://en.wikipedia.org/wiki/Natural_units

- 1 Thank you for the answer! I thought about it at first, but it still gave me some problems (this is just a part of an exercise and I had more problems with the units later), so I don't think it's the natural units, and the book I took it from explicitly says it's in SI units. It seems to me that it's like Fabian says in another comment: the second expression is in Gaussian units. However, thanks for your help! – Ajayu Jan 27 at 20:31

The $\frac{1}{c}$ makes the electric field and magnetic field have the same units. Along with the other answers provided, this makes the second equation very useful.

- 1 Thank you very much! – Ajayu Jan 27 at 20:47
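A quick way to see the SI convention is a dimensional check: in SI the vector potential carries units of tesla·meter, so $q\vec A$ already has the units of momentum and no $1/c$ is needed. A sketch using the pint units library (assuming pint is installed):

```python
# In SI units, charge * vector_potential has the dimensions of momentum.
import pint

ureg = pint.UnitRegistry()

qA = (1 * ureg.coulomb) * (1 * ureg.tesla * ureg.meter)
momentum = 1 * ureg.kilogram * ureg.meter / ureg.second

print(qA.dimensionality == momentum.dimensionality)  # True
```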
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 34, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9606532454490662, "perplexity_flag": "head"}
http://mathoverflow.net/questions/73236/periodic-matrices
## Periodic matrices

A square matrix $M$ such that $M^{k+1}=M$, for some positive integer $k$, is called a periodic matrix.

1. Can we characterize the periodic matrices in $\mathcal{M}_n(\mathbb{Z})$?

2. If we replace $\mathbb{Z}$ by a Euclidean domain?

3. If we replace $\mathbb{Z}$ by a PID?

-

## 2 Answers

Geoff already gave a description. Here is a semigroup theory approach. $M^{k+1}=M$ means that $E=M^k$ is an idempotent, $E^2=E$, and $EM=M=ME$. All idempotents in the matrix semigroup over $\mathbb{Z}$ are easily described as matrices similar to $\mathrm{diag}(0,\dots,0,1,\dots,1)$ (several $0$'s followed by several $1$'s) with a unimodular conjugator. Hence we can assume that $E$ has that form. Therefore $M=EM=ME$ must have the form described in Geoff's answer. The same description holds for matrices over any ring if the structure of idempotents is similar to the above.

Edit. As Geoff pointed out below, in fact since $EM=ME=M$, we get that the block $A$ in $M$ is $0$, so $M$ looks like $$\left(\begin{array}{cc} 0&0\\ 0 & B\end{array}\right)$$ where $B$ is an integer matrix with $B^k=1$. This is of course an "if and only if" description. I am pretty sure this has been known since the 50s, but I do not have time to dig it up. It should follow from the description of Green's relations in matrix semigroups.

- Actually, your result is a bit stronger, since some of the matrices in my answer are conjugate. Because you have $M = EM = ME,$ you have actually conjugated them to a matrix of the form in my answer which also has $A = 0.$ – Geoff Robinson Aug 20 2011 at 7:39

@Geoff: Yes, you are right. – Mark Sapir Aug 20 2011 at 8:07

Very nice Mark, Geoff, thanks. – Portland Aug 21 2011 at 2:14

@Geoff: I used a standard semigroup-theoretic argument (an element of a semigroup satisfying $x^{n+1}=x$ is called a group element; it belongs to the maximal subgroup with identity element $x^n$, etc.). When you deal with a (finite) semigroup, you should start by describing the idempotents, then the group elements, then the regular D-classes, etc. The argument works for every (not necessarily commutative) ring where the structure of idempotents is as I described. – Mark Sapir Aug 23 2011 at 0:49

It seems that they are those matrices which are conjugate via a unimodular integral matrix to a matrix of the form $\left( \begin{array}{cc} 0 & A\\0 & B \end{array} \right),$ where $B$ is an invertible integral (square) matrix of finite order and $A$ is an arbitrary integral matrix. I think you just extend a $\mathbb{Z}$-basis of the pure submodule of integral vectors with $Mv = 0$ to a basis for $\mathbb{Z}^{n}$ (as column space). Then it's just a matter of seeing what the matrix looks like with respect to that basis, and computing its powers. This argument works over any PID (so, in particular, over any Euclidean domain).

-
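Mark's normal form is easy to test on a small example. A sketch (NumPy assumed; here $B$ is a rotation of order 4, so $k = 4$):

```python
# M = diag(0, B) with B of finite order satisfies M^(k+1) = M.
import numpy as np

M = np.array([[0, 0,  0],
              [0, 0, -1],
              [0, 1,  0]])   # block form: 1x1 zero block, then B = [[0,-1],[1,0]]

print(np.array_equal(np.linalg.matrix_power(M, 5), M))  # True: B^4 = I, k = 4
```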
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 30, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.940951406955719, "perplexity_flag": "head"}
http://physics.stackexchange.com/questions/40975/growth-condition-for-the-potential?answertab=votes
# growth condition for the potential

What growth conditions should the potential inside the Hamiltonian $H=p^{2}+ V(x)$ have in order to ALWAYS get a discrete spectrum? For example, for the cases $|x|^{a}$, $\exp(b|x|)$ and so on, with real parameters $a$ and $b$, how can we prove that for these potentials we will have only a discrete set of eigenvalues?

Another question: is it possible to find a potential which has the eigenvalues $E_{n}=n^{c}$ for some positive real number $c$? I have heard that no potential in QM could have eigenvalues growing faster than $E_{n}=n^{2}$.

- I guess the only thing you need is that $V(x) \to \infty$ for $|x|\to\infty$. – Fabian Oct 16 '12 at 21:44

## 1 Answer

For the first question, I leave it up to the experts in point spectra and stuff like that (even though I think that you only need $V(x)\to\infty$ for $|x|\to\infty$).

Regarding your second question: spectra with $E_n = n^c$. I present below the reason why $c>2$ should be impossible. In fact you can easily find a potential where $E_n \sim n^c$ for $n\to\infty$ by using the correspondence principle: the differences of energies scale like $\Delta E_n \sim n^{c-1}$, which should correspond to $h/T$ where $T$ is the period of the classical motion in the potential. Given that $$T(E) = \int_{V\leq E} \frac{dx}{\sqrt{2(E- V)/m}}$$ we find that with $V(x) \sim x^{\alpha}$, we have $T(E) \sim R^{(2-\alpha)/2} \sim E^{(2-\alpha)/2\alpha}$, with $R$ the turning point such that $V(R) = E$. So in order to have $E_n = n^c$, we need to have $\Delta E_n \sim n^{c-1} \sim E^{(c-1)/c}$ match $T^{-1} \sim E^{(\alpha-2)/2\alpha}$, which leads to $$\frac{c-1}{c} = \frac{\alpha-2}{2\alpha}$$ or $$\alpha= \frac{2c}{2-c}.$$ For $c\to 2^-$, we have $\alpha$ approaching $\infty$. So for the steepest possible potential, we only get $c=2$. Note that:

• for $c=1$, we get $\alpha=2$ (harmonic oscillator)

• for $c=-2$, we get $\alpha=-1$ (H-atom)

By the way, looking at the inverse relation: $$c = \frac{2\alpha}{2+\alpha}$$ we see that there is also something happening at $\alpha=-2$: this is the point at which Heisenberg's uncertainty principle is no longer enough to keep the particle from falling into the centre.

-
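The predicted relation $c = 2\alpha/(2+\alpha)$ can be checked numerically by diagonalizing a discretized Hamiltonian. A sketch (units with $2m = \hbar = 1$, so $H = p^2 + |x|^\alpha$; the grid parameters are arbitrary choices):

```python
# Fit E_n ~ n^c for V = |x|^alpha and compare with c = 2*alpha/(2+alpha).
import numpy as np

alpha = 2.0
L, N = 15.0, 2000
x = np.linspace(-L, L, N)
dx = x[1] - x[0]

# -d^2/dx^2 as a second difference, plus the diagonal potential
H = (np.diag(np.full(N, 2.0)) - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / dx**2 + np.diag(np.abs(x) ** alpha)

E = np.linalg.eigvalsh(H)[:40]
n = np.arange(1, 41)
c_fit = np.polyfit(np.log(n[10:]), np.log(E[10:]), 1)[0]
print(c_fit, 2 * alpha / (2 + alpha))  # both close to 1 for alpha = 2
```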
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 32, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.937993049621582, "perplexity_flag": "head"}
http://mathoverflow.net/questions/90160/root-system-generalizations-of-sekiguchi-debiard-aka-laplace-beltrami-operators
## root system generalizations of Sekiguchi-Debiard (aka Laplace-Beltrami) operators

For the root system $A_n$, setting $q = t^\alpha$, taking the limit $t \to 1$, and letting $Y = (t-1) X -1$, one obtains from the Macdonald operator the so-called Sekiguchi-Debiard operator: $$D_\alpha(X) = a_\delta(x)^{-1} \sum_{w \in S_n} \epsilon(w) x^{w \delta} \prod_{j=1}^n (X + (w \delta)_j + \alpha x_j \partial_j).$$ The eigenfunctions of this family of operators, indexed by powers of $X$, are called symmetric Jack polynomials with parameter $\alpha$. It is well known that Heckman-Opdam polynomials generalize Jack symmetric polynomials to other root systems. My question is, are there generalizations of the Laplace-Beltrami operators to other root systems as well? The operator is defined in $A_{n+1}$ up to affine transformation as $$D_\alpha^2 = \frac{\alpha}{2} \sum_{i=1}^n (x_i \partial_i)^2 + \sum_{i < j} \frac{x_i^2 \partial_i - x_j^2 \partial_j}{x_i - x_j}.$$ Applying this operator to the power-sum polynomial $p_\lambda$ has a natural random-walk interpretation; see the paper "A probabilistic interpretation of Macdonald polynomials" for details.

Edit: I figured out that differentiation $\partial_\alpha$ with respect to a weight $\alpha$ is indeed the same as $x_i \partial_i$, since the latter is differentiation with respect to a maximal torus element: $\partial_\alpha e^{k \alpha} = k e^{k \alpha}$ corresponds to $x_i \partial_{x_i}\, x_i^k = k\, x_i^k$ if $e^\alpha = x_i$. Thus the Heckman-Opdam operators do provide the Laplace-Beltrami operators in the other root systems.

-
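For a sanity check on the smallest case, one can apply the displayed operator symbolically for $n = 2$ and watch it act diagonally on the first two Jack polynomials. A sketch (SymPy assumed; the eigenvalues in the comments are worked out by hand):

```python
# D = (alpha/2) * sum_i (x_i d_i)^2 + (x1^2 d1 - x2^2 d2)/(x1 - x2), for n = 2.
from sympy import symbols, diff, simplify, cancel

x1, x2, a = symbols('x1 x2 alpha')

def D(f):
    euler = (a / 2) * (x1 * diff(x1 * diff(f, x1), x1)
                       + x2 * diff(x2 * diff(f, x2), x2))
    cross = (x1**2 * diff(f, x1) - x2**2 * diff(f, x2)) / (x1 - x2)
    return simplify(euler + cross)

p1 = x1 + x2    # Jack polynomial for the partition (1)
e2 = x1 * x2    # Jack polynomial for the partition (1,1)

print(cancel(D(p1) / p1))   # alpha/2 + 1
print(cancel(D(e2) / e2))   # alpha + 1
```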
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 13, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8908299803733826, "perplexity_flag": "head"}
http://www.physicsforums.com/showthread.php?p=3833279
Physics Forums

## Centripetal force/motion experiment

1. The problem statement, all variables and given/known data

Hi, the aim of this experiment is to investigate the relationship between the centripetal force acting on an object moving in a circle of constant radius and its frequency of revolution. For the experiment, there is a rubber stopper attached to fishing line that is passed through a glass tube, and an alligator clip is used to keep the radius constant. Hanging on the fishing line are slotted masses, which are altered in order to alter the weight force, and in turn alter the centripetal force supplied by tension. I have to amend a diagram to include the real rotation of the stopper and draw labelled vectors representing the forces acting on the rubber stopper whilst it is orbiting. I have tried to draw this diagram, but I am unsure as to whether it is correct, and have hence come here in search of verification.

2. Relevant equations

Nil.

3. The attempt at a solution

The ideal diagram looks like this,

Whilst the amended one looks like this,

But with the use of vector addition, I am unsure as to how this produces the tension resultant vector. Shouldn't there be a horizontal force too? If so, what is it? Please help. Thank you in advance.

Seems like if you just take the x component of $F_{c}$ in diagram 2, the problem simplifies to the ideal case. You would have to know the angle between the glass tube and the wire, or the distance from the circular path to the top of the glass tube.

Please describe how the experiment is performed. When are the weights added?

## Centripetal force/motion experiment

Hey there, could you tell me why you hung the mass on the thread down below? The weight would simply slide down and reduce the radius. I believe you are missing a clip on the top end, or are you? Could you post a detailed procedure of the experiment? It would make it easier to answer you then.

Quote by LawrenceC: Please describe how the experiment is performed. When are the weights added?

Quote by the-ever-kid: Hey there, could you tell me why you hung the mass on the thread down below? The weight would simply slide down and reduce the radius. I believe you are missing a clip on the top end, or are you? Could you post a detailed procedure of the experiment? It would make it easier to answer you then.

The experiment was conducted as follows:

Apparatus

- Thin glass tubing (approximately 20cm long, no sharp edges)
- Fishing line (1.5m length)
- Alligator clip
- Large 2-hole rubber stopper
- Mass carrier and slotted masses (50g each)
- Stopwatch(es)
- Metre ruler
- Graph paper

1. Securely tie one end of the fishing line to the rubber stopper. Pass the line through the tube and attach a 50g mass carrier to this end as shown. Attach an alligator clip to the line so as to set the radius of the circular path of the stopper to a value between 80 and 100cm. Add five 50g masses to the carrier, making the total mass 300g.

2. Whirl the stopper over your head in a horizontal circular path at such a rate that the alligator clip is just pulled up to, but does not touch, the lower end of the tube.

3. Maintain the rate of revolution so that the alligator clip remains at the same position and measure the time taken for 30 revolutions of the stopper, at least 4 times.

4.
Add 50g to the mass carrier, keeping the alligator clip in the same position. Repeat steps 2 and 3.

5. Repeat step 4 until you have used all your masses. Enter your results in a table as shown. Calculate the average time taken for 30 revolutions of the stopper for each mass. Then, determine the time taken for one revolution (period).

With the two tables looking like:

So, we vary the mass on the hanger, thus producing different magnitudes of weight. Tension is supplied by this weight. I'm not quite sure about how it actually works and how the stopper can rotate without the radius altering. The alligator clip can move up and down if the experiment is not conducted properly, thus altering the radius. It is the responsibility of the person using the apparatus to ensure the alligator clip stays in relatively the same position throughout the duration of the experiment, in order to try to maintain a constant radius. Due to gravity, the stopper is not at 90 degrees to the tubing (as shown in the 'amended diagram'). We are to draw labelled vectors of all of the forces acting on the system, and I am unsure as to whether my diagram is completely correct or not.

Quote by ibwm: Hi, the aim of this experiment is to investigate the relationship between the centripetal force acting on an object moving in a circle of constant radius and its frequency of revolution. [...] But with the use of vector addition, I am unsure as to how this produces the tension resultant vector. Shouldn't there be a horizontal force too? If so, what is it? Please help.

In the real situation, the Tension and the Weight force combine to make the horizontal Centripetal force. In the ideal situation, the centripetal force is the Tension, while the radius of rotation is L, the length of fishing line out the top of the tube. In the second, real situation, where the stopper string is "drooping" at angle θ to the horizontal, the Centripetal Force is only Tcosθ. However, the radius of the actual circle followed is only Lcosθ, and when you finally get the relationship you will see that it is not surprising that those two cosθ factors effectively compensate for each other / cancel out. It is thus entirely reasonable to ignore the droop.

For those posters unfamiliar with the experiment, the slotted masses are added before each trial. In earlier editions, this experiment was performed with steel washers of a fixed but undetermined mass, so the relation was derived in terms of F, 2F, 3F, 4F loads rather than xx grams. Once the weights are added, the tube is held above your head [not vertically; your hand operates at about 45 degrees, as you have to be able to see your hand holding the glass tube].
While holding the hanging weights, you begin spinning the stopper. As you do it you can feel the masses being supported by the tension in the fishing line - the same tension that supplies the centripetal force to the stopper. When you experience approximate balance, you release the masses and try for balance. If you rotate too fast, the stopper moves out / the masses move up. If you rotate too slowly, the stopper moves in / the masses move down. Get the speed just right and the masses stay in position - and a constant radius is maintained. The paperclip is there so you can judge the balance situation. You usually try to maintain the paperclip at a chosen distance below the tube - perhaps 1 cm. It is surprisingly easy to achieve balance after only a few minutes of trialling. Your partner uses a stop watch to time, say, 10 rotations so that you can then calculate the Period - and thus frequency if you would like.

okay you wanted to know

Quote:
But with the use of vector addition, I am unsure as to how this produces the tension resultant vector. Shouldn't there be a horizontal force too? If so, what is it? Please help.

listen don't use vector addition it is a concept created solely to confuse guys like use........with mind boggling algebra....(trigonometry is a concept that rescues physics(mechanics))

let us see................ in the real case the string is inclined right? yup let that angle that you have be theta..... (BTW while doing the experiment you can make sure that there is no inclination by practicing....i mean alot)

back to the explanation see you measure the side of the cone kay...that is your hypotenuse so the force vector along it is a vector sum of some two vectors that are perpendicular to each other. let them be the perpendicular and horizontal components... the vertical one is the one that balances F_g and the horizontal one balances the centripetal force of the apparent circular path. (*BTW the ideal case is really a special case of the real case where the angle theta is 90*)

do your basic trig and find the perpendicular and horizontal components... you know how to measure frequency i assume...

Quote:
Your partner uses a stop watch to time, say, 10 rotations so that you can then calculate the Period - and thus frequency if you would like.

now as you know that the centrefugal force surely relates to the mass of the cork the radius and the frequency in some way or the other... remember these quantities are fundamental as the SI units are fundamental and the force if the only derived unit...... kay..... now the frequency has a unit second inverse.....mass as kg and length as metre..... in force the si unit is kg*m/s2 right compare the poweres you get a relation force is proportional to mass of the cork the length of the string and inversely proportional to the square of the timeperiod or directly to the square of frequency.... figure the constants using the data given ..... and sams your uncle.....

Recognitions: Homework Help

Quote by the-ever-kid
hey there could you tell me why you hung the mass to the thread down below.........the weight would simply slide down and reduce the radius. i believe you are missing a clip on the top end or are you?.... could you post a detailed procedure of the experiment . it would make it easier to answer you then.....

I gave you the detailed procedure, but unfortunately you responded with several, very incorrect, assertions. I trust OP can sift through the correct / incorrect parts of your post.
Recognitions: Homework Help

Quote by the-ever-kid
okay you wanted to know listen don't use vector addition it is a concept created solely to confuse guys like use........with mind boggling algebra....(trigonometry is a concept that rescues physics(mechanics))

I presume you were trying to say "guys like us" - and even got that wrong.

Quote by the-ever-kid
let us see................ in the real case the string is inclined right? yup let that angle that you have be theta..... (BTW while doing the experiment you can make sure that there is no inclination by practicing....i mean alot)

Practice all you like - that just isn't going to happen. Rotate fast enough and you won't be able to notice the droop, but the masses will be jammed against the bottom of the glass tube, meaning a centripetal force much greater than the weight of the masses.

Quote by the-ever-kid
back to the explanation see you measure the side of the cone kay...that is your hypotenuse so the force vector along it is a vector sum of some two vectors that are perpendicular to each other. let them be the perpendicular and horizontal components... the vertical one is the one that balances F_g and the horizontal one balances the centripetal force of the apparent circular path. (*BTW the ideal case is really a special case of the real case where the angle theta is 90*)

Some more errors here: The tension IS the hypotenuse, and the horizontal component of the Tension is the centripetal force.

Quote by the-ever-kid
do your basic trig and find the perpendicular and horizontal components... you know how to measure frequency i assume... Your partner uses a stop watch to time, say, 10 rotations so that you can then calculate the Period - and thus frequency if you would like. now as you know that the centrefugal force surely relates to the mass of the cork the radius and the frequency in some way or the other...

Alert! Alert! - there is no such thing as centrifugal Force so anything that follows should be suspected

Quote by the-ever-kid
remember these quantities are fundamental as the SI units are fundamental and the force if the only derived unit...... kay..... now the frequency has a unit second inverse.....mass as kg and length as metre..... in force the si unit is kg*m/s2 right compare the poweres you get a relation force is proportional to mass of the cork the length of the string and inversely proportional to the square of the timeperiod or directly to the square of frequency.... figure the constants using the data given ..... and sams your uncle.....

The unit comparison in the last paragraph is useful.

Quote:
I presume you were trying to say "guys like us" - and even got that wrong.

that was a silly typo......

Quote:
Alert! Alert! - there is no such thing as centrifugal Force so anything that follows should be suspected

i was merely quoting the name given to the pseudo-force acting in a rotating non-inertial frame.

Quote:
and the horizontal component of the Tension is the centripetal force

i figured that it was an error later sorry..

Quote:
The tension IS the hypotenuse

i have never mentioned anything about being the hypotenuse or not. i said that the force of tension was directed along the hypotenuse.... now you're the one assuming things...

Quote:
The unit comparison in the last paragraph is useful.

Thank you.....
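Tying the thread together: the hanging weight fixes the tension, $T = Mg$, and (as PeterO notes) the horizontal component $T\cos\theta$ supplies the centripetal force on a circle of radius $L\cos\theta$, so the $\cos\theta$ factors cancel and the working relation is $Mg = 4\pi^2 m_s r f^2$. Here is a minimal sketch of the prediction (the stopper mass and radius below are made-up illustrative values, not from the lab handout):

```python
import math

def predicted_frequency(hanging_mass, stopper_mass, radius, g=9.81):
    """Revolution frequency (Hz) predicted by M*g = 4*pi^2*m*r*f^2.

    The droop angle theta cancels out: the centripetal force is
    T*cos(theta), but it acts on a circle of radius L*cos(theta)."""
    return math.sqrt(hanging_mass * g /
                     (4 * math.pi ** 2 * stopper_mass * radius))

# Illustrative values: 25 g stopper, 0.90 m radius, 300-550 g hanging.
for grams in range(300, 551, 50):
    f = predicted_frequency(grams / 1000, 0.025, 0.90)
    print(f"{grams:3d} g -> f = {f:.2f} Hz (period {1 / f:.2f} s)")
```

Since $f^2$ is proportional to the hanging mass $M$, a plot of $f^2$ against $M$ should give a straight line - exactly the relationship the experiment sets out to confirm.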
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9304479956626892, "perplexity_flag": "middle"}
http://mathhelpforum.com/trigonometry/162543-vectors-problem.html
# Thread:

1. ## vectors problem.

An ultralight plane is headed N30W at 40km/h. A 12km/h wind is blowing in the direction E20S. What is the resultant velocity of the ultralight plane with respect to the ground? I cannot figure out how to draw the wind blowing E20S.

2. Is this sketch okay for you to solve the problem? The longer black arrow is the motion of the plane and the shorter black arrow is the direction of the wind. And the red double arrowhead is the resultant motion of the plane.

3. ## no angles

"In mathematics, you don't understand things. You just get used to them." -- Johann von Neumann

I would have wanted to solve the triangle using the cosine/sine law, but I cannot see an angle. If I go with a scale of 1cm = 4km/h, I get the resultant 7cm / 21km/h. Any suggestions for solving it using trigonometry?

4. method of components ...

$A_x + W_x = G_x$

$-40\sin(30) + 12\cos(20) = G_x$

$A_y + W_y = G_y$

$40\cos(30) - 12\sin(20) = G_y$

$|G| = \sqrt{(G_x)^2 + (G_y)^2}$

$\theta = \arctan\left(\frac{G_y}{G_x}\right) + 180$

6. And you can still use geometry: Put another direction at the upper tip of the triangle. From parallel lines, you know that the lower angle is 30 degrees. And since those angles are found in a right angle, you can deduce that the middle angle is 90 - (30 + 20) = 40 degrees. From there, you can use the cosine rule and the sine rule to get angle x, which you use to determine the resultant angle.
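For anyone who wants a numerical check of the component method in post 4, here is a quick sketch (axes chosen as x = east, y = north; the script is illustrative, not from the thread):

```python
import math

# Plane: 40 km/h heading N30W; wind: 12 km/h blowing toward E20S.
plane_x = -40 * math.sin(math.radians(30))   # N30W: 30 deg west of north
plane_y =  40 * math.cos(math.radians(30))
wind_x  =  12 * math.cos(math.radians(20))   # E20S: 20 deg south of east
wind_y  = -12 * math.sin(math.radians(20))

gx, gy = plane_x + wind_x, plane_y + wind_y
speed = math.hypot(gx, gy)
# atan2 handles the sign of gx, so no manual "+ 180" correction is needed.
angle_west_of_north = math.degrees(math.atan2(-gx, gy))
print(f"ground speed = {speed:.1f} km/h, heading N{angle_west_of_north:.0f}W")
```

This gives roughly 31.8 km/h on a heading of about N16W, consistent with applying the cosine rule to the 40-degree triangle in post 6.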
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9113564491271973, "perplexity_flag": "middle"}
http://math.stackexchange.com/questions/tagged/balls-in-bins+probability
# Tagged Questions

4 answers, 71 views
### $n$ balls into $n+1$ urns (with one special urn)
Assume that there are $n$ balls numbered from $1,2,\ldots,n$ and $n+1$ urns, numbered as $0,1,\ldots,n$. Throw each ball randomly into one of $n$ urns: urn 1, ...

0 answers, 109 views
### Balls and Bins problem with constraint
Assume we have $B$ black balls and $R$ red balls, where $R+B$ is a multiple of 4. We want to distribute the balls in $\frac{R+B}{4}$ bins such that each bin has at least 1 red ball and at least 1 ...

2 answers, 67 views
### How many boxes will be empty?
$150$ balls are randomly put into $100$ boxes; each ball could be put into any of these 100 boxes with the same probability. After that, on average, how many boxes will be empty? No calculator. Choose one of ...

2 answers, 67 views
### Probability of red ball i before any black ball
Assume we have $r$ red balls and $b$ black balls in a box and we remove one ball at a time without replacement. Red balls are labeled from $1$ to $r$. We want to calculate the probability a particular ...

0 answers, 35 views
### Poisson Distribution?
There exist 1000 boxes. These boxes are randomly filled with balls. How many balls are required in order that only 1 in 100 boxes is left empty? This sounds like a Poisson distribution problem to ...

1 answer, 148 views
### Repeatedly Toss Balls into Bins
$n$ balls are randomly tossed into $m$ bins; each bin can hold $k$ balls. If a ball is tossed into a full bin (one that already has $k$ balls in it), it can be tossed repeatedly and randomly into the $m$ bins ...

1 answer, 77 views
### What is the expected number of days in a year on which exactly k people in a group of n people have been born?
There is a group of n people and we must find the average number of days on which exactly k people are born (k and n are given). This question assumes that a year has 365 days, and each ...

1 answer, 56 views
### $k$ balls into $n$ bins — Number of occupied bins
Suppose we throw $k$ balls into $n$ bins. Assume that $\log^2n\le k\le n$. Is there a high probability bound (preferably exponential) on the number of occupied (i.e., non-empty) bins? Something ...

0 answers, 48 views
### Balls and bins: the first time when max-loaded is less than twice min-loaded
We have $n$ bins; in each step we throw a ball into a bin chosen uniformly and independently from the $n$ bins we have. We repeat the process $k$ times. Let $B_k$ be the number of balls in ...

1 answer, 109 views
### Bins and balls where bin size grows exponentially
I have $k$ bins. The first bin can fit $1$ ball. Each subsequent bin can fit two times more balls than the previous one. In other words, the $i$th bin can fit $2^i$ balls. We randomly assign $U$ = ...

1 answer, 105 views
### Amount of distinct numbers in a sequence of $k$ random numbers in range $[1,\ldots,n]$
Let $D$ be the amount of distinct numbers in a sequence of $k$ random numbers in range $[1,\ldots,n]$ (n>k). I want to show that $D=\Omega(k)$ with exponentially high probability. I'm interested in the ...

2 answers, 140 views
### Throwing $k$ balls into $n$ bins
I have the following question: Throwing $k$ balls into $n$ bins, what is the probability that exactly $z$ bins are not empty? I thought about something like: \Pr(z)=\frac{n! z^{k-z}}{n^k ...

1 answer, 114 views
### Maximum load of a bin in the $n$ balls with weights into $m$ bins problem
$n$ balls, each with a weight $p_i$, are thrown into $m$ bins. Each bin is chosen with uniform probability. Prove or disprove that the expected value of the maximum load among the loads of bins is ...

1 answer, 145 views
### Probability - Balls and Buckets; variance question
I've been working on this problem for a while and it's giving me no end of trouble! The question is this: Suppose we have 2k buckets, numbered 1 through 2k. We throw x black balls and y white balls, at ...

5 answers, 193 views
### Probability that all bins contain strictly more than one ball?
Here's the problem I'm working on: Given that I'm distributing $N$ balls into $K$ bins, what is the probability that all bins contain at least two (strictly more than 1) balls? This seems like a very ...

1 answer, 150 views
### Yet another balls and bins problem
If $p_n$ denotes the probability that, when $n$ balls are randomly put in $n$ bins, there is at least one bin with exactly one ball, is there a simple (involving only little computation) reason for ...

1 answer, 100 views
### Expected value for a function concerning a balls and bins problem
I'm optimizing a hash function mapping $M$ items into $N$ bins and I need a criterion for evaluating the quality of the mapping. Denoting the number of items put into bin $i$ by $x_i$, an ideal ...

1 answer, 750 views
### If n balls are thrown into k bins, what is the probability that every bin gets at least one ball?
If $n$ balls are thrown into $k$ bins (uniformly at random and independently), what is the probability that every bin gets at least one ball? I.e., if we write $X$ for the number of empty bins, what ...

1 answer, 214 views
### Hyper Birthday Paradox?
There are $N$ buckets. Each second we add one new ball to a random bucket - so at $t=k$, there are a total of $k$ balls collectively in the buckets. At $t=1$, we expect that at least one bucket ...

3 answers, 2k views
### How can I solve bins-and-balls problems?
Below is the problem that I wanted to solve: When there are $m$ balls and $n$ bins, balls are thrown into bins where each ball is thrown into a bin uniformly at random. What is the expected number ...
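Several of the questions above reduce to the same first computation. As a sanity-check sketch (assuming uniform, independent throws; the code is illustrative and not from any of the listed questions), linearity of expectation gives $E[\#\text{empty}] = n(1-1/n)^k$ for $k$ balls in $n$ bins, which a simulation of the "150 balls into 100 boxes" question reproduces:

```python
import random

def expected_empty(n_bins, n_balls):
    # Each bin is missed by one ball with probability 1 - 1/n, so it is
    # empty with probability (1 - 1/n)**k; sum over bins by linearity.
    return n_bins * (1 - 1 / n_bins) ** n_balls

def simulated_empty(n_bins, n_balls, trials=10_000):
    total = 0
    for _ in range(trials):
        hit = [False] * n_bins
        for _ in range(n_balls):
            hit[random.randrange(n_bins)] = True
        total += hit.count(False)
    return total / trials

print(expected_empty(100, 150))   # about 22.1 empty boxes on average
print(simulated_empty(100, 150))  # the simulation agrees closely
```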
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 70, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9450198411941528, "perplexity_flag": "head"}
http://diracseashore.wordpress.com/category/physics/relativity/
# Shores of the Dirac Sea

## If some of my students were writing problems

Posted in Academia, humor, Physics, relativity on November 8, 2012

$\vec v_A, \vec v_B; v_{AB}?$

## Faster than light neutrino claim

Posted in Experiments, high energy physics, Physics, relativity on September 23, 2011

Well, the press is all fired up about a claim of faster-than-light neutrinos. The claim from the OPERA experiment can be found in this paper. The paper was released on September 22nd and it has already gotten 20 blog links. Not bad for a new claim. Considering that the news organizations are happily bashing special relativity, one can always rely on XKCD to spin it correctly.

Now more to the point: the early arrival time is claimed to be 60 nanoseconds. The distance between the emitter and the observer is claimed to be known to about 20 cm, certified by various National Standards bodies. A whole bunch of different systematic errors are estimated and added in quadrature, not to mention that they need satellite relays to match various timings. 60 nanoseconds is about the same as 20 meters of uncertainty (just multiply by the speed of light), and they claim this to be due both to statistical errors and to systematics. The statistical error is from a likelihood fit. The systematic error is irreducible, and in a certain sense it is the best guess for what the number actually is. They did a blind analysis: this means that the data is kept in the dark until all calibrations have been made, and then the number is discovered for the measurement.

My first reaction is that it could have been worse. It is a very complicated measurement. Notice that if we assume all systematic errors in table 2 are aligned, we get a systematic error that can be three times as big. It is dominated by what they call BCT calibration. The errors are added in quadrature assuming that they are completely uncorrelated, but it is unclear if that is so. But the fact that one error dominates so much means that if they got that wrong by a factor of 2 or 3 (also typical for systematic errors), the result loses a bit of its significance. My best guess right now is that there is a systematic error that was not taken into account: this does not mean that the people who run the experiment are not smart, it's just that there are too many places where a few extra nanoseconds could have sneaked in. It should take a while for this to get sorted out.

You can also check Matt Strassler's blog and Lubos Motl's blog for more discussion. Needless to say, astrophysical data from SN1987a point to neutrinos behaving just fine, and they have a much longer baseline. I have heard claims that the effect must depend on the energy of the neutrinos. This can be checked directly: if I were running the experiment, I would repeat it with lower energy neutrinos (for which we have independent data) and see if the effect goes away then.

## Black holes as frozen stars

Posted in gravity, high energy physics, quantum fields, Quantum Gravity, relativity, string theory, thermodynamics on February 19, 2009

We now have a few working examples of a microscopic theory of quantum gravity, all of which come with specific boundary conditions (like any other equation in physics or mathematics) but otherwise full background independence. In particular, all those theories include quantum black holes, and we can ask all kinds of puzzling questions about those fascinating objects.
Starting with: what exactly is a black hole?

## First quantization, first pass

Posted in high energy physics, Physics, quantum fields, Quantum Gravity, relativity on November 5, 2008

Suppose you want to solve a linear partial differential equation of the form $O \psi(x) = j(x)$, which determines some quantity $\psi(x)$ in terms of its source $j(x)$. Here x could stand for possibly many variables, and the differential operator $O$ can be pretty much anything. This is a very general type of problem, not even specific to physics. An example in physics could be the Klein-Gordon equation, or with some more bells and whistles the Maxwell equation, which determines the electric and magnetic fields.

Let us replace this problem with the following equivalent one. If we find a function $\psi(x,s)$ such that:

$\frac{\partial \psi}{\partial s} + O \psi = 0$

with the initial condition $\psi(x, s=0) = j(x)$, and assuming the regularity condition $\psi(x, s=\infty) \rightarrow 0$, then it is easy to see that the function

$\psi(x) = \int_0^\infty \psi(x,s)\, ds$

satisfies the original equation we set out to solve.

Now, this new equation for $\psi(x,s)$ looks kind of familiar, if we are willing to overlook a few details. If we wish, we can think about $\psi(x,s)$ as a time dependent wave function, with the parameter s playing the role of time. The equation for $\psi(x,s)$ could then be interpreted as a Schrödinger equation, with the original operator $O$ playing the role of the Hamiltonian. We are ignoring a few issues to do with convergence, analytic continuation, and the related fact that the Schrödinger equation is complex, and the one we are discussing is not. Never mind, these are subtleties which need to be considered at a later stage.

The point is that we can now use any technique we learned in quantum mechanics to solve the original equation – path integral, canonical quantization, you name it. We can talk about the states $|x\rangle$ and the Hilbert space they form, Fourier transform to get another basis for that Hilbert space, even discuss "time" evolution (that is, the dependence of various states on the auxiliary parameter s). We can get the state $\psi(x,s)$ by summing over all paths of a "particle" with an appropriate worldline action and boundary conditions. Depending on the problem, we may be interested in various (differential) operators acting on $\psi(x,s)$, and they of course do not commute, resulting in uncertainty relations. You get the picture.

This technique is sometimes called first quantization, or the Schwinger proper-time method, or the heat kernel expansion. Whatever you call it, it has a priori nothing to do with quantum mechanics: there are no probabilities, no Planck constant, and no wavefunctions in any real sense. At this point we may be discussing the financial markets, population dynamics of bacteria, or simply classical field theory.

In the second pass, we can apply this idea to linear fields, generating solutions to various linear differential equations. Some of those equations are Lorentz invariant (the Klein-Gordon, Dirac, and Maxwell equations), but they have nothing to do with quantum mechanics, despite the original way they were referred to as "relativistic wave equations".
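To see the trick in action numerically, consider a minimal sketch for $O = -\partial_x^2 + m^2$ on a periodic box (the operator, grid, and Gaussian source below are illustrative choices, not from the post): each Fourier mode evolves independently in $s$, and the $s$-integral reproduces the propagator $1/(k^2+m^2)$.

```python
import numpy as np

# Proper-time solution of O psi = j for O = -d^2/dx^2 + m^2 with
# periodic boundary conditions; grid, mass, and source are arbitrary
# illustrative choices.
N, L, m = 256, 20.0, 1.0
x = np.linspace(0.0, L, N, endpoint=False)
j = np.exp(-(x - L / 2) ** 2)                 # smooth source j(x)

# In Fourier space O acts as multiplication by k^2 + m^2, so the flow
# d(psi)/ds = -O psi with psi(s=0) = j is solved mode by mode:
# psi_hat(k, s) = exp(-(k^2 + m^2) s) * j_hat(k).  Integrating over
# s in [0, inf) then yields j_hat / (k^2 + m^2), i.e. O^{-1} j.
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
j_hat = np.fft.fft(j)

s = np.linspace(0.0, 40.0, 8001)              # truncate s = infinity at 40
ds = s[1] - s[0]
w = np.full(s.size, ds)
w[0] = w[-1] = ds / 2                         # trapezoid-rule weights
kernel = np.exp(-np.outer(s, k ** 2 + m ** 2))
psi_hat = (kernel * w[:, None] * j_hat).sum(axis=0)

direct = np.fft.ifft(j_hat / (k ** 2 + m ** 2)).real  # invert O directly
proper = np.fft.ifft(psi_hat).real
print(np.max(np.abs(proper - direct)))        # small: both routes agree
```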
Once we add spin to the game, we start having the fascinating structures of (worldline) fermions and supersymmetry (not to be confused with spacetime fermions and supersymmetry), and we are also in good shape to make the leap from classical field theory to classical string theory. Maybe I'll get to that sometime…

## c sells c shells by the c shore

Posted in Physics, relativity, tagged Physics, relativity on September 16, 2008

In our previous episodes we have discussed the notions of length and time. Now it's time to start writing some equations. You might have noticed that the title of the post has the letter c prominently displayed. In physics, letters usually stand for variables or constants in a given situation. The letter 'a' usually stands for acceleration, 'F' for force, 'E' for energy or electric field, 'P' for pressure, 'V' for volume, and you might have noticed that there is a pattern of naming variables in a mnemonic way after the initial of the word being described.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 17, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9360458254814148, "perplexity_flag": "head"}
http://mathoverflow.net/questions/12462/limsup-and-liminf-for-a-sequence-of-sets/12485
## limsup and liminf for a sequence of sets

How do limsup and liminf for a sequence of sets apply to probability theory? Any real-world examples would be much appreciated.

- 6 As written, this question is too vague. What exactly do you have in mind? – Qiaochu Yuan Jan 20 2010 at 23:22
- 4 See mathoverflow.net/howtoask for some hints on how to turn this into a real question :) – Mariano Suárez-Alvarez Jan 20 2010 at 23:33
- I agree with Qiaochu and Mariano, so I'm voting to close. I'm happy to reopen the question if it is edited to be more focused (though I guess that's unlikely since there's already an accepted answer). – Anton Geraschenko♦ Jan 22 2010 at 2:25
- 3 Douglas Zare said on meta, "if you have gone through the en.wikipedia.org/wiki/Borel-Cantelli_lemma under the notation usually used, then you should recognize what the questioner meant, which is why the question got detailed answers even though it looks close to meaningless to some." (meta.mathoverflow.net/discussion/216) It looks like the question is not as vague as I thought it was, so I'm voting to reopen. I still would prefer that the question defined its terms and asked something more precise. It wouldn't have been hard to make the question clearly meaningful. – Anton Geraschenko♦ Feb 13 2010 at 21:19

## 4 Answers

For a sequence of subsets $A_n$ of a set $X$, $\limsup A_n = \bigcap_{N=1}^\infty \bigcup_{n\ge N} A_n$ and $\liminf A_n = \bigcup_{N=1}^\infty \bigcap_{n \ge N} A_n$. If $x \in \limsup A_n$ then $x$ is in all of the $\bigcup_{n\ge N} A_n$, which means no matter how large you pick $N$ you will find an $A_n$ with $n>N$ of which $x$ is a member. Thus members of $\limsup A_n$ are those elements of $X$ that are members of infinitely many of the $A_n$'s. If the $A_n$ are thought of as events (in the sense of probability), $\limsup A_n$ will be another event. It corresponds exactly to the occurrence of infinitely many of the $A_n$'s. This is why $\limsup A_n$ is sometimes written as "$A_n$ infinitely often."

Similarly, if $x\in \liminf A_n$ then $x$ is in one of the $\bigcap_{n\ge N} A_n$, which means $x \in A_n$ for all $n \ge N$. Thus, for $x$ to be in the $\liminf$, it must be in all of the $A_n$, with finitely many exceptions. This is how the phrase "ultimately all of them" comes up.

Both of these operations, similar to their counterparts in metric spaces, concern the tail of the sequence $\{A_n\}$. I.e., neither changes if an initial portion of the sequence is truncated. As a previous response pointed out, often the sets $A_n$ are defined to track the deviation of a sequence of random variables from a candidate limit by setting $A_n = \{x: |Y_n(x) -Y(x)| \ge \epsilon\}$. The members of $\limsup A_n$ then represent those sequences that every now and then deviate $\epsilon$ away from $Y(x)$; membership in this event is determined solely by the tail of the sequence $Y_n$.

Here is a conceptual game that can be partially understood using these concepts: We have a deck of cards, and on the face of each card an integer is printed; thus the cards are $\{1,2,3,\ldots\}$. At the $n$th round of this game, the first $n^2$ cards are taken and shuffled, and you pick one of them. If your pick is card 1, you win that round. Let $A_n$ denote the event that you win the $n$th round. The complement $A_n^c$ of $A_n$ represents losing the $n$th round. The event $\limsup A_n$ represents those scenarios in which you win infinitely many rounds.
The complement of this event is $\liminf A_n^c$, and this represents those scenarios in which you ultimately lose all of the rounds. Since $P(A_n) = 1/n^2$ is summable, the Borel-Cantelli lemma gives $P(\limsup A_n) = 0$, or equivalently $P(\liminf A_n^c) = 1$. Thus, a player of this game will almost surely experience that there comes a time after which he never wins.

- thanks for this, it made a lot of sense to me. – cappadonza Jan 21 2010 at 14:30

As Johannes stated, the Borel-Cantelli lemmas (there are two) are the primary way in which these quantities (referred to as "infinitely often" and "almost always") appear. The most common use is to prove things about limits of random variables. To see why this is the case, suppose you can show that, for any $\epsilon > 0$,
$$P([|X_n| > \epsilon]\textrm{ i.o.}) = 0.$$
(To show this with the first Borel-Cantelli lemma, you would establish $\sum_n P([|X_n| > \epsilon]) < \infty$.) From here, it follows that $P([\lim X_n = 0]) = 1$, because
$$P([\lim X_n = 0]) = P(\cap_i \cup_N \cap_{n\geq N} [|X_n| \leq 1/i]) =: P(A)$$
by definition of limit, and
$$P(A) = 1 - P(\cup_i \cap_N \cup_{n\geq N} [|X_n| > 1/i]) \geq 1 - \sum_i P([|X_n| > 1/i]\textrm{ i.o.}) = 1,$$
where the union bound (subadditivity) and the definition of infinitely often were employed. Note that I worked out a bunch of symbols to make sure the math was correct, but you can see it in words: if, for every $\epsilon >0$, the probability of infinitely many of your random variables exceeding $\epsilon$ is zero, then it is intuitive that the limit of this sequence is 0 with probability 1.

To get a feel for more details (and the relationship to specific probabilistic quantities), maybe try using this technique to prove certain limiting properties of certain sequences of random variables (any probability textbook will have many, for instance the excellent book by Resnick). I'll also add that you can prove a weakened form of the SLLN (weakened means you need some extra assumptions on which moments are finite) using Chebyshev's inequality and the limiting technique above. As you can guess, Chebyshev allows you to say something of the form $\sum_n P([|X_n| > \epsilon]) < \infty$, where $X_n$ is something fancier as needed for the SLLN (a normalized sum).

What about something like $A_1\subseteq A_2\subseteq A_3\subseteq\ldots \implies \limsup A_n=\liminf A_n=\bigcup\limits_n A_n$? If these sets are measurable sets in a (finite) measure space $(\mathcal{A},\mu)$, then $\liminf\limits_{m\to\infty} \mu(A_m) \geq \mu(\liminf A_m)$ (and $\limsup\limits_{m\to\infty} \mu(A_m) \leq \mu(\limsup A_m)$). Is that "application" enough?

EDIT: The Borel–Cantelli lemma is another application.
Make a plot with the horizontal axis representing time $n$, and the vertical axis $x = X_n$. Draw the line $x = n$. For small times $n$, these random points might jump above the line $x = n$. But the argument above shows that there is some (random) time $N$ after which the points $X_n$ all lie below the line $x = n$. -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 78, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9611691832542419, "perplexity_flag": "head"}
http://mathoverflow.net/questions/77055?sort=newest
## Find the least prime $p$ such that $mn$ divides $p-1$

My hope is that this question is "trivial," but it is outside my knowledge base, so I'd appreciate some advice. Given positive integers $m$ and $n$, find the least prime $p$ such that $p-1 = mnk$ for some $k \geq 1$. For what I am trying to do, I need an explicit algorithm to find $p$, as opposed to an approximation. Is there a best one known? What is the upper bound on how much larger $p$ might be than $mn$? I am happy to assume that $m$ and $n$ are "sufficiently large" for the algorithm to have nice properties, if that helps. Thank you. Hopefully the answer is obvious to everyone but me. :-)

- 1 When you write "for $k\ge1$", do you mean "for some $k\ge1$" or "for every $k\ge1$"? :-( – Chandan Singh Dalawat Oct 4 2011 at 3:11
- @Chandan Singh Dalawat: I mean some $k$, in fact the smallest possible $k$. Now edited. – Aaron Sterling Oct 4 2011 at 15:24

## 1 Answer

Firstly, I don't understand the point of having both $m$ and $n$. Since only their product appears, call it $k$. You are then trying to find the smallest prime congruent to $1$ modulo $k$. The bound for such a prime is a highly nontrivial matter; see (e.g.) http://en.wikipedia.org/wiki/Linnik%27s_theorem

EDIT: It is believed that you don't have to examine more than $\log^2 k$ multiples to find the first prime in a progression of your type.

- 2 The question asks for an algorithm. The algorithm will be to test all numbers of the form $mns+1$, for $s=1,2,\ldots$, for primality, using e.g. AKS (or something faster and probabilistic like Miller-Rabin). I don't think there is anything else you can do, as there are no simple ways of generating primes. The running time of this algorithm depends on the smallest prime that you meet, and for that see Igor's answer. – Felipe Voloch Oct 3 2011 at 17:56
- @Felipe: Yes, thank you. I meant (implicitly) that you should test all the numbers up to the bound, but it is much better to be explicit. (Although the OP does also ask for the bound explicitly.) I suppose that theoretically there could be a better algorithm than testing all the multiples, but I am pretty sure no one knows what it is. – Igor Rivin Oct 3 2011 at 18:08
- Thanks, both of you, this is a big help. – Aaron Sterling Oct 3 2011 at 18:14
- It might help to do some pre-sieving first, Eratosthenes-like. E.g., if $mn$ is odd there is no point in testing the odd multiples of $mn$; and make similar considerations for all other 'small' primes. – quid Oct 3 2011 at 18:22
- 1 @quid: the $\log^2 q$ is apparently a heuristic estimate due to Wagstaff. And you do have a good point about sieving: sieving wants to find many primes; here you are looking for one, so it might or might not help, depending on various subtleties... – Igor Rivin Oct 3 2011 at 18:57
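Putting Felipe Voloch's algorithm and quid's pre-sieving remark together, here is a minimal sketch (the function name and test values are mine, not from the thread; it assumes $k > 1$):

```python
from sympy import isprime  # Miller-Rabin-based primality test

def least_prime_1_mod(k):
    """Smallest prime p with p congruent to 1 mod k, found by testing
    k*s + 1 for s = 1, 2, ... as suggested in the comments above.
    If k is odd, k*s + 1 is even (hence composite) for odd s, so only
    even s need testing -- quid's pre-sieving idea for the prime 2.
    Linnik-type bounds guarantee termination; heuristically only about
    log(k)^2 values of s need to be examined."""
    step = 2 if k % 2 else 1
    s = step
    while True:
        p = k * s + 1
        if isprime(p):
            return p
        s += step

print(least_prime_1_mod(12))      # -> 13, since 12*1 + 1 is prime
print(least_prime_1_mod(3**20))   # a larger odd modulus
```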
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 27, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9443293809890747, "perplexity_flag": "middle"}